by Sally Chase
Narcissa tapped out a “happy Mother’s Day to my forever role model,” having already pored over her albums to pick out the most attractive photo of herself with her mom from the past year. She posted, then anxiously monitored the likes and comments, handing out a few of her own to prime the pump. So it went for religious and patriotic holidays, friends’ weddings, and other events decidedly not about Narcissa. Each moment of a family vacation was scrutinized for Instagrammability; every passing cause and calamity was examined through the lens of its potential to augment Narcissa’s social capital. She’d long ago forgotten to be actively ashamed of these patterns, but some small voice in the back of her mind still whispered, “This is gross.”
The general consensus was that social media had made them worse people, consumed by the self-perpetuating cycle of jealousy and vanity, and a bottomless thirst for external validation. What mattered most to them, as demonstrated by what dominated their spare time and preoccupied their thoughts, was what others thought of their bodies, their material success, their relationships, their cleverness.
Social media fostered cruelty and intolerance towards others, and dishonesty and cowardice within, not to mention pride, sloth, and intemperance. The likes were never enough, as they stooped to new lows to exploit their friends, their families, and themselves. Social media was a terrible place to be a woman, a child—or anyone, really. But there seemed to be no way out.
Until, that was, Dr. Loveta Readlots unearthed an ancient Aristotelian text on turning social networks into schools of virtue. Philosophers, technologists, social scientists, and policymakers scrutinized the tome, distilling lessons for the present context. Before long, a new social network emerged from the fertile ground: Arete.
Arete had strict time limits. It could be used once weekly, and only in the early evening hours. This tempered the temptation to rise and set with the app, and to squander on it all one’s leisure. The functionality centered around three modes of interaction: questions, compliments, and challenges. Subscribers could use Arete to seek answers or advice, to lift others up, or to work on a skill or habit.
Like all good things, Arete was not immune to entropy and decay. Challenges became the latest mode of virtue signaling, and compliments deteriorated into propositions and flattery. Many questions were lazy, and better answered elsewhere. Arete hadn’t resolved the gross primacy of the self, the siren call of self-promotion. Users and critics bickered about what constituted worthwhile challenges and quality comments—and who got to decide. For many, the lure of Twitter and TikTok night-scrolling proved too much to resist.
And yet, the effort itself towards more virtuous social networking bore some fruit in familial and civic relations and public policies. Major platforms got serious about banning kids under the age of 18. Journalists and members of Congress stopped trawling for the lowest lurking fish. The culture felt, vaguely, more dignified and polite.
Snapshot of today
Several studies have linked social media use to increased feelings of depression, envy, dissatisfaction, anxiety, loneliness, ugliness, and low self-esteem, as well as narcissism, vanity, eating disorders, relationship issues, poor study habits, subpar academic performance, and marital discord. (Other research reveals some positive effects.) Not all of these dispositions and behaviors are directly relevant to character formation and virtuous living, but they are worth taking into account as, at minimum, potential obstacles to a life fully lived.
In her thesis “Social Media and the Virtues: Could social media be an obstacle to an individual’s ultimate happiness?” scholar Emma Rosén concludes that the answer is ‘yes.’ Social media can harm users’ physical and emotional health, and their capacity for true friendship, though individual mileage may vary, largely depending on the exercise of virtue.
Neuroscientist Susan Greenfield goes a step further, arguing that thanks to social media, “the mid-21st century mind might almost be infantilized, characterized by short attention spans, sensationalism, inability to empathize and a shaky sense of identity.” Social media can corrupt our capacity to manage in-person interactions with confidence and care, and may make us too reliant on others’ responses and reassurances, while also diminishing our attention to long-term consequences and other-oriented goals.
Similarly, philosopher Mitchell Haney worries in his essay “The Community of Sanity in the Age of the Meme” that social media is structured such that it degrades rational discourse. The prevalence and power of memes, in particular, Haney sees as detrimental to thoughtful and charitable debate, as bite-sized viewpoints, often imbued with character ascriptions, proliferate and influence public opinion while precluding engagement or critique.
Theologian Ian Paul depicts the “online self” as “hyper-performative,” marked by obsessive and deceptive self-presentation that nonetheless elides much of our humanity, absent the normal guardrails found in physical community. Lacking both real community and real solitude, we increasingly identify with others’ perceptions of our represented selves, and grow comfortable with being packaged and sold. Paul fears that the gradient of social media is towards sedentariness, social atomization, insecurity, reactivity, intemperance, and groupthink. Social media is “structuring and forming us,” he says, “often in disquieting ways,” leading to greater incidence of distraction and unkindness, along with inauthenticity. The work of virtue formation relies on good community, for which a “weak” and “disordered” virtual alternative, typically filtered for familiar or unchallenging features, tenders a poor substitute.
So what’s to be done? Besides extra-media solutions–like teaching children how to behave well on and around social networks despite all their temptations towards vice, while restraining, retraining, and redirecting our own negative inclinations–philosopher Shannon Vallor has an idea for how we can work with the technology, or rather, how the technology can be made to work for us. She explores the possibility of platforms that facilitate rather than undermine traits like honesty, patience, courage, self-control, and humility in her essay “New Social Media and the Virtues” and her book Technology and the Virtues.
In place of design features that promote addiction and algorithms that effectively prioritize negative, emotionally-charged, hateful, and self-aggrandizing content—a situation Quartz magazine describes as a “battleground” where the “aggressors are the architects of your digital world,” who “map the defensive lines of your brain…and figure out how to get through them”—Vallor envisions platforms with “virtue gradients” that nudge users towards better behavior, on the recognition that platforms aren’t “neutral tools” but incentivize a certain collection of characteristics. (Philosopher Ulises Ali Mejias explains in the book Off the Network that platforms “mediate our social realities according to templates where certain forms of sociality are algorithmically operable and others are impossible for the algorithm to perform.”)
Vallor recalls an important caveat: Human beings are not fungible widgets that respond to the same cues in the same manner. Different nudges and design choices will provoke different thoughts, reactions, and choices in users with different goals, backgrounds, and personalities. Different design strokes, as it were, might work for different folks.
Accordingly, a number of platforms have worked to incorporate a diversity of positive nudges and alternative design elements. In addition to Twitter’s “Are you sure you want to say that mean thing?” prompt, there’s Meetup’s orientation towards in-person activities, and Nextdoor’s space for identifying and serving neighbors in need. Care2, “the world’s largest social network for good,” links activists, while Raftr’s focus is staying informed. Fitocracy gamifies physical health. Vero built in anti-addiction features. Yubo ditches popularity contests in favor of meaningful connections, and WT Social bills itself as the “non-toxic social network,” a place where “you–not algorithms—decide what you see.” Aether’s moderators are elected. Harnu wanted to break down regional and linguistic barriers and facilitate cross-cultural understanding. (Fast Company called it a “Living, Breathing, Transcultural Wikipedia.”) CircleMe is built around users’ passions and interests, as was TagsChat.
Entire countries are also taking action. Though not a digital role model in many respects, China recently issued regulations intended to clean up cyberspace for kids, as the CyberWire reported. The directive covers anti-addiction mechanisms as well as child-driven advertising, materialism, unkindness, vulgarity, and celebrity hype.
Philosophers specializing in history, technology, and ethics are thinking carefully about how technology does and should shape our conception of good behavior, and what techniques and principles the architects of our digital lives should take into account to achieve value-sensitive designs that integrate ethical considerations from start to finish.
At present, the profit motive predominantly dictates legacy platforms’ design choices, often at the expense of user wellbeing. But it doesn’t have to. Alternative revenue streams, whether constructive or merely neutral, are possible–through subscription models, for example–as are not-for-profit platforms. Keeping in mind philosopher Hans Jonas’ claim that technology has expanded our field of influence geographically, environmentally, and temporally in ways that create novel moral demands, what ideal networked user and community would you cultivate, as Zuckerberg for a day?