FUTURES: Moderating Online Speech – The tightrope between a Ministry of Truth and a tragedy of the commons

by Sally Chase

As members of Congress considered their next move, the universe splintered into three possible timelines. In one, the prestigious body had opted for wholesale rejection of the Communications Decency Act’s Section 230 liability protections. Social media platforms were no longer protected for either moderating or failing to moderate content, so they went the way of traditional publishers, hiring writers and printing approved pieces.

In timeline two, only the liability shield for moderating content was thrown out. Platforms quickly descended into anarchy, with the loudest, meanest voices, or, as was often the case, the most persistent bot networks, driving the discourse. Offensive, obscene, and exploitative posts abounded, and polite society politely left the chat, quickly followed by advertisers. As platforms turned to hawking data and subscriptions for revenue, their business model darkened, fueled by criminal gangs, drug and arms deals, and human trafficking. Congress had to reconvene on the issue, since the situation was obviously untenable, but the path forward was no longer clear.  

Timeline three saw the rise of a state Ministry of Truth. Only liability protections for failing to moderate content were removed, so naturally an agency responsible for arbitrating acceptable and unacceptable content was needed. There were squabbles about the appropriate balance of political representation in the agency, but eventually one party won, and minority views were sidelined. Legitimate scientific debate was silenced, as was any discussion of alternative worldviews or divergent principles. Labels of “hate speech” and “misinformation” were applied with abandon. Before too long, criticism of governing authorities was off limits, along with private religion and any advocacy outside the topics currently in vogue. Society survived, but it was no longer society as we know it.

What’s wrong with Section 230, and what have people proposed we do about it?

In hearing after hearing, Congress has reviewed a variety of concerns with the 1996 liability shield that currently protects social media platforms like Facebook, Twitter, and YouTube for “good faith” attempts to moderate online content. If harmful content slips through, the platforms aren’t held responsible the way a newspaper might be. They are also sheltered from the consequences of removing “objectionable” content.

The problem, or rather, problems, with Section 230, according to Congress, advocacy organizations, and the general public, are manifold. Big Tech allegedly leans left, and unfairly censors conservatives, while some claim platforms turn users into addicts and conspiracy theorists. Moderation mechanisms alternately let horrific things slip through and banish innocuous content. Algorithms, policies, and design choices perpetuate discrimination, depress children, enable the targeting of vulnerable groups, and spread lies through certain communities. Worst of all, in the eyes of some, Big Tech profits off this mess.

A conservative might want the liability shield for moderating content amended or removed entirely, calling foul on the “good faith” condition and pointing to Big Tech’s demographics and donations. Examples like Twitter’s censoring of the unfavorable New York Post story about the Bidens are representative, they say. The logical extreme of this position, however, is one that seems undesirable to many: an entirely unmoderated morass that might endanger many sectors of society.

Liberals might prefer to alter or do away with the protections that safeguard the platforms that fail to remove all undesirable content. Social media is a hotbed of bigotry, extremism, and fringe views, they say, and the events of January 6th are the natural real-world result. The logical end point of this stance could be the creation of a regulatory body with the power to set the standards of appropriate content. But this world, too, is unappealing to many.

Some advocate for tweaks in place of dramatic overhauls. Facebook CEO Mark Zuckerberg, for example, would like to mandate regular transparency reports and make 230 protections conditional on big platforms doing a generally decent job of moderating content. As long as the percentage of false negatives remains relatively low, Big Tech would be off the hook. It’s not clear that this proposal would satisfy either conservative or liberal concerns, since censorship could go on unchecked and large quantities of damaging posts could still appear in people’s timelines.
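To make the mechanics of such a condition concrete, here is a minimal, purely hypothetical sketch. Neither Section 230 nor Zuckerberg’s proposal defines a metric, so the sample counts, the 5% threshold, and the function names below are illustrative assumptions, not anything drawn from the statute or the testimony.

```python
# Hypothetical illustration only: the metric, threshold, and numbers are invented.

def false_negative_rate(violating_posts_sampled: int, violating_posts_missed: int) -> float:
    """Share of policy-violating posts that moderation failed to catch."""
    if violating_posts_sampled == 0:
        return 0.0
    return violating_posts_missed / violating_posts_sampled


def retains_liability_shield(fn_rate: float, threshold: float = 0.05) -> bool:
    """Under this sketch, a platform keeps its protections while its
    false-negative rate stays below an agreed-upon threshold."""
    return fn_rate < threshold


# Example: auditors sample 10,000 violating posts and find 300 that slipped through.
rate = false_negative_rate(10_000, 300)   # 0.03
print(retains_liability_shield(rate))     # True under the assumed 5% bar
```

Any real version would hinge on who does the sampling, how “violating” is defined, and where the threshold sits, which is exactly where the conservative and liberal objections above would resurface.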

Twitter’s Jack Dorsey is pitching an alternative proposal, one grounded in design rather than regulatory reform. The platform’s Bluesky initiative would use open source solutions to tackle problems like moderation and transparency. Issues of power distribution, mob rule, and technological literacy could trouble this plan.

At the most recent Section 230 hearing, Representative Tim Walberg (R-MI) offered yet another path forward. Catholic social teaching includes a principle called subsidiarity, popular with small-government conservatives, the EU, and the UN, which holds that responsibility should generally lie with the lowest possible organizational level. If an individual can do something, the family shouldn’t. If the family can, the church shouldn’t. If community organizations can, local government shouldn’t. If local government can, federal government shouldn’t. Advocates say this model dignifies participants and empowers those best positioned to troubleshoot a given problem. Walberg called on households, communities, and centers of education to take up the mantle of civilizing online discourse, though whether these nodes of society are equipped to tackle problems as sinister as child exploitation or as global as disinformation rings is up for debate.

Questions of character, humanity, human interactions, virtue gradients, economics, politics, and unintended consequences are at play and at stake. Who do we want to be? How should good people act online, and allow others to act? What sort of exchanges do we want to encourage, or discourage? How can businesses persist through these decisions? How will various interests endure? The right reforms have the potential to re-invigorate public discourse, and bring wildly divergent parties back to the same table. The wrong moves could lead us down a dimmer path.  

How optimistic one is about the trajectory of social media may depend on one’s general view of technology. Technological optimists believe developments generally work for good in society; technological pessimists see bleaker consequences from most innovations, in terms of human liberty and happiness. Technological determinists say that if something can be invented, eventually it will be, and that if inventions can be used a certain way, eventually they will be. Those who take the critical approach think each new technology should be carefully evaluated before adoption.

Will we find our way back to a shared set of facts, rigorous and respectful debate, and regard for one another’s humanity? Silicon Valley breakthroughs and Congressional decisions in the coming years, in addition to the efforts of private citizens and community organizations, could shape the answers to these questions, as well as the digital lives and possibilities of future generations. 
