Will New Censorship Bills Increase Cyberbullying on Social Media?


Over the last few months, politicians in many states across America* have introduced and/or passed bills that allow individuals to file civil lawsuits against social media platforms that remove or restrict their online posts. In my home state of Florida, our governor recently signed such a bill into law (SB7072) to protect against “Silicon Valley elites,” “censorship,” and “other tyrannical behavior.” This is a big deal. We can all agree that we do not want our First Amendment rights to free expression to be limited unfairly when it comes to what we want to post and share on the Internet. We can all also agree that we do not want to be subjected to online hate, abuse, or other forms of harm because a platform’s users are given no parameters or rules about appropriate social behavior and face no consequences for transgressions. But is there a happy medium to be found? Is the status quo good enough? Before I answer that, let us back up and contextualize the current controversy by discussing a cornerstone piece of legislation passed 25 years ago.

Section 230 of the Communications Decency Act

In 1996, the Communications Decency Act became law, and included an important clause in Section 230 asserting that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Around the same time, the Supreme Court repeatedly concluded that digital media companies have the same First Amendment rights and protections as newspapers; that is, the right to choose whether to carry, publish, or withdraw the expressions of others.

This is as it should be; these platforms are privately owned, users choose to participate (and can go elsewhere if they have a problem with the company), and these businesses should be outside the reach of governmental influence (assuming they are not breaking any laws). Additionally, nowhere in any legislation does it state that platforms have to be neutral; rather, they can publish and moderate content in keeping with the values and ideals they prioritize (just as newspapers and news channels differ in their editorial priorities).

Importantly, Section 230 also established that an Internet company cannot be sued or held liable by any party aggrieved by the actions of someone else online. For example, you cannot sue Facebook if one of its users posts something libelous about you on its platform. Rather, you can only go after the other person, because the company is simply providing the forum, app, or website in which millions or even billions of people interact. The creator of the post or content is responsible for what they say; the intermediary company is not. (To be sure, there are some limits, as Section 230 doesn’t apply to violations of sex trafficking laws, other federal crimes, intellectual property claims, or Electronic Communications Privacy Act violations.) To reiterate, this immunity granted to online platforms – as third-party publishers of information provided by others – is not predicated on any level of neutrality, even though that link has been made by politicians and media pundits.

The new social media censorship bills emerging across the United States are gaining traction on the premise that these companies are not only imposing a standard of online behavior that unfairly silences and deplatforms some users, but are doing so with ill intent – thereby meriting disapprobation and sanction.

Free Expression v. Censorship

As I look at the issues through my “online safety” lens, I realize that things are very complicated. On one hand, we want free speech and free expression to flourish so that everyone around the world feels like they have a voice, and that their voice can be heard on a relatively level playing field. We want to make sure that social media companies’ decisions to censor or “take down” user content involve fair and appropriate judgments that promote individual and societal good. And we realize that groups silenced online are also disproportionately targeted and disenfranchised offline, and that social media companies must intentionally guard against the effects of algorithmic and human biases.

On the other hand, we do not want social media companies to hide behind Section 230 and fall short in their civic responsibility to curtail and restrict truly hurtful or inflammatory posts made by certain users. Rather, we need them to meaningfully build and maintain online communities marked by civility and human decency, and as such, they should have (and use) the authority to make appropriate decisions about what they will disallow on their platforms – for the good of all users. Healthy interactions and thriving communities do not occur automatically online, just as they do not in the “real world.” Accomplishing those goals requires some level of content moderation, just as local police departments endeavor to moderate the behaviors of a town’s citizenry through law enforcement and order maintenance.

What Should Social Media Companies Do, and What Are They Doing?

From my perspective, many companies are doing their part to ensure positive online participation by their user base even though they could adopt a completely hands-off approach because of Section 230. Justin and I receive a deluge of help requests every week from targets of various forms of online hate and abuse. As “trusted reporters” to many social media companies, we regularly communicate with Trust & Safety personnel and attempt to help those who have been victimized. In many cases, the harmful content is taken down – as it should be. We also know, based on Transparency Reports** from Facebook/Instagram, Snapchat, TikTok, Twitch, and other companies, that these apps use algorithms to identify and proactively remove harmful/abusive content before it is even posted to the app. I believe they will continue to improve in this area as they adopt better AI for content moderation and work to scale their individualized formal responses to user reports.
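To make the idea of proactive, algorithm-assisted moderation a bit more concrete, here is a minimal sketch in Python. To be clear, everything in it is hypothetical: the threshold values, the placeholder keyword list standing in for a trained toxicity classifier, and the three-way publish/review/block routing are illustrative assumptions on my part, not any platform’s actual pipeline.

```python
# Toy illustration of "proactive" moderation: each post is scored before it
# is published, auto-blocked above a high-confidence threshold, and queued
# for human review in the gray zone. Real platforms use trained ML models;
# the keyword scorer below is a hypothetical stand-in so the sketch runs
# on its own.

from dataclasses import dataclass

BLOCK_THRESHOLD = 0.9   # hypothetical: confident enough to remove automatically
REVIEW_THRESHOLD = 0.5  # hypothetical: uncertain, so route to a human reviewer

ABUSIVE_TERMS = {"slur_a", "slur_b", "threat_phrase"}  # placeholder lexicon

@dataclass
class Decision:
    action: str   # "publish", "review", or "block"
    score: float

def toxicity_score(text: str) -> float:
    """Crude stand-in for an ML toxicity model: fraction of flagged tokens."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in ABUSIVE_TERMS)
    return min(1.0, 3 * hits / len(tokens))  # amplify so a few hits matter

def moderate(text: str) -> Decision:
    """Route a post to publish, human review, or automatic block."""
    score = toxicity_score(text)
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("review", score)
    return Decision("publish", score)

if __name__ == "__main__":
    for post in [
        "have a great day",          # clean: published
        "that was a slur_a move",    # borderline: sent to human review
        "you slur_a threat_phrase",  # high confidence: blocked pre-publication
    ]:
        d = moderate(post)
        print(f"{d.action:7s} (score={d.score:.2f}) {post!r}")
```

The interesting design choice is the gray zone between the two thresholds: set them too aggressively and legitimate speech gets removed (the very “censorship” complaint animating these bills); set them too loosely and abusive content stays up. Human review of borderline cases is how Trust & Safety teams try to split that difference.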

Could social media companies be doing more? Yes. Can laws promote additional accountability from social media companies to improve user experiences? Absolutely (for example, see Europe’s General Data Protection Regulation (GDPR) and how it has forced changes in the way that social media companies collect and use data from individuals – we could use something like this in the United States!). But hamstringing social media companies from curtailing online posts or speech that they deem decidedly and materially harmful towards another individual or group – and that thereby violates their Community Guidelines – will only lead to a Lord of the Flies milieu where unproductive conflict, spiteful conduct, and the baseness of the human condition reign supreme.

Cyberbullying May Increase

I realize this sounds like hyperbole, but what have we seen in the political arena over the last few years? Increasingly vicious, targeted mudslinging across social media between candidates that has arguably normalized similar conduct among youth and adults alike. If I am being continually abused on a platform by someone who does not like me for stated (e.g., my opinionated post) or unstated (e.g., my identity, appearance, or way of life) reasons, do I want that aggressor to feel free to continue their cyberbullying with impunity? Or do I want them to hesitate and refrain from doing so because the social media company – whose Trust and Safety team comprises individuals who have a conscience, care about others, and do not want hateful or incendiary content proliferating on their platform – understands its responsibility to protect me and others like me from continued victimization, trauma, or other tragic outcomes?

Biased and uninformed decision-making when it comes to censoring or allowing certain posts has become a huge problem. That is partly why we find ourselves in this situation. But the major social media companies are working to avoid further incidents, and realize that the negative implications of missteps are huge (as we’ve seen). They are not oblivious to the weight of their actions (or inactions). We know Facebook, for instance, has put together an independent Oversight Board to help them wrestle with the toughest cases and determine the best possible policies, and that many of these companies have formal Safety Advisory Boards to similarly advise them.*** Indeed, Section 230 itself allows platforms to take actions in good faith “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

Although far from perfect, this is the happy medium – at least as things stand right now. Free expression on social media platforms should rule, except when it crosses a line and harms another user in a way that platform policies prohibit. While we should continue to demand more from these companies when it comes to trust, safety, privacy, and security (e.g., avoiding secondary victimization), we should also believe that they operate with the best intentions, specifically when it comes to how they action content. Those two approaches are not mutually exclusive. They can – and must – co-exist.

Notes

*Our Center is following developments about these bills in Alabama, Arkansas, Florida, Idaho, Iowa, Kansas, Kentucky, Louisiana, Missouri, Nebraska, North Carolina, North Dakota, Oklahoma, South Dakota, Texas, Utah, West Virginia, and Wyoming.

**I realize that the transparency reports put out by social media companies share with us only what they want us to see. But I would rather have them than be completely out of the loop as to what companies are doing to police problematic content.

***The Cyberbullying Research Center has received funding from some social media companies to conduct objective, unbiased research on various forms of online abuse – but those companies have had no sway over the findings or over the academic objectivity and neutrality with which we analyze, present, and share any/all results.

