How Social Media Companies Should Combat Online Abuse


A new year is upon us. While we’ve made some progress in reducing cyberbullying, online hate, and other forms of abuse and toxicity, I think we can do better. Social media companies are often seen (and vilified) as accomplices to the harassment and victimization that happen on their platforms, and – admittedly – are an easy target for us to scapegoat. To be sure, they do share some of the blame and are learning as they (and we) go when it comes to which online safety policies and practices work best.

However, I am convinced more progress must happen ASAP across the social media landscape.

Here are my specific Calls to Action for social media companies in the new year.

Perfect the feedback loop.


Those who report abuse or harm need to hear back from the platform within 24 hours. I don’t think this is asking too much. I’ve already blogged about the reality of secondary victimization. This is a criminal justice concept describing how victims are re-victimized (emotionally and psychologically) when law enforcement officials respond callously (or in an incomplete or untimely manner) to those who took a chance by reporting their experience – and believed that an authority figure would actually help them. Social media companies have to – at all costs – keep their users from being victimized a second time by failing to respond to a report of abuse or harm. Most platforms have fallen short when it comes to providing systematic, prompt, and regular updates to those who take the initiative and time to report a safety concern. Everyone who files a report needs to at least be able to say, “Hey, they did get back to me in a meaningful way” – even if they wish the outcome had gone a different way. Responsiveness matters.
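To make that concrete, here is a minimal sketch (in Python, with hypothetical names and an in-memory store) of what tracking a 24-hour acknowledgment window might look like behind the scenes. This is a sketch of the idea, not any platform’s actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The 24-hour response window argued for above.
ACK_DEADLINE = timedelta(hours=24)

@dataclass
class AbuseReport:
    report_id: str
    reporter_id: str
    filed_at: datetime
    acknowledged_at: datetime | None = None  # set once a meaningful update is sent

class ReportQueue:
    """Hypothetical in-memory tracker for whether each report got a
    meaningful response before the deadline."""

    def __init__(self) -> None:
        self.reports: dict[str, AbuseReport] = {}

    def file(self, report: AbuseReport) -> None:
        self.reports[report.report_id] = report

    def acknowledge(self, report_id: str, when: datetime) -> None:
        # A substantive status update to the reporter, not just an auto-receipt.
        self.reports[report_id].acknowledged_at = when

    def overdue(self, now: datetime) -> list[AbuseReport]:
        """Reports past the acknowledgment deadline – the ones that risk
        leaving the reporter feeling victimized a second time."""
        return [
            r for r in self.reports.values()
            if r.acknowledged_at is None and now - r.filed_at > ACK_DEADLINE
        ]
```

A real system would obviously persist this in a database and route overdue reports to a human team, but the point is the same: measure the window, and escalate whatever blows past it.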

Be transparent and fair about takedown decisions.


Research from the Journal of Experimental Criminology points out that users who have had their content taken down for a rule violation are less likely to violate the platform’s rules IF THEY BELIEVE THE PROCEDURES INVOLVED ARE FAIR. This is so key. We need to understand – and build an evolving body of knowledge about – what types of posts and user-generated content violate the rules, and what types do not. Otherwise, the decision-making seems arbitrary and capricious – which is not a good look for social media companies that already have a reputation for prioritizing profit over user safety.

Possible good news from one company: Facebook created an independent oversight board to evaluate appeals from users who believe their content was unfairly taken down. We’ll have to see how this goes; without more visibility, people might wonder about the objectivity of the decision-makers who make up the board, but it’s a start. I still think it would be best for this board (or social media companies in general) to create an online repository of categorized, anonymized examples of problematic content with brief but clear explanations of WHY each one was taken down. Then everyone – potential and actual users of the app, members of vulnerable groups, champions of free speech, media pundits – knows and understands the decision-making process, and all of it becomes much more transparent and fair.
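For what it’s worth, the repository idea doesn’t require anything exotic. Here is one possible (entirely hypothetical) shape for a published, anonymized takedown record, just to show how little it would take to document the “WHY”:

```python
from dataclasses import dataclass
from enum import Enum

class ViolationCategory(Enum):
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate_speech"
    THREAT = "threat"
    NO_VIOLATION = "no_violation"  # publishing kept-up borderline cases helps too

@dataclass(frozen=True)
class TakedownExample:
    """One anonymized entry in a public repository of moderation decisions."""
    example_id: str
    category: ViolationCategory
    anonymized_excerpt: str        # identifying details removed before publication
    rule_cited: str                # the specific community standard applied
    rationale: str                 # brief, plain-language explanation of WHY
    appealed: bool = False
    appeal_outcome: str | None = None

# Illustrative entry only; the excerpt, rule name, and outcome are made up.
example = TakedownExample(
    example_id="2020-000123",
    category=ViolationCategory.HARASSMENT,
    anonymized_excerpt="[user] repeatedly tagged [target] with demeaning nicknames after being asked to stop",
    rule_cited="Community Standard: Targeted harassment",
    rationale="Repeated, unwanted contact aimed at one person; the pattern continued after objection.",
    appealed=True,
    appeal_outcome="removal upheld",
)
```

Publish a few hundred entries like this, searchable by category, and the decision-making process stops looking like a black box.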

Devote more resources to social science solutions instead of computer science solutions.


Social media platforms are increasingly outsourcing content moderation to third-party companies like Two Hat, Spectrum Labs, and others. I’ve written extensively on how AI and machine learning can help us combat online abuse – and so I obviously see much value here. While we are nowhere close to accurately evaluating context (and will probably never be able to accurately evaluate intent), the technology is improving all the time. Great. Hooray. These are computer science solutions, and they are important. However, we need a lot more focus on identifying and understanding what factors escalate (and de-escalate) abuse and toxicity online.
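To see why context is such a hard problem, consider how a naive, context-free keyword filter – a deliberately simplistic toy, not how any commercial moderation tool actually works – treats the same word in different situations:

```python
# Deliberately naive keyword filter, to show where context-free approaches fail.
BLOCKLIST = {"trash", "worthless"}

def keyword_flag(text: str) -> bool:
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# The same word, two very different intents – the filter cannot tell them apart.
print(keyword_flag("take out the trash before the guests arrive"))   # True: harmless chore
print(keyword_flag("you're human trash and everyone knows it"))      # True: targeted abuse

# ...while genuinely hurtful messages with no blocklisted words sail right through.
print(keyword_flag("nobody would miss you if you disappeared"))      # False: missed harm
```

Modern ML models do far better than this toy, but the underlying challenge – that harm lives in context and intent, not in word lists – is exactly why the technology alone won’t get us there.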

Some incipient work has been done on identifying what escalates and de-escalates bad behavior, but so much more is required. For instance, the popular multiplayer game League of Legends experimented with the Tribunal (a jury of peers) until 2014 and now has an “Honor” system where you can give props to another player for great teamwork, friendliness, leadership, or being a principled opponent. These ideas are not perfect (the Tribunal could only punish players instead of also rewarding them, punishments occurred way too long after the infringing behavior, and receiving “honor” doesn’t really add substantive value to one’s gaming experience), but at least they are innovative. At least Riot Games (makers of League of Legends) is TRYING (and publicizing their attempts even when they fail). I love that, and it goes a very long way toward creating goodwill for the company.

Another popular game – Blizzard Entertainment’s Overwatch – did something similar (perhaps inspired by Riot Games) by creating a system where users could reward and celebrate others in three categories: sportsmanship, good teammate, and shot caller (i.e., leadership). Blizzard incentivized it by awarding 50 XP for every endorsement, and it has apparently led to a meaningful decrease in toxic behavior. Good job, Blizzard.
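For the curious, the mechanics of an endorsement system like that are not complicated. Here is a stripped-down sketch – the category names follow the ones above, but the XP bookkeeping and who earns it are my assumptions, not Blizzard’s actual implementation:

```python
from collections import defaultdict
from enum import Enum

class Endorsement(Enum):
    SPORTSMANSHIP = "sportsmanship"
    GOOD_TEAMMATE = "good_teammate"
    SHOT_CALLER = "shot_caller"  # i.e., leadership

# The incentive mentioned above; whether the endorser or the endorsed earns it
# is a design choice (this sketch rewards the endorser for taking the time).
XP_PER_ENDORSEMENT = 50

class EndorsementLedger:
    """Tracks endorsements received per player and XP earned for giving them."""

    def __init__(self) -> None:
        self.received: dict[str, dict[Endorsement, int]] = defaultdict(lambda: defaultdict(int))
        self.xp: dict[str, int] = defaultdict(int)

    def endorse(self, from_player: str, to_player: str, category: Endorsement) -> None:
        if from_player == to_player:
            return  # no self-endorsements
        self.received[to_player][category] += 1
        self.xp[from_player] += XP_PER_ENDORSEMENT
```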

With regard to social media platforms, in late 2019 Instagram started using AI not just to detect toxic content but also to get users to pause, reflect, and edit their words before they share something potentially offensive or hurtful, via its Comment Warning and Feed Post Warning systems. Relatedly, a few years ago Facebook began to let users know when their about-to-be-posted comments or captions are similar to content that was previously reported as abusive. In early 2021, TikTok started to automatically detect language that violates its policies, giving users a chance to edit or discard their post. In late 2022, LinkedIn introduced algorithmically driven nudges (e.g., “Please keep LinkedIn respectful and professional”) to encourage positive behavior among users who have previously posted inappropriate content.
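The pattern underneath all of these features is refreshingly simple: score the draft before it goes live, and if it crosses a threshold, pause the user with a nudge rather than blocking them outright. A minimal sketch – with a toy scorer and a made-up threshold standing in for whatever real model each platform uses – might look like this:

```python
from typing import Callable

# A toxicity scorer returning a value in [0, 1]; in production this would be an
# ML model or a third-party moderation API, not the toy heuristic below.
ToxicityScorer = Callable[[str], float]

WARNING_THRESHOLD = 0.7  # hypothetical cutoff for showing a nudge

def compose_flow(draft: str, score: ToxicityScorer) -> dict:
    """Decide what the client should do with a draft post:
    publish it, or pause and ask the user to reconsider."""
    if score(draft) >= WARNING_THRESHOLD:
        return {
            "action": "show_warning",
            "message": "This looks similar to comments others have reported. Edit before posting?",
            "can_post_anyway": True,  # a nudge, not a block
        }
    return {"action": "publish"}

def naive_scorer(text: str) -> float:
    """Toy stand-in for a real classifier."""
    flagged = {"idiot", "loser", "trash"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    return min(1.0, 5 * sum(w in flagged for w in words) / max(len(words), 1))

print(compose_flow("what an idiot, total loser move", naive_scorer))
# -> {'action': 'show_warning', ...}
```

The interesting design choice is can_post_anyway: these systems nudge rather than block, which is presumably part of why users tolerate them.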

But I need to see way more experimentation and innovation with cognitive restructuring and behavior modification approaches on all social media and gaming platforms. I believe this can help determine how to induce positive conduct and deter negative conduct online. And figuring that out, in my opinion, holds the solution to all of this. Experimentation is great, and it shows the world that your company is actually TRYING unique strategies (and spending money to do so) instead of just being reactive and putting out fires after they are sparked. I strongly advise that platforms make public their attempts and ideas in this realm! Companies, please let us all know what you’re trying to do. Let us know what is working and what is not working. And we will start to give you more of the benefit of the doubt.
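And “make public their attempts” can be as simple as publishing the design and result of a basic randomized experiment. Here is a bare-bones sketch of how a platform might measure whether a pre-post nudge actually reduces toxic posts – all names, rates, and data below are simulated, not real findings:

```python
import hashlib
import random

def assign_arm(user_id: str, experiment: str = "prepost-nudge-v1") -> str:
    """Deterministically split users into control and treatment arms."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

def toxic_post_rate(posts: list[dict], arm: str) -> float:
    """Share of posts flagged as toxic within one experiment arm."""
    arm_posts = [p for p in posts if p["arm"] == arm]
    return sum(p["flagged_toxic"] for p in arm_posts) / max(len(arm_posts), 1)

def simulate_post(user_id: str) -> dict:
    arm = assign_arm(user_id)
    # Made-up base rates purely to exercise the comparison; not real effect sizes.
    base_rate = 0.04 if arm == "treatment" else 0.05
    return {"arm": arm, "flagged_toxic": random.random() < base_rate}

posts = [simulate_post(f"user{i}") for i in range(10_000)]
print("control toxic-post rate:  ", round(toxic_post_rate(posts, "control"), 4))
print("treatment toxic-post rate:", round(toxic_post_rate(posts, "treatment"), 4))
```

Publish the arm sizes, the effect, and the uncertainty around it, and you have told the world what you tried and whether it worked – exactly the kind of openness I’m asking for.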

To do the above, social media companies must allocate more resources (and personnel, internal and external) towards social science solutions. This is how we will get a better handle on human behavior. Even though it’s not perfectly predictable, it still operates in patterns and regularities. And we need to use that to our advantage to create healthier, safer, and more constructive online communities.

Focusing on safety in these ways may feel like an opportunity cost, but the reputational and financial benefits that can result down the road are well worth the attention and investment. My Calls to Action can also fit within a Safety by Design framework that your social media or gaming company may be adopting or retrofitting (shout out to our friends at Australia’s eSafety Commissioner – we’re huge fans of this approach!).

Let me know your thoughts and if I’m missing any other Calls to Action you think are important. I look forward to continuing the conversation and our work in this area.

