The Metaverse: Opportunities, Risks, and Harms


We have all been hearing about the metaverse much more frequently over the last few years, and consideration of its promises and perils has picked up tremendous pace in recent months. Multiple companies are designing it to further connect us to each other and to the experiences we love, by providing unique, interactive environments for us to explore as avatars or other embodiments of our identities online. In truth, it feels like a natural iteration, building on Web 2.0’s flat and largely asynchronous interfaces and combining our favorite features from the history of social media platforms, gaming worlds, extended reality (augmented reality (AR), virtual reality (VR), or a combination thereof), and online transactions.

Some of us have participated in some metaverse candidates already – whether by spending time in the games or worlds of MeepCity or Nikeland on Roblox, attending Travis Scott’s or Ariana Grande’s concerts in Fortnite, watching a movie with friends in Bigscreen, walking through the virtual halls of an NFT museum, or playing a military-simulation first-person shooter or running a parkour course with our best buddies. As Mark Zuckerberg has said, “you’re in the experience, not just looking at it.” The catalogue of immersive experiences available to users is growing each day, especially with various hardware platforms (such as the HoloLens 2 by Microsoft, the Quest and Rift by Meta (formerly known as Facebook), the Index by Valve, PlayStation VR by Sony, and the Daydream View by Google), various coding languages and development tools from Unity and Unreal, and incredible momentum and excitement across both industry and society. I’m personally hoping the metaverse can help balance the scales a little more by providing increased opportunities for education, entrepreneurship, self-actualization, discovery, and wonder to children and adults around the world. As they say, the best is yet to come, and we are here for it.

The “metaverse” builds on Web 2.0’s flat and largely asynchronous interfaces and combines our favorite features from the history of social media platforms, gaming worlds, AR/VR, and online transactions.

With this in mind, we are due for a discussion of the potential risks that may accompany widespread adoption of the metaverse (in whatever form it takes). Most of these are unsurprising because they have existed in related forms for years. As always, we never seek to fear-monger or stir up any sort of panic related to new technologies. The reality is that we simply don’t yet know much about what is possible, or how common these harms might be. Research must be funded to understand the true scope and frequency of these issues in metaverse-related environments. Our purpose here is to raise awareness and appreciation of these potentialities, and to encourage the implementation of features and functionality that reduce and prevent harm from the ground up (#safetybydesign).

Let’s begin.

Cyberbullying

We have not seen any research published to date on cyberbullying in the metaverse. What we have seen are anecdotal accounts shared on Reddit, as well as narratives shared by targets and witnesses when interviewed for news articles on the topic. These describe being on the receiving end of insults and invectives, racial slurs, and other forms of toxicity. So far, none of it is surprising to me, nor out of the ordinary – regardless of whether it manifested orally (i.e., voice chat), textually (group chat or private message), or behaviorally (specific actions or inactions). The harm reported is representative of what we typically see elsewhere: in multiplayer games, on social apps, in other related Web 2.0 environments, and even in schools. However, there is the potential for it to be more insidious and impactful, since the realism that accompanies VR experiences readily translates to fear experienced emotionally, psychologically, and physiologically when individuals are targeted or threatened (Petkova et al., 2011).

There is potential for the harm to be more insidious and impactful since the realism that accompanies VR experiences readily translates to fear experienced emotionally, psychologically, and physiologically when individuals are targeted or threatened.

Sexual Harassment

Have you ever heard of the Proteus Effect? Over a decade ago, research identified that people act socially in accordance with their avatars’ characteristics, and that there is some transference to offline behaviors. For instance, those who were given taller avatars acted more aggressively than those given shorter avatars, and this behavioral difference extended to face-to-face interactions as well. This theory has been applied to antisocial behaviors by users who self-select certain avatars (Peña et al., 2009; Yoon & Vargas, 2014), and suggests a mechanism for increased sexual aggression and harassment towards others – especially female avatars – in various online environments. Recently, a woman reported she was sexually harassed and groped within Meta’s Horizon Venues. Another instance involved simulated groping and ejaculating by an aggressor onto a target’s avatar in the game Population One. While these severe instances might be rare, anecdotal accounts indicate that milder forms of harm occur more regularly (another call for research to provide up-to-date, detailed prevalence rates here!). For instance, a study from the Center for Countering Digital Hate identified 100 potential instances of policy violations within a span of 11.5 hours – roughly one every seven minutes – some of which included bullying, threats, and even grooming behaviors. To be sure, we do not know what these numbers mean, since no comparable research to our knowledge has been done on other platforms. But when behaviors – however many there actually are – involve blatant sexually focused statements, suggestions, and requests; implications of sexual immorality based on what the target is wearing; solicitations for or offerings of nudes or sexual experiences; stalking and doxing; or pressured, non-consensual sexual interactions, they can lead to significant emotional and psychological consequences. They may also strongly detract from the target’s desire and willingness to interact with others online.

Catfishing

If you’re not familiar, “catfishing” refers to the phenomenon of “creating and portraying complex fictional identities through online profiles” (Nolan, 2015, p. 54). This often involves setting up a fictitious online profile or persona for the purpose of luring another into a fraudulent romantic relationship, but it can also have other end goals, such as financial exploitation once trust is deceptively established (Lauckner et al., 2019; Simmons & Lee, 2020). Since 2016, we have received multiple help requests at the Cyberbullying Research Center every single week from victims who have fallen prey to the related phenomenon of sextortion – many of which involve catfishing to gain trust before taking advantage of the victim. When we studied sextortion across the United States, we found that 5% of youth had been victimized. This finding, though, reflects incidents as they occur via the most popular social media and messaging apps – it does not concentrate on or isolate metaverse environments. When considering the increased realism of avatar-based social interactions, the unique context of metaverse environments, and the immersive nature of VR technologies, it is possible that an increased number of users will be manipulated or deceived. Once trust is given to someone else without verifying their identity, various kinds of victimization can readily take place – as has always been the case.

Hate in the Metaverse

Online hate has been conceptualized as actions that “demean or degrade a group and its members based on their race/ethnicity, national origin, sexual orientation, gender, gender identity, religion, or disability status” (Reichelmann et al., 2020). Research is clear that hate speech assaults the dignity of those targeted (Seglow, 2016; Ștefăniță & Buf, 2021), undermines their emotional and psychological health (Brown, 2015, 2018; Lee-Won et al., 2020; Maitra & McGowan, 2012), and may even promote violence towards marginalized groups (Fyfe, 2017; Müller & Schwarz, 2021). In metaverse worlds, hateful sentiments and slurs can be expressed orally or textually in a one-to-one or one-to-few context, prompted by how another avatar looks, acts, or speaks. Mobs of avatars (including bots!) might also be assembled in private or public rooms where indoctrination and radicalization can safely proliferate before being unleashed on unsuspecting, vulnerable users. More subtle hateful behaviors by aggressors may include: changing one’s skin color (e.g., “blackface”) to demean and dehumanize someone else (Sommier, 2020); wearing virtual skins or clothing with offensive slogans or insulting images that may induce trauma (e.g., Nazi symbols, hate-group insignia or logos) (Ozalp et al., 2020); or repeatedly trolling individuals with alt-right extremism and insensitive denials of documented events and truths (Rieger et al., 2021). In an increasingly polarized political and social context, where online echo chambers reify and normalize hateful beliefs (Lee, 2021; Pérez-Escolar & Noguera-Vivo, 2022), the implications for members of protected groups are both serious and imminent.

Prevention and Response to Mitigate Metaverse Harms

New features to prevent cyberbullying, sexual harassment, catfishing, and hate in these environments are being created and released with increased frequency, and many more are presumably coming down the pike. In 2016, Microsoft introduced the “Space Bubble” in its social VR platform AltspaceVR: when other users come within about a foot of your avatar, their hands and body disappear from your view. In 2022, Meta released a feature called “Personal Boundary,” which halts another user’s forward movement towards you if they come within four feet. Meta also created a tool called “Safe Zone,” which a user can activate at any time. When employed, no one else can “touch them, talk to them, or interact in any way until they signal that they would like the Safe Zone lifted.”
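To make the mechanics concrete, here is a minimal sketch of how such distance-based comfort features might be enforced. The types, function names, and thresholds below are invented for illustration; the actual Space Bubble, Personal Boundary, and Safe Zone implementations are internal to each platform.

```typescript
// Hypothetical sketch of distance-based comfort features. Not any
// platform's real API; all names and thresholds are assumptions.

interface Vec3 { x: number; y: number; z: number; }

interface Avatar {
  id: string;
  position: Vec3;
  boundaryRadiusFt: number; // e.g., a 4 ft Personal Boundary-style halo
  safeZoneActive: boolean;  // Safe Zone: no interaction until lifted
}

function distanceFt(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

type Action = "block-all-interaction" | "halt-forward-movement" | "allow";

// Decide what an approaching avatar may do relative to `self`.
function comfortPolicy(self: Avatar, approaching: Avatar): Action {
  if (self.safeZoneActive) {
    // Safe Zone: no one can touch, talk to, or interact with the user.
    return "block-all-interaction";
  }
  if (distanceFt(self.position, approaching.position) < self.boundaryRadiusFt) {
    // Personal Boundary-style halo: stop the other avatar's approach.
    return "halt-forward-movement";
  }
  return "allow";
}

// Example: an avatar 3 ft away from a user with a 4 ft boundary is halted.
const me: Avatar = { id: "me", position: { x: 0, y: 0, z: 0 }, boundaryRadiusFt: 4, safeZoneActive: false };
const them: Avatar = { id: "them", position: { x: 3, y: 0, z: 0 }, boundaryRadiusFt: 4, safeZoneActive: false };
console.log(comfortPolicy(me, them)); // "halt-forward-movement"
```

One design point worth noting: the decision is driven entirely by the protected user’s own settings, so an aggressor cannot opt out of someone else’s boundary.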

While the utility of these features remains to be empirically evaluated, we continue to encourage all users – in the metaverse and on any other platform – to use them, while also blocking or muting any unwanted communication or contact and reporting the offending user in-app (these controls need to be made more user-friendly on some platforms). Moreover, some type of age-gating (e.g., there are adult-only environments where IDs are verified on a private Discord channel) and a decent set of helpful parental controls – allowing chat to be disabled, or a specific voice-chat bubble to be created with particular friends or individuals of a certain age – must be in place regardless of whom the immersive space is targeted at and marketed to.
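As a sketch of how such a parental control might compose an allow-list with a chat toggle, consider the following; the profile schema and function are hypothetical, not any platform’s actual API.

```typescript
// Hypothetical "Close Friends" voice-chat gate for a child account.

interface ChildProfile {
  userId: string;
  chatEnabled: boolean;      // parent/guardian toggle
  closeFriends: Set<string>; // approved peers only
}

function mayVoiceChat(profile: ChildProfile, peerId: string): boolean {
  // Chat must be enabled by the guardian AND the peer must be approved.
  return profile.chatEnabled && profile.closeFriends.has(peerId);
}

const child: ChildProfile = {
  userId: "kid-42",
  chatEnabled: true,
  closeFriends: new Set(["friend-a", "friend-b"]),
};

console.log(mayVoiceChat(child, "friend-a"));   // true
console.log(mayVoiceChat(child, "stranger-9")); // false
```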

A female journalist recently described her metaverse interactions as “uncomfortable” due to a lack of rules about etiquette in these spaces, while another used the term “unnerving” when considering the unexpected and risky nature of certain rooms. Legal scholar and activist Lawrence Lessig (2009) writes that even when computer code is designed to prevent hacking and manipulation, the possibility always remains, and laws and standards are therefore required to deal with any malfeasance. Similarly, the metaverse is being designed by various companies with various mechanisms in place to prevent interpersonal victimization, but standards and rules must be in place – and faithfully applied – when individuals are inevitably targeted and harmed. To that end, any virtual environment needs a robust (and frequently updated) set of Community Guidelines to define behavioral expectations, as well as to declare the existence of disciplinary policies for conduct breaches. Of course, companies must vigilantly supplement, build out, and effectively promulgate these policies as behaviors evolve over time. Indeed, companies building metaverse properties should have as a corporate mandate the need to maintain a brand identity known for prioritizing the physical, emotional, and psychological safety of their userbase.

Companies building metaverse properties should have as a corporate mandate the need to maintain a brand identity known for prioritizing the physical, emotional, and psychological safety of their userbase.

Metaverse Trust and Safety Issues to Address

In closing, I am left with some pressing questions as we attempt to lay the groundwork for safe metaverse experiences. Are young users disproportionately susceptible to victimization by others – or does it occur evenly across all age groups? Are marginalized or disadvantaged individuals even more susceptible, or can their choice of avatar and behaviors in these environments serve a protective role? What is the demographic profile of those who harass or harm others, and what is the scope and extent of such harm in terms of health and well-being? How can moderators serve as an effective presence to deter harmful behavior between avatars, and sanction those who violate established social norms and community guidelines? What types of measures hold promise to deter negative behavior and induce positive interactions? How can users come to feel deeply invested in the safety of the VR spaces they occupy, and consequently do their part to protect those spaces and defend against those who cause harm? How can communities build collective efficacy towards this end?

In terms of features, will we ever be able to reliably verify age so that certain metaverse spaces can truly be adults-only? Can parents/guardians who buy VR headsets for their children set up specialized “child” profiles or accounts (similar to what we see with YouTube Kids or Messenger Kids), uniquely provisioned with additional safeguards and restrictions – such as a “Close Friends” list that only allows interactions with approved peers? Can these settings apply across any VR experience or game the child wants to join? Can we make it easier for users to switch servers or otherwise extricate their avatar from a situation where they feel threatened? What other functionality can be implemented to equip and empower users to protect themselves? As William Gibson famously said, “The future is already here.” It’s time to address these questions not only to reduce the possibilities of harm across the metaverse, but also to make the experiences of creators and participants much safer and more enjoyable.
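As one speculative illustration of what cross-experience provisioning could look like, a portable safety profile might travel with the user and be queried by every world or game before features are enabled. This schema is an assumption for discussion, not an existing standard.

```typescript
// Hypothetical portable safety profile queried by any world/game.

interface SafetyProfile {
  ageBand: "child" | "teen" | "adult"; // ideally backed by verified age
  closeFriends: string[];              // "Close Friends" carried across worlds
  quickEscapeEnabled: boolean;         // one-press server switch / teleport home
}

function capabilitiesFor(p: SafetyProfile): string[] {
  const caps: string[] = ["join-public-worlds"];
  if (p.ageBand === "adult") caps.push("enter-age-verified-adult-spaces");
  if (p.ageBand === "child" && p.closeFriends.length > 0) {
    caps.push("voice-chat-with-close-friends-only");
  }
  if (p.quickEscapeEnabled) caps.push("one-press-escape-to-home");
  return caps;
}

// Example: a child profile with quick escape enabled.
console.log(capabilitiesFor({
  ageBand: "child",
  closeFriends: ["friend-a"],
  quickEscapeEnabled: true,
}));
// ["join-public-worlds", "voice-chat-with-close-friends-only", "one-press-escape-to-home"]
```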

Featured Image: Hammer and Tusk on Unsplash

References

Brown, A. (2015). Hate speech law: A philosophical examination. Routledge.

Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297-326.

Fyfe, S. (2017). Tracking hate speech acts as incitement to genocide in international criminal law. Leiden Journal of International Law, 30(2), 523-548.

Lauckner, C., Truszczynski, N., Lambert, D., Kottamasu, V., Meherally, S., Schipani-McLaughlin, A. M., Taylor, E., & Hansen, N. (2019). “Catfishing,” cyberbullying, and coercion: An exploration of the risks associated with dating app use among rural sexual minority males. Journal of Gay & Lesbian Mental Health, 23(3), 289-306.

Lee-Won, R. J., White, T. N., Song, H., Lee, J. Y., & Smith, M. R. (2020). Source magnification of cyberhate: Affective and cognitive effects of multiple-source hate messages on target group members. Media Psychology, 23(5), 603-624.

Lee, J. (2021). The effects of racial hate tweets on perceived political polarization and the roles of negative emotions and individuation. Asian Communication Research, 18(2), 51-68.

Lessig, L. (2009). Code: And other laws of cyberspace. ReadHowYouWant.com.

Maitra, I., & McGowan, M. K. (2012). Speech and harm: Controversies over free speech. Oxford University Press.

Müller, K., & Schwarz, C. (2021). Fanning the flames of hate: Social media and hate crime. Journal of the European Economic Association, 19(4), 2131-2167.

Nolan, M. P. (2015). Learning to circumvent the limitations of the written-self: The rhetorical benefits of poetic fragmentation and internet “catfishing”. Persona Studies, 1(1), 53-64.

Ozalp, S., Williams, M. L., Burnap, P., Liu, H., & Mostafa, M. (2020). Antisemitism on Twitter: Collective efficacy and the role of community organisations in challenging online hate speech. Social Media + Society, 6(2), 2056305120916850.

Peña, J., Hancock, J. T., & Merola, N. A. (2009). The priming effects of avatars in virtual settings. Communication Research, 36(6), 838-856.

Pérez-Escolar, M., & Noguera-Vivo, J. M. (2022). Hate Speech and Polarization in Participatory Society. Taylor & Francis.

Petkova, V. I., Khoshnevis, M., & Ehrsson, H. H. (2011). The perspective matters! Multisensory integration in ego-centric reference frames determines full-body ownership. Frontiers in Psychology, 2, 35.

Reichelmann, A., Hawdon, J., Costello, M., Ryan, J., Blaya, C., Llorent, V., Oksanen, A., Räsänen, P., & Zych, I. (2020). Hate knows no boundaries: Online hate in six nations. Deviant Behavior, 1-12.

Rieger, D., Kümpel, A. S., Wich, M., Kiening, T., & Groh, G. (2021). Assessing the extent and types of hate speech in fringe communities: A case study of alt-right communities on 8chan, 4chan, and Reddit. Social Media + Society, 7(4), 20563051211052906.

Seglow, J. (2016). Hate speech, dignity and self-respect. Ethical Theory and Moral Practice, 19(5), 1103-1116.

Simmons, M., & Lee, J. S. (2020). Catfishing: A look into online dating and impersonation. In International Conference on Human-Computer Interaction.

Sommier, M. (2020). “How ELSE are you supposed to dress up like a Black Guy??”: Negotiating accusations of blackface in online newspaper comments. Ethnic and Racial Studies, 43(16), 57-75.

Ștefăniță, O., & Buf, D.-M. (2021). Hate speech in social media and its effects on the LGBT community: A review of the current research. Romanian Journal of Communication and Public Relations, 23(1), 47-55.

Yoon, G., & Vargas, P. T. (2014). Know thy avatar: The unintended effect of virtual-self representation on behavior. Psychological Science, 25(4), 1043-1045.
