Social Media, Youth, and New Legislation: The Most Critical Components


In my last piece, I discussed how some legislation in various US states has been proposed without careful consideration of the factors that contribute to internalized and externalized harm among youth. More specifically, I expressed concern that the complexities surrounding why youth struggle emotionally and psychologically demand more than simplistic, largely unenforceable solutions. We recognize that these state laws are proposed with the best of intentions and are motivated by a sincere desire to support our current generation of young people. However, they may be more harmful than helpful.

Legislators routinely reach out to us for input on what we believe should be included in new laws related to social media and youth, and I wanted to share our suggestions here. To keep implementation feasible and to ensure the resulting laws are not overly broad, we focus our attention on the elements we believe are most important.

The Need for Comprehensive Legislation

To begin, we are concerned about increased suicidality, depression, anxiety, and related mental health outcomes tied to experiences with cyberbullying, identity-based harassment, sextortion, and the use or overuse of social media platforms. Other problems associated with online interactions include the potential for child sexual exploitation and grooming, the exchange of child pornography, and human trafficking – but these occur across a variety of Internet-based environments and may be less specific to social media.

However, prohibitionist approaches have historically failed to work, lack clear scientific backing, are often circumvented, and violate the right to free expression and access to information. For example, society has spent many years focused on the purported relationship between violent video games and violent behavior, and legislation was created to safeguard children by preventing the distribution of video games with certain violent content to minors. However, the courts soon granted injunctions against these state laws (see OK, IL, MI, MN, CA, and LA), in large part because the research was either missing, weak, or inconclusive. Even the APA stated in 2020 that there is “insufficient evidence to support a causal link.” What is more, they articulated that “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.” I feel the same way about the relationship between social media and well-being and/or mental health. As a result, these bans on social media may not be supported by the courts when challenged.

If we can agree, then, that a blanket ban on a particular social media platform is unlikely to prevent the kinds of behaviors we are interested in curtailing, and if we can agree that social media use benefits some people, what legislative elements are likely to have an impact?

Potentially Useful Legislative Elements

When it comes to elements of legislation that we feel could have the greatest positive impact, we have several suggestions. Comprehensive laws should consider including the following components:

  1. Requires third-party, annual audits to ensure that the safety and security of minors are prioritized and protected in alignment with a clear, research-established baseline and standard across the industry. Setting and enforcing that baseline will have to be done by a governmental entity, just as national emission standards for air pollutants are prescribed, set, and enforced by the Environmental Protection Agency, national safety and performance standards for motor vehicles by the National Highway Traffic Safety Administration, and national manufacturing, production, labeling, packaging, and ingredient specifications for food safety by the Food and Drug Administration. It will be an extremely difficult task requiring the brightest of minds thinking through the broadest of implications, but we’ve done it before to safeguard other industries. It has to be done here as well, and audits will ensure compliance.

  2. Requires platforms to improve systems that provide vetted independent researchers with mechanisms to access and analyze data while also adhering to privacy and data protection protocols. Answers to a number of critical public-interest research questions could greatly inform how platforms and society protect and support users, not just in the area of victimization but also in democracy and public health. The push for transparency is at an all-time high. However, a government agency would need to coordinate this process in a way that protects the competitive interests and proprietary nature of what companies provide. In the bloodthirsty demand for platform accountability, I feel this reality is too easily dismissed. These are private companies that do provide social and economic benefits, and we are trying to find mutually acceptable middle ground. As a researcher myself, I want as much data as possible, and I believe the research questions I seek to answer warrant access given the potential for social good. However, a governmental regulator should judge those merits (and my own). I should also be contractually obligated to abide by all data ethics and protection standards or face severe penalties by law, as determined by that governmental agency. Progress made in the EU can serve as a model for how this is done in the US.

  3. Requires the strongest privacy settings for minors by default upon account creation. Minors typically do not make their initial privacy settings more restrictive on their own, so the most protective configuration needs to be applied for them from the start (see the first sketch following this list).1, 2

  4. Mandates the implementation of age verification and necessary guardrails for mature content and conversations. This is one thing that seems non-negotiable moving forward. Research suggests that females between the ages of 11 and 13 are most sensitive to the influence of social media on later life satisfaction, while for males it is around 14 and 15. Clearly, youth are a vulnerable population in this regard. For these and related reasons, I am convinced that the age verification conundrum will be solved this decade, even if we have to live through a few inelegant solutions. We know that the EU is working on a standard (see eID and euCONSENT). One might not agree with it and point to valid concerns about the effectiveness of the process, about privacy, and about the fact that millions in the US do not even have government-issued ID, but I strongly believe these concerns will be overcome soon. And we can learn from the unfolding experiences of other countries. Overseas, the Digital Services Act will apply across the EU by February 17, 2024 and requires platforms to protect minors from harmful content. Companies will somehow have to set and implement the appropriate parameters and protections (and know who their minor users are) or face fines or other penalties enforced by member states.

  5. Establishes an industry-wide, time-bound response requirement when formal victimization reports (with proper and complete documentation and digital evidence) are made to a platform. Many public schools around the US are required to respond to and investigate reports of bullying (or other behavioral violations) within 24 hours (with exceptions for weekends and holidays). To be sure, the volume of reports that platforms receive complicates the matter, but certain forms of online harm will have significant and sometimes traumatic consequences for the target. Research has shown that additional trauma can be caused simply by the incomplete or inadequate response of trusted authority figures who are contacted for help.3-5 I have written about this extensively, with specific application to the impacts of negligence by social media companies. We know that imminent threats of violence and child sexual exploitation are escalated to the highest level of emergency response immediately. Less serious reports (based on the category selected in the reporting form and a quick evaluation by human moderators) should still receive a timely and meaningful response, even if this requires more staff and resources, as well as more R&D (see the second sketch following this list). Indeed, a legal mandate toward this end should quicken the pace of progress not only in optimizing the workflow of reporting mechanisms, but also in preventing the harms in the first place (thereby decreasing the quantity of initial reports received).

  6. Establishes clear and narrow definitions of what constitutes extremely harmful material that platforms must insulate youth from. Research has shown that exposure among this vulnerable population is linked to lower subjective well-being6 as well as engagement in risky offline behaviors.7 However, a definition cannot include every form of morally questionable content. What it should include without exception is, for example, content that promotes suicide, self-harm, or substance abuse; child sexual exploitation material; terrorism or extremism; hate speech; and violence or threats toward others. The platforms youth use already prohibit these forms of content, but what is required is more accountability in enforcing those prohibitions. I believe it is infeasible for companies facilitating the exchange of billions of interactions every day to proactively prevent the posting or sharing of every instance of extremely harmful material across tens or hundreds of millions of youth. That said, they should face fines or other penalties for failing to respond promptly and appropriately to formal reports of its existence.
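
To make the third element above more concrete, here is a minimal sketch of how a platform might apply its most restrictive configuration at account creation when the user is a minor. The field names, values, and age threshold are hypothetical illustrations, not any platform’s actual settings schema, and a real system would be far more involved.

```python
from dataclasses import dataclass

ADULT_AGE = 18  # hypothetical threshold; statutes and platforms may differ


@dataclass
class PrivacySettings:
    # Illustrative defaults an adult account might start with
    profile_visibility: str = "public"
    allow_messages_from: str = "everyone"
    discoverable_in_search: bool = True
    location_sharing: bool = True
    mature_content_filter: bool = False


def defaults_for_new_account(verified_age: int) -> PrivacySettings:
    """Return account defaults, applying the most restrictive
    configuration when the new user is a minor."""
    if verified_age < ADULT_AGE:
        return PrivacySettings(
            profile_visibility="friends_only",
            allow_messages_from="approved_contacts",
            discoverable_in_search=False,
            location_sharing=False,
            mature_content_filter=True,
        )
    return PrivacySettings()


print(defaults_for_new_account(verified_age=14))
```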
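
Similarly, the fifth element implies severity-based triage of incoming reports against legally mandated response windows. The sketch below uses invented categories and deadlines simply to show the logic: imminent threats and child-safety reports are escalated immediately, while everything else still receives a bounded response time.

```python
from datetime import datetime, timedelta

# Hypothetical response windows; an actual statute (or regulator) would set these.
RESPONSE_WINDOWS = {
    "imminent_threat": timedelta(0),            # escalate immediately
    "child_sexual_exploitation": timedelta(0),  # escalate immediately
    "harassment_or_bullying": timedelta(hours=24),
    "hate_speech": timedelta(hours=24),
    "other": timedelta(hours=72),
}


def response_deadline(category: str, received_at: datetime) -> datetime:
    """Compute the latest acceptable response time for a formal report,
    based on the category selected in the reporting form."""
    window = RESPONSE_WINDOWS.get(category, RESPONSE_WINDOWS["other"])
    return received_at + window


print(response_deadline("harassment_or_bullying", datetime.now()))
```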

Given the global nature of social media, it makes more sense to build these elements into a federal law rather than to attempt to address social media problems piecemeal in different states. As I follow the developments in other countries and consider the momentum built since Frances Haugen’s whistleblower testimony in 2021, I believe federal legislation in the US is now necessary.

Current Federal Legislative Proposals

There are currently a couple of federal proposals focused on safety that have merit but also raise some concerns. A third bill intended to amend the Children’s Online Privacy Protection Act of 1998 (known as COPPA 2.0) focuses almost exclusively on privacy, marketing, and targeted advertising related to youth and is worth a separate discussion down the road.

To begin, let’s discuss the Kids Online Safety Act. I am on board with the clause that platforms should provide minors with more security and privacy settings (and more education and incentive to use them!). I wonder if parents really do need more controls, though. Research has shown that parents are already overwhelmed with the number of safety options they have to work with on platforms, that they want more tools for gaming instead of social media, and that they lean more heavily on informal rules than digital solutions (which I personally think is best!). Also, I formally speak to thousands of parents every year, and many admit to not taking the time to explore the tools that are already provided in-app by platforms. I need to hear more about exactly what parental controls are currently missing, and which new ones can add universal value to both youth and their guardians.

I am on board with protecting youth from harmful online content, with the caveats I provided earlier about censoring only the extreme types. Platforms cannot “prevent and mitigate” all content related to anxiety and depression, and arguably shouldn’t, given its value in teaching and helping information-seeking youth across the nation. I am on board with the clause requiring independent audits for the purposes of transparency, accountability, and compliance. I’m also on board with the clause giving researchers more access to social media data to better understand online harms, but would advocate for a more granular, case-by-case approval process to ensure that proprietary algorithms and anonymized data are properly safeguarded.

However, I am concerned that this law will make it difficult for certain demographics or subgroups of youth to operate freely online. The US Supreme Court has stated that minors have the right to access non-obscene content and to express themselves anonymously. Furthermore, social media does help youth in a variety of ways (as acknowledged by the US Surgeon General), including strengthening social connections and providing a rich venue for learning, discovery, and exploration. Previously there was concern that KOSA would allow non-supportive parents to have access to the online activities of their LGBTQ+ children. The bill was refined recently so that it does not require platforms to disclose a minor’s browsing behavior, search history, messages, or other content or metadata of their communications. However, it’s not a good look for the lead sponsor of the bill to state in September 2023 that it will help “protect minor children from the transgender in this culture.” I’m not sure how to reconcile this.

Second, we have the Protecting Kids on Social Media Act. It mandates age verification, and I am on board with this because the benefits outweigh the detriments, it is already happening to some extent, and it will eventually become mainstream because of societal and governmental pressure. It just will. The bill also bans children under age 13 from using social media; I do not support bans for the reasons mentioned above.

It also requires parental consent before 13- to 17-year-olds can use social media. My stance is that if a kid wants to be on social media, a legal requirement for parental consent is not going to be as strong or effective as the parent simply disallowing it and enforcing that rule. Again, regulating a requirement for parental consent before a minor posts a picture, sends a message, or watches a short-form video on an app does no better job than laying down your own set of “laws” under your roof about what your teen can and cannot do. And if a teen can convince a parent to override their household rules (just talk to the parents around you!), they can also convince them to provide formal consent on a platform they want to use. Regardless, where there is a will, there is a way – that minor will somehow be able to get access, even if a friend (whose parents don’t care what they do online) creates an account for them. Finally, the Protecting Kids bill mandates the creation of a digital ID pilot program to verify age. As I’ve said, age verification through some sort of digital ID system is going to happen (even if it is a few years away), and so I am on board with this despite the valid concerns that have been shared.

As I close, I cannot emphasize strongly enough that a nuanced discussion of this topic is laden with an endless stream of “if’s, and’s, or but’s.” Every side can make a reasonable, passionate case for their position on any one of these issues (blanket social media bans, age verification systems, restricted access to online resources, appropriateness of content moderation for young persons, the availability and effectiveness of safety controls, the true responsibility of each stakeholder, etc.), and I cannot fault them. I respect their position and endeavor to put myself in their shoes. Certain variables (existing case law, empirical research, and international observations) have informed my own perspective and position, and I remain open to new knowledge that may change where I stand. Ultimately, we all want safer online spaces and a generation of youth to thrive regardless of where and how they connect, interact, and seek information. Hopefully, legislation that is specific, concise, practical, enforceable, and data-driven will be enacted to accomplish that goal.

Featured image: http://tinyurl.com/2p96zmfu (Nancy Lubale, Business2Community)

References

1. Barrett-Maitland N, Barclay C, Osei-Bryson K-M. Security in social networking services: a value-focused thinking exploration in understanding users’ privacy and security concerns. Information Technology for Development. 2016;22(3):464-486.

2. Van Der Velden M, El Emam K. “Not all my friends need to know”: a qualitative study of teenage patients, privacy, and social media. Journal of the American Medical Informatics Association. 2013;20(1):16-24.

3. Figley CR. Victimization, trauma, and traumatic stress. The Counseling Psychologist. 1988;16(4):635-641.

4. Gekoski A, Adler JR, Gray JM. Interviewing women bereaved by homicide: Reports of secondary victimization by the criminal justice system. International Review of Victimology. 2013;19(3):307-329.

5. Campbell R, Raja S. Secondary victimization of rape victims: Insights from mental health professionals who treat survivors of violence. Violence and Victims. 1999;14(3):261.

6. Keipi T, Näsi M, Oksanen A, Räsänen P. Online hate and harmful content: Cross-national perspectives. Taylor & Francis; 2016.

7. Branley DB, Covey J. Is exposure to online content depicting risky behavior related to viewers’ own risky behavior offline? Computers in Human Behavior. 2017;75:283-287.
