In recent years, cyberbullying has become an all-too-familiar social problem that many families, communities, schools, and other youth-serving organizations have had to face head-on. Defined as “willful and repeated harm inflicted through the use of computers, cell phones, and other electronic devices,” cyberbullying often appears as hurtful social media posts, mean statements made while gaming, hate accounts created to embarrass, threaten, or abuse, or similar forms of online cruelty. Over the last fifteen years, research on teens (typically middle and high schoolers) has shown that those who have been cyberbullied – as well as those who cyberbully others – are more likely to struggle academically, emotionally, psychologically, and even behaviorally.
Despite the progress made in understanding cyberbullying among teens, very little is known about these behaviors as they occur among tweens: the momentous developmental stage spanning roughly ages 9 to 12. To our knowledge, no previous research has explored cyberbullying among tweens across the United States. We do know that young children’s access to and ownership of mobile devices is increasing, and the COVID-19 pandemic in 2020 may have elevated these numbers further because of stay-at-home orders and online learning across the United States. It stands to reason, then, that cyberbullying is likely occurring among tweens, and obtaining an accurate picture of its scope can help move us toward more informed responses.
This study explored bullying and cyberbullying behaviors, as well as social media and app usage among a probability-based representative national sample of 1,034 tweens in the United States. Data were collected in June and July of 2020.
This report presents the results of a nationally representative survey of 1,034 children between the ages of 9 and 12 years old. The survey was conducted online from June 19 through July 6, 2020, and was fielded by Ipsos using their probability-based KnowledgePanel, the largest online panel that is representative of the U.S. population. KnowledgePanel recruitment employs an address-based sampling methodology drawing on the United States Postal Service’s Delivery Sequence File—a database with full coverage of all delivery points in the U.S. As such, samples from KnowledgePanel cover all households regardless of their phone status, and member households without Internet access are furnished with a free web-enabled device and Internet service. Because panel members are randomly recruited through probability-based sampling, survey results can properly represent the U.S. population with a measurable level of accuracy, a feature not obtainable from nonprobability panels. (Ipsos previously relied on random-digit dialing for recruitment before adopting address-based sampling.) In contrast, “convenience” or “opt-in” surveys recruit participants through emails, word-of-mouth, pop-up ads online, or other non-scientific methods.
The sample for this survey includes 9- to 12-year-olds who attend public or private schools; homeschooled children were excluded. For each child, parental permission was obtained; once the parent had consented, child assent was obtained as well. The survey was offered in English or Spanish. The final dataset is weighted to reflect benchmarks obtained from the 2019 March Supplement of the Current Population Survey (CPS). The 2018 American Community Survey (ACS) was used to obtain language proficiency benchmarks to adjust the weights of Hispanic respondents. The margin of error due to design effect at the 95% confidence level is +/- 3.45% for the full sample. The response rate for this study was 44%. Missing data were excluded on an analysis-by-analysis basis, with no individual bullying or cyberbullying variable having more than 5 missing cases.
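A margin of error “due to design effect” reflects the standard formula for a proportion, inflated by the design effect (deff) introduced by weighting. As a rough sketch only: the report does not state the design effect, so the deff value below is an assumption chosen to illustrate how a figure like +/- 3.45% can arise from n = 1,034.

```python
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """Margin of error (in percentage points) for a proportion at the
    95% confidence level, inflated by the design effect of weighting."""
    return 100 * z * math.sqrt(deff) * math.sqrt(p * (1 - p) / n)

# Unweighted (simple random sample) margin for n = 1,034:
print(round(margin_of_error(1034), 2))              # about 3.05

# A hypothetical design effect of ~1.28 yields roughly +/- 3.45%:
print(round(margin_of_error(1034, deff=1.28), 2))
```

The deff of 1.28 is purely illustrative; the actual value depends on the variance of the survey weights.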
Operationalizations of bullying and cyberbullying vary widely from one study to the next.[40] For this study, bullying was defined as: “Bullying is when someone repeatedly hurts someone else on purpose, such as pushing, hitting, kicking, or holding them down. Bullying can also be when someone calls people mean names, spreads rumors about them, takes or breaks something that belongs to them, or leaves them out of activities on purpose, over and over again. Those who bully others are usually stronger, or have more friends or more money, or some other power over the person being bullied. Bullying can happen in person or can happen online, including cyberbullying.” Cyberbullying was defined as: “Cyberbullying is when someone repeatedly bullies or makes fun of another person (on purpose to hurt them) online or while using cell phones or other electronic devices while playing online games or chatting with others.” These definitions were presented to respondents in text and audio format.
Instrument Development and Refinement
The questionnaire was developed by Justin W. Patchin and Sameer Hinduja. Specific items from the 2017 Cartoon Network survey of tweens were included in the current study to allow for comparisons. The draft instrument was then tested among a sample of children at the younger end of the target population. In-depth phone discussions were convened with nine children between the ages of 8½ and 10 (5 boys and 4 girls) from March 26-31, 2020. These children were recruited and interviewed by New Amsterdam Consulting, Inc., and all resided in the Phoenix, Arizona area. The goal of these interviews was to assess the age-appropriateness of the questionnaire. Children were sent a copy of the instrument and asked to identify words or phrases they did not understand. They were also asked to read critical passages of the proposed survey so that their comprehension could be evaluated. Minor changes were made to the questionnaire as a result of this feedback.
The questionnaire was next pre-tested among a sample of 25 respondents from June 5 to June 10, 2020. No irregularities were observed, and we moved forward with full distribution of the questionnaire.
Deliverables from this project (in PDF format):
Blog posts based on this data set:
This study was made possible through the support of AT&T and Cartoon Network.