Here at the Cyberbullying Research Center, in addition to our own projects, Sameer and I work hard to stay on top of all of the cyberbullying research being done by others. When new reports are released, or when articles are published in journals, we are probably among the first to read them. While there has been a dramatic increase in the number of journal articles published over the last year or so, we find wide variation in the descriptive information these articles report about how each study was conducted and what results were obtained. In order to continue building a literature base marked by quality and rigor, I would like to ask all researchers who are studying this problem to work toward reporting some common baseline information in all of their reports and published articles, so that the data can be accurately synthesized, compared, and contrasted. It is hard to learn from a literature base that is disparate on so many factors. Let me provide just a few examples.
We have previously discussed the vast differences in cyberbullying prevalence rates reported across several published articles (rates range from 5.5% to 72% in the 42 articles I have read). We might better understand why there is such variation if researchers better documented what they did and how they did it. For example, it makes sense that online-only, opt-in studies would yield higher prevalence rates, since they are restricted to individuals who are regularly online and who volunteer to participate. Moreover, studies that include 18- and 19-year-old respondents in their assessment of “teen” cyberbullying will no doubt find higher lifetime prevalence rates than those that focus only on middle-school-aged youth (because, of course, older respondents have simply been alive for a much longer period of time). And asking about cyberbullying experiences from the previous 30 days will certainly return fewer incidents than asking about lifetime experiences. Another major contributor to the differences is the way cyberbullying is defined across studies. These are just a few reasons why there are so many discrepancies among the cyberbullying prevalence rates reported in the research.
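To make the response-window point concrete, here is a minimal sketch in Python, using entirely made-up answers from ten hypothetical respondents (not real data), showing how the very same sample produces very different prevalence rates depending on whether you ask about the previous 30 days or about the respondent's lifetime:

```python
# Hypothetical illustration only: each tuple records whether one respondent
# reports being cyberbullied (in the last 30 days, ever in their lifetime).
respondents = [
    (False, True), (False, False), (True, True), (False, True),
    (False, False), (False, True), (False, False), (True, True),
    (False, False), (False, True),
]

def prevalence(flags):
    """Percentage of respondents reporting the experience."""
    return 100 * sum(flags) / len(flags)

last_30_days = prevalence([r[0] for r in respondents])
lifetime     = prevalence([r[1] for r in respondents])
print(f"30-day prevalence:   {last_30_days:.1f}%")  # 20.0%
print(f"Lifetime prevalence: {lifetime:.1f}%")      # 60.0%
```

Both numbers are "the" cyberbullying rate for this sample; without the response window reported alongside them, readers cannot tell which kind of figure they are comparing.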
If you are collecting data on cyberbullying, I would ask that you collect and report basic demographic characteristics of the sample and thoroughly describe how you carried out your study. We are more than happy to consult with other researchers about what would be best, so feel free to drop us a note. Here are a few elements that should be included in any published report on cyberbullying:
• What are the demographic characteristics of the sample (total number of students included, gender, race, age)?
• When were the data collected (month, year)?
• How did you define and operationalize cyberbullying (What is cyberbullying? How did you measure it? Can one instance of harassment online be considered cyberbullying based on your measure?)?
• What was the response window of experience with cyberbullying (previous 30 days, 6 months, year, lifetime)?
• How was the information collected (classroom survey, in-person interview, online survey, etc.)?
• How was the sample identified and selected (randomly, based on some unique characteristic, because they were in a particular class, etc.)?
• What is the sample representative of (a particular school or district, state, country) and how do you know that it actually is?
• What were the prevalence rates of experience with cyberbullying—both victimization and offending (total, and broken down by other demographic characteristics, especially gender)?
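As a sketch of the last item on the list, here is one simple way, in Python with hypothetical records and illustrative field names (nothing here is a reporting standard), to compute prevalence both overall and broken down by gender:

```python
from collections import defaultdict

# Hypothetical survey records: each notes gender and whether the respondent
# reported victimization or offending within the study's response window.
records = [
    {"gender": "female", "victim": True,  "offender": False},
    {"gender": "female", "victim": False, "offender": False},
    {"gender": "male",   "victim": True,  "offender": True},
    {"gender": "male",   "victim": False, "offender": False},
    {"gender": "female", "victim": True,  "offender": False},
    {"gender": "male",   "victim": False, "offender": True},
]

def prevalence_by(records, group_field, outcome_field):
    """Percent reporting the outcome, overall and per group."""
    counts = defaultdict(lambda: [0, 0])          # group -> [reported, total]
    for r in records:
        counts[r[group_field]][0] += r[outcome_field]
        counts[r[group_field]][1] += 1
    overall = 100 * sum(r[outcome_field] for r in records) / len(records)
    by_group = {g: 100 * yes / n for g, (yes, n) in counts.items()}
    return overall, by_group

overall, by_gender = prevalence_by(records, "gender", "victim")
print(f"Victimization overall: {overall:.1f}%")
for g, rate in sorted(by_gender.items()):
    print(f"  {g}: {rate:.1f}%")
```

Reporting both the total and the per-group breakdowns (with the sample sizes behind them) is what allows findings to be compared across studies.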
Working together, we can shed more meaningful light on the nature, extent, and consequences of cyberbullying, and our efforts will be enhanced exponentially if we all use comparable methodologies. At the very least, we need to take care to document what we did, so that any differences that might be attributable to the way cyberbullying was studied can be identified and taken into consideration when discussing the results.