A Call for Consistency in Information Reported in Cyberbullying Research Articles


Here at the Cyberbullying Research Center, in addition to our own projects, Sameer and I work hard to stay on top of all of the cyberbullying research being done by others. When new reports are released, or when articles are published in journals, we are probably among the first to read them. While there has been a dramatic increase in the number of articles published in journals over the last year or so, we find there is wide variation in the descriptive information reported in these articles about how the study was conducted and what results were obtained. In order to continue building a literature-base marked with quality and rigor, I would like to ask all researchers who are studying this problem to work toward reporting some common baseline information in all of their reports and published articles, so that the data can be accurately synthesized, compared, and contrasted. It is hard to learn from a literature-base that is so disparate on many factors. Let me provide just a few examples.

 

We have previously discussed the vast differences in cyberbullying prevalence rates reported across published articles (rates range from 5.5% to 72% in the 42 articles I have read). We might better understand why there is such a difference if researchers better documented what they did and how they did it. For example, it makes sense that online-only, opt-in studies would yield higher prevalence rates, as they are restricted to individuals who are regularly online and who volunteer to participate. Moreover, studies that include 18- and 19-year-old respondents in their assessment of “teen” cyberbullying will no doubt find higher lifetime prevalence rates than those that focus only on middle-school-aged youth (because, of course, they have been alive for a much longer period of time). And asking about cyberbullying experiences from the previous 30 days will certainly return fewer incidents than asking about lifetime experiences. Another major contributor to differences is the way cyberbullying is defined across studies. These are just a few examples of why there are many discrepancies among cyberbullying prevalence rates reported in the research.

 

If you are collecting data on cyberbullying, I would ask that you collect and report basic demographic characteristics of the sample and thoroughly describe how you carried out your study. We are more than happy to consult with other researchers about what would be best, so feel free to drop us a note. Here are a few elements that should be included in any published report on cyberbullying:

 

• What are the demographic characteristics of the sample (total number of students included, gender, race, age)?
• When were the data collected (month, year)?
• How did you define and operationalize cyberbullying (What is cyberbullying? How did you measure it? Can one instance of harassment online be considered cyberbullying based on your measure?)?
• What was the response window of experience with cyberbullying (previous 30 days, 6 months, year, lifetime)?
• How was the information collected (classroom survey, in-person interview, online survey, etc.)?
• How was the sample identified and selected (randomly, based on some unique characteristic, because they were in a particular class, etc.)?
• What is the sample representative of (a particular school or district, state, country) and how do you know that it actually is?
• Prevalence rates of experience with cyberbullying—both victimization and offending (total, and broken down by other demographic characteristics, especially gender).
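For researchers who keep structured records of the studies they review or conduct, the checklist above could be captured as a simple metadata record. The sketch below is purely illustrative — the field names and every value in the example are hypothetical, not a standard schema or data from any actual study:

```python
from dataclasses import dataclass

@dataclass
class StudyReport:
    """Hypothetical record of the baseline reporting elements listed above."""
    sample_size: int               # total number of students included
    gender_breakdown: dict         # e.g., {"female": 0.50, "male": 0.50}
    age_range: tuple               # (min_age, max_age) of respondents
    collection_period: str         # month and year data were collected
    definition: str                # how cyberbullying was defined/measured
    response_window: str           # "30 days", "6 months", "year", "lifetime"
    collection_method: str         # classroom survey, interview, online survey
    sampling_strategy: str         # random, convenience, single class, etc.
    population_represented: str    # school, district, state, or country
    victimization_rate: float      # proportion reporting victimization
    offending_rate: float          # proportion reporting offending

# Entirely made-up values, shown only to illustrate the fields:
example = StudyReport(
    sample_size=1000,
    gender_breakdown={"female": 0.50, "male": 0.50},
    age_range=(11, 14),
    collection_period="February 2010",
    definition="repeated harm inflicted through computers or cell phones",
    response_window="30 days",
    collection_method="classroom survey",
    sampling_strategy="random",
    population_represented="one school district",
    victimization_rate=0.08,
    offending_rate=0.06,
)
```

Recording studies in a consistent structure like this would make it far easier to see, at a glance, which methodological differences might explain divergent prevalence rates.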

 

Working together, we can shed more meaningful light on the nature, extent, and consequences of cyberbullying, and our efforts can be enhanced exponentially if we all use comparable methodologies. At the very least, we need to take care to document what we did, so that any differences that might be attributable to the way cyberbullying was studied can be identified and taken into consideration when discussing the results.

7 Comments

  1. I agree that we should definitely stay on top of all the bullying that is going on anywhere. I hate to see kids get bullied. It is really quite sad!

  2. Great suggestion – glad you are making it. Thanks. If you can clone yourselves, it would be great if you would rate the studies coming out for validity. 🙂

  3. It is very sad that innocent kids give up their lives just because of people bullying them. They should not have to give up their lives just because of one person bullying them.

  4. Also, if the bullies think it is funny bullying other people, they’re wrong. It is not funny. It is torture. Bullies torture people because of problems at home, or because someone is forcing them to do mean things or post mean things for them to be their friend. That shows that that person is not a true friend.

  5. Hello – My colleague and I are doctoral students here in New York. Our research is parallel to each other. We both agree with your statement that all research regarding cyberbullying should use comparable methodologies. We have found many differences.

    We both would love the opportunity to meet and talk about our research.

    Looking forward to hearing from you.

  6. Hi Mitchell – I’m JC from the Philippines. I am currently making a proposal about cyberbullying. May I ask for a copy of yours to cite as one of my related studies? Hoping for your positive response.
