Small Samples Don’t Speak “Truth”

Our primary mission at the Cyberbullying Research Center is to translate the research we and others do into something that is meaningful and interpretable to teens, parents, educators, and others dedicated to preventing and responding more effectively to cyberbullying. When we first launched this website (10 years ago!), there wasn’t much research being done, and so it was easy to keep up. These days, however, many scholars are putting cyberbullying under the microscope, which is a very good thing. It is important to recognize, though, that not all studies are created equal. In this post I’d like to discuss one particular problem: small sample sizes. And, to be more specific, I am most concerned with the way some media reports portray results from these studies to be definitive.

For illustration purposes, I’d like to highlight two recently published papers that attracted some media attention in the last few weeks. I think they attracted this interest, in part, because their findings speak to the conventional wisdom regarding cyberbullying (that is, that traditional bullying is worse than cyberbullying, and that no one really wants to intervene when they see it happen). I’m all for using data to help validate or refute commonly-held beliefs about cyberbullying. Many of the media reports about these papers, however, make broad, seemingly conclusive statements based on the perspectives or experiences of a very small group of students.

Traditional Bullying Is More Harmful than Cyberbullying

The first paper, entitled “Students’ perceptions of their own victimization: A youth voice perspective” and published in the Journal of School Violence, was written by Emma-Kate Corby and five of her Australian colleagues. The authors analyzed responses from 156 middle and high school students (114 female, 42 male) who had been victims of both traditional bullying and cyberbullying, with the primary purpose of determining which the students themselves believed to be worse. A typical headline about this study stated that “Cyberbullying Not as Concerning as Face-to-Face for Kids.” Is this true?

Among the students who had experienced both forms of bullying, 59% said that the face-to-face form was worse, while 15% said the cyberbullying was worse (26% said they were about the same). So, at least among the majority of students in this particular sample, the face-to-face bullying they experienced was worse. Interestingly, we hear quite often from teens who tell us that the online forms of bullying they experienced were worse for them than the at-school forms they had to endure. Which “sample” is more reflective of the experiences of most youth: theirs or ours?
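
To get a sense of how much uncertainty a sample of 156 carries, here is a quick back-of-the-envelope calculation (mine, not the authors’) using a standard normal-approximation confidence interval for each reported percentage:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

n = 156  # students in the Corby et al. sample who experienced both forms
for label, p_hat in [("face-to-face worse", 0.59),
                     ("cyberbullying worse", 0.15),
                     ("about the same", 0.26)]:
    low, high = proportion_ci(p_hat, n)
    print(f"{label}: {p_hat:.0%} (95% CI roughly {low:.0%} to {high:.0%})")
```

Even taking the sample at face value, each estimate carries a band of several percentage points of pure sampling error (the 59% figure spans roughly 51% to 67%), and that is before we even ask whether these 156 students resemble youth in general.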

I personally believe that the answer to the question of “which is worse” varies significantly by student and by experience. A blanket statement that “all cyberbullying is less impactful for all students than all forms of face-to-face bullying” simply isn’t supported by the evidence. While media reports might suggest that, the authors of the paper certainly did not draw this conclusion. Some teens are significantly impacted by online experiences, whereas others are not. It is very person-specific.

I should point out that I know a few of the authors of this paper and genuinely respect their work. As such, I cannot dismiss the findings outright. But I don’t think they would generalize their results as broadly as some media reports have done (even if subsequent research ultimately supports their findings).

Few Students Willing to Step up When They Witness Cyberbullying

A second paper, written by Kelly Dillon and Brad Bushman (both from the School of Communication at Ohio State University) was published in Computers in Human Behavior and entitled “Unresponsive or un-noticed?: Cyberbystander intervention in an experimental cyberbullying context.” This study sought to determine if people would be willing to directly intervene if they witnessed mistreatment in an online chat room. The sample comprised 221 university students (154 female, 67 male). It is unclear how students were selected to participate in the study or if they were representative of the population from which they were drawn (presumably, Ohio State students).

The researchers set up a scenario where students were invited to evaluate a chat support platform for online surveys. Once in the chat room, a confederate (one of the researchers) began mistreating a third party in the room. Only 10% of participants who noticed the behavior intervened directly (by messaging the target or aggressor, or by contacting the lead researcher). In addition, about two-thirds of the participants intervened indirectly by rating the chat environment (or the chat monitor) poorly on an exit evaluation. I’m not sure how this latter behavior represents an “intervention,” but I suppose it is meant to be a proxy for some kind of online reporting mechanism.

Most of the media headlines that reported on this study focused on the finding that few students directly intervened. But also consider this headline from the Inquisitr: “Online Trolls, Cyber-Bullying Succeeds Because No One Intervenes Or Stands-Up Against The Bullies, Prove Scientists.” I am particularly perturbed by the use of “No One Intervenes” and “Prove Scientists.” Nothing was “proven” in this study, and it is factually incorrect to say that “no one” intervened. But that was the headline.

We saw similar conclusions drawn from a video that went viral a little over a year ago, which seemingly showed university students unwilling to intervene when they saw someone being roughed up right before their eyes. As with research published in academic journals, we need to ask ourselves whether the persons depicted in the video were representative of the general population. That is, we need to carefully consider whether the behaviors featured in the clip (and, by extension, in the previous two studies) are typical of what most people would do. The goal of research is to identify not what is random, but what occurs with some regularity and consistency. Larger samples reduce the likelihood that what is observed is extraordinary.

What Can We Learn?

The important take-home message from these studies (and the media reporting of them) is that more research is necessary. Even though it may well turn out that the results from these studies are valid, it is still unwise to draw concrete conclusions from any single study, especially one that involves just a couple hundred respondents. Small samples are fine for exploratory purposes: to pilot an untested measure or explore a new research question. They should be used to guide more comprehensive investigations in the future, not to create policy or generate page-clicks with disingenuous headlines.

We try to base what few definitive conclusions can be drawn on the weight of prevailing research, rather than just one study. For example, when we say that about one out of every four or five students has experienced cyberbullying at some point in their lifetime, we are basing that on the 10 surveys we have done (which have included more than 15,000 respondents), as well as our painstaking review of 73 articles published in peer-reviewed journals over the last decade (that have included nearly 150,000 respondents).
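
To illustrate what “the weight of prevailing research” means in practice, here is a minimal sketch of pooling prevalence estimates across studies, weighted by sample size. The per-study numbers below are hypothetical placeholders, not figures from our actual review:

```python
# Hypothetical per-study results: (sample size, lifetime victimization rate).
# These are illustrative placeholders, NOT the actual figures from the
# 73 peer-reviewed articles mentioned above.
studies = [(450, 0.18), (1200, 0.24), (330, 0.21), (2100, 0.26), (800, 0.19)]

total_n = sum(n for n, _ in studies)
pooled = sum(n * rate for n, rate in studies) / total_n  # weight by sample size
print(f"Pooled estimate across {len(studies)} studies "
      f"({total_n:,} respondents): {pooled:.1%}")
```

The point of weighting by sample size is that no single study, especially a small one, can swing the overall estimate very far on its own.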

Determining whether a sample is “small” depends more on the size of the population it is intended to represent than on the raw number of people surveyed. If a university has 60,000 students, for instance, and you only study 200 of them, you are examining just one-third of one percent of the population (0.33%). Should we expect that those 200 are substantially similar (based on perceptions and experiences) to all 60,000? A carefully selected (usually randomly-chosen) sample of sufficient size (5-10%?) would allow us to draw some conclusions without having to survey every single person. But if I poll the first 20 students that walk into my building, it is unlikely that their beliefs and behaviors are representative of my university population of 11,000.
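
Here is the arithmetic from the example above, along with a worst-case margin-of-error calculation, assuming the 200 students were chosen by simple random sampling (an assumption that the first-20-through-the-door approach clearly violates):

```python
import math

population = 60_000  # the hypothetical university from the example above
sample = 200

print(f"Sampling fraction: {sample / population:.2%}")  # ~0.33%

# Worst-case (p = 0.5) margin of error at 95% confidence, assuming the
# 200 students were selected by simple random sampling.
moe = 1.96 * math.sqrt(0.25 / sample)

# Finite population correction: negligible when the sample is a tiny slice.
fpc = math.sqrt((population - sample) / (population - 1))
print(f"Margin of error: +/-{moe:.1%} (with FPC: +/-{moe * fpc:.1%})")
```

Under random sampling, the margin of error (about ±7 points here) depends mostly on the absolute sample size rather than on the fraction of the population surveyed, which is why how a sample is selected matters at least as much as how large it is.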

Don’t get me wrong: our samples aren’t perfect either. Much social science research is plagued with problems, some of which are unavoidable. We all make concessions when asking certain questions of certain people at certain times. Gaining access to a large and representative sample of students is very difficult. In order to really make progress in growing our knowledge base related to cyberbullying, though, it is critical for researchers to do the best they can while acknowledging any limitations. And journalists have an obligation to report findings accurately and responsibly.
