Shortly after beginning my involvement with the Open Science Movement, I was at a conference talking to another early career researcher, someone I had just met. I don’t remember which particular controversy we were discussing – there were a lot in the 2015-2017 era – but the issue of objectivity came up. At some point, I suggested quantitative researchers had a lot to learn about handling the subjectivity inherent in the scientific process, and I suppressed a sigh as I processed my discussion partner’s response.
“Maybe qualitative researchers have to deal with that,” he said, “but real research is objective.”
Unfortunately, this isn’t the first time I’ve heard that refrain, and it almost certainly won’t be the last. Still, the joke’s on him: quantitative researchers are as routinely affected by biases as anyone else who does research. Or, you know, is alive. Let us remind ourselves about the Scientific Method:
Our values, perceptions, and biases affect every single part of the research process. It’s unavoidable. It shows up at every step, and it can be difficult to predict what experiences will affect our research, and how.
Observations and Hypothesizing
Our minds are regularly tricked by their own functions. We spent weeks arguing about whether the dress was blue and black or gold and white, about whether we heard Laurel or Yanny. We watch color appear and disappear in unchanging images, look for shapes in clouds, and experience movement where there is none.
We are also subject to many heuristics and fallacies, so much so that guidelines (like this one from the Purdue OWL) exist to help writers avoid them. It would be ridiculous to assume that our observations, and our thinking about those observations, escape similar processes.
And that’s just about “objective” data. What about what we (as a society) tend to consider more subjective data, like perceptions about our own well-being, or about whether something is racist or sexist? Why is it crazy to think that someone who hasn’t had to deal with transphobia might not be able to recognize it (or its sources) without intentional focus?
If even what we observe is shaped by our perceptions, experiences, and potential biases, then our descriptions of those observations, and of how they interact with the natural world, will of course be shaped by those same factors. Someone who thinks video games make children violent is more likely to blame video games when they see a child shoving another child on a playground; someone who doesn’t is more likely to see a frustrated child who can’t find the words to express those frustrations. Our experiences will undoubtedly affect what we consider plausible causes for the effects we see. This is why interrater reliability is so important: two people can watch the same events unfold and come away with different observations.
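To make the interrater reliability point concrete, here is a minimal sketch (plain Python, standard library only) of Cohen’s kappa, a common chance-corrected agreement statistic. The two observers and their codings of ten hypothetical playground incidents are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical observers coding the same ten playground incidents.
observer_1 = ["aggressive", "frustrated", "aggressive", "frustrated", "frustrated",
              "aggressive", "frustrated", "frustrated", "aggressive", "frustrated"]
observer_2 = ["aggressive", "aggressive", "aggressive", "frustrated", "frustrated",
              "aggressive", "frustrated", "aggressive", "frustrated", "frustrated"]

print(round(cohens_kappa(observer_1, observer_2), 2))  # prints 0.4
```

The observers agree on 7 of 10 incidents, which sounds high, but because chance alone would produce agreement half the time, kappa is only 0.4: middling agreement, and a reminder that “seeing the same thing” cannot be taken for granted.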
Study Design and Data Collection
Just as our perceptions, experiences, and potential biases can affect what we even notice and how we interpret and hypothesize about our observations, these aspects of ourselves can affect how we design studies and collect data. A researcher convinced that social media is driving the rise in mental health concerns among American teenagers and young adults is more likely to ask only about social media use, and may ignore other important considerations like ballooning student debt, decreased earning potential, concerns about systemic racism, or, I dunno, a pandemic. If you don’t design a study that collects that data, you’re missing a potentially large chunk of the puzzle, resulting in an incomplete picture of what contributes to the well-being of American teenagers and young adults.
Then there’s the issue of measurement. Even something like depression, considered the “common cold of mental disorders,” is notoriously difficult to measure. How we measure depression will be affected by things like precedent (What have I used before? What did my PI use? What’s gotten me the “right” results in the past?), personal preferences (Which measure do I like better? Which seems better?), perceptions around quality (Which scales have been validated? What have others liked using?), and other factors, some more scientific than others.
When I teach my students about operational definitions (that is, a definition of a concept that includes how it is measured), I often ask them, in small groups, to come up with an operational definition of love. When we come together to share our new operational definitions of love, they are often surprised at a) how difficult it is to come up with an operational definition for love, and b) how different everyone’s answers are, both within the group and across groups.
That’s not a unique problem. Most things in psychology are likely harder to measure accurately than we let on, and the more complicated the issue, the harder it is to measure. How does someone “objectively” measure the lived experiences of immigrant survivors of torture? How does someone design a 20-item, seven-point Likert scale survey on that?
Huh. Maybe there’s a use for qualitative research after all.
Data Analysis and Interpretation
Obviously, you can’t (or, well, shouldn’t) interpret data that isn’t there. But even once you have your data, it can be interpreted in different ways. Data isn’t data until it’s interpreted. Even people with the same exact research question or hypothesis can interpret data in very different ways.
A recent Psychological Science paper highlights this issue in a particularly interesting way: researchers gave 29 teams the same dataset and the same research question: are soccer referees more likely to give red cards to dark-skin-toned players than to light-skin-toned players? The teams used a variety of analytic methods, resulting in 21(!) different combinations of covariates, and 20 of the 29 teams found a significant positive effect, while the remaining 9 found no effect.
What’s particularly illuminating about that example is that there’s no specific motivation, nefarious or otherwise. Someone handed a bunch of analysis teams a research question and dataset and said, “Find the answer to this.” This wasn’t a study these analysts designed. More than likely, it isn’t something they’ve built their careers on, something that’s kept them up at night, or something they’ve spent hours reading and theorizing about. They still came up with multiple answers, some of which are surely wrong. Now imagine someone who has spent decades studying the same thing, maybe even has a theory or a framework named after them. There’s going to be a motivation to interpret data in a particular way.
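The mechanics of how reasonable covariate choices lead to different answers can be sketched in a few lines. This toy simulation (NumPy only; the data and effect sizes are invented, not taken from the red-card study) shows two hypothetical analysts fitting the same outcome: one adjusts for a confounder and one doesn’t, and their estimates for the same predictor diverge sharply:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: a confounder drives both the predictor and the outcome,
# but the predictor itself has no true effect on the outcome.
confounder = rng.normal(size=n)
predictor = 0.8 * confounder + rng.normal(size=n)
outcome = 1.0 * confounder + rng.normal(size=n)

def ols_coefs(y, columns):
    """Least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Analyst A leaves the confounder out: the predictor appears to matter.
coef_a = ols_coefs(outcome, [predictor])[1]
# Analyst B adjusts for it: the apparent effect largely disappears.
coef_b = ols_coefs(outcome, [predictor, confounder])[1]

print(f"without covariate: {coef_a:.2f}, with covariate: {coef_b:.2f}")
```

Neither analyst made an arithmetic error; they simply made different modeling choices, which is exactly the kind of defensible flexibility the 29-team study exposed.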
As I said before, data requires interpretation, and these interpretations don’t come from thin air. Human beings are the ones doing the interpretation, and they are influenced, again, by experiences, perspectives, and potential biases. It’s not (necessarily) nefarious; it’s just what happens when humans study things.
Where do we go from here?
I think a lot of (quantitative and/or positivist-inclined) people are nervous about admitting that science is not in fact objective because it means an admission that the scientific process itself is not the infallible tool we are taught it is. However, I think this fear of embracing the subjectivity inherent in science is holding us back.
Instead, we should embrace the inherent subjectivity of the human pursuit known as science. Ignoring it does not make it go away. By acknowledging it is there, and examining our role in the creation of knowledge, we can be more transparent and rigorous. Qualitative researchers do this through reflexive practices such as positionality statements, in which researchers reflect on their experiences, perspectives, and potential biases, making explicit what is often hidden in scientific reports. A relatively simple example comes from my own work:
The three authors have varying levels of engagement with different gaming communities. The first author has been playing video games since she was four years old, has been involved with gaming-related forums since 2003, and has written for a few gaming websites; the second author plays a variety of video games and has been active on several gaming-related subreddits; and the final author has had experience gaming with family members. While none of the authors identify as FGC [Fighting Game Community] members, the first two authors attended an anime fighting game tournament and the first author has personal and professional contacts within the community. All three authors believe communities should be allowed to speak for themselves, and have made concerted efforts not to overinterpret qualitative responses. Having a member of the author team who was highly involved with gaming facilitated entry and participant recruitment. These different degrees of engagement also enabled the authors to view the FGC and the results from diverse perspectives that, taken together, yield valuable insights.
Why include a statement like this in our research article? Shouldn’t the data speak for itself? Well, no. The idea for this study came from watching Evo with my (now) husband and noticing the closeness among thousands of strangers, coming together for this one big event, and then reflecting on my own experiences growing up as a cisgender woman playing video games. I remembered all the negative press around video games growing up, particularly coming from psychological researchers (cough), and how poorly that coverage meshed with my own experiences. The fighting game community in particular has been hurt by the stereotype of the “violent gamer” because it is both highly competitive and one of the most diverse gaming communities. As I considered these events, I thought, Well, what if I, as a community psychologist, took a strengths-based approach to this community?
If you don’t think my experiences (and my reflections on those experiences) affected my study design and how I interpreted the findings, you’d be wrong. And you’d be wrong to think that this story is a unique one. We study what we study because we find it interesting, important, personally meaningful, or some combination of those factors. The approaches we take are affected by what we’ve learned, what we’ve experienced, and what we accept. It doesn’t matter what you study; we are all affected by these factors. Acknowledging this and exploring the effects it may have on our research isn’t a weakness — it’s how we truthfully engage with science. When we pretend objectivity is something we can and regularly do achieve, we absolve ourselves of responsibility for our research and its implications. If we truly want to move science forward, we need to accept responsibility for ourselves as active agents affecting the scientific process, rather than travelers simply moving through the steps to end up at a predetermined destination called the Truth. It’s messier than that, and only by acknowledging this can we begin to grapple with the consequences of that messiness and improve science.