Expert Insight in Criminal Justice: Dr. Lois James on her controversial study 'The Reverse Racism Effect'

Dr. Lois James
Dr. Lois James is an assistant professor at the Washington State University College of Nursing. As a core faculty member in WSU's Sleep and Performance Research Center, Dr. James studies the relationships between sleep, health and performance in police officers, nurses and top-tier athletes, among others. Dr. James also uses simulators to study the impact of suspect race on police officers' decisions to shoot. Her research in this area has been heavily featured in the mainstream media, and one 2016 article she co-authored, titled "The Reverse Racism Effect," generated significant controversy. Although the study found that police officers harbored implicit racial bias, associating weapons with black suspects more often than with white suspects, it also found that officers were less willing to shoot black suspects in a simulated policing environment. The article speculated on reasons this disparity might exist, among them a "counter-bias" or "reverse racism" effect.

Read the full 'Reverse Racism' study here.

Read the 2017 rejoinder article here.


This interview has been condensed.


Overall, what would you say was the main objective of the study? What were you hoping to do differently from previous studies on officer-involved shootings? 

Our major objective, certainly with the methodology, was to try to develop a test method that was more realistic than had previously been possible in a laboratory environment. So we wanted to test officers' reactions to deadly encounters in a controlled environment, but do so in a way that much more accurately represented what they may experience in the field, which is why we used high-fidelity simulation.


So those were the same kind of simulators that police departments use for training, correct? 

Yes, exactly. In previous experimental designs, the test stimuli had actually been closer to something like the implicit association test, where officers sat in front of a computer and pressed a button labeled “shoot” or a button labeled “don’t shoot” in response to images of armed or unarmed suspects. With this, officers are exposed to video scenarios featuring very realistic encounters, and they’re armed with weapons and have to make decisions in a far more time-pressured and realistic setting. So that was the goal.


Since your study came out, have subsequent studies of officer-involved shootings used technology similar to what you used?

I know of several that are beginning to do so. I don’t believe any of them have published on that work yet, but I’ve certainly spoken with a number of researchers who are trying to incorporate this type of design. One of the challenges is that the simulators are quite expensive, and to really get the most out of the experimental method you need to develop your own scenarios, which is also quite expensive and takes some time. So the scenarios that we use in our lab are all custom-made, developed based on decades of research on officer-involved shootings.


Can you explain a little bit about what the data collection process looked like, in terms of testing the officers?

Absolutely. Officers reported to the lab on several occasions. One of the main purposes of the study was actually to investigate the impact of officer fatigue on operational performance, so we would test them under various conditions of fatigue. We would bring them in and expose them to about a five-and-a-half-hour-long protocol, which included deadly force judgment and decision-making simulation, driving simulation, tactical-social interaction, and then a battery of cognitive testing. So the data reported in “The Reverse Racism Effect” is just one component of what officers experienced throughout their experimental day.


So the report basically found that, in an environment that simulated actual policing, officers were in fact less likely to shoot a black citizen than a white citizen, and there was a longer period of hesitation before shooting a black suspect, right?

Correct, yes.


That was kind of contrary to a lot of the previous literature on that topic. So at the time the report was published, were you aware of how controversial it would be?

Yes, to a certain degree I was. One thing to keep in mind is that our results in some ways are very much in alignment with prior results. For example, when we tested officers using the implicit association test, we found a very strong implicit association between African Americans and weapons. So we did find evidence of implicit racial bias, just like previous studies had. I think what was so different about our study was that the test method for deadly force judgment and decision-making, or shooting decisions, was very different from that of previous studies. Using our design, we found evidence of what we call either reverse racism or counter bias, which seemed to imply that even when officers do have a strong implicit racial bias associating African Americans and weapons, there’s something else that goes on. There’s some metacognitive process that occurs during these far more realistic simulations. And as I’m sure you know, in the course of the paper we speculate on various reasons why that might be. But yes, I did anticipate that it would be a pretty controversial study. I also strongly believed that its being controversial wasn’t cause for it not to be published. I mean, the data is the data, and we really felt that what we had to do was just present the data as best we could, including all the caveats and all of the limitations of the research.


Do you think that your research has been misinterpreted by some groups? 

Yes, certainly, I think that’s a definite possibility, but I don’t think that’s an unusual thing. This paper especially received a huge amount of media coverage, and no matter how carefully you write your manuscript, no matter how temperate you are in your findings, and no matter how many limitations you list (and there are many, many limitations of the research, all of which are very well explained in the manuscript), when it comes to media coverage a lot of that gets overlooked, and the message of the paper gets boiled down to something very simple. Unfortunately this is not a very simple result; it’s a very complex and nuanced result. So yes, I think there is definite potential for it to be misinterpreted, and I think there’s definite potential for it to be used in ways that it’s not intended to be used.


In reporting on studies that could have a big impact on policy makers and the public, what do you think the media could do to present that information accurately?

I think they need to take into account the context in which the research occurs. They need to take into account the limitations of research, and that it doesn’t occur in a vacuum: it builds on decades of work that came before it and will hopefully inform decades of work that comes after it. And I’ve had some very positive media experiences, both in print, where, for example, The New York Times did a very strong and fair piece on the research, and the same with the Washington Post. I think the larger, more mainstream media outlets typically do a very good job of presenting the results fairly and not sensationalizing them too much. I’ve also had some very good experiences with TV; the Today Show did a very good piece, and Anderson Cooper, for example, did a very good piece that was unbiased and fair. But where it can sometimes run into problems is when we put out a press release and a lot of news sources and blogs, for example, jump on the press release without requesting an interview; they’ll just pick the pieces of the press release they want, run with that, and present a very one-sided story. To a certain degree that’s an unavoidable aspect of media relations.


Regarding the reaction from the research community and the general public, there were certain occasions when you were actually threatened or intimidated. Can you go into that a little bit?

In terms of the general public, you’re never going to make everybody happy, especially if you’re just reporting the facts and the data. A lot of people are flatly not going to believe the data, and they’re not going to think the study is valid, because it does not conform to their particular beliefs. And so I’ve certainly received my fair share of correspondence from members of the general public who’ve told me quite clearly that they do not believe the study is valid, because they just don’t believe this type of effect could exist. In terms of reactions from the research community, I would say 90 percent of it has been very strong and very supportive, and has recognized that all we’re really trying to do is understand the problem as best we can. By using this novel test method of deadly force judgment and decision-making simulation, we have peeled back another layer of what motivates officers’ decisions to shoot and not to shoot, and really dug into the science of deadly force. I think most of the research community really values that. They certainly recognize that this is one part of the whole puzzle, the whole picture. It is certainly not the be-all and end-all, nor would we ever claim it to be. It’s an interesting finding that needs considerably more research before it can be generalized to law enforcement as a whole.


Are you still doing research into officer-involved shootings? What projects are you working on now? 

We’re still analyzing data from that study, so we’ve got various manuscripts going through the publication process. We’ve also started various other studies with law enforcement, and hopefully we will be expanding some of our studies to incorporate more of a field component as well, for example analyzing body camera footage and incident reports to see whether bias exists in those.


What are some of the challenges of analyzing body camera footage? That isn’t really something that people have done in the past, right? At least not extensively.

Body cameras are relatively new, so there hasn’t been a huge amount of work done with them, although a couple of studies using that methodology have now been published. One of the major drawbacks is that it’s incredibly time-consuming, as you can imagine: going through the footage to isolate the incidents of interest, and then figuring out some way of coding and scoring them to, for example, pull apart the effect of suspect race from a multitude of other variables. So there’s a huge amount of work involved, but I think it’s important work to do, and body cameras offer a really interesting new opportunity for field research.


I know you found there was a lot of implicit bias among officers. Do you think there’s any way that police departments can train officers so that they don’t have that implicit bias? There’s debate about whether implicit bias training actually works, or whether it could just complicate the problem.

That’s a very interesting, big and largely unanswered question. Obviously a lot of money goes into implicit bias training, and I think the concept of implicit bias training is very good. I think bringing implicit bias to officers’ awareness and providing them with strategies to overcome that bias is incredibly important. What’s also important, though, is determining how well that type of training actually works, and whether it does in fact reduce biased behavior on the street. We currently have a proposal in to conduct a randomized controlled trial in the field testing the effectiveness of different types of implicit bias training, namely classroom-based versus simulation-based implicit bias training. The simulation-based training is one that we’ve developed based on our research at WSU.


Is there anything else you’d like to add about anything we’ve talked about? 

Maybe just to say that one of the things we hope to accomplish with our research is to really understand the dynamics of deadly encounters. It’s very easy to assume that officers are able to do a perfect job, but it’s complicated and complex, because we don’t really understand a huge amount about what goes on in officers when they make these decisions. And to hold officers accountable, which of course we have to do because it’s a tremendous responsibility, we have to be asking them to do things that are actually possible. So what we try to do in our lab is really understand the limits of human performance. Because officers are human like the rest of us.


What do you think is the best way that police departments can build trust with the public? Do you think it’s sort of about that understanding? 

I think in part it is. I think some departments have had great success with bringing members of the public in and, for example, putting them through use-of-force simulations so they can start to appreciate the types of challenges officers may have to go through. I think things like implicit bias training can help tremendously; even if we don’t know how much they affect behavior on the street, they do seem to have a very positive effect on public perceptions of officers, showing that departments are taking implicit bias seriously. I think the basic tenets of de-escalation and procedural justice help as well, and the more departments can show that they’re really interested and invested in those tactics, the more effective they tend to be in building trust. Yeah, I think all of those things can work, and can help.