Monday, June 3, 2019

SCOW flops on JI 140; rejects behavioral research in the process

Predictably, SCOW flopped on its chance to change our defective jury instruction on the burden of proof, JI 140.  As the concurrence in the SCOW opinion points out, the instruction engages in a sort of burden-shifting by focusing jurors on the type of doubt the defense must produce (with numerous warnings about what kinds of doubts are not reasonable) rather than on what constitutes proof beyond a reasonable doubt.  Worse yet, as most of The Dog’s readers know by now, JI 140 concludes with language that other courts (the Fifth Circuit Court of Appeals and the Washington Court of Appeals, among others) have held to be constitutionally defective.  It tells jurors “you are not to search for doubt.  You are to search for the truth.” 

Here’s a summary of the case by SPD’s On Point blog, which also links to the decision itself.  (The post also has a helpful practice tip for defense lawyers, so be sure to read it.)  I’ll have plenty to say about this case in the future—possibly in another law review article.  But for now, I’ll limit my comments to the very small part of the court’s decision discussing the studies I conducted and published with my coauthor Larry White.  These studies demonstrated, unsurprisingly, that telling jurors not to search for doubt but instead to search for the truth lowers the state’s burden of proof.  You can find the studies, along with other JI 140 resources, on my JI 140 resource page (which I’ll be updating soon).  After the jump, I’ll respond to the softballs that SCOW has thrown me. 

As the On Point post notes, SCOW’s entire discussion of the studies is limited to a single footnote.  It doesn’t actually discuss the studies, but merely cites alleged concerns about them.  As we explained in both publications, all studies are necessarily imperfect or flawed.  When a researcher makes a design decision intended to address one problem, that same decision often exacerbates another.  (Larry White and I explained that in this post-studies Gonzaga article.)  Here are SCOW’s feigned concerns, with my replies immediately following.  

  1. Neither study was peer-reviewed by social scientists, as both appeared in law reviews.  A common prosecutor allegation used to be that neither study was peer reviewed; I see that SCOW adopted but subtly modified that complaint.  First, the claim is not true.  Our second study, published with Columbia, was peer reviewed.  Journals at Harvard, Yale, Stanford, and Columbia, along with a small number of other law journals, use peer review.  We won’t know the identity of the peer reviewer or reviewers, however, as they are kept anonymous—this is true in the handful of law reviews that use peer review and in social science journals.  (Prosecutors would be horrified to learn that I, a criminal defense lawyer, have been asked to be a peer reviewer for an international journal on police practices.)  You can read more about the peer review process on p. 509 of my post-studies Cincinnati article.  Also, here’s an article about recent hoax papers being eagerly devoured by peer-reviewed journals, so the process (which shouldn’t be confused with study replication) isn’t all it’s cracked up to be.
  2. Neither study engaged in an actual trial setting.  True.  We used the written case summary method, which is very well-suited to testing the impact of a written jury instruction.  The method is also accepted and commonly used in social science research.  Other studies have even tested the effectiveness of the case summary method relative to more realistic trial simulations, and the results were good.  We explained this in the studies themselves.  And as Judge Bauer (a Wisconsin trial court judge who holds an advanced degree in the social sciences) explained, as long as all groups of participants receive the same material, it doesn’t matter whether it is written, spoken, or acted out by real-life actors.  I also explained all of this on p. 508 of my Cincinnati article (which was available to the court and even cited in the appellate lawyer’s brief and WACDL’s amicus brief).
  3. The studies were limited in that they each utilized only one fact pattern.  This is true.  But that’s two fact patterns in two different studies, each leading to the same finding.  Study replication is a huge deal, as the failure to replicate studies is a problem in the social sciences.  We chose a simple study design for the Richmond study, followed by another simple design in a conceptual replication experiment (the Columbia study), so that the studies were easy to follow.  We also described things in plain language so lawyers untrained in the behavioral sciences could easily read them.  To understand what I mean, pick up the latest copy of Law & Human Behavior, or a similar psychology & law journal, and try to make heads or tails of it.  Brush off your statistics book, and good luck.  Our studies, on the other hand, are designed simply, are clearly written in plain English, and are accessible to lawyers.
  4. The participants engaged in the studies independently and without monitoring, meaning they may have devoted inadequate attention to the studies.  True, but that’s also true of traditional studies that use college students as test subjects, and it’s true of real-life jurors as well.  As we discuss in the studies themselves, numerous real-life cases even uphold convictions despite sleeping jurors.  On top of that, we included attention-check questions in the studies to ensure test participants were, in fact, paying attention.  Besides, even if we hadn’t included such safeguards, this faux complaint does nothing to explain the observed differences in conviction rates between test groups in the studies.
  5. In the first study, there was no procedure to screen participants for potential bias.  This is true not only of the first study but also of the second.  Participants cannot be screened for potential bias, as this would introduce the bias of the experimenters into the process.  As we explained in the studies themselves, as the appellate lawyer explained in her brief, as Judge Bauer explained in his written work, as we explained again on page 172 of the Gonzaga article, and as I explained again on page 500 of the Cincinnati article, participant bias in controlled experiments is addressed through random assignment of participants to test groups.  This creates groups that are statistically equivalent in all respects, thus allowing us to conclude that the manipulated variable (the jury instruction language) caused the different conviction rates.  (For a concrete picture of how that works, see the simulation sketch after this list.)  In other words, controlled experiments, unlike trials, must not use a voir dire process.  This would destroy the integrity of the study.  Given that this was explained so many times in so many different places, SCOW really botched this one.
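
Because that last point is the one SCOW botched most badly, here is a minimal simulation sketch of the logic.  To be clear, this is my own illustration, not the design, data, or code from either study, and every number in it is invented: mock jurors with varying pre-existing leanings toward conviction are randomly split into a control group and a "search for the truth" group, and only the latter gets a hypothetical nudge toward conviction.

```python
# A minimal, self-contained sketch (my illustration only, not the design, data,
# or code from either study; every number below is invented) of why random
# assignment, rather than screening participants for bias, is how controlled
# experiments handle pre-existing juror leanings.
import random

random.seed(1)

def make_participant():
    # Each mock juror has a pre-existing leaning toward conviction, drawn from
    # the same population regardless of which group they later end up in.
    return random.gauss(0.0, 1.0)

def convicts(leaning, instruction_push):
    # A juror votes to convict if their leaning, plus any push supplied by the
    # "search for the truth" language, crosses a fixed evidentiary threshold.
    return leaning + instruction_push > 0.8

def mean(group):
    return sum(group) / len(group)

def conviction_rate(group, instruction_push):
    return sum(convicts(leaning, instruction_push) for leaning in group) / len(group)

# Build a pool of mock jurors and randomly assign them to two conditions.
pool = [make_participant() for _ in range(10_000)]
random.shuffle(pool)
control = pool[: len(pool) // 2]        # standard instruction
truth_group = pool[len(pool) // 2 :]    # "not to search for doubt" language

# Random assignment balances bias without any voir dire: the groups' average
# leanings come out statistically equivalent.
print(f"mean leaning, control group: {mean(control):+.3f}")
print(f"mean leaning, 'truth' group: {mean(truth_group):+.3f}")

# Apply the manipulated variable: only the 'truth' group gets a hypothetical
# nudge toward conviction (0.3 is an arbitrary illustrative value).
print(f"conviction rate, control group: {conviction_rate(control, 0.0):.1%}")
print(f"conviction rate, 'truth' group: {conviction_rate(truth_group, 0.3):.1%}")
```

Run it and the two groups' average leanings land essentially on top of each other, even though nobody was screened for bias, while the conviction rates diverge.  That divergence can only come from the one thing that differs between the groups: the instruction language.  That is the whole point of a controlled experiment, and it is the point items 4 and 5 above keep coming back to.
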
That’s it for now, sports fans.  Stay tuned to the JI 140 resource page for updates, including a new pre-trial motion asking trial courts to change JI 140’s disastrous language.  As the On Point post correctly states, the SCOW decision, a footnote in JI 140 itself, and preexisting case law make clear that such changes to JI 140 are within the discretion of trial court judges.
