TY - JOUR
T1 - Lay Understanding of Forensic Statistics
T2 - Evaluation of Random Match Probabilities, Likelihood Ratios, and Verbal Equivalents
AU - Thompson, William C.
AU - Newman, Eryn J.
N1 - Publisher Copyright:
© 2015 American Psychological Association.
PY - 2015/8/1
Y1 - 2015/8/1
AB - Forensic scientists have come under increasing pressure to quantify the strength of their evidence, but it is not clear which of several possible formats for presenting quantitative conclusions will be easiest for lay people, such as jurors, to understand. This experiment examined the way that people recruited from Amazon's Mechanical Turk (n = 541) responded to 2 types of forensic evidence (a DNA comparison and a shoeprint comparison) when an expert explained the strength of this evidence 3 different ways: using random match probabilities (RMPs), likelihood ratios (LRs), or verbal equivalents of likelihood ratios (VEs). We found that verdicts were sensitive to the strength of DNA evidence regardless of how the expert explained it, but verdicts were sensitive to the strength of shoeprint evidence only when the expert used RMPs. The weight given to DNA evidence was consistent with the predictions of a Bayesian network model that incorporated the perceived risk of a false match from 3 causes (coincidence, a laboratory error, and a frame-up), but shoeprint evidence was undervalued relative to the same Bayesian model. Fallacious interpretations of the expert's testimony (consistent with the source probability error and the defense attorney's fallacy) were common and were associated with the weight given to the evidence and verdicts. The findings indicate that perceptions of forensic science evidence are shaped by prior beliefs and expectations as well as expert testimony and, consequently, that the best way to characterize and explain forensic evidence may vary across forensic disciplines.
KW - Bayesian models
KW - evidence
KW - forensic science
KW - jury decision making
KW - probability
UR - http://www.scopus.com/inward/record.url?scp=84937073376&partnerID=8YFLogxK
DO - 10.1037/lhb0000134
M3 - Article
SN - 0147-7307
VL - 39
SP - 332
EP - 349
JO - Law and Human Behavior
JF - Law and Human Behavior
IS - 4
ER -