In the final days of the Trump administration, a curious press release appeared on the Justice Department’s website announcing the publication of a statement in response to a nearly five-year-old report that critiqued a handful of forensic science practices.
The 2016 report from the President’s Council of Advisors on Science and Technology, or PCAST, concluded that a number of forensic feature-comparison methods — including bite-mark, footwear, and firearms analysis — lacked scientific validity and reliability.
Forensic feature-comparison methods involve practitioners taking a piece of evidence and visually comparing it to an exemplar to determine if they match. An apparent bite mark found on a victim, for example, might be compared to the dentition of a suspect. The problem, the PCAST found, was that while these pattern-matching practices have been used as evidence for decades, they have little, if any, scientific underpinning. They haven’t been empirically proven valid and lack meaningful error rates. The council concluded that some of these practices, like bite-mark analysis, should be abandoned, while others, like firearms analysis, should be subjected to further scientific scrutiny — and, if used as evidence in the interim, that judges and juries should be told of their limitations.
“Without appropriate estimates of accuracy,” the report read, “an examiner’s statement that two samples are similar — or even distinguishable — is scientifically meaningless: It has no probative value and considerable potential for prejudicial impact.”
This did not go over well with many forensic practitioners and prosecutors, who were quick to criticize the report’s conclusions. They argued that these forensic practices weren’t suited to testing by traditional scientific methods, that they’d been working just fine, and that the criminal legal system would flounder without them.
But the PCAST report caught the attention of others too. Defense attorneys and reform-minded forensic practitioners have long decried the fallibility of traditional forensic practices, which were developed by police, not scientists. According to the National Registry of Exonerations, roughly a quarter of the more than 2,800 wrongful convictions cataloged since 1989 involved false or misleading forensic evidence. Defense attorneys began to cite the PCAST report in efforts to block the introduction of certain forensic evidence in their criminal cases.
It was amid all this that on January 13, 2021, the Justice Department announced the publication of its belated official response to the PCAST. The unsigned statement blasted the report’s conclusions as wrongheaded and incorrect, encouraging judges to reject them. To date, the department has indicated that it is sticking to this position.
Now, Democracy Forward and the Union of Concerned Scientists are asking the Justice Department to rescind the statement, which they say runs afoul of the federal Information Quality Act requirement that information disseminated by the government be accurate, reliable, and unbiased. “Retraction will ensure that rigorous and independent forensic science can appropriately inform legal decisions,” the groups wrote in a letter, “without the confusion caused by the seemingly authoritative, but misleading, DOJ statement.”
And, the groups say, lives may depend on the agency doing so. “When our government makes decisions that ignore the best available science, it can result in real harm,” Jacob Carter, a senior scientist with the UCS Center for Science and Democracy, said in a press release. “One of the clearest examples is the way the use of dubious forensic evidence could put an innocent person in prison.”
The Problem With Ballistic Evidence
The murder case against Scott Goodwin-Bey hinged on forensic evidence.
In 2015, Goodwin-Bey was charged with the shooting deaths of four people inside a room at the Economy Inn on the north side of Springfield, Missouri. Police said Goodwin-Bey believed that the four had been talking to the cops about his drug use.
On balance, the evidence against him was weak, and Goodwin-Bey maintained his innocence. There was an informant, apparently in the motel room at the time of the killings, who claimed that he was not involved and instead threw suspicion onto Goodwin-Bey. And there was a gun that Goodwin-Bey allegedly gave to a convenience store clerk two weeks after the crime. The cops confiscated the gun and arrested Goodwin-Bey.
It was the gun that would make or break the state’s case. At the motel, investigators had found 13 shell casings and 11 fired bullets, which were collected for analysis. The question was whether forensic examiners could connect the gun to the crime.
Firearms examination, a branch of “toolmark analysis,” involves forensic practitioners taking spent bullets and shell casings and trying to match them to a suspected crime weapon.
There are two levels of inquiry. First, examiners look for so-called class characteristics, like whether the caliber of the bullet collected at the crime scene matches the caliber of the weapon. If the bullet is a .38, for example, and the gun is a .22, they can’t be related because a weapon can’t fire a bullet of a diameter larger than its barrel. Then there’s rifling, the pattern carved into the gun’s barrel during manufacturing, which spins a bullet when it’s fired to increase accuracy (think of how a quarterback throws a football). The twist is oriented either right or left and made up of raised and lowered portions of metal called lands and grooves. After a bullet is fired, it generally retains impressions of the rifling from the gun that fired it. So if the impressions on a bullet from a crime scene don’t match the rifling on the suspected gun, you know that gun didn’t fire those bullets.
Then things get more complicated. If a bullet and gun share class characteristics, firearms analysts will look for other similarities by using a comparison microscope, for example, to inspect a crime scene bullet and a bullet test-fired in the lab from the suspected gun. At this point they’re looking for details they call “individual characteristics.” Tiny imperfections in a gun barrel could leave impressions on both bullets, say, a scratch to the lands or other defects that examiners claim are unique to a particular weapon. If an examiner sees those things, they’ll often declare a match.
“The peer community is almost exclusively law enforcement. It is not scientific.”
In the Goodwin-Bey case, the state said the forensic examination matched his gun to the crime scene evidence. But the defense challenged this, citing the PCAST report. When it comes to individual characteristics, the report concluded, firearms analysis is neither scientifically valid nor reliable and lacks meaningful error rates — how often an examiner gets it wrong. The PCAST reported finding only one well-designed empirical study, which revealed an error rate that could be as high as 1 in 46.
State Circuit Judge Calvin Holden held a hearing to decide whether the evidence would be allowed. In December 2016, just weeks before Goodwin-Bey was slated to be tried, Holden ruled mostly in Goodwin-Bey’s favor. “The problem with ballistic evidence is that it is all subjective. There have been no large scientific studies to determine an error rate. The peer community is almost exclusively law enforcement. It is not scientific,” he wrote. “Toolmark identification is a very valuable investigative tool. However, that is where it should stay, in the area of law enforcement, not in the courts.”
But he didn’t toss the evidence altogether: Instead of declaring a match — for which Holden concluded there was no scientific support — the examiner could tell the jury that based on class characteristics, the gun could not be excluded as the murder weapon. “The court very reluctantly will allow the state’s lab person to testify, but only to the point that this gun could not be eliminated as the source of the bullet,” the judge wrote. Shortly thereafter, prosecutors dropped the murder charges against Goodwin-Bey.
The Scourge of Wrongful Conviction
The Goodwin-Bey case appears to have been the first in which a judge cited the conclusions of the PCAST report to block the state from using firearms evidence. In the intervening years, at least nine additional rulings favorable to defendants have curtailed the use of such evidence.
Maneka Sinha, an assistant professor of law at the University of Maryland, spent a decade with the Public Defender Service for the District of Columbia. As head of the office’s nationally known Forensic Practice Group, she worked on another case challenging the use of firearms analysis. The question was essentially the same as in the Goodwin-Bey case: Would the state be able to argue that forensics could match shell casings from a crime to a particular gun and put its defendant, Marquette Tibbs, at the scene? The defense team said no; at best, the state could say that the gun could not be excluded as the murder weapon. “Because there’s not sufficient scientific support to go any further than that,” Sinha told The Intercept. At a subsequent hearing, “the goal was to lay bare the flaws with … the discipline as a whole,” she added. “And I think we did that.”
In September 2019, Associate Judge Todd Edelman agreed. “Based largely on the inability of the published studies in the field to establish an error rate, the absence of an objective standard for identification, and the lack of acceptance of the discipline’s foundational validity outside of the community of firearms and toolmark examiners,” he wrote, the evidence must be limited to a conclusion that “the firearm cannot be excluded as the source of the casing.”
The statement rejected the PCAST’s conclusion that studies seeking to validate forensic practices should adhere to basic scientific principles.
The rulings continued into 2020. And then, in January 2021, the Justice Department statement popped up — in direct response to rulings that limited firearms evidence. “Formally addressing PCAST’s incorrect claims has become increasingly important,” it read, “as a number of recent federal and state court opinions have cited the report as support for limiting the admissibility of firearms/toolmarks evidence in criminal cases.” The statement went on to make three assertions, none of which are grounded in science.
The most fundamental was that forensic matching practices don’t belong to the field known as “metrology,” the science of measurement. According to the Justice Department, metrology doesn’t apply to methods like firearms analysis because practitioners don’t really measure anything. Instead, they only use their eyes. “As their reflexive description makes clear, forensic pattern comparison methods compare the features/characteristics and overall patterns of a questioned sample to a known source,” the statement asserted. “They do not measure them.”
The statement also rejected as too stringent the PCAST’s conclusions that studies seeking to validate forensic practices should adhere to a set of basic scientific principles and that error rates should be established through well-designed black-box studies that test examiners’ accuracy.
Just as the PCAST report riled up a section of the forensics community, so too did the Justice Department’s response. Its point about metrology drew a stinging rebuke from Thomas D. Albright, director of the Salk Institute for Biological Studies’ Vision Center Laboratory, which researches how the brain measures visual information.
In a piece published in the Proceedings of the National Academy of Sciences, Albright explained why the Justice Department’s assertion was wrong. “This may seem like a semantic argument of little consequence, but I maintain that it reflects a longstanding and deep-seated misunderstanding within the forensic science community about how people make decisions,” he wrote. The notion that patterns in forensics are not measured but only visually analyzed, he argued, is absurd. “To wit, biological senses employed by human observers measure and discriminate the physical properties of sensory stimuli by simple and well-established rules,” he wrote. “This understanding encourages new ways of thinking about and improving the accuracy of forensic feature comparison thereby limiting the scourge of wrongful conviction.”
To be fair, this isn’t the first time that the Justice Department has looked sideways at the PCAST report. Not long after it was released, then-Attorney General Loretta Lynch said the department would decline to adopt its recommendations. But there’s a critical difference between what Lynch said and the January statement, according to Sinha. “There’s no surprise to us that … Lynch was not going to adopt the findings at the end of the day. This helps them get convictions,” she said. “What she didn’t go so far as to say was that what the PCAST has done is scientifically unsupportable. That it’s bogus, and they’ve got it all wrong, which is effectively what this statement tries to do.”
And the stakes are high, Sinha said. Where something like bite-mark evidence is rarely used, firearms evidence is all but ubiquitous. So rejecting the PCAST recommendations on shoring up forensic analysis in the field has real consequences.
The Justice Department’s response, Sinha wrote in Slate, “was a smoke-and-mirrors attempt to use the credibility of the federal government to prop up the uncritical use of flawed forensic science that has contributed to hundreds of wrongful convictions.”
Unsigned, Unattributed, Unverifiable
Even as Albright and others began to sound the alarm about the content of the Justice Department’s statement, there remained an open question: Who wrote it? Oddly, the official statement had no author attached. But all signs point to one man: Ted Hunt.
Back when Barack Obama was president, there were a few hopeful indications that forensics reform might be in play. Not long after he was inaugurated, the National Academy of Sciences released its long-awaited study of forensics practices. A precursor to the PCAST report, it was equally unsparing in its assessment that, save for DNA analysis, “no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a particular individual or source.”
In the wake of the NAS report, the Obama administration launched the National Commission on Forensic Science, a 32-member panel chosen from across disciplines, including science and law, to “enhance the practice and improve the reliability of forensic science.” Although the group’s work was plodding, incremental, and not entirely satisfying, it was at least moving toward something.
But with the election of President Donald Trump, all momentum ceased. Trump installed Sen. Jeff Sessions, a noted opponent of forensics reform, as attorney general. Sessions folded the commission and in its place named Hunt, a career prosecutor from Kansas City, Missouri, as the head of a mysterious Justice Department forensics working group that never seemed to get off the ground. Hunt had been on the commission; he was one of just two members to vote against a recommendation that forensic practitioners standardize the language they use in reporting their results and avoid language that might overstate or exaggerate their findings — the kind of language judges in the firearms cases barred.
In 2017, Hunt penned an article for the Fordham Law Review titled “A Short Response to the PCAST Report.” It had much the same tone and hit some of the same notes as the Justice Department’s official statement. So it wasn’t as though people didn’t have an idea that Hunt was behind the Trump administration’s response to the PCAST report. But no one has been able to say for sure who is to blame. Hunt did not return a message requesting comment.
The statement has already been used in at least five criminal cases to sidestep judicial scrutiny of forensic firearms evidence.
The fact that the Justice Department’s statement was anonymous is among a host of problems raised by the UCS and Democracy Forward, which on June 24 filed a 20-page letter asking the department to immediately pull the statement off its website while it conducts a thorough review and decides what should be done to correct it.
The demand draws on the Information Quality Act and the requirement that “influential information” that is “expected to have a genuinely clear and substantial impact at the national level” undergo peer review before release. Because the statement is “unsigned, unattributed, and unverifiable,” according to the letter, “the public, and courts, are therefore prevented from confirming the expertise of any contributor or understanding the extent to which these opinions are shared by scientists.”
“The possibility that it was in fact written by a lawyer with a prosecutorial agenda,” the letter adds, “renders the statement susceptible to corruption.”
Jessica Morton, senior counsel with Democracy Forward, says the Justice Department’s statement is dangerous. “I think the potential damage is enormous. Whenever a statement is coming from the United States Department of Justice, it carries more weight than a statement coming from elsewhere,” she said. “Once it becomes enshrined in precedent … scientific techniques that don’t meet the standards to merit the name can become tools to continue incarcerating people, including people who may be innocent.”
Where the PCAST was meant to encourage reform in forensic practices — reforms that, to date, have largely failed to materialize — the Justice Department’s response was specifically intended to influence the courts to reject challenges to forensic evidence. The statement says as much in noting that it comes on the heels of firearms evidence rulings in cases like Goodwin-Bey and Tibbs. According to the advocates’ letter, the agency’s statement has already been used in at least five criminal cases across the country to sidestep judicial scrutiny of forensic firearms evidence.
“And the implication of that, I think, is that the DOJ thinks it’s just fine that certain forensic techniques don’t really hold up from a scientific perspective,” said Samara Spence, another senior counsel with Democracy Forward. “And it implies that DOJ doesn’t think that the validity of these techniques needs to be improved. It seems like an endorsement of the status quo in a vote against improving something that scientists have been saying is flawed for years.”
The Information Quality Act gives the Justice Department 120 days from the filing of the letter to review and take action on the groups’ request.
In an email to The Intercept, Dena Iverson, principal deputy director of the Justice Department’s Office of Public Affairs, said the position laid out in the statement remains the agency’s stance. “The department’s January statement provides a detailed explication of a position that has remained unchanged since publication of the PCAST report in 2016,” she wrote. Iverson did not say who authored the statement.
The Justice Department’s current position is in direct conflict with promises President Joe Biden made earlier this year. After he was inaugurated, the administration posted a lengthy memorandum for executive departments and agency heads titled “Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking.” In the memo, Biden announced that his administration would make “evidence-based decisions guided by the best available science and data.”
“Scientific findings should never be distorted or influenced by political considerations,” Biden wrote.
Advocates say the administration’s directives mean that the Justice Department’s statement must be rescinded. “I mean, at a bare minimum it should be taken down,” Sinha said. But it’s been up for nearly eight months now, so she thinks more should be done to counteract the damage it’s already done. “The right thing to do is replace it with a statement acknowledging the importance of the PCAST report and acknowledging a commitment to … scientific integrity,” she said. “Apply that to criminal justice, once and for all.”
This content originally appeared on The Intercept and was authored by Jordan Smith.