There has been lots of talk about liars this election season, but when it comes to the dispassionate eye of science, there is no foolproof way to say who is telling the truth — yet.
But a new University of Pennsylvania study suggests that science is getting closer. The authors compared the age-old technology of polygraphs with a type of MRI scan, and found that the latter is a more accurate tool for lie detection, at least in a laboratory setting.
In their study in the Journal of Clinical Psychiatry, the authors also found that a combination of the two methods might be even more effective.
Lead author Daniel D. Langleben, a professor of psychiatry at Penn’s Perelman School of Medicine, said it was too soon to say whether MRI machines, the familiar donut-shaped devices of modern health care settings, would ever become a law enforcement tool.
But the technology would be welcome if it were deemed solid enough to be used in court, said David LaBahn, president and chief executive officer of the Association of Prosecuting Attorneys, a nonprofit group in Washington D.C.
Television has led some jurors to expect unrealistically fast and tidy answers from forensic science, a phenomenon called the CSI effect. The research suggests a way that courts could come closer to those expectations, LaBahn said.
While a polygraph relies on indirect measures, MRI scans reveal the activation of various regions of the brain’s cortex that are associated with deception, a visual that would resonate with a lay audience, he said.
“If we could get to a place where the science could be admissible (in court), that this really does show, with brain scans in these pictures, if that individual is being truthful, that would be great,” LaBahn said. “Jurors are expecting this kind of thing.”
Polygraphs, which record heart rate and the electrical conductivity of the skin, among other physiologic characteristics, generally are not admissible in court, though they are used as a police investigative tool and in background checks.
In the hands of a skilled administrator, the polygraph is a valuable tool for correctly identifying the guilty, but it too often falsely accuses an innocent person of lying, said Scott Faro, an adjunct professor of radiology and biomedical engineering at Temple University’s Katz School of Medicine.
Faro, who was not involved with the Penn research but has studied the issue, said the new findings strengthen the case for the use of MRI scans in conjunction with polygraphs and other measures. He and colleagues have patented such an approach.
“If you’re truly innocent, you would want to have the highest accuracy gold standard available to you, and that’s what this type of technology is suggesting,” said Faro, who also has appointments in Temple’s electrical and computer engineering departments.
In the Penn study, 28 participants were asked to write down a number from 3 through 8 and hide it, and were told to lie about it when asked.
One by one, researchers asked each person if he or she had written down a 1, then 2, and so on through 9. Participants always said no, meaning they were telling the truth on the eight occasions when they did not have the number, and lying once when they did. The numbers 1, 2, and 9 were added to the list to serve as experimental controls.
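For readers who want to see the structure of the protocol laid out explicitly, it can be sketched in a few lines of Python. This is purely an illustration, not the researchers' code; the function name and data layout are invented here:

```python
import random

def run_protocol(concealed_number):
    """One participant's trial: asked about each number 1-9, always answers 'no'.

    Returns a list of (probe, answer, is_lie) tuples. The answer is a lie
    only for the one number the participant actually wrote down.
    """
    trials = []
    for probe in range(1, 10):              # ask about 1, 2, ..., 9 in turn
        answer = "no"                       # participants always deny
        is_lie = (probe == concealed_number)
        trials.append((probe, answer, is_lie))
    return trials

random.seed(0)
concealed = random.randint(3, 8)            # chosen number is 3 through 8;
                                            # 1, 2, and 9 serve only as controls
trials = run_protocol(concealed)
lies = [t for t in trials if t[2]]
truths = [t for t in trials if not t[2]]
```

Running this shows the arithmetic behind the article's tally: nine questions per participant, eight truthful denials, and exactly one lie, on the number the participant concealed.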
Each person underwent the process while hooked up to a polygraph device and while undergoing a functional MRI scan, a type of scan that can be performed in high-end MRI machines. The results of each were evaluated by three raters.
Two out of three polygraph experts correctly detected the lies on 20 out of 28 occasions, whereas two out of three MRI raters correctly detected 24 out of 28 lies. Overall, the MRI raters were 24 percent more likely to detect the lie in any given participant, a difference the authors found to be statistically significant.
Moreover, when the MRI and polygraph approaches were in agreement, they were correct 17 out of 17 times, though Langleben said that finding was preliminary.
He also cautioned that the results came from a lab setting, which was artificial in several respects — among them that everyone was instructed to deceive.
“In this case, we’re not detecting liars,” he said. “We’re detecting lies. Every single participant lied, which is a big step away from natural situations.”