Watch What You Think. Others Can.

The Chronicle Review

By Christopher Shea

September 16, 2013

Chronicle illustration by Scott Seymour, original images from Science Photo Library

Imagine that psychologists are scanning a patient's brain, for some basic research purpose. As they do so, they stumble across a fleeting thought that their equipment is able to decode: The patient has committed a murder, or is thinking of committing one soon. What would the researchers be obliged to do with that information?

That hypothetical was floated a few weeks ago at the first meeting of the Presidential Commission for the Study of Bioethical Issues devoted to exploring societal and ethical issues raised by the government's Brain initiative (Brain Research through Advancing Innovative Neurotechnologies), which will put some $100-million in 2014 alone into the goal of mapping the brain. The commission has been asked to examine issues facing researchers and society now, as well as those around the corner.

It will be months, at least, before we learn the group's answer to that question and others raised by advancing neurotechnology.

But one commissioner present on that August morning at the University of Pennsylvania has been exploring precisely those sorts of far-out scenarios. Will brain scans undermine traditional notions of privacy? Are existing constitutional protections sufficient to guard our freedom of thought, or are new laws required as fMRI scanners and EEG detectors grow ever more precise?

Asking those questions is the Duke University professor of law Nita A. Farahany, 37, a rising voice in her field. "We have this idea of privacy that includes the space around our thoughts, which we only share with people we want to," Farahany said in a telephone interview before the meeting. "Neuroscience shows that what we thought of as this zone of privacy can be breached." In one recent law-review article, she warned against a "coming siege against cognitive liberties."

Her particular interest is in how brain scans reshape our understanding of, or are checked by, the Fourth and Fifth Amendments of the Constitution. Respectively, they protect against "unreasonable searches and seizures" and self-incrimination, which forbids the state to turn any citizen into "a witness against himself." Will "taking the Fifth," a time-honored tactic in American courtrooms, mean anything in a world where the government can scan your brain? The answer may depend a lot on how the law comes down on another question: Is a brain scan more like an interview or a blood test?

Farahany, whom Duke lured away from Vanderbilt University last year, is a child of immigrants from Iran. Her interest in privacy was stoked early, by dinnertime conversations about that country's repressive practices. She began her legal-academic career by looking at the intersection of behavioral genetics and the law—her undergraduate major was in genetics and biology, and she has a Ph.D. in philosophy in addition to her J.D., both from Duke. But neuroscience began to interest her because it has less to do with probabilistic outcomes within populations than with concrete issues involving specific human beings, not to mention its hold on people's imaginations.

Her work—notably two law-review articles published last year, "Incriminating Thoughts," in the Stanford Law Review, and "Searching Secrets," in the University of Pennsylvania Law Review—adds to a robust discussion in the legal literature about how neuroscience will alter American jurisprudence, a discussion that has spawned articles with such titles as "Is My Mind Mine? Neuroethics and Brain Imaging" and "The Government Can Read Your Mind: Can the Constitution Stop It?"

Farahany believes there's no guarantee. And she has plenty of allies in arguing that there's a hole in constitutional law that needs to be patched where mental privacy is concerned.

The commission's work may help to put questions of neuroprivacy on the national agenda. So might a new documentary, Brains on Trial, airing in two parts on PBS this month. Hosted by the actor Alan Alda, the documentary discusses the potential and the limitations of brain scans in the courtroom. Farahany offers commentary, and even gamely climbs into an fMRI scanner to see if her brain reveals whether she recognizes photographs of certain faces. It does. But she can also game the machine by pretending not to recognize them.

The ways of peering into the brain are myriad. Neural lie detection using fMRI scans, which measure blood flow, has attempted to identify areas of the brain that register more activity when someone is trying to conceal information. A separate kind of "concealed information test" makes use of electroencephalography (EEG). In one variation, pursued by J. Peter Rosenfeld, of Northwestern University, the relative strength of a brain wave known as P300 is used to determine whether a test subject is familiar with a particular location, weapon, or plot.
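
Rosenfeld's published protocols are considerably more elaborate, but the scoring logic behind a P300 concealed-information test can be sketched in a few lines. The following is a minimal illustration, assuming synthetic single-channel EEG data, an arbitrary sampling rate, and a crude amplitude threshold, none of which reflect his actual method:

```python
import numpy as np

SAMPLE_RATE = 250           # Hz; assumed rate for the synthetic recordings
P300_WINDOW = (0.3, 0.6)    # seconds after stimulus onset, where P300 appears

def mean_p300_amplitude(epochs):
    """Mean amplitude, in the P300 window, of the trial-averaged waveform.

    epochs: (n_trials, n_samples) array from one electrode (e.g., Pz),
    each row time-locked to a stimulus onset, in microvolts.
    """
    start, stop = (int(t * SAMPLE_RATE) for t in P300_WINDOW)
    erp = epochs.mean(axis=0)             # average across trials
    return float(erp[start:stop].mean())

def appears_familiar(probe_epochs, irrelevant_epochs, margin_uv=1.0):
    """Crude decision rule: infer familiarity if the response to the
    crime-relevant "probe" item exceeds the response to irrelevant items
    by some margin (in microvolts). Real protocols use bootstrapped
    statistics rather than a fixed threshold."""
    return (mean_p300_amplitude(probe_epochs)
            - mean_p300_amplitude(irrelevant_epochs)) > margin_uv
```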

Jack Gallant, at the University of California at Berkeley, has had some success in reconstructing, on video monitors, the scenes that test subjects are viewing—simply by examining brain data. His team analyzes how subjects' brains respond to a large stock of movie trailers and video clips. The researchers create a "dictionary" of the brain activity caused by those scenes; in subsequent sessions, decoding software infers, by matching patterns of blood flow in the brain with patterns seen in previous viewings, what subjects are watching at that moment—and it recreates those scenes on monitors. In one instance described in Brains on Trial, when a test subject thinks of an image of vegetables, the computer produces a picture of precisely that.
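
The Gallant lab's actual decoding relies on fitted statistical models of each subject's visual cortex, but the "dictionary" matching described above can be caricatured with a toy pattern-matcher. Everything below, from the made-up data to the correlation-based matching rule, is an illustrative assumption rather than the lab's pipeline:

```python
import numpy as np

def decode_clip(observed, dictionary):
    """Return the name of the stored clip whose voxel-activity pattern best
    matches the observed pattern, scored by Pearson correlation."""
    scores = {name: np.corrcoef(observed, pattern)[0, 1]
              for name, pattern in dictionary.items()}
    return max(scores, key=scores.get)

# Made-up data: 500 "voxels," three clips seen while building the dictionary.
rng = np.random.default_rng(0)
library = {name: rng.standard_normal(500)
           for name in ("vegetables", "faces", "street")}
scan = library["vegetables"] + 0.5 * rng.standard_normal(500)  # noisy re-viewing
print(decode_clip(scan, library))  # -> "vegetables"
```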

"The more you describe how these techniques are getting better," Alda tells Farahany during Brains on Trial, "the more I wonder how different that is from Big Brother looking into our heads."

Granted, it's easy to get swept up in neurohype. In Brains on Trial, the cognitive scientist Nancy Kanwisher, at the Massachusetts Institute of Technology, dismisses fMRI lie-detection as "noisy, feeble," and "error prone," and says it would be "insane" to rely on it. On the other hand, Berkeley's Gallant says that although it will take an unforeseen breakthrough, "assuming that science keeps marching on, there will eventually be a radar detector that you can point at somebody and it will read their brain cognitions remotely."

What's more, some scholars think that lie detectors, or concealed-information detectors, are already working far better than acknowledged by analysts or by courts, which generally but not completely forbid their use. The machines are far from perfect, but juries rely on all sorts of faulty folk wisdom in determining who is lying and who is telling the truth. (Crucially, courts have excluded the detectors not on principle but because they have not won the approval of scientists.)

Farahany is well aware that we are a long way from portable memory downloaders. "I'm taking 'proof of concept' technology and thinking about the implications," she says. Her goal is to establish a new lens through which to view privacy issues, so her arguments do not hinge on the success or failure of any specific technique.

This brings up the question of whether a brain scan is "physical" evidence—like a fingerprint—or "testimonial" evidence. To grapple with that, it helps to know a bit about the convoluted development of the concept of self-incrimination in the United States.

Some physical evidence used to be harder to introduce into the courtroom. Into the 20th century, some states would allow defendants, on Fifth Amendment grounds, to decline to reveal even their shoe size. In determining the line between physical and testimonial, a crucial case was Schmerber v. California (1966), in which a drunken-driving suspect was forced to give a blood sample, whose results were damning.

The tools of forensic science, which include fingerprint scans, lie-detector tests, and ballistic analysis, are being extended by neuroscience.

Schmerber and his lawyers argued that this was self-incriminating evidence that should have been stricken from the record. But over a blistering dissent by Justices Hugo Black and William O. Douglas, the U.S. Supreme Court ruled that blood was physical evidence and not "testimony." The court also decided that this kind of time-sensitive, minimally invasive search did not require a warrant, because the police had probable cause to think that the defendant was drunk, and blood-alcohol content drops with time. (Muddying the waters, in a ruling issued in April, Missouri v. McNeely, the court revisited the issue and concluded that the constitutionality of warrantless blood draws, in the DWI context, must be determined case by case. Justice Antonin Scalia predicted that the decision would cause chaos.)

In a passing comment that hasn't been tested, the justices in Schmerber wrote: "Some tests seemingly directed to obtain 'physical evidence'"—i.e., old-fashioned lie detectors measuring things like heart rate and skin temperature—"may actually be directed to eliciting responses which are essentially testimonial."

With brain scans, the distinction really starts to fall apart. Consider a thought experiment that Farahany uses in "Incriminating Thoughts." A woman is murdered in her home by a masked man wielding a hammer—an act captured on videotape—but first she's able to deliver a blow to his head with the tool. After that counterattack, an accomplice of the man spurs his companion to kill the woman by yelling, "Let's go!" The police (correctly) suspect that the killer was the woman's husband. Brain scans done on the husband, she writes, could determine several things. Did the alleged killer suffer brain damage of the sort a hammer blow might cause? What were his "automatic and physiological responses" to photographs of his wife—disdain and loathing? Love and sadness? They could also see whether the suspect recalled the "Let's go" urging, and could retrieve shards of memories from that night, as in the research at Gallant's lab.

Under current doctrine, it might be consistent to accept all of those scans as permissible. After all, the scans are no more intrusive than a blood draw, and they measure physical characteristics, like blood flow and electrical impulses.

To help think through the puzzles, Farahany proposes a new taxonomy of information, which would capture the types of thought-data being discussed. These more or less track where judges have drawn the line and provide hints about where we might redraw it, if we choose.

Moving roughly from less protected to more protected, her categories are: identifying information, automatic information (produced by the brain or body without effort or conscious thought), memorialized information (that is, memories), and uttered information. (Contrary to idiomatic usage, her "uttered" information can include information uttered only in the mind. At the least, she observes, we may need stronger Miranda warnings, specifying that what you say, even silently to yourself, can be used against you.)
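
For readers who find it easier to see the ranking spelled out, here is one hypothetical way to encode the taxonomy; the numeric ordering is only a rough gloss on "less protected to more protected," not anything Farahany has formalized:

```python
from enum import IntEnum

class ThoughtData(IntEnum):
    """Farahany's four categories, numbered from roughly least to most
    protected; the numbers are an illustrative gloss, not her framework."""
    IDENTIFYING = 1    # e.g., evidence of a skull fracture
    AUTOMATIC = 2      # produced without effort or conscious thought
    MEMORIALIZED = 3   # stored memories
    UTTERED = 4        # includes words "uttered" only in the mind

# On this rough ordering, an inner utterance merits more protection
# than an automatic, unbidden response.
print(ThoughtData.UTTERED > ThoughtData.AUTOMATIC)  # True
```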

The right against self-incrimination has its roots in efforts to eradicate confessions elicited by torture, but, as many lawyers have pointed out, it can seem hard to justify today. Why shouldn't people have to explain their alleged misdeeds? They have no right to keep their thoughts to themselves when other people's crimes are at issue, nor can they keep silent about their own crimes when they've been immunized against prosecution. So protecting privacy, per se, isn't at the core of the Fifth Amendment.

Miranda warnings might need to specify that even what you say to yourself can be used against you.

One overarching theory that purports to justify the right—the one that Farahany is most drawn to—was elaborated by the late William J. Stuntz, a professor of law at Harvard University: the "excuse theory." It says the Fifth Amendment applies when defendants are placed in the tight spot of having to choose between lying and incriminating themselves, a test many people would fail.

Marrying her information categories to Stuntz's theory, Farahany argues, in her thought experiment, that if brain scans were deployed only to determine that the husband has brain damage consistent with a hammer strike, that would count as identifying information. It would be like examining him to see if his skull is broken; he'd have been placed under no coercive pressure. Courts, too, would be unlikely to consider automatic responses to a photograph (visceral revulsion, for example) as self-incriminating, she thinks, because such reactions require no conscious decision to lie or not to lie. In contrast, classic lie-detection, involving yes-no answers and the retrieval of episodic memories, would fall into the utterance category and be the most likely to be protected by the Fifth Amendment.

Memories are particularly vexing: If they can be retrieved like the files on a computer, Farahany suggests, they may be accorded little protection. If they require more-conscious reconstruction, a view that seems to better accord with current brain science, they would seem more like utterances.

Of Farahany's categories, the Stanford law professor Henry T. Greely, a friend and mentor, says: "You don't find that fourfold distinction in the Constitution, you don't find it in 200 years of constitutional precedent, but it's the kind of thing we may need, to deal with privacy issues today."

But what about the Fourth Amendment, which governs when warrants are required, and the limits of lawful searches? Here, Farahany introduces yet another factor for thinking about how courts might treat brain scans. Courts have tended, when weighing whether searches are overly intrusive, to focus on physical trespass. They have said a warrant is required if people have a "reasonable expectation of privacy" in the space where they are voicing their thoughts, but they've tended to define that language in terms of physical spaces (a telephone booth, one's home). "We haven't been generous about interpreting the Fourth Amendment to protect secrets," she says. "It's been more about physical intrusion." She wants to revive a latent respect for intellectual property that she says is also inherent in the amendment.

Consider another hypothetical: a "stop and frisk" of the sort that has become controversial in New York. If police had hand-held brain scanners, they might be able to judge intoxication even less intrusively than with a Breathalyzer, and to determine whether the person being examined was familiar with the faces of people involved in recent crimes. The greater the extent to which people are "authors" of the information being examined, the more likely courts would be to protect it.

That analysis has ramifications that extend beyond brain scans. Focusing on the sensitivity of secrets, as opposed to the physical intrusiveness of a search, might lead courts or legislatures to pay more deference to personal papers or even diaries, which have largely been considered fair game. What would be decisive is the sensitivity of the information, not whether it's in a diary, on a hard drive, or lodged in the brain.

Where self-incrimination is concerned, Ronald J. Allen, a professor of law at Northwestern University, lauds Farahany's detailed presentation of the neuroscience but thinks her overall findings are more or less consistent with ones he reached a decade ago, using a far simpler analysis. In a paper written with M. Kristin Mace, he began by asking whether it would count as self-incriminating if someone were asked questions while affixed to a lie detector, gave no verbal answers, and yet had telltale bodily functions measured. Would that be testimony? Yes, they concluded.

"When the government compels cognition and uses the substantive results of that against the defendant, that is privileged," Mr. Allen says in an interview—emphasizing that he is describing the arc of precedent and not any moral ideal.

In a book to be published next month, Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience (Oxford University Press), Michael S. Pardo and Dennis Patterson directly confront Farahany's work. They argue that her evidence categories do not necessarily track people's moral intuitions—that physical evidence can be even more personal than thought can. "We assume," they write, "that many people would expect a greater privacy interest in the content of information about their blood"—identifying or automatic information, like HIV status—"than in the content of their memories or evoked utterances on a variety of nonpersonal matters."

On the Fifth Amendment question, the two authors "resist" the notion that a memory could ever be considered analogous to a book or an MP3 file and be unprotected, the idea Farahany flirts with. And where the Fourth Amendment is concerned, Pardo, a professor of law at the University of Alabama, writes in an e-mail, "I do think that lie-detection brain scans would be treated like blood draws."

Farahany agrees that the gap between how courts treat automatic information—the kind produced without conscious thought—and people's moral intuitions is problematic, but she argues that the categories can be a tool to expose that gap. More generally, she says, her critics are overly concerned with the "bright line" between physical and testimonial evidence: "All of them are just grappling with current doctrine. What I'm trying to do is reimagine and newly conceive of how we think of doctrine."

"The bright line has never worked," she continues. "Truthfully, there are things that fall in between, and a better thing to do is to describe the levels of in-betweenness than to inappropriately and with great difficulty assign them to one category or another."

Among those staking out the brightest line is Paul Root Wolpe, a professor of bioethics at Emory University. "The skull," he says, "should be an absolute zone of privacy." He maintains that position even for the scenario of the suspected terrorist and the ticking time bomb, which is invariably raised against his position.

"As Sartre said, the ultimate power or right of a person is to say, 'No,'" Wolpe observes. "What happens if that right is taken away—if I say 'No' and they strap me down and get the information anyway? I want to say the state never has a right to use those technologies."

Farahany stakes out more of a middle ground, arguing that, as with most legal issues, the interests of the state need to be balanced against those of the individual. Her taxonomy, she hopes, will provide a useful framework for a robust democratic debate about how the various interests can be weighted.

Extending as it does beyond brain scans, her theory may also help provide answers to the looming "GPS" question, which is closely related to the hot-button telephone-metadata issue raised by recent revelations about the National Security Agency. In United States v. Jones, a case decided last year, the Supreme Court unanimously concluded that the government had conducted an illegal warrantless search when it placed a GPS device on the defendant's car. But the court decided the case on narrow grounds: Officials had missed, by one day, the deadline imposed by a warrant, and they had put the device on the car in Maryland, not in the District of Columbia, as required.

Justices Sonia Sotomayor and Samuel Alito went along with the decision, but both separately raised far-reaching questions. The information relayed by GPS devices has been viewed by the court as noninvasive, at least when cars are on public roads (after all, the police could always tail the car). Should it still be considered so, in the era of ubiquitous GPS, when suddenly the police can electronically tail all of our cars 24/7—to church, to our lovers' houses, to Tea Party meetings?

"I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on," Justice Sotomayor wrote.

But the justices didn't push the issue—at least in that case. "At some point, they are going to have to deal with the informational issue," Farahany says. "They can't ignore it forever." The nonprotection of automatic information, she writes, amounts to "a disturbing secret lurking beneath the surface of existing doctrine." Telephone metadata, another kind of automatic information, can, after all, be as revealing as GPS tracking.

Farahany starts by showing how the secrets in our brains are threatened by technology. She winds up getting us to ponder all the secrets that a digitally savvy state can gain access to, with silky and ominous ease.

Corrections (9/16/2013, 2 p.m.): This article originally identified Nita Farahany as an associate professor recruited from Vanderbilt this year. She is a full professor who was recruited last year; the article has been updated to reflect those corrections.

Christopher Shea is a contributing writer to The Chronicle.
