Monday, September 16, 2013

The Coming Siege Against Cognitive Liberties


Image Source: Nature.

The Chronicle of Higher Education reports on how researchers are debating the legal implications of technological advances in neuroscanning:
Imagine that psychologists are scanning a patient's brain, for some basic research purpose. As they do so, they stumble across a fleeting thought that their equipment is able to decode: The patient has committed a murder, or is thinking of committing one soon. What would the researchers be obliged to do with that information?

That hypothetical was floated a few weeks ago at the first meeting of the Presidential Commission for the Study of Bioethical Issues devoted to exploring societal and ethical issues raised by the government's BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), which will put some $100-million in 2014 alone into the goal of mapping the brain. ...

One commissioner ... has been exploring precisely those sorts of far-out scenarios. Will brain scans undermine traditional notions of privacy? Are existing constitutional protections sufficient to guard our freedom of thought, or are new laws required as fMRI scanners and EEG detectors grow ever more precise?

Asking those questions is the Duke University associate professor of law Nita A. Farahany ... . "We have this idea of privacy that includes the space around our thoughts, which we only share with people we want to ... . Neuroscience shows that what we thought of as this zone of privacy can be breached." In one recent law-review article, she warned against a "coming siege against cognitive liberties."

Her particular interest is in how brain scans reshape our understanding of, or are checked by, the Fourth and Fifth Amendments of the Constitution. Respectively, they protect against "unreasonable searches and seizures" and self-incrimination, which forbids the state to turn any citizen into "a witness against himself." Will "taking the Fifth," a time-honored tactic in American courtrooms, mean anything in a world where the government can scan your brain? The answer may depend a lot on how the law comes down on another question: Is a brain scan more like an interview or a blood test? ...

Berkeley's [Jack] Gallant says that although it will take an unforeseen breakthrough, "assuming that science keeps marching on, there will eventually be a radar detector that you can point at somebody and it will read their brain cognitions remotely." ...

Moving roughly from less protected to more protected ... [Farahany's] categories [for reading the brain in legal terms] are: identifying information, automatic information (produced by the brain or body without effort or conscious thought), memorialized information (that is, memories), and uttered information. (Contrary to idiomatic usage, her "uttered" information can include information uttered only in the mind. At the least, she observes, we may need stronger Miranda warnings, specifying that what you say, even silently to yourself, can be used against you.) ...

In a book to be published next month, Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience (Oxford University Press), Michael S. Pardo and Dennis Patterson directly confront Farahany's work. They argue that her evidence categories do not necessarily track people's moral intuitions—that physical evidence can be even more personal than thought can. "We assume," they write, "that many people would expect a greater privacy interest in the content of information about their blood"—identifying or automatic information, like HIV status—"than in the content of their memories or evoked utterances on a variety of nonpersonal matters."

On the Fifth Amendment question, the two authors "resist" the notion that a memory could ever be considered analogous to a book or an MP3 file and be unprotected, the idea Farahany flirts with. And where the Fourth Amendment is concerned, Pardo, a professor of law at the University of Alabama, writes in an e-mail, "I do think that lie-detection brain scans would be treated like blood draws." ...

[Farahany] says ... her critics are overly concerned with the "bright line" of physical testimony: "All of them are just grappling with current doctrine. What I'm trying to do is reimagine and newly conceive of how we think of doctrine."

"The bright line has never worked," she continues. "Truthfully, there are things that fall in between, and a better thing to do is to describe the levels of in-betweenness than to inappropriately and with great difficulty assign them to one category or another."

Among those staking out the brightest line is Paul Root Wolpe, a professor of bioethics at Emory University. "The skull," he says, "should be an absolute zone of privacy." He maintains that position even for the scenario of the suspected terrorist and the ticking time bomb, which is invariably raised against his position.
"As Sartre said, the ultimate power or right of a person is to say, 'No,'" Wolpe observes. "What happens if that right is taken away—if I say 'No' and they strap me down and get the information anyway? I want to say the state never has a right to use those technologies." ... Farahany stakes out more of a middle ground, arguing that, as with most legal issues, the interests of the state need to be balanced against those of the individual. ...

The nonprotection of automatic information, she writes, amounts to "a disturbing secret lurking beneath the surface of existing doctrine." Telephone metadata, another kind of automatic information, can, after all, be as revealing as GPS tracking.

Farahany starts by showing how the secrets in our brains are threatened by technology. She winds up getting us to ponder all the secrets that a digitally savvy state can gain access to, with silky and ominous ease.

Much of the discussion among these legal researchers involves thinking about cognitive legal issues (motivations, actions, memories) in terms strongly influenced by computer-based metaphors. This is part of a new transhumanist bias, evident in many branches of research, and it confirms a surprising point: posthumanism is not some future hypothetical reality in which we all have chips in our brains and are cybernetically enhanced. It is an often-unconsidered way of life for people who are still 100 per cent human; it is already the way they see the world.

This is the 'soft' impact of high technology: the automatic assumption that we, our brains, and the living world around us are like computers, with data that can be manipulated and downloaded.

In other words, it is not just the hard gadgetry of technological advances that initiates these insidious changes in law and society. If we really want to worry about the advent of a surveillance state, we must question the general mindset of tech industry designers, and of people in general, who unconsciously mimic computers in the way they try to understand the world. From this unconscious mimicry come changes to society for which computers are not technically responsible.

A false metaphorical correlation between human and machine - the expectation that organic lives must be artificially automated - is corrosive to the assumptions upon which functioning societies still rest. These assumptions are what Farahany would call 'current doctrine.' We take 'current doctrine' for granted. But at the same time, we now take for granted ideas that make 'current doctrine' increasingly impossible to maintain.

This is not to say that change is unnecessary or that technology has not brought vast improvements.

But is it really necessary for everything to go faster and faster? Do we need to be accessible to everyone and everything, day and night? Should our bosses, the police, the government, corporations, the media, let alone other citizens, know everything we do and think in private? Do we really need more powerful technology every six months? Why is it necessary that our gadgets become ever smaller, more intimate, and physically attached to us? And why, by this point, is it not compulsory for all of us to learn, to some standard level, how computers are built and how they are programmed?

We are accepting technology as a given, without second-guessing the core assumptions driving that acceptance. Because we are not questioning what seems 'obvious,' researchers press on, in an equally unconsidered fashion, with their expectation that a total surveillance state is inevitable.

2 comments:

  1. What should be absolutely imperative in considering the responsibility of mental health workers who stumble across vivid images of crime in a patient's mind is what we already know about the reliability of eyewitness testimony in criminal investigations: there is none. Human memory, in fact nearly all of it, is synthesized after the fact; it is not merely a recording of the raw feeds of sensory input. People have been convicted of capital crimes committed by others based on the testimony of traumatized victims more times than anyone involved is comfortable admitting. Now authorities are seriously conjecturing that the memories of someone known to have mental or emotional problems (or else they wouldn't be a patient, presumably) should be entered into legal proceedings as fact? I think we're scanning the wrong people's brains.

    Any undergraduate pre-med heading into the cognitive sciences can tell you that a memory in someone's brain is not necessarily a memory of something that actually happened. It could have been dreamt. And even a murder being planned, however vivid the images, is not necessarily something the patient wants to do. It may not be something they are even capable of doing. And really, if you make mental health workers culpable for what their patients are thinking, exactly how is that law going to be enforced? Rather than polluting our already shaky court system with imagined crimes, any ethical person would NOT report a brain scan as though it were a record of an actual event.

  2. Yes, you're right of course. There are all sorts of questions here. For example, who would be interpreting this brain-reading data, and how, and on what grounds? Secondly, for the precrime aspect: how would they distinguish a thought that is contemplated but will never be acted upon from an intention that will actually be carried out by the person?

    The notion that they could arrest someone for contemplating murder and treat that as a criminal offense was, of course, dealt with by Philip K. Dick and turned into the movie 'Minority Report.'

    http://en.wikipedia.org/wiki/Minority_Report_%28film%29

    I also discussed precrime in this post:

    http://historiesofthingstocome.blogspot.com/2010/09/psychometric-assessments-meet-precrime.html

    As to the first idea, that contemplating murder and committing it are two very different things, this is brought up in the movie 'Primal Fear.' Laura Linney's lawyer character uses this distinction as part of her case. She argues that people get enraged and contemplate homicide fairly often, but they don't act on the thought.

    The script is here:

    http://www.script-o-rama.com/movie_scripts/p/primal-fear-script-transcript-norton.html

    From the script:

    /So you have no forensic experience and you're more of an academic? Then you will forgive this rather academic question. I'm driving, somebody cuts me off. I feel like killing this guy, but I don't. Now do I?

    /I would hope not.

    /That's right. Things happen to us. People wrong us. But we don't all invent psychopaths to do our dirty work for us, doctor?
