Laurence H. Tribe, "The Constitution in Cyberspace"
PREPARED REMARKS
KEYNOTE ADDRESS AT THE
FIRST CONFERENCE ON COMPUTERS, FREEDOM & PRIVACY
Copyright, 1991, Jim Warren & Computer Professionals for Social Responsibility
All rights to copy the materials contained herein are reserved, except as
hereafter explicitly licensed and permitted for anyone:
Anyone may receive, store and distribute copies of this ASCII-format
computer textfile in purely magnetic or electronic form, including on
computer networks, computer bulletin board systems, computer conferencing
systems, free computer diskettes, and host and personal computers, provided
and only provided that:
(1) this file, including this notice, is not altered in any manner, and
(2) no profit or payment of any kind is charged for its distribution, other
than normal online connect-time fees or the cost of the magnetic media, and
(3) it is not reproduced nor distributed in printed or paper form, nor on
CD ROM, nor in any form other than the electronic forms described above
without prior written permission from the copyright holder.
Arrangements to publish printed Proceedings of the First Conference on
Computers, Freedom & Privacy are near completion. Audiotape and videotape
versions are also being arranged.
A later version of this file on the WELL (Sausalito, California) will
include ordering details. Or, for details, or to propose other distribution
alternatives, contact Jim Warren, CFP Chair, 345 Swett Rd., Woodside, CA 94062;
voice: (415) 851-7075; fax: (415) 851-2814; e-mail: jwarren@well.sf.ca.us. [4/19/91]
[ These were the author's *prepared* remarks.
A transcript of Professor Tribe's March 26th comments at the Conference
(which expanded slightly on several points herein) will be uploaded onto the
WELL as soon as it is transcribed from the audio tapes and proofed against
the audio and/or videotapes.]
"The Constitution in Cyberspace:
Law and Liberty Beyond the Electronic Frontier"
by Laurence H. Tribe
Copyright 1991 Laurence H. Tribe,
Tyler Professor of Constitutional Law,
Harvard Law School.
Professor Tribe is the author, most recently, of
"On Reading the Constitution" (Harvard University Press,
Cambridge, MA, 1991).
Introduction
My topic is how to "map" the text and structure of our
Constitution onto the texture and topology of "cyberspace". That's
the term coined by cyberpunk novelist William Gibson, which many
now use to describe the "place" -- a place without physical walls
or even physical dimensions -- where ordinary telephone
conversations "happen," where voice-mail and e-mail messages are
stored and sent back and forth, and where computer-generated
graphics are transmitted and transformed, all in the form of
interactions, some real-time and some delayed, among countless
users, and between users and the computer itself.
Some use the "cyberspace" concept to designate fantasy worlds
or "virtual realities" of the sort Gibson described in his novel
*Neuromancer*, in which people can essentially turn their minds into
computer peripherals capable of perceiving and exploring the data
matrix. The whole idea of "virtual reality," of course, strikes a
slightly odd note. As one of Lily Tomlin's most memorable
characters once asked, "What's reality, anyway, but a collective
hunch?" Work in this field tends to be done largely by people who
share the famous observation that reality is overrated!
However that may be, "cyberspace" connotes to some users the
sorts of technologies that people in Silicon Valley (like Jaron
Lanier at VPL Research, for instance) work on when they try to
develop "virtual racquetball" for the disabled, computer-aided
design systems that allow architects to walk through "virtual
buildings" and remodel them *before* they are built, "virtual
conferencing" for business meetings, or maybe someday even "virtual
day care centers" for latchkey children. The user snaps on a pair
of goggles hooked up to a high-powered computer terminal, puts on
a special set of gloves (and perhaps other gear) wired into the
same computer system, and, looking a little bit like Darth Vader,
pretty much steps into a computer-driven, drug-free, 3-dimensional,
interactive, infinitely expandable hallucination complete with
sight, sound and touch -- allowing the user literally to move
through, and experience, information.
I'm using the term "cyberspace" much more broadly, as many
have lately. I'm using it to encompass the full array of
computer-mediated audio and/or video interactions that are already
widely dispersed in modern societies -- from things as ubiquitous
as the ordinary telephone, to things that are still coming on-line
like computer bulletin boards and networks like Prodigy, or like
the WELL ("Whole Earth 'Lectronic Link"), based here in San
Francisco. My topic, broadly put, is the implications of that
rapidly expanding array for our constitutional order. It is a
constitutional order that tends to carve up the social, legal, and
political universe along lines of "physical place" or "temporal
proximity." The critical thing to note is that these very lines, in
cyberspace, either get bent out of shape or fade out altogether.
The question, then, becomes: when the lines along which our
Constitution is drawn warp or vanish, what happens to the
Constitution itself?
Setting the Stage
To set the stage with a perhaps unfamiliar example, consider
a decision handed down nine months ago, *Maryland v. Craig*, where
the U.S. Supreme Court upheld the power of a state to put an
alleged child abuser on trial with the defendant's accuser
testifying not in the defendant's presence but by one-way,
closed-circuit television. The Sixth Amendment, which of course
antedated television by a century and a half, says: "In all
criminal prosecutions, the accused shall enjoy the right . . . to
be confronted with the witnesses against him." Justice O'Connor
wrote for a bare majority of five Justices that the state's
procedures nonetheless struck a fair balance between costs to the
accused and benefits to the victim and to society as a whole.
Justice Scalia, joined by the three "liberals" then on the Court
(Justices Brennan, Marshall and Stevens), dissented from that
cost-benefit approach to interpreting the Sixth Amendment. He
wrote:
The Court has convincingly proved that the Maryland
procedure serves a valid interest, and gives the
defendant virtually everything the Confrontation Clause
guarantees (everything, that is, except confrontation).
I am persuaded, therefore, that the Maryland procedure is
virtually constitutional. Since it is not, however,
actually constitutional I [dissent].
Could it be that the high-tech, closed-circuit TV context,
almost as familiar to the Court's youngest Justice as to his even
younger law clerks, might've had some bearing on Justice Scalia's
sly invocation of "virtual" constitutional reality? Even if
Justice Scalia wasn't making a pun on "virtual reality," and I
suspect he wasn't, his dissenting opinion about the Confrontation
Clause requires *us* to "confront" the recurring puzzle of how
constitutional provisions written two centuries ago should be
construed and applied in ever-changing circumstances.
Should contemporary society's technology-driven cost-benefit
fixation be allowed to water down the old-fashioned value of direct
confrontation that the Constitution seemingly enshrined as basic?
I would hope not. In that respect, I find myself in complete
agreement with Justice Scalia.
But new technological possibilities for seeing your accuser
clearly without having your accuser see you at all -- possibilities
for sparing the accuser any discomfort in ways that the accuser
couldn't be spared before one-way mirrors or closed-circuit TVs
were developed -- *should* lead us at least to ask ourselves whether
*two*-way confrontation, in which your accuser is supposed to be made
uncomfortable, and thus less likely to lie, really *is* the core
value of the Confrontation Clause. If so, "virtual" confrontation
should be held constitutionally insufficient. If not -- if the
core value served by the Confrontation Clause is just the ability
to *watch* your accuser say that you did it -- then "virtual"
confrontation should suffice. New technologies should lead us to
look more closely at just *what values* the Constitution seeks to
preserve. New technologies should *not* lead us to react reflexively
*either way* -- either by assuming that technologies the Framers
didn't know about make their concerns and values obsolete, or by
assuming that those new technologies couldn't possibly provide new
ways out of old dilemmas and therefore should be ignored
altogether.
The one-way mirror yields a fitting metaphor for the task we
confront. As the Supreme Court said in a different context several
years ago, "The mirror image presented [here] requires us to step
through an analytical looking glass to resolve it." (*NCAA v.
Tarkanian*, 109 S. Ct. at 462.) The world in which the Sixth
Amendment's Confrontation Clause was written and ratified was a
world in which "being confronted with" your accuser *necessarily*
meant a simultaneous physical confrontation so that your accuser
had to *perceive* you being accused by him. Closed-circuit
television and one-way mirrors changed all that by *decoupling* those
two dimensions of confrontation, marking a shift in the conditions of
information-transfer that is in many ways typical of cyberspace.
What does that sort of shift mean for constitutional analysis?
A common way to react is to treat the pattern as it existed *prior*
to the new technology (the pattern in which doing "A" necessarily
*included* doing "B") as essentially arbitrary or accidental. Taking
this approach, once the technological change makes it possible to
do "A" *without* "B" -- to see your accuser without having him or her
see you, or to read someone's mail without her knowing it, to
switch examples -- one concludes that the "old" Constitution's
inclusion of "B" is irrelevant; one concludes that it is enough for
the government to guarantee "A" alone. Sometimes that will be the
case; but it's vital to understand that, sometimes, it won't be.
A characteristic feature of modernity is the subordination of
purpose to accident -- an acute appreciation of just how contingent
and coincidental the connections we are taught to make often are.
We understand, as moderns, that many of the ways we carve up and
organize the world reflect what our social history and cultural
heritage, and perhaps our neurological wiring, bring to the world,
and not some irreducible "way things are." A wonderful example
comes from a 1966 essay by Jorge Luis Borges, "Other
Inquisitions." There, the essayist describes the following
taxonomy of the animal kingdom, which he purports to trace to an
ancient Chinese encyclopedia entitled *The Celestial Emporium of
Benevolent Knowledge*:
On those remote pages it is written that animals are
divided into:
(a) those belonging to the Emperor
(b) those that are embalmed
(c) those that are trained
(d) suckling pigs
(e) mermaids
(f) fabulous ones
(g) stray dogs
(h) those that are included in this classification
(i) those that tremble as if they were mad
(j) innumerable ones
(k) those drawn with a very fine camel's hair brush
(l) others
(m) those that have just broken a water pitcher
(n) those that, from a great distance, resemble flies
Contemporary writers from Michel Foucault, in *The Order of
Things*, through George Lakoff, in *Women, Fire, and Dangerous
Things*, use Borges' Chinese encyclopedia to illustrate a range of
different propositions, but the *core* proposition is the supposed
arbitrariness -- the political character, in a sense -- of all
culturally imposed categories.
At one level, that proposition expresses a profound truth and
may encourage humility by combating cultural imperialism. At
another level, though, the proposition tells a dangerous lie: it
suggests that we have descended into the nihilism that so obsessed
Nietzsche and other thinkers -- a world where *everything* is
relative, all lines are up for grabs, all principles and
connections are just matters of purely subjective preference or,
worse still, arbitrary convention. Whether we believe that killing
animals for food is wrong, for example, becomes a question
indistinguishable from whether we happen to enjoy eating beans,
rice and tofu.
This is a particularly pernicious notion in an era when we pass
more and more of our lives in cyberspace, a place where, almost by
definition, our most familiar landmarks are rearranged or disappear
altogether -- because there is a pervasive tendency, even (and
perhaps especially) among the most enlightened, to forget that the
human values and ideals to which we commit ourselves may indeed be
universal and need not depend on how our particular cultures, or
our latest technologies, carve up the universe we inhabit. It was
my very wise colleague from Yale, the late Art Leff, who once
observed that, even in a world without an agreed-upon God, we can
still agree -- even if we can't "prove" mathematically -- that
"napalming babies is wrong."
The Constitution's core values, I'm convinced, need not be
transmogrified, or metamorphosed into oblivion, in the dim recesses
of cyberspace. But to say that they *need* not be lost there is
hardly to predict that they *will* not be. On the contrary, without
further thought and awareness of the kind this conference might
provide, the danger is clear and present that they *will* be.
The "event horizon" against which this transformation might
occur is already plainly visible:
Electronic trespassers like Kevin Mitnick don't stop with
cracking pay phones, but break into NORAD -- the North American
Aerospace Defense Command computer in Colorado Springs -- not in a *WarGames*
movie, but in real life.
Less challenging to national security but more ubiquitously
threatening, computer crackers download everyman's credit history
from institutions like TRW; start charging phone calls (and more)
to everyman's number; set loose "worm" programs that shut down
thousands of linked computers; and spread "computer viruses"
through everyman's work or home PC.
It is not only the government that feels threatened by
"computer crime"; both the owners and the users of private
information services, computer bulletin boards, gateways, and
networks feel equally vulnerable to this new breed of invisible
trespasser. The response from the many who sense danger has been
swift, and often brutal, as a few examples illustrate.
Last March, U.S. Secret Service agents staged a surprise raid
on Steve Jackson Games, a small games manufacturer in
Austin, Texas, and seized all paper and electronic drafts of its
newest fantasy role-playing game, *GURPS[reg.t.m.] Cyberpunk*,
calling the game a "handbook for computer crime."
By last Spring, up to one quarter of the U.S. Treasury
Department's investigators had become involved in a project of
eavesdropping on computer bulletin boards, apparently tracking
notorious hackers like "Acid Phreak" and "Phiber Optik" through
what one journalist dubbed "the dark canyons of cyberspace."
Last May, in the now famous (or infamous) "Operation Sun Devil,"
more than 150 Secret Service agents teamed up with state
and local law enforcement agencies, and with security personnel
from AT&T, American Express, U.S. Sprint, and a number of the
regional Bell telephone companies, armed themselves with over two
dozen search warrants and more than a few guns, and seized 42
computers and 23,000 floppy discs in 14 cities from New York to
Texas. Their target: a loose-knit group of people in their teens
and twenties, dubbed the "Legion of Doom."
I am not describing an Indiana Jones movie. I'm talking about
America in the 1990s.
The Problem
The Constitution's architecture can too easily come to seem
quaintly irrelevant, or at least impossible to take very seriously,
in the world as reconstituted by the microchip. I propose today to
canvass five axioms of our constitutional law -- five basic
assumptions that I believe shape the way American constitutional
scholars and judges view legal issues -- and to examine how they
can adapt to the cyberspace age. My conclusion (and I will try not
to give away too much of the punch line here) is that the Framers
of our Constitution were very wise indeed. They bequeathed us a
framework for all seasons, a truly astonishing document whose
principles are suitable for all times and all technological
landscapes.
Axiom 1:
There is a Vital Difference
*Between Government and Private Action*
The first axiom I will discuss is the proposition that the
Constitution, with the sole exception of the Thirteenth Amendment
prohibiting slavery, regulates action by the *government* rather than
the conduct of *private* individuals and groups. In an article I
wrote in the Harvard Law Review in November 1989 on "The Curvature
of Constitutional Space," I discussed the Constitution's
metaphor-morphosis from a Newtonian to an Einsteinian and
Heisenbergian paradigm. It was common, early in our history, to
see the Constitution as "Newtonian in design with its carefully
counterpoised forces and counterforces, its [geographical and
institutional] checks and balances." (103 *Harv. L. Rev.* at 3.)
Indeed, in many ways contemporary constitutional law is still
trapped within and stunted by that paradigm. But today at least
some post-modern constitutionalists tend to think and talk in the
language of relativity, quantum mechanics, and chaos theory. This
may quite naturally suggest to some observers that the
Constitution's basic strategy of decentralizing and diffusing power
by constraining and fragmenting governmental authority in
particular has been rendered obsolete.
The institutional separation of powers among the three federal
branches of government, the geographical division of authority
between the federal government and the fifty state governments, the
recognition of national boundaries, and, above all, the sharp
distinction between the public and private spheres, become easy to
deride as relics of a simpler, pre-computer age. Thus Eli Noam, in
the First Ithiel de Sola Pool Memorial Lecture, delivered last
October at MIT, notes that computer networks and network
associations acquire quasi-governmental powers as they necessarily
take on such tasks as mediating their members' conflicting
interests, establishing cost shares, creating their own rules of
admission and access and expulsion, even establishing their own *de
facto* taxing mechanisms. In Professor Noam's words, "networks
become political entities," global nets that respect no state or
local boundaries. Restrictions on the use of information in one
country (to protect privacy, for example) tend to lead to export of
that information to other countries, where it can be analyzed and
then used on a selective basis in the country attempting to
restrict it. "Data havens" reminiscent of the role played by the
Swiss in banking may emerge, with few restrictions on the storage
and manipulation of information.
A tempting conclusion is that, to protect the free speech and
other rights of *users* in such private networks, judges must treat
these networks not as associations that have rights of their own
*against* the government but as virtual "governments" in themselves
-- as entities against which individual rights must be defended in
the Constitution's name. Such a conclusion would be misleadingly
simplistic. There are circumstances, of course, when
non-governmental bodies like privately owned "company towns" or
even huge shopping malls should be subjected to legislative and
administrative controls by democratically accountable entities, or
even to judicial controls as though they were arms of the state --
but that may be as true (or as false) of multinational corporations
or foundations, or transnational religious organizations, or even
small-town communities, as it is of computer-mediated networks.
It's a fallacy to suppose that, just because a computer bulletin
board or network or gateway is *something like* a shopping mall,
government has as much constitutional duty -- or even authority --
to guarantee open public access to such a network as it has to
guarantee open public access to a privately owned shopping center
like the one involved in the U.S. Supreme Court's famous *PruneYard
Shopping Center* decision of 1980, arising from nearby San Jose.
The rules of law, both statutory and judge-made, through which
each state *allocates* private powers and responsibilities themselves
represent characteristic forms of government action. That's why a
state's rules for imposing liability on private publishers, or for
deciding which private contracts to enforce and which ones to
invalidate, are all subject to scrutiny for their consistency with
the federal Constitution. But as a general proposition it is only
what *governments* do, either through such rules or through the
actions of public officials, that the United States Constitution
constrains. And nothing about any new technology suddenly erases
the Constitution's enduring value of restraining *government* above
all else, and of protecting all private groups, large and small,
from government.
It's true that certain technologies may become socially
indispensable -- so that equal or at least minimal access to basic
computer power, for example, might be as significant a
constitutional goal as equal or at least minimal access to the
franchise, or to dispute resolution through the judicial system,
or to elementary and secondary education. But all this means (or
should mean) is that the Constitution's constraints on government
must at times take the form of imposing *affirmative duties* to
assure access rather than merely enforcing *negative prohibitions*
against designated sorts of invasion or intrusion.
Today, for example, the government is under an affirmative
obligation to open up criminal trials to the press and the public,
at least where there has not been a particularized finding that
such openness would disrupt the proceedings. The government is
also under an affirmative obligation to provide free legal
assistance for indigent criminal defendants, to assure speedy
trials, to underwrite the cost of counting ballots at election
time, and to desegregate previously segregated school systems. But
these occasional affirmative obligations don't, or shouldn't, mean
that the Constitution's axiomatic division between the realm of
public power and the realm of private life should be jettisoned.
Nor would the "indispensability" of information technologies
provide a license for government to impose strict content, access,
pricing, and other types of regulation. *Books* are indispensable to
most of us, for example -- but it doesn't follow that government
should therefore be able to regulate the content of what goes onto
the shelves of *bookstores*. The right of a private bookstore owner
to decide which books to stock and which to discard, which books to
display openly and which to store in limited access areas, should
remain inviolate. And note, incidentally, that this needn't make
the bookstore owner a "publisher" who is liable for the words
printed in the books on her shelves. It's a common fallacy to
imagine that the moment a computer gateway or bulletin board begins
to exercise powers of selection to control who may be on line, it
must automatically assume the responsibilities of a newscaster, a
broadcaster, or an author. For computer gateways and bulletin
boards are really the "bookstores" of cyberspace; most of them
organize and present information in a computer format, rather than
generating more information content of their own.
Axiom 2:
The Constitutional Boundaries of Private Property
and Personality Depend on Variables Deeper Than
*Social Utility and Technological Feasibility*
The second constitutional axiom, one closely related to the
private-public distinction of the first axiom, is that a person's
mind, body, and property belong *to that person* and not to the
public as a whole. Some believe that cyberspace challenges that
axiom because its entire premise lies in the existence of computers
tied to electronic transmission networks that process digital
information. Because such information can be easily replicated as a
series of "1"s and "0"s, anything that anyone has come up with in
virtual reality can be infinitely reproduced. I can log on to a
computer library, copy a "virtual book" to my computer disk, and
send a copy to your computer without creating a gap on anyone's
bookshelf. The same is true of valuable computer programs costing
hundreds of dollars, which creates serious piracy problems. This
feature leads some, like Richard Stallman of the Free Software
Foundation, to argue that in cyberspace everything should be free
-- that information can't be owned. Others, of course, argue that
copyright and patent protections of various kinds are needed in
order for there to be incentives to create "cyberspace property" in
the first place.
Needless to say, there are lively debates about what the
optimal incentive package should be as a matter of legislative and
social policy. But the *constitutional* issue, at bottom, isn't the
utilitarian or instrumental selection of an optimal policy.
Social judgments about what ought to be subject to individual
appropriation, in the sense used by John Locke and Robert Nozick,
and what ought to remain in the open public domain, are first and
foremost *political* decisions.
To be sure, there are some constitutional constraints on these
political decisions. The Constitution does not permit anything and
everything to be made into a *private commodity*. Votes, for
example, theoretically cannot be bought and sold. Whether the
Constitution itself should be read (or amended) so as to permit all
basic medical care, shelter, nutrition, legal assistance and,
indeed, computerized information services, to be treated as mere
commodities, available only to the highest bidder, poses terribly
hard questions -- as the Eastern Europeans are now discovering as
they attempt to draft their own constitutions. But these are not
questions that should ever be confused with issues of what is
technologically possible, what is realistically enforceable, or what
is socially desirable.
Similarly, the Constitution does not permit anything and
everything to be *socialized* and made into a public good available
to whoever needs or "deserves" it most. I would hope, for example,
that the government could not use its powers of eminent domain to
"take" live body parts like eyes or kidneys or brain tissue for
those who need transplants and would be expected to lead
particularly productive lives. In any event, I feel certain that
whatever constitutional right each of us has to inhabit his or her
own body and to hold onto his or her own thoughts and creations
should not depend solely on cost-benefit calculations, or on the
availability of technological methods for painlessly effecting
transfers or for creating good artificial substitutes.
Axiom 3:
*Government May Not Control Information Content*
A third constitutional axiom, like the first two, reflects a
deep respect for the integrity of each individual and a healthy
skepticism toward government. The axiom is that, although
information and ideas have real effects in the social world, it's
not up to government to pick and choose for us in terms of the
*content* of that information or the *value* of those ideas.
This notion is sometimes mistakenly reduced to the naive
child's ditty that "sticks and stones may break my bones, but words
can never hurt me." Anybody who's ever been called something awful
by children in a schoolyard knows better than to believe any such
thing. The real basis for First Amendment values isn't the false
premise that information and ideas have no real impact, but the
belief that information and ideas are *too important* to entrust to
any government censor or overseer.
If we keep that in mind, and *only* if we keep that in mind,
will we be able to see through the tempting argument that, in the
Information Age, free speech is a luxury we can no longer afford.
That argument becomes especially tempting in the context of
cyberspace, where sequences of "0"s and "1"s may become virtual
life forms. Computer "viruses" roam the information nets,
attaching themselves to various programs and screwing up computer
facilities. Creation of a computer virus involves writing a
program; the program then replicates itself and mutates. The
electronic code involved is very much like DNA. If information
content is "speech," and if the First Amendment is to apply in
cyberspace, then mustn't these viruses be "speech" -- and mustn't
their writing and dissemination be constitutionally protected? To
avoid that nightmarish outcome, mustn't we say that the First
Amendment is *inapplicable* to cyberspace?
The answer is no. Speech is protected, but deliberately
yelling "Boo!" at a cardiac patient may still be prosecuted as
murder. Free speech is a constitutional right, but handing a bank
teller a hold-up note that says, "Your money or your life," may
still be punished as robbery. Stealing someone's diary may be
punished as theft -- even if you intend to publish it in book form.
And the Supreme Court, over the past fifteen years, has gradually
brought advertising within the ambit of protected expression
without preventing the government from protecting consumers from
deceptive advertising. The lesson, in short, is that
constitutional principles are subtle enough to bend to such
concerns. They needn't be broken or tossed out.
Axiom 4:
The Constitution is Founded on Normative
Conceptions of Humanity That Advances
*in Science and Technology Cannot "Disprove"*
A fourth constitutional axiom is that the human spirit is
something beyond a physical information processor. That axiom,
which regards human thought processes as not fully reducible to the
operations of a computer program, however complex, must not be
confused with the silly view that, because computer operations
involve nothing more than the manipulation of "on" and "off" states
of myriad microchips, it somehow follows that government control or
outright seizure of computers and computer programs threatens no
First Amendment rights because human thought processes are not
directly involved. To say that would be like saying that
government confiscation of a newspaper's printing press and
tomorrow morning's copy has nothing to do with speech but involves
only a taking of metal, paper, and ink. Particularly if the seizure
or the regulation is triggered by the content of the information
being processed or transmitted, the First Amendment is of course
fully involved. Yet this recognition that information processing
by computer entails something far beyond the mere sequencing of
mechanical or chemical steps still leaves a potential gap between
what computers can do internally and in communication with one
another -- and what goes on within and between human minds. It is
that gap to which this fourth axiom is addressed; the very
existence of any such gap is, as I'm sure you know, a matter of
considerable controversy.
What if people like the mathematician and physicist Roger
Penrose, author of *The Emperor's New Mind*, are wrong about human
minds? In that provocative recent book, Penrose disagrees with
those Artificial Intelligence, or AI, gurus who insist that it's
only a matter of time until human thought and feeling can be
perfectly simulated or even replicated by a series of purely
physical operations -- that it's all just neurons firing and
neurotransmitters flowing, all subject to perfect modeling in
suitable computer systems. Would an adherent of that AI orthodoxy,
someone whom Penrose fails to persuade, have to reject as
irrelevant for cyberspace those constitutional protections that
rest on the anti-AI premise that minds are *not* reducible to really
fancy computers?
Consider, for example, the Fifth Amendment, which provides
that "no person shall be . . . compelled in any criminal case to
be a witness against himself." The Supreme Court has long held
that suspects may be required, despite this protection, to provide
evidence that is not "testimonial" in nature -- blood samples, for
instance, or even exemplars of one's handwriting or voice. Last
year, in a case called *Pennsylvania v. Muniz*, the Supreme Court
held that answers to even simple questions like "When was your
sixth birthday?" are testimonial because such a question, however
straightforward, nevertheless calls for the product of mental
activity and therefore uses the suspect's mind against him. But
what if science could eventually describe thinking as a process no
more complex than, say, riding a bike or digesting a meal? Might
the progress of neurobiology and computer science eventually
overthrow the premises of the *Muniz* decision?
I would hope not. For the Constitution's premises, properly
understood, are *normative* rather than *descriptive*. The philosopher
David Hume was right in teaching that no "ought" can ever be
logically derived from an "is." If we should ever abandon the
Constitution's protection for the distinctively and universally
human, it won't be because robotics or genetic engineering or
computer science have led us to deeper truths, but rather because
they have seduced us into more profound confusions. Science and
technology open options, create possibilities, suggest
incompatibilities, generate threats. They do not alter what is
"right" or what is "wrong." The fact that those notions are
elusive and subject to endless debate need not make them totally
contingent on contemporary technology.
Axiom 5:
Constitutional Principles Should Not
*Vary With Accidents of Technology*
In a sense, that's the fifth and final constitutional axiom I
would urge upon this gathering: that the Constitution's norms, at
their deepest level, must be invariant under merely *technological*
transformations. Our constitutional law evolves through judicial
interpretation, case by case, in a process of reasoning by analogy
from precedent. At its best, that process is ideally suited to
seeing beneath the surface and extracting deeper principles from
prior decisions. At its worst, though, the same process can get
bogged down in superficial aspects of preexisting examples,
fixating upon unessential features while overlooking underlying
principles and values.
When the Supreme Court in 1928 first confronted wiretapping
and held in *Olmstead v. United States* that such wiretapping
involved no "search" or "seizure" within the meaning of the Fourth
Amendment's prohibition of "unreasonable searches and seizures,"
the majority of the Court reasoned that the Fourth Amendment
"itself shows that the search is to be of material things -- the
person, the house, his papers or his effects," and said that "there
was no searching" when a suspect's phone was tapped because the
Constitution's language "cannot be extended and expanded to include
telephone wires reaching to the whole world from the defendant's
house or office." After all, said the Court, the intervening wires
"are not part of his house or office any more than are the highways
along which they are stretched." Even to a law student in the
1960s, as you might imagine, that "reasoning" seemed amazingly
artificial. Yet the *Olmstead* doctrine still survived.
It would be illuminating at this point to compare the Supreme
Court's initial reaction to new technology in *Olmstead* with its
initial reaction to new technology in *Maryland v. Craig*, the 1990
closed-circuit television case with which we began this discussion.
In *Craig*, a majority of the Justices assumed that, when the 18th-
century Framers of the Confrontation Clause included a guarantee of
two-way *physical* confrontation, they did so solely because it had
not yet become technologically feasible for the accused to look his
accuser in the eye without having the accuser simultaneously watch
the accused. Given that this technological obstacle has been
removed, the majority assumed, one-way confrontation is now
sufficient. It is enough that the accused not be subject to
criminal conviction on the basis of statements made outside his
presence.
In *Olmstead*, a majority of the Justices assumed that, when the
18th-century authors of the Fourth Amendment used language that
sounded "physical" in guaranteeing against invasions of a person's
dwelling or possessions, they did so not solely because *physical*
invasions were at that time the only serious threats to personal
privacy, but for the separate and distinct reason that *intangible*
invasions simply would not threaten any relevant dimension of
Fourth Amendment privacy.
In a sense, *Olmstead* mindlessly read a new technology *out* of
the Constitution, while *Craig* absent-mindedly read a new technology
*into* the Constitution. But both decisions -- *Olmstead* and *Craig* --
had the structural effect of withholding the protections of the
Bill of Rights from threats made possible by new information
technologies. *Olmstead* did so by implausibly reading the
Constitution's text as though it represented a deliberate decision
not to extend protection to threats that 18th-century thinkers
simply had not foreseen. *Craig* did so by somewhat more plausibly
-- but still unthinkingly -- treating the Constitution's seemingly
explicit coupling of two analytically distinct protections as
reflecting a failure of technological foresight and imagination,
rather than a deliberate value choice.
The *Craig* majority's approach appears to have been driven in
part by an understandable sense of how a new information technology
could directly protect a particularly sympathetic group, abused
children, from a traumatic trial experience. The *Olmstead*
majority's approach probably reflected both an exaggerated estimate
of how difficult it would be to obtain wiretapping warrants even
where fully justified, and an insufficient sense of how a new
information technology could directly threaten all of us. Although
both *Craig* and *Olmstead* reveal an inadequate consciousness about
how new technologies interact with old values, *Craig* at least seems
defensible even if misguided, while *Olmstead* seems just plain
wrong.
Around 23 years ago, as a then-recent law school graduate
serving as law clerk to Supreme Court Justice Potter Stewart, I
found myself working on a case involving the government's
electronic surveillance of a suspected criminal -- in the form of
a tiny device attached to the outside of a public telephone booth.
Because the invasion of the suspect's privacy was accomplished
without physical trespass into a "constitutionally protected area,"
the Federal Government argued, relying on *Olmstead*, that there had
been no "search" or "seizure," and therefore that the Fourth
Amendment "right of the people to be secure in their persons,
houses, papers, and effects, against unreasonable searches and
seizures," simply did not apply.
At first, there were only four votes to overrule *Olmstead* and
to hold the Fourth Amendment applicable to wiretapping and
electronic eavesdropping. I'm proud to say that, as a 26-year-old
kid, I had at least a little bit to do with changing that number
from four to seven -- and with the argument, formally adopted by a
seven-Justice majority in December 1967, that the Fourth Amendment
"protects people, not places." (389 U.S. at 351.) In that
decision, *Katz v. United States*, the Supreme Court finally
repudiated *Olmstead* and the many decisions that had relied upon it
and reasoned that, given the role of electronic telecommunications
in modern life, the First Amendment purposes of protecting *free
speech* as well as the Fourth Amendment purposes of protecting
*privacy* require treating as a "search" any invasion of a person's
confidential telephone communications, with or without physical
trespass.
Sadly, over a decade later, in *Smith v. Maryland*, the Supreme
Court retreated from the *Katz* principle by holding that no search
occurs and therefore no warrant is needed when police, with the
assistance of the telephone company, make use of a "pen register",
a mechanical device placed on someone's phone line that records all
numbers dialed from the phone and the times of dialing. The
Supreme Court, over the dissents of Justices Stewart, Brennan, and
Marshall, found no legitimate expectation of privacy in the numbers
dialed, reasoning that the digits one dials are routinely recorded
by the phone company for billing purposes. As Justice Stewart, the
author of *Katz*, aptly pointed out, "that observation no more than
describes the basic nature of telephone calls . . . . It is simply
not enough to say, after *Katz*, that there is no legitimate
expectation of privacy in the numbers dialed because the caller
assumes the risk that the telephone company will expose them to the
police." (442 U.S. at 746-747.) Today, the logic of *Smith* is
being used to say that people have no expectation of privacy when
they use their cordless telephones since they know or should know
that radio waves can be easily monitored!
It is easy to be pessimistic about the way in which the
Supreme Court has reacted to technological change. In many
respects, *Smith* is unfortunately more typical than *Katz* of the way
the Court has behaved. For example, when movies were invented, and
for several decades thereafter, the Court held that movie
exhibitions were not entitled to First Amendment protection. When
community access cable TV was born, the Court hindered municipal
attempts to provide it at low cost by holding that rules requiring
landlords to install small cable boxes on their apartment buildings
amounted to a compensable taking of property. And in *Red Lion v.
FCC*, decided twenty-two years ago but still not repudiated today,
the Court ratified government control of TV and radio broadcast
content with the dubious logic that the scarcity of the
electromagnetic spectrum justified not merely government policies
to auction off, randomly allocate, or otherwise ration the spectrum
according to neutral rules, but also much more intrusive and
content-based government regulation in the form of the so-called
"fairness doctrine."
Although the Supreme Court and the lower federal courts have
taken a somewhat more enlightened approach in dealing with cable
television, these decisions for the most part reveal a curious
judicial blindness, as if the Constitution had to be reinvented
with the birth of each new technology. Judges interpreting a late
18th century Bill of Rights tend to forget that, unless its *terms*
are read in an evolving and dynamic way, its *values* will lose even
the *static* protection they once enjoyed. Ironically, *fidelity* to
original values requires *flexibility* of textual interpretation. It
was Judge Robert Bork, not famous for his flexibility, who once
urged this enlightened view upon then Judge (now Justice) Scalia,
when the two of them sat as colleagues on the U.S. Court of Appeals
for the D.C. Circuit.
Judicial error in this field tends to take the form of saying
that, by using modern technology ranging from the telephone to the
television to computers, we "assume the risk." But that typically
begs the question. Justice Harlan, in a dissent penned two decades
ago, wrote: "Since it is the task of the law to form and project,
as well as mirror and reflect, we should not . . . merely recite .
. . risks without examining the *desirability* of saddling them upon
society." (*United States v. White*, 401 U.S. at 786). And, I would
add, we should not merely recite risks without examining how
imposing those risks comports with the Constitution's fundamental
values of *freedom*, *privacy*, and *equality*.
Failing to examine just that issue is the basic error I
believe federal courts and Congress have made:
* in regulating radio and TV broadcasting without
adequate sensitivity to First Amendment values;
* in supposing that the selection and editing of
video programs by cable operators might be less
than a form of expression;
* in excluding telephone companies from cable and
other information markets;
* in assuming that the processing of "0"s and "1"s
by computers as they exchange data with one
another is something less than "speech"; and
* in generally treating information processed
electronically as though it were somehow less
entitled to protection for that reason.
The lesson to be learned is that these choices and these
mistakes are not dictated by the Constitution. They are decisions
for us to make in interpreting that majestic charter, and in
implementing the principles that the Constitution establishes.
*Conclusion*
If my own life as a lawyer and legal scholar could leave just
one legacy, I'd like it to be the recognition that the Constitution
*as a whole* "protects people, not places." If that is to come
about, the Constitution as a whole must be read through a
technologically transparent lens. That is, we must embrace, as a
rule of construction or interpretation, a principle one might call
the "cyberspace corollary." It would make a suitable
Twenty-seventh Amendment to the Constitution, one befitting the
200th anniversary of the Bill of Rights. Whether adopted all at
once as a constitutional amendment, or accepted gradually as a
principle of interpretation that I believe should obtain even
without any formal change in the Constitution's language, the
corollary I would propose would do for *technology* in 1991 what I
believe the Constitution's Ninth Amendment, adopted in 1791, was
meant to do for *text*.
The Ninth Amendment says: "The enumeration in the
Constitution, of certain rights, shall not be construed to deny or
disparage others retained by the people." That amendment provides
added support for the long-debated, but now largely accepted,
"right of privacy" that the Supreme Court recognized in such
decisions as the famous birth control case of 1965, *Griswold v.
Connecticut*. The Ninth Amendment's simple message is: The *text*
used by the Constitution's authors and ratifiers does not exhaust
the values our Constitution recognizes. Perhaps a Twenty-seventh
Amendment could convey a parallel and equally simple message: The
*technologies* familiar to the Constitution's authors and ratifiers
similarly do not exhaust the *threats* against which the
Constitution's core values must be protected.
The most recent amendment, the twenty-sixth, adopted in 1971,
extended the vote to 18-year-olds. It would be fitting, in a world
where youth has been enfranchised, for a twenty-seventh amendment
to spell a kind of "childhood's end" for constitutional law. The
Twenty-seventh Amendment, to be proposed for at least serious
debate in 1991, would read simply:
"This Constitution's protections for the freedoms of
speech, press, petition, and assembly, and its
protections against unreasonable searches and seizures
and the deprivation of life, liberty, or property without
due process of law, shall be construed as fully
applicable without regard to the technological method or
medium through which information content is generated,
stored, altered, transmitted, or controlled."
[Note: The machine-readable original of this was provided by the
author on a PC diskette in WordPerfect. It was reformatted to
ASCII, appropriate for general network and computer access, by Jim Warren.
Text that was underlined or boldface in the original copy was delimited
by asterisks, and a registered trademark symbol was replaced by
"reg.t.m.". Other than that, the text was as provided by the author.]