Technology and Philosophy

Mitcham and Nissenbaum: Technology and Philosophy
Two fundamental distinctions:
1. Traditional vs. “Modern” attitudes toward
technology
2. Anglo-American/Analytic vs. “Continental”
approaches to technology in philosophy
Basic Attitudes Toward Technology
I. Traditional Societies
 In most traditional societies—and in the European West
prior to the Renaissance—there is an implicit (and
sometimes explicit) understanding that technology is
something to be treated cautiously.
 Why? Technological change is inherently destabilizing.
And without social stability, people may die even when
nature is abundant (1)…
 So, if social stability is held to be a primary value and
technological change is inherently destabilizing…well,
technological change ought to be undertaken only in a
limited fashion or only in extreme circumstances or, in
any case, only after careful consideration.
Example:
Contemporary Amish and Old Order Mennonite
communities.
 In such communities modern technologies may be
allowed (e.g., generators, power tools), but only after
careful appraisal by religious leaders to assess the
impact that such technologies may have on social
values…
II. Modernity
 A profound change in attitude—almost a complete
reversal—begins as part of the European Renaissance.
 Philosophers such as Francis Bacon and René
Descartes introduce a new, characteristically modern
view of science and technology as an imperative, as
virtually a duty of humankind…
Francis Bacon (1561–1626)
“Ipsa scientia potestas est”
(‘Knowledge is power’)
Meditationes Sacræ (1597)
“For like as a man's disposition is never well known or
proved till he be crossed, nor Proteus ever changed
shapes till he was straitened and held fast; so nature
exhibits herself more clearly under the trials and vexations
of art than when left to herself.”
De Augmentis Scientiarum (1623)
 The purpose of science, in short, is to torture (female)
nature to make her reveal her secrets. Through science,
nature can be made to obey human orders…
Bacon:
“[Nature] is either free, and follows
her ordinary course of development;
…or she is driven out of her
ordinary course by the
perverseness, insolence, and
forwardness of matter, and violence
of impediments; as in the case of
monsters; or lastly, she is put in
constraint, moulded, and made as it
were new by art and the hand of
man; as in things artificial.”
De Augmentis Scientiarum (1623)
Modern Science
To generalize (and oversimplify) a bit:
 According to ‘modern’ natural science, nature is most
usefully understood in mechanistic terms—as a kind of
vast machine.
The goal of science is to develop theories and
explanations that enable human beings to predict and to
control nature.
 Contrast?: Older (e.g., Aristotelian) science. E.g., in the
role that is accorded to experimentation and in the role
accorded to natural teleology (e.g., the gradual
abandonment of the notion of a ‘final cause’)
Modern Science and Modern Technology
 On the ‘modern’ view, technology is seen as essentially
beneficial in as much as it enhances human welfare.
 This in turn implies a certain account of human
autonomy:
Again oversimplifying a bit…
We do better science by giving ourselves permission to
probe nature, to experiment.
Science not only provides knowledge that is valuable for
its own sake, it also allows us to “relieve man’s estate”
through technology. Nature is our opponent; we struggle
against her in order to achieve our independence.
René Descartes (1596-1650)
“We shall become masters
and possessors of nature.”
Discours de la Méthode (1637)
“Continental” vs. Anglo-American Views
 For reasons that need not concern us here, the world of
academic philosophy once upon a time saw itself (and,
in some quarters, still sees itself) as divided into two
notionally opposed camps:
So-called “Continental” philosophy and
“Analytic” (or “Anglo-American”) philosophy
 To a large degree, this division no longer makes much
difference in philosophy…
…except, perhaps, with respect to ethics and
technology. In this domain there are still some
significant differences in emphasis and concern.
Perhaps the most noteworthy differences:
 Philosophers associated with the Continental tradition
tend to discuss technology as a whole—that is, as a
unified, big, pervasive phenomenon: “Technology” with
a capital “T”
 Analytic philosophers, by contrast, tend to be oriented
toward “piecemeal assessments of particular
technologies” (1)
A (Mildly Speculative) Hypothesis
 One could also argue that the division between analytic
and “Continental” philosophy, at least with respect to
ethics and technology, is a reflection of differing
opinions about the modern upheaval begun by Bacon
and Descartes…
The Analytic Tradition
 Broadly speaking, the analytic tradition sees modernity as
a first step toward getting things objectively right in science
and (thereby) making real improvements in the human
condition.
So, according to this view, our task now is to help work out
the details, to rectify whatever may have gone wrong, and,
in general, “to stay the course”
 Accordingly, in the Anglo-American tradition, the scope of
ethics in technology is usually construed as fairly narrow.
(e.g., research ethics, professional ethics, biomedical
ethics, etc.)
The Continental Tradition
 For some in the Continental tradition, by contrast,
Bacon and Descartes represent a first step toward
disaster.
Some representative views
The historical path from Bacon and Descartes leads:
 To our alienation from reality (Heidegger)
 To alienation from our own social nature and from each
other (Marx)
 To cultural self-destruction and, perhaps, to the gates of
Auschwitz. (Horkheimer and Adorno)
Shrader-Frechette:
Technology and Risk
Shrader-Frechette: Technology and Risk
 As Aristotle pointed out long ago, we deliberate only
about what is in our power to do. (Nicomachean Ethics,
1112a)
 New technologies open up new possibilities for action.
So, one might expect, as the scope of our power to do
things increases, so should the scope of our
deliberation. (Compare, later on, Jonas’s views)
You’d expect technology to generate lots of new ethical
issues and views…
 Especially in the Anglo-American tradition, however,
new technological developments are characteristically
seen as expanding the scope of existing ethical
concepts rather than generating new ones…
 E.g.: Some hazardous facilities mainly threaten those
who live nearby (e.g., nuclear power plants, toxic waste
dumps). Accordingly:
“Ethicists have expanded the notion of equal treatment
[a basic concept in theories of justice] to include
geographical equality, equal treatment of persons
located different distances from dangerous facilities.”
(Shrader-Frechette, 1231)
Five Kinds of Philosophical
Questions about Technology
1. Conceptual or metaethical questions
E.g.: How ought we to define the concept of ‘free
informed consent’ to risks imposed by technology?
Can robots be morally responsible agents? Can future
people be ‘rights-bearers’?
“Metaethics” deals with the meaning of ethical concepts and
ethical discourse. So, very basic metaethical questions include
“What is a value judgment?” “What is a norm?” and “How does
moral language work?”
2. General normative questions
E.g.: Do we have specific duties to future generations
which might be harmed by a new technology?
“Normative” contrasts with “Descriptive”
Normative questions are about what ought to be the case;
descriptive questions about what is (or was or will be) the case.
3. Particular normative questions
E.g.: Should Canadian law allow the patenting of
higher life forms?
How safe should new technology X be, if it is to count
as ethically acceptable?
4. Questions about the ethical consequences of
technological developments
E.g.: Does the development of inexpensive, easy-to-use
encryption technology threaten legitimate governmental
functions (e.g., taxation or law enforcement)?
Will ‘digital locks’ (anti-circumvention devices) hinder the
development of information technology?
5. Questions about the ethical justifiability of various
methods of technological assessment
E.g.: Does cost-benefit analysis ignore non-economic,
non-measurable components of human welfare?
An Idealized Cost-Benefit Analysis (CBA)

              Aggregate      Aggregate      (Benefits)
              Benefits       Costs          – (Costs)
Plan A        10             15             -5
Plan B        5              2              3
 CBA assumes that benefits and costs can be assigned
a quantitative measure, in commensurable units (e.g.,
dollars), and that this quantification can be done in an at
least reasonably objective way.
 Also, as set out above, CBA (tacitly) assumes that the
magnitudes of costs and benefits are known with
(something approaching) certainty…
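To make the arithmetic explicit, here is a minimal sketch (in Python) of the idealized comparison above; the plan names and figures are just the hypothetical ones from the table.

```python
# Idealized CBA: net benefit = aggregate benefits - aggregate costs.
# Figures are the hypothetical ones from the table above.
plans = {
    "Plan A": {"benefits": 10, "costs": 15},
    "Plan B": {"benefits": 5, "costs": 2},
}

for name, p in plans.items():
    net = p["benefits"] - p["costs"]          # (Benefits) - (Costs)
    print(f"{name}: net benefit = {net}")

# Plan A: net benefit = -5
# Plan B: net benefit = 3   -> on a pure CBA ranking, Plan B is preferred
```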
Uncertainty vs. Risk
 In some cases, costs and benefits are known with
(near) certainty (e.g., ordinary economic decision-making
about employment) or as a determinate risk
(e.g., roulette)
 But that is not always (indeed, not normally) the case
when it comes to new technologies.
To deal with new technologies, CBA must be adapted to
choice “under risk” and/or choice “under uncertainty.”
In which case people sometimes speak of risk/benefit
analysis…
Risk as Expected Value
In decision theory:
Risk(x) = Utility(x) × Probability(x)
 For a ‘sure thing’ (i.e., known with certainty), p = 1
 In, e.g., casino gambling, the probability of some
outcome may be determinate (e.g., the probability of
red coming up in roulette, 18/38, or ≈ 0.473)
 In ‘real life’, however, the probability of an outcome
associated with the use of a new technology may be
indeterminate (and so must be assessed via subjective
probabilities assigned by experts).
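As an illustration of the formula above, a small sketch using the roulette probability just mentioned; the $10 stake is an assumed figure, purely for illustration.

```python
# Expected value of a risk: utility weighted by probability.
# Illustration: a bet on red in American roulette (18 red slots of 38).
p_red = 18 / 38           # determinate probability, ~0.473
stake = 10                # assumed stake, for illustration only

# An even-money bet: win +stake with p_red, lose -stake otherwise.
expected_value = stake * p_red + (-stake) * (1 - p_red)
print(round(expected_value, 2))   # about -0.53: a small expected loss
```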
An (Over-)Simplified Risk-Benefit Analysis
(Summing expected values)

              Expected             Expected                Expected Benefits
              Benefits             Costs                   less Expected Costs
Plan A        40  (80 × .5)        1  (10,000 × .0001)     39
Plan B        19  (20 × .95)       9.5  (10 × .95)         9.5
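A short sketch reproducing the expected-value arithmetic in the table above; the payoffs and probabilities are the illustrative ones from the slide.

```python
# Expected benefits less expected costs, using the slide's figures.
# Each entry is (magnitude, probability).
plans = {
    "Plan A": {"benefit": (80, 0.5),  "cost": (10_000, 0.0001)},
    "Plan B": {"benefit": (20, 0.95), "cost": (10, 0.95)},
}

for name, p in plans.items():
    expected_benefit = p["benefit"][0] * p["benefit"][1]
    expected_cost = p["cost"][0] * p["cost"][1]
    print(name, expected_benefit - expected_cost)

# Plan A: 39.0
# Plan B: 9.5
```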
Leading Philosophical Issues
Concerning Technology and Ethics
(according to Shrader-Frechette)
 Why philosophical issues?
The questions on Shrader-Frechette’s list are
“philosophical” in as much as they cannot be answered
through empirical investigations only.
(I.e., insofar as they are not purely descriptive
questions.)
 That’s to say, science alone cannot provide (complete)
answers to questions like…
1. How should we measure technological risk?
 Quantitatively, in terms of, say, average annual
probability of fatality/injury? Or should we include
more than physical harm (like threats to civil liberties,
personal autonomy etc.)?
 Even if we agree that a quantitative measure of
technological risk is possible, we will need a common
denominator for comparing risks. Typically, the
common denominator used is money. (Cf., the notion of
a compensating variation)
 But are there non-quantitative factors that are as or
more important than the quantitative ones?
E.g., equitable distribution of risk, threats to
democracy, aesthetic values, the value of pristine
wilderness in its own right, the “yuk factor”
2. How should we evaluate technologies in the face of
uncertainty?
 As already mentioned, technological risks often involve
techniques or artifacts that have never been used
before.
 Accordingly, there is often no (or only limited) “real-world”
data on which to draw in assessing the potential
harm that might be caused by a new technology.
Which is just to say, as also already mentioned, many
technological risks must be assessed “under
uncertainty”…
Choice under uncertainty gives rise to at least two sorts
of questions:
1. How should we assign probabilities to uncertain
possible outcomes? Should we assume that all
uncertain events are equally probable? (Principle of
Insufficient Reason) Or should we accept (and later
revise) the subjective probabilities assigned by
experts? (‘Bayesianism’)
2. Given probabilities, how should we choose? Should we
choose the option with the highest expected utility or
should we choose so as to avoid the worst possible
consequences, even if they are very unlikely?
(‘Bayesian’ vs. maximin strategies)
‘Bayesian’ vs. Maximin Choice
Expected utility (‘value of a risk’)
EU(a) ≡ utility(a) × probability(a)
 By calculating average EU for possible courses of
action, we can, in principle, compare the
choiceworthiness of those actions (i.e., perform a
risk/benefit analysis).
 But (some would argue) that doesn’t (yet) necessarily
tell us how to choose.
Consider an example (with no uncertainty)…
Suppose I offer you the following choice:
You can (A) take $90 from me right now, or you can
(B) choose to reach into this box and pick one of the
two envelopes inside. One envelope contains $220; the
other contains an IOU for $20 that you will have to pay
to me if you pick it.
(There is no way for you to tell the difference between
the envelopes until you open up the one you choose.)
Which do you choose, A or B?
EU(A) = $90 = (90 × 1)
EU(B) = $100 = ((220 × .5) + (-20 × .5))
 A ‘Bayesian’ (“maximize EU”) strategy tells you to
choose the box, since it has the highest expected utility
 A maximin strategy tells you to choose so as to
maximize the minimum possible outcome (i.e., avoid the
worst), so it tells you to take the $90.
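A brief sketch contrasting the two decision rules on the envelope example above, using the outcomes and probabilities as stated on the slide.

```python
# 'Bayesian' (maximize expected utility) vs. maximin on the envelope example.
# Each option is a list of (value, probability) outcomes.
options = {
    "A: take the $90":     [(90, 1.0)],               # sure thing
    "B: pick an envelope": [(220, 0.5), (-20, 0.5)],  # 50/50 gamble
}

def expected_utility(outcomes):
    return sum(value * prob for value, prob in outcomes)

def worst_case(outcomes):
    return min(value for value, _ in outcomes)

print(max(options, key=lambda o: expected_utility(options[o])))  # B (EU 100 > 90)
print(max(options, key=lambda o: worst_case(options[o])))        # A (worst 90 > -20)
```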
Harsanyi: A Paradox of the Maximin Principle
Maximin choice may seem reasonable
in some cases (esp. in the case of
potentially catastrophic hazards), but it
can be questioned by considering some
paradoxical consequences that seem
to follow from it.
John Harsanyi (UC Berkeley; Nobel Prize 1994, shared with
John Nash and Reinhard Selten) asks us to consider the following case…
Harsanyi’s Example:
 You live in New York City.
 You have received two job offers: one for a tedious
and poorly paid job in New York City, the other for an
interesting and better paid position in Chicago.
 The catch: If you take the Chicago job, you must start
the very next day. Assume that means you must take a
plane to get there in time and that there is (as always) a
small but positive probability that you will be killed in a
plane crash if you do so…
Decision Matrix: Harsanyi’s Example

                            The NY–Chicago plane       The NY–Chicago plane
                            has an accident            has no accident
You choose the NY job       You will have a lousy      You will have a lousy
                            job, but remain alive      job, but remain alive
You choose the Chicago      You will die               You will have an excellent
job                                                    job and will stay alive
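A sketch of how the two rules treat this matrix; the numerical utilities and the small crash probability are assumed here purely for illustration, since the example is stated only qualitatively.

```python
# Maximin vs. expected utility on the job-choice matrix above.
# Utilities and the crash probability are assumed purely for illustration.
p_crash = 1e-6
options = {
    "NY job":      [(1, p_crash), (1, 1 - p_crash)],       # lousy job either way
    "Chicago job": [(-1000, p_crash), (10, 1 - p_crash)],  # death vs. excellent job
}

def expected_utility(outcomes):
    return sum(u * p for u, p in outcomes)

def worst_case(outcomes):
    return min(u for u, _ in outcomes)

print(max(options, key=lambda o: expected_utility(options[o])))  # Chicago job
print(max(options, key=lambda o: worst_case(options[o])))        # NY job
```

On these assumed numbers the expected-utility rule recommends Chicago, while maximin forbids it no matter how small the crash probability is made—which is exactly the consequence Harsanyi finds paradoxical.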
Harsanyi:
“If you took the maximin principle seriously, then you
could not ever cross a street (after all, you might be hit
by a car); you could never drive over a bridge (after all,
it might collapse); you could never get married (after all, it
might end in disaster), etc. If anybody really acted this
way, he would end up in a mental institution.” (595)
John Harsanyi, “Can the Maximin Principle Serve as a Basis for Morality?
A Critique of John Rawls’s Theory,” American Political Science Review 69,
no. 2 (1975): 594–605.
Still More Concerns About Choice
 The examples used in risk analysis theory are typically
simple; the real choices that have to be made with
respect to technologies are much more complex.
 Not least because technological risks are often taken
on behalf of other people, people who may bear an
inequitable share of potential harms and/or an
inequitable share of the benefits.
3. What level of risk is acceptable and how ought risk to
be distributed?
 Some critics argue that some hazards (e.g., death) are
“incompensable risks”–i.e. there is no compensating
variation available that victims of that harm ought to
accept.
 In some cases rights to due process may be violated,
insofar as victims are denied legal redress (Cf., the US
Price-Anderson Act, 1957; Canadian Nuclear Liability
Act, 1974)
 Also: How much economic progress, if any, should be
traded against, say, the negative health consequences
of some technology?…
 Also: Should it matter who faces those risks? (Cf.
Bhopal, 1984)
More generally: Some might argue that a quantitatively
greater risk that is equitably distributed may be
preferable to a quantitatively smaller risk that is
inequitably distributed.
 Similarly, what is the ethical significance of individually
negligible risks that pose a hazard cumulatively? (E.g.,
carcinogenic chemicals that are individually harmless in
small doses, but which may be cumulatively or
synergistically harmful.)
4. Under what conditions are people genuinely giving free
informed consent to the imposition of technological risk?
 If either freedom or understanding is compromised,
presumably, any ‘consent’ given is not really consent at
all.
 Yet the people most likely to be at risk from certain
technologies are often precisely those who cannot be
said to have given free informed consent to those risks
(e.g., workers in chemical industries)
 Moreover, as we’ve seen, present day technologies
often pose risks for future people, for whom we can, at
best, infer consent.