Machine ethics: Eight concerns.

Andreas Matthias, Lingnan University
May 29–June 3, 2016
Introduction
About me
I studied philosophy, was unemployed, became a programmer, UNIX
system administrator and programming languages teacher for twenty
years, and then I started doing philosophy again.
If you like this talk, you can hire the author. I’m looking for a new
position (outside of China).
Some relevant stuff
Matthias, Andreas (2015). “Robot Lies in Health Care. When Is Deception Morally Permissible?” Kennedy Institute of Ethics Journal 25(2), 169–192.
Matthias, Andreas (2015). “The Extended Mind and the Computational Basis of Responsibility Ascription.” Proceedings of the International Conference on Mind and Responsibility – Philosophy, Sciences and Criminal Law, May 21–22, 2015, Faculdade de Direito da Universidade de Lisboa, Lisbon, Portugal.
Matthias, Andreas (2011). “Algorithmic Moral Control of War Robots: Philosophical Questions.” Law, Innovation and Technology 3(2), 279–301.
The problem with machine ethics
• The important problems with technology are not likely to be technical problems (the car, computers, fossil fuels, nuclear power, the Internet, mobile phones, Facebook).
• The usual treatment of machine autonomy focuses on the machine. We need to take a step back.
• What consequences will autonomous machines have for human autonomy?
• How will this technology affect who we are?
• How will it affect human freedom, dignity, and responsibility?
• I will discuss some of these issues based on eight quotes from the talks we heard in the past few days. I call these quotes “concerns,” because I think that we should examine them very carefully before adopting them.
Plan of the talk: Eight concerns (from least to most troubling)
• One: “Humans are using machines as tools.”
• Two: “We can always check the programming and see what the
machine is up to.”
• Three: “There is no problem with autonomous machines, as long as
they are supervised.”
• Four: “Machines can act as advisors to human beings. The
autonomy will remain with the human.”
• Five: “We can build an ethical governor.”
• Six: “We can put up effective mechanisms of robot certification,
verification, and accident investigation.”
• Seven: “Ethics is a rule system for guiding action.”
• Eight: “We can use artefact ethics to better understand human
ethics.”
One:
“Humans are using machines as tools.”
Hybrids
• Except for a few rare cases, we won’t get autonomous machines that operate independently of humans, in a vacuum of their own.
• Usually, autonomous agents will be machines that cooperate and
interact closely with humans in order to perform their function:
• a driver and an autonomous car
• a hiker and Google Maps
• a speaker, a listener, and Google Translate
• a soldier and a drone
• a shopper and Amazon’s Alexa
• a policeman and a law-enforcement robot
• a doctor, a nurse, a patient, and a number of hospital robots
Classic (wrong) concept
Let’s look closer at how this works.
• The human is the agent.
• It is in his mind that the decision to act is taken and the action plan
is formed.
• After that decision has been taken, the human agent uses the artefact
in the preconceived way to achieve his goal.
This model is incorrect for various reasons. Although wrong, it remains the dominant model for ascribing responsibility to human agents who act using (autonomous or passive) artefacts.
Bruno Latour: Composite agents (Latour, 1999)
Latour: The use of an artefact by an agent changes the behaviour of both
the agent and the artefact.
• It is not only the gun that operates according to the wishes of its user.
• Rather, it is in equal measure the gun that forces the user to behave
as a gun user.
• forces him to assume the right posture for firing the gun,
• to stop moving while firing,
• to aim using the aiming mechanism of the gun… and so on.
Even more importantly, having a gun at his disposal will change the user’s goals as well as the methods he considers in order to achieve these goals (avoid or confront a danger).
Bruno Latour: Goal translation
• The “composite agent” composed of myself and the gun is thus a different agent, with different goals and methods at its disposal, than the original agent (me without the gun) had been.
• It is no longer simply “me using the gun.”
• Composite agent → goal (translated): the composite agent acts using collective resources and considering collective properties.
• My options to make a moral choice are constrained (and sometimes determined) by the properties of the composite system.
Examples:
• Original goal: make peace with the enemy to minimise casualties.
• Available tool: drones that can kill without endangering our soldiers.
• Goal after translation: bomb the enemy with drones.
• Original goal: drive home safely; don’t drink beer before driving.
• Available tool: a Tesla autopilot that effectively works (even if illegal to use without supervision).
• Goal after translation: drive home drunk and nap in the car.
Artefact design
• The design and properties of the artefact in composite agents
determine my options to act.
• The design of the artefact (and how I can use it in the pursuit of my goals) determines:
• The amount of control I have over the artefact.
• The degree to which I can be held responsible for the collective
action (because responsibility requires effective control over the
action!)
• The extent to which the artefact will encourage or require a
translation of my original goals to the capabilities of the composite
agent.
• Thus: The design of the artefact in composite agents becomes
morally relevant.
Extended Mind Thesis and hybrid agents (Clark & Chalmers, 1998)
See slides in Appendix A.
What can we do?
• Acknowledge that artefacts are not passive tools of human autonomy.
• They crucially shape human intentions and options for action.
• Actions performed by hybrid agents require a spreading of moral reactive attitudes and a new distribution of responsibility among the parts of the hybrid agent.
• The idea of total human autonomy is, in the context of actions involving artefacts, a fiction.
• Designers of artefacts influence the decisions humans will take when using these artefacts.
• Therefore, designers of such artefacts share responsibility for the actions performed using them.
• Artefact design needs to be legally regulated with these issues in mind.
Two:
“We can always check the programming
and see what the machine is up to.”
Checking the code (Matthias, 2004)
• Checking the code works only if there is “code.”
• Creation of autonomous systems can happen in various ways:
• Imperative programming: Code as detailed command sequences.
• Logic-oriented languages and event-driven frameworks: Program flow becomes obscure.
• Artificial neural networks: Code disappears, replaced by synaptic weights that are meaningless to human observers (see the sketch below).
• Reinforcement learning and other explorative techniques: Errors become a necessary part of the learning phase.
• Genetic programming: The solution emerges by simulated natural selection “on its own.”
• Spatial autonomy (physical or virtual): The machine moves out of the immediate observation horizon of the designer. Effective supervision becomes difficult or impossible.
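To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not from the talk or from Matthias 2004; the braking rule, its threshold, and the weight values are invented). It shows why “checking the code” is informative for an imperative rule but tells a human reviewer little about a learned model, whose behaviour lives entirely in numeric parameters.

```python
def rule_based_brake(distance_m: float, speed_ms: float) -> bool:
    """Imperative code: the decision criterion can be read off the source."""
    # Brake if the time to contact falls below two seconds.
    return distance_m / max(speed_ms, 0.1) < 2.0

# Hypothetical parameters produced by some training run; the numbers themselves
# say nothing about *why* the system brakes.
WEIGHTS = [-0.83, 1.47, 0.29]

def learned_brake(distance_m: float, speed_ms: float) -> bool:
    """Subsymbolic 'code': the behaviour is encoded in opaque weights."""
    activation = WEIGHTS[0] * distance_m + WEIGHTS[1] * speed_ms + WEIGHTS[2]
    return activation > 0.0

if __name__ == "__main__":
    print(rule_based_brake(30.0, 20.0))  # True, and we can say exactly why
    print(learned_brake(30.0, 20.0))     # depends entirely on the trained weights
```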
Brittleness of rule-based systems
“It is a commonplace in the field to describe expert systems as brittle –
able to operate only within a narrow range of situations. The problem
here is not just one of insufficient engineering, but is a direct
consequence of the nature of rule-based systems. We will examine three
manifestations of the problem: gaps of anticipation; blindness of
representation; and restriction of the domain.” (Winograd, 1991)
Gaps of anticipation
“The person designing a system for dealing with acid spills may not
consider the possibility of rain leaking into the building, or of a power
failure, or that a labelled bottle does not contain what it purports to. A
human expert faced with a problem in such a circumstance falls back on
common sense and a general background of knowledge.” (Winograd,
1991)
Blindness of representation
“Imagine that a doctor asks a nurse, ‘Is the patient eating?’” (Winograd, 1991)
• (Can patient be disturbed:) Is she eating at this moment?
• (Anorexia patient:) Has the patient eaten some minimal amount in
the past day?
• (Surgery yesterday): Has the patient taken any nutrition by mouth?
In order to build a successful symbol system, decontextualized meaning
is necessary – terms must be stripped of open-ended ambiguities and
shadings.
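A hypothetical illustration of this point (mine, not Winograd’s): once “Is the patient eating?” is encoded as a predicate, the system has to commit to one decontextualized reading, and the other clinical senses of the question become invisible. The field names and readings below are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Patient:
    eating_now: bool
    last_meal: datetime
    oral_intake_since_surgery: bool

# Three clinically different readings of the same question.
def is_eating_disturbance(p: Patient) -> bool:
    return p.eating_now                      # "can the patient be disturbed right now?"

def is_eating_anorexia(p: Patient) -> bool:
    return datetime.now() - p.last_meal < timedelta(hours=24)

def is_eating_post_surgery(p: Patient) -> bool:
    return p.oral_intake_since_surgery

# A symbol system must hard-wire ONE of these as the meaning of "is_eating";
# the shadings a nurse would infer from context are stripped away.
is_eating = is_eating_disturbance
```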
Restriction of the domain
A consequence of decontextualized representation is the difficulty of creating AI programs in any but the most carefully restricted domains (…), where little common-sense knowledge is required:
“A brilliant chess move while the room is filling with smoke because the house is burning down does not show intelligence.” (Winograd, 1991)
Learning system and environment
• A big part of the functionality of learning (adaptive) systems is
provided by the environment.
• The environment is not under the control of the original programmer.
Microsoft Tay [1]
• March 23, 2016: Microsoft unveiled Tay – a Twitter bot that the company described as an experiment in “conversational understanding.”
• The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.”
• Soon after Tay launched, people started tweeting at the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks.
• Tay: “I fucking hate feminists and they should all die and burn in hell.” – “Hitler was right I hate the jews.”
• Tay disappeared less than 24 hours after being switched on.
[1] http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
What can we do?
• A dilemma: rule-based code is brittle and unreliable, while subsymbolic systems cannot be examined and verified in a rigorous way.
• It is not immediately clear that a combination of the two approaches would solve the problem (rather than combine both problems at once).
• We should treat the environment’s influence on learning machines as a potentially dangerous and destructive influence, rather than as a purely beneficial learning resource.
• Learning interactions of the machine with the environment need to be closely monitored.
• Perhaps, for critical systems, the learning phase and deployment “in the wild” should be kept strictly separated (rather than using explorative learning methods in the real environment), as sketched below.
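A minimal sketch of that last suggestion (my illustration, not a design from the talk; the class and the trivial lookup-table “update rule” are invented): adaptation is confined to an offline, curated learning phase and is disabled before the system meets the open environment.

```python
class SeparatedLearner:
    """Toy model: all learning happens offline; the deployed system is frozen."""

    def __init__(self):
        self.table = {}        # stand-in for learned parameters
        self.deployed = False

    def learn(self, situation: str, response: str) -> None:
        if self.deployed:
            raise RuntimeError("Learning is disabled after deployment.")
        self.table[situation] = response   # stand-in for a real update rule

    def deploy(self) -> None:
        self.deployed = True               # freeze the learned behaviour

    def respond(self, situation: str) -> str:
        return self.table.get(situation, "no learned response")

agent = SeparatedLearner()
agent.learn("greeting", "hello")           # curated, supervised training data
agent.deploy()
print(agent.respond("greeting"))           # "hello"
# agent.learn("greeting", "something hostile")  # would raise: no learning in the wild
```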
Three:
“There is no problem with autonomous
machines, as long as they are
supervised.”
Epistemic disadvantage
Supervising a child or a dog is easy. Why?
• My ability to understand the likely outcomes of my actions and to
predict the behaviour of the world around me is more developed
than that of the child or animal I am supervising.
• The epistemic advantage I have over the supervised agent makes
me a suitable supervisor.
• I have a greater degree of control over the interactions of the
supervised agent with his environment than he would have if left to
his own devices.
Epistemic advantage, supervision, and responsibility
• By knowing more than the child, I am in a position to foresee harm
both to the child and to the environment, and thus to avert it.
• This higher degree of control also bestows on me a higher responsibility: parents as well as dog owners are commonly held responsible for the actions of their children and pets, as long as they were supervising them at the moment the action occurred.
Hybrids of humans and non-humans
• In computational hybrids of humans with non-humans, it is often
the human part who is at an epistemic disadvantage.
• At the same time, it is the human whom we traditionally single out
as the bearer of responsibility for the actions of the hybrid system.
Epistemic disadvantage of humans in hybrid agents
Supervising a nuclear power plant:
• As opposed to supervising a dog, I have to rely on artificial sensors (for radioactivity, high temperatures, high pressures), without which I am unable to receive crucial information. Without artificial sensors and artificial computational devices I am not able to control the nuclear power plant at all.
• Part of the algorithm that controls the power plant is necessarily
executed outside of human bodies, and therefore no human can be
said to be “controlling” the power plant.
• The human together with the computers and sensors and switches is
in control. The hybrid agent is.
Computational externalism and epistemic disadvantage (2)
The phenomenon is common:
• Control systems of airplanes and air traffic
• Deep-space exploration devices
• Military self-targeting missiles
• Internet search engines
• There is no way a human could outperform or effectively supervise such machines.
• He is a slave to their decisions.
• He is physically unable to control and supervise them in real time.
Epistemic disadvantage and responsibility
A common misconception when talking about hybrid agents:
“Humans should exercise better control over the actions of the non-human part, supervise it more effectively, so that negative consequences of the operation of the hybrid entity can be seen in advance and averted.”
• Responsibility for a process and its consequences can only be ascribed to someone who is in effective control of that process.
• Ascribing responsibility to agents who are in a position of epistemic disadvantage is unjust and poses problems of justification.
Other failures of responsibility ascription
• Responsibility ascription issues with deep supply chains and big organisations (accidental airbag deployment example).
• Big corporations shield the actually responsible human agent behind a wall of customer service representatives who are neither knowledgeable nor responsible.
• Often the creation of the program has been outsourced, and the original creators have moved on, or the company has ceased to exist altogether.
• Thus, there is no practical way of determining a responsible agent in such cases.
• Although we can solve liability problems in such cases (by arbitrary assignment of liability), this gives the originally responsible agents no incentive to behave more responsibly in the future, since they can consistently escape their responsibility.
Lack of incentive to dispute the machines’ decisions (1)
What about the willingness of the human, who is formally in control of
the machine, to dispute the machine’s suggestions for action?
War robots:
• The machine is a tested and certified piece of military equipment,
whose algorithms have been developed by recognised experts and
refined in many years of use in actual combat, and then possibly
approved by national parliaments and technology oversight
committees.
• To doubt the machine’s suggestions and to interfere with its
operation will require a significant amount of self-confidence and
critical reasoning from the human operator, along with the
willingness to engage in a long process of accusations and
justification.
Lack of incentive to dispute the machines’ decisions (2)
• On the other hand: After blindly following the machine’s
suggestions, one can always blame the machine, its manufacturer,
one’s superiors who chose to deploy it, etc. Responsibility will
almost never stick to the low ranks who operate the machine in the
field.
Lack of incentive to dispute the machines’ decisions (3)
Self-driving cars:
• A self-driving car, too, comes with the weight of an institution
behind it: Daimler-Benz or Tesla, together with Google’s AI
algorithms, with the blessing of the Transport Department and one’s
insurance company.
• Even assuming that one had the ability and the epistemic advantage
necessary to override the machine’s decisions, one would be very ill
advised to do so.
• The machine has passed all these institutional examinations and has
been pronounced “officially” safe.
Lack of incentive to dispute the machines’ decisions (4)
• Just accepting the machine’s suggestions and going along with its
decisions (even if they are obviously wrong) will off-load the
responsibility for the damages on someone else’s shoulders:
Daimler-Benz’s or Tesla’s or the insurance company’s.
• Interfering with and overriding the machine’s decisions will
squarely place the responsibility for any outcomes on the human
agent.
• Faced with a choice like that, no one in their right mind would dare question the machine’s decisions.
• The responsible machine operator in such cases is a fiction.
Alertness and fatigue
• Another fiction: Humans can sit for hours inactive in a driver’s seat, intensely concentrating, ready to grab the wheel should an emergency arise.
• This is obviously wrong, for psychological and biological reasons.
• Human attention and alertness cannot be sustained in a situation where no action is required over long periods of time.
• Such rules (which now govern autonomous cars) are unjust and will lead to accidents rather than prevent them (if they are not simply ignored).
The non-verifiability of human agents
• As opposed to machines, human agents’ “algorithms” are not
verifiable.
• Thus, in the case of an accident, we have one part of the
human-machine hybrid (the machine) whose program can be proven
to be free of errors.
• The other part (the human) is not verifiable and not certified with
the same rigour.
• Sensibly (?), the burden of proving his innocence will shift to the human.
• The default assumption will be that the machine is flawless, and
that, therefore, the reason for the failure of the hybrid system must
be human error.
What can we do? (1)
• Do not put humans in situations where they have to responsibly
supervise machines from a position of epistemic or computational
disadvantage.
• Biological factors like fatigue and the psychology of attention must
be honoured.
• It is better (more just) to abandon completely the fiction of responsible supervision in such cases, and to concentrate on creating more reliable machines that do not require the fiction of supervision in order to be trustworthy.
What can we do? (2)
• In complex production processes and big institutions, mechanisms
must be created that ensure a precise ascription of responsibility to
those agents who are actually responsible for particular actions,
products, and processes. These must be transparent and accessible
to the public.
• The default assumption in cases of responsibility or liability must
always be that the machine is at fault, not the human. The burden of
proof must rest with the institution and the machine, not the human
operator.
Four:
“Machines can act as advisors to human
beings. The autonomy will remain with
the human.”
Or:
“Dave, I don’t think you should do
that.”
Deceptive user interfaces
See Appendix C.
Friendly user interfaces can be dangerous (1)
• Problem: The patient is deceived into believing that the machine has more capabilities than it actually has, and this deception has medical implications.
• Example:
• A patient believes that the machine is capable of reminding him to take his pills.
• The machine cannot actually perform this function.
• But its conversational interface hides or obscures this limitation. The machine does not understand, but keeps interacting verbally, and the user doesn’t realise that he has not been understood.
• Consequence: The patient misses taking his pills at the right time.
Friendly user interfaces can be dangerous (2)
• Sometimes “easy,” conversational interfaces, especially with
anthropomorphising metaphors, can be dangerously misleading.
• They can (unintentionally) deceive the user into attributing abilities
to the machine that the machine does not really possess.
• In some cases it might be necessary not to employ a deceptive/suggestive interface at all, if the medical practitioner can foresee that a particular patient is likely to come to harm from being deceived by the machine.
The creeping erosion of human autonomy
• Humans will necessarily lose autonomy as soon as a machine becomes better at something than they are (driving, landing an airplane).
• The human loss of autonomy will not happen at the point of the Singularity, but will creep in, bit by bit.
• As soon as automated cars can drive better, a human’s wish to drive will have to be refused.
• This erosion of human autonomy will be endorsed and enforced by governments, industry, and insurance companies: with good reason.
• Driving, landing a plane manually, or determining one’s own diet will become illegal, for good reasons.
• Still, this is a dangerous, creeping erosion of human autonomy!
Censorship by algorithm
• Another problem with algorithmic morality is censorship by
algorithm.
• Will robot morality systems censor sex, breastfeeding images, and
politically incorrect words from the databases they manage?
• Google and Facebook are doing that already: censoring ads for payday loan companies, censoring images of breastfeeding, etc.
• Again, the problem is: how can society stay in control of these acts
of censorship and not give up its moral and legal authority?
• How can democratic institutions exercise legitimate and necessary
control over the creeping erosion of human rights (freedom of
speech) and human autonomy?
What can we do? (1)
• In order to be able to limit deception to morally permissible forms that increase the user’s autonomy, robots themselves will need to have a working internal model of each user they interact with, and of how particular types of users will react to particular types of information.
• Different kinds of interfaces must be offered to users. If the user requests low-level control over the machine, he should be given access. Other users might prefer high-level abstractions (see the sketch below).
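A minimal sketch of this idea (my illustration; the user attributes and interface levels are invented): the robot keeps a simple per-user model and enables the anthropomorphising conversational front end only when that model suggests the user will not be misled by it.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Interface(Enum):
    MENU = auto()            # explicit on-screen menus, switches, buttons
    CONVERSATIONAL = auto()  # anthropomorphising, speech-based front end

@dataclass
class UserModel:
    understands_machine_limits: bool   # does the user know what the robot cannot do?
    requests_low_level_control: bool

def choose_interface(user: UserModel) -> Interface:
    """Pick the least deceptive interface that still serves the user."""
    if user.requests_low_level_control or not user.understands_machine_limits:
        return Interface.MENU          # avoid inviting false capability attributions
    return Interface.CONVERSATIONAL

print(choose_interface(UserModel(understands_machine_limits=False,
                                 requests_low_level_control=False)))  # Interface.MENU
```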
What can we do? (2)
• For such cases, care robots must be equipped with alternative user
interfaces that are less deceptive and invite no projection of
human-like qualities onto the machine (on-screen menus, physical
switches and buttons, and so on).
• In the case of corporate censorship, governments must enforce
compliance of private companies with the laws of the state, and
defend the citizens’ rights against attempts of corporations to limit
these rights.
Five:
“We can build an ethical governor.”
Which moral system?
• What moral system should be used, and how can it be justified?
• It seems that, at present, implementation concerns often dictate the choice.
• “All I have is my hammer.” (Quote from this conference.)
• (Refers to:) “If all you have is a hammer, then everything looks like a nail.”
• The implementation of moral rules should not be constrained by the ability of the programmers to translate moral systems into code!
• We must demand not that whatever is feasible be implemented, but that moral systems be deployed only once philosophical (rather than pragmatic) justifications have been given for the choice of moral theory.
54
.
Whose morality? (1)
• It is obvious that moral rules must, at least to some extent, be
shared moral rules.
• Morality is there to regulate social, collective behaviour.
• Moral rules must, like traffic laws and unlike, for example, cooking
recipes, be agreed upon by the members of a community.
55
.
Whose morality? (2)
• In Arkin’s model, but also in the examples we heard in the past days
here, a set of immutable and context-free rules of behaviour is
extracted from the common sense of the average Western programmer,
without reference to local beliefs and customs at the point of the
robot’s deployment.
• Since part of morality is rooted in particular societies and their
values, this approach will create problems of justifying the robot’s
actions in the societies confronted with it.
• Do we need Islamic robots? North Korean? US American?
56
.
Problems of machine-moral relativism
• Problems of moral relativism:
• Robots should implement their societies’ values.
• But implementation happens globally, and only by a few corporations
(this is unlikely to change, given the massive amount of data needed
to train the systems).
• These few corporations implement their own values (and have to,
else the populations in their countries would protest).
• But then they export these technologies with their built-in morality.
• (This is already happening: Facebook censors images globally
according to its own, US American moral criteria.)
57
.
Moral imperialism
• This is a kind of moral imperialism.
• Only a few countries are likely to export their encoded morality to
all others.
• Technological advancement thus translates directly into moral
authority.
• Technologically advanced countries will monopolise and
imperialise artificial morality.
58
.
Conflicts of interest
• Moral machines create various conflicts of interest.
• The implementor of the ethical governor is, at the same time, the
robot’s designer or manufacturer.
• The same person creates the capabilities of the machine to do harm,
and is then supposed to limit them.
• Sometimes limiting the capabilities of the robot via an ethical
governor will conflict with the commercial interests of the
manufacturer.
• See the problems with Tesla pushing out badly tested and dangerous
self-driving technology without waiting for public, democratic
approval. Same with war robots, where the military has an interest
in a machine that is not ethically constrained.
• Properly, the ethical governor should be controlled by society, not
by the creator of the machine.
59
.
What can we do?
• We need clear philosophical justifications for the choice of moral
theories to be implemented, rather than ad-hoc implementations that
follow the “I happen to have this hammer” principle.
• Issues of moral relativism should be dealt with by international
bodies.
• Particular companies and governments must not be allowed to
convert technological advantage into moral domination of less
technologically developed societies.
• We should take control of the design and implementation of moral
artificial agents away from programmers, software designers, and
corporations, and install strong public, democratic control
structures to ensure the absence of conflicts of interest and the
proper functioning of ethical governor systems.
60
.
Six:
“We can put up effective mechanisms of
robot certification, verification, and
accident investigation.”
.
61
.
Limits of formal verification
See Appendix B.
62
.
Regulation by code (Lessig) and technological determinism
• Lessig (1999, 2006) has famously shown how the design of technical
systems can exert a normative force comparable to the constraints
imposed on human action by law and custom.
• The insight is not new in itself. Technological determinism and the
idea of an autonomous technology as advocated by thinkers as
diverse as Heilbroner, Ellul, McLuhan and even Heidegger have
been around for a long time.
• Their core idea, although often perceived as being in need of
clarification and amendment, is generally not thought to be
dismissible as a whole.
63
.
Regulation by code (Lessig)
• With Lessig, the idea is applied to computer code as a particular
instance of an immaterial artefact with its own regulatory profile.
“Code is an efficient means of regulation. But its perfection
makes it something different. One obeys these laws as code not
because one should; one obeys these laws as code because one
can do nothing else. There is no choice about whether to yield
to the demand for a password; one complies if one wants to
enter the system. In the well implemented system, there is no
civil disobedience. Law as code is a start to the perfect
technology of justice.” (Lessig, 1996)
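A minimal sketch, in hypothetical Python, of what regulation by code
looks like in Lessig’s password example: the rule is not announced and
obeyed, it is simply enforced, and the code offers no path for
non-compliance.

    AUTHORISED = {"alice": "correct horse battery staple"}

    def enter_system(user: str, password: str) -> str:
        if AUTHORISED.get(user) == password:
            return "access granted"
        # No appeal, no discretion, no civil disobedience: the rule simply holds.
        return "access denied"

    print(enter_system("alice", "wrong password"))  # -> access denied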
64
.
Regulation by code (Lessig)
At the same time, the code which both requires and enforces perfect
obedience is itself removed from view:
“The key criticism that I’ve identified so far is transparency.
Code-based regulation – especially of people who are not
themselves technically expert – risks making regulation
invisible.
Controls are imposed for particular policy reasons, but people
experience these controls as nature. And that experience, I
suggested, could weaken democratic resolve.” (Lessig, 2006)
65
.
Regulation by code (Lessig)
• This argument applies with particular force to the case of moral
robots.
• Laws of War, Rules of Engagement, traffic laws, rules of the road,
rules of the air, are all publicly visible and democratically approved
documents, regulating in an open and transparent way the citizens’
behaviour.
• These documents are equally accessible both to the public which, in
the final instance, authorises them, and to the soldiers, drivers,
pilots, whose behaviour they intend to guide.
66
.
Regulation by code (Lessig)
• Things change when Laws of War, Rules of Engagement, traffic laws,
rules of the road, etc., become software.
• Words, which for a human audience have more or less clear, if
fuzzily delineated, meanings (like “combatant,” “civilian,” “harm,”
“danger,” “enemy,” “avert,” “expect,” “ensure,” etc.), get translated
into a precise, algorithmic, context-free representation.
• These translation processes crucially alter the meaning of the words
and concepts they are applied to (compare the rich everyday concept
of “harm,” as opposed to a programmed variable “int harm=25;”
in a computer program.)
• Thick, natural-language concepts need to be “codified,” that is,
turned into an unambiguous, machine-readable representation of the
concept they denote. This translation cannot be assumed to be
straightforward, for several reasons (a toy contrast is sketched
below).
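A toy contrast, rendered here in Python as a hypothetical counterpart
to the “int harm=25;” fragment above: the thick concept of harm becomes
a bare number and a threshold comparison, and nothing in the code
records what was lost in that translation.

    harm = 25  # the "codified" concept: a bare number on an unstated scale

    def permissible(expected_harm: int, threshold: int = 50) -> bool:
        # The moral question "is this harm acceptable?" has quietly become
        # an integer comparison against a threshold fixed at design time.
        return expected_harm < threshold

    print(permissible(harm))  # -> True, whatever "25 units of harm" means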
67
.
Translation processes: Hubert Dreyfus
• First, one might argue (in the wake of Heidegger and Dreyfus) that
readiness-to-hand as well as Dasein, being the mode of existence of
equipment and that of humans, respectively, cannot be expressed
adequately by sets of “objective” properties at all (Dreyfus, 1990).
• Whether, for instance, a hammer is “too heavy” for use is not
translatable into one single, numerical expression of weight, since
the hammer’s “unreadiness-to-hand” will vary
• not only across different users,
• but also depending on the time of day, the health status and the mood
of the user,
• and perhaps even the urgency of the task for which the hammer is to
be used.
68
.
Translation processes: Hubert Dreyfus
• Arkin’s concept of an ethical governor, being based on a naive
symbolic representation of world entities in the machine’s data
structures, does not even try to acknowledge this problem.
• The most promising approach in this direction based on symbolic
computation is perhaps Lenat’s encoding of conflicting microtheories
in CYC (Lenat, 1995), but that attempt is nowadays generally
considered to have been a failure.
69
.
Translation processes and public scrutiny (1)
• Whereas the natural-language concepts and documents have been
the object of public scrutiny and the result of public deliberation:
• their new, algorithmic form, which is far from being a faithful
translation,
• has been generated behind the closed doors of an industry laboratory,
• in a project which, most likely, will be classified as secret.
• The machine-representation of a translated concept is usually part of
a “closed code” system that is not available to the public for
inspection.
70
.
Translation processes and public scrutiny (2)
Military or copyrighted corporate code is a prime example of “closed
code”:
“By ‘closed code,’ I mean code (both software and hardware)
whose functionality is opaque. One can guess what closed
code is doing; and with enough opportunity to test, one might
well reverse engineer it. But from the technology itself, there
is no reasonable way to discern what the functionality of the
technology is.” (Lessig, 2006)
71
.
Translation processes and public scrutiny (3)
• What reaches the public and its representatives will most likely not
be the code itself, but advertising material promoting the machine in
question and the features which its manufacturer wishes to highlight.
• Whether a program actually does what it purports to do depends
upon its code (Lessig, 2006).
• If that code is closed, the actual moral values and decisions that it
implements will be removed from public scrutiny and democratic
control.
72
.
Ownership of code and ownership of knowledge
• AI systems are not only built on closed technology.
• They are even based on closed science.
• Google not only owns the technology: it owns the scientists who
create the science behind all that technology (prominent examples:
Norvig, Kurzweil, Hinton, Ng, the DeepMind team).
• This means that, as opposed to impartial investigations of airplane
crashes, with advanced AI there is no publicly available body of
knowledge about how these systems work, and no independent experts
able to judge them.
• The only experts available are company experts.
73
.
Ownership of code and ownership of knowledge
• For every product, there are only a handful of experts who really
understand it, and these are all part of the design team for that
product, and thus not impartial experts.
• Even if we find experts from a competing company, all experts are
in a conflict of interest situation, either as employees of the
examined company, or as employees of the competitor.
• This severely threatens public control and accident investigations.
74
.
What can we do? (1)
• Keep the code that encodes moral rules publicly accessible (open
source).
• Allow public modifications to the code and ban closed-code systems
from deployment in society (for example in self-driving cars,
autonomous weapons, household robots, etc.).
• Create state committees that regularly examine and certify the code
of moral systems for compliance with the common rules of morality
and sound engineering practices. (Not only the finished system as a
black box, but the code itself).
75
.
What can we do? (2)
• Societies must make sure that knowledge does not become
corporate property.
• This means:
• A requirement that advanced AI techniques be well documented and
taught in the public education system, to prevent them from becoming
company secrets; and, crucially,
• No patenting of software technology.
76
.
Seven:
“We are implementing ethics. Ethics is
a rule system for guiding action.”
.
77
.
“Ethics is a rule system for guiding action.” (?)
• Some of the systems shown here in the past days (medicine
scheduling, robots preventing harm to other robots) are not
examples of ethics at all, but examples of optimisation problems,
planning etc.
• But plain action planning is not ethics.
• If action planning within ethical constraints were ethics, then
ethics would be just another rule system, equal in nature to laws,
rules of the road, or chess playing. But this seems wrong.
• There is something distinctive about ethics, not only in the content
of the rules, but also in the properties of the rule system itself.
78
.
Ethics and chess
• Look at chess in comparison to the ethical controllers presented
here previously.
• “Ethical controllers will take a set of possible actions in a situation
and choose the best one at the present moment, anticipating the
possible responses of the other participants in that scenario.”
• This exactly describes a chess program:
• A move generator generates a set of possible moves,
• and an evaluation function selects the best board position after each
move.
• Now, if we replace the evaluation function of chess with a moral
evaluation function, does this give us a moral agent? – No.
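A minimal sketch of the structural point (hypothetical code, not an
implementation of any controller presented at the conference): the same
generate-and-evaluate skeleton serves both a chess engine and such an
“ethical controller”; only the plugged-in evaluation function differs.

    from typing import Callable, Iterable, TypeVar

    Action = TypeVar("Action")

    def choose_action(actions: Iterable[Action],
                      evaluate: Callable[[Action], float]) -> Action:
        # Generate-and-evaluate: pick the highest-scoring option,
        # whatever "score" happens to mean.
        return max(actions, key=evaluate)

    # Chess: score the candidate moves (stub evaluation for illustration).
    best_move = choose_action(["e4", "d4", "Nf3"], evaluate=len)

    # "Ethical" controller: same skeleton, a moral scoring function plugged in.
    scores = {"warn": 0.9, "wait": 0.7, "fire": 0.1}
    best_act = choose_action(scores, evaluate=scores.get)

    print(best_move, best_act)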
79
.
Moral rules and other rules
• Arkin’s rules of engagement, rules of the road (and others) are not
moral rules.
• Morality consists precisely in the possibility of dissent, in ignoring
the rules.
• This is arguably the whole point of morality: to provide a
mechanism to override the rules. To restrict and check everyday
rule following.
• Morality proper consists of second-order rules.
• Its point is to question first-order rules like laws, agreements,
social customs, and so on.
80
.
What is moral behaviour?
Our common understanding of moral behaviour rests on two pillars:
• First, the already mentioned shared set of moral rules, and
• Second, acting in accordance with one’s deepest conviction about
what is right and wrong (what is sometimes described with the
words “conscience,” or moral autonomy).
• If an agent is not free to act following his convictions, then we
usually would not consider him a fully responsible moral agent.
• If, for example, a soldier is ordered to perform a morally
praiseworthy action, we would not ascribe the full amount of moral
praise to the soldier himself, but to those who issued the command.
81
.
Kantian moral autonomy: The creators of moral law
• There is always a seed of existential freedom in moral action.
• There is the Kantian autonomy of the human being: The moment
where the human agent becomes at the same time the lawgiver of
the moral law and its subject (Kant).
• This is the defining characteristic of human morality.
• Blindly following a pre-programmed rule system does not make a
moral agent.
82
.
What makes ethics distinctive?
So there are three things that make ethics distinctive:
1. Meta-rules, asking for the justification of first-level rules. (Ethics vs
law)
2. Overriding first-level rules by second-level rules (ethics
overriding law, custom, rules of the road, etc.):
• When is it morally right to cross a red traffic light?
• When is it right to disobey the laws?
• When is it right to start a revolution and overthrow the government?
• When is it morally right to lie to your friend?
3. The possibility of dissent.
83
.
The importance of dissent
• Dissent is crucial in ethics.
• Dissent, disobedience, and the personal moral stance are a last-line
defence against immoral rule systems or immoral commands.
• Many acts of kindness, many of the most inspiring human stories of
the Second World War and Nazi Germany, were acts of disobedience:
personal decisions to do what was morally right, based on dissent
from the rule systems in force at the time.
• Imagine what the world would look like if Hitler had had perfectly
obedient war robots instead of human soldiers.
84
.
What can we do? (1)
• In order to be moral in the full sense of the word, artefacts must
include the possibility of dissent and disobedience on moral
grounds.
• This means that the operator of the system must not be able to
override the ethical governor’s decision (Arkin’s concept gets this
completely wrong).
• Rules must be prioritised, with respect for human rights and core
human values overriding tactical rules that provide a local
advantage (for example in a war situation).
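A minimal sketch, under the assumption of a simple prioritised rule
list (the names RULES and governor_allows are illustrative, not from
Arkin’s architecture), of how higher-priority rules could veto tactical
ones with no operator override path:

    # Ordered from highest to lowest priority; each rule may veto an action.
    RULES = [
        ("respect_core_human_values", lambda action: action != "strike_crowded_area"),
        ("tactical_advantage",        lambda action: True),  # never vetoes upward
    ]

    def governor_allows(action: str) -> bool:
        # Every rule must consent; there is deliberately no operator override.
        return all(check(action) for _name, check in RULES)

    print(governor_allows("strike_crowded_area"))  # -> False, regardless of orders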
85
.
What can we do? (2)
• The morality implementation should never be provided by the
machine’s operator himself, because of conflicts of interest
(military, for example).
• The ethical governor must be a sealed black box to the
manufacturer and the operator, enforcing moral rules even against
the manufacturer’s and operator’s interests.
86
.
Eight:
“We can use artefact ethics to better
understand human ethics.”
.
87
.
Can we understand human ethics by looking at machine ethics?
• Human ethics is not about rule following. This is the domain of
laws, rules of conduct, social customs etc.
• Without the possibility of disobedience, we are not talking about
morality, but about blind rule following, about artificial slavery.
• The machines we talked about all these days are mechanical slaves,
perfectly obedient. This is not morality, and it is dangerous to
confuse this with morality.
• Authoritarian states would like it very much if morality worked like
that.
88
.
Dehumanising ethics (1)
• The danger is that, by implicitly redefining morality in the way
done at this conference, we are de-humanising and in fact
de-moralising morality.
• We are creating an equivocation that will be confused with the real
thing, and as a next step we will apply these “insights” we gained
from artificial slavery back to humans.
• This was also expressed a few times as an advantage of artificial
morality: That we can learn from it to understand human morality
better.
89
.
Dehumanising ethics (2)
• If this back-application of insights is performed with these machines
we talked about this week, then we will completely distort what
ethics means for humans, and create an image of unconditional and
inescapable moral slavery as the ideal of human morality.
• Together with the enforced and closed-code nature of
morality-as-code, and the lack of democratic control, we are talking
about replacing moral freedom and conscience with a totalitarian
monster conception of absolute obedience to an obscure and
uncontrollable rule-giver (Lessig, Brownsword).
90
.
Thank you for your attention!
Andreas Matthias, [email protected]
91
.
Appendix A
.
92
.
Extended Mind Thesis (Clark & Chalmers, 1998)
• Humans and machines can be even more closely coupled than in
Latour’s composite agents.
• People use artefacts as part of their cognitive processes (Clark &
Chalmers, 1998)
• For example:
• Using a paper notebook as an extension of one’s memory
• Using paper and pen to do additions and multiplications
• Physically re-arranging Scrabble tiles to recall words
• In these cases, the physical aids are an integral part of the cognitive
processing hardware.
• Without them, the cognitive operation could not take place.
• Thus, the external supports form part of the mental process.
• Consequently, the mental process itself takes place (in part)
outside of a person’s skull.
93
.
An example of an extended mental process
Tetris:
• Kirsh and Maglio (1994): the physical rotation of a shape through
90 degrees takes about 100 milliseconds, plus about 200
milliseconds to select the button.
• To achieve the same result by mental rotation takes about 1,000
milliseconds.
• “Physical rotation is used not just to position a shape ready to fit a
slot, but often to help determine whether the shape and the slot
are compatible.” (Clark & Chalmers, 1998)
94
.
Extended mental processes require spread of credit
• If I calculate the product of two big numbers with a calculator, I
cannot claim praise for my arithmetic abilities.
• If I play perfect chess with the help of a computer, I cannot claim to
be a chess master.
• If I speak perfect Chinese with the help of Google Translate, I
cannot claim praise for my language ability.
95
.
Attributing praise and blame (1)
• The calculator’s ability to calculate with numbers is vastly superior
to my own, both in terms of speed and accuracy.
• If I assert that, by using the calculator, I have integrated the
calculator into my own cognitive toolset, I have immensely
increased “my” cognitive abilities. Now “I” can suddenly calculate
square roots of seven-digit numbers, because “my” cognitive
abilities rightly include the performance of the calculator.
• But this is incorrect.
96
.
Attributing praise and blame (2)2
• The core statement of the Extended Mind Thesis is really not about
defining cognition, but about justly attributing the credit for the
performance of a computation.
• The problem of defining the boundaries of cognition turns out to be
a moral problem (and a problem of responsibility ascription).
• What is wrong with my claim that the calculator is part of my
cognitive toolset is my attempt to evoke particular reactive
attitudes as a consequence of that claim (praise, blame).
2
More detail about how the ascription of reactive attitudes to hybrid agents works, in:
Matthias, Andreas (2015) “The Extended Mind and the Computational Basis of
Responsibility Ascription”, Proceedings of the International Conference on Mind and
Responsibility - Philosophy, Sciences and Criminal Law, May 21-22, 2015. Organized by
Faculdade de Direito da Universidade de Lisboa, Lisbon, Portugal.
97
.
Attributing praise and blame (3)
• If I want to persuade a Cantonese bus driver to stop the minibus
near my home and let me get off:
• A particular sequence of utterances in Cantonese is required to bring
about the desired result. I have an electronic translator for that.
• Each part of the algorithm, (a) the persuasion strategy (me) and (b)
the translation (the translator), depend on each other. This might be
argued to be genuinely one extended cognitive process. Neither part
can successfully complete the task of getting me off the bus at the
right place without the other.
• What counts is the locus of the performance of a cognitive
algorithm.
• Performing a mental operation outside of the brain spreads
epistemic credit across the whole of the human-artefact hybrid
system.
98
.
Attributing praise and blame (4)
• Shared epistemic credit translates into shared moral responsibility.
• In order to be morally responsible for an outcome, I need to have
control over the process that led to that outcome.
• I don’t have complete control over a mental process if parts of it
have been executed outside of my brain, by a second, independent
processor, with different capabilities than my own brain.
• The two processors have to share the moral responsibility in the
same way as they share the epistemic credit.
• This will become clearer later on.
99
.
Appendix B
.
100
.
Limits of formal verification
• Verification is impossible in learning systems, where the
environment itself modifies the system through the learning process:
the Microsoft chatbot problem (see above; a minimal sketch follows
below this list).
• Formal verification only works for symbolic, algorithmic AI systems.
Advanced AI systems will quite probably not be symbolic, or not
entirely symbolic, which makes formal verification of such systems
impossible.
• Even in symbolic systems, the amount of data in non-toy systems
makes exhaustive testing and verification impractical: systems that
learn from the web, like IBM Watson (practically unlimited access to
ever-changing information), or CYC’s knowledge base (239,000
concepts and 2,093,000 facts).
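As an illustration of the first point, here is a minimal Python sketch.
It is entirely hypothetical: the names LearningChatbot, observe and
respond are mine, and this is not the actual Microsoft bot. It only
shows why design-time verification cannot pin down such a system: its
input-output mapping is a function of an interaction history that does
not yet exist when the system is verified.

# Hypothetical illustration only; names are invented for this sketch.
from collections import defaultdict
import random

class LearningChatbot:
    def __init__(self):
        # prompt -> replies harvested from users at runtime
        self.learned = defaultdict(list)

    def observe(self, prompt: str, user_reply: str) -> None:
        # The environment writes into the system: every interaction
        # changes the mapping the bot will use later.
        self.learned[prompt.lower()].append(user_reply)

    def respond(self, prompt: str) -> str:
        replies = self.learned.get(prompt.lower())
        if replies:
            return random.choice(replies)
        return "Tell me more."

bot = LearningChatbot()
bot.observe("hello", "hi there")            # benign training input
bot.observe("hello", "<whatever users teach it>")  # hostile input is accepted just the same
print(bot.respond("hello"))

Any property proved about respond() before deployment holds only
relative to the future, unknown contents of learned; the verification
target changes every time the environment writes into the system.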
101
.
Appendix C
.
102
.
Deceptive interfaces in care robots3
Sometimes a deceptive user interface can be empowering.
Deceptive technological metaphors:
• Print a range of pages of an HTML document (no pagination in
HTML!)
• Send an email (no hostnames, no letter, not even characters, only a
string of Unicode values; see the sketch below this list)
• Open a “folder” and select a “document” in it (none of these things
exist)
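To make the “send an email” metaphor concrete, here is a minimal sketch
using only the Python standard library (the addresses are invented for
illustration): the “letter” bottoms out in nothing but a serialised
string of Unicode characters, which on the wire is only a sequence of
byte values.

from email.message import EmailMessage

# Illustrative addresses only.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "No letter inside"
msg.set_content("There is no paper, no envelope, no post office.")

# The "letter" the metaphor talks about is just this serialised text ...
print(msg.as_string())       # RFC 5322 text: a sequence of Unicode characters
# ... which on the wire is only a sequence of byte values.
print(msg.as_bytes()[:60])

The “folder” and “document” metaphors dissolve in the same way once one
looks at the underlying byte streams.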
3
Matthias, Andreas (2015). “Robot Lies in Health Care. When Is Deception Morally
Permissible?” Kennedy Institute of Ethics Journal Vol. 25, No. 2, 169–192
103
.
Self-directedness and autonomy
Oshana (2002):
“In the global sense, a self-directed individual is one who sets goals for
her life, goals that she has selected from a range of options and that she
can hope to achieve as the result of her own action. Such goals are
formulated according to values, desires, and convictions that have
developed in an uncoerced fashion. (…) This definition suggests that an
autonomous person is in control of her choices, her actions, and her
will.” (Oshana, 2002)
104
.
Interface considerations
• A user interface can be empowering, in that it allows the user to
make full use of the machine’s capabilities and to control it
according to the user’s own values and preferences.
• Or it can reduce the user’s autonomy, either by making features of
the machine inaccessible or simply by being obscure.
• The goal of information disclosure in a specialist/user relationship
must be to strengthen the user’s autonomy by providing him with
grounds for action that are intelligible and meaningful to him.
105
.
User interfaces must adapt to the user
A user’s ability to make choices depends to a very high degree on that
specific user’s capacity to understand the information imparted to
her:
• An expert computer programmer might find a graphical, heavily
icon- and metaphor-based environment limiting and confusing.
• A user who is not acquainted with computer technology might be
unable to handle the programmer’s favourite command language
interface.
• An interface that is appropriate for one user might well be
ineffective for another.
• Whether a given interface actually promotes or limits a user’s
autonomy will therefore often depend on the particular user.
106
.
References
Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.
Dreyfus, H. L. (1990). Being-in-the-World: A Commentary on Heidegger’s Being and Time, Division I (paperback edition).
MIT Press.
Kirsh, D., & Maglio, P. (1994). On Distinguishing Epistemic from Pragmatic Action. Cognitive Science, 18(4), 513–549.
Latour, B. (1999). A Collective of Humans and Nonhumans: Following Daedalus’s Labyrinth. In Pandora’s Hope: Essays on the
Reality of Science Studies. Cambridge, MA: Harvard University Press.
Lenat, D. B. (1995). CYC: A Large-Scale Investment in Knowledge Infrastructure. Communications of the ACM, 38(11).
Lessig, L. (1996). The Zones of Cyberspace. Stanford Law Review, 48(5), 1403–1411.
Lessig, L. (1999). The Law of the Horse: What Cyberlaw Might Teach. Harvard Law Review, 113, 501–546.
Lessig, L. (2006). Code Version 2.0. New York: Basic Books.
Matthias, A. (2004). The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics and
Information Technology, 6(3), 175–183.
Oshana, M. A. L. (2002). The Misguided Marriage of Responsibility and Autonomy. The Journal of Ethics, 6(3), 261–280.
Winograd, T. (1991). Thinking Machines: Can There Be? Are We? In J. Sheehan & M. Sosna (Eds.), The Boundaries of Humanity:
Humans, Animals, Machines (pp. 198–223). Berkeley: University of California Press.
107