Roboethics

J. Blackmon




Alan Turing (1950) and the Turing Test
IBM’s Deep Blue beats world champion Garry Kasparov in chess (1997).
IBM’s Watson beats two champions on Jeopardy! (2011), interpreting natural-language clues and providing answers without live access to the Internet.
Ray Kurzweil popularizes the concept of “the singularity”.


Roboethics
Machine Ethics

Roboethics is human-centered and focuses on
the ethical use of robots in society.

How should (and shouldn’t) we use them? What
new harms might they introduce? What are their
legal implications?

Machine Ethics concerns the development of
robots and AI capable of making moral
decisions.

For example, how should Google’s driverless car resolve a forced-choice scenario: hit a school bus or drive off the bridge?
The core ethical issues are subsumed within…
Five Interrelated Themes
 Safety
 Appropriate Use
 Capabilities
 Privacy and Property Rights
 Responsibility and Liability
Safety: Are robots safe?
 Wallach: Regarding robots developed so far, current product liability laws sufficiently cover this question.
 Robot safety is clearly the legal responsibility
of the companies that produce the robots and
of the end users who adapt them.
Appropriate Use: Are robots appropriate for the
applications for which they are designed?
 Robots as sex toys
 Robots as pets
 Robots as companions
 Robots as nannies and caregivers
Sufficiently advanced robots could meet some of the preferences and needs of people at no cost to a providing human (or animal).
Appropriate Use: Are robots appropriate for the
applications for which they are designed?

 Preferences and needs: entertainment, companionship, care.
 No cost to a providing human (or animal): robots won’t feel harmed, bored, disgusted, mistreated, or lonely. (Recall the “three Ds”: dull, dirty, dangerous.)
Appropriate Use: Are robots appropriate for the
applications for which they are designed?

Replacing humans (and animals)
 Will we lose crucial sensibilities (virtues?) or lessons?
 Would using robots as caregivers and nannies be abusive to those no longer cared for by humans?
 Would infants and children be emotionally or intellectually stunted?
Appropriate Use: Are robots appropriate for the
applications for which they are designed?



 Is it inappropriate or wrong to violently or otherwise abuse a robot? If so, why?
 Assuming these are robots incapable of suffering, what would be wrong with it?
 Test cases for consideration: animals, plants, video game characters, toys.
Capabilities: Can robots
live up to the task for
which they have been
designed?
 We tend to
anthropomorphize
robots, expecting
them to have
capabilities they don’t
have.
Capabilities: Can robots
live up to the task for
which they have been
designed?
 Marketers will exploit
this tendency to
anthropomorphize.
 Thus, we may be
systematically duped.
Capabilities: Can robots live up to the task for
which they have been designed?

Wallach: We need a professional association or regulatory commission to certify robots for particular uses. Yes, this will be costly, and it will have to adapt as the field of robotics progresses.
Privacy and Property Rights: How will robots
affect the alleged loss/diminution of these rights?
 A robot’s ability to sense and store data is crucial to its performance and also valuable to the owner and to a technician trying to debug, fix, or upgrade it.
 But if this robot is used in the home or other
private settings, the data will also be a record
of (traditionally) private activity.
Privacy and Property Rights: How will robots
affect the alleged loss/diminution of these rights?
 Such a record could be subpoenaed.
 It would be accessible for various criminal
purposes.
 Also (not mentioned by Wallach), “function creep” will make much of the record available to third parties, often legally but without the owner’s knowledge.
Responsibility and Liability: How do we assign
moral and legal responsibility for a robot’s
actions?
 Robots are the product of “many hands”, and
as such, individual developers of a component
may have only a limited understanding of how
it will interact with others, potentially
increasing risks.
 Deadlines and limited funding also contribute
to limited understanding and increased risk.
Responsibility and Liability: How do we assign
moral and legal responsibility for a robot’s
actions?
 The possibility of unknown risks may lead a
company to delay the release of a robot.
 But should this be the default standard?
 Too many delays weaken productivity and innovation and, consequently, our economy.
 We could lose the competitive advantage to
other countries.
Responsibility and Liability: How do we assign
moral and legal responsibility for a robot’s
actions?
 “When an intelligent system fails,
manufacturers will try to dilute or mitigate
liability by stressing an appreciation for the
complexity of the system and the difficulties in
establishing who is responsible for the failure.”
 To address such concerns, Wallach proposes
Five Rules.
Five Rules
1. “The people who design, develop, or deploy a
computing artifact are morally responsible for that
artifact, and for the foreseeable effects of that artifact.
This responsibility is shared with other people who
design, develop, deploy, or knowingly use the artifact
as part of a sociotechnical system.”
All of the creators and users of a robot are morally
responsible for it and its foreseeable effects.
Five Rules
2. “The shared responsibility of computing artifacts is
not a zero-sum game. The responsibility of an
individual is not reduced simply because more people
become involved in designing, developing, deploying
or using the artifact. Instead, a person’s responsibility
includes being answerable for the behaviors of the
artifact and for the artifact’s effects after deployment, to
the degree to which these effects are reasonably
foreseeable by that person.”
One’s moral responsibility is not diminished by the fact
that others were involved in creating or using the robot.
Five Rules
3. “People who knowingly use a particular computing
artifact are morally responsible for that use.”
This is intended to include a “no willful ignorance”
clause.
Five Rules
4. “People who knowingly design, develop, deploy, or
use a computing artifact can do so responsibly only
when they make a reasonable effort to take into account
the sociotechnical systems in which the artifact is
embedded.”
Without such an effort, they would be acting irresponsibly.
Five Rules
5. “People who design, develop, deploy, promote, or
evaluate a computing artifact should not explicitly or
implicitly deceive users about the artifact or its
foreseeable effects, or about the sociotechnical systems
in which the artifact is embedded.”
Among other things, this ameliorates the effects of our
tendency to anthropomorphize.
Operational Morality
 Technology is developing along two
dimensions: autonomy and sensitivity (to
ethical considerations).



Examples along these two dimensions: hammer; fuel gauge, fire alarm; thermostat.
Operational Morality
 A system is operationally moral if it follows prescribed actions programmed in by designers for all types of situations it will encounter.
 Operational morality requires that designers
make ethical decisions to cover all situations.
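A minimal sketch (hypothetical situation labels and actions, not from Wallach) of what operational morality amounts to in code: every ethical decision is made in advance by the designers, and the system merely looks up the preset response.

```python
# Sketch only: an "operationally moral" system retrieves the response its
# designers prescribed for each anticipated type of situation.
# The situation labels and action names below are hypothetical.

PRESCRIBED_RESPONSES = {
    "child_near_stairs": "block_path_and_alert_parent",
    "smoke_detected": "sound_alarm_and_notify_emergency_contact",
    "stranger_at_door": "lock_door_and_notify_parent",
}

def respond(situation: str) -> str:
    # The designers, not the system, made every ethical decision in advance;
    # the system only looks up the preset action, with a safe fallback.
    return PRESCRIBED_RESPONSES.get(situation, "halt_and_request_human_guidance")

print(respond("smoke_detected"))       # sound_alarm_and_notify_emergency_contact
print(respond("unanticipated_event"))  # halt_and_request_human_guidance
```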
Robonanny
 Children can put
themselves (and others)
in danger.
 They can abuse
themselves (or others).
 They can ignore the
robonanny’s commands
to stop.
 Should robonanny
intervene?
Robonanny
 As Wallach notes, parents will be comforted by being able to preset responses; perhaps the robonanny will have levels of reprimand.
 Manufacturers can
then protect
themselves from
liability.
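A minimal sketch (hypothetical names, not a real product API) of preset reprimand levels: the parent chooses the escalation cap in advance, so the ethically significant choices remain with the parent and the designers rather than the robot.

```python
# Sketch only: the robonanny steps through a parent-preset escalation policy.
# Level names and the parameterization are hypothetical.

REPRIMAND_LEVELS = [
    "verbal_reminder",
    "firm_warning",
    "pause_activity",
    "notify_parent",
]

def reprimand(ignored_commands: int, max_level: int) -> str:
    # max_level is preset by the parent; the robot never escalates beyond it.
    level = min(ignored_commands, max_level, len(REPRIMAND_LEVELS) - 1)
    return REPRIMAND_LEVELS[level]

# A cautious parent caps escalation at "pause_activity" (index 2).
print(reprimand(ignored_commands=5, max_level=2))  # pause_activity
```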
Functional Morality
 A system is functionally moral if it evaluates
situations according to an array of
considerations, then uses rules, principles, or
procedures to make an explicit judgment.
 Top-Down vs. Bottom-Up Decision-Making
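A minimal sketch of functional morality, using hypothetical considerations, weights, and options: the system itself scores each option on an array of ethical considerations and then applies an explicit rule to reach a judgment.

```python
# Sketch only: score options on several considerations, then apply an
# explicit decision rule. Considerations, weights, and options are hypothetical.

def evaluate(option: dict) -> dict:
    return {
        "harm_to_humans": 100.0 * option.get("expected_injuries", 0),
        "property_damage": option.get("damage_cost", 0) / 1000.0,
        "rule_violations": 5.0 * len(option.get("violated_rules", [])),
    }

def judge(option_a: dict, option_b: dict) -> str:
    # Explicit rule: choose the option with the lower total weighted cost.
    cost_a = sum(evaluate(option_a).values())
    cost_b = sum(evaluate(option_b).values())
    return "option_a" if cost_a <= cost_b else "option_b"

swerve = {"expected_injuries": 1, "damage_cost": 40000, "violated_rules": ["leave_roadway"]}
stay   = {"expected_injuries": 4, "damage_cost": 5000,  "violated_rules": []}
print(judge(swerve, stay))  # option_a (swerve): cost 145.0 vs. 405.0
```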
Laws of Robotics
1. A robot may not injure a human being or,
through inaction, allow a human being to
come to harm.
2. A robot must obey the orders given to it by
human beings, except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.
Laws of Robotics
1. A robot may not injure a human being or,
through inaction, allow a human being to
come to harm.
But there are (plenty of) cases in which it is
logically impossible to follow this law.
Forced Choice Scenarios
Laws of Robotics
In some cases, you will either harm a human or
allow a human to be harmed.
 Stopping a violent crime in progress often
requires harming the attacker.
 If you don’t harm the attacker, you will be
allowing the victim to be harmed.
So, the First Law of Robotics fails in light of this
simple consideration.
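A minimal sketch of the failure (the scenario encoding is hypothetical): when every available action either injures a human or, through inaction, allows a human to come to harm, the First Law permits nothing and therefore gives no guidance.

```python
# Sketch only: in a forced-choice scenario, every option violates the First Law.

def violates_first_law(action: dict) -> bool:
    return action["injures_human"] or action["allows_harm_through_inaction"]

forced_choice = [
    {"name": "restrain_attacker", "injures_human": True,  "allows_harm_through_inaction": False},
    {"name": "do_nothing",        "injures_human": False, "allows_harm_through_inaction": True},
]

permitted = [a["name"] for a in forced_choice if not violates_first_law(a)]
print(permitted)  # [] -- the law rules out every option, so it cannot be followed
```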
Laws of Robotics
In some cases, you will either harm a human or
allow a human to be harmed.
 Google’s driverless car: Hit a school bus or
swerve off a bridge?
 Even sitting there idling: Get hit by a large
oncoming truck or drive into pedestrians who
are in the way of your only escape?
Laws of Robotics
In some cases, you will either harm a human or
allow a human to be harmed.

The Famous (Infamous) Trolley Problem
Laws of Robotics
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Conflicting orders given by different humans are logically impossible to obey jointly.
Laws of Robotics
3. A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.
Even this law is made logically impossible by any
scenario in which the robot’s existence is at stake
but there is no action which would protect it.
As Wallach sees it, Asimov showed that “a simple
rule-based system of ethics will be ineffective”.
We need a combination of bottom-up and top-down approaches.
Top-Down and Bottom-Up Approaches
 Top-down approaches are broad, but it’s hard to apply them to the vast array of specific challenges.
 Bottom-up approaches can integrate input from discrete subsystems, but it’s hard to define the ethical goal for such a system and hard to integrate the subsystems.
Top-Down and Bottom-Up Approaches

We need both: The dynamic and flexible
morality of the bottom-up approach subjected
to the evaluation of the top-down approach.


 We need to find a computational method for doing this.
 We need to set boundaries/standards for evaluating an AMA’s moral reasoning.
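One possible shape for such a computational method, sketched with hypothetical subsystems, constraints, and actions (this is not Wallach’s specification): bottom-up components score candidate actions, and a top-down layer of explicit constraints filters out any candidate that violates them.

```python
# Sketch only: combine a bottom-up scorer with top-down constraints.

def learned_preference(action: str) -> float:
    # Stand-in for a bottom-up component (e.g., a trained model's score).
    return {"swerve": 0.9, "brake_hard": 0.6, "accelerate": 0.2}.get(action, 0.0)

TOP_DOWN_CONSTRAINTS = [
    lambda action: action != "accelerate",  # e.g., never accelerate toward pedestrians
]

def choose(actions: list) -> str:
    # Top-down rules filter the candidates; bottom-up scores pick among what remains.
    allowed = [a for a in actions if all(c(a) for c in TOP_DOWN_CONSTRAINTS)]
    return max(allowed, key=learned_preference) if allowed else "stop_and_wait"

print(choose(["swerve", "brake_hard", "accelerate"]))  # swerve
```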
The Future of AMAs and Wallach’s Proposal

Many challenges remain.
 Will AMAs need to emulate all human faculties in order to function as adequate moral agents?
 How do we determine whether an AMA deserves rights or should be held responsible for its actions?
 Should we control the ability of robots to reproduce?
 How will we protect ourselves against threats from AMAs that are more intelligent than we are?
The Future of AMAs and Wallach’s Proposal

Some form of monitoring is required in order to address a wide array of issues:
 Health and Safety
 Environmental Risks
 Funding for R&D
 Intellectual Property Rights
 Public Perception of Risks & Benefits
 Government Oversight
 Competition with Industries Internationally
The Future of AMAs and Wallach’s Proposal
 Governance Coordination Committee


Role: to monitor development of AMAs and flag
issues or gaps in the existing policies, to coordinate
the activities of stakeholders, and to “modulate the
pace of development”.
The GCC would be required to avoid regulation
where possible, favoring “soft governance” and
industry oversight.
End