
Lecture
CS-E4800 Artificial Intelligence
Department of Computer Science
Aalto University
History of A.I.
Impact of A.I.
Summary of course
Completing the course
March 24, 2017
Turing test (1950)
“If a machine could carry on a conversation (over
a teleprinter) that was indistinguishable from a
conversation with a human being, then the
machine could be called intelligent.”
Suggested by Turing as a thought experiment, not
an actual test
Some programs have now passed “Turing tests”
Broad indistinguishability from humans is still far away
First Golden Era 1956–1974
Term “A.I.” coined by McCarthy, Minsky,
Shannon, Rochester in 1956
Lots of research action started
(with great hype and promises)
“In from three to eight years we will have a
machine with the general intelligence of an
average human being.” (Minsky, 1970)
“Computers would never be able to play chess.”
(Dreyfus, 1972)
First A.I. winter 1974–1980
(funding for A.I. research dropped dramatically)
Second Golden Era 1980–1987
Expert systems (example: medical diagnosis)
Case-based reasoning (find examples from past
matching the current situation, mimic the past
human solution)
Neural networks trained by backpropagation
(Werbos 1974 popularized in 1986)
Second A.I. winter 1987–1993 (funding again
dropped)
New Coming of Symbolic Model-Based
A.I.
More detailed models, scalable search and
reasoning
Autonomous vehicles, aircraft and robots
Detailed physical models
Fast model-based decision-making
A.I. embedded in all forms of software
Similarly to ML, enabled by scalability due to
faster CPUs
massive amounts of memory
New Coming of Machine Learning
A.I. + statistics ⇒ Machine Learning (Big Data)
Bringing ad hoc methods into a unified framework
Iteration 3 of neural networks (Deep Learning,
2006)
Enabled by scalability due to
faster CPUs, more CPUs
massive amounts of memory
Human Brain Project (2013–2022, 1190 M€)
Yann LeCun (2015): “. . . a big chunk of the
Human Brain Project in Europe is based on the
idea that we should build chips that reproduce the
functioning of neurons as closely as possible, and
then use them to build a gigantic computer, and
somehow when we turn it on with some learning
rule, AI will emerge. I think it’s nuts . . . ”
Update in Nature (March 2015): “Controversial
European initiative will disband three-person
executive committee after anger from
neuroscientists.”
Latest Hype Cycle 2015-
Singularity
Good, 1965; Vinge, 1983 (term); Solomonoff, 1985; Kurzweil, 2005
Lots of publicity for A.I.
self-driving cars
voice recognition, machine translation (deep learning)
Most of this does not really work fully (yet)!
No actual breakthroughs (arguably)
Best applications: huge amounts of engineering
effort
A.I. start-up crash coming in 2018?
Economic Impact of A.I. and Intelligent
Robotics
Cheap A.I. and intelligent robots = an influx of very
competent and cheap labor into the labor market
Earlier revolutions of automation have created
new jobs
Will the ongoing revolution do the same? How?
Robots, A.I. will become more intelligent than
humans, sometime in the future.
More advanced intelligence will accelerate
development of more advanced intelligence.
What effect will this have?
A.I./Robots will “take control”?
Humans still able to use A.I./robots as “slaves”?
Lots of (philosophical) articles published on this.
Also a series of conferences.
Impact of Automation in the Past
Industrial Revolution 1760-1840 (steam engine, water
power)
textile industry (weaving, sewing, ...)
metallurgy, mining
machine tools
transportation (railway, canals, roads)
Impact of Automation in the Past
Automation 1890- (rudimentary mechanical IT)
punched card systems (1890 US census)
mechanical calculators 1930- (add, subtract,
multiply, divide)
Impact of Automation in the Past
Automation 1880-1950 (electricity, combustion
engines)
transportation (vehicles with combustion engines)
communication (telegraph 1835-, radio 1897-)
farming (tractors, etc.)
all fields of manufacturing
mining
Impact of Automation in the Past
Automation 1960- (rudimentary electronic IT)
accounting
banking
engineering calculations
Impact of Automation in the Past
Automation 1980- (advanced IT, communication)
office automation 1980-
Internet banking 1995-
Internet travel agencies 1995-
other Internet services
Impact of Automation Now and Future
Automation 2000-2040 (complex IT, “A.I.”, A.I.)
automation of the physical world accelerates
automated manufacturing, warehouses
autonomous vehicles (buses, taxis, trucks)
software production (still need programmers?)
automation of most office work
Unprecedented flexibility and adaptability of IT
Autonomous Vehicles: Liability in
Accidents
Assume traffic deaths are reduced by 50 per cent by
full automation of road traffic (about 40000 deaths annually in the US)
20000 “thank you” letters, vs.
20000 lawsuits?
All deaths caused by technology, not human error
Legal Implications of A.I.
Until now: Only humans able to do things.
Future: Robots do almost everything humans can.
Separation of agency and responsibility
System’s designer/owner relieved of responsibility,
because system acquired behavior (through
learning) that could not have been anticipated?
Autonomous Vehicles: Ethics of Design
Autonomy in vehicles unlikely to eliminate all
accidents
Design decisions have impact on who gets hurt
(passengers vs. others)
How does the manufacturer decide how the car
behaves when an accident is imminent?
Autonomous Systems: Crime
High-risk crimes (physical risk, prosecution)
attractive for automation
drug-trafficking (UAV, submarines)
murder
spying
Automation will reduce costs and risks
As a result, crime will become more attractive
What can be done about that?
Military A.I.
A.I. technologies, like IT in general, are dual use:
anything useful for civilian purposes, also
applicable to warfare
Autonomous weapons reduce/eliminate need of
military personnel on the field
Loss of life major disincentive for use of military
force (especially in industrialized democracies)
Will automation increase military aggression?
A.I. Crime: Anonymity
Hijack (a swarm of) Internet-connected robots,
cars, UAV; commit crime
Existing Internet security issues, but with
implications on the physical world
Assassinations by autonomous or tele-operated
robots, drones, quadcopters
All autonomous systems must be registered and
traceable; infrastructure must be developed to detect
and prevent the use of unregistered systems?
Military A.I.
Warfare will change.
Soldiers are currently needed for carrying and operating weapons
One day both sides of a conflict use autonomous
weapons only
Enables more aggressive warfare
Advantage in technology means direct and
dramatic advantage in warfare
Course Summary
Course Summary
Course focus: model-based (symbolic) A.I.
Model constructed manually (by humans)
Model constructed automatically (“learning”)
Learning, data analytics covered in other courses
How to implement the sense-plan-act loop?
Answer:
Interpret sense data (observations) w.r.t. the model
Choose actions w.r.t. the model (system behavior)
Execute them
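As a rough illustration, a minimal sense-plan-act loop might look like the sketch below; the sensor, model, and actuator objects and their methods are hypothetical placeholders, not part of any course software.

    # Minimal sense-plan-act loop (hypothetical interfaces, illustration only).
    def sense_plan_act(sensor, model, actuator):
        while True:
            observation = sensor.read()             # sense
            state = model.interpret(observation)    # interpret observations w.r.t. the model
            action = model.choose_action(state)     # choose an action w.r.t. the model
            if action is None:                      # no applicable action: stop
                break
            actuator.execute(action)                # act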
The Transition System Model
A.I. research still needed, because
Problems generally NP-complete or (much) harder
Algorithms would need to be far more scalable
Some problems hard to model
Complexity of the physical world
Complexity of interacting with humans
multi-agent scenarios (conflicting goals)
The Transition System Model
[Figure: a transition-system graph over the 4-bit states 0000–1111, with × and ◦ marks on the states and transitions between them]
Algorithms for Searching Graphs
Representation and Reasoning
Logical Consequence
breadth-first, depth-first
bidirectional search
iterative deepening
informed search: A∗, IDA∗, WA∗, greedy best-first
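As one concrete instance of the uninformed methods listed above, a breadth-first search over an explicitly given graph can be sketched as follows; the example graph and goal test are invented for illustration.

    from collections import deque

    def breadth_first_search(successors, start, is_goal):
        # successors: dict mapping each state to a list of neighboring states
        frontier = deque([start])
        parent = {start: None}
        while frontier:
            state = frontier.popleft()
            if is_goal(state):
                path = []                     # reconstruct the path back to the start
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return list(reversed(path))
            for nxt in successors.get(state, []):
                if nxt not in parent:         # not visited yet
                    parent[nxt] = state
                    frontier.append(nxt)
        return None

    graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
    print(breadth_first_search(graph, 'A', lambda s: s == 'D'))   # ['A', 'B', 'D']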
John can sing or Mary can sing.
If Mary can sing then Bob can dance.
John can sing or Bob can dance.
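The third sentence is a logical consequence of the first two: every truth assignment that satisfies both premises also satisfies it. A small sketch that checks this by enumerating all assignments; the proposition names J, M, B are my own shorthand.

    from itertools import product

    # J = "John can sing", M = "Mary can sing", B = "Bob can dance"
    premises   = lambda J, M, B: (J or M) and ((not M) or B)   # J ∨ M and M → B
    conclusion = lambda J, M, B: J or B                        # J ∨ B

    # Logical consequence: no assignment satisfies the premises but not the conclusion.
    print(all(conclusion(J, M, B)
              for J, M, B in product([False, True], repeat=3)
              if premises(J, M, B)))   # True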
Representation and Reasoning
Representation and Reasoning
Logical Consequence
Satisfiability
[Figure: a Minesweeper-style grid with unknown cells m0,1, m0,2, m0,3, m0,4 and m1,1, and two clue cells labeled 1 and 2]
Constraints per clue cell (“at least” and “at most” that many of its neighbors):
Clue cell 1:
m1,1 ∨ m0,1 ∨ m0,2 ∨ m0,3,
¬(m1,1 ∧ m0,1), ¬(m1,1 ∧ m0,2), ¬(m1,1 ∧ m0,3),
¬(m0,1 ∧ m0,2), ¬(m0,1 ∧ m0,3), ¬(m0,2 ∧ m0,3)
Clue cell 2:
(m0,2 ∧ m0,3) ∨ (m0,2 ∧ m0,4) ∨ (m0,3 ∧ m0,4),
¬(m0,2 ∧ m0,3 ∧ m0,4)
Logical consequences: ¬m1,1, ¬m0,1, m0,4, m0,2 ∨ m0,3
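The “at least” and “at most” formulas above are cardinality constraints over the unknown cells. Below is a sketch of how such constraints can be generated mechanically; the helper functions are illustrations, not the course’s actual tooling. Applied to the four cells around clue cell 1, they reproduce the seven formulas listed above.

    from itertools import combinations

    def at_least(k, variables):
        # at least k of the variables are true: some k-subset is all true
        return " ∨ ".join("(" + " ∧ ".join(c) + ")" for c in combinations(variables, k))

    def at_most(k, variables):
        # at most k are true: no (k+1)-subset is all true
        return ", ".join("¬(" + " ∧ ".join(c) + ")" for c in combinations(variables, k + 1))

    cells = ["m1,1", "m0,1", "m0,2", "m0,3"]
    print(at_least(1, cells))   # the single disjunction for clue cell 1
    print(at_most(1, cells))    # the six pairwise ¬(· ∧ ·) formulas for clue cell 1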
You have to color the map with four colors.
[Figure: a map with four numbered regions 1–4]
Every region i has exactly one color:
Ri ∨ Gi ∨ Bi ∨ Wi, ¬(Ri ∧ Gi),
¬(Gi ∧ Bi), ¬(Bi ∧ Ri),
¬(Wi ∧ Ri), ¬(Wi ∧ Gi), ¬(Wi ∧ Bi)
Neighboring regions i and j have
different colors:
¬(Ri ∧ Rj ) ¬(Gi ∧ Gj ) ¬(Bi ∧ Bj ) ¬(Wi ∧ Wj )
Satisfiable =⇒ A coloring with 4 colors exists
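A brute-force sketch of the same idea: enumerate all assignments of colors to the four regions and check the neighborhood constraints. The adjacency list below is invented, since it is not recoverable from the figure.

    from itertools import product

    regions = [1, 2, 3, 4]
    colors = ["R", "G", "B", "W"]
    neighbors = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]   # hypothetical adjacencies

    def find_coloring():
        # Every enumerated assignment gives each region exactly one color, so only
        # the "neighboring regions have different colors" constraints need checking.
        for assignment in product(colors, repeat=len(regions)):
            color = dict(zip(regions, assignment))
            if all(color[i] != color[j] for i, j in neighbors):
                return color
        return None

    print(find_coloring())   # a 4-coloring exists, so the formula is satisfiable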
Formulas as Sets (of Bit-Vectors, States)
Formulas as Relations
Example
Example
A ∧ B represents the set {1100, 1101, 1110, 1111} and
A ∨ B represents the set
{0100,0101,0110,0111,1000,1001,1010,1011,1100,1101,1110,1111}.
(A0 ↔ A1 ) ∧ (B0 ↔ B1 ) ∧ (C0 ↔ C1 ) ∧ (D0 ↔ D1 )
represents the identity relation of 4-bit bit-vectors.
Example
If φ1 and φ2 represent sets S1 and S2, then
1. φ1 ∧ φ2 represents the set S1 ∩ S2,
2. φ1 ∨ φ2 represents the set S1 ∪ S2, and
3. ¬φ1 represents the complement of S1.
inc01 = (¬C0 ∧ C1 ∧ (B0 ↔ B1 ) ∧ (A0 ↔ A1 ))
∨(¬B0 ∧ C0 ∧ B1 ∧ ¬C1 ∧ (A0 ↔ A1 ))
∨(¬A0 ∧ B0 ∧ C0 ∧ A1 ∧ ¬B1 ∧ ¬C1 )
∨(A0 ∧ B0 ∧ C0 ∧ ¬A1 ∧ ¬B1 ∧ ¬C1 )
represents the successor relation of 3-bit integers
{(000, 001), (001, 010), (010, 011), (011, 100),
(100, 101), (101, 110), (110, 111), (111, 000)}
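A sketch that checks this claim by enumerating the satisfying assignments of inc01 (reading A as the most significant bit) and comparing them with the successor relation modulo 8.

    from itertools import product

    def inc01(A0, B0, C0, A1, B1, C1):
        # the formula inc01 above, with A the most significant bit
        return ((not C0 and C1 and B0 == B1 and A0 == A1)
                or (not B0 and C0 and B1 and not C1 and A0 == A1)
                or (not A0 and B0 and C0 and A1 and not B1 and not C1)
                or (A0 and B0 and C0 and not A1 and not B1 and not C1))

    def value(a, b, c):
        return 4 * a + 2 * b + c

    pairs = sorted((value(a0, b0, c0), value(a1, b1, c1))
                   for a0, b0, c0, a1, b1, c1 in product([0, 1], repeat=6)
                   if inc01(a0, b0, c0, a1, b1, c1))
    print(pairs == [(x, (x + 1) % 8) for x in range(8)])   # True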
Graph Search by Satisfiability Testing
High-Level Problems Solved
Connectives as set-operations
Example
Can bit-vector 100 be reached from 000 by four steps,
by increment (inc) and shift (ml2)?
¬A0 ∧ ¬B0 ∧ ¬C0 ∧
(inc01 ∨ ml201 ) ∧ (inc12 ∨ ml212 ) ∧ (inc23 ∨ ml223 ) ∧ · · · ∧
A4 ∧ ¬B4 ∧ ¬C4
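The formula unrolls the transition relation for four steps and lets a SAT solver choose the intermediate values. The same bounded-reachability question can be answered by brute force, as in the sketch below, where inc is increment modulo 8 and ml2 is read as a one-bit left shift (multiplication by 2) on 3-bit vectors; that reading of ml2 is an assumption.

    from itertools import product

    def inc(x):    # successor modulo 8 (3-bit increment)
        return (x + 1) % 8

    def ml2(x):    # one-bit left shift on a 3-bit vector, i.e. 2x modulo 8
        return (2 * x) % 8

    start, goal, steps = 0b000, 0b100, 4

    # Try every choice of action at each of the four steps; a SAT solver performs
    # this search symbolically on the unrolled formula.
    for actions in product([inc, ml2], repeat=steps):
        x, trace = start, [start]
        for a in actions:
            x = a(x)
            trace.append(x)
        if x == goal:
            print([format(v, "03b") for v in trace])   # ['000', '001', '010', '011', '100']
            break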
These methods solve e.g. the following problems
Planning: What actions would reach a given goal?
Diagnosis: What (unobserved) events/facts would
best explain the observations?
Verification: Is it possible that things go wrong?
All these methods have extensions to probabilistic
versions of these problems (quantifiable uncertainty
about outcomes and observations).
Uncertainty
Sequential Decisions: Full Observability
Decision Theory: Maximize expected utility
Search trees with chance nodes and decision nodes
Compute bottom-up from leaves to root:
Chance node: Expectation of children
Decision node: Max of children
Value Iteration
Policy Iteration
policy evaluation: Markov Chains
Reinforcement learning (model is learned)
[Figure: a decision tree with a decision node “Buy?” and a chance node “Oil?”, each with yes/no branches]
Markov Decision Processes (model exists)
Model not available: learn it
Model changes over time: adaptation
Q-Learning (+ many other algorithms)
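A compact value iteration sketch for a tiny, invented MDP in the spirit of the buy/oil example; the states, probabilities, and rewards are made up for illustration.

    # P[s][a] = list of (probability, next_state, reward) triples.
    P = {
        "start": {"wait": [(1.0, "start", 0.0)],
                  "buy":  [(0.6, "oil", 10.0), (0.4, "dry", -3.0)]},
        "oil":   {"wait": [(1.0, "oil", 0.0)]},
        "dry":   {"wait": [(1.0, "dry", 0.0)]},
    }
    gamma = 0.9   # discount factor

    V = {s: 0.0 for s in P}
    for _ in range(100):
        # Bellman update: value of the best action, averaging over its outcomes.
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
             for s in P}

    # Greedy policy with respect to the converged values.
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
              for s in P}
    print(V, policy)   # buying is optimal in "start" with these numbers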
Sequential Decisions: Partial Observability
Current state unknown
Belief state 1: set of possible current states
Belief state 2: probability distribution over states
Belief update: how the belief state changes under
action and observation (a sketch follows after the multi-agent summary below)
Partially Observable MDP (POMDP)
Finite-State Policies for POMDPs
Multi-agent systems
Different agents have different goals
Cooperation and coordination difficult
Predicting system dynamics difficult
formal framework: Game Theory
Simpler cases:
Agents’ objectives coincide, unlimited communication
equivalent to centralized control (single agent)
zero-sum scenarios
no coordination at all
agents try to maximize their own utility
similar to (stochastic, nondeterministic) single-agent case
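A sketch of the probabilistic belief update (belief state 2): push the old belief through the transition model of the executed action, weight by the observation probabilities, and normalize. The two-state example is invented for illustration.

    def belief_update(belief, T, O, action, obs):
        # belief: {state: probability}
        # T[action][s][s2]: probability of reaching s2 from s under action
        # O[action][s2][obs]: probability of observing obs in s2 after action
        new = {}
        for s2 in belief:
            predicted = sum(belief[s] * T[action][s].get(s2, 0.0) for s in belief)
            new[s2] = O[action][s2].get(obs, 0.0) * predicted
        norm = sum(new.values())
        return {s: p / norm for s, p in new.items()}

    # Invented example: a door is open or closed; "listen" does not change the
    # state, and hearing noise is more likely when the door is open.
    T = {"listen": {"open": {"open": 1.0}, "closed": {"closed": 1.0}}}
    O = {"listen": {"open": {"noise": 0.8, "quiet": 0.2},
                    "closed": {"noise": 0.3, "quiet": 0.7}}}
    print(belief_update({"open": 0.5, "closed": 0.5}, T, O, "listen", "noise"))
    # {'open': 0.7272..., 'closed': 0.2727...}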
Application Summary
Industrial A.I.
automate cognitive tasks in complex systems
top-level supervisory tasks automated, assisted
deeper automation than with “conventional” methods
key feature: complex models, multiple input sources
Autonomous vehicles (land, water, air, space)
Analogous to the industrial applications of A.I.
Intelligent software
Extension of conventional software technology
More flexible and intelligent
Long-term goal: integration of everything, Internet of Things
Exam April 6, 2017
Must register before Thursday March 30!
next exams: likely in September, December
What to Read for the Exam?
Course presentation slides
Russell and Norvig Artificial Intelligence: A
Modern Approach, topics covered in the lectures
1 Introduction, 2 Intelligent Agents, 3 Search
5 Adversarial Search: 5.1-5.5
13 Quantifying Uncertainty: 13.5 Bayes rule only
16 Making Simple Decisions: 16.1-16.3, 16.6
17 Making Complex Decisions: 17.1-17.3, 17.5 & 17.6
22 Reinforcement Learning: 22.2, 22.3, 22.6
Notes for lectures 2 to 5, Artificial Intelligence:
Propositional Logic (“Additional Reading” in
MyCourses)
Overview articles on the Materials page on
MyCourses (non-technical)
Missing Assignments?
The obligatory assignments can be submitted
anytime (preferably one week before the exam).
Help available for all assignments: Jussi Rintanen,
room TC311
Course Feedback / Questionnaire
Fill in the course questionnaire (link emailed soon).
Which topics were most interesting?
More in depth with some topics?
Missing topics?
Be informative and specific, so that we can
actually figure out what to do about the feedback.
Passing the Course
To pass the course:
Minimum 40 (?) points from the exam
All assignments completed (before the exam)
Grade determined by:
  component       max
  exam            100
  exercises        10
  assignments       8
  questionnaire     2
  total           120
Course must be completed by the end of the year.