Finite automata and their decision problems

Magical YES-NO machines
Non-deterministic Finite State Automata
(NFA)
An idea is born …
Dana Scott
Michael Rabin
In 1957, while getting his doctor's degree in
mathematics at Princeton, he (Scott) took a
summer job at IBM's newly established T. J.
Watson Research Center. There, he and a fellow
student, Michael Rabin, now a professor at
Harvard and the Hebrew University in Jerusalem,
developed a new approach to automata theory,
the mathematical theory of machines. The results
they described in their joint paper subsequently
won them computing's highest honor, the Turing
Award, in 1976.
(See
http://www.cs.berkeley.edu/~agni/education.html)
M. Rabin and D. Scott, "Finite automata and their decision problems," IBM J. Res. Dev. 3 (1959), 114-125.
(You can download this classic paper from IBM's website:
http://domino.research.ibm.com/tchjr/journalindex.nsf/0/cdf6b2949432156385256bfa00683d63?OpenDocument)
Turing Award Citation
For their joint paper "Finite Automata and Their Decision Problem," which introduced the
idea of nondeterministic machines, which has proved to be an enormously valuable
concept. Their (Scott & Rabin) classic paper has been a continuous source of inspiration
for subsequent work in this field.
(See http://www.acm.org/awards/turing_citations/rabin.html)
What is the award-winning idea about?
Bluntly put, they added a “magical element” to the usual clock-work of
automata and mathematically analyzed its effect!
The “magical element”:
An “ordinary” clock-work (deterministic) automaton in one of its internal states can react
to an input symbol in only one possible way (given by its transition table/diagram). See
example (1).
But a magical (non-deterministic) automaton (abbreviated as NFA) in one of its
internal states can react to an input symbol in more than one way, “as the situation
demands”, depending on the kind of symbols that might follow in the input string (even
before sighting them!). Given an input, one can imagine the machine “magically
choosing” a sequence of states that leads to a final state (if one exists). See example (2).
[Figure (1): a deterministic automaton with states 1 and 2 and transitions on 'a' and 'b', with exactly one move per symbol from each state.]
[Figure (2): a non-deterministic automaton with states 1 and 2, where state 1 on the symbol 'a' may either stay in 1 or move to the final state 2.]
The state transitions of machine (2)
(i) for input “a” are 1 → 2 (accepts), and
(ii) for input “aa”, 1 → 1 → 2 (accepts).
Note the different ways in which the machine
responds to the first symbol ‘a’ in each case.
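To make the “magical choice” concrete, here is a minimal Python sketch (not from the slides; the machine below is a hypothetical one in the spirit of example (2)): instead of guessing, we simply track the set of all states the machine could be in after each symbol, and accept if any of them is final.

```python
# Minimal sketch of NFA acceptance: no magic, just sets of possible states.

def nfa_accepts(delta, start, finals, w):
    """delta maps (state, symbol) -> set of next states (missing key = no move)."""
    current = {start}
    for symbol in w:
        current = {r for q in current for r in delta.get((q, symbol), set())}
    return bool(current & finals)

# Hypothetical NFA in the spirit of example (2): state 1 on 'a' may stay in 1
# or jump to the final state 2.
delta = {(1, 'a'): {1, 2}}
print(nfa_accepts(delta, 1, {2}, "a"))    # True  (1 -> 2)
print(nfa_accepts(delta, 1, {2}, "aa"))   # True  (1 -> 1 -> 2)
print(nfa_accepts(delta, 1, {2}, "b"))    # False (no move on 'b')
```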
What is the award-winning idea about?
According to Scott and Rabin*,
“A nondeterministic automaton is not a probabilistic machine but rather a machine with
many choices in its moves. At each stage of its motion across a tape it will be at liberty
to choose one of several new internal states. Of course, some sequence of choices
will lead either to impossible situations from which no moves are possible
or to final states not in the designated class F. We disregard all such failures, however,
and agree to let the machine accept a tape if there is at least one winning combination
of choices of states leading to a designated final state.”
* M. Rabin and D. Scott. Finite automata and their decision problems, IBM J. Res. Dev. 3 (1959), 114-125.
How does the “magic” help?
According to Scott and Rabin,
“The main advantage of these machines is the small number of internal
states that they require in many cases and the ease in which specific
machines can be described.”
To be able to appreciate this, consider the following:
[Figure: a 3-state NFA (states 1, 2, 3) over {a, b}.]
NFA recognizing L = { w ∈ {a,b}* : w contains an ‘a’ in its last-but-one position }
Try to develop a DFA recognizing the same language.*
* Use your intuition, and not a standard procedure for converting an NFA into an equivalent DFA.
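Once you have tried, you can check your answer mechanically. The Python sketch below (the dict encoding and state names are assumptions, not taken from the figure) applies the standard subset construction to the 3-state NFA above; it reaches exactly four subsets, so a 4-state DFA suffices.

```python
# Subset construction applied to an assumed encoding of the 3-state NFA for
# L = { w : w has an 'a' in its last-but-one position }.
nfa = {(1, 'a'): {1, 2}, (1, 'b'): {1}, (2, 'a'): {3}, (2, 'b'): {3}}
start, finals, alphabet = 1, {3}, ('a', 'b')

def subset_construction(nfa, start, finals, alphabet):
    dfa, start_set = {}, frozenset({start})
    seen, todo = {start_set}, [start_set]
    while todo:
        S = todo.pop()
        for sym in alphabet:
            T = frozenset(r for q in S for r in nfa.get((q, sym), set()))
            dfa[(S, sym)] = T                    # the DFA's move from S on sym
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return dfa, seen, {S for S in seen if S & finals}

dfa, states, dfa_finals = subset_construction(nfa, start, finals, alphabet)
print(len(states), "reachable DFA states")       # 4
for (S, sym), T in dfa.items():
    print(sorted(S), f"--{sym}-->", sorted(T))
```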
How does the “magic” help?
Hypothetical programs that exhibit non-determinism (a facility to make magical
guesses) apparently solve search problems much faster than their
deterministic counterparts.
[Cartoon: a cross-roads with signposts reading “Go away”, “Go away”, “Go away”, and “Welcome”.]
It is not hard to see that making the right
guesses (at cross-roads) reduces the
search and helps reach an unknown
destination (not shown on the map)
quicker.
NFAs: Demonstrating ease of design
Consider the language L = { w ∈ {0,1}* | w has a 0 in the 5th position from the end }
[Figure: an NFA for L.]
NFAs: Demonstrating ease of design
… and a smaller number of states
[Figure: the equivalent DFA for the NFA on the previous slide!]
(Do you think you could have designed this directly, just by intuitive thinking?)
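The blow-up can also be computed rather than drawn. The sketch below uses an assumed 6-state NFA (states 0..5) for this language and counts the subsets reached by the subset construction; the DFA has to remember the last five symbols, so 2^5 = 32 states appear.

```python
# Counting DFA states for "a 0 in the 5th position from the end".
# Assumed NFA: state 0 loops on 0/1 and, on a 0, may also guess
# "this 0 is 5th from the end"; four more symbols then lead to final state 5.
nfa = {(0, '0'): {0, 1}, (0, '1'): {0},
       (1, '0'): {2}, (1, '1'): {2},
       (2, '0'): {3}, (2, '1'): {3},
       (3, '0'): {4}, (3, '1'): {4},
       (4, '0'): {5}, (4, '1'): {5}}      # state 5 is final
alphabet = ('0', '1')

reachable = {frozenset({0})}
todo = [frozenset({0})]
while todo:
    S = todo.pop()
    for sym in alphabet:
        T = frozenset(r for q in S for r in nfa.get((q, sym), set()))
        if T not in reachable:
            reachable.add(T)
            todo.append(T)

print("NFA states:", 6)
print("reachable DFA states:", len(reachable))   # 32 = 2^5
```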
Definition of NFA
Non-deterministic Finite State Automaton (NFA)
(earlier, we had referred to an NFA as a magical “YES-NO machine”.)
A 5-tuple (Q, Σ, δ, s, F), where the input is a string w over Σ:
• Q: a finite set of states (Q includes s and the states in F)
• Σ: the input alphabet
• δ: Q × Σε → P(Q): the transition function, mapping a state and a symbol (or ε) to a SUBSET of Q
• s ∈ Q: the start state
• F ⊆ Q: the set of final states
[Figure: an example of δ as a table whose rows are states, whose columns are input symbols, and whose entries are subsets of Q such as {1}, {1,2}, {1,2,3}.]
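As a concrete reading of the definition, here is the 5-tuple written out as plain Python data (the particular machine is a made-up example, not the one in the slide's table).

```python
# The 5-tuple as plain data. delta is stored as a partial map:
# a missing entry means delta(q, x) is the empty set.
Q     = {1, 2, 3}                  # finite set of states
Sigma = {'a', 'b'}                 # input alphabet
s     = 1                          # start state, s in Q
F     = {3}                        # set of final states, F subset of Q
delta = {                          # delta : Q x (Sigma U {eps}) -> P(Q)
    (1, 'a'): {1, 2},              # more than one choice on the same symbol
    (1, 'b'): {1},
    (2, ''):  {3},                 # '' stands for an eps-move
}

def d(q, x):
    """The transition function: always returns a SUBSET of Q."""
    return delta.get((q, x), set())

print(d(1, 'a'))   # {1, 2}
print(d(3, 'b'))   # set(): no move defined from state 3 on 'b'
```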
NFAs versus DFAs
By definition, DFAs are special cases of NFAs.
But the following are NOT legitimate for DFAs (though they are legitimate in the case of NFAs); the small check sketched after this list makes the distinction concrete:
• Multiple transitions over the same symbol from a certain state
• Absence of transitions over some symbol (in the alphabet) from a certain state
• Presence of ε-transitions
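A small Python sketch (the helper name and the dict-of-sets encoding are assumptions, not from the slides) that tests exactly the three conditions above: a transition table is DFA-legal only if every state has exactly one move per alphabet symbol and no ε-moves.

```python
# Check whether a transition table uses only moves that a DFA would also allow.
def is_deterministic(Q, Sigma, delta):
    return all(
        len(delta.get((q, x), set())) == 1      # exactly one move per symbol,
        for q in Q for x in Sigma               # no missing moves ...
    ) and not any(x == '' for (_, x) in delta)  # ... and no eps-moves ('')

print(is_deterministic({1, 2}, {'a', 'b'},
                       {(1, 'a'): {1, 2}, (2, 'b'): {1}}))    # False: NFA-only features
print(is_deterministic({1, 2}, {'a', 'b'},
                       {(1, 'a'): {2}, (1, 'b'): {1},
                        (2, 'a'): {2}, (2, 'b'): {1}}))        # True
```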
NFAs: Evolution of states
Consider the language L = { w ∈ {a,b}* | w ends with abb }
[Figure: a 4-state NFA for L (states 1-4), together with trees showing how its set of possible states evolves, symbol by symbol, for the input strings “aababb” and “abb”.]
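The evolution of states can be replayed in code. The sketch below assumes the usual 4-state NFA for “ends with abb” (state 1 loops on a and b, then an a-b-b path leads to the final state 4) and prints, after each symbol, the set of states the machine could possibly be in, which is exactly what the trees on the slide depict.

```python
# Assumed 4-state NFA for L = { w : w ends with abb }.
nfa = {(1, 'a'): {1, 2}, (1, 'b'): {1},
       (2, 'b'): {3},
       (3, 'b'): {4}}
start, finals = 1, {4}

def trace(w):
    """Print the set of possible states after each prefix of w."""
    current = {start}
    print(f'""        -> {sorted(current)}')
    for i, sym in enumerate(w, 1):
        current = {r for q in current for r in nfa.get((q, sym), set())}
        print(f'{w[:i]:<9} -> {sorted(current)}')
    print("accepted\n" if current & finals else "rejected\n")

trace("aababb")   # ends with state 4 in the set -> accepted
trace("abb")      # accepted
trace("aabab")    # rejected
```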
NFAs: Demonstrating ease of design
Consider the language L = { 0^m 1^n 0^k : m, n >= 0 and k >= 1 }
[Figure: a 3-state NFA for L with states A, B, C: a loop on 0 at A, an ε-transition from A to B, a loop on 1 at B, a move on 0 from B to C, and a loop on 0 at C.]
Try to develop a DFA recognizing the same language.*
* Use your intuition, and not a standard procedure for converting an NFA into an equivalent DFA.
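Simulating this machine needs one extra ingredient, the ε-closure: before and after every symbol we add all states reachable by ε-moves alone. A sketch, assuming the A/B/C machine described above with C as the final state:

```python
# NFA simulation with eps-moves ('' marks an eps-move) for
# L = { 0^m 1^n 0^k : m, n >= 0, k >= 1 }, on an assumed 3-state machine.
nfa = {('A', '0'): {'A'}, ('A', ''): {'B'},
       ('B', '1'): {'B'}, ('B', '0'): {'C'},
       ('C', '0'): {'C'}}
start, finals = 'A', {'C'}

def eps_closure(states):
    """All states reachable from `states` using eps-moves only."""
    closure, todo = set(states), list(states)
    while todo:
        q = todo.pop()
        for r in nfa.get((q, ''), set()):
            if r not in closure:
                closure.add(r)
                todo.append(r)
    return closure

def accepts(w):
    current = eps_closure({start})
    for sym in w:
        current = eps_closure({r for q in current
                                 for r in nfa.get((q, sym), set())})
    return bool(current & finals)

print(accepts("00110"))   # True:  m=2, n=2, k=1
print(accepts("0"))       # True:  m=0, n=0, k=1
print(accepts("110"))     # True:  m=0, n=2, k=1
print(accepts("011"))     # False: the string must end in a block of 0s
print(accepts(""))        # False: k >= 1, so the empty string is not in L
```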
NFAs: The role of ε’s
New machine M: L(M) = L(M1) ∪ L(M2)
[Figure: the union construction. A new start state s of the “combined machine” has ε-transitions to s1 and s2, the start states of M1 and M2; the final states f1 of M1 and f2 of M2 are the final states of the new “combined machine”.]
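The construction in the figure is mechanical enough to write down directly. In the sketch below (dict encoding, state names and the two small machines are assumptions) the new start state simply gets ε-arcs to the two old start states, and the old final states are kept.

```python
# Union of two NFAs via a new start state with eps-moves ('').
def union_nfa(nfa1, start1, finals1, nfa2, start2, finals2, new_start='s'):
    combined = {}
    combined.update(nfa1)
    combined.update(nfa2)                          # assumes disjoint state names
    combined[(new_start, '')] = {start1, start2}   # the two eps-arcs of the figure
    return combined, new_start, finals1 | finals2

# M1 accepts strings of a's of even length; M2 accepts the single string "b".
m1 = {('e', 'a'): {'o'}, ('o', 'a'): {'e'}}        # start 'e', final {'e'}
m2 = {('p', 'b'): {'q'}}                           # start 'p', final {'q'}
nfa, s, F = union_nfa(m1, 'e', {'e'}, m2, 'p', {'q'})
print(s, F)
print(nfa[(s, '')])    # {'e', 'p'}: the machine may "choose" M1 or M2
```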
NFA that recognizes the “reverse” of a
language
Given a language L, its “reverse” is R(L) = { w’ | w’ is the reverse of some w ∈ L }
Example:  L = { cat, rat, mat, sat, ... }   →   R(L) = { tac, tar, tam, tas, ... }
NFA that recognizes the “reverse” of a
language
Given a machine (NFA / DFA) M that recognizes L, how do we “transform” it into an NFA M’ (a reverse-machine) that recognizes R(L)?
1. Change the “start state” into a “final state”.
2. Change the “final state” into the “start state”.
3. Reverse the arrows of all transitions: the new machine has a transition from r to q on symbol a iff δ(q, a) = r in the old machine.
[Figure: the 2-state machine for L (a’s and b’s, starting with an a) and its reverse, a 2-state NFA for R(L) (a’s and b’s, ending with an a).]
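The three steps translate almost literally into code. A sketch for the single-final-state case (the 2-state machine is my reading of the slide's example, so treat its details as assumed):

```python
# Reverse an NFA/DFA with a single final state, following steps 1-3 above.
def reverse_nfa(nfa, start, final):
    rev = {}
    for (q, sym), targets in nfa.items():        # step 3: reverse every arrow
        for r in targets:
            rev.setdefault((r, sym), set()).add(q)
    return rev, final, {start}                   # steps 1 and 2: swap the roles

# L: a's and b's starting with an 'a' (states 1, 2; 2 is final).
nfa = {(1, 'a'): {2}, (2, 'a'): {2}, (2, 'b'): {2}}
rev, new_start, new_finals = reverse_nfa(nfa, 1, 2)
print(rev)                     # {(2, 'a'): {1, 2}, (2, 'b'): {2}}
print(new_start, new_finals)   # 2 {1} -- accepts strings ENDING with an 'a'
```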
NFA that recognizes the “reverse” of a
language
[Figure: a second example with states 0-3: the original machine, and its reverse, in which a new start state has ε-transitions to the two old final states.]
What if there were more than one final state? Change them into “ordinary” states; then add a new start state and draw ε-transitions (arcs) from the new start state to the “old” final states.
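And the fix for several final states is a one-line change: after reversing the arrows, give a fresh start state ε-arcs to all the old final states. A sketch with a hypothetical 4-state machine (all names and the dict encoding are assumptions):

```python
# Reverse an NFA/DFA with several final states: new start state + eps-arcs ('').
def reverse_nfa_multi(nfa, start, finals, new_start='s_new'):
    rev = {}
    for (q, sym), targets in nfa.items():          # reverse every arrow
        for r in targets:
            rev.setdefault((r, sym), set()).add(q)
    rev[(new_start, '')] = set(finals)             # eps-arcs to old final states
    return rev, new_start, {start}                 # old start is now the only final

# Hypothetical machine with two final states {2, 3}.
nfa = {(1, '0'): {2}, (1, '1'): {3}, (2, '0'): {2}, (3, '1'): {3}}
rev, s, F = reverse_nfa_multi(nfa, 1, {2, 3})
print(rev[(s, '')], F)   # the eps-arcs {2, 3}, and the new final set {1}
```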