Introduction to Machine Learning

Alpaydın slides with several modifications and additions by Christoph Eick.
Introduction
 Modeling dependencies in the input; frequently the order of observations in a dataset matters:
 Temporal sequences:
   In speech: phonemes in a word (dictionary), words in a sentence (syntax, semantics of the language)
   Stock market (stock values over time)
 Spatial sequences:
   Base pairs in DNA sequences
Discrete Markov Process
 N states: S1, S2, ..., SN
 First-order Markov property: with qt denoting the state at "time" t,
$$P(q_{t+1} = S_j \mid q_t = S_i, q_{t-1} = S_k, \ldots) = P(q_{t+1} = S_j \mid q_t = S_i)$$
 Transition probabilities:
$$a_{ij} \equiv P(q_{t+1} = S_j \mid q_t = S_i), \qquad a_{ij} \ge 0, \quad \sum_{j=1}^{N} a_{ij} = 1$$
 Initial probabilities:
$$\pi_i \equiv P(q_1 = S_i), \qquad \sum_{i=1}^{N} \pi_i = 1$$
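Such a chain can be simulated directly from Π and A. Below is a minimal Python/NumPy sketch (not part of the slides); the 2-state chain at the bottom is a made-up illustration.

```python
# A minimal sketch: sampling a state sequence from a first-order Markov chain
# defined by initial probabilities pi and transition matrix A.
import numpy as np

def sample_chain(pi, A, T, seed=0):
    rng = np.random.default_rng(seed)
    states = [rng.choice(len(pi), p=pi)]                      # q_1 ~ pi
    for _ in range(T - 1):
        states.append(rng.choice(len(pi), p=A[states[-1]]))   # q_{t+1} ~ a_{q_t, .}
    return states

# Hypothetical 2-state chain, for illustration only.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(sample_chain(pi, A, T=10))
```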
Stochastic Automaton/Markov Chain
The probability of an observed state sequence Q = q1 q2 ... qT is
$$P(O = Q \mid A, \Pi) = P(q_1) \prod_{t=2}^{T} P(q_t \mid q_{t-1}) = \pi_{q_1}\, a_{q_1 q_2} \cdots a_{q_{T-1} q_T}$$
Example: Balls and Urns
 Three urns, each full of balls of one color:
S1: blue, S2: red, S3: green
$$\Pi = [0.5,\ 0.2,\ 0.3]^T \qquad A = \begin{bmatrix} 0.4 & 0.3 & 0.3 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.1 & 0.8 \end{bmatrix}$$
$$O = \{S_1, S_1, S_3, S_3\}$$
$$P(O \mid A, \Pi) = P(S_1)\, P(S_1 \mid S_1)\, P(S_3 \mid S_1)\, P(S_3 \mid S_3) = \pi_1\, a_{11}\, a_{13}\, a_{33} = 0.5 \cdot 0.4 \cdot 0.3 \cdot 0.8 = 0.048$$
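As a quick check of the arithmetic above, here is a short NumPy sketch (not part of the slides) that evaluates πq1 · aq1q2 · ... · aqT-1qT for the urn example; states are 0-indexed, so S1 → 0 and S3 → 2.

```python
# A minimal sketch: probability of a fully observed state sequence in a Markov chain.
import numpy as np

pi = np.array([0.5, 0.2, 0.3])        # initial probabilities
A = np.array([[0.4, 0.3, 0.3],        # transition probabilities a_ij
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

def sequence_probability(states, A, pi):
    """P(q_1, ..., q_T | A, pi) for an observed state sequence."""
    p = pi[states[0]]
    for prev, cur in zip(states[:-1], states[1:]):
        p *= A[prev, cur]
    return p

O = [0, 0, 2, 2]                      # S1, S1, S3, S3
print(sequence_probability(O, A, pi)) # 0.5 * 0.4 * 0.3 * 0.8 = 0.048
```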
Balls and Urns: Learning
 Given K example sequences of length T:
$$\hat{\pi}_i = \frac{\#\{\text{sequences starting with } S_i\}}{\#\{\text{sequences}\}} = \frac{\sum_{k=1}^{K} 1(q_1^k = S_i)}{K}$$
$$\hat{a}_{ij} = \frac{\#\{\text{transitions from } S_i \text{ to } S_j\}}{\#\{\text{transitions from } S_i\}} = \frac{\sum_{k=1}^{K} \sum_{t=1}^{T-1} 1(q_t^k = S_i \text{ and } q_{t+1}^k = S_j)}{\sum_{k=1}^{K} \sum_{t=1}^{T-1} 1(q_t^k = S_i)}$$
Remark: Extract the probabilities from the observed sequences
s1-s2-s1-s3
s2-s1-s1-s2
s2-s3-s2-s1
which gives π1 = 1/3, π2 = 2/3, π3 = 0, a11 = 1/4, a12 = 2/4, a13 = 1/4, a21 = 3/4, ...
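These counting estimators can be verified with a short sketch (not from the slides); states are again 0-indexed (s1 → 0, s2 → 1, s3 → 2).

```python
# A minimal sketch: maximum-likelihood estimation of pi and A by counting.
import numpy as np

def estimate_markov_chain(sequences, n_states):
    pi_hat = np.zeros(n_states)
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        pi_hat[seq[0]] += 1                                # count starting states
        for prev, cur in zip(seq[:-1], seq[1:]):
            counts[prev, cur] += 1                         # count transitions
    pi_hat /= len(sequences)
    a_hat = counts / counts.sum(axis=1, keepdims=True)     # row-normalize
    return pi_hat, a_hat

seqs = [[0, 1, 0, 2], [1, 0, 0, 1], [1, 2, 1, 0]]          # the three sequences above
pi_hat, a_hat = estimate_markov_chain(seqs, 3)
print(pi_hat)    # [1/3, 2/3, 0]
print(a_hat[0])  # [1/4, 1/2, 1/4]
print(a_hat[1])  # [3/4, 0, 1/4]
```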
http://en.wikipedia.org/wiki/Hidden_Markov_model
Hidden Markov Models
 States are not observable
 Discrete observations {v1, v2, ..., vM} are recorded; they are a probabilistic function of the (hidden) state
 Emission probabilities:
$$b_j(m) \equiv P(O_t = v_m \mid q_t = S_j)$$
 Example: each urn contains balls of several colors, with a different color distribution per urn
 For each observation sequence, there are multiple possible state sequences
http://a-little-book-of-r-for-bioinformatics.readthedocs.org/en/latest/src/chapter10.html
HMM Unfolded in Time
Now a more complicated problem
[Figure: balls drawn from three urns (1, 2, 3), modeled first as a Markov chain and then as a hidden Markov model.]
We observe a sequence of ball colors. What urn sequence created it?
1. Markov chain: 1-1-2-2 (somewhat trivial, as the states are observable!)
2. Hidden Markov model: (1 or 2)-(1 or 2)-(2 or 3)-(2 or 3), and the potential sequences have different probabilities, e.g. drawing a blue ball from urn 1 is more likely than from urn 2!
Another Motivating Example
Elements of an HMM
 N: Number of states
 M: Number of observation symbols
 A = [aij]: N by N state transition probability matrix
 B = [bj(m)]: N by M observation (emission) probability matrix
 Π = [πi]: N by 1 initial state probability vector
λ = (A, B, Π), parameter set of HMM
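A compact way to hold these elements together, as a minimal sketch (the class name and layout are illustrative, not from the slides):

```python
# A minimal sketch of the HMM parameter set lambda = (A, B, Pi).
from dataclasses import dataclass
import numpy as np

@dataclass
class HMM:
    A: np.ndarray   # N x N state transition probabilities a_ij
    B: np.ndarray   # N x M observation (emission) probabilities b_j(m)
    pi: np.ndarray  # N initial state probabilities pi_i

# Hypothetical 2-state, 2-symbol model, for illustration only.
lam = HMM(A=np.array([[0.7, 0.3], [0.4, 0.6]]),
          B=np.array([[0.9, 0.1], [0.2, 0.8]]),
          pi=np.array([0.6, 0.4]))
```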
Three Basic Problems of HMMs
1. Evaluation: Given λ and a sequence O, calculate P(O | λ)
2. Most Likely State Sequence: Given λ and a sequence O, find the state sequence Q* such that
P(Q* | O, λ) = maxQ P(Q | O, λ)
3. Learning: Given a set of sequences O = {O1, ..., OK}, find λ* such that λ* is the most likely explanation of the sequences in O:
P(O | λ*) = maxλ Πk P(Ok | λ)
(Rabiner, 1989)
Evaluation
 Forward variable: the probability of observing O1 ... Ot and additionally being in state Si at step t:
$$\alpha_t(i) \equiv P(O_1 \cdots O_t, q_t = S_i \mid \lambda)$$
Initialization:
$$\alpha_1(i) = \pi_i\, b_i(O_1)$$
Recursion:
$$\alpha_{t+1}(j) = \left[\sum_{i=1}^{N} \alpha_t(i)\, a_{ij}\right] b_j(O_{t+1})$$
Using the αT(i), the probability of the observed sequence can be computed as:
$$P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$$
Complexity: O(N²·T)
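A minimal NumPy sketch of the forward pass (not from the slides); the 2-state, 2-symbol model at the bottom is a made-up illustration.

```python
# A minimal sketch of the forward algorithm:
# alpha[t, i] = P(O_1 .. O_t, q_t = S_i | lambda), filled in left to right.
import numpy as np

def forward(O, A, B, pi):
    """Return alpha (T x N) and P(O | lambda) for an observation index sequence O."""
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                       # initialization
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]   # recursion over predecessor states i
    return alpha, alpha[-1].sum()                    # P(O | lambda) = sum_i alpha_T(i)

# Hypothetical 2-state, 2-symbol HMM, for illustration only.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
alpha, likelihood = forward([0, 1, 0], A, B, pi)
print(likelihood)
```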
 Backward variable: the probability of observing Ot+1 ... OT given that we are in state Si at step t:
$$\beta_t(i) \equiv P(O_{t+1} \cdots O_T \mid q_t = S_i, \lambda)$$
Initialization:
$$\beta_T(i) = 1$$
Recursion:
$$\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)$$
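The backward pass can be sketched the same way (again not from the slides). For any t, Σi αt(i) βt(i) equals P(O | λ), which makes a handy correctness check when combining the two.

```python
# A minimal sketch of the backward algorithm:
# beta[t, i] = P(O_{t+1} .. O_T | q_t = S_i, lambda), filled in right to left.
import numpy as np

def backward(O, A, B):
    """Return beta (T x N) for an observation index sequence O."""
    T, N = len(O), A.shape[0]
    beta = np.zeros((T, N))
    beta[-1] = 1.0                                     # initialization: beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_ij * b_j(O_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    return beta
```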
Finding the Most Likely State Sequence
 γt(i) := probability of being in state Si at step t, given the whole observation sequence O1 ... Ot Ot+1 ... OT (αt covers O1 ... Ot, βt covers Ot+1 ... OT):
$$\gamma_t(i) \equiv P(q_t = S_i \mid O, \lambda) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)}$$
 Choose the state that has the highest probability for each time step:
$$q_t^* = \arg\max_i \gamma_t(i)$$
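Combining the two passes gives γt(i) directly; a sketch under the assumption that the forward() and backward() functions from the previous slides are available:

```python
# A minimal sketch: gamma_t(i) = P(q_t = S_i | O, lambda) via forward-backward,
# then the individually most probable state at each step.
def posterior_states(O, A, B, pi):
    alpha, _ = forward(O, A, B, pi)
    beta = backward(O, A, B)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # normalize each time step
    return gamma, gamma.argmax(axis=1)          # q_t* = argmax_i gamma_t(i)
```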
Only briefly discussed in 2014!
Viterbi’s Algorithm
$$\delta_t(i) \equiv \max_{q_1 q_2 \cdots q_{t-1}} p(q_1 q_2 \cdots q_{t-1}, q_t = S_i, O_1 \cdots O_t \mid \lambda)$$
 Initialization:
$$\delta_1(i) = \pi_i\, b_i(O_1), \qquad \psi_1(i) = 0$$
 Recursion:
$$\delta_t(j) = \max_i \delta_{t-1}(i)\, a_{ij}\, b_j(O_t), \qquad \psi_t(j) = \arg\max_i \delta_{t-1}(i)\, a_{ij}$$
 Termination:
$$p^* = \max_i \delta_T(i), \qquad q_T^* = \arg\max_i \delta_T(i)$$
 Path backtracking:
$$q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1$$
Idea: combines path probability computations with backtracking over competing paths.
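A minimal sketch of Viterbi's algorithm (not from the slides); the model at the bottom is a made-up 2-state, 2-symbol illustration.

```python
# A minimal sketch: delta holds the best path probability ending in each state,
# psi records the argmax used for path backtracking.
import numpy as np

def viterbi(O, A, B, pi):
    """Return the most likely state sequence and its probability."""
    T, N = len(O), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, O[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # scores[i, j] = delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, O[t]]
    q = np.zeros(T, dtype=int)                      # termination and backtracking
    q[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        q[t] = psi[t + 1][q[t + 1]]
    return q, delta[-1].max()

# Hypothetical 2-state, 2-symbol HMM, for illustration only.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
print(viterbi([0, 1, 1, 0], A, B, pi))
```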
Baum-Welch Algorithm
Learning a model λ = (A, B, Π) from sequences O = {O1, ..., OK}:
[Diagram: the Baum-Welch algorithm takes the observed symbol sequences O as input and outputs the model λ = (A, B, Π); the hidden state sequences are not observed.]
An EM-style algorithm is used!
E-step:
 ξt(i, j) is a hidden (latent) variable measuring the probability of going from state i at step t to state j at step t+1, observing Ot+1, given a model λ and an observed sequence Ok ∈ O:
$$\xi_t(i,j) \equiv P(q_t = S_i, q_{t+1} = S_j \mid O, \lambda) = \frac{\alpha_t(i)\, a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)}{\sum_k \sum_l \alpha_t(k)\, a_{kl}\, b_l(O_{t+1})\, \beta_{t+1}(l)}$$
 γt(i) is a hidden (latent) variable measuring the probability of being in state i at step t, given a model λ and an observed sequence Ok ∈ O:
$$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j)$$
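An E-step sketch for a single sequence (not from the slides), assuming the forward() and backward() sketches from the Evaluation slides:

```python
# A minimal sketch of the E-step for one observation sequence O.
import numpy as np

def e_step(O, A, B, pi):
    """Return xi (T-1 x N x N) and gamma (T x N) for one sequence."""
    alpha, _ = forward(O, A, B, pi)
    beta = backward(O, A, B)
    T, N = len(O), len(pi)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        # xi_t(i, j) = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j), normalized over (i, j)
        xi[t] = alpha[t][:, None] * A * B[:, O[t + 1]] * beta[t + 1]
        xi[t] /= xi[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # equals sum_j xi_t(i, j) for t < T
    return xi, gamma
```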
Baum-Welch Algorithm: M-Step
M-step:
$$\hat{\pi}_i = \frac{\sum_{k=1}^{K} \gamma_1^k(i)}{K}$$
$$\hat{a}_{ij} = \frac{\sum_{k=1}^{K} \sum_{t=1}^{T_k - 1} \xi_t^k(i,j)}{\sum_{k=1}^{K} \sum_{t=1}^{T_k - 1} \gamma_t^k(i)} \qquad \left(\frac{\text{probability of going from } i \text{ to } j}{\text{probability of being in } i}\right)$$
$$\hat{b}_j(m) = \frac{\sum_{k=1}^{K} \sum_{t=1}^{T_k - 1} \gamma_t^k(j)\, 1(O_t^k = v_m)}{\sum_{k=1}^{K} \sum_{t=1}^{T_k - 1} \gamma_t^k(j)}$$
Remark: k iterates over the observed sequences O1, ..., OK; for each individual sequence Ok, the γk and ξk are computed in the E-step; then the actual model λ is computed in the M-step by averaging over the estimates of πi, aij, bj (based on the γk and ξk) for each of the K observed sequences.
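A matching M-step sketch (not from the slides) that pools the per-sequence ξk and γk produced by the E-step sketch above, keeping the sums up to Tk − 1 as in the formulas:

```python
# A minimal sketch of the M-step over K sequences.
import numpy as np

def m_step(observations, xis, gammas, n_symbols):
    """Re-estimate (pi, A, B) from per-sequence xi and gamma arrays."""
    K = len(observations)
    N = gammas[0].shape[1]
    pi_hat = sum(g[0] for g in gammas) / K                  # average of gamma_1^k(i)
    A_num = sum(x.sum(axis=0) for x in xis)                 # expected i -> j transitions
    den = sum(g[:-1].sum(axis=0) for g in gammas)           # expected visits to i, t <= T_k - 1
    A_hat = A_num / den[:, None]
    B_num = np.zeros((N, n_symbols))
    for O, g in zip(observations, gammas):
        for t in range(len(O) - 1):                         # t = 1 .. T_k - 1
            B_num[:, O[t]] += g[t]                          # expected emissions of symbol O_t
    B_hat = B_num / den[:, None]
    return pi_hat, A_hat, B_hat
```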
Baum-Welch Algorithm: Summary
Estimate an initial model λ = (A, B, Π)
REPEAT
  E-step: estimate γt(i) and ξt(i, j) based on the model λ = (A, B, Π) and O
  M-step: re-estimate λ = (A, B, Π) based on γt(i) and ξt(i, j)
UNTIL CONVERGENCE
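Putting the two steps together, a sketch of the REPEAT-UNTIL loop (not from the slides), assuming the e_step(), m_step(), and forward() sketches above; convergence is checked on the log-likelihood of the training sequences.

```python
# A minimal sketch of the outer EM loop of Baum-Welch.
import numpy as np

def baum_welch(observations, A, B, pi, n_symbols, tol=1e-6, max_iter=200):
    prev_ll = -np.inf
    for _ in range(max_iter):
        stats = [e_step(O, A, B, pi) for O in observations]      # E-step per sequence
        xis, gammas = zip(*stats)
        pi, A, B = m_step(observations, xis, gammas, n_symbols)  # M-step
        ll = sum(np.log(forward(O, A, B, pi)[1]) for O in observations)
        if ll - prev_ll < tol:                                   # UNTIL CONVERGENCE
            break
        prev_ll = ll
    return A, B, pi
```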
For more discussion see: http://www.robots.ox.ac.uk/~vgg/rg/slides/hmm.pdf
See also: http://www.digplanet.com/wiki/Baum%E2%80%93Welch_algorithm
Generalization of HMM: Continuous Observations
The observations generated at each time step are vectors of k numbers; a multivariate Gaussian with k dimensions is associated with each state j, defining the probability of a k-dimensional vector v being generated when in state j:
$$P(O_t \mid q_t = S_j, \lambda) \sim \mathcal{N}(\mu_j, \Sigma_j)$$
[Diagram: a hidden state sequence generates an observed vector sequence O; the model is λ = (A, (μj, Σj) for j = 1, ..., N, Π).]
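With continuous observations, the discrete emission matrix B is replaced by one Gaussian density per state; a minimal sketch (not from the slides) using SciPy's multivariate normal:

```python
# A minimal sketch: per-state Gaussian emission densities p(O_t | q_t = S_j).
import numpy as np
from scipy.stats import multivariate_normal

def emission_densities(o, means, covs):
    """Density of the k-dimensional observation o under each state's Gaussian."""
    return np.array([multivariate_normal.pdf(o, mean=m, cov=c)
                     for m, c in zip(means, covs)])

# Hypothetical 2-state model with 2-dimensional observations, for illustration only.
means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), 0.5 * np.eye(2)]
print(emission_densities(np.array([0.2, 0.8]), means, covs))
```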
Generalization: HMM with Inputs
 Input-dependent observations:
$$P(O_t \mid q_t = S_j, x^t, \lambda) \sim \mathcal{N}\!\left(g_j(x^t \mid \theta_j), \sigma_j^2\right)$$
 Input-dependent transitions (Meila and Jordan, 1996; Bengio and Frasconi, 1996):
$$P(q_{t+1} = S_j \mid q_t = S_i, x^t)$$
 Time-delay input:
$$x^t = f(O_{t-\tau}, \ldots, O_{t-1})$$