HMM Solution to the Dice Problem

Markov Processes (Briefly)



We begin with data generated from a Markov process.
The process has several states (loaded and unloaded dice) which behave differently.
The goal of the HMM is to estimate which state we were in for each data point.
Markov Processes (Cont.)
The probabilistic switching between states is described by a transition matrix, A.
The behavior of each state is described by an emission matrix, B.
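To make the setup concrete, here is a minimal sketch of simulating such a two-state dice process, assuming illustrative values for A and B (the actual matrices are not given in these slides):

```python
import numpy as np

# Illustrative (assumed) parameters for a fair/loaded dice process.
# A[i, j] = P(next state is j | current state is i); rows sum to 1.
A = np.array([[0.95, 0.05],    # fair die usually stays fair
              [0.10, 0.90]])   # loaded die usually stays loaded

# B[i, k] = P(rolling face k | in state i); the emission matrix.
B = np.array([[1/6] * 6,                          # fair die: uniform
              [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]])   # loaded die favors six

def simulate(n_rolls, A, B, seed=0):
    """Generate hidden states and observed rolls from the Markov process."""
    rng = np.random.default_rng(seed)
    states, rolls = [], []
    s = 0  # start with the fair die
    for _ in range(n_rolls):
        states.append(s)
        rolls.append(rng.choice(6, p=B[s]))  # observed face, coded 0-5
        s = rng.choice(2, p=A[s])            # hidden state transition
    return np.array(states), np.array(rolls)

states, rolls = simulate(1000, A, B)
```

The states array is what the HMM will try to recover from the rolls alone.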
Forward-Backward Algorithm



Takes an initial estimate of the A and B matrices and constructs an estimate of the likelihood that we were in a given state at each data point.
It does this by finding the joint likelihood of the data up to time t and of being in state i at time t, denoted alpha:

    alpha_i(t) = P(o_1, ..., o_t, state i at time t)

It also finds the likelihood of the data after t, given that we are in state i at time t. This is denoted beta:

    beta_i(t) = P(o_{t+1}, ..., o_T | state i at time t)
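A minimal sketch of the forward-backward pass (the per-step rescaling and variable names are implementation choices, not from the slides; the toy parameters at the bottom are assumptions):

```python
import numpy as np

def forward_backward(rolls, A, B, pi):
    """Scaled forward (alpha) and backward (beta) passes.

    alpha[t, i] tracks P(o_1..o_t, state i at t), rescaled each step;
    beta[t, i] tracks P(o_{t+1}..o_T | state i at t), same scaling.
    """
    T, N = len(rolls), A.shape[0]
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    c = np.zeros(T)  # per-step scale factors, to avoid underflow

    alpha[0] = pi * B[:, rolls[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, rolls[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, rolls[t + 1]] * beta[t + 1]) / c[t + 1]

    # gamma[t, i] = P(state i at t | all data): the per-point state estimate
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return alpha, beta, gamma

# Toy run with assumed parameters (not the slides' actual values).
A = np.array([[0.95, 0.05], [0.10, 0.90]])
B = np.array([[1/6] * 6, [0.1] * 5 + [0.5]])
pi = np.array([0.5, 0.5])
rolls = np.array([0, 5, 5, 2, 5, 5])
alpha, beta, gamma = forward_backward(rolls, A, B, pi)
```

Each row of gamma is a probability distribution over states for that data point.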
Baum-Welch Re-Estimation

Uses the initial guesses for A and B and the calculated alpha and beta values to re-estimate A and B.
When used iteratively, this often converges to the right answer, even if the initial guesses are random!
It's magic!
It's SCIENCE!
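One re-estimation step can be sketched as follows (a compact scaled implementation; the starting guesses and roll sequence at the bottom are illustrative assumptions):

```python
import numpy as np

def baum_welch_step(rolls, A, B, pi):
    """One Baum-Welch re-estimation of (A, B, pi) via scaled forward-backward."""
    T, N = len(rolls), A.shape[0]
    alpha, beta, c = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)

    # Forward pass with per-step rescaling
    alpha[0] = pi * B[:, rolls[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, rolls[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # Backward pass, consistent with the forward scaling
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, rolls[t + 1]] * beta[t + 1]) / c[t + 1]

    gamma = alpha * beta  # gamma[t, i] = P(state i at t | data)

    # xi[t, i, j]: expected i -> j transition probability at time t
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, rolls[1:]].T * beta[1:])[:, None, :]
          / c[1:, None, None])

    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.array([[gamma[rolls == k, i].sum() for k in range(B.shape[1])]
                      for i in range(N)])
    B_new /= gamma.sum(axis=0)[:, None]
    return A_new, B_new, gamma[0]

# Illustrative run with assumed starting guesses.
A0 = np.array([[0.8, 0.2], [0.2, 0.8]])
B0 = np.array([[1/6] * 6, [0.1] * 5 + [0.5]])
pi0 = np.array([0.5, 0.5])
rolls = np.array([0, 5, 5, 2, 5, 5, 1, 5])
A1, B1, pi1 = baum_welch_step(rolls, A0, B0, pi0)
```

Iterating this step (feeding A1, B1, pi1 back in) is what the slides describe as converging from random guesses.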
Errors in Estimated Parameters



Finite data set, ~10 transitions.
Simpler problem: assuming A and B are mostly correct, what are the errors due to counting statistics?
Use A and the total number of rolls, M, to determine how many rolls you would observe for each die.
The "limiting distribution" X is the normalized left eigenvector of A with an eigenvalue of 1.
Experimental physics trick: count N events, set the error to sqrt(N).
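The limiting-distribution trick can be sketched directly (the transition matrix and M here are assumed example values):

```python
import numpy as np

# Assumed example transition matrix (not the slides' actual values).
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])

# The limiting distribution X satisfies X A = X, i.e. X is the left
# eigenvector of A with eigenvalue 1 (equivalently, a right eigenvector
# of A transpose), normalized so its entries sum to 1.
vals, vecs = np.linalg.eig(A.T)
X = np.real(vecs[:, np.argmax(np.real(vals))])
X /= X.sum()

M = 1000                   # total number of rolls
N_per_die = X * M          # expected rolls observed from each die
err = np.sqrt(N_per_die)   # counting-statistics error: sqrt(N) per die
```

For this example A the chain spends 2/3 of its time on the first die and 1/3 on the second, so with M = 1000 rolls the counting errors are roughly sqrt(667) and sqrt(333).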
Daily Temperature as a Markov Process?