DCM: Advanced Topics

Rosalyn Moran
SPM Course, October 20th – 22nd 2011

[Title figure: simulated driving input u1, modulatory input u2, neural population activity x1, x2, x3 and the resulting fMRI signal change (%) for a nonlinear DCM with state equation
\frac{dx}{dt} = \Big( A + \sum_{i=1}^{m} u_i B^{(i)} + \sum_{j=1}^{n} x_j D^{(j)} \Big) x + Cu ]
Outline
• Bayesian model selection (BMS)
• Families of Models
• Nonlinear DCM for fMRI
• Stochastic DCM
• Post-hoc selection of deterministic DCM
• Integrating tractography and DCM
Bayesian Model Selection

[Figure: four DCMs (M1–M4) of the attention-to-motion dataset, each with regions V1, V5 and PPC, driving input "stim" to V1 and modulatory input "attention"; bar plots show the relative log model evidence and the posterior model probability for M1–M4]

M2 better than M1: BF = 2966 (ΔF = 7.995)
M3 better than M2: BF = 12 (ΔF = 2.450)
M4 better than M3: BF = 23 (ΔF = 3.144)
Bayes factors

To compare two models on a given dataset, we compare their evidences:

B_{12} = \frac{p(y \mid m_1)}{p(y \mid m_2)}

a positive value in [0; ∞[, or, equivalently, their log evidences:

\ln B_{12} = F_1 - F_2

Kass & Raftery classification:

B12          p(m1|y)    Evidence
1 to 3       50–75%     weak
3 to 20      75–95%     positive
20 to 150    95–99%     strong
≥ 150        ≥ 99%      very strong

Kass & Raftery 1995, J. Am. Stat. Assoc.
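As a concrete illustration (not from the original slides), a minimal Python sketch that turns two free-energy approximations of the log evidence into a Bayes factor and its Kass & Raftery label:

```python
import math

def bayes_factor(F1, F2):
    """Bayes factor B12 from log evidences (free energies) F1 and F2."""
    return math.exp(F1 - F2)

def kass_raftery(B12):
    """Kass & Raftery (1995) evidence categories."""
    if B12 >= 150: return "very strong"
    if B12 >= 20:  return "strong"
    if B12 >= 3:   return "positive"
    return "weak"

# The attention example: a log evidence difference of 7.995 gives BF ~ 2966
B = bayes_factor(7.995, 0.0)
print(f"B12 = {B:.0f} ({kass_raftery(B)})")
```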
The negative free energy approximation

The negative free energy F is a lower bound on the log model evidence \log p(y \mid m):

F = \log p(y \mid m) - KL\big[ q(\theta),\, p(\theta \mid y, m) \big]

Rearranged, F expresses a balance between fit and complexity (accuracy minus complexity):

F = \big\langle \log p(y \mid \theta, m) \big\rangle_q - KL\big[ q(\theta),\, p(\theta \mid m) \big]

Under the Laplace assumption the complexity term is

KL_{Laplace} = \frac{1}{2}\ln\lvert C_\theta \rvert - \frac{1}{2}\ln\lvert C_{\theta\mid y} \rvert + \frac{1}{2}\big(\mu_{\theta\mid y} - \mu_\theta\big)^{T} C_\theta^{-1} \big(\mu_{\theta\mid y} - \mu_\theta\big)

The three terms reflect, respectively, the prior covariance (independent priors), the posterior covariance (dependent posteriors), and the deviation of the posterior mean from the prior mean.
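A minimal numpy sketch of this complexity term (note: it implements the slide's expression, which drops the trace term of the full Gaussian KL divergence):

```python
import numpy as np

def laplace_complexity(mu_prior, C_prior, mu_post, C_post):
    """Complexity under the Laplace assumption:
    0.5*ln|C_prior| - 0.5*ln|C_post| + 0.5*dmu' C_prior^{-1} dmu."""
    _, logdet_prior = np.linalg.slogdet(C_prior)
    _, logdet_post = np.linalg.slogdet(C_post)
    dmu = mu_post - mu_prior
    return 0.5 * (logdet_prior - logdet_post + dmu @ np.linalg.solve(C_prior, dmu))
```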
Fixed effects BMS at group level

Group Bayes factor (GBF) for subjects n = 1…N:

GBF_{ij} = \prod_{n=1}^{N} BF_{ij}^{(n)}, \qquad \ln GBF_{ij} = \sum_{n=1}^{N} \big( F_i^{(n)} - F_j^{(n)} \big)

Problems:
- blind with regard to group heterogeneity
- sensitive to outliers
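In code, the fixed-effects group comparison is just a sum of subject-wise log evidence differences (a sketch; F is assumed to be an N-subjects × K-models array):

```python
import numpy as np

def log_gbf(F, i, j):
    """Fixed-effects group log Bayes factor: ln GBF_ij = sum_n (F_i - F_j).
    Note that a single outlying subject can dominate this sum."""
    return np.sum(F[:, i] - F[:, j])
```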
Fixed/Random effects BMS for group studies

FFX generative model: select one of K models from a multinomial distribution, then generate data under this single model for each of the N subjects:

m \sim Mult(m; 1, r)
y_n \sim p(y_n \mid m), \quad n = 1 \dots N

RFX generative model: select a model for each subject by sampling from a multinomial distribution, then generate data under that subject-specific model:

r \sim Dir(r; \alpha)
m_n \sim Mult(m; 1, r)
y_n \sim p(y_n \mid m_n)

Random effects BMS for group studies

- r \sim Dir(r; \alpha): Dirichlet distribution of model probabilities, whose parameters \alpha can be read as "occurrences" of models in the population
- m_n \sim Mult(m; 1, r): multinomial distribution of model labels
- y_n: measured data

Model inversion by variational Bayes (VB) estimates the parameters \alpha of the posterior p(r \mid y, \alpha).

Stephan et al. 2009, NeuroImage
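A compact sketch of the VB scheme of Stephan et al. 2009 (in the spirit of SPM's spm_BMS), assuming F is an N × K array of subject-wise log evidences:

```python
import numpy as np
from scipy.special import digamma

def vb_rfx_bms(F, alpha0=1.0, n_iter=200):
    """Variational Bayes for random-effects BMS: returns Dirichlet parameters alpha."""
    N, K = F.shape
    alpha = np.full(K, alpha0)
    for _ in range(n_iter):
        # posterior over each subject's model label (log domain for stability)
        log_u = F + digamma(alpha) - digamma(alpha.sum())
        log_u -= log_u.max(axis=1, keepdims=True)
        g = np.exp(log_u)
        g /= g.sum(axis=1, keepdims=True)
        alpha = alpha0 + g.sum(axis=0)   # prior counts + expected "occurrences"
    return alpha
```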
Reporting RFX for group studies

r \sim Dir(r; \alpha), with \alpha_1 \dots \alpha_K the "occurrences" of the models in the population.

The expected likelihood of obtaining the k-th model for any randomly selected member of the population:

\langle r_k \rangle = \alpha_k / (\alpha_1 + \dots + \alpha_K)

The exceedance probability: the belief that one model is more likely than any other model (of the K tested), given the group data. For two models:

\varphi_1 = p(r_1 > 0.5 \mid y, \alpha)
E.g.: task-driven lateralisation (Stephan et al. 2003, Science)

[Figure: group activation maps for the two contrasts]

Letter decisions ("Does the word contain the letter A or not?"): letter decisions > spatial decisions
Spatial decisions ("Is the red letter left or right of the midline of the word?"): spatial decisions > letter decisions

- group analysis (random effects), n = 16, p < 0.05 corrected
- analysis with SPM2
Theories on inter-hemispheric integration during lateralised tasks

Information transfer (for a left-lateralised task): stimuli in the left visual field (LVF) project to the right hemisphere, so their task-relevant information must be transferred across hemispheres; task effects (T) on the connections are therefore conditional on the visual field (T|LVF vs. T|RVF).

Predictions:
- modulation by task, conditional on visual field
- asymmetric connection strengths
Ventral stream & letter decisions (Stephan et al. 2007, J. Neurosci.)

[Figure: LD > SD activations, p < 0.05 cluster-level corrected (p < 0.001 voxel-level cut-off) and p < 0.01 uncorrected, masked inclusively with RVF > LVF and LVF > RVF; model regions: left MOG (−38,−90,−4), right MOG (38,−94,0), left FG (−44,−52,−18), right FG (38,−52,−20), left LG (−12,−70,−6), right LG (14,−68,−2); inputs: RVF stim. and LVF stim.]

M1: inter-hemispheric connections modulated by the letter decision (LD) task conditional on the visual field of stimulus presentation (LD|LVF, LD|RVF); intra-hemispheric connections modulated by the letter decision task alone.
Ventral stream & letter decisions (Stephan et al. 2007, J. Neurosci.)

[Figure: same activations and regions as above, now arranged as model M2]

M2: inter-hemispheric connections modulated by the letter decision (LD) task alone; intra-hemispheric connections modulated by the letter decision task conditional on the visual field of stimulus presentation (LD|LVF, LD|RVF).
Winner (fixed effects): m2

[Figure: models m1 and m2 with their LD, LD|LVF and LD|RVF modulations, and the log model evidence differences for each subject, ranging from about −35 to +5 in favour of m2]

RFX analysis:

p(r_1 > 0.5 \mid y) = 99.7\%
\langle r_1 \rangle = 84.3\% \; (m2), \quad \langle r_2 \rangle = 15.7\% \; (m1)
\alpha_1 = 11.8, \quad \alpha_2 = 2.2

[Figure: posterior density p(r_1 \mid y)]

Stephan et al. 2009, NeuroImage
Outline
• Bayesian model selection (BMS)
• Families of Models
• Nonlinear DCM for fMRI
• Stochastic DCM
• Post-hoc selection of deterministic DCM
• Integrating tractography and DCM
Families of Models

Dynamics of intelligible speech vs. reversed speech (Leff et al. 2008):
"She came out of the house" / "esuoh eht fo tuo emac ehS"

Regions: posterior superior temporal sulcus (P), anterior superior temporal sulcus (A), pars orbitalis of the inferior frontal gyrus (F).

Where does the auditory driving input enter? Partition the model space into families (f1, f2) according to the location of the driving input.
Families of Models

FFX: compare summed log evidences between families:

\ln p(f_k \mid y) = \sum_{m \in f_k} F_m, \qquad \ln GBF_{f_1 f_2} = \sum_{i=1:N} \ln p(f_1 \mid y_i) - \sum_{i=1:N} \ln p(f_2 \mid y_i)

RFX: pool the Dirichlet parameters over the models in each family:

r \sim Dir(\alpha_1, \dots, \alpha_K) \;\rightarrow\; Dir\Big( \sum_{i \in f_1} \alpha_i,\; \sum_{i \in f_2} \alpha_i \Big)

p(f_k \mid y_{1 \dots N}) = \sum_{m \in f_k} \langle r_m \rangle
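Pooling is a one-liner once the model-level Dirichlet parameters are in hand (a sketch, with hypothetical values):

```python
import numpy as np

def family_alphas(alpha, families):
    """Pool RFX Dirichlet parameters over families of models.
    alpha: (K,) array; families: dict mapping family name -> model indices."""
    return {name: alpha[list(idx)].sum() for name, idx in families.items()}

alpha = np.array([2.1, 5.3, 1.2, 7.4])
print(family_alphas(alpha, {"f1": [0, 1], "f2": [2, 3]}))  # {'f1': 7.4, 'f2': 8.6}
```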
Parameters of a family (e.g. modulatory connections)

Within the winning family f1, Bayesian model averaging (BMA) weights the posterior parameter densities by the model probabilities:

p(\theta_n \mid Y, m \in f_k) = \sum_{m \in f_k} q(\theta_n \mid y_n, m)\, p(m_n \mid Y)

Penny et al., 2010
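BMA can be implemented by sampling: draw a model with its posterior probability, then draw the parameter from that model's posterior. A sketch for one Gaussian-posterior parameter (all numbers hypothetical):

```python
import numpy as np

def bma_samples(means, variances, model_probs, n=10_000, seed=1):
    """Sample from the Bayesian-model-average of Gaussian posteriors."""
    rng = np.random.default_rng(seed)
    m = rng.choice(len(model_probs), size=n, p=model_probs)
    return rng.normal(np.asarray(means)[m], np.sqrt(np.asarray(variances)[m]))

theta = bma_samples(means=[0.4, 0.1], variances=[0.02, 0.05], model_probs=[0.7, 0.3])
print(theta.mean(), theta.std())
```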
Outline
• Bayesian model selection (BMS)
• Families of Models
• Nonlinear DCM for fMRI
• Stochastic DCM
• Post-hoc selection of deterministic DCM
• Integrating tractography and DCM
[Figure: the DCM framework. A driving input u1(t) and a modulatory input u2(t) act on neuronal states x1(t), x2(t), x3(t); the neuronal states x are integrated and passed through a hemodynamic model (λ) to generate the predicted BOLD signal y]

Neural state equation:

\dot{x} = \Big( A + \sum_{j} u_j B^{(j)} \Big) x + Cu

A = \frac{\partial \dot{x}}{\partial x}: intrinsic connectivity
B^{(j)} = \frac{\partial^2 \dot{x}}{\partial x\, \partial u_j}: modulation of connectivity
C = \frac{\partial \dot{x}}{\partial u}: direct inputs

Stephan & Friston (2007), Handbook of Brain Connectivity
Bilinear vs. nonlinear DCM

Two-dimensional Taylor series (around x_0 = 0, u_0 = 0):

\frac{dx}{dt} = f(x, u) \approx f(x_0, 0) + \frac{\partial f}{\partial x} x + \frac{\partial f}{\partial u} u + \frac{\partial^2 f}{\partial x\, \partial u} ux + \frac{\partial^2 f}{\partial x^2} x^2 + \dots

Bilinear state equation (driving inputs, plus modulation of connections by inputs):

\frac{dx}{dt} = \Big( A + \sum_{i=1}^{m} u_i B^{(i)} \Big) x + Cu

Nonlinear state equation (additionally, modulation of connections by neuronal states):

\frac{dx}{dt} = \Big( A + \sum_{i=1}^{m} u_i B^{(i)} + \sum_{j=1}^{n} x_j D^{(j)} \Big) x + Cu

with A = \frac{\partial f}{\partial x}, \; B^{(i)} = \frac{\partial^2 f}{\partial x\, \partial u_i}, \; C = \frac{\partial f}{\partial u}, \; D^{(j)} = \frac{\partial^2 f}{\partial x\, \partial x_j}.
Nonlinear dynamic causal model (DCM):

\frac{dx}{dt} = \Big( A + \sum_{i=1}^{m} u_i B^{(i)} + \sum_{j=1}^{n} x_j D^{(j)} \Big) x + Cu

[Figure: simulation of a three-region nonlinear DCM, showing driving input u1, modulatory input u2, neural population activity x1, x2, x3, and the resulting fMRI signal change (%)]

Stephan et al. 2008, NeuroImage
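To make the state equation concrete, a minimal forward-Euler sketch of the neuronal dynamics (the A, B, C, D values are hypothetical, and the hemodynamic model mapping x to BOLD is omitted):

```python
import numpy as np

def dcm_states(A, B, C, D, u, dt=0.01):
    """Integrate dx/dt = (A + sum_i u_i B^(i) + sum_j x_j D^(j)) x + C u.
    A: (n,n); B: (m,n,n); C: (n,m); D: (n,n,n); u: (T,m) input time series."""
    n, T = A.shape[0], u.shape[0]
    x = np.zeros((T, n))
    for t in range(T - 1):
        J = A + np.tensordot(u[t], B, axes=1) + np.tensordot(x[t], D, axes=1)
        x[t + 1] = x[t] + dt * (J @ x[t] + C @ u[t])
    return x

# Three regions: u1 drives x1, and x3 gates the x1 -> x2 connection through D.
A = np.array([[-1.0, 0.0, 0.0], [0.4, -1.0, 0.0], [0.4, 0.0, -1.0]])
B = np.zeros((1, 3, 3))
C = np.array([[1.0], [0.0], [0.0]])
D = np.zeros((3, 3, 3)); D[2, 1, 0] = 0.8   # x3 strengthens coupling x1 -> x2
u = np.zeros((5000, 1)); u[500:1500] = 1.0
x = dcm_states(A, B, C, D, u)
```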
Example: attention to motion (Stephan et al. 2008, NeuroImage)

Questions addressed by comparing models M1–M4 (regions V1, V5, PPC; driving input "stim" to V1; modulatory input "attention"):

- modulation of the backward or the forward connection? → M2 better than M1, BF = 2966
- an additional driving effect of attention on PPC? → M3 better than M2, BF = 12
- bilinear or nonlinear modulation of the forward connection? → M4 better than M3, BF = 23
Outline
• Bayesian model selection (BMS)
• Families of Models
• Nonlinear DCM for fMRI
• Stochastic DCM
• Post-hoc selection of deterministic DCM
• Integrating tractography and DCM
Stochastic DCMs

Previously, uncertainty entered the model only at the point of observation (measurement noise). Stochastic DCMs also accommodate random fluctuations in the hidden neuronal and physiological states, i.e. endogenous dynamics not explained by experimental perturbation.

The DCM now comprises a generative model of random differential equations (cf. the fluctuations ω):

\dot{x} = \Big( A + \sum_k u_k B^{(k)} \Big) x + Cv + \omega^{(x)}
v = u + \omega^{(v)}

The fluctuations ω are smooth Gaussian processes, parameterised by a log-precision hyperparameter and a smoothness hyperparameter.

Three quantities must now be inferred: the parameters {A, B, C}, the hyperparameters {σ, π}, and the states themselves {x, v, h}.

Inversion: generalised filtering (under the Laplace assumption), which outperforms DEM; DEM depends on conditional independence between these quantities.

Li et al. 2011
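A toy Euler–Maruyama sketch of the stochastic neural states (illustrative only: white noise stands in for the smooth fluctuations, and the generalised-filtering inversion itself is not attempted here):

```python
import numpy as np

def stochastic_dcm_states(A, B, C, u, sig_x=0.05, sig_v=0.05, dt=0.01, seed=2):
    """Simulate dx = [(A + sum_k u_k B^(k)) x + C v] dt + noise, v = u + noise."""
    rng = np.random.default_rng(seed)
    n, T = A.shape[0], u.shape[0]
    x = np.zeros((T, n))
    for t in range(T - 1):
        v = u[t] + sig_v * rng.standard_normal(u.shape[1])   # hidden causes
        J = A + np.tensordot(u[t], B, axes=1)
        x[t + 1] = x[t] + dt * (J @ x[t] + C @ v) \
                   + np.sqrt(dt) * sig_x * rng.standard_normal(n)
    return x
```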
Stochastic DCMs

\dot{x} = \Big( A + \sum_k u_k B^{(k)} \Big) x + Cv + \omega^{(x)}
v = u + \omega^{(v)}

A new dimension, hidden causes v: an estimate of the afferent neuronal activity actually elicited by the experimental input, rather than the known input u itself.

[Figure: driving input u1(t) and modulatory input u2(t) acting on neuronal states x1(t), x2(t), x3(t)]

Li et al. 2011
Outline
• Bayesian model selection (BMS)
• Families of Models
• Nonlinear DCM for fMRI
• Stochastic DCM
• Post-hoc selection of deterministic DCM
• Integrating tractography and DCM
Connection Combinatorics

18 possible "A" connections → over 2.6 million models.

[Figure: six-region network (left/right LG, MOG and FG) with RVF and LVF stimulus inputs]
Overcoming Connection Combinatorics

Post-hoc evidence approach (Friston & Penny, 2011):

Post-hoc estimation of the model evidence and parameters of any nested model within a larger model is possible through a function of the posterior density of the full model and the priors of the reduced model.

Assumption: there exists a full model m_F that shares its likelihood (identical observation noise) with the reduced models m_i, ∀i: m_i ⊂ m_F:

p(y \mid \theta, m_i) = p(y \mid \theta, m_F)

From Bayes' rule, the evidence ratio can then be expressed through priors and posteriors alone:

\frac{p(y \mid m_i)}{p(y \mid m_F)} = \frac{p(\theta \mid y, m_F)\, p(\theta \mid m_i)}{p(\theta \mid y, m_i)\, p(\theta \mid m_F)}
Post-hoc Evidence

Given the Laplace assumption, the log model evidence of any reduced model is an analytic function of the means and precisions of the priors and posteriors of the full and reduced models; the means and precisions of the reduced model's posterior follow by simple algebra.
Post-hoc Evidence

Integrating the identity

\frac{p(y \mid m_i)}{p(y \mid m_F)} = \frac{p(\theta \mid y, m_F)\, p(\theta \mid m_i)}{p(\theta \mid y, m_i)\, p(\theta \mid m_F)}

over the reduced model's posterior yields a generalisation of the Savage–Dickey density ratio (Dickey, 1971):

\frac{p(y \mid m_i)}{p(y \mid m_F)} = \int \frac{p(\theta \mid y, m_F)\, p(\theta \mid m_i)}{p(\theta \mid m_F)}\, d\theta

Partition \theta = \{\theta_u, \theta_c\} into parameters unique to the full model (u) and common to both models (c). With delta (point-mass) priors on \theta_u in the reduced model:

\frac{p(y \mid m_i)}{p(y \mid m_F)} = \iint \frac{p(\theta_u, \theta_c \mid y, m_F)\, p(\theta_u \mid m_i)}{p(\theta_u \mid m_F)}\, d\theta_u\, d\theta_c = \frac{p(\theta_u = 0 \mid y, m_F)}{p(\theta_u = 0 \mid m_F)}
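Under Gaussian (Laplace) assumptions this ratio is analytic. A minimal numpy sketch of the "Bayesian model reduction" step, in the spirit of SPM's spm_log_evidence (precisions P are inverse covariances; the numerical values are hypothetical):

```python
import numpy as np

def reduced_log_evidence(mu0, P0, mu, P, mu0_r, P0_r):
    """ln p(y|m_reduced) - ln p(y|m_full) for Gaussian priors/posteriors that
    share a likelihood. (mu0, P0): full prior; (mu, P): full posterior;
    (mu0_r, P0_r): reduced prior."""
    P_r = P - P0 + P0_r                          # reduced posterior precision
    mu_r = np.linalg.solve(P_r, P @ mu - P0 @ mu0 + P0_r @ mu0_r)
    ld = lambda M: np.linalg.slogdet(M)[1]
    dF = 0.5 * (ld(P) + ld(P0_r) - ld(P_r) - ld(P0))
    dF -= 0.5 * (mu @ P @ mu + mu0_r @ P0_r @ mu0_r
                 - mu_r @ P_r @ mu_r - mu0 @ P0 @ mu0)
    return dF

# Example: "switch off" the second of two connections with a near-delta prior.
mu0, P0 = np.zeros(2), np.eye(2)
mu, P = np.array([0.9, 0.05]), np.array([[8.0, 1.0], [1.0, 8.0]])
P0_r = np.diag([1.0, 1e6])                       # tiny prior variance on theta_2
print(reduced_log_evidence(mu0, P0, mu, P, np.zeros(2), P0_r))
```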
Outline
• Bayesian model selection (BMS)
• Families of Models
• Nonlinear DCM for fMRI
• Stochastic DCM
• Post-hoc selection of deterministic DCM
• Integrating tractography and DCM
Diffusion-tensor imaging

[Figure: DTI fibre tracking. Sporns, Scholarpedia; Parker & Alexander 2005, Phil. Trans. B]

Probabilistic tractography (Kaden et al. 2007, NeuroImage):
- computes the local fibre orientation density by deconvolution of the diffusion-weighted signal
- estimates the spatial probability distribution of connectivity from given seed regions
- anatomical connectivity = the proportion of fibre pathways originating in a specific source region that intersect a target region
- the asymmetry of this metric is accounted for by averaging the values obtained when seed and target regions are interchanged
Integration of tractography and DCM

Use the anatomical connection probability between two regions (R1, R2) to set the prior variance of the corresponding effective connectivity parameter:

- low probability of anatomical connection → small prior variance of the effective connectivity parameter
- high probability of anatomical connection → large prior variance of the effective connectivity parameter

[Figure: zero-centred shrinkage priors on the R1–R2 coupling parameter, narrow for improbable connections and broad for probable ones]

Stephan, Tittgemeyer et al. 2009, NeuroImage
Probabilistic tractography → DCM structure → anatomical connectivity

Anatomical connection probabilities φ for the ventral-stream network (LG x1, LG x2, FG x3, FG x4):

φ12 (LG–LG) = 34.2%
φ13 (LG–FG) = 15.7%
φ24 (LG–FG) = 43.6%
φ34 (FG–FG) = 6.5%

[Figure: DCM with regions LG (x1), LG (x2), FG (x3), FG (x4); driving inputs RVF, LVF and BVF stim.; modulations LD, LD|LVF and LD|RVF]
Hypothesised connection-specific priors for the coupling parameters:

φ34 = 6.5%  → v = 0.0384
φ13 = 15.7% → v = 0.1070
φ12 = 34.2% → v = 0.5268
φ24 = 43.6% → v = 0.7746

[Figure: the four zero-centred Gaussian priors, whose variance v increases with connection probability φ]

Connection-specific prior variance v as a sigmoid function of anatomical connection probability φ, with offset and slope hyperparameters a and b:

v_{ij} = \frac{1}{1 + \exp\!\big( -(a + b\, \varphi_{ij}) \big)}

Stephan, Tittgemeyer et al. 2009, NeuroImage
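The four variances above are reproduced by a = −4, b = 12, which suggests this is the mapping plotted on the slide; a quick numeric check:

```python
import numpy as np

def prior_variance(phi, a, b):
    """Prior variance of a coupling parameter as a sigmoid of the
    anatomical connection probability phi (in [0, 1])."""
    return 1.0 / (1.0 + np.exp(-(a + b * phi)))

phi = np.array([0.065, 0.157, 0.342, 0.436])
print(prior_variance(phi, a=-4, b=12))   # ~ [0.0384, 0.1070, 0.5268, 0.7746]
```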
- 64 different mappings (m1: a=−32, b=−32 … m62: a=16, b=32, plus m63 & m64), generated by a systematic search across the hyperparameters a and b
- yields anatomically informed (intuitive and counterintuitive) and uninformed priors

[Figure: the 64 candidate mappings from connection probability φ to prior variance v, one sigmoid panel per (a, b) pair]
[Figure: log group Bayes factor across the 64 models, with a zoomed view of the best models and the posterior model probabilities]
Stephan, Tittgemeyer et al. 2009, NeuroImage
Thank You
With thanks to the FIL Methods Group
for slides and images
In particular Klaas Stephan, Maria Joao and Will Penny