
DCM for fMRI: Advanced topics
Klaas Enno Stephan
Laboratory for Social & Neural Systems Research (SNS)
University of Zurich
Wellcome Trust Centre for Neuroimaging
University College London
[Figure: simulated neural population activity for three neural states (x1, x2, x3), driven by input u1 and modulated by input u2, and the resulting fMRI signal change (%) in each region]

Neural state equation of the (bi)linear/nonlinear DCM:

$$\frac{dx}{dt} = \left( A + \sum_{i=1}^{m} u_i B^{(i)} + \sum_{j=1}^{n} x_j D^{(j)} \right) x + C u$$
SPM Course, London
13 May 2011
Dynamic Causal Modeling (DCM)
Neural state equation:

$$\frac{dx}{dt} = F(x, u, \theta)$$

driven by experimental inputs u.

Hemodynamic forward model: neural activity → BOLD
Electromagnetic forward model: neural activity → EEG / MEG / LFP

fMRI: simple neuronal model, complicated forward model
EEG/MEG: complicated neuronal model, simple forward model
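To make the state equations above concrete, here is a minimal numerical sketch (mine, not SPM code) that integrates the bilinear/nonlinear form of the neural state equation with a simple Euler scheme; the two-region network, inputs and parameter values are invented for illustration, and the hemodynamic forward model that would map x to BOLD is omitted:

```python
import numpy as np

def dcm_neural_dynamics(A, B, C, D, u, dt=0.01):
    """Euler integration of dx/dt = (A + sum_i u_i B[i] + sum_j x_j D[j]) x + C u.
    A: (n,n), B: (m,n,n), C: (n,m), D: (n,n,n), u: (T,m) input time series."""
    T, m = u.shape
    n = A.shape[0]
    x = np.zeros(n)
    X = np.zeros((T, n))
    for t in range(T):
        # effective connectivity at time t: fixed + bilinear + nonlinear terms
        J = A + np.tensordot(u[t], B, axes=1) + np.tensordot(x, D, axes=1)
        x = x + dt * (J @ x + C @ u[t])          # Euler step
        X[t] = x
    return X

# toy example: 2 regions, input 1 drives region 1, input 2 modulates the 1->2 connection
A = np.array([[-1.0, 0.0],
              [ 0.4, -1.0]])                      # fixed (endogenous) connectivity
B = np.zeros((2, 2, 2)); B[1, 1, 0] = 0.6         # bilinear modulation by input 2
C = np.zeros((2, 2)); C[0, 0] = 1.0               # driving input 1 -> region 1
D = np.zeros((2, 2, 2))                           # no nonlinear (gating) terms here
u = np.zeros((1000, 2))
u[100:600, 0] = 1.0                               # driving input (boxcar)
u[300:600, 1] = 1.0                               # modulatory input
x = dcm_neural_dynamics(A, B, C, D, u)
```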
Overview
• Bayesian model selection (BMS)
• Neurocomputational models:
Embedding computational models in DCMs
• Integrating tractography and DCM
[Flowchart: a systematic procedure for model selection and parameter inference (Stephan et al. 2010, NeuroImage)]

1. Definition of model space
2. Inference on model structure or inference on model parameters?

Inference on model structure:
• inference on individual models or on a model space partition?
  - individual models: optimal model structure assumed to be identical across subjects?
    yes → FFX BMS; no → RFX BMS
  - model space partition: comparison of model families using FFX or RFX BMS

Inference on model parameters:
• inference on parameters of an optimal model or on parameters of all models?
  - optimal model: optimal model structure assumed to be identical across subjects?
    yes → FFX BMS, then FFX analysis of parameter estimates (e.g. BPA)
    no → RFX BMS, then RFX analysis of parameter estimates (e.g. t-test, ANOVA)
  - all models: BMA
Model comparison and selection
Given competing hypotheses about the structure & functional mechanisms of a system, which model is the best?

Which model represents the best balance between model fit and model complexity?

For which model m does p(y|m) become maximal?

Pitt & Myung (2002) TICS
Bayesian model selection (BMS)
Model evidence:

$$p(y|m) = \int p(y|\theta, m)\, p(\theta|m)\, d\theta$$

$$\log p(y|m) = \langle \log p(y|\theta,m) \rangle_q - KL\big[q(\theta),\, p(\theta|m)\big] + KL\big[q(\theta),\, p(\theta|y,m)\big]$$

[Figure: p(y|m) plotted over the space of all possible datasets y (Ghahramani 2004)]

• accounts for both accuracy and complexity of the model
• allows for inference about structure (generalisability) of the model

Various approximations, e.g.:
• negative free energy, AIC, BIC

MacKay 1992, Neural Comput.
Penny et al. 2004a, NeuroImage
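As a toy illustration of the integral above (not part of the original slides), the evidence of a trivial one-parameter model can be approximated by averaging the likelihood over samples from the prior; all numbers are made up:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = np.array([0.9, 1.1, 1.4, 0.8, 1.2])           # toy data

def log_evidence_mc(y, prior_mean=0.0, prior_sd=1.0, noise_sd=0.5, n_samples=100000):
    """Monte Carlo estimate of p(y|m) = int p(y|theta,m) p(theta|m) dtheta
    for a model with y_i ~ N(theta, noise_sd) and theta ~ N(prior_mean, prior_sd)."""
    theta = rng.normal(prior_mean, prior_sd, n_samples)              # samples from the prior
    log_lik = norm.logpdf(y[:, None], theta[None, :], noise_sd).sum(axis=0)
    m = log_lik.max()                                                # log-mean-exp
    return m + np.log(np.exp(log_lik - m).mean())

print(log_evidence_mc(y))                 # evidence of the model with a free mean parameter
print(norm.logpdf(y, 0.0, 0.5).sum())     # compare: model with the mean fixed to 0
```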
Approximations to the model evidence in DCM
Maximizing the log model evidence = maximizing the model evidence (the logarithm is a monotonic function).

Log model evidence = balance between fit and complexity:

$$\log p(y|m) = \text{accuracy}(m) - \text{complexity}(m) = \log p(y|\theta,m) - \text{complexity}(m)$$

In SPM2 & SPM5, the interface offers 2 approximations (p = number of parameters, N = number of data points):

Akaike Information Criterion:  $AIC = \log p(y|\theta,m) - p$

Bayesian Information Criterion:  $BIC = \log p(y|\theta,m) - \frac{p}{2}\log N$

AIC and BIC only take into account the number of parameters, but not the flexibility they provide (prior variance) nor their interdependencies.
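Both criteria are trivial to compute once the accuracy term is available; a minimal sketch using the sign convention of this slide (higher is better), with made-up numbers:

```python
import numpy as np

def aic(log_lik, n_params):
    """AIC as defined on this slide: accuracy minus the number of parameters."""
    return log_lik - n_params

def bic(log_lik, n_params, n_data):
    """BIC as defined on this slide: accuracy minus (p/2) * log N."""
    return log_lik - 0.5 * n_params * np.log(n_data)

# e.g. two models fitted to N = 360 scans (illustrative values)
print(aic(-512.3, 6), bic(-512.3, 6, 360))
print(aic(-508.9, 9), bic(-508.9, 9, 360))
```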
The (negative) free energy approximation
• Under Gaussian assumptions about the posterior (Laplace approximation), the negative free energy F is a lower bound on the log model evidence:

$$\log p(y|m) = \underbrace{\langle \log p(y|\theta,m) \rangle_q - KL\big[q(\theta),\, p(\theta|m)\big]}_{F} + KL\big[q(\theta),\, p(\theta|y,m)\big] = F + KL\big[q(\theta),\, p(\theta|y,m)\big]$$

• F can also be written as the difference between fit and complexity:

$$F = \langle \log p(y|\theta,m) \rangle_q - KL\big[q(\theta),\, p(\theta|m)\big]$$
The complexity term in F
• In contrast to AIC & BIC, the complexity term of the negative free energy F accounts for parameter interdependencies:

$$KL\big[q(\theta),\, p(\theta|m)\big] = \frac{1}{2}\ln|C_\theta| - \frac{1}{2}\ln|C_{\theta|y}| + \frac{1}{2}\big(\mu_{\theta|y} - \mu_\theta\big)^T C_\theta^{-1}\big(\mu_{\theta|y} - \mu_\theta\big)$$

• The complexity term of F is higher
  – the more independent the prior parameters (↑ effective DFs)
  – the more dependent the posterior parameters
  – the more the posterior mean deviates from the prior mean

• NB: Since SPM8, only F is used for model selection!
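A direct transcription of the complexity expression above into code (a sketch; the prior and posterior moments below are invented):

```python
import numpy as np

def complexity(mu_prior, C_prior, mu_post, C_post):
    """Complexity term of F for Gaussian prior/posterior, as written above:
    0.5*ln|C_prior| - 0.5*ln|C_post| + 0.5*(mu_post - mu_prior)' C_prior^-1 (mu_post - mu_prior)."""
    d = mu_post - mu_prior
    _, logdet_prior = np.linalg.slogdet(C_prior)
    _, logdet_post = np.linalg.slogdet(C_post)
    return 0.5 * (logdet_prior - logdet_post + d @ np.linalg.solve(C_prior, d))

# illustrative values: two coupling parameters under a shrinkage prior around zero
mu0, C0 = np.zeros(2), np.eye(2)
mu1, C1 = np.array([0.4, -0.2]), np.array([[0.05, 0.02],
                                           [0.02, 0.08]])
print(complexity(mu0, C0, mu1, C1))
```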
Bayes factors
To compare two models, we could just compare their log evidences.

But: the log evidence is just some number – not very intuitive!

A more intuitive interpretation of model comparisons is made possible by Bayes factors (a positive value in [0; ∞[):

$$B_{12} = \frac{p(y|m_1)}{p(y|m_2)}$$

Kass & Raftery classification (Kass & Raftery 1995, J. Am. Stat. Assoc.):

B12          p(m1|y)    Evidence
1 to 3       50-75%     weak
3 to 20      75-95%     positive
20 to 150    95-99%     strong
≥ 150        ≥ 99%      very strong
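In practice one works with differences of log evidences (e.g. free energies); a minimal sketch for converting such a difference into a Bayes factor and a posterior model probability, assuming equal prior probabilities for the two models:

```python
import numpy as np

def compare(logev_1, logev_2):
    """Bayes factor B12 and posterior probability of model 1 under p(m1) = p(m2) = 0.5."""
    bf_12 = np.exp(logev_1 - logev_2)
    p_m1 = 1.0 / (1.0 + np.exp(logev_2 - logev_1))
    return bf_12, p_m1

# a log evidence difference of ~8 corresponds to BF ~ 3000 (cf. the V1/V5/PPC example below)
print(compare(8.0, 0.0))
# a difference of ~3 corresponds to BF ~ 20, i.e. 'positive' to 'strong' evidence
print(compare(3.0, 0.0))
```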
BMS in SPM8: an example
[Figure: four alternative DCMs (M1–M4) for the attention-to-motion dataset, comprising V1, V5 and PPC, with visual stimulation ('stim') driving V1 and attention as modulatory input]

M2 better than M1: BF = 2966 (log evidence difference F = 7.995)
M3 better than M2: BF ≈ 12 (F = 2.450)
M4 better than M3: BF ≈ 23 (F = 3.144)

Stephan et al. 2008, NeuroImage
Fixed effects BMS at group level
Group Bayes factor (GBF) for 1, ..., K subjects:

$$GBF_{ij} = \prod_k BF_{ij}^{(k)}$$

Average Bayes factor (ABF):

$$ABF_{ij} = \sqrt[K]{\prod_k BF_{ij}^{(k)}}$$

Problems:
- blind with regard to group heterogeneity
- sensitive to outliers
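A sketch of fixed effects group BMS from per-subject log Bayes factors (summing in log space avoids numerical overflow); the numbers are invented, and the second call illustrates the sensitivity to a single outlier:

```python
import numpy as np

def ffx_group_bms(log_bf):
    """GBF_ij = prod_k BF_ij^(k); ABF_ij is its K-th root (the geometric mean)."""
    log_bf = np.asarray(log_bf, dtype=float)
    return np.exp(log_bf.sum()), np.exp(log_bf.mean())

# per-subject log Bayes factors for model i vs model j
print(ffx_group_bms([1.5, 0.8, 2.1, 1.2, 0.9]))
print(ffx_group_bms([1.5, 0.8, 2.1, 1.2, -12.0]))   # one outlier dominates the GBF
```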
Random effects BMS for heterogeneous groups
[Figure: hierarchical model underlying random effects BMS]

• Dirichlet parameters α = "occurrences" of models in the population
• Dirichlet distribution of the model probabilities r:  r ~ Dir(r; α)
• Multinomial distribution of the model labels m, per subject:  m_n ~ Mult(m; 1, r)
• Measured data y, per subject:  y_n ~ p(y_n | m_n)

Model inversion by Variational Bayes (VB) or MCMC

Stephan et al. 2009a, NeuroImage
Penny et al. 2010, PLoS Comp. Biol.
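A minimal sketch of the variational scheme described in Stephan et al. 2009a (my own transcription, not the SPM routine): each iteration re-weights the subjects' log evidences by the current Dirichlet estimate and then updates the Dirichlet "counts":

```python
import numpy as np
from scipy.special import digamma

def rfx_bms(log_evidence, alpha0=1.0, n_iter=200):
    """Variational random effects BMS. log_evidence: (N subjects x K models).
    Returns Dirichlet parameters alpha and expected model probabilities r."""
    lme = np.asarray(log_evidence, dtype=float)
    N, K = lme.shape
    alpha = np.full(K, alpha0)
    for _ in range(n_iter):
        log_u = lme + digamma(alpha) - digamma(alpha.sum())
        log_u -= log_u.max(axis=1, keepdims=True)    # numerical stability
        g = np.exp(log_u)
        g /= g.sum(axis=1, keepdims=True)            # posterior p(subject n uses model k)
        alpha = alpha0 + g.sum(axis=0)               # update Dirichlet 'counts'
    return alpha, alpha / alpha.sum()

# toy data: 12 subjects, 2 models; most subjects favour model 1
rng = np.random.default_rng(1)
lme = rng.normal(0, 1, size=(12, 2)); lme[:9, 0] += 3
alpha, r = rfx_bms(lme)
# exceedance probability of model 1, by sampling from the Dirichlet posterior
xp1 = (rng.dirichlet(alpha, 200000).argmax(axis=1) == 0).mean()
print(alpha, r, xp1)
```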
[Figure: two competing DCMs (m1, m2) of interhemispheric integration in the ventral visual stream, with lingual gyrus (LG) and fusiform gyrus (FG) in each hemisphere; inputs: visual field of stimulation (LVF/RVF stim.) and letter decisions (LD), either unconditional or conditional on visual field (LD|LVF, LD|RVF). Data: Stephan et al. 2003, Science; models: Stephan et al. 2007, J. Neurosci.]

[Figure: per-subject log model evidence differences (m1 vs m2) and the posterior density p(r1|y)]

Random effects BMS results:
α1 = 11.8, α2 = 2.2
r1 = 84.3%, r2 = 15.7%
p(r1 > 0.5 | y) = 0.997
p(r1 > r2) = 99.7%

Stephan et al. 2009a, NeuroImage
Model space partitioning:
comparing model families

[Figure: FFX (summed log evidence, i.e. log GBF, relative to RBML) and RFX (Dirichlet α) results for eight models: CBMN, CBMN(ε), RBMN, RBMN(ε), CBML, CBML(ε), RBML, RBML(ε)]

Model space partitioning: pool the Dirichlet parameters over the models within each family:

$$\alpha_1^* = \sum_{k=1}^{4} \alpha_k \qquad \alpha_2^* = \sum_{k=5}^{8} \alpha_k$$

Comparing the nonlinear family (m1) against the linear family (m2):
r1 = 73.5%, r2 = 26.5%
p(r1 > 0.5 | y) = 0.986
p(r1 > r2) = 98.6%

Stephan et al. 2009, NeuroImage
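The pooled quantities above follow from the aggregation property of the Dirichlet distribution: summing the α of the models within a family gives the Dirichlet parameters of the family probabilities. A sketch (here both families are of equal size, so no correction for family size is needed; the α values are illustrative):

```python
import numpy as np

def family_inference(alpha, families):
    """Pool Dirichlet parameters over the models in each family."""
    alpha = np.asarray(alpha, dtype=float)
    alpha_star = np.array([alpha[list(f)].sum() for f in families])
    return alpha_star, alpha_star / alpha_star.sum()

# e.g. 8 models: models 0-3 nonlinear, models 4-7 linear
alpha = np.array([4.1, 2.6, 3.0, 1.9, 1.3, 1.1, 1.6, 1.0])
alpha_star, r_family = family_inference(alpha, [range(0, 4), range(4, 8)])
print(alpha_star, r_family)
```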
Comparing model families – a second example
• data from Leff et al. 2008, J. Neurosci.
• one driving input, one modulatory input
• 2^6 = 64 possible modulations
• 2^3 − 1 = 7 input patterns
• 7 × 64 = 448 models
• integrate out uncertainty about modulatory patterns and ask where auditory input enters

Penny et al. 2010, PLoS Comput. Biol.
Bayesian Model Averaging (BMA)
• abandons dependence of parameter inference on a single model
• computes the average of each parameter, weighted by the posterior model probabilities:

$$p(\theta_n \mid y_{1..N}) = \sum_m p(\theta_n \mid y_n, m)\, p(m \mid y_{1..N})$$

• represents a particularly useful alternative
  – when none of the models (or model subspaces) considered clearly outperforms all others
  – when comparing groups for which the optimal model differs

NB: p(m|y1..N) can be obtained by either FFX or RFX BMS

Penny et al. 2010, PLoS Comput. Biol.
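A sketch of the averaging step: for a point estimate, each parameter's posterior mean is averaged over models, weighted by p(m|y) (full BMA would average the posterior densities, e.g. by sampling); all values below are illustrative:

```python
import numpy as np

def bma_point_estimate(posterior_means, model_probs):
    """Posterior-probability-weighted average of parameter estimates across models."""
    mu = np.asarray(posterior_means)   # shape (K models, P parameters)
    w = np.asarray(model_probs)        # shape (K,), sums to 1
    return w @ mu

mu = np.array([[0.52, 0.10],           # model 1: two coupling parameters
               [0.31, 0.18],           # model 2
               [0.00, 0.40]])          # model 3 (first parameter absent, i.e. fixed to 0)
w = np.array([0.55, 0.35, 0.10])       # p(m|y) from FFX or RFX BMS
print(bma_point_estimate(mu, w))
```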
BMS for large model spaces

• for less constrained model spaces, search methods are needed

[Figure: assuming reciprocal connections, the number of possible models grows as 2^(n(n−1)/2) with the number of nodes n]

• fast model scoring via the Savage-Dickey density ratio: for a reduced model m_i in which the parameters θ_i of the full model m_F are switched off,

$$\ln p(y \mid m_i) = \ln p(y \mid m_F) + \ln q(\theta_i = 0 \mid y, m_F) - \ln p(\theta_i = 0 \mid m_F)$$

[Figure: scatter plot of the log evidence (free energy) of each model against its Savage-Dickey approximation]

Friston et al. 2011, NeuroImage
Friston & Penny 2011, NeuroImage
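A sketch of scoring a reduced model via the Savage-Dickey ratio under a Gaussian (Laplace) approximation, using the posterior of the full model; the moments and evidence below are invented and this is not the exact SPM routine:

```python
import numpy as np
from scipy.stats import multivariate_normal

def savage_dickey_log_evidence(logev_full, prior_mean, prior_cov,
                               post_mean, post_cov, off):
    """ln p(y|m_i) = ln p(y|m_F) + ln q(theta_off=0 | y, m_F) - ln p(theta_off=0 | m_F),
    for a reduced model in which the parameters indexed by `off` are fixed to zero."""
    off = np.asarray(off)
    zeros = np.zeros(len(off))
    log_q0 = multivariate_normal.logpdf(zeros, post_mean[off], post_cov[np.ix_(off, off)])
    log_p0 = multivariate_normal.logpdf(zeros, prior_mean[off], prior_cov[np.ix_(off, off)])
    return logev_full + log_q0 - log_p0

# full model with 3 connection parameters (illustrative prior/posterior moments and evidence)
prior_mean, prior_cov = np.zeros(3), np.eye(3)
post_mean = np.array([0.8, 0.05, -0.6])
post_cov = 0.05 * np.eye(3)
# reduced model: switch off connection 2
print(savage_dickey_log_evidence(-120.0, prior_mean, prior_cov, post_mean, post_cov, [1]))
```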
BMS for large model spaces

• empirical example: comparing all 32,768 variants of a 6-region model (under the constraint of reciprocal connections)

[Figure: log-posterior and posterior probability over all 32,768 models, and MAP estimates of the connection parameters under the full model and under the sparse (winning) model]

Friston et al. 2011, NeuroImage
Overview
• Bayesian model selection (BMS)
• Neurocomputational models:
Embedding computational models in DCMs
• Integrating tractography and DCM
Learning of dynamic audio-visual associations
[Figure: trial structure: an auditory conditioning stimulus (CS1 or CS2) is followed by a visual target stimulus (face or house) requiring a response (CS at 0 ms, target stimulus and response within the 200–800 ms window shown; inter-trial interval 2000 ± 650 ms). The association strength p(face) changes across the ~1000 trials.]

den Ouden et al. 2010, J. Neurosci.
Explaining RTs by different learning models
[Figure: reaction times (ms) decrease with increasing outcome probability p(outcome); predicted p(face) time courses of the different learning models (true probability, hierarchical Bayesian learner, HMM fixed, HMM learn, Rescorla-Wagner) over trials 400–600]

5 alternative learning models:
• categorical probabilities
• hierarchical Bayesian learner
• Rescorla-Wagner
• Hidden Markov models (2 variants)

Bayesian model selection (exceedance probabilities across the five models): the hierarchical Bayesian model performs best.

den Ouden et al. 2010, J. Neurosci.
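For reference, a minimal sketch (mine, not the authors' code) of the simplest of these learners, the Rescorla-Wagner model, which updates its outcome prediction by a fixed fraction of the prediction error:

```python
import numpy as np

def rescorla_wagner(outcomes, learning_rate=0.1, v0=0.5):
    """Rescorla-Wagner learning: v_{t+1} = v_t + lr * (u_t - v_t),
    where u_t is the observed outcome (1 = face, 0 = house) and v_t the prediction."""
    v = np.empty(len(outcomes) + 1)
    v[0] = v0
    for t, u in enumerate(outcomes):
        prediction_error = u - v[t]
        v[t + 1] = v[t] + learning_rate * prediction_error
    return v

# simulated block-wise outcome probabilities, as in a dynamic association paradigm
rng = np.random.default_rng(0)
p_face = np.repeat([0.9, 0.1, 0.7, 0.3], 100)
outcomes = (rng.random(p_face.size) < p_face).astype(float)
print(rescorla_wagner(outcomes)[:10])
```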
Hierarchical Bayesian learning model
p k   1
k
volatility
vt-1
p v t 1 | v t , k  ~ N v t , exp( k ) 
vt
probabilistic association
rt
rt+1
observed events
ut
ut+1
p  rt 1 | rt , v t  ~ Dir rt , exp( v t ) 
1


p red ictio n : p rt , v t , K u 1:t  1 
rt  1 , v t  1  p  v t v t  1 , K
t

 p r
t 1

u p d ate: p rt , v t , K u 1:t 

t

t

u 1:t  1 p  u t rt  d rt d v t d K
Behrens et al. 2007, Nat. Neurosci.
0.6
0.4
p rt , v t , K u 1:t  1 p  u t rt 
 p  r , v , K

, v t  1 , K u 1:t  1 d rt  1 d v t  1
p(F)
 p  r
0.8
0.2
0
400
440
480
520
Trial
560
600
Stimulus-independent prediction error
[Figure: negative parametric effects of outcome probability (p(F), p(H)) on BOLD responses (a.u.) in the putamen and premotor cortex; p < 0.05 (SVC and cluster-level whole-brain corrected, respectively)]

den Ouden et al. 2010, J. Neurosci.
Prediction error (PE) activity in the putamen
PE during reinforcement learning: O'Doherty et al. 2004, Science
PE during incidental sensory learning: den Ouden et al. 2009, Cerebral Cortex

According to current learning theories (e.g., the free energy principle):
synaptic plasticity during learning = PE-dependent changes in connectivity
Plasticity of visuo-motor connections

[Figure: DCM in which the prediction error signal of the hierarchical Bayesian learning model, encoded in the putamen (PUT), modulates the connections from visual areas (PPA, FFA) to dorsal premotor cortex (PMd); p = 0.017 and p = 0.010]

• modulation of visuo-motor connections by striatal prediction error activity
• influence of visual areas on premotor cortex:
  – stronger for surprising stimuli
  – weaker for expected stimuli

den Ouden et al. 2010, J. Neurosci.
Prediction error in PMd: cause or effect?
[Figure: two competing models of where the prediction error modulation acts (Model 1 vs Model 2); log model evidence differences (Model 1 minus Model 2) for 15 subjects and the posterior density p(r1|y)]

p(r1 > 0.5 | y) = 0.991

den Ouden et al. 2010, J. Neurosci.
Overview
• Bayesian model selection (BMS)
• Neurocomputational models:
Embedding computational models in DCMs
• Integrating tractography and DCM
Diffusion-weighted imaging

[Figure: diffusion-weighted imaging and fibre tracking; Parker & Alexander 2005, Phil. Trans. B]

Probabilistic tractography (Kaden et al. 2007, NeuroImage)
• computes the local fibre orientation density by spherical deconvolution of the diffusion-weighted signal
• estimates the spatial probability distribution of connectivity from given seed regions
• anatomical connectivity = proportion of fibre pathways originating in a specific source region that intersect a target region
• if the area or volume of the source region approaches a point, this measure reduces to the method of Behrens et al. (2003)
Integration of tractography and DCM

[Figure: Gaussian shrinkage priors on the effective connectivity between two regions R1 and R2, with prior variance informed by tractography]

low probability of anatomical connection → small prior variance of the effective connectivity parameter
high probability of anatomical connection → large prior variance of the effective connectivity parameter

Stephan, Tittgemeyer et al. 2009, NeuroImage
Proof of concept study

[Figure: probabilistic tractography between left and right lingual gyrus (LG) and fusiform gyrus (FG); anatomical connection probabilities: φ12 = 34.2%, φ13 = 15.7%, φ24 = 43.6%, φ34 = 6.5%]

anatomical connectivity → connection-specific priors for the coupling parameters of the DCM:

φ = 6.5%  → v = 0.0384
φ = 15.7% → v = 0.1070
φ = 34.2% → v = 0.5268
φ = 43.6% → v = 0.7746

Stephan, Tittgemeyer et al. 2009, NeuroImage
Connection-specific prior variance σ as a function of anatomical connection probability φ:

$$\sigma_{ij} = \frac{1}{1 + \exp\big(-(a\,\varphi_{ij} + b)\big)}$$

• 64 different mappings generated by a systematic search across the hyperparameters a and b
• yields anatomically informed (intuitive and counterintuitive) and uninformed priors

[Figure: the 64 candidate mappings σ(φ), models m1 (a=−32, b=−32) through m64, each defined by one hyperparameter pair (a, b)]
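A sketch of such a mapping under the sigmoidal form written above; the hyperparameter values below are illustrative (chosen so that the output roughly matches the prior variances in the proof-of-concept example), not necessarily those of the winning model:

```python
import numpy as np

def prior_variance(phi, a, b):
    """Map anatomical connection probability phi (0..1) to the prior variance of the
    corresponding coupling parameter via a sigmoid with hyperparameters a and b."""
    return 1.0 / (1.0 + np.exp(-(a * phi + b)))

phi = np.array([0.065, 0.157, 0.342, 0.436])    # connection probabilities from tractography
print(prior_variance(phi, a=12, b=-4))           # anatomically informed (intuitive) mapping
print(prior_variance(phi, a=0, b=0))             # flat mapping: anatomically uninformed priors
```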
[Figure: log group Bayes factor and posterior model probability for the 64 mappings; the best-scoring models are those with anatomically informed priors of an intuitive form]
Models with anatomically informed priors (of an intuitive form) were clearly superior to anatomically uninformed ones: Bayes factor > 10^9
Methods papers: DCM for fMRI and BMS – part 1
• Chumbley JR, Friston KJ, Fearn T, Kiebel SJ (2007) A Metropolis-Hastings algorithm for dynamic causal models. NeuroImage 38: 478-487.
• Daunizeau J, Friston KJ, Kiebel SJ (2009) Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models. Physica D 238: 2089-2118.
• Daunizeau J, David O, Stephan KE (2011) Dynamic Causal Modelling: a critical review of the biophysical and statistical foundations. NeuroImage, in press.
• Friston KJ, Harrison L, Penny W (2003) Dynamic causal modelling. NeuroImage 19: 1273-1302.
• Friston KJ, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W (2007) Variational free energy and the Laplace approximation. NeuroImage 34: 220-234.
• Friston KJ, Stephan KE, Li B, Daunizeau J (2010) Generalised filtering. Mathematical Problems in Engineering 2010: 621670.
• Friston KJ, Li B, Daunizeau J, Stephan KE (2011) Network discovery with DCM. NeuroImage 56: 1202-1221.
• Friston KJ, Penny WD (2011) Post hoc model selection. NeuroImage, in press.
• Kasess CH, Stephan KE, Weissenbacher A, Pezawas L, Moser E, Windischberger C (2010) Multi-subject analyses with dynamic causal modeling. NeuroImage 49: 3065-3074.
• Kiebel SJ, Kloppel S, Weiskopf N, Friston KJ (2007) Dynamic causal modeling: a generative model of slice timing in fMRI. NeuroImage 34: 1487-1496.
• Li B, Daunizeau J, Stephan KE, Penny WD, Friston KJ (2011) Stochastic DCM and generalised filtering. NeuroImage, in press.
• Marreiros AC, Kiebel SJ, Friston KJ (2008) Dynamic causal modelling for fMRI: a two-state model. NeuroImage 39: 269-278.
Methods papers: DCM for fMRI and BMS – part 2
• Penny WD, Stephan KE, Mechelli A, Friston KJ (2004a) Comparing dynamic causal models. NeuroImage 22: 1157-1172.
• Penny WD, Stephan KE, Mechelli A, Friston KJ (2004b) Modelling functional integration: a comparison of structural equation and dynamic causal models. NeuroImage 23 Suppl 1: S264-274.
• Penny WD, Stephan KE, Daunizeau J, Joao M, Friston K, Schofield T, Leff AP (2010) Comparing families of dynamic causal models. PLoS Computational Biology 6: e1000709.
• Stephan KE, Harrison LM, Penny WD, Friston KJ (2004) Biophysical models of fMRI responses. Curr Opin Neurobiol 14: 629-635.
• Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ (2007) Comparing hemodynamic models with DCM. NeuroImage 38: 387-401.
• Stephan KE, Harrison LM, Kiebel SJ, David O, Penny WD, Friston KJ (2007) Dynamic causal models of neural system dynamics: current state and future extensions. J Biosci 32: 129-144.
• Stephan KE, Kasper L, Harrison LM, Daunizeau J, den Ouden HE, Breakspear M, Friston KJ (2008) Nonlinear dynamic causal models for fMRI. NeuroImage 42: 649-662.
• Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ (2009a) Bayesian model selection for group studies. NeuroImage 46: 1004-1017.
• Stephan KE, Tittgemeyer M, Knösche TR, Moran RJ, Friston KJ (2009b) Tractography-based priors for dynamic causal models. NeuroImage 47: 1628-1638.
• Stephan KE, Penny WD, Moran RJ, den Ouden HEM, Daunizeau J, Friston KJ (2010) Ten simple rules for dynamic causal modelling. NeuroImage 49: 3099-3109.
Thank you