
The value of calibration for decision making
OR
Does it matter how good the model is?
Graham Medley
Professor of Infectious Disease Modelling
London School of Hygiene and Tropical Medicine
Models are Essential in Public Health
• Trials cannot always be done
– Complexity (numbers of permutations & combinations)
– Feasibility / practicality / geographic scale / timescale
– Cost
– All of these interact with ethical issues
• Models form the basis of public health decision making in many areas
• Models do not negate the need for data (surveillance) and research
– Quite the opposite
• Models provide a framework for making informed guesses of the impact of interventions
based on explicit assumptions, data, and implicit assumptions
“All models are wrong...
• Depends on your view of model correctness
• If a model is an exact law, like C = πd, then all models are wrong
• If a model is a statistical description, like y = c + mx + ε, then all models are (more or less) right
Correctness of models
• Models typically don’t include…
– Drivers of behaviour
• E.g. behavioural disinhibition, where the intervention itself changes the risk of infection
– Correlations between behaviours
• E.g. children with many siblings have more extra-household contacts
– Genetic heterogeneity; Migration / movement; Multiple diseases; Political
upheavals / economic crises
• … so will never get to the same level as gold standard models in engineering
• But the important heterogeneities / correlations can be captured to provide
an overall view of the dynamic behaviour of the system
“…but some are useful”
• Depends on your view of the role of modelling:
– If it is quantitative prediction (as in bridge-building), then all models in public health are useless
– If it is acting as a guide to support the best decision given everything, then all models are (potentially) useful
• Models typically (ideally)
– Sit within a multidisciplinary framework (clinicians, economists, public health etc.)
– Inform decisions rather than make them
– Can be trumped by political imperatives
• The usefulness of the model is not determined by the model, but by the
context in which it is used
Can we make models correcter?
• The impetus is to make models more accurate
– To aim for the gold standards
• Takes the pressure away from the decision makers
– Politicians do not decide bridge structures
– “The computer says no”
• How can we know when the model is sufficiently
correct/accurate to base public health decisions on it?
Sources of uncertainty
• Model output = Model structure + parameters
• Parameters = Model structure + data (fitting)
– Structure and parameters are not independent (see the sketch below)
• Model structure = compromise between logistical feasibility and knowledge
– The “art” of modelling
• Data = Bias
– Never enough from the same place & time
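A minimal sketch of the point that structure and parameters are not independent. The weekly case counts and both model forms here are invented for illustration: fitting two different structures to the same data yields two different “growth rate” parameters, so a parameter estimate only has meaning inside the structure that produced it.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical weekly case counts from a single outbreak.
weeks = np.arange(10)
cases = np.array([5, 8, 13, 20, 30, 44, 60, 78, 95, 108])

def exponential(t, c, r):           # structure 1: unchecked growth
    return c * np.exp(r * t)

def logistic(t, K, r, t0):          # structure 2: saturating growth
    return K / (1 + np.exp(-r * (t - t0)))

p_exp, _  = curve_fit(exponential, weeks, cases, p0=[5, 0.3])
p_logi, _ = curve_fit(logistic, weeks, cases, p0=[150, 0.5, 5])

# Same data, two structures, two different "growth rates":
print(f"growth rate under exponential structure: {p_exp[1]:.2f} / week")
print(f"growth rate under logistic structure:    {p_logi[1]:.2f} / week")
```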
Measures of accuracy: Validation
• Demonstration that the model has (limited, short-term) value in
making predictions
• Frequently inadequate
– Inappropriate data, bias etc.
– “Where models are given room for validation, they will find it”
• What credibility does it add to be able to predict the past?
– We do not have a measure of credibility – it’s a judgement
– We accept a model as being “good enough” when we think it’s good enough
Multiple model comparisons
• Uncertainty in model structure can be assessed by comparing
many different models
– Multiple model comparisons are becoming common
• Differences in models are not pre-determined
– Common prejudices / complexities
– Extant model structures are greatly influenced by the data available
• Data / funding / management: Competition
Multi-model Comparisons: paradox
• Tend to show “regression to the mean”
– Nobody likes being an outlier…
• How to combine outputs? (see the sketch below)
– Wish to have variability between models
• reflecting structural variability
– Wish to have consensus between models to support the decision
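A minimal sketch of the combination problem; all projections below are invented. The between-model spread carries the structural uncertainty we want to preserve, while the pooled ensemble gives the single consensus number the decision-maker asks for.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical projections of cases averted by one intervention, from four
# structurally different models, each carrying its own parameter uncertainty.
projections = {
    "model_A": rng.normal(120, 15, 500),
    "model_B": rng.normal(150, 10, 500),
    "model_C": rng.normal(95, 25, 500),
    "model_D": rng.normal(140, 12, 500),
}

model_means = np.array([s.mean() for s in projections.values()])
pooled = np.concatenate(list(projections.values()))

# The between-model spread is the structural uncertainty we want to see...
print("model means:", np.round(model_means, 1))
print("between-model SD:", round(model_means.std(), 1))
# ...while the pooled ensemble is the consensus the decision wants.
print("ensemble mean:", round(pooled.mean(), 1),
      "| 90% interval:", np.round(np.percentile(pooled, [5, 95]), 1))
```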
Structure vs Parameters
• Parameter variability and structural variability seen as
different things
• Nested models with homotopic parameters blur this
distinction
– Example of model structures for respiratory syncytial virus
(RSV)…
– Four homotopy parameters...
[Figure: the family of nested model structures for RSV. Compartments: Susceptible (S), Resistant (R), and Infected (I), with primary and secondary infection distinguished. Panels: 1: SIS; 2: SIR; 3: SIRS; 4: Partial immunity; 5: Waning partial immunity; 6: …; 7: …; 8: Full model: waning partial immunity with altered secondary infections.]
Although not all models are nested within each other, they are all nested within the full model (8) and can be compared to that model (see the code sketch below).
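A minimal sketch of the homotopy idea, simplified relative to White et al. (2007) and with illustrative parameter names: one ODE family in which setting the homotopy parameters to boundary values recovers the named structures, so “structural” comparison becomes parameter comparison within the full model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def nested_model(t, y, beta, gamma, alpha, omega, eta, rho):
    """One ODE family; the homotopy parameters select the structure.
    alpha: relative susceptibility of R   (0 = solid immunity, 1 = none)
    omega: waning rate of immunity        (0 = permanent)
    eta:   relative infectiousness of secondary infections
    rho:   relative recovery rate of secondary infections
    """
    S, I1, I2, R = y
    lam = beta * (I1 + eta * I2)                    # force of infection
    dS  = -lam * S + omega * R
    dI1 =  lam * S - gamma * I1
    dI2 =  alpha * lam * R - rho * gamma * I2
    dR  =  gamma * (I1 + rho * I2) - (alpha * lam + omega) * R
    return [dS, dI1, dI2, dR]

# Named structures are just points in homotopy-parameter space:
structures = {                       # (alpha, omega, eta, rho)
    "1: SIS (no immunity)":          (1.0, 0.00, 1.0, 1.0),
    "2: SIR":                        (0.0, 0.00, 1.0, 1.0),
    "3: SIRS":                       (0.0, 0.01, 1.0, 1.0),
    "4: partial immunity":           (0.4, 0.00, 1.0, 1.0),
    "5: waning partial immunity":    (0.4, 0.01, 1.0, 1.0),
    "8: full model":                 (0.4, 0.01, 0.5, 1.5),
}

beta, gamma = 0.3, 0.1
y0 = [0.99, 0.01, 0.0, 0.0]
for name, (alpha, omega, eta, rho) in structures.items():
    sol = solve_ivp(nested_model, (0, 1000), y0,
                    args=(beta, gamma, alpha, omega, eta, rho))
    print(f"{name:30s} final susceptible fraction: {sol.y[0, -1]:.3f}")
```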
Sources of Uncertainty
• The difference between model structure and
parameter values is blurred for multiple model
comparisons
– Most current models designed for policy support are
sufficiently complex that their “structure” is very hard to
define
• More algorithms than “models”
• That’s why they get names
Econo-epidemiology
• Interaction with health economics
– Often the niceties of the biological and epidemiological
model are “integrated out”
– The details of the model often don’t matter to the decision
– The earlier you find out that, say, the proportion becoming carriers is irrelevant to the decision, the better (see the sketch below)
• Epidemiology: what “drives” the dynamics?
• Economics: what “drives” the decision?
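A toy illustration of “integrating out” epidemiological detail; every quantity below is hypothetical. Carriers contribute cases both with and without the programme, so the carrier proportion cancels out of the cost per case averted and cannot change the decision.

```python
def cost_per_case_averted(carrier_prop, efficacy=0.6,
                          baseline_cases=1000.0, programme_cost=50_000.0):
    """Toy epi-economic model (every number here is hypothetical).
    Carrier-derived cases occur with and without the programme, so the
    carrier proportion cancels out of the decision quantity."""
    carrier_cases = baseline_cases * carrier_prop * 0.5
    cases_without = baseline_cases + carrier_cases
    cases_with    = baseline_cases * (1 - efficacy) + carrier_cases
    averted = cases_without - cases_with
    return programme_cost / averted

threshold = 150.0   # hypothetical willingness to pay per case averted
for p in (0.0, 0.1, 0.3, 0.5):
    icer = cost_per_case_averted(p)
    print(f"carrier proportion {p:.1f}: "
          f"cost/case averted = {icer:.1f}, fund programme = {icer < threshold}")
```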
Focus on the decision
• Presumes you know the decision…
• But in many circumstances, the decision is clear
– Should we vaccinate boys against HPV given that we are
vaccinating girls already?
– Which people should be offered PrEP against HIV?
• Seeking robustness of the decision, not the right model
[Diagram: three classes of model]
• Models to understand transmission dynamics and predict outcomes of interventions: heterogeneity
• Models to understand and predict economic outcomes of interventions given epidemiology: costs/savings, externalities, equity, equality
• Models to understand and predict EPIDEMIOLOGIC and ECONOMIC outcomes of interventions: costs/savings, heterogeneity / equality
Model Complexity is not the Solution
• We are getting better at predicting transmission dynamics
– But prediction remains an inherently difficult problem
– There is likely some limit to predictability
• Probabilistic sensitivity analysis and uncertainty analysis are key analytical steps (see the sketch below)
– Don’t confuse model complexity with model analysis
• Build model complexity and test the decision at each level
– Each model should be its own multi-model, parameter-uncertainty comparison
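A minimal probabilistic sensitivity analysis sketch; the parameter ranges and costs are hypothetical. The point is to report how often the decision itself flips across the uncertainty, not just how wide the model outputs are.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def attack_rate(beta, gamma, coverage):
    """Final fraction infected in a simple SIR, with a fraction
    `coverage` vaccinated (moved to R) before the epidemic."""
    def sir(t, y):
        S, I, R = y
        return [-beta * S * I, beta * S * I - gamma * I, gamma * I]
    y0 = [1 - coverage - 1e-4, 1e-4, coverage]
    sol = solve_ivp(sir, (0, 2000), y0)
    return sol.y[2, -1] - coverage   # exclude those who started vaccinated

# PSA: sample parameters from (hypothetical) ranges and ask how often
# the *decision* changes, not how wide the epidemic projections are.
cost_per_case, cost_per_dose, coverage = 100.0, 20.0, 0.3
prefer_vaccination, draws = 0, 200
for _ in range(draws):
    beta  = rng.uniform(0.15, 0.5)
    gamma = rng.uniform(0.05, 0.2)
    averted = attack_rate(beta, gamma, 0.0) - attack_rate(beta, gamma, coverage)
    prefer_vaccination += averted * cost_per_case > coverage * cost_per_dose
print(f"vaccination preferred in {prefer_vaccination / draws:.0%} of draws")
```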
Conclusions I
• Models are essential tools for public health, but they are new
and we are still learning how to use them
• The usefulness of the model is not determined by the model,
but by the context in which it is used
• The correctness of a model
– Needs further thought – quantitative vs. qualitative accuracy?
– A measure of validation would be useful
– Additional data frequently does not help
Conclusions II
• Models as tools in public health – “fidelity” (Behrend)
– Process control – data integrity (like clinical trials), GitHub
– “True to” uncertainty: all sources accounted for and propagated
– Adequacy / correctness / quality control steps (as engineering)
• Research models are different from decision models
• Model the decision
– Make epi-economic models
– Assess the decision as complexity increases
Some References
• Garnett, G.P. et al. (2011) Mathematical models in the evaluation of health programmes. The Lancet 378, 515–525.
• White, L.J. et al. (2007) Understanding the transmission dynamics of respiratory syncytial virus using multiple time series and nested models. Mathematical Biosciences 209, 222–239.
• Green, L.E. & Medley, G.F. (2002) Mathematical modelling of the foot and mouth disease epidemic of 2001: strengths and weaknesses. Research in Veterinary Science 73(3), 201–205.