OBSERVATIONAL MEDICAL OUTCOMES PARTNERSHIP
Informatics Opportunities for Exploring the
Real-World Effects of Medical Products: Lessons
from the Observational Medical Outcomes
Partnership
Patrick Ryan
on behalf of OMOP Research Team
October 19, 2010
FDAAA’s impact on pharmacovigilance
• Section 905 establishes the Sentinel network of distributed observational databases (administrative claims and electronic health records) to monitor the effects of medicines post-approval
Outstanding questions for active surveillance

[Diagram: open questions spanning governance, data, methods, architecture, and technology, with performance and feasibility as cross-cutting concerns]
• Governance: What are the keys to a successful public-private partnership? How do we maintain collaborations and engage the research community?
• Data: Which types of data (administrative claims, electronic health records)? Which sources (healthcare providers, insurers, data aggregators)? What are viable data access models: centralized? distributed?
• Methods: What are appropriate analyses for hypothesis generating? For hypothesis strengthening?
• Architecture: What is the appropriate infrastructure: hardware? software? processes? policies?
• Technology: What are best practices for protecting data?
Observational Medical Outcomes Partnership
Established to inform the appropriate use of observational
healthcare databases for active surveillance by:
• Conducting methodological research to empirically evaluate the
performance of alternative methods on their ability to identify true
drug safety issues
• Developing tools and capabilities for transforming, characterizing,
and analyzing disparate data sources
• Establishing a shared resource so that the broader research
community can collaboratively advance the science
Partnership Stakeholders
A public-private partnership between industry, FDA and FNIH.
Stakeholder Groups
• FDA – Executive Board [chair], Advisory Boards, PI
• Industry – Executive and Advisory Boards, two PIs
• FNIH – Partnership and Project Management, Research Core
Staffing
• Academic Centers & Healthcare Providers – Executive and
Advisory Boards, three PIs, Distributed Research Partners,
Methods Collaborators
• Database Owners – Executive Board, Advisory Board, PI
• Consumer and Patient Advocacy Organizations – Executive
and Advisory Board
• US Veterans Administration – Distributed research partner
OMOP Data Community

[Diagram: the OMOP Extended Consortium surrounding the OMOP Research Core]
• Centralized data (OMOP Research Lab): Thomson Reuters (MSLR, MDCD, MDCR, CCAE), GE
• Distributed network: Humana HSRC, Regenstrief INPC, Partners HealthCare Research Lab, i3 Drug Safety*, SDI Health, VA MedSAFE
OMOP research experiment workflow

[Diagram: methods from the OMOP Methods Library (Methods 1–4) are run against data in the Common Data Model for feasibility testing, across defined outcomes and drugs]

Health Outcomes of Interest
• Angioedema
• Aplastic Anemia
• Acute Liver Injury
• Bleeding
• GI Ulcer Hospitalization
• Hip Fracture
• Hospitalization
• Myocardial Infarction
• Mortality after MI
• Renal Failure

Drugs
• ACE Inhibitors
• Amphotericin B
• Antibiotics
• Antiepileptics
• Benzodiazepines
• Beta blockers
• Bisphosphonates
• Tricyclic antidepressants
• Typical antipsychotics
• Warfarin

Non-specified conditions
- All outcomes in condition terminology
- ‘Labeled events’ as reference: Warnings, Precautions, Adverse Reactions, Postmarketing Experience
OMOP Project Plan Progression

As we have been conducting the methodological research, all tools and processes we’ve developed along the way are made publicly available at: http://omop.fnih.org

[Diagram: the OMOP Data Community (Humana, Regenstrief, PHS, i3, SDI, VA) connected to the OMOP Research Core, whose deliverables include the OMOP CDM; HOI definitions (HOI_OCCURRENCE); DOI definitions (OMOP_DRUG_ERA, OMOP_CONDITION_ERA); characterization tools OSCAR, NATHAN, GERALD, and RICO; and analysis methods for feasibility assessment and for Phase 3 evaluation. The OMOP Methods Library (Methods 1–4: self-controlled case series, Bayesian logistic regression, observational screening, case-control estimation) yields forest plots of effect estimates (relative risks, odds ratios, and screening rate ratios, roughly 0.5–2.5) across sources (GE, Regenstrief, VA/MedSAFE, Thomson MSLR, Humana) with a meta-analysis summary]
Informatics tools developed to support a systematic
analysis process
• Common Data Model
– http://omop.fnih.org/CDMandTerminologies
• Standardized Terminology
– http://omop.fnih.org/Vocabularies
• Data characteristics tools
– OSCAR - http://omop.fnih.org/OSCAR
– NATHAN - http://omop.fnih.org/NATHAN
• Health Outcomes of Interest library
– http://omop.fnih.org/HOI
• Methods library
– http://omop.fnih.org/MethodsLibrary
• Simulated data
– http://omop.fnih.org/OSIM
OMOP Analysis Process

[Diagram: Source 1, Source 2, Source 3 → Transformation to OMOP common data model → Analysis method → Analysis results]
Establishing a common data model
• Developed with broad stakeholder input
• Designed to accommodate disparate types of data (claims and EHRs)
• Applied successfully across OMOP data community
http://omop.fnih.org/CDMandTerminologies
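To make the idea concrete, here is a minimal sketch of transforming a raw claims record into CDM-style person, drug, and condition rows. The field names are illustrative simplifications, not the official schema.

```python
# Minimal sketch: normalizing a raw claims record into simplified
# CDM-style rows. Field names are illustrative, not the official schema.

def to_cdm(raw_claim):
    """Split a source claims record into person, drug, and condition rows."""
    person = {
        "person_id": raw_claim["member_id"],
        "gender": raw_claim["sex"],
        "year_of_birth": raw_claim["birth_year"],
    }
    drug_exposures = [
        {"person_id": raw_claim["member_id"],
         "source_code": ndc,                    # e.g. an NDC code
         "start_date": raw_claim["fill_date"]}
        for ndc in raw_claim.get("ndc_codes", [])
    ]
    condition_occurrences = [
        {"person_id": raw_claim["member_id"],
         "source_code": dx,                     # e.g. an ICD-9-CM code
         "start_date": raw_claim["service_date"]}
        for dx in raw_claim.get("dx_codes", [])
    ]
    return person, drug_exposures, condition_occurrences

claim = {"member_id": 42, "sex": "F", "birth_year": 1950,
         "ndc_codes": ["00093-0058-01"], "dx_codes": ["995.1"],
         "fill_date": "2009-03-01", "service_date": "2009-03-01"}
person, drugs, conditions = to_cdm(claim)
```

The same target structure can receive EHR records as well, which is what lets one analysis program run unchanged against claims and clinical sources.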
Standardizing terminologies to accommodate disparate observational data sources

Standardizing conditions (source codes mapped to standard concepts and classifications):
• MedDRA: System Organ Class (Level 5) → High Level Group Terms (Level 4) → High Level Terms (Level 3) → Preferred Terms (Level 2) → Low-level Terms (Level 1)
• SNOMED-CT: Top-level classification (Level 3) → Higher-level classifications (Level 2) → Low-level concepts (Level 1)
• Source codes: ICD-9-CM, Read, Oxmis

Standardizing drugs:
• NDF-RT: Top-level concepts (Level 4) → Classifications (Level 3)
• RxNorm: Ingredients (Level 2) → Low-level drugs (Level 1)
• Source codes: GPI, NDC, Multum, HCPCS*, CPT-4*, ICD-9-Proc*
• Mappings: existing, de novo, and derived

http://omop.fnih.org/Vocabularies
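A toy sketch of the mapping step: two different source vocabularies resolve to one standard concept, so a single query covers both. The mapping table, the Read code, and the standard concept ID here are hypothetical placeholders, not entries from the real OMOP vocabulary.

```python
# Sketch of source-code normalization across vocabularies. The mapping
# table below is a tiny hypothetical excerpt: "STD:0001" is an invented
# standard concept ID and "XE0Zz" an invented Read code.

SOURCE_TO_STANDARD = {
    ("ICD-9-CM", "995.1"): "STD:0001",  # ICD-9-CM 995.1: angioneurotic edema
    ("Read", "XE0Zz"):     "STD:0001",  # hypothetical Read code, same concept
}

def standardize(vocabulary, code):
    """Return the standard concept for a source code, or None if unmapped."""
    return SOURCE_TO_STANDARD.get((vocabulary, code))

# Two different source codings resolve to one analyzable concept:
assert standardize("ICD-9-CM", "995.1") == standardize("Read", "XE0Zz")
```

Once every source is expressed in the standard concepts, an outcome definition written once (e.g. for angioedema) applies to claims and EHR sources alike.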
Observational Source Characteristics
Analysis Report (OSCAR)
• Provides a systematic approach for summarizing observational healthcare
data stored in the OMOP common data model
• Creates a structured output dataset of summary statistics of each table and
field in the CDM
– Categorical variables: one-, two-, and three-way stratified counts (e.g. number of
persons with each condition by gender)
– Continuous variables: distribution characteristics (min, mean, median, stdev, max, 25th/75th percentile), e.g. observation period length
– OSCAR summaries from each source can be brought together to do comparative
analyses
• Uses
– Validation of transformation from raw data to OMOP common data model
– Comparisons between data sources
– Comparison of overall database to specific subpopulations of interest (such as
people exposed to a particular drug or people with a specific condition)
– Providing context for interpreting and analyzing findings of drug safety studies
http://omop.fnih.org/OSCAR
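A minimal sketch of OSCAR-style summaries using only the Python standard library: a stratified count for a categorical variable and distribution statistics for a continuous one. The table and column names are illustrative, not the CDM schema.

```python
# Sketch of OSCAR-style summaries over a CDM-like table: stratified counts
# for a categorical variable and distribution statistics for a continuous
# one. Column names and rows are illustrative.
from collections import Counter
from statistics import mean, median, stdev

persons = [
    {"gender": "F", "condition": "MI", "obs_days": 730},
    {"gender": "F", "condition": "hip fracture", "obs_days": 365},
    {"gender": "M", "condition": "MI", "obs_days": 1095},
    {"gender": "M", "condition": "MI", "obs_days": 400},
]

# Two-way stratified count: number of persons with each condition, by gender
counts = Counter((p["condition"], p["gender"]) for p in persons)

# Distribution characteristics of a continuous field (observation period length)
days = sorted(p["obs_days"] for p in persons)
summary = {"min": days[0], "max": days[-1],
           "mean": mean(days), "median": median(days), "stdev": stdev(days)}
```

Because each source emits the same structured output, the per-source summaries can be set side by side for the comparative analyses described above.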
Characterization example: Drug prevalence
• Context: Typically we evaluate one drug at a time, against one
database at a time
• In active surveillance, we need the ability to explore any medical product across a network of disparate data sources
• Exploration: what is the prevalence of all medical products
across all data sources?
– Crude prevalence: # of persons with at least one exposure / # of
persons
– Standardized prevalence: Adjusted by age and gender to US Census
– Age-by-gender strata-specific prevalence
• We are not attempting a precise measure of exposure rates for a given product, but instead trying to understand general patterns across the data community for all medical products
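The prevalence measures above can be sketched directly; the strata and reference weights (standing in for US Census age-by-gender proportions) are hypothetical.

```python
# Sketch of crude vs. directly standardized prevalence. Strata and
# reference weights are hypothetical, standing in for US Census
# age-by-gender proportions.

def crude_prevalence(exposed, total):
    """# of persons with at least one exposure / # of persons."""
    return exposed / total

def standardized_prevalence(strata, weights):
    """Direct standardization: weight each stratum-specific prevalence
    by the reference population's share of that stratum."""
    return sum((s["exposed"] / s["total"]) * weights[key]
               for key, s in strata.items())

strata = {("F", "40-64"): {"exposed": 30, "total": 100},   # prevalence 0.30
          ("M", "40-64"): {"exposed": 5,  "total": 50}}    # prevalence 0.10
weights = {("F", "40-64"): 0.5, ("M", "40-64"): 0.5}

crude = crude_prevalence(35, 150)                # 35/150, ~0.233
std = standardized_prevalence(strata, weights)   # 0.30*0.5 + 0.10*0.5 = 0.20
```

The gap between the two numbers shows why standardization matters when comparing sources with different demographic mixes.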
Standardized drug prevalence
(age*gender stratified annualized rates;
standardized to US Census)
Standardized prevalence for select drugs
Source-specific drug prevalence, by year by age
by gender
OMOP Methods Library
• Standardized procedures are
being developed to analyze any
drug and any condition
• All programs are being made publicly available to promote transparency and consistency in research
• Methods will be evaluated in
OMOP research against specific
test case drugs and Health
Outcomes of Interest
http://omop.fnih.org/MethodsLibrary
What methods are most appropriate for signal refinement?
Multiple alternative approaches have been identified that deserve empirical testing to measure performance
Method name                                            Parameter      Release
                                                       combinations   date
Disproportionality analysis (DP)                       112            15-Mar-10
Univariate self-controlled case series (USCCS)         64             2-Apr-10
Observational screening (OS)                           162            8-Apr-10
Multi-set case control estimation (MSCCE)              32             16-Apr-10
Bayesian logistic regression (BLR)                     24             21-Apr-10
Case-control surveillance (CCS)                        48             2-May-10
IC Temporal Pattern Discovery (ICTPD)                  84             23-May-10
Case-crossover (CCO)                                   48             1-Jun-10
HSIU cohort method (HSIU)                              6              8-Jun-10
Maximized Sequential Probability Ratio Test (MSPRT)    144            25-Jul-10
High-dimensional propensity score (HDPS)               144            6-Aug-10
Conditional sequential sampling procedure (CSSP)       144            30-Aug-10
Statistical relational learning (SRL)
Incident user design (IUD-HOI)
http://omop.fnih.org/MethodsLibrary
Studying method performance for ‘signal refinement’
• Apply a method with a specific set of parameter settings to a database for a drug-outcome pair that has a prior suspicion of being potentially related
• Example: run Method X on Database A for ACE inhibitors – Angioedema

[Figure: hypothetical relative-risk estimate, Database A - Method X]

• Challenges:
• How to put the resulting score in context?
• What if we modified one of the method’s parameters?
• What if we applied the method to a different database?
• If we had applied the same method to other drug-outcome pairs, what types of scores would we expect?
• How many other true positives would get a RR > 1.8?
• How many false positives would be identified with a threshold of 1.8?
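The last two questions reduce to counting scores above a threshold among known positives and negative controls. A sketch, with hypothetical relative-risk scores:

```python
# Sketch: given scores (e.g. relative risks) for drug-outcome pairs with
# known associations (positives) and presumed-unrelated pairs (negative
# controls), count how many clear a threshold. All scores are hypothetical.

def threshold_counts(positives, negatives, threshold):
    tp = sum(rr > threshold for rr in positives)   # true positives retained
    fp = sum(rr > threshold for rr in negatives)   # false positives flagged
    return {"true_positives": tp, "false_positives": fp,
            "sensitivity": tp / len(positives),
            "false_positive_rate": fp / len(negatives)}

positives = [2.4, 1.9, 1.5, 3.1]   # pairs with known associations
negatives = [1.1, 0.9, 2.0, 1.3]   # presumed-unrelated pairs
result = threshold_counts(positives, negatives, 1.8)
# With RR > 1.8: 3 of 4 positives and 1 of 4 negatives exceed the threshold
```

Sweeping the threshold over all values produces exactly the ROC analysis discussed later in the talk.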
How do effect estimates vary by database?

[Figure: hypothetical relative-risk estimates from Method X applied to Databases A, B, C, and D]

• Each database may have unique source population characteristics that can influence method behavior, including:
– Sample size
– Length and type of longitudinal data capture
– Population demographics, such as age, gender
– Disease severity, including comorbidities, concomitant medications and health service utilization patterns
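When each database yields its own estimate, a pooled summary like the meta-analysis shown in the OMOP forest plots can be computed. A common approach is fixed-effect inverse-variance weighting on the log scale; the per-database estimates and standard errors below are hypothetical.

```python
# Sketch of fixed-effect inverse-variance meta-analysis on the log scale.
# Per-database relative risks and standard errors are hypothetical.
import math

def fixed_effect_meta(estimates):
    """estimates: list of (relative_risk, standard_error_of_log_rr)."""
    weights = [1 / se ** 2 for _, se in estimates]   # precision weights
    pooled_log = sum(w * math.log(rr)
                     for (rr, _), w in zip(estimates, weights)) / sum(weights)
    return math.exp(pooled_log)

per_database = [(1.6, 0.20), (2.1, 0.30), (1.4, 0.25)]  # e.g. Databases A-C
pooled = fixed_effect_meta(per_database)
```

A fixed-effect model assumes one common true effect; given the heterogeneity listed above, a random-effects model may be more defensible in practice.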
How do estimates vary by method parameter settings?

[Figure: hypothetical relative-risk estimates, Database C - Method X, across parameter configurations 1–17]

• Performance can be sensitive to various factors, including:
– Length of washout period to identify incident use
– Definition of time-at-risk
– Choice of comparator
– Number and types of covariates to include in propensity score modeling
– Statistical approach for adjustment: matching vs. stratification vs. multivariate modeling
• ‘Optimal’ settings may vary by database and/or the drug-outcome pair in question
How does a method perform against other ‘benchmark’ true positives and negative controls?

[Figure: hypothetical relative-risk estimates, Database A - Method X. Evaluate how estimates compare with other ‘true positive’ examples, but also calibrate against negative controls]

• It is important to establish the operating characteristics of any method, when applied across a network of databases, as part of ‘validation before adoption for signal refinement’
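One simple way to use negative controls for context is to ask where the estimate of interest falls within the distribution of estimates for drug-outcome pairs believed to be unrelated. A sketch, with hypothetical numbers:

```python
# Sketch of putting a score in context with negative controls: what
# fraction of presumed-unrelated pairs score at least as high as the
# estimate of interest? All numbers are hypothetical.

def fraction_of_negatives_exceeding(estimate, negative_control_estimates):
    """An estimate is less convincing if many negative controls score as high."""
    n = sum(nc >= estimate for nc in negative_control_estimates)
    return n / len(negative_control_estimates)

negative_controls = [0.8, 1.0, 1.1, 1.2, 0.9, 1.4, 1.0, 1.3]
frac = fraction_of_negatives_exceeding(1.8, negative_controls)  # 0.0
```

If few negative controls reach the observed score, the signal is harder to attribute to systematic bias alone; if many do, the score says little on its own.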
Receiver Operating Characteristic (ROC) curve

[Figure: hypothetical ROC curve]

• ROC plots sensitivity (recall) vs. false positive rate (FPR)
• Area under the ROC curve (AUC) provides the probability that the method will score a randomly chosen true positive drug-outcome pair higher than a random unrelated drug-outcome pair
• AUC=1 is a perfect predictive model; AUC=0.50 is random guessing (the diagonal line)
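The AUC definition above has a direct rank-based computation: over all positive/negative pairs, count how often the positive scores higher, with ties counting half. The scores below are hypothetical.

```python
# Sketch of rank-based AUC: the probability that a randomly chosen true
# positive scores higher than a randomly chosen negative control, with
# ties counting half. Scores are hypothetical.

def auc(positive_scores, negative_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in positive_scores for n in negative_scores)
    return wins / (len(positive_scores) * len(negative_scores))

positives = [2.4, 1.9, 1.5, 3.1]   # pairs with known associations
negatives = [1.1, 0.9, 2.0, 1.3]   # presumed-unrelated pairs
score = auc(positives, negatives)
# A perfect method would give 1.0; random guessing gives about 0.5
```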
Visualizing performance of alternative methods
across a network of databases
Ongoing opportunities for clinical informatics
• Developing a robust, systematic process for active
surveillance requires innovative informatics solutions
– Systems architecture and data access strategies for integrating
networks of disparate data sources
– Common data model to standardize data structure and
terminologies
– Standardized procedures to characterize data sources and
patient populations of interest
– Analytical methods to identify and evaluate the effects of
medical products
– Visualization tools to enable interactive exploration of analysis
results and provide a consistent framework for effectively
communicating with all stakeholders
• Further research and development should have broader
applications beyond active surveillance to other
healthcare opportunities
Contact information
Patrick Ryan
Research Investigator
[email protected]
Thomas Scarnecchia
Executive Director
[email protected]
Emily Welebob
Senior Program Manager, Research
[email protected]
OMOP website: http://omop.fnih.org