Issue Description to RRT Members (May 2012)

1100 1st Street, NE, 12th Floor
Washington, DC 20002-4221
Telephone (202) 484-9220
Fax (202) 863-1763
www.mathematica-mpr.com
MEMORANDUM

TO: Stefan Gildemeister, MDH
FROM: Angela Merrill, Nyna Williams, and Beny Wu
DATE: 5/29/2012
SUBJECT: Revised Hospital Total Care Quality Scores Methodology and Results

I. Executive Summary
The Minnesota Department of Health (MDH) is performing a revised provider peer
grouping (PPG) analysis for hospitals, to address issues regarding the research database on cost
of care for the initial hospital run. This also provided the opportunity to consider feedback from
hospitals on the methodology for assigning hospital total care quality scores, which was based on
recommendations from the PPG advisory group and modeled after the federal Hospital Value
Based Purchasing (HVBP) program. In particular, there was concern that the method was not
working well for measures with uniformly high performance (topped out) and other measures
where there was little variation in hospital performance.
MDH and Mathematica examined the data further and decided to make broader changes to the hospital total care quality score methodology. MDH intends to revisit
some of these methodological changes with the advisory group that will be convened this
summer. This memo describes the key changes and the results for the total care hospital scores.
The two key issues and revised approach and results are:
• MDH and Mathematica re-examined how the previous approach of assigning points
on a relative scale forced variation among otherwise similarly performing hospitals.
This was a particular problem for “topped out” quality measures, but also impacted
many other quality measures with relatively little variation. In response, MDH and
Mathematica tested a number of approaches for assigning points on an absolute
scale.
Results. MDH will use an absolute scoring approach to assign points in the revised
hospital reports. We will use an approach that assigns zero points to hospitals that
have a performance rate of less than 40 percent; 10 points for a performance rate of
98 percent or more; and from 1 to 9 points based on evenly spaced intervals between
40 and 98 percent. This approach results in higher point assignment on almost all
measures compared to the relative scoring approach in the first iteration; for
example, mean scores are above 9 for many measures, and all hospitals receive ten points for the most topped-out measures. This point assignment recognizes
Minnesota hospitals’ high performance on these quality measures.
• The goal of including as many hospitals as possible in PPG while requiring reporting on the same set of measures across Inpatient Prospective Payment System (IPPS) and critical access (CAH) hospitals resulted in measure domain scores being based on very few measures for a
hospitals resulted in measure domain scores being based on very few measures for a
small number of CAHs. MDH combined the three outcome subdomains into a single
domain and established a stricter requirement for having sufficient numbers of
measures per domain. MDH tested different requirements and will require six
measures per domain (Process and Outcome) to receive a score.
Results. As a result of the stricter requirements on required measures in each
domain, 14 fewer CAHs (from 65 to 51) received a total quality score, though the
number of PPS hospitals remained constant. However, the average number of measures used in domain construction increased for CAHs, particularly within the Outcome domain, resulting in more representative domain scores.
These changes result in domain and total hospital quality scores that differ from the first
iteration of hospital reports:
• Overall domain and total care quality scores are significantly higher than in the first
iteration. For example, the average total quality score (out of a possible 100) is 85.9
among IPPS hospitals (compared to 36.4 in the first iteration) and 80.3 among CAHs
(compared to 40.5 in the first iteration).
• The variation across hospitals in domain and total quality scores is reduced; for
example, the total quality score for IPPS hospitals ranges from 79.8 to 89.6 and the
score for CAHs ranges from 68.0 to 90.4.
The remainder of this memo describes these issues and results in more detail.
II. Purpose of Memo and Background
This memo describes the revised Hospital Total Care Quality Score methodology and
results. The changes from the first iteration of hospital total care provider peer grouping (PPG)
reports described here respond to further analysis by MDH and MPR and to hospital stakeholder feedback about the scoring and compositing methodologies. In particular,
stakeholders were concerned that the relative scoring methodology—modeled after the federal
Hospital Value-Based Purchasing program (HVBP)—led to large differences in point assignment
for relatively close raw scores. This concern was particularly voiced about the inclusion of
‘topped out’ measures in the composite; however, it held true for many of the measures, due to
generally compressed performance distributions. Another concern expressed by stakeholders was
allowing scores for subdomains (process, inpatient complications, readmissions, and mortality)
to be based on very few measures for some hospitals.
To address these concerns, this memo describes the results of the following changes:
• Using absolute thresholds rather than relative thresholds to assign points to each measure (two options described below)
• Stricter requirements on the required number of measures per subdomain to receive a score
• The combination of the three formerly separate outcome subdomains (readmission, mortality, and inpatient complications) into one combined Outcome domain
These changes and other more minor changes are described in more detail within this memo.
III. Data and Methods
The second iteration of Hospital Total Care Quality Scores uses the same data sources,
quality measures, and time period as the first iteration.[1] While more recent data are available for
many of the measures, MDH thought it was important to maintain performance period
consistency with the available cost of care data for this revised analysis to ensure that the impact
of the methodological changes could be better understood. Future updates will use more recent
available data for costs and quality.
The basic approach to creating the composite remains the same from the first iteration of
reports. Quality composite scores are calculated separately among the two peer groups: inpatient
prospective payment (PPS) hospitals and critical access hospitals (CAHs). Creating the Total
Care Quality Score consists of four basic steps:
1. Preparing the raw quality data, including transforming negative outcome measures to
positive, and “imputing” measure rates for hospitals that report rates for individual
measures based on fewer cases than the minimum required number of cases.
2. Assigning 0 to 10 points to each hospital for each measure.
3. Combining measure points to create domain scores, calculated based on the number of measures the hospital has available. (Points based on imputed rates are only used when a hospital might otherwise not receive a domain score.)
4. Creating the Total Care Quality Score as the weighted average of the Process and
Outcome scores. A hospital has to have a score on each domain to get a total score.
[1] See: http://www.health.state.mn.us/healthreform/peer/hospitalmethodology.pdf
We describe the original approach and revised approach for each of these steps below.[2]
A. Data Preparation
In order to calculate composite measures, the rates for negative quality outcome measures
(i.e., those measures in which lower rates are more desired and higher rates are to be avoided) are
transformed to rates of positive outcomes, e.g. from 10% inpatient mortality to 90% inpatient
survival. This includes the 30-day readmission measures, 30-day and AHRQ inpatient mortality
measures, the Minnesota Hospital Association (MHA) infection rate for vaginal hysterectomy,
and the AHRQ inpatient complication measures.
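This transformation is simple arithmetic; the sketch below is a minimal illustration (the helper name is hypothetical, since the memo does not publish code):

```python
def to_positive_rate(negative_rate_pct):
    """Convert a negative-outcome rate to its positive complement.

    Example from the memo: a 10% inpatient mortality rate becomes
    a 90% inpatient survival rate. Rates are percentages (0-100).
    """
    return 100.0 - negative_rate_pct
```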
In order to include as many CAHs as possible in the domain and total quality score calculations,
methodology allows—in certain cases—the use of measures for which a hospital did not meet
the minimum case size restriction. However, before using these data, the rates are ‘imputed’. The
basic approach to “imputing” the results for hospitals with a small number of cases remains the
same from the first round of reports. This process can be considered a smoothing process where
a hospital’s rate is averaged with the peer group mean, with more weight placed on the peer
group mean as the hospital case size (N) decreases. None of the outcome measures for 30-day
readmission or 30-day mortality could be imputed because the data available for these measures
only included results if a hospital met the minimum case size of 25. In addition, for CAHs, there
were six measures that could not be imputed because of a lack of peer group.
Imputed rates are only used in cases in which a hospital does not meet the required
minimum number of measures in a domain, as described further below. While the imputation
methodology has not changed from the first iteration, more CAHs are included using imputed
measures due to the increased requirements on measures in each domain.
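The smoothing described above can be sketched as a weighted average. Note that the specific weight used here (case count divided by the minimum N, capped at 1) is an illustrative assumption, not MDH's published formula:

```python
def impute_rate(hospital_rate, n_cases, peer_mean, min_n):
    """Shrink a small-N hospital's rate toward its peer-group mean.

    The memo describes averaging a hospital's rate with the peer-group
    mean, with more weight on the peer mean as the hospital's case
    count (N) falls. The weight n_cases / min_n is an illustrative
    choice for this sketch only.
    """
    w = min(n_cases / min_n, 1.0)  # hospital's own weight, capped at 1
    return w * hospital_rate + (1.0 - w) * peer_mean
```

With 5 of a required 10 cases, a hospital's rate and the peer mean are weighted equally; at or above the minimum N, the observed rate is used unchanged.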
B. Point Assignment
In the first iteration of hospital PPG reports, hospitals received 0 points on a measure if their
rate was below an achievement threshold that was the 30th percentile of the rate distribution for
that measure and peer group. Hospitals received 10 points if their rate was at or above a
benchmark that was the 75th percentile of the distribution. Additionally, hospitals received 1 to 9
points based on the evenly spaced intervals in between the 30th and 75th percentiles. With this
relative approach, these cutoffs varied by measure and were higher for measures with higher
overall performance. Where quality measures are “topped out” or in general have highly
compressed distributions or little variation, this led to situations where very small differences in
[2] For more information on the methods used in the first iteration of reports, see the summary memo to the RRT from 5/23/11, which describes the options considered, and a 6/15/11 final results summary memo presenting detailed results from the chosen approach.
rates resulted in large differences in points assigned.[3] To keep the approach largely consistent
between the initial analysis and this revision, MDH decided to retain the topped out measures in
this next round of peer grouping.
The second iteration of reports uses an absolute threshold approach to assign points; that is,
an approach that assigns points based on predetermined cutoffs on actual performance rates. We
have tested two options of absolute scoring. Consistent with the initial analytic approach, both
options use a lower threshold; in this case it was set at 40 percent performance rate to receive any
points on the measure. Above that, 1 to 10 points are assigned based on evenly spaced intervals.
In the second option, a hospital only receives a top score of 10 if its rate is 98 percent or above,
and 1 to 9 points are assigned in evenly spaced intervals between 40 up to 98 percent. Table 1
presents the cutoffs used in both options.
Table 1. Rate Cutoffs for Absolute Points Assignment

Points | Option 1 (40% Threshold) | Option 2 (40%/98% Thresholds)
0      | Rate < 40                | Rate < 40
1      | 40 <= Rate < 46          | 40 <= Rate < 46.44
2      | 46 <= Rate < 52          | 46.44 <= Rate < 52.88
3      | 52 <= Rate < 58          | 52.88 <= Rate < 59.32
4      | 58 <= Rate < 64          | 59.32 <= Rate < 65.76
5      | 64 <= Rate < 70          | 65.76 <= Rate < 72.20
6      | 70 <= Rate < 76          | 72.20 <= Rate < 78.64
7      | 76 <= Rate < 82          | 78.64 <= Rate < 85.08
8      | 82 <= Rate < 88          | 85.08 <= Rate < 91.52
9      | 88 <= Rate < 94          | 91.52 <= Rate < 98
10     | Rate >= 94               | Rate >= 98
The absolute thresholds of a 40 percent minimum to get any points and a 98 percent cutoff
do not recognize different distributions of performance on different measures; however, they
were chosen as reasonable cutoffs of very poor performance and superior performance in
general. These cutoffs may be adjusted in future rounds.
[3] "Topped out" is defined by the HVBP program as occurring when the 75th percentile is indistinguishable from the 90th percentile and the truncated coefficient of variation is less than 0.1. The HVBP program on which this approach was based excludes topped-out measures for this reason. However, the MN PPG project chose to retain them for this first iteration of hospital total care reports to reflect the high performance by MN hospitals on these measures.
The absolute scoring approach rewards high performance on measures, even if many
hospitals are also doing well. While there are still situations in which hospitals with similar rates
will receive different points, this difference may be as little as one point. This approach leads to
less overall variation in assigned points—especially on ‘topped out’ measures. Variation in the
subdomain/domain and total scores is mainly driven by measures with greater variance.
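Either absolute option can be implemented as a simple binning function. The sketch below uses the Option 2 defaults; it is an illustration rather than MDH's production code, and it uses exact even spacing where Table 1 rounds the published cutoffs to two decimals:

```python
def assign_points(rate, lower=40.0, upper=98.0):
    """Assign 0-10 points on an absolute scale (Option 2 defaults).

    Rates below `lower` earn 0 points and rates at or above `upper`
    earn 10; 1-9 points fall in nine evenly spaced bins in between.
    Because Table 1 rounds its published cutoffs to two decimals,
    rates exactly on a boundary can differ slightly from this exact
    spacing. Calling with upper=94.0 reproduces Option 1's bins.
    """
    if rate < lower:
        return 0
    if rate >= upper:
        return 10
    width = (upper - lower) / 9.0  # bin width (about 6.44 for Option 2)
    return 1 + int((rate - lower) // width)
```

For example, a 95 percent rate earns 9 points under Option 2 (it falls in the 91.52-98 bin) but 10 points under Option 1 (it exceeds the 94 percent cutoff).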
For the revised hospital PPG reports, MDH and Mathematica also investigated using cluster
analysis to group hospitals for point assignment. Cluster analysis is a statistical procedure used to
group observations (hospitals) with similar values. The procedure is typically used on large data
sets and multiple measures/dimensions on which observations need to be grouped. Options for
pre-defining the number of clusters and their "locations" (in multi-dimensional space) are
typically used in an exploratory fashion, to compare different solutions to the default solution.
For PPG, the approach was used to provide a one-dimensional grouping of hospitals with
‘similar’ rates on a measure. In particular, an approach was used that specified a pre-defined
number of clusters (with the same number of clusters for all measures) based on absolute cutoffs
similar to above (to result in 0 to 10 points). The goal was to group the most alike hospitals
together into clusters.
The approach tested was to use initial cluster center values, otherwise known as "seeds," that were evenly spread over the range of rates to be clustered.[4] With this approach, it should be
theoretically possible to design a cluster model that includes “empty” clusters when no hospitals
fall near the initially specified cluster ‘seeds’, similar to an absolute scoring approach. However,
the results from these analyses were not stable across the multiple software packages tested. Due
to the inconsistencies in results, MDH has chosen to use a more straightforward and replicable
approach of absolute scoring.
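The seeded clustering idea can be sketched as a minimal one-dimensional k-means. This is an illustration of the general technique, not the exact routine of any of the packages tested:

```python
def seeded_kmeans_1d(rates, seeds, iters=25):
    """One-dimensional k-means started from fixed, evenly spaced seeds.

    Each hospital rate joins the nearest center, centers are then
    re-estimated as group means, and clusters left empty keep their
    seed, so the solution can behave like absolute cutoffs.
    """
    centers = list(seeds)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for r in rates:
            nearest = min(range(len(centers)), key=lambda i: abs(r - centers[i]))
            groups[nearest].append(r)
        # Recompute each center as its group mean; keep the seed if empty.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    labels = [min(range(len(centers)), key=lambda i: abs(r - centers[i])) for r in rates]
    return centers, labels
```

Because real k-means implementations differ in how they handle empty clusters, ties, and convergence, different packages can legitimately return different groupings from the same seeds, which is consistent with the instability the memo reports.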
C. Domain and Total Care Quality Scores
As in the first round of PPG hospital reports, a hospital is scored on only the measures available for that hospital.[5] Individual measure points are aggregated into a score that measures the percent of total possible points that a hospital earned:

Domain score = [(total points earned / total points possible) * 100]
[4] For example, the initial cluster "seeds" are set at the following equally spaced cut points from 40 to 98: 40, 47.25, 54.5, 61.75, 69, 76.25, 83.5, 90.75, 98.
[5] Under the programs that collect the quality measures used here, participating hospitals cannot choose which measures to report; measures are available for all hospitals with qualifying cases.
The first iteration of PPG reports had four separate subdomains: Process of Care, Inpatient
Complications, Mortality, and Readmission. In this iteration of reports, three Outcome
subdomains are combined into one Outcome domain (16 measures) and Process of Care is
treated as a separate domain. This creates more balanced Process and Outcome domains and
allows more hospitals to meet the new minimum required number of measures for the Outcome
domain, described as follows.
In the first iteration of reports, a hospital needed only one measure in each of the
subdomains to receive a score; this approach was chosen to be as inclusive as possible for CAHs,
for which many measures are not applicable or had small case sizes. The revised approach
requires a hospital to have at least six measures, at least one of which is not imputed, to receive a
score for the process or outcome domains. Measures with imputed rates are included only if a
hospital would not otherwise meet the minimum of six measures in the respective domain; in
these instances, results from all imputed measures within that domain are used.[6] In addition, to
receive an Outcome domain score, a hospital must have measures from at least two of the three
subdomains; this requirement is meant to improve the representativeness of the Outcome domain
score. (Mathematica tested the impact of increasing this minimum number of required measures
on the number of hospitals included and the stability of scores; results are described in the results
section.) These revised rules lead to a total possible number of measures of 42 among PPS
hospitals and 33 among CAHs.
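The inclusion rule above can be sketched as follows. The helper name is hypothetical, and the additional Outcome requirement (measures from at least two of the three subdomains) is noted but not modeled:

```python
def select_measures(observed, imputed, minimum=6):
    """Apply the revised inclusion rule for one domain.

    If a hospital has at least `minimum` non-imputed measures, only
    those are used; otherwise all imputed measures in the domain are
    pulled in. A hospital must still reach `minimum` measures, at
    least one of them non-imputed, or it receives no domain score.
    (The separate two-of-three subdomain check for the Outcome
    domain is not modeled here.)
    """
    if len(observed) >= minimum:
        return list(observed)
    combined = list(observed) + list(imputed)
    if len(observed) >= 1 and len(combined) >= minimum:
        return combined
    return None  # hospital receives no score for this domain
```

This reproduces the behavior in footnote 6: a hospital with six non-imputed measures uses only those six, while a hospital with four non-imputed and five imputed measures uses all nine.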
The Total Care Quality Score is then created by a weighted average of the two domain
scores. As in the first iteration, hospitals must have scores in both domains to receive a Total
Care Quality Score. In the first iteration of reports, Process received 30 percent of the weight,
Inpatient Complications received 20 percent, Mortality received 30 percent, and Readmissions
received 20 percent.[7] In the revised reports, the Process domain still receives 30 percent and the
combined Outcome domain retains a combined 70 percent weight.
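The weighting step can be sketched as a minimal illustration using the weights stated above:

```python
PROCESS_WEIGHT = 0.30  # policy-based weights set by MDH
OUTCOME_WEIGHT = 0.70

def total_care_quality_score(process, outcome):
    """Weighted average of the Process and Outcome domain scores.

    A hospital missing a score in either domain receives no total
    score, per the memo's rule.
    """
    if process is None or outcome is None:
        return None
    return PROCESS_WEIGHT * process + OUTCOME_WEIGHT * outcome
```

For example, a hospital with a Process score of 90 and an Outcome score of 80 receives a total score of 83.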
IV. Results
A. Point Assignment
Regardless of option, under the new absolute approach of assigning points to quality measures, the entire distribution of points is higher than under the previous relative approach.
This recognizes that there is relatively little variation in the quality performance on the measures
[6] For example, if a hospital has six measures meeting the minimum N, no other imputed measure rates are used to calculate the domain score; but if a hospital has 4 measures meeting the minimum N and 5 more measures that were imputed because they did not meet the minimum N, then all 9 measures are used to calculate the domain score.
[7] These subdomain weights were chosen by MDH as policy-based weights.
chosen for the initial iteration of reports. Mean scores were 9 or above for many measures,
especially for the option with just the 40 percent lower threshold, since 10 points are awarded in this option to any rate of 94 percent or above (results not shown). A number of measures had all
hospitals receiving 10 points, especially for the first absolute threshold option of a 40 percent
cutoff only; for example, under this option, 9 measures among IPPS and 2 measures among
CAHs had all hospitals receiving 10 points. There is also far less variation in point assignment
compared to the relative approach.
B. Number of Hospitals and Measures Included in Calculations
Using the stricter rules to receive a domain score, the same number and set of PPS hospitals
(49 out of 54) received a Total Care Quality Score in both iterations (Table 2). However, fewer
CAHs received both domain scores and the resulting Total Care Quality Score using the revised approach: 14 fewer received a Process score, 15 fewer received an Outcome score, and 14 fewer received a Total Care Quality Score. Out of the 78 CAHs considered in the first iteration peer
grouping, 51 (65 percent) received a Total Care Quality Score in the second iteration, compared
to the 65 CAHs out of the 78 included in the first iteration.
The minimum of six measures strikes a balance between having a score based on a broader
set of measures than in the first iteration of reports and including as many CAHs as possible.
Increasing the minimum from six to ten results in a dramatic drop in the CAHs receiving a score,
an option MDH considers undesirable.
Table 2. Hospitals Included in Subdomains and Overall Composite

Quality Domain        | IPPS (Total = 54)                | CAH (Total = 78)
(subdomains indented) | First Iteration | Second Iteration | First Iteration | Second Iteration
Process               | 49              | 49               | 73              | 59
Outcome               | --              | 49               | --              | 52
  Safety              | 52              | --               | 76              | --
  Mortality           | 49              | --               | 67              | --
  Readmission         | 49              | --               | 67              | --
Overall Composite     | 49              | 49               | 65              | 51
Table 3 presents the distribution of the number of measures used in the construction of each
domain score. As expected, IPPS hospitals had more measures available for domain and
subdomain construction than did CAHs. In the revised Process domain, IPPS hospitals had an
average of 21 measures and CAHs had an average of 10 measures. The combined Outcome
domain used 12 measures on average from IPPS hospitals and 7 measures from CAHs. IPPS
hospitals had a minimum of 13 measures used for the Process domain, and 6 measures for the
Outcome domain, compared to a minimum of 6 measures used for the Process domain and 6
measures used for the Outcome domain among CAHs.
Table 3. Distribution of Measures Used in Subdomain Construction

Quality Domain | Mean Measures | Mean Imputed Measures | Minimum | 25th Pctl | 50th Pctl | 75th Pctl | Maximum

First Iteration - IPPS
Process        | 21.4 | 0   | 13 | 19  | 22 | 26 | 26
Safety         | 5.3  | 0   | 3  | 5   | 6  | 6  | 6
Mortality      | 4.4  | 0   | 1  | 3   | 4  | 6  | 7
Readmission    | 2.6  | 0   | 2  | 2   | 3  | 3  | 3

Second Iteration - IPPS
Process        | 21.4 | 0   | 13 | 19  | 22 | 26 | 26
Outcome        | 12.3 | 0   | 6  | 10  | 12 | 15 | 16

First Iteration - CAHs
Process        | 8.4  | 0.5 | 2  | 5   | 7  | 12 | 19
Safety         | 2.8  | 0.1 | 1  | 2   | 3  | 4  | 5
Mortality      | 1.9  | 0   | 1  | 2   | 2  | 2  | 4
Readmission    | 1.9  | 0   | 1  | 2   | 2  | 2  | 3

Second Iteration - CAHs
Process        | 10.5 | 2.1 | 6  | 7   | 10 | 13 | 19
Outcome        | 7.4  | 0.3 | 6  | 6.5 | 7  | 8  | 10
More imputed measures were used on average in the second iteration due to the requirement
of six measures per domain; however, no imputed measures were required for IPPS hospitals.
Table A.1 in Appendix A presents the quality measure rate distributions for the final sample
of hospitals and measures, as well as counts of hospitals that were imputed because they did not
meet the minimum case size for a measure.
While the results presented in this memo impose a minimum of six measures (with at least
one un-imputed) to receive a domain score, Mathematica also tested increasing this requirement
to 7, 8, 9 and 10 measures in order to evaluate how sensitive the results were to the number of
measures included. In particular, we examined the drop in the number of hospitals and the
change in the mean domain score as the minimum number of measures per domain was
increased. We also examined the change in the correlation of domain scores and ranks between
these different options, to make sure that the results when only requiring six measures are
relatively stable (similar to the results when requiring more measures).
First we examined the number of hospitals receiving domain scores with higher required
numbers of measures. Increasing the number of required measures per domain had a very small
effect on the inclusion of PPS hospitals (a few hospitals drop out of the Outcome domain when 9
or 10 measures are required), but had a very large effect on the number of CAHs included in both
domains. For example, only 46 CAHs received the Process domain and only 17 received the
Outcome domain with a minimum of 10 measures requirement, compared to 59 and 52 with six
measures (results not shown).
We also examined the correlations between scores and ranks of domain scores across
different minimum measure requirements to test the stability of results with different minimum
measure requirements. For IPPS hospitals, the results were stable across all minimum measure
requirements in both domains (results not shown). For CAHs, results in the Process domain were
stable, although the results for the Outcome domain were less stable (because so many CAHs
drop out with higher measure requirements).
These results suggested that the choice to require only 6 measures to increase the number of
CAHs included does not result in scores and rankings that are very different from when 7, 8, or 9
measures are required, but a requirement of 10 would produce much different results on the
Outcome domain among the only 17 CAHs that would be included.
C. Distribution of Domain and Total Quality Scores
Tables 4a and 4b present the distributions of the subdomain and total quality scores in the
first iteration compared to the two absolute scoring options for the second iteration.
Table 4a. Initial and Revised Domain/Subdomains and Overall Composite Scores for IPPS Hospitals

Variable | N | Mean | Minimum | 10th Pctl | 25th Pctl | 50th Pctl | 75th Pctl | 90th Pctl | Maximum

First Iteration: Relative Scoring
Process             | 49 | 44.4 | 12.5 | 24.0 | 34.6 | 41.8 | 58.2 | 68.7 | 76.5
Safety              | 52 | 38.0 | 0.0  | 14.0 | 20.0 | 37.0 | 49.0 | 62.5 | 100.0
Mortality           | 49 | 34.3 | 0.0  | 10.0 | 20.0 | 32.9 | 47.5 | 56.7 | 87.5
Readmission         | 49 | 27.5 | 0.0  | 5.0  | 13.3 | 25.0 | 36.7 | 65.0 | 66.7
Total Quality Score | 49 | 36.4 | 17.1 | 23.1 | 30.1 | 36.8 | 44.0 | 48.5 | 54.1

Second Iteration: Absolute Scoring - 40% Lower Threshold
Process             | 49 | 93.9 | 76.8 | 88.6 | 91.7 | 94.5 | 97.2 | 98.9 | 99.5
Outcome             | 49 | 88.1 | 85.0 | 86.0 | 86.7 | 88.2 | 89.2 | 90.0 | 92.0
Total Quality Score | 49 | 89.8 | 83.7 | 87.6 | 88.5 | 90.0 | 91.2 | 92.2 | 93.4

Second Iteration: Absolute Scoring - 40% Lower/98% Upper Threshold
Process             | 49 | 89.8 | 71.6 | 83.6 | 87.1 | 90.5 | 93.1 | 96.0 | 96.7
Outcome             | 49 | 84.2 | 80.9 | 82.3 | 83.1 | 84.0 | 85.0 | 86.3 | 88.0
Total Quality Score | 49 | 85.9 | 79.8 | 83.7 | 84.8 | 85.8 | 87.1 | 88.5 | 89.6
Table 4b. Initial and Revised Domain/Subdomains and Overall Composite Scores for CAHs

Variable | N | Mean | Minimum | 10th Pctl | 25th Pctl | 50th Pctl | 75th Pctl | 90th Pctl | Maximum

First Iteration: Relative Scoring
Process           | 67 | 30.7 | 0.0  | 5.0  | 10.0 | 30.0 | 50.0  | 65.0  | 90.0
Safety            | 73 | 37.5 | 0.0  | 7.1  | 16.4 | 36.9 | 54.3  | 71.1  | 100.0
Mortality         | 67 | 33.5 | 0.0  | 0.0  | 10.0 | 35.0 | 50.0  | 65.0  | 100.0
Readmission       | 76 | 61.6 | 0.0  | 20.0 | 36.7 | 60.0 | 100.0 | 100.0 | 100.0
Overall Composite | 65 | 40.5 | 3.2  | 25.0 | 32.6 | 39.4 | 47.4  | 60.9  | 66.1

Second Iteration: Absolute Scoring - 40% Lower Threshold
Process           | 59 | 76.4 | 41.7 | 55.0 | 66.7 | 79.0 | 87.0  | 93.8  | 100.0
Outcome           | 52 | 87.2 | 83.3 | 85.0 | 85.7 | 87.1 | 88.6  | 90.0  | 93.3
Overall Composite | 51 | 84.3 | 71.5 | 78.7 | 80.2 | 84.4 | 88.0  | 90.4  | 91.7

Second Iteration: Absolute Scoring - 40% Lower/98% Upper Threshold
Process           | 59 | 72.4 | 40.0 | 51.7 | 62.9 | 74.0 | 82.5  | 90.0  | 100.0
Outcome           | 52 | 83.2 | 80.0 | 81.3 | 81.7 | 82.9 | 84.3  | 85.0  | 90.0
Overall Composite | 51 | 80.3 | 68.0 | 75.8 | 77.1 | 80.3 | 83.9  | 85.5  | 90.4
The absolute scoring approach (either option) results in less variation in the domain and total
scores for both PPS hospitals and CAHs than the relative scoring approach. The mean scores are
much higher due to a shift in the entire distribution. With the second absolute scoring option,
PPS hospitals have overall composite scores that range only from 80 percent to 90 percent (Table
4a), and CAHs scores range from 68 percent to 90 percent (Table 4b). The first absolute scoring
option (under which more hospitals receive 10 points) results in a higher mean and less
dispersion than the second option requiring a score of at least 98 percent to get maximum points.
There is a little more variation in scores among CAHs, driven mainly by variation in the Process
domain rather than the Outcome domain.
V. Summary
Compared to the first iteration of hospital reports, the revised methodology in the second
iteration results in two main changes:
• The increased number of required measures has decreased the number of CAHs that
will receive a total quality score (and report), but has improved the
representativeness of the subdomain scores, which previously could have been based
on very few measures.
• The revised point assignments on individual measures, as well as domain and total
quality scores, are much higher than under the first round of hospital reports. In
addition, there is less variation in scores across hospitals in both peer groups;
differences between providers must be substantial to drive a large difference in
points and resulting domain and total scores.
As with the first iteration of reports, hospitals will be presented with both their absolute
score and a relative ranking of their performance (for example, their percentile ranking) on
individual measures, as well as domain and total scores. Relative ranking may be easier for
consumers to interpret than absolute scores, and is in line with the intent of peer grouping. Some
hospitals that may have received high points on many measures will still receive lower relative
rankings. In addition, relative rankings will increase for some hospitals and decline for others.
MDH and Mathematica are exploring alternative options for presenting absolute scores and
relative rankings and for displaying the relative rankings (given the very compressed
distributions).
cc: Project Team
APPENDIX A

DISTRIBUTION OF MEASURE RATES

Table A.1 - Distribution of Measure Rates

Measure | Min N | # Hospitals with Measure | # Hospitals with Imputed Rate | Mean | Min | 10th pctl | 25th pctl | 50th pctl | 75th pctl | 90th pctl | Max | Std. Dev.

PPS Hospitals
AMI patients given aspirin at arrival (AMI-1)* | 10 | 47 | 6 | 98.3 | 89.0 | 92.0 | 99.0 | 100.0 | 100.0 | 100.0 | 100.0 | 3.1
AMI patients given aspirin at discharge (AMI-2)* | 10 | 46 | 14 | 97.1 | 80.1 | 88.0 | 96.0 | 99.1 | 100.0 | 100.0 | 100.0 | 4.8
AMI patients given ACE/ARB for LVSD (AMI-3) | 10 | 39 | 24 | 97.8 | 88.9 | 96.0 | 98.0 | 98.5 | 99.1 | 100.0 | 100.0 | 2.7
AMI smoking cessation advice/counseling (AMI-4)* | 10 | 36 | 21 | 99.3 | 89.9 | 99.0 | 99.9 | 99.9 | 100.0 | 100.0 | 100.0 | 2.3
AMI beta blocker at discharge (AMI-5)* | 10 | 47 | 12 | 97.2 | 80.0 | 89.6 | 94.0 | 99.5 | 100.0 | 100.0 | 100.0 | 4.4
AMI patients given PCI within 90 minutes of arrival (AMI-8a) | 10 | 15 | 0 | 93.9 | 87.0 | 87.0 | 88.0 | 95.0 | 98.0 | 100.0 | 100.0 | 5.0
HF patients given discharge instructions | 10 | 49 | 1 | 83.5 | 27.0 | 66.0 | 79.0 | 89.0 | 92.0 | 96.0 | 100.0 | 14.4
HF patients given an evaluation of LVS function (HF-2)* | 10 | 49 | 0 | 97.2 | 75.0 | 94.0 | 97.0 | 99.0 | 100.0 | 100.0 | 100.0 | 5.0
HF patients given ACE/ARB for LVSD (HF-3)* | 10 | 49 | 7 | 95.5 | 81.0 | 88.2 | 94.0 | 96.8 | 100.0 | 100.0 | 100.0 | 5.2
HF patients given smoking cessation advice/counseling (HF-4)* | 10 | 48 | 18 | 98.3 | 89.5 | 90.3 | 99.2 | 99.9 | 100.0 | 100.0 | 100.0 | 3.3
PN patients assessed/given pneumococcal vaccination (PN-2) | 10 | 50 | 1 | 93.3 | 57.0 | 84.0 | 91.0 | 96.0 | 98.0 | 99.5 | 100.0 | 7.3
PN with ER blood culture before first dose of antibiotics (PN-3b) | 10 | 47 | 0 | 94.8 | 80.0 | 89.0 | 93.0 | 96.0 | 98.0 | 100.0 | 100.0 | 4.6
PN patients given smoking cessation advice/counseling (PN-4)* | 10 | 49 | 4 | 95.4 | 67.0 | 88.0 | 93.0 | 98.0 | 100.0 | 100.0 | 100.0 | 6.2
PN patients given antibiotic(s) within 6 hours after arrival (PN-5c) | 10 | 50 | 2 | 96.1 | 86.0 | 91.5 | 95.0 | 97.0 | 98.0 | 99.5 | 100.0 | 3.3
PN patients given the most appropriate initial antibiotic(s) (PN-6) | 10 | 49 | 2 | 90.9 | 78.0 | 83.0 | 88.0 | 92.0 | 94.0 | 98.0 | 100.0 | 5.3
PN patients assessed and given influenza vaccination (PN-7) | 10 | 50 | 1 | 90.5 | 62.0 | 76.5 | 87.0 | 94.0 | 97.0 | 100.0 | 100.0 | 9.6
Surgery patients with recommended VTE prophylaxis ordered (SCIP-VTE-1) | 10 | 49 | 0 | 93.9 | 75.0 | 86.0 | 92.0 | 96.0 | 98.0 | 100.0 | 100.0 | 5.7
Surgery patients approp VTE prophylaxis within 24 hrs surgery (SCIP-VTE-2) | 10 | 49 | 0 | 92.0 | 66.0 | 82.0 | 89.0 | 94.0 | 97.0 | 99.0 | 100.0 | 7.4
Surgery patients prophylactic antibiotic 1 hr prior to incision (SCIP-Inf-1a) | 10 | 49 | 0 | 94.2 | 53.0 | 90.0 | 94.0 | 96.0 | 97.0 | 98.0 | 100.0 | 6.7
Surgery patients appropriate prophylactic antibiotic selected (SCIP-Inf-2a) | 10 | 49 | 0 | 98.4 | 95.0 | 97.0 | 98.0 | 99.0 | 99.0 | 100.0 | 100.0 | 1.0
Surgery patients antibiotics discontinued w/in 24 hrs (SCIP-Inf-3a) | 10 | 49 | 0 | 95.7 | 88.0 | 92.0 | 95.0 | 96.0 | 97.0 | 98.0 | 100.0 | 2.5
Cardiac surgery patients controlled post-operative blood glucose (SCIP-Inf-4) | 10 | 14 | 0 | 87.1 | 50.0 | 83.0 | 86.0 | 90.0 | 92.0 | 97.0 | 97.0 | 11.4
Surgery patients with appropriate hair removal (SCIP-Inf-6)* | 10 | 49 | 0 | 99.0 | 78.0 | 99.0 | 99.0 | 100.0 | 100.0 | 100.0 | 100.0 | 3.6
Surgery patients taking beta blockers who were kept on them (SCIP-Card-2) | 10 | 49 | 0 | 92.4 | 52.0 | 85.0 | 89.0 | 94.0 | 97.0 | 100.0 | 100.0 | 8.2
VAP bundle compliance for ICU patients (MHA)* | 3 | 47 | 5 | 93.2 | 33.3 | 84.2 | 91.8 | 97.6 | 100.0 | 100.0 | 100.0 | 13.0
Central line bundle compliance for ICU patients (MHA) | 3 | 45 | 2 | 83.2 | 0.0 | 60.0 | 81.1 | 94.2 | 97.6 | 100.0 | 100.0 | 25.3
30-day mortality after hospital admission for AMI | 25 | 39 | 0 | 84.4 | 80.6 | 82.3 | 83.1 | 84.2 | 85.6 | 86.5 | 87.4 | 1.6
30-day mortality after hospital admission for HF | 25 | 47 | 0 | 89.2 | 85.9 | 86.8 | 88.1 | 89.6 | 90.3 | 90.8 | 91.7 | 1.5
30-day mortality after hospital admission for PN | 25 | 49 | 0 | 89.2 | 85.5 | 86.9 | 87.8 | 89.5 | 90.5 | 91.1 | 92.2 | 1.6
AAA repair inpatient mortality rate (IQI-11) | 25 | 18 | 7 | 96.1 | 90.3 | 90.7 | 95.4 | 96.9 | 98.0 | 98.4 | 100.0 | 2.7
Hip fracture inpatient mortality rate (IQI-19) | 25 | 47 | 7 | 97.3 | 91.1 | 94.3 | 95.9 | 97.8 | 98.8 | 100.0 | 100.0 | 2.1
PTCA inpatient mortality rate (IQI-30) | 25 | 17 | 1 | 98.8 | 98.0 |  |  |  |  |  |  |
CABG inpatient mortality rate (IQI-12) | 25 |  |  |  |  |  |  |  |  |  |  |
Decubitus ulcer (pressure ulcer) (PSI-3)* | 25 |  |  |  |  |  |  |  |  |  |  |
Death surgical inpatients with serious treatable complications (PSI-4) | 25 |  |  |  |  |  |  |  |  |  |  |
Post-op pulmonary embolism or deep vein thrombosis (PSI-12) | 25 |  |  |  |  |  |  |  |  |  |  |
Obstetric trauma: vaginal delivery with instrument (PSI-18) | 25 |  |  |  |  |  |  |  |  |  |  |
Obstetric trauma: vaginal delivery without instrument (PSI-19) | 25 |  |  |  |  |  |  |  |  |  |  |
HAI: SSI rate for vaginal hysterectomy (MHA)* | 3 |  |  |  |  |  |  |  |  |  |  |
30-day readmission rate after hospital discharge for AMI | 25 |  |  |  |  |  |  |  |  |  |  |
30-day readmission rate after hospital discharge for HF | 25 |  |  |  |  |  |  |  |  |  |  |
30-day readmission rate after hospital discharge for PN | 25 |  |  |  |  |  |  |  |  |  |  |
98.2
98.4
98.8
99.1
99.4
100.0
0.5
14
0
98.2
95.3
96.0
97.8
98.3
99.3
99.6
100.0
1.3
49
0
99.8
98.2
99.5
99.7
99.9
100.0
100.0
100.0
0.4
49
21
90.3
76.0
85.1
87.8
89.9
93.2
97.2
100.0
4.3
49
0
99.4
97.7
98.8
99.2
99.4
99.7
99.9
100.0
0.5
47
8
85.1
71.7
77.0
81.6
85.2
90.0
92.0
100.0
6.2
47
0
97.6
94.7
95.6
97.1
97.8
98.2
98.8
99.4
1.1
47
0
98.6
88.9
96.2
97.8
99.2
100.0
100.0
100.0
2.0
29
0
80.0
77.5
78.6
79.4
80.1
80.6
81.2
82.4
1.0
49
0
75.5
70.0
73.9
74.6
75.4
76.4
77.5
79.4
1.6
49
0
81.7
78.7
80.4
80.8
81.5
82.5
83.2
84.7
1.2
Table A.1 - Distribution of Measure Rates
Measures
# Hospitals
with
Measure
Min N
# Hospitals
with
Imputed
Rate
Mean
Min
10th pctl
25th pctl
50th pctl
75th pctl 90th pctl
Max
Std. Dev.
Critical Access Hospitals

| Measure | # Hospitals with Measure | Min N | # Hospitals with Imputed Rate | Mean | Min | 10th pctl | 25th pctl | 50th pctl | 75th pctl | 90th pctl | Max | Std. Dev. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AMI patients given aspirin at arrival (AMI-1) | 38 | 10 | 34 | 94.7 | 78.3 | 87.2 | 95.7 | 96.2 | 97.2 | 98.1 | 100.0 | 4.6 |
| AMI patients given aspirin at discharge (AMI-2) | 1 | 10 | 0 | 64.0 | 64.0 | 64.0 | 64.0 | 64.0 | 64.0 | 64.0 | 64.0 | . |
| AMI beta blocker at discharge (AMI-5) | 2 | 10 | 0 | 92.5 | 85.0 | 85.0 | 85.0 | 92.5 | 100.0 | 100.0 | 100.0 | 10.6 |
| HF patients given discharge instructions (HF-1) | 68 | 10 | 29 | 62.6 | 0.0 | 36.0 | 46.7 | 63.2 | 83.0 | 93.0 | 100.0 | 22.9 |
| HF patients given an evaluation of LVS function (HF-2) | 71 | 10 | 20 | 76.6 | 16.4 | 50.0 | 70.0 | 78.0 | 93.0 | 100.0 | 100.0 | 18.3 |
| HF patients given ACE/ARB for LVSD (HF-3) | 58 | 10 | 48 | 84.2 | 44.8 | 70.0 | 77.9 | 87.9 | 91.9 | 96.0 | 100.0 | 11.3 |
| PN patients assessed/given pneumococcal vaccination (PN-2) | 71 | 10 | 17 | 76.2 | 9.0 | 41.0 | 65.0 | 82.9 | 93.0 | 97.0 | 100.0 | 21.3 |
| PN with ER blood culture before first dose of antibiotics (PN-3b) | 61 | 10 | 38 | 92.4 | 49.3 | 84.9 | 87.5 | 95.4 | 97.0 | 100.0 | 100.0 | 8.2 |
| PN patients given smoking cessation advice/counseling (PN-4) | 60 | 10 | 46 | 80.8 | 20.0 | 65.1 | 75.2 | 85.2 | 88.5 | 96.2 | 100.0 | 13.4 |
| PN patients given antibiotic(s) within 6 hours after arrival (PN-5c) | 70 | 10 | 21 | 92.7 | 57.5 | 85.0 | 90.0 | 94.5 | 98.9 | 100.0 | 100.0 | 7.9 |
| PN patients given the most appropriate initial antibiotic(s) (PN-6) | 69 | 10 | 31 | 86.8 | 59.3 | 76.3 | 80.0 | 88.1 | 94.0 | 100.0 | 100.0 | 9.2 |
| PN patients assessed and given influenza vaccination (PN-7) | 63 | 10 | 19 | 79.2 | 6.0 | 55.0 | 73.6 | 84.0 | 91.0 | 96.0 | 100.0 | 20.2 |
| Surgery patients with recommended VTE prophylaxis ordered (SCIP-VTE-1) | 34 | 10 | 14 | 87.1 | 45.0 | 72.4 | 78.1 | 92.5 | 98.1 | 100.0 | 100.0 | 14.2 |
| Surgery patients approp VTE prophylaxis within 24 hrs surgery (SCIP-VTE-2) | 34 | 10 | 14 | 86.3 | 45.0 | 71.6 | 77.9 | 92.5 | 96.9 | 100.0 | 100.0 | 14.0 |
| Surgery patients prophylactic antibiotic 1 hr prior to incision (SCIP-Inf-1a) | 39 | 10 | 13 | 82.9 | 18.0 | 67.6 | 74.1 | 88.0 | 93.0 | 98.0 | 100.0 | 16.0 |
| Surgery patients appropriate prophylactic antibiotic selected (SCIP-Inf-2a) | 39 | 10 | 13 | 93.7 | 40.0 | 77.6 | 95.0 | 98.0 | 100.0 | 100.0 | 100.0 | 12.5 |
| Surgery patients antibiotics discontinued w/in 24 hrs (SCIP-Inf-3a) | 39 | 10 | 13 | 90.8 | 30.0 | 79.4 | 91.0 | 93.4 | 97.5 | 100.0 | 100.0 | 12.8 |
| Surgery patients with appropriate hair removal (SCIP-Inf-6) | 39 | 10 | 11 | 92.5 | 19.4 | 69.0 | 95.0 | 99.0 | 100.0 | 100.0 | 100.0 | 16.7 |
| Surgery patients taking beta blockers who were kept on them (SCIP-Card-2) | 21 | 10 | 12 | 81.0 | 50.0 | 60.0 | 73.1 | 87.6 | 89.0 | 100.0 | 100.0 | 14.5 |
| VAP bundle compliance for ICU patients (MHA) | 11 | 3 | 2 | 79.9 | 20.0 | 28.6 | 57.1 | 100.0 | 100.0 | 100.0 | 100.0 | 30.5 |
| Central line bundle compliance for ICU patients (MHA) | 14 | 3 | 6 | 56.1 | 0.0 | 22.2 | 24.5 | 49.5 | 85.4 | 90.0 | 100.0 | 31.5 |
| 30-day mortality after hospital admission for AMI | 5 | 25 | 0 | 83.0 | 80.5 | 80.5 | 83.3 | 83.5 | 83.5 | 84.4 | 84.4 | 1.5 |
| 30-day mortality after hospital admission for HF | 49 | 25 | 0 | 88.4 | 85.5 | 87.1 | 87.9 | 88.5 | 89.2 | 89.6 | 90.3 | 1.0 |
| 30-day mortality after hospital admission for PN | 66 | 25 | 0 | 88.6 | 84.0 | 86.9 | 87.8 | 88.9 | 89.4 | 90.0 | 90.5 | 1.3 |
| Hip fracture inpatient mortality rate (IQI-19) | 25 | 25 | 23 | 96.3 | 88.5 | 91.9 | 95.3 | 97.0 | 97.8 | 98.6 | 99.9 | 2.7 |
| Decubitus ulcer (pressure ulcer) (PSI-3) * | 76 | 25 | 12 | 99.6 | 96.2 | 98.6 | 99.6 | 100.0 | 100.0 | 100.0 | 100.0 | 0.8 |
| Post-op pulmonary embolism or deep vein thrombosis (PSI-12) * | 59 | 25 | 16 | 99.8 | 98.5 | 99.4 | 99.7 | 100.0 | 100.0 | 100.0 | 100.0 | 0.3 |
| Obstetric trauma: vaginal delivery with instrument (PSI-18) | 38 | 25 | 33 | 81.8 | 56.7 | 70.6 | 77.9 | 82.6 | 87.5 | 90.0 | 92.0 | 7.5 |
| Obstetric trauma: vaginal delivery without instrument (PSI-19) | 52 | 25 | 11 | 96.6 | 90.0 | 92.5 | 95.4 | 96.6 | 98.9 | 100.0 | 100.0 | 2.7 |
| HAI: SSI rate for vaginal hysterectomy (MHA) * | 28 | 3 | 7 | 97.5 | 66.4 | 90.9 | 99.7 | 100.0 | 100.0 | 100.0 | 100.0 | 7.0 |
| 30-day readmission rate after hospital discharge for AMI | 1 | 25 | 0 | 80.0 | 80.0 | 80.0 | 80.0 | 80.0 | 80.0 | 80.0 | 80.0 | . |
| 30-day readmission rate after hospital discharge for HF | 55 | 25 | 0 | 75.3 | 72.4 | 73.6 | 74.5 | 75.4 | 76.3 | 76.8 | 77.8 | 1.2 |
| 30-day readmission rate after hospital discharge for PN | 66 | 25 | 0 | 82.0 | 78.2 | 80.9 | 81.5 | 82.0 | 82.6 | 83.0 | 84.1 | 0.9 |

Note: Table includes results for the final set of hospitals and measures in composite construction.
* Measure is topped out (the 75th and 90th percentiles are equal and the truncated coefficient of variation is < 0.1).
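The topped-out test described in the table note can be sketched in a few lines of code. This is an illustrative sketch only: the memo does not define "truncated coefficient of variation" here, so the sketch assumes the common convention of computing the standard deviation over the mean after dropping the top and bottom 5 percent of hospital rates, and it uses a simple nearest-rank percentile.

```python
import statistics

def percentile(values, p):
    """Nearest-rank percentile on sorted data (illustrative choice)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

def is_topped_out(rates, cv_threshold=0.1):
    """Flag a measure as topped out per the table note:
    equal 75th and 90th percentiles, and a truncated
    coefficient of variation below the threshold."""
    if percentile(rates, 75) != percentile(rates, 90):
        return False
    # Assumed truncation rule: drop the top and bottom 5% of
    # hospital rates before computing the coefficient of variation.
    ordered = sorted(rates)
    cut = int(len(ordered) * 0.05)
    trimmed = ordered[cut:len(ordered) - cut] if cut else ordered
    mean = statistics.mean(trimmed)
    if mean == 0:
        return False
    return statistics.pstdev(trimmed) / mean < cv_threshold
```

For example, a measure where nearly every hospital scores 100.0 (like PSI-3 above, with equal 75th and 90th percentiles and a tiny standard deviation) would be flagged, while a measure with rates spread from 60 to 98 would not.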