Defining the Unit of Analysis – What Is a Provider?

What Is The Issue?
Provider organizations are not all organized in the same way. For example, there are health systems in which particular types of patients are seen or admitted only at a specific practice location (clinic or hospital). The first iteration of Provider Peer Grouping (PPG) analysis showed that such differences in organizational structure within health systems can affect the results of cost and quality reporting and comparisons for clinics and hospitals. Analyzing insurance claims data in ways that do not account for those organizational nuances may create an unfair advantage or disadvantage (introduce a bias) in the results for certain providers.

What Past Decisions Have Been Made?
1) Members of the 2009 PPG Advisory Group concluded that consumers would receive more useful information from results that compare specific clinics or hospitals rather than care systems or medical groups more broadly. The rationale for this conclusion was that internal differences can and do exist across facilities and clinics within a care system, and that those differences are important to consumers. There was a perceived common understanding of "hospital" and "clinic" as a single place of service for the delivery of health care. Operationalizing this concept initially seemed relatively easy.

2) Hospitals: While the analysis compares Critical Access Hospitals (CAH) and Prospective Payment System (PPS) hospitals in separate peer groups, initially single stand-alone hospitals were compared within those peer groups rather than selectively rolled up into a combined hospital entity and then compared. (The analysis did not allow separate units within a single hospital campus to be treated as smaller "independent" hospitals.)

3) Clinics: The 2009 Advisory Group recommended that, wherever possible, peer grouping should occur at the clinic site level, although members had concerns about the practicality of doing so. They recommended peer grouping at the medical group level only if clinic site peer grouping was not feasible. The Advisory Group in fact recommended comparing individual physicians/surgeons on certain analyses, but because CMS rules prohibit analyses from which physician Medicare incomes can be calculated, this approach was not pursued.

Analysis has been progressing on the original assumption that individual clinics would stand alone for purposes of cost and quality comparisons. Clinics and staff are self-identified through registration under the Statewide Quality Reporting and Measurement System. Clinics whose staffing capacity supports mostly specialized care are not included in the PPG reporting framework.

What Are The Options?
Hospitals:
Option 1: Define hospitals by distinct practice location (the current option), or
Option 2: Combine certain hospitals within care systems that exist as physically separate practice locations within a medical campus but use a business/practice model operating as if they are one hospital with more than one location, each of which specializes in a certain type of patient.
Clinics:
Option 1: Define primary care clinics by separate physical practice location, with reference to information about physician staff and outputs gathered from the physician registry (the current option), or
Option 2: Combine certain clinics within care systems that exist in separate physical locations but use a business/practice model operating as if they are one clinic with more than one location, each of which treats a certain type of patient.

What Are the Considerations (advantages/disadvantages)?
1) Changes would require devising an operational method to identify providers that use a shared-facility practice model, along with the additional burden this may place on providers to supply better information about their practice/business model;
2) Certain data need to be available at the chosen unit of analysis to support the decision;
3) The impact of the decision should be evaluated against the goal of making reasonable comparisons of provider performance;
4) The impact of the decision on providers' ability to identify areas for improvement efforts at particular practice locations needs to be considered;
5) The impact of the decision on consumers' capacity to understand peer grouping reports and make informed decisions in selecting a health care provider needs to be considered; and
6) For consistency with PPG principles, current community practice in cost and quality reporting methods needs to be considered as part of the decision.

Specific Questions on Which We Seek Advisory Group Input
1) Should certain hospitals with separate practice locations within a care system be combined when functioning as a single entity within a medical campus?
2) Should certain clinics within a care system be combined to create a single practice location?
For questions 1 and 2, address the following:
a. Always or never, and why?
b. Only under certain practice, organizational, or structural conditions?
• What specific conditions?
• How can we identify those conditions and combinations?
• How might the practice of combining for some providers but not for all affect the results of the cost and quality measures and their comparison across systems?

Quality Measurement Compositing - Hospitals & Clinics

What Is The Issue?
The goal of quality compositing is to provide consumers with aggregated indicators of performance that simplify comparisons and decrease the need for expert knowledge in interpreting relative quality. Because there are many options for aggregating quality measures into an overall composite score, MDH sought recommendations from the 2009 Advisory Group. That group recommended that MDH take advantage of existing compositing approaches used by agencies and organizations such as the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare & Medicaid Services (CMS), and the National Quality Forum (NQF), among others, for the reportable set of composite Provider Peer Grouping (PPG) measures. In keeping with these recommendations, the initial hospital report included relative performance scoring modeled after the federal Hospital Value-Based Purchasing (HVBP) program. Following the initial hospital report, there were shared concerns that the HVBP method was not working as well as hoped when applied to data for Minnesota hospitals.
Among the concerns expressed by stakeholders was that the method allowed scores to be based on very few measures for some hospitals. As a result, MDH and Mathematica examined the data further and decided to make a set of changes to the methodology for constructing the hospital total care quality score. This paper summarizes some of those decisions and seeks input on possible further changes to the method of constructing composite measures, with the goal of creating reasonable composite indicators of care quality in hospitals and physician clinics.

What Past Decisions Have Been Made?
The initial recommendation of the 2009 Advisory Group was to create a quality composite measure from four sub-composites plus a standalone fifth category of patient experience. The four sub-composites were three outcome components (readmission, mortality, and inpatient complications) and one process component (see Figure 1). In creating the overall composite score, each sub-composite was weighted using weights developed by Dr. Michael Pine, a national expert on contract with MDH. Applying the methodology to actual data on quality showed that in some instances the calculation of a composite score relied heavily on a few measures of quality. This was primarily the case for Critical Access Hospitals (CAHs) with a low number of patient stays; it occurred rarely for Prospective Payment System (PPS) hospitals.

Figure 1: Hospital Quality Composite Method, Initial Recommendation

After pursuing a number of alternative methods, including a cluster analysis of the quality scores, MDH and Mathematica opted to revise the compositing formula by eliminating the calculation of sub-composites and instead calculating quality measures in three distinct higher-level areas (i.e., domains) of quality: Outcome, Process, and Patient Experience. In the revision to the initial hospital report, the Outcome and Process domains are weighted 60% and 40%, respectively, in the quality composite, while patient satisfaction is reported but not currently factored into the overall score (as was the case before). Effectively, with this change the Outcome domain is created by combining the three sub-domains of readmission, mortality, and inpatient complications, but the calculation of points occurs at the Outcome domain level.

While altering the construction of the composite score from a weighted aggregation of four sub-composites to separate Outcome and Process domains may reduce the number of instances in which the composite relies on a single measure of quality, it does not completely eliminate the possibility of a few measures dominating a score. To address this, the research team also established a stricter requirement on the number of measures required per domain. In analyzing the data, Mathematica and MDH agreed that a minimum of six measures per domain struck a reasonable balance between inclusiveness (primarily for CAHs) and having a score based on a more reasonable number of measures than in the first iteration of reports. This balance helps achieve two goals of the original Advisory Group: to include as many hospitals as feasible while developing a composite indicator of quality that is relatively stable and robust.
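To make the revised approach concrete, the sketch below shows how a total care quality score could be assembled under these rules: a domain receives a score only if at least six measures are available, and the Outcome and Process domain scores are combined with 60%/40% weights. This is a minimal illustration rather than the production methodology; in particular, it assumes each measure has already been converted to 0-10 points and that a domain score is the simple average of those points rescaled to 0-100, which the text above does not specify.

```python
from statistics import mean

# Weights and inclusion rule described for the revised hospital methodology:
# Outcome 60%, Process 40%; patient experience is reported separately.
DOMAIN_WEIGHTS = {"outcome": 0.60, "process": 0.40}
MIN_MEASURES_PER_DOMAIN = 6


def domain_score(measure_points):
    """Average a domain's 0-10 measure points and rescale to 0-100.

    Returns None when fewer than the minimum number of measures is
    available, so the hospital does not receive a score for that domain.
    (The within-domain aggregation shown here is an assumption for
    illustration only.)
    """
    if len(measure_points) < MIN_MEASURES_PER_DOMAIN:
        return None
    return 100.0 * mean(measure_points) / 10.0


def total_quality_score(outcome_points, process_points):
    """Combine the Outcome and Process domains into a total care quality score.

    A hospital receives a composite only if it qualifies for both domains.
    """
    outcome = domain_score(outcome_points)
    process = domain_score(process_points)
    if outcome is None or process is None:
        return None
    return (DOMAIN_WEIGHTS["outcome"] * outcome
            + DOMAIN_WEIGHTS["process"] * process)


# Example: a hospital with 8 outcome measures and 6 process measures.
print(total_quality_score([7, 8, 9, 6, 10, 8, 7, 9], [9, 10, 8, 9, 10, 7]))
```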
Using the requirement of six measures per domain to receive a quality composite score, 49 of 54 PPS hospitals will receive a total care quality score, the same number as in the first iteration. However, 14 fewer CAHs receive both domain scores; 51 CAHs will receive a total care quality score in the second iteration (compared with 65 CAHs in the first iteration). Increasing the minimum from six to ten measures results in a substantial drop in the number of CAHs receiving a score, an option MDH considers undesirable given the goal of consumer reporting.

Option      | Variable  | PPS Hospitals | CAH Hospitals
6 Measures  | Process   | 49            | 59
6 Measures  | Outcome   | 49            | 52
6 Measures  | Composite | 49            | 51
7 Measures  | Process   | 49            | 58
7 Measures  | Outcome   | 49            | 48
7 Measures  | Composite | 49            | 47
8 Measures  | Process   | 49            | 57
8 Measures  | Outcome   | 48            | 39
8 Measures  | Composite | 48            | 38
9 Measures  | Process   | 49            | 55
9 Measures  | Outcome   | 48            | 29
9 Measures  | Composite | 48            | 29
10 Measures | Process   | 49            | 46
10 Measures | Outcome   | 47            | 17
10 Measures | Composite | 47            | 17

In its analysis, Mathematica also evaluated the impact of the alternative requirements on the mean domain score and on the correlation of domain scores and ranks, to ensure that the results when using a six-measure minimum are relatively stable (that is, similar to the results when requiring more measures). This analysis was presented to the Rapid Response Team (RRT) for consideration. Results of the analysis of correlations between scores and ranks of domain scores were mixed. For PPS hospitals, the results were stable across all minimum-measure requirements in both domains. For CAHs, results in the Process domain were stable, although results for the Outcome domain were less so (because so many CAHs drop out at higher measure requirements). These results suggest that requiring only six measures in order to increase the number of CAHs included in the reporting system does not produce scores and rankings substantially different from requiring 7, 8, or 9 measures. A requirement of 10 measures would produce much different results on the Outcome domain, but only 17 CAHs would be included in the resulting reports.

What Are The Options?
For future versions of the PPG hospital report, MDH could:
Option 1: Use a different compositing method for evaluating hospital quality. Some such methods can become highly complex and sophisticated (e.g., factor analytic methods or structural equation models).
Option 2: Set the minimum number of measures required specific to the type or location of hospital, i.e., do not maintain consistent methods across PPS and CAH hospitals.
Option 3: Include additional quality measures beyond those recommended by the 2009 Advisory Group.

What Are the Considerations (advantages/disadvantages)?
Whatever compositing methodology is chosen for future reports, it must maintain the PPG core principles of inclusiveness and transparency. Each option and decision brings with it gains or losses to those principles. Additionally, simplicity of interpretation should remain a goal for public reporting.
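Before turning to the questions below, note that the sensitivity analysis summarized in the table above reduces to a simple tabulation: for each candidate minimum, count the hospitals that have enough available measures in each domain, and in both domains for the composite. The sketch below is illustrative only; the hospital data structure and field names are assumptions.

```python
def count_qualifying(hospitals, minimum):
    """Count hospitals meeting a minimum-measures requirement.

    `hospitals` is assumed to be a list of dicts giving the number of
    available (non-imputed) measures per domain, for example
    {"name": "A", "process": 12, "outcome": 7}.
    """
    counts = {"process": 0, "outcome": 0, "composite": 0}
    for h in hospitals:
        has_process = h["process"] >= minimum
        has_outcome = h["outcome"] >= minimum
        counts["process"] += has_process          # bool counts as 0 or 1
        counts["outcome"] += has_outcome
        counts["composite"] += has_process and has_outcome
    return counts


def sensitivity_table(hospitals, minimums=range(6, 11)):
    """Tabulate qualifying-hospital counts for each candidate minimum (6-10)."""
    return {m: count_qualifying(hospitals, m) for m in minimums}
```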
Specific Questions on Which We Seek Advisory Group Input
1) Does the current method of calculating composite scores meet a reasonable-person standard? If not, how should the minimum number of reported/available measures for each quality component be determined, and how should the quality composite be constructed?
2) How many measures should be required for each composite domain, and where should the balance be struck in imputing measures with low numbers of observations?
3) How do decisions on issues such as including topped-out measures affect the ability to provide quality scores on a sufficient number of measures?
4) How should patient experience be incorporated into the calculation of the overall quality score, and how would the weights for the other composites be recalibrated?
5) Does it continue to make sense to evaluate PPS and CAH hospitals with the same composite construction? If not, what is the impact of using different methods on providers' and consumers' comprehension?
6) What lessons learned from the hospital analysis should be applied to the clinic analysis (Figure 2), e.g., similarity of standards, minimum number of measures per domain, and the weight used for each domain?

Figure 2: Physician Clinic Composite Method, Initial Recommendation

Absolute Versus Relative Scoring of Individual Measures

What Is The Issue?
Before combining quality measures into an overall composite, the first decision that must be made is whether to compare a provider's performance on a given measure relative to the performance of other providers in its peer group, or to compare its "absolute" performance on the measure.

What Past Decisions Have Been Made?
The 2011 PPG hospital reports used a relative ranking methodology for the quality composite, as recommended by the 2009 Advisory Group, in which hospitals received "points" based on their performance in comparison to their peer hospitals on the same measure. The assignment of points was relative to peers and was paired with benchmark and performance thresholds (low and high). Specifically, hospitals received zero points if their rate of performance was below the 30th percentile of the distribution of rates achieved by all hospitals in their peer group on a given measure; between 1 and 9 points based on evenly spaced intervals between the 30th and 90th percentiles; and 10 points if their performance was equal to or greater than the mean of the 90th percentile. This methodology created artificially large differences in performance, or points, on measures where performance between hospitals was narrowly distributed. This was particularly the case for topped-out measures [1], but it also affected other measures with narrow distributions in performance (see Table 1).

[1] "Topped out" is defined by the Centers for Medicare & Medicaid Services (CMS) Hospital Value-Based Purchasing (HVBP) program as occurring when the 75th percentile is indistinguishable from the 90th percentile and the truncated coefficient of variation is less than 0.1. The truncated coefficient of variation is the coefficient of variation calculated on values between the 5th and 95th percentiles.

With input from a discussion of the Rapid Response Team in Summer 2012, MDH has determined that it will use absolute scoring in the next round of PPG hospital reports. Absolute scoring, in which points are awarded based on the provider's performance against an absolute standard of performance, should address the issue of artificially large differences in performance by creating a point structure that reflects the relative similarity of provider performance. However, in implementing absolute scoring in the next round of hospital reports and potential clinic reports, there are several outstanding questions on which MDH would like recommendations from the Advisory Committee.
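For reference, the 2011 relative point assignment described above can be sketched as follows. This is a simplified reading: rates are assumed to be oriented so that higher is better, the benchmark is simplified to the 90th percentile of the peer-group distribution (rather than the mean of the 90th percentile noted above), and points 1 through 9 are spread over evenly spaced intervals between the 30th and 90th percentiles. Names are illustrative.

```python
import numpy as np


def relative_points(rate, peer_rates):
    """Assign 0-10 points relative to the peer-group distribution.

    0 points below the 30th percentile, 10 points at or above the
    90th-percentile benchmark, and 1-9 points in evenly spaced
    intervals in between (a simplified reading of the 2011 method).
    """
    threshold = np.percentile(peer_rates, 30)   # low performance threshold
    benchmark = np.percentile(peer_rates, 90)   # benchmark threshold
    if rate < threshold:
        return 0
    if rate >= benchmark:
        return 10
    width = (benchmark - threshold) / 9.0       # nine evenly spaced intervals
    return 1 + int((rate - threshold) // width)
```

On a narrowly distributed (topped-out) measure, the 30th and 90th percentiles sit close together, so small differences in rates translate into large differences in points, which is the problem the move to absolute scoring is meant to address.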
Specific Questions on Which We Seek Advisory Group Input / What Are The Options?

Question One: For the purpose of point assignment, should there be both a lower and an upper performance cutoff, or only a lower performance cutoff?
Option 1: Apply both lower-bound and upper-bound cutoffs.
Option 2: Apply a lower-bound cutoff only.

Question Two: Assuming performance thresholds are used, should they be measure-specific in an effort to achieve a balance between absolute and relative rankings?
Option 1: Use measure-specific thresholds, resulting in threshold points being assigned differently for each available measure.
Option 2: Do not use measure-specific thresholds, i.e., have one set of performance standards.

Question Three: Should topped-out measures be included in the hospital (and clinic) quality composite methodology?
Option 1: Omit topped-out measures in future hospital reports.
Option 2: Do not omit topped-out measures in future hospital reports.

Question Four: Considering that there is more variation among clinics' quality scores than hospitals' (see Table 2), should the clinic quality composite methodology employ absolute or relative scoring?
Option 1: Use absolute scoring in the clinic quality composite methodology.
Option 2: Use relative scoring in the clinic quality composite methodology.

What Are the Considerations (advantages/disadvantages)?

Question One: Should there be both a lower and an upper performance threshold, or only a lower performance cutoff?
In implementing the absolute scoring methodology, MDH plans to assign 0 points to hospitals that have a performance rate of less than 40 percent. MDH also intends to impose an upper performance threshold, awarding 10 points at a 98 percent performance rate. The implications of the alternative approaches to assigning performance thresholds are as follows:
A. Either approach maintains similarity in point assignment on almost all measures for facilities with similar performance on each measure;
B. Either approach results in higher point assignments for most measures, recognizing Minnesota hospitals' high performance on these quality measures;
C. Using the 98% threshold results in a distribution with slightly fewer points awarded overall. The implication at the top of the distribution is that the measure would lack the capacity to distinguish providers who achieve perfect or near-perfect scores (98-100%) from those that obtain excellent scores (94%) (see Table 3). The impact at the lower tail of performance scores may be even more important. For example, a hospital that achieves a desirable outcome 65% of the time receives 5 points for that measure without the 98% threshold but only 4 points with the 98% threshold in place (see Table 3).
Whether the change in point assignment between the two options creates a greater or reduced incentive to engage in quality improvement efforts is somewhat of an empirical question.

Question Two: Should there be measure-specific thresholds instead of global performance thresholds?
One way to implement this approach would be to use a 40 percent minimum threshold with measure-specific upper thresholds that depend on the underlying distribution of the measurement scores. The remaining points (1 through 9) would then be based on the distribution of performance scores below the upper 10-point threshold down toward the 40 percent lower threshold. The primary advantage of measure-specific thresholds is that the method could be used to find a compromise between relative and absolute scoring.
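The description above leaves the measure-specific upper threshold unspecified. Purely as a hypothetical reading, the sketch below anchors the top of the scale (10 points) at the 90th percentile of each measure's own distribution, keeps the 40 percent floor, and spreads points 1 through 9 evenly in between; the percentile choice is an assumption, not a stated MDH decision.

```python
import numpy as np

LOWER_THRESHOLD = 40.0  # global 40 percent floor described in the text


def measure_specific_points(rate, measure_rates):
    """Assign 0-10 points using a measure-specific upper threshold.

    Hypothetical reading: the 10-point threshold is anchored at the 90th
    percentile of the measure's own distribution (an assumption), so the
    point scale changes from measure to measure.
    """
    upper = max(np.percentile(measure_rates, 90), LOWER_THRESHOLD + 1e-9)
    if rate < LOWER_THRESHOLD:
        return 0
    if rate >= upper:
        return 10
    width = (upper - LOWER_THRESHOLD) / 9.0  # nine evenly spaced intervals
    return 1 + int((rate - LOWER_THRESHOLD) // width)
```

Under such a scheme an identical rate can earn different point totals on different measures, which is the comparability concern raised in the disadvantages below.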
There are several potential disadvantages to consider:
a. Providers unfamiliar with the changing nature of measure-specific point scales may find this approach confusing and unpredictable, given that they would need to know the underlying distribution to have a sense of how many performance points they might achieve. Changing the point scale for each measure based on its own distribution would create point values with very little comparability to each other in terms of absolute performance, and this may make composite scores more complex for providers and consumers to understand.
b. Given that some measures have tight distributions at low performance levels, measure-specific thresholds would result in providers receiving high points for low performance on some measures, while achieving high points on other measures would require high performance. This may decrease motivation to identify and target low-performance areas for quality improvement, so long as performance is low across all facilities.
c. Measure-specific thresholds can mask the year-to-year results of providers' improvement efforts. Such a system would require recalibration from year to year to reflect changes in the underlying distribution of each measure. If other providers also experienced quality gains in the second year, a provider's improvements in quality would not translate into improvements in its quality scores. Perhaps more importantly, providers who maintain equal performance from year to year could receive fewer points with no corresponding drop in their absolute performance. A more static benchmark allows providers to better see improvements in their total score from year to year as they engage in quality improvement.
d. Measure-specific thresholds may magnify point differences between providers, which is a weakness associated with relative scoring and the impetus for transitioning to absolute scoring.

Question Three: Should topped-out measures be kept in future hospital reports?
Topped-out measures are measures on which providers exhibit uniformly high performance: there is not much room for improvement, and there is not much variation. That the choices are not always straightforward is reflected in the changing position taken by the Hospital Value-Based Purchasing (HVBP) program, after which a portion of MDH's approach was modeled. At the time MDH selected measures for inclusion in PPG, topped-out measures were still under consideration; by the time of reporting, HVBP no longer planned to include topped-out measures. The advantage of including topped-out measures is that doing so recognizes that hospitals perform at a high level on a range of quality metrics and that past quality improvement efforts have reduced variability among facilities. Also, given that even non-topped-out measures for hospitals show relatively little variability, defining a topped-out measure is somewhat subjective; including topped-out measures avoids that decision.
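A check for topped-out status along the lines of the definition in footnote 1 could look like the sketch below. It simplifies in two ways: percentile "indistinguishability" is treated as near equality, and the truncated coefficient of variation is computed on the rates between the 5th and 95th percentiles.

```python
import numpy as np


def is_topped_out(rates, cv_cutoff=0.1):
    """Flag a measure as topped out per the HVBP-style definition.

    Simplified: the 75th/90th percentile comparison is treated as a
    near-equality check, and the truncated coefficient of variation is
    computed on rates between the 5th and 95th percentiles.
    """
    rates = np.asarray(rates, dtype=float)
    p75, p90 = np.percentile(rates, [75, 90])
    lo, hi = np.percentile(rates, [5, 95])
    truncated = rates[(rates >= lo) & (rates <= hi)]
    truncated_cv = truncated.std() / truncated.mean()
    return bool(np.isclose(p75, p90) and truncated_cv < cv_cutoff)
```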
Finally, the choice to exclude topped-out measures should be considered in light of the composition of the current set of measures available for Provider Peer Grouping. As measurement moves further toward outcome measures and patient-reported performance measures, quality differences on important domains of performance will likely become apparent on a greater number of measures. However, although the science in this area is moving rapidly, it will likely be a number of years before such nationally vetted, publicly reported measures are available for consideration in the PPG system.

Question Four: Considering that there is more variation among clinics' quality scores than hospitals', should the clinic quality composite methodology employ absolute or relative scoring?
The advantage of using absolute scoring in the clinic quality composite methodology is that it is consistent with the hospital quality composite methodology. The main disadvantage is that the highest performers on measures with generally low performance would not be able to receive high points, because they would be held against an absolute performance goal.

Table 1. Distribution of Hospital Measure Rates
[For each measure, the table reports the minimum N, the number of hospitals with the measure, the number of hospitals with an imputed rate, and the distribution of rates (mean, minimum, 10th/25th/50th/75th/90th percentiles, maximum, and standard deviation), separately for PPS hospitals and Critical Access Hospitals. An asterisk (*) marks topped-out measures.]

PPS hospital measures
Process: AMI patients given aspirin at arrival (AMI-1)*; AMI patients given aspirin at discharge (AMI-2)*; AMI patients given ACE/ARB for LVSD (AMI-3); AMI smoking cessation advice/counseling (AMI-4)*; AMI beta blocker at discharge (AMI-5)*; AMI patients given PCI within 90 minutes of arrival (AMI-8a); HF patients given discharge instructions; HF patients given an evaluation of LVS function (HF-2)*; HF patients given ACE/ARB for LVSD (HF-3)*; HF patients given smoking cessation advice/counseling (HF-4)*; PN patients assessed/given pneumococcal vaccination (PN-2); PN with ER blood culture before first dose of antibiotics (PN-3b); PN patients given smoking cessation advice/counseling (PN-4)*; PN patients given antibiotic(s) within 6 hours after arrival (PN-5c); PN patients given the most appropriate initial antibiotic(s) (PN-6); PN patients assessed and given influenza vaccination (PN-7); Surgery patients with recommended VTE prophylaxis ordered (SCIP-VTE-1); Surgery patients with appropriate VTE prophylaxis within 24 hours of surgery (SCIP-VTE-2); Surgery patients given prophylactic antibiotic within 1 hour prior to incision (SCIP-Inf-1a); Surgery patients with appropriate prophylactic antibiotic selected (SCIP-Inf-2a); Surgery patients with antibiotics discontinued within 24 hours (SCIP-Inf-3a); Cardiac surgery patients with controlled post-operative blood glucose (SCIP-Inf-4); Surgery patients with appropriate hair removal (SCIP-Inf-6)*; Surgery patients taking beta blockers who were kept on them (SCIP-Card-2); VAP bundle compliance for ICU patients (MHA)*; Central line bundle compliance for ICU patients (MHA)
Mortality: 30-day mortality after hospital admission for AMI; 30-day mortality after hospital admission for HF; 30-day mortality after hospital admission for PN; AAA repair inpatient mortality rate (IQI-11); Hip fracture inpatient mortality rate (IQI-19); PTCA inpatient mortality rate (IQI-30); CABG inpatient mortality rate (IQI-12)
Inpatient Complications: Decubitus ulcer (pressure ulcer) (PSI-3)*; Death of surgical inpatients with serious treatable complications (PSI-4); Post-op pulmonary embolism or deep vein thrombosis (PSI-12); Obstetric trauma: vaginal delivery with instrument (PSI-18); Obstetric trauma: vaginal delivery without instrument (PSI-19); HAI: SSI rate for vaginal hysterectomy (MHA)*
Readmissions: 30-day readmission rate after hospital discharge for AMI; 30-day readmission rate after hospital discharge for HF; 30-day readmission rate after hospital discharge for PN

Critical Access Hospital measures
Process: AMI patients given aspirin at arrival (AMI-1); AMI patients given aspirin at discharge (AMI-2); AMI beta blocker at discharge (AMI-5); HF patients given discharge instructions (HF-1); HF patients given an evaluation of LVS function (HF-2); HF patients given ACE/ARB for LVSD (HF-3); PN patients assessed/given pneumococcal vaccination (PN-2); PN with ER blood culture before first dose of antibiotics (PN-3b); PN patients given smoking cessation advice/counseling (PN-4); PN patients given antibiotic(s) within 6 hours after arrival (PN-5c); PN patients given the most appropriate initial antibiotic(s) (PN-6); PN patients assessed and given influenza vaccination (PN-7); Surgery patients with recommended VTE prophylaxis ordered (SCIP-VTE-1); Surgery patients with appropriate VTE prophylaxis within 24 hours of surgery (SCIP-VTE-2); Surgery patients given prophylactic antibiotic within 1 hour prior to incision (SCIP-Inf-1a); Surgery patients with appropriate prophylactic antibiotic selected (SCIP-Inf-2a); Surgery patients with antibiotics discontinued within 24 hours (SCIP-Inf-3a); Surgery patients with appropriate hair removal (SCIP-Inf-6); Surgery patients taking beta blockers who were kept on them (SCIP-Card-2); VAP bundle compliance for ICU patients (MHA); Central line bundle compliance for ICU patients (MHA)
Mortality: 30-day mortality after hospital admission for AMI; 30-day mortality after hospital admission for HF; 30-day mortality after hospital admission for PN; Hip fracture inpatient mortality rate (IQI-19)
Inpatient Complications: Decubitus ulcer (pressure ulcer) (PSI-3)*; Post-op pulmonary embolism or deep vein thrombosis (PSI-12)*; Obstetric trauma: vaginal delivery with instrument (PSI-18); Obstetric trauma: vaginal delivery without instrument (PSI-19); HAI: SSI rate for vaginal hysterectomy (MHA)*
Readmissions: 30-day readmission rate after hospital discharge for AMI; 30-day readmission rate after hospital discharge for HF; 30-day readmission rate after hospital discharge for PN
Note: Table includes results for the final set of hospitals and measures in composite construction.
* Measure is topped out (75th and 90th percentiles are equal and the truncated coefficient of variation is < 0.1).

Table 2. Distribution of Clinic Quality Measure Rates (Test Data)
[For each clinic measure, the table reports the number of clinics with the measure and the distribution of rates (minimum, 1st/10th/25th/50th/75th/90th/99th percentiles, maximum, mean, and standard deviation).]

Preventive care: MAMM, % of women ages 52-69 who had a mammogram; PAP, % of women ages 24-64 who received a Pap test in the last year; CHIMM, % of children who received recommended vaccinations by the age of 2; CHLAM, % of sexually active females, ages 16-24, who received a Chlamydia test; COLRCTL, % of adults, ages 51-80, who received 1 or more of 4 proven screening tests
Short-term acute care: URI, % of children, 3 months to 18 years, diagnosed with a cold and not given an antibiotic; PHARYN, % of children, ages 2-18, diagnosed with a sore throat and given a strep test and antibiotics; BRONCH, % of adults, ages 18-64, diagnosed with acute bronchitis and not given an antibiotic
Chronic disease outcomes: OVC_BP, % of vascular disease patients, ages 18-75, who maintain blood pressure below 130/80; OVC_LDL, % of vascular disease patients, ages 18-75, who lower LDL to less than 100 mg/dl; OVC_TOBACCO, % of vascular disease patients, ages 18-75, who don't smoke; OVC_ASPIRIN, % of vascular disease patients, ages 18-75, who take an aspirin daily; OVC_OPTIMAL_CARE, % of vascular disease patients receiving optimal care; ODC_BP, % of diabetes patients, ages 18-75, who maintain blood pressure below 130/80; ODC_LDL, % of diabetes patients, ages 18-75, who lower LDL to less than 100 mg/dl; ODC_HBA1C, % of diabetes patients, ages 18-75, with an A1c level less than 7%; ODC_TOBACCO, % of diabetes patients, ages 18-75, who don't smoke; ODC_ASPIRIN, % of diabetes patients, ages 40-75, who take an aspirin daily; ODC_OPTIMAL_CARE, % of diabetes patients receiving optimal care; ASTHMA, % of asthma patients, ages 5-56, who were prescribed appropriate medication; BP, % of adults, ages 18-85, diagnosed with high blood pressure with BP lower than 130/80
Note: This table includes rates for all providers with a positive case size; it does not limit to those with minimum recommended case sizes. Based on 482 clinics in test data that were primary care or multispecialty.

Table 3: Rate Cutoffs for Absolute Points Assignment, Two Options

Points | Option 1 (40% Threshold) | Option 2 (40%/98% Thresholds)
0      | Rate < 40                | Rate < 40
1      | 40 <= Rate < 46          | 40 <= Rate < 46.44
2      | 46 <= Rate < 52          | 46.44 <= Rate < 52.88
3      | 52 <= Rate < 58          | 52.88 <= Rate < 59.32
4      | 58 <= Rate < 64          | 59.32 <= Rate < 65.76
5      | 64 <= Rate < 70          | 65.76 <= Rate < 72.20
6      | 70 <= Rate < 76          | 72.20 <= Rate < 78.64
7      | 76 <= Rate < 82          | 78.64 <= Rate < 85.08
8      | 82 <= Rate < 88          | 85.08 <= Rate < 91.52
9      | 88 <= Rate < 94          | 91.52 <= Rate < 98
10     | Rate >= 94               | Rate >= 98
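Translated directly into a point-assignment routine, the Table 3 cutoffs might be applied as in the sketch below; the cutoff values are copied from the table, while the function and constant names are illustrative. The final assertions reproduce the worked example from the Question One considerations above.

```python
# Lower bounds for points 1-10 under each option, taken from Table 3.
OPTION_1_BOUNDS = [40, 46, 52, 58, 64, 70, 76, 82, 88, 94]     # 40% floor only
OPTION_2_BOUNDS = [40, 46.44, 52.88, 59.32, 65.76, 72.20,
                   78.64, 85.08, 91.52, 98]                     # 40%/98% thresholds


def absolute_points(rate, bounds):
    """Return 0-10 points: 0 below the floor, otherwise the highest
    point value whose lower bound the rate meets or exceeds."""
    points = 0
    for pts, lower in enumerate(bounds, start=1):
        if rate >= lower:
            points = pts
    return points


# Worked example from the considerations above: a 65 percent rate earns
# 5 points with only the 40 percent floor (Option 1) but 4 points once
# the 98 percent upper threshold is added (Option 2).
assert absolute_points(65, OPTION_1_BOUNDS) == 5
assert absolute_points(65, OPTION_2_BOUNDS) == 4
```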