International Journal for Quality in Health Care 2005; Volume 17, Number 2: pp. 107–114
Advance Access Publication: 21 January 2005
10.1093/intqhc/mzi007
Quality in Practice
Development of a set of strategy-based system-level cancer care performance indicators in Ontario, Canada
ANNA GREENBERG1, HELEN ANGUS1,2, TERRENCE SULLIVAN1,2,3 AND ADALSTEINN D. BROWN3
1Cancer Quality Council of Ontario, Toronto, Ontario, Canada; 2Cancer Care Ontario, Toronto, Ontario, Canada; 3Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
Abstract
Objectives. To develop a set of scientifically sound and managerially useful system-level cancer care performance indicators for
public reporting in Ontario, Canada.
Implementation. Using a modified Delphi panel method, comprising a systematic literature review and multiple rounds of structured feedback from 34 experts, the Cancer Quality Council of Ontario developed a set of quality indicators spanning cancer
prevention through to end-of-life care. To be useful to decision-makers and providers, indicator selection criteria included a clear
focus on the cancer system, relevance to a diversity of cancer providers, a strong link to the mission and strategic objectives of the
cancer system, clear directionality of indicator results, presence of targets and/or benchmarks, feasibility of populating the indicator, and credibility of the measure as an indicator of quality. To ensure that the selected indicators would measure progress over
time against specific and widely accepted goals, we created a strategy map based on the five strategic objectives of the Ontario cancer system: (i) to improve the measurement and reporting of cancer quality, (ii) to increase the use of evidence and innovation in
decision-making, (iii) to improve access to cancer services and reduce waiting times, (iv) to increase efficiency across the system, and (v) to reduce the burden of cancer. An analysis of the mean indicator ratings by experts and the strategy mapping exercise resulted in the identification of 36 indicators deemed suitable for routine performance measurement of the Ontario cancer system.
Lessons learned. The resulting instrument incorporates a credible evidence basis for performance measurement aligned to the
five strategic goals for the Ontario cancer system. It represents the integration of a management culture, focused on the implementation of a new strategic direction for the cancer system, with the underlying evidence-based culture of clinicians.
Keywords: neoplasms, performance measurement, quality indicators of health care, quality of health care

Address reprint requests to Adalsteinn D. Brown, Assistant Professor, Associate SGS Member, Health Policy, Management and Evaluation, Faculty of Medicine, University of Toronto, McMurrich Building, 2nd Floor, 12 Queen's Park Crescent West, Toronto, Ontario M5S 1A8. Tel: 416-978-1484. E-mail: [email protected]
Background
Pressures facing the cancer system in Ontario
Coordinating cancer care in Ontario presents a number of
challenges, including the large size of the province and the
need to provide equally high-quality care no matter where
patients reside, despite regional variation in the types of cancer most commonly diagnosed and practice patterns for different cancers. Visible symptoms of these challenges include
growing waiting times for cancer treatment over the past decade, and uneven access to both treatment and supportive
services [1,2]. Capacity constraints are exacerbated by an ageing and growing population [3], and the need to continually
integrate emerging advances in diagnostics and treatment into
the delivery system. Moreover, until recently, Ontario had no central oversight of the majority of cancer services delivered. Due to this legacy, the cancer 'system' in
Ontario has been limited to coordination of the radiation
therapy and about half of the systemic therapy delivered.
Reorganization and re-integration of the
cancer system
To respond to these pressures, the Ontario cancer system was
recently restructured [4]. Cancer Care Ontario, the public
agency previously responsible for providing radiation and systemic therapy to patients receiving treatment in regional cancer centres, has shifted away from direct service delivery to a
new purchasing role with responsibility for ensuring quality
for all cancer patients in Ontario. Cancer centres and affiliated
hospitals—traditionally separate entities—were combined into
Integrated Cancer Programs (ICPs), creating a single point of
accountability for service delivery.
Role of performance reporting in guiding the
system
In 2002, the Ontario Minister of Health and Long-term Care
established the Cancer Quality Council of Ontario, an arm's-length body of experts in the fields of cancer medicine, research, and policy, to monitor and report publicly on cancer system performance and guide improvements. Adopting a simple definition of quality, ensuring that all patients receive 'medically appropriate and timely access to specialists and specialist treatment' throughout the patient journey [5], the council set out to
develop a valid, useful and comprehensive annual report of
quality for the provincial cancer system. The report would
measure progress at a high level against strategic objectives.
However, the culture of the cancer sector in Ontario in
general has long been one in which evidence is highly valued.
As one example, Cancer Care Ontario’s Program in Evidence
Based Care is an internationally recognized clinical practice
guidelines development initiative [6], which provides up-to-date expert recommendations based on the latest available
scientific evidence. The recent change in Cancer Care
Ontario’s role introduced a performance management philosophy on top of the existing evidence-based foundation. Vital to
the success of this shift therefore is a sufficient evidence basis
to ensure that indicators, targets, and benchmarks are acceptable to a broad range of stakeholders, many of whom are used to
a clear articulation of the evidence for decision-making.
Previous methods for measuring cancer
system performance
The Ontario experience
In the past, performance measurement within Cancer Care
Ontario was used largely for internal management purposes, and measures were collected and reported on an ad
hoc basis. These measures spanned cancer surveillance, data
quality, organized breast screening, radiotherapy, and systemic therapy. Apart from surveillance data from the Ontario
Cancer Registry, these measures tended to focus only on
those services provided directly by Cancer Care Ontario,
accounting for roughly a quarter of all cancer spending in the
province [7]. Moreover, significant measurement gaps have
existed across much of the continuum of cancer prevention
and treatment including risk factor surveillance, surgery, and
palliative care (see Table 1).
Table 1 Pre-existing CCO performance measures: area of interest and types

Area of interest                    Types of measures
Cancer surveillance                 Incidence, prevalence, survival
Ontario breast screening program    Utilization, detection rates (early and late), predictive validity, waiting times
Cancer diagnosis                    Completeness of clinical and pathological data
Systemic therapy                    Utilization, waiting times, expenditure
Radiation therapy                   Utilization, waiting times

The international experience

A small number of organizations around the world have developed comprehensive performance measurement systems
specific to cancer care (see Table 2). Common to all of these
systems is an attempt to assess quality across important and
common cancer sites and across the continuum of care. For
example, the National Health Service Executive in the UK
developed a framework to measure quality in the prevention,
early detection, treatment, and palliative care of breast, lung,
and colorectal cancers [12].

Table 2 International examples of comprehensive cancer quality assessment tools

European Cancer Health Indicator Project (EUROCHIP) [8]. Cancers: all. Services: prevention, screening, treatment, palliation. Geography: European Union. Implementation status: publication of candidate indicators. Data sources: to be determined.

American Society of Clinical Oncology (ASCO) [9]. Cancers: breast, colon/rectum. Services: diagnosis, treatment. Geography: US national (five cities). Implementation status: preliminary results compiled. Data sources: medical chart, patient surveys.

National Cancer Institute (NCI) [10]. Cancers: breast, cervix, colorectal, all sites combined. Services: prevention, early detection, diagnosis, follow-up, end-of-life. Geography: US national. Implementation status: third progress report published 2004. Data sources: national surveys, administrative data.

RAND Corporation [11]. Cancers: breast, cervical, colorectal, lung, prostate, skin. Services: screening, diagnosis, treatment, follow-up/palliation. Geography: US national. Implementation status: indicators selected. Data sources: medical chart, administrative data.

National Health Service (NHS) [12]. Cancers: breast, colorectal, lung. Services: prevention, early detection, treatment, palliation. Geography: UK national. Implementation status: indicators in use. Data sources: medical chart, administrative data.

SalickNet, Inc. [13]. Cancers: breast, colorectal, lung. Services: treatment. Geography: USA (company facilities, physician networks). Implementation status: indicators in use. Data sources: medical chart, patient surveys.

National Comprehensive Cancer Network (NCCN) [14]. Cancers: breast, non-Hodgkin's lymphoma. Services: treatment, pain assessment and management. Geography: US NCCN member cancer centers. Implementation status: in use since 1997. Data sources: project-specific electronic database.
Although there was significant value for Ontario in learning from these and other cancer performance measurement
systems, it was not appropriate to adopt any one of the
instruments wholesale. For example, the reliance on medical
chart abstraction for many of these instruments is not suitable for routinely producing a high-level provincial picture of
quality where more than 50 000 new cancer patients need
care every year, in addition to a portion of the nearly 250 000
previously diagnosed patients [15]. Moreover, in no instance
did the target audience, scope, or particular measurement
objectives match that of the council: to monitor the delivery
of medically appropriate and timely cancer services with a
focus on measuring progress against the strategic objectives
of the cancer system. A process of adapting indicators from
other contexts for Ontario’s particular objectives, care delivery system, professional culture, and clinical practice was
needed [16].
Implementation
A four-person working group was established to set the
project in motion. The group included a member of the
Cancer Quality Council of Ontario, an expert in performance
measurement, and two members of the council’s secretariat.
The working group met 11 times over the course of 15
months.
The council emphasized the need for the report to have a
system-level focus. This was to correct the legacy of ad hoc
performance measurement focused in great detail on a small
portion of cancer service delivery; to demonstrate to the
public that the system as a whole was being monitored and
was accountable; to be useful across all modalities of cancer
care; and to complement and support a range of detailed
program-level performance indicators already in use in
Ontario.
Identifying potential measures of cancer system performance
Phase 1 of the project involved a systematic literature review,
intended to generate a comprehensive list of published and
unpublished cancer-specific quality indicators. The review comprised scholarly literature (1990–2003) and organizational
documents on indicator initiatives in other jurisdictions; cancer
clinical practice guidelines; and personal communication with
health services researchers involved in indicator development.
The search strategy included MeSH terms for 12 cancer
sites, and quality terms (quality indicators, quality control, quality assurance, audit, and benchmarking) in Medline, Cancerlit,
HealthStar, CINAHL, and EMBASE. In addition, we performed web-based searches for gray literature using the keywords ‘cancer’ and ‘quality indicators’. Finally, we conducted
a review of practice guidelines, referencing the National
Guideline Clearinghouse and the Guidelines Advisory Committee of the Ontario Ministry of Health and Long Term Care
and the Ontario Medical Association. We identified a total of 2437 potentially relevant articles and documents, the review of which generated a list of over 650 cancer-specific quality indicators.
Selecting indicators of cancer system performance

To reduce this list systematically, we initiated a series of modified Delphi panels [17] in which experts reviewed the measures and associated evidence, and rated each according to a
defined set of criteria. To identify panellists, the working
group nominated Ontario experts in each modality of cancer
care and prevention. The nominated panellists in turn nominated other recognized experts. All nominated panellists were
invited to participate in the review exercise. Thirty-four panellists representing expertise across seven modalities (prevention, surveillance, screening, surgery, systemic therapy,
radiotherapy, and supportive and palliative care) were selected
to review the indicators.
Experts were divided into two groups: those internal or
external to Cancer Care Ontario. The first panel consisted of
10 Cancer Care Ontario clinical program leaders. The second
panel consisted of 16 additional clinical experts. Eleven members of the council conducted the final review. In contrast to
the previous two rounds, council members rated all indicators, regardless of area of expertise. In all, response to the
surveys was high: 90% in the first round, 88% in the second,
and 85% in the third.
In each case, panellists were asked to fill out a paper survey
based on the following criteria:
1. Captures quality at the level of the cancer system.
2. Relevant to one or more types of cancer providers.
3. Linked to the mission of Cancer Care Ontario.
4. Linked to one or more of the strategic objectives of the Ontario cancer system.
5. Has clear directionality.
6. Has existing targets.
7. Is feasible to measure and report.
8. Has a credible evidence basis.
The goal was to isolate indicators likely to yield valid and feasible cancer performance information that would be useful to
decision-makers and providers. In all three rounds of review,
additional indicators were suggested and appended to the
candidate list. We analysed mean scores for each indicator on
a five-point scale for the above criteria. The original list was
reduced to 45 indicators. Distinguishing indicators based on
criterion 1—whether the indicator captured quality at the level
of the provincial cancer system—largely drove the reduction.
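The arithmetic of this reduction step is simple to make explicit. The following is a minimal sketch of the kind of analysis described, assuming illustrative indicator names, made-up panellist scores, and an arbitrary cut-off on criterion 1; the paper does not state the exact threshold used.

```python
from statistics import mean

# Hypothetical ratings: indicator -> criterion number -> panellist scores
# on the five-point scale (1 = poor fit, 5 = excellent fit). All made up.
ratings = {
    "Waiting times: RT": {1: [5, 4, 5, 4], 8: [4, 5, 4, 4]},
    "Ward-level chart legibility": {1: [2, 1, 2, 2], 8: [3, 2, 3, 3]},
}

SYSTEM_LEVEL_CUTOFF = 3.5  # assumed threshold; not stated in the paper

def mean_scores(ratings):
    """Mean panellist score per indicator and criterion."""
    return {
        ind: {crit: mean(scores) for crit, scores in crits.items()}
        for ind, crits in ratings.items()
    }

def retain(ratings, cutoff=SYSTEM_LEVEL_CUTOFF):
    """Keep indicators whose mean on criterion 1 (captures quality at
    the level of the cancer system) meets the cutoff."""
    return [
        ind for ind, crits in mean_scores(ratings).items()
        if crits[1] >= cutoff
    ]

print(retain(ratings))  # -> ['Waiting times: RT']
```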
Constructing a strategy-based scorecard
An interesting synergy occurred between the council's development
direction of Cancer Care Ontario. The process of surveying
experts on what would be useful and important measures to
monitor performance played a role in the refinement of
CCO’s strategic objectives. Most importantly, this resulted in
a strong and explicit emphasis on improving the measurement, collection, and reporting of cancer performance information. Through an iterative process that included structured
feedback from Ontario cancer system managers and providers, the strategic goals of Cancer Care Ontario became:
1. To improve measurement, collection and reporting of
cancer system performance.
2. To increase the use of evidence and innovation in decision-making.
3. To increase the effective use of resources across the
system.
4. To improve access to cancer services and reduce waiting
times.
5. To reduce the burden of cancer.
With these general strategic directions in mind, we first
attempted to map the core set of indicators to a typical Kaplan
and Norton balanced scorecard model (financial performance,
customer value, internal processes, and learning and growth)
[18]. However, this approach resulted in three problems. Firstly,
the selection process coupled with the particular emphases of the
scorecard quadrants tended to underemphasize certain aspects of
the continuum of cancer care delivery (prevention to palliation).
For example, prevention was underrepresented. Secondly, the
scorecard was unbalanced; the ‘financial performance’ quadrant
had too few indicators. Finally, it was difficult to align the strategy
implicit in the Kaplan and Norton framework with the cancer
system strategic objectives.
Given general agreement, both within Cancer Care Ontario
and among key stakeholders, on the validity of the five strategic goals for fulfilling the vision of improving the cancer system, we restructured the scorecard based on these. The
resulting framework solved all three problems encountered
with the Kaplan and Norton approach. It emphasized all
services across the care continuum, it was more balanced, and
it was by definition aligned with the strategic directions for
the cancer system. As mentioned above, a key challenge of
this initiative was to apply a credible evidence basis to the
development of the performance measurement tool. The
atypical scorecard approach we adopted was an attempt to
mitigate the risks of implementing an untested model by aligning our measures with what is widely seen in the cancer system
as a valid strategic direction.
To complete the scorecard, we first mapped the five strategic
goals in relation to each other, based on a hypothesis of how
each goal influences or is influenced by others (see Figure 1).
As one example, we hypothesized that improving access to cancer services would require a combination of improved measurement, increased evidence-based decision-making, and enhanced efficiency and capacity.

[Figure 1 Cancer Quality Council of Ontario cancer system strategy map: the five strategic goals connected by hypothesized influence relationships.]
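Read as a directed graph, a strategy map of this kind can be written down explicitly. A minimal sketch follows; only the access example above is spelled out in the text, so the remaining edges here are illustrative placeholders rather than the full edge set of Figure 1.

```python
# Hypothesized influence edges among the five strategic goals: each key
# is a goal, each value lists the goals it is hypothesized to support.
STRATEGY_MAP = {
    "measurement": ["evidence", "access", "efficiency"],
    "evidence": ["access", "efficiency"],
    "efficiency": ["access"],
    "access": ["burden"],
    "burden": [],
}

def upstream(goal, graph=STRATEGY_MAP):
    """Goals hypothesized to influence `goal`, directly or indirectly."""
    found = set()
    frontier = [g for g, outs in graph.items() if goal in outs]
    while frontier:
        g = frontier.pop()
        if g not in found:
            found.add(g)
            frontier.extend(h for h, outs in graph.items() if g in outs)
    return found

# Improving access is hypothesized to depend on measurement,
# evidence-based decision-making, and efficiency:
print(upstream("access"))  # -> {'measurement', 'evidence', 'efficiency'}
```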
We then allocated each selected indicator to one of the five
strategic goals. Finally, we analysed each indicator in turn to
hypothesize lead and lag relationships. Taking one example,
we hypothesized that waiting times for breast cancer assessment are likely to be sensitive to change in mammography
rates and innovations in the delivery of diagnostic services for
breast cancer. In turn, changes in waiting times for breast
assessment are likely to influence patient satisfaction with
access to care. The council reviewed and refined these relationships over the course of three in-person sessions. As shown in
Table 3, the core set was ultimately reduced to 36 indicators.
The process helped the council to correct a slight imbalance
under certain strategic goals between lead versus lag indicators.
Specifically, the core set was somewhat weighted towards
measuring performance on our strategic goals in the long
term (e.g. smoking rates versus tobacco prevention activity).
Beyond simply producing data on a series of measures, adapting
and applying the Kaplan and Norton scorecard methodology
will allow us to paint a comprehensive picture with the indicators. That is, based on our hypothesis of the relationships
among the strategic goals, the scorecard is designed to describe,
for example, whether overall performance is moving in the
right direction.
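The lead/lag analysis can likewise be made concrete. Below is a minimal sketch using only the breast assessment example from the text, with shortened illustrative indicator names; the council's actual relationship set was larger and was refined over the three sessions.

```python
# Hypothesized lead -> lag edges among indicators, following the breast
# assessment example in the text.
INDICATOR_EDGES = [
    ("mammography rates", "waiting times: breast assessment"),
    ("diagnostic innovation", "waiting times: breast assessment"),
    ("waiting times: breast assessment", "patient satisfaction with access"),
]

def classify(edges):
    """Label each indicator as lead (only influences others),
    lag (only influenced by others), or both."""
    leads = {a for a, _ in edges}
    lags = {b for _, b in edges}
    return {
        ind: ("lead/lag" if ind in leads and ind in lags
              else "lead" if ind in leads else "lag")
        for ind in leads | lags
    }

for ind, role in sorted(classify(INDICATOR_EDGES).items()):
    print(f"{ind}: {role}")
# Waiting times for breast assessment is downstream of mammography
# rates and upstream of patient satisfaction, so it prints as lead/lag.
```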
Feasibility of the selected indicators
Only one of the selected indicators is currently infeasible,
‘radiation therapy quality assurance’, due to the lack of a centralized audit process to measure compliance with provincial
guidelines. Measures are currently being taken in Ontario to
address this issue. For a handful of other indicators (e.g. cancer
research funding and clinical trial accrual), the corresponding
information systems are in their infancy and so data are likely
to be incomplete in the council’s first report. Apart from
these issues, the main challenge to the feasibility of calculating
the selected indicators is that many of the measures require
complicated linkages between multiple population databases.
Although achievable, this type of linkage is time-consuming, making the routine measurement and reporting of cancer system performance more difficult.
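As one illustration of why these linkages are laborious: computing a waiting-time indicator may require joining a screening registry extract to a cancer registry extract on a shared person-level key before any dates can be compared. The sketch below shows a deterministic linkage with hypothetical file and field names that are not CCO's actual schemas; real linkages of population databases are governed by privacy rules and often require probabilistic matching rather than an exact key join.

```python
import csv

def load(path, key):
    """Read a CSV extract into a dict keyed on a person-level identifier."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

# Hypothetical extracts; field names are illustrative only.
screening = load("screening_extract.csv", "person_id")  # abnormal_result_date
registry = load("registry_extract.csv", "person_id")    # diagnosis_date

# Deterministic linkage on the shared key. Unmatched records are dropped
# here for brevity; in practice they would be audited, not discarded.
linked = [
    (screening[pid], registry[pid])
    for pid in screening.keys() & registry.keys()
]
print(f"linked {len(linked)} of {len(screening)} screening records")
```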
Lessons learned
Summary of the system and plans for
implementation and public reporting
We set out to create a high-level tool to measure and monitor
performance in Ontario’s cancer system. The resulting instrument incorporates a credible evidence basis for performance
measurement, including a systematic literature review and
modified Delphi panel method, as well as structured feedback from a broad base of cancer system managers and providers. Further, the development of a scorecard for the
selected indicators based on the five strategic goals for the
Ontario cancer system allowed us to integrate a management
culture, focused on the implementation of a new strategic
direction for the cancer system, within the underlying evidence-based culture of clinicians. Bridging the gap between
management and clinical practice through evidence-based performance measurement is increasingly critical to the field of
health care generally [19].
The combination of measuring performance at a system
level and across the spectrum of cancer prevention and treatment represents a single tool for managing the cancer system
that was not previously available. The implementation of this
tool should help us to fulfill three major goals of the Cancer
Quality Council of Ontario:
1. To provide timely, credible, public information on cancer system performance where little to none has existed
previously.
2. To assist cancer system managers, practice leaders, and
policy-makers to make informed decisions about targeted
quality improvement.
3. To demonstrate to the public that the system is being
monitored and is accountable.
This initiative is one of several recent efforts around the world
to monitor performance in cancer care delivery systems. These
initiatives reflect widespread acknowledgement that the complexity of the cancer patient journey—which encompasses
multiple disease sites, and a host of providers, care settings,
and specializations—warrants a performance measurement system specific to this group of diseases.
It is important to examine the limitations to the approach
we have adopted. Firstly, while this initiative is based on an
assessment of measures used in other jurisdictions to monitor
quality in cancer care, there is the possibility that we could
have missed important indicators. On the other hand, our systematic review was a 3-month exercise that covered five medical databases and included concerted efforts to go beyond the published literature, assessing cancer indicator development initiatives at all phases of completion and obtaining feedback on our review from experts in the field. Secondly, our use of the
modified Delphi panel process to select indicators could have
posed some issues of validity. Still, a separate study on surgical quality of care indicators carried out at Cancer Care
Ontario at the same time showed that expert panels were
more likely to select indicators with a stronger evidence base.
This suggests that this type of process helps to improve the
validity of the indicator set. Thirdly, the underlying strategy
for the cancer system could be flawed. However, the strategy
was carefully designed and vetted by all senior executives of
the Ontario cancer system, with structured input from
managers and providers throughout the province. We feel that it is a sound approach to system improvement and a reasonable basis upon which to measure progress.

Table 3 The selected indicators

Improve measurement, collection, and reporting of cancer system performance
- Integrated IT systems: Percentage of hospitals that meet the volume cut-off for cancer services that have a single view of patient results available in the hospital to appropriate providers, including diagnostic, procedural, systemic, and radiation therapy information
- Cancer data capture: Percentage of hospitals submitting all required data on cancer diagnosis and treatment on time to Cancer Care Ontario; percentage of data by cancer service modality that is submitted to Cancer Care Ontario on time
- Synoptic reporting: Percentage of pathology reports submitted to Cancer Care Ontario that are reported synoptically
- Stage capture rate: Proportion of incident cancer cases in which a cancer stage was identified
- CPOE: Percentage of medical oncologists using Computerized Physician Order Entry systems

Increase use of evidence and innovation in decision-making
- Guideline application: Percentage of Ontario cancer cases treated according to selected Program in Evidence-Based Care (PEBC) guidelines (1–2 example conditions)
- Clinical trial participation: Number of patients recruited to clinical trials for chemotherapy, radiotherapy, and intervention studies, by hospital
- Cancer research funding: Percentage of Integrated Cancer Programs' annual budgets devoted to cancer research funding
- Innovation: Hospitals' self-reported environments for innovation (questionnaire)
- CQI activity at Integrated Cancer Programs: Qualitative profiles of cancer-specific CQI initiatives at each Integrated Cancer Program

Increase effective use of resources across the system
- Appropriate resources/capacity: Estimated current and projected unmet capacity by cancer service modality and region
- Appropriate hospital LOS: Average number of days from admission to discharge for colon and rectal cancer surgery and radical prostatectomy
- Appropriate unit costs (systemic therapy): Costs per weighted Ontario systemic therapy case for comparable conditions
- Radiation therapy (RT) quality assurance: Percentage of RT facilities in compliance with Healing Arts Radiation Protection (HARP) guidelines
- Medical error: Percentage of cancer-related prescriptions that are potentially inappropriate
- Treatment complications: Chemotherapy complications/readmissions; surgical complications
- Patient satisfaction with coordination of care: Oncology patient satisfaction survey results related to coordination of care

Improve access to cancer services and reduce waiting times
- Regional prevention programs: Status of cancer prevention programs by region (size, whether active, coverage of population)
- Mammography rates: Percentage of screen-eligible women (ages 50–69) receiving mammography within the past 2 years
- FOBT rates: Percentage of screen-eligible men and women (ages 50–74) who had at least one fecal occult blood test (FOBT) during the past 2 years
- Appropriate utilization (systemic therapy): Percentage of incident cancer patients receiving systemic therapy post-operatively
- Appropriate utilization (RT): Percentage of cancer cases receiving RT within 1 year of diagnosis
- Waiting times (breast assessment): Percentage of screen-eligible Ontario women (ages 50–69) receiving an abnormal result whose waiting time from abnormal result to diagnosis was within 5 weeks (with no open biopsy) or within 7 weeks (with open biopsy)
- Waiting times (surgery): Median and 90th percentile surgical waiting times (date of pre-operative consultation to date of surgery) among patients undergoing breast, colorectal, lung, and prostate cancer surgery in Ontario
- Waiting times (systemic therapy): Median and 90th percentile number of weeks from referral to start of systemic therapy for new patients
- Waiting times (RT): Median and 90th percentile number of weeks from referral to start of RT for new patients
- Patient satisfaction with access to care: Oncology patient satisfaction survey results related to waiting times and access to care

Reduce the burden of cancer
- Body mass index (BMI): Percentage of Ontarians who are obese, as measured by a BMI of >30
- Smoking rates: Percentage of population (ages 12–19, 20+) who are current daily or current occasional smokers
- Incidence: Age-standardized incidence per 100 000 by cancer site, and all cancers combined
- Palliative care utilization: Rates of palliative care utilization among cancer patients
- Pain management: Patients' self-reported pain and perception of pain management by providers
- Patient satisfaction overall: Oncology patient satisfaction survey results related to the patient journey overall
- Post-operative mortality: Number of deaths (i) in hospital or (ii) within 30 days of selected cancer surgery procedures (non-emergency admissions)
- Mortality: Age-adjusted deaths per 100 000 standard population by cancer site, and all cancers combined
- Survival: Percentage of persons living 5 or more years after diagnosis (disease- and stage-specific); rate of disease-free survival by cancer site (disease- and stage-specific)
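Several of the burden indicators in Table 3 are age-standardized rates. For readers unfamiliar with the calculation, here is a minimal sketch of direct standardization with made-up counts and weights; official Ontario reporting would use a designated standard population rather than these numbers.

```python
# Direct age standardization: weight each age group's crude rate by the
# standard population's share of that age group. All numbers are made up.
age_groups = [
    # (cases, person_years, standard_population_weight)
    (120, 500_000, 0.40),   # ages 0-49
    (900, 300_000, 0.35),   # ages 50-69
    (1500, 200_000, 0.25),  # ages 70+
]

rate_per_100k = sum(
    (cases / py) * weight for cases, py, weight in age_groups
) * 100_000

print(f"age-standardized rate: {rate_per_100k:.1f} per 100 000")  # 302.1
```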
Next steps for implementing the instrument
There are several target audiences for the results of this initiative, including managers, providers, policy-makers, patients, and the general public. For managers, decision-makers, and providers, we plan to report confidential institution- or local-level performance relative to provincial performance. For our primary audience, patients and the general public, we plan to create a graphical, easily interpretable report (paper and web-based versions) showing only provincial and regional data.
Reporting will occur in two phases. The release of the public report will occur at least 3 months after the release of the
confidential data to allow system stakeholders an opportunity
to comment on and/or challenge the results before they are
aggregated and made public. The rationale for this approach is to validate the data, while ensuring privacy for each institution prior to publication.

Next steps for research
A cancer-specific quality assessment tool has never before
been implemented anywhere in Canada. As we roll out a program of public reporting of aggregate results and confidential
reporting of local and institutional data, a number of questions
will arise with respect to the utility of the results and their
dissemination.
In the near term, there will be a need to evaluate the
acceptability, interpretability and utility of the indicators
among key stakeholders in the system. There will also be a
need to test the validity of the underlying scorecard, in terms
of how the strategic goals and indicators relate to each other.
In the longer term, other important questions include
whether or not the indicators are being used to promote
quality improvement locally; and whether or not improvements are evident. Finally, given the multitude of cancer
performance measurement initiatives internationally, it will be
important to compare the impact of this initiative with that of
others over time.
References

1. Simunovic M, Thériault ME, Paszat L et al. Waiting times among patients undergoing major breast, colorectal, lung and prostate cancer surgery in Ontario for years 1993 to 2000. Can J Surg 2004; in press.

2. Sullivan T, Evans W, Angus H, Hudson A, eds. Strengthening the Quality of Cancer Services in Ontario. Ottawa: Canadian Healthcare Association Press, 2003.

3. National Cancer Institute of Canada. Canadian Cancer Statistics 2004. Toronto, Canada: NCI Canada, 2004.

4. Sullivan T, Dobrow M, Thompson L, Hudson A. Reconstructing cancer services in Ontario. Healthcare Papers 2004; 5: 69–80.

5. American Society of Clinical Oncology. Consensus Statement on Access to Quality Cancer Care (approved March 4, 1998). http://www.asco.org/ac/1,1003_12-002177–00_18-0011731–00_19-0011734–00_20-001,00.asp Accessed June 16, 2004.

6. Browman GP. Development and aftercare of clinical guidelines: the balance between rigor and pragmatism. J Am Med Assoc 2001; 286: 1509–1511.

7. Sullivan T, Evans W, Angus H, Hudson A, eds. Strengthening the Quality of Cancer Services in Ontario. Ottawa: Canadian Healthcare Association Press, 2003.

8. Micheli A, Capocaccia R, Martinez C et al. Cancer control in Europe: a proposed set of European cancer health indicators. Eur J Public Health 2003; 13(3 Suppl): 116–118.

9. Information on ASCO's National Initiative on Cancer Care Quality (NICCQ). http://www.asco.org/ac/1,1003,_12-002177-00_18-0011725-00_19-0011728-00_20-001,00.asp Accessed June 16, 2004.

10. National Cancer Institute. Cancer Progress Report—2003 Update. Bethesda, MD: NCI, NIH, DHHS, February 2004. http://progressreport.cancer.gov/ Accessed June 16, 2004.

11. Asch SM, Kerr EA, Hamilton EG, Reifel JL, McGlynn EA, eds. Quality of care for oncologic conditions and HIV: a review of the literature and quality indicators. Report for RAND Health, 2000. http://www.rand.org/publications/MR/MR1281/ Accessed June 16, 2004.

12. National Health Service Executive. National Performance Assessment Framework (PAF): Consultation on Suggested National Cancer Performance Indicators. London: NHS Executive, 2000.

13. Hillner BE, Smith TJ. The quality of cancer care: does the literature support the rhetoric? Institute of Medicine Background Paper, USA, 1999.

14. Weeks J. Outcomes assessment in the NCCN: 1998 update. National Comprehensive Cancer Network. Oncology (Huntingt) 1999; 13: 69–71.

15. Cancer Care Ontario (Ontario Cancer Registry, 2004); Cancer Care Ontario. Ontario Cancer Facts: 'What cancers are Ontarians living with?' April 2004. http://www.cancercare.on.ca/pdf/CF-Apr2004-prevalence.pdf Accessed June 7, 2004.

16. Marshall MN, Shekelle PG, McGlynn EA, Campbell S, Brook RH, Roland MO. Can health care quality indicators be transferred between countries? Qual Safety Health Care 2003; 12: 8–12.

17. Helmer-Hirschberg O. Systematic Use of Expert Opinions. Santa Monica: The RAND Corporation, 1967.

18. Kaplan RS, Norton DP. The Strategy-focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment. Boston, MA: Harvard Business School Press, 2001.

19. McGlynn EA. An evidence-based national quality measurement and reporting system. Med Care 2003; 41 (Suppl. 1): I8–I15.
Accepted for publication 8 October 2004