Validity of Cancer Registry Data for Measuring the
Quality of Breast Cancer Care
Jennifer L. Malin, Katherine L. Kahn, John Adams, Lorna Kwan,
Marianne Laouri, Patricia A. Ganz
Background: Various groups have called for a national system to monitor the quality of cancer care. The validity of
cancer registry data for measuring the quality of cancer care has not been
well studied. We investigated the validity of such information in the California Cancer Registry. Methods: We compared registry data on the care received with data abstracted from the medical records of patients diagnosed with
breast cancer. We also calculated a quality score for each
subject by determining the proportion of four evidence-based quality indicators that were met and then compared
overall quality scores obtained from registry and medical
record data. All statistical tests were two-sided. Results: Records of 304 patients were studied. Compared with the medical record data gold standard, the accuracy of registry data
was higher for hospital-based services (sensitivity = 95.0%
for mastectomy, 94.9% for lumpectomy, and 95.9% for
lymph node dissection) than for ambulatory services (sensitivity = 9.8% for biopsy, 72.2% for radiation therapy,
55.6% for chemotherapy, and 36.2% for hormone therapy).
On average, quality scores calculated from registry data
were 11 percentage points (95% confidence interval [CI] = 9
to 13 percentage points, P<.001) lower than those calculated
from medical record data. Quality scores calculated from
registry data were 5 percentage points (95% CI = 3 to
7 percentage points) lower for patients with stage I breast
cancer, 16 percentage points (95% CI = 12 to 20 percentage
points) lower for patients with stage II breast cancer, and
20 percentage points (95% CI = 8 to 32 percentage points)
lower for patients with stage III breast cancer than were
corresponding scores calculated from medical record data
(all P<.001). The greater difference in quality scores for
stage II and III patients revealed that disease severity and
setting of care affected the validity of registry data. Conclusions: Cancer registry data for quality measurement may
not be valid for all care settings, but registries could provide
the infrastructure for collecting data on the quality of cancer
care. We urge that funding be increased to augment data
collection by cancer registries. [J Natl Cancer Inst 2002;94:835–44]
Policy makers, professional organizations, and patient advocates have called for a national system to monitor the quality of
cancer care (1–4). Quality of care is generally measured to meet
one of three interrelated goals: surveillance, quality improvement, and accountability (5). For surveillance, data
are collected to identify potential problems with the care that
patients are receiving. For quality improvement, data are collected to identify areas in which care could be improved. After
an intervention is implemented to try to improve care, data are
collected to determine whether the intervention produced the
desired outcome. For accountability, data about the quality of
care are collected to compare plans, groups, or providers. Moving from the goal of surveillance to the goal of quality improvement to the goal of accountability demands successively higher
standards of validity for the data used to evaluate quality of care.
The National Cancer Policy Board of the Institute of Medicine concluded that “for many Americans with cancer, there is a
wide gulf between what could be construed as the ideal and the
reality of their experience with cancer care” (4).

Affiliations of authors: J. L. Malin, Divisions of General Internal Medicine-Health Services Research and Hematology-Oncology, Department of Medicine, and Jonsson Comprehensive Cancer Center, University of California, Los Angeles (UCLA), and RAND, Santa Monica, CA; K. L. Kahn, Division of General Internal Medicine-Health Services Research, Department of Medicine, UCLA, and RAND; J. Adams, RAND; L. Kwan, Division of Cancer Prevention and Control Research, Jonsson Comprehensive Cancer Center; M. Laouri, California Health Care Foundation, Oakland; P. A. Ganz, UCLA Schools of Medicine and Public Health and Division of Cancer Prevention and Control Research, Jonsson Comprehensive Cancer Center.

Correspondence to: Jennifer L. Malin, M.D., UCLA Division of GIM-HSR, 911 Broxton Ave., 1st Floor, Box 951736, Los Angeles, CA 90095–1736 (e-mail: [email protected]).

See “Notes” following “References.”

© Oxford University Press

During the last decade, a number of studies (6–20) have documented variations
in the outcomes and patterns of care of cancer patients. Many of
these studies (15–20) rely on data reported by cancer registries.
Cancer registries collect data to determine incidence, to determine trends in various population groups, to plan epidemiologic research and cancer control, and to support health care
planning (21–25). The cancer registry system in the United
States exists as multiple overlapping, hierarchical systems under
different governing bodies (25), and the systems have different
purposes and speeds of reporting. Many hospitals maintain registries on cancer patients under their care and use data from these
registries for certification by the American College of Surgeons
or to meet state or federal requirements (26). Because hospitals
themselves are expected to support these data collection efforts,
the resources and effort that individual hospital registries devote to data collection probably vary
greatly, and the accuracy of their data can be expected to
reflect this variability.
Valid information about the care provided is a prerequisite to
accurately determining quality of care. Although the completeness of cancer registry data for incident cancer cases is very high
[e.g., 97% in the Surveillance, Epidemiology, and End Results
(SEER)1 Program (27)], the validity of cancer registry data for
the quality of cancer care has not been well studied. We conducted this study to evaluate the validity of California Cancer
Registry data for measuring the quality of the initial treatment
for breast cancer by comparing data in this registry with that of
the medical record “gold standard.”
METHODS
We compared the validity of data in the California Cancer
Registry with that of the medical record gold standard for the
following variables: breast biopsy, breast cancer surgery, lymph
node dissection, radiation therapy, chemotherapy, hormone
therapy, and American Joint Commission on Cancer (AJCC)
stage (28).
Case Identification and Sampling
We obtained outpatient medical records of a sample of patients with breast cancer from PacifiCare of California (Cypress,
CA). Potential cases of breast cancer were identified by the
health plan’s quality improvement staff, who used an administrative data system to select all female enrollees with International Classification of Diseases, Ninth Revision (ICD-9) or Current
Procedural Terminology (CPT) billing codes for breast cancer
(174.*, 198.81, 233.0, 85.4*, 19160, 19162, 19180, 19182,
19200, 19220, and 19240, where * = any digit from 0 to 9) from
January 1992 through June 1996. The staff of the California
Cancer Registry performed a probabilistic linkage of the identified cases in the health plan with breast cancer cases in the
registry by using Social Security number, name, birth date, and
address. After the California Cancer Registry “de-identified” the
data, we received the linked data file. Three hundred sixty-three
women had multiple entries, representing multiple diagnoses of
breast cancer. We selected the first diagnosis occurring during
the study period for inclusion or, for bilateral cancers occurring
synchronously, the diagnosis of the more advanced cancer. From
this database, we randomly sampled women diagnosed with
breast cancer for the first time from 1993 through 1995 in Los
Angeles County who were enrolled in PacifiCare on or before
the date of diagnosis and during the entire period of follow-up
for this study. Los Angeles County, California, has one of 10
registries that make up the California Cancer Registry and has
been a part of the National Cancer Institute-funded SEER program since 1992 (29). The SEER program is considered the gold
standard for data quality among cancer registries around the
world, with near-complete case identification (98%) and a 95%
annual rate of follow-up to determine survival (27,30). Limiting
our study to cases from the Los Angeles County registry ensured
that the registry data would be of the highest quality available.
Pursuit of Medical Records
Our goal was to obtain all medical record data for a sample of
300 patients. We used a professional copier service that worked
with the health plan’s quality improvement program to obtain a
de-identified copy of the medical record for each patient. From
their administrative files, PacifiCare identified the provider organization (i.e., an integrated medical group or an independent
provider organization) that had a contract with the health plan to
provide medical services for patients at the time of their diagnosis. The professional copier service then attempted to obtain
the medical records for 1 year before diagnosis through 2 years
after diagnosis from that provider organization. To be considered adequate, the provider organization’s medical record had to
include a pathology report for at least one breast procedure and
the notes of at least one physician providing care for the breast
cancer episode (e.g., a surgeon, medical oncologist, or radiation
oncologist) for at least 12 months after the date of diagnosis.
If this information was not available, the medical record was
searched for information regarding other providers that the
patient had seen and the name of the facility where any procedures were performed. Medical records were then requested
from the additional providers and facilities, in the following
order: 1) medical oncologist, 2) radiation oncologist, 3) surgeon,
and 4) hospital. If, in combination with the first record, the
second record did not yield the information specified above, the
next record was requested. This process was continued until the
leads on possible records were exhausted. A case was considered
“incomplete” if we did not find medical records from at least one
physician providing care for breast cancer for at least 3 months
after the date of diagnosis or documentation that treatment was
completed within the time frame of available records. We excluded patients who were found not to have breast cancer after
review of their medical records (in all cases, these patients had
been diagnosed with lobular carcinoma in situ). The institutional
review board of the University of California, Los Angeles, approved the study.
Medical Record Abstraction
Three research assistants (two medical students and a staff
member with a bachelor of science degree in biology and several
years of abstraction experience on other research studies) and an
oncologist (J. Malin) abstracted each medical record by use of a
chart abstraction instrument developed specifically for this study
(available from the authors). We abstracted the medical record
from 6 months before diagnosis through 12 months after diagnosis for information regarding the cancer diagnosis and evaluation, the characteristics and spread of the tumor, the initial
cancer treatment, and the presence of comorbidity by use of the
Charlson Comorbidity Index (30). A physician (J. Malin) reviewed all medical records with questions about abstraction or
coding. Interrater reliability was assessed among the four abstractors on a 5% sample of the medical records. Reliability of
the variables describing the treatments received was excellent,
with a κ statistic consistently greater than 0.80.
Statistical Analyses
Statistical analyses were performed with SAS software (version 6.12; SAS Institute, Cary, NC). We calculated the observed
agreement, κ statistic, sensitivity, and specificity of the California Cancer Registry data compared with the medical record gold
standard. To illustrate the importance of valid data when measuring quality of care, we used the following four quality indicators (QIs), grounded in the scientific literature with broad
expert consensus, that could be determined from California Cancer Registry data: QI1 = patients with stage I through III breast
cancer should have definitive surgery; QI2 = patients with stage
I through III breast cancer should have a lymph node dissection;
QI3 = patients with stage I through III breast cancer treated
with breast-conserving surgery should receive radiation therapy;
and QI4 = patients with stage II or III breast cancer should
receive tamoxifen or chemotherapy.
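As a rough illustration of the validity measures named above (observed agreement, κ statistic, sensitivity, and specificity), the following Python sketch computes all four from a single 2 × 2 table with the medical record as the gold standard. This is not the study's code (the analyses were run in SAS); the function name and the counts in the usage note are illustrative.

```python
def validity_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, observed agreement, and Cohen's kappa
    from a 2x2 table, treating the medical record as the gold standard.

    tp: registry yes / record yes    fp: registry yes / record no
    fn: registry no  / record yes    tn: registry no  / record no
    """
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)       # registry detects care that was given
    specificity = tn / (tn + fp)       # registry silent when care was absent
    agreement = (tp + tn) / n          # observed agreement
    # Expected chance agreement from the marginal proportions
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_chance = p_yes + p_no
    kappa = (agreement - p_chance) / (1 - p_chance)
    return sensitivity, specificity, agreement, kappa
```

For example, an illustrative table with 90 true positives, 5 false positives, 10 false negatives, and 95 true negatives gives sensitivity 0.90, specificity 0.95, agreement 0.925, and κ 0.85.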
We limited the eligibility for QI4, the quality indicator for
adjuvant systemic therapy, to patients with stage II or III disease
because, from 1993 through 1995, patients with tumors of 1–2
cm were just beginning to be considered for adjuvant therapy
and consensus recommendations for this group were vague (31–
33). We compared the proportion of patients who had care that
met the quality indicators in California Cancer Registry data
with that in medical record data. We calculated an overall quality score for each subject by determining the proportion of quality indicators that were met relative to those for which patients
were eligible. For example, if a patient had stage I breast cancer
and a lumpectomy, she was eligible for the first three quality
indicators (QI1–3). If she had a lumpectomy but no axillary
lymph node dissection and then received radiation therapy and
tamoxifen, she met only two of three possible quality indicators
(even though she received tamoxifen). Because she did not meet
the eligibility criteria established for QI4, her receipt of tamoxifen would not be counted in her quality score. Her quality
score would then be 2/3 or 0.67. The individual quality scores
were averaged to create an overall quality score for the sample.
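The scoring rule described above can be sketched in a few lines of Python; the dictionary fields below are hypothetical stand-ins for the abstraction variables, not the study's instrument.

```python
def quality_score(patient):
    """Proportion of quality indicators met among those the patient is
    eligible for; None if the patient is eligible for none."""
    stage = patient["stage"]
    indicators = [
        # (eligible?, met?) pairs for QI1-QI4 as defined in Methods
        (stage in (1, 2, 3), patient["definitive_surgery"]),            # QI1
        (stage in (1, 2, 3), patient["node_dissection"]),               # QI2
        (stage in (1, 2, 3) and patient["breast_conserving_surgery"],
         patient["radiation_therapy"]),                                 # QI3
        (stage in (2, 3),
         patient["tamoxifen"] or patient["chemotherapy"]),              # QI4
    ]
    met = [m for eligible, m in indicators if eligible]
    return sum(met) / len(met) if met else None
```

The worked example above (stage I, lumpectomy without axillary dissection, radiation, tamoxifen) scores 2/3 under this rule: tamoxifen is ignored because a stage I patient is not eligible for QI4.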
We then compared the quality of care as measured by the quality
score calculated from California Cancer Registry data with that
calculated from medical record data. To identify subgroups for
which registry data might be less valid, we compared the difference in the quality scores determined from registry data and
from medical record data for the following groups of patients:
those with stage I, II, and III breast cancer; those younger than
70 years versus those 70 years old or older; white patients versus
nonwhite patients; and those with comorbidity counts of 0, 1,
and 2 or higher. We chose 70 years as our cut point for age
because the literature on patterns of care in breast cancer suggests that patients older than 70 years are less likely to receive
standard breast cancer treatment than are younger women
(15,16,19,33–35). We used the χ2 test to assess differences
across categorical variables and the Student t test to assess differences for continuous variables. To explore statistical interaction effects on registry data validity, we modeled the effects of
age, race, disease stage (stage I versus stages II and III), number
of comorbidities, and the interaction terms for age × stage,
race × stage, age × race, age × comorbidity, and race × comorbidity on the difference between quality scores determined from
medical record data and from registry data. All statistical tests
were two-sided.
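The final regression just described can be sketched with ordinary least squares. The model form (age group, stage group, comorbidity count, and an age × stage interaction predicting the score difference) is taken from the text; the data and coefficients below are simulated purely for illustration, and the original analysis was run in SAS, not Python.

```python
import numpy as np

# Simulated data, not the study's: 200 hypothetical patients
rng = np.random.default_rng(0)
n = 200
age_ge70 = rng.integers(0, 2, n)      # 1 if aged >= 70 years
stage_2_3 = rng.integers(0, 2, n)     # 1 if stage II or III
comorbid = rng.integers(0, 3, n)      # Charlson comorbidity count (0, 1, 2+)

# Invented outcome: a larger record-minus-registry score gap for stage II/III
diff = 0.05 + 0.10 * stage_2_3 + 0.01 * comorbid + rng.normal(0, 0.02, n)

# Design matrix: intercept, main effects, and the age x stage interaction
X = np.column_stack([np.ones(n), age_ge70, stage_2_3, comorbid,
                     age_ge70 * stage_2_3])
beta, *_ = np.linalg.lstsq(X, diff, rcond=None)
```

With this simulated outcome the fitted stage coefficient recovers the value built into the data, which is the kind of check one would run before trusting such a model on real abstraction data.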
RESULTS
Of 13 104 potential cases of breast cancer identified by
PacifiCare of California, 7395 were matched with an entry in the
California Cancer Registry, representing 7032 unique women
with breast cancer (Fig. 1). Excluded cases most likely represented “rule-out” diagnoses, cases of breast cancer diagnosed
before 1988 (the year the California Cancer Registry was established), and diagnoses outside of California not captured by the
California Cancer Registry. Limiting the sample to cases diagnosed from 1993 through 1995 while the patient was enrolled in
PacifiCare left 2712 women. We attempted to obtain the medical
records of 391 cases, randomly selected from the 658 health plan
enrollees diagnosed in Los Angeles County from 1993 through
1995. Four cases were found to be ineligible because they had
lobular carcinoma in situ, a preinvasive condition that does not
require treatment. Of the 387 eligible cases, we obtained medical
records for 304 (79%). Cases for which we could not obtain
medical records did not appear to differ substantially from the
study sample, according to California Cancer Registry data, although cases with unavailable or incomplete medical records
tended to have less extensive surgery
reported by the registry (Table 1; 42.2% [incomplete record]
versus 52.3% [study sample] had a mastectomy, and 68.7% [incomplete record] versus 77.3% [study sample] had a lymph node
dissection; P = .25 and .11, respectively).
Compared with all breast cancer cases in Los Angeles County
and California from 1993 through 1995, our sample was somewhat older and less ethnically diverse (Table 2). In the analytic
sample, 88.1% of women were 50 years old or older compared
with 75.2% in Los Angeles County and 76.1% statewide
(P<.001 for the age distribution in the analytic sample as compared with those in Los Angeles County and California), reflecting the older patient population enrolled in PacifiCare of California, which has a Medicare contract. The analytic sample also
had a lower proportion of ethnic minorities than did Los Angeles
County (6.3% versus 10.9%, respectively, for black; 14.1%
versus 15.0% for Hispanic; 4.9% versus 7.9% for Asian; all
P = .01). Because ethnic minorities are less likely to be insured
(38,39), this difference was not surprising and again represented
a bias of our sampling frame.
California Cancer Registry data were more accurate for hospital-based services than for ambulatory services when compared with the medical record gold standard (Table 3). Agreement between California Cancer Registry data and medical
record data was excellent for the type of surgery (94%) and the
receipt of lymph node dissection (95%) (κ = 0.90 and 0.89,
respectively), procedures generally performed in a hospital or
hospital-based outpatient surgery center. Compared with medical record data, registry data were more accurate for hospital-based services (sensitivity = 95.0% for mastectomy, 94.9% for
lumpectomy, and 95.9% for lymph node dissection) than for
ambulatory services (sensitivity = 9.8% for biopsy, 72.2% for
radiation therapy, 55.6% for chemotherapy, and 36.2% for hormone therapy). The specificity of the California Cancer Registry
data compared with the medical record data was 95% or greater
for these procedures.
Fig. 1. Study sample. Case identification, sampling, and pursuit of medical records are shown. HMO = health maintenance organization; LCIS = lobular carcinoma in situ.

In contrast, the registry missed between 28% and 90% of
services provided in the ambulatory setting (Table 3). The
California Cancer Registry identified only 10 women (3.3%) as
having received a breast biopsy examination, typically an office-based procedure, whereas 92 patients (30.3%) had documentation of a breast biopsy examination in their medical records.
Although there was good agreement between California Cancer
Registry data and the medical record data regarding receipt of
radiation therapy and chemotherapy (κ = 0.70 and 0.62, respectively), the registry failed to identify 28% and 44% of patients
who received these treatments. The sensitivity of registry data
for receipt of radiation therapy, which is often hospital-based
and occurs immediately after surgery or chemotherapy, was
72.2%. The sensitivity of the registry for chemotherapy, which is
also initiated immediately after surgery but is usually administered in a medical oncologist’s office, was 55.6%. The sensitivity of the registry for hormone (i.e., tamoxifen) treatment, which
is usually the last treatment initiated and is prescribed by a
physician for a patient to take at home, was only 36.2%. However, when the registry data reported that a patient had received
treatment, this treatment was generally confirmed by the medical
record data, as indicated by a specificity of 98.8% for radiation
therapy and chemotherapy and of 94.7% for hormone therapy.
The stage of a patient is determined from the results of the
breast surgery, lymph node dissection and analysis, and testing
for the presence of distant metastases, and this determination
often includes both hospital-based and ambulatory services.
We found only moderate agreement between the stage reported
by the registry and that reported in the medical record (82%,
κ = 0.73). Sensitivity of the registry data compared with the
medical record data for stage was 80.6% for stage 0, 86.4% for
stage I, 82.8% for stage II, 72.7% for stage III, and 41.7% for
stage IV. The corresponding specificities were 99.6%, 93.6%,
93.1%, 98.3%, and 100.0%.
The accuracy of the California Cancer Registry did not appear to vary by age or race/ethnicity. The κ statistics for each of the
variables were not statistically significantly different for patients
younger than 70 years compared with those 70 years and older
or for nonwhite patients compared with white patients.
To illustrate the importance of valid data on the results of
quality measurement, we compared the average percentage of
patients meeting four quality indicators from California Cancer
Registry data with that obtained from medical record data (Table
4). Two of the quality indicators reflected hospital-based services (QI1 and QI2), and two reflected ambulatory services
(QI3 and QI4). The percentages of patients whose care met the
quality indicators for hospital-based services were not statistically

Table 1. Characteristics of cases in the analytic sample compared with cases
with incomplete or no medical records†

                                     Cases in analytic    Cases with incomplete or
                                     sample (n = 304)     no medical records (n = 83)
Age, y‡                              67.1 ± 12.7          65.6 ± 15.4
Race, % (No.)
  White                              74.0 (225)           71.1 (59)
  Black                              6.3 (19)             8.4 (7)
  Hispanic                           14.1 (43)            14.5 (12)
  Asian                              4.9 (15)             6.0 (5)
  Unknown                            0.7 (2)              0 (0)
Year of diagnosis, % (No.)
  1993                               30.9 (94)            34.9 (29)
  1994                               35.5 (108)           33.7 (28)
  1995                               33.6 (102)           31.3 (26)
Stage, % (No.)
  0                                  8.6 (26)             9.6 (8)
  I                                  41.1 (125)           36.1 (30)
  II                                 36.2 (110)           36.1 (30)
  III                                4.3 (13)             4.8 (4)
  IV                                 1.6 (5)              7.2 (6)
  Unknown                            8.2 (25)             6.0 (5)
Type of breast surgery, % (No.)
  Mastectomy                         52.3 (159)           42.2 (35)
  Lumpectomy                         44.4 (135)           53.0 (44)
  No breast surgery                  3.3 (10)             4.8 (4)
Lymph node dissection, % (No.)§      77.3 (235)           68.7 (57)
Radiation therapy, % (No.)           30.6 (93)            32.5 (27)
Chemotherapy, % (No.)                10.5 (32)            15.7 (13)
Hormone therapy, % (No.)             26.6 (81)            25.3 (21)

†No differences were statistically significant at the α = .05 level. All statistical tests were two-sided. Data may not add to expected values because of rounding.
‡Data for age are the mean ± standard deviation.
§P = .11.

significantly different when calculated from registry data
and medical record data. Virtually all patients, 99% (95% CI =
98% to 100%), met the first quality indicator. For the second
indicator, QI2, registry data indicated that 84% (95% CI = 79%
to 89%) of patients had received the care; medical record data
indicated that 88% (95% CI = 84% to 92%) had. However, the
percentage of patients whose care appeared to meet the quality
indicators for ambulatory services was statistically significantly
underestimated by registry data. When California Cancer Registry data were used, only 63% (95% CI = 54% to 72%) of
patients appeared to have received the indicated radiation
therapy after breast-conserving surgery (QI3). When medical
record data were used, however, 85% (95% CI = 79% to 91%)
of patients appeared to have received such care. Similarly, 46%
(95% CI = 37% to 55%) of patients from registry data and 90%
(95% CI = 85% to 95%) of patients from medical record data
appeared to receive the indicated adjuvant therapy (QI4).
Not surprisingly, the overall quality score (determined as a
percentage of the four quality indicators) was lower when registry data were used than when medical record data were used
(Table 5). Variation was observed in quality scores with patient
age, race, and comorbidity. Older patients had lower quality
scores, regardless of data source, compared with younger patients. Patients aged 70 years and older had quality scores of
76 percentage points (95% CI = 72 to 80 percentage points)
with registry data and 87 percentage points (95% CI = 83 to
91 percentage points) with medical record data, whereas patients
aged younger than 70 years had scores of 85 percentage points
Table 2. Demographic characteristics of cases in the analytic sample compared with those of all women in Los Angeles County and the State of California reported to the California Cancer Registry with a diagnosis of breast cancer from 1993 through 1995

                             Cases in the       Cases in Los       Cases in
                             analytic sample    Angeles County     California
                             (n = 304)          (n = 16 345)       (n = 63 395)
Age at diagnosis,† % (No.)
  <30 y                      0.0 (0)            0.7 (110)          0.6 (355)
  30–49 y                    11.8 (36)          24.1 (3939)        23.3 (14 795)
  50–69 y                    43.4 (132)         43.0 (7026)        42.4 (26 898)
  ≥70 y                      44.7 (136)         32.2 (5270)        33.7 (21 347)
Race,‡ % (No.)
  White                      74.0 (225)         65.3 (10 670)      76.6 (48 591)
  Black                      6.3 (19)           10.9 (1786)        5.6 (3579)
  Hispanic                   14.1 (43)          15.0 (2456)        10.2 (6485)
  Asian                      4.9 (15)           7.9 (1297)         6.1 (3843)
  Unknown                    0.7 (2)            0.8 (136)          1.4 (897)
Year of diagnosis, % (No.)
  1993                       30.9 (94)          32.8 (5360)        32.5 (20 576)
  1994                       35.5 (108)         33.2 (5425)        33.1 (20 997)
  1995                       33.6 (102)         34.0 (5560)        34.4 (21 822)

†P<.001 for the comparison of the distribution of age for cases in the analytic sample with all cases of breast cancer in California and Los Angeles County. Data may not add to expected values because of rounding. All statistical tests were two-sided.
‡P = .01 for the comparison of the distribution of race/ethnicity in cases in the analytic sample with all cases of breast cancer in Los Angeles County.
(95% CI = 81 to 89 percentage points) and 96 percentage points
(95% CI = 94 to 98 percentage points), respectively. Interestingly, nonwhite patients had higher quality scores than
did white patients, regardless of data source. A difference of 11
percentage points (95% CI = 9 to 13 percentage points; P<.001)
was obtained in the quality score, or the overall percentage of
quality indicators met, when values calculated from registry data
(81 percentage points, 95% CI = 36 to 126 percentage points)
and from the gold standard medical record data (92 percentage
points, 95% CI = 59 to 130 percentage points) were compared
(Table 5). This difference was consistent regardless of the age
and race of the patients. That is, even though variation was
observed in the quality scores for patients according to their age
and race, the quality score calculated from registry data was
consistently about 10 percentage points lower than that calculated from medical record data. However, the difference was
more marked when quality scores calculated from the two data
sources for cases of stage I, II, and III breast cancer were compared (Table 5). Although the quality score calculated from registry data was only 5 percentage points (95% CI ⳱ 3 to 7
percentage points) lower than that obtained from the medical
record for patients with stage I breast cancer, the quality score
was on average 16 percentage points (95% CI ⳱ 12 to 20
percentage points) lower for those with stage II, and 20 percentage points (95% CI ⳱ 8 to 32 percentage points) lower for those
with stage III (P<.001). The greater difference in quality scores
for stage II and III patients revealed an important interaction
between disease severity and setting of care that affected the
validity of registry data: patients with a more advanced stage
more often received treatment in the ambulatory setting that was
less likely to be reported by the registry. By definition, stage I
cases were only eligible for three of the four quality indicators,
whereas the quality score for all stage II and III cases included
a quality indicator for receipt of adjuvant therapy (Table 4; QI4).
Table 3. Validity of California Cancer Registry data compared with that from a medical record gold standard

Variable                  Cancer registry,  Medical record,  % Agreement  κ (95% CI)†           Sensitivity, %  Specificity, %
                          % (No.)           % (No.)
Hospital-based services
 Surgery type                                                94           0.90 (0.86 to 0.95)
  Mastectomy              52.3 (159)        53.0 (161)                                          95.0            98.6
  Lumpectomy              44.4 (135)        44.7 (136)                                          94.9            96.4
  No surgery              3.3 (10)          2.3 (7)                                             71.4            98.6
 Node dissection          77.3 (235)        80.3 (244)       95           0.89 (0.82 to 0.95)   95.9            98.2
Ambulatory services
 Biopsy                   3.3 (10)          30.3 (92)        71           0.12 (0.04 to 0.20)   9.8             99.5
 Radiation therapy        30.6 (93)         41.1 (125)       86           0.70 (0.63 to 0.78)   72.2            98.8
 Chemotherapy             10.5 (32)         17.8 (54)        90           0.62 (0.50 to 0.75)   55.6            98.8
 Hormone therapy          26.6 (81)         69.1 (210)       54           0.22 (0.16 to 0.29)   36.2            94.7
Stage                                                        82           0.73 (0.68 to 0.79)
 0                        8.6 (26)          10.2 (31)                                           80.6            99.6
 I                        41.1 (125)        43.4 (132)                                          86.4            93.6
 II                       36.2 (110)        38.2 (116)                                          82.8            93.1
 III                      4.3 (13)          3.6 (11)                                            72.7            98.3
 IV                       1.6 (5)           3.9 (12)                                            41.7            100.0
 Unknown                  8.2 (25)          0.7 (2)

†CI = confidence interval.
Table 4. Proportion of cases in cancer registry data meeting breast cancer quality indicators compared with those in a medical record gold standard, by stage†

                                        Cancer registry                      Medical record
                                        No. of cases  Proportion of cases    No. of cases  Proportion of cases
Quality indicator (QI)                  eligible      meeting indicator      eligible      meeting indicator
                                                      (95% CI)                            (95% CI)
Hospital-based services
QI1. Patients with stage I–III breast cancer should have a definitive surgery
  All eligible patients                 248           0.99 (0.98 to 1.00)    259           0.99 (0.98 to 1.00)
  Stage I                               125           0.99 (0.97 to 1.01)    132           0.99 (0.97 to 1.01)
  Stage II                              110           0.99 (0.97 to 1.01)    116           1.00
  Stage III                             13            1.00                   11            0.91 (0.74 to 1.08)
QI2. Patients with stage I–III breast cancer should have a lymph node dissection
  All eligible patients                 248           0.84 (0.79 to 0.89)    259           0.88 (0.84 to 0.92)
  Stage I                               125           0.80 (0.73 to 0.87)    132           0.85 (0.79 to 0.91)
  Stage II                              110           0.89 (0.83 to 0.95)    116           0.92 (0.87 to 0.97)
  Stage III                             13            0.85 (0.66 to 1.04)    11            0.91 (0.74 to 1.08)
Ambulatory services
QI3. Patients with stage I–III breast cancer treated with BCS should receive radiation therapy
  All eligible patients                 112           0.63 (0.54 to 0.72)    116           0.85 (0.79 to 0.91)
  Stage I                               63            0.59 (0.47 to 0.71)    70            0.83 (0.74 to 0.92)
  Stage II                              48            0.69 (0.56 to 0.82)    46            0.89 (0.80 to 0.98)
  Stage III                             1             0                      0             N/A
QI4. Patients with stage II–III breast cancer should receive tamoxifen or chemotherapy
  All eligible patients                 123           0.46 (0.37 to 0.55)    127           0.90 (0.85 to 0.95)
  Stage I                               0             N/A                    0             N/A
  Stage II                              110           0.46 (0.37 to 0.55)    116           0.89 (0.83 to 0.95)
  Stage III                             13            0.46 (0.19 to 0.73)    11            1.00

†CI = confidence interval; BCS = breast-conserving surgery; N/A = not applicable.
Recall that registry data about adjuvant therapy, which includes
chemotherapy and tamoxifen and is generally provided in the
ambulatory setting, had low validity. Interestingly, for patients
with two or more comorbidities, no difference was observed in
their quality scores for breast cancer care calculated from the
two data sources. Patients with comorbidities were less likely to
get any treatment for breast cancer beyond the initial surgery.
Less outpatient treatment resulted in less bias in quality scores
calculated from cancer registry data.
To explore statistical interaction effects on registry data validity, we modeled the effect of age, race, disease
stage, number of comorbidities, and interaction terms for each
of the covariates on the difference between the quality score
determined from medical record data and that obtained from
registry data. Race was not a statistically significant variable
(P = .17) and was dropped from the model. Of the interaction
terms tested (age × race, age × stage, age × comorbidity, race ×
stage, and race × comorbidity), only age × stage was statistically
significant. Our final model thus included age (<70 years old
versus ≥70 years old), stage (stage I versus stages II and III),
comorbidity (number of comorbidities from the Charlson Comorbidity Index), and an interaction term for age and stage
Journal of the National Cancer Institute, Vol. 94, No. 11, June 5, 2002
Table 5. Difference in quality score for breast cancer care between data from the cancer registry and the medical record, overall and stratified by stage, age, race, and comorbidity†

                          Cancer registry                    Medical record                    Difference
                     No. of   Quality score (QSCCR)    No. of   Quality score (QSMR)    No. of    QSMR − QSCCR
                     cases    (95% CI)                 cases    (95% CI)                cases‡    (95% CI)
Overall               248     0.81 (0.36 to 1.26)       259     0.92 (0.59 to 1.30)      218      0.11 (0.09 to 0.13)
Stage§
  I                   125     0.85 (0.40 to 1.30)       132     0.91 (0.52 to 1.30)      114      0.05 (0.03 to 0.07)
  II                  110     0.77 (0.34 to 1.20)       116     0.94 (0.65 to 1.23)       96      0.16 (0.12 to 0.20)
  III                  13     0.76 (0.72 to 0.80)        11     0.92 (0.65 to 1.19)        8      0.20 (0.08 to 0.32)
Age, y
  <70                 141     0.85 (0.81 to 0.89)       150     0.96 (0.94 to 0.98)      123      0.12 (0.08 to 0.16)
  ≥70                 107     0.76 (0.72 to 0.80)       109     0.87 (0.83 to 0.91)       95      0.09 (0.05 to 0.13)
Race
  White               188     0.79 (0.75 to 0.83)       190     0.91 (0.89 to 0.93)      166      0.11 (0.09 to 0.13)
  Nonwhite             60     0.87 (0.83 to 0.91)        69     0.96 (0.92 to 1.00)       52      0.10 (0.06 to 0.14)
No. of comorbidities¶
  0                   169     0.82 (0.78 to 0.86)       175     0.95 (0.93 to 0.97)      147      0.12 (0.08 to 0.16)
  1                    64     0.79 (0.73 to 0.85)        69     0.89 (0.83 to 0.95)       58      0.09 (0.05 to 0.13)
  ≥2                   15     0.74 (0.62 to 0.86)        15     0.78 (0.66 to 0.90)       13      0.00 (−0.10 to 0.10)

†All statistical tests were two-sided. CCR = California Cancer Registry; MR = medical record; CI = confidence interval.
‡Because agreement on American Joint Committee on Cancer (AJCC) stage between the medical record and the cancer registry was 82%, the number of cases for whom a difference in quality scores could be calculated is less than the number of cases for whom a quality score was determined using each data source.
§P<.001 for the quality score difference by stage groups.
¶P = .04 for the quality score difference by comorbidity groups.
Table 6. Regression model of the effect of age, comorbidity, and stage on the difference in quality scores from registry data and from medical record data†

                 Regression coefficient (95% confidence interval)
Intercept        0.093 (0.051 to 0.134)
Age × Stage      0.107 (0.018 to 0.196)
Age‡            −0.052 (−0.113 to 0.010)
Stage§           0.069 (0.011 to 0.128)
Comorbidity     −0.04 (−0.073 to −0.012)

Model R²: .1508, .1280, .0290

†Difference in quality scores = QSMR − QSCCR, where QS = quality score, MR = medical record, and CCR = California Cancer Registry. A positive value indicates that the quality scores calculated from medical record data were higher than those calculated from registry data. A negative score indicates that quality scores calculated from registry data were higher than those calculated from medical record data. The following variables were included for model selection: age, race, stage, comorbidity, and interaction terms for age × stage, race × stage, age × race, age × comorbidity, and race × comorbidity.
‡The age of <70 y is referent.
§Stage I is referent.
(Table 6). The model equation was as follows for the difference
in quality score (Δ):

Δ = QSMR − QSCCR
  = 0.09 − (0.05 × age) + (0.07 × stage) + (0.11 × age × stage) − (0.04 × comorbidity),
where QSMR = the quality score calculated from medical records, and QSCCR = the quality score calculated from the California Cancer Registry. The model predicts that, for a woman
without clinically significant comorbidity who is 70 years old or
older with stage I breast cancer, her quality score (percentage of
quality indicators met) would be, on average, 4 percentage
points (95% CI = −1 to 9 percentage points) greater with medical record data than with registry data. If this same patient were
in the group younger than 70 years, her predicted quality score
would be 9 percentage points (95% CI = 5 to 14 percentage
points) greater from medical record data than from registry data.
A woman with no clinically significant comorbidity and stage II
or III breast cancer would be expected to have a quality score
22 percentage points (95% CI = 16 to 27 percentage points)
greater if she was 70 years old or older and 16 percentage points
(95% CI = 12 to 20 percentage points) greater if she was
younger than 70 years old. Each additional comorbidity decreases the predicted difference in quality scores calculated from
registry data compared with medical record data by 4 percentage
points (95% CI = −6 to 8 percentage points).
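The worked predictions above follow directly from the rounded coefficients. The following sketch (ours, not the authors') evaluates the model, coding age and stage as 0/1 indicators as in the text:

```python
# Sketch: delta = 0.09 - 0.05*age + 0.07*stage + 0.11*age*stage - 0.04*comorbidity,
# where age = 1 if >= 70 y, stage = 1 if stage II-III, comorbidity = Charlson count.

def predicted_difference(age_ge_70: bool, stage_2_or_3: bool, n_comorbidities: int) -> float:
    """Predicted QSMR - QSCCR (quality-score gap) from the rounded coefficients."""
    age = 1 if age_ge_70 else 0
    stage = 1 if stage_2_or_3 else 0
    return 0.09 - 0.05 * age + 0.07 * stage + 0.11 * age * stage - 0.04 * n_comorbidities

# Reproduce the worked examples in the text (values in percentage points):
print(round(100 * predicted_difference(True, False, 0)))   # stage I, >=70 y: 4
print(round(100 * predicted_difference(False, False, 0)))  # stage I, <70 y: 9
print(round(100 * predicted_difference(True, True, 0)))    # stage II-III, >=70 y: 22
print(round(100 * predicted_difference(False, True, 0)))   # stage II-III, <70 y: 16
```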
DISCUSSION
Compared with a medical record gold standard, we found that
the validity of registry data on breast cancer treatment varied
with the setting of care. Although the California Cancer Registry
data were accurate for hospital-based services, they did not
accurately reflect the care that patients received in the ambulatory setting. In the California Cancer Registry, we identified
72% of the women whom the medical record indicated had definitely received radiation therapy, 56% of the women whom the
medical record indicated had received chemotherapy, and
36% of the women whom the medical record indicated had received
tamoxifen. Our findings are
similar to those reported by Bickell and Chassin (38). They
found that cancer registry data at three New York City hospitals,
compared with data from a quality improvement project, could
identify 58% of the patients who received radiation therapy and
27% of those who received chemotherapy or tamoxifen.
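The percentages quoted in this comparison are sensitivities: the fraction of treatments confirmed in the gold-standard source that the registry also captured. A minimal sketch of that calculation (the counts below are hypothetical, not the study's):

```python
def sensitivity(n_in_both_sources: int, n_in_gold_standard: int) -> float:
    """Fraction of gold-standard-confirmed treatments also found in the registry."""
    return n_in_both_sources / n_in_gold_standard

# Hypothetical example: if the medical record shows 100 women received
# radiation therapy and the registry records 72 of them:
print(sensitivity(72, 100))  # 0.72
```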
If providers or health plans are compared by the use of registry data, those with patients who are elderly or have more
advanced disease would appear to be providing worse care because of the quality of the data. In addition, providers or health
plans that provide more care in an ambulatory setting might
appear to have lower quality scores. Reporting the quality of
care derived from data with such validity problems could anger
providers and seriously undermine public confidence in this
process.
We conducted our study by use of a sample of patients diagnosed with breast cancer from a California health maintenance
organization (HMO). Although our methods permitted access to
the outpatient medical records for more than 80% of the sampled
cases, this design posed several limitations for the study. First, the resulting study sample reflected the characteristics of the patients
enrolled in the HMO and was, therefore, somewhat older and
less ethnically diverse than the overall population of women
diagnosed with breast cancer in Los Angeles. Second, it is possible that the HMO’s practice of contracting with medical
groups and hospitals tended to exclude hospitals with limited
resources, thereby limiting our ability to detect differences in the
validity of cancer registry data. Third, cases were selected for
this study by linking data from the California Cancer Registry
with a file of women identified by the health plan as having
ICD-9 and CPT codes for breast cancer. This protocol resulted in
cases without a breast cancer claim being excluded from our
study. If our goal was to describe the quality of care of women
in this health plan, this exclusion could be an important bias.
However, the impact of this bias on an evaluation of the validity
of the registry data is likely to be minimal.
The results of this study highlight the importance of using
data that are valid across clinical settings and time to measure
quality of care. We found that, compared with medical record
data, the validity of registry data varied with the setting of care,
being less accurate for ambulatory services than for hospital-based care. Furthermore, the setting of care varied with patient
characteristics, another source of bias in measurement of quality
of care. As cancer care is increasingly focused in the outpatient
setting, the use of data that are not valid for ambulatory services
would substantially undercut efforts to accurately measure the
quality of care. For example, if these data were used for surveillance of the quality of breast cancer care, the quality of
ambulatory services would be underestimated. If used to identify
areas for quality improvement, resources could be diverted away
from the area most in need of improvement. If such data were
used to hold providers accountable for the quality of their care,
for example, by requiring that a certain standard of quality be
met to be allowed to bill Medicare, some providers could inappropriately be penalized (5).
In addition to improving the accuracy of data on ambulatory
services, a few other modifications of registry procedures are
needed for registry data to be used for quality measurement.
First, registries would need to augment their data collection efforts to include information about a patient’s comorbidities; comorbidity data are necessary for valid measurement of the process and outcomes of care (39). Second, registries may need to
expand their data collection efforts to provide greater clinical
detail (e.g., dosage of chemotherapy drugs and number of treatments). Research is ongoing to determine how much clinical
detail is needed to make valid assessments of the quality of care.
Third, the time between case ascertainment by the hospital registries and data availability from the central registries, typically
2 years (29), would need to be reduced because more timely data
are needed for use by stakeholders and policy makers evaluating
the quality of care.
In spite of the challenges described, cancer registries have
tremendous potential for quality measurement. Because of their
regulatory authority, cancer registries are uniquely situated to
identify a population-based sample of cancer patients, and they
are still the best candidates to provide the infrastructure for
measuring the quality of cancer care.
One approach to improving the accuracy of cancer registry
data for quality measurement is to augment registry data with
administrative data (40–43), such as SEER data augmented with
Medicare claims data (44,45). However, this approach has caveats that limit its application for any national effort to monitor
quality of care. First, administrative data have limited clinical
detail. Second, administrative data are not necessarily accurate
and would need to be validated (46). Third, because of the
fragmented nature of the U.S. health care system, no administrative database provides population-wide data. A database
pieced together from various sources of administrative data for
different cohorts of patients would be fraught with many problems. It is likely that the accuracy of such data would vary
tremendously and that, as in this study, such data could interact
with patient characteristics, setting of care, and other structural
variables (e.g., type of health plan).
Further research is needed to explore novel strategies to obtain population-based data on quality of cancer care that would
not require detailed data collection on every patient. Although
claims data are not a panacea, if validated and found to be
accurate, such data could be used when available. This procedure would allow resources for more detailed medical record
review to be focused on those quality measures for which registry and claims data fall short. Another strategy would be to
perform more detailed data collection (i.e., abstraction of the
ambulatory medical records or patient surveys) on a sample of
patients and then to use standard statistical techniques to impute
the quality score for the entire population.
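The sampling-and-imputation strategy could be sketched as follows (purely illustrative; the 10% sampling fraction, function names, and simple mean estimator are our assumptions, not a design from the article):

```python
import random

def estimate_population_quality_score(population_ids, abstract_record,
                                      sample_fraction=0.1, seed=0):
    """Perform detailed data collection (e.g., ambulatory record abstraction)
    on a random sample only, and use the sample mean to estimate the
    population quality score."""
    rng = random.Random(seed)
    n = max(1, int(len(population_ids) * sample_fraction))
    sample = rng.sample(population_ids, n)
    scores = [abstract_record(pid) for pid in sample]  # costly per-patient review
    return sum(scores) / len(scores)

# Illustration with a toy population whose every patient scores 0.9:
ids = list(range(1000))
est = estimate_population_quality_score(ids, lambda pid: 0.9)
print(round(est, 6))  # 0.9
```

In practice the estimator would carry a confidence interval reflecting the sampling error, and stratified sampling could sharpen estimates for small subgroups.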
If registries were to collect additional primary data and systematically review outpatient medical records (as was done in
this study) or survey patients about the medical care they received, the additional cost would be roughly $150 million to
$250 million annually. In contrast, current budgets are approximately $34 million per year for the National Program of Cancer
Registries, $22 million per year for SEER, and $1.2 million per
year for the National Cancer Data Base; however, the costs
associated with data collection are borne by the reporting facilities (47). Unless central registry organizations (e.g., SEER,
National Program of Cancer Registries, or the California Cancer
Registry) assume a more active role in data collection, the costs
of additional data collection would likely be borne by the reporting hospital registries and passed on to the purchasers of
health care. Although this is a substantial investment, it is only about 0.2% of the estimated overall annual
costs for cancer of $107 billion (of which direct medical costs
are $37 billion) (48,49). This value is well within the range of
investment in quality assessment and improvement made by
other sectors of the economy, reported to be between 2% and
10% of total sales (50,51). Accurate data on the quality of cancer
care are urgently needed. Health care purchasers and policy
makers should consider investing in our cancer registry system
to obtain these data.
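The 0.2% figure follows from simple arithmetic on the estimates above (a quick check; the dollar figures are those quoted in the text):

```python
# Share of the national cancer burden represented by added data-collection costs.
cost_low, cost_high = 150e6, 250e6   # added annual data-collection cost, $
total_cancer_cost = 107e9            # estimated overall annual cancer costs, $

share_low = cost_low / total_cancer_cost
share_high = cost_high / total_cancer_cost
print(f"{share_low:.2%} to {share_high:.2%}")  # 0.14% to 0.23%, i.e., roughly 0.2%
```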
REFERENCES
(1) Hewitt M, Simone JV, editors. Ensuring quality cancer care. National
Cancer Policy Board, Institute of Medicine and National Research Council.
Washington (DC): National Academy Press; 1999.
(2) The Susan G. Komen Breast Cancer Foundation. News. Leading breast
cancer organization partners with ASCO to evaluate cancer care in the U.S.;
ensure all breast cancer patients receive high-quality care. [Accessed: 08/
13/2001.] Available from: http://www.breastcancerinfo.com/news/
article.asp?ArticleID=72.
(3) Skolnick AA. A FACCT-filled agenda for public information. Foundation
for Accountability. JAMA 1997;278:1558.
(4) ASCO initiates study of quality of cancer care. Oncology News 2000;9:1.
(5) Kahn KL, Malin JL, Adams J, Ganz PA. Developing a reliable, valid, and
feasible plan for quality-of-care measurement for cancer: how should we
measure? Med Care. In press 2002.
(6) Nattinger AB, Gottleib MS, Hoffman RG, Walker AP, Goodwin JS. Minimal increase in use of breast-conserving surgery from 1986 to 1990. Med
Care 1996;34:479–89.
(7) Warnecke RB, Johnson TP, Kaluzny AD, Ford LG. The community clinical oncology program: its effect on clinical practice. Jt Comm J Qual
Improv 1995;21:336–9.
(8) Hillner BE, McDonald MK, Penberthy L, Desch CE, Smith TJ, Maddux P,
et al. Measuring standards of care for early breast cancer in an insured
population. J Clin Oncol 1997;15:1401–8.
(9) Walsh GL, Winn RJ. Baseline institutional compliance with NCCN
guidelines: non-small-cell lung cancer. Oncology (Huntingt) 1997;11:
161–70.
(10) Potosky AL, Harlan LC, Stanford JL, Gilliland FD, Hamilton AS, Albertsen PC, et al. Prostate cancer practice patterns and quality of life:
the Prostate Cancer Outcomes Study. J Natl Cancer Inst 1999;91:
1719–24.
(11) Lu-Yao GL, Potosky AL, Albertsen PC, Wasson JH, Barry MJ, Wennberg
JE. Follow-up prostate cancer treatments after radical prostatectomy:
a population-based study. J Natl Cancer Inst 1996;88:166–73.
(12) Young WW, Marks SM, Kohler SA, Hsu AY. Dissemination of clinical
results. Mastectomy versus lumpectomy and radiation therapy. Med Care
1996;34:1003–17.
(13) Guadagnoli E, Shapiro C, Gurwitz JH, Silliman RA, Weeks JC, Borbas C,
et al. Age-related patterns of care: evidence against ageism in the treatment
of early-stage breast cancer. J Clin Oncol 1997;15:2338–44.
(14) Farrow DC, Hunt WC, Samet JM. Geographic variation in the treatment of
localized breast cancer. N Engl J Med 1992;326:1097–101.
(15) Ballard-Barbash R, Potosky AL, Harlan LC, Nayfield SG, Kessler LG.
Factors associated with surgical and radiation therapy for early stage breast
cancer in older women. J Natl Cancer Inst 1996;88:716–26.
(16) Hillner BE, Penberthy L, Desch CE, McDonald MK, Smith TJ, Retchin
SM. Variation in staging and treatment of local and regional breast cancer
in the elderly. Breast Cancer Res Treat 1996;40:75–86.
(17) Lazovich DA, White E, Thomas DB, Moe RE. Underutilization of breastconserving surgery and radiation therapy among women with stage I or II
breast cancer. JAMA 1991;266:3433–8.
(18) Howe HL, Lehnherr M, Katterhagen JG. Effects of physician outreach
programs on rural-urban differences in breast cancer management. J Rural
Health 1997;13:109–17.
(19) Busch E, Kemeny M, Fremgen A, Osteen RT, Winchester DP, Clive RE.
Patterns of breast cancer care in the elderly. Cancer 1996;78:101–11.
(20) Sawka C, Olivotto I, Coldman A, Goel V, Holowaty E, Hislop TG. The
association between population-based treatment guidelines and adjuvant
therapy for node-negative breast cancer. British Columbia/Ontario Working Group. Br J Cancer 1997;75:1534–42.
(21) National Cancer Institute. Process of cancer data collection. SEER’s
Training Web Site. Unit 3. Cancer data. [Accessed: 08/18/2001.] Available
from: http://training.seer.cancer.gov/module_cancer_registration/
unit3_how_collect.html#.
(22) NAACCR (North American Association of Central Cancer Registries).
Standards for completeness, quality, analysis, and management of data. Vol
3. North American Association of Central Cancer Registries. Standards for
Cancer Registries; September 2000. Available from: http://www.naaccr.
org/Standards/files/VolumeIIIwithprefaceandrefs.pdf.
(23) Austin DF. Types of registries: goals and objectives. In: Menck H, Smart
C, editors. Central cancer registries: design, management, and use. Langhorne (PA): Hardwood Academic Publishers; 1994. p. 1–12.
(24) Izquierdo JN, Schoenbach VJ. The potential and limitations of data from
population-based state cancer registries. Am J Public Health 2000;90:
695–8.
(25) Centers for Disease Control and Prevention. National Program of Cancer
Registries-Cancer Surveillance System (NPCR-CSS). Rationale and approach (July 1999). [Accessed: 08/10/2001.] Available from: http://www.
cdc.gov/cancer/npcr/npcr-css.htm.
(26) Young JL. The hospital-based cancer registry. In: Jensen OM, Parkin DM,
Maclennan R, Muir CS, Skeet RG, editors. Cancer registration principles
and methods. Lyon (France): IARC; 1991. p. 177–84.
(27) Zippin C, Lum D, Hankey BF. Completeness of hospital cancer case reporting from the SEER Program of the National Cancer Institute. Cancer
1995;76:2343–50.
(28) American Joint Committee on Cancer. AJCC cancer staging manual. 5th ed.
Philadelphia (PA): Lippincott-Raven Publishers; 1997.
(29) National Cancer Institute. About SEER. [Accessed: 08/10/01.] Available
from: http://seer.cancer.gov/AboutSEER.html.
(30) Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of
classifying prognostic comorbidity in longitudinal studies: development
and validation. J Chronic Dis 1987;40:373–83.
(31) Goldhirsch A, Wood WC, Senn HJ, Glick JH, Gelber RD. Meeting highlights: international consensus panel on the treatment of primary breast
cancer. J Natl Cancer Inst 1995;87:1441–5.
(32) Consensus Statements/NIH Consensus Development Program. 81. Treatment of early-stage breast cancer. National Institutes of Health-Consensus
Development Conference Statement, June 18–21, 1990. [Accessed 09/09/
2001.] Available from: http://dowland.cit.nih.gov/odp/consensus/cons/081/
081_statement.htm.
(33) Lazovich D, Solomon CC, Thomas DB, Moe RE, White E. Breast conservation therapy in the United States following the 1990 National Institutes of Health Consensus Development Conference on the treatment of
patients with early stage invasive breast carcinoma. Cancer 1999;86:
628–37.
(34) Goel V, Olivotto I, Hislop TG, Sawka C, Coldman A, Holowaty EJ. Patterns of initial management of node-negative breast cancer in two Canadian
provinces. British Columbia/Ontario Working Group. CMAJ 1997;156:
25–35.
(35) Morrow M, White J, Moughan J, Owen J, Pajack T, Sylvester J, et al.
Factors predicting the use of breast-conserving therapy in stage I and II
breast carcinoma. J Clin Oncol 2001;19:2254–62.
(36) Shi L. Type of health insurance and the quality of primary care experience.
Am J Public Health 2000;90:1848–55.
(37) Monheit AC, Vistnes JP. Race/ethnicity and health insurance status: 1987
and 1996. Med Care Res Rev 2000;57 Suppl 1:11–35.
(38) Bickell NA, Chassin MR. Determining the quality of breast cancer care:
do tumor registries measure up? Ann Intern Med 2000;132:705–10.
(39) Brook RH, McGlynn EA, Cleary PD. Quality of health care. Part 2: measuring quality of care. N Engl J Med 1996;335:966–70.
(40) McClish DK, Penberthy L, Whittemore M, Newschaffer C, Woolard
D, Desch CE, et al. Ability of Medicare claims data and cancer registries
to identify cancer cases and treatment. Am J Epidemiol 1997;145:
227–33.
(41) Brooks JM, Chrischilles E, Scott S, Ritho J, Chen-Hardee S. Information
gained from linking SEER Cancer Registry Data to state-level hospital
discharge abstracts. Surveillance, Epidemiology, and End Results. Med
Care 2000;38:1131–40.
(42) Doebbeling BN, Wyant DK, McCoy KD, Riggs S, Woolson RF, Wagner
D, et al. Linked insurance-tumor registry database for health services research. Med Care 1999;37:1105–15.
(43) Potosky AL, Riley GF, Lubitz JD, Mentnech RM, Kessler LG. Potential for
cancer related health services research using a linked Medicare-tumor registry database. Med Care 1993;31:732–48.
(44) Potosky AL, Merrill RM, Riley GF, Taplin SH, Barlow W, Fireman BH, et
al. Breast cancer survival and treatment in health maintenance organization
and fee-for-service settings. J Natl Cancer Inst 1997;89:1683–91.
(45) Riley GF, Potosky AL, Klabunde CN, Warren JL, Ballard-Barbash R.
Stage at diagnosis and treatment patterns among older women with breast
cancer: an HMO and fee-for-service comparison. JAMA 1999;281:
720–6.
(46) Du X, Goodwin JS. Patterns of use of chemotherapy for breast cancer in
older women: findings from Medicare claims data. J Clin Oncol 2001;19:
1455–61.
(47) Hewitt M, Simone JV, editors. Enhancing data systems to improve the
quality of cancer care. National Cancer Policy Board, Institute of Medicine
and National Research Council. Washington (DC): National Academy
Press; 2000.
(48) American Cancer Society. Cancer facts and figures 2000. [Accessed 04/
23/2002]. Available from: http://www.cancer.org/downloads/STT/F&F00.
pdf.
(49) Brown ML. The national economic burden of cancer: an update. J Natl
Cancer Inst 1990;82:1811–4.
(50) Halevy A, Naveh E. Measuring and reducing the national cost of nonquality. Total Quality Management 2000;11:1095–2003.
(51) Giakatis G, Enkawa T, Washitani K. Hidden quality costs and the distinction between quality cost and quality loss. Total Quality Management
2001;12:179–85.
NOTES

¹Editor's note: SEER is a set of geographically defined, population-based,
central cancer registries in the United States, operated by local nonprofit
organizations under contract to the National Cancer Institute (NCI). Registry
data are submitted electronically without personal identifiers to the NCI on a
biannual basis, and the NCI makes the data available to the public for scientific
research.
Supported by grants from the Susan G. Komen Breast Cancer Foundation and
PacifiCare of California. Dr. Malin is supported by a CI-10 Damon Runyon-Lilly
Clinical Investigator Award from the Damon Runyon Cancer Research Foundation. Dr. Ganz is supported by an American Cancer Society Clinical Research
Professorship.
We thank William Wright, Sandy Liu, and the staff of the California Cancer
Registry, as well as Kim Allory and Laura Epperson at PacifiCare. We gratefully
acknowledge Tanya Barauskas, Cynthia Wang, Amber Pakilit, and Christine
Reifel for research assistance and Christiann Savage for manuscript preparation.
Dr. Malin is indebted to the members of her dissertation committee, Patricia
Ganz, Katherine Kahn, John Glaspy, Robert Brook, and Ronald Andersen, for
their guidance and support.
Manuscript received November 9, 2001; revised March 18, 2002; accepted
March 29, 2002.