
Modernising cancer services:
an evaluation of phase I of the Cancer
Services Collaborative
Final evaluation report
Glenn Robert
Hugh McLeod
Chris Ham
May 2003
Research Report number 43
Health Services Management Centre
School of Public Policy
Published by: Health Services Management Centre, University of Birmingham, Park House, 40 Edgbaston Park Road,
Birmingham B15 2RT.
© University of Birmingham 2003
First Published 2003
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any
means, electronic or mechanical, photocopying, recording and/or otherwise, without the prior written permission of the publishers. This
book may not be lent, resold, hired out or otherwise disposed of by way of trade in any form, binding or cover other than that in which it is
published.
ISBN 07044 24185
Foreword and acknowledgements
This report presents the results of the external evaluation of phase I of the Cancer Services
Collaborative (CSC) which ended in March 2001. The research was commissioned from the
Health Services Management Centre by the Department of Health. The views expressed are
those of the authors, not necessarily of the Department of Health.
The report includes analyses of the two quantitative ‘standard global measures’ chosen by
the CSC and for which patient-level data were supplied to the evaluation team for the
purpose of assessing outcomes. These measures relate to ‘patient flow’ (the number of days
from referral to first definitive treatment) and ‘access’ (the percentage of patients booked for
first specialist appointment, first diagnostic test, and first definitive treatment)1. The main
aim of the evaluation is to highlight key areas of learning about the structure, process and
outcome of this initiative.
It is not the intention of this report to catalogue all of the changes made to cancer services as
a result of the CSC in the 51 participant projects. The ‘Service Improvement Guides’
published by the Modernisation Agency summarise the specific lessons learned and provide
ideas and examples for other cancer networks, clinical teams and individuals seeking to
improve cancer services2. Some examples of specific improvements are provided in this
report where they either highlight the use of a particular technique or tool (e.g. process
mapping) or the benefits of a particular approach to service improvement (e.g. multidisciplinary team working). However, these are not intended to represent a comprehensive
overview of all the activity in all of the CSC project teams.
Verbatim quotations are used to illustrate the various points raised. It should be noted that
quotations are not presented as “evidence” for the themes, but to give the reader a flavour of
the interviewee’s own words. For this reason, and in the interest of brevity, not every theme
is illustrated by a quotation. Similarly, quotations used are not the only ones that refer to a
particular theme. Multiple quotations are used to illustrate different ways of expressing a
similar view and do not indicate that the theme was more important than those where only
one quotation is used. All interviews conducted as part of this research were given on
condition of anonymity.
Our first report (Parker et al, 2001) provided a brief description of the background and
operational aspects of the CSC and the methodological approach. These early qualitative
findings have been incorporated into this report where appropriate.
We would like to thank all those who took part in interviews for giving their time and
sharing their views with us, and the administrators of the programmes for their efforts in
arranging interview schedules for visits from the research team and providing data for this
report. We are also grateful for the support of Alan Glanz at the Department of Health and of our
colleagues at HSMC, particularly Hilda Parker, Phil Meredith and Ruth Kipping, who
contributed to this study, and Jackie Francis, who organised the fieldwork and helped prepare this
report. Finally, thanks to three anonymous reviewers who provided thoughtful and helpful
comments on an earlier draft.
1 Measures relating to ‘clinical effectiveness’, ‘improving the patient and carer experience’ and ‘capacity and demand’ were used at the discretion of the programme managers and project teams. In the absence of standardised data for these measures, they were not included in the evaluation’s quantitative analysis.
2 There are five tumour-specific guides (bowel, breast, lung, ovarian and prostate cancers) and eight topic summary guides (e.g. chemotherapy, radiology). The reports are available from the NPAT website at www.nhs.uk/npat.
Contents

List of figures
List of tables
Executive summary
1. Introduction
2. What were the gains achieved both in terms of quantitative outcomes and less quantifiable benefits?
3. What were the key levers for change?
4. What hindered change/progress?
5. What was the perceived value and impact of the methodological approach that was adopted?
6. What are the implications for national and regional roles, cancer networks, project management and clinical leaders?
7. How much did the Cancer Services Collaborative cost and how was the funding used locally?
8. Discussion: what are the key lessons for future collaboratives in the NHS?
References
Appendices
1. Patient-level data collection form
2. Postal questionnaire
3. Response rates to postal questionnaire
4. Collection of patient-level activity data
5. CSC project level analysis of selected quantitative outcomes
6. Secondary analysis of waiting times to first definitive treatment
7. Postal questionnaire: aspects of improvement approach
8. Postal questionnaire: organisational aspects
9. Costs questionnaire
List of figures

Figure 1: IHI Improvement model
Figure 2: Diagrammatic illustration of the collaborative process
Figure 3: Management structure
Figure 4: Box plots showing waiting times from referral to first definitive treatment by tumour type
Figure 5: How helpful did you find process mapping?
Figure 6: How helpful did you find having a dedicated project manager?
Figure 7: How helpful did you find Capacity and Demand training?
Figure 8: How helpful did you find the CSC Improvement handbook?
Figure 9: How helpful did you find the CSC change principles?
Figure 10: How helpful did you find PDSA?
Figure 11: How helpful did you find monthly reports?
Figure 12: Team self-assessment scale
Figure 13: How helpful did you find team self-assessments?
Figure 14: How helpful did you find National Learning workshops?
Figure 15: National Learning workshops
Figure 16: How helpful did you find National one day meetings?
Figure 17: National meetings
Figure 18: How helpful did you find the CSC listserv?
Figure 19: How helpful did you find conference calls?
Figure 20: How helpful did you find local CSC Leadership?
Figure 21: How helpful did you find clinical champions?
Figure 22: How helpful did you find cancer networks?
Figure 23: How helpful did you find the National CSC team?
Figure 24: How helpful did you find Trust Chief Executives?
Figure 25: How helpful did you find the role of Health Authorities?
Figure 26: How helpful did you find the role of regional offices?
Figure 27: Programme C Prostate project; waiting time ‘run chart’
Figure 28: Programme B Prostate project; waiting time ‘run chart’
Figure 29: Programme D Breast project A; waiting time ‘run chart’
Figure 30: Programme D Breast project A; booking ‘run chart’ for the first specialist appointment
Figure 31: Programme D Breast project A; waiting time ‘run chart’ for admissions
Figure 32: Programme C Breast project; waiting time ‘run chart’
Figure 33: Programme A Breast project; waiting time ‘run chart’
Figure 34: Programme A Breast project; booking ‘run chart’ for the first specialist appointment
Figure 35: Programme B Ovarian project; waiting time ‘run chart’
Figure 36: Programme D Ovarian project; waiting time ‘run chart’
Figure 37: Programme A Ovarian project; waiting time ‘run chart’
Figure 38: Programme A Ovarian project; booking ‘run chart’ for the first specialist appointment
Figure 39: Programme A Ovarian project; booking ‘run chart’ for the first diagnostic test
Figure 40: Programme A Ovarian project; booking ‘run chart’ for the first definitive treatment
Figure 41: Programme B Colorectal project; waiting time ‘run chart’
Figure 42: Programme D Colorectal project; waiting time ‘run chart’
Figure 43: Programme D Colorectal project; booking ‘run chart’ for first definitive treatment
Figure 44: Programme A Colorectal project; waiting time ‘run chart’
Figure 45: Programme A Colorectal project; booking ‘run chart’ for first diagnostic test
Figure 46: Programme A Colorectal project; booking ‘run chart’ for first definitive treatment
Figure 47: Programme A Lung project; waiting time ‘run chart’
Figure 48: Programme E Lung project; waiting time ‘run chart’
Figure 49: Programme B Lung project; waiting time ‘run chart’
List of tables

Table 1: Patient-level data on waiting times by project and tumour type
Table 2: Summary main analysis of waiting times from referral to first definitive treatment by tumour type (14 projects)
Table 3: Summary project-level main analysis of waiting times from referral to first definitive treatment
Table 4: Summary project-level analysis of the percentage of patients booked for three stages of care
Table 5: In your view have there been any local benefits from participating in the CSC which were not directly associated with the formal objectives and processes of the CSC? (n=96)
Table 6: Please indicate how well embedded within the participating organisation you believe the CSC now is (% responses) (n=96)
Table 7: Looking at the CSC over the whole course of your involvement, how would you assess the evolution of the programme? (% responses) (n=96)
Table 8: How helpful overall did you find the following components of the CSC improvement approach in the context of your role in the CSC programme? (presented in order of highest % of ‘very helpful’ responses) (n=96)
Table 9: Most helpful and least helpful components of CSC improvement approach
Table 10: How helpful did you find the following broader aspects of the CSC in the context of your own role? (in order of highest % of ‘very helpful’ responses) (n=96)
Table 11: Average income and expenditure for CSC programmes to March 2001 (n=8)
Table 12: Response rates to postal questionnaire by respondent group
Table 13: Response rates to postal questionnaire by CSC Programme
Table 14: Patient-level data on waiting times by programme
Table 15: Programme C Prostate project: all cases; waiting time from referral to first definitive treatment (days)
Table 16: Programme B Prostate project: all cases; source of referral
Table 17: Programme B Prostate project: all cases; waiting time from referral to first definitive treatment (days)
Table 18: Programme D Breast project A: data by treatment type and dates present
Table 19: Programme D Breast project A: locally defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 20: Programme D Breast project A: two week wait defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 21: Programme C Breast project: data by treatment type
Table 22: Programme C Breast project: all patients; waiting time from referral to first definitive treatment (days)
Table 23: Programme C Breast project: locally defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 24: Programme C Breast project: two week wait defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 25: Programme A Breast project: data by treatment type and dates present
Table 26: Programme A Breast project: two week wait defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 27: Programme A Breast project: two week wait defined urgent cases treated with hormone therapy; waiting time from referral to first definitive treatment (days)
Table 28: Programme A Breast project: reported booking for first specialist appointment by treatment type and event timing
Table 29: Programme A Breast project: reported booking for first diagnostic investigation and first definitive treatment by treatment type and event timing
Table 30: Programme B Ovarian project: all patients; waiting time from referral to first definitive treatment (days)
Table 31: Programme D Ovarian project: data by referral type and dates present
Table 32: Programme D Ovarian project: all patients; waiting time from referral to first definitive treatment (days)
Table 33: Programme D Ovarian project: patients treated with surgery; waiting time from referral to first definitive treatment (days)
Table 34: Programme D Ovarian project: booking data for first diagnostic test and first definitive treatment
Table 35: Programme A Ovarian project: all patients; waiting time from referral to first definitive treatment (days)
Table 36: Programme A Ovarian project (Trust A): two week wait defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 37: Programme B Colorectal project: all patients; waiting time from referral to first definitive treatment (days)
Table 38: Programme D Colorectal project: all patients; waiting time from referral to first definitive treatment (days)
Table 39: Programme D Colorectal project: all GP referrals treated with surgery; waiting time from referral to first definitive treatment (days)
Table 40: Programme A Colorectal project: data by first definitive treatment and referral type
Table 41: Programme A Colorectal project: all patients; waiting time from referral to first definitive treatment (days)
Table 42: Programme A Colorectal project: GP referrals treated with surgery; waiting time from referral to first definitive treatment (days)
Table 43: Programme A Lung project: type of referral
Table 44: Programme A Lung project: all patients; waiting time from referral to first definitive treatment (days)
Table 45: Programme A Lung project: urgent referrals treated with chemotherapy; waiting time from referral to first definitive treatment (days)
Table 46: Programme E Lung project: type of first definitive treatment
Table 47: Programme E Lung project: all patients; waiting time from referral to first definitive treatment (days)
Table 48: Programme B Lung project: all patients; waiting time from referral to first definitive treatment (days)
Table 49: Programme B Lung project: cases treated with chemotherapy; waiting time from referral to first definitive treatment (days)
Table 50: Programme B Lung project: cases treated with radiotherapy; waiting time from referral to first definitive treatment (days)
Table 51: Programme B Lung project: cases treated with surgery; waiting time from referral to first definitive treatment (days)
Table 52: Summary project-level secondary analysis of waiting times from referral to first definitive treatment
Table 53: % respondents rating aspect ‘very’ or ‘quite helpful’ (by programme A-G)
Table 54: % respondents rating aspect ‘very’ or ‘quite helpful’
Table 55: % respondents rating aspect ‘very’ or ‘quite helpful’ (by programme A-G)
Table 56: % respondents rating aspect ‘very’ or ‘quite helpful’
EXECUTIVE SUMMARY
Background:
The Cancer Services Collaborative (CSC) is a major National Health Service (NHS)
programme that aims to improve the experience and outcomes for patients with suspected or
diagnosed cancer by optimising care delivery systems. The initiative is being led by the
National Patients’ Access Team (NPAT) and phase I ran from November 1999 to March
2001. Following a bidding process, nine programmes were selected to participate: one from each
of the then eight health regions in England, with a second programme in London. The CSC focused on five
cancer tumour groups: breast, lung, colorectal, prostate and ovarian. Seven of the
programmes included projects in all five tumour groups and two programmes concentrated
on four tumour groups. In total there were 51 projects across the nine programmes.
The CSC is part of the National Booked Admissions Programme and therefore linked to the
Government’s aim to modernise the NHS through measures designed to raise standards and
improve access and convenience. In addition, the aim and focus of the CSC complement
other policy initiatives in the national agenda for improving cancer care such as the Calman-Hine report and the NHS Cancer Plan, which included a target of a maximum two-month
wait from urgent GP referral to treatment for all cancers by 2005. The decision to establish
the CSC was driven by two main factors: an opportunity to implement the booking process
across the whole patient pathway and the results of a national cancer waiting times audit
which suggested that there were unacceptable delays along the care process.
Aim of evaluation:
The Department of Health (DH) commissioned the Health Services Management Centre
(HSMC) to undertake an independent evaluation of the CSC, beginning in April 2000. The
evaluation aims to provide an assessment that will assist in documenting and analysing the
achievements and lessons from the CSC programme.
Method:
The research method consisted of both quantitative and qualitative analysis including:
- 125 semi-structured, tape-recorded interviews with CSC participants and stakeholders (face-to-face and by telephone) during the period April - December 2000,
- Six tape-recorded focus groups with CSC participants during the second half of 2000,
- A postal questionnaire administered after the end of phase I of the CSC (May 2001) to national leaders of the CSC, programme managers, project managers and project clinical leads (n=130),
- Documents and observation of meetings, conference calls, and the use and content of the CSC electronic mail discussion list (listserv),
- Patient-level data on (a) waiting times and (b) booking, and
- Cost data.
Results:
Quantitative outcomes
We sought to analyse patient-level data from the 51 project teams relating to the two
quantitative 'standard global measures' adopted by the CSC: (a) waiting time to first
definitive treatment and (b) the proportion of patients booked at three key stages of the
patient’s journey. In the event our analyses have been severely constrained by the lack of
available data. A full discussion of the difficulties surrounding the quantitative evaluation is
provided in the report.
On the basis of data supplied the number of diagnosed cancer patients whose care and
treatment would have been directly affected by the work of all the teams during phase I
extrapolates to approximately 5,340 per annum1. Breast and colorectal cancer patients
comprise the majority of these cases (2,300 (43%) and 1,527 (29%) respectively). The
numbers of lung, ovarian and prostate cancer patients were much lower (680 (13%), 544
(10%) and 272 (5%) respectively).
Waiting times
Our analysis of the data from the 14 (27%) projects that were able to provide adequate
returns, totalling 487 patients in the baseline quarter (January to March, 2000) and 409
patients in the outcome quarter (January to March, 2001), suggests a wide range of
experience relating to changes in waiting times to definitive treatment2.
Two prostate projects demonstrated good progress towards local waiting time targets as
overall median waiting times were reduced from 140 to 63 days (a 55% decrease). Although
prostate cancer remained the tumour site with the longest median waiting time overall, the
proportion of patients waiting longer than 62 days (the two month target) before starting
treatment decreased considerably (from 96% to 52%).
At the other extreme three ovarian projects did not experience an overall reduction in median
waiting times (in contrast these rose from 17 to 28 days: a 64.7% increase).
Three colorectal projects saw a decrease in median waiting times from 64.5 to 57 days (an
11.6% reduction) but this was not statistically significant.
The three lung and three breast cancer projects saw very little change overall in terms of
median waiting times (44.0 to 44.5 days and 19 to 21 days respectively).
Booked appointments
With regard to the second measure, only 11/51 (22%) of the project teams provided data on
booking for the quarters ending March 2000 and March 2001. Five of these projects
experienced an increase in booking for first specialist appointment between the two periods
and - in the quarter ending March 2001 - 3/11 projects reported 100% booking for this stage
of the patient’s journey. Five projects also experienced an increase in booking for first
diagnostic test and - in the quarter ending March 2001 - 7/11 projects reported 100%
booking. Finally, three projects experienced an increase in booking for first definitive
treatment but three projects experienced a reduction; for the quarter ending March 2001,
7/11 projects reported 100% booking.
Our analysis suggests a somewhat higher proportion of patients were being booked in the
quarter ending March 2001 at each of these three stages as compared to the outcomes
reported by the CSC (Kerr et al, 2002).
1 The work of project teams in phase I of the CSC also covered suspected cancer patients but the quantitative dataset available to the research team does not provide a means of estimating numbers for this group of patients.
2 A further 12 projects (24%) were able to provide partial returns that have been used in a secondary analysis. These projects comprised an additional 207 patients in the baseline quarter and 270 patients in the outcome quarter, which varied from the quarter ending November 2000 to March 2001.
Costs
The funds directly allocated to each of the nine regional programmes averaged £554,592
(£507,299 from NPAT plus £47,293 from other sources). The majority of the funding spent
during phase I (54%) was on project-related non-clinical staff time whilst a further 28% was
used for project-related clinical staff time.
There was significant variation across the programmes as to how funding was used locally.
Most strikingly, one programme spent approximately £310,000 (56%) on project-related
clinical staff time, new clinical capacity or waiting time initiatives; in comparison, another
programme spent just over £30,000 (6%) on these elements.
An average of £108,032 (19.5% of the total available funds) was carried forward to phase II
of the CSC (2001/02) across the nine programmes (range 3-41%).
Qualitative findings
In addition to the gains directly associated with the CSC’s formal objectives, over two-thirds
of the 96 respondents to the postal questionnaire (response rate 74%; 96/130) stated that additional
local benefits were realised. These informal benefits included: spreading the CSC approach
and techniques to other departments and organisations locally, assisting in the development
and strengthening of local cancer networks, developing staff and increasing staff motivation,
stimulating multi-disciplinary team working and raising the profile of participating Trusts.
However, some participants in a minority of the projects questioned the overall scale of the
changes that had been achieved in phase I of the CSC.
Respondents identified two components of the CSC approach as key levers for change: the
adoption of a patient perspective (and particularly the acquisition and application of process
mapping skills) and the availability of dedicated project management time. Both of these
elements were rated as being ‘very’ helpful by over 70% of respondents.
Other highly rated aspects of the CSC were: the training opportunities (especially related to
capacity and demand issues), opportunities for multi-disciplinary working, empowering
frontline staff and networking at the national learning workshops and other national
meetings. On this final point, more opportunities to meet and discuss issues with peers on a
one-to-one or at a small group level would have been welcomed.
As must be expected in such an innovative programme there were a number of aspects which
participants found less helpful and felt could be improved in future phases of the CSC.
Certainly, and as has been acknowledged in hindsight by those leading the programme,
particular elements at the beginning of the CSC could have been improved. Ongoing
concerns mostly centred on the data collection, measurement and monthly reporting aspects
of the process. In particular the compilation and dissemination of team self-assessment
scores were found to be ‘not particularly helpful’ or ‘not at all helpful’ by 50% of
respondents overall (and by over 70% of project managers). There were also particular
doubts about the usefulness of conference calls and the CSC listserv.
At the organisational level the support provided locally by programme managers and
clinicians - where they were closely involved - to the project managers and their teams was
seen as very valuable. Ratings of the contributions of the cancer networks, CSC national
team and Trust chief executives were more varied. The contribution of health authorities and
regional offices in phase I of the CSC were not highly rated by participants although there
was felt to be potential for closer - and more helpful - involvement at these levels in future
phases.
As far as likely sustainability and spread are concerned, whilst over half of the respondents to
the questionnaire felt that the CSC was ‘very’ or ‘quite’ embedded in their organisation,
almost a third were unconvinced that this was the case - with lead clinicians more sceptical
than their project managers. The differences in approaches adopted by the 51 projects meant
that some had clearly given more thought than others to the issue of ‘spread’.
Discussion:
There was a strong sense from participants that - as a programme - phase I of the CSC had
improved over the course of its 15 months duration. However, it would be misleading to
portray the CSC as a single entity. The marked variation between the nine programmes points
to the importance of understanding the factors which influenced the likelihood of ‘success’
locally.
Participants from two of the nine programmes in the CSC were generally much more
positive in their assessment of its various aspects whilst those from another programme were
relatively negative. There were also some differences between the professional groups which
participated: clinicians were less convinced by the value of the Plan-Do-Study-Act (PDSA)
cycle approach whilst project managers were more positive than clinicians about the likely
sustainability of the improvements that had been made.
Given that the improvement method taught to all the projects - and the mode of its national
introduction to participants - was very similar, explanations for the recorded variations
between the nine programmes must lie elsewhere. It is increasingly clear that the receptive
contexts (Pettigrew et al, 1992) at the individual, team and organisational levels play a
significant role in determining both outcomes and experiences of programmes such as the
CSC.
The research presented here points to two particular factors. Firstly, the vital importance of
local leadership (both clinical and managerial) and - more specifically - the way in which
such improvement programmes are interpreted, disseminated, managed and applied locally.
Secondly, the need to strike the correct balance when a programme is co-ordinated at a
national level - with the aim of achieving nationally shared goals - but which simultaneously
requires a strong sense of local ownership of the work amongst participants. Related to this - and a strong theme that emerged from participants’ comments - was a reaction against some
of the reporting regarding the achievements of the CSC.
It is important to bear in mind that the programmes were selected via a competitive bidding
process and are therefore likely to represent the ‘leading-edge’ of NHS cancer teams; they
were explicitly chosen to demonstrate how the collaborative approach can lead to significant
improvements. Spreading the CSC processes to those teams participating in phase II will be
a different, and in some respects harder, challenge but one that will be facilitated by the
immense learning that has been captured by all involved in phase I.
Conclusions:
The CSC itself was a learning process and this evaluation has been of an innovation in the
NHS which has changed and developed over the course of its implementation. The lessons
learnt from phase I of the CSC should help in the design of future programmes developed by the
NHS Modernisation Agency.
Whilst broadly based on the IHI ‘Breakthrough’ Collaborative methodology, the CSC was
something of a hybrid between this US approach and lessons learnt from process redesign in
the NHS - for example, one of the most highly rated aspects of the CSC was the use of
process mapping. Such customisation is important given not only the differences between the
health care systems in the US and the UK but also the seeming reluctance of NHS staff to adopt
wholesale a US-style change programme.
Overall, the CSC was a success in the views of the participants themselves. For some, the
less quantifiable benefits were more significant than those which can be reflected in terms of
the selected quantitative outcomes over a 15 month project. For others, the relatively small
numbers of patients involved and the reported scale of the changes that were brought about
were seen as only the beginning of a much longer term process which the CSC had initiated.
The challenges now for participants and their organisations are, firstly, to maintain the
momentum and to continue to build towards significant improvements across whole services
and, secondly, not only to sustain both these improvements - and the techniques and
processes that have brought them about - but to continually nurture and anchor the cultural
shifts which have begun and which are so important if the NHS is to truly become a
‘patient-led service’.
1. INTRODUCTION

1.1 Background
The Cancer Services Collaborative (CSC) is an innovative National Health Service (NHS)
programme that aims to improve the experience and outcomes for patients with suspected or
diagnosed cancer by optimising care delivery systems. The initiative is being led by the
National Patients’ Access Team (NPAT)1 and phase I ran from November 1999 to March
2001. Each of the eight health regions in England - as they were constituted at that time - had
one programme (two in London). The CSC focused on five cancer tumour groups: breast,
lung, colorectal, prostate and ovarian. Seven of the programmes included projects in all five
tumour groups and two programmes concentrated on four tumour groups. In total there were
51 projects across the nine programmes. The NHS Modernisation Board’s first Annual
Report states that approximately 5,000 NHS staff are participating in phases I and II of the
CSC (NHS Modernisation Board, 2002).

1 From April 2001 NPAT has been an integral part of the newly formed NHS Modernisation Agency.
The programme’s goal was to be achieved by (NPAT, 1999):
- Providing certainty and choice for patients across the process of care,
- Predicting patient requirements and pre-planning and pre-scheduling their care at times that suit them,
- Reducing unnecessary delays and restrictions on access,
- Improving patient/carer satisfaction by providing a personalised, consistent service, and
- Ensuring patients receive the best care, in the best place, by the best person or team.
The decision to attempt to introduce service improvement using a collaborative approach
stemmed from NPAT’s previous experience of service redesign (Locock, 2001; 2003). They
learnt that successful redesign required a programmatic approach comprising persons skilled
in redesign techniques, a system for measurement, and regular reporting. Given the
complexities of cancer care, their view was that a redesign programme in this area would
benefit from a more comprehensive approach; i.e. support both from within an organisation
and nationally, together with a defined methodological package. Prior knowledge of the
“Breakthrough Series” (BTS) (Kilo, 1998) collaborative improvement model developed by
the Institute for Healthcare Improvement (IHI), Boston, USA, suggested to NPAT that this
model offered a suitable approach.
The CSC is therefore based on a combination of ingredients from the BTS and NPAT
experience of service redesign initiatives such as the Booked Admissions Programme (Ham
et al, 2002), as well as on earlier lessons from re-engineering projects in the NHS (Bowns
and McNulty, 1999; McNulty and Ferlie, 2002; Packwood et al, 1998).
1.2 Collaborative approaches to improving health care quality: empirical evidence
IHI is a not-for-profit organisation which supports collaborative healthcare improvement
programmes on an international basis using evidence-based improvement principles. The
participation of IHI staff was secured as “coaches” to NPAT to develop and implement the
‘CSC model’. The evidence for the effectiveness of the specific IHI BTS collaborative
approach consists largely of views and commentary pieces from various proponents of the
method on a case-by-case basis (for example: Kilo, 1998; Plsek, 1999; Lynn et al, 2000;
Leape et al, 2000; Turner, 2001; Kerr et al, 2002; Brattebo et al, 2002) and is reliant on
self-reported data (Leatherman, 2002). For example, Brattebo et al (2002) report that
patients’ need for ventilator support in a surgical intensive care unit was decreased by 2.1
days – and length of stay reduced by 1.0 day – following participation in a national quality
improvement collaborative in Norway. In the NHS Bate et al (2002) independently evaluated
an IHI BTS Collaborative focusing on total hip replacement surgery and reported an average
reduction in length of stay of 1.0 day (12.2%) across 28 participating hospitals - compared to
a 0.1 day (1.6%) reduction in four ‘control’ hospitals1 - and that 17 (61%) of the participating
hospitals recorded a statistically significant reduction.
Research to date provides no definitive answer to the important question of whether
improvements are likely to be sustained after a collaborative (Øvretveit et al, 2002). There
are some indications that outcome improvements are sustained, but less evidence of
continuous improvement or of institutionalisation of the methods. Some teams in collaboratives
studied elsewhere did not learn how to institutionalise changes so that they could
‘survive’ individuals leaving - a concern also raised by participants in the CSC - nor did they
learn how to recognise when further changes were needed and how to make them.
The methodologies and results of initiatives similar to the BTS approach have been
evaluated2. One of the very first collaborative improvement groups - the Northern New
England Cardiovascular Disease Study Group (NECVDSG) - compiled in-hospital mortality
data from 15,095 coronary artery bypass grafting procedures and - after the focused
intervention period - the group tracked a further 6,488 consecutive cases and reported a 24%
reduction in in-hospital mortality rate (p=0.001) (Plsek, 1997). Rogowski et al (2001) and
Harbor et al (2001) report on the clinical and economic impact of a neonatal intensive care
(NIC) collaborative in the US. They concluded that not only did ‘multidisciplinary
collaborative quality improvement have the potential to improve the outcomes of neonatal
intensive care3’ but also ‘cost savings may be achieved as a result.4’
In addition to tangible improvements in service delivery and patient experience, the CSC is
also about learning how to manage and facilitate a large-scale improvement initiative in the
NHS. It is one of the first attempts to implement a collaborative improvement approach in
the NHS. A full report of the evaluation of the NHS Orthopaedic Services Collaborative
(OSC) which - as mentioned above - applied the IHI model to improve the care provided to
elective total hip replacement patients has been published (Bate et al, 2002) as has an
evaluation of the Booked Admissions Programme (Ham et al, 2002). Related research in the
UK has focussed on in-depth case studies of local project teams that are participating in other
NHS Collaboratives (Robert et al, 2002).

1 As Plsek (1997) states: ‘much effort is needed to enhance measurement of results in collaborative improvement efforts, particularly as it relates to comparisons with peers not in the collaborative group.’
2 A large-scale multi-site study - led by RAND (with the University of California, Berkeley) - of a series of quality improvement Collaboratives directed towards improving chronic illness care, and which are based on the IHI BTS approach, is currently ongoing in the US.
3 Between 1994-96 the rate of infection with coagulase-negative staphylococcus decreased from 22.0% to 16.6% at the six project NIC units and the rate of supplemental oxygen at 36 weeks adjusted gestational age decreased from 43.5% to 31.5% at the four NIC units in the chronic lung disease group. The changes observed at the project NIC units for these outcomes were significantly larger than those observed at the 66 comparison NIC units over the four year period from 1994-97 (Harbor et al, 2001).
4 Between 1994-96 the median treatment cost per infant with birth weight 501-1500g at the six project NIC units in the infection group decreased from $57,606 to $46,674; at the four chronic lung disease hospitals, for infants with birth weights 501-1000g, it decreased from $85,959 to $77,250. Treatment costs at hospitals in the control group rose over the same period (Rogowski et al, 2001).
1.3 Policy context
The CSC is part of the National Booked Admissions Programme (Meredith et al, 1999; Ham
et al, 2002) which seeks to give choice and certainty to patients by planning and booking
their care in advance. Cancer was chosen as the disease area for improvement because the
treatment pathway had the potential for pre-planning and scheduling. The other major driver
was the results of the national cancer waiting times audit (Spurgeon et al, 2000), which
illustrated the unacceptably long delays and waiting times across the whole care pathway.
Positioning the CSC programme within the context of national policy initiatives for cancer
and service modernisation was an explicit aim from the outset. This was seen as essential to
enhance the impact and sustainability of improvements brought about by its activity:
“We wanted to make the CSC as mainstream as we can. If it’s seen as this sort of nice initiative that does
good work, but it’s kind of bolted on, then I think it will be less impactful. We were keen that we position
it as a key mechanism for achieving national cancer goals, achieving Calman-Hine, and other things…It is
important to view the CSC in context, as a component of this modernisation.” (CSC Programme Lead)
The 1995 Calman-Hine Report (Expert Advisory Group on Cancer, 1995) is generally
acknowledged as the cornerstone guiding policy developments for cancer care in recent
years. A key recommendation from this report was that care should be integrated from
primary, through secondary to tertiary and palliative care. In addition, at the end of 1998 the
Government announced the introduction of maximum waiting time targets for patients
referred with suspected cancer: the 14-day standard. Improvement in cancer services was
further augmented by the appointment of a National Cancer Director in November 1999
who, together with the National Cancer Action Team (CAT), produced the NHS Cancer Plan
(Department of Health, 2000) outlining future priorities and developments in cancer service
delivery. The Plan provides a comprehensive strategy to tackle the disease with the ultimate
aim that no-one should wait longer than one month from an urgent referral to the beginning
of treatment; progress towards its goals has recently been reported upon (The NHS
Cancer Plan - Making Progress, Department of Health, 2001).
In being part of the National Booked Admissions Programme, the CSC is explicitly linked to
one of the Government’s aims to modernise the NHS through measures designed to raise
standards and improve access and convenience. Its remit to improve the experience and
outcomes of care from referral through diagnosis and treatment to follow-up or palliative
care means, however, that the scope of the CSC is much broader than other aspects of the
National Booked Admissions Programme.
In addition to tangible improvements in service delivery and patient experience, the CSC is
about learning. The NHS Plan’s apt description of the programme as “ground-breaking”
provides a flavour of the unique, and at times daunting, challenges faced by those who have
embarked on making it happen:
“We do not know all the answers at the beginning of the programme but we have a tremendous
opportunity to learn together. Above all, the Cancer Services Collaborative gives us a chance to show
what is possible: to create levels of patient service and systems of cancer care that will be amongst the best
in the world.” (CSC National Clinical Chair)1
1 Prof. David Kerr, CSC National Clinical Chair, Preface to CSC Improvement Handbook, November 1999.
At the end of the 15-month programme, achievements and lessons learnt are being shared
with other cancer centres/networks and within the wider NHS, thereby informing and
supporting other efforts at improving healthcare systems.
1.4 The change approach
The CSC improvement model is described by the national programme lead as a “hybrid”
approach; in effect building a new model by borrowing from the IHI BTS where appropriate
and integrating this with the best from NPAT and NHS experience and expertise. Given that
this is one of the first attempts at this type of improvement programme in the NHS, the
methodology should be seen as an evolving process. Although grounded within a firm
conceptual model, there is fluidity in its application that allows for new learning to be
incorporated and adjustments to be made during the process. This flexibility may have
appeared frustrating at times for the participants as frequent changes were to be expected; at
the same time, it was intended that innovation would be stimulated and potential
achievements would not be stifled by methodological boundaries.
A starting point for this approach is that potential for eventual achievement of “spreading” or
rolling-out change is maximised when the approach is first concentrated on teams who can
demonstrate that they are likely to make it work. Beginning with high performing teams who
are likely to be early adopters of new ways of working eases the systematic dissemination of
the learning to other areas and promotes innovation. This means that a substantial layer of
experimentation is removed for the next wave of participants and an evidence base of
learning gained from the original teams is passed on. Another essential factor is the principle
that improvement in health care occurs when the model is passed on from clinician to
clinician. For this reason clinical leaders are prominent members of improvement teams as
their contribution is the key to best possible outcomes (Berwick, 1998).
Before programme teams were established and became operational, preparatory work by the
national team included an “expert panel”. Individuals with expertise in service improvement,
change management and cancer service delivery, together with clinicians, were invited to participate in a
one-day event which aimed to identify existing knowledge and best practice in improving
cancer care. The panel was charged with identifying the small number of potential changes that
were most likely to result in improvements in each of the five tumour groups. This work
produced a set of “change principles” which aim to shorten the initial “discovery phase” that
most improvement projects have to go through, and which were grouped into four strategies:
- Co-ordinating the patient journey,
- Improving the patient experience,
- Optimising care delivery, and
- Managing capacity and demand.
The work of the panel resulted in the compilation of an “Improvement Handbook” which
provides guidance on change principles, suggestions for “change ideas” and a section
providing an overview of the epidemiology and treatment of each of the cancer tumours.
Figure 1 illustrates the two elements of the model. The first, by concentrating on “current
knowledge”, aims to create the best possible starting point for the improvement project. This
involves three steps:
- Setting precise aims,
- Defining measures that will show movement towards aims, and
- Identifying change concepts.
The second element focuses on systematic action for learning and improvement. Rapid
cycles of improvement (PDSA = Plan-Do-Study-Act) are used to plan, pilot, reflect upon and
implement changes. Improvement teams are taught to test changes on a small scale with a
small number of clinicians and small patient samples (a “slice”) before implementing them.
FIGURE 1
IHI Improvement model (‘The model for improvement’)
Current knowledge:
1. What are we trying to accomplish? (aims)
2. How will we know that a change is an improvement? (measures)
3. What changes can we make that will result in an improvement? (change concepts)
System for learning and improvement: the Plan - Do - Study - Act cycle.
[source: Langley et al, 1996]
The programme itself follows a typical improvement cycle, illustrated in figure 2.
FIGURE 2
Diagrammatic illustration of the collaborative process
The figure shows the five cancer tumour types, nine selected networks/centres and 43 projects. Preparatory work by the expert panel and the improvement handbook feed into learning workshop 1; workshops 2, 3 and 4 and a concluding national forum are separated by action periods in which PDSA cycles are run and teams focus on learning, implementation, communication and reporting, with knowledge transfer supported by conference calls and the listserv.
Four learning workshops, each held over two days, with action periods in between constitute
the core ‘collaborative’ mechanism. The workshops aim to unite the programme and enable
participants to learn from the training team and colleagues, to gather new information and
ideas about their subject and process improvement, and to develop improvement plans.
‘Improvement teams’ from every project and additional collaborators from local areas attend
every workshop. In the case of the CSC, these meetings were large; approximately three
hundred delegates attended each event. The content consisted of a combination of
presentations and discussion periods in tumour groups or regional teams. The ‘action period’
between each learning session was when teams trialled and implemented changes for
improvement in their own workplace. An electronic mailing list (listserv) and regular pre-planned conference calls were established to facilitate communication across the
programmes during the action periods.
Measurement was intended to play a critical role in the CSC by informing on progress,
whether changes have resulted in improvement, and the sustainability of gains made. Each
project team was initially requested to select a small number of measures that reflected the
overall aims of their project. Choice of measures was initially left to the project team,
although the requirement was for at least one measure (a so-called ‘global measure’) from each
of the following five categories:
- Access,
- Patient flow,
- Patient satisfaction,
- Clinical effectiveness, and
- Capacity and demand.
In June 2000, NPAT requested the programmes to collect data relating to two ‘standard
global measures’. Data were collected only for the patient population (or patient “slice”) that
was the focus of the project. In addition, a system of monthly reporting to the central coordinating team required reporting on measures, as well as other programme activities (ideas
tested, changes implemented, issues and challenges) and a self-assessment score of each
team’s progress. NPAT compiled a monthly report incorporating all these contributions, which
in turn was reported to ministers and fed back to the whole programme.
Finally, the spread or roll-out of changes, once tested and demonstrated as improvements,
was reported to be an integral part of the model. The extent to which changes spread is one
measure of the eventual value of the improvement programme.
1.5 Operational aspects
Participating programmes were chosen to be part of the CSC following a bidding process.
Selection was led by the national team and convened by each region. In total twenty-four
applications were received. Nine programmes were selected; one in each of the English NHS
regions and a second in the London region. The participating teams are:
- Mid-Anglia Cancer Network: Eastern Region
- South East London Cancer Network: London Region
- West London Environs and Cancer Network: London Region
- Merseyside and Cheshire Cancer Network: North West Region
- Northern Cancer Network: Northern and Yorkshire Region
- Kent Cancer Network: South East Region
- Leicestershire Cancer Centre: Trent Region
- Avon, Somerset and Wiltshire Cancer Services: South West Region
- Birmingham Hospitals Cancer Network: West Midlands Region
On average each programme received approximately £500,000 and was instructed to spend
the money on personnel and activities that would serve to develop their programme aims.
Guidance given to programmes by NPAT, while allowing flexibility in contract specification
and team composition, specified the following basic structure for every programme team:
- Programme manager/director
- Programme clinical lead
- Lead clinician for each tumour group
- Project manager/facilitator for each tumour group.
The common management structure (and lines of accountability) for each of the programmes
is illustrated in figure 3. Some programmes have variations on this theme.
FIGURE 3
Management structure
The programme manager and programme clinical lead oversee the five tumour group projects (breast, lung, colorectal, ovarian and prostate), each of which has its own project manager and lead clinician.
The programme manager is the managerial lead for each programme and accountable to the
national team on behalf of the programme. A programme clinical lead in each programme
works alongside the programme manager to support and encourage clinical colleagues
leading each of the tumour groups and is accountable to the CSC clinical chair.
Project managers in each of the tumour group teams work closely with the tumour group
clinical lead, and report to the programme manager. Their time is dedicated to their
respective projects, although there is an additional commitment to participate in national
CSC events and the system of monthly reports. Most project managers are full-time and
employed on fixed term contracts funded by the CSC. In some programmes, project
managers are employed for a proportion of their time (one to two days per week) by CSC
funds, and continue to work in their existing clinical or managerial posts for the rest of the
time.
Clinical leads for tumour groups are clinicians who have agreed to take on the lead role in a
specific tumour group and their time commitment to the CSC in their local trusts varies
according to their individual preference or the demands of the project. For example, during
early stages when projects were involved in process mapping, these activities could take a
whole day of a clinician’s time. They are, however, expected to attend four workshops
lasting two days each, a number of additional national meetings and conference calls (one every
five weeks per tumour group), and to participate in the CSC e-mail discussion facility.
The autonomy awarded to programmes resulted in nine programmes with broadly similar
team compositions, using the same improvement methodology, but with diversity in the way
they tackled their tasks. Local conditions prior to becoming part of the CSC also contributed
to the uniqueness of each programme. They shared common experiences in implementing
the challenge of service redesign and improvement, but the outcomes reported here will have
been affected by pre-existing local conditions.
1.6 The HSMC evaluation
The evaluation, commissioned by the Department of Health, began in February 2000 and
aims to provide an assessment that will assist in documenting and appraising the
achievements and lessons from the CSC programme. The methodology consists of both
quantitative and qualitative analysis. Data sources include semi-structured interviews with
key individuals, focus groups, a postal questionnaire, documents, observation of meetings
and conference calls, use and content of the CSC electronic mail discussion list, patient-level
data on waiting times and booking, and cost data.
Quantitative data
The nine programmes were requested to collect data in order to measure two non-discretionary ‘standard global measures’:
- Waiting time from referral to first definitive treatment, and
- Summary data on booking activity.
Patient-level data relating to waiting times and booking were requested for each project. The
final data specification is shown in appendix 1. The process of collecting data was not
straightforward (appendix 4).
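To make the two standard global measures concrete, the sketch below shows one possible way of deriving them from patient-level returns of the kind specified in appendix 1. It is a minimal illustration only: the field names (referral_date, treatment_date and the three booked_* flags) and the example records are hypothetical assumptions, not the CSC data specification.

```python
from datetime import date
from statistics import median

# Hypothetical patient-level records for one project and one quarter;
# the field names are illustrative, not the CSC data specification.
patients = [
    {"referral_date": date(2000, 1, 10), "treatment_date": date(2000, 3, 5),
     "booked_first_appointment": True, "booked_first_test": True,
     "booked_first_treatment": False},
    {"referral_date": date(2000, 1, 24), "treatment_date": date(2000, 2, 18),
     "booked_first_appointment": True, "booked_first_test": False,
     "booked_first_treatment": False},
]

# Measure (a): waiting time in days from referral to first definitive
# treatment, summarised here as the median for the quarter.
waits = [(p["treatment_date"] - p["referral_date"]).days for p in patients]
print(f"Median wait: {median(waits)} days")

# Measure (b): percentage of patients booked at each of the three key
# stages of the patient's journey.
for stage in ("booked_first_appointment", "booked_first_test",
              "booked_first_treatment"):
    booked = sum(1 for p in patients if p[stage])
    print(f"{stage}: {100 * booked / len(patients):.0f}% booked")
```

Comparing a baseline and an outcome quarter is then simply a comparison of these quarterly summaries; for example, the fall from 140 to 63 days reported for the prostate projects corresponds to a (140 - 63) / 140 ≈ 55% reduction in the median waiting time.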
Qualitative data
An interim report based on qualitative data collected during the first round of interviews
conducted from April to June 2000 (most were completed in May) was published in January
2001 (Parker et al, 2001). This report was based on individual interviews with 98
participants and leaders of the CSC consisting of:
- 51 managers (Programme managers and project managers),
- 35 clinicians (Programme clinical leads and tumour group lead clinicians),
- 12 others working within Programmes in another capacity, and
- individuals leading the national programme and the national Cancer Director.
Most interviews were face to face, with a small number conducted by telephone where
personal meetings were not practical. The interview format was semi-structured and broadly
followed interview topic guides developed by the research team, whilst allowing the
opportunity for interviewees to raise other issues. Interviews lasted from twenty minutes to
one hour and were either tape-recorded and summary transcribed, or notes were taken during
interviews and summaries compiled subsequently. To preserve confidentiality all interview
notes were anonymised and coded using random numbers. Data were analysed by sorting
information and verbatim extracts into emerging theme categories. This final report is based
on the analysis of the data from the above plus further fieldwork undertaken in the second
half of 2000 which included:
- six focus groups with a total of 26 project managers,
- 21 further individual interviews with project managers, and
- six individual interviews with programme managers.
Finally an end of study postal questionnaire (appendix 2) was completed by CSC project
managers, tumour group clinical leads, programme managers and programme clinical leads
in each of the nine programmes - and selected others - during May-June 2001. The
questionnaire had an overall response rate of 74% (96/130). Appendix 3 provides details of
the response rates by the nine programmes and by different professional groups.
This report is structured around seven key questions and a concluding discussion. The seven
questions are:
1. What were the gains achieved both in terms of quantitative outcomes and less
quantifiable benefits?
2. What were the key levers for change?
3. What hindered change/progress?
4. What was the perceived value and impact of the methodological approach that was
adopted?
5. What are the implications for national and regional roles, cancer networks, project
management and clinical leaders?
6. How much did the CSC cost and how was the funding used locally?
7. What are the key lessons for future collaboratives in the NHS?
2. WHAT WERE THE GAINS ACHIEVED BOTH IN TERMS OF QUANTITATIVE OUTCOMES AND LESS QUANTIFIABLE BENEFITS?
Key findings
Participants’ overall views of the CSC were highly supportive. Particularly positive aspects were the changes in
attitude towards improving services which the CSC engendered, the increased sense of staff empowerment,
good training opportunities and the provision of time to allow staff to stand back from short-term exigencies to
reflect upon, and improve, local services.
Not surprisingly, as this was the first national programme of its kind, there was felt to be room for improvement
in some aspects. A minority of participants had reservations concerning the scale of achievements, acknowledging that phase I had, for the most part, focussed on a relatively small number of patients.
There were marked variations in the experiences of the nine programmes and - within these programmes - between the local project teams. Participants in two of the programmes were much more positive about the CSC than those in the other programmes, whereas participants in one were relatively negative. Much of this variation was due to local conditions prior to the CSC.
We sought to analyse patient-level data from the 51 project teams relating to the two standard global measures
adopted by the CSC: (a) waiting time to first definitive treatment and (b) the proportion of patients booked at
three key stages of the patient’s journey. In the event our analyses have been severely constrained by the lack of
adequate data.
Our analysis of the data from the 14 (27%) projects that were able to provide complete returns suggests a wide
range of experience relating to changes in waiting times to definitive treatment. The analysis also suggests that
there are tumour specific trends. Prostate projects demonstrated good progress towards local waiting time
targets as overall median waiting times were reduced from 140 to 63 days (a 55% decrease). Although prostate
cancer remained the tumour site with the longest median waiting time overall, the proportion of patients waiting
longer than 62 days before starting treatment decreased considerably (from 96% to 52%). At the other extreme
the ovarian projects did not experience an overall reduction in median waiting times (in contrast they rose from
17 to 28 days: a 64.7% increase). The colorectal projects saw a decrease in median waiting times from 64.5 to
57 days (an 11.6% reduction) but this was not statistically significant. The lung and breast cancer projects saw
very little change overall (44.0 to 44.5 days and 19 to 21 days respectively).
With regard to the second quantitative measure, only 11/51 (22%) of the project teams provided data on
booking for the quarters ending March 2000 and March 2001. Five of these projects experienced an increase in
booking for first specialist appointment between the two periods and - in the quarter ending March 2001 - 3/11
projects reported 100% booking for this stage of the patient’s journey. Five projects also experienced an
increase in booking for first diagnostic test and - in the quarter ending March 2001 - 7/11 projects reported
100% booking. Finally, three projects experienced an increase in booking for first definitive treatment but three
projects experienced a reduction; for the quarter ending March 2001, 7/11 projects reported 100% booking.
Over two-thirds of participants reported that the CSC had brought about local benefits over and above the
formal aims of the programme. These benefits included the spreading of the taught techniques and overall CSC
approach locally, the development and strengthening of cancer networks, improvements to staff development
and motivation, a stimulus to multi-disciplinary team working and the raised profile of their Trust or
department either regionally or nationally.
2.1 Participants’ overall views of the CSC
By way of introduction, and prior to reporting on the quantitative outcomes, we present here
some overall comments reflecting the views of a range of participants on their experience of
the CSC. The views below are representative of those expressed towards, or at, the end of the
CSC and the themes which emerge from them are discussed in more detail later in this
report. Although some of the quotations are lengthy we would urge readers to take the time
to read them as we believe they give an accurate flavour of what it was like to be a
participant in the CSC and together they set the context for the remainder of this report.
For the most part participants have been overwhelmingly positive:
“I mean I find it completely exhilarating and I have enjoyed it much more than I have ever enjoyed any
job I have done in the Health Service before. It has been a complete revelation to me.” (Programme
Manager)
“I’ve really enjoyed it and learnt a lot plus met loads of interesting and enthusiastic people. We have made
real changes for patients!” (Project Manager)
“I have thoroughly enjoyed my involvement. I feel proud to have been associated with the CSC and I
believe that it has proved the success and impact of a collaborative approach in healthcare.” (Project
Manager)
“Excellent programme. Really made a difference to patient experience.” (Tumour Group Lead Clinician)
For many, the changes engendered by the CSC have gone beyond process or system changes
and have also had a beneficial impact on the attitudes of staff and the culture of the
participating organisations:
“You can see the difference not only in terms of how the actual pathway has changed, but in attitudes. The
biggest change has been in the attitudes of staff - the resistance to change has almost vanished. Now you
see people - when you’re challenging them with something - you see them actually thinking ‘how could
we do this’ instead of thinking ‘no, we’re not doing this’. That’s pretty amazing.” (Project Manager)
The quotation above refers to a recurrent and powerful theme that emerged from our
qualitative research - the sense of empowerment that the CSC had given to local staff at all
levels:
“I watched eight people who used to come off the shop floor, angry, disillusioned, almost despairing - watching their clinical acumen and ability being devalued - turn themselves into eight very keen project
managers with a set of skills that has left them all with fantastic opportunities that they didn’t have before,
and much more of an understanding on what was going wrong around them. They’ve just taken off, and
that’s been wonderful.” (Programme Manager)
“I joined because of a comment made at Dudley about being lucky to have a chance to make a difference.
There have been ups and downs but on the whole we have made a significant difference - which is
immensely satisfying.” (Tumour Group Lead Clinician)
Related to this, and again a constant theme from our interviews, has been what participants
viewed as an absolute prerequisite for much of what has been achieved in the CSC, namely,
the dedicated time available to project managers - and sometimes other staff - to take the
work forward locally:
“I think the highlights have been about the opportunity to have space and time to stand back and really
look at what’s happening in those five tumours and really identify how you can make a difference for
patients. I think the other highlight has been being able to take a multi-disciplinary team through a process
of allowing them to see and understand what the current patient journey looks like and how they can make
a difference to changing it. And then, really through having proper dedicated project management support,
being able to support those teams and the lead clinician to actually make changes in service delivery.”
(Programme Manager)
“The way I see it is that the collaborative has given the oomph and the push and the actual dedicated
project time to go and have the conversations with people … I mean they’ve all got their visions of how
they wanted things to be and the way they wanted the service to develop, and if they hadn’t got them
before, they certainly got them after the process mapping and seen how the actual service looked. And I
think the collaboratives role has been to go there and take what they wanted and then do the donkey work
to enable them to do that, to take it forward.” (Project Manager)
On a personal level participants welcomed much of the training - both formal and informal - that they had received during the CSC:
“I suppose in a kind of global sense, the highlights have been the training opportunities. Personally I have
learnt an awful lot, and there’s been far more training than you would normally get in a job, far more
money put into that - the formal training - and a lot of informal training.” (Project Manager)
“My enthusiasm’s still there. That’s still going well, my motivation’s actually still there, it’s been a tough
year though, it has been tough, it’s not been easy. I think one of the highlights is how I’ve developed
myself personally in the past year. That for me is my actual biggest highlight. “ (Project Manager)
However, as must be expected, there were specific aspects of the Collaborative that
participants felt could have been improved. Often these were related to the ‘early days’ of the
CSC and the steep learning curve faced by both participants and the leaders of the CSC alike:
“The laptop we didn’t get for yonks. And we’re expected to be on listserv and communicate with listserv - well, how do we do that when we haven’t got an office and haven’t got a pc? I think they need to look
after the people that are in the posts and ensure that project managers are set up from the word go. I
suppose it’s a pilot and we’ve learnt that, and as long as people listen …” (Project Manager)
One significant and specific criticism held by many participants related to the level of
measurement and reporting required by the CSC:
“I like the ethos behind the collaboratives, I think it’s the right idea. I just think they underestimate the
resource in each area to sustain the level of measuring, the level of reporting, and what that really
means…A whole lot less measures would be good, but I like the ethos of the collaborative…” (Project
Manager)
A minority of participants also had strongly held views about what they perceived to be the
‘top-down’ nature of the CSC:
“I don’t think we would have got together to share things if we hadn’t had this project as the stimulus, put
it that way. But then the constraints of the project became an irritation and we said ‘bugger that, we’re
going to break out, we’re going to meet at the bar and do it this way’ it was very much a wish to be not too
constrained by the straightjacket of the project and the American methodology.” (Project Manager)
“All my negativity is about the processy bit, it was the processy bits about the collaborative that irritated
me, it was just so dogmatic and so centralised, but if you can learn to live with that, then there are benefits
of the collaborative…” (Project Manager)
Finally, there were perhaps more serious concerns emerging from some participants’ - often clinicians’ - experiences of the CSC and what had been achieved in phase I:
“I am totally committed to the basic CSC concepts and will ensure they are central to our cancer network.
Alas, I have to say that involvement in phase 1 has been the most unsatisfactory experience in 30 NHS years
and has caused our dedicated team a great deal of unnecessary and counter productive stress.”
(Programme Clinical Lead)
“I will continue to work with the CSC, as I know what it is trying to achieve is correct, but I have grave
reservations about what has been achieved so far.” (Tumour Group Lead Clinician)
Some felt that while the CSC work might disadvantage non-malignant disease temporarily,
the ‘cancer-only’ focus should be seen as temporary and a way of demonstrating what this
type of programme can achieve:
“Let’s demonstrate it in one area, show that it’s achievable, that it’s not that we’re lazy, idle
clinicians…That everybody wants to achieve that high quality, and that when you fund it properly and
manage it properly, it is achievable. And if it is achievable, why can’t other diseases have that? It is
temporary in cancer only.” (Tumour Group Lead Clinician)
This small sample of our qualitative data illustrates a common theme throughout this report:
the seeming dissonance between the majority who, overall, greatly valued their involvement
in the CSC and a minority who had major reservations. Why should there be such striking
contrasts amongst participants in the same national programme and who were working to the
same basic methodology?
2.2 Variations in outcomes and experiences between the nine participating programmes
and the participants
Although our evaluation did not directly attempt to compare progress between the nine
programmes which participated in the CSC, our qualitative data suggests that there was
marked variation between them as to the extent to which their participation would be judged
a ‘success’. Much of this variation is likely to be related to conditions at the project team
level prior to the CSC. Nevertheless, exploring this diversity is important as we seek to learn
how to increase the likelihood of achieving improvements in particular settings. Participants
themselves offered some clues:
“The process is generic but the issues that prevent you from implementing it are often very local and
might even come down to one or two personalities even.” (Project Manager)
“There was little correlation between how each area was organising the projects - a lot of variation.”
(Programme Manager)
Most commonly they identified the need to select the right leaders (managerial and clinical)
for implementing such an initiative:
“I think the project is going to work here because we picked the right people. They’ve got the right vision,
they were half doing it anyway, and they would have delivered. They have clinical credibility amongst
their colleagues within their hospital, but also colleagues in their tumour specific cancer groups in the
country.” (Tumour Group Lead Clinician)
“You do need a certain person for this job, you do need to be personally quite tough because this could
take over my entire life if I let it. It’s the same nationally: you can see people who are coping quite well
and thriving on it, but you can see people who are not, who are so sometimes so overwhelmed by this
shifting sands that come from the centre, and the local difficulties in dealing with some obstinate
clinicians that won’t change…” (Project Manager)
Some participants were concerned about the original selection of both the nine programmes
and the specific tumour sites. The perception was that some areas were more likely to gain
the full benefit of the programmatic approach than others due to local conditions prior to
being selected to be part of the CSC:
“I think that some of the centres are having a great deal more difficulty than others. When we did a trawl
at the first workshop in Dudley, there were some groups who did not even know what it (MDT) meant,
never mind have it up and running.” (Programme Clinical Lead)
In a similar way, the view was that tumour groups were not participating on equal ground.
The amount of national attention to the particular type of cancer, the disease prevalence, and
the current state of knowledge on treatment would affect outcomes. This meant that the range of achievement was likely to be different and affected by the disease site rather than
reflecting the amount of effort teams may have made:
“Those tumours that hadn’t been ‘Calman-Hined’ are struggling much more than those that had; in other
words the urologists who hadn’t been done, have still all got the mountain to climb, even leaving aside the
problem with the treatment of their cancer, whereas lung, breast and colorectal, most of the bones of how
it should work are all there.” (Tumour Group Lead Clinician)
The autonomy given to the nine CSC programmes in compiling their local project teams and
how they wished to use some of the allocated funding led to the development of nine
different models for implementing improvement. Each of the five factors described below points to the importance of the local ‘receptive context’ (Pettigrew et al, 1992; Van de Ven,
1986). This is entirely consistent with the findings from evaluations of other similar
initiatives:
“NHS initiatives should only embark upon any major programme of change that is based upon any specific, programmatic approach with careful consideration of:
- the applicability of the core ideas of the particular approach (e.g. BPR) to the health care setting
- the current circumstances and state of readiness of their particular organisation for any such approach
- their willingness and capacity to adopt the particular change-management ideas, tools and techniques to local circumstances.” (Bowns and McNulty, 1999)
“Clearly the method is important but wide observed variations between participating Trusts suggests that
success is determined less by the method itself … than by the implementation process and the context in
which it is applied ... The method is the infinite number of ways in which it is interpreted and applied in
different settings, hence the wide variation in views and outcomes. The key to the way it works is found in
the ways it is being locally disseminated, (re)interpreted, applied and managed and it is within this process
that ‘success’ or ‘failure’ are largely determined.” (Bate et al, 2002)
“This evaluation has shown that there are no magic bullet solutions to the challenge of booking. The main
source of change and service improvement has to come from within each and every NHS organisation.
Renewed effort now needs to be put into developing the staff and organisations that can embrace the kind
of cultural change foreshadowed by the NHS Plan.” (Ham et al, 2002)
In the CSC the nine programmes and their project teams differed mainly in the following
ways:
2.2.1 Team composition
The work experience of managers comprised both clinical (nursing, therapy and teaching)
and health management. Prior expertise included service redesign, clinical service delivery,
project management, and general health service management. Project manager contracts
varied although they were predominantly full-time. Some full-time managers were managing
two projects, while others were released for one or two days a week from existing NHS posts
(both clinical and managerial).
2.2.2 Local developments pre-CSC
For each programme different pre-existing local situations were likely to impact on the work
of the CSC. For example:
- Trust mergers,
- Poor co-operation between Trusts that were not working together after regional boundary changes,
- Existing cancer networks, and
- Aspirations to form a cancer network.
2.2.3 Use of CSC money
Directives on how the money allocated to programmes was spent specified that it should be spent on development and training that was linked to the aims of the programme. It could not be used, for example, to purchase large items of equipment. Other than travel and subsistence to attend national learning workshops and meetings, the money was used (chapter 7) in some, all, or a combination of the following ways by the respective programmes:
- Salaries (programme managers, project managers and administrative staff),
- Extra clinical sessions to clear waiting list backlogs,
- Allocating the majority of the resource to participating Trusts,
- Payment to lead clinicians on a sessional basis,
- Pump-priming available for testing new initiatives, and
- A proportion available to pay for small items of new equipment identified as a need by CSC activity.
2.2.4 Selection of “slices” for initial focus and choice of clinical leads
Individual projects within each programme selected various configurations for the focus of
their initial improvement activities1. These included working with one clinician and their patients, all clinicians in a specific speciality in one trust, or all clinicians from all trusts that provide services for the specified tumour. Choice in recruiting clinicians to lead projects was possible for some programmes, while others had little or no independence in this respect and engaged clinicians already working as local tumour group leads.
Projects that set about working on more than one site from the outset acknowledged this as a
more difficult way of working, especially in the beginning when project managers were new
to the method and reporting to the national programme appeared a huge task. The advantages
of working with more than one site became clearer after the initial stages. Project managers
found that piloting and spreading changes merged into one process and when the national
directive began to encourage rollout, the benefits emerged:
“I’m slicing and spreading and moving around all at the same time, it’s not two distinct processes. If
something works in one hospital, we’d start immediately transferring it if we see a gap in one of the other
hospitals. I think it’s the best idea, because I think if you concentrate on a very small slice and then try and
move that out to a much bigger slice. If you have not been involved and don’t know what you’re talking
about, and you haven’t achieved something in one of their other services, then it’s a much steeper hill.”
(Project Manager)
Other positives were that this approach facilitated an overview of local issues pertaining to a particular tumour group. When progress was slow in one site the focus could shift to another, which prevented alienation of participants who were not part of the initial programme:
“It’s hard, I think, concentrating on so many, but on saying that it’s also nice sometimes when things
aren’t going well, to step out and go to another hospital. To say ‘ok, I’ve done some good things here’.
And also going round five hospitals, some have good practice already that you can pass on, you’re not
rolling that out again and again from scratch. Which I think has been very good, although hard and
sometimes very frustrating. But it’s nice to think that if I’m doing it in all hospitals at once, I’m doing a
big section. Doing it methodologically, has paid off.” (Project Manager)
An obstacle to this approach was the multiple demands made on the project manager’s time which could affect morale:
“I think sometimes the hardest thing I find, is keeping your head up even though sometimes you come up against barriers and you go into five different hospitals and some will say can you do this, can you do that…I’d like to get proper feedback from people who’ve just dealt with one hospital and one surgeon, and see how they’ve found it. Have they found it harder because people get sick of them being there every day, or is that easier, but we don’t get to know that from being up here.” (Project Manager)
1 The methodology recommends that change should first be tested in small samples, e.g. the patients of one clinician in one hospital. However, some programmes have elected to work with a number of clinicians or in multiple sites from the outset. Those that chose the latter route were motivated by the rationale that this model fitted with pre-CSC configuration of services, i.e. where a hub and spoke model of co-operation was operational and working with one fraction only would be perceived as excluding others. Perhaps understandably, project managers tended to support whichever route they were engaged in as the most appropriate. Those working with one clinician only perceived this as preferable because it made the task manageable and meant other clinicians could be brought on board gradually on the back of demonstrable changes. Negotiating proposed changes was easier when working with one person; however the choice of this person was acknowledged as an important factor.
Both approaches appear to have merits and disadvantages. Working with one clinician is
easier initially but may pose difficulties when projects engage other clinicians in the process.
Including multiple sites from the start is a much larger task in terms of workload and may
serve to demoralise some project managers, yet eventual gains in terms of rollout may be
easier to achieve.
2.2.5 Starting times
For various reasons - for example, delays in the recruitment processes of employing trusts and other recruitment difficulties - teams were formed at different times. The first fully operational programme team started up to six months earlier than the later ones. Whether this will make a difference to the eventual longer-term outcomes is unknown. However, it is an important element to bear in mind when considering the findings presented in this report.
2.3 Analysis of quantitative outcome measures
During the eighteen months from February 2000, discussion took place between the HSMC
team and the CSC Programme Director, Clinical Director, other members of the NPAT team
and colleagues at the Department of Health as to how the HSMC evaluation could best
assess the Collaborative’s outcomes. The expectation was that the CSC would lead to
demonstrable changes and improvements which would, in part, highlight how future
investment in cancer services should be targeted. It was recognised that HSMC’s evaluation
could help fulfil this expectation by providing evidence of change by means of an analysis of
a standardised minimum dataset covering specific outcome measures. In June 2000, NPAT
announced a move to:
“standardise approach, style and terminology in the project and programme monthly reports across
projects and programmes in order to enable sharing of ideas, experiences and learning in the
collaborative” (NPAT, Monthly reports and measures, June 2000; 1).
As part of this change, the nine programmes were requested to collect data in order to measure two non-discretionary ‘standard global measures’:
- Waiting time from referral to first definitive treatment, and
- Summary data on booking activity.
In July 2000, the HSMC evaluation team suggested how the request for data could be
improved for the purpose of collecting a standardised minimum dataset, and this led to a
draft dataset being produced in February 2001, and a final dataset being agreed in June 2001.
Further information on the collection of activity data is provided in appendix 4.
2.3.1 Waiting time from referral to first definitive treatment: analysis of patient-level data
Twenty-seven percent (14/51) of the projects are included in our main analysis (table 1). The
main analysis covers the ‘before’ and ‘after’ periods (January to March 2000 and 2001)
agreed for the comparative analysis, and the projects included provide some insight into the
change in waiting times experienced by patients within the scope of the CSC phase I. The
main analysis is summarised in table 2 and figure 4, and reported at project-level in table 3.
Further analysis relating to these projects is reported in appendix 5.
The ovarian, lung, breast and colorectal tumours are each represented in the main analysis by
three projects, and there are two prostate projects (table 1)1. An additional 24% (12/51) of
the projects supplied incomplete data which provide much more limited insight into changes
in waiting times2. These projects are included in a secondary analysis which are reported in
summary in appendix 63. A further eighteen percent (9/51) of the projects provided some
patient-level data, but these were too limited to be analysed. No patient-level data were
supplied for 31% (16/51) of the projects.
TABLE 1 Patient-level data on waiting times by project and tumour type

Tumour type | Projects in main analysis: number (%) | Projects in secondary analysis: number (%) | Projects excluded from analysis: number (%) | Projects that supplied no data: number (%) | Total: number (%)
Prostate | 2 (20) | 1 (10) | 2 (20) | 5 (50) | 10 (100)
Colorectal | 3 (27) | 2 (18) | 2 (18) | 4 (36) | 11 (100)
Lung | 3 (30) | 3 (30) | 2 (20) | 2 (20) | 10 (100)
Breast | 3 (30) | 2 (20) | 2 (20) | 3 (30) | 10 (100)
Ovarian | 3 (30) | 4 (40) | 1 (10) | 2 (20) | 10 (100)
Total | 14 (27) | 12 (24) | 9 (18) | 16 (31) | 51 (100)
Sixty-five percent (33/51) of the projects provided patient-level data for the quarter ending
March 2000. On the basis of these data, the total number of cancer patients across the 51
projects can be simplistically estimated as 1,335 per quarter (5,340 per annum)4.
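The arithmetic behind this estimate is set out in footnote 4 below: the mean number of patients per reporting project in each tumour group is scaled up to the full number of projects in that group. A minimal sketch of the calculation is shown here for illustration only; the figures are those given in the footnote, and the variable names are ours rather than part of the CSC data specification.

```python
# Illustrative reconstruction of the 'simplistic estimate' of CSC patients per quarter.
# For each tumour group: (patients reported in the quarter ending March 2000,
# number of projects reporting, total number of projects in the group).
groups = {
    "Prostate":   (68, 5, 10),
    "Colorectal": (243, 7, 11),
    "Lung":       (122, 7, 10),
    "Breast":     (345, 6, 10),
    "Ovarian":    (54, 8, 10),
}

# Scale the mean patients per reporting project up to all projects in the group.
total_per_quarter = sum(
    (patients / reporting) * total_projects
    for patients, reporting, total_projects in groups.values()
)

print(round(total_per_quarter))      # approximately 1,335 patients per quarter
print(round(total_per_quarter) * 4)  # approximately 5,340 patients per annum
```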
2.3.1.1 Main analysis summary
The main analysis includes data for 487 patients in the first quarter of 2000 and 409 patients in the first quarter of 20015. Only patients reported to have been included in each project’s ‘slice’ are included in the analysis. Patients with breast cancer form the largest group included in the main analysis, with 49% of cases in the first quarter of 2001, followed by colorectal (19%), ovarian and lung (13%), and prostate (6%).
1 Seventy percent of the ovarian projects are included in the main or secondary analysis, compared to 30% of the prostate projects.
2 For nine of the 12 projects this is because data were not provided for the ‘outcome’ quarter, January to March 2001, and instead the quarter ending November or December 2000 has been used, depending on the last month for which data were provided. For three of the 12 projects, the data provided for the outcome quarter were so limited that they provided only a very poor picture of waiting times. One of these projects only included one patient in the quarter ending March 2001, one included referral and treatment dates for only 3/17 cases in the outcome quarter, and another included referral and treatment dates for only 3/7 cases in the outcome quarter.
3 These summary project-level results are reported in response to feedback on a draft of this report which expressed concern that not all the data provided had been analysed.
4 This calculation extrapolates from the mean number of patients per project in the quarter ending March 2000 for each tumour group: Prostate, 13.6 (68 patients in 5 projects); Colorectal, 34.7 (243 patients in 7 projects); Lung, 17.0 (122 patients in 7 projects); Breast, 57.5 (345 patients in 6 projects); Ovarian, 6.8 (54 patients in 8 projects).
5 These cases represent 93% (487/524) and 89% (409/457) of the total number of cases reported for these 14 projects in each quarter respectively. On the basis of the simplistic estimate of the total number of cancer patients across the 51 projects noted above, the cases included in the main analysis can be estimated to be 36% (487/1335) of the total.
Table 2 and figure 4 summarise the change in median and mean waiting times from referral
to first definitive treatment by tumour type for all patients in the 14 projects included in the
main analysis1. The summary analysis would have distinguished between urgent and non-urgent cases if these data had been routinely supplied2.
The waiting times varied across the different tumour types, with the median waiting time in
the quarter ending March 2001 varying from 21 days for breast cancer patients to 62 days for
prostate cancer patients (table 2). The substantial variation in patients’ waiting times by
tumour type is similar to those reported by Spurgeon et al (2000) and Airey et al (2002).
TABLE 2 Summary main analysis of waiting times from referral to first definitive treatment by tumour type (14 projects) †

Tumour type (no. of projects) | January to March 2000: median / mean days (days waited / no. of patients) | January to March 2001: median / mean days (days waited / no. of patients) | Change in median: days (%) | Change in mean: days (%) | % waiting 62 days or less, quarter ending March 2000 / 2001
Prostate (2) | 140.0 / 176.5 (4942/28) | 63.0 / 75.1 (1878/25) | -77.0 * (-55.0) | -101.4 (-57.4) | 4 / 48
Colorectal (3) | 64.5 / 87.0 (13396/154) | 57.0 / 72.1 (5548/77) | -7.5 (-11.6) | -14.9 (-17.2) | 49 / 53
Lung (3) | 44.0 / 50.8 (4216/83) | 44.5 / 51.8 (2797/54) | 0.5 (1.1) | 1.0 (2.0) | 69 / 69
Breast (3) | 19.0 / 26.9 (5378/200) | 21.0 / 26.7 (5319/199) | 2.0 (10.5) | -0.2 (-0.6) | 93 / 96
Ovarian (3) | 17.0 / 21.0 (462/22) | 28.0 / 31.3 (1660/53) | 11.0 * (64.7) | 10.3 (49.1) | 95 / 91

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test).
† Measures of the variation in waiting time in each quarter are included in project-level analysis reported in appendix 5.
The change in waiting times experienced ranged from reductions in median of 77 days and
mean of 101.4 days for the prostate cancer patients to increases in median of 11 days and
mean of 10.3 days for ovarian cancer patients (table 2). The difference in median waiting
time between the quarter ending March 2000 and the quarter ending March 2001, was
statistically significant for the prostate and ovarian cancer patients.
Each project set local maximum waiting time targets (see table 3). The summary analysis
shown in table 2 compares the proportion of patients in each quarter who waited 62 days or
less before starting treatment, which is the 2005 NHS Cancer Plan target for urgent referrals.
With the exception of the prostate cancer patients, the proportion of patients waiting 62 days
or less was subject to little or no change between the quarters ending March 2000 and 2001.
The proportion of prostate cancer patients waiting 62 days or less before starting treatment
increased considerably (from 4% to 48%), whilst remaining the tumour site with the longest
mean and median waiting times overall.
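The comparisons reported above rest on two simple calculations from the patient-level returns: a Mann-Whitney U test on the distributions of waiting times in the two quarters, and the proportion of patients treated within 62 days. The sketch below illustrates how such a comparison could be run; it is not the evaluation team's code, and the waiting times shown are hypothetical placeholders rather than CSC data.

```python
from statistics import median
from scipy.stats import mannwhitneyu

# Hypothetical patient-level waiting times (days from referral to first definitive
# treatment) for one tumour group in the two comparison quarters.
baseline = [152, 98, 170, 131, 205, 88, 144, 119]  # quarter ending March 2000
outcome = [60, 45, 71, 52, 66, 80, 39, 58]         # quarter ending March 2001

# Compare the two distributions (the test reported in tables 2 and 3).
stat, p_value = mannwhitneyu(baseline, outcome, alternative="two-sided")
print(f"median: {median(baseline)} -> {median(outcome)} days, U = {stat:.1f}, p = {p_value:.4f}")

# Proportion treated within the 62 day standard in each quarter.
for label, waits in (("2000", baseline), ("2001", outcome)):
    share = sum(days <= 62 for days in waits) / len(waits)
    print(f"quarter ending March {label}: {share:.0%} treated within 62 days")
```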
1 Data from nine projects in four CSC programmes are based on the February 2001 draft data specification, data from one project are based on the agreed data specification, and data from four projects do not conform to either specification (see appendix 5).
2 See footnotes to table 3.
FIGURE 4
Boxplots† showing waiting times from referral to first definitive treatment by tumour type
† The red line in the box shows the median waiting time, and the box extends from the 25th percentile to the
75th percentile (the interquartile range). The ‘whiskers’ show the range of waiting times that are within 1.5
times the interquartile range. More extreme waiting times, if any, are shown individually. The width of the
boxes indicates patient numbers. The prostate data include two projects and the four other tumour groups each
include data from three projects.
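For readers wanting to reproduce a figure of this kind from patient-level data, the short matplotlib sketch below draws boxplots following the conventions described in the legend above (median line, interquartile box, whiskers at 1.5 times the interquartile range, and box widths scaled to patient numbers). The waiting times used are hypothetical stand-ins, not the CSC data behind figure 4.

```python
import matplotlib.pyplot as plt

# Hypothetical waiting times (days) by tumour type, standing in for the CSC patient-level data.
waits_by_tumour = {
    "Prostate":   [140, 95, 180, 60, 75, 200, 110, 85],
    "Colorectal": [64, 57, 70, 45, 90, 30, 62],
    "Lung":       [44, 50, 39, 61, 47, 55],
    "Breast":     [19, 21, 15, 28, 33, 12, 24, 18, 26],
    "Ovarian":    [17, 28, 22, 35, 14],
}

labels = list(waits_by_tumour)
data = [waits_by_tumour[t] for t in labels]

# Box widths proportional to the number of patients in each group, as in the figure legend.
max_n = max(len(d) for d in data)
widths = [0.8 * len(d) / max_n for d in data]

fig, ax = plt.subplots()
ax.boxplot(data, whis=1.5, widths=widths, showfliers=True)  # whiskers at 1.5 x IQR
ax.set_xticks(range(1, len(labels) + 1))
ax.set_xticklabels(labels)
ax.set_ylabel("Days from referral to first definitive treatment")
ax.set_title("Waiting times by tumour type (illustrative data)")
plt.show()
```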
2.3.1.2 Project-level summary analysis
Table 3 provides a project-level summary of changes in waiting times to first definitive
treatment1. As noted above, data on urgency were not always supplied, and so, with the
exception of the breast projects, the following project-level analysis includes all patients in
order to maintain a common denominator.
1 Table 3 also includes the CSC Planning Group’s assessment score for each project included in the analysis at the end of Phase I (March 2001). Appendix 6 includes further analysis of the data on waiting time from referral to first definitive treatment and compares the patient-level data with the projects’ waiting time ‘run charts’.
The limited data provide little opportunity to make comparisons between different projects
in the same tumour type. However, with the exception of the ovarian projects, it is evident
that the project-level median and mean waiting times varied considerably within each
tumour type.
Prostate projects
The NHS Prostate Cancer Programme (Richards et al, 2000; 15) noted that “services for
patients with urological cancers are less developed than those for some of the other common
cancers, such as breast, colorectal and lung” and set out steps to improve prostate cancer
services.
“Prostate cancer is the second most common cancer amongst men in Britain. There is growing public
concern, shared by the Government, that for too long not enough has been done to detect prostate cancer
and to improve the treatment and care of men diagnosed with it” (Richards et al, 2000; 1).
The report also highlighted the role of the CSC in reducing waiting times from the
“unacceptable” levels reported by Spurgeon et al (2000).
Table 3 shows the summary analysis at project level for the prostate projects included in the
main analysis. Both the projects experienced good progress towards the waiting time targets.
Indeed, prostate patients saw the greatest overall reduction in waiting times to first definitive
treatment (55%) based on data from the two projects (with a total of 28 patients in the
baseline quarter and 25 patients in the outcome quarter).
The data provided for Programme C’s prostate project include only GP referrals and this may
be a factor accounting for the shorter waiting times compared to those for Programme B’s
prostate project, which include data for patients referred from a range of sources (see table
16 on page 136)1.
Colorectal projects
Colorectal cancer followed breast cancer as the second tumour type to be covered by the
‘Guidance on Commissioning Cancer Services’ (NHS Executive, 1997; 7):
“Like breast cancer, colorectal cancer is responsible for about 10% of all new cases of cancer in the UK
population overall. Colorectal cancer is second only to lung cancer in importance as a cause of cancer
death, with about 12% of all cancer deaths (breast cancer causes 9%) … [colorectal cancer is] an
important type of cancer, but one which has not enjoyed the same public, political, and professional
profile that has been accorded to breast cancer. Less, perhaps, can be expected from the public in terms of
their appreciation of the importance of symptoms that might result in the diagnosis of this disease. There
is almost certainly less understanding too amongst health professionals, including health managers, about
the clinical and health service issues that might be associated with improving outcomes for patients with
this disease. Colorectal cancer, by contrast [to breast cancer], is a disease for which there is currently no
national screening programme, and historically little pressure to develop services.”
Improving Outcomes in Colorectal Cancer (NHS Executive, 1997) made a number of
recommendations, including better audit of treatment outcomes.
Colorectal cancer patients represent a higher proportion of cancer patients than prostate and
ovarian cancer patients and the reported outcomes are based on 154 patients in the baseline
quarter and 77 patients in the outcome quarter. Here again there are many routes of entry to
the service and similarly significant pressures on diagnostic services.
1 The option of ‘watchful waiting’ for the first definitive treatment accounted for 36% (9/25) of the cases in programme C and B’s prostate projects in the quarter ending March 2001. The pattern of change in waiting times when the ‘watchful waiting’ cases are excluded from the analysis is similar to that for all cases.
The three reporting colorectal projects in the main analysis saw an overall decrease in
median waiting times from 64.5 to 57 days (an 11.6% reduction) but this was not
statistically significant. Two of the three colorectal projects (programmes B and D) included
in the main analysis experienced progress towards their waiting time targets (table 3).
Programme B’s project made good progress, while Programme D’s project started from a
more challenging baseline. The experience of Programme A’s project suggests that progress
across the colorectal projects was not uniformly positive.1
Lung projects
Lung cancer represents approximately 16% of patients covered by the CSC phase I projects.
“Since 1971 lung cancer incidence and mortality have declined dramatically in men and risen in women,
reflecting earlier trends in smoking habits” (Quinn, 2000; 18).
Concerns over the management of lung cancer patients and patient waiting times have been
expressed by specialists (George, 1997; Deegan et al, 1998; Fergusson and Borthwick,
2000).
“It has been apparent to those preparing this report on lung cancer that prevailing attitudes towards this
disease and its treatment are characterised by a sense of pessimism, at the extreme by a degree of nihilism,
amongst some professionals. To put this view crudely, it doesn’t matter what you do for patients with lung
cancer because they all die relatively quickly. Where, then, is the incentive to provide good care? Whilst
lung cancer does carry a poor prognosis for many patients, there is ample evidence to support the view
that better organisation and delivery of treatment and care can make a worthwhile difference. Health
professionals, including those responsible for commissioning and managing services, need to be
encouraged to adopt a more positive view of the improvements in health that can be achieved for large
numbers of patients with lung cancer” (NHS Executive, 1998; 3).
The main analysis is based on the returns from three projects comprising 83 patients in the
baseline quarter and 54 in the outcome quarter (table 3). Two of the three lung projects
(programmes A and E) experienced progress towards their waiting time targets. Programme
E’s project started from a more challenging baseline compared to Programme A. The
experience of Programme B’s project suggests that progress across the lung projects was not
uniformly positive.2
Breast projects
Breast cancer patients formed the largest group within the main analysis: for urgent surgery
the data included 105 patients in the baseline quarter and 103 patients in the outcome quarter
whilst for ‘miscellaneous’ referrals the totals were 96 and 97 respectively.
The main analysis includes three breast projects and table 3 focuses on urgently referred
cases who received surgery as the first definitive treatment. These projects experienced
limited change in waiting times. Although the increase in median waiting time for urgent
referrals treated with surgery in Programme D’s project A was statistically significant, the
project maintained better waiting time performance than the other projects.
The analysis of urgent referrals treated with hormone therapy in Programme A’s breast
project illustrates the shorter waiting times associated with this treatment compared with
surgery.
1 The secondary analysis confirms the mixed findings of our main analysis: the two projects submitting partial returns saw a 121.4% (from 42 to 93 days based on 23 and 21 patients) and 23% (from 74 to 91 days based on 29 and 18 patients) increase in their waiting times.
2 The Programme G lung project included in the secondary analysis also suggests that progress was mixed.
Ovarian projects
Ovarian cancer is not as common as some of the other tumour types included in the CSC as
the following quotation illustrates:
“A district hospital (DGH) serving a quarter of a million population is likely to receive about one new
case of ovarian cancer each fortnight … A general practitioner (GP) will only see a new patient with
ovarian cancer every five years or so … These low volumes put into perspective the challenge of
developing workable and reliable operational arrangements for the care of these patients, and prompt
difficult questions about the optimum configuration of the relevant services” (NHS Executive, 1999;
3).
The ovarian projects saw an overall increase in median waiting times of 64.7%. The three
ovarian projects included in the main analysis also experienced a reduction in the proportion
of patients meeting the local waiting time targets (35 days). The reported increase is based on
patient-level data from only three projects, comprising 22 patients in the baseline quarter and
53 patients in the outcome quarter.1
1 Our secondary analysis comprised four further ovarian projects who provided partial returns relating to waiting time data (totalling 20 patients in the baseline quarter and 26 patients in the outcome quarter). Together these projects saw a 24.7% decrease (from 35.3 to 26.5 days) in median waiting times.
TABLE 3 Summary project-level main analysis of waiting times from referral to first definitive treatment1

Project (CSC Planning Group’s assessment score for March 2001) | January to March 2000: median / mean days (days waited / no. of patients) | January to March 2001: median / mean days (days waited / no. of patients) | Change in median: days (%) | Change in mean: days (%) | Local target (days) | % meeting local target, quarter ending March 2000 / 2001 | % meeting national 62 day target, quarter ending March 2000 / 2001

Prostate - all cases
Programme C2 (4.5) | 115.5 / 143.1 (1717/12) | 48.0 / 51.9 (415/8) | -67.5 * (-58.4) | -91.2 (-63.7) | 70 | 8 / 88 | 8 / 75
Programme B3 (3.5) | 156.5 / 201.6 (3225/16) | 73.0 / 86.1 (1463/17) | -83.5 * (-53.4) | -115.5 (-57.3) | 70 | 6 / 47 | 0 / 35

Colorectal - all cases
Programme B4 (4.5) | 55.0 / 76.5 (1836/24) | 47.0 / 45.0 (945/21) | -8.0 (-14.5) | -31.5 (-41.2) | 56 | 54 / 76 | 58 / 90
Programme D5 (3.5) | 104.0 / 103.7 (2593/25) | 73.5 / 83.1 (2160/26) | -30.5 (-29.3) | -20.6 (-19.9) | 70 | 40 / 46 | 28 / 38
Programme A6 (3.5) | 62.0 / 85.4 (8967/105) | 69.0 / 81.4 (2443/30) | 7.0 (11.3) | -4.0 (-4.6) | <50 | 39 / 33 | 52 / 40

Lung - all cases
Programme A7 (3.5) | 37.0 / 35.0 (841/24) | 27.0 / 29.8 (357/12) | -10.0 (-27.0) | -5.2 (-15.1) | <42 | 63 / 83 | 92 / 100
Programme E8 (4) | 49.0 / 62.9 (1006/16) | 37.0 / 44.8 (493/11) | -12.0 (-24.5) | -18.1 (-28.7) | <56 | 56 / 73 | 56 / 73
Programme B9 (4.5) | 54.0 / 55.1 (2369/43) | 58.0 / 62.8 (1947/31) | 4.0 (7.4) | 7.7 (14.0) | 56 | 51 / 48 | 60 / 55
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test).
1 Measures of the variation in waiting time in each quarter are included in project-level analysis reported in appendix 5. Where data are missing referral or treatment dates,
the number of patients with dates and in total in each quarter are shown in the project-specific footnotes.
2 p<0.01 Data: q1 12/14 q2 8/13. All cases were recorded as GP referrals and urgent using locally defined criteria.
3 p<0.01 No data on urgency. 48% (16/33) of cases were recorded as GP referrals.
4 No data on source of referral, treatment type or urgency.
5 Data on urgency were incomplete. All but one case were treated with surgery.
6 Data: q1 105/106 q2 30/30. Data on source of referral and urgency were incomplete.
7 Data: q1 24/24 q2 11/19. Data on urgency were incomplete.
8 No data on urgency.
9 No data on urgency or source of referral. The local waiting time target was for the average waiting time.
TABLE 3 continued Summary project-level main analysis of waiting times from referral to first definitive treatment

Project (CSC Planning Group’s assessment score for March 2001) | January to March 2000: median / mean days (days waited / no. of patients) | January to March 2001: median / mean days (days waited / no. of patients) | Change in median: days (%) | Change in mean: days (%) | Local target (days) | % meeting local target, quarter ending March 2000 / 2001 | % meeting national 62 day target, quarter ending March 2000 / 2001

Breast - urgent surgery
prog. A10 (4) | 22.5 / 30.6 (1285/42) | 21.0 / 32.5 (1041/32) | -1.5 (-6.7) | 1.9 (6.3) | 35 | 81 / 78 | 93 / 94
prog. D project A11 (4) | 16.0 / 20.6 (966/47) | 21.0 / 23.1 (1224/53) | 5.0 * (31.3) | 2.5 (12.4) | 30 | 94 / 92 | 94 / 100
prog. C12 (4.5) | 34.0 / 50.8 (813/16) | 45.0 / 48.3 (869/18) | 11.0 (32.4) | -2.5 (-5.0) | 40 | 63 / 39 | 81 / 83

Breast - miscellaneous
prog. A hormone therapy13 | 8.0 / 11.9 (332/28) | 8.0 / 10.5 (272/26) | 0.0 (0.0) | -1.4 (-11.8) | 35 | 93 / 96 | 96 / 100
prog. A all other cases14 | 21.0 / 31.5 (1701/54) | 23.5 / 25.8 (1237/48) | 2.5 (11.9) | -5.7 (-18.2) | 35 | 74 / 79 | 91 / 98
prog. D proj. A all other cases15 | 22.0 / 37.6 (338/9) | 27.0 / 32.7 (458/14) | 5.0 (22.7) | -4.8 (-12.9) | 30 | 67 / 57 | 89 / 86
prog. C all other cases14 | 28.0 / 25.8 (129/5) | 20.0 / 33.0 (297/9) | -8.0 (-28.6) | 7.2 (27.9) | 30 | 100 / 89 | 100 / 89

Ovarian - all cases
programme D proj. A16 (4.5) | 13.0 / 17.5 (192/11) | 18.5 / 26.5 (371/14) | 5.5 (42.3) | 9.0 (51.8) | 35 | 100 / 79 | 100 / 93
programme A17 (4) | 20.5 / 33.5 (134/4) | 28.0 / 33.2 (1094/33) | 7.5 (36.6) | -0.3 (-1.0) | 35 | 75 / 73 | 75 / 88
programme B18 (5) | 14.0 / 19.4 (136/7) | 32.0 / 32.5 (195/6) | 18.0 (128.6) | 13.1 (67.3) | 35 | 86 / 67 | 100 / 100
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test).
10 Data: q1 42/42 q2 32/34. Urgency defined using the two week wait criteria (data on locally defined urgency were not supplied).
11 p<0.01 Data: q1 47/50 q2 53/55. One case in q1 was excluded because referral date was assumed to be erroneous. Urgency defined using local criteria.
Treatment type was recorded as surgery or ‘other or not known’ only: no patients were recorded as having hormone therapy.
12 Urgency defined using local criteria. The local target was for 95% of patients.
13 Data: q1 28/30 q2 26/26. Urgent cases only defined using the two week wait criteria (data on locally defined urgency were not supplied).
14 i.e. all cases that were not urgent referrals treated with either surgery or hormone therapy.
15 i.e. all cases that were not urgent referrals treated with surgery. One case in q1 was excluded because the waiting time (606 days) was presumed to be erroneous.
16 Data: q1 11/13 q2 14/18. All cases were recorded as urgent defined by both local and two week wait criteria.
17 Data: q1 4/5 q2 33/33. Data on urgency were incomplete.
18 No data on source of referral, urgency or treatment type.
2.3.2 Booking for three key stages in the patient’s care
Twenty-two percent (11/51) of the projects supplied patient-level data on booking in the
quarters ending March 2000 and 20011. Table 4 summarises the analysis of these data,
which is based on those cases for whom data on booking were available. The footnotes to
table 4 indicate where data on booking were missing (in seven of the 11 projects).
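The percentages in table 4 are simple proportions: among the patients for whom booking data were supplied, how many had a booking recorded at each of the three stages. A minimal sketch of that calculation is shown below; the record structure and values are hypothetical illustrations rather than the CSC data specification.

```python
# Hypothetical patient-level booking records for one project in one quarter.
# True/False indicates whether the stage was booked; None means no booking data supplied.
patients = [
    {"specialist": True,  "diagnostic": True,  "treatment": True},
    {"specialist": False, "diagnostic": True,  "treatment": None},
    {"specialist": True,  "diagnostic": None,  "treatment": True},
    {"specialist": True,  "diagnostic": True,  "treatment": False},
]

def percent_booked(records, stage):
    """Percentage booked at a stage, among cases with booking data for that stage."""
    known = [r[stage] for r in records if r[stage] is not None]
    return 100 * sum(known) / len(known) if known else None

for stage in ("specialist", "diagnostic", "treatment"):
    print(stage, percent_booked(patients, stage))
```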
TABLE 4 Summary project-level analysis of the percentage of patients booked for three stages of care†

Project and programme | Booking target (% of patients to be booked at each stage) | January to March 2000: no. patients; % booked for first specialist appointment / first diagnostic test / first definitive treatment | January to March 2001: no. patients; % booked for first specialist appointment / first diagnostic test / first definitive treatment
Prostate, programme C | 80, 70 and 50 respectively | 14; 57 / 71 / 36¹ | 13; 91² / 91² / 30²
Colorectal, programme D | 95 | 25; n/a / n/a / n/a | 26; 100³ / 100³ / 100³
Colorectal, programme A | >80 | 106; 0 / 100 / 100 | 30; 0 / 100 / 100
Colorectal, programme C | 75 | 18; 11 / 44 / 72 | 15; 47 / 60 / 67
Lung, programme B | >90 | 43; 0 / 0 / 0 | 31; 77 / 77 / 86⁴
Breast (proj. A), programme D | 95 | 80; 100 / 100 / 100⁵ | 83; 100 / 100 / 100⁶
Breast, programme A | >90 | 128; 81⁷ / 100⁷ / 100⁷ | 120; 81 / 100 / 100
Breast, programme C | 95 | 21; 38 / 100 / 67 | 27; 74 / 100 / 100
Ovarian, programme A | >90 | 4; n/a / 100 / 100 | 33; 88 / 100 / 100
Ovarian, programme B | 90 | 7; 43 / 57 / 86 | 6; 83 / 100 / 100
Ovarian (proj. A), programme D | 100 | 13; 77 / 82 / 92 | 18; 100 / 33 / 67⁸

† For cases in which data on booking were available in the 11 projects that supplied patient-level booking data covering the quarters ending March 2000 and 2001. In each quarter the three percentages refer to booking for first specialist appointment, first diagnostic test and first definitive treatment respectively. Booking data were supplied for the following proportions of patients: ¹ 79%, ² 92%, ³ 77%, ⁴ 94%, ⁵ 60%, ⁶ 73%, ⁷ 95%, ⁸ 83%.
2.3.2.1 Booking for first specialist appointment
Nine projects supplied data for both quarters. Three projects (programme D Breast and
programme A Breast and Colorectal) reported no change in the proportion of patients
booked between the two quarters (100%, 81% and 0% respectively). Five projects
experienced an increase in booking, and one project (Programme D project A ovarian)
reported booking all patients in the quarter ending March 2001.
1 Programme C’s ovarian project was excluded because it included only one patient in the quarter ending March 2001.
2.3.2.2 Booking for first diagnostic test
Ten projects supplied data for both quarters. Five projects reported 100% booking in both
quarters. Five projects experienced an increase in booking, and one of these projects
(Programme B ovarian) reported booking all (six) patients in the quarter ending March 2001.
2.3.2.3 Booking for first definitive treatment
Ten projects supplied data for both quarters. Four projects reported 100% booking in both
quarters. Three projects experienced an increase in booking, and two of these projects
(Programme C Breast and Programme B ovarian) reported booking all patients in the quarter
ending March 2001. Three projects experienced a reduction in booking for the first
definitive treatment.
2.4 Qualitative gains
There were other, less quantifiable, gains from participating in the CSC. For instance, one
project manager commented that improvements in local communication were not listed in
his reports because ‘it does not show on a graph’ and another stated that only ‘bigger’
changes had been recorded:
“That’s the issue about the changes, people only accept changes where there are tangible evidence, you
know, you produce a graph, you produces the figures, because that’s what’s bred in, acknowledged as a
change. But I think the whole dynamics of the department has changed but how to quantify that is very
difficult.” (Project Manager)
“Improvements have been made automatically and I think the collaborative can only catch up on some of
these things; the bigger things are recorded but a lot of the fundamental things that are making changes
we don’t even know about because people are going in there and just changing things themselves.”
(Programme Manager)
So as well as the quantitative process measures discussed in the preceding section we asked
participants in the end-of-study questionnaire whether there had been any other local
benefits related to the CSC. Over two-thirds of respondents felt that there had been local
benefits from participating in the CSC over and above the formal objectives and processes
inherent in the approach (table 5):
TABLE 5 In your view have there been any local benefits from participating in the CSC which were not directly associated with the formal objectives and processes of the CSC? (n=96)

Response | % responses
Yes | 69
No | 13
Don’t know | 13
Missing data | 7

[source: end of study postal questionnaire, May 2001]
Whilst 69% of respondents overall answered positively,
respondents from one particular programme (programme C) were more confident that the
CSC had brought about additional benefits over and above the formal objectives of the CSC
(90%) whilst those from another (programme G) were less confident (45%). There also
appears to have been greater uncertainty amongst project managers in respect of ‘additional
local benefits’ (21% 'don't know') whereas lead clinicians felt more able to express an
opinion and also gave a marginally higher rating (72% versus 61%).
The following additional benefits were the most commonly identified1:
- Spread of techniques and approach to other clinical areas locally (‘the team which I worked with has changed the whole service!’, ‘clinicians have started applying the CSC principles elsewhere, e.g. ENT’)
- Development and strengthening of cancer networks (‘enabling cancer network group to form’, ‘we have used the CSC ideas for our local and network development’)
- Staff development and motivation which was often couched in terms of ‘cultural’ change (‘a promotion of a culture that everybody and anybody may host a good idea for service improvement’, ‘training project managers and empowering people to change the system’)
- Facilitating a greater amount of, and commitment to, multi-disciplinary team working (‘increase in belief that we can work together and achieve change/improvement’, ‘multidisciplinary team working greatly facilitated’), and
- Raised profile of Trust or department regionally and nationally (‘seems to ease the way in a manner that wasn’t there before’ and ‘gives some emphasis to it, some permission to get on and do it’).
Each of these additional local benefits is discussed in more detail in later sections of this
report.
2.5 Scale of change
The question of the ‘scale of change’ brought about by programmes such as the CSC is a
common one (Øvretveit, 2000; Counte and Meurer, 2001; Powell and Davies, 2001).
Participants in the CSC recognised that their projects in phase I of the CSC had, in most
cases, only focused on a relatively small number of patients - and therefore had only begun
the process of redesigning whole services - but nonetheless felt that there was value in what
had been achieved:
“I think the collaborative needs to recognise that you can make certain changes, but there has to be some
long term ownership and I think we’re just fundamentally tinkering around the edges.” (Project Manager)
“Like when we tested out our prostate assessment clinic, we did it with a very small number of patients
going through and we never talk about the numbers when we are presenting nationally - nobody ever asks
us, which I can’t believe.” (Project Manager)
“I know that no one thing sorts everything out and so it was a bit of a get out but I think I would still stand
by it. We have made significant improvement and that’s great but we know when we start addressing
radiotherapy that there are the problems of recruitment and all those things. I think we have to be honest
and say ‘Look there are some things that the Collaborative isn’t going to sort out’, you know ‘they are
much bigger than this’, but it can demonstrate the case much more strongly.” (Programme Manager)
There were, however, some differing views about the scale of changes brought about by the
CSC and the benefits of the approach adopted as evidenced by exchanges in two of our
focus groups with project managers:
“M: The disadvantage is that you’re using small numbers. You may not be able to replicate up to larger
numbers because the system may not run with that. I just think you need to be aware of the fact that when
you’re doing a PDSA you’re starting with very small numbers and it may be a whole lot more difficult to
maximise that up, i.e. doing it with two patients may be quite easy because you’re quite focused. Doing it
for a hundred patients is a whole lot more difficult because the system does need altering to do that.
S: I think it helps having smaller chunks for acceptability by other members of the team because they know that they can stop at any point. I think that was important, especially as we’ve just begun with the radiology project to make them understand that to implement change you need to try it for a short amount of time and if it doesn’t work well it doesn’t work and we’ll try something else. We’re willing to take on smaller projects than we are to take on a huge change like a massive administrative change which, if it doesn’t work, what do they do at the end of the trial period?” (Focus Group - Project Managers)
[1 Source: question 26 in the questionnaire.]
Such discussions illustrate wider debates about how best to ‘redesign the system around the
patient’ (Department of Health, 2000): can this be achieved through radical top-down
transformation or through bottom-up incremental improvement (Locock, 2001)? The CSC
sought to combine elements of these different approaches through redesign and whilst this
approach offers:
“… a helpful way to analyse and reconceptualise what [the] problems are, and a way to identify how they
might be tackled … it does not itself provide a set of transferable solutions, and changes in funding and
facilities may be needed to support redesigned processes and systems.” (Locock, 2001)
The conclusions to be drawn from the analysis of the quantitative data summarised above
are perhaps unsurprising given the familiar debates around the scale and impact of different
change management approaches such as total quality management (TQM), redesign and business process re-engineering (BPR):
“Of those hospitals and services which have implemented TQM, few have had great success and many
have found difficulties sustaining their programmes.” (Øvretveit, 2000)
“'It appears to be difficult to translate the potential benefits of CQI into actual gains … in most cases, the
rhetoric does not equal the reality. The continued spread of CQI among health care organisations in the
United States and elsewhere around the world, or what has been called the ‘quality revolution,' has not as
yet been consistently associated with higher levels of service quality.' (Counte and Meurer, 2001)
“Re-engineering has not transformed the performance of the hospital to the extent and at the pace
intended at the outset of the initiative … None of the initiatives we have studied have achieved the
magnitude of benefit that was initially included.” (Bowns and McNulty, 1999)
Similar sentiments were echoed in an earlier evaluation of another Collaborative in the NHS
which was found to have:
“… had both a real and perceived value but the benefits across the Trusts have been variable. The full
potential of this approach to quality improvement has not been fully realised (the classic problem of
'undershoot' in TQM and change programmes) because of a myriad of problems associated with the
implementation of the method and the organisational context within which it was being implemented.”
(Bate et al, 2002)
and in the evaluation of the National Booked Admissions Programme:
“The pilots made rapid progress in the first year of booking. This was followed by some slipping back in
the second year, although overall the performance of the pilots was better at the end of the period under
review than the beginning. There was wide variation in what was achieved.” (Ham et al, 2002)
Added to these empirical findings is the fact that many of the potential benefits from a
programme such as the CSC may take some time to become visible. Given this background
it is perhaps not surprising that some participants harboured doubts about the scale of
changes that phase I of the CSC has brought about to date.
2.6 Sustainability and ‘spread’
2.6.1 Sustainability
Even if a local project team manages to achieve its target improvement(s), it is by no means
automatic that this level of performance will be sustained. Teams may fail to recognise that
work will be needed after the collaborative to anchor the gains and prevent performance
from relapsing or drifting back to lower levels. Collaborative improvements will only last if
they are firmly embedded and connected to the wider organisation:
“I think if you want to really sustain change and try and increase a projects credibility then you’ve got to
really have it integrated into what is happening in the mainstream NHS.” (Programme Manager)
Nearly half of the respondents felt that the CSC was ‘quite’ well embedded in their
organisation, 16% reported that the CSC was ‘very’ well embedded whilst 30% were
unconvinced that this was the case (table 6).
TABLE 6
Please indicate how well embedded within the participating organisation you believe the CSC now is (% responses) (n=96)
Very: 16; Quite: 48; Not particularly: 24; Not at all: 6; Missing data: 7
[source: end of study postal questionnaire, May 2001]
Amongst the nine programmes, positive responses (i.e. that the CSC was ‘very’ or ‘quite’
well embedded) ranged from 47% (programme A) to 80% (programme C). Project managers
were generally more positive than clinicians about how well embedded the CSC was locally
(79% reporting it was ‘very’ or ‘quite’ well embedded versus 56%).
Some teams had planned for the eventual departure of their project managers or other
eventualities:
“We wanted to go for sustainability from day one because we knew that even if the more theoretical
methodology elements of the collaborative fell by the wayside because they didn’t work as much as they
thought we would, we didn’t really need to ram them down peoples throats in order to get the same
outcomes.” (Project Manager)
“If this is going to be sustained at the end of two years we have got to start trying to get them thinking
about that now and putting it in their delivery plans: ‘what are you going to do when the collaborative
goes?’” (Programme Manager)
However, there were examples of other teams which believed that the changes would be
sustained in spite of the departure of their project manager and consequent lack of continual
monitoring of performance:
“The changes have been sustained, but the monitoring of these changes has not. The changes have been
sustained since they were incorporated into the service, but monitoring was dependent upon the project
manager who has now left.” (Tumour Group Lead Clinician)
Others were more sceptical that improvements would be sustained in the absence of a
dedicated local project manager:
“Without site specific and Trust specific project managers, I suspect change will not be maintained. Many
clinicians are sceptical about this cost neutral re-engineering.” (Tumour Group Lead Clinician)
Sustainability can also refer to the ongoing network or community of practice which should
have been established: a collaborative is not only an issue of creating a learning organisation
but also of having established a network or community of practice that has the will to
continue. Are project team managers and members still sharing ideas and experiences with
others who participated in the collaborative? Are they continuing to share and spread good
practice?
“I think we’ve got to be realistic about what we can achieve certainly in phase II. We haven’t got the
same level of project support and I don’t think that this network in particular is ready to go it alone. It
hasn’t embedded enough yet in the culture.” (Programme Manager)
Collaborative organisers, teams and their management need to allow time for teams to learn
how to sustain any improvements and how to continue to use the methods after the
collaborative. The likely post-collaborative drop in performance needs to be explicitly
recognised in advance, and strategies need to be designed to turn what is essentially a time
limited formal programme into a genuine continuous quality improvement process.
2.6.2 ‘Spread’
‘Spread’ can refer to change ideas and quality methods being taken up beyond the teams in a
collaborative by other units in the team’s organisation, or by other organisations.
The different local approaches to identifying the patient ‘slice’ to be involved in phase I of
the CSC will affect the strategies for ‘spreading’ the CSC improvement approach:
“The focus of the collaborative was supposed to be on one slice but really with the colorectal we spread
right from the beginning rather than slicing. So its been wider from the beginning and that made it very
challenging and quite difficult to get round everywhere and keep it going if you like. There’s pros and
cons both ways: if we’d just done it in one area we could have made a lot more difference in that one
area. But I think we actually created the right environment and culture with it being one merged trust - it’s
worked very well.” (Project Manager)
“In prostate I think it was really useful to have a slice. If we’d tried to do it with all the consultants, I
don’t think we would have got anywhere. I think it has been very useful just to focus on the one
consultant’s patients because we can show now what difference we can make.” (Project Manager)
As with other collaboratives, one aim of phase II of the CSC is to spread amongst the
participants practical changes which others have used successfully to improve their service
(change spread). These include ideas presented at the collaborative meetings by experts, as well as changes which projects in the collaborative have tested and which they then share inside and outside the meetings:
“The whole kind of ripple effect is really starting to happen exactly as we have laid out in the
methodology and I had to see it for myself because I thought we would have to start from scratch again
and almost begin again. It doesn’t feel like that at all.” (Programme Manager)
Participants felt that rolling-out the principles and findings of the CSC would be challenging:
“I think what I will be interested to see with the cancer collaborative is whether the teeth will bite at some
point. You know, we’re trying softly-softly approaches at the moment, we’re talking to a captive audience
of enthusiasts, the difficulty comes when you try and get the non-enthusiasts on board, and the persuasion
game starts.” (Tumour Group Lead Clinician)
Guidance about more effective spread also depends on who is the target of spread: the teams
in the collaborative, other units within their organisations, or other organisations. Teams can
also be encouraged to spread their improvements beyond a specific patient population which
they may have selected for testing the change.
There is little evidence about how to spread change ideas and quality methods to teams not
involved in a collaborative, or about how much this has been done by teams in
collaboratives which have formally ended. In part this is because this is not a priority for
many collaboratives, although it is often presented as an aim. However, research does show
that spreading ideas within a collaborative depends on meaningful contact and exchange
between teams inside and outside of the meetings (Øvretveit et al, 2002). This is helped by
leaders of collaboratives giving guidance to teams about how formally to present their
changes at the meetings, giving structured opportunities for exchange, as well as by making
informal exchange easier - informality and interaction being the key to effective knowledge
flow (Bate and Robert, 2002).
The remainder of this report focuses on the participants’ qualitative experiences of the CSC
with the aim of revealing key lessons for the content and method of implementation of
future Collaboratives in the NHS.
3 WHAT WERE THE KEY LEVERS FOR CHANGE?
Key findings
There were six key levers for change at the team and individual level in the CSC.
The most important of these were the adoption of a patient perspective - and in particular the use of process
mapping - and the availability of dedicated project management time. Over 80% of participants found these
aspects to be either ‘very’ or ‘quite helpful’.
Other significant levers included the capacity & demand training provided to participants, the facilitation of
multi-disciplinary team working, the empowerment of staff at all levels and the opportunities for networking
with peers.
3.1 Overview of key levers for change
When considering ‘key levers for change’ it is important to be clear as to the level(s) at which the change is to be brought about: the organisational, team or individual level (Ferlie and Shortell, 2001). The aim in the CSC was to bring about change at all three. Whilst this
chapter, and our research, mostly focuses on how the CSC brought about change at the team
and individual level, it is important not to neglect the importance of bringing about
organisational change in seeking to secure and embed shorter term improvements:
“To me it’s about positioning it in the right place. Having the right leader who has the credibility and the
understanding of how to make things happen. And I think you know, making sure that it’s always
integrated into what’s happening organisationally. And not seen as something separate.” (Programme
Manager)
At the team and individual level six key levers for change were identifiable from our
qualitative data and postal questionnaire1:
- Adoption of a patient perspective (process mapping and eliciting patients’ views),
- Availability of dedicated time,
- Opportunities for training (in particular, capacity & demand training),
- Facilitation of multi-disciplinary team working,
- Empowerment of staff, and
- Opportunities for networking.
Each of these six key levers at the team and individual level is now discussed in turn.
[1 The responses to two questions were particularly relevant. Respondents to question 13 identified four aspects as being the most helpful: process mapping, the national learning workshops, dedicated project management time and capacity and demand training. Similarly, responses to question 36 identified the following as ‘positive’ aspects of the CSC: the focus placed on the patient journey, opportunities for multi-disciplinary team working, opportunities for networking, time for reflection, general Collaborative approach, sense of empowerment, and the training opportunities afforded by the CSC.]
3.2 Patient perspective: process mapping and eliciting patients’ views
The NHS Plan places the patient perspective, and redesign, at the centre of the
modernisation agenda for the NHS:
“Over the past few years the NHS has started to redesign the way health services work - in the outpatient
clinic, the casualty department and the GP surgery. The work has been led by staff from across the health
service and involves:
- looking at services from the way the patient receives them: asking their views on what is convenient, what works well and what could be improved
- planning the pathway or route that a patient takes from start to finish, to see how it could be made easier and swifter.
We will now take forward this service redesign approach…Every trust will be expected to set up teams to
implement this new patient centred approach” (Dept. of Health, 2000)
In the CSC such process mapping was described variously as a ‘revelation to the majority’, ‘the most helpful tool for making change’, ‘fundamental to the project’ and as ‘what sets the CSC apart’. Figure 5 shows just how highly the majority of participants rated this aspect of
the CSC - indeed process mapping was (with dedicated project management time) the
highest rated component of the entire CSC improvement approach.
FIGURE 5
How helpful did you find process mapping? (73% of respondents answered ‘very helpful’; the remaining responses were spread across ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’)
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Focusing on how patients experience the service provided a new understanding, particularly
for clinicians:
“One of the things the collaborative has done which I think is quite good, is that it has focused the medics
on the way the patient looks at things. Of course we are always thinking of the patient, but we tend to
think of the patient from our perspective. And there’s constantly this thing of the patient pathway. So we
looking all the time at how can we improve it for the patient and you’re constantly tending to put yourself
in the patient’s shoes, which is a good thing” (Tumour Group Lead Clinician)
Taking part in an exercise that requires a practitioner literally to follow in the patient’s footsteps helped to facilitate this learning:
“I think it’s (studying patient pathway) absolutely essential. One of the exercises was to imagine yourself
as a patient, trying to find a parking space, trying to find your way to the clinic, and all this sort of thing.
And then you realise that it’s difficult to find a parking space, and you’re pressurised because your
appointment’s due. When you actually get into the hospital the signposts aren’t there, you come to a T-junction, because I work in the hospital I know which way to go, but quite often patients don’t know
which way to go. So I think it gives you a different perspective. When you see it laid out, you can see
where the problems and delays are.” (Tumour Group Lead Clinician)
Such systematic study of the patient’s pathway through the process of care from referral to
end stage of treatment was universally well-received. As figure 4 shows, many regarded this
as the most valuable building block of the methodology. Our interviews abound with
phrases such as ‘the key’, ‘starting point for the whole project’, ‘the single most helpful
aspect’ and ‘ideal for involving all staff and identifying real problems.’ Such comments
accord with the findings of an earlier evaluation of a large-scale re-engineering project in the
NHS:
“Some re-engineering techniques, particularly ‘process thinking’ (the analysis and redesign of patient care
processes) can be used successfully to improve patient care.” (Bowns and McNulty, 1999)
Whilst acknowledged to be a simple concept, participants in the CSC commented on the
different perspective offered by this process and that it revealed new insights into what were
often very complex systems:
“We then went in for the most important bit of the whole thing, which is the mapping exercise. That is
what’s made the difference to us, the mapping exercise, mapping out what actually happens to the patient.
Which is a very simple thing to do, and why we haven’t done it before I can’t imagine, but we haven’t. So
we had all the people here, we had forty here and forty in each of the three hospitals, and with these
dreadful bits of paper … working out what actually happened to the patient was a complete revelation.”
(Tumour Group Lead Clinician)
“Process redesign - and just seeing what’s happening in the patient pathway - is essential before you start.
We did have quite an efficient service but we wouldn’t have made the adjustments, the changes, that we
have made - it was only by sitting down and talking it through.” (Project Manager)
The mapping process was reported to provide new insight for health professionals who tended to work in isolation on their own fraction of the process. It was uncommon for staff working in one part of the service to know what happens before and after a patient is seen by them:
“Everyone you meet, and this is a general thing in the Health Service, they’re all doing their best for the
patient, but they are doing it in isolation. You know, that patient comes in front of them and they will use
their best clinical judgement, you know, their best surgery, but what actually happens upstream and
downstream…is not known.” (Programme Manager)
“The process mapping was incredibly powerful in that you have from the porter, to the clerk, to the
secretaries, up to the clinicians, and they all have a voice. And they all have the opportunity to speak.
When the clinician says ‘I dictate the letter and it goes to so and so’, and his secretary says ‘no it doesn’t
actually, there’s not just one step in there, I have to take it to etc’ – it was almost an appreciation of one
and other’s roles.” (Project Manager)
For example, in Queen Mary’s NHS Trust (South East London) mapping out the processes
for a gynaecological patient from smear test through colposcopy (which was taking up to 10
weeks) identified that there were a number of steps that could be condensed (NPAT, 2001).
The result was a reported reduction in waiting times from 10 weeks to less than 5 weeks and
improved patient certainty through the patient knowing the next steps in the pathway. In
addition:
“The new process was very favourably received by GPs as, although it reduces clinical and administrative
time, GPs remain fully informed about the management of their patients. This particular project has
successfully streamlined the service ensuring that patients who need a colposcopy because of an abnormal
smear do not have any unnecessary delays. Prior to starting this fast track service, an extra colposcopy
session had been added each week. In the year ending March 2000, 655 smear results were returned from
Queen Mary’s cytology laboratory to the GPs with a recommendation of colposcopy referral. This gives
an indication of the number of patients who benefited from this redesigned process.”
Similarly, prior to the CSC in Leicester General Hospital there was no individual
responsible for co-ordinating diagnostic tests for prostate cancer1: patients could have to
wait up to 36 weeks for a diagnosis and have to attend several appointments during that
time. After mapping the patient journey and designing an ‘ideal’ pathway, the consultant
now contacts the patient’s GP before the visit to request that the patient be prescribed
antibiotics and the visit to the clinic incorporates clinical history, flow rate measurement,
residual bladder scan, symptom score, repeated blood tests if indicated, digital rectal
examination and transrectal ultrasound if indicated. The net result was reported as:
“in the pilot of the assessment clinic, [the] time to get a diagnosis was reduced to four weeks from 36
weeks.”
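To make the idea concrete, a mapped pathway can be treated as an ordered list of steps with the typical delay before each one, from which the largest contributors to the overall wait become immediately visible. The short Python sketch below is purely illustrative: the steps and delay figures are invented for the example and are not drawn from the Queen Mary’s or Leicester projects described above.

# Hypothetical, simplified patient pathway: each step is paired with the
# typical delay (in days) before that step occurs.
pathway = [
    ("GP referral received", 0),
    ("Referral graded by consultant", 5),
    ("First outpatient appointment", 21),
    ("Diagnostic test performed", 14),
    ("Test reported and reviewed", 10),
    ("MDT discussion and treatment decision", 7),
    ("First definitive treatment", 28),
]

total_wait = sum(delay for _, delay in pathway)
print(f"Total referral-to-treatment time: {total_wait} days")

# The steps contributing the longest delays are the obvious candidates for
# redesign - exactly what a process-mapping exercise aims to reveal.
for step, delay in sorted(pathway, key=lambda item: item[1], reverse=True)[:3]:
    print(f"{step}: {delay} days")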
One other important corollary of undertaking the process mapping exercise was that it
provided impetus to engaging with all staff (“it encouraged the whole team to see the issues
and understand the complexities”, “the way we got real involvement and ‘ownership’ from
all levels of staff”) and encouraging wider team working by “involving team members from
clinical AND non-clinical backgrounds.”
Some participants, whilst still welcoming the value of the exercise, seemed less certain
about the training they had received relating to process mapping and its place in the CSC
improvement approach, suggesting that this should be made more explicit in future
Collaboratives:
“The reviewing of the patients journey, the mapping, wasn’t really a collaborative approach. And I can
remember going to those early NPAT meetings, where they said to me, “Well why aren’t you testing
changes yet?” And I said “Because I don’t know what I’m trying to test.” And yes, we could test each
clinic and make it better, but the big fundamental question is “Do you need that clinic?” And so I started
to do, with the team, the process mapping and the reviewing of the patients journey, almost on the quiet just got on with it in the background.” (Programme Manager)
“You know what I was thinking though is that we talk about mapping and process mapping but actually,
of all the workshops we tried, I don’t remember doing anything about how you process it or anything,
we’ve never done any training on that. And I was talking to people, and the first thing you do is map it
out but we never did anything about how you’re meant to do that, so I think that needs sorting.” (Focus
Group - Project Managers)
Other initiatives to capture the patient perspective and patient views - running alongside the
need for system changes revealed by process mapping - were equally valuable:
“Well, just getting somebody in quicker, doesn’t actually improve their experience of it. And I felt
surprise rather than resistance, that we were actually using real patients. And that was fabulous, doing
patient information, that was brilliant for me. To actually ask patients’ views and then making something
of them.” (Project Manager)
“Because I think people forget - I know we’re all about patients - but I think people forget about the
patients because they’re so busy looking at getting patients through quicker that they don’t see them as
people. And that for me was one of the things that I’ve enjoyed and felt the most value, really. That
people have actually felt that my input has made a difference…I felt very proud of that, I suppose.”
(Project Manager)
For example, at the Royal Victoria Infirmary, Newcastle-upon-Tyne, patients with a postoperative seroma following breast and axillary surgery would often have to wait up to four hours before a doctor was available to drain the seroma - even though the procedure usually takes only a few minutes. By training district and practice nurses, patients are now given a choice of having the seroma drained at home or in the GP surgery (and 50% are choosing to do so), rather than having to travel to the hospital and wait to have the seroma drained there.
[1 Source: Prostate Cancer. Service Improvement Guide.]
The approach of the CSC national team to engaging with patients has evolved over the
course of the Collaborative:
“A variety of techniques have been adopted to obtain patients’ views about the care and treatment they
have received. These techniques include questionnaire, semi-structured interviews and focus groups; all
of which have been carried out to give patients as effective method of feedback, as well as the ability to
influence improvements.” (Patient Information. Service Improvement Guide)
Having begun by suggesting that participants elicit patient views through questionnaires rating their experiences (but finding that these were not sufficiently discerning), the national team moved on to advocating the use of reporting systems (‘did x, y happen to you?’ etc.) and, in phase II, is recommending the use of ‘patient discovery interviews’1 because of the richness of the data which can then be fed back to improve the system.
3.3 Dedicated time
As figure 6 shows, the presence of a dedicated project manager was seen as ‘very helpful’ by
the vast majority of respondents to the postal questionnaire. Comments made clear that
having such staff in post locally offered continuity and a focal point to the work which the
project teams were undertaking: ‘essential to ensure and push through changes’, ‘would
have failed without project managers’, ‘would not have been possible without project
manager’ and ‘the project manager was invaluable - emphasising the fact that to a huge
extent our problem is lack of bodies and time.’
FIGURE 6
How helpful did you find having a dedicated project manager? (76% of respondents answered ‘very helpful’; the remaining responses were spread across ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’)
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Clinicians mentioned that one of the key benefits of the CSC was that it provided good
management and many expressed praise for their managerial colleagues:
[1 These have been developed through the CHD Partnership Programme (toolkit on NPAT website: www.nhs.uk/npat).]
“As far as this project is concerned, I don’t want to sing praises unnecessarily, but our project manager is
outstanding, and does maintain the incentive and the impetus.” (Tumour Group Lead Clinician)
“Our programme manager…she’s the perfect mix between dynamism and expertise, but she’s got
credibility with everyone. It doesn’t matter whether it’s a toffee nosed consultant or a nurse on a ward, or
a chief executive in a hospital; she’s got credibility with the lot. And I don’t think we would have
achieved half as much as we would have done, without her” (Programme Clinical Lead)
The fact that there is a dedicated manager to focus on doing the work was highly valued:
“I think as far as our project is concerned, the most useful thing is that we’ve had a project manager, and
basically you can say to her ‘look at this’, and she’ll go off and do it and she’ll come back and it’s either
worked or not worked. So you get the chance to fix things quickly. If you have a fix-it person, somebody
who’s there, somebody you can talk to and say ‘why don’t you try this’ and they can actually go and work
on the nuts and bolts and make it happen, then you’ve got changes happening a lot more quickly.”
(Tumour Group Lead Clinician)
“I think it is the protected project time [that has been the lever to change]. I have to say that I don’t think
that for me the methodology has played a big part – perhaps subconsciously it’s been there, but I don’t
think so.” (Project Manager)
However, not all project teams were so fortunate:
“From a personal point of view, I’ve had no more time at all to try and run this project. I mean, I don’t
have dedicated time for this project and at times that’s been difficult to keep up with the reporting, pitch
up at the meetings that the programme manager tries to arrange, [sighs], but we do our best. And the
clinicians, they can’t give their time, there was a nominal amount of money that was given for clinicians
to give a session of their time to this, but they can’t do that…” (Project Manager)
The fact that different project managers had varying amounts of dedicated time to give to the CSC emphasises that how teams decided to allocate their funding may have had a key role to play in explaining some of the variable outcomes which were observed across different project teams and regional programmes; a topic which is explored further in our discussion.
3.4 Capacity and Demand training
Figure 7 shows how well-received the capacity and demand training was by those who attended these sessions (13% were ‘not involved’).
FIGURE 7
How helpful did you find Capacity and Demand training? (76% of respondents rated the training either ‘very’ or ‘quite’ helpful; 13% were ‘not involved’, with the remainder answering ‘not particularly’ or ‘not at all’ helpful)
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Some participants, however, felt that this training was sufficiently important to have been scheduled somewhat earlier in the programme:
“The key levers are having the training - the tools - early on, being absolutely sure about measurements,
and concepts like capacity and demand which we did quite late on in the depth that’s necessary. I think
yes, [the key levers were] learning the skills for detailed process mapping, capacity and demand, and
having definite measures.” (Project Manager)
Nonetheless, the training in capacity and demand was identified as one of the key levers for
change as it had ‘helped greatly with the analysis of problems’, ‘enabled us to highlight
areas for improvement’ and was ‘an essential tool in redesign.’ These positive views were
shared by both clinicians and non-clinicians:
“One thing the collaborative has done, is that it has given us an opportunity to look in a critical way at
how we deal with things, specifically at our capacity and demand, which has caused us lots of problems,
and how we manage that best, and maybe then share that with our other units around the region.”
(Tumour Group Lead Clinician)
One example of how capacity and demand training was applied comes from Bromley
Hospitals NHS Trust (South East London). Previously, patients would wait between two and 12 weeks for their first outpatient appointment with a urologist. These long waits failed to
meet the two-week wait target for suspected cases of prostate cancer and unnecessarily
extended the patient journey but consultants felt that other urgent cancers and non-cancer
patients should not suffer because of the fast tracking of prostate cancer patients. The
capacity and demand for new referrals were tracked on a number of clinics and it was found
that demand for new slots exceeded available capacity. To resolve this, ‘emergency clinics’ were introduced on a daily basis (except alternate Wednesdays) by consultants for all urgent
referrals with suspected or known prostate cancer. The clinics run from 10.00 am to 12.30
p.m. with three new referrals from the accident and emergency department and five new
patients booked via the central referrals office. Patients from A&E are pre-booked into the
following day’s clinic whilst GP referrals are faxed or sent to the office on standard pro
formas and given the next available clinic slot. They are then seen in clinic within 24 hours
to two weeks.
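The comparison at the heart of such an exercise is simple to express. The following Python sketch is a hypothetical illustration only: the weekly figures are invented and are not the Bromley data, but they show how tracking referrals (demand) against available new-patient slots (capacity) exposes a persistent shortfall of the kind that prompted the daily ‘emergency clinics’.

# Hypothetical weekly counts of urgent urology referrals (demand) and
# available new-patient clinic slots (capacity).
weekly_demand = [34, 41, 38, 45, 39, 43]
weekly_capacity = [32, 32, 32, 40, 40, 40]  # capacity increased after week 3

for week, (demand, capacity) in enumerate(zip(weekly_demand, weekly_capacity), start=1):
    shortfall = max(0, demand - capacity)
    status = f"shortfall of {shortfall} slots" if shortfall else "demand met"
    print(f"Week {week}: demand {demand}, capacity {capacity} -> {status}")

# A shortfall in most weeks points to a genuine capacity gap rather than a
# one-off backlog, and so to the need for additional or redesigned clinics.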
3.5 Multi-disciplinary team working
Participating in the CSC had the major associated benefit of encouraging the formation and
operation of multi-disciplinary teams (MDT). This tied in closely with national policy
initiatives:
“The development of MDT working has been a key feature of cancer services over the last ten years. The
NHS Cancer Plan states that every cancer patient should receive clinical management that has been
considered by a MDT.” (Multi-disciplinary team working. Service Improvement Guide)
Not only was this an important factor in facilitating progress towards the aims of the CSC, it was also seen as a longer-term benefit for the Trusts concerned:
“Right from the outset I truly believe, and I think will continue to believe, especially doing part of the
Orthopaedic Collaborative as well, that you have to have a multi-disciplinary team approach. I don’t think
any of these changes would have happened if we hadn’t have had open, frank conversations and
discussions every month.” (Project Manager)
“I’m quite strongly of the belief that if you get those MDTs up and running, they’ll be a catalyst for your
other things. Whereas all that time and effort maybe spent doing other things that require major capital
financing could never be achieved - even with the best staff in the world, you would never ever get those
achieved in eighteen months because of all the issues surrounding them.” (Programme Manager)
Both clinicians and managers expressed the view that the quality of multi-disciplinary team
working promoted by the CSC was exceptional compared to their previous experiences. A
clinician commented on the particular value of hearing views from other professionals:
“And one of the things about the collaborative is that there are lots of non-doctor people involved and
they say all sorts of sensible things.” (Tumour Group Lead Clinician)
Other clinicians noted that working this closely with other professionals added a new
dimension; an advantage that was not available when they met in their respective
professional specialities. Another unique element was that practising teams were attempting
together to improve services, whereas the usual model for service re-design would be to
have an outside team come in and point out where the problems were:
“I think the power comes in the multi-disciplinary team from all sectors getting together and looking at
what they’re doing and driving it from the team who are running the service and linking it to what they’re
doing on a daily basis, rather than a project team coming and saying ‘this is where your bottlenecks are,
this is where your delays are’”. (Programme Manager)
The MDT ‘Service Improvement Guide’ provides 22 case-studies of changes which mainly
focus on improving the organisation of team meetings:
“Typically this involved making sure that appropriate staff could attend and did attend, that all
information was available, and that the decisions made were recorded. The timing of the meeting was
often reviewed and changed to reduce delays along the patient journey.” (p. 3)
To illustrate, prior to the CSC at University Hospital, Lewisham, patients with bowel cancer
were discussed in a variety of meetings and the MDT meetings took place on alternate
weeks due to the time pressures on key consultant staff1. Consequently, there were delays
before decisions and treatment plans could be discussed. The MDT discussed only patients
with lower gastrointestinal cancer and there was no record of the key information from the
meeting and decisions made as there were no protocols or forms for recording this
information. Following the CSC the MDT meetings now take place once a week and
combine upper and lower gastrointestinal cancer. These measures mean that:
“decisions are made more quickly and are shared with the patient before discharge; patients with upper
and lower gastrointestinal cancers are discussed within the same forum, promoting seamless care and
making best use of the core team; and decisions are recorded by the cancer data manager on a pro forma,
including information about past medical history, diagnosis and management plan.”
Overall, the emphasis given to MDT working as being central to patient-focused care in itself made the CSC - in some participants’ eyes - a ‘valuable exercise’, regardless of what else it may or may not have achieved.
3.6 Staff empowerment
A common oversight in other Collaboratives both in the UK and elsewhere - and which
seemed true to some extent early on in the CSC - is to dedicate too much time at the national
learning workshops to didactic presentations by experts rather than providing sufficient time
to enable participants to learn through actively participating in the improvement process
(Øvretveit et al, 2002). Gaining knowledge of quality methods and change concepts is the
easy part; much more difficult is learning how to interpret change concepts, apply the
methods and transfer new practices back into the local setting:
“You can’t just say you can pick up a solution from somewhere else and plop it down here and it will
work. You’ve got to tailor it and staff have got to feel the ownership of it, clearly if you impose
something from one hospital to another, then they’re going to say well that’s another hospital’s and we
don’t own that. Taking the team through deciding what’s the best thing is the crucial part, really.” (Project Manager)
[1 Source: Bowel Cancer. Service Improvement Guide.]
“If we’d taken a different tack of putting five or six project managers in, all going in, reviewing services
and then presenting what they found to the team, you know, I think we would not have been successful - I
think we would have been in the scenario that you find some of the other programmes in. Personally I
think it’s about the approach, and I was very clear from the beginning that we needed to get the teams
engaged in looking at what they were doing, because if we didn’t get them involved from the start, the
ownership would be with the project managers and not the team.” (Programme Manager)
Developing these competencies is necessary if individuals and teams are to continue
improvements beyond the end of the collaborative:
“I think it is important that the people there [the team] take the credit, because once the collaborative is
long gone, the health service is going to need to take these tools and techniques and keep going with
them, and if they don’t have the ownership now, then I don’t think they’ll ever be able to keep up with the
ownership.” (Project Manager)
It is wrong therefore to assume that because a team attends a collaborative learning meeting,
it is motivated and confident of its ability to make improvements. Rather it should be
recognised that some teams may be there because they have been sent by their management
and may not be motivated or convinced that the improvement is important and achievable or
the methodology robust.
In contrast, if professionals believe that they have it within their power to make
improvements, then they will give it the time and effort that is required:
“If you give people a leash, they’ll develop their own interpersonal relationships and - as long as they
understand the structure, and they’re trained and they’re given all the support that they need, and there’s a
rigorous ‘You must report at the end of the month, and this is what we must see because otherwise we
know it’s not working and you need help’ - then they just got on with it. Totally different culture from the
NHS where somebody tells you what to do and when to blow your nose, and what to be doing at 10
o’clock on Tuesday morning. And it’s also a very unsettling culture, because you know, being in a very
directive command and control structure is actually very safe.” (Programme Manager)
A strong sense of purpose and mission is as important as gaining quality skills and this is
something which the CSC had successfully developed in the majority of instances:
“It doesn’t matter what skills you’ve got as project manager, if there is no will to do it, there is very little
you can do. Because at the end of the day I can’t stand on x-ray reception doing capacity and demand
every day for a month to get the figures, marking every request card. The only way that will work, is for
me to persuade a receptionist and the receptionist’s senior manager, to do this work. That’s the only way I
can do it. They’ve got to own it, and there is no ownership, there is very little.” (Project Manager)
Having engendered such a sense of purpose gives resilience and a feeling of ‘can do’ in the
face of the setbacks that teams will inevitably experience. Building confidence in
participants’ ability to make changes also depends on acknowledging peers who have
actually achieved change and who can give practical examples which teams can translate to
their local settings, and the team themselves believing that they are gaining change skills
which will work. Perhaps the strongest motivation and confidence comes from teams seeing
the improvements which they have made - hence the importance of opportunities for
networking.
3.7 Networking
The CSC adopted the theory that improvement will be accelerated by focusing intensely on
an area of concern (for example, the need to reduce time from referral to treatment), and by
maintaining support for rapidly conducted, small scale tests of change in PDSA cycles: a
simple and well known model for improvement (Langley et al, 1992). However, the real
innovation and potential value of Collaboratives lies in the creation of ‘virtual’ horizontal
networks which cut across the traditionally hierarchical organisations that largely make up
the NHS. Such networks enable a wide range of professionals in a large number of Trusts to
come together to learn from each other. They also empower relatively ‘junior’ staff to take
ownership for solving local problems by working with clinicians. Through such mechanisms
Collaboratives aim to implement a sustainable bottom-up process (a learning-based
approach to change) rather than simply applying an ‘off the shelf’ top-down methodology.
Certainly the opportunity to meet and interact with colleagues at the Learning Workshops
(and other national meetings and mechanisms), both within local teams and with others from
elsewhere in the country, has proved to be a valuable component of the CSC: “the
workshops, conference calls and meetings were an invaluable source of ideas and support”,
“networking and idea sharing proved immensely helpful” and “excellent to meet with other
project teams and establish useful contacts”. For some, even finding out that services were
as unsatisfactory elsewhere has served to reassure:
“I think the strengths have been by sharing common experiences about how the full service runs at
different hospitals, there’s been some useful ideas, but it’s also been interesting that everybody suffers
from the same sort of problems…in a way it reassures you that you’re all in the same boat, in other
words, you’re not the only one who has to give a second class service to patients.” (Tumour Group Lead
Clinician)
Participation from different regions in the country provided services with a wider
perspective and enhanced cross fertilisation of ideas and good practice:
“I think it’s nice to be a national project because it’s nice to see what’s going on in London compared
with us, can we use something that they’re using? Obviously if it was just one area, you’d get very stuck
in a rut and think that what we’re doing is ok…” (Project Manager)
For services that have already achieved credibility as leaders or innovators within their
professional arena, the collaborative provides an opportunity to tell others about what they
do. Having access to a large pool of expertise, which represents a variety of opinions
compared to consulting an individual expert, was another advantage:
“And that is what I think the strength of the collaborative is, that you’re not going to an expert, but you’ve
got this huge group, all looking in slightly different directions, but benefiting from each other’s variation
in approach.” (Programme Clinical Lead)
The emphasis on sharing was perceived by some to introduce a welcome cultural shift from the way the NHS usually operates, and to be particularly important as a way of leveraging the support of clinicians for the Collaborative:
“I think it’s good that we are encouraged to share, because I still think in a lot of the NHS people don’t
share and people keep things to themselves. And hopefully that it’s (the CSC) trying to change the culture
that you don’t have to keep everything to yourself. So that’s good. ” (Project Manager)
There were clear preferences amongst the different mechanisms by which participants could
interact with peers (face-to-face rather than by e-mail or telephone) and the type of
networking (informal rather than formal):
“informal discussion with colleagues from different hospitals (at National workshops) were probably the
most helpful in developing our service and benefiting from others’ experience.” (Tumour Group Lead
Clinician)
These preferences have consequences for the way in which the CSC should be organising
and structuring national meetings in future phases of the Collaborative. We will return to
this issue when discussing participants’ reactions to the National Learning Workshops in
chapter 5. However, despite such overwhelmingly positive comments it was still the case
that not everyone saw the benefits of such informal networking:
“CSC is a local initiative with solving of local problems - hence all the time supposedly networking was
largely a waste of time - this may reflect that we had a very good local team.” (Programme Clinical Lead)
This was, however, a minority view and, as in the OSC (Bate et al, 2002), it was having the
time and space (the ‘headroom’) to discuss issues with peers and colleagues that was one of
the most highly valued aspects of the CSC.
4 WHAT HINDERED CHANGE/PROGRESS?
Key findings
In common with findings from evaluations of other Collaboratives in the NHS, there was insufficient time
devoted to preparing for phase I of the CSC. This, combined with the largely theoretical content of the first
national learning workshop, led to a slow start. However, the vast majority of participants felt that the CSC
strengthened thereafter with almost half stating that it ‘strengthened considerably.’
Two aspects were regarded as having hindered making changes and overall progress during the CSC: firstly,
there were a number of issues around data collection and measurement and, secondly, that the approach and
training content - particularly at the beginning of the CSC - was too theoretical.
A recurrent theme throughout our qualitative research was that the requirements for data collection and
measurement were unclear and, in some respects, unhelpful. Participants raised some doubts about the
usefulness and validity of some of the measures which were adopted.
Participants found the theoretical elements of the CSC, especially at the beginning, to be unhelpful and were
uncomfortable with the use of ‘jargon’. Clinicians were particularly critical in this respect.
4.1 Overview of evolution of CSC and the aspects that hindered change and progress
It has been acknowledged by all those involved with the CSC - both its leaders and
participants - that it suffered a slow start. This was best exemplified by the comments
relating to the first learning workshop in Dudley and, in particular, the adverse reaction to
the largely theoretical content of that event. In addition, in at least one of the nine
programmes the majority of project managers were not in post until some six months after
the CSC had begun. Such initial difficulties are very similar to experiences reported with the
OSC (Bate et al, 2002) and, in common with these, offer an important lesson for future
Collaboratives.
However, from this slow start 81% of respondents felt that the CSC had either ‘strengthened
considerably’ or ‘strengthened somewhat’ over the whole period (table 7).
TABLE 7
Looking at the CSC over the whole course of your involvement, how would you assess the evolution of the programme? (% responses) (n=96)
Considerably strengthened: 46; Strengthened somewhat: 35; Remained strong: 6; Weakened somewhat: 4; Considerably weakened: 1; Missing data: 8
[source: end of study postal questionnaire, May 2001]
Respondents from programmes C and D were more confident (100%) that the CSC had ‘considerably’ or ‘somewhat’ strengthened, or had ‘remained strong’, than those from programme F
(65%). Overall, project managers and lead clinicians shared similar views about the
evolution of the CSC (82% and 90% felt it had either ‘considerably strengthened’ or
‘strengthened somewhat’).
A stated part of the improvement approach adopted in the CSC is that momentum builds
throughout the process and that, particularly between the second and third national learning
workshops, significant progress is made. Such a ‘learning curve’ has been observed in other
Collaboratives internationally and would seem to have been the case in the CSC as well:
“That’s one thing you can say, we have learnt from it. They didn’t actually know what they wanted. In
fairness it’s come out at the end of it really good and they didn’t have any clear direction themselves to
know exactly what was going to happen or how it was going to work - it’s sort of developed as it’s gone
really.” (Focus group - Project Managers)
Within this context, participants’ concerns ranged across a number of issues1 but, in terms of those that were perceived to have hindered progress, they centred on two important aspects:
- A lack of clarity around measures and data collection, and
- Too much theory and ‘jargon’.
The other issues which were raised are discussed in the following chapters as, whilst
suggesting that these were elements that could be improved upon in future Collaboratives,
participants did not necessarily perceive them as having significantly affected their ability to
make changes - and their rate of progress - during phase I of the CSC.
4.2 Measures and data collection
Initially, project teams in phase I of the CSC were asked to set local improvement aims in
line with the national programme goals and to define their own measures. This was an
established part of the collaborative methodology in line with the approach that IHI had
utilised with teams taking part in collaboratives internationally. Consequently, there was
considerable flexibility around the measurement systems teams could use. Langley et al
(1996: 10) do acknowledge a range in formality with which IHI’s model for improvement
should be applied: ‘A more formal approach might increase the amount of documentation of
the process, the complexity of the tools used, the amount of time spent, the amount of
measurement, the amount of group interaction, and so on’. Given its national profile, phase I
of the CSC was more ‘formal and complex’ than most collaboratives except in its use of
quantitative analysis. Within a few months it became apparent that there was a need for
greater standardisation; it also became apparent which measures worked better than others.
The CSC therefore shifted the emphasis to standard measures and subsequent NHS Collaboratives have sought to apply standard measures in this way.
A frequently mentioned benefit was that being involved in the CSC has encouraged services
to measure what they are doing, thereby providing evidence for what and where the
problems are:
“Just simple things, like patients having to wait hours in clinic. Well how long do they have to wait, how
many of them have to wait? We found that we had a rapid access clinic and some of the patients were
there for three hours. But until you’d actually timed it and said that that percentage of the patients were there for - and everyone goes ‘oh dear’. You know, it makes it much more powerful doesn’t it?” (Focus Group - Project Managers)
[1 Question 13 asked if, and in what way, twelve specific components of the CSC improvement approach had been unhelpful. Respondents commonly identified four specific components as being the ‘least helpful’: too much theory and jargon, team self-assessment scores, conference calls and listserv. Question 37 stated that ‘participating in the CSC may have had both positive and less helpful aspects. Identify up to three less helpful aspects or concerns relating to your participation in and experience of the CSC’. Responses focused on national workshops, poor initial set up for project managers, ‘hype’, and measurement issues.]
“Improvement targets are new to most services - it was refreshing to collect data and information for
measuring improvement as well as towards a point.” (Programme Manager)
Process measurement was therefore a new area for many, and previously there had been little or no incentive to record these data systematically.
“The whole issue of data - when you look at it - you would think that in the network then you would have
information on the time from referral to treatment. It seems so basic and obvious information and
certainly I personally thought it would be available, and I couldn’t believe that it wasn’t. It’s taken a lot of
time to get hold of that, and make sure you’re clear about what is the time and how do you get that data
and it’s up to date. So I think that’s been a challenge, and for us, we underestimated what resource we’d
need to actually put in and make that happen. I’m sure we’re not the only programme.” (Programme
Manager)
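To illustrate the kind of patient-level calculation involved (a minimal Python sketch with invented dates rather than CSC data), the ‘time from referral to treatment’ measure simply requires that the date of referral and the date of first definitive treatment be recorded consistently for each patient:

from datetime import date
from statistics import median

# Invented patient-level records: (date of GP referral, date of first
# definitive treatment). The measure is the number of days between the two.
patients = [
    (date(2000, 9, 4), date(2000, 11, 20)),
    (date(2000, 9, 18), date(2000, 10, 30)),
    (date(2000, 10, 2), date(2001, 1, 8)),
]

waits = [(treated - referred).days for referred, treated in patients]
print("Days from referral to first definitive treatment:", waits)
print("Median wait:", median(waits), "days")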
One example - from a lung cancer project in Mid Anglia1 - highlights the potential benefits
of encouraging better data collection. Here - across three neighbouring NHS Trusts - there
was no agreed lung cancer database being used. One chest physician in one site was using a
database that he had written himself to collect his data whilst other clinicians were using ad
hoc systems or none at all. As a consequence of the CSC the three hospitals agreed to use
the same database and this change - when fully operational - will enable required data to be
collected across the local cancer network.
Given a similar starting point, measurement presented difficulties for most teams but it was
recognised as a necessary endeavour if improvements were to be made to services:
“essential, if you do not measure, you don’t effect change”, “felt like a huge pain initially but was vital to a sense of progress”, and “a pain but necessary.”
A number of respondents commented on the programme’s effect on measurement. The view
was that usually some regional or central directive is required to compel local staff to begin
measuring a particular aspect of their service. Consequently, an infrastructure to monitor
various stages of the cancer patients’ journey would not have developed without the CSC:
“I think to an extent it would have happened (some changes) but we wouldn’t have been able to measure
it as well as I think we’re going to be able to. So I think what the collaborative has done, is to aid us in
terms of getting that infrastructure together, to monitor how things go from now onwards.” (Programme
Clinical Lead)
Some projects reported that their record keeping and audit practices had improved as a direct
result of the CSC, and that the lessons from phase I were being applied to phase II:
“It (the CSC) also concentrates the mind on important things like collecting data, which some hospitals
are much better at than others. I freely admit that we were not very good at collecting data so we’re
making an effort now. At the moment it’s mainly paper data, but we’re employing people to get it
properly on computer databases. That wouldn’t have happened otherwise, I think we probably just
wouldn’t have got round to it. So it’s been a stimulation to do it.” (Tumour Group Clinical Lead)
Most participants, however, were disappointed that the measurement and data collection
requirements were not sufficiently clarified from the outset of the CSC and, in particular,
that the measures were changed during the programme. A regular suggestion was that
guidance regarding measurement should have been prescriptive and clarified at the very
beginning. While there was general approval for the autonomy awarded to programmes in
developing their teams and processes, this was not the preference for measurement. Support
for specifying some standard measures across all the programmes from the outset was
expressed. As discussed above, there was recognition that measuring process was a new area
for the NHS and data were not readily available. Equally, what to measure for each
respective tumour group was not immediately clear. However, if measurement had been a
priority at the beginning of the collaborative, it could probably have been clarified earlier. A
number of participants recommended that teaching and discussion at the first workshop
should have focused on measurement rather than the emphasis given to theoretical aspects of
change management.
1 Source: Lung cancer Service Improvement Guide.
“L: That’s been a learning curve though hasn’t it? That framework that came out - the electronic
framework that came out - if we’d have all been given that from day one, there would have been no
confusion. We would have all collected the same data.
S: And if you’d had a session on measures at the first workshop and everyone goes, well the project
manager’s there and no-one can leave with any doubt that’s how you do it. Especially with measures, I
mean they changed over the course of the project - that was very difficult. “ (Focus Group - Project
Managers)
“A lot of us put an awful lot of work into our own measures when we started off, I mean a hefty lot of
work, and all of a sudden we were going to do these new measures. And actually the new measures are
much better, but I think people got a bit disheartened that they’d just set up all these systems, and were
monitoring different things. And whilst you continue measuring those things, there are these other things
to measure. And what wasn’t clear from the beginning as well, was/is anyone ever going to actually look
at this data?” (Project Manager)
Collecting the data related to the measures was reported to be very time consuming;
participants felt there were too many measures. Rather, they would have preferred to have
been told much earlier which measures to collect data on, and that these would not be
changed subsequently:
“M: I think the reports are too frequent and there are too many measures, but they were irrelevant
measures. To be truthful we’re only interested in the booked admission phase on the three dates of the
journey. I think there were far too many measures. At least in phase two they’ll know what measures they
want because it actually took them months to work that out. Pre-planning wasn’t one of their attributes.
T: I think that was actually one of the problems wasn’t it because we started out looking at what we
wanted to achieve from the project and what do we want to do. It turned out to be ‘oh hang on a minute,
that’s very nice what you’d like to do but we’d like you to do this’.” (Focus Group - Project Managers)
For much of the early part of the Collaborative, participants perceived that teams in the
various programmes appeared to be adopting different approaches to measurement. The need
for locally pertinent elements in the measurement was accepted. However, a preference was
expressed for common measures that should have been specified by the national team from
the start: “not enough consistency between pilot sites as to how and what is measured …
many measures only applied to very small samples of patients who did not necessarily
represent the entire cohort”; “I have concerns regarding the quality of data collected for
phase I in that everyone appears to have collected slightly different data etc”; and
“[Measures] should be more clearly defined - not sure if all the projects were measuring the
same things.”
Some participants anticipated that the inconsistency of the data that had been collected
would inevitably dilute the CSC’s ability to present quantitative evidence for its
achievements:
“…Everybody is measuring different things. For patient access, thirteen different things were being
measured. I think basic guidance around measurement is quite important, and getting the measures wrong
will mean that it would be very difficult to evaluate nationally if everyone’s measuring it differently. I
don’t see how it’s going to be possible to evaluate it quantitatively as well as qualitatively. That’s the
main worry that I have generally about the collaborative.” (Programme Manager)
“My main concern was always the measurement, that we make proper measurements. My concern is how
am I going to stand up and talk about something and say to people it’s been a great success, or we’ve
engineered this change, if I can’t back it up with some good data that I can use to convince others that it’s
been a useful change.” (Programme Clinical Lead)
Such concerns were not just related to how the achievements of the CSC would be received
externally, but also to the participants’ own doubts about the value of the data being
collected and the effect that this might have on perceptions of the Collaborative locally:
“But a lot is hindered by the fact that the data that is currently being collected nationally, tells you
nothing. It tells you what activity people have done, but they haven’t a clue what the demand is on
service, or they didn’t until we walked in. People still don’t know what the true capacity is of their
service. They think that’s how many slots they have…All the data [for the CSC reports] has been
manually collected, it’s been the only way, and that is difficult and that is why I wouldn’t vouch that it is
hundred percent accurate.” (Project Manager)
A particular perceived limitation of the measurement was that the programmes were not
encouraged first to collect accurate baseline data1:
“…And with all that confusion around what to measure, I’m sure that some people that started work in
November, haven’t got what was happening in the early months, and they’ve changed so much already. It
might just be that we take time retrospectively auditing in the end, which wasn’t the point…I was so
worried about collecting data for baselines that I did not want to start making changes because it would
distort the baseline.” (Project Manager)
Despite these concerns, projects have persevered and reported that “the measures are getting
there”. The national team has spent many hours supporting and guiding individual projects
regarding their choice of measures and setting up databases. However, the experience of
measurement in the CSC has been troubled for most. As our interim report made clear,
measurement was a common early concern, but also an area of learning (Parker et al, 2001).
With hindsight, it appears that the programmes would have welcomed more prescriptive
guidance from the outset and some specified standard measures. The national team has
responded to many of the concerns mentioned and continues to review and improve the
process.
Naturally, such issues around measurement will have been affected by a number of variables,
some within and others beyond the control of the CSC programme itself. These include the
lack of routine NHS data, differing levels of prior knowledge and understanding of the
principles of measurement by participants, and variation in the collection of data by
clinicians using tumour-specific datasets developed by specific interest groups2.
4.3 ‘Too much theory and jargon’
Although many of the critical comments relating to this issue were made to us at the end of
the CSC, it was clear that they were mostly directed at the very beginning of the
Collaborative and, specifically, to the content and style of the first National Learning
Workshop.
1 For example, the CSC Pre-work booklet for the first learning workshop (18-19 November 1999, p21) states that
“You should collect ‘baseline’ data for each of your measures prior to the first learning workshop. You do not
need to collect data on every patient that uses the service. Use sampling systems (ie every 10th patient or all
patients at a certain clinic) that minimise the effort of data collection and measurement”.
2 See, for example, the British Association of Surgical Oncology Breast Unit Database and Dataset v2.1
(www.cancernw.org.uk/clinit/products_baso.htm).
The teaching at this first workshop was perceived by many who attended to have contained
too much management theory from an American health care perspective: “the Dudley
workshop was very uneasy - e.g. many clinicians didn’t understand theoretical concepts - ‘it
was management speak’”; “Dudley: dreadful management speak”; and “the first workshop
had far too much health service theory and not enough practical ideas.”
Many of the speakers at this event were perceived to have little understanding or knowledge
of the NHS; consequently much of what they discussed was not relevant to the UK health
care system:
“The first meeting we went to, I thought was dreadful, and left people with a very negative view of the
whole project, I think. Part of it was that the people who were leading the project had no comprehension
of what we were facing, and there were a lot of Americans, who I felt did the whole project a lot of harm.
Because their view, of course, was from American industry where money simply follows and if
something needs doing, problem solved, but that’s not the case here.” (Programme Clinical Lead)
“I think the problem with the initial workshop was that it was overloaded with forty-eight hours of
American health theory and we all said ‘well what the hell has this got to do with what we’ve got to do?’
And it’s taken several months before we could translate it into our own practice.” (Tumour Group Lead
Clinician)
The reaction of clinicians to the emphasis placed upon the method and the theories
underpinning it at the beginning of the CSC was particularly strong - it was too conceptual
and there were not enough practical examples:
“All the management speak - change principles, PDSA cycles - if I ever hear that word again, I’ll
scream!” (Tumour Group Lead Clinician)
The CSC national team have subsequently acknowledged that it was a mistake not to have
‘translated’ and customised the US approach for clinicians in the UK prior to the first
workshop:
“senior clinicians were sceptical because the first meeting made too much of the theoretical model,
alienating those who wanted simple examples they could apply to their own clinical practice.” (Kerr et al,
2002, p. 166)
Failure to do so meant that the CSC did not get off to a strong start, ‘early wins’ did not
always come to fruition and, as a consequence, project teams took longer than necessary to
begin to see the benefits of participating.
5 WHAT WAS THE PERCEIVED VALUE AND IMPACT OF THE METHODOLOGICAL APPROACH?
Key findings
Interim and final views about the overall methodological approach were generally positive.
Views as to the value of specific components of the approach were more mixed: dedicated project management
time and process mapping were very highly rated whilst conference calls and the CSC listserv were less
helpful.
Most strikingly, given their more central role in the overall approach, almost 50% of respondents to the end of
CSC postal questionnaire did not find the team self-assessment scores helpful.
5.1 Interim report findings
The interim evaluation report (Parker et al, 2001) reported that early views about the
improvement model being implemented and developed by the CSC were generally positive.
Guiding the process of change was acknowledged as an important component as it provided
a structure for improvement activities.
“You gather the data, see what’s out there, map the journey, see where the blockages are, try a small
change, if it works, you gain people’s confidence, and then you can start spreading it out a little bit. The
strategies are quite useful because they get you thinking about how it fits what you’re trying to do, what
strand you’re looking at.” (Project Manager)
Perceptions of specific elements of the improvement methodology, however, were mixed.
Aspects that were rated as most helpful were process mapping the patient’s journey and
piloting proposed changes with small samples.
“By keeping that very small and very focused, it keeps the numbers of the patients down, which means
we actually get to look more closely at those patients, rather than trying to tackle such a huge area and a
huge number of patients… we found that if you present change to people in a very small way, with very
small numbers, they’re much more…much more keen to actually have a go, than they are if you are
saying ‘right, we want you to change your practice completely’. It’s almost proving to people that it can
be done.” (Programme Manager)
The most important emerging concerns reflected in the interim report were that at times the
methodology appeared to take precedence over the prime goal of the CSC, i.e. to improve
cancer services, and became a potential constraint to improvement. It was perceived to
prescribe how to go about change and therefore had the potential to inhibit the natural flow
of progress.
“…It’s a bit artificial what people are doing…If there wasn’t this stress on completing a certain number
of cycles then maybe we could concentrate on real issues. If there wasn’t this demand for numbers of
cycles done.” (Tumour Group Lead Clinician)
Even at this early stage of the process, participants suggested factors that in their opinion
either helped or hindered participation in this type of initiative, and the process of change
itself. Less positive views about the early stages of the CSC need to be seen in context. The
tendency to be critical about the method and the way it was introduced is likely to be related
to natural resistance to change; to blame the messenger and the message is a way of coping
with uncertainty. Some participants who were initially cynical admit that since they have
become involved in doing the work, their attitudes have changed and that they now support
the overall methodology.
5.2 Participant rating of components of the CSC improvement approach
Table 8 shows how highly respondents to the end-of-study postal questionnaire rated
specific components of the CSC improvement approach.
TABLE 8
How helpful overall did you find the following components of the CSC improvement
approach in the context of your role in the CSC programme? (presented in order of highest %
of ‘very helpful’ responses) (n=96)
Aspect                          Very      Quite     Not            Not at all  Not        Missing
                                helpful   helpful   particularly   helpful     involved   data
                                                    helpful
Dedicated project manager       78        10        3              2           6          1
Process mapping                 75        11        7              1           6          -
National learning workshops1    50        35        7              2           5          1
Capacity & demand training      50        28        8              3           11         -
National one day meetings2      46        23        6              2           24         1
PDSA cycles                     33        43        20             2           2          -
Monthly reports                 30        43        23             4           1          -
Change principles               24        58        18             1           1          -
Improvement handbook            16        42        29             13          2          -
Team self-assessment scores     14        33        40             8           5          1
Listserv                        9         30        19             20          22         1
Conference calls                9         27        39             5           20         1
[source: end of study postal questionnaire, May 2001]
1 See page 73 for ratings for each workshop.
2 See page 75 for ratings for each specific meeting.
Those aspects which were rated as ‘very helpful’ by at least 75% of respondents - dedicated
project management time and process mapping skills - have been discussed in chapter 3 as
two of the six ‘key levers for change’ in the CSC. Other components which were particularly
positively rated were the national learning workshops and one day meetings, and the
‘capacity and demand’ training (which was also discussed in chapter 3). The workshops and
one day meetings were key mechanisms for facilitating the networking and multidisciplinary team working which were also identified as ‘key levers for change’; more
specific comments relating to the content and style of these events are presented in this
chapter.
Five aspects were rated as either ‘not particularly helpful’ or ‘not at all helpful’ by over 25%
of respondents:
- Team self-assessment scores
- Conference calls
- Listserv
- Improvement handbook, and
- Monthly reports.
A follow-up question asked respondents to identify those specific components of the CSC
improvement approach that they had found most helpful and least helpful, and to describe in
what way they were helpful or less helpful. Table 9 presents the four ‘most helpful’ and four
‘least helpful’ components and, in doing so, lends further weight to the overall ratings in
table 8 and the analysis of the qualitative interview data:
TABLE 9
Most helpful and least helpful components of CSC improvement approach
Most helpful components              Least helpful components
Process mapping                      Listserv
National workshops                   Conference calls
Dedicated project management time    Team self-assessments
Capacity and demand training         Theory/jargon
[source: end of study postal questionnaire, May 2001]
We also analysed the responses to the end of CSC questionnaire by CSC programme
(appendix 7). The aim was to identify any differences in the experiences of the participants
according to the programme to which they belonged1. The percentages show the proportion
of respondents from each programme who rated the stated aspects of the CSC as ‘very’ or
‘quite helpful’2. With regard to the specific aspects of the CSC improvement approach,
respondents from programme C were relatively positive about four aspects (team self-assessments
(70%), monthly reports (100%), national one day meetings (90%) and listserv
(70%)); those from programme B were relatively positive about three aspects (PDSA cycles
(100%), Improvement Handbook (89%) and monthly reports (100%)); and those from
programmes D and E were relatively positive about two aspects (D: national one day
meetings (89%) and the Improvement Handbook (89%); E: listserv (62%) and conference
calls (62%)). However, respondents from programme A were relatively negative about four
aspects (PDSA cycles (47%), Improvement Handbook (35%), monthly reports (53%) and
team self-assessments (24%)); those from programmes F and G were relatively negative
about one aspect (F: monthly reports (40%); G: listserv (18%)).
1 Ideally, the unit of analysis would be at the project team/organisational level. However, the broad scope of
this evaluation has not allowed such in-depth research to be carried out. In addition, given the relatively small
number of respondents from each programme, these figures and differentials are purely indicative. A
differential of +/- 20% compared to the overall rating from all the programmes has been taken as indicating a
‘marked difference’. We are not claiming that these analyses in any way prove ‘cause and effect’ but are merely
raising the question of a possible association and generating hypotheses. To demonstrate anything more than
this would require a much more focused and in-depth study.
2 As only five (56%) and two (22%) responses were received from programmes H and I respectively, they were
excluded from these comparative analyses.
A similar analysis compared the responses of project managers (n=38) and tumour group
lead clinicians (n=40) (appendix 7). The relatively lower ranking given by lead clinicians to
'capacity and demand training' (66% versus 85%) is presumably explained by the fact that
the majority of clinicians did not directly receive any training in this area. The 'national one
day meetings' seem to have been rated higher by the project managers (81% versus 56%) but
almost 30% of lead clinicians did not attend any of these events. PDSA cycles seem to have
been less well received by lead clinicians (59% versus 91% of project managers) which is
perhaps to be expected and consistent with qualitative findings from the OSC (Bate et al,
2002). Interestingly, 'monthly reports' were rated higher by project managers (70% versus
52%) and a similar pattern emerges with the Improvement Handbook (72% versus 45%). An
opposite pattern emerges regarding ‘team self-assessment scores’, where project managers are
much less positive (only 27% saying ‘very’ or ‘quite helpful’). Finally, lead clinicians were
much less positive about 'listserv' (19% versus 60% of project managers - although again a
high proportion of lead clinicians did not participate in this aspect of the approach).
5.3 Overall views of methodology
The importance of the CSC improvement methodology was acknowledged. However, a
spectrum of views, ranging from enthusiastic support to questioning of its value, was
expressed. Enthusiasts viewed the methodology as helpful in guiding the process of change
because it provided a structured approach to facilitating systems improvement.
“There will be dividends for patients in my cancer network … Yes, I can understand the American
philosophy, you’ve got to keep them to targets, you’ve got to have monthly reports, I can understand that.
And I know you need a bit of drive behind people to keep them on target and so on. So that’s why I’m on
board with it, because I think it’s a good, sound, basic concept, and it’s certainly helping us to change the
way that many of my colleagues approach their cancer services. Some of the surgeons at long last are
beginning to realise that they don’t come first. (laughs).” (Programme Clinical Lead)
One important, and novel, aspect to many was that the approach encouraged changes to be
tested on small numbers of patients first and that it was ‘okay’ to fail:
“I think the best bit for me was that it didn’t matter if it didn’t work. Let’s just try it. It doesn’t matter if it
doesn’t work, so there’s no sense of failure. Because most of the time, we change things and you’re stuck
with the change, instead of having the opportunity to step back.” (Focus Group - Project Managers)
For others who were cynical at first, the value of the methodology had become more
relevant over time:
“I’m a bit more converted to it now, I thought the method was mainly hot air to start with, to be honest. I
think it does focus everybody else’s mind…The word bullshit comes to mind. A lot of people were
talking in Americanisms. It seems to me that it appears more relevant now, and the people who were
trying to put it across seemed to have changed their way of doing it, they’ve sort of toned it down a bit.
This ra-ra stuff, they’re trying to make it a bit more relevant to clinical practice now. I think both sides
have probably changed a bit. Clinicians are listening, and it seems to make a bit more sense.” (Tumour
Group Lead Clinician)
Some perceived that most of what the method prescribed was another way of describing
what most professionals do intuitively on a daily basis and that, as common sense, its
importance tended to be oversold by the programme:
“We’ve all done this, without giving it the same name, it’s essentially ‘suck it and see methodology’.
They suck it, see and test. It’s not scientific, that’s one of the fundamental problems and I think people
can criticise it for that. But I think as far as developing process is concerned, which is a lot of the problem
that we’ve got in health care, it’s extraordinarily good…It’s what a good surgeon will always do, or have
done.” (Tumour Group Lead Clinician)
“I haven’t thought at all about PDSAs apart from when I’ve sat down to do my monthly report, I have to
say. And I’ve thought, ‘oh, now what have I done this month?’ And obviously, we know what changes
we’ve made but I haven’t given a second thought to PDSAs until I’ve sat down every month and thought
what I need to put down for this. To me the methodology hasn’t played any significant part, consciously,
in what I’ve done or been involved in at all.” (Project Manager)
The difference in health care systems between the USA and the UK prompted a view that
aspects of the methodology were not readily transferable, reflecting an earlier point about the
need to customise the approach to the NHS (especially for clinicians):
“It is completely different over there (America). We’ve got a philosophy to change, and an entire culture
to change over here. And they’ve got the money over there to do it, and it’s private, and the GP has a
thousand people on the list…it’s very, very different. That annoys the consultants, blind…It annoys them
so much…I think a lot of it is because they are American, to be honest…I really do” (Project Manager)
A more critical view was that at times the methodology appeared to take precedence over
the prime aim of the CSC, i.e. to improve cancer services, and as such became a potential
constraint to improvement. The methodology was perceived to prescribe how to go about
change and therefore had the potential to inhibit the natural flow of progress. In some ways
it appeared to contradict what was understood as one of the main purposes of the
collaborative, i.e. to try things out until you found out what was the best way.
“PDSA may well be a way of doing something that works, but it’s not the only way…and I think we’re
being straight-jacketed into having to do something in a particular way. Like we’ve got the four headings,
and another four headings within each heading, and I don’t see why we have to slavishly stick to it?”
(Tumour Group Lead Clinician)
“It depends on your sort of personal approach really. Because it doesn’t fit with me, because I want to
appraise things, and critique them, and think does this theory fit, is it going to work on the ground, is it
going to work here, what can we take from it, what can we adapt? And I think a lot of clinicians have that
view as well.” (Project Manager)
Similarly, the requirement to produce specific theoretical components such as change cycles
and monthly reports was viewed as somewhat bureaucratic and artificial; a distraction
from focusing on the actual tasks:
“I think there is concern that there is a slightly bureaucratic approach to it and that you are expected to go
through a number of PDSA cycles and that you have to list PDSA cycles. I’m sure that some people are
sort of dreaming up PDSA cycles to make their numbers look good. So partly, it’s a bit artificial what
people are doing.” (Tumour Group Lead Clinician)
Managers often reported that they tended to shield clinicians from too much direct contact
with the methodology. For most managers it was a deliberate strategy because they were
concerned that methodological detail would alienate clinicians:
“I don’t think the methodology has been an issue to the clinician. And I think probably some of that’s
down to me because I don’t think the clinician needs to know all about PDSA cycles, because frankly he
doesn’t care about that and they were very turned off by the first conference…they were very turned off
by all the Americanisms and all the jargon and they don’t need to know that. And I don’t think it matters
whether you call it a PDSA or what you call it, and I don’t think they even need to necessarily know
about that…I kind of protected the clinician from it, because I know it irritates him.” (Project Manager)
The most negative view questioned whether there was a need for this particular format for
the national programme at all. Basic training in the techniques of service redesign together
with tumour group meetings was deemed preferable to national events by some:
“Banging on about PDSAs and strategies is not what the clinicians need. But if we had more regular, bimonthly meetings, one day where you got all the people from one tumour group together, that would have
been much better, and you would have got more response from the clinicians.” (Project Manager)
This chapter now goes on to examine those specific components of the methodology which
have not yet been discussed in detail.
5.4 CSC Improvement Handbook
At the beginning of the CSC (November 1999) the national team produced an ‘Improvement
Handbook’ which provided the original 43 project teams with:
- Information about the goals, background and context to the CSC
- Principles for change to help focus the improvement efforts of the project teams
- Identification of the potential to improve care for patients with bowel, breast, lung,
  ovarian and prostate cancer, and
- Suggested measures for teams to gauge the impact of their improvement efforts.
Figure 8 shows that a majority of respondents found the handbook to be helpful but that
some 39% did not.
FIGURE 8
How helpful did you find the CSC Improvement Handbook?
[Bar chart showing the percentage of respondents answering ‘very helpful’, ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’.]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Those who found the handbook less helpful felt that they had not received sufficient
practical advice and training as to how to implement the change strategies:
“But we were just given them. We were given an orange book and we went through some very short
tutorial type things but we weren’t really instructed in their use. We weren’t really given the skills to use
them. It was basically there it is, off you go. We asked for some practical examples all the time so that we
can actually learn from it because abstract management text, not many people actually really learn from
reading a management book.” (Focus Group - Project Managers)
In phase II of the CSC such practical examples have been included in the series of ‘Service
Improvement Guides’ - referred to earlier - which have been based directly on the lessons
learnt in phase I and incorporate the change principles discussed below.
5.5 Change principles
Initially the CSC developed change principles that were generic across all the tumour types:
“the change principles represent the basic ideas which have been shown to lead to tangible improvements
for people with cancer.” (Service Improvement Guides)
The 28 principles focused on four areas in each of which the participating teams had to make
progress:
- Connect up the patient journey,
- Develop the team around the patient journey,
- Make the patient and carer experience of care central to every stage of the journey, and
- Make sure there is the capacity to meet patient need at every stage of the journey.
In response to comments from participants in phase I the CSC national team has now
developed tumour specific guides which are underpinned by the generic principles. These
have been produced as 14 ‘Service Improvement Guides’ for phase II of the CSC (five are
tumour specific and nine cut across all tumours covering topics such as pathology, palliative
care, radiology etc).
Figure 9 shows how helpful participants found the change principles used in phase I of the
CSC. Eighty per cent of respondents found the principles either ‘very’ or ‘quite helpful’ and
none had any strongly held negative views about this component.
FIGURE 9
How helpful did you find the CSC change principles?
[Bar chart showing the percentage of respondents answering ‘very helpful’, ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’.]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Similar to the comments on PDSA cycles presented later in this report, there was also a
perception amongst some participants that projects were expected to fill in the change
principle “slots” as if changes should occur according to change principles rather than
following the natural flow of the process:
“S: You can find yourself trying to fit them around what you’re trying to do. They don’t always fit and
you just find yourself putting something and then also you get suggested that perhaps you haven’t got a
C2 anywhere in your PDSAs or a D4.
M: They have to add value don’t they? Reports have to add value, principles have to add value and if they
don’t you would have to question them and I agree, I used to sit watching the reports thinking is this a
C1?
P: It’s totally haphazard whether it happened that a report change you were talking about was a C1 or a
D4 or whatever. It’s what fitted in with the evolving change in the service that you were planning locally
and came naturally next. Rather than theoretically driven, ‘oh we must have another C1 because we
haven’t looked sufficiently at patient satisfaction’ or whatever it is.
M: You’re right because people would sit in a room and have an idea and they wouldn’t have the idea
about the change principle, it would be about the idea wouldn’t it, ‘I wonder if we could do this? We
could put a notice board up’ and I certainly never went back and thought ‘now I wonder what change
principle that is’.
T: You’d have just lost wouldn’t you if you’d said right, let’s look at the change principles and use that.
There was quite a bit of intuition and common sense about what seems to be the thing to do next.” (Focus
Group - Project Managers)
5.6 Plan-Do-Study-Act (PDSA) cycles
Despite some of the concerns relating to the use of PDSA cycles voiced earlier in this
chapter, figure 10 shows that this tool was rated relatively highly in the CSC as compared to
that reported in the OSC (Bate et al, 2002). A third of respondents rated this component as
‘very helpful’ and a further 41% as ‘quite helpful’. The CSC national team report that, using
this approach, over 4,465 ideas were tested in the first twelve months of the Collaborative
and over 600 changes were implemented (The Cancer Services Collaborative. Twelve
Months On). Later estimates suggested “4,400 changes between September 1999 and August
2000 involving about 1000 patients” (Kerr et al, 2002).
FIGURE 10
How helpful did you find PDSA?
[Bar chart showing the percentage of respondents answering ‘very helpful’, ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’.]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Focusing first on small changes with small patient groups, a fundamental principle of the
PDSA cycle, was a popular concept. Breaking tasks into smaller chunks of action was
perceived to make change manageable and facilitated participation by less enthusiastic
colleagues:
“I think it’s (small changes) very good, I think it’s excellent. Too often you get this brick wall, blanket
answer ‘you have to wait for the new IT system before you do that, you can’t do this because of this’ but
you accept that people are going to say you can’t, and then you just do it in a small way and say ‘look,
I’ve done it’ and that’s what happens.” (Programme Clinical Lead)
“By keeping that very small and very focused, it keeps the numbers of the patients down, which means
we actually get to look more closely at those patients, rather than trying to tackle such a huge area and a
huge number of patients. Some of the changes are so radical, we found that if you present change to
people in a very small way, with very small numbers, they’re much more…much more keen to actually
have a go, than they are if you are saying ‘right, we want you to change your practice completely’. It’s
almost proving to people that it can be done.” (Project Manager)
Its value was that it gave method to “common sense” and usually had no financial
implications:
“I think the good thing about this project has been, because the changes have been done so small, I mean
you’re only looking at one patient for the first PDSA, then you’re rolling it to two, then you’re rolling it
out to a few more, it’s been so gradual, it’s almost infectious. And it’s given people the confidence and
the reassurance that they’ve needed to know that some of the things that they want to change, are actually
achievable. Whereas if they’d tried a big bang, it wouldn’t have worked. So it’s about building up their
own confidences in what they’re able to do. And also in their own ideas.” (Programme Manager)
One of the key benefits of the small sample change strategy was that it does not attempt to
change the whole system all at once:
“I think that the change in my team is just amazing. And I think it is because of the PDSA’s and that you
haven’t got to change it all now, you can make a bit of a change and then a bit more and a bit more. I
think that’s one of the biggest things because I think if you go in and say to people “Right, you know we
want to change x, y,” it really frightens them, it’s like “how the hell?”. The big bang thing is one of the
big reasons why the NHS is so bad at changing because it expects to do it all in one go.” (Project
Manager)
For some, PDSA cycles were the key lever for change. Asked in one of the project manager
focus groups what the ‘levers’ were in the whole process, two project managers replied:
“L: PDSA’s - in terms of you could make a change without causing havoc, and that was a concern that a
lot of clinicians and other staff had. The system was chaotic already and it would have been the final
straw to cause havoc. But also understanding that if a PDSA didn’t work then that wasn’t a reason to
throw everything out of the pram – it was actually something to study and to work from.
S: PDSAs make you take your time - it stops you going for a big bang and it allows you to evaluate it. So
from my background, I like that. I’m very comfortable with that. A lot of clinicians are very much of the
“oh well, it’s obvious, let’s do it without looking at the consequences up and down the stream”, but by
doing PDSAs it allows you to see what’s actually going to happen and what affect it’s going to have
elsewhere.” (Focus Group - Project Managers)
The similarity of PDSA cycles to the audit cycle was seen as particularly useful, as it
provides rapid results compared to research, which takes much longer:
“The actual application of PDSA cycles, I like that method. I can see the advantages of audit type cycles.
It is an audit system, if you get into research it can go on and on, and the end result gives you nothing.
Where with this, you test it all the time and you get much quicker results, so I have no problem with the
PDSA cycles.” (Project Manager)
Some clinicians, however, reported having difficulty working with small samples. Having
been brought up on a ‘diet’ of randomised controlled trials, they found that the PDSA approach
contravened their intuitive preference for collecting data on large or representative samples.
A frequent comment was that the method was “unscientific”. However, there was a gradual
understanding that the method attempts to achieve a different outcome compared to
clinical trials:
“The idea of small changes appeal in the sense that you can manage them, but the trouble is that in terms
of medical science we are taught that little numbers like that, nobody takes any notice of them, you’ve got
to have big numbers with statistical significance and things like that, so it’s a difficult concept to get hold
of. But I can see the idea, it’s just to get you to try out little ideas rather than to publish papers.” (Tumour
Group Lead Clinician)
While the general view favoured using this strategy for change, these opinions varied from
the enthusiastic follower, who used the handbook and theory regularly,
“I refer to the handbook, and PDSAs, I actually found an old article from 1996 of Donald Berwick’s on
PDSAs, and I cut it out from the BMJ years ago, and it was just sitting there in a file, and I came across it,
and I thought, ‘yes, this all makes sense’.” (Tumour Group Lead Clinician)
to less favourable views claiming that PDSA was common sense, overrated and only one
example of many similar techniques:
“Generally we’re a group of intelligent people, with a reasonable amount of common sense (laughs) and it
does seem to be addressing issues that we probably all knew, and probably need to be honed down. But
naming them with fancy titles isn’t going to make a lot of difference. And having made the effort to do a
bit of reading since I learnt about the PDSA cycle, of course you do discover that there are plenty of other
management principles that might be applied, and this is just one of them. So I’d argue, a rather biased
approach down one management route…” (Tumour Group Lead Clinician)
Some participants had felt a pressure to ‘perform’ and report as many PDSA cycles as
possible regardless of their value and contribution to making service improvements:
“But there definitely is a kind of pressure and a kind of competitive element which some people respond
to by putting little changes in and some people don’t…I think you couldn’t just count up the numbers and
say twenty changes is good, five changes is bad…I mean half of me sort of thinks, oh I don’t want to be
part of this force to achieve - I’m not going to be moulded - but the other half of you still does think, well,
I’ve done all this work and it looks awful.” (Project Manager)
Some reported that, while they found the concept helpful, they became tired of the
continuous repetition of the phrase and felt that it had been taught in a patronising manner:
“We make jokes about it (PDSA) all the time (laughs), I don’t know really. I felt that we were treated a
little bit like babies. One felt a little bit as though we were sort of reps being taught by some commercial
concern that ‘this is how you do it boys’…I think a lot of people felt a bit like that about it. But the
concept is all right. It came across as a bit of American hard sell.” (Tumour Group Lead Clinician)
Another view - related to earlier observations on the scale of the changes made - was that
implementing small changes would reach a plateau stage when further changes would
require resources. A suggestion was that there were potentially three different types of
PDSA related to different requirements for funding.
“I think you can categorise the PDSAs into ones that are going to make changes and they don’t need
money, ones that are going to make changes but don’t need masses of money, and the PDSAs that need a
lot of money that can’t materialise overnight.” (Programme Manager)
5.7 Monthly reporting
Submitting monthly reports to a central body was generally viewed as an important element
in providing focus, and the reports were seen as a vehicle for showing others what the team
was doing:
“We find it (the reports) useful that we can show the people at the wider team meeting, because they
don’t meet with us regularly, it’s just once a month. They can see the benefits because they’re not as
committed and involved as we are…We had a really good meeting last month because they could actually
see some of the positive things that were coming through. I mean the graphs show it, we’ve got our end of
month achievements, anxieties and things like that, and they get photocopied and shared out to the team
and discussed at the meeting. It also concentrates my mind.” (Project Manager)
In the early stages of the CSC many project managers, however, declared their frustration
with the frequent changes in the format for reports, even though they understood that this
was an inevitable consequence of being part of an initiative unfamiliar to the NHS. It was
acknowledged that the reporting format improved and that the content became more
meaningful:
“I wish they can decide how they want it, it changes every flipping month. Hopefully it’s going to be
better now. A couple times I have actually said, ‘why are we doing this because it doesn’t tell me
anything, and I’m writing it, what does it tell anybody else?’ I think it’s useful, I think it’s very good to
say what we’re doing, what we’ve actually achieved this month. That’s probably more beneficial because
it tells you more about what’s going on in the background and how people are beavering away to get their
team together, and all that sort of thing.” (Project Manager)
Figure 11 shows how helpful, overall, respondents found the monthly reports.
FIGURE 11
How helpful did you find monthly reports?
[Bar chart showing the percentage of respondents answering ‘very helpful’, ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’.]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Amongst the 29% of participants who did not find the monthly reports helpful, the reporting
structure was criticised for being too focused on recording quantity in terms of numbers of
PDSA cycles completed, rather than the quality or scale of the changes that had been made
in each period. Another concern was that the reports did not appear to encourage learning
from less successful activities. The implicit emphasis in the CSC “run charts” appeared to be
focused on an expectation that they should always be moving upwards in the direction of
improvement:
“I don’t know what they are hoping to get out of the monitoring? Maybe it’s because it’s just not
sophisticated enough yet, but is it who’s got the fanciest run-charts and who’s organised enough? Or are
they really studying in detail which direction the run-charts are going in and then linking that back to
which PDSAs have actually taken place? Because then it would mean something.” (Project Manager)
Regular reporting to the central team was accepted as an important element of the
programme and useful for documenting the process and progress of developments.
However, the format and general usefulness of the reports were questioned. Not only were
project managers frustrated by the regular requests for changes to how they were reporting,
they were also concerned that the content was not meaningful:
“Sometimes it felt like requests for information or changes was more important than the redesign work
and had to take precedence over the project work.” (Project Manager)
“S: I guess it’s about how helpful it is what your putting down, if it’s not that helpful then it just seems a
bit bureaucratic to be reporting things that are..
Cl: It’s reporting for reporting’s sake isn’t it?
S: Precisely. I mean it’s a big enough chore already really without having to make it any bigger than it
needs to be.” (Focus group - Project Managers)
Less frequent reports, perhaps every second month, were suggested as an alternative by some,
as the expectation for change to occur within one month was considered unrealistic:
“You don’t have to measure every month to show an improvement. In fact in some ways if you’ve got a
change in place it can take quite some weeks or months before the product of that has come around and so
every month you’re religiously reporting to your team, if you’re unwise enough to show your team, and
they saw no change therefore in data. I do the reports to satisfy them [NPAT], not us, although I liked the
new reports with the issues and the challenges and the softer things can come out.” (Focus Group - Project Managers)
The use of run charts to continually assess progress by project teams has been problematic in
other Collaboratives, especially with regard to establishing a baseline from which to evaluate
the impact of subsequent changes (Bate et al, 2002). Global measures illustrated on graphs
for a tumour group with small numbers were another aspect rated as inappropriate.
5.8 Team self-assessment scores
During the course of the CSC each project team was required to make a monthly self-assessment
of its progress using a scale adapted from an original provided by IHI:
FIGURE 12
Team self-assessment scale
1    Early stages
2    Activity but no changes
3    Modest improvement
4    Significant progress
4.5  Dramatic improvement (achieved all initial targets)
5    Outstanding sustainable progress
The CSC national team has stated that overall the project teams made ‘outstanding progress’
and that internationally - on the basis of the self-assessment scores - the CSC is one of the
highest scoring BTS Collaborative programmes ever, with 86% of teams having a self-assessment
score of 4 or more by the end of the collaborative (Learning Session Four, April 2001). A
published report suggests that ‘a national planning group [had] validated self-assessment scores
[and] in November 2000 20 teams had a self-assessment score of 4 or 4.5’ (Kerr et al, 2002).
However, our qualitative data point strongly to participant doubts around the validity and
relevance of the scores and, more importantly, the value of the self-assessment exercise
itself as part of the improvement approach. Strikingly, the compilation and dissemination of
team self-assessment scores were not found to be particularly useful by over half of the
respondents to the questionnaire (figure 13):
FIGURE 13
How helpful did you find team self-assessments?
[Bar chart showing the percentage of respondents answering ‘very helpful’, ‘quite helpful’, ‘not particularly helpful’, ‘not at all helpful’ and ‘not involved’.]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Clearly, this is a mixed response and there were those who found that the monthly scores
offered encouragement to their local team:
“I think they’ve [the scores] almost given the team another lift and onwards and upwards, which I was
quite surprised about. I thought they’d be quite sceptical about it in that they didn’t know that I was doing
this every month.” (Focus Group - Project Managers)
For some participants it seemed unclear whether the scores were comparative - across all the
local teams - and meant for the national team’s benefit, or an internal improvement measure
for use purely within their own project team. This led to feelings that the scores were given
too much emphasis externally and left teams with little ownership of the scores:
“I felt very uncomfortable with self-assessments anyway. I just thought they were very strange. If they
were kept to self-assessments, then fine, but they weren’t and they were used as measurement and I think
that was wrong. I think people would have been much more comfortable with it if it had just been a self-assessment, but when you see that you’ve got x number at 4 and x number at 3 at national conferences,
then that is a measurement.” (Programme Manager)
“I think they’re a complete misnomer to be honest, because I mean we were under the impression to begin
with that they were self assessments scores. But to be told later on, that actually they’re being watched
and you’re not quite a 4 even though you thought you were: that doesn’t really sit with self assessment.”
(Focus group - Project Managers)
“They [the scores] were changed by NPAT. They were upgraded [laughs] on the basis of what they
thought of our monthly reports - and what we were doing - that we actually deserved 0.5 higher than we’d
given ourselves. Fair enough, is that a self-assessment? That’s when I began to not take them very
seriously…It’s strange. I think it should be called something else, I don’t think it should be called self-assessment. I think we should have just submitted our reports and be marked.” (Project Manager)
Perhaps most importantly, some felt that the scores didn’t reflect real improvements in
services to patients:
“They’re not very diagnostic of what really is going on. And when you look at the other teams and you
know how well or how badly some of their teams work, and they rate themselves as a 4.5…It’s not
possible…They just concentrate on the bits to meet the targets. Targets are a good focus, the dots on the
graph and the target to aim for, but it’s not the be-all and end-all, it’s just one way of measuring your
progress. Patient focus is the ultimate.” (Project Manager)
In some projects it seemed that deliberations about the score did not take place within the
local teams. Rather, it was left to the project manager to determine progress each month, with
the result that the scores remained peripheral to the progress of the project as a whole:
“I was the first to say that I think they’re a bit of nonsense. I can’t say I shared mine with the team, and
when I put down a 5, and this is being absolutely honest, and it went through, it was the biggest shock to
me when I heard that it had actually gone through. So I then had to tell the team about the 5 who had
never heard about it. Because I saw it as a project management score, it’s not something I talk – well
occasionally yes, we had a bit of a giggle and said “Oh yes, we’re up to a 3” or whatever, but they never
knew it was a serious – well they still don’t know. I mean I don’t know about anyone else, but it wasn’t
something that drove the project.” (Focus Group - Project Managers)
In whatever way the scores were perceived by the participants and whatever their intended
use, interviewees commonly stated that they needed to be validated in order to be
meaningful:
“The giveaway is when you look at some of the people’s changes, and they obviously give themselves
quite high, and you look at some of the changes they’ve done, and they’re just like pathetic really, I don’t
consider them to be changes, I think there should have been some sort of audit of that maybe.” (Focus
Group - Project Managers)
Related to this was the sense that the criteria used to determine the scores were too vague. It
was apparent that there were lots of misconceptions around the scoring:
“I think in the scoring system, I mean it was a bit vague with its definitions, and I think that was probably
to get around the problem of being too prescriptive, and saying you know, you’re a 4 if you get your
waiting times down to this that and the other, because then they wouldn’t take any notice of where you
started from, and it was supposed to measure improvements rather than reaching a specific goal
necessarily. But the definitions were very vague. If you’ve shown modest changes it’s a 3, if you’ve
shown significant changes it’s a 4, what’s modest, what’s significant? I mean it could have been tightened
up”. (Focus Group - Project Managers)
There was also questioning of why the self-assessment scores were needed if the data
collection systems were already in place; performance could be monitored much more
directly in that way:
“I can understand why NPAT want it, because I assume if they can prove it that’s how they can get more
money from the government to carry on phase two - I guess that’s how they can use it - to show that
there’s been improvement. But I would have thought if we’re collecting data which we know is correct
hopefully, that’s where you show that there has been improvements, not by some self assessment because
I think that’s far too arbitrary.” (Focus Group - Project Managers)
However, even though the scores were often seen as arbitrary and unimportant, they could in
fact have a demotivating impact on the team:
“the self assessment score can be demoralising if you can’t improve due to things that are not in your
control.” (Project Manager)
“Again, it’s this sort of ambivalent thing, that I feel that I can live with myself, but you still feel that you
will be judged. And although people say it’s not like that, you feel that it is…I think it’s about where you
start. I think as a service we started at rock bottom, and I suppose you could say that we should have
taken that into account in setting targets and set far more modest targets.” (Project Manager)
Consequently, a number of participants suggested ways of improving the system. Many of
these suggestions focused on the need to establish a meaningful baseline for each local team
as suggested by the last quote above:
“You see, I think that the movement away from what your base line is should be what’s measured rather
than the way it is now, where everyone - where it’s defined as to what point at one is and what point a
two is and that. But I recognise that would be much more difficult to make the definitions of and so forth.
But for me that would be more meaningful.” (Focus Group - Project Managers)
5.9 National learning workshops
Unlike other components of the CSC, views were less varied about the value of attending
learning workshops; for the majority the workshops were the best part of the collaborative
(although different reasons were given as to why they were of value):
“Great opportunity to share learning and network nationally.” (Programme Manager)
“Great opportunity to share what worked, learn the approach, agree ideal patient pathways, meet others
and hear clinicians talking about their own service.” (Project Manager)
As in the OSC (Bate et al, 2002), it was the smaller group meetings - the ‘breakout’
sessions - and particularly the tumour specific groups that were most highly valued (these
are discussed further in chapter 8).
Taking participants away from their everyday work in this way was viewed as essential to
meet the goals of the CSC:
“I think this (the learning workshops) has been one of the most important things of the collaborative. It’s
been an opportunity for most people, be it in their separate networks or Trusts to meet and talk about
things. And not necessarily just clinical things, but to mouth about whatever they need to talk about. “
(Tumour Group Lead Clinician)
“You’ve got to take people out of their environment where they can’t be got at, otherwise your phone
goes, your bleep goes, you pop down the road to see the patient who’s ill, and the whole thing collapses.
If you try and have that kind of meeting in working time, you would be jolly lucky if you got half your
clinicians there. You’ve got to actually take them away.” (Programme Clinical Lead)
“Every time when a workshop comes it’s taken extreme effort to convince everybody that they really
need to take two days out. If you could take part of the team, it meant that we could get quite a bit done at
the workshops. We didn’t necessarily go to all of the things that we should have gone to but we had
protected time for the three of us to crack on to do the stuff. And so that was beneficial to me and to them,
they’ve said the same thing…So the workshops got better. The specific interest workshops1 were
excellent.” (Project Manager)
“Those learning workshops are excellent and it focuses everybody and they all get taken away for a day
or two. Yes, it’s hard work, but they have been good…You can have separate days locally, but getting the
national, it’s for the entire national team to get together and just have a huge pat on the back and progress
reports, and tell us what everybody else is up to. Because otherwise you just think, it’s just you and your
project. You need to hear how everybody else is.” (Programme Manager)
Workshops were valued as a necessary element because they formalised the methodology and
brought home the fact that the methods were not merely “common sense”:
“The conferences are important - I know there has been a lot of criticism about them - I think I’m a bit
more open minded about that. As I’ve said, we haven’t been very good at focusing on process in the past,
and clearly there is a science behind the question of process which doctors are not familiar with on the
whole. So when they went along and heard the Americans talk about process, they thought it was a load
of rubbish, because on the whole it’s common sense. It’s like a lot of management, if you buy a book on
management and read it, it seems a bit boring in a sense, it’s common sense. But it’s surprising that
people don’t all have the same degree of common sense, do they? So you have to formalise it. And you
formalise it in a sort of methodology. So I’m not totally cynical of going along and listening to the
methodology as long as it’s put in context.” (Programme Clinical Lead)
For many, the highlight of these events was the opportunity to share experiences and ideas
with colleagues who work in a similar area:
“Yes, I do think going away for two days is good. I think it makes me get out of the hospital and focus
totally on what’s going on. Also it helps me to see what’s going on in other places, and I think the
collaborative has encouraged us to visit other sites.” (Tumour Group Lead Clinician)
1 The national 1-day meetings are discussed in the following section.
“Oh yes, that’s a good bit, because there are sessions within the workshops where you’d get the project
teams together, which was good for half hour. But we were meeting with our project teams regularly and
seeing them all, and we didn’t need time for just us, we wanted to be networking with other people. In
fact more time spent networking would have been better than a lot of sitting through sessions - in fact we
didn’t we went and sat outside and did work in another room didn’t we?” (Focus Group - Project
managers)
Project managers mentioned that clinicians hearing about initiatives elsewhere in the country
while attending national workshops often served as a spur to local action:
“These national meetings and sense of learning from others has been very useful, because the lead
clinician in this project went to Harrogate in February and heard that people were getting patients in and
biopsy them in a one-stop service, and realised that it was taking three months here. Realising that it was
a very different here and that other people can do it, it really kicked him up the bum a bit.” (Project
Manager)
Reflecting these positive responses, only 10% of respondents to the questionnaire who
attended the national workshops found them ‘not particularly’ or ‘not at all helpful’ (figure
14):
FIGURE 14
How helpful did you find National Learning workshops?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, not involved]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Figure 15 shows the proportions of respondents who (a) attended each of the national
workshops1 and (b) rated each as either ‘very’ or ‘quite’ helpful. The ratings reflect the
comments from interviewees - and indeed those from the national team (Kerr et al, 2002) - that the first workshop in Dudley was not particularly well received but that subsequent
events were much improved.
1 The Newport workshop was a ‘splinter’ event run in parallel with the larger workshop in Blackpool - hence the relatively low attendance.
FIGURE 15
National Learning Workshops
[bar chart showing, for each workshop (Dudley, Harrogate, Canary Wharf, Blackpool, Newport), the % of respondents who attended and the % of attendees rating it ‘very helpful’ or ‘quite helpful’]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Interviewees were not asked to comment in detail on each workshop; however, in response to
a broad question on their views of the collaborative methodology, many chose to give strong
views about the first workshop.
As figure 15 shows, over the course of the CSC as a whole only a minority of attendees
rated attending the national conferences as a poor use of their time. However, even amongst
those who rated the workshops more highly there was often a preference either for shorter
(one day) national meetings or more regionally-based events. A particular concern was that
the two-day national workshops took practitioners away from patient care:
“And it’s not just doctors, but it’s nurses, managers, taking them away for two days is a great chunk of
my time, and the work I can’t do in those two days, doesn’t go away. Particularly a place like this where
the work is largely consultant based, I have to do it when I come back.” (Tumour Group Lead Clinician)
“Because you’ve got a government telling you this, this, this and this, these are all your targets - but
we’ve got no money in the Trust and then they take me - who’s expensive - two consultants, a nurse
specialist… we cancelled theatres, then you’re put up in a hotel and its just a lot of money. And they
started saying ‘wow, they could have given us that money and we could have done it locally with the
whole team’.” (Project Manager)
Some participants who broadly supported the workshop experience viewed the content as
inconsistent or repetitive:
“I don’t mind going away for two days if it’s all productive, highly thought out…But the conferences
aren’t…there’s an awful lot of duplication, and they haven’t really spent enough time thinking or liaising
with people to say ‘what do you want’.” (Tumour Group Lead Clinician)
“I felt that a lot of those – I mean the first couple of learning workshops – well the programme manager
training days were useful. But then we’d go and we’d be sat there, and we were going over the same
things, again, and I didn’t find that useful at all.” (Focus Group - Project Managers)
As already mentioned, meeting in tumour-specific groups was regarded by most as the best
part of the workshops. For others, meeting as a local project team was the most useful:
“The best bit about them (the workshops), really, was that it allowed our own team to get together for
periods of time and to brainstorm things, whereas you probably would never do it if we were all here
because we are all so busy. But when you’re bundled together in a hotel, then it allows you to get
together. So paradoxically it’s helped teambuilding internally, rather than anything else.” (Tumour Group
Lead Clinician)
As revealed by some of the comments above, keeping clinicians engaged was viewed as the
key challenge for workshop organisers. The content of sessions needed careful consideration
and planning to make them relevant for clinicians. Managers often reported that they listened
to the content of workshops from a clinician’s perspective and that, although the meetings did
improve, there was room for further progress:
“The generic stuff, that takes a day for them to explain, is pretty worthless, I have to say. Consultants
have no time, they’re very busy, they just want to start, finish, end, gone. And quite often we come back
from the big workshops and they’d say we could have got that done in a half a day, it became, I don’t
know, a business all of it’s own.” (Project Manager)
A general view from those interviewed was that the second workshop (Harrogate) was much
better than the first (Dudley) and that the third workshop (Canary Wharf) was also beneficial;
the fourth workshop (Blackpool) was less highly rated but still an improvement on
Dudley. The national team clearly learnt the lessons from Dudley and this is reflected in the
assessments shown in figure 15 and in participants’ comments:
“My guys didn’t understand the big meetings for the first two meetings, you know, and the nurse
specialist just kept saying ‘I don’t need to do this, I don’t want to do this’ and I kept on saying ‘you have
to do it, because like if I have to suffer it, then so do you’ which is not the right approach, really. I’d say
that the first couple of workshops were wholly damaging. But they did get better, they did get better, they
did begin to listen…” (Project Manager)
The fact that known clinical leaders took leading roles in the later workshop presentations
was particularly valued, as was less time spent on methodology and more in tumour specific
discussion groups.
5.10 One day workshops
As well as the two-day national learning workshops discussed above, the CSC organised a
series of (usually one-day) ‘topic-specific’ meetings on a number of issues ranging from
engaging with the Chief Executives of Trusts to palliative care. Figure 16 shows that,
although almost a quarter of our respondents did not attend any of these days, they were
generally found to be helpful by those that did:
FIGURE 16
How helpful did you find National one day meetings?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, not involved]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Figure 17 shows participant ratings for each of the specific events. Attendance amongst our
respondents ranged from 7% (‘Primary Care’) to 25% (‘one-day radiology’). Both the
radiology events were highly rated as was the patient information meeting:
FIGURE 17
National meetings
[bar chart showing, for each meeting (Chief Executives, Clinical leads & Don Berwick, 1 day radiology, 2 day radiology, Primary care, Palliative care, Patient information), the % of respondents attending and the % of attendees rating it ‘very helpful’ or ‘quite helpful’]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
The general sense was that these events provided more focus than the national workshops and
‘gave lots of ideas’:
“T: these have been really good because it just seems to be you go with a theme and you sit there and you
achieve it and come away at the end of the day and those have been much better, you know, small and
focused because you’re still mixing with people who are doing it and you get something out of it at the
end of the day.” (Focus Group – Project Managers)
The only criticism of any of the national one-day meetings concerned the ‘Chief Executives’
day, which some saw as ‘unfocused’ and a ‘waste of time’. Otherwise the radiology
events were ‘very well run and informative’ and Don Berwick’s session with clinical leads
was variously described as ‘excellent’, ‘it rescued the project’, ‘spoke with a clarity that
would have transformed and inspired the start of the project at Dudley’ and ‘inspirational.’
5.11 Listserv
The listserv, an electronic mail discussion list dedicated to the CSC, was one of the lowest
rated components of the Collaborative approach. As Figure 18 shows, 40% of respondents
did not find it helpful and a further 23% did not use it:
FIGURE 18
How helpful did you find the CSC listserv?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, not involved]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Nonetheless, views about the value of the listserv were still varied. Support for this facility
appeared to be related to participants’ access to and previous use of electronic
communication. Project managers were regular users, albeit with varying enthusiasm, while
a number of clinicians did not have ready access to computers and relied on the project
manager to alert them to any items of interest:
“T: It was a ridiculous system [listserv], it wasn’t considering the technology that’s about, why revert to
something like that, which was you know, a couple of years out of date, didn’t work particularly
efficiently, didn’t work very fast…
S:But even with direct access to the Internet, there were lots of better ways of facilitating that level of
communication. And on a breast list I don’t think any clinicians ever put anything on there. So it was
useful for project managers, there was an element there that we – people were like putting in things
anyone done this, so in that sense it was quite good, but from a clinical sort of clinicians – as I said I don’t
think anybody used it.” (Focus Group - Project Managers)
Given that the listserv is positioned as a key mechanism for promoting communication between
collaborating programmes, the fact that it was not accessed by all is concerning, but this is
similar to findings in the OSC (Bate et al, 2002). It probably reflects a practical difference
between UK and USA health care: in the former, staff are not universally familiar with
electronic communication and for most it is not yet an integral part of their practice. For
those who did use the system, it was rated as helpful in promoting discussion. However, a
general view was that its potential for sharing ideas was not yet fully utilised.
“They (clinicians) did go on it (listserv), but they didn’t like it. They did have access themselves, but we
took it all off, because they didn’t like it, because, as I said, rubbish was coming through. And prostate
does not want to know what was going on in breast…So they were switching off, they weren’t even
reading the ones that were relevant. So we took them off, it was in the early part of the project and we
could tell it was a switch off.” (Programme Manager)
Early interviews for this report were conducted before the listserv was divided into five tumour
group lists and one administrative list. Later interviewees welcomed this change, especially
because in the early period arrangements for meetings had tended to dominate the content,
which frustrated and turned away some early users.
“C: It was a bit of a mess to start with wasn’t it. And of course we couldn’t access it anyway could we for
the first six months or so, so it was of no use to us at - well, I talk of myself, it was of no use to me for the
first six months which would have been when it was more useful than later, as you get to know people
more perhaps.
Sa: They’ve got better haven’t they.
S: They have got better yes - better than they were, much better.” (Focus group - Project Managers)
5.12 Conference calls
The use of conference calls was another component of the Collaborative approach which
was not highly rated. Figure 19 shows that 45% of respondents did not find it helpful and a
further 19% did not participate in any conference calls.
FIGURE 19
How helpful did you find conference calls?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, not involved]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Similar to listserv, conference calls were perceived as a new method of communicating and
an unfamiliar technique that would take some time to get used to. Support for the advantages
offered by this medium was expressed: it was perceived as a particularly good use of time
for geographically dispersed individuals to share ideas. The usefulness of calls, however,
depended on who attended. Those involving a number of clinicians were generally rated as
the most helpful. Despite the fact that clinician attendance at calls was seen as vital, some
project managers reported that they had difficulty persuading clinicians to participate:
“Conference calls I think is a great idea, but hasn’t worked well, in that I thought conference calls was to
get people together who weren’t talking to each other like the consultants. But you go on a conference
call, and you might get one or two consultants on it if you’re lucky. You can get some good discussions
going, but we (project managers) can’t make decisions.” (Project Manager)
However, as with the listserv, this seemed to be an aspect which improved and became more useful
as the CSC went on:
“S: I’m really into them now.
Cl: Yes, I am as well, had a really fantastic one yesterday.
Sa: I think they’re brilliant, and they work.” (Focus Group - Project Managers)
Both the CSC listserv and the use of conference calls were relatively new and innovative ways
of communicating for most participants. Such mechanisms have an important role to play in
achieving the aim of establishing an active and vibrant network between nationally
organised days but, as with other UK Collaboratives, their implementation needs to be
improved if this potential is to be fulfilled.
6 WHAT ARE THE IMPLICATIONS FOR NATIONAL AND REGIONAL ROLES, CANCER NETWORKS, PROJECT MANAGEMENT AND CLINICAL LEADERS?
Key findings
Leadership at the local CSC Programme level was found to be very helpful, as was the contribution and role of
clinical champions.
The contribution of the cancer networks, national CSC team and Trust Chief Executives varied widely across
the forty-three projects. Given the likely importance of these three aspects for supporting and facilitating the
work of the local project teams, such variation may be a significant factor in explaining the mixed response of
participants to some of the constituent parts of the CSC as well as their overall experience.
The contribution of health authorities and regional offices was not found to be helpful by the majority of
respondents.
6.1 Overall comments
This chapter examines the various organisational aspects of the CSC at national, regional
and local levels. Local CSC Programme leadership and clinical champions were seen as
‘very helpful’ by almost 50% of respondents (table 10). Responses to the role of the cancer
networks, the national CSC team and Trust Chief Executives were more mixed, suggesting
some significant variations between project teams in terms of the level and nature of support
that they received during the CSC. The roles of health authorities and regional offices were
not highly rated.
TABLE 10
How helpful did you find the following broader aspects of the CSC in the context of your
own role? (in order of highest % of ‘very helpful’ responses) (n=96)

Aspect                           Very     Quite    Not particularly  Not at all  No       Missing
                                 helpful  helpful  helpful           helpful     opinion  data
Local CSC Programme leadership   46       36       7                 5           5        -
Clinical champions               45       27       13                4           10       1
Cancer networks                  33       33       28                5           1        -
National CSC team                22       35       31                4           4        -
Trust CE                         21       30       28                15          6        1
Health Authority                 4        21       31                32          16       -
Regional office                  3        26       31                23          15       3

[source: end of study postal questionnaire, May 2001]
Marked differences between the ratings from particular programmes and the overall ratings
from all participants (appendix 8) suggest that programme D (high ratings for four of the six
aspects: cancer networks, regional offices, clinical champions and national CSC team –
although low for Trust Chief Executives) is likely to have made better relative progress than
programme B (low ratings for three of the six aspects: cancer networks, regional offices and
clinical champions)1.
There were fewer differences between project managers and lead clinicians in relation to
these organisational aspects than to the various aspects of the improvement method
(appendix 8). The only exception was that lead clinicians perhaps underestimated their
own contribution (47% versus 85%). Local CSC Programme leadership was rated very
highly by both groups (84% and 82%).
Each of these broader aspects of the CSC is discussed in more detail in the following
sections.
6.2 Local CSC Programme Leadership - Programme Managers
A strong view shared by many interviewees was that ‘without local leadership the initiative
would have collapsed between workshops.’ As well as the project managers who led each
local team, each of the cancer networks was expected to appoint a ‘full-time, credible,
capable Programme Manager for the duration of the project.’ (CSC Improvement
Handbook). Figure 20 shows how highly the local leadership structure was rated by
respondents; only 13% did not find it helpful.
FIGURE 20
How helpful did you find local CSC Programme Leadership?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, no opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
1 Respondents from programme D were more positive with regard to the 'cancer networks' (100%) whilst those from programme B were more negative (33%). Respondents from programmes D and F were more positive about the role of Regional Offices (56% and 50%); those from programmes B, E and G were more negative (0%, 15% and 9%). Respondents from programmes C and D were more positive about the role of clinical champions (both 100%); those from B and F were more negative (56% and 45%). Respondents from programmes E and G were more positive about the role of Trust Chief Executives (92% and 73%); those from programmes A and D were more negative (18% and 0%). Respondents from programme D were more positive about the role of the national CSC team (78%).
Project managers welcomed the support and advice of the nine local Programme Managers.
Programme Managers themselves spoke of the balance that they needed to strike between
directing and supporting the Project Managers within their network:
“As a programme manager I think it is about knowing when to let go and knowing - and again it is a skill
that comes with senior management - about having the confidence to leave them alone but being flexible
enough to be ready at short notice if need be.” (Programme Manager)
The Programme and Project Managers in some of the networks were almost a ‘mutual
support group’ to each other. As one Programme Manager put it:
“We have a very - almost like a Chatham House rule - professional relationship in that there has been a
real sense of confidentiality between four or five of us. If there was a particular clinician that was causing
real problems there would be a safe environment in our project to meet and blow off about that. One
project would say ‘we are struggling with one bit and just didn’t see the point’: the others would suggest
‘Try it this way’ or ‘Do it this way’ so there was an awful lot of horizontal support from them for each
other.” (Programme Manager)
Similarly, the Programme Managers themselves would meet to discuss problems and
progress within their networks:
“What was really, really helpful for me was the other programme managers - that peer group - and that
did feel a very safe environment. The people that did feel they needed support could be more open and
honest because they were so supported back at the ranch - we seemed to feel it was a very safe
environment to do that in.” (Programme Manager)
The role of Programme Manager required substantial management experience in the views
of those who fulfilled it in phase I of the CSC:
“The reason for being able to hang on - particularly in that first early period - was having seen it before,
and done it before, and being there before, and having had a bullet proof vest with lots of holes in it
before.” (Programme Manager)
“A lot of those conversations, the e-mails, the hate mail, happened in a broom cupboard in the corridor,
and it is bullying, and it’s very difficult to counteract it, and I think the only thing I could do for the
project managers was for them to see that I was going through it as well and to explain to them that it
wasn’t a personal thing that was happening to them, but it was an absolutely normal reaction.”
(Programme Manager)
6.3 Local CSC Programme Leadership – Project Managers
The importance of the availability of dedicated project management time has already been
discussed - this was the most valued of all the aspects of the CSC. Despite this, some project
managers, and those that supported them, clearly felt that they needed more time to give to
their CSC work as, whilst some were full-time, others were not:
“We decided that we would not fund full-time project managers for one tumour - we wanted to put
somebody working across the network. So where we had a project running, it would be in two trusts, one
designated facilitator, who worked in theory across the two trusts. And they were not full-time, they were
part-time, one day a week essentially. We found that that was insufficient, in that the other four day a
week job tended to be quite important and left less time for the collaborative. There’s a lot of paperwork
on a regular basis and that one day a week, four days a month, can very easily get crowded up with filling
in forms and doing run-charts.” (Programme Manager)
“Oh, must be full time, never ever have part time: it’s just impossible.” (Project Manager)
There were no strong views as to whether it mattered if project managers had a clinical
or managerial background, so long as the individuals appointed were able to bring credibility
to the post:
“You need to have really credible project managers because if they had been lightweight then it also
would have fallen down - perhaps a few months later - but it would have fallen down.” (Programme
Manager)
“It was nice to have a mix of clinical and managerial. I have to say I don’t think there was any evidence
from my project that three out of four didn’t have clinical backgrounds. In fact in many ways they got
further because they were able to challenge really and I suppose what we might think, sometimes were
idiotic questions but they opened up massive opportunities really.” (Programme Manager)
There were differing views as to whether project managers should be selected from within
the organisations concerned or come from outside. Some participants felt that bringing in
project managers from outside of the organisation gave the individuals more authority and
scope for initiating change and at least some of the project managers preferred a ‘fresh start’:
“We did put people in different trusts to that they had worked in before. It’s very interesting that the
individuals that actually stayed in their original trust had a much tougher time because they were going
around with the label on them that said “Oh well you previously were the ward manager of x ward, what
do you know about radiotherapy?” Whereas those project managers that went into trusts as strangers,
there was no baggage. You know, they had a clear role, “I’m the project manager in this, and this is what
we’re going to try and do.” And they found it much, much easier, and the results were better. I’m
judgmental there, but certainly the qualitative experience of the teams with external project managers …
what they’ve delivered was better.” (Programme Manager)
“I think it [not being part of the team] made a big difference because I do think it really helped in a sense
that I did not know the politics, I wasn’t seen as anybody’s mouthpiece or anything like that, and I think it
was really useful because I could ask the silly questions. I could say ‘what do you mean by that? What
does that abbreviation mean, why do you do that? What’s that?’ you could ask the silly questions and I do
think that makes a big difference.” (Project Manager)
Others took the opposite view and saw benefits in appointing project managers who knew
the organisation concerned as they were more likely to be aware of how to overcome local
barriers and obstacles to progress with the initiative:
“It’s not an essential requirement to draw project managers from existing Trust staff - I don’t think it had
to be like that but where it has worked like that there’ve been clear benefits. Because you’ve had people
who could either make decisions themselves or certainly influence decisions at a higher level. Whereas
some of the project managers have not been in that position and have had to rely on purely interpersonal
skills to move things. But I think we’ve maybe relied on that kind of thing too much. You’ve just got to
be realistic: you do need a certain amount of decision making force around you to make things happen
sometimes.” (Programme Manager)
Whatever the background and time commitment of each individual project manager, the
tasks and responsibilities of the position were challenging:
“you can’t generalise because you get exceptional people who just have never had the opportunity but I
think these jobs are more challenging than I think certainly we first thought. They really are cutting across
traditional ways of doing things and egos and sensitivities and very busy, very tired people being asked to
do things differently and those skills, I don’t think you can learn those skills. You can learn all the
methodology and you can learn most other things but those facilitative, interpersonal skills I think they I
think are the kind of mandatory requirements on that persons’ spec.” (Programme Manager)
Similar sentiments were echoed by the majority of project managers: the real challenges lay
in building relationships with local staff and ‘negotiating’ change. These are skills that the
project managers needed to bring with them to the post rather than being ‘taught’ them
during the CSC (although no doubt they were further developed through participating):
“I think it’s all about the way that you are as a person, and I think that’s one of the important attributes of
anybody doing any of this type of work, that you have to be able to do it in a non-threatening way, and
you’ve got to be able to plant the seeds, and you’ve got to be able to challenge and say ‘well, is this the
best way of doing it, or have you thought about doing it this way?’ without pointing the finger and saying
‘this isn’t good enough’ or anything. It’s about how you do that and how you get that across without
making anybody feel threatened or uncomfortable or targeted at all.” (Project Manager)
“I’m a people person, and I think rather than having the confidence, going in there and sort of telling them
what to do, I’d rather work with them and they respect me more as a person, rather than a colleague, if
you know what I mean. So I think the way I do it, is actually befriending them in a way, and working with
them.” (Project Manager)
“You’ve got to build relationships with people, and there’s no point falling out or arguing because then
they’re definitely not going to do it, just for spite then, because they don’t want it to succeed. So you’re
being sympathetic to the approach that they’ve got, and I think that’s the best way really, rather than
trying to force something down people’s throats.” (Project Manager)
Given the challenging nature of the post, the training needs of the individuals appointed
should be identified as early as possible and met intensively at the start of a collaborative:
“I think we probably should have identified the training needs sooner, the training tended to concentrate
around the improvement methodology and redesign, which is right because that’s what it was about but
we should have done a little more on project management as such. I mean I don’t think we fared badly,
and I think possibly the communication hasn’t been as good as it could have been both within the project
and also outside to let other people know what was going on. We’re now beginning to realise how
important that is.” (Programme Manager)
“I think it needs a week, not an odd day here and an odd day there because as you know from conferences
you’ve done yourself in the past, your knowledge builds up over the week and you sort of keep re-testing
it and re-using it and by the end of the week it’s gone in. If you have a day and then you come away and
then you go back and have another day it doesn’t work.” (Focus Group - Project Managers)
Finally, participants were clear that project managers need to be in post at the start of a
collaborative: in at least one programme in the CSC this was not the case and, not
surprisingly, this was put forward as an explanation for relatively poor performance: ‘ensuring key
people were in place before project was to start (project managers – full time)’, ‘a project
manager is essential to support the clinical leads, and needs to be in place at the start, not
months after the start of the project’ and ’have people in post at beginning of project.’
Despite such concerns in some of the programmes, the contribution of project managers
was generally a very valuable one - a role that was crucial to the success of the CSC as
well as helping to develop local staff and enable them to lead such change programmes.
6.4 Clinical Champions
Each of the nine networks appointed a Programme Clinical Lead to ‘serve as a sponsor for
the overall programme and as a champion for the spread of the changes in practice.’ (CSC
Improvement Handbook). In addition, each tumour project also had a lead clinician. Clinical
engagement and clinical leadership of the programme was viewed by those leading the CSC
as ‘key to success’ (The Cancer Services Collaborative. Twelve Months On). Similarly
participants saw these clinical leaders and champions as essential to the success of the CSC:
‘clinical leadership and the will to change practice was the most helpful aspect’, ‘role of
clinical champion is vital in order to drive change locally’, ‘getting clinical staff on board is
vital to success and ownership’, ‘the success of the CSC is highly dependent upon high
profile clinical champions’ and ‘having lead clinician on board is absolute necessity in
making this project work.’ Figure 21 shows that 70% of respondents valued the role of
clinical champions but some 16% did not:
FIGURE 21
How helpful did you find clinical champions?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, no opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
There were differences in the level of input and support from clinical leads across the
projects. In those teams where the contribution made by their clinical champions was not
valued, this was a serious problem:
“We did not get - looking at other programmes and looking at the kind of clinical leads that we had in the
very beginning - we didn’t have that kind of pioneering evangelism that sort of took the message
intuitively and went for it. I mean I think ours were much more sceptical than in other places.”
(Programme Manager)
“If you list these then, probably at the top of that would be clinical support, and my project didn’t have it.
So whether you should then say, it’s a waste of time, let’s not do it? But then what do you about the really
poor services? Because they are really poor because they don’t have that clinical support.” (Project
Manager)
Most clinicians thought that the workload in their role as lead clinicians was more than they
had anticipated. CSC activity, as with other initiatives in the NHS, had to be fitted in on top
of their other responsibilities.
“…I think if I’d known what I know now, before I’d started, I would have thought twice about it. As
clinicians you don’t have any let up on your normal workload, and you are expected to continue that and
do this on top. I know a lot of clinicians are being paid a session a week to do it, but that again is on top
of their normal work, they’ve not dropped a session.” (Tumour Group Lead Clinician)
“There is a problem about protected time. There’s always a problem about protected time because we’re
running a service, if you stop doing something, it doesn’t go away. It’s always like that in the health
service, you know, it just piles up, look at this desk…” (Tumour Group Lead Clinician)
Attending CSC meetings was an aspect that caused particular concern for clinicians, as the
consequence was that they built up a backlog because clinics had to be cancelled:
“I think we’re expected as lead clinicians to attend probably too many meetings, which actually takes us
away from delivering patient care, it has actually meant cancelling clinics, has meant that patients have
therefore to wait a bit longer to be seen. So I think we’ve got to strike a balance somewhere.” (Tumour
Group Lead Clinician)
One suggestion was that it ought to be feasible to arrange more clinician-specific sessions
and to seek to promote a greater understanding of the role that clinicians are able to play
(both to local teams and to the wider collaborative):
“I think maybe we’ve gone maybe a bit over the top in the ‘we can’t separate groups out’; really there’s
nothing wrong in that. It’s horses for courses isn’t it? I mean, clinicians need to cover certain elements,
project managers need to cover others, but I think there’s a big perception that everybody must be in on
everything all of the time. And I think that is not always what people want, and at the end of it doesn’t
give the different groups what they need. I mean, I felt that personally, that maybe some of those groups
need to be separated out a bit.” (Programme Clinical Lead)
“The collaborative stimulated and focused and gave some direction to change and he [the clinician] was
committed to that and it has made a difference. But he’s not the sort of person who wants to go to team
meetings or committee meetings or get your team around and debate what score we should give ourselves
this month, because he’s not that sort of person. At times it’s been a little difficult getting that message
across to the network’s project management, I think. I think they do accept that some people work
differently.” (Project Manager)
The dilemma of clinicians having little or no spare capacity in their existing workload is not
peculiar to the CSC. It is a wider issue for the NHS, reflecting the increasingly prominent role
clinicians are expected to take in service development. Given that clinician leadership and
involvement is crucial to the success of the CSC - and of other similar initiatives (Bate et al,
2002; Ham et al, 2002) - this aspect needs to be addressed when considering future
initiatives where clinicians are expected to play key roles:
“Currently the NHS has been fortunate in that clinicians have always responded and spent time on things
that they thought were of some value, and even things that they didn’t think was of value sometimes…
But I don’t think it’s the way to run the service in the future. I think there probably is difficulty in
involving clinicians and certainly in clinicians being involved… maybe they see what it would be nice to
do, but there isn’t the time to do it quite as they’d like to.” (Programme Clinical Lead)
Decisions regarding payment of lead clinicians for their involvement in the CSC were made
at the discretion of the respective programme teams (see chapter 7). Not all programmes decided
to pay clinicians, and those that did paid them in different ways. Mostly, payments took the
form of one clinical session per week. Another arrangement was payment of a session to the
clinician and an additional session to the employing medical directorate on the
understanding that it would provide cover for the clinician’s time.
Payment for clinicians’ time is part of the workload equation discussed above. Even if
clinicians are offered payment, money does not buy time, as there is no spare specialist time
to be purchased. Nonetheless, some programme managers perceived payment as an
important gesture to show that clinician contribution is valued and thought that it served as a
potential lever to ensure continued commitment:
“Again, if you want to an initiative to be taken seriously and you say it must be clinician led and it must
have input from clinicians, leaving aside the problem about actually freeing up a session, I think in
principle, they should be paid. If they’re expected to make quite a commitment to it. And if they are paid,
I think they’ve got to deliver. It just adds some leverage to moving things forward.” (Programme
Manager)
“In a sense that (payment) kind of bought their commitment to see it through, to actually contribute to
what the collaborative was trying to achieve in terms of the whole pathway, attend the workshops, that
sort of thing. But in a sense, it was also a gesture of goodwill because these were the same people who
tended to come forward for everything. So the fact that we had some money for the collaborative, it was
like, well, let’s kind of reward them in a sense. It was slightly a token gesture, but I have to say, it’s
worked. They attend all the meetings, I can e-mail them, they e-mail me, they attend the conference
calls.” (Programme Clinical Lead)
A clinician from one of the programmes considered payment to clinicians essential, arguing that it
provides a structure of accountability, creates goodwill, and gives the clinician the
necessary authority when approaching trusts outside their own patch:
“The other thing is that we got our management team to pay our lead clinicians. We said we won’t do it if
you don’t pay, you run your own project. This is terribly important. It’s a huge amount of work, and it’s
mostly done at the weekends and in the evenings and you cannot reasonably expect people to do that for
free just because they’re nice guys. And there is very little goodwill left in the NHS at the moment.”
(Tumour Group Lead Clinician)
It was notable that early scepticism from some clinicians had given way, by the end of the
project, to a greater understanding and more enthusiasm:
“Clinicians are key to this process, though I don’t think they’re understanding the whole of the argument.
But I’ve watched six very cynical clinicians who were really quite rebellious and anti and incredibly
argumentative - I received amazing hate mail and telephone calls and things right at the beginning of the
project - and now they want to be part of the next phase, and have seen the light and are now being
evangelical about it, having realised that it is a way of providing changes in a service that does not
encourage a blame culture.” (Programme Manager)
6.5 Cancer Networks
Respondents’ reactions to the contribution of the nine cancer networks were rather mixed, with
some participants stating that the ‘network did all the work’ and others commenting that
‘our network has been hindered by a lack of clear vision and leadership’. Figure 22 shows
that for two-thirds of respondents their local cancer networks were helpful but that for the
remaining third they were not:
FIGURE 22
How helpful did you find cancer networks?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, no opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
For some participants the existence of a functioning cancer network was helpful but there
was potential for the networks to play a greater role in phase II:
“Brilliant, excellent: it made all the difference, I have to say. It’s here, we’re geographically close, we
work together, the collaborative is central and core business for the network. We were a network before
we came a collaborative. It is vital, it is mutually beneficial and supportive, and it does mean that you can
get issues looked at, the same issues are being looked at from a number of different perspectives.”
(Tumour Group Lead Clinician)
“I think the network is quite well developed and therefore, I don’t think it’s been a hindrance, I mean
they’ve had structures, they’ve had a strategy board, they’ve had a clinical advisory board. I think it
probably needs to be stepped up in phase two, the integration of the collaborative in the network. But I
think where the networks haven’t been as advanced as here, they’ve probably suffered, or that’s what I
pick up. They’ve felt that that was a bit of a drawback… They’re still a little separate, I mean, they were
items on the agenda, but there wasn’t the level of activity say through the tumour specific groups, it was
they did there work and the collaborative did theirs, and I suppose the network lead was the link, but
really it needed to be more active, more working to the same agenda. So I think there’s still a fair bit of
work to be do on that.” (Programme Manager)
For others, even at the end of the CSC, their local cancer network had not yet been fully
formed:
“It’s incredibly disorganised. You know, it’s interesting, in again going out in to the region, and seeing
where people have got networks organised and just seeing the calibre of the bids that came in, you know,
and who was doing them and who was putting them together and the team that went to doing that were so
different to what we experienced here.” (Project Manager)
But this was also one of the ‘informal’ benefits of the CSC: in all cases - whatever the
starting point of the networks - it seemed that the collaborative had helped in ‘pushing
along’ the development of networks, although this was especially true where this was
not otherwise happening:
“One of the main things the CSC has done is provide a focus, an impetus to actually get things going.
And it’s spurred people on, it’s certainly spurred on lead clinicians to actually sort out some of the
network agenda which they’ve had to do and they’ve used the CSC to address some of those issues. It
would have happened, but I don’t think it would have happened as quickly as it has happened.” (Project
Manager)
“If you take it from where we started from - and I don’t think any of the reporting system really does take
that into account - I mean we weren’t even a fledgling network, we just didn’t exist. So having got that
kick started, I think that’s something the collaborative can actually feel quite proud of.” (Programme
Manager)
The relative role of the different cancer networks perhaps best exemplifies the wide
variation in starting points across the project teams in the CSC. A major feature of the
collaborative approach is the establishment of an active and ongoing network to facilitate
sharing and learning. In some programmes significant progress had already been made in
terms of cancer networks prior to the CSC, whereas in others very little had been achieved.
This suggests that some teams were starting from a much stronger position than others,
which may have significant implications, especially for sustaining momentum across
networks beyond the end of phase I of the CSC.
6.6 National CSC Team
NPAT initiated and continue to oversee the work of the collaborative and as such have had
to manage multiple demands and fulfil diverse roles. Their main role was to set up the CSC
and facilitate its functioning throughout the operational period. This has been a great deal of
work, acknowledged with hindsight as being more than anticipated at the outset. NPAT
estimated that it took 400 person days to complete the preparatory work required to establish
the collaborative.
Even though all participants knew that the methodology was evolving during the CSC - and
that much of the CSC was about learning - in practice this uncertainty and change was
demanding. At times NPAT had to manage the frustrations of teams who wanted them to be
more prescriptive leaders. However, the model embraces risk taking and, like the project
teams, NPAT needed to test various aspects of the approach; only by experimenting would
the best way forward emerge.
“What I also think in terms of how it’s different, is that we’re learning too…Again I think we have a
culture in the NHS that if we’re going to create some change, it comes top down, it’s typically structural
change or new performance targets and it comes as a fait accompli. This is what you do. In a sense, this
has been learning as much for us, and maybe more for us, than as much as it is for the teams.” (National
CSC Lead)
Another important function of the national team was to work closely with the IHI team to
select and implement the most useful elements of the IHI improvement methodology. Together
they evaluated monthly progress as reported by programmes, supported teams
that were experiencing difficulties and encouraged those that were progressing well to excel
even further.
A third aspect was the national team’s role as intermediary between the nine programmes and the
Department of Health. This relationship was constraining at times, especially when there
was ministerial pressure for early results coupled with feedback from programmes that the
expectations were unrealistic.
“I think it’s fair to say that there’s a strong ministerial push to make things happen very, very quickly…
And I think a lot of the stuff that we’re talking about with the cancer collaborative, the challenges and
some of the changes are really complex, and it isn’t something that you can just push through…It’s
almost like there’s two processes that need to go on… one process is the change process, change of
system, roles and structure and then there’s this psychological process that’s going on, transitions that
people have to work through, and you can’t speed it up. You’ve got to let people go though those
psychological change processes.” (National CSC lead)
Perhaps reflecting these different roles and tensions, figure 23 shows that
the overall response to the contribution of the National CSC team was somewhat mixed.
FIGURE 23
How helpful did you find the National CSC team?
[bar chart of responses: very helpful, quite helpful, not particularly helpful, not at all helpful, no opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Positive aspects of the contribution of the national team included: ‘provided co-ordination
and focus’, ‘invaluable because of (a) systematic approach and (b) loads of managing
change experience’, ‘very responsive’, ‘ooze enthusiasm and motivation’ and ‘must take the
credit for the success of the project.’ Other participants were more ambivalent in their
response - whilst generally satisfied with the support they received they had relatively low
expectations of the contribution that the national team could make locally:
“So my expectations of the National team I suppose were realistic and I didn’t get any more and I didn’t
get any less than that and I feel like I kind of grew at the same time as them really.” (Programme
Manager)
“They [the national team] were slightly irrelevant to me on a daily basis, I don’t mean that disrespectfully
but I knew they were there basically and that was a really nice comfort blanket and I felt they were really
on my side if I had buggered up in any way locally and I hadn’t had the local support that I had. So I
didn’t really need to go to them to bail me out but if I had I felt that it would have been there.” (Project
Manager)
As has been emphasised throughout this report, the CSC was a learning experience for the
national team as well as for the participants, especially at the start. This was widely
acknowledged and it was generally - though not universally - felt that the national team had
demonstrated sufficient flexibility as the programme progressed and had been ‘very
responsive’:
“It’s a learning curve for everyone. We all talk about the highs and lows, it was a definite roller coaster at
the beginning, it’s very challenging and I think it’s not the job you go in if you want a quiet life.” (Project
Manager)
“People’s perceptions of what this project was about and what it actually was about, were very different,
and that caused a lot of problems and held us up for a very long time. Until people actually came on board
and realised what was going on and what the impact of this project could be, because operationally it’s
had a huge impact, but that wasn’t realised at the beginning.” (Project Manager)
What criticisms there were of the role and style of the national team focused on what was
perceived as poor communication at the beginning (‘often sent information regarding
workshops too late and coordination of accommodation booking etc. was often last minute’,
‘deadlines and meetings far too short notice’) and too much ‘top-down’ pressure.
Additionally, participants felt that differences between local teams and programmes were
not always recognised. As a consequence, for some, the CSC was too uniform in approach
and neglected local circumstances and requirements:
“There are some pressure to deliver: NPAT may deny that but there is and we all feel that. Trying to
deliver when you don’t fully understand the mechanisms is very difficult and it causes mistakes and
problems because of that.” (Project Manager)
“It’s my same argument about the collaborative, they never asked people’s starting points, they just
assumed we were all naff and that our access was really poor, - they just did.” (Project Manager)
“These guys were proud to be a beacon site and had something to be proud of, the collaborative hasn’t
made them any prouder, they’ve been in some respects, slightly curbed by the collaborative, I think,
because they never noted them.” (Project Manager)
The national CSC team had a difficult balancing act to perform: trying to fulfil and
complement the aims and objectives driven by the relatively short-term national agenda
around cancer services, as well as seeking to facilitate, develop and support the various
longer-term local drivers for change in the often very different forty-three projects (and
trying to help overcome the various barriers in the projects when they arose). The
inevitable tensions that this dual role created are reflected in participants’
criticisms of ‘hype’ and ‘spin’, as discussed in more detail in chapter 8. The general sense
was that the approach of the national team improved as the CSC progressed; future
collaboratives need to build on this experience, adopt a less rigid approach and recognise
local differences and needs from the beginning.
6.7 Trust Chief Executives
The support of Trust Chief Executives, where it was present, was ‘critical in driving
changes’, ‘adds a huge amount of weight to the projects’, ‘gives local credibility’, ‘has been
a great advocate and support throughout the project’ and was ‘important when discussing
local implementation and priorities’. The notion of Chief Executives bestowing credibility
on the work of the CSC locally was a common observation and was particularly important at
the beginning of the CSC:
“If it hadn’t delivered anything then I would have just moved on and they would have gone ‘God she was
a bad move!’ but at least at the beginning, because T was championing it as a Chief Executive. All of us
at that stage didn’t really know what we were championing but we were kind of credible enough to get
away with those early months when we didn’t really know what we were talking about.” (Project
Manager)
“Any hospital would be able to bad mouth these projects if the chief executive isn’t putting them high
enough on the agenda because they can hide behind the chief executive and say ‘but we’ve been asked to
sort out something else and that’s taking our efforts, this will have to wait it’s turn…’” (Tumour Group
Lead Clinician)
Others noted that the presence and involvement of Chief Executives helped in the task of
enlisting clinical support:
“The Chief Executive role I feel was imperative. Having him sanction the programme and support
throughout made our lives much easier. It was also a ‘draw’ to get lead clinicians etc to meetings
regularly.” (Project Manager)
Some projects felt that they had not had sufficient support from their local Chief Executives
and suggested that this was a barrier to progress:
“The only people who knew anything about the CSC were those directly involved. Others, especially in
management, seemed not to know and less still to care unless it affected the bottom line.” (Project
Manager)
“And I think the approach does work, I think it needs to have much more senior management
commitment. I think what we haven’t done is expose some of the data that we’ve collected to the right
people, or exposed the people to the data, whichever way. And that’s partly the nervousness around
knowing that if you do, then it may create more problems because you’re actually starting to challenge
people’s previous plans.” (Project Manager)
Reflecting this, figure 24 shows that - as with the cancer networks and national CSC team - the
response to the role of Trust Chief Executives was mixed, with over 40% of respondents
stating that their Chief Executive was either ‘not particularly helpful’ or ‘not at all helpful’.
FIGURE 24
How helpful did you find Trust Chief Executives?
[bar chart showing the percentage of respondents answering: Very helpful, Quite helpful, Not particularly helpful, Not at all helpful, No opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
Again, and in this case related to an important facet of the collaborative approach locally,
there was wide variation in the level of support that the project teams received from their
Chief Executives. Quality improvement needs the time and attention of local senior
clinicians (see section 6.4) and managers, as well as of project managers and team
members. This is a common finding from other quality improvement research (Øvretveit et
al, 2002), including research in the UK (Locock, 2001; Bate et al, 2002; Ham et al, 2002). It
is not sufficient to sign up senior leaders at the beginning of a collaborative and then fail
to engage with them further. Nor is it enough to gain their willingness to support
the programme: they need to know how that support is to be given and in what form.
Without visible ongoing sponsorship and support from senior leaders it is unlikely that any
improvement will be significant or sustained. Although it is not necessary to include senior
leaders on local project teams, project managers should be encouraged to ensure that a
senior clinician and manager has an active role on local steering groups, not least to ensure
that the work of the collaborative is aligned to other local or national initiatives. Other
mechanisms for involving leaders should be considered by organisers of a collaborative. For
example, the Mental Health Collaborative in the UK included at one of its learning sessions
a half-day on ‘spread’ and ‘sustainability’ with the chief executives from all the participating
hospitals. The CSC did hold a one-day meeting for Chief Executives, and 60% of attendees
rated this as helpful, but figure 24 would suggest that more ongoing engagement was
required in some of the projects.
6.8 Health Authorities and Regional Offices
The final section in this chapter examines the contribution of Health Authorities and
Regional Offices to the work undertaken in the local project teams. Neither organisation’s role
was rated highly (figures 25 and 26): they were found helpful by only
22% and 29% of respondents respectively. However, there was some suggestion that they
could have had a useful wider role to play:
“Individual centres within our lung cancer network are anxious to ‘look after their own patch’ and no
clinician with understanding of the wider problems is empowered to make regional strategic planning
decisions.” (Tumour Group Lead Clinician)
“NPAT have not involved regional offices as they could/should. Our regional cancer co-ordinator has
been very supportive and could have been a greater resource if allowed to be.” (Project Manager)
Stronger links with Health Authorities had been facilitated by the need to involve them more
closely because of phase II of the CSC. The changes made and demonstrated in phase I had
enabled participants to secure Health Authority funding for some aspects of their services:
“On the back of Phase one we actually got health authorities to put some recurring funding together
regardless of what was coming in Phase two because we didn’t know at that stage that there would be
anything else. For two health authorities and associated organisations we were able to say to them ‘Look
this isn’t a fad, here is the development manager if you are struggling to meet your two week wait they
either know someone who can help you or they can do a piece or work that will support you’ and even
just to keep communication going and put all these things together.” (Programme Manager)
“So we might have called it something different and we might have tweaked it around a bit but yeah there
was a commitment from day one really from the Health Authority which has actually meant that we have
got a project manager now funded in every single Trust for Phase two and that’s, that’s just puts it into a
whole different kind of footing really.” (Programme Manager)
FIGURE 25
How helpful did you find the role of Health Authorities?
[bar chart showing the percentage of respondents answering: Very helpful, Quite helpful, Not particularly helpful, Not at all helpful, No opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
FIGURE 26
How helpful did you find the role of regional offices?
[bar chart showing the percentage of respondents answering: Very helpful, Quite helpful, Not particularly helpful, Not at all helpful, No opinion]
[end of study postal questionnaire, May 2001, n=96 (74% response)]
7 HOW MUCH DID THE CSC COST AND HOW WAS THE FUNDING USED LOCALLY?
Key findings
The total cost of phase I of the CSC was in the region of £6.5 million. A further £22.5 million will be invested
in future phases of the CSC over the next two financial years (April 2001-March 2003).
The funds directly allocated to the regional programmes in phase I averaged £554,592 per programme (£507,299
from NPAT plus £47,293 from other sources). The majority of the funding used during phase I (54%) was spent
on project-related non-clinical staff time, whilst a further 28% was used for project-related clinical staff time.
An average of £108,302 (19.5% of total available funds) was carried forward to 2001/02 (range 3-41%).
There was significant variation across the programmes. Most strikingly, whereas one programme spent
approximately £310,000 (56%) on project-related clinical staff time, new clinical capacity or waiting list
initiatives, another programme spent just over £30,000 (6%) on these same elements.
Common suggestions from participants were that less money should have been spent on the large national
workshops and more should have been available to facilitate changes locally, and that clinical involvement
should have been recognised through payment to those who committed time to the project.
7.1 How the funding was allocated in phase I
Phase I of the CSC cost in the region of £6.5 million and it has been stated that ‘£7.5 million
has been invested into the expansion of the Cancer Service Collaboratives this year [2001]
and £15 million will be invested next year’1. In phase I each of the nine regional
programmes received central funding from NPAT and following the end of this phase we
asked each programme to provide brief details of how this money was spent locally.
In response eight of the nine CSC programmes provided data on their budgets and
expenditure. On average the eight programmes received £507,000 from NPAT (range
£417,000 to £550,000). Three of the eight programmes reported receiving additional funding
and in one programme this was substantial (£300,000 per annum from the existing cancer
network to “assure Network CSC integration”).
Table 11 shows that the majority of funds (54%; range 34% to 79%) were used for project-related non-clinical staff time. Twenty-eight percent of funds (range 6% to 56%) were used
for project-related clinical staff time, new clinical capacity or waiting list initiatives. On
average the eight programmes carried forward £108,000 (19.5% of available funds; range
3% to 41%) to 2001/02.
1
Source: ‘John Hutton: cancer modernisation pilots slash waiting times. Results from Cancer Services
Collaborative released,’ Press Release: ref 2001/0386, 21st August 2001
TABLE 11
Average income and expenditure for CSC programmes to March 2001 (n=8)

                                                             Average to March 2001
                                                                   £         (%)
Budget
  NPAT                                                       507,299      (91.5)
  Other (e.g. participating Trusts)                           47,293       (8.5)
  Total                                                      554,592     (100.0)
Expenditure
  Project-related non-clinical staff time
    (e.g. programme and project managers)                    242,538      (54.3)
  Project-related clinical staff time, new clinical
    capacity, waiting list initiatives                       125,559      (28.1)
  CSC Learning workshops, meetings and other CSC events
    (including related travel)                                30,496       (6.8)
  Other (including overheads)                                 47,696      (10.7)
  Total                                                      446,290     (100.0)
Funds carried forward to 2001/02                             108,302      (19.5)
NPAT did not respond to our questionnaire (appendix 9) concerning the central costs of the
CSC, which requested data under the following headings:
- Budget allocated for CSC activity: Department of Health, other sources (e.g. transfers from Booked Admissions Programme), and
- Expenditure: CSC programmes, NPAT CSC-related staff time, CSC Learning workshops, meetings and other CSC events (including related travel), IHI, publications, other (including overheads).
7.2 Suggestions on how to allocate funding in future phases
A common theme emerging from question 33 in the postal questionnaire (which sought
suggestions for improvement on ‘how allocated funds were used’) was that too much had
been spent on the national workshops in phase I of the CSC: ‘far too much wasted on
learning workshops’, ‘wasted funds on large numbers of people travelling to workshops’,
‘big national conferences wasted money’, ‘waste of public money on expensive jollies’ and
‘national workshops are too expensive and extravagant’. Rather, respondents felt that some
funding should have been ‘held back’ and been available to use pro-actively depending on
local circumstances: ‘fighting fund to employ more administrative staff in key areas and be
prepared to use it quickly’, and ‘funds held back to resource and pump prime initiatives and
backlog reduction’. In some cases a lack of significant funding to improve facilities led to a
sense of apathy towards the small-scale collaborative approach (perhaps revealing a lack of
understanding as to what the CSC was seeking to achieve):
“Now we’ve made quite a few changes through PDSA cycles that have made improvements, but we’re
getting to a point now where the PDSAs that we could do, are going to start costing more money. It’s
quite sad in a way because when you go in, you see that the water is coming through the roof onto the
equipment. And I can see when I go in they think ‘no, we haven’t got time for this theory, we need money
to build a better building.’” (Manager)
A third and final suggestion related to payment for clinicians: ‘need protected and funded
time to perform’, ‘as much to the clinical teams as possible’, ‘need to ring-fence payment to
clinicians’, and ‘clinicians should have been paid right across the board.’
8 DISCUSSION: WHAT ARE THE KEY LESSONS FOR FUTURE COLLABORATIVES IN THE NHS?
Key findings
Many of the lessons from phase I of the CSC for future Collaboratives in the NHS are similar to those arising
from other large-scale change programmes in the health care sector. It is increasingly clear that the receptive
contexts (Pettigrew et al, 1992) at the individual, team and organisational levels play a significant role in
determining both outcomes and experiences of programmes such as the CSC.
Six ‘key levers’ for change from phase I of the CSC have been identified in chapter 3. Further specific
suggestions for changes that should be made to future phases of the CSC centred on five areas.
There is a need to review existing measurement and reporting mechanisms and requirements. In particular,
possible alternatives to the current use of run charts, team self-assessments and the regularity of monthly
reporting should be investigated.
More preparatory work - and ongoing liaison - with senior management at a local level should seek to secure a
closer alignment between the CSC and other related national and local initiatives.
As already noted, the availability of dedicated project management time was viewed as very valuable by
participants. However - and this relates to issues around measurement and reporting mechanisms -
consideration should be given to ensuring that local teams also have sufficient specific capabilities and
dedicated time to collect and monitor the required data. In some teams this may only require - as mentioned
above - closer liaison, and sharing of resources, with other ongoing initiatives in Trusts, whereas other teams
may need to direct a greater proportion of their funding to this end.
The tone and content of the national learning sessions should give less emphasis to documenting and reporting
the ‘success’ of the collaborative. Participants reacted strongly against what they perceived as the ‘political’
spin which they saw as being placed on their efforts and achievements at these events.
The level of ownership of the work undertaken by teams has important implications for the likely
sustainability and impact of the changes made. This could be encouraged further through greater emphasis on
tumour specific work and more local networking and less emphasis on the national learning sessions. All of the
previous four ‘lessons’ would also serve - to varying degrees - to increase local ownership.
8.1 Overall comments
Phase I of the CSC was a complex intervention in which an innovative approach to
improving health care services - supported by earmarked funding, novel management
arrangements and changed working practices - was introduced to a high-profile clinical area
facing multiple challenges. Whilst discussion of earlier drafts of this report has centred on
detailed interpretation of the quantitative outcomes reported in chapter 2, the ‘success’ or
otherwise of this initiative can be viewed from different perspectives.
8.1.1 Impact of the CSC?
Our assessment of the impact of the CSC is severely handicapped by the lack of quantitative
data. While previous studies of cancer waiting times have shown that data can be collected
retrospectively (Spurgeon et al 2000), and notwithstanding the provision of additional
resources to the nine programmes to support data collection, the projects included in this
study found it difficult to comply with the requirements agreed between the evaluation team
and those responsible for leading the CSC at a national level (see appendix 4). It appears that
this was because of the workload involved in data collection and the low priority attached to
providing data to the evaluation by those involved in the projects. There are therefore limits
to the extent to which firm conclusions can be drawn from this study.
It is also important to emphasise again the variations that existed in the CSC. This is evident
in the different experiences of the tumour types, both in their starting positions and in what
was achieved in practice. There were also variations between programmes
and projects in the goals that were set and the outcomes achieved. For these reasons, it is not
helpful to seek to generalise about the CSC as a whole, even though policy makers and
researchers are often tempted to make summative judgements in examining initiatives of this
kind. To be sure, our assessment of the qualitative findings does highlight a number of lessons
about the structure and process of the CSC, but even then, as earlier chapters have
demonstrated, majority views need to be qualified by minority dissents.
Even if the data available were more complete and experiences across tumour types and
programmes and projects more consistent, there are two further difficulties in drawing
conclusions from the CSC. The number of patients with cancer included in phase I was very
small. The available data indicate that the average number of patients per quarter ranged
from 58 in the breast projects to 7 in the ovarian projects (see page 24). Considerable caution
is needed in generalising from the experience of such small numbers. Also, the absence of
any data on cancer patients treated outside the CSC makes it difficult to determine whether
the changes reported by the projects were different from what was happening elsewhere.
For these reasons, we are unable to either verify or contradict some of the claims made for
the CSC (for example, Kerr et al, 2002; Cancer Services Collaborative Planning Team 2000;
and NHS Modernisation Board, 2002) which are affected by many of the same limitations as
this study. Indeed, given the inconsistencies between the data reported to us and those
supplied to NPAT in the form of run charts (see appendix 5), the extent of the improvements
in access made by the CSC remains an open question. The lessons learnt from the difficulties
involved in data collection will enable future progress to be monitored more systematically.
To be sure, some tumour types and some projects did demonstrate impressive progress for
those patients who experienced the changes that were introduced. But as with other studies
of collaborative and redesign methods, the variations in outcomes that occurred, and the
limited changes brought about in a number of projects, underline the continuing challenges
in making the NHS more patient centred and tackling long standing capacity and cultural
constraints (Bate et al, 2002; Ham et al, 2002). And at a time when the sustainability and
successful spread of the CSC remains unclear, there are additional reasons for treating the
more ambitious arguments made for the CSC as a programme with caution.
Against this background, we now go on to draw out the lessons for future collaboratives
from our research. In so doing, we would emphasise that the value of studies of pilot
programmes like the CSC lies as much in their ability to inform policy and practice as in the
conclusions they reach about the impact of such programmes during the developmental
stage. Put another way, given the rapidly changing policy environment, research of the kind
we have undertaken is valuable because of its formative contribution and its ability to draw
on evidence to assist policy makers and practitioners to achieve more effective
implementation of their chosen strategies. In this spirit, we now stand back from our detailed
findings to offer some more general lessons for the future.
8.1.2 Lessons for the future
As noted at the beginning of this report, phase I of the CSC was in many ways an
experiment and a learning process. Consistent with this view is the sentiment - echoed in the
quotation below relating to business process re-engineering (BPR) - that what is important to take from
initiatives such as the CSC are the specific and, we would argue, particularly the local
lessons that will enable better outcomes and longer-term benefits to be realised in the future:
‘Reflective learning about the dynamics of change may be more valuable than the slavish adoption of a
business concept whose application remains unproven.’ (Powell and Davies, 2001)
As with earlier approaches to quality improvement in the NHS the general lessons about the
management of change (Iles and Sutherland, 2001; Locock, 2001; Powell and Davies, 2001;
Bate et al, 2002; Ham et al, 2002; McNulty and Ferlie, 2002; Locock, 2003) from the CSC
are similar and have been reflected in the comments and quotations from participants which
are the basis for this report.
For instance, the following four lessons from the BPR experience in the NHS (Powell and
Davies, 2001) can, to varying degrees, equally be applied to the CSC:
- Sustained support and commitment from the chief executive of the trust and senior managers is essential, together with ‘local product champions’. Support must also be secured from a critical mass of clinicians,
- Many of the savings made from change programmes take time to become visible and may be notional rather than real,
- Hospitals are highly politicised organisations; different functional and professional groupings have their own cultures and values which managers need to take into account and work with, and
- Context is highly influential: concepts, tools and techniques must be adapted to fit local circumstances, including the readiness of the organisation for change.
Six ‘key levers’ for change have been discussed at length in chapter 3 and clearly need to be
retained - and in some cases developed further - in future collaboratives. The six levers
placed importance on process mapping, dedicated project management time, capacity and
demand training, multi-disciplinary team working, staff empowerment and networking.
More specific suggestions from participants for facilitating phase II of the CSC centred on
five areas:
- Revisiting and reviewing measurement and reporting requirements,
- Securing a closer alignment of the CSC with other national and local initiatives,
- Less emphasis on what many participants termed the ‘hype’ surrounding the CSC,
- The need for more full-time local staff in project teams, and
- An emphasis placed on greater local ownership of the process (including suggestions for more regional events and tumour group meetings).
8.2 Measures and reporting
As discussed in chapter 4 the measurement and reporting aspects of the CSC drew the most
comment from participants. When asked what changes they would suggest for phase II it
was clear that greater support and advice around data collection and measurement was top of
many participants’ priorities:
“C: I think just a plea really from me would be for phase two, get data collection people in from the word
go to baseline and know what we’re all collecting and that we’re all collecting the same nationally.
Because I don’t think you can compare any data unless we are.
Cl: That’s the benefit isn’t it: that if the measures have been agreed that’s going to help us. because
obviously it being a pilot things will change so much but now if a measure is agreed, you know what
you’ve got to get from day one where as C just said, the goal post kept moving so much and you think
you’ve got your system set up and then you think “oh God I’ve got to start from scratch again”. And I
personally felt I was playing catch up the entire time with thinking what’s going to come down next, what
else?
S: I think to me that would be the single thing I would want them to change if anything - get the data
collection sorted out because I think that’s been the biggest headache.” (Focus Group - Project Managers)
“I suppose if I have got one criticism now probably one of the weaknesses has been the way it has all
panned out around data. I think what we have learned from that whole thing is so valuable really and
could be put right and is being put right for phase two anyway.” (Programme Manager)
A regular suggestion was that guidance regarding measurement should have been
prescriptive and clarified at the very beginning. While there was general approval for the
autonomy awarded to programmes in developing their teams and processes, this was not the
preference for measurement. Support for specifying some standard measures across all the
programmes from the outset was expressed. Examples of the numerous comments included:
‘plan and decide measures and standardise now’, ‘clear guidelines on reporting and stick to
them’ and ‘better management of measuring baseline position.’
There was recognition that collecting data on basic measures such as waiting times across
the patient pathway depended on local clinician-led initiatives, and that these data were not
routinely available. Equally, what to measure - beyond the global measures - for each
respective tumour group was not immediately clear. A number of participants recommended
that teaching and discussion at the first workshop should have focused on measurement
rather than on the theoretical aspects of change management that were emphasised.
Regular reporting to the central team was accepted as an important element of the
programme and useful for documenting the process and progress of developments.
However, the format and general usefulness of the reports were questioned. Less frequent
reporting, perhaps every second month, was suggested by some as an alternative, as the
expectation for change to occur within one month was considered unrealistic.
The illustration of global measures on graphs for tumour groups with small numbers was another
aspect considered inappropriate. So not only were project managers frustrated by the regular
requests for changes to how they were reporting, they were also concerned that the content
was not meaningful:
“Four measures are fine and also for phase 2, but what needs to be sorted is how they are measured. This
is still not sorted. And the communication from NPAT on how to measure should be put in writing. They
need to sort out the confusion about positive histology, the data needs to be specific to the patient not the
numbers diagnosed and treated in a specific month. The fact that NPAT used these data to calculate
waiting data meant that the figures were misleading because it was not patient specific.” (Project
Manager)
“the measures have been a waste of time for clinical teams because they don’t want to know them - they
don’t look at them. Well they might look at them to have a bit of a hoot: ‘okay, you’ve done that for ten
patients, wow: what about the other sixty that’s going to come through?’. I haven’t heard anything but
cynical views on the reporting style.” (Project Manager)
Such questions of how best to support the measurement of improvement at the local level
are central to discussions about how to begin to identify the ‘key success factors’ for
implementing a quality improvement programme.
Our review of the project teams’ monthly run charts in comparison with the available patient-level
data for the same period (see appendix 5) raises some important questions with regard
to understanding the data collected and the improvements made in each project. Our
evaluation showed, for example, that some of the projects were not able to create run charts
with all the features requested during the CSC; this highlights the importance of documenting
the number of patients each month and clearly labelling when major interventions were
implemented. Even if all the projects had been able to produce run charts, arguably they
would not have been as reliable as control charts.
The CSC’s use of run charts is recommended in the Modernisation Agency’s recent
publication explicitly in order to address one element of Langley et al’s ‘model for
improvement’: “Question: how do we know a change is an improvement? Answer: by
measuring the impact of the changes” (NHS Modernisation Agency, 2002; 5). This
promotion of run charts conflicts with Langley et al (1994; 84) which emphasises that
“Shewhart’s concept of variation is particularly important for answering “what changes can
we make that will result in improvement?” The use of control charts provides a statistical
basis for answering this question, both in terms of monitoring progress towards chosen
targets, and providing evidence for specific action related to desired improvement (Benneyan,
1998). The collaborative’s use of run charts was possibly influenced by the comment by
Langley et al (1996; 73) that “although there are many situations in which the statistical
formality of control charts is useful, often it is adequate to rely on run charts, or simple plots
over time … . Statistically minded readers are encouraged to learn and apply Shewhart
control charts.”
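To illustrate the distinction being drawn here, the sketch below works through the calculation that separates a Shewhart ‘individuals’ control chart from a simple run chart: a centre line plus control limits derived from the mean moving range. The monthly figures are invented for the purpose of illustration and are not drawn from any CSC project.

    # Illustrative sketch only: hypothetical monthly waiting times (days), not CSC data.
    # A run chart plots the series alone; an individuals (XmR) control chart adds a centre
    # line and limits at +/- 2.66 times the mean moving range, giving a statistical basis
    # for judging whether a shift reflects a real change rather than routine variation.

    monthly_waits = [62, 58, 65, 60, 57, 63, 59, 61, 60, 58, 35]

    def xmr_limits(series):
        """Return the centre line and lower/upper control limits for an individuals chart."""
        centre = sum(series) / len(series)
        moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
        mr_bar = sum(moving_ranges) / len(moving_ranges)
        ucl = centre + 2.66 * mr_bar           # standard XmR constant (3 / d2, d2 = 1.128)
        lcl = max(centre - 2.66 * mr_bar, 0)   # waiting times cannot be negative
        return centre, lcl, ucl

    centre, lcl, ucl = xmr_limits(monthly_waits)
    print(f"centre line {centre:.1f} days, control limits {lcl:.1f} to {ucl:.1f} days")

    # Months falling outside the limits (or sustained runs on one side of the centre line)
    # signal special-cause variation, e.g. the effect of a redesign intervention.
    outliers = [(month + 1, wait) for month, wait in enumerate(monthly_waits)
                if wait < lcl or wait > ucl]
    print("months signalling special-cause variation:", outliers)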
Research into other Collaboratives suggests that some project teams appear to have
difficulty (a) in collecting relevant data in order to establish a baseline to follow progress in
reaching the target, and then (b) organising and managing the ongoing collection, analysis
and reporting of the data in an effective way (Øvretveit et al, 2002). One of the stated
benefits of a collaborative approach to quality improvement is that it can help to overcome
just some of these difficulties which local project teams in the CSC encountered when they
might otherwise be working in isolation and with relatively small numbers of patients:
“When multiple organisations band together to form a collaborative improvement group around a focused
clinical topic, they take a giant step forward in solving the problem of small data sets that plagues local
improvement efforts. The challenge … is that the organisations must work together to define a consistent
data set and associated collection methods.” (Plsek, 1997)
There are clear advantages (in terms of time and credibility) for collaboratives where project
teams already share a standard and well-developed database or register of outcomes and
patient characteristics which is credible to clinicians. The NECVDSG and the neonatal
intensive care collaboratives are examples (Plsek, 1997), as are some Swedish and
Norwegian collaboratives which use national medical registers. Validated and long-established
databases allow teams to consider how their changes affect a wider range of
outcomes than they would otherwise be able to assess, although there are still problems in
assessing the impact of confounding variables.
8.3 Alignment with national and local initiatives
The prominence of the CSC as central to national policy initiatives in cancer was an explicit
intention in its conception. This relationship provoked various views from interviewees but
there was general support for the view that the aims of the CSC reinforced existing policy
developments in cancer:
“… writing the phase II bid its been quite good because we’ve been able to look at the peer review
standards - and some of the other bits and pieces within the cancer plan - and actually identify things in
that which they’re going to have to deal with and we’re going to have to do this: why not do it now in a
structured, planned way, rather than having to do it all at the last minute?” (Project Manager)
“The new national cancer guidance has provided an impetus for keeping the project going. This has also
made the MDT perceive the project as important and valuable and look beyond the national guidance.”
(Tumour Group Lead Clinician)
Calman-Hine was regularly mentioned as the common early trigger to improving cancer
services. The CSC was viewed as building on the principles advocated in that document.
Some services reported that the CSC had served as an incentive to re-invigorate local co-operation
encouraged by Calman-Hine that was otherwise showing signs of fatigue. The
similarity in the aims of the two initiatives was generally viewed as positive: it provided a
stimulus to improvement, although there was some duplication:
“The two driving forces really, are the collaborative project and the Calman-Hine accreditation. They’re
very similar. We’re being accredited in late November so we’ve got that going along in parallel. But
we’ve already been through all of that, so we’re not going to have an enormous amount of difficulty in
fulfilling their criteria, it’s just a question of fine-tuning. I mean, we’ve got our audit people and our
multi-disciplinary meetings, they’re all working so there isn’t a great deal to do locally with all of that.”
(Tumour Group Lead Clinician)
“I think a lot of things have just slotted into place, like data collection, by the MDT... So there’s a lot of
things that have been sort of influenced by national direction that have made things easier.” (Project
Manager)
The current focus on national cancer guidelines was perceived to boost the activities of the
CSC, especially as the push to improve quality was being served by both:
“…There’s been a big impetus in guidelines, and probably without that the collaborative would have been
a bit stuck I think. But I think it’s (the CSC) about implementation of those things, and it’s all part of this
improving the overall quality.” (Tumour Group Lead Clinician)
Views about the relationship with the 14-day standard were more diverse. The requirement for
services to meet the standard was perceived as either beneficial or a hindrance to CSC
projects. Some services reported that without the CSC they probably would not have
achieved the 14-day standard. The stimulus provided by the programme helped services to
obtain practical tools, such as dedicated fax machines, and to make the wider organisational
changes necessary to meet the standard. Another positive effect was that the standard served as a spur
to promote the initial changes CSC projects were focused on. By facilitating compliance
with the standard, project managers gained entry and credibility to work with teams.
“With the timing of the colorectal one (14 day standard), I just feel as though the panic is there. But it’s
also given us a lot of opportunities to get into GP education, to get into trust management structures
which was extremely difficult before this two-week wait thing. Suddenly everybody wants to talk to you,
so the door is open, there’s an awful lot of opportunities there, and I’m pursuing it. Once the system is in
place, it will die down and take care of itself and I will be able to move on to the other bits.” (Project
Manager)
“The 14 day standard has helped with much of what we are trying to achieve because Trusts have to meet
the targets of the cancer plan they work together with less resistance.” (Programme Manager)
In contrast, some project managers described the 14-day standard as an obstacle to
implementing the collaborative work or else promoting duplication. They found it
frustrating that initially their work for the CSC was confused with the standard and that they
were seen as merely helping to implement the standard:
“Everybody is very concerned and alarmed about the two week wait, so an obstacle is trying to make
people understand that this project is not about the two week wait. The problem is, that a lot of places are
talking about fast tracking those patients through, and the effect this has on the general system, just causes
even bigger waiting lists than before because it pushes them to the front of the queue. And all the other
cases, who may even have cancer themselves, and the early cases, the potentially curable ones, are pushed
to the back. And that’s what concerns us.” (Project Manager)
Similarly, for others the concern was that the 14-day standard appeared to be
counterproductive to the aims of the CSC. The standard was viewed as reinforcing a division
between urgent and non-urgent referrals. This conflicted with evidence from CSC mapping
and learning about capacity and demand which suggested that these divisions were an
integral constraint to promoting more efficient service delivery and resource use1.
Some participants believed that the 14-day standard was easier to implement than the CSC.
The standard was prescribed as a political imperative with clear targets, whereas the CSC
was attempting to bring about attitudinal change without imposing explicit targets and with little
expectation of delivery from high-level management:
“This two-week waiting list for outpatients, it’s very interesting how when it is a political imperative that
something should be done, that the Trust and everybody gets a direction and it’s done. Where with the
collaborative it’s a bit more difficult. That is providing a useful focus, it’s hard to move the managers in
trusts to seeing it as important. I think it is a fundamental problem with the NHS, we have clinicians at the
sharp end, trying to do these things, working with other clinicians, and we have managers who are still
separate.” (Tumour Group Lead Clinician)
This apparent difference in the way services respond to a directive to meet non-negotiable
targets compared to an invitation to collaborate towards service improvement tells us
something about the complexities and challenges of implementing change for improvement.
More locally, participants would have welcomed even closer links with other redesign
initiatives, such as the National Booked Admissions Programme (Ham et al, 2002):
“That surprised me: why weren’t we in with Booked Admissions right from the word go? Because we’re
both doing the same work - albeit from slightly different angles - but we’re all doing it so why weren’t we
encouraged or placed with them?” (Project Manager)
“I hope that they use phase II to tie up and tie in with a lot of the other cancer initiatives that are coming
through and that they use that as a proper modernisation agenda. And I hope that the regional posts will
allow that to happen: that we’ll be involved with the modernisation team and that will take things forward
together and start looking at joined up thinking rather than at different and separate initiatives.”
(Programme Manager)
The responsibility for closer alignment should not rest simply with the leaders of a
collaborative, as an important local contribution is also needed:
“What I see as one of the fundamental problems of this kind of work is the lack of co-ordination with the
operational things that are happening on the ground. So the trust has a strategy, for example, for radiology
- a five year view - which didn’t mention the work that we were doing, which, is ridiculous. There is the
work we’re doing, there’s outpatient improvement, there’s booked admissions, there’s projects for
millions of things but nobody brings these strands together and sees whether they’re pulling in the right
direction, whether they’re delivering, whether they’re going in different directions.” (Project Manager)
1
Initially teams decided to use average data for treatment waiting times but as the programme developed it was
recognised that this did not demonstrate the variability in the times for individual patients. When teams began
to use individual patient data they noticed that younger patients tended to wait longer for diagnosis and
treatment because they did not benefit from fast track systems established for patients meeting age criteria in
Royal College guidelines. Some of the projects realised that they could redesign the process to develop a faster
service and test this but that – in the short term – this would disadvantage younger, ‘routine’ patients with
unsuspected cancer, and average times to treatment would not be reduced. Using queue theory four breast
projects eliminated the routine backlog, balanced the variation in demand with the variation in capacity that was
the cause of the queue and were then able to see all patients in two weeks (Kate Silvester, personal
communication).
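The capacity and demand point in the footnote above can be illustrated with a deliberately simplified sketch. The weekly figures are hypothetical and do not come from the breast projects: when clinic capacity is fixed at the level of average demand, any week in which referrals exceed capacity leaves a backlog, whereas flexing capacity to follow the variation in demand prevents a queue from forming at all.

    # Hypothetical illustration of matching capacity variation to demand variation.
    # All figures are invented; they are not CSC data.

    weekly_referrals = [18, 25, 15, 30, 20, 22, 16, 28, 19, 27]     # varies week to week
    average_demand = sum(weekly_referrals) / len(weekly_referrals)  # 22 patients per week

    def backlog_over_time(referrals, capacity_for_week):
        """Track the queue when each week's capacity is given by capacity_for_week(week)."""
        backlog = 0
        history = []
        for week, arrivals in enumerate(referrals):
            backlog = max(backlog + arrivals - capacity_for_week(week), 0)
            history.append(backlog)
        return history

    # Fixed capacity equal to the average: peaks in demand create a queue that lingers.
    fixed = backlog_over_time(weekly_referrals, lambda week: average_demand)

    # Capacity flexed to match each week's demand: the queue never forms.
    flexed = backlog_over_time(weekly_referrals, lambda week: weekly_referrals[week])

    print("backlog with fixed capacity: ", [round(patients) for patients in fixed])
    print("backlog with flexed capacity:", flexed)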
“And I think where the collaborative has failed totally is that we’ve tried doing this as a bottom up
project. I’ve got fantastic nurses who’ve pushed the boat out until the point of exhaustion and then I’ve
got people in senior posts who look at it and say ‘oh it’s just a little project.’ Where we should have
concentrated our efforts - and where I would be disappointed if phase II didn’t concentrate its efforts - is
that the chief executives need pulling together big style.” (Project Manager)
All collaboratives - but perhaps particularly the CSC - need to be closely aligned with other
concurrent quality improvement and change initiatives related to the modernisation agenda.
Participants in the CSC would have benefited from a clearer understanding of the respective
contribution of each of the initiatives mentioned above, the overlap between them, possible
synergies and economies of scale. This again points to the need for leaders of a collaborative
to explain how all these initiatives might be brought together locally (and the potential role
of local modernisation teams). The need for alignment is a further reason for undertaking
more preparatory work at the local level and securing Chief Executive and/or senior
management involvement.
8.4 Dedicated local data collection staff
As already discussed in chapter 3, the availability of dedicated project management time was
invaluable, although the amount of time available did differ between projects. In addition,
those projects that were fortunate enough to have some staff capacity for data collection
perceived this to be a major advantage:
“S: I was saying yesterday we were quite lucky that they’ve now appointed a new office manager and his
data knowledge is fantastic and the information’s there. So on a monthly basis he can just print it off and
we can generate a report from that. If you start looking at breast projects and radiology projects, you’re
talking huge numbers and to write it down manually, if you can’t extract it from your system, is a massive
task and then you obviously have to analyse that work as well.
M: I had a data clerk actually that had been continuously auditing my breast cancer pathway and
continuing to collect data for the [name of place] audit for colorectal so we were quite lucky the data was
there. It was manual, there weren’t any IT systems, but it did mean that I didn’t have to go back and trawl
through the stuff.” (Focus Group - Project Managers)
For those projects that did not have any data collection support this was a key
recommendation for phase II:
“It’s been difficult in terms of data collection because there is such a lack of resource within trusts, of
people collecting data. Ideally, in an ideal world, each individual unit will have an audit person. And I
think for the amount of work that we’ve ended up putting in and if you take in account the staff that we
have involved, you have enough for a full-time data person.” (Project Manager)
It is likely that in some Trusts more preparatory work and closer alignment with other local
initiatives (section 8.3 above) may have enabled teams to identify existing sources of
expertise and staff time for data collection purposes at an earlier stage. The realisation of
such synergies may also have further benefits in terms of embedding the CSC more firmly in
organisational structures and processes, thereby increasing the likelihood of changes being
sustained.
8.5 ‘Tone’ and ‘language’ of CSC
For many interviewees an underlying concern was that the CSC was politically driven and
they were reluctant to accept that the political drive was necessarily beneficial: ‘too much
time wasted listening to the political message’, ‘less political rhetoric’ and ‘too overtly
political which puts backs up.’ The notion that the purpose of the CSC was to provide the
Government with good NHS news stories was mentioned frequently:
“I think everybody’s very cynical about the national politics. There was a lot of cynicism around the
political agenda associated with it.” (Project Manager)
“It became a bit of a joke - it was almost like they were filling a day to tell us the political story that they
wanted us to retell.” (Project Manager)
“I feel like the project will be used for political gain next year. And that is a major driving force and that
ministers are telling us that we’ve got to make improvements and that we’ve got to show it. I don’t feel
that I’m an agent of the government, I don’t feel that should be driving it.” (Project Manager)
“I have enjoyed it and we have improved our service and made changes. I am sorry it is in danger of
being used as government propaganda.” (Tumour Group Lead Clinician)
This view led some to speculate that the aim of the collaborative was perhaps not primarily
to serve the interests of patients. The political imperative was seen to have the potential to
manipulate the direction the CSC will be expected to take:
“Well, I have a slight concern that it’s not all for the patient’s benefit. I think there’s a large political
element about it, as there is for so many aspects of the health service, and this does concern me
somewhat.” (Tumour Group Lead Clinician)
“It was so political. I mean, I’ve been involved with regional projects before but that was nothing like
this. I mean this was my first brush with a Department of Health driven project and I’ve disliked it
enormously. It’s just been too much about getting the figures right and a whole lot less about what that
means for the patient.” (Tumour Group Lead Clinician)
Whilst participants approved of the changes and improvements the CSC had brought about, they felt that the
language and tone of its implementation were unhelpful: ‘less hype, less haste, more critical
deliberation’ and ‘less evangelical proselytizing tone - we know already it is the right way
forward.’ Some accepted that this was part of the way new initiatives were conceived and
that working within these constraints had to be accommodated whereas others were left
feeling uncomfortable by the ‘spin’ put on their efforts and achievements (particularly at the
national learning sessions):
“I think if I had one plea, it would be for them to stop evangelising. I just felt it was a bit ‘fingers down
the throat’, and a lot of people in the audience when they said is there anyone from the collaborative here,
thought twice before putting their hand up.” (Project Manager)
“The national team had clearly decided from the very start that the CSC team was a great team - everything
else, especially the national meetings, were tailored to proclaim this success scientifically - too
much hype.” (Project Manager)
Another fear concerning the perceived political nature of the programme was that inequities
in service provision exposed as a result of the work might remain hidden because
they would distract from the “good news” and achievements:
“What I would hate to see with this project is that if a number of inequities that are shown up by it, that
they will be get buried somewhere because it’s all bad news. There’s bound to be an awful lot of things
shown up by this that are not terribly pleasing. Not only locally, but even centrally. And it would be
hideous if a political agenda was superimposed on it and it somehow got buried because it wasn’t very
flattering.” (Tumour Group Lead Clinician)
We have already discussed the tensions between having strong national and local
components to the CSC. Many of the comments above reflect that the drivers for change are
different at these two levels. These different perspectives can create suspicion and doubts on
the part of some participants, particularly when they are dubious about the quality and
content of the data that are used to assess progress within a collaborative. Resolving some of
the measurement and data collection issues raised earlier would go some way to reassuring
participants but what is also required is a change in tone and content in the way that progress
is fed back to participants, especially at the national workshops. Whilst celebrating success
is important, a more realistic appraisal of the scope of changes and improvements that had
been made (and exploration of the reasons for relative ‘failure’) would have been welcomed by
participants. This links closely with issues around local ownership of the work which is
another key lesson from phase I of the CSC.
8.6 Local Ownership
To continue to use the methods they have been taught, teams will need to learn how to use
them flexibly and be convinced of the value of continuing to do so. Some teams may not
have acquired this ‘deeper’ learning and conviction which allows flexible and continuous
application, and which also makes it more likely that they can pass improvement concepts
and ideas on to others. In most cases project teams in the CSC will need to continue to make
changes to wider procedures, systems and processes. The danger is that some teams may
overlook the need to sufficiently anticipate, learn about or plan how to sustain improvements
after the collaborative has formally ended.
As discussed in chapter 3 the empowering of local staff was a key lever for change in the
CSC:
“I think the ownership is the key thing, isn’t it, because you can’t make people do things that they don’t
want to do…because if you do, then they’ll just do it when you’re there and then when you’ve gone, it
will all fall apart again. So if you want it to continue and be sustained, then they’ve got to own it and
they’ve got to want to do it.” (Project Manager)
However, there is a tension between the need for teams to take local ownership of the
process - which, by necessity, takes time - and the emphasis on quick results that is integral
to the adopted improvement approach:
“It was something that they had to own themselves, not for us to be the ones going in and saying this is
what you do and how you do it. The ideas: we had to make the team come up with the ideas and want to
make the changes themselves, otherwise there’s no point. But again it didn’t happen as fast as maybe
NPAT would have liked them to.” (Programme Manager)
The speed at which project teams can begin to ‘own’ the process is also closely related to
how well the CSC was aligned to existing initiatives and local structures. In some cases
these were already in place. However, where such integration was slower to take place it
seems that more focused regional one day events and tumour group meetings would have
encouraged greater and earlier local ownership. Indeed the leaders of the CSC
acknowledged that the ‘turning point’ after the slow start to the CSC was when teams began
to engage in more tumour specific work. A consistent message was that participants would
have welcomed less time spent during, and at, national meetings and more space for smaller
breakout sessions and tumour specific meetings:
“I think they’re talking about a lot less national work in phase II and I think that will be better and will be
helped by the shift from national to regional teams. Certain things will happen more locally than
nationally so you won’t be dragging people away for as much time. There’ll still be some national things - and I think that is right - but not as much and not for two days at a time.” (Programme Manager)
To this end the greater emphasis being placed on regional meetings in phase II of the CSC
was welcomed: “regions as focus for phase II will be a more cost-effective basis,” “more
done at the network level; less at the national level,” and “I want a regional roll-out similar
to Beacon sites and helping to graft improvements to local sites.”
The translation of national aspirations and policy to local action and implementation
(Exworthy et al, 2002) seen in subsequent phases of the CSC is therefore to be welcomed.
Inevitably, national policies aimed at shaping local policy agendas are mediated by central
and local expectations of policy (Pressman and Wildavsky, 1973) - ‘great expectations in
Westminster may be dashed locally’ - and so the emphasis now being placed on creating
capacity for modernisation and improvement locally is the right one.
8.7 Receptive context
Given that the improvement method taught to all the projects - and the mode of its national
introduction to participants - was very similar, explanations for the recorded variations (both
quantitative and qualitative) between the nine programmes and their constituent 51 projects
must lie elsewhere. It is increasingly clear that the receptive contexts (Pettigrew et al, 1992)
at the individual, team and organisational levels play a significant role in determining both
outcomes and experiences of programmes such as the CSC. Ferlie and Shortell (2001)
suggest that the development of a receptive context, or organisational culture, is an
‘important force for any change’; this assertion being supported by the findings of studies in
the US (Shortell et al, 1995; Douglas and Judge, 2001) and elsewhere. Indeed, Pettigrew et
al (1992) suggested that identifying receptive contexts for change may be more important
than identifying effective levers for change which might work across all contexts.
With this in mind, an in-depth case study identified five factors that helped explain much of
the observed variation in the rate and pace of change across different clinical services,
specialities and directorates participating in a large-scale change programme (McNulty and
Ferlie, 2002: 282ff). The factors were the different:
- organization, management and resourcing structures of the programme (e.g. devolution to clinical specialties and directorates),
- receptive and non-receptive contexts for change (e.g. clinical support and leadership),
- scope and complexity of patient processes (e.g. work jurisdictions),
- approaches to planned change (e.g. internal/external change leadership), and
- levels of resourcing for change (e.g. investment in IT).
The research suggested that:
‘Receptivity to patient process redesign within directorate and specialty settings is explained in relation
to: perceived determinancy of patient processes; the presence of clinicians, especially medical
consultants willing to lead, support and sanction redesign interventions; generic processes of
organisation and management at directorate and specialty levels; and the quality of existing
relationships within and between clinical settings.’ (p. 287)
Research into another NHS Collaborative also focused on receptivity as a key variable in
explaining marked differences in outcomes from a programmatic approach to quality
improvement (Bate et al, 2002: pp. 80-83), as did a recent evaluation of the National Booked
Admissions Programme (Ham et al, 2002).
Factors that make up a ‘receptive context’ have been described as including (Bate et al,
2001, adapted from Pettigrew et al, 1992):
- the role of intense environmental pressure in triggering periods of radical change
- the availability of visionary key people in critical posts leading change
- good managerial and clinical relations
- a supportive organisational culture (which is closely related to the three preceding factors)
- the quality and coherence of ‘policy’ generated at a local level (and the ‘necessary’ pre-requisite of having data and being able to perform testing to substantiate a case)
- the development and management of a co-operative interorganisational network
- simplicity and clarity of goals and priorities, and
- the change agenda and its locale (for example, whether there is a teaching hospital presence and the nature of the local NHS workforce).
Our quantitative analysis of 14 project teams in phase I of the CSC provides only a very
limited basis from which to compare the performance of the 51 projects and the nine
programmes. Nor do we have a sufficient breadth or depth of interview data to permit
analysis of the specific reasons for the range of experiences identified amongst the project
teams. However, important variables in local context and implementation strategies across
the Trusts participating in phase I of the CSC included:
- Local conditions and developments prior to CSC: for instance, important factors might have included whether the host organisation had recently merged with another NHS Trust or whether the local cancer network was already well established.
- Start date: work was underway in some project teams up to six months before other teams.
- Leadership: the extent, and visibility, of senior (managerial and clinical) leadership from within the host organisations varied significantly.
- Project management: although most project managers in phase I of the CSC were full-time some were not, and whilst some project managers were drawn from existing staff, others were newly appointed from outside the organisation.
- Team composition: the backgrounds of project team members varied (including clinical, nursing, therapy, management) and whilst some teams had members with experience of redesign and project management, others did not. In addition, some teams had greater experience of multi-disciplinary team working.
- Clinical involvement: some teams worked with one clinician, some worked with more than one; some worked on one site whereas others worked across a number of sites.
- Expenditure: for example, use of CSC funding for project-related non-clinical staff time ranged from £188,561 (34%) to £438,127 (79%) across the nine programmes. Similarly, funding for project-related clinical staff time, new clinical capacity or waiting list initiatives varied tenfold (range from £33,275 (6%) to £310,571 (56%)).
The challenge of improving services across the five tumour groups also varied greatly. For
example, at one extreme, having already benefited from the Calman-Hine reforms, most
breast cancer patients were already treated within 62 days, and the three breast projects
included in our analysis all set more ambitious targets. Here the challenge was to ensure
that all patients benefited from rapid treatment and not just the majority. Our analysis of
outcomes suggests that this remains a very difficult challenge. At the other extreme,
prostate cancer services had not yet been ‘Calman-Hined’, and so the challenge was to
implement the basics such as MDT working. The two prostate projects included in our
analysis both made substantial reductions in waiting times. Nevertheless, the comparatively
long waiting times still experienced by patients with prostate cancer suggest that the
challenge of meeting national waiting time targets is also considerable.
Our dataset is too limited to allow us to comment in detail on local conditions for change: we interviewed at most the project manager and one clinician from each project. To present correlations and derive conclusions from such a dataset would be misleading and superficial; more detailed comparisons would require further data collection from other project team members and local stakeholders. Future studies should seek to focus more closely on such issues and to identify the key determinants of effectiveness.
References
Airey C, Becher H, Erens B, Fuller E. (2002). National Surveys of NHS Patients – Cancer:
National Overview 1999/2000. London; Department of Health
Bate SP, Robert G, McLeod H. (2002). Report on the ‘Breakthrough’ Collaborative
approach to quality and service improvement within four regions of the NHS. A research
based investigation of the Orthopaedic Services Collaborative within the Eastern, South &
West, South East and Trent regions. Birmingham; Health Services Management Centre,
Birmingham University
Bate, SP and Robert, G. (2002) ‘Knowledge Management and communities of practice in
the private sector: lessons for modernising the National Health Service in England and
Wales,’ Public Administration, 80(4): 643-663
Benneyan JC. (1998) ‘Use and interpretation of statistical quality control charts’
International Journal for Quality in Health Care, 10(1): 69-73
Berwick, DM. (1998) ‘Physicians as leaders in improving health care: A new series in
Annals of Internal Medicine’, Annals of Internal Medicine, 128 (4):15
Bowns IR and McNulty T. (1999) Re-engineering Leicester Royal Infirmary: an
independent evaluation of implementation and impact. Sheffield; SCHARR, University of
Sheffield
Brattebo G, Hofoss D, Flaatten H et al. (2002) Effect of a scoring system and protocol for
sedation on duration of patients’ need for ventilator support in a surgical intensive care unit,
British Medical Journal, 324: 1386-9
Cancer Services Collaborative Planning Team. (2000) The Cancer Services Collaborative
Twelve Months On, NPAT
Cancer Services Collaborative. (2001a) Lung Cancer Improvement Guide, NPAT
Cancer Services Collaborative. (2001b) Ovarian Cancer Improvement Guide, NPAT
Counte MA and Meurer S. (2001) ‘Issues in the assessment of continuous quality
improvement implementation in health care organisations’, International Journal for
Quality in Health Care, 13(3): 197-207
Deegan P, Heath L, Brunskill J et al. (1998) ‘Reducing waiting times in lung cancer’,
Journal of the Royal College of Physicians of London, 32(4): 339-343
Department of Health. (2000) The NHS Cancer Plan. A plan for investment, a plan for
reform. London; HMSO
Department of Health. (2001) The NHS Cancer Plan – Making Progress, Department of
Health
Douglas TJ and Judge WQ. (2001) ‘TQM implementation and competitive advantage: the
role of structural control and exploration’, Academy of Management Journal, 44(1): 158-169
Expert Advisory Group on Cancer. (1995) A policy framework for commissioning cancer
services. London, Department of Health
Exworthy M, Berney L and Powell M. (2002) ‘How great expectations in Westminster may
be dashed locally’: the local implementation of national policy on health inequalities,’
Policy and Politics, 30(1): 79-96
Fergusson R and Borthwick D. (2000) ‘Organizing the care of lung cancer patients’,
Hospital Medicine, 61(12):841-843
Ferlie E and Shortell S. (2001) ‘Improving quality of health care in the United Kingdom and
the United States: a framework for change’, Milbank Quarterly, 79(2): 281-315
George P. (1997) ‘Delays in the management of lung cancer’, Thorax, 52: 107-108
Ham C, Kipping R, McLeod H and Meredith P. (2002) Capacity, Culture and Leadership:
lessons from experience of improving access to hospital services. Birmingham; Health
Services Management Centre, University of Birmingham
Horbar JD, Rogowski J, Plsek P et al. (2001) ‘Collaborative quality improvement for
neonatal intensive care’, Pediatrics, 107(1): 14-22
Kerr D, Bevan H, Gowland B, Penny J and Berwick D. (2002) ‘Redesigning cancer care’,
British Medical Journal, 324: 164-166
Kilo CM. (1998) ‘A Framework for Collaborative Improvement: lessons from the Institute
for Healthcare Improvement’s Breakthrough Series’, Quality Management in Health Care,
6(4): 1-13
Langley GJ, Nolan KM and Nolan TW. (1992) The foundation for improvement. Silver
Spring, MD; API Publishing
Langley G, Nolan K and Nolan T. (1994) ‘The Foundation of Improvement’, Quality
Progress. June, 81-86
Langley J, Nolan K, Nolan T, Norman C and Provost L. (1996) The Improvement Guide.
San Francisco; Jossey-Bass
Leape LL, Kabcenell AI, Gandhi TK, Carver P, Nolan TW, Berwick DM. (2000) ‘Reducing
adverse drug events: lessons from a breakthrough series,’ Joint Commission Journal on
Quality Improvement, 26(6): 321-331
Leatherman, S. (2002) ‘Optimizing quality collaboratives’, Quality and Safety in Health
Care, 11: 307
Locock L. (2001). Maps and journeys: redesign in the NHS. Birmingham; Health Services
Management Centre, University of Birmingham
Locock L. (2003). ‘Healthcare redesign: meaning, origins and applications’, Quality and
Safety in Healthcare, 12: 53-58
Lynn J, Schall MW and Milne C. (2000) ‘Quality improvements in end of life care: insights
from two Collaboratives,’ Journal on Quality Improvement, 26(5): 254-267
McNulty T and Ferlie E. (2002) Reengineering health care. The complexities of
organisational transformation, Oxford; Oxford University Press
Meredith P, Ham C, Kipping R (1999). Modernising the NHS: booking patients for hospital
care. Birmingham; Health Services Management Centre, University of Birmingham
NHS Executive. (1997) Guidance on Commissioning Cancer Services Improving Outcomes
in Colorectal Cancer, Department of Health
NHS Executive. (1998) Guidance on Commissioning Cancer Services Improving Outcomes
in Lung Cancer, Department of Health
NHS Executive. (1999) Guidance on Commissioning Cancer Services Improving Outcomes
in Gynaecological Cancers, Department of Health
NHS Modernisation Agency. (2002) Improvement Leader’s Guide to Measurement for
improvement, Department of Health
NHS Modernisation Board. (2002) The NHS Plan – A Progress Report. The NHS Modernisation Board’s Annual Report, 2000-2001. (www.doh.gov.uk/modernisationboardreport)
NPAT. (1999) Cancer Services Collaborative Improvement Handbook, Version 1,
November
NPAT. (2001b) Media Day 21 August 2001 Case Studies, NPAT
Øvretveit J. (2000) ‘Total Quality Management in European healthcare’, International
Journal of Health Care Quality Assurance, 13(2): 74-80
Øvretveit, J., Bate, S.P., Cleary, P., Cretin, S., Gustafson, D., McInnes, K., McLeod, H.,
Molfenter, T., Plsek, P., Robert, G., Shortell, S., and Wilson, T. (2002) ‘Quality
collaboratives: lessons from research’, Quality & Safety in Health Care, 11: 345-351
Packwood T, Pollitt C and Roberts S. (1998) ‘Good medicine? A case study of business
process reengineering in a hospital’. Policy and Politics, 26: 401-415
Parker H, Meredith P, Kipping R, McLeod H and Ham C. (2001). Improving patient
experience and outcomes in cancer: early learning from the Cancer Services Collaborative.
Interim report. Birmingham; Health Services Management Centre, University of
Birmingham
Pettigrew, A., Ferlie, E. and McKee, L. (1992) Shaping Strategic Change. London: Sage
Plsek P. (1997) ‘Collaborating across organisational boundaries to improve the quality of
care’, American Journal of Infection Control, 25: 85-95
Plsek P.(1999) ‘Evidence-based quality improvement, principles and perspectives. Quality
Improvement methods in clinical medicine,’ Pediatrics, 103(1): 203-214
Powell AE and Davies HTO. (2001) ‘Business process re-engineering: lost hope or learning
opportunity?’ British Journal of Health Care Management, 7(11): 446-449
Pressman J and Wildavsky A. (1973) Implementation: how great expectations in Washington
are dashed in Oakland, Berkeley, CA: University of California Press
Richards M et al. (2000) The NHS Prostate Cancer Programme, London: Department of
Health
Robert G, Hardacre J, Locock L, Bate SP. (2002) ‘Evaluating the effectiveness of the Mental
Health Collaborative as an approach to bringing about improvements to admission, stay
and discharge on Acute Wards in the Trent and Northern & Yorkshire regions. An Action
Research project.’ Birmingham; Health Services Management Centre, University of
Birmingham.
Rogowski JA, Horbar JD, Plsek PE et al. (2001) ‘Economic implications of neonatal
intensive care unit quality improvement,’ Pediatrics, 107(1): 23-29
Quinn M. (2000) ‘Cancer trends in England and Wales, 1950-1999,’ Health Statistics
Quarterly, 8: 5-19
Shortell SM, O’Brien JL, Carman JM. (1995) ‘Assessing the impact of continuous quality
improvement/total quality management: concept versus implementation’, Health Services
Research, 30(2): 377 – 401
Spurgeon P, Barwell F and Kerr D. (2000) ‘Waiting times for cancer patients in England
after general practitioners’ referrals: retrospective national survey’, British Medical Journal,
320: 838-9 (25 March)
Turner J. (2001) ‘Collaborating Care,’ IHM, November: 18-19
Van de Ven, A. (1986) ‘Central problems in the management of innovation’, Management
Science, 32(5): 590-607
Appendix 1
Patient level data collection form
Final version
Cancer Services Collaborative activity data
CSC Network name:
Cancer type:
Contact name:
Contact's telephone number:
Definition of patients included in the 'slice':
Definition of urgency (see note 3):
Please complete one row for each NHS patient diagnosed with cancer during January to March 2000 and January to March 2001.
(See note 2)
Columns (completed across one row per patient):
- Patient ref (see note 1)
- Was this patient included in your project's slice? Y / N (see note 2)
- Name of the Trust patient referred to
- Source of referral (see note 3)
- Was the referral urgent, as defined by the 2 week wait criteria? Y / N
- Was the referral urgent in terms of locally defined criteria? Y / N (see note 4)
- Date of referral
- Date of first specialist appointment
- Date of confirmed diagnosis (see note 5)
- Date of first definitive treatment (see note 6)
- What was the first definitive treatment? (see note 7)
- Was the first specialist appointment booked? Y / N / NK (see note 8)
- Was the first diagnostic investigation booked? Y / N / NK (see note 8)
- Was the first definitive treatment booked? Y / N / NK (see note 8)
An example row in the form gives: included in slice = y; Trust = St Fred's; date of referral 03-Dec-99; first specialist appointment 16-Dec-99; confirmed diagnosis 03-Jan-00; first definitive treatment 30-Jan-00.
Please supply the data to Hugh McLeod by 1 October 2001. Please supply the data in the form of an Excel spreadsheet by email to [email protected]
If you would like an Excel file containing the above layout, please email Hugh McLeod. If you have any questions about this request for data please contact:
Hugh McLeod
Research Fellow
Health Services Management Centre
University of Birmingham
Park House, 40 Edgbaston Park Road
Birmingham B15 2RT tel 0121 414 7620
Cancer Services Collaborative activity data
Please supply data for each CSC project separately. Please include NHS patients only and
exclude any private patients.
Notes
1 Patient ref: Please use any reference which would allow the data for the patient to be queried if necessary.
2 Was this patient included in your project's slice? Y/N: The NHS patients for whom data are to be collected are all those under a consultant in one of the following two categories: (1) all consultants participating in the project’s ‘slice’, and (2) any consultants in the same trust(s) working in the same cancer type who are not involved in the slice. For example, if the ‘slice’ includes one consultant, and there are other consultants working at the same trust on the same cancer type, record data for all consultants and mark this data field Y or N as appropriate. If a project includes three Trusts, but consultants at only one Trust have been included in the ‘slice’, please only provide data for all patients at the one Trust in the ‘slice’. If the ‘slice’ changed over time, please provide data based on the ‘slice’ existing between January and March 2001. Please record the definition of the project's slice at row 7 of the 'CSC data' worksheet.
3 Source of referral: The request here is for the source of referral. Please use the following codes: 1 GP; 2 Consultant Physician; 3 Consultant Surgeon; 4 A&E; 5 Emergency Admission; 6 Screening programme; 7 Other or not known.
4 Was the referral urgent in terms of locally defined criteria?: If your project has recorded 'urgency', but not used the two week wait criteria, please record the definition used at row 8 of the 'CSC data' worksheet.
5 Date of confirmed diagnosis: Data are to be collected on all NHS patients diagnosed with one of the five CSC cancers during the two quarters (January to March 2000 and 2001). In cases where the diagnosis is made on the basis of histopathology, please record the date of the histopathology report. In other cases, please record the date of clinical diagnosis made by the patient's consultant/multi-disciplinary team.
6 Date of first definitive treatment: If the patient is waiting for treatment when the data are supplied to HSMC, please record as “waiting”.
7 What was the first definitive treatment?: Please use the following codes: 1 Surgery; 2 Chemotherapy; 3 Hormone therapy; 4 Radiation therapy; 5 Palliative and best supportive therapy; 6 No immediate active treatment; 7 Other or not known.
8 Booking data: In order to understand the use of booking for cancer patients, it is important that the booking data are provided at patient level. We appreciate that these data may not be available, in which case please record as “NK” for not known.
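As an illustration only (no analysis code accompanied the original data request), the sketch below shows one way a single row conforming to this specification might be represented and checked in Python. The field names are hypothetical, and the specimen values are loosely based on the example row in the form; the actual return was an Excel spreadsheet laid out as above.

```python
# Illustrative sketch only: a hypothetical in-memory representation of one row of the
# CSC activity data described above. Field names are invented for this example.
from datetime import date

REFERRAL_SOURCES = {1: "GP", 2: "Consultant Physician", 3: "Consultant Surgeon", 4: "A&E",
                    5: "Emergency Admission", 6: "Screening programme", 7: "Other or not known"}
TREATMENTS = {1: "Surgery", 2: "Chemotherapy", 3: "Hormone therapy", 4: "Radiation therapy",
              5: "Palliative and best supportive therapy", 6: "No immediate active treatment",
              7: "Other or not known"}
BOOKED = {"y", "n", "nk"}  # note 8: Y / N / NK (not known)

row = {
    "patient_ref": "1",                                   # note 1: any queryable reference
    "in_slice": "y",                                      # note 2
    "trust": "St Fred's",
    "referral_source": 1,                                 # note 3: coded, 1 = GP (assumed)
    "urgent_two_week_wait": "y",
    "urgent_local_criteria": "y",                         # note 4
    "date_referral": date(1999, 12, 3),
    "date_first_specialist_appointment": date(1999, 12, 16),
    "date_confirmed_diagnosis": date(2000, 1, 3),         # note 5
    "date_first_definitive_treatment": date(2000, 1, 30), # note 6 ('waiting' if not yet treated)
    "first_definitive_treatment": 1,                      # note 7: coded, 1 = Surgery (assumed)
    "booked_first_specialist_appointment": "y",           # note 8
    "booked_first_diagnostic_investigation": "n",
    "booked_first_definitive_treatment": "n",
}

# Basic checks against the coded fields defined in the notes above.
assert row["referral_source"] in REFERRAL_SOURCES
assert row["first_definitive_treatment"] in TREATMENTS
assert all(row[k] in BOOKED for k in row if k.startswith("booked_"))
print(REFERRAL_SOURCES[row["referral_source"]], "->", TREATMENTS[row["first_definitive_treatment"]])
```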
Appendix 2
Postal questionnaire
COVERING LETTER
Dr T. Smith
St Elsewhere Hospital
Nomansland.
Dear Dr Smith,
External evaluation of the Cancer Services Collaborative: Questionnaire
As you may know, the Department of Health commissioned the Health Services
Management Centre to conduct an external evaluation of the Cancer Services Collaborative
(CSC).
The enclosed questionnaire is about your views of participating in the CSC over the last
sixteen months. It is being sent to all CSC clinical leads, project, and programme managers
and others closely associated with the Collaborative. It would be most helpful if you could
complete the questionnaire and return it in the postage paid envelope.
This questionnaire is confidential and individual respondents will not be identifiable. The
code on the front sheet is to enable the research team to record returns and issue reminders
where necessary. Once you have returned the questionnaire, the front sheet with the code
will be removed before data are entered and analysed. To enable a distinction between views
of programme lead clinicians, tumour group lead clinicians, programme managers, project
managers and “others”, the questionnaire is printed on different coloured paper for each
respective group.
In addition to questionnaire data, the research team have conducted individual and group
interviews with CSC project and programme managers in all nine Programmes, and are
collecting outcome and cost data. A final evaluation report will be produced in the summer.
Thank you in advance for your help. If you have any questions regarding the questionnaire
or the evaluation, please contact Dr Glenn Robert ([email protected]) or telephone the above
number.
Yours sincerely,
CONFIDENTIAL
End of CSC questionnaire to all
project managers, programme managers
and clinical leads
External evaluation of the Cancer Services Collaborative
Health Services Management Centre
Throughout the questionnaire, please circle the relevant statement. Disregard the column of boxes on the extreme right of the page; this is for analysis purposes only.
A. THE IMPROVEMENT APPROACH
We would like to know HOW HELPFUL overall you found the following components of the CSC improvement approach in the context of your role in the CSC programme. (Response options for each component: Not involved/not applicable; Very helpful; Quite helpful; Not particularly helpful; Not at all helpful.)
1. Process mapping
2. PDSA cycles
3. Capacity and demand training
4. Change principles
5. Improvement handbook (orange folder)
6. Dedicated project manager
7. Monthly reports
8. Team self-assessment scores
9. Conference Calls
10. Listserv
11. National learning workshops
12. National One-day meetings on specific topics
13. In relation to the above list, for those components that you found most helpful and least helpful we would like to know IN WHAT WAY they were helpful or unhelpful.
Most helpful components: …………………………………………………………………………
Most unhelpful components: …………………………………………………………………………
B. YOUR PARTICIPATION
We wish to know HOW USEFUL you found the following events in the context of your role in the CSC programme. (Response options for each event: Did not attend/use; Not at all useful; Not particularly useful; Quite useful; Very useful.)
14. National Learning workshops
- Dudley (November 1999)
- Harrogate (February 2000)
- Canary Wharf (June 2000)
- Blackpool (November 2000)
- Newport (prostate only)
Further comments on usefulness of National Learning Workshops you attended: …………………………………………………………………………
15. One day CSC events
- Chief Executives workshop
- Clinical leaders workshop with Don Berwick
- One day meeting on Radiology
- Palliative Care
- Primary Care
- Two day meeting on Radiology
- Patient Information
Further comments on usefulness of days you attended: …………………………………………………………………………
C. FACTORS THAT CONTRIBUTED TO CSC ACHIEVEMENTS
16. We are interested in your views on the extent to which central policies on cancer (e.g. national cancer guidance, the cancer plan, 14 day standard etc.) have contributed to achieving the objectives of the CSC. Please comment on how particular policies have helped or hindered your progress:
…………………………………………………………………………
We list below some of the broader aspects of the CSC and we would like to know HOW HELPFUL you found these in the context of your own role in the CSC. (Response options for each aspect: Very helpful; Quite helpful; Not particularly helpful; Not at all helpful; No opinion.)
Questions 17 to 23 ask about the following aspects in turn: Role of national CSC team; Role of cancer networks; Local CSC programme leadership; Role of clinical champions; Role of HA; Role of regional office; Role of trust chief exec.
24. In relation to the above list, for those aspects that you found most helpful and least helpful we would like to know IN WHAT WAY they were helpful or unhelpful.
Most helpful aspects: …………………………………………………………………………
Most unhelpful aspects: …………………………………………………………………………
D. ADDITIONAL GAINS
25. In your view, have there been any local benefits from participating in the CSC which were not directly associated with the formal objectives and processes of the CSC?
Yes / No / Don’t know
26. If yes, please briefly state what these unforeseen benefits have been:
…………………………………………………………………………
E. SUGGESTIONS FOR CHANGES IN THE WAY THE CSC WAS IMPLEMENTED
Thinking back over your experience of the CSC during the last 16 months, is there anything you would do differently if the programme were to start in three months’ time, in respect of:
27. Selection of tumour groups
…………………………………………………………………………
28. Compilation of improvement teams (e.g. full-time/part-time project managers, clinical leads)
…………………………………………………………………………
29. Learning workshops
…………………………………………………………………………
30. Measurement
…………………………………………………………………………
31. Monthly reports
…………………………………………………………………………
32. Local ownership (e.g. embedding initiative in existing structures)
…………………………………………………………………………
33. How allocated funds were used
…………………………………………………………………………
34. Any other changes you would like to suggest:
…………………………………………………………………………
F. SUGGESTIONS FOR CSC PHASE 2
35. Please give any suggestions that may help to facilitate phase 2 of the CSC:
…………………………………………………………………………
G. BEST AND LESS HELPFUL ASPECTS OF PARTICIPATING IN THE CSC
Participating in the CSC may have had both positive and less helpful aspects. In this section you
have the opportunity to identify up to 3 particularly positive aspects or “plusses” and up to 3 less
helpful aspects or concerns relating to your participation in and experience of the CSC. Please list in
order starting with the most positive (Q36) and least helpful (Q37):
36. Particularly positive aspects
(i) …………………………………………………………………………
(ii) …………………………………………………………………………
(iii) …………………………………………………………………………
37. Less helpful aspects or concerns
(i) …………………………………………………………………………
(ii) …………………………………………………………………………
(iii) …………………………………………………………………………
H. SUSTAINABILITY
We would like to have your assessment of the sustainability of the local CSC programme in which
you have been involved.
38. Please indicate HOW WELL-EMBEDDED within the participating organisation you believe the CSC now is:
Not at all well-embedded / Not particularly well-embedded / Quite well-embedded / Very well-embedded
Further comments on likely sustainability of changes you have made:
…………………………………………………………………………
I. FINAL COMMENTS
39. Looking at the CSC over the whole course of your involvement, how would you ASSESS THE EVOLUTION of the programme?
Considerably strengthened over the whole period
Strengthened somewhat over the whole period
Remained strong over the whole period
Weakened somewhat over the whole period
Considerably weakened over the whole period
Remained weak over the whole period
40. Please add here any other comments you wish to make about your participation in the CSC or the initiative as a whole:
…………………………………………………………………………
THANK YOU FOR YOUR HELP
Please return the questionnaire in the postage paid addressed envelope
to: Health Services Management Centre, University of Birmingham
Appendix 3 Response rates to postal questionnaire

TABLE 12
Response rates to postal questionnaire by respondent group

Group                           No. questionnaires mailed   Number questionnaires returned   Response rate (%)
Project Managers                55                          38                               69
Tumour Lead group clinicians    54                          40                               74
Programme Managers              9                           6                                66
Programme Clinical Leads        7                           7                                100
Others                          5                           5                                100
TOTAL                           130                         96                               74

TABLE 13
Response rates to postal questionnaire by CSC Programme

Programme   No. questionnaires mailed   Number questionnaires returned   Response rate (%)
A           18                          17                               94
B           14                          9                                64
C           11                          10                               91
D           10                          9                                90
E           15                          13                               87
F           24                          20                               83
G           15                          11                               73
H           9                           5                                56
I           9                           2                                22
Others      5                           5                                100
TOTAL       130                         96                               74
Appendix 4
Collection of patient-level activity data
In February 2000, the HSMC evaluation team discussed with NPAT options for evaluating
the outcomes of the CSC. It was recognised that the CSC programmes had already been
asked by NPAT to develop their own targets and measurement arrangements. This approach
had been used for the first wave of the booked admissions programme, and did not facilitate
comparative analysis suitable for the purpose of evaluation. It was agreed that the evaluation
should consider outcomes, and not just structure and process issues, because of the CSC’s
focus on outcomes. Following discussion between HSMC and NPAT it was agreed in March
2000 that the CSC participants would be requested to collect a minimum dataset covering
‘before’ and ‘after’ periods for the purpose of the evaluation.
In March 2000, the HSMC evaluation team made a proposal to NPAT about the analysis of
activity data as part of the evaluation. The key measure was waiting time for three stages in
the patient 'journey': from initial referral to first outpatient appointment, from first outpatient
appointment to date of confirmed diagnosis, and from date of confirmed diagnosis to date of
first definitive treatment. Secondary measures included the use of appointment booking
arrangements for first outpatient appointment and the first definitive treatment.
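As an illustration only (no analysis code was supplied with the evaluation), the staged measures described above reduce to simple date arithmetic; the function below is a hypothetical sketch of that calculation.

```python
# Hypothetical sketch: the three stage waiting times in the patient 'journey', plus the
# overall referral-to-treatment wait that became the key measure.
from datetime import date

def stage_waits(referral: date, first_outpatient: date, diagnosis: date, treatment: date) -> dict:
    """Waiting times in days for each stage of the patient 'journey'."""
    return {
        "referral_to_first_outpatient": (first_outpatient - referral).days,
        "first_outpatient_to_diagnosis": (diagnosis - first_outpatient).days,
        "diagnosis_to_first_treatment": (treatment - diagnosis).days,
        "referral_to_first_treatment": (treatment - referral).days,  # key measure
    }

# Example with made-up dates
print(stage_waits(date(2000, 1, 10), date(2000, 1, 24), date(2000, 2, 7), date(2000, 3, 1)))
```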
HSMC’s proposed dataset was presented to the CSC programme managers in April and
rejected. However, in June 2000, NPAT requested that the CSC programme managers
collect activity data. The key measure was waiting time from referral to first definitive
treatment. In addition, waiting time data were requested “where possible” for each of the
three stages in the patient’s journey noted above. The request also included summary data on
booking activity. The waiting time data were requested for all patients between January
2000 and March 2001, and the request included an instruction to “keep the raw data for the
external evaluator”.
In July 2000, the HSMC evaluation team commented on the NPAT’s June request, and
suggested some changes. In December 2000, following a request from the HSMC evaluation
team, NPAT asked the CSC programme managers to send all currently available activity
data to the HSMC evaluation team.
In January 2001, a summary analysis of the limited available data was presented at a
meeting at the Department of Health. Following this meeting and subsequent discussion, it
was decided that a standardised specification for the data was necessary in order to facilitate
the collection of a satisfactory dataset.
In February 2001, the HSMC evaluation team sent a draft specification for the activity data
to the CSC programme managers and leaders of the CSC. The key changes from NPAT’s
June 2000 request were as follows:
- Data were required for all patients between two periods only (the first quarters of 2000 and 2001).
- Waiting time data would cover all three stages in the patient’s journey.
- Basic contextual data would be collected (urgency and type of first definitive treatment).
- The booking data would be at patient level where available.
Two main definitional issues were also raised, and it was suggested that (i) the patients
should be selected on the basis of when they started their treatment, and (ii) ‘all’ patients
meant all patients treated in Trusts participating in the CSC, rather than all patients within a
project’s ‘slice’.
In March 2001, the specification was discussed between the programme managers, leaders
of the CSC and the HSMC research team; several key changes to the draft specification were
requested. The most important changes were that patients were to be selected on the basis of
(i) their diagnosis date, rather than the date on which they started their first definitive
treatment, and (ii) being treated in Trusts participating in a ‘slice’.
HSMC received data from 10 projects in four CSC programmes based on the February draft data specification. It is not clear why these projects sent data to that specification, as it remained a draft only; presumably they thought that the specification had been accepted, in which case the final request for data would have appeared to be a request for the ‘same data’.
In June 2001, after further discussion NPAT agreed to the new data specification (see
appendix 1). In early September, the CSC programme managers were asked to supply the
activity data using the agreed data specification. In November, NPAT supplied the HSMC
evaluation team with data conforming to the final data specification for one additional
project1.
In April 2002, data for six more projects were provided. This resulted in data such that 27%
(14/51) of the projects are included in our main analysis. The main analysis covers the
‘before’ and ‘after’ periods (January to March 2000 and 2001) agreed for the comparative
analysis, and these projects provide some insight into the change in waiting times
experienced by patients within the scope of the CSC phase I (see table 3). A further 24%
(12/51) of the projects are included in a secondary analysis. These projects provide much
more limited insight into changes in waiting times. For nine of the 12 projects this is because
data were not provided for the ‘outcome’ quarter, January to March 2001, and instead the
quarter ending November or December 2000 has been used, depending on the last month for
which data were provided. For three of the nine projects, the data provided for the outcome
quarter were so limited that they provided only a very poor picture of waiting times.
Eighteen percent (9/51) of the projects provided some patient-level data, but these were too
limited to be analysed. No patient-level data were supplied for 31% (16/51) of the projects.
Table 14 shows that all the projects in Programmes A and B supplied data which are
included in the main analysis. A minority of the projects in Programmes C, D and E are
included in the main analysis, although the secondary analysis allows some additional
limited insight. Our view of the projects in Programmes F and G is very limited and none of
the data provided for projects in Programmes H and I were usable.
1
Of five files containing 'new' data, three contained 'baseline' data only (i.e. data up to March 2000 only), and
one file contained data for a lung project up to September 2000 only.
TABLE 14
Patient-level data on waiting times by programme

Programme     Projects included    Projects included        Projects excluded   Projects that        Total
              in main analysis     in secondary analysis    from analysis       supplied no data
              number (%)           number (%)               number (%)          number (%)           number (%)
programme A   4 (100)              0 (0)                    0 (0)               0 (0)                4 (100)
programme B   4 (100)              0 (0)                    0 (0)               0 (0)                4 (100)
programme C   2 (40)               2 (40)                   1 (20)              0 (0)                5 (100)
programme D   3 (27)               1 (9)                    0 (0)               7 (64)               11 (100)
programme E   1 (20)               4 (80)                   0 (0)               0 (0)                5 (100)
programme F   0 (0)                3 (60)                   1 (20)              1 (20)               5 (100)
programme G   0 (0)                2 (33)                   0 (0)               4 (67)               6 (100)
programme H   0 (0)                0 (0)                    5 (100)             0 (0)                5 (100)
programme I   0 (0)                0 (0)                    2 (33)              4 (67)               6 (100)
total         14 (27)              12 (24)                  9 (18)              16 (31)              51 (100)
Appendix 5
CSC Project level analysis of selected quantitative outcomes
This appendix presents additional analysis for the 14 projects included in the main analysis:
- Prostate project, programme C
- Prostate project, programme B
- Breast project A, programme D
- Breast project, programme C
- Breast project, programme A
- Ovarian project, programme B
- Ovarian project, programme D
- Ovarian project, programme A
- Colorectal project, programme B
- Colorectal project, programme D
- Colorectal project, programme A
- Lung project, programme A
- Lung project, programme E
- Lung project, programme B
CSC programme C Prostate project
Waiting time target
The project’s report for March 2001 states that the waiting time target (for referral to
first definitive treatment) was 70 days. Figure 27 shows the ‘run chart’ for the project
reported to NPAT. No information is recorded about the patients’ status (eg urgency)
or treatment type. This project’s team self-assessment score and CSC Planning Group
score in March 2001 was 4.5.
FIGURE 27
Programme C Prostate project; waiting time ‘run chart’
[Run chart of the average number of days from referral to first definitive treatment by month, January 2000 to February 2001, plotted against the 70-day target. Annotations mark when pre-booking was considered, when pre-booking procedures/appointments started, when patients began leaving clinic with a scan and follow-up appointment, and when daily ‘emergency clinics’ were initiated.]
Month:                 Jan    Feb    Mar   Apr    May   Jun   Jul   Aug   Sep   Oct   Nov   Dec   Jan   Feb
Av days to treatment:  141.4  149.5  99.4  123.3  82.2  48.9  74.3  74.1  68    98.3  56.5  54.4  37.2  47.5
Target:                70     70     70    70     70    70    70    70    70    70    70    70    70    70
No of patients:        5      6      7     3      4     9     12    13    3     3     2     7     5     2
Source: project report to NPAT for March 2001
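A run chart of this kind is straightforward to reproduce from the monthly figures above; the following is a hedged sketch using matplotlib (our choice of tool, not the project's), plotting the average days to treatment against the 70-day target.

```python
# Sketch only: reproducing a run chart like figure 27 from the monthly averages reported
# in the project's run chart above.
import matplotlib.pyplot as plt

months = ["Jan-00", "Feb-00", "Mar-00", "Apr-00", "May-00", "Jun-00", "Jul-00",
          "Aug-00", "Sep-00", "Oct-00", "Nov-00", "Dec-00", "Jan-01", "Feb-01"]
avg_days = [141.4, 149.5, 99.4, 123.3, 82.2, 48.9, 74.3, 74.1, 68, 98.3, 56.5, 54.4, 37.2, 47.5]

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(months, avg_days, marker="o", label="Average days to treatment")
ax.axhline(70, linestyle="--", label="Target (70 days)")
ax.set_ylabel("days")
ax.tick_params(axis="x", rotation=45)
ax.legend()
fig.tight_layout()
fig.savefig("programme_c_prostate_run_chart.png")
```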
Analysis of patient-level data
This project supplied data using the draft data specification. All the cases were
recorded as GP referrals and urgent using locally defined criteria. Data on waiting
times were available for 86% (12/14) of cases in the quarter ending March 2000, and
62% (8/13) of cases in the quarter ending March 2001. Of the cases with waiting time
data, all but one in each quarter were treated with hormone therapy. Table 15 shows
that across all patients starting the first definitive treatment in each quarter for whom
waiting times are available, the mean waiting time from referral to first definitive
treatment reduced from 143.1 days to 51.9 days and the median waiting time reduced
from 115.5 days to 48.0 days.
TABLE 15
Programme C Prostate project: all cases; waiting time from referral to first definitive treatment (days)

                           Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                       143.1          51.9           -63.7
minimum                    22.0           27.0           22.7
first quartile             80.0           34.8           -56.6
median                     115.5          48.0           -58.4 *
third quartile             231.3          58.8           -74.6
maximum                    273.0          100.0          -63.4
inter quartile range       151.3          24.0           -84.1
total number of patients   12             8              -33.3
total number of days       1717           415            -75.8

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
In terms of the project’s target, 8% (1/12) of patients waited 70 days or less in the
quarter ending March 2000, and this compares to 88% (7/8) of patients who waited 70
days or less in the quarter ending March 2001.
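For readers unfamiliar with the test reported in table 15, the sketch below shows the form of the comparison: summary statistics for each quarter and a two-sided Mann-Whitney U test on the two sets of waiting times. The waiting-time values are invented placeholders, since the patient-level data are not reproduced in this report.

```python
# Illustrative only: the kind of comparison reported in table 15. The waiting-time lists
# below are made-up placeholders, not the project's patient-level data.
import numpy as np
from scipy.stats import mannwhitneyu

before = np.array([22, 60, 95, 110, 121, 150, 190, 231, 240, 255, 268, 273])  # Jan-Mar 2000 (hypothetical)
after = np.array([27, 30, 34, 40, 56, 58, 70, 100])                           # Jan-Mar 2001 (hypothetical)

for label, x in (("before", before), ("after", after)):
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{label}: n={x.size}, mean={x.mean():.1f}, median={med:.1f}, IQR={q3 - q1:.1f}")

u_stat, p_value = mannwhitneyu(before, after, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 is flagged '*' in the tables
```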
Booking
See table 4 on page 32.
CSC programme B Prostate project
Waiting time target
The project’s report for March 2001 states that the waiting time target (for referral to
first definitive treatment) was 70 days. Figure 28 shows the ‘run chart’ for the project
reported to NPAT. No information is recorded about the patients’ status (eg urgency)
or treatment type. In March 2001, this project’s team self-assessment score was 4 and
CSC Planning Group score was 3.5.
FIGURE 28
Programme B Prostate project; waiting time ‘run chart’
[Chart 1a, ‘Patient Flow: Merseyside & Cheshire PROSTATE’: average number of days between each major stage of the patient journey (GP referral to first outpatient appointment, to diagnostic test, to treatment) by month, January 2000 to March 2001, plotted against the target. Annotations note that the time from GP referral to first outpatient appointment reduced at the start of the project, that biopsy sessions increased in July and pre-booking also improved time to diagnosis, that annual leave and cancellations caused delays, and that a broken ultrasound probe and lost sessions led to delays in biopsy.]
Source: CSC programme B report to NPAT for March 2001
Analysis of patient-level data
The project’s data did not conform to the draft or agreed data specification. The data
on treatment type were incomplete, but included a range of treatment options
including radical prostatectomy, transurethral resection of the prostate (TURP),
radiotherapy and hormone therapy. The reported sources of referral are shown in table
16. Table 17 shows that the mean waiting time from referral to first definitive
treatment reduced from 201.6 days to 86.1 days and the median waiting time reduced
from 156.5 days to 73.0 days. In terms of the project’s target, 6% (1/16) of patients
waited 70 days or less in the quarter ending March 2000, and this compares to 47%
(8/17) of patients in the quarter ending March 2001.
TABLE 16
Programme B Prostate project: all cases; source of referral

source of referral   Jan-Mar 2000   Jan-Mar 2001
                     number (%)     number (%)
GP                   9 (56.3)       8 (47.1)
Other                4 (25.0)       4 (23.5)
Open                 1 (6.3)        1 (5.9)
Within               1 (6.3)        1 (5.9)
Blank                1 (6.3)        0 (0.0)
Urologist            0 (0.0)        3 (17.6)
Total                16 (100.0)     17 (100.0)
TABLE 17
Programme B Prostate project: all cases; waiting time from referral to first definitive treatment (days)

                           Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                       201.6          86.1           -57.3
minimum                    68.0           5.0            -92.6
first quartile             109.5          58.0           -47.0
median                     156.5          73.0           -53.4 *
third quartile             259.0          101.0          -61.0
maximum                    501.0          330.0          -34.1
inter quartile range       149.5          43.0           -71.2
total number of patients   16             17             6.3
total number of days       3225           1463           -54.6

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
Patient-level data on booking were not supplied.
CSC programme D Breast project A
Waiting time target
The CSC programme D report to NPAT for February 2001 states that the waiting time
target (for referral to first definitive treatment) for the Trust A Breast project was 30
days. The report noted that the average waiting time for February 2001 was 22 days.
Figure 29 shows the ‘run chart’ for the project reported to NPAT (rather than the total
number of days noted in the figure’s title, it is likely that the figure shows the mean
number of days). No information is recorded about the number of patients, or their
status (eg urgency). Figure 28 shows that mean waiting times remained lower than the
target in each of the seven months before the project started and the eight months after
it started. However, it shows that the mean waiting time was higher in each month
after the project started than in any of the previous seven months. This project’s team
self-assessment score in March 2001 was 4.5 and the CSC Planning Group score was
4.
FIGURE 29
Programme D Breast project A; waiting time ‘run chart’
[Run chart titled ‘Breast Access - the total no of days (inc weekend days) from referral to specialist to 1st definitive treatment’, plotting monthly values from January 2000 to March 2001 against the Trust target of 30 days, with a note that the project commenced in August 2000.]
Source: CSC programme 1 report to NPAT for February 2001
Analysis of patient-level data
This project supplied data using the agreed data specification. The data for this project
include two treatment types only: ‘surgery’ and ‘other or not known’. Table 18 shows
that a majority of patients in each quarter received surgery, and that nearly all of these
patients have both the referral and treatment dates recorded. By comparison most of
the patients in the ‘other or not known’ category did not have a treatment date
recorded.
Most patients receiving surgery and having complete date data recorded were noted as
having been urgent referrals defined by unspecified local criteria (98% (48/49) in the
first quarter and 85% (53/62) in the second quarter). Table 19 shows a range of
measures relating to waiting times from referral to treatment for this patient group in
each quarter. (Please note that data for one patient from the first quarter of 2000 was
excluded from the analysis because the referral date (August 1996) shown was so long
ago that it may be erroneous.) Table 19 shows that waiting times did not decrease
between the first quarter of 2000 and the first quarter of 2001. The mean waiting time
increased from 20.6 days to 23.1 days and the median increased from 16 days to 21
days. In terms of the project’s target, 94% (44/47) of patients waited 30 days or less in
the quarter ending March 2000, and this compares to 92% (49/53) of patients who
waited 30 days or less in the quarter ending March 2001.
A subset of the patients included in table 19 were also recorded as being urgent
referrals as defined by the two week wait criteria. Table 20 shows the waiting times
analysis for these patients. It is notable that the patients fulfilling the two week wait
criteria form a minority of the locally defined urgent referrals in the first quarter of
2000. The waiting time findings are the same for both urgency classifications.
TABLE 18
Programme D Breast project A: data by treatment type and dates present

                                                  January to March 2000    January to March 2001
                                                  Number of patients (%)   Number of patients (%)
Data including referral and treatment dates
  surgery                                         49 (61.3)                62 (74.7)
  other or not known                              10 (12.5)                5 (6.0)
Data not including referral and treatment dates
  surgery                                         2 (2.5)                  2 (2.4)
  other or not known                              19 (23.8)                14 (16.9)
Total patients                                    80 (100.0)               83 (100.0)
TABLE 19
Programme D Breast project A: locally defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)

                            Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                        20.6           23.1           12.4
minimum                     7.0            10.0           42.9
first quartile              12.5           18.0           44.0
median                      16.0           21.0           31.3 *
third quartile              21.5           26.0           20.9
maximum                     120.0          62.0           -48.3
inter quartile range        9.0            8.0            -11.1
total number of patients¹   47             53             12.8
total number of days        966            1224           26.7

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 One patient from the first quarter of 2000 was excluded from the analysis because the referral date (August 1996) shown was so long ago that it may be erroneous.
TABLE 20
Programme D Breast project A: two week wait defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)

                           Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                       17.0           23.0           35.0
minimum                    11.0           10.0           -9.1
first quartile             15.0           18.0           20.0
median                     17.0           21.0           23.5 *
third quartile             19.0           24.8           30.3
maximum                    22.0           62.0           181.8
inter quartile range       4.0            6.8            68.8
total number of patients   9              42             366.7
total number of days       153            964            530.1

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
The project’s booking target was to book at least 95% of all patients at all stages of
their care pathway. The patient-level data on booked admissions supplied by the
project show that in both the quarters ending March 2000 and March 2001, all patients
were booked for the first specialist appointment and the first diagnostic test. Data on
whether or not patients were booked for the first definitive treatment were supplied
for 60% (48/80) of patients in the quarter ending March 2000 and 73% (61/83) of
patients in the quarter ending March 2001. In both quarters, all of the patients for
whom data on booking were supplied had been booked, and had surgery.
As noted above, this project supplied data using the agreed data specification. Hence,
the date of the first diagnostic test was not supplied and it is not possible to determine
the extent to which the first diagnostic test occurred on the same day as the first
specialist appointment.
The patient-level data contradict the run charts shown in figures 30 and 31, which
show that no patients were booked during the quarter ending March 2000 for the first
specialist appointment (one stop clinic) and admission. Both graphs show a change in
practice from no patients being booked before June 2000 to all patients being booked
from June 2000 and after. Feedback on the draft of this report noted that the booking
was initiated in April 2000, and first shown on the run charts in June because of
delays in collecting the data.
FIGURE 30
Programme D Breast project A; booking ‘run chart’ for the first specialist appointment
[Run chart titled ‘Breast Patient Flow - the % of patients who have a booked admission with a choice of date of their care pathway - Stage 1 (1-Stop Clinic)’, plotting monthly percentages from January 2000 to March 2001 against the Trust target: 0% in each month from January to May 2000 and 100% in each month from June 2000 to March 2001, with a note that a separate project commenced in August.]
Source: CSC programme 1 report to NPAT for February 2001
FIGURE 31
Programme D Breast project A; booking ‘run chart’ for admissions
[Run chart titled ‘Breast Patient Flow - the % of patients who have a booked admission with a choice of date of their care pathway - Stage 2 (Admission)’, plotting monthly percentages from January 2000 to March 2001 against the Trust target: 0% in each month from January to May 2000 and 100% in each month from June 2000 to March 2001, with a note that a separate project commenced in August.]
Source: CSC programme 1 report to NPAT for February 2001
CSC programme C Breast project
Waiting time target
The project’s report for March 2001 states that the waiting time target (for GP referral
to first definitive treatment) was 40 days for 95% of patients. Figure 32 shows the
‘run chart’ for the project reported to NPAT, which does not include data on the
number of patients. The project includes one Trust. In March 2001, this project’s team
self-assessment score and the CSC Planning Group score were both 4.5.
FIGURE 32 Programme C Breast project; waiting time ‘run chart’
[Run chart of days from GP referral to first definitive treatment (‘Access’) by month, January 2000 to March 2001. Annotations note consultant annual leave and the loss of three theatre sessions due to Christmas and a theatre audit.]
Source: March 2001 project report to NPAT
Analysis of patient-level data
Patient-level data were provided by the project using the draft data specification for
patients starting the first definitive treatment in each quarter. All the patients were
recorded as GP referrals. With the exception of three patients in the quarter ending
March 2000, all the patients in both quarters were defined as urgent using local
criteria. Table 21 shows the type of first definitive treatment provided in each quarter.
TABLE 21
Programme C Breast project: data by treatment type

                 January to March 2000    January to March 2001
                 Number of patients (%)   Number of patients (%)
Surgery          18 (85.7)                18 (66.7)
Adjuvant         2 (9.5)                  4 (14.8)
Chemotherapy     0 (0.0)                  4 (14.8)
Palliative DXT   0 (0.0)                  1 (3.7)
Follow-up        1 (4.8)                  0 (0.0)
Total patients   21 (100.0)               27 (100.0)
Table 22 shows that across all patients starting the first definitive treatment in each
quarter, the mean waiting time from referral to first definitive treatment fell from 44.9
days to 43.2 days and the median waiting time increased from 33 days to 41 days. In
terms of the project’s target, 71% (15/21) of patients waited 40 days or less in the
quarter ending March 2000, and this compares to 44% (12/27) of patients who waited
40 days or less in the quarter ending March 2001.
TABLE 22
Programme C Breast project: all patients; waiting time from referral to first definitive treatment (days)

                            Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                        44.9           43.2           -3.7
minimum                     14.0           4.0            -71.4
first quartile              27.0           28.0           3.7
median                      33.0           41.0           24.2
third quartile              41.0           49.0           19.5
maximum                     186.0          123.0          -33.9
inter quartile range        14.0           21.0           50.0
total number of patients¹   21             27             28.6
total number of days        942            1166           23.8

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 The year of referral for one patient in the first quarter was changed from 2000 to 1999 in order to avoid a negative waiting time and make the date consistent with the first diagnostic investigation date.
Focusing on the largest group of patients within the project, table 23 shows the
waiting time measures for urgent GP referrals treated with surgery. The mean waiting
time fell by 2.5 days and the median waiting time increased by 11 days.
TABLE 23
Programme C Breast project: locally defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)

                            Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                        50.8           48.3           -5.0
minimum                     21.0           25.0           19.0
first quartile              30.0           34.5           15.0
median                      34.0           45.0           32.4
third quartile              43.3           51.3           18.5
maximum                     186.0          117.0          -37.1
inter quartile range        13.3           16.8           26.4
total number of patients¹   16             18             12.5
total number of days        813            869            6.9

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 The year of referral for one patient in the first quarter was changed from 2000 to 1999 in order to avoid a negative waiting time and make the date consistent with the first diagnostic investigation date.
A subset of the locally defined urgent cases (table 24) were also recorded as urgent in
terms of the two week wait criteria. Table 24 shows that for these patients the mean
waiting time fell by 9.1 days and the median waiting time increased by 12 days.
TABLE 24
Programme C Breast project: two week wait defined urgent cases treated with surgery; waiting time from referral to first definitive treatment (days)

                            Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                        54.7           45.6           -16.6
minimum                     26.0           25.0           -3.8
first quartile              33.0           35.5           7.6
median                      33.0           45.0           36.4
third quartile              42.0           51.3           22.0
maximum                     186.0          69.0           -62.9
inter quartile range        9.0            15.8           75.0
total number of patients¹   9              14             55.6
total number of days        492            638            29.7

* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 The year of referral for one patient in the first quarter was changed from 2000 to 1999 in order to avoid a negative waiting time and make the date consistent with the first diagnostic investigation date.
Booking
The project’s booking target was that 95% of all patients with diagnosed cancer
should be booked at each step of the patient’s journey. The patient-level data supplied
by the project recorded that 38% (8/21) of patients in the quarter ending March 2000
and 74% (20/27) of patients in the quarter ending March 2001 had been booked for
the first specialist appointment. All patients in both quarters were recorded as booked
for the first diagnostic test. In the first quarter, 29% (6/21) of patients had the first
diagnostic test on the same day as the first specialist appointment, compared to 59%
(16/27) of patients in the second quarter. In the first quarter 67% (14/21) of patients
were reported to be booked for the first definitive treatment compared to all patients
in the quarter ending March 2001. The booking ‘run charts’ show similar trends.
CSC programme A Breast Project
Waiting time target
This project included four trusts. The CSC programme A report to NPAT for
February 2001 states that the waiting time target (for referral to first definitive
treatment) for the Breast project was 35 days. Figure 33 shows the ‘run chart’ for the
project reported to NPAT. No information is provided about the patients’ status
(urgency). This project’s team self-assessment score in March 2001 was 4.5 and the
CSC Planning Group score was 4.
FIGURE 33
Programme A Breast project: waiting time ‘run chart’
[Run chart of ‘Global Measure 1 - Average days to Treatment’ by month, January 2000 to March 2001, plotted against the target of less than 35 days, with the number of patients shown for each month. Annotations note that one patient was excluded for trying alternative therapies instead of having surgery, that one patient was excluded as treatment took 444 days, and that data for hospital 5 were incomplete or not included in some months.]
Source: March 2001 CSC programme report to NPAT
Analysis of patient-level data
The project supplied data using the draft data specification. Patients appear to have
been generally selected on the basis of the first specialist appointment date in each
quarter. Table 25 shows that this project includes data for patients receiving hormone
therapy.
TABLE 25
Programme A Breast project: data by treatment type and dates present

                                                  January to March 2000    January to March 2001
                                                  Number of patients (%)   Number of patients (%)
Data including referral and treatment dates
  Surgery                                         64 (50.0)                54 (45.0)
  Hormone therapy                                 54 (42.2)                45 (37.5)
  Chemotherapy                                    5 (3.9)                  4 (3.3)
  Other                                           0 (0.0)                  3 (2.5)
Data not including referral and treatment dates
  Surgery                                         2 (1.6)                  4 (3.3)
  Hormone therapy                                 3 (2.3)                  1 (0.8)
  Chemotherapy                                    0 (0.0)                  1 (0.8)
  Other                                           0 (0.0)                  8 (6.7)
Total patients                                    128 (100.0)              120 (100.0)
Table 26 shows that waiting times for urgent cases (defined by the two week wait
criteria) treated with surgery did not decrease between the two quarters. The mean
waiting time increased from 30.6 days to 32.5 days and the median reduced from 22.5
days to 21 days. In terms of the project’s target, 81% (34/42) of patients waited 35
days or less in the quarter ending March 2000, and this compares to 78% (25/32) of
patients who waited 35 days or less in the quarter ending March 2001.
TABLE 26
Programme A Breast project: two week wait defined urgent cases treated with
surgery; waiting time from referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 30.6           32.5            6.3
minimum                               7.0           10.0           42.9
first quartile                       16.0           15.0           -6.3
median                               22.5           21.0           -6.7
third quartile                       32.0           31.5           -1.6
maximum                             153.0          194.0           26.8
inter quartile range                 16.0           16.5            3.1
total number of patients               42             32          -23.8
total number of days                 1285           1041          -19.0
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
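The waiting time tables in this report all present the same quarter-on-quarter comparison: summary statistics for each quarter, the percentage change between quarters, and a Mann-Whitney U test on the difference in waiting times. A minimal sketch of that calculation is shown below; the waiting times are illustrative values, not the project's patient-level data.

    # Minimal sketch of the quarter-on-quarter comparison used in these tables.
    # The waiting times below are illustrative only, not the project's data.
    import numpy as np
    from scipy.stats import mannwhitneyu

    q1 = np.array([7, 16, 22, 23, 32, 45, 153], dtype=float)   # days, quarter ending March 2000
    q2 = np.array([10, 15, 21, 21, 31, 40, 194], dtype=float)  # days, quarter ending March 2001

    def summary(waits):
        """Summary statistics as reported in the project-level tables."""
        q25, q75 = np.percentile(waits, [25, 75])
        return {
            "mean": waits.mean(),
            "minimum": waits.min(),
            "first quartile": q25,
            "median": np.median(waits),
            "third quartile": q75,
            "inter quartile range": q75 - q25,
            "maximum": waits.max(),
            "total number of patients": float(len(waits)),
            "total number of days": waits.sum(),
        }

    s1, s2 = summary(q1), summary(q2)
    for key in s1:
        change = 100 * (s2[key] - s1[key]) / s1[key] if s1[key] else float("nan")
        print(f"{key:>26}: {s1[key]:8.1f} {s2[key]:8.1f} {change:8.1f}%")

    # Two-sided Mann-Whitney U test on the two quarters' waiting times; an asterisk
    # in the tables marks differences significant at p < 0.05.
    u, p = mannwhitneyu(q1, q2, alternative="two-sided")
    print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")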
Table 27 shows that waiting times for urgent cases (defined by the two week wait
criteria) treated with hormone therapy experienced a small (statistically insignificant)
reduction in mean waiting time and no change in median waiting time between the
two quarters. Ninety three percent (26/28) of patients waited 35 days or less in the
quarter ending March 2000, and this compares to 96% (25/26) of patients who waited
35 days or less in the quarter ending March 2001.
TABLE 27
Programme A Breast project: two week wait defined urgent cases treated with
hormone therapy; waiting time from referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 11.9           10.5          -11.8
minimum                               0.0            0.0            0.0
first quartile                        3.8            3.3          -13.3
median                                8.0            8.0            0.0
third quartile                       13.3           13.8            3.8
maximum                              69.0           59.0          -14.5
inter quartile range                  9.5           10.5           10.5
total number of patients               28             26           -7.1
total number of days                  332            272          -18.1
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
The project’s booking objective was to book more than 90% of patients for three key
stages in the patient's journey. Figure 34 shows the project's booking run chart for the
first specialist appointment.
FIGURE 34
Programme A, Breast project; booking ‘run chart’ for the first specialist appointment
[Run chart not reproduced. Title: 'Global Measure 2 - % patients with a booked admission - 1st specialist appointment'; target >90%; monthly percentages from January 2000 to March 2001, with the number of patients shown for each month. Chart annotations: breast surgeon left at hospital 1; breast physician left at hospital 1; clinics cut; Hospital 5 data not available yet; breast physician started but not fully trained; breast physician fully trained, waiting to employ new surgeon, trigger system put in to ensure the 14 day target is hit, and Hospital 5 data now included.]
Source: March 2001 CSC programme report to NPAT
The project’s booking run charts for the first diagnostic test and the first definitive
treatment show that all patients were booked each month from January 2000.
Analysis of patient-level data
Analysis of the patient-level data supplied by the project shows that 81% of patients
treated for breast cancer were recorded as booked for their first specialist appointment
in both quarters (100/128 in the first quarter of 2000 and 97/120 in the first quarter of
2001) (table 28). The proportion of booked patients referred by a GP increased from
81% (69/85) in the first quarter of 2000 to 84% (81/96) in the first quarter of 2001.
TABLE 28
Programme A Breast project: reported booking for first specialist appointment (FSA)
by treatment type and event timing
                                 January to March 2000         January to March 2001
                                 Number of patients   (%)      Number of patients   (%)
Surgery                                          58   (91)                      46   (79)
Hormone therapy                                  39   (74)                      40   (87)
Other                                             3   (50)                      11   (69)
All                                             100   (81)                      97   (81)
Data on booking not available                     5                              0
Table 29 shows that all patients were recorded as booked for their first diagnostic test
in both quarters. Very few patients (3% (4/128) in the first quarter and 4% (5/120) in
the second quarter) did not have the first diagnostic test on the same day as the first
specialist appointment (table 29). All patients were recorded as booked for their first
definitive treatment in both quarters. Table 29 shows that the majority of patients
treated with hormone therapy (79% (44/56) in the first quarter and 70% (32/46) in the
second quarter) started the first definitive treatment on the same day as the first
specialist appointment and the first diagnostic test. This finding illustrates the extent
to which hormone therapy can be started on the same day as the first specialist
appointment. By comparison, with the exception of three patients in the first quarter,
patients receiving surgery did not receive treatment on the same day as the first
specialist appointment.
TABLE 29
Programme A Breast project: reported booking for first diagnostic investigation and
first definitive treatment by treatment type and event timing
                                                     January to March 2000         January to March 2001
                                                     Number of patients   (%)      Number of patients   (%)
First diagnostic test (FDT) booked
 on same day as first specialist appointment (FSA)
  surgery                                                            63  (100)                      55  (100)
  hormone therapy                                                    53  (100)                      44  (100)
  other                                                               8  (100)                      16  (100)
 not on same day as FSA
  surgery                                                             1  (100)                       3  (100)
  hormone therapy                                                     3  (100)                       2  (100)
  other                                                               0   n/a                        0   n/a
 all                                                                128  (100)                     120  (100)
 data on booking not available                                        0                              0
First definitive treatment booked
 on same day as FSA and FDT
  surgery                                                             3  (100)                       0   n/a
  hormone therapy                                                    44  (100)                      32  (100)
  other                                                               0   n/a                        0   n/a
 not on same day as FSA and FDT
  surgery                                                            61  (100)                      54  (100)
  hormone therapy                                                    12  (100)                      14  (100)
  other                                                               5  (100)                       7  (100)
 all                                                                125  (100)                     107  (100)
 data on booking not available                                        3                              5
 data on dates missing                                                0                              8
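The booking and same-day breakdowns in tables 28 and 29 are straightforward cross-tabulations of the patient-level records. A minimal sketch is shown below; the layout and column names (fsa_booked, fsa_date, fdt_date, treat_date) are illustrative assumptions, not the CSC data specification.

    # Minimal sketch of the booking and same-day cross-tabulations behind tables 28
    # and 29. The records and column names below are illustrative assumptions.
    import pandas as pd

    patients = pd.DataFrame({
        "treatment":  ["surgery", "hormone therapy", "surgery", "other", "hormone therapy"],
        "fsa_booked": [True, True, False, True, True],  # booked for first specialist appointment?
        "fsa_date":   pd.to_datetime(["2000-01-10", "2000-01-12", "2000-02-01", "2000-02-03", "2000-03-01"]),
        "fdt_date":   pd.to_datetime(["2000-01-10", "2000-01-12", "2000-02-01", "2000-02-03", "2000-03-01"]),
        "treat_date": pd.to_datetime(["2000-02-15", "2000-01-12", "2000-03-07", "2000-02-20", "2000-03-01"]),
    })

    # Percentage recorded as booked for the first specialist appointment, by treatment type
    booked = patients.groupby("treatment")["fsa_booked"].agg(["sum", "count"])
    booked["percent"] = 100 * booked["sum"] / booked["count"]
    print(booked)

    # Did the first definitive treatment start on the same day as the first specialist
    # appointment and the first diagnostic test? (the split used in table 29)
    patients["same_day"] = (patients["treat_date"] == patients["fsa_date"]) & \
                           (patients["treat_date"] == patients["fdt_date"])
    print(pd.crosstab(patients["treatment"], patients["same_day"], margins=True))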
CSC programme B Ovarian project
Waiting time target
The CSC programme B report to NPAT for March 2001 states that the waiting time
target for referral to first definitive treatment was 35 days (expressed as 14 days from
GP referral to first specialist outpatient appointment and 21 days from to first
specialist outpatient appointment to first definitive treatment). Figure 35 shows the
project’s ‘run chart’. In March 2001, the project’s team self-assessment score and the
CSC Planning Group score was 5.
FIGURE 35
Programme B Ovarian project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'CHART 1a - Patient Flow: OVARIAN - Av number of days to definitive treatment'; monthly average days from January 2000 to March 2001 plotted against the target, with the number of patients sampled shown for each month (between 1 and 6). Chart annotations: patient unfit for surgery delaying definitive diagnosis; nurse-led RAPAC introduced; one patient chose to delay the initial clinic appointment until after the Christmas break.]
Source: March 2001 CSC programme report to NPAT
Analysis of patient-level data
The patient-level data did not conform to the draft or agreed data specification. No
data were supplied on the source or urgency of referral or treatment type. Table 30
shows that across all patients starting the first definitive treatment in each quarter, the
mean waiting time from referral to first definitive treatment increased from 19.4 days
to 32.5 days and the median waiting time increased from 14 days to 32 days. In terms
of the project’s target, 86% (6/7) of patients waited 35 days or less in the quarter
ending March 2000, and this compares to 67% (4/6) of patients in the quarter ending
March 2001.
TABLE 30
Programme B Ovarian project: all patients; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 19.4           32.5           67.3
minimum                               4.0           13.0          225.0
first quartile                        7.5           28.3          276.7
median                               14.0           32.0          128.6
third quartile                       30.0           39.5           31.7
maximum                              43.0           49.0           14.0
inter quartile range                 22.5           11.3          -50.0
total number of patients                7              6          -14.3
total number of days                  136            195           43.4
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
Patient-level data on booking were not supplied.
CSC programme D Ovarian project
Waiting time target
The CSC programme D report to NPAT for February 2001 states that the waiting time
target (for referral to first definitive treatment) was 35 days. The report noted that the
average waiting time for February 2001 was 35 days. Figure 36 shows the ‘run chart’
for the project reported to NPAT (rather than the total number of days noted in the
figure’s title, it is likely that the figure shows the mean number of days). No
information is provided about the number of patients, or their status (eg urgency). In
March 2001, this project’s team self-assessment score and the CSC Planning Group
score was 4.
FIGURE 36
Programme D Ovarian project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'Ovary. Access - The total number of days (including weekends) from date of referral to Specialist to date of first definitive treatment'; monthly values from January 2000 to March 2001 plotted against the Trust and UBHT targets. Chart annotations: a separate project commenced in June 2000; 1. improved booking arrangement at key stages; 2. additional Project Management support.]
Source: February 2001 CSC programme report to NPAT
Analysis of patient-level data
Patient-level data were provided by the project using the draft data specification for
patients starting the first definitive treatment in each quarter. Table 31 shows that in
the first quarter a minority (38%; 5/13) of patients were referred by a GP compared to
the majority in the second quarter (67%; 12/18).
All patients in both quarters were recorded as urgent in terms of the two week wait
criteria. With the exception of one patient in the first quarter who was treated with
palliative therapy, all the patients in that quarter had surgery for the first definitive treatment. In the
quarter ending March 2001, 61% (11/18) had surgery, 22% (4/18) had chemotherapy,
11% (2/18) had palliative therapy, and one patient (6%; 1/18) was classified as
‘watchful waiting’.
Table 32 shows that across all patients starting the first definitive treatment in each
quarter, the mean waiting time from referral to first definitive treatment increased
from 17.5 days to 26.5 days and the median waiting time increased from 13 days to
18.5 days. In terms of the project’s target, 100% (11/11) of patients waited 35 days or
less in the quarter ending March 2000, and this compares to 79% (11/14) of patients
who waited 35 days or less in the quarter ending March 2001.
TABLE 31
Programme D Ovarian project: data by referral type and dates present
                                                 January to March 2000         January to March 2001
                                                 Number of patients   (%)      Number of patients   (%)
Data including referral and treatment dates
  GP                                                              4  (30.8)                     10  (55.6)
  Consultant physician                                            1   (7.7)                      0   (0.0)
  Consultant surgeon                                              4  (30.8)                      2  (11.1)
  Emergency admission                                             2  (15.4)                      1   (5.6)
  Other or not known                                              1   (7.7)                      1   (5.6)
Data not including referral and treatment dates
  GP                                                              1   (7.7)                      2  (11.1)
  Consultant surgeon                                              0   (0.0)                      1   (5.6)
  Emergency admission                                             0   (0.0)                      1   (5.6)
Total patients                                                   13 (100.0)                     18 (100.0)
TABLE 32
Programme D Ovarian project: all patients1; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 17.5           26.5           51.8
minimum                               3.0            6.0          100.0
first quartile                        6.0           12.3          104.2
median                               13.0           18.5           42.3
third quartile                       29.0           33.5           15.5
maximum                              35.0           74.0          111.4
inter quartile range                 23.0           21.3           -7.6
total number of patients               11             14           27.3
total number of days                  192            371           93.2
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 The first quarter includes 10 cases treated with surgery and one case treated with palliative care.
One case is excluded because the recorded referral date is after the treatment date. The second
quarter includes 11 cases treated with surgery and three cases treated with chemotherapy.
Focusing on all patients treated with surgery, the mean waiting time from referral to
first definitive treatment increased from 16.8 days to 22.8 days and the median
waiting time increased from 11 days to 16 days (table 33).
TABLE 33
Programme D Ovarian project: patients treated with surgery; waiting time from
referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 16.8           22.8           35.8
minimum                               3.0            6.0          100.0
first quartile                        6.0           12.5          108.3
median                               11.0           16.0           45.5
third quartile                       30.5           26.0          -14.8
maximum                              35.0           60.0           71.4
inter quartile range                 24.5           13.5          -44.9
total number of patients               10             11           10.0
total number of days                  168            251           49.4
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
The project’s booking target was that all patients with diagnosed cancer should be
booked at each step of the patient’s journey.
The patient-level data supplied by the project recorded that 77% (10/13) of patients
were booked for the first specialist appointment in the quarter ending March 2000. In
the quarter ending March 2001, all 18 patients were recorded as booked for the first
specialist appointment.
In the quarter ending March 2000, one patient was recorded as booked for the first
diagnostic test, which took place on the same day as the first specialist appointment.
Three patients were recorded as not booked for the first diagnostic test (table 34). The
majority of patients in both quarters were recorded as ‘pre referral’ which indicates
that the first diagnostic test took place before the first specialist appointment. One
third (6/18) of the patients were recorded as booked in the quarter ending March 2001.
Table 34 shows that the proportion of patients booked for the first definitive treatment
decreased between the quarters.
TABLE 34
Programme D Ovarian project: booking data for first diagnostic test and first
definitive treatment
                                  January to March 2000         January to March 2001
                                  Number of patients   (%)      Number of patients   (%)
First diagnostic test
  booked                                           1   (8.3)                     6  (33.3)
  not booked                                       3  (25.0)                     2  (11.1)
  'pre referral'                                   8  (66.7)                    10  (55.6)
  all                                             12 (100.0)                    18 (100.0)
  missing data                                     1                             0
First definitive treatment
  booked                                          12  (92.3)                    10  (66.7)
  not booked                                       1   (7.7)                     5  (33.3)
  all                                             13 (100.0)                    15 (100.0)
  missing data                                     0                             3
The booking ‘run charts’ supplied by the project to NPAT contradict the patient-level
data. The ‘run chart’ reporting data to February 2001 for first specialist appointment
shows that no patients were booked before July 2000 (and all patients thereafter are
shown as booked). The ‘run charts’ for first diagnostic test and first definitive
treatment show that no patients were booked before August 2000, and that all patients
thereafter were booked.
CSC programme A Ovarian project
Waiting time target
The CSC programme A report to NPAT for February 2001 states that the waiting time
target (for GP referral to first definitive treatment) for its Ovarian project was 35 days.
Figure 37 shows the ‘run chart’ for the project reported to NPAT (rather than the total
number of days noted in the figure’s title, it is likely that the figure shows the mean
number of days). Data are provided on the number of patients shown in figure 37, but
not their status (eg urgency). In March 2001, this project’s team self-assessment score
and the CSC Planning Group score was 4.
FIGURE 37
Programme A Ovarian project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'Global Measure 1 - Average Days to Treatment'; target <35 days; monthly average days to treatment from January 2000 to March 2001, with the number of patients shown for each month (between 2 and 11).]
Source: March 2001 CSC programme report to NPAT
Although the waiting time target is recorded as the waiting time from GP referral to
first definitive treatment, figure 37 appears to include more than GP referrals only.
This is because while two patients are shown in figure 37 for March 2000, the patient-level data supplied by the project include no patients referred by a GP during the
quarter ending March 2000. The patient-level data supplied by the project included 13
patients referred by a GP during the quarter ending March 2001 (mean waiting time
27.8 days).
Analysis of patient-level data
Data for five patients at Trust A were supplied for the quarter ending March 2000.
Data for 25 patients at Trust A and eight patients at Trust B were supplied for the
quarter ending March 2001. All the patients had surgery as the first definitive
treatment. Table 35 summarises the waiting time experience from referral to surgery
for the patients that had surgery within each quarter. Table 35 shows that the mean
waiting time was almost unchanged at 33 days, and the median waiting time increased
from 20 days to 28 days. In terms of the project’s target, 75% (3/4) of patients waited
35 days or less in the quarter ending March 2000, and this compares to 73% (24/33)
of patients who waited 35 days or less in the quarter ending March 2001.
Of the five patients during the first quarter of 2000, one was recorded as urgent in
terms of the two week wait criteria, two were not urgent and the urgency of two
patients was not known (no data on any locally defined urgency were supplied). In the
first quarter of 2001, 58% (19/33) of patients were recorded as urgent using the two
week wait criteria, 12% (4/33) were not urgent, 6% (2/33) were emergency
admissions, and type of admission was not known for 24% (8/33). In the first quarter
of 2000, the single urgent patient was referred by a consultant physician. In order to
compare like with like, table 36 compares waiting times from referral to first
definitive treatment for urgent patients referred by a consultant physician at Trust A in
each quarter.
NPAT (2001b:3) reported that the ovarian project at Trust B had introduced a ‘rapid
access clinic’ in which tests and their results were provided on the same day as the
specialist consultation. As a result of this innovation, “average time from a patient
being referred to having their surgery is now 3 weeks”. As noted above, although no
data were provided for Trust B for the quarter ending March 2000, data for eight
patients at Trust B were supplied for the quarter ending March 2001. The mean
waiting time from referral to first treatment was 32.4 days (median 28.5 days,
minimum 5 days, maximum 76 days). This finding suggests that the reported
reduction in mean waiting time was not sustained.
TABLE 35
Programme A Ovarian project: all patients; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 33.5           33.2           -1.0
minimum                              15.0            5.0          -66.7
first quartile                       18.0           23.0           27.8
median                               20.5           28.0           36.6
third quartile                       36.0           36.0            0.0
maximum                              78.0           90.0           15.4
inter quartile range                 18.0           13.0          -27.8
total number of patients^1              4             33          725.0
total number of days                  134           1094          716.4
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 The referral date was missing for one patient in the quarter ending March 2000. Three patients included in the quarter ending March 2000 and 11 patients included in the quarter ending March 2001 had the first definitive treatment before the quarter started. The data for these patients have been included in the analysis in order to maximise the number of patients, although the basis on which they were supplied is indeterminate.
TABLE 36
Programme A Ovarian project (Trust A): two week wait defined urgent cases referred
by a consultant physician and treated with surgery; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 15.0           31.4          109.6
minimum                              15.0           14.0           -6.7
first quartile                       15.0           23.0           53.3
median                               15.0           35.0          133.3
third quartile                       15.0           36.0          140.0
maximum                              15.0           59.0          293.3
inter quartile range                  0.0           13.0            n/a
total number of patients                1              9          800.0
total number of days                   15            283         1786.7
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
The project’s objective was to book more than 90% of patients for three key stages in
the patient's journey. Figure 38 shows the project's booking run chart for the first
specialist appointment. The patient-level data supplied by the project recorded that it
was not known whether the five patients included for the quarter ending March 2000
had been booked for the first specialist appointment. Eighty eight percent (29/33) of
patients included for the quarter ending March 2001 were recorded as booked for the
first specialist appointment. All patients in both quarters were recorded as being
booked for both the first diagnostic investigation and first definitive treatment (see
figures 39 and 40). Hence, the patient-level data and the run chart for booking the first
diagnostic test are inconsistent. The date of the first diagnostic investigation was not
recorded for the five patients included for the first quarter of 2000. Only 12% (4/33)
of patients included for the quarter ending March 2001 did not have the first
diagnostic investigation on the same day as the first specialist appointment.
FIGURE 38
Programme A Ovarian project; booking ‘run chart’ for the first specialist appointment
[Run chart not reproduced. Title: 'Global Measure 2 - % patients with a booked admission - 1st specialist appointment'; target >90%; monthly percentages from January 2000 to March 2001, with the number of patients shown for each month. Chart annotations: three patients initially referred to the wrong department; eight out of nine patients turned around within 48 hours (six by phone and two by letter); twelve out of thirteen patients turned around within 48 hours (ten by phone call, one in person and one by letter).]
Source: March 2001 CSC programme report to NPAT
FIGURE 39
Programme A Ovarian project; booking ‘run chart’ for the first diagnostic test
[Run chart not reproduced. Title: 'Global Measure 2 - % patients with a booked admission - 1st diagnostic test'; target >90%; monthly percentages from January 2000 to March 2001, with the number of patients shown for each month. Chart annotation: three patients initially referred to the wrong department.]
Source: March 2001 CSC programme report to NPAT
FIGURE 40
Programme A Ovarian project; booking ‘run chart’ for the first definitive treatment
[Run chart not reproduced. Title: 'Global Measure 2 - % patients with a booked admission - 1st definitive treatment'; target >90%; monthly percentages from January 2000 to March 2001, with the number of patients shown for each month.]
Source: March 2001 CSC programme report to NPAT
CSC programme B Colorectal project
Waiting time target
The project’s report for March 2001 states that the waiting time target (for referral to
first definitive treatment) was 56 days. Figure 41 shows the ‘run chart’ for the project
reported to NPAT. In March 2001, the project’s team self-assessment score was 4 and
the CSC Planning Group score was 4.5.
FIGURE 41
Programme B Colorectal project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'CHART 1a - Patient Flow: Merseyside & Cheshire COLORECTAL - Av number of days from referral to 1st treatment'; monthly average days from January 2000 to March 2001 plotted against the target, with the number of patients sampled shown for each month (between 2 and 19). Chart annotations: three patients referred by GP to general physician; two patients referred by GP to general physician; two patients referred as routine, prior to clinics being recategorised.]
Source: March 2001 CSC programme report to NPAT
Analysis of patient-level data
The patient-level data did not conform to the draft or agreed data specification. Data
on the source of referral were incomplete and data on urgency and treatment type
were missing. Table 37 shows that across all patients starting the first definitive
treatment in each quarter, the mean waiting time from referral to first definitive
treatment fell from 76.5 days to 45 days and the median waiting time fell from 55
days to 47 days. In terms of the project’s target, 54% (13/24) of patients waited less
than 56 days in the quarter ending March 2000, and this compares to 76% (16/21) of
patients in the quarter ending March 2001.
TABLE 37
Programme B Colorectal project: all patients; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 76.5           45.0          -41.2
minimum                               8.0           19.0          137.5
first quartile                       32.5           28.0          -13.8
median                               55.0           47.0          -14.5
third quartile                       92.5           56.0          -39.5
maximum                             323.0          100.0          -69.0
inter quartile range                 60.0           28.0          -53.3
total number of patients               24             21          -12.5
total number of days                 1836            945          -48.5
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
Patient-level data on booking were not supplied.
CSC programme D Colorectal project
Waiting time target
The project’s report for February 2001 states that the waiting time target (for referral
to first definitive treatment) was 70 days. Figure 42 shows the ‘run chart’ for the
project reported to NPAT, which does not include data on the number of patients. The
project includes one Trust. This project’s team self-assessment score in March 2001
was 4 and the CSC Planning Group score was 3.5.
FIGURE 42
Programme D Colorectal project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'Colorectal. Access - The total no of days from date of referral to the specialist to the date of 1st definitive treatment'; monthly values from January 2000 to March 2001 plotted against the Trust target, with a separate series for one surgeon. Chart annotation: development and implementation of colorectal referral form. Notes: 1. prebooked radiotherapy; 2. project commenced; 3. implementation of colorectal referral form.]
Source: February 2001 CSC programme report to NPAT
Analysis of patient-level data
Patient-level data were provided by the project using the draft data specification for
patients starting the first definitive treatment in each quarter. In the first quarter, 88%
(22/25) of the patients were referred by a GP, and the other three cases were
emergency admissions. In the second quarter, 62% (16/26) of the cases were GP
referrals, 27% (7/26) were emergency admissions and 12% (3/26) of cases attended
A&E. The available data on urgency were incomplete. With the exception of one
patient in the first quarter who was treated with radiation therapy, all the patients had
surgery for the first definitive treatment.
Table 38 shows that across all patients starting the first definitive treatment in each
quarter, the mean waiting time from referral to first definitive treatment fell from
103.7 days to 83.1 days and the median waiting time fell from 104 days to 73.5 days.
In terms of the project’s target, 40% (10/25) of patients waited less than 70 days in the
quarter ending March 2000, and this compares to 46% (12/26) of patients who waited
less than 70 days in the quarter ending March 2001.
TABLE 38
Programme D Colorectal project: all patients; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                103.7           83.1          -19.9
minimum                               0.0            0.0            0.0
first quartile                       59.0           41.0          -30.5
median                              104.0           73.5          -29.3
third quartile                      125.0          106.8          -14.6
maximum                             355.0          212.0          -40.3
inter quartile range                 66.0           65.8           -0.4
total number of patients               25             26            4.0
total number of days                 2593           2160          -16.7
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Table 39 compares waiting times for the most common group of patients who were
referred by a GP and were treated with surgery. For these cases the mean waiting time
from referral to first definitive treatment fell from 119.4 days to 101.7 days and the
median waiting time fell from 115 days to 80.5 days. In terms of the project’s target,
29% (6/21) of patients waited less than 70 days in the quarter ending March 2000, and
this compares to 31% (5/16) of patients who waited less than 70 days in the quarter
ending March 2001.
TABLE 39
Programme D Colorectal project: all GP referrals treated with surgery; waiting time
from referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                119.4          101.7          -14.9
minimum                              40.0           38.0           -5.0
first quartile                       69.0           60.5          -12.3
median                              115.0           80.5          -30.0
third quartile                      128.0          123.0           -3.9
maximum                             355.0          212.0          -40.3
inter quartile range                 59.0           62.5            5.9
total number of patients               21             16          -23.8
total number of days                 2508           1627          -35.1
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
The project’s booking target was that 95% of all patients with diagnosed cancer
should be booked at each step of the patient’s journey.
The patient-level data supplied by the project recorded that booking data were not
available for the quarter ending March 2000. In the quarter ending March 2001,
booking data were reported for 77% (20/26) of patients. All 20 patients were reported
as booked for the first specialist appointment, first diagnostic test and first definitive
treatment. All 20 patients had the first diagnostic test on the same day as the first
specialist appointment.
The booking ‘run charts’ supplied by the project to NPAT contradict the patient-level
data. The ‘run charts’ reporting data to February 2001 for first specialist appointment
and first diagnostic test show that no patients were booked between January 2000 and
February 2001. Figure 43 suggests that patients were booked for surgery by one
surgeon only.
FIGURE 43
Programme D Colorectal project; booking ‘run chart’ for first definitive treatment
[Run chart not reproduced. Title: 'Patient Flow - Colorectal - The % of patients with a booked admission for three key stages in the patient's journey - Stage 3 (1st Definitive Treatment)'; monthly percentages from January 2000 to March 2001 plotted against the Trust target, with a separate series for one surgeon. Chart annotation: holiday and diary not located in clinic. Notes: 1. prebooked radiotherapy; 2. project commenced; 3. choice offered for home bowel preparation - May 2000.]
Source: February 2001 CSC programme report to NPAT
CSC programme A Colorectal project
Waiting time target
The project included three Trusts. The CSC programme A report to NPAT for
February 2001 states that the waiting time target (for GP referral to first definitive
treatment) for its Colorectal project was 50 days. Figure 44 shows the ‘run chart’ for
the project reported to NPAT. Data are provided on the number of patients shown in
figure 44, which also identifies ‘direct referrals’ but not urgency. This project’s team
self-assessment score in March 2001 was 4 and the CSC Planning Group score was
3.5.
FIGURE 44
Programme A Colorectal project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'Global Measure 1 - Average days to treatment'; target <50 days; monthly average days from January 2000 to March 2001 for all referrals and for direct referrals, with the number of patients shown for each month. Chart annotations: one patient only, referred for chemo-radiotherapy; one polyp patient and two initial medical referrals; one patient referred via gynaecology; two physician referrals, one very elderly patient; two polyp patients; incomplete data; only two hospitals' data.]
Source: March 2001 CSC programme report to NPAT
Analysis of patient-level data
Data were provided for patients starting the first definitive treatment in each quarter
(106 patients in the quarter ending March 2000 and 30 patients in the quarter ending
March 2001). The data include four patients who started the first definitive treatment
in March 2001, compared to 39 patients in March 2000. This difference in activity
suggests that the data for the quarter ending March 2001 may be incomplete (and
figure 44 also indicates missing data).
In both quarters, over 85% of patients had surgery for the first definitive treatment
(table 40). The source of the referral was not known for 43% (46/106) of patients in
the quarter ending March 2000, and 10% (3/30) of patients in the quarter ending
March 2001.
TABLE 40
Programme A Colorectal project: data by first definitive treatment and referral type
                             January to March 2000         January to March 2001
                             Number of patients   (%)      Number of patients   (%)
Surgery
  GP referral                                 31  (29.2)                     16  (53.3)
  Emergency admission                         19  (17.9)                      7  (23.3)
  Other                                        2   (1.9)                      0   (0.0)
  Not known                                   41  (38.7)                      3  (10.0)
Other
  GP referral                                  5   (4.7)                      4  (13.3)
  Emergency admission                          3   (2.8)                      0   (0.0)
  Not known                                    5   (4.7)                      0   (0.0)
Total patients                               106 (100.0)                     30 (100.0)
Table 40 shows that the most common identified source of referral was GP referral,
followed by emergency admission, in both quarters. In the quarter ending March
2000, apart from the 21% (22/106) of patients recorded as emergency admissions, two
patients were recorded as urgent (defined using the two week wait criteria) and the
urgency of the other patients was not known. In the quarter ending March 2001,
urgency was recorded as follows: emergency admission 23% (7/30), urgent 23%
(7/30), and not known 53% (16/30). Table 41 summarises the waiting time experience
from referral to first definitive treatment for all patients in each quarter. Table 41
shows that the mean waiting time decreased from 85 days to 81 days, and the median
waiting time increased from 62 days to 69 days.
TABLE 41
Programme A Colorectal project: all patients; waiting time from referral to first
definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 85.4           81.4           -4.6
minimum                               0.0            1.0            n/a
first quartile                       26.0           38.8           49.0
median                               62.0           69.0           11.3
third quartile                      116.0          107.0           -7.8
maximum                             544.0          274.0          -49.6
inter quartile range                 90.0           68.3          -24.2
total number of patients^1            105             30          -71.4
total number of days                 8967           2443          -72.8
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 One case during the quarter ending March 2000 was excluded from the analysis because the waiting time of 1,029 days from
referral by a consultant surgeon to first outpatient appointment suggests that the referral date may be erroneous.
Thirty-nine percent of the cases in the quarter ending March 2000 met the local
waiting time target of less than 50 days to first definitive treatment compared to 33%
of cases in the quarter ending March 2001.
Table 42 summarises the waiting time experience from referral to first definitive
treatment for the largest group of patients; those referred by GP and treated with
surgery.
TABLE 42
Programme A Colorectal project: GP referrals treated with surgery; waiting time
from referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                118.9           96.6          -18.8
minimum                               8.0            2.0          -75.0
first quartile                       48.5           47.0           -3.1
median                               76.0           79.0            3.9
third quartile                      140.5          134.8           -4.1
maximum                             544.0          274.0          -49.6
inter quartile range                 92.0           87.8           -4.6
total number of patients               31             16          -48.4
total number of days                 3687           1545          -58.1
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Table 42 shows that the mean waiting time decreased from 119 days to 97 days, and
the median waiting time increased from 76 days to 79 days. It is desirable to compare
waiting times for patients classified as urgent. As noted above, however, only two
patients in the first quarter of 2000 were recorded as urgent defined using the two
week wait criteria. Both these cases were GP referrals to the same Trust and were
treated with surgery. These patients waited 50 days and 117 days from referral to first
definitive treatment. During the first quarter of 2001, only two patients fulfilled the
same criteria. These patients waited 27 days and 54 days.
In terms of the project’s target, 26% (8/31) of patients waited less than 50 days in the
quarter ending March 2000, and this compares to 31% (5/16) of patients who waited
less than 50 days in the quarter ending March 2001. Table 42 suggests little change in
waiting experience between the quarters, and that considerable reductions are required
in order to achieve the project’s 50 day target.
Booking
The project’s booking target was that more than 80% of all patients with diagnosed
cancer should be booked at each step of the patient’s journey. The patient-level data
supplied by the project recorded that no patients in either quarter had been booked for
the first specialist appointment. In contrast, all patients in both quarters were recorded
as booked for the first diagnostic test and the first definitive treatment. In the first
quarter of 2000, 51% (49/96) of patients (for whom the first specialist appointment
and first diagnostic test dates were recorded) had the first diagnostic test on the same
day as the first specialist appointment, compared to 46% (13/28) of patients in the first
quarter of 2001.
The patient-level data conflict with the booking ‘run charts’ supplied by the project to
NPAT (figures 45 and 46).
FIGURE 45
Programme A Colorectal project; booking ‘run chart’ for first diagnostic test
[Run chart not reproduced. Title: 'Global Measure 2 - % patients with a booked admission - 1st diagnostic test'; target >80%; monthly percentages from January 2000 to March 2001 for all admissions and for admissions minus emergencies, with the number of patients shown for each month. Chart annotation: only two hospitals' data.]
Source: March 2001 CSC programme report to NPAT
FIGURE 46
Programme A Colorectal project; booking ‘run chart’ for first definitive treatment
[Run chart not reproduced. Title: 'Global Measure 2 - % patients with a booked admission - 1st definitive treatment'; target >80%; monthly percentages from January 2000 to March 2001 for all admissions and for admissions minus emergencies, with the number of patients shown for each month. Chart annotation: only two hospitals' data.]
Source: March 2001 CSC programme report to NPAT
CSC programme A Lung project
Waiting time target
The CSC programme A Lung project included three Trusts. The project’s report to
NPAT for February 2001 states that the waiting time target for referral to first
definitive treatment was less than 42 days. Figure 47 shows the ‘run chart’ for the
project reported to NPAT, and indicates the number of patients, but not their status
(eg urgency). Figure 47 indicates that data are missing from September 2000. In
March 2001, this project’s team self-assessment score and the CSC Planning Group
score was 3.5.
FIGURE 47
CSC programme A Lung project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'Measure 1 - Average days to Treatment (Patient Flow)'; target <42 days; monthly average days to treatment from January 2000 to March 2001, with the number of patients shown for each month (between 2 and 18). Chart annotations: only one hospital's data and data incomplete; data incomplete.]
Source: February 2001 project report to NPAT
Analysis of patient-level data
Patient-level data were provided by the project using the draft data specification for
patients starting the first definitive treatment in each quarter. The urgency of the cases
is summarised in table 43.
TABLE 43
Programme A Lung project: type of referral
                                  January to March 2000         January to March 2001
                                  Number of patients   (%)      Number of patients   (%)
urgent (two week wait criteria)                   16   (67)                     10   (83)
emergency                                          2    (8)                      2   (17)
non-urgent                                         3   (13)                      0    (0)
not known                                          3   (13)                      0    (0)
Total                                             24  (100)                     12  (100)
Table 44 shows that across all patients starting the first definitive treatment in each
quarter, the mean waiting time from referral to first definitive treatment fell from 35
days to 29.8 days and the median waiting time fell from 37 days to 27 days. In terms
of the project’s target, 63% (15/24) of patients waited less than 42 days in the quarter
ending March 2000, and this compares to 83% (10/12) of patients who waited less
than 42 days in the quarter ending March 2001.
TABLE 44
Programme A Lung project: all patients; waiting time from referral to first definitive
treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 35.0           29.8          -15.1
minimum                               9.0           14.0           55.6
first quartile                       19.8           22.5           13.9
median                               37.0           27.0          -27.0
third quartile                       45.0           37.3          -17.2
maximum                              90.0           54.0          -40.0
inter quartile range                 25.3           14.8          -41.6
total number of patients               24             12          -50.0
total number of days                  841            357          -57.6
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
The most common group of patients was urgent referrals treated with chemotherapy.
Table 45 shows the change in waiting time measures for this group.
TABLE 45
Programme A Lung project: urgent referrals treated with chemotherapy; waiting time
from referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 32.4           26.7          -17.6
minimum                               9.0           14.0           55.6
first quartile                       12.5           16.5           32.0
median                               26.0           22.0          -15.4
third quartile                       43.0           30.5          -29.1
maximum                              90.0           54.0          -40.0
inter quartile range                 30.5           14.0          -54.1
total number of patients               11              6          -45.5
total number of days                  356            160          -55.1
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Booking
Patient-level data on booking were not supplied.
CSC programme E Lung project
The CSC programme E Lung project included three hospitals. The patient-level data
supplied by the project included patients starting the first definitive treatment between
January 2000 and March 2001. These data show 67 cases during 2000. Analysis
presented in the CSC Lung Cancer Improvement Guide (CSC, 2001a: 8) suggests that
the patient-level data provided by the project cover one of the three hospitals included
in the project. The CSC Lung Cancer Improvement Guide states that the annual
number of newly diagnosed lung cancers across the three hospitals is 430.
Waiting time target
The project’s report to NPAT for February 2001 states that the waiting time target for
referral to first definitive treatment was less than 56 days. Figure 48 shows the ‘run
chart’ for the project reported to NPAT. Figure 48 shows the number of patients, but
not their status (eg urgency). In March 2001, this project’s team self-assessment score
and the CSC Planning Group score was 4.
FIGURE 48
Programme E Lung project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'Average Days to Treatment'; monthly average days to treatment plotted against the 56 day target, together with the highest and lowest individual waits each month. Chart annotations: extended days for CT; direct referral from Radiology. The data table accompanying the chart:

Month     Average days   Patients   Target   High   Low
Jan-00          94.00        2        56      109    79
Feb-00          38.25        4        56       69    22
Mar-00          66.50       10        56      165     2
Apr-00          50.20        5        56      100    30
May-00          60.50        8        56      104     8
Jun-00          24.50        4        56       45     8
Jul-00          62.50        6        56       86    20
Aug-00          29.50        2        56       41    18
Sep-00          72.86        7        56      182    14
Oct-00          52.89        9        56       96    18
Nov-00          57.40        5        56      131    18
Dec-00          33.80        5        56       99     7
Jan-01          39.20        5        56       52    29
Feb-01          64.50        2        56       65    64
Mar-01              -        0        56        -     -]
Source: February 2001 project report to NPAT
Analysis of patient-level data
Patient-level data were provided by the project using the draft data specification for
patients starting the first definitive treatment in each quarter. The data did not include
the source of referral or the urgency of the cases.
Table 46 shows the range of treatment options used in each quarter. Table 47 shows
that across all patients starting the first definitive treatment in each quarter, the mean
waiting time from referral to first definitive treatment fell from 62.9 days to 44.8 days
and the median waiting time fell from 49 days to 37 days. In terms of the project’s
target, 56% (9/16) of patients waited less than 56 days in the quarter ending March
2000, and this compares to 73% (8/11) of patients who waited less than 56 days in the
quarter ending March 2001.
TABLE 46
Programme E Lung project: type of first definitive treatment
                         January to March 2000         January to March 2001
                         Number of patients   (%)      Number of patients   (%)
Pall RT                                   8  (50.0)                     7  (63.6)
Chemotherapy                              3  (18.8)                     2  (18.2)
Surgery                                   3  (18.8)                     0   (0.0)
Pall chemotherapy                         1   (6.3)                     0   (0.0)
'wait and watch'                          0   (0.0)                     2  (18.2)
No treatment                              1   (6.3)                     0   (0.0)
Total                                    16 (100.0)                    11 (100.0)
TABLE 47
Programme E Lung project: all patients; waiting time from referral to first definitive
treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 62.9           44.8          -28.7
minimum                               2.0           26.0         1200.0
first quartile                       31.5           32.0            1.6
median                               49.0           37.0          -24.5
third quartile                       80.8           55.0          -31.9
maximum                             165.0           85.0          -48.5
inter quartile range                 49.3           23.0          -53.3
total number of patients               16             11          -31.3
total number of days                 1006            493          -51.0
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
The CSC Lung Cancer Improvement Guide describes the impact of introducing direct
referral by the radiologist to the chest physician following a chest x-ray suggestive of
cancer.
“Before starting the scheme, the average wait from GP referral to first outpatient appointment
was 24 days [January 2000]. This meant that patients were waiting an unacceptably long time
for an appointment and the trust was not complying with the national two-week target. …
After the introduction of direct referral, the wait was reduced to an average of ten days by the
following month [February 2000]. The average wait has been less than 14 days for the last
eight months [June 2000 to January 2001]” (CSC, 2001a: 7-8. The dates in square brackets
match those shown in an accompanying ‘run chart’)
The reduction in waiting time appears to be claimed on the basis of one month’s
activity: the run chart (CSC, 2001a: 8) shows an average wait of 24 days for January
2000. However, the run chart also shows that only two patients were included in this
month. The patient-level data supplied by the project are consistent with the run chart
and show that in January 2000 one patient waited nine days from referral to first
specialist appointment and another waited 39 days. Hence, it appears that the
comparatively high baseline position reflects the experience of only one patient.
Furthermore, while the patient-level data confirm that the mean waiting time between
June 2000 and January 2001 was less than 14 days (7.3 days), 10% (4/41) of patients
waited more than 14 days during this period. When account is taken of the patient-level data, it would appear prudent to take a more cautious view of the change in waiting time to first specialist appointment.
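The check applied here, comparing a claimed monthly average against the underlying patient-level distribution, can be reproduced with a few lines of analysis. The sketch below uses illustrative records and assumed column names (referral_date, fsa_date), not the project's data.

    # Minimal sketch: monthly mean wait to first specialist appointment alongside the
    # share of patients waiting more than 14 days. Records and column names are
    # illustrative assumptions, not the project's patient-level data.
    import pandas as pd

    records = pd.DataFrame({
        "referral_date": pd.to_datetime(["2000-01-05", "2000-01-20", "2000-06-02", "2000-07-14", "2000-09-01"]),
        "fsa_date":      pd.to_datetime(["2000-01-14", "2000-02-28", "2000-06-08", "2000-07-20", "2000-09-18"]),
    })
    records["wait_days"] = (records["fsa_date"] - records["referral_date"]).dt.days

    # Monthly mean and patient count: a low mean based on one or two patients is a weak baseline
    monthly = records.set_index("referral_date")["wait_days"].resample("M").agg(["mean", "count"])
    print(monthly)

    # Over a chosen period, report the mean alongside the share exceeding the two week target
    period = records[(records["referral_date"] >= "2000-06-01") & (records["referral_date"] <= "2001-01-31")]
    over_14 = 100 * (period["wait_days"] > 14).mean()
    print(f"mean wait {period['wait_days'].mean():.1f} days; {over_14:.0f}% waited more than 14 days")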
Booking
Patient-level data on booking were not supplied.
CSC programme B Lung project
Waiting time target
The project’s report to NPAT for March 2001 states that the waiting time target for
referral to first definitive treatment was less than 56 days on average. Figure 49 shows
the ‘run chart’ for the project reported to NPAT. Figure 49 shows the number of
patients, but not their status (eg urgency). The project’s team self-assessment score
and the CSC Planning Group score in March 2001 was 4.5.
FIGURE 49
Programme B Lung project; waiting time ‘run chart’
[Run chart not reproduced. Title: 'CHART 1 - Patient Flow: Merseyside & Cheshire - LUNG - Av number of days from referral to treatment'; target = 8 weeks (56 days); monthly average days from January 2000 to March 2001, with the number of patients sampled shown for each month (between 5 and 22). Chart annotation: patient waiting times appear to increase; this is actually due to the higher proportion of patients having surgery, who wait longer, making the average wait longer.]
Source: March 2001 project report to NPAT
Analysis of patient-level data
Patient-level data did not conform to the draft or agreed data specification. No data
were supplied on the source or urgency of referral. Table 48 shows that across all
patients starting the first definitive treatment in each quarter, the mean waiting time
from referral to first definitive treatment increased from 55.1 days to 62.8 days and
the median waiting time increased from 54 days to 58 days. The project was unusual
in expressing its waiting time target in terms of an average waiting time (56 days). By
this measure the project met its target in the quarter ending March 2000, and did not
meet the target in the quarter ending March 2001 (table 48).
The most common treatment was chemotherapy (table 49). Comparison of waiting
time measures for chemotherapy (table 49), radiotherapy (table 50) and surgery (table
51) indicates that the deterioration of chemotherapy waiting times was the dominant
influence on overall performance.
TABLE 48
Programme B Lung project: all patients; waiting time from referral to first definitive
treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 55.1           62.8           14.0
minimum                               8.0           14.0           75.0
first quartile                       32.0           37.0           15.6
median                               54.0           58.0            7.4
third quartile                       73.5           82.0           11.6
maximum                             149.0          177.0           18.8
inter quartile range                 41.5           45.0            8.4
total number of patients               43             31          -27.9
total number of days                 2369           1947          -17.8
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
TABLE 49
Programme B Lung project: cases treated with chemotherapy; waiting time from
referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 51.8           70.4           36.0
minimum                               8.0           15.0           87.5
first quartile                       31.0           48.0           54.8
median                               47.0           63.0           34.0
third quartile                       64.8           87.0           34.4
maximum                             114.0          177.0           55.3
inter quartile range                 33.8           39.0           15.6
total number of patients               20             13          -35.0
total number of days                 1035            915          -11.6
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
TABLE 50
Programme B Lung project: cases treated with radiotherapy; waiting time from
referral to first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 59.8           55.1           -7.9
minimum                               9.0           14.0           55.6
first quartile                       34.5           32.8           -5.1
median                               56.5           51.5           -8.8 *
third quartile                       78.0           74.0           -5.1
maximum                             149.0          117.0          -21.5
inter quartile range                 43.5           41.3           -5.2
total number of patients               12             16           33.3
total number of days                  718            882           22.8
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
TABLE 51
Programme B Lung project: cases treated with surgery; waiting time from referral to
first definitive treatment (days)
                             Jan-Mar 2000   Jan-Mar 2001   % change between periods
mean                                 56.0           75.0           33.9
minimum                              14.0           28.0          100.0
first quartile                       41.5           51.5           24.1
median                               57.0           75.0           31.6 *
third quartile                       77.5           98.5           27.1
maximum                              89.0          122.0           37.1
inter quartile range                 36.0           47.0           30.6
total number of patients               11              2          -81.8
total number of days                  616            150          -75.6
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
Appendix 6
Secondary analysis of waiting times to first definitive treatment
TABLE 52   Summary project-level secondary analysis of waiting times from referral to first definitive treatment^1

Project (CSC Planning Group's      January to March 2000             January to March 2001             Change in median    Change in mean     Local    % meeting local        % meeting national 62
assessment score for March 2001)   median   mean    (days waited/    median   mean    (days waited/    between quarters    between quarters   target   target in quarter      day target in quarter
                                   (days)   (days)  patients)        (days)   (days)  patients)        days      (%)       days      (%)      (days)   ending: March 2000 /   ending: March 2000 /
                                                                                                                                                        outcome quarter^2      outcome quarter^2
Prostate - all cases
Programme E^3 (3.5)                96.0     113.1   (1584/14)        71.0     67.4    (1551/23)        -25.0     (-26.0)   -45.7     (-40.4)  70       43 / 48                43 / 39
Programme F^4 (4)                  42.0     61.7    (1418/23)        93.0     92.6    (1944/21)        51.0 *    (121.4)   30.9      (50.2)   <56      52 / 24                57 / 29

Colorectal - all cases
Programme E^5 (3.5)                74.0     83.7    (2428/29)        91.0     106.9   (1924/18)        17.0      (23.0)    23.2      (27.7)   <70      45 / 33                45 / 28

Lung - all cases
Programme C^6 (4)                  38.0     37.0    (370/10)         30.0     34.3    (103/3)          -8.0      (-21.1)   -2.7      (-7.2)   49       70 / 100               90 / 100
Programme D proj. A^7 (4)          25.0     30.4    (213/7)          41.0     58.7    (176/3)          16.0      (64.0)    28.2      (92.8)   50       100 / 67               100 / 67
Programme G^8 (3.5)                41.5     51.0    (408/8)          118.0    116.6   (1399/12)        76.5 *    (184.3)   65.6      (128.6)  56       63 / 8                 63 / 8
* p<0.05 The difference in median waiting time is statistically significant (Mann-Whitney U test)
1 Measures of the variation in waiting time in each quarter are included in the project-level analysis reported in appendix 5. Where referral or treatment dates are missing, the number of patients with dates, and the total number of patients, in each quarter are shown in the project-specific footnotes.
2 The outcome quarter is shown in each project-specific footnote.
3 The outcome quarter is October to December 2000. No data on urgency. Data on source of referral were incomplete: seven were recorded as A&E.
4 p=0.04 Data: q1 23/23 q2 21/25. The outcome quarter is September to November 2000. Data on urgency were incomplete.
5 Data: q1 29/34 q2 18/32. The outcome quarter is October to December 2000.
6 Data: q1 10/17 q2 3/17. Data on referral and treatment dates for only a minority of cases in the outcome quarter ending March 2001. Data on urgency and treatment type were incomplete.
7 Data: q1 7/8 q2 3/7. Data on referral and treatment dates for only three of seven cases in the outcome quarter ending March 2001. All cases were urgent GP referrals.
8 Data: q1 8/8 q2 12/14. The outcome quarter is October to December 2000. No data on urgency or treatment type.
TABLE 52 continued
Summary project-level secondary analysis of waiting times from referral to first definitive treatment^1

Project (CSC Planning Group's      January to March 2000             January to March 2001             Change in median    Change in mean     Local    % meeting local        % meeting national 62
assessment score for March 2001)   median   mean    (days waited/    median   mean    (days waited/    between quarters    between quarters   target   target in quarter      day target in quarter
                                   (days)   (days)  patients)        (days)   (days)  patients)        days      (%)       days      (%)      (days)   ending: March 2000 /   ending: March 2000 /
                                                                                                                                                        outcome quarter^2      outcome quarter^2
Breast
prog. C hormone therapy^9          17.5     17.5    (35/2)           10.0     11.0    (44/4)           -7.5      (-42.9)   -6.5      (-37.1)  40       100 / 100              100 / 100
prog. E surgery^10 (3.5)           27.0     29.4    (1704/58)        32.0     38.4    (4378/114)       5.0 *     (18.5)    9.0       (30.7)   35       72 / 61                100 / 90
prog. E hormone therapy^10         12.0     15.4    (323/21)         13.0     16.9    (304/18)         1.0       (8.3)     1.5       (9.8)    35       95 / 94                100 / 100
prog. E chemotherapy^10            30.0     30.6    (153/5)          25.0     27.1    (217/8)          -5.0      (-16.7)   -3.5      (-11.4)  35       60 / 88                100 / 100
prog. F^11 (4)                     35.0     49.8    (498/10)         41.5     42.6    (851/20)         6.5       (18.6)    -7.3      (-14.6)  35       60 / 35                80 / 90

Ovarian - all cases
programme C^12 (4)                 41.0     45.3    (181/4)          21.0     21.0    (21/1)           -20.0     (-48.8)   -24.3     (-53.6)  35       50 / 100               75 / 100
programme E^13,14 (4)              39.0     37.7    (226/6)          30.0     28.0    (224/8)          -9.0      (-23.1)   -9.7      (-25.7)  35       33 / 88                100 / 100
programme F^13,15 (4)              19.5     19.3    (77/4)           21.0     21.5    (129/6)          1.5       (7.7)     2.3       (11.7)   35       100 / 100              100 / 100
programme G^13,14,16 (4.5)         17.0     36.8    (221/6)          30.0     28.7    (316/11)         13.0      (76.5)    -8.1      (-22.0)  28       67 / 45                83 / 91
* p<0.05 The difference in median waiting time is statistically significant. (Mann-Whitney U test)
1 Measures of the variation in waiting time in each quarter are included in the project-level analysis reported in appendix 5. Where referral or treatment dates are missing, the number of patients with dates, and the total number of patients, in each quarter are shown in the project-specific footnotes.
2 The outcome quarter is shown in each project-specific footnote.
9 Shown here in the secondary analysis rather than the main analysis because the number of cases in each quarter is so small. All cases were urgent using both criteria. The outcome quarter is January
to March 2001.
10 The outcome quarter is October to December 2000. No data on urgency.
11 The outcome quarter is September to November 2000. No data on treatment type or urgency.
12 Data: q1 4/7 q2 1/1. The single patient in the outcome quarter (January to March 2001) was reported to have had a private ultrasound scan before being referred by their GP.
13 The outcome quarter is October to December 2000.
14 No data on urgency. All patients treated with surgery.
15 Data on time waited only. No data on urgency or treatment type.
16 Data: q1 6/6 q2 11/12. One patient was excluded from q2 because the referral date was after the first outpatient date. The local waiting time target was for 90% of patients.
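The two right-hand blocks of table 52, the percentage of patients treated within the local target and within the national 62 day target, are simple threshold counts over the patient-level waiting times. A minimal sketch, with illustrative waits rather than any project's data, is shown below.

    # Minimal sketch of the "% meeting target" columns in table 52. Waiting times
    # below are illustrative, not the projects' patient-level data.
    def percent_meeting(waits_days, target_days):
        """Percentage of patients whose wait from referral to first definitive
        treatment is within the target number of days."""
        within = sum(1 for w in waits_days if w <= target_days)
        return 100 * within / len(waits_days)

    march_2000 = [96, 45, 120, 70, 15, 88, 60, 110, 52, 40, 75, 99, 30, 64]          # illustrative
    outcome_quarter = [71, 30, 45, 80, 25, 62, 55, 90, 33, 66, 41, 70, 20, 58, 49]   # illustrative

    local_target = 70     # days, as stated in the project's report to NPAT
    national_target = 62  # days, the national 62 day target used across table 52

    for label, waits in [("March 2000 quarter", march_2000), ("outcome quarter", outcome_quarter)]:
        print(f"{label}: {percent_meeting(waits, local_target):.0f}% within local target, "
              f"{percent_meeting(waits, national_target):.0f}% within 62 days")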
Appendix 7 Postal questionnaire: aspects of improvement approach
TABLE 53
% respondents rating aspect ‘very’ or ‘quite helpful’ (by programme A-G)
Specific Aspects of               Overall     A      B      C      D      E      F      G
Improvement Approach              (n=96)     (17)   (9)    (10)   (9)    (13)   (20)   (11)
Dedicated PM time                    88       82    100     90     78     92     75    100
Process mapping                      86       82     89    100     78     92     70     91
National Learn Wshops                85       82    100     90     89     92     80     73
Change principles                    82       76     67     90    100     77     75     91
Capacity & demand                    78       65     89     90     89     69     75     82
PDSA cycles                          76       47    100     90     89     62     85     73
Monthly reports                      73       53    100    100     78     77     40     82
National one day meetings            69       59     78     90     89     69     50     73
Improvement handbook                 58       35     89     60     89     69     45     55
Team self assessment scores          47       24     56     70     44     38     35     36
Listserv                             39       24     44     70     33     62     30     18
Conference calls                     36       29     44     40     22     62     35     27
TABLE 54
% respondents rating aspect ‘very’ or ‘quite helpful’
Specific aspects of               Overall     Project managers     Tumour Group
Improvement Approach              (n=96)      (38)                 Clinical leads (40)
Dedicated PM time                    88           82                   89
Process mapping                      86           85                   79
National Learn Wshops                85           93                   81
Change principles                    82           79                   79
Capacity & demand                    78           85                   66
PDSA cycles                          76           91                   59
Monthly reports                      73           82                   57
National one day meetings            69           81                   56
Improvement handbook                 58           72                   45
Team self assessment scores          47           27                   45
Listserv                             39           60                   19
Conference calls                     36           51                   23
Appendix 8
Postal questionnaire: organisational aspects
TABLE 55
% respondents rating aspect ’very’ or ’quite helpful’ (by programme A-G)
Broader, organisational           Overall     A      B      C      D      E      F      G
aspects                           (n=96)     (17)   (9)    (10)   (9)    (13)   (20)   (11)
Local CSC lead                       82       71     89    100     67    100     80     64
Clinical champions                   72       59     56    100    100     85     45     73
Cancer networks                      66       47     33     60    100     85     85     27
National CSC team                    57       59     56     70     78     54     45     45
Trust Chief Executives               51       18     67     60      0     92     50     73
Regional Office                      29       24      0     20     56     15     50      9
Health Authority                     25        6     22     20     33     31     25      9
TABLE 56
% respondents rating aspect ’very’ or ’quite helpful’
Broader, organisational           Overall     Project managers     Tumour Group
aspects                           (n=96)      (38)                 Clinical leads (40)
Local CSC programme lead             82           84                   82
Clinical champions                   72           85                   47
Cancer networks                      66           66                   66
National CSC team                    57           60                   47
Trust Chief Executive                51           45                   50
Regional Office                      29           18                   32
Health Authority                     25           12                   18
Appendix 9
Costs questionnaire
Evaluation of the Cancer Services Collaborative (CSC). NPAT cost data
questionnaire
Budget allocated for CSC activity

Source:                                               To March 2001    Estimated for 2001/02*
Department of Health                                  £                £
Other sources (eg transfers from
Booked Admissions Programme)                          £                £

* It is appreciated that there may be a ‘nil’ entry for 2001/02

Expenditure:                                          To March 2001    Estimated for 2001/02*
CSC programmes                                        £                £
NPAT CSC-related staff time                           £                £
CSC Learning workshops, meetings and other CSC
events (including related travel)                     £                £
IHI                                                   £                £
Publications                                          £                £
Other (including overheads)                           £                £
Total                                                 £                £

* It is appreciated that there may be a ‘nil’ entry for 2001/02