
PSRP Feedback from incident reporting systems PS028
FINAL REPORT
Professor Louise Wallace
Professor of Psychology and Health, Director
Health Services Research Centre
Coventry University
Priory Street
Coventry
CV1 5FB
CONTENTS
Page No.
Authors – vi
Abstract – vii
Abbreviations – xi
Acknowledgements – xi
1.0 Overview of the study and structure of the report – 1
1.1 References – 16
2.0 Review Section – 22
2.1 Introduction – 27
2.2 Overview of Research Method – 35
2.3 Results from the literature searches – 43
2.4 Synthesis and discussion of principal review findings – 63
2.5 Summary of findings and investigations – 112
2.6 References and Included Articles – 121
3.0 Survey – 140
3.1 Background to the Survey – 142
3.2 Part 1: Patient Safety Culture – 146
3.3 Part 2: Reporting Systems – 149
3.4 Part 3: Analysis of Incidents and Patient Safety Information – 152
3.5 Part 4: Formulating solutions and recommendations for change – 161
3.6 Part 5: Implementing recommendations – 164
3.7 Part 6: Feedback and Dissemination – 166
3.8 Conclusions – 172
4.0 Expert Workshop – 183
4.1 Key Results – 184
4.2 Feedback levels in the SAIFIR model – 187
4.3 Outcome from the Expert Workshop – 191
5.0 Case Studies – 192
5.1 Case Studies section: Overview – 192
5.2 Case Study 1: Developing a Personal Digital Assistant (PDA) Platform for Clinical Risk Reporting to include Feedback to the reporter – 195
5.3 The Newsletter Case Studies: Overview – 203
5.4 Case Study 2: Leicestershire Partnership Trust – 205
5.5 Case Study 3: Lancashire Partnership NHS Trust Bluelights Newsletter Case Study – 207
5.6 Case Study 4: Survey of Trust Newsletters – 212
5.7 Overall Conclusions – 217
5.8 References – 222
6.0 Discussion, Recommendations and Conclusions – 223
6.1 Discussion and recommendations – 223
6.2 Implications for practice in NHS trusts from the scoping project – 235
6.3 Implications for patient safety research – 236
6.4 References – 242
APPENDICES
Appendix 1: Imperial College, London
• Scoping review and synthesis of information on effective feedback systems for incident monitoring in health care (Version 4-9)
Appendix 2: Survey of Trusts
• Questionnaire – web and paper versions
Appendix 3: Expert Workshop
• Attendance list
• Programme
Appendix 4: Case studies
a) University Hospitals Coventry and Warwickshire NHS Trust (UHCW) Incident reporting system; interview schedule for UHCW site; interview schedule for Newsletter case study sites
b) Leicestershire Partnership NHS Trust TRIAL example
c) Lancashire Partnership NHS Trust Bluelight analysis
d) Lancashire Partnership NHS Trust Bluelight example
e) Lancashire Partnership NHS Trust Bluelight poster
f) Newsletter case study survey data collection form
Figures
1. Overview of the study and structure of the report
Figure 1: Overview of the PSRP Feedback research study – 1
2. Review Section
Figure 2: Overview of the PSRP Feedback research programme including Part A Review and Part C Expert Workshop – 26
Figure 2.1: Statement of key aims for the review – 28
Figure 2.2: Information flows between key patient safety entities – 30
Figure 2.3: A cybernetic definition of safety feedback processes for organisational systems – 31
Figure 2.4: Two routes to improving operational safety – feedback of information and action from incident monitoring – 33
Figure 2.5: Main features of the review method – 37
Figure 2.6: Systematic scoping review plan and sub-tasks – 38
Figure 2.7: Matrix of coverage of key knowledge areas within expert panel – 39
Figure 2.8: Incident reporting systems represented in the expert panel – 41
Figure 2.9: Breakdown of bibliographic records obtained from sources included within the systematic search strategy – 43
Figure 2.10: Overview of key review phases for articles – 44
Figure 2.11: Breakdown of high relevance literature sources by content and article – 46
Figure 2.12: Local requirements for incident management, reporting, analysis and learning – 49
Figure 2.13: Summary of the information and action feedback processes implemented within 23 operational health care reporting systems identified within the review – 56
Figure 2.14: Requirements for effective feedback from incident monitoring at an organisational level based upon consultation with a panel of subject-matter experts – 64
Figure 2.15: Recursive feedback or learning loops operating at multiple levels of analysis within embedded organisational systems – 68
Figure 2.16: Overview of reporting and feedback processes spanning both the organisational and supra-organisational level – 70
Figure 2.17: The safety feedback or control loop for organisational systems – 71
Figure 2.18: Functional stages in the incident-based safety feedback loop – 72
Figure 2.19: The SAIFIR framework – 75
Figure 2.20: Correspondence between features of the SAIFIR framework and expert-driven requirements for effective safety feedback systems – 76
Figure 2.21: Relationship between the functional stages within safety feedback or control loop and the main stages of the safety issue management process – 80
Figure 2.22: Safety Issue Management Process for Learning from events (SIMPLE workflow) – 81
Figure 2.23: Description of different types of feedback corresponding to feedback modes A-E within the SAIFIR framework – 84
Figure 2.24: SAIFIR framework depicting key dialogue processes or inputs from the reporting community to the safety issue management process – 93
Figure 2.25: Description and classification of health care incident monitoring and feedback systems – 95
Figure 2.26: Architecture of a hospital risk management and reporting system reproduced from Nakajima et al (2005) – 100
Figure 2.27: Key aim and outputs from the review – 112
Figure 2.28: 15 requirements for effective safety feedback systems – 113
Figure 2.29: Five modes of feedback from incident monitoring – 114
Figure 2.30: Examples of further forms of safety feedback mechanisms for incident monitoring – 115
Figure 2.31: Current limitations of NHS safety feedback systems – 117
Figure 2.32: Recommendations for enhanced safety feedback systems in UK healthcare based upon the review findings – 118
5. Case Studies
Figure 5: Types of feedback – 193
Figure 5.1: Comparative analysis of the case studies against SAIFIR framework requirements – 216
6. Discussions
Figure 6: A checklist of actions to implement the recommendations from the scoping study of feedback from incident reporting system – 236
Reporting Systems: a scoping study of methods of providing feedback within an
organisation
Report to the Department of Health Patient Safety Research Programme
November 2006
Research collaboration:
Professor Louise M Wallace
Applied Research Centre, Health & Lifestyles Interventions
Coventry University
Dr Maria Koutantji
Clinical Safety Research Unit
Imperial College London
Professor Peter Spurgeon
West Midlands Deanery &
University of Warwick
Professor Charles Vincent
Clinical Safety Research Unit
Imperial College London
Additional authors:
Dr Jonathan Benn
Clinical Safety Research Unit
Imperial College London
Dr Louise Earll
Consultant Psychologist
Researchthatworks Limited
ABSTRACT
Introduction: This report describes a systematic scoping study funded by the Department
of Health Patient Safety Research Portfolio (PSRP) entitled: Reporting Systems: a scoping
study of methods of providing feedback within an organisation (Grant code: PS-028). The
study aims to investigate potential mechanisms for providing feedback from incident
reporting systems in the UK NHS. Clinical incident reporting systems are a key part of the
detection and monitoring of patient safety in NHS Trusts, within trust level NHS risk
management systems, linked to the national system known as the National Reporting and
Learning System (NRLS) established by the National Patient Safety Agency (NPSA). The
promotion of safety through learning from reported adverse incidents in healthcare is being
actively pursued through similar state or national agencies throughout the developed world.
This study aimed to draw on international expertise, not only in healthcare, but in other high
risk industries. The commission highlighted that, to date, disproportionately more attention has been paid worldwide to the mechanisms of reporting and to driving up reporting rates than to the development of learning and system changes from incident reporting systems.
Methods: The study was commissioned to be completed in 18 months; therefore
programmes of work were conducted in parallel, with iteration between sub elements to
ensure that learning was integrated into a model which could be used to assess the current
readiness of NHS systems to learn from incidents. It is a model that could be used to
influence the design of new initiatives and system changes, and to generate testable
hypotheses about the features and modes of operation of patient safety incident reporting
and feedback systems. The commission excluded national systems of reporting, which have already been the focus of intense scrutiny and comparison.
The programmes of work were a worldwide structured literature review, which after
systematic screening produced 2,002 reports of studies of relevance to feedback from
safety incident reporting systems, of which 193 were selected for data extraction. Twenty-nine
articles referred to 23 case reports of healthcare incident reporting feedback systems which
were examined in depth. This programme was informed at several stages by interviews
with healthcare and non-healthcare industry experts (n=18). The second programme comprised
empirical studies of NHS healthcare systems at the level of all 607 NHS trusts. The
programme began with a survey of all NHS trusts in England and Wales (n=607), achieving
responses from 351 trusts (a 57.8% response rate). The survey was informed by two surveys of provider trusts in England conducted in 2005 during the initial phases of the
project by the National Audit Office, thereby expanding the breadth of data on which to draw
conclusions. Both programmes were presented at an Expert Workshop of 71 NHS risk managers and clinicians, and representatives of professional healthcare staff
and regulators, and experts from other high risk industries in the UK (rail, maritime, aviation
industries and the Health and Safety Executive). This synthesis phase helped to both further
refine the model arising from the review, and target in depth case studies of aspects of
feedback in NHS trusts. Three of the four case studies selected reflected feedback in the
form of newsletters, while one describes initial progress in building an electronic mobile
system for incident reporting that could provide a platform for several levels of feedback that
are currently not in evidence in the UK.
Results and discussion: The review section defines feedback as the process by which the
safety of a health care delivery system is incrementally improved as a result of changes to
work systems and processes within the organisation based upon observed, safety-relevant
outcomes, such as reported patient safety incidents and adverse events. Given the
multifaceted nature and modes of feedback communication in healthcare including normal
workflows of supervision and management, it is difficult to distinguish those specifically
derived from learning from patient safety reporting systems. Indeed, it is desirable that such
systems are integrated. This poses a problem of scope for this study, which was addressed
pragmatically by following systematic review methods combined with expert advice on these
boundary issues. As may be expected from the diverse literature in many different industries,
the methodological quality was often very poor. However, the most instructive studies were
those that described in some detail systematic attempts to create opportunities for learning
from incident reporting systems at a number of levels simultaneously. System requirements
were extracted both from the literature and from expert interviews, and through feedback at
an NPSA expert meeting, as well as the Expert Workshop. The result is a framework for
Safety Action and Information Feedback from Incident Reporting (SAIFIR), which includes
five distinct modes of action and information feedback and how they map onto a generic
safety issue management process for organisational level risk management systems.
Feedback encompasses both action and information outputs that are designed to enhance
patient safety through increasing safety awareness and improving systems of care delivery.
Mode A feedback describes immediate feedback to the reporter or others in the affected
service or team, termed “bounce back” information: acknowledgement and debriefing of the
reporter immediately following report submission.
Mode B feedback is aimed at rapid
response actions, i.e. measures taken against immediate and serious threats to safety (i.e.
where the issue is fast-tracked through the process of remedial action and/ or investigation).
Mode C feedback is risk awareness information on current system vulnerabilities from the
analysis of incident reports that may be broadly disseminated within an organisation (e.g.
Safety Newsletters). Mode D is feedback information that informs staff of actions taken. This
includes reporting back to the reporter and reporting community on issue progress and
actions taken based upon reports. Mode E is systems improvement actions. This is the
development and implementation of specific action plans for improvements to work systems
that address specific contributory factors identified through analysis of reported issues. The
model includes 15 system requirements for organisational level safety
feedback processes, based upon expert opinion regarding best practices in this area.
The survey of NHS trusts showed that risk reporting and feedback systems are highly
variable in terms of coverage of reporting and of feedback. Mode A feedback of
acknowledgement to the reporter was given by a third of trusts. The Expert Workshop
showed that Mode B remediation and recovery advice was not given by any trusts
represented. The survey showed that mode C dissemination of risk awareness information
was practiced in all trusts, mainly via newsletters, group meetings and training. Mode D
feedback of issue management and outcome to the reporter was in place for two thirds of
trusts but a tenth gave no outcome information to reporters. Mode E feedback of
improvements in work systems occurs for two thirds of trusts, but 25% have no systems for
monitoring impact, and 27% of respondent risk management leads state that they believe it
is not acted upon. The survey results also reflect shortcomings against many of the 15
requirements of safety systems, again showing that there is considerable progress to be
made in most trusts, particularly in the integration of sources of risk information, sharing of
solutions in changing working practices within and between trusts, implementation and
monitoring of recommendations and promoting a safety culture with visible senior
leadership.
The case studies of feedback within NHS trusts showed that Mode A and B feedback would
be welcomed and could potentially be supported by e-working technology for both reporting
and feedback. Newsletters were received from 90 trusts and subjected to textual analysis.
There is great variation in practice, with few making use of basic design features to make
them attractive. Content varies from general awareness raising to very vivid incident
vignettes and specific recommendations for changes in working practices. Newsletter based
initiatives in two partnership trusts were studied in detail, both showing features that
demonstrated modes C, D and E feedback and met many of the 15 requirements in the
SAIFIR model, including the use of regular audit of content and reach, self-assessment
tests and face-to-face dialogue with front-line staff to achieve buy-in to changes in working
practices.
Conclusions: The overall conclusions were that the SAIFIR model provides a means of
describing the essential features of effective feedback from patient safety reporting systems,
and recommendations are made both for practice in the NHS and for research. However,
feedback can only be effective in the context of comprehensive and sufficiently resourced
patient safety systems.
ABBREVIATIONS:
CNST- Clinical Negligence Scheme for Trusts
HB- Health Board (Wales)
HRO- High Reliability Organisations
NAO- National Audit Office
NHS-National Health Service
NICE- National Institute for Health and Clinical Excellence
NPSA-National Patient Safety Agency
NRLS-National Reporting and Learning System
OWAM-Department of Health (2000) Organisation with a memory. Report of an expert
group on learning from adverse events in the NHS. The Stationery Office, London
PCT-Primary Care Trust (England)
RCA-Root Cause Analysis
RM-Risk Management
SHA-Strategic Health Authority (England)
Acknowledgements:
The research team are indebted to many people for ideas, suggestions and participation in
the programmes of this research. In addition to those listed as contributing as researchers in
each section, we wish to record our thanks to Mrs. Jo Foster and Prof. Richard Lilford of the
DH Patient Safety Research Programme for supporting this research. We thank Captain
David Lusher and Terema Ltd for advice, and Dr. Mike Rejman, Dr. Sally Adams, Dr. Richard
Thomson and Dr. Jane Carthey of the NPSA, and the many Patient Safety Managers who
contributed advice and helped forge contacts with the NHS. We are indebted to the risk
management leads of participating trusts, in particular those who assisted with piloting
survey items (Pam Wilcox, Rachel Freeman, Jo Beales of South Warwickshire NHS Trust)
and Maria Dineen, Consequence UK for advice on case study sites. For enthusiastic
contribution to our case study programme we thank Claire Armitage of Leicestershire
Partnership NHS Trust, Justin Thomas of Lancashire Partnership NHS Trust, Paul Martin,
Yvonne Gateley and Dr Duncan Watson of University Hospitals Coventry and Warwickshire
NHS Trust.
1.0 - OVERVIEW OF THE STUDY AND STRUCTURE OF THE REPORT:
The report describes a systematic scoping study funded through the Department of Health
Patient Safety Research Programme (PSRP) (Grant code: PS-028). The study draws upon
three interrelated work streams to investigate mechanisms for providing feedback from
patient safety incident reporting systems in the UK NHS. The focus is on reporting systems
as a means of detecting safety issues in NHS Trusts. In this context, feedback refers to the
process by which organisations “learn from failure”, as described in the Chief Medical
Officer’s report “An Organisation with a Memory” (Department of Health, 2000). A major
proposition in this research is that feedback encompasses both action and information
outputs that are designed to enhance patient safety through increasing safety awareness
and improving working practices, systems and environments for healthcare delivery. In a
mature system, feedback so defined is a subset of the normal workflows of supervision and
management. In the context of healthcare, where such systems are very much under
development, the scope of the study is those actions purposively designed to learn from and
reduce the likelihood and impact of future patient safety incidents.
The study involved a research consortium including researchers in clinical safety from the
Universities of Coventry and Birmingham and Imperial College London, and safety experts
from healthcare and other industries. Figure 1, developed by the Imperial Team, shows
schematically the Work streams and their contribution to the overall study.
[Figure 1 summarises the three work streams:
Part A: Scoping review of feedback from safety monitoring systems in health care and high risk industries (Imperial College London), comprising (1) a systematic literature review and (2) an expert panel review. Outputs: requirements for effective safety feedback with rationale, and a model of the feedback/control process for safety monitoring systems.
Part B: Empirical studies of feedback systems and in-depth case studies (Coventry & Birmingham Universities).
Part C: Expert Review Workshop with health care professionals and domain experts.
Overall output: design for effective safety feedback systems for health care.]
Figure 1: Overview of the PSRP Feedback research study
Report structure:
Reflecting the leadership of the work streams, the sections of the report that follow the brief
introduction to the policy and research literature (a product of the combined research team)
are each written by the team that led the relevant work stream. In each section, there have been numerous
contributions in all phases of the research and subsequent report writing, from the whole
research team. However, each section is also intended to be largely self contained to make
each project easier to understand, and to aid rapid dissemination to the diverse audiences
for each section. While this may lead to some repetition, we also believe that readers will be
able to select more easily those aspects which are of interest to their research or safety
practice.
In place of the traditional background literature review, the report begins with the work of the
Imperial College team which contains not only a detailed review of incident reporting safety
literature from healthcare and other high risk industries and NHS patient safety policy, but
also synthesises opinion from selected experts worldwide. This work stream ran throughout
the study, and informed the design of the empirical research (survey and case studies)
undertaken by the Coventry University-led team. Both work streams fed into the Expert
Workshop. The outputs from the Expert Workshop informed the final case study phase and
the conclusions drawn in the review work stream. We have therefore included the detail of
the rationale, methods, results and key conclusions in each section, drawing the main points
together in an overall conclusion at the end.
The Steering Group:
The Steering Group met six times, and members also provided invaluable ad hoc advice. In
addition to the research collaborators named on the front page, full members were Captain
David Lusher on behalf of Terema, independent aviation safety consultant and ex-head of
safety at British Airways. Dr. Michael Rejman attended to represent the NPSA and to
provide expertise in safety in rail and other high risk industries. Mr. Laurie MacMahon,
Director of The Office of Public Management provided advice on the conceptualisation of,
and direct facilitation of, the Expert Workshop. Denise James, Research Project Manager
(Applied Research Centre, Health & Lifestyles Interventions Coventry University) provided
technical support to the Steering Group as well as project managing the empirical studies
lead by The Coventry University team. As the study was run in parallel in the first year with
the evaluation of the NPSA’s Root Cause Analysis (RCA) training (DH PSRP PS045), we
ran the Steering Groups on the same day, enabling synergy to be achieved between these
projects, particularly as regards access to information from the NPSA, and advice in a
personal capacity from Dr. Sally Adams (who represented the NPSA on the RCA project
Steering Group). Mrs. Jo Foster was a member of both Steering Groups on behalf of the DH
PSRP.
The following introduction is based on the review in our proposal, and was led by Dr. Maria
Koutantji with input from Professor Charles Vincent and Professor Louise Wallace.
Background of study, policy relevance and related research
The UK Department of Health report “An Organisation with a Memory” (OWAM) (2000)
stresses the need for the National Health Service (NHS) to operate as a learning
organisation actively learning from past failures. It asserts that in order to achieve this,
reporting systems that allow individuals and the organisation to learn need to be developed,
used and maintained. This should translate into safer services for patients. Effective
mechanisms at each stage of identification, reporting, analysis and feedback, and links
between each stage, are crucial to achieving safer patient care and to maintaining the
integrity of the risk reporting system itself.
It is important to examine the way that learning occurs at a local organisational level.
Feedback seen as specific responses to reports of incidents or audit data (as defined in the
original PSRP call for proposals) is a mechanism that can facilitate learning from failures at
the local level.
The following section reviews the purpose and functions of learning organisations, where
almost all the development of these concepts has been outside healthcare, in high reliability organisations (HRO). The rationale for the proposed relationships to
learning from incident reporting systems, and the key role of feedback as a learning
mechanism in HROs is examined. The emerging research in the NHS is used to establish
the potential for improving risk reporting systems in the English and Welsh NHS. The
research was designed to take this forward by synthesising knowledge from HROs, non UK
healthcare and the NHS, and building a model to be tested by empirical work in the NHS,
with a third phase of synthesis of the two streams of work to create grounded proposals for
feedback systems to be developed and tested in the NHS.
Learning organisations and High Reliability Organisations (HROs)
The Learning Organisation is a popular, if ill defined, concept. Proponents of the learning
organisation centrally assert that organisations must learn from the collective experience of
all their members as a means to maintain flexibility and competence in response to uncertainty
and rapid change in their environment. Learning is also essential for the maintenance and
improvement of their capacity to innovate and compete and hence to survive. Most authors
assert that learning organisations are characterised by: open systems thinking where the
different activities in the organisation are understood by the workforce and there is
integration of the way operations happen; constant effort to improve individual capabilities;
team based learning, updating of mental models (cognitive representations of how things
are done); presence of a cohesive vision with clear direction, strategy and values shared by
the members of the organisation (Senge, 1990; Davies & Nutley, 2000). Healthcare is a
rapidly evolving field where new research evidence and technologies should have a direct
impact on clinical practice, but not all actions can be prescribed based on evidence, and
team working is essential (Wallace, Cooke and Spurgeon, 2003; Bayley, Wallace, Spurgeon
and Barwell, in press). Healthcare is therefore a sector where the learning organisation
approach is particularly appropriate if services are to function effectively and safely for patients and staff.
Iles & Sutherland (2001) in their review of organisational change concepts and theories for
healthcare professionals, define organisational change as “a transformational process which
seeks to help organisations develop and use knowledge to change and improve themselves
on an ongoing basis”. Argyris and Schon (1978) refer to three levels of learning in
organisations: single-loop learning which is adaptive as it focuses on how to improve the
existing conditions by using incremental change and narrowing the gaps between
desired/ideal and actual conditions; double loop learning which is generative learning aiming
at changing the status quo by questioning and changing existing assumptions and
conditions within which single–loop learning operates; and meta-learning, which is learning
about the best way to learn and sharing this knowledge across the organisation. In this
context, feedback is a mechanism that facilitates learning from experience, operating at both
the single- and double-loop levels.
However, for effective individual, team and organisational learning to happen, conducive
cultural values have to be in place. Organisational culture has many facets (Mannion,
Davies and Marshall, 2003), so unsurprisingly there is variation in how the values of a
learning organisation are described. However, Mintzberg’s model has been influential and
includes: celebration of success; absence of complacency; tolerance of mistakes; belief in
human potential; recognition of tacit knowledge; openness; trust; and being outward looking
(Mintzberg et al., 1998 cited in Davies & Nutley, 2000). Reason (1990) identifies four critical
elements of an effective safety culture: reporting is valued, and the culture is just, flexible and
encourages learning. The challenge is how to achieve these values in organisations where the risks of
failure can be catastrophic.
High reliability industries such as aviation, nuclear power, petrochemicals and railways are
organisations that have persistently fewer accidents than expected on the basis of the high
risk nature of their work. According to Weick & Sutcliffe (2001) high reliability organisations
(HROs) are distinct from other organisations on the basis of their ability to manage the
unexpected “mindfully” through flexible rules and the development of live problem solving
skills for their staff. HROs are not error free but errors do not disable them. Their operations
are characterised by: i) preoccupation with failure; they pay attention to small failures as well
as large failures; they achieve this by encouraging reporting of errors and near misses and
actively learning from them, ii) reluctance to simplify interpretations; HROs take steps to
create more complete pictures of events and processes acknowledging that the world is
complex, unstable, unknowable and unpredictable. They encourage scepticism towards
received wisdom; iii) sensitivity to operations; they develop very good situational awareness
by constant monitoring of normal operations with the aim to identify and correct deficiencies
that can lead to the development of unexpected events, iv) commitment to resilience, by
developing capabilities to detect, contain, and recover from those inevitable errors;
resilience is a combination of keeping errors small and of improvising workarounds that
keep the system functioning; they also simulate and practice worst case conditions, v)
deference to expertise; decisions are made not on the basis of rank but of expertise, with
many decisions being made at the front line. Different decision systems operate in normal,
high-tempo, and crisis times when a predefined rehearsed emergency structure gets
activated. In order for most of the above to be achieved, the reporting of large and small
failures and near misses, and the generation of specific and relevant feedback to those staff
members who report can be seen as vital mechanisms via which the endeavour for high
reliability operates.
Rejman (1999) described how the setting up and maintenance of an incident reporting
system for part of the UK military aviation led to the generation of reports that did not relate
to actual adverse incidents but to potential latent failures within the organisation that could
be used proactively to manage risk.
The thrust of the study which we are reporting is that healthcare could benefit by examining
how these HROs manage failures and strive for higher safety, while also being mindful of
limitations on the transferability of learning across sectors.
Varieties of healthcare reporting systems
Every healthcare system uses reporting systems that have various purposes. The UK
National Patient Safety Agency (NPSA) surveyed the reporting systems in place in the NHS
in 2003. They identified 31 separate adverse incident reporting systems operating within the
NHS, not including local and specialty systems. Some are primarily used for regulatory
purposes, professional monitoring, or performance management. Reporting for improving
safety and quality of care is however primarily concerned with learning.
Many countries already operate reporting systems for adverse effects of drugs, problems
with medical devices, the safety of blood products and other matters. In response to
concerns about patient safety new reporting systems have been initiated which are intended
to cover a much wider range of adverse outcomes, errors and near misses. These may
operate at the local level (risk management systems in hospitals) or at national level (e.g.
UK National Patient Safety Agency). The UK National Audit Office commissioned a review
of national incident reporting systems in eight countries (Emslie, 2004). This showed that
while Canada, USA and Australia had national patient safety agencies, Ireland, New
Zealand, Singapore and Hong Kong did not, although all but the USA and, at that time,
Canada had confidential national reporting systems, with the process under development in
Canada at the time of this review (2005-6).
Sophisticated systems have also been
established to investigate and understand a variety of specific issues, such as transfusion
problems or safety in intensive care. Some of the more sophisticated systems now involve
an examination of human factors and detailed information on the causes of adverse events.
Runciman and colleagues (Runciman et al, 2006) describe the desirable features of an
integrated framework for safety, quality and risk management, spanning all four levels of
individual, organisational, inter-organisational and national learning. This review
concentrates specifically on the first three levels, while also acknowledging there are
implications for national systems and for integration of reporting systems within the whole
risk management system.
Local reporting systems in healthcare
The development of risk management in the United States, United Kingdom (UK) and
elsewhere led to the establishment of local incident reporting systems in hospitals, usually
run centrally. Typically there is a standard incident form, asking for basic clinical details and
a brief narrative describing the incident. Sometimes staff are asked to report any incident
which concerns them or might endanger a patient; in more sophisticated systems where
staff within a unit may be trying to routinely monitor and address specific problems there
may be a designated list of incidents, although staff are free to report other issues that do
not fall into these categories.
These systems have a number of aims and there is sometimes a conflict between the risk
management (RM) function and the broader patient safety function, at least in terms of time
and resources. In the United States risk management is primarily concerned with managing
complaints and litigation and has often not been linked with efforts to improve the quality
and safety of care. Local systems are ideally used as part of an overall quality improvement
strategy but in practice are often dominated by managing claims and complaints. Risk
management is, in the UK at least, gradually evolving to have a stronger focus on safety
issues and to be less dominated by medico-legal concerns, and in the past two years under
guidance from the Healthcare Commission, clinical risk management has become more
integrated at corporate level with health and safety, estates, equipment and IT, workforce
and financial issues through integrated governance.
National and other large scale reporting systems in healthcare
National systems tend to be expensive to run, and have the disadvantage of being entirely
reliant on written reports, perhaps supplemented by telephone checking. On the positive
side their sheer scale gives a wealth of data, and their particular power is in picking up
events that may be rare at a local level with patterns of incident only appearing at national
level. The British National Patient Safety Agency (NPSA) is probably the only truly national
system, but the Veterans Affairs system in the United States is very wide ranging, as is the
Australian AIMS system. Many other countries are in the process of setting up large scale
reporting systems.
The National Reporting and Learning System (NRLS) of the NPSA is, explicitly, a system for
learning from the incidents reported so that solutions can be developed to address patient
safety issues. The information received is anonymous, only released in aggregate form and
not used for regulatory or other purposes. At the inception of the research study, two pilot
studies had been carried out, the second producing 28,000 incident reports, and the system
was being rolled out across the NHS. A key target for the NPSA was to develop a national
reporting system. During the current research project, an anonymous electronic reporting
system was in place by September 2004, and all trusts were undertaking some reporting to
the NRLS by January 2006 (House of Commons Committee of Public Accounts, 2006) (HC
PCA). In July 2005, the NPSA issued its first report on its analysis of incidents (NPSA, July
2005).
Remaining challenges are to encourage broader reporting, both in terms of involving all
professions and all clinical areas. For instance, there is little reporting in primary care.
Reporting of incidents to Primary Care Trusts (PCTs) is below the threshold for mandatory
reporting to the NPSA, and this relies on the voluntary input of independent contractors. The
NPSA is continually reviewing the many different potential uses of its data, which in itself
could be viewed as feedback. For instance incident reporting data are used, amongst other
kinds of data, to prioritise solution development within the NPSA, but the extent to which
these data are used to inform safety issues in healthcare policy, or as feedback to submitting
trusts, has yet to be demonstrated. The more general dissemination of safety issues to NHS
staff and patients was criticised by the NAO and HC PCA reports referred to above; however,
in our view it is not clear that the scale of the challenge of establishing a functional
and effective reporting system across the NHS within the given timescale was fully
appreciated.
Weaknesses in NHS incident reporting systems as identified by the “Organisation with a
memory” report (Department of Health, 2000)
The following weaknesses were identified:
• No organisation-wide, operational definition of an incident.
• Coverage and sophistication of local incident reporting systems, and the priority afforded to them by NHS Trusts, varies widely.
• Incident reporting in primary care is largely ignored.
• Regional incident reporting systems undoubtedly miss some serious incidents and take hardly any account of less serious incidents.
• No organisation-wide approach to investigating serious incidents at any level.
• Current systems do not facilitate learning across the NHS as a whole.
There has been progress in some of these areas, notably the extensive training provided by
the NPSA to 8,000 staff in Root Cause Analysis, which was evaluated under PSRP PS045
by Wallace and colleagues (Wallace, et al., 2006), but many issues are still largely as
described six years ago, particularly the unevenness in reporting and the lack of systems for
ensuring widespread learning across the NHS. This research was conducted within this
context, and focused on feedback from reporting within local healthcare organizations. We
now examine briefly the research on reporting behaviour.
Factors that affect quality of reporting in healthcare:
Reported error in healthcare is known to be a small proportion of the error occurring
(Vincent, Neale and Woloshynowych, 2001). The determinants of what is detected and
reported show that there are many filters. These
are influenced by individual, team,
professional and organisational values and practices. Some primary research has
addressed health care staff’s views on reporting and learning from errors in the British
context. Vincent, Stanhope and Crowley-Murphy (1999) conducted a questionnaire study on
reasons for not reporting adverse incidents in two obstetric units. They found that of the
combined sample of 42 obstetricians and 156 midwives, most staff knew about the existing
incident reporting system. But their views on the necessity of reporting varied considerably
as a function of the type of obstetric incident, profession and grade. The main reasons for
not reporting were fears that junior staff would be blamed, high workload and the belief that
the circumstances or the outcome of a particular case did not warrant a report. Among the
recommendations of this study for improving reliability of reporting were education about
what to report, feedback and reassurance to staff about the nature and purpose of reporting
systems.
Similar results were obtained by Firth-Cozens, Redfern and Moss (2002; 2004). They
conducted a study with focus groups of medical and nursing staff of different grades to
investigate factors that affect reporting of errors in London and Newcastle. It seems that
staff reported no near misses and what was reported was very situation specific. Senior staff
often gave the view that nothing was ever done to address the issues raised. Among the
mechanisms for change they proposed were the use of critical incident analysis, audit,
annual appraisals, change of culture, support by peers for those involved in incidents, formal
education about safety and handling of errors, and ward based learning groups and time
outs.
Meurier, Vincent and Parmar (1997) investigated the causes of errors in nursing, the way
nurses coped with them on an individual basis, and the potential of errors to initiate changes
in their practice. It was found that accepting responsibility for error and planful problem
solving were associated with positive changes in practice, while distancing oneself was
associated with a tendency not to divulge the error.
Research on patients harmed by treatment indicates that among the most significant
reasons for complaints from patients and their carers are: to ensure that no-one else has to experience the
same situation again; and to raise staff’s awareness of what happened (endorsed by 90.4%
and 80.1% of respondents respectively) (Bark, Vincent & Jones et al, 1994). Patients wish to
be part of the feedback loop, and to see learning from error. Several national systems now
incorporate reporting from the public (e.g. Denmark, the Netherlands, Ireland, Sweden and
the UK). However, where the system is anonymous and where no identifiers are stored, as
in the UK, feedback directly to the reporter is precluded. Payers (government and other
funders) are also expected to be part of the feedback system (American Medical
Association, cited in Shaw and Coles, 2001). The NPSA’s guidance “Being Open”, and the
Healthcare Commission’s core standards both support direct feedback to patients who
have been involved in clinical incidents.
Coles, Pryce & Shaw (2001) examined the factors that influence incident reporting in
healthcare in the UK and worldwide. Interviews with UK healthcare professionals showed
that information reported was influenced by many factors, including beliefs about the extent
to which action will be taken to improve patient care, and the extent to which the individual will be blamed.
Few people knew what happened to the incident report data in their trust. The staff who
reported most (in acute hospitals) were nurses. Doctors either delegated or dealt with the
incident themselves and did not see reporting as a means of learning. There was virtually no
analysis at trust level. Medical Audit had largely failed because the loop was seldom closed
(Buttery, Walshe and Rumsey, et al., 1995). Coles et al. (op. cit.) found the type of feedback
could be statistical or a case-specific description, with the latter being the preferred option for
some doctors. Prof J. Reason (personal communication) recommends, on the basis of his
experience of working with HROs, the use of “stories” (specific cases) as a powerful method
of providing feedback to staff, complementary to the more common quantitative
presentation of facts.
The types of feedback may have different impacts on behaviour. Relevant psychological
research on the factors that could facilitate healthcare professionals’ adherence to clinical
guidelines has shown that the use of specific behavioural directions in the statement of the
guidelines (specifying precisely recommended behaviours: what, who, when, where, and
how) rather than the use of general statements will assist adherence and implementation
(Michie & Johnston, 2004). It is reasonable, but as yet untested, to expect a similar impact
of specific guidance for future practice on the implementation of feedback.
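To make this distinction concrete, a behaviourally specific recommendation can be thought of as a record with explicit what, who, when, where and how fields. The short Python sketch below is purely illustrative and is not drawn from this report or from Michie & Johnston (2004); the class name, field names and example content are invented.

from dataclasses import dataclass

@dataclass
class SpecificRecommendation:
    """Illustrative record for a behaviourally specific direction: it states
    precisely what should be done, by whom, when, where and how."""
    what: str
    who: str
    when: str
    where: str
    how: str

# A general statement such as "improve checking of infusion pumps" gives no
# behavioural direction; a specific version (invented content) might read:
example = SpecificRecommendation(
    what="check the pump settings against the prescription",
    who="the nurse administering the infusion",
    when="before starting the infusion and at each handover",
    where="at the bedside",
    how="using a documented two-person check",
)
print(example)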
Influence of what is fed back on the impact of feedback:
Experts from non UK healthcare have recommended that the most valued feedback
should be immediate, to acknowledge the report of the incident, and to specify what
steps are being taken to respond (B. Runciman, President, Australian Patient Safety
Foundation, personal communication, cited in the Coles et al. (2001) report). In addition
the Institute of Medicine (2001) stresses that reporting without analysis and follow-up
(feedback) may be counterproductive as it may weaken support for constructive
responses and be viewed as a waste of resources.
Moving from analysis to recommendations to change practice.
The US Agency for Healthcare Research and Quality through its Patient Safety
Initiative supports a number of related research programmes which involve the
development and evaluation of a number of reporting systems some of which involve
feedback mechanisms (e.g., reporting systems and learning: best practices; reporting
system to improve patient safety in surgery) which we have accessed to inform our
research.
Billings (1998), commenting on incident reporting systems in medicine on the basis of
his experience setting up and maintaining the aviation safety reporting system in the
USA, offered two main reasons to explain the willingness and high motivation of
pilots to use the reporting system: a) their sincere interest in improving safety by
identifying hazards; b) “the well grounded belief that the system to which they are
reporting uses that information productively and deliberately to improve safety rather
than simply as a means of counting failures”.
Woloshynowych and colleagues have recently completed a systematic review on the
investigation and analysis of critical incidents and adverse events in healthcare and
other industries (Woloshynowych et al., 2005). A key piece of advice in this review for
researchers and investigation teams on incident analysis is to pay more attention to
producing recommendations for change and implementation of changes, as it is
considered of major importance to link findings from specific investigations to future
prevention.
From feedback to healthcare improvement: Feedback that closes the loop.
In the UK context, the Department of Health’s (DH) report “An Organisation with a
Memory” (Department of Health, 2000) recommended that there must be effective
communication and feedback to front-line staff as a result of incident or near-miss
reporting so that people can see what has changed as a result of reporting. This
would contribute to the creation of an informed culture where active learning from
reporting and responding to failures can happen. Secker-Walker & Taylor-Adams
(2001) also recommended the use of regular feedback as a means to ensure
organisational learning in healthcare and to make adverse incident reporting work, as
improvements resulting from incident reports are expected to become reinforcers for
staff to continue using the incident reporting system. Furthermore, Vincent & Taylor-Adams
(2001), in their guidelines for investigation and analysis of clinical incidents, refer to the
production of recommendations to prevent recurrence, the
implementation of the action arising from case analysis reports (part of which should
be feedback of the findings to staff) and the evaluation of the impact of changes by
further monitoring of incident reports. This process can be seen as a loop (cycle)
between incident reporting, analysis, feedback, changes and further monitoring of
changes and incident reports.
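As a minimal sketch of this closed loop (not taken from the report; the function and argument names below are invented for illustration), the cycle can be expressed in Python as a single pass in which reports are analysed, recommendations are implemented, findings are fed back to staff, and subsequent reports are monitored:

def run_learning_cycle(reports, analyse, recommend, implement, feed_back, monitor):
    """One illustrative cycle of the incident-based learning loop.

    `reports` is a batch of incident reports; the remaining arguments are
    caller-supplied functions standing in for each organisational stage.
    """
    findings = [analyse(report) for report in reports]   # investigation and analysis
    actions = recommend(findings)                        # recommendations to prevent recurrence
    implement(actions)                                   # changes to working practices
    feed_back(reports, findings, actions)                # feedback of findings and actions to staff
    return monitor()                                     # further monitoring of incident reports

In practice each stage is a substantial organisational process rather than a function call; the point of the sketch is simply that the loop is only closed when the monitoring stage informs the next cycle.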
This point is also made in the DH report “Doing less Harm” (Department of Health,
2001) where one of the 10 key requirements for health care providers for improving
safety and quality of care is to ensure that lessons are learned from specific
incidents. It was stated there that the organisation needs to develop improvement
strategies, to implement and monitor them with local staff learning lessons, and
changing practice as appropriate.
Harrison et al. (2002) carried out research on reporting systems in UK primary
care. They found that fewer than 20% of practices were engaged in audit of
significant events and reporting. They recommended that building on the existing
significant event audit and analysis (SEA) system would be beneficial, and they
outlined the components of a typical local system which should allow learning to take
place at local levels linked with appraisal, revalidation and quality improvement
processes. They specifically advised the inclusion of a quarterly SEA learning event,
where clinicians and staff across practices should “explore and learn from local
significant events and receive feedback from the reporting process”.
These assertions are plausible but untested.
Since the current study was undertaken, the NPSA has trained 8,000 staff in Root
Cause Analysis, and an evaluation was conducted by members of the current
research team under the grant PSRP PS045 (Wallace et al., 2006). The study found
that knowledge of factual aspects of the human factors model was high after the
course, although there were marked variations in how participants would analyse and
make recommendations from case scenarios. Attitudes and intentions and use of
RCA were very positive after the course and at six months afterwards, and 59% had
conducted an RCA within six months of the course. However, participants
experienced organisational barriers to the conduct and implementation of RCA.
Sharing of outcomes within the trusts was not widespread, and not all staff were
confident that recommendations would be implemented and lead to greater safety.
External learning, in development of solutions and outcomes, was rare. Similar
findings have been published by Braithwaite and colleagues in a state-wide
evaluation of a Safety Improvement Programme in New South Wales, Australia,
including RCA training (Braithwaite et al., in press (a) (b); Iedema et al., 2006;
Wallace, in press). Feedback from investigations is further explored in the current
study’s survey and case studies.
The above findings indicate that: health care staff want to see changes to practice
implemented as a result of incident reporting; that they can improve their practice by
adopting a proactive problem solving coping style in relation to errors; and that such
an approach would match the needs of patients harmed by treatment as they do
expect appropriate action to be taken at least at a local level to prevent recurrence of
similar adverse events. Constant monitoring of operations, active learning from small
and large failures and use of the front line staff’s expertise in generating solutions to
problems are key characteristics of high reliability organisations. But, given the
immaturity of risk management systems in healthcare, it is likely that there is
considerable “noise” in existing systems, so the important indicators of risk are not
routinely detected or acted upon. On that basis, the purposeful use of reported
incidents and their analyses to generate local feedback is a learning mechanism that
could have the potential of improving the quality as well as quantity of reporting of
incidents, improving safety and reducing litigation and could be a first step towards
high reliability functioning for health care organisations.
The suggested model of organisational learning in the original PSRP brief states the
need to systematically investigate the feedback mechanisms that exist on a local
level for lessons to be learnt from the reporting of failure. However, for relevant
lessons to be learnt from these reporting systems, both at a local and at a supra-
organisational level, sufficient processing of the reporting data must take
place beforehand. The results of the incident analysis should then determine what
lessons can be learnt and what will be the appropriate feedback mechanisms to
address the problems. Our approach to the research has taken into account the
stages of processing of incidents from reporting systems at the local organisational
level in relation to the review of the feedback mechanisms that exist.
It also
recognises that safety feedback systems will, as systems mature, integrate with
normal workflows of supervision and management, including the generation of safety
solutions.
The benefits of rapid and condition-specific feedback to reporters have been shown in
a small scale study by Bolsin and Colson (2003). They trained junior doctors in
anaesthetics to use a computer programme and mobile device to monitor the
occurrence of adverse clinical outcomes and near misses in day-to-day practice, as a
way for clinicians themselves to collect data on their performance and
engage in personal professional monitoring. The data provided individual and group
complication rates. Initial data showed that trainees will accept the system and collect
performance data. In a related application, Bolsin, Solly and Patrick (2003) published
a case study where routine detailed trainee performance monitoring data was used to
help justify to a patient and relatives the unforeseeable nature of a rare complication
of a procedure. The authors stipulated that the necessary conditions for the success
of personal professional monitoring in this context are that: data collection is easy,
feedback is rapid and informative, the training environment is supportive and the
system is robust.
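As a rough illustration of the individual and group complication rates such a system might return (this is not the data model used by Bolsin and colleagues; the records and field names below are invented), a few lines of Python suffice:

from collections import defaultdict

# Invented log entries: (clinician, procedures_performed, adverse_events_recorded)
log = [
    ("trainee_A", 40, 2),
    ("trainee_B", 55, 1),
    ("trainee_A", 30, 0),
]

totals = defaultdict(lambda: [0, 0])
for clinician, procedures, events in log:
    totals[clinician][0] += procedures
    totals[clinician][1] += events

for clinician, (procedures, events) in sorted(totals.items()):
    print(f"{clinician}: individual complication rate = {events / procedures:.1%}")

group_procedures = sum(procedures for procedures, _ in totals.values())
group_events = sum(events for _, events in totals.values())
print(f"group complication rate = {group_events / group_procedures:.1%}")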
Research on clinical governance shows that risk management systems are
underdeveloped (Latham et al., 2000), methods to improve healthcare have largely
been uninformed by evidence of what works (Walshe et al., 2001; Wallace, Freeman
et al., 2001; Wallace et al., 2001 (a) (b)) and engagement by board members,
especially non-executives, has been poor (Wallace and Stoten, 1999; Wallace et al.,
2004). An important part of this research includes data on the overall risk
management policies, structures and culture in which reporting systems are situated,
as we examine how trusts manage risk reporting and analysis, and the extent of
“front line” and board level involvement in developing effective feedback and service
improvement. Feedback can only be effective in the context of comprehensive risk
management systems, suitably resourced.
The tender specified the aims to be:
1) Establish the types of feedback systems used in healthcare and other
sectors, and how they may be affected by the quality and
comprehensiveness of incident reporting systems.
2) How effective these feedback systems appear to be in closing the loop
and creating safer systems after a patient safety incident.
3) The effectiveness of these mechanisms on culture and the willingness to
report in future.
The research is described in the following sections in two main work streams; the first
refers to the consultation with experts and literature review; the second to the NHS
survey on feedback. This is followed by a report of the formal synthesis activities
undertaken: an Expert Workshop and final case studies, which ran in parallel. All
research activities contributed to the emergent model of feedback from reporting
systems and to the generation of recommendations for the NHS and future research.
1.1 - References:
Argyris & Schön (1978) Organizational learning: A theory of action perspective; New
York: McGraw Hill
Bark, P., Vincent, C., Jones, A. & Savory, J. (1994) Clinical Complaints: a means of
improving quality of care, Quality in Health Care, 3:123-132
Bayley J, Wallace LM, Spurgeon P, Barwell F and Mazelan P Team working in the
NHS: Longitudinal evaluation of a two-day teambuilding intervention. Journal of
Learning in Health and Social Care (in press)
Billings (1998) Incident reporting systems in medicine and experience with the
aviation safety reporting system, in Cook, R.D., Woods, D.D. & Miller, C. A Tale of
Two stories: Contrasting views on patient safety, National Patient Safety Foundation,
Chicago
Bolsin S. & Colson M. (2003). Making the case for personal professional monitoring
in health care. International Journal for Quality in Health Care, 15, 1, 1-2.
Bolsin S., Solly R. & Patrick A. (2003). The value of personal professional monitoring
performance data and open disclosure policies in anaesthetic practice: a case report.
Quality and Safety in Health Care, 12, 295-297.
Braithwaite, J., Westbrook, M.T., Travaglia, J.F., Iedema, R., Mallock, N.A., Long, D.,
Nugus, P., Forsyth, R., Jorm, C., & Pawsey, M. (in press) (a). Are health systems
changing in support of patient safety? A multi-methods evaluation of education,
attitudes and practice. International Journal of Health Care Quality Assurance (accepted 16/7/06).
Braithwaite J, Westbrook MT, Mallock NA, Travaglia JF and Iedema R (b) Experiences of health professionals who conducted root cause analyses after undergoing a safety improvement program. Quality and Safety in Health Care (in press)
Buttery, Y, Walshe K, Rumsey M, et al. Evaluating audit: Provider audit in England-a
review of 29 programmes. London. CASPE Research, 1995.
Coles, J., Pryce, D. & Shaw, C. (2001) The reporting of adverse clinical incidents –
achieving high quality reporting. http://www.publichealth.bham.ac.uk/prsp (accessed
on 10 March 2004)
Davies, H.T.O. & Nutley, S.M. (2000) Developing learning organisations in the new
NHS, British Medical Journal, 320: 998-1001
Department of Health (2000) Organisation with a memory. Report of an expert group
on learning from adverse events in the NHS. The Stationery Office, London
Department of Health (2001) Building a safer NHS for patients. The Stationery Office,
London.
Emslie S, in: Comptroller and Auditor General: A Safer Place for Patients: Learning
to improve patient safety. Appendix 4. National Audit Office, London. 2005.
Firth-Cozens, J., Redfern, N. & Moss, F. (2002) Confronting errors in patient care:
reporting on focus groups,
http://www.publichealth.bham.ac.uk/psrp/pdf/Focus%20group%20report_firthcozens.
pdf (accessed on 10 March 2004)
Firth-Cozens J, Redfern N, and Moss F. (2004) Confronting errors in patient care:
the experiences of doctors and nurses. Clinical Risk, 10, 184-190.
Harrison, P., Joesbury, H., Martin, D., Wilson, R. & Fewtrell, C. (2002) Significant event audit and reporting in general practice. ScHARR, Sheffield University, England.
House of Commons Committee of Public Accounts. A safer place for patients:
learning to improve patient safety. Fifty First Report of Session 2005-6. The
Stationery Office. London.
Iedema R, Jorm C and Long D et al, Turning the medical gaze in upon itself: root
cause analysis and the investigation of clinical error. Social Science and Medicine,
2006; 62 (7): 1605-1615.
Iles, V. & Sutherland, K. (2001) Managing change in the NHS, Organisational
Change a review for healthcare managers, professionals and researchers, NCCSDO,
London
Latham, L., Freeman, T., Walshe, K., Spurgeon, P. Wallace, L. (2000) Clinical
Governance in the West Midlands and South West Regions: Early Progress in NHS
Trusts. Clinician in Management, 9, 83-91.
Mannion R, Davies HTO and Marshall, MN (2003) Cultures for performance in health
care: evidence on the relationship between organisational culture and organisational
performance in the NHS. Summary of Policy Implications and Executive Summary.
Centre for Health Economics, University of York.
Meurier, C.E., Vincent, C.A. & Parmar, D.G. (1997) Learning from errors in nursing
practice, Journal of Advanced Nursing, 26: 111-119
Michie, S. & Johnston, M. (2004) Changing clinical behaviour by making guidelines
specific, British Medical Journal, 328: 343-345
NPSA (2005) Building a memory: preventing harm, reducing risks and improving
patient safety: The first report of the National Reporting and Learning System and the
Patient Safety Observatory. NPSA, London.
Reason J (1990) Human Error. New York. Cambridge University Press.
Rejman M. H. (1999). Confidential reporting systems and safety-critical information;
In Proceedings of the 10th International Symposium on Aviation Psychology (Eds) R.
S. Jensen, B. Cox, J. D. Callister & R.Lavis; pp 397-401.
Runciman WB, Williamson JAH, Deakin A, Benveniste KA, Bannon K and Hibbert PD (2006) An integrated framework for safety, quality and risk management: an information and incident management system based on a universal patient safety classification. Quality and Safety in Health Care, 15 (Supplement 1): i82-i90.
Shaw C. and Coles J. (2001) The reporting of adverse clinical incidents-international
views and experience. CASPE Research, London.
Secker-Walker, J. & Taylor-Adams, S. (2001) Clinical risk management in
anaesthesia, In: Vincent, C.A. (ed) Clinical Risk Management. Enhancing patient
safety British Medical Journal Publications: London
Senge P.M. (1990) The fifth discipline: The art and practice of the learning
organisation. Random House. London.
Vincent C.A., Neale G., Woloshynowych M. (2001) Adverse events in British hospitals: preliminary retrospective record review. British Medical Journal, 322: 517-519.
Vincent C.A., Taylor-Adams S. (2001) The investigation and analysis of serious
incidents. In: Vincent C.A. (ed). Clinical risk management. Enhancing patient safety.
2nd edition. British Medical Journal Publications: London.
Vincent, C., Stanhope, N & Crowley-Murphy, M. (1999) Reasons for not reporting
adverse events: an empirical study, Journal of Evaluation in Clinical Practice, 5: 13-21
Wallace, L.M., Stoten, B. (1999) When the buck stops here: Trust and Health
Authority perspectives on their responsibilities and capabilities to manage Clinical
quality. Journal of Integrated Care, 3(1), p.61-62.
Wallace LM, Cooke, J and Spurgeon P (2003) Team working in the NHS: Report on
team building intervention to improve risk management in the West Midlands (South).
Report to Workforce Development Confederation, West Midland (South) Strategic
Health Authority, November 2003.
Wallace L M, Spurgeon P, Latham L, Freeman T and Walshe K. (2001) (a) Clinical
Governance, organisational culture and change management in the new NHS.
Clinician in Management, 10 (1) p.23-31.
Wallace L M, Freeman T, Latham L, Walshe K and Spurgeon P. (2001) (b)
Organisational strategies for changing clinical practice; how Trusts are meeting the
challenges of clinical governance. Quality in Health Care, 10, pp.76-82.
Wallace, LM, Boxall, M and Spurgeon P (2004) Organisational change through
clinical governance: The West Midlands 3 years on. Clinical Governance: An
International Journal, 9 (1),17-30.
Wallace LM, Koutantji M, Vincent C and Spurgeon P: Evaluation of the national Root
Cause Analysis programme of the NPSA. DH Patient Safety Research Programme
report. September 2006.
Walshe K, Wallace L, Latham L, Freeman T and Spurgeon P. The external review of
quality improvement in healthcare organisations: A Qualitative study. International
Journal for Quality in Health Care (2001), 13 (5), pp. 367-374.
Wallace LM From root causes to safer systems: international comparisons of
nationally sponsored healthcare staff training programmes. Quality and Safety in
Healthcare (in press)
Weick, K.E. & Sutcliffe, K.M. (2001) Managing the unexpected: assuring high
performance in an age of complexity, Jossey-Bass: San Francisco
Woloshynowych, M., Rogers S. Taylor-Adams S. & Vincent C (2004). The
investigation and analysis of critical incidents and adverse events in healthcare. HTA
monograph.
2.0 - Scoping review and synthesis of information on effective feedback
systems for incident monitoring in health care
Report of research funded by the National Patient Safety Research
Programme
Research Report Authors
Dr Maria Koutantji
Dr Jonathan Benn
Professor Charles Vincent
Research Programme Collaborators
Coventry University Research Team
Professor Louise Wallace (Principal Investigator)
Dr Louise Earll
Ms Julie Bayley
Imperial College Research Team
Dr Maria Koutantji
Professor Charles Vincent
Dr Jonathan Benn
Dr Andrew Healey
Birmingham Research Team
Professor Peter Spurgeon
Expert advisors to the research
Dr Mike Rejman
Captain David Lusher
Professor Laurie McMahon
Copyright © Imperial College London. All rights reserved.
Acknowledgements
The authors are greatly indebted to the many individuals who generously offered
considerable time and effort to make their expertise available to the review. Through
their involvement we were able to draw upon an expert panel that represented a
diversity of health care, transport and high-risk industry perspectives on safety
management. We are especially grateful to the principal consultants who joined the
project’s steering group and offered comment and interpretation throughout the
course of the work.
Principal consultants to the project
Dr Mike Rejman — Director, Confidential Incident Reporting and Analysis System
(CIRAS)
Captain David Lusher — Terema Consultants
Subject-matter experts
Professor Steven Bolsin – Barwon Health, Victoria
Mrs Linda Connell – NASA Aviation Safety Reporting System (ASRS) and NASA/VA
Patient Safety Reporting System (PSRS)
Dr Allan Frankel – Partners Healthcare System, Boston
Professor David Gillingham – Coventry University Business School
Mr Andrew Livingston – Human Factors, Atkins Global
Dr Melinda Lyons – University of Cambridge
Mr Robert Miles – Offshore Safety Division, Health and Safety Executive
Dr Michael O’Leary – British Airways UK
Mr Michael Powell – UK Confidential Hazardous Incident Reporting Programme
(CHIRP Maritime)
Professor James Reason – University of Manchester
Professor WB Runciman – University of Adelaide
Mr Peter Tait – UK Confidential Human Factors Incident Reporting Programme
(CHIRP)
Dr Sally Taylor-Adams — Sally Adams & Associates and Imperial College London
Professor Robert Wears – University of Florida and Imperial College London
Mr Peter Webster – Safety and Regulation Division, British Energy
APPENDICES (SEE ADDENDUM) – APPENDIX 1
A: Systematic Search Strategy Version 5-8
B: Standardised Review Criteria Version 4-4
C: Process for development of systematic search algorithm and facet structure
D: Bibliographic database details
E: Guidelines for inclusion and exclusion of literature articles employed during
screening phases of the review process
F: Schedule of interview topics for consultations with subject matter experts
G: Details of higher-relevance articles
H: Details of lower-relevance articles
I: Representative quotes from interviews with subject matter experts concerning the
requirements for effective feedback systems for incident monitoring schemes
J: Matrix table showing support for the expert-derived requirements based upon a
sub-set of articles from the review
K: Initial formulation of feedback systems for incident reporting
L: Detailed fine-grain mapping of feedback modes to the safety issue management
process within the SAIFIR framework
M: Extract from CHIRP web-site on newsletter feedback and example of CHIRP
feedback
N: Representative quotes from subject matter experts concerning newsletters as a
form of safety feedback
O: Example of a “Safety Tips” bulletin from the US Intensive Care Unit Safety
Reporting System
2.1 - Introduction
This report details a systematic scoping review undertaken as part of a broader
programme of research funded through the Department of Health Patient Safety
Research Portfolio (PSRP). The broader project is entitled: Reporting Systems: a
scoping study of methods of providing feedback within an organisation (Grant code:
PS-028).
Specifically, this report aims to draw upon available information to
investigate potential mechanisms for providing feedback from incident reporting
systems in the UK NHS. The focus upon reporting systems as a means of detecting
safety issues in NHS Trusts has become prominent in recent years through
developments in local NHS risk management systems and the National Patient
Safety Agency’s efforts to implement a National Reporting and Learning System
(NRLS). The NRLS generated its first report on collected data in July 2005 (NPSA,
2005), following a delayed and problematic process of implementation (NAO, 2005;
House of Commons Committee of Public Accounts, 2006).
In the NHS context,
safety feedback refers to the process by which organisations “learn from failure”
(Dept. Health, 2000). Feedback therefore encompasses both action and information
outputs that are designed to enhance patient safety through increasing safety
awareness and improving systems of care delivery.
Figure 2 below provides an overview of the separate strands of research work that
comprise the overall PSRP Feedback Programme. The overall programme involved
a research consortium including academics and researchers in clinical safety from
Imperial College London, Coventry and Birmingham Universities, along with input
from industry and specialist advisors.
[Figure 2 (diagram): Part A – Scoping review of feedback from safety monitoring systems in health care and high risk industries (Imperial College London), comprising (1) a systematic literature review and (2) an expert panel review, producing the requirements for effective safety feedback with rationale and a model of the feedback/control process for safety monitoring systems. Part B – Empirical studies of feedback systems and in-depth case studies (Coventry & Birmingham Universities). Part C – Expert Review Workshop with health care professionals and domain experts. Output: design for effective safety feedback systems for health care.]
Figure 2: Overview of the PSRP Feedback research programme including
Part A Review and Part C Expert Workshop
The review draws upon the available published literature and subject area expertise
in areas of international health care and high risk industries that have established
effective incident monitoring and safety improvement processes. This report seeks
to synthesise the resulting information and expert knowledge to define the
requirements for effective feedback and develop a model of possible incident
monitoring and learning processes at the organisational systems level.
During the review work considerable effort was given to the development of a
methodology that would ensure rigour and comprehensiveness in the research
output, drawing upon established methodology for systematic reviews where possible
as defined by the Cochrane Library (Higgins and Green, 2005) and the Centre for Reviews and Dissemination (CRD, 2001). The strategy adopted within this report, however, is to present the reader with a practical summary of the findings and output
from the work, rather than a lengthy academic discussion of the process by which
they were obtained. With this aim in mind, the report will provide a concise overview
of the research process in order to focus upon the results and synthesis of
information from the review in later sections. More complete technical resources
such as the systematic search algorithm itself and structure of the data extraction
exercise may be found within the appendices (see appendices A and B).
Review aims
The rationale for the review and origin of the review question is based upon the
original research requirements outlined in the PSRP call for proposals: To investigate
how individual health care organisations can respond locally to specific reported
patient safety incidents, in a targeted manner; specifically, through identifying feedback mechanisms that effectively "close the loop" (to use a term that has
become established in the safety improvement literature, e.g. Benner & Sweginnis,
1983; Gandhi et al., 2005). Such mechanisms allow learning from reported failures
and implementation of improvements in work systems that address the causes of
failure and prevent recurrence of similar adverse events. Figure 2.1 below includes a
statement of aims highlighting the key objectives of the review.
Key review aims:
• To consider the characteristics of effective safety feedback from incident reporting systems and the requirements for effective incident monitoring and learning systems at the level of local health care organisations.
• To describe best practices in this area, through drawing upon:
a) Safety management experience within a range of high risk domains and relevant international expertise in patient safety;
b) The available published research literature on patient safety and incident reporting in health care and other relevant industries.
Figure 2.1: Statement of key aims for the review
Background: Reporting systems and learning from failure
Considerable attention has been given to the role of reporting systems in learning
from failures in health care organisations. Influential reports in both the US and UK
highlight the need for active learning from adverse events and near misses,
proposing effective reporting schemes based upon high risk industry and aviation
models as a means towards achieving these objectives (Dept. Health, 2000; Kohn,
2000). Well-publicised incident monitoring studies and the development of large-scale, centralised systems in some countries and states have further promoted the
incident reporting system as an important element of risk management in
organisations. One successful example of incident monitoring that has resulted in a
large volume of published analyses of incident reports is the Australian Incident
Monitoring study (e.g. Webb et al., 1993; Beckmann et al., 1996; Runciman, 2002).
The topic of reporting systems is largely beyond the scope of this report, which
remains focused upon the feedback processes and output from analysis of reported
incidents. Considerable literature in this area already exists. Several recent studies
and reports, for example, have addressed the topic of incident reporting in health
care (e.g. Firth-Cozens, Redfern & Moss, 2001; Cook, Woods & Miller, 1998; Coles,
Pryce & Shaw, 2001; Kohn, Corrigan & Donaldson, 2000) and within the safety
sciences literature relating to safety management in high risk industries,
comprehensive attempts have been made to describe the functions and operation of
reporting systems (e.g. Van der Schaaf & Wright, 2005; Johnson, 2003).
The Department of Health report: An Organisation with a Memory (Dept. Health,
2000) highlights several limitations in the NHS’s capability to actively learn from
failures. Several weaknesses in the design and operation of current NHS incident
reporting systems are highlighted (p54), including their failure to encourage learning
across the NHS as a whole. The report highlights a lack of standardised definitions
as to what constitutes an incident and the absence of a standardised approach to
investigating serious incidents, at different levels, as barriers to learning from
reported incidents. Coverage and sophistication of local incident reporting systems
varies widely between NHS trusts and incident reporting in primary care is largely
ignored.
Regional incident reporting systems undoubtedly miss some serious
incidents and take hardly any account of less serious events.
Regarding the
feedback issue, the report highlighted the problems with existing systems and
processes for learning from failures in the NHS (p78):
• Existing systems take a long time to feed back information and recommendations from the analysis of failures.
• There is little or no systematic follow-up of recommendations proposed to prevent recurrence of specific failures.
• There is a general lack of clarity about the priorities for improvement efforts.
• There is a lack of effort to develop design solutions to prevent the recurrence of specific adverse events.
A survey by the National Audit Office (2005) of non-primary care trusts in England
concluded that at all levels, there was a need to improve the analysis of incident
reports and sharing of solutions by all organisations. Lessons learnt on a local level
were not widely disseminated either within or between trusts.
The considerable
existing complexity in feedback processes for learning from adverse events on a
national level in the NHS was highlighted as a potential barrier to effective learning.
Figure 2.2: Information flows between key patient safety entities
(Source: National Audit Office, 2005)
Figure 2.2 highlights the presence of multiple information flows between NHS trusts
and other organisational entities that respond and act upon reported incidents. The
result is multiple sources of safety feedback for improvement in care delivery
systems. Coupled with what the NAO report suggests is a lack of routine feedback
channels operating on the level of the NPSA’s National Reporting and Learning
System (NRLS), the need for consideration of safety feedback processes for local
level NHS trusts becomes apparent. Further evidence of the prominence of the
feedback issue for successful health care safety management and improvement is
provided by the conclusions of a recent report by the House of Commons Committee
of Public Accounts (2006). The report criticises the progress made by the NPSA and
questions the value of the NRLS, in part due to what is perceived as a failure to
feedback effective solutions to trusts and a failure to effectively evaluate and
promulgate solutions between trusts (p5).
The emerging picture of current NHS
safety feedback systems is one of a lack of effective, commonly understood and
implementable feedback channels or processes to improve safety, based upon local
operational experience of care delivery.
Defining safety feedback from incident monitoring
For the purposes of this review a cybernetic perspective on feedback at the level of
organisational systems is adopted (see figure 2.3 below). From this perspective,
organisational learning occurs based upon operational experience over time,
representing a process of self-regulation and the operation of a control loop that
detects outputs and feeds them back to the system. Feedback may therefore be
regarded as the use of information concerning aspects of the performance of the
organisation to regulate or modify the functioning of the system, in order to achieve
more desirable future performance.
In patient safety terms, feedback may be viewed as: The process by which the safety of a health care delivery system is incrementally improved as a result of changes to work systems and processes within the organisation based upon observed, safety-relevant outcomes, such as reported patient safety incidents and adverse events.
[Figure 2.3 (control loop diagram): resources and information flow into the organisational system and its processes, producing outcomes of activity; a comparator asks whether each outcome meets the aims for safe performance against the safety standards; where it does not, a feedback cycle begins, using safety/performance information to modify resource inputs and reduce vulnerability in the system and its processes.]
Figure 2.3: A cybernetic definition of safety feedback processes
for organisational systems
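The comparator logic depicted in figure 2.3 can be expressed as a minimal sketch. The Python below is purely illustrative (the field names, the severity threshold and the example outcomes are assumptions rather than elements of the review), but it shows how outcomes of activity are tested against a safety standard and, where the standard is not met, how a feedback cycle of action and information is begun.

def run_feedback_cycle(outcomes, max_acceptable_severity=2):
    """One pass of the control loop in figure 2.3 (illustrative sketch only)."""
    actions, bulletins = [], []
    for outcome in outcomes:
        # Comparator: does the outcome meet the aims for safe performance?
        if outcome["severity"] <= max_acceptable_severity:
            continue  # yes: no feedback cycle is needed for this outcome
        # No: begin the feedback cycle using safety/performance information.
        actions.append("reduce vulnerability: " + outcome["cause"])     # feedback of action
        bulletins.append("alert front line staff: " + outcome["type"])  # feedback of information
    return actions, bulletins

# Illustrative use with made-up outcomes of activity:
observed = [
    {"type": "near miss", "severity": 1, "cause": "labelling"},
    {"type": "adverse event", "severity": 4, "cause": "drug calculation"},
]
print(run_feedback_cycle(observed))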
The terms work system and care delivery system will be used interchangeably to
refer to the often complex sociotechnical systems that deliver the product or purpose
of the organisation; care delivery in the case of health care systems. Work system is
an inclusive term used in a sociotechnical sense to describe the infrastructure,
operating environment, organisational structure, equipment, work processes and
human elements (including the characteristics and competencies of teams and
individuals) that interact to fulfil an operational function. Put another way, these
elements all have the potential to influence behaviour and task performance or
conspire in novel and unpredictable ways to create dangerous disturbances in
“normal” functioning.
Adopting a holistic “systems” perspective of productive
operations within an organisation recognises that work systems are composed of
structured interrelationships between care providers, their patients, technological
elements and care delivery processes.
Each of these are applied within the
situational context provided by the organisation. The organisational context itself
comprises a set of formal structures and policies, informal cultural or shared value
systems and a physical working environment. Patient safety incidents are therefore
correctly identified as the product of holistic work systems, in accordance with the
“systems” viewpoint that has emerged to explain what are more appropriately
considered organisational rather than individual failures to maintain safe operational
performance (Vincent et al., 2000; Reason, 1995).
It is the exact nature of the cybernetic control loops and the functional stages within
the feedback processes that operate for incident monitoring activities in health care
organisations that form the focus of this study.
Accordingly, effort is made to
conceptualise the various modes and channels of feedback that may be implemented
to regulate the behaviour of the organisational systems under consideration. The key
requirements concerning the nature and effectiveness of the feedback processes
initiated for incident reporting and analysis activities will be described later in this
report based on case studies of implemented systems within the research literature
and relevant subject matter expertise.
A key issue in the operation of cybernetic control processes for organisational systems is how the organisation uses what may be limited opportunities for learning from operational experience to inoculate itself against future adverse events (e.g. March, Sproull & Tamuz, 2003). In terms of the actual mechanisms by which incident reporting systems translate data input into information to drive the safety improvement process, three principal processes may be implemented, as illustrated in the brief sketch after the list below:
1) Analysis and investigation of single events, through Root Cause Analysis
(RCA) processes, to collect data on the circumstances surrounding a single
incident.
2) Analysis of a cluster of similar events or patterns of similar events
representing an incident type.
3) Comparison of events over time or between departments and to national
guidance or standards.
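A minimal illustration of the three analysis modes above, using a toy incident set (all records, field names and values are hypothetical assumptions; real systems would draw on a local risk management database):

from collections import Counter

# Toy incident "database"; all records and field names are hypothetical.
incidents = [
    {"id": 1, "type": "medication error", "department": "A", "month": "2005-01"},
    {"id": 2, "type": "patient fall",     "department": "B", "month": "2005-01"},
    {"id": 3, "type": "medication error", "department": "A", "month": "2005-02"},
    {"id": 4, "type": "medication error", "department": "B", "month": "2005-02"},
]

# 1) Single-event analysis: retrieve one report for investigation (e.g. RCA).
single_event = next(i for i in incidents if i["id"] == 3)

# 2) Cluster analysis: group similar events to identify recurring incident types.
clusters = Counter(i["type"] for i in incidents)

# 3) Comparative analysis: counts over time or between departments, which can
#    then be compared against national guidance or standards.
by_month = Counter(i["month"] for i in incidents)
by_department = Counter(i["department"] for i in incidents)

print(single_event, clusters, by_month, by_department)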
In the interests of comprehensively surveying the relevant literature on safety
improvement within an organisational setting, it was realised that a necessarily broad
view of safety feedback processes should be adopted.
Safety feedback may therefore be viewed as taking the form of both information and action, in the manner depicted within figure 2.4 below.
[Figure 2.4 (diagram): reports submitted to the reporting system populate an incident database; identified system vulnerabilities lead to corrective action plans and improvements in the design of work systems, giving safer work systems, while information on operational risks is disseminated as safety news, increasing the awareness of front line staff; safety feedback = learning lessons from operational experience.]
Figure 2.4: Two routes to improving operational safety – feedback of information
and action from incident monitoring
Feedback of safety actions means using the output from incident monitoring systems
to develop corrective action plans that target vulnerabilities in the design of work
systems and care delivery processes.
Safety actions therefore result in actual
changes in working practices, work technologies, equipment, resources and
organisation in order to deliver incremental improvements in operational safety.
Feedback of safety information refers to the broad dissemination of information to
front line staff on current risks to safe operations identified through incident
monitoring. This latter form of feedback improves operational safety through raising
the awareness and safety knowledge of the human operators of work systems,
allowing them to better adapt to variations, reduce errors and increase their vigilance
for likely failure modes through the knowledge of what could go wrong in specific
circumstances. The periodic publication of safety newsletters and bulletins that draw upon data from reported incidents is one example of safety information feedback.
Precedent for a distinction between action and information modes of feedback from incident reporting schemes is evident within the safety literature: Gandhi et al. (2005), for example, make a parallel distinction between "feedback" and "follow-up", with the
latter referring to the action or improvement process and the former to dissemination
of information on adverse events.
It should be noted that the term “feedback”, in its conventional use, is often applied in
its information capacity to describe the process of eliciting or giving back information
on a subject, e.g. web-site “feedback” or “customer feedback”. In the context of
safety management, however, feedback encompasses a broader scope more in
accordance with the cybernetic definition adopted above and the original research
specification which identifies, as the focus of interest, feedback mechanisms that
effectively “close the loop”, allow learning from reported failures and implementation
of specific improvements in systems that address the causes of failure.
The distinction between information and action feedback was developed in order to
provide a useful heuristic in analysing the characteristics of existing incident
monitoring and learning schemes. The importance of its application in this context
will become more apparent in consideration of models of generic safety feedback
processes later in this report.
In literal interpretation, the information/action
distinction is valid up to a limit defined by the following caveats: 1) that all quality
control activity within an organisational setting is based upon the transmission of
some information concerning current performance and desired state or goals, and 2)
that informing front line staff of a current vulnerability in the work system may be
regarded as a first or basic form of action to take in response to a patient safety
incident.
2.2 - Overview of Research Method
The principal purpose in undertaking the scoping study was to inform policy
development and decision making in health care risk management. Conclusions and
recommendations were developed through synthesising existing research and expert
knowledge into practical guidance, with supporting rationale, for the development of
effective safety feedback processes to improve the safety of care delivery systems.
The review was undertaken to inform the empirical studies of current NHS practice in this area, conducted as Part B of the overall research programme (see figure 2 above). The review process drew upon aspects of systematic review methodology
and qualitative research principles. Three main research methods were employed,
linked to the three principal strands of the review process:
• Systematic search and review of relevant primary research and other secondary sources of literature.
• Consultation with a multi-disciplinary panel of experts on organisational level safety feedback, culminating in a conference workshop with focus group sessions.
• Iterative research synthesis, theory and model building based upon information gained from the preceding two strands of work.
Regarding the development of the review methods, considerable attention was given
to ensuring that methods for the systematic review of randomised controlled trials
and other experimental research designs often used to establish the effectiveness of
clinical interventions (Higgins and Green, 2005; CRD, 2001) were applied as far as was practically possible within the review. This ensured that the available literature in the areas of safety management and incident reporting in health care was reviewed as objectively and comprehensively as was feasible for study designs of this type. Methodology for the synthesis of information from non-experimental, case-based designs and the qualitative research approaches commonly used in the study of organisational systems is typically less well developed or defined than that for experimental randomised controlled trials. Elements of the search strategy drew
upon search methods that had been implemented within a review in a related area,
that of critical incident investigation and analysis methods (Woloshynowych et al.
2005).
In reporting the results from the review, various established guidelines for reporting
the results of systematic reviews were consulted (e.g. MOOSE Group, 2000;
QUOROM: Moher et al., 1999; CRD, 2001). It should be noted, however, that some
deviation has been made within this report from established formats for reporting
systematic reviews. This is due to a number of factors associated with the unique
nature of the study design (i.e. a systematic scoping study that focuses upon
synthesis of information from literature review and expert opinion) and the nature of
the available literature on feedback from incident reporting (mainly being of the non-experimental case report and expert opinion type).
Within the report text and
appendices, and in accordance with MOOSE and QUOROM guidelines, the authors
adhere to several systematic review conventions. These include outline of review
aims and problem statement, the reporting of detailed search strategy and databases
used, explicit statement of the inclusion criteria and data abstraction process,
consideration of study quality and reporting of the characteristics of included studies,
including tables of descriptive information relating to each article.
In summarising the aim of systematic review techniques, the goal of any review
should be objectivity; minimising bias to ensure the reliability of findings (Mays,
Roberts & Popay, 2001). In order to achieve this aim, Mays et al. emphasise that the
review should be comprehensive in identification of relevant research, explicit in
rationale and reproducible in method.
Similarly, Greenhalgh (1997) defines a
systematic review as one that provides an objective synthesis of the findings of
primary studies, is conducted according to explicit and reproducible methodology and
utilises clearly defined objectives, materials and methods.
In accordance with Mays et al.’s (2001) assertion that it is important to be explicit
about the assumptions made and methods chosen for a review, figure 2.5 below
outlines the features of the methodology developed for the feedback review:
Main features of the review method:
1) Consideration of the problem specification and definition of the principal
aims of the review;
2) Consultation with subject matter experts to help identify relevant areas of
knowledge to the review;
3) Definition of a coherent and explicit series of review stages in order to
achieve the review's aims;
4) Development and documentation of a comprehensive search strategy and
clear statement of search methods employed so that search results are
independently reproducible;
5) Use of formal study selection and quality assessment procedures
including consideration of specific inclusion/exclusion criteria for articles
with rationale related to the review aims;
6) Standardised review or data extraction criteria for the identification and
compilation of information relevant to the review;
7) Synthesis of extracted evidence and information within integrating models
and frameworks to ensure coherent conceptual formulation within the
area of study linked to the available research evidence base.
Figure 2.5: Main features of the review method
Regarding the design for a systematic scoping review, Mays et al. states that:
“Scoping studies aim to map rapidly the key concepts underpinning a research area
and the main sources and types of evidence available, and can be undertaken as
stand-alone projects in their own right, especially where an area is complex or has
not been reviewed comprehensively before.” (Mays et al., 2001: p194)
Scoping
exercises therefore seek to map out the relevant literature within a given area at an
early stage of the review and may, according to Mays et al., productively employ
contact with subject area experts in order to help identify relevant areas of knowledge
to be addressed in the plan for the scope of the review proper. This is particularly
appropriate for organisational and managerial research in which the same
phenomenon is likely to have been studied from multiple theoretical orientations, in
marked contrast to experimental approaches for randomised controlled trials of
clinical interventions.
Systematic search and review of published literature
The systematic search and review strategy was designed to sample as broad a
range of literature as possible before identifying those articles that best exemplified:
1) effective safety feedback mechanisms and processes in health care and high risk
industries, 2) reported case examples of incident reporting systems with feedback
from the international health care literature, or 3) useful background material
concerning safety management and the processes by which organisations might
learn from operational experience.
Figure 2.6 below gives an overview of the process, showing the sequence of review
activities undertaken and outputs expected from each stage. The search algorithm
sampled bibliographic records from 7 major clinical, social sciences and
management online databases, using advanced search interfaces from Ovid and ISI
Web of Science.
The searches included all articles published in the English
language, predominantly focusing upon the period 1980-2005, within the following
bibliographic databases:
EMBASE 1980 to 2005 Week 37
MEDLINE 1966 to August Week 5 2005
PsycINFO 1967 to August Week 5 2005
International Bibliography of the Social Sciences (IBSS) 1951 to September Week 01
Science Citation Index Expanded (SCIE)
Social Sciences Citation Index (SSCI)
Arts and Humanities Citation Index
[Figure 2.6 (flow diagram): 1. Run preliminary searches (full-text journal searches, hand searches and a wide selection of search engines/databases; output: key terms and phrases identified, exemplar literature set defined, content of initial search results analysed for relevance). 2. Develop systematic search strategy (iterative process involving crossing concepts and developing synonyms for key terms; output: two versions of the complex search algorithm developed for two different online database interfaces, with five major iterations piloted and inclusion criteria refined according to the relevance of search results). 3. Develop standardised review criteria (generic screening criteria and qualitative/tick-box items developed for prioritising and extracting information from search results; output: standardised review form developed with exclusion criteria for initial screening). 4. Run searches (output: bibliographic records output to reference management software, combined and duplicates removed). 5. Screening and extraction of information (exclude irrelevant articles and review the remainder against set criteria; outputs: literature set prioritised for relevance, and requirements for effective feedback mechanisms for safety information systems in health care).]
Figure 2.6: Systematic scoping review plan and sub-tasks
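Step 4 of figure 2.6, in which records exported from the different databases are combined and duplicates removed, can be sketched as follows. The sketch is purely illustrative: the matching rule (normalised title plus publication year) is a simplifying assumption, and reference management software in practice applies more sophisticated matching.

def combine_and_deduplicate(*record_sets):
    """Merge bibliographic records from several database exports and drop duplicates,
    keyed (as a simplifying assumption) on normalised title plus publication year."""
    seen, combined = set(), []
    for records in record_sets:
        for record in records:
            key = (record["title"].strip().lower(), record["year"])
            if key in seen:
                continue  # duplicate already held from another database
            seen.add(key)
            combined.append(record)
    return combined

# Illustrative use with two tiny, made-up exports:
ovid_records = [{"title": "Incident reporting in anaesthesia", "year": 1993}]
wos_records = [{"title": "Incident Reporting in Anaesthesia ", "year": 1993},
               {"title": "Feedback from incident monitoring", "year": 2001}]
print(len(combine_and_deduplicate(ovid_records, wos_records)))  # 2 unique records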
Further information and resources concerning the methods used within the review
are available within the appendices, specifically:
Appendix A: The systematic search algorithm used to query the bibliographic
databases searched.
Appendix B: The standardised review criteria used to extract information from
the articles included within the review.
Appendix C: Overview of the process of development and application of the
systematic search algorithm, including facet structure
Appendix D: Further details of the bibliographic databases searched including
content and capabilities.
Appendix E: Guidelines for inclusion and exclusion of relevant articles
employed during screening phases of the review process
Consultation with subject matter experts
A total of 18 research interviews were conducted with a sample of subject matter
experts on safety in health care and high risk industries. Figure 2.7 below provides a
breakdown of the key knowledge areas represented by the expert panel. The aim of
the interviews was to identify information describing the range of safety information
and feedback systems that are employed in other high-risk industries and various
areas of international health care and the factors that influence the success of these
processes in closing the reporting loop, improving operational safety and increasing
the safety awareness of front line personnel.
[Figure 2.7 (matrix): experts A to R mapped against key knowledge areas (incident reporting systems; safety-specific operations/research; Australian, UK and US health care), industrial domains (UK mining, UK maritime, UK civil aviation, US civil aviation, UK chemical and process, UK military aviation, UK rail, UK offshore, UK nuclear/power) and other areas.]
Figure 2.7: Matrix of coverage of key knowledge areas within expert panel
In accordance with established qualitative research practices (concerning purposive
sampling and data saturation, e.g. Locke, 2002), the sample of experts interviewed
developed as data collection progressed, in order to sample knowledge from key domain areas and to sound out the perspectives of individuals with direct involvement in a variety of incident reporting systems.
experts comprised 45-90 minute interviews and employed a standardised schedule
of topics (included within appendix F). Figure 2.8 below comprises a table of incident
reporting systems represented within the members of the expert panel on safety
feedback, along with the various health care and non-health care industrial domains
to which each belongs. In addition to the reporting systems depicted below, the
review team consulted with experts in UK and US health care risk management,
academics working in the safety sciences and established UK industrial safety
expertise from the Health and Safety Executive.
Acronym | Title | Domain
ASRS | Aviation Safety Reporting System | US Aviation
PSRS | Patient Safety Reporting System | US Health Care
IRLS | Incident Reporting and Learning System | UK Health Care
CHIRP | Confidential Human Factors Incident Reporting Programme | UK Civil Aviation
CHIRP | Confidential Hazardous Incident Reporting Programme | UK Maritime
BASIS | British Airways Safety Information System | UK Civil Aviation
– | Military Incident Reporting System | UK Military Aviation
CAP | Corrective Action Programme | UK Energy
CIRAS | Confidential Incident Reporting and Analysis System | UK Rail
AIMS | Australian Incident Monitoring System | AUS Health Care
PPMS | Personal Professional Monitoring System | AUS Health Care
Figure 2.8: Incident reporting systems represented in the expert panel
The methods used to elicit information and refine the data obtained from the interviews into a synthesised conceptual framework of the area of interest to the review were informed by two qualitative research approaches: Grounded Theory (Strauss & Corbin, 1990; Locke, 2002) and Template Analysis (King, 1998). The
process of analysis involved identifying conceptually coherent units of text within the
interview content and assigning codes to them representing their relevance to the
feedback topic area and emerging framework.
Through an informal process,
conceptual categories were then developed in an iterative fashion representing
individual requirements for effective safety feedback processes and the general
design characteristics of organisational systems and processes for learning from
reported incidents.
Research synthesis
Eventual synthesis of the information gained from both the subject matter experts
and the literature review was achieved through the development of a common
theoretical framework and set of requirements for effective feedback systems. This
output was developed iteratively during the course of the project and reviewed
periodically by the programme steering group which included subject matter experts
from both within and outside of health care. A generic model of organisational level
processes for learning from reported incidents was developed, along with possible
modes of feedback. These were applied within the literature review to map and
classify the features of published case examples of operational reporting systems in
health care. In this way, accepted best practice in the operation of incident reporting
and feedback systems was identified, described and linked to the available research
evidence base, where practicable.
The emerging framework was validated within an Expert Workshop (see Section 4)
involving key risk management personnel and patient safety stakeholders.
The
Expert Workshop focused upon feedback from reporting systems in the NHS and
included several focus group sessions designed to elicit discussion of individual
elements of the framework.
2.3 - Results from the literature searches
This section of the report considers the results of literature searches to identify
relevant research evidence and other sources of subject matter information relevant
to the review question. It also provides an overview of the key characteristics of the
literature selected for further analysis and basic descriptive information associated
with feedback mechanisms found to have been implemented for specific incident
monitoring initiatives reported in the international health care literature.
Included literature
Figure 2.9 below provides a breakdown of results of the systematic searches by
database of origin. The final version of the search was run on 12/09/05 and was
limited to articles published since 1980 in the English language. For full details of the
number of articles elicited by each item within the complex search algorithms run in
each database, the reader is referred to appendix A, which contains a breakdown of
individual strings within the search algorithm by volume of bibliographic records
elicited.
[Figure 2.9 (diagram): the full search strategy was run in the Ovid interface (EMBASE 867 records; MEDLINE 907; PsycINFO 244; IBSS 40; Ovid total 1564) and a simplified search strategy in the ISI Web of Science interface (SCIE, SSCI and A&HCI; total 517); preliminary and hand searches, health services documents, other grey literature and expert panel literature contributed a further 53 records; total records after de-duplication: 2002.]
Figure 2.9: Breakdown of bibliographic records obtained from sources included
within the systematic search strategy. The search was run on 12/09/05 and
limited to articles published in the English language.
Following application of the systematic search algorithm in several online
bibliographic databases, a total of 2002 article records were identified for inclusion,
following screening to remove duplicates. Two further successive screening phases
were then conducted to remove irrelevant articles, before in-depth information
extraction was performed for the remaining 193 articles, using standardised review
criteria.
Due to the nature of the topic area that forms the focus of the scoping review (i.e.
research into policy development and organisational processes associated with risk
management systems), there were few rigorous, quantitative empirical or
comparative studies of the effectiveness of alternative models within the search
results, with a large proportion of useful articles being of the ‘expert opinion’ or
‘reported case implementation’ type. This is to be expected for reviews of this type, which are likely to yield a large number of commentary articles and articles reporting local innovations (Mays et al., 2001). Subsequent screening and prioritisation of
articles for inclusion within the in-depth information extraction phase of the review
was therefore performed largely on the basis of article content.
Following compilation of the initial search results, several successive screening
phases were undertaken. The key phases in the screening and review process are
outlined below within figure 2.10, along with an indication of the number of articles
included/excluded at each phase, the main output from the review phases and steps
to ensure reliability. Further details of the inclusion/exclusion criteria used for the
selection of articles for later stages of the review are available in the form of
guidelines that were produced for the review process (included within appendix E).
Systematic search and review process:
Search phase 1: Systematic searches
Perform systematic searches using search algorithm detailed within appendix A.
Search result: 2081
Search phase 2: Hand searches
Addition of high priority articles identified from hand and preliminary searches, as well as important
substantial publications/reference works and key NHS policy documents (53 publications added).
Total search result (with duplicates): 2134
Search phase 3: Removal of duplicate records
De-duplicate overall search results: 132 duplicate records removed.
Total search result (without duplicates): 2002
Screening phase 1: First pass exclusion
Screening to remove irrelevant records based upon article titles and abstracts – 946 records removed.
Included records: 1056
Screening phase 2: Second pass exclusion
Screening based upon article titles and abstracts to exclude further articles with content considered to
be low priority or irrelevant according to defined criteria (see appendix E for outline of criteria) – 863
records removed.
Included records: 193
Review stage 1: Description, classification and prioritisation
All 193 records reviewed based upon full text available, using sections A-C of standardised review
criteria (included in appendix B) to determine study/content type, quality and level of relevance. 4-point
rating scale used to rate articles according to priority relative to the aims of the review.
High priority articles identified: 90
Output:
Results from this stage comprise two tables covering the full literature set (193 articles)
described, classified and rated according to relevance to the review aims (see appendices G
and H for details of the higher- and lower-relevance articles identified within the review).
Interrater reliability:
A second reviewer reviewed a random sub-set of 27 articles to check reliability of the value
scores assigned to each article. Interrater reliability was 78% with reviewers disagreeing on 6
out of 27 classifications, out of which 2 affected the inclusion/exclusion decision for a specific
article and these were resolved in conference.
Review stage 2: In-depth information extraction
Content of remaining articles considered in detail, using standardised review criteria in section D of
review form (see appendix B). 29 primary research articles, reporting 23 case implementations of
incident monitoring schemes with feedback were identified from the international health care literature.
These systems were classified according to an emerging conceptual framework for safety feedback.
The remaining secondary research articles, conceptual and grey literature, were reviewed as
background material to support the research question.
Identified primary research articles describing systems cases: 29
Output:
Description and analysis of 23 incident monitoring schemes, with safety feedback, from the
international health care literature. Reported in tables in the results and synthesis sections of
this current report.
Interrater reliability:
Second and third independent reviewers were enlisted to review a sub-set of articles (13 in
total) and confer upon the content and relevance of the information extracted concerning safety
feedback from incident monitoring. For the reported case implementations double-reviewed,
the reviewers conferred on the classification of the systems described using features of an
emerging framework to achieve a consensus.
Figure 2.10: Overview of key review phases for articles
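As a simple arithmetic check on the figures reported in figure 2.10 above (restating the counts given there, with the interrater agreement derived from the 6 disagreements in 27 double-reviewed classifications):

\[
2081 + 53 = 2134, \qquad 2134 - 132 = 2002, \qquad 2002 - 946 = 1056, \qquad 1056 - 863 = 193,
\]
\[
\text{interrater agreement} = \frac{27 - 6}{27} = \frac{21}{27} \approx 0.78 \approx 78\%.
\]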
The high-relevance rated articles identified following the first review stage were
scoped and summarised in terms of general content, domain of application, systems
described and study type.
Figure 2.11 below comprises a table detailing the
breakdown of content and article types for 90 publications that comprised the
principal literature sources upon which the main review findings are based. The complete tables of detailed information concerning the classification and description of literature identified through the review are included within appendix G (high-relevance articles) and appendix H (lower-relevance articles).
(Columns: content type – specific quality/reporting systems vs general safety/feedback content; study/article type – primary research vs secondary articles.)

Publication Type | Total | Specific quality/reporting systems | General safety/feedback content | Primary research | Secondary articles
Substantial publications (books, chapters, theses) | 12 | 0 | 12 | 1 | 11
Research reports | 8 | 0 | 8 | 3 | 5
Policy documents | 8 | 0 | 8 | 1 | 7
Grey literature (editorials, non-peer reviewed, etc.) | 3 | 1 | 2 | 1 | 2
Peer reviewed journal articles (by domain of origin) | 59 | 36 | 23 | 45 | 14
– Generic/multiple | 1 | 0 | 1 | 0 | 1
– Aviation | 3 | 3 | 0 | 3 | 0
– Chemical Process | 2 | 1 | 1 | 1 | 1
– Offshore Oil | 1 | 0 | 1 | 1 | 0
– Nuclear Power | 2 | 2 | 0 | 2 | 0
– Health Care | 49 | 29 | 20 | 37 | 12
– Other | 1 | 1 | 0 | 1 | 0
Totals | 90 | 37 | 53 | 51 | 39
Figure 2.11: Breakdown of high relevance literature sources by content and article
type
In content, the literature sources fell into two broad classes: 1) those that referred to
a specific quality or safety system, or case implementation, and 2) those that address
safety on a more general level, without reference to specific systems or scenarios. In
terms of study or article type, literature found to be relevant in content encompassed
both primary and secondary research types, including both empirical or observational
research (usually of a case study type) and expert opinion or commentary. Several
different publication types were included within the review, in addition to peer
reviewed journal articles, due to the research requirement to map out the available
information and scope as much of the published literature relating to the research
question as possible. Consequently, the review included several scholarly books,
research reports submitted to funding bodies and key health care policy documents.
The following section of this report details the findings from review of the policy
documents in this area, whilst section 3.3 of the results outlines the examples of
incident monitoring and feedback systems identified within the literature.
Safety feedback in health care policy guidance
The research detailed within this report comprises a scoping review that is
complementary to further empirical observations to describe the current status of
safety feedback systems within NHS trusts (undertaken in Part B of the overall PSRP
Feedback Programme). With this in mind, the following text seeks to summarise
published policy within health care, drawing upon influential US and UK patient safety
sources, to identify current guidance relevant to the issues and structures associated
with effective safety feedback from incident monitoring.
Several principal sources were reviewed to gain an understanding of general policy
in health care in this area, including publications by the Department of Health,
National Patient Safety Agency and NHS Litigation Authority. In order to gain an
understanding of the relevant background, two influential reports in this area were
reviewed: the US Institute of Medicine’s report To Err Is Human (Kohn, Corrigan &
Donaldson, 2000) and the UK Department of Health’s report An Organisation with a
Memory (Dept. Health, 2000). Additionally, a recent report by the National Audit
Office (NAO, 2005) that sought to appraise the success of the NHS in implementing
mechanisms to successfully learn from failures was also reviewed.
The main
findings from both the NAO study and conclusions from the Department of Health
report are considered elsewhere in this report.
The UK Department of Health/NPSA report Doing Less Harm (Dept. Health, 2001)
outlines the requirements for local incident management and reporting policies
applicable to all NHS organisations, both primary and secondary care. It includes
guidelines for the incident investigation and follow-up process, including the
development of improvement strategies, learning lessons and implementing and
monitoring systems changes.
It also addresses safety awareness feedback as
reporting of aggregated incident data to relevant local stakeholders, including
clinicians.
In terms of information handling and notification processes associated with
management of specific incidents reported to local systems, the document makes
several recommendations and figure 2.12 is reproduced below from the report to give
an overview of the process. Where appropriate, information on specific reported
incidents is ‘fast-tracked’ to relevant stakeholders, ensuring that rapid feedback is
made available concerning more serious and immediate system vulnerabilities.
Actions from single incident investigation can therefore be immediate, whereas
analysis of aggregated data takes place over time. Details of incidents resulting in
serious actual harm are subject to mandatory reporting requirements within a set
time-scale.
Fast track reporting means that the incident reporting system feeds
information relating to Serious Untoward Incidents (SUIs) directly and quickly to
relevant internal or external stakeholders, which may include a number of
organisations in addition to mandatory reporting to Strategic Health Authorities.
Further agencies that receive reports include: medical devices agencies, medicines
control agencies, manufacturers of devices and equipment, the Health and Safety
Executive (HSE) and social care organisations, amongst others.
Local incident
reporting processes are therefore required to feed back information at multiple levels
of organisation, both below and above the organisational level risk management
structures in which the incident reporting system resides.
Feedback to multiple
agencies/stakeholder bodies is also required, according to the various mandatory
and voluntary reporting notification requirements of internal and external agencies.
For all incidents, analysis of stakeholder reporting requirements is therefore
recommended to determine which stakeholders require information.
Figure 2.12: Local requirements for incident management, reporting,
analysis and learning (Dept Health, 2001)
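By way of illustration only, the notification logic described above might be sketched as a simple routing rule. The following sketch (in Python) uses hypothetical severity grades, which are assumptions made for illustration rather than values prescribed by the Department of Health (2001) guidance; the external stakeholders listed are those named in the text above.

from dataclasses import dataclass, field
from typing import List

# Hypothetical severity grades used only for this illustration.
FAST_TRACK_SEVERITIES = {"serious actual harm", "serious untoward incident"}

EXTERNAL_STAKEHOLDERS = [
    "Strategic Health Authority",      # mandatory reporting route for SUIs
    "medical devices agency",
    "medicines control agency",
    "Health and Safety Executive",
]

@dataclass
class IncidentReport:
    description: str
    severity: str                      # e.g. "near miss", "serious actual harm"
    notified: List[str] = field(default_factory=list)

def route_report(report: IncidentReport) -> List[str]:
    """Decide which stakeholders receive immediate ('fast-track') notification."""
    recipients = ["local risk management team"]     # all reports go here
    if report.severity in FAST_TRACK_SEVERITIES:
        # Serious incidents are fed directly and quickly to internal and
        # external stakeholders within the mandatory reporting timescale.
        recipients += ["executive board", "clinical governance committee"]
        recipients += EXTERNAL_STAKEHOLDERS
    report.notified = recipients
    return recipients

if __name__ == "__main__":
    r = IncidentReport("equipment failure during infusion", "serious actual harm")
    print(route_report(r))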
In addition to notification requirements, the Department of Health (2001) provides
guidelines regarding the feedback of corrective action to generate safety
improvements in local work systems.
Guidelines are included to ensure that
appropriate improvement strategies are developed where relevant.
The incident
investigation process should result in learning from incidents in the form of
recommendations and implemented improvement strategies to help prevent, or
minimise recurrences and reduce the risk of future harm. The investigative process
should therefore result in the drawing up of an improvement strategy (with prioritised
actions, responsibilities, timescales and strategies for measuring the effectiveness of
actions) and an implementation phase (involving tracking of progress relating to the
improvement strategy and tracking of the effectiveness of actions). The guidance
within the document suggests that improvement strategies should address large
(rather than small-scale) systems and should be designed to eliminate root rather
than immediate causes. Examples of recommended types of areas to address in
improvement strategies include: automation, improvements in communication
processes, standardisation, simplification of systems, timely delivery of
information, generation of standards and checklists and introduction of constraints
upon work processes.
Where raising awareness of safety issues in local work systems is concerned, the
guidance states that aggregate reviews of local incident information should be
undertaken and significant results communicated to local stakeholders, including
clinicians and managers, and also to clinical governance and risk management
committees and the executive board. Such publications should contain frequency
analyses, cross tabulation analyses and trend analyses for incident data in both
numerical and graphical formats. Learning lessons and implementing and monitoring
improvement strategies is identified as the key requirement, with learning taking
place from individual incidents, aggregate reviews and from wider experiences or
external sources, such as the NPSA:
“Lessons are learned from individual adverse patient incidents, from local aggregate
reviews and from wider experiences, including feedback from the National Patient
Safety Agency, other agencies/bodies, and benchmarking. Improvement strategies
aimed at reducing risk to future patients are implemented and monitored by the
organisation. Where appropriate, local staff learn lessons and change practice in
order to improve the safety and quality of care for patients.” (p10, Dept Health 2001)
The NHS Litigation Authority Guidelines for Clinical Risk Management (NHSLA,
2005) highlights critical incident reporting as an important procedure to enable
professional groups to improve poor performance, ensuring that adverse events are
identified, openly investigated, lessons are learned and promptly applied at a local
level. The guidance also addresses creating a reporting (blame-free) culture locally,
to encourage staff to report their incidents, errors and near misses. Lessons can
then be learnt on the basis of analysis, dissemination of findings and implementing
changes developed from reported issues.
The issue of visibility of implemented
improvements is raised, so that front line operators can see that the local risk
management system visibly closes the safety loop, reinforcing the perceived utility of
reporting:
“It is good practice for relevant information and feedback on reports to be circulated
at all management levels and to staff within the organisation. Such feedback should
include information on the actions taken to reduce or eliminate reported incidents and
changes in working practices.” (p55, NHSLA 2005)
Additionally, several areas of the standards detailed within the document relate
directly to reporting and learning from incidents, or feedback of information and
action into local work systems. Regarding the feedback issue, elements of Standard
1: Learning from Experience, and Standard 6: Implementation of Clinical Risk
Management are most relevant. The following points provide an overview of the
procedural aspects of effective reporting and feedback processes for local risk
management systems, as embodied in the standards:
• Clear pathways are in place for clinicians to raise issues concerning performance
and quality of care.
• A clear policy of who should be informed of incident details exists within the
organisation, including senior clinical and management staff, external groups, etc.
• There is a policy in place for the rapid follow-up of major incidents allowing
organisations to respond rapidly and positively, mitigating the risk of litigation and
bad publicity.
• Where actions are identified, responsibility for implementation is clearly defined.
• There is evidence that management acts upon reported incidents and the issues
raised are actually followed up through the development of improvement actions and
those actions are implemented in practice.
• There is a systematic process in place for learning from individual incidents and
changes are implemented and monitored to ensure sustained improvement.
Changes should be local and trust-wide where possible.
• Feedback mechanisms are in place for all levels of staff on clinical incident reports,
complaints and claims for their area, and Trust-wide.
• Feedback occurs to staff from the risk management group on actions taken to reduce
or eliminate the causes of reported incidents.
The NPSA’s “Seven Steps to Patient Safety” (NPSA, 2004) provides a checklist and
guidelines for implementing patient safety processes and meeting risk management
standards within local NHS organisations. Of particular interest are the steps which
address promoting reporting, learning safety lessons and implementing solutions.
The learning process or “circle of safety” outlined within the document comprises six
stages: 1) reporting, 2) analysis, 3) solution development, 4) implementation, 5) audit
and monitoring, and 6) feedback. This guidance was extended into primary care
towards the end of 2005.
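Purely as an illustrative aid, the six stages of the "circle of safety" can be written down as an ordered cycle. The sketch below (Python) assumes nothing beyond the stage names themselves and simply makes explicit that feedback returns the cycle to reporting.

from enum import Enum

class SafetyStage(Enum):
    # The six stages of the NPSA "circle of safety" (NPSA, 2004).
    REPORTING = 1
    ANALYSIS = 2
    SOLUTION_DEVELOPMENT = 3
    IMPLEMENTATION = 4
    AUDIT_AND_MONITORING = 5
    FEEDBACK = 6

def next_stage(stage: SafetyStage) -> SafetyStage:
    """Advance one step around the circle; feedback loops back to reporting."""
    members = list(SafetyStage)
    return members[(members.index(stage) + 1) % len(members)]

if __name__ == "__main__":
    stage = SafetyStage.REPORTING
    for _ in range(7):                 # one full cycle plus one further step
        print(stage.name)
        stage = next_stage(stage)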
Most relevant to step 2 within the aforementioned process, the NPSA has recently
committed effort to the implementation and support of Root Cause Analysis (RCA)
and Significant Event Audit (SEA) processes within local NHS organisations. Due to
the NPSA’s focus upon development of a National Reporting and Learning System
(NRLS), much of the published guidance relates to reporting processes on a national
level and integration of local risk management systems with the national database.
Feedback of information and action from incident reporting systems at a local
organisational level may productively employ common modes or channels for
feeding back both locally and nationally developed measures to improve
patient safety.
In terms of the feedback of corrective action into local work systems, the guidance
contained within NPSA (2004) describes processes for implementing solutions
developed through analyses of root causes. Once the problem has been understood
and potential changes identified, drawing upon front line staff’s knowledge of how
local systems are failing, potential solutions are developed
and risk-assessed using multi-disciplinary teams. System changes are piloted to
assess feasibility, utility and determine the issues involved in wider implementation.
The implementation phase itself then involves drawing up an action plan applicable
to local organisations and monitoring the effectiveness of the action plan; building a
memory of changes recommended and actions taken to implement those changes.
Evaluation should be considered as part of the solutions development process and
undertaken in terms of process, impact and uptake of system improvements. The
loop is then closed by internal and external follow-up of action plans and safety alerts
to ensure they have been acted upon and implemented in local clinical processes.
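The follow-up activities described above lend themselves to a simple record-keeping structure. The sketch below (Python) is a minimal illustration, using hypothetical fields and status values that are not drawn from the NPSA guidance, of how an organisation might build a memory of recommended changes and check whether each action has been implemented and evaluated, i.e. whether the loop has been closed.

from dataclasses import dataclass
from datetime import date
from typing import Optional, List

@dataclass
class ImprovementAction:
    # Hypothetical fields: one action arising from root cause analysis.
    description: str
    responsible: str
    due: date
    implemented_on: Optional[date] = None
    evaluated: bool = False           # process/impact/uptake evaluation done?

    def loop_closed(self) -> bool:
        """The loop is closed once the action is implemented and evaluated."""
        return self.implemented_on is not None and self.evaluated

def open_actions(plan: List[ImprovementAction]) -> List[ImprovementAction]:
    """Actions still awaiting internal or external follow-up."""
    return [a for a in plan if not a.loop_closed()]

if __name__ == "__main__":
    plan = [
        ImprovementAction("Pilot revised handover checklist", "ward manager",
                          date(2006, 3, 1), implemented_on=date(2006, 2, 20),
                          evaluated=True),
        ImprovementAction("Withdraw faulty infusion pumps", "clinical engineering",
                          date(2006, 1, 15)),
    ]
    print([a.description for a in open_actions(plan)])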
In the Seven Steps, four factors are highlighted as important in ensuring the success
of the safety feedback loop: 1) showing that changes make a difference as staff are
more likely to adopt a change if it is relevant to them and helps them do their job, 2)
demonstrating effective leadership through visibly supporting changes, 3) involving
staff and patients in the design of solutions to ensure they are workable, and 4)
giving feedback to staff and teams on how solutions are working and any changes to
the way they are delivered in order to ensure the sustainability of solutions. Potential
feedback mechanisms identified within the guidelines include: walkarounds, team
briefings and de-briefings, and leadership-sponsored initiatives designed to increase
the visibility of safety issues.
In discussion of the NPSA’s National Reporting and Learning System (NRLS), the
NPSA’s Seven Steps raise the issue of the importance of providing immediate
feedback or acknowledgement and relevant information to individual reporters that
voluntarily contribute their concerns and reports to a reporting system:
“The importance of feedback cannot be underestimated. When staff submit a report
to the NPSA, there will be instant acknowledgement and information sharing. This is
known as ‘bounceback’. For example, reporters may be guided to the NPSA website
where the latest information about lessons learned and safety solutions relevant to
their incident can be found. Over time, the NPSA will also make deidentified routine
reports available on the website; showing general trends, issues and solutions
generated.” (NPSA 2005: Step 4, p12)
In addition to action feedback mechanisms, NPSA guidance for NHS trusts’ local risk
management and reporting systems (NPSA, 2005) offers practical advice concerning
safety newsletter feedback. Some attention is also given to the practical issues
involved in effective dissemination of the information:
“The production of a newsletter or bulletin can be an effective way of ensuring the
communication of consistent and key patient safety information and messages. Staff
will only report incidents if they truly believe that something will change for the better.
A newsletter is a great way to feed information back to staff to demonstrate that their
efforts have contributed to positive change.
Many organizations have used this
method for a long time. However, the format needs to be reviewed regularly to keep
the idea fresh, and feedback should be obtained to ensure that presentation, content
and distribution are optimal… Consideration should be given to the method of
distributing the newsletter to ensure wide circulation. Many organizations have a
large number of sites and access to computers varies. Email distribution is quick and
effective but staff will need a means of being able to print the newsletter and not
everyone will have an email address. The needs of remote workers and contractors
will also need to be considered. A mixture of distribution methods may be necessary
to cover the whole organization.” (pp. 30-32, NPSA 2005)
The US Institute Of Medicine (IOM) report To Err Is Human advocates the
implementation of mechanisms of feedback for learning from error (Kohn, Corrigan & Donaldson, 2000) as one
of several principles for the design of safety systems in health care organisations.
The report promotes the view that systems for the analysis of errors and accidents
need to be implemented to inform the redesign of processes and ensure it does not
take place in merely an ad hoc fashion. Research and analysis are essential steps in
the effective redesign of systems and provide the information upon which to base
improvements. Feedback from operational experience is therefore viewed as a five-stage process that involves: 1) detailed reporting of events, 2) investigation of events,
3) development of recommendations for improvement, 4) implementation of those
improvements, and 5) tracking the changes to understand the impact upon safety.
The IOM report highlights the need for health care organisations to acquire the multidisciplinary knowledge and principles of safe design necessary to understand
vulnerabilities in work systems and the means of reducing them.
The IOM report provides a review of health care reporting systems, from which it was
concluded that good reporting systems sought to collect data on adverse events and
their causes in order to prevent future recurrences.
The IOM review highlights
feedback and dissemination of safety information as important outputs from
successful reporting systems, raising awareness of safety problems that have been
encountered elsewhere and creating an expectation that errors will be fixed and that
safety is important. The IOM review comments upon the factors known to influence
reporting rates, including the feedback provided to reporting communities. Reporters
need to believe that there is a benefit in reporting to the system and feedback
provides the evidence that the information that they take the time to submit is actually
used. Reporting without analysis or follow-up generally weakens the value of the
system by reducing support for constructive responses and contributing to the
perception of reporting systems as a waste of resources.
In summary, practical guidance, health care policy and assessed standards relevant
to safety feedback have described in some detail a pressing need for health care
organisations to develop systems capable of learning from failure. Furthermore, they
have highlighted the development and promotion of incident reporting and associated
systems as a principal vehicle towards achieving this aim.
Within the UK NHS
context, considerable effort has been committed to the development of a National
Reporting and Learning System (NRLS), yet equally important are local level incident
reporting and incident management processes. To this end, local requirements for
incident management, reporting and analysis have been specified (Dept Health,
2001), along with standards relating to local risk management systems (NHSLA,
2005), with some attention given to feedback processes, predominantly where the
development and implementation of changes in practice are concerned. The definition
of processes, in the form of flow charts or a series of improvement stages, is an
essential step in developing organisational systems for learning from failure. Process
models and broad aims for these processes, in the form of standards, represent only
the functional aspects of an organisational system, however. Feedback and learning
systems must be specified operationally, in terms of activities, mechanisms, resource
input, agencies, roles and responsibilities.
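To indicate what such an operational specification might look like, the following sketch (Python) records a single feedback activity together with its mechanism, resource input, owning agency, role and responsibility; all of the field values are invented examples rather than findings of the review.

from dataclasses import dataclass

@dataclass
class FeedbackActivitySpec:
    # Hypothetical fields illustrating an "operational" specification of one
    # feedback activity, beyond a purely functional process model.
    activity: str          # what is done, e.g. publish a monthly safety bulletin
    mechanism: str         # how it is delivered
    resource_input: str    # assumed time/budget commitment
    agency: str            # organisational body that owns the activity
    role: str              # named role carrying it out
    responsibility: str    # what that role is accountable for

example = FeedbackActivitySpec(
    activity="Publish monthly safety awareness bulletin",
    mechanism="Email distribution plus printed copies on ward noticeboards",
    resource_input="0.1 WTE risk management administrator time per month",
    agency="Trust risk management committee",
    role="Clinical risk manager",
    responsibility="Content accuracy and circulation to all clinical areas",
)

if __name__ == "__main__":
    print(example)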
Implemented feedback processes within incident monitoring schemes
From the final set of 90 high relevance articles included in the review, 23 reported
implementations of incident monitoring and feedback systems were identified within
the literature from international health care settings. These 23 operational systems
(based upon information from 29 primary articles) represent the research base
identified by the review for safety feedback in organisational systems.
Figure 2.13 below provides a summary of the information and action feedback
processes implemented for each of the 23 reporting systems identified within the
review. These incident reporting systems represent applications in varied health care
settings and differing types of quality improvement initiatives applied at various levels
within institutional and broader multi-institutional safety improvement systems.
Within the table below, summary information from the case reports reviewed is
presented to illustrate the varied mechanisms and processes adopted by clinical
incident reporting systems to disseminate information and conduct follow-up actions
based upon the results of analysing and investigating the reports of safety incidents
and near misses.
Ahluwalia et al. (2005) – Critical Incident Reporting System:
• Regular multidisciplinary departmental meetings to discuss lessons learnt from reporting
• Regular monthly department safety bulletin that includes a fixed reminder of the unit’s agreed trigger list, summary of previous month’s critical incident reports, data on admissions and activity levels and a clinical lesson or guideline of the month
• Individual email bulletins, paper-based newsletters or bulletins posted on department website
• Targeted campaigns aimed at a specific incident or pattern of incidents
• Investigation of reported incidents and identified safety issues through Root Cause Analysis (retrospective) and Failure Modes and Effects Analysis (prospective) methods to identify system failures for remedial action

Amoore et al. (2002) – Feedback notes for incidents involving medical devices:
• Feedback notes system developed as an educational tool to provide information on incident, equipment involved, causes and triggers uncovered by investigation, lessons learnt and positive actions taken by staff to minimise adverse consequences
• Feedback notes issued to ward link nurses, used in teaching sessions and information disseminated to other hospital departments through annual nurse clinical update sessions and hospital intranet
• Feedback notes highlight positive actions taken by staff and provide anonymous information that allows staff to learn from why an incident occurred, in a supportive manner, whilst promoting a culture that supports learning

Beasley et al. (2004) – Primary care medical error reporting system:
• Incident reporting system should result in a database of solutions as well as errors, so that clinicians can search for potential solutions for specific problems
• Emphasis is placed upon discovering and disseminating best practices through presenting aggregated error data to clinic administrators and care givers so that they can implement useful error or hazard prevention strategies
• Educational information should be provided to patients so that they understand their role in helping to prevent errors but not information on actual errors, which may harm physician-patient interactions – e.g. education as to what type of information patients need to provide care givers with
• Information provided to clinicians and administrators must go beyond statistics in order to be useful and should present aggregated, non-identifiable data grouped by clusters and trends, e.g. relating to the control of hazards that lead up to errors
• Data provided by the system should not be used punitively or for comparisons between individuals or institutions as this will discourage open reporting
• Error and solution database is reviewed by a professionally diverse entity with varied expertise comprising peers, systems experts, human factors engineers, patient representatives, risk management experts, safety professionals and healthcare funders
• Accountability is at the healthcare system level and is focused upon the implementation of error-reduction programmes in the problem areas identified by the reporting system
• Report submitters, upon reporting to the system, receive reminders, composite data or commentary to encourage a two-way flow of information
• Weekly or monthly newsletters that identify recent errors, their associated hazards and hazard control strategies

Bolsin et al. (2005) and Bolsin et al. (2005) – PDA-based clinician-led reporting system:
• Automatic feedback of all reported incidents to local organisational quality managers and morbidity and mortality coordinators
• Automated analysis and secure transmission of performance data back to reporting clinician, who has personal access to tracked data
• Professional groups, colleges and specialist associations can apply for access to data at suitable levels of aggregation for use in monitoring training performance

Gandhi et al. (2005) – Safety Reporting System:
• Feedback of issue progress by email to individual reporter and direct feedback of follow-up actions taken to original reporter to “close the loop”
• Monthly article in staff bulletin to highlight safety issues
• Safety improvements published quarterly in hospital newsletter
• Monthly email circulated to front line staff (anyone that reported) with summary of improvements made
• Weekly reports of overdue follow-up on safety reports for nurse senior directors
• Monthly patient safety leadership walkround visits
• Quarterly report to higher levels of organisation including summary of actions taken to hospital leadership
• Production of safety improvement actions and recommendations for follow-up, including prioritisation of opportunities and actions, assigning responsibility and accountability, and implementing the action plan

Holzmueller et al. (2005); Lubomski et al. (2004) & Wu et al. (2002) – Intensive Care Unit Safety Reporting System (ICUSRS):
• Monthly report sent to each participating ICU including quantitative data concerning the number of reports submitted in previous month compared with past year and individualised site data compared with data from across all study sites
• Monthly feedback is used by ICU site team to identify local areas for improvement and each site receives text descriptions and system factors for all the incidents reported at the site, plus data on total events to date, types of events reported and types of providers reporting for use at morbidity and mortality conferences, patient safety committees, hospital risk management and hospital quality improvement processes
• Principal investigators at each site have the capability to access and query data submitted from their site for local analysis and to generate reports for use in quality improvement activities using data analysis wizard
• Information sharing with front line staff is promoted through use of staff bulletin board postings which accompany each monthly newsletter and which contain a summary of a safety issue raised, textual case examples, system failures identified and specific actions that may be taken to address them
• Quarterly newsletter is distributed to staff at all study sites containing details of the project’s activities, focus upon a common safety problem in each issue and includes tips for improving safety in order to encourage learning across institutions

Joshi et al. (2002) – Web-based incident reporting and analysis system:
• Systems solutions and improvements developed in a timely and efficient manner to address process and systems gaps identified through analysis of how, when, why and where incidents occurred
• IT system streamlines the data management and risk identification process allowing more time for the design and implementation of improvement initiatives – follow-up actions are tracked by the system
• User-controllable ad-hoc querying of incident database and generation of reports
• Educational feedback to enhance safety awareness through use of case studies outlining hospital system successes and failures
• Continuous communication with hospital staff through regular newsletters and department meetings to communicate patient safety data and what was being done with the data
• Feedback of aggregate data to staff as well as specific case follow-up
• Established improvement processes define accountability for closing open incident reports, using decentralized management style

Le Duff et al. (2005) – Incident Monitoring and Quality Improvement Process:
• Mechanisms for quality managers to validate reports and to track the completion of specific actions associated with a validated issue are built into the IT system that handles the report
• Incidents and successful actions taken to resolve them are archived within the database
• Time taken to resolve outstanding quality issues is used as a metric to indicate level of risk to the organisation

Nakajima et al. (2005) & Takeda et al. (2005) – Web-based/online incident reporting system:
• Implementation of urgent improvement actions for high risk issues within a predetermined timescale
• Patient safety seminars three times a year to inform all staff of findings from reporting systems and support a hospital-wide safety culture
• Targeted staff education programmes linked to professional accreditation scheme
• Ward rounds by peers and safety committee members to check safety improvements have been implemented
• Safety feedback information is made available to staff through hospital intranet, clinical risk manager’s monthly meetings (cascade) and mailing list
• Paper-based and web-based newsletter alerts with topics chosen to coincide with media coverage of serious events and content includes commentary by hospital experts on specific issues

Oulton (1981) – Incident reporting system:
• Data from incident reports are used to investigate specific incidents in individual hospitals and contribute to summary profiles of types of incidents that take place across all institutions
• Benchmark data relating to frequency of incident types occurring is produced for comparison across institutions of similar bed size and comparison against system-wide averages to help identify problem areas
• Incident reports are reviewed by hospital risk manager and administrator in charge of the unit, who can take immediate action
• Incident reports are also passed to safety and quality assurance committee for in-depth analysis and corrective action against safety problems
• Corrective actions include the development of guidelines for hospital practice and governing the use of equipment, as well as nursing policy and changes to the equipment used in certain procedures
• Assessment and monitoring of the effectiveness of corrective measures is undertaken and the safety process reviewed where there is evidence of repeated incidents
• Further investigations of specific problem areas (identified through above average incident rates) are undertaken in specific institutions
• Quarterly incident report summary generated for all participating hospitals

Parke (2003) – Critical Incident Reporting System:
• System owner within hospital department screens received incident reports regularly to identify those requiring immediate action
• Safety actions are discussed at monthly multi-disciplinary clinical governance meetings
• Monthly newsletter details all the incidents of the month, by category, in abbreviated narrative form
• Extended presentation regarding operation of the reporting system and reported incidents is made annually to the local anaesthetic department

Peshek et al. (2004) – Voice mail based medication error reporting system:
• Disciplinary action and mandatory re-education undertaken where considered necessary in accordance with a fair policy to ensure required standards of compliance and competence
• Summary reports of reported errors with causes presented to all department managers and at unit meetings as an educational tool
• Medication safety coordinators discuss implications for improving safety with individual departments on a weekly basis and ad hoc work groups are formed to troubleshoot problems
• Local on-site managers take immediate action to ensure patient safety and notify department/unit management and risk management upon occurrence of incidents that resulted in actual harm
• Specific follow-up process and actions implemented according to incident policy and severity classification scheme linked to specific levels of action
• Information from reported errors is used to implement improvements in the medication administration process, to support pharmacist education programmes and to support inclusion of safe practices in the design of new computerised physician order entry systems

Piotrowski et al. (2002) – Safety Case Management Committee process:
• Recommendations for monitoring new systems or processes to ensure that once implemented they continue to be effective
• Safety case management committee disseminates lessons learnt documents and action plans to higher level authorising bodies (clinical executive board), a performance improvement committee and various other clinical committees and specialist bodies
• Changes implemented in clinical systems, processes or policy revisions
• Patient or clinician education campaigns
• Monitoring of clinical systems or processes
• Clinical communications produced
• Staffing adjustment or supervision

Poniatowski et al. (2005) – Patient Safety Net (PSN) occurrence reporting system:
• Direct action taken by nursing managers to ensure rapid patient safety improvement in response to reported issues, without the need for lengthy, complicated, unresponsive and under-resourced safety committee processes that tend to result in isolated unit improvements or policy level changes that don’t visibly relate to the original incident
• On-line event reporting tool allows nurses to share experiences of how they have improved patient safety in their units based upon directly actionable data provided by reporting system, ensuring widespread improvements across individual organisations
• Nurse managers receive immediate, real-time notification of safety events on their units
• Managers can run their own local incident trend analysis reports

Runciman et al. (2002); Yong et al. (2003) & Beckmann et al. (1993) – Australian Incident Monitoring System (AIMS) and associated patient safety initiatives:
• Identification of problems and contributory factors through incident monitoring for further investigation towards development and implementation of appropriate interventions to improve quality of care
• Preventive strategies devised on the basis of potential severity of impact and frequency of occurrence of contributory factors, as well as follow-up risk-benefit analysis to justify the resources used
• Analyses of 2000 reported incidents in anaesthesia were published in 30 articles by 1992
• Development of national standards and guidelines governing aspects of clinical practice, including equipment use and further monitoring of specific issues
• Use of reported incident data to clarify and support problems identified with clinical equipment, leading to recall and modification of affected devices
• De-identified data comparable across institutions and units
• AIMS 2 web system employs cue-based retrieval that allows the user to rapidly retrieve a broad range of relevant incident reports
• System combines data from a number of sources including root cause analyses and other patient safety investigative processes
• Newsletters, publications and advice at national level; feedback of improvement actions and evidence of action occurs at local level

Schaubhut et al. (2005) – Medication Error Reporting system:
• Brief summary information sheet “Hot Spots” is produced each month for nursing units containing information on trends in incident data along with opportunities for improvement
• Improvement of medication administration policies to reduce causes of error undertaken based upon issues raised through reporting system
• Immediate response to occurrence of a medication administration incident, when required
• Mandatory medication error education programme was implemented for nursing and medical staff using examples of different incident types reported to hospital system
• Personal study modules with written tests on local medication administration procedures are given in cases where training needs are identified from incidents

Schneider et al. (1994) – Severity-indexed medication error reporting system:
• Monthly reports from the incident database are generated to help identify problem areas according to severity rated reports and categorisation according to unit of origin, error type, system breakdown and drug category
• Various forums and committees are responsible for reviewing medication error data to identify opportunities for improving patient care both within and across departments, including a quality assurance committee which produces recommendations for physician’s practice, a pharmacy committee that addresses hospital-wide medications policy and a medication error task force for specific issues
• Interventions are implemented on the basis of quality improvement opportunities identified by the committees

Silver (1999) – Incident Review Management Process:
• Disciplinary action taken against staff where necessary, including: termination, counselling, suspension or written reprimand
• Staff safety issues are resolved through training or changes in shifts, tasks or assignments
• Where clinical interventions are required, system changes include: development of behaviour change plans, new goal determinations or medications review
• Facility repairs or restructuring is undertaken in response to reported health and safety issues relating to building maintenance and design
• Actions for operations management include protocol changes, new monitoring forms and new committee formation
• Further monitoring/baseline data gathering to support systems improvement decisions
• Communication reports are circulated immediately following an incident to notify all relevant departments of the basic details of the event and the corrective actions that have been taken

Suresh et al. (2004) – Medical error reporting system:
• Specialty-based centralised incident reporting, analysis and feedback processes to promote multidisciplinary, multi-institutional collaborative learning using data to support systems improvement
• Periodic structured feedback of collected errors to participating institutions and through biannual meetings
• Data fed into numerous local patient safety improvement projects
• Multidisciplinary teams prepared poster presentations using case studies of patient safety issues within their units for sharing at collaborative meetings

Tighe et al. (2005) – Incident Reporting System:
• Monthly interrogation of trust incident database and report sent to A&E clinical risk management committee, which discusses reported incidents once a month and assigns actions
• Immediate response made to serious untoward incidents, which are also escalated for review in a separate process at A&E risk management meeting
• Newsletters are circulated to staff including summary of learning points and highlighting new policies introduced as a result of safety issue analysis

Webster et al. (2002) – Ward medication error reporting scheme:
• Systems improvements including changes to working practices and equipment use implemented based upon cluster analysis of reported incidents

Westfall et al. (2004) – Web-based patient safety reporting system:
• Educational feedback to rural primary care practices based upon collection and analysis of medical error data
• Implementation of interventions to improve patient safety: Principles for Process Improvement (PPIs) were developed for each identified safety area and focused upon processes which were amenable to assessment and improvement
• Follow-up interviews scheduled with reporting staff, where necessary, to elicit further information (provides evidence of response to reporter)
• Clinical steering group met in order to review data, direct additional study and create policy; learning groups set up, comprising personnel from practices and project staff, in order to develop recommendations to tackle specific safety issues identified through reporting system data
• For serious threats to patient safety, electronic and hard copy “alerts” were issued to all participants, briefly describing the event and recommendations developed to reduce the potential for that type of event
• Periodic newsletter issued to practices
• Practice-specific (individualised) recommendations were made for specific patient safety issues

Wilf-Miron et al. (2005) – Incident Reporting System:
• Telephone hotline for reporting incidents and near misses, providing real-time dialogue with reporter to provide support, elaborate upon reported information and provide incident debriefing by professionals to ensure lessons are learnt on an individual level
• Targeted training programs implemented on the basis of analysis of single and multiple events
• Alteration of working practices and introduction of error reducing measures to ensure organisational learning based upon investigation of single incidents and analysis of data from multiple incidents
• Manual on error prevention using practical examples of clinical adverse events

Figure 2.13: Summary of the information and action feedback processes implemented within 23 operational health care reporting systems identified within the review
Further discussion of the reporting systems outlined above and the type of feedback
implemented for each may be found within section 4.3 of this current report, within
the research synthesis and discussion that follows.
2.4 - Synthesis and discussion of principle review findings
The following sections of this report provide a synthesis of the information gained
from the review activities in an attempt to produce a coherent and practical overview
of safety feedback from incident reporting. The main research findings are discussed
in terms of their implications for the development of practical requirements for safety
feedback systems (section 4.1) and a framework for integrated reporting and learning
processes at an organisational level (section 4.2). Research relating to key features
of the framework is then considered in section 4.3, followed by discussion of the
conclusions from an expert workshop on feedback in healthcare (section 4.4), which
considered the framework from the perspective of practical application in UK
healthcare systems. Finally, the limitations of the current review and possible future
directions for research are considered in section 4.5.
Requirements for effective feedback systems based upon expert knowledge
Through informal qualitative analysis of information gained through interviews with
subject matter experts on safety management and incident reporting, a series of
preliminary requirements for effective safety feedback systems was defined. The
requirements emerged through conceptual refinement of various categories of
information concerning operational experience with several safety management
systems implemented in high-risk industries and international health care systems.
Figure 2.14 below provides an overview of 15 requirements for effective feedback
from incident monitoring systems. For further reference, appendix I includes more
information on each requirement along with illustrative quotes from transcripts of the
expert panel interviews.
The requirements form a set of criteria, based upon
formulations derived from subject matter expertise, which represent what may be
considered to be mature or benchmark standard feedback systems for incident
reporting schemes. The requirements were also used as important guiding principles
in the development of a model of organisational level safety feedback and learning
processes reported elsewhere in this report.
Requirement 01 – Feedback at multiple levels of the organisation: Feedback or control loops should operate at multiple levels of the organisation and broader regulatory levels for effective aggregation of incident data across individual teams, units or sub-departments. They should also operate on a higher supra-organisational level, across individual organisations or installations that share similar functions (such as the individual health care trusts that make up the NHS). This allows lessons learnt in one particular context to be applied as broadly as possible in as many similar localities as possible. In this way, a single organisation might experience what is a rare incident, yet all similar organisations may benefit and compensate for the risk of a repeated occurrence elsewhere.

Requirement 02 – Appropriateness of mode of delivery or channel for feedback: Feedback should utilise a variety of modes, formats or channels to increase the awareness of as wide an audience as possible. Feedback should also adopt an appropriate mode of delivery for the specific working patterns of the target audience and reporting community. Shift-workers may require home-mailing, for example. In other contexts, email bulletins, workplace leaflets, bulletin board postings, team briefings or safety newsletter publications may provide the most convenient mode of delivery of critical safety information.

Requirement 03 – Relevance of content to local work place and systems: The content of safety information fed back should be targeted to individual work system contexts so that operators receive only what is necessary and relevant to their operations, in the appropriate working context in which it applies. Timing of feedback should be such as to allow rapid application of new knowledge on safety issues within the operational tasks to which it directly applies. Feedback should be suitable and meaningful within the local context, with high level guidelines and policy being directly translatable into specific actions and behaviour on the local level.

Requirement 04 – Integration of feedback within the design of safety information systems: Consideration of the requirement for effective feedback processes should be incorporated within the development and design of safety information and incident reporting and learning systems. The capability for useful feedback functions should be embedded within the design of risk management IT systems and incident databases, in addition to reporting and analysis functions, so that the reporting community can access or generate customised reports to support local quality improvement activities.

Requirement 05 – Control of feedback and sensitivity to information requirements of different user groups: Feedback needs to be planned and controlled. Careful consideration needs to be given to policy concerning how information, especially concerning safety incidents, will be presented to external and specific audiences, including what interpretation or message will accompany the data. The different information requirements of specific user groups, such as front-line operators, safety leads and analysts, must be considered.

Requirement 06 – Empowering front line staff to take responsibility for improving safety in local work systems: Effective feedback should support local operators and front line staff in their safety review and quality improvement activities, whilst illustrating how they can take responsibility and use initiative for maintaining or improving operational safety in their local working environment. Channels, mechanisms and forums should be provided, through which front line staff can respond to feedback and apply their specific operational expertise to reduce recurrent incident types.

Requirement 07 – Capability for rapid feedback cycles and immediate comprehension of risks: The feedback loop, or a rapid response process, should complete quickly for immediate threats to safety, even if only to offer temporary solutions/workarounds or raise the profile of an issue in staff’s awareness until a more detailed investigative process can be completed. The feedback or learning process must recognize that the organization is operating at risk between the time when a safety issue is detected and remedial or preventive measures can be introduced which address the problem. Immediate action in response to an incident may involve notification of front-line staff, issue of bulletins/alerts or withdrawal of equipment, for example. Communication of an “unsolvable” safety issue to the reporting community can often prompt further reports and suggestions for solutions.

Requirement 08 – Direct feedback to reporters and key issue stakeholders: Feedback should be provided to individual reporters and other stakeholders at varying stages of the report-analyse-act cycle. Feedback and dialogue with the original reporters is important immediately following an incident and especially in the interim between the issue being raised and actions completed. This fosters a reporting culture as people can see their report has been instrumental or acted upon and the interest of those affected by the incident is sustained. The feedback also serves as a reminder to the reporting community as to what incidents are reportable.

Requirement 09 – Feedback processes are established, continuous, clearly defined and commonly understood: Safety feedback channels and processes are well-established or mature, and as such are capable of operating in a repeatable and permanent manner to continually diagnose and rectify safety problems. Clear definition of process steps, roles and responsible organisational bodies ensures that the safety improvement process is accepted, commonly understood and proactive, in contrast to temporary investigative structures that are convened to identify problems in hindsight, following a serious incident.

Requirement 10 – Integration of safety feedback within working routines of front line staff: Becoming aware of up-to-date safety information is a formal requirement of the work role and feedback is designed to be minimally disruptive to productive work tasks. Examples of this include pilots reading safety bulletins as part of their pre-flight check lists and nuclear power plant maintenance personnel receiving safety bulletins through the on-line workflow management systems that they use every day to receive and allocate tasks. In addition, behaviours such as checking safety bulletin boards are incorporated into routine daily practices and time is allocated for safety awareness activities. Recent, relevant incidents are discussed at quality reviews and pre-task briefings.

Requirement 11 – Improvements are visibly fed back to local work systems from safety monitoring programmes: It is necessary to demonstrate the impact of reporting on improving local work systems to encourage future reporting to the system. The visibility of useful feedback output from the system, as safety information and actions that result in actual changes, challenges the “black hole” perception of incident reporting schemes as “filing cabinets” for information that is never used. This allows busy professionals to justify the efforts they make in reporting their errors, near misses and incidents.

Requirement 12 – Front-line personnel consider the source and content of feedback to be credible: Trust amongst the reporting community exists in the quality, accuracy and objectivity of data input, analysed and output by the safety system. This may require the use of unbiased independent agencies to be responsible for aspects of the reporting system’s functions, if the reporting community is to openly report to the system in the future and act upon the recommendations that are produced. Front line staff must trust in the commitment of other areas of the organization and its leadership to the goal of operational safety if they are to accept the organizational changes and safety initiatives that are fed back.

Requirement 13 – Feedback preserves confidentiality and fosters trust between reporters and policy developers: Reporters do not expect any negative personal outcomes or risks to their own well-being from reporting to the system, including the perception of wasted time and effort. There are clear policies and guidelines concerning an appropriate level of confidentiality and de-identification built into the reporting and analysis system, and fed back information is not personally identifiable, whilst preserving sufficient references to the original work systems context in which the issue occurred so as to be useful.

Requirement 14 – Visible senior level support for systems improvement and safety initiatives: Safety actions are visibly sponsored and supported at a senior level of the organization. Safety actions fed back to the local level are followed up and management visibly drives this process. Leadership of safety issues reinforces the emphasis placed upon commitment to shared responsibility for high standards of operational safety alongside other productivity goals.

Requirement 15 – Double-loop learning to improve the effectiveness of the organisation’s safety feedback process: The safety improvement process or control loop is itself subject to monitoring and evaluation in order that the system may be developed to better detect and mitigate vulnerabilities. An important prerequisite for this capability is the ability to monitor the success of implemented system changes that result from the incident reporting and analysis process and to build a memory within the organisation of effective methods of response to specific types of reported safety issues.

Figure 2.14: Requirements for effective feedback from incident monitoring at an organisational level based upon consultation with a panel of subject-matter experts
In order to validate the requirements outlined above, a selection of the high relevance
literature from the review was assessed in terms of support for the concepts and
assertions encapsulated within the requirements.
The resulting matrix table is
reproduced within appendix J and illustrates synergy between expert-derived
requirements and published research evidence.
An important structural requirement regarding effective feedback systems for
operational safety issues is feedback at multiple levels of the organisation. Figure
2.15 below depicts the multiple, recursive feedback loops that are implemented or
exist within various high-risk and transport sectors.
Feedback/learning loops depicted in figure 2.15 (each operating upon local work systems, with learning from safety incidents becoming increasingly widespread at higher levels):
• Supra-organisational level – mechanism: regulatory/national level policy; changes: lessons from experience in one institution applied across all similar organisations
• Organisational level – mechanism: local organisation risk management; changes: lessons from experience in one work setting applied throughout organisation
• Department/team level – mechanism: internal safety/quality review; changes: lessons learnt from performance monitoring applied to improve local working practices
• Individual level – mechanism: incident debriefing, staff training and safety information publications; changes: risk awareness and behaviour
Figure 2.15: Recursive feedback or learning loops operating at multiple levels of analysis within embedded organisational systems
The information that flows within each feedback loop originates at the lowest level in
the day-to-day performance of local work systems, including any safety events or
near misses. Through local level debriefing and information dissemination, learning
can occur to change the risk awareness and behaviour of individuals. Information on
local operational performance is aggregated on successively higher levels of
organisation and applied in increasingly widespread safety improvement activities
that expand in terms of the scope of individual work systems and local operational
scenarios affected.
At the highest level of feedback, the supra-organisational or
regulatory control loops allow information from a broad range of localised work
systems to be aggregated and action to be taken to improve safety on a national or
industry-wide policy level.
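A hedged sketch of this aggregation is given below (Python); the trust, unit and incident-type values are invented for illustration. Incident counts originating in local work systems are rolled up to organisational and then supra-organisational level, which is where relatively rare event types become visible in sufficient numbers.

from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical incident records: (trust, unit, incident_type) tuples.
REPORTS: List[Tuple[str, str, str]] = [
    ("Trust A", "ICU", "medication error"),
    ("Trust A", "A&E", "patient fall"),
    ("Trust B", "ICU", "medication error"),
    ("Trust C", "Theatre", "equipment failure"),   # rare type: one per trust
    ("Trust B", "Theatre", "equipment failure"),
]

def aggregate(level: str) -> Dict[tuple, int]:
    """Roll incident counts up to 'unit', 'organisation' or 'national' level."""
    keyed = {
        "unit": lambda r: (r[0], r[1], r[2]),     # department/team loop
        "organisation": lambda r: (r[0], r[2]),   # trust-level risk management
        "national": lambda r: (r[2],),            # supra-organisational loop
    }[level]
    return dict(Counter(keyed(r) for r in REPORTS))

if __name__ == "__main__":
    # A pattern barely visible within any single trust emerges at national level.
    print(aggregate("organisation"))
    print(aggregate("national"))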
An advantage of aggregating data across institutions/installations onto higher supra-organisational levels is the possibility of detecting rarely occurring safety incidents
and events that are indicative of uncommonly manifested, yet ever-present
vulnerabilities in work systems. Additionally, highly aggregated incident data sets
built through broad contribution from a number of organisations increase the
number of observations available to statistical or other types of analyses, increasing
the opportunity for robust and accurate conclusions to be drawn. This is useful in the
case of relatively infrequent types of system failures and means that high level,
centralised systems have the potential to offer important information regarding
operational risk to individual organisations and departments, that might not
experience sufficient numbers of incidents alone to develop any comprehensive
insight.
Multi-level feedback loops ensure that only one or a few organisations need experience
the ill effects of a specific (and potentially rare) safety deficiency in order for all similar
organisations to benefit and take preventive measures against repeated or related
incidents.
It should be noted, however, that the effectiveness of high, supra-
organisational feedback loops for the regulation of safety performance at
organisational level relies upon one important requirement. There must be functional
similarity in the individual organisations contributing data and standardisation in
operational level work systems across different sites and organisations, in order that
high level policy and recommendations are feasible and meaningful in local contexts.
Modelling organisational systems for learning from incidents
The information of interest to the review, namely that concerning effective safety
feedback, was found to be closely tied to practical knowledge concerning the
operation of incident reporting systems in various sectors.
From a research
perspective, this meant that much relevant information could only be gained through
direct contact with subject matter experts and through review of a research literature
that comprised almost exclusively non-experimental, case-study type reports of local
systems implementations. Ultimately, the knowledge gained from multiple sources
required effective integration or synthesis, which was undertaken through the
development of a unifying conceptual framework or model-driven approach. It is
hoped that this approach aids in the interpretation and practical communication of
best practices highlighted from the review for the incident monitoring and feedback
processes of local health care organisations. In addition, the figures and models
contained within this section of the report provide a convenient means of conveying a
large amount of complex information concerning the key design issues and structural
characteristics of effective safety feedback systems for high risk operations.
In terms of a simple overview, figure 2.16 below depicts the main structural features
and information flows associated with incident-based safety feedback to operational
level work systems. Critically, the learning and feedback processes encompass both
local level organisations and supra-organisational level regulatory processes; in
figure 2.16 represented as local health care organisations or NHS trusts and higher
level national or centralised agencies, such as those responsible for patient safety
functions.
Figure 2.16 depicts two linked levels. Within local health care organisations, operational level clinical work systems (care providers and patients) report incidents to an incident reporting, analysis and feedback system, which monitors and investigates reports and analyses data, learns lessons from single and multiple local incidents, reports to higher levels and external agencies, and develops and implements local systems improvements; local systems improvements and safety awareness information are fed back to the work systems. At national level, aggregated data are reported to a centralised patient safety agency, which analyses trends in aggregated data, shares lessons across multiple local organisations, monitors international information and cross-industry comparisons, and develops national level policy and solutions; national alerts and policy are fed back to local organisations.
Figure 2.16: Overview of reporting and feedback processes spanning both the organisational and supra-organisational level
Key to understanding the multi-level approach is to appreciate that data concerning
current levels of operational safety are aggregated from multiple local work systems
at the organisational level and aggregated once again across multiple organisations
at a national level. Conversely, safety information and actions, in the form of policy, are fed down from national level regulators to multiple local organisations, where they are implemented within multiple local operational settings.
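Purely as an illustrative sketch (the function name and data shapes here are hypothetical, not taken from the report), the same aggregation step can be applied at each level of this multi-level structure:

```python
from collections import Counter

def aggregate(counts_by_unit):
    """Merge per-unit incident tallies into a single higher-level tally.

    `counts_by_unit` maps a unit name (e.g. a ward, or a trust) to a Counter
    of incident types reported by that unit.
    """
    total = Counter()
    for counts in counts_by_unit.values():
        total.update(counts)
    return total

# The same step applies at each level, e.g.:
# trust_total = aggregate(ward_counts)
# national_total = aggregate(all_trust_totals)
```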
As has been stated previously within this report, considerable recent effort in the United Kingdom has been expended on establishing a centralised National Reporting and Learning System (NRLS) for the NHS, to analyse aggregated incident data sets and feed back patient safety improvements to local trust level. The findings
and recommendations from this current review are aimed at safety control processes
at the local organisational level. With this aim in mind, the following discussion of the
safety feedback process and a framework for actionable systems is offered for
consideration at the level of local NHS trusts.
The framework outlined in the sections that follow was the result of an iterative development process within the review. As an illustration of this process, an early version of the framework, with accompanying explanation, may be found within appendix K, together with commentary and an initial exploratory mapping of processes within the model to the supporting research literature.
The safety feedback loop
An early conceptual aim for the feedback review was to attempt to define what was
meant by feedback in terms of the control loop that governs continued operational
safety in organisational systems. In response to early consultations with subject
matter experts concerning the process of report handling, incident investigation and
safety improvement, a functional definition of the feedback cycle for safety
information systems was developed.
The operation and functional sequence of
stages within the feedback cycle were defined drawing upon expertise developed in a
range of domains that had an established history of successful safety management
and regulation, such as civil aviation. The resulting safety feedback or control loop is
depicted within figure 2.17 below and represents a generic model of continuous
improvement processes that are implemented, in one form or another, within many
high risk domains.
[Figure 2.17 shows the organisational work system (shaped by working practices, human factors, the operating environment, equipment/technology, culture, operating goals and organisational structure) producing outcomes that are compared against a model of safe operational performance. Where the outcome is as intended the loop is closed; where it is not, the feedback cycle begins: 1) detect safety-critical issues, incidents and unintended outcomes (reporting); 2) capture/input data into the system; 3) classify and describe; 4) prioritise according to risk; 5) investigate root causes/contributory factors in depth (analysis); 6) formulate lessons/safety recommendations; 7) implement safety solutions in operations (feedback); and 8) monitor the effectiveness of previous safety solutions. Together these stages constitute the safety monitoring and learning functions for organisational systems.]
Figure 2.17: The safety feedback or control loop for organisational systems
In the figure, the reporting, analysis and feedback process is depicted as one of information transmission between a series of stages or key functions, independent of any specific organisational structures or agents, such as risk management teams.
Within the literature on incident reporting and safety or quality improvement, similar
models of the key stages within the process of learning from operational experiences
have been proposed (e.g. Van der Schaaf & Wright, 2005; Kjellen, 2000; Koornneef,
2000; Dept. Health, 2000).
The feedback cycle bears a close similarity to a cybernetic model of regulation or control of a system, where the system in question is ultimately an organisational work system and the outcome that forms the signal (to which the regulative mechanism is sensitive) is safety performance.
In this way, the process uses information
gained regarding current levels of a system’s achieved operational safety (largely
delivered in the form of incident reports) to modify the functioning of that system in
order to achieve more desirable safety outcomes in the future. Put another way, the
system detects evidence of vulnerabilities in work systems and responds by
implementing targeted improvements in systems to deliver safer performance. This
is achieved through the continuous operation of several key functions that make up
the safety feedback loop, from detection of safety-critical events, through
classification and analysis, to eventual implementation and evaluation of the safety
solutions. Figure 2.18 below depicts the sequence and content of these functional
stages in the feedback loop.
1. Detect: safety-critical issues/patient safety risks
2. Capture: incident report information
3. Classify: incident type, severity and systems involved
4. Analyse: incident database for safety issues
5. Prioritise: reported issues for corrective action
6. Investigate: root causes and contributory factors
7. Formulate: safety solutions and systems improvements
8. Implement: changes in work systems to address vulnerabilities
9. Monitor: effectiveness of solutions in preventing recurrence
(repeated in a continuous cycle)
Figure 2.18: Functional stages in the incident-based safety feedback loop
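For readers who prefer a procedural rendering, the nine stages can be sketched as a repeating cycle; the names below are illustrative only and are not drawn from any specific reporting system:

```python
from enum import Enum

class FeedbackStage(Enum):
    """The nine generic functional stages of the incident-based safety feedback loop."""
    DETECT = 1       # safety-critical issues / patient safety risks
    CAPTURE = 2      # incident report information
    CLASSIFY = 3     # incident type, severity and systems involved
    ANALYSE = 4      # incident database for safety issues
    PRIORITISE = 5   # reported issues for corrective action
    INVESTIGATE = 6  # root causes and contributory factors
    FORMULATE = 7    # safety solutions and systems improvements
    IMPLEMENT = 8    # changes in work systems to address vulnerabilities
    MONITOR = 9      # effectiveness of solutions in preventing recurrence

def next_stage(stage: FeedbackStage) -> FeedbackStage:
    """Advance to the next stage; after MONITOR the cycle repeats from DETECT."""
    return FeedbackStage(stage.value % 9 + 1)
```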
It should be noted that relying upon incident reports as the monitored signal that initiates such a process means that there is a degree of inherent noise within that signal. A prominent limitation of reporting systems is that reported incidents at best provide only an indication of the true safety state of a system, due to variability in reporting rates arising from a range of technical and motivational factors that have been summarised elsewhere (e.g. Vincent, 2006).
The functional definition of the feedback cycle described above formed an important template for the interpretation and refinement of information gained from both subject matter experts and the literature review. The feedback loop was further developed in response to knowledge gained through the review, and the content and cyclical sequence of its nine generic functional stages were incorporated within an emerging model of safety feedback processes based upon incident monitoring systems at an organisational level. The resulting practical framework is described within the following section of this report.
A framework for Safety Action and Information Feedback from Incident
Reporting (SAIFIR)
As part of the practical output from the research, a framework was developed to
provide an abstracted model of the safety feedback process for incident reporting
based upon best practices established from the review. The framework is described
as one of “Safety Action and Information Feedback from Incident Reporting” (or using
the acronym: “SAIFIR”) to identify the dual outputs from incident monitoring of: 1)
specific corrective actions taken to improve the safety of work systems, and 2) broad
dissemination of information to raise general awareness of current risks to the safety
of operations.
The SAIFIR framework was developed as a tool to help understand and reason
about the features of complex safety management processes at an organisational
level, as well as to promote the practical application of the review findings. It evolved
through several iterative attempts to synthesise information gained from the literature
review with practical expert knowledge regarding the operation of safety
management systems in international health care and high risk industry domains. In
structure, the framework depicts the logical work-flows or processing stages for
incident report management at an organisational level and links them to the various
information channels/action mechanisms that form possible modes of feedback to
improve safety at the level of local operations.
The literature identified for reporting systems within the review was mapped to the framework in order to demonstrate the framework's efficacy in defining the key modes of feedback present in effective operational health care reporting systems.
An account of how the key features of the health care reporting systems reviewed in
the literature map to the modes of feedback identified within the SAIFIR framework is
included within the subsequent section of this report.
The SAIFIR framework is depicted within figure 2.19 below, which represents an
architecture for organisational-level systems for learning from reported safety
incidents. Critically, the framework links key feedback modes to a generic safety
issue management process for application in incident monitoring and learning
systems. The key features of the framework are discussed in the following text,
along with further description of the model’s origin.
[Figure 2.19 spans the operational, local organisational and supra-organisational levels. Incident reports from local clinical work systems (care providers and patients) enter a three-stage safety issue management process: 1) incident report monitoring, 2) safety issue analysis, and 3) solutions development and systems improvement. All classified incidents are held in an incident repository; single incidents and priority issues are identified for follow-up, and aggregated data from multiple incidents feed the analysis stage, alongside high-level and external reporting requirements and reported analyses of national aggregated data. Root causes, contributory factors and key trends inform solutions development, while high-level safety alerts, national priority issues and direct action arrive from the supra-organisational level. Five feedback modes return to the operational level: A) bounce back, B) rapid response, C) raise awareness, D) publicise actions and E) improve work systems, the last integrated and supported by local implementing agents and leadership.]
Figure 2.19: The SAIFIR framework for incident-based learning. The framework provides a systems architecture that links key feedback modes to a generic safety issue management process for application in organisational level incident monitoring schemes
The SAIFIR framework was developed iteratively in response to review of the key features of established reporting systems across multiple domains and through consideration of the practical issues outlined by subject matter experts consulted as part of the research effort. As such, the framework represents a solution to the requirements for effective organisational level feedback processes developed through the literature review and interviews with subject matter experts.
Although the framework depicts feedback in the form of information flows to front line
work systems, it should be noted that these processes are designed to instigate
productive dialogue between the reporting system and reporting community (a theme
that will be taken up later in this section). Specifically, the model addresses the key
requirements for effective safety feedback systems, based upon subject matter
expertise and outlined within section 4.1 of this report.
Figure 2.20 below outlines
the relationship between the fifteen requirements and key features of the SAIFIR
framework.
Requirement 01: Feedback at multiple levels of the organisation.
Solution: The SAIFIR framework is designed for application in local organisational level incident reporting systems that are integrated within wider supra-organisational level networks operated by external agencies. As such it accepts inputs (incident reports) and aggregates data from multiple operational scenarios or work systems in its incident repository or database. It also passes data up to higher level, external agencies such as centralised reporting systems and accepts feedback from them in the form of high level policy, national safety alerts, reports from the analysis of aggregated data, etc. Viewed as a multi-level system that aggregates data from individual organisations and their individual operations, the overall architecture is capable of detecting and learning lessons from rare occurrences.

Requirement 02: Appropriateness of mode of delivery or channel for feedback.
Solution: The SAIFIR framework incorporates multiple modes of feedback that occur at various points within the safety issue management process, promoting comprehensive action and information feedback to local clinical work systems.

Requirement 03: Relevance of content to local work place and systems.
Solution: The framework includes a local implementing agent at the operational level for integration and support of systems changes fed back from incident monitoring and analysis. This ensures that actions to improve work systems represent targeted responses applicable to specific operational scenarios. The framework also emphasises dialogue in the form of feedback and information exchange between the reporting system functions and staff within local clinical work systems as the end users of any recommendations for systems improvements that might be considered necessary. This helps to ensure that the content of safety feedback is relevant and therefore useful.

Requirement 04: Integration of feedback within the design of safety information systems.
Solution: Through linking various feedback modes with specific stages in the safety issue management process implemented for organisational level reporting schemes, the SAIFIR framework provides an example of an integrated or unified design for a system that fulfils both reporting/analysis and feedback or learning functions.

Requirement 05: Control of feedback and sensitivity to information requirements of different user groups.
Solution: The SAIFIR framework provides a breakdown and definitions for the various types of feedback that should be implemented at various stages of processing a safety issue or incident. As such it provides a framework for planning and control of fed-back information.

Requirement 06: Empowering front line staff to take responsibility for improving safety in local work systems.
Solution: The safety improvement process embodied within the SAIFIR framework focuses not just upon improvement actions but also upon safety information to raise the awareness of front line staff and empower them with knowledge of vulnerabilities in local operations. Additionally, the framework emphasises dialogue with the reporting community, especially during initial reporting, investigation and development of safety solutions.

Requirement 07: Capability for rapid feedback cycles and immediate comprehension of risks.
Solution: Feedback mode B "rapid response" within the SAIFIR framework fulfils the requirement for fast feedback cycles and quick responses to immediate threats to patient safety. Within the safety issue management process, early consideration of the severity of a reported incident allows prioritisation and rapid assessment of whether any urgent action is required. Multiple points of feedback within the framework also allow timely comprehension of current system vulnerabilities, and intermediary feedback means that staff should be aware of the issues surrounding a safety incident as it is being processed by the system.

Requirement 08: Direct feedback to reporters and key issue stakeholders.
Solution: Within the SAIFIR framework, feedback modes A "bounce-back" and D "inform staff of actions taken" provide direct feedback to the reporters or key issue stakeholders associated with the original incident or reported event. This helps dialogue to be maintained with the reporting staff, and direct communications are initiated that emphasise that the reported information has been or is being acted upon, thus reinforcing future reporting behaviour.

Requirement 09: Feedback processes are established, continuous, clearly defined and commonly understood.
Solution: The SAIFIR framework, in as much as it represents an architecture for incident monitoring and feedback systems design with clear criteria, offers a possible model for mature and effective risk management systems in this area. Definition of the process, throughputs and outcomes of each stage ensures the process is repeatable and that the roles and involvement of all implicated agents can be explicitly agreed. Definition of a generic model of the system with key feedback modes and rationale serves as a guideline for implementation and a checklist of key functional capabilities for any specific attempt to instantiate the system within a particular local setting.

Requirement 10: Integration of safety feedback within working routines of front line staff.
Solution: The attempt made to break down and define separate modes of safety feedback from incident monitoring within the SAIFIR framework provides a means of understanding the information process from the perspective of front line staff. In time, staff should become familiar with the different modes of feedback available to them and this will shape expectations for interactions with the reporting and risk management systems.

Requirement 11: Improvements are visibly fed back to local work systems from safety monitoring programmes.
Solution: The SAIFIR framework includes feedback modes not only for the implementation of changes in local work systems (mode E), but also for the publicising and widespread dissemination of information concerning actions arising from issue analysis amongst the professional reporting community. Comprehensive feedback functions ensure that safety improvements resulting from the operation of the system are visible to potential reporters, thus promoting future reporting.

Requirement 12: Front-line personnel consider the source and content of feedback to be credible.
Solution: The channels for feedback and dialogue within the SAIFIR framework, in addition to the requirement for collaboration with front line staff in safety improvement efforts, should foster trust between the reporting community and the operators of the incident reporting system and improvement process. Confidence in the system may be developed through open and repeated successful operation.

Requirement 13: Feedback preserves confidentiality and fosters trust between reporters and policy developers.
Solution: Clear policies for feeding back incident-related information, such as those embodied within the SAIFIR framework, support the effective management of safety-related information and help provide the basis for the development of data disclosure and confidentiality protocols according to mode of feedback.

Requirement 14: Visible senior level support for systems improvement and safety initiatives.
Solution: Through the involvement of leadership in systems improvement processes at the operational level, senior management and local department heads may visibly sponsor and drive local improvement efforts.

Requirement 15: Double-loop learning to improve the effectiveness of the organisation's safety feedback process.
Solution: A rational, defined process such as that described within the SAIFIR framework provides a standard or model that can be tested and improved through continual review based upon operational experience. The continuous nature of the feedback cycle embodied within the framework ensures that the effectiveness of changes introduced at a local level as a direct result of the operation of the system can be evaluated through monitoring future incidents. In this way, reports of repeated incidents of the same type demonstrate failure of remedial actions and poor performance of the whole system, which should then be assessed for opportunities for improvement.

Figure 2.20: Correspondence between features of the SAIFIR framework and expert-derived requirements for effective safety feedback systems
The SAIFIR framework provides a depiction of the entire safety monitoring and action
process for organisational level learning from failures and operational experiences.
As such it focuses upon the description of two specific key elements or processes
and their interrelationship:
1) The safety issue management process: The sequence of stages performed by the organisation's incident reporting and quality improvement systems in processing a safety issue. This includes processing of the initial incident report, risk assessment, incident investigation and analysis, classification and archiving functions, development of appropriate responses or solutions and their implementation in front line work operations.

2) The key safety feedback modes: The specific feedback processes undertaken to disseminate safety-critical information that raises the awareness of operational risks amongst front line staff and to implement recovery actions. Crucially, the feedback modes also include the specific action mechanisms for the implementation of targeted improvements in operational work systems to safeguard against vulnerabilities uncovered through incident investigation and analysis of aggregated incident data.
Regarding the safety issue management process incorporated within the SAIFIR
framework, the process is closely related to early attempts within the review to
describe the safety feedback or control loop (see section 4.2.1 for further
information). Figure 2.21 below depicts the main functional stages within the safety
feedback loop and how they relate to the three top-level stages for processing safety
issues depicted within the SAIFIR framework.
[Figure 2.21 maps the nine functional stages of the feedback loop onto the three main stages of the safety issue management process: stages 1 to 3 (detect, capture, classify) correspond to 1) incident report monitoring; stages 4 to 6 (analyse, prioritise, investigate) to 2) safety issue analysis; and stages 7 to 9 (formulate, implement, monitor) to 3) solutions development and systems improvement, repeated in a continuous cycle.]
Figure 2.21: Relationship between the functional stages within the safety feedback or control loop and the main stages of the safety issue management process incorporated within the SAIFIR framework.
On a simple level, the main functional stages of the safety issue management
process involve: a) the receipt, screening and archiving of incoming reports, b) the
analysis of incident data and investigation of specific high severity incidents, and c)
the development and implementation of systems improvements to prevent
recurrence and similar incidents happening in the future. These three main stages
are depicted within the overview of the SAIFIR framework (see figure 2.19 above).
The actual stages in the workflow represented by this process may be considered at
a finer-grained level of detail, however. Figure 2.22 below illustrates the sub-tasks or
processing stages involved, their key outputs and logical sequence.
[Figure 2.22 details the safety issue management process. Input: incident reports. Stage 1, incident report monitoring: reports are graded according to severity, with serious incidents fast-tracked; all incidents are classified/described (and elaborated) and the classified reports archived in a local incident case repository. Stage 2, safety issue analysis: single incidents selected for follow-up undergo in-depth local investigation to establish root causes and contributory factors, while aggregated data from multiple incidents and priority issues feed incident trend analysis to establish causal trends; together these identify system vulnerabilities and safety problem areas. Stage 3, solutions development and systems improvement: workable solutions are generated (systems changes), an implementation strategy is defined (implementation plans), and responsibilities are agreed with support for local implementation. Output: safety improvements and risk information.]
Figure 2.22: Safety Issue Management Process for Learning from Events (SIMPLE workflow)
The process as it appears above in figure 2.22 was defined following discussion of
key activities undertaken within incident reporting schemes with subject matter
experts. As such it represents a generic model or abstraction of specific functions
that have been implemented in one form or another in the majority of safety
monitoring systems across a range of high-risk domains. Through detailed analysis
of this process, it was possible to map the feedback functions that appear within the
SAIFIR framework in a comprehensive manner according to emerging operational
expertise and information from published accounts of reporting systems (the
complete, detailed version of the resulting process model is reproduced within
appendix L for reference).
Within the first stage of the issue management process, incoming reports are clarified and graded according to severity before being classified or described using some typology, causal or other human factors taxonomy that allows assignment of key terms to identify the incident within the incident database maintained by the system for archiving of incident records. This is a prerequisite for statistical analysis of aggregated incident data sets. Logically, the initial grading of reports forms a screening process that identifies significant events or potentially high severity incidents at an early stage in the process, to allow fast-tracking of those issues that pose a high or immediate risk to the continued safety of operations. The initial screening of incoming reports for potential high severity safety issues by the risk management team, along with input from the reporter, is important in deciding whether rapid and immediate feedback of corrective actions is required – in which case the incident may be fast-tracked to implement preventive measures as quickly as possible to avoid further harm.
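A minimal sketch of this initial screening step is given below; the severity grades and routing labels are hypothetical examples rather than prescriptions from the review:

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    description: str
    severity: str          # e.g. "low", "moderate", "severe", "catastrophic"
    incident_type: str

def screen_report(report: IncidentReport) -> str:
    """Initial screening: grade by severity and decide the processing route."""
    if report.severity in ("severe", "catastrophic"):
        # Fast-track: rapid-response feedback (mode B) while the in-depth
        # investigation is still pending.
        return "fast-track"
    # All other reports are classified and archived for aggregate analysis.
    return "routine"
```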
Safety issue analysis and incident investigation forms the second major stage within
the process.
Analysis and investigation using formalised procedures such as Significant Event Audit (SEA) or Root Cause Analysis (RCA) are essential processes if the often complex interaction of contributory factors associated with an incident is to be understood and lessons are to be learnt from adverse events. A detailed consideration
of the various human factors investigative techniques associated with incident and
error analysis is beyond the scope of this current review, but significant contributions
in this area were identified through the searches (e.g. Woloshynowych et al., 2005).
Safety issue analysis should be based both upon the investigation of single reported
incidents (making the system responsive to current risks to operations) and from
periodic interrogation/analysis of data from multiple incidents to look for trends. This
allows the system to prioritise safety issues according to conventional risk analysis
processes that account for both severity of potential outcome and frequency of
occurrence.
Further data mining analyses of the incident database may be
undertaken periodically in order to review incidents of a certain type or to identify and
prioritise common contributory factors for remedial action.
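As an illustration of such a prioritisation, the following sketch ranks incident types by a simple frequency-by-severity score; the weighting scheme is hypothetical and any real system would use its organisation's own risk matrix:

```python
from collections import Counter

# Hypothetical severity weights standing in for an organisation's risk matrix.
SEVERITY_WEIGHT = {"low": 1, "moderate": 2, "severe": 3, "catastrophic": 4}

def prioritise_issues(incidents):
    """Rank incident types by a simple risk score: frequency x worst severity.

    `incidents` is an iterable of (incident_type, severity) pairs drawn from
    the incident repository.
    """
    incidents = list(incidents)
    frequency = Counter(incident_type for incident_type, _ in incidents)
    worst = {}
    for incident_type, severity in incidents:
        weight = SEVERITY_WEIGHT.get(severity, 1)
        worst[incident_type] = max(worst.get(incident_type, 0), weight)
    scores = {t: frequency[t] * worst[t] for t in frequency}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```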
The result of comprehensive safety issue analysis is information that identifies and
prioritises system vulnerabilities and potential problem areas, for subsequent
remedial action (the final stage in the process). This takes place through a process
of generating appropriate interventions or design solutions to safety problems
apparent in current work systems, along with a strategy for implementation. The final
action plan arising from the process will include responsibilities and ownership of the
planned changes as well as continued monitoring and support of the changes, once
deployed within operations.
A critical feature of the proposed framework is that it addresses the safety issue management process and links its stages to specific key feedback modes for the dissemination of action and information outputs from reporting systems. The area of
feedback from incident reporting schemes is one that lacks comprehensive
consideration in both the research and safety operations management literature. The
potential value of the SAIFIR framework is that it adopts a holistic position on
integrated risk management processes for learning from reported safety incidents. In
so doing, it offers an integrated view of three important elements: 1) local reporting
system functions, 2) committee investigation, quality improvement and action
mechanisms, and 3) the multiple feedback functions that ensure the risk
management system effectively interacts with front line operations.
Figure 2.23 below provides a description of the key feedback modes A-E that comprise the main outputs within the SAIFIR framework (depicted within figure 2.19).
In line with the conceptual approach to the definition of safety feedback adopted for
this review, the feedback modes below fall into two broad types: action or information
feedback processes.
Bounce-back feedback, risk awareness information and
informing staff of actions taken (feedback modes A, C and D, respectively) are of the
information type, being aimed at either the original reporters of an incident or the
reporting or professional community as a whole.
Feedback modes B “rapid
response” and E “safety improvement actions” are of the action type, involving actual
changes or modification to existing systems, working practices, training interventions,
equipment or other aspects of the environment in which front line staff function. In the following discussion these issues will be dealt with in further detail, along with further elaboration of the key feedback modes, rationale and structural characteristics of the SAIFIR framework.
Feedback mode A, bounce back information (information to reporter): acknowledge that the report has been filed (e.g. automated response); debrief the reporter (e.g. telephone debriefing); provide advice from safety experts (feedback on issue type); outline the issue process (and decision to escalate).

Feedback mode B, rapid response actions (action within local work systems): measures taken against immediate threats to safety or serious issues that have been marked for fast-tracking; temporary fixes/workarounds until the in-depth investigation process can complete (withdraw equipment; monitor procedure; alert staff).

Feedback mode C, risk awareness information (information to all front line personnel): safety awareness publications (posted/online bulletins and alerts on specific issues; periodic newsletters with example cases and summary statistics).

Feedback mode D, inform staff of actions taken (information to reporter and wider reporting community): report back to the reporter on issue progress and actions resulting from their report; widely publicise corrective actions taken to resolve the safety issue to encourage reporting (e.g. using visible leadership support).

Feedback mode E, systems improvement actions (action within local work systems): specific actions and implementation plans for permanent improvements to work systems to address contributory factors evident within reported incidents; changes to tools/equipment/working environment, standard working procedures, training programmes, etc.; evaluation and monitoring of the effectiveness of solutions, iterated as necessary.

Figure 2.23: Description of different types of feedback corresponding to feedback modes A-E within the SAIFIR framework
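A compact way to express the link between processing stages and feedback modes is sketched below; the stage names follow the SAIFIR framework, but the mapping rendered as code is purely illustrative:

```python
# Hypothetical mapping from safety issue management stage to the SAIFIR
# feedback modes (A-E) typically triggered at that stage.
FEEDBACK_BY_STAGE = {
    "incident report monitoring": ["A", "B"],   # bounce back; rapid response if fast-tracked
    "safety issue analysis": ["C"],             # raise awareness of emerging risks
    "solutions development & systems improvement": ["D", "E"],  # publicise actions; improve work systems
}

def feedback_modes_for(stage: str) -> list[str]:
    """Return the feedback modes to initiate for a given processing stage."""
    return FEEDBACK_BY_STAGE.get(stage, [])
```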
Whilst it is the action feedback processes that deliver meaningful, targeted
improvements in work systems, the informational feedback processes are
nevertheless of considerable importance, especially to the success of voluntary
reporting schemes.
Even mandatory reporting systems depend in practice upon the voluntary disclosure of information by professional communities (Leape, 2002), and maintaining the open, voluntary reporting of information concerning operational experience to safety management systems should be a high priority if vulnerabilities are to be
identified and counteracted in a timely and effective manner. It is therefore important
to encourage reporting through emphasising the utility of reporting to operational
level staff and this requires effective feedback to the reporting community and
communication with front line staff.
Feedback to individual reporters and dialogue between the reporter and risk
management system representatives is important at the point of reporting an incident
or near miss to local safety systems (i.e. feedback mode A within the SAIFIR
framework) for several reasons. Providing acknowledgement or “bounce-back” upon
the receipt of a report and informing the reporter of what will then happen to the
report are important in challenging the perception that reports are filed away and not
acted upon, which is a factor that contributes to low reporting rates (NPSA, 2004).
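By way of illustration only, a bounce-back acknowledgement (feedback mode A) might be composed along the following lines; the function, fields and wording are hypothetical:

```python
from datetime import datetime

def acknowledge_report(report_id: str, reporter_email: str | None) -> str:
    """Compose a bounce-back acknowledgement (feedback mode A).

    Returns the message text; a real system would route it via e-mail or the
    reporting web form. Anonymous reports (no contact details) cannot be
    acknowledged individually.
    """
    if reporter_email is None:
        return ""
    return (
        f"Thank you for report {report_id}, received {datetime.now():%d %b %Y}. "
        "It has been logged with the risk management team, who will grade it, "
        "decide whether it is escalated for investigation, and update you on "
        "any actions taken."
    )
```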
Early contact with the reporter allows the risk management team to offer immediate
advice, for both known and novel issues, or provide other information such as the
incidence of reported incidents of a similar type. Expert advice may include further
actions to take to prevent or limit any further harm, or other agencies that should be
notified of the incident's occurrence.
As part of this debriefing, contact with risk
management personnel trained to deal with reporters also allows the provision of
emotional support for staff who may have been involved in an unsettling experience
(Wilf-Miron et al., 2003).
Often the sheer volume of reports received by a reporting system may be prohibitive in terms of person-to-person interaction between system representatives and the reporter at the point of initial contact. This is especially the case for centralised systems and reporting systems for large organisations, such as NHS trusts. Web-based systems in which online reporting forms log information in databases automatically may therefore be preferred over those that employ manual initial processing of reports (e.g. Nakajima, Kurata & Takeda, 2005), due to their capability for automated acknowledgement and report submission. There are also notable examples, however, of reporting systems that successfully employ person-to-person report input mechanisms, such as telephone reporting of incidents in dispersed ambulatory care systems (e.g. Wilf-Miron et al., 2003), which allow spontaneous and easy input of data from the reporter's point of view.
There is a direct trade-off between specificity of reports and targeted feedback on the one hand, and open systems on the other: open systems are prone to receive numerous reports, often concerning minor incidents, and unless filtering is introduced only automated responses are feasible. Systems that do employ person-to-person contact at the point of
reporting have the advantage of being able to exert a degree of quality control over
the information logged in their databases for later analysis. Actual dialogue with the
reporter either through correspondence, meeting or telephone conversation allows
the risk management team representative to elaborate upon the information
contained within the submitted report and query the reporter regarding the important
details as necessary.
This method of eliciting information also requires less
judgement on the part of the reporter regarding, for example, identification and
classification of contributory factors (as is often required within automated report
forms) – the capability for which is likely to vary considerably as a result of
professional role and many other factors. Initial contact with the reporter therefore
increases the quality and relevance of information input to the reporting system and
has the additional benefit of ensuring timely capture of all incident details close to the
event itself, before the reporter's recall degrades (Silver, 1999). It may also allow the
risk management team to influence future reporting behaviour, through reinforcing
definitions of what amount to reportable or non-reportable incidents. Early feedback to those who were involved in an incident may therefore incorporate the decision on whether or not to escalate the report and investigate further, with the rationale for that decision explained to the reporter.
Closely associated with the issue of dialogue between reporting system
representatives and the reporter is the question of the level of anonymity afforded to
reporters and exactly where in the issue management process the original report is
“de-identified” of information that might point to the originating work scenario and
individuals involved in the initial incident.
A prerequisite for further contact with
individual reporters is that they are identifiable to the system once their report has
been filed. Further contact is therefore not possible for totally anonymous reporting
systems to which individual reporters do not submit any personal information. In
systems where reporters’ details are initially logged, feedback such as progress
updates and further dialogue aimed directly at the original issue stakeholders is
possible at intermediary stages of processing, though it is critical to be able to
provide appropriate assurances of confidentiality.
Feedback mode B refers to rapid action taken in response to a reported incident to
limit any further harm that might arise from the issue. It is important that reports are
analysed promptly and recommendations for dealing with serious hazards
disseminated rapidly (Leape, 2002). Initial screening of incoming reports for potential
high severity safety issues by the risk management team, along with input from the
reporter, is important in deciding whether rapid and immediate feedback of corrective
actions is required. Following identification of specific serious or immediate threats to
safety, the incident may be fast-tracked to implement preventive measures as quickly
as possible. This process may include the implementation of temporary preventive
measures (i.e. “workarounds” or “fixes”) until a more comprehensive safety issue
investigation and feedback process can be completed, resulting in more permanent
and widespread systems improvements.
Immediate responses to serious issues
reported to the system may include withdrawal of specific equipment or drugs,
implementation of close monitoring and review of a particular process or procedure,
issue of a safety alert to notify staff of the incident’s occurrence or temporary removal
of the individuals involved from service. This last measure may be taken to give
traumatised staff a debriefing period as well as to review the individual’s actions
(although caution must be used to ensure any performance review processes are
seen to be fair and that reporters are protected as far as possible, in order to
encourage future reporting).
More in-depth analysis or investigation of an incident is initiated for the purpose of
developing recommendations or action plans that actually improve the resilience of
work systems to similar future adverse events (feedback mode E within the SAIFIR
framework).
Safety issue analysis should be based both on the investigation of
single reported incidents (making the system responsive to current risks to
operations) and on periodic interrogation of aggregated incident data to look for
trends.
Typically, in high-risk industries a safety committee structure will be implemented to process the results of incident analyses or investigations and draw up recommendations for working practice.
Systems improvement requires visible support from senior level management and
clinical leadership, as well as input from staff and external experts with a relevant
diversity of perspectives and expertise. Solutions development teams can productively comprise a diagonal cross-section of the organisation, spanning functional and hierarchical boundaries. This would include input of expertise from safety representatives, all relevant functional areas, clinical groups affected by the change and front line staff, with senior management support to ensure the team possesses sufficient authority and autonomy to operate effectively. The multi-disciplinary team will therefore comprise representation of both safety/risk management expertise and clinical leadership. It is important during the solutions
formulation phase to include staff who will be the eventual end users of any modified
or new systems. This serves three purposes: 1) it brings local practical knowledge
and insight to bear upon safety problems and how they might be solved, 2) it ensures
the practicality of developed solutions, and 3) it increases acceptance and support for
implemented changes. One potential strategy relevant here is to invite staff with
appropriate expertise to sit on a multi-disciplinary committee (Piotrowski, Saint &
Hinshaw, 2002), though such an action would require effective inter-professional
collaboration.
From the corporate level to the front line it is important to develop structures and
processes to implement and monitor improvement plans developed as part of the
safety feedback loop. Actions taken in response to specific safety issues should also
be reported to external and higher-level agencies, capable of sharing lessons learnt
with other organisations. It is important to follow up on the implementation of new
systems and improvements within agreed time-scales (e.g. Nakajima, Kurata &
Takeda, 2005; Takeda et al., 2003). This could happen by nominating a relevant
staff member as a local process owner who will report back to the risk management
group on actions taken, or through follow-up by the risk management team itself
(Gandhi et al., 2005).
In order to monitor the effectiveness of individual safety solutions and corrective
actions, some consideration of how to measure the success of an intervention needs
to be given during the development process.
It is also important to feed back
information from local work settings to risk management systems on the
effectiveness of solutions, so that learning about effective strategies for correcting
vulnerabilities takes place and can be generalised to other settings. Information on
the effectiveness of solutions can be gained from monitoring further incident reports,
with recurrent incident types being a clear indication of failure of the safety feedback
loop to resolve the issue. Direct monitoring of the implemented solution and any
associated evaluative measures by the risk management team is another means of
capturing evaluative information on the effectiveness of the safety feedback loop. In
local work units, teams responsible for reviewing quality of clinical care may take on
responsibility for the assessment and monitoring of safety improvements.
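A simple sketch of this recurrence check is given below; the data layout is hypothetical and serves only to illustrate how repeated incident types after an implemented solution can be counted:

```python
from datetime import date

def recurrence_after(incidents, incident_type: str, implemented_on: date) -> int:
    """Count reports of a given incident type received after a solution was
    implemented; repeated occurrences suggest the corrective action has not
    closed the feedback loop.

    `incidents` is an iterable of (incident_type, report_date) pairs.
    """
    return sum(
        1
        for reported_type, reported_on in incidents
        if reported_type == incident_type and reported_on > implemented_on
    )
```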
National or other high-level regulatory bodies, having identified specific safety
problem issues with widespread implications from analysis of aggregated safety data,
will need to utilise local risk management systems and structures to ensure
implementation and follow-up of systems improvements, new policies and other
changes in front line operations in local organisations. Use of common mechanisms
of implementation for locally and nationally developed patient safety solutions will
therefore be an effective strategy, as will exploiting common information channels
and means of feeding back safety information from both local and supra-organisational agencies.
Along with the actual implementation of safety actions to improve systems, it is
important to publicise the changes resulting from analysis or investigation of reported
incidents. Feedback mode D captures this information process and ensures that
actions taken on priority issues are widely communicated amongst all front line care
providers in order to emphasise the utility of reporting safety issues to local risk
management systems and encourage future reporting.
Communication of issue
outcome to the reporter at this stage helps to complete the process from the point of
view of those involved in the original incident. In addition, information on actions
taken and their outcome should be reported to all relevant incident stakeholders, both
external and higher-level agencies, so that lessons learnt can be generalised across
health care organisations.
Feedback mode C considers the possible information outputs from the analysis of
incident reports that may be broadly disseminated within an organisation to raise the
awareness of safety issues and vulnerabilities in current work systems.
The effectiveness of various feedback channels (when and where information will be delivered) and the content of safety information fed back (what data and information will be presented) must be considered. These can be combined in familiar vehicles
such as safety newsletters, which have been successful in aviation (Barach & Small,
2000), targeted staff training initiatives (Wilf-Miron et al., 2003), seminar schemes for
professional groups (Nakajima, Kurata & Takeda, 2005), team briefings and
individual debriefings, quality reviews, patient safety bulletin boards (Holzmueller et
al., 2005), and leadership walkrounds (Frankel, Gandhi & Bates, 2003), amongst
others. A survey of how NHS trusts disseminate lessons learnt, conducted by the National Audit Office in 2005, concluded that most trusts use multiple feedback channels and that almost 80% of NHS trusts use newsletters as a form of feedback. The report identified a number of feedback mechanisms, such as: hospital-wide meeting groups, local meeting groups responsible for patient safety, mail/email to local managers (for cascade to local staff), trusts' intranets and newsletter publications (NAO, 2005).
Various professional incentives might be
implemented to ensure awareness and effective use of safety information, such as
credit schemes for attendance at safety functions (Nakajima, Kurata & Takeda,
2005). This may help to incorporate activities for becoming aware of current safety
issues into the daily working practices of staff, whilst promoting the timely uptake of
information impacting upon safety in clinical care tasks.
Frequent periodic newsletters are often used in a variety of industries to convey
important safety awareness information, information on incident rates and promote
the operation of local risk management or incident reporting systems to front line
staff. The US Aviation Safety Reporting System (ASRS) distributes over 85,000
copies of its newsletter Callback annually. The Callback newsletter provides highly
visible, monthly feedback to the reporting community of the results of its analyses of
incidents and studies of the data received. The visibility of the information provided
by individual reporters back to the professional community is an essential element in
the success of the system, building support for the system, promoting safety and
encouraging voluntary reporting (Billings, 1998).
In UK general aviation, the Confidential Human Factors Reporting Programme (CHIRP) publishes a regular newsletter, Feedback, which is mass-mailed to all registered pilots, with different versions produced for other relevant professional groups – a distribution of some 30,000 in total. Appendix M provides information
from CHIRP’s web-site, concerning safety feedback and associated rationale, as well
as an example of feedback from CHIRP’s system. In content, the newsletter includes
simple, summary statistics presented graphically to show the incidence of different
incident types in a recent period and draw attention to any specific trends in incident
rates.
In addition to statistical information, textual descriptions of de-identified
incidents are included using accounts taken directly from the original incident report,
where practicable.
The newsletter additionally includes editorial commentary to
highlight best practices and draw conclusions regarding lessons learnt from specific
incidents. Such editorials allow the reporting system to focus upon one specific
safety issue or type of incident, for example. Focusing upon specific issues often
stimulates further reporting on that issue in the aviation community, highlighting how
effective a directly mailed newsletter can be in stimulating dialogue between the
reporting system and reporters on safety issues. Such observations also serve to
emphasise the inadequacy of reporting rates as a direct indicator of the frequency of
different types of incidents occurring in operational practice.
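As a purely illustrative sketch of the kind of summary statistics such a newsletter might carry, the following counts incident types reported in a recent period; the data layout and period are hypothetical:

```python
from collections import Counter
from datetime import date, timedelta

def newsletter_summary(incidents, period_days: int = 90):
    """Summary statistics for a periodic safety newsletter: counts of each
    incident type reported in the most recent period.

    `incidents` is an iterable of (incident_type, report_date) pairs.
    """
    cutoff = date.today() - timedelta(days=period_days)
    recent = (t for t, reported_on in incidents if reported_on >= cutoff)
    return Counter(recent).most_common()
```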
Due to diversity in working practices between different professional groups, shift-working patterns and time pressures, it is important that safety information feedback is as user-friendly as possible, employing multiple means and channels to ensure that as many members of staff as possible become aware of it. It is also important to present safety-critical information in the working context and at the time at which it is directly applicable (e.g. in and around the specific work or hospital area; immediately before undertaking specific procedures or using specific equipment).
There are
several factors that might increase the likelihood that staff will read safety-critical
information. Ensuring the content is filtered or adapted to be directly relevant to the
specific audience at which it is aimed, so that staff receive only that which is relevant
to their professional group and the day-to-day tasks in which they are engaged, is
perhaps ideal, but resource intensive.
Presenting information in an accessible, friendly format (e.g. describing specific incident stories along with summary statistics in safety newsletters) is another.
Feedback of safety awareness information
produced on a national level in the form of safety bulletins, newsletters or summary
reports will also need to be communicated through effective and convenient channels
at local operational level, to ensure wide circulation.
Considering the importance of safety information feedback from local risk management systems for increasing safety awareness amongst front line staff, it is perhaps surprising that the current review could locate very little published guidance and even less empirical evidence concerning the effectiveness of different forms of presentation of information and the applicability of different modes of delivery according to professional audience. Twenty-one of the twenty-three health care incident reporting systems studied as part of this review employed safety awareness publications, including periodic newsletters, to disseminate key safety information to front line staff. However, issues such as the specific information content and mode of distribution of safety information feedback appear to have been regarded as too fine-grained a level of detail to warrant much attention: many authors merely mention in passing that their reporting systems employed newsletter feedback, with few further details. It therefore appears that there is implicit, widespread agreement on the efficacy of newsletters as a means of feeding back to staff, but less information concerning the efficacy of various formats or channels for feedback and little guidance for the practical choices that risk management teams must make in seeking to initiate feedback.
Johnson (2003) considers the practical difficulties associated with safety information
dissemination and reaching the target audience in a comprehensive work on incident
reporting systems.
Staff groups that can be difficult to reach where timely
dissemination is concerned include: shift-workers, contract staff, temporary staff, staff
that are mobile between units and new employees. Further problems are associated
with dissemination to different reporting communities, other relevant external
organisations and varying localities. Johnson also comments that there is a tension between the need for broad dissemination of safety information and the need not to overload busy staff.
Through consultation with subject-matter experts as part of the review, many of
whom had experience of directing industry-wide reporting systems, the following
points were made in consideration of newsletters as a means of feeding back safety
information to the reporting community (representative quotes from the interview
transcripts may be found within appendix N):
•	Safety newsletters aimed at the reporting community are an important form of feedback, as they communicate to potential reporters that the information is used productively to make improvements.
•	Often paper/hard copy, rather than soft/electronic copy, of newsletters is best, especially for people that work shift-type patterns. Shift workers and those that work at multiple sites can take the newsletter with them to read wherever and whenever is convenient.
•	Personnel enjoy newsletters that are short in content, light in tone, informative rather than judgemental or accusatory, and that present a breadth of perspectives whilst still highlighting important safety issues and encouraging awareness.
To ensure ease of uptake and effectiveness of information dissemination,
consideration should be given to streamlining the various channels of information
feedback on the performance of clinical care that reach front line staff from the
multiple professional networks and agencies that exist in health care.
Where
practical it may be possible for safety feedback to utilise existing, common channels
of communication and there is a need to coordinate and integrate communicated
feedback into primary channels which can be monitored in terms of effectiveness.
A final issue to raise in connection with the SAIFIR framework concerns interaction
between the risk management system and front line staff. It should be noted that
although the main focus of this review is upon feedback from incident monitoring
systems, the flow of information following submission of a report should not be
considered to be a one-way process, with the reporting community as passive
recipients of information output from the reporting system. The SAIFIR framework
with its separate feedback modes provides a means of initiating dialogue with front
line staff through establishing channels of communication at various stages of issue
management and involving them in the safety improvement process. Figure 2.24
below depicts three main areas in which the contribution of information and
collaboration with front line staff throughout the incident management process can
increase the effectiveness of the system. This point was emphasised at the Expert
Workshop, at which various aspects of the framework outlined above were discussed
(see Section 4.4 for further information).
[Figure 2.24 repeats the SAIFIR architecture (operational, local organisational and supra-organisational levels, with the three-stage safety issue management process of incident report monitoring, safety issue analysis, and solutions development and systems improvement) and adds the dialogue inputs from the reporting community: elaboration of incident information contributes to incident report monitoring, local knowledge of causes contributes to safety issue analysis, and local operational expertise contributes to solutions development and systems improvement, with the resulting systems improvements integrated and supported by local implementing agents and leadership.]
Figure 2.24: SAIFIR framework depicting key dialogue processes or inputs from the reporting community to the safety issue management process
As is evident from the figure, further contact with the original reporters of an incident
can provide a depth of elaborative information to increase the quality of the incident
description. During this process, the risk management system representative may
prompt recall of specific details that might not have occurred to the reporter at the
time of reporting. Local investigation of specific incidents using techniques such as Root Cause Analysis requires the cooperation of front line staff and the input of the reporter or work unit in which the incident took place. Staff involved in the initial incident may be interviewed or even asked if they would like to join the investigation team. It is also important during the solutions formulation phase to include, in the development process, the front line staff who will be the eventual end users of any modified or new systems.
Front line operators can often provide insight into what
improvements might be practically implemented and how to solve safety problems
within their specific area of work.
Crucially, involving and empowering front line
professionals through the improvement process increases acceptance and support
for the resulting changes to work systems. Each stage of the feedback loop may
therefore benefit from the input of practical expertise and in-depth knowledge of
working practices and systems at a local level.
This information is best sought
through feedback and communication with those that operate the implicated work
systems on a day-to-day basis.
Through synthesis of system requirements and subject-matter expertise in
development of the SAIFIR framework outlined above, it was possible to classify the
available literature identified by the review in order to provide an indication of the
level of support for the key features identified within the model.
The resulting
classification maps the case-based literature from the review onto the framework and
is reported in the subsequent section of this report (section 4.3).
4.3 Survey of feedback modes for incident reporting systems in the
international health care literature
In order to better understand the structural and feedback characteristics of the
reporting systems identified through the literature review, the 23 cases of
implemented systems (described in further detail in the results section 3.3) were
surveyed for use of the five feedback modes A-E defined within the SAIFIR
framework. This exercise effectively links individual features of the model to relevant
literature and establishes a structured, supportive research base for the development
of the framework. The resulting classificatory table (see figure 2.25 below) lists the
key descriptive features of the incident reporting initiatives according to the extent of
safety feedback processes employed in each case.
In selection of the examples, the criteria used were that the safety systems: 1) were
applications within the health care/patient safety domain, 2) related to safety incident
reporting processes (rather than other forms of quality monitoring/control systems),
and 3) provided information on the feedback processes associated with the scheme,
including the recommendation and corrective action process that improves the safety
of work systems and any safety publication/dissemination efforts aimed at raising the
awareness of front line staff.
Within the table in figure 2.25 below, the second field includes a number of key
structural/design characteristics associated with each reporting system. The level at
which the system resides is indicated, i.e. whether it belongs to a department/sub-unit, organisational or supra-organisational level (the latter reserved for state/national
level systems, or other centralised systems that accept reports from multiple
separate organisations/institutions). Where determinable from the reported account,
the status of the system as either anonymous (retaining no identifying data regarding
the specific reporter) or confidential (retaining identifying data which are restricted in
use) was also recorded, along with the associated reporting policy within the
organisation or stated impetus for reporting (i.e. either mandatory or voluntary). The
scope of the reporting system, where explicit statements were available concerning
whether the system accepted reports of near misses, as well as actual safety
incidents, was recorded. Within the third field of the table, both the country of origin
and specific health care specialty areas or professional sub-groups were recorded.
System: Ahluwalia et al. (2005) Critical Incident Reporting System
Level and type: Department/organisational unit level critical incident reporting system; not anonymous.
Domain: Healthcare (UK): Neonatal Department

System: Amoore et al. (2002) Feedback notes for incidents involving medical devices
Level and type: Organisational (Trust) level medical device incident and near miss reporting system with anonymous feedback notes.
Domain: Healthcare (UK): Medical equipment management in an NHS Trust

System: Beasley et al. (2004) Primary care medical error reporting
Level and type: Supra-organisational (state/federal) level confidential error reporting system.
Domain: Healthcare (US): Wisconsin primary care

System: Bolsin et al. (2005) and Bolsin et al. (2005) PDA-based clinician-led reporting system
Level and type: Multi-institutional confidential performance tracking system for clinical outcomes and clinical incidents; associated with a supra-organisational level regulatory body and accreditation schemes.
Domain: Healthcare (Australia): Anaesthesiology; Australia and New Zealand College of Anaesthetists (ANZCA)

System: Gandhi et al. (2005) Safety Reporting System
Level and type: Institutional/organisational level safety information and risk management system that includes voluntary incident reporting, root cause analysis and patient safety walkround processes.
Domain: Healthcare (US): Brigham and Women’s Hospital, Boston

System: Holzmueller et al. (2005); Lubomski et al. (2004) & Wu et al. (2002) Intensive Care Unit Safety Reporting System (ICUSRS)
Level and type: Centralised, web-based supra-organisational level anonymous incident and near miss reporting scheme that includes approximately 30 departments.
Domain: Healthcare (US): Hospital intensive care units; reporting centre at Johns Hopkins University School of Medicine

System: Joshi et al. (2002) Web-based incident reporting and analysis system
Level and type: Organisation-wide (US healthcare system), multi-institutional; anonymous incident and near miss reporting.
Domain: Healthcare (US): Baylor Healthcare System, Dallas, Texas

System: Le Duff et al. (2005) Incident Monitoring and Quality Improvement Process
Level and type: Hospital level IT system for monitoring incidents and the quality improvement process.
Domain: Healthcare (France): Rennes Hospital, Brittany; Department of Radiology and Medical Imagery

System: Nakajima et al. (2005) & Takeda et al. (2005) Web-based/on-line incident reporting system
Level and type: Institutional/hospital level anonymous incident reporting system and patient safety programme.
Domain: Healthcare (Japan): Osaka University Hospital

System: Oulton (1981) Incident reporting system
Level and type: Supra-organisational level mandatory reporting scheme involving a number of hospitals.
Domain: Healthcare (US): Virginia Hospitals Insurance Reciprocal

System: Parke (2003) Critical Incident Reporting System
Level and type: Hospital department/organisational sub-unit level anonymous incident and near miss reporting system.
Domain: Healthcare (UK): General Intensive Care Unit, Reading

System: Peshek et al. (2004) Voice mail based medication error reporting system
Level and type: Organisational level confidential voice-mail based incident reporting system.
Domain: Healthcare (US): Summa Health System, Akron, Ohio (medication administration)

System: Piotrowski et al. (2002) Safety Case Management Committee process
Level and type: Institutional (medical centre) level safety improvement process.
Domain: Healthcare (US): Veterans Affairs Ann Arbor Healthcare System

System: Poniatowski et al. (2005) Patient Safety Net (PSN) occurrence reporting system
Level and type: Supra-organisational level online occurrence reporting system that accepts incident and near miss data from a number of consortium members.
Domain: Healthcare (US): University HealthSystem Consortium (UHC) with over 90 member hospitals

System: Runciman et al. (2002); Yong et al. (2003) & Beckmann et al. (1993) Australian Incident Monitoring System (AIMS) and associated patient safety initiatives
Level and type: Supra-organisational (multi-site/national) level system; voluntary and anonymous reporting.
Domain: Healthcare (Australia): Initially anaesthesia, followed by all other specialty areas and generic entire hospital systems

System: Schaubhut et al. (2005) Medication Error Reporting system
Level and type: Hospital level reporting system as part of a Medication Administration Review and Safety Committee process (MARS).
Domain: Healthcare (US): East Jefferson Memorial Hospital, Louisiana; nursing medication administration processes

System: Schneider et al. (1994) Severity-indexed medication error reporting system
Level and type: Organisational (hospital) level voluntary reporting system.
Domain: Healthcare (US): Ohio State University Medical Centre; medication errors (pharmacy)

System: Silver (1999) Incident Review Management Process
Level and type: Organisational (institutional) level incident management system.
Domain: Healthcare (US): All specialties

System: Suresh et al. (2004) Medical error reporting system
Level and type: Multi-institutional, supra-organisational level voluntary and anonymous reporting system for medical errors.
Domain: Healthcare (US): Neonatal intensive care; Vermont Oxford Network

System: Tighe et al. (2005) Incident Reporting System
Level and type: Department level incident reporting system.
Domain: Healthcare (UK): London Accident and Emergency Department incident reporting system

System: Webster et al. (2002) Ward medication error reporting scheme
Level and type: Organisational level anonymous reporting system for ward drug administration errors.
Domain: Healthcare (New Zealand): Hospital ward medicine administration

System: Westfall et al. (2004) Web-based patient safety reporting system
Level and type: Supra-organisational/independent system that accepts voluntary reports from a number of rural practices.
Domain: Healthcare (US): Ambulatory primary care for rural and frontier communities

System: Wilf-Miron et al. (2005) Incident Reporting System
Level and type: Organisation level incident reporting system and risk management processes; not anonymous.
Domain: Healthcare (Israel): Maccabi Healthcare Services; ambulatory care service organisation
Figure 2.25: Description and classification of health care incident monitoring and
feedback systems according to features of the SAIFIR framework. The feedback
modes are: Bounce back information (mode A), Rapid response actions (mode B),
Risk awareness information (mode C), Inform staff of actions taken (mode D) and
Systems improvement actions (mode E).
Due to the strict selection criteria for feedback-relevant articles, 100% of the systems
reported possessed mechanisms for closing the safety loop for reported issues,
allowing the implementation of specific actions for improving the safety of care
delivery processes (feedback mode E). Additionally, the majority of systems also
delivered some regular information output (feedback mode C) that was broadly
disseminated to personnel in order to improve awareness of safety issues (91% of
reporting systems described).
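For illustration, the mode coverage percentages reported in this section can be reproduced from a machine-readable version of the classification table. The short Python sketch below is illustrative only: the system names and mode assignments are hypothetical placeholders rather than the actual codings produced by the review.

# Illustrative sketch: computing feedback-mode coverage from a classification table.
# The entries below are hypothetical placeholders, not the review's actual codings.
from collections import Counter

MODES = ["A", "B", "C", "D", "E"]

# Each system is mapped to the set of SAIFIR feedback modes it was coded as employing.
classification = {
    "Example reporting system 1": {"C", "E"},
    "Example reporting system 2": {"A", "B", "C", "D", "E"},
    "Example reporting system 3": {"B", "C", "E"},
}

def mode_coverage(classified):
    """Return the percentage of classified systems employing each feedback mode."""
    counts = Counter(mode for modes in classified.values() for mode in modes)
    total = len(classified)
    return {mode: round(100 * counts[mode] / total) for mode in MODES}

for mode, percentage in mode_coverage(classification).items():
    print(f"Mode {mode}: {percentage}% of systems")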
Fewer reporting systems employed dedicated rapid feedback processes (feedback
mode B – 70%) for fast-tracking and acting upon issues that pose an immediate
threat to safety. This function was largely fulfilled through risk analysis processes
designed to identify the priority for action associated with a particular reported
incident.
This was achieved through consideration of potential frequency and
outcome severity associated with the occurrence. Structural characteristics of the
reporting systems and their degree of centralisation, scope and location relative to
front line operations were also factors that impacted upon the opportunities for rapid
response to individual reported issues. For example, systems in which reports were
screened by local level managers (e.g. Tighe et al., 2005) before being processed by
the higher level risk management system, afforded the opportunity for immediate
local corrective or improvisatory action, prior to a formal response from the risk
management team. Fast track processes should occur at an organisational, rather
than operational level, however, as it is only through a centralised response to
specific issues arising in localised work systems that a considered and widespread
response can be made in all relevant operational scenarios.
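As an illustration of this prioritisation logic, the sketch below scores a reported incident from estimates of likely recurrence frequency and potential outcome severity, in the manner of a simple risk matrix. The scales and the fast-track threshold are assumptions introduced for illustration and are not drawn from any of the systems reviewed.

# Illustrative risk prioritisation sketch: frequency x severity scoring of a report.
# The scales and the fast-track threshold are hypothetical assumptions.
FREQUENCY = {"rare": 1, "occasional": 2, "frequent": 3}          # likelihood of recurrence
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3, "death": 4}  # potential outcome

FAST_TRACK_THRESHOLD = 8  # scores at or above this trigger rapid response (mode B)

def priority_score(frequency, severity):
    """Combine frequency and severity estimates into a single priority score."""
    return FREQUENCY[frequency] * SEVERITY[severity]

def requires_fast_track(frequency, severity):
    """Decide whether a reported incident should be fast-tracked for rapid action."""
    return priority_score(frequency, severity) >= FAST_TRACK_THRESHOLD

# Example: a frequent, severe incident scores 9 and would be fast-tracked.
assert requires_fast_track("frequent", "severe")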
From classification of the case-based research literature it was possible to identify
exemplary implementations of reporting schemes that employed four or even all five
of the feedback modes specified within the SAIFIR framework. Thirteen of the twenty-three reporting systems employed four or more feedback modes and four reported
processes relating to all five individual feedback modes (the four systems are
described in articles by: Gandhi et al., 2005; Holzmueller et al., 2005; Lubomski et
al., 2004; Wu, Pronovost & Morlock, 2002; Joshi, Anderson & Marwaha, 2002; and
Poniatowski, Stanley & Youngberg, 2005). Of the nine other systems that employed
four of the five possible feedback modes, noteworthy commentary on systems
employing effective feedback is offered by: Ahluwalia & Marriott (2005), Nakajima,
Kurata & Takeda (2005) and Wilf-Miron et al. (2003), amongst others.
These
sources may be regarded as models of best practice due to the extent to which they
illustrate comprehensive, integrated and well-planned feedback and communication
processes within incident monitoring schemes. Three specific systems were also
used for illustrative purposes in the Expert Workshop activity within the research
programme (reported in the subsequent section of this report): Gandhi et al. (2005),
Nakajima et al. (2005) and Wilf-Miron et al. (2003).
Figure 2.26 below provides an illustrative example of an architecture for an effective
hospital reporting and risk management system reproduced from an article by
Nakajima et al. (2005). From the diagram it is apparent that the system represents
the complete feedback loop for regulation of hospital safety; the system
encompasses reporting functions, safety committee quality improvement structures
and feedback processes to communicate actions and information back to the front
line. As such, the system represents a process that effectively integrates both the
technical IT systems necessary to handle and archive reported information and the
organisational structures and agents necessary to disseminate and implement
changes.
Figure 2.26: Architecture of a hospital risk management and reporting system
reproduced from Nakajima et al (2005)
Holzmueller et al. (2005) describes the operation of the Intensive Care Unit Safety
Reporting System at Johns Hopkins University in the US. The system accepts data
in the form of incident reports from 18 separate ICUs and provides a comprehensive
programme of safety information feedback to each institution, including regular
“Safety Tips” bulletins (an example of which appears in appendix O of this report).
The key features of the bulletin, designed for posting on staff notice boards, are that
it includes examples of reported incidents of a specific safety theme; it lists the
systems failures identified within the reports and describes the actions that have or
might be taken to address each failure.
The greatest variation in the systems described was found relative to the capability
for feedback to individual reporters, local reporting communities and other relevant
stakeholders with an interest in specific issues passing through the risk management
process. Feedback mode A “bounce back” was only considered in the processes
implemented within 39% of the systems described. Again, this trend appears to be
largely influenced by the scope of the reporting system, the nature and level of
anonymity of reporting mechanisms employed to handle the reporting process and
the specific information technologies that mediate interaction between the reporter
and the reporting system. Accordingly, individual level feedback to reporters is not
always possible due to resource constraints where centralised or even national level
reporting systems are broad in scope and have a high monthly throughput of incident
reports to deal with. Although current IT systems for management of the reporting
process are capable of providing automated responses to reporters, this necessitates
individual access to work stations and not all health care working patterns and
environments yet provide ready access in this manner. It should also be noted that
in systems that employ person-to-person contact during the act of reporting (e.g. Wilf-Miron et al., 2003: telephone-based system of reporting), individual feedback/bounce back of information, immediate advice and support can occur naturally, with the opportunity for the reporter to query expertise within the risk management team relevant to the reported issue.
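Where reporting is mediated by an IT system rather than by person-to-person contact, the automated response referred to above can be as simple as a message generated at the point of submission. The sketch below shows one possible form of such a bounce back acknowledgement; the field names and wording are assumptions for illustration and do not describe any specific system identified in the review.

# Illustrative sketch of an automated "bounce back" acknowledgement (feedback mode A).
# Field names and wording are assumptions, not taken from any reviewed system.
from datetime import date
from typing import Optional

def acknowledge_report(reference: str, summary: str, reporter_contact: Optional[str]) -> str:
    """Compose an acknowledgement message for a newly submitted incident report.

    If the system is anonymous (no reporter contact retained), the message can only
    be displayed at submission time rather than sent as individual follow-up.
    """
    lines = [
        f"Thank you for reporting a patient safety incident on {date.today():%d %B %Y}.",
        f"Reference: {reference}",
        f"Summary received: {summary}",
        "Your report will be reviewed by the risk management team.",
    ]
    if reporter_contact is None:
        lines.append("This report was submitted anonymously; no individual follow-up can be sent.")
    else:
        lines.append(f"Progress updates will be sent to {reporter_contact}.")
    return "\n".join(lines)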
As part of the dissemination of information concerning the operation of the safety
system to front line stakeholders and specific reporting communities, communication
of actions taken in response to a specific reported issue (feedback mode D), as
distinct from dissemination of more general safety awareness information, was
addressed within approximately half of the systems described (52%). The ability to
close a specific safety issue with the original reporter or targeted reporting
community to whom the issue is relevant was found to be influenced by the report
management policy of the organisation involved. Logically, progress updates for
specific reporters may only be implemented where the individual’s and other
identifying details are retained with the incident report, through to the point of
completion of follow-up action. The implication is that both feedback modes A and D
are excluded in totally anonymous reporting systems and mode D is not possible in
reporting systems that de-identify reports following reporting/verification in order to
ensure records entered into the incident database are anonymous.
Anonymous
systems of this type therefore necessitate robust, broad information feedback
channels capable of maximising communication with the reporting community as a
whole.
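The dependence of feedback modes A and D on retained identifying data can be stated as a simple rule. The sketch below expresses the logic described above for three reporting policies (fully anonymous, de-identified after verification, and confidential); the policy labels themselves are assumptions introduced for illustration.

# Illustrative sketch: which SAIFIR feedback modes remain available under a given
# reporting policy. The policy labels are assumptions for illustration.
ALL_MODES = {"A", "B", "C", "D", "E"}

def available_feedback_modes(policy):
    """Return the feedback modes a reporting system can support under a policy.

    'anonymous'     - no identifying data retained at any point: modes A and D excluded.
    'de-identified' - identifiers removed after reporting/verification: mode D excluded.
    'confidential'  - identifiers retained under restricted use: all modes possible.
    """
    if policy == "anonymous":
        return ALL_MODES - {"A", "D"}
    if policy == "de-identified":
        return ALL_MODES - {"D"}
    if policy == "confidential":
        return ALL_MODES
    raise ValueError(f"Unknown reporting policy: {policy}")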
Analysis of existing reporting systems in this manner highlights tendencies within the
design and operation of effective reporting systems to prioritise completion of a
formal organisation-wide safety action/improvement loop and broad safety
awareness initiatives (feedback modes B, C and E) over reinforcing reporting to the
system in the first place. Reporting is reinforced by demonstrating the utility of reporting and keeping stakeholders, such as the original staff involved in the incident, "in the loop" through specific interaction with the reporting system, the provision of intermediary progress updates and the communication of specific actions taken (i.e. feedback modes A and D). As safety feedback must serve the dual role of retrospectively introducing safeguards to prevent recurrence of past incidents, whilst
prospectively fostering the capability to detect novel incidents through voluntary
reporting, the relative lack of attention to this latter aim should be addressed in future
systems.
It should be noted that the results of this attempt to analyse the features of existing reporting systems in international health care domains are limited by the extent to which the structural/process characteristics and practical operational issues relevant to feedback within the systems identified are reported within the available literature. This limitation was mitigated as far as possible through the efforts made in the
development of the search and screening strategy to narrow the search results to
articles that contain this type of information. One possibility for productive future
work may therefore involve further direct contact and discussion with representatives
of the relevant health care reporting systems identified through the review.
4.4 Validation of review findings within an expert workshop on health care
safety feedback
In order to validate the literature review findings and the emerging best practice
framework based upon safety management expertise in high risk domains, an
interactive workshop event was hosted by Coventry University. Seventy-one invited
participants attended, representing high level health care policy makers, expertise in
NHS risk management systems and experts in safety management in high risk
domains.
The event was planned to elicit discussion of the applicability of the
emerging framework within a health care context, to establish current practices in
safety feedback from incident monitoring and to identify the practical issues
associated with learning from experience in NHS Trusts.
The format for the event was planned to be maximally interactive. Following an
introduction and presentation of the review findings, a series of separate focus group
sessions were undertaken.
Elements of the SAIFIR framework were applied to
structure discussion within the focus group sessions of the requirements for effective
NHS feedback systems.
Focus group participants were asked to consider how
current NHS systems related to the features and functions inherent within the
framework and, where they did not, how similar processes could be implemented within an
NHS Trust setting.
Consideration of safety feedback within a health care context provoked the
discussion of many issues concerned with incident reporting and learning within NHS
Trusts. Developing effective safety feedback processes for incident monitoring was
considered to represent a significant challenge for NHS trusts due to the high volume
of reports received at the local organisational level, especially in acute care settings.
The large number of potential safety issues for processing by local risk management
systems emphasises the need for effective prioritisation and investigation of
incidents. Prioritisation of incidents for further action often relied upon severity of
outcome rather than the likelihood of recurrence or potential for learning.
Regarding the content and interpretation of feedback from incident monitoring, it was concluded
that reporting rates were limited in their ability to represent inherent safety and were
at best an indicator of the openness of the reporting culture that exists within the
organisation.
It was difficult to compare safety on the basis of reporting rates
between services within a trust or between trusts themselves due to variability in
reporting rates and key differences in the nature of data collected between care
settings. Observed trends in data associated with reporting rates were therefore
difficult to interpret and act upon.
Regarding the structure of reporting systems, the issue of multiple configurations of
reporting systems in health care was raised.
There are often multiple reporting
channels to both specialty incident reporting schemes and generic reporting schemes
operated at an organisational level. Different structures have differing requirements
for reportable incidents and specialty-based systems sometimes failed to share
lessons learnt outside the specific professional discipline in which the incident
originated.
Narrow feedback limits the generalisation of best practices to whole
health care systems and contributes to isolated clusters of capability development
within the organisation’s operations.
Such considerations highlight the role of a
centralised system in broad data analysis and feedback of improvements on a
sector-wide level.
In terms of safety feedback as a requirement for NHS risk management systems, the
general opinion emerging from discussions at the workshop was that feedback
presently was initiated on an ad hoc basis. With attention focused upon reporting
mechanisms, feedback was largely considered retrospectively and processes for
feedback were not an integral part of the risk management and reporting systems
established in the majority of trusts. This problem is exacerbated by the apparent
current remoteness of risk management from front line interests (most investigation
and feedback being undertaken through Directorates). A further problem may be due
to a narrow view of feedback in the NHS, which was considered to focus upon the
dissemination of information concerning adverse events, rather than a comprehensive solution development, implementation and evaluation process geared towards making meaningful, targeted improvements in work systems.
In assessing the value of the SAIFIR framework, broadly speaking the model and
component processes for learning from safety incidents provided a useful means of
deconstructing the complex concept of safety feedback to a level at which
individual elements of the broader system could be considered and discussed in
detail. Specifically, the framework provided a model of best practices against which
current NHS systems could be assessed in terms of the capabilities of existing roles
and organisational structures.
Group discussions identified three levels of immediate feedback or “bounce-back” to
the reporter following submission of a report, relating to feedback mode A within the
SAIFIR framework:
1. Acknowledgement of receipt, thank you and encouragement to reinforce future reporting.
2. Clarification and classification of the incident details when the reporter interacts with the risk manager.
3. Safety enhancing information sent back to the reporter.
The first two levels of feedback were considered to occur in NHS reporting schemes
but no specific examples were offered of the third level. There was concern that
without deeper analysis, the wrong causes may be assumed and inappropriate
solutions generated. Further important issues raised concerning immediate feedback
to reporters included the fact that the reporter of an incident may not always be the
problem owner and that the system must be able to accommodate feedback to
reporters who may not be trust employees, such as members of the public.
Rapid action in response to a safety incident (feedback mode B within the
framework) may occur prior even to reporting of the incident, through the involvement
of departmental management in overseeing the immediate handling of the incident.
Rapid responsive actions are usually reserved for more serious events within the
NHS and it is important to develop appropriate decision criteria or protocols
governing when immediate responsive action is warranted. The immediate response
to serious incidents in a health care setting might include the removal of implicated
equipment or the suspension of staff involved pending enquiry. This latter measure
may be necessary in some cases but is highly undesirable because it can create the perception that staff debriefings following an incident are punitive. A further potential problem is that rapid response on the basis of incident severity involves a pre-judgement concerning the impact and implications of the incident, before a more detailed investigative process can take place to establish these.
Regarding implementation of capability for rapid responses within the NHS,
dissemination of notifications that an event has occurred may take place through the
existing Safety Alert Broadcast System (SABS) implemented within Trust risk
management systems.
The development of appropriate rapid actions could be
supported by checklists, definition of a rapid action team including access to critical
staff and the necessary authority to effectively escalate the issue for executive level
action. Incidents involving external agencies may prove a challenge for rapid response, especially where external experts will have to gain access to the organisation. With formal protocols for rapid responses in place, other quality
review mechanisms might trigger the process, such as morbidity and mortality
conferences.
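Decision criteria of this kind can be made explicit as a small set of rules within the rapid response protocol. The following sketch is a hypothetical illustration of how agreed criteria might determine whether an incident triggers a rapid local response and notification through an alert channel such as SABS; the criteria, roles and actions shown are assumptions rather than an account of any existing trust protocol.

# Hypothetical sketch of decision rules for triggering a rapid response (mode B).
# The criteria, roles and notification steps are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Incident:
    severity: str                  # e.g. "minor", "moderate", "severe", "death"
    ongoing_risk: bool             # is the hazard still present in the work system?
    involves_external_agency: bool

def requires_rapid_response(incident):
    """Apply agreed decision criteria for fast-tracking an incident."""
    return incident.severity in {"severe", "death"} or incident.ongoing_risk

def rapid_response_actions(incident):
    """Assemble the immediate actions defined in this hypothetical protocol."""
    actions = []
    if requires_rapid_response(incident):
        actions.append("Notify departmental management and the rapid action team")
        actions.append("Broadcast an alert through the trust alert channel (e.g. SABS)")
        actions.append("Escalate to executive level where authority to act is needed")
        if incident.involves_external_agency:
            actions.append("Arrange access and liaison for external agencies/experts")
    return actions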
Newsletters are a popular form of information dissemination (feedback mode C)
within NHS trusts to raise staff awareness of specific safety issues.
Many newsletters are produced and circulated, which can be very resource intensive, and there is little evaluative evidence concerning the impact of newsletter circulation on learning and operational safety.
Other methods of communicating safety critical
information and learning identified included: action learning sets, alerts, roadshows,
debriefings following significant untoward incidents and regular management
reviews, though again the impact of these forms of feedback upon behaviour are
unknown. Regarding content, it was stressed that safety information feedback should be tailored to the preferences of specific audiences, in terms of the interests of different professional groups and the emphasis placed upon narrative over data-based information. It was agreed that summaries of multiple event analyses were useful when disseminated to staff.
In considering processes for developing and implementing safety solutions in clinical
work systems (feedback mode E within the framework), it was agreed that feedback
should include examples of changes resulting from the investigative process and
their impact upon safety. The dissemination of information on actions taken by the
system relates to feedback mode D within the SAIFIR framework.
Strategic Health Authorities have accumulated information on significant untoward
incidents, but they vary in the extent to which this information is used to drive
improvements in health care organisations on a local level. Learning from specific
incident investigations could be promoted through wider dissemination of findings
regarding how to improve operational safety. The view was expressed that currently
the NHS engaged in overly localised and isolated improvement efforts in specific
systems of care delivery, thus limiting safety improvement both throughout single
organisations and across multiple organisations. Comparative system level learning
should take place through centralised agencies such as the NPSA. It was thought
that the NHS currently micro-manages solutions and should in future focus upon
providing more general guidance and allow more discretion to individual
organisations as to how operational safety issues are resolved. Attention also needs
to be given to how the sustainability and effectiveness of solutions implemented in
practice may be evaluated.
Finally, it was evident from discussions concerning the whole safety feedback cycle
at the workshop that further effort was warranted in ensuring that local NHS risk
management systems actually complete the safety loop and deliver effective
improvements that address reported issues. Reporting was regarded as an end in itself, with considerable effort devoted to the development of incident
reporting systems and relatively too little consideration given to how the data
generated by such systems could be productively used to improve safety. With this
in mind, the framework developed through the review provides one possible means
of addressing the concerns raised at the workshop in that it outlines a complete and
integrated reporting and feedback solution.
4.5 Limitations in the current study and opportunities for future research
In interpretation and extrapolation of the findings from this study, the inherent
limitations of the research method and study design should be noted. A practical
scoping study design, with theory building elements, was chosen over a more
conventional review for a number of reasons.
The main rationale was that this
design allowed the best use of the literature available to support the research
question whilst allowing fulfilment of the research requirement to develop models of
best practice and attempt to synthesise the emerging information into a useful
framework.
As a review of published evidence and expert knowledge, this scoping study must be
classed as secondary research, to the extent that it did not engage in primary
activities such as the direct empirical or observational investigation of existing health
care systems.
Although some attempts have been made within this report to
describe existing systems that serve the functions of safety feedback, the review
team relied upon published policy and expert opinion from those with first hand
knowledge of the systems in question, rather than direct objective measurement, to
obtain the data and information upon which the review is based. The twenty-three
health care incident monitoring systems identified and analysed within the review
represent reported case studies of specific implementations, for which the review
team relied upon the original authors’ accounts.
Early in the review a lack of published research that directly addressed the question
of effective feedback processes for safety information systems was identified, with
the bulk of available research evidence falling into the “reported case
implementations” category. Although these articles are empirical in the sense that
they describe the direct, “real-world” study of existing organisational systems, they
lack rigorous experimental design, partly due to the complexity of studying
organisational level phenomena. Due to this limitation, aggregation of primary study
findings is only possible through narrative synthesis and iterative refinement of the
grounded assertions and conclusions from the reviewed sources. Consequently,
section 4 of this report relies upon information synthesis of this type, rather than any
formal quantitative data extraction and meta-analysis. It should also be noted that
the lack of experimental research designs in this area means that any systematic
search strategy must rely upon content-based inclusion of articles, rather than
exclusion of “low rigour” or “low quality” study designs. The review highlights
the value of including grey literature in answering research questions of this type. Fifty-three articles were added to the systematic search results from preliminary non-systematic internet searches, hand searches of reference lists and literature identified from consultation with experts. This literature included NHS policy documents and research reports (16), published reference sources such as scholarly books, chapters and theses
(12), and other grey literature (3). The sources that fall outside the peer-reviewed journal article classification proved particularly valuable in answering the research question. This was due to the practical, operational nature of the issue of feedback and safety management systems in health care and the resulting lack of literature
reporting studies using controlled experimental research designs. Much of the
practical operational knowledge concerning the development of effective safety
management systems was developed in non-health care sectors such as civil
aviation and was not subject to the same extent of published empirical evaluative study as
in health care. Consulting scholarly sources written by leading academics capable of
providing an overview across domains on established risk and error management
practice was therefore important. The research question also required some
conceptual development to define the research object of interest to the review and
the target processes and areas for subsequent systematic searches. The non-empirical literature, reports, policy statements and conceptual articles/theses that
contained theoretical frameworks, process models and the like were essential in
developing these views and the definition of the research area. Our own models and
feedback framework, used to subsequently analyse case studies of reporting
systems in the literature and in actual NHS systems, were greatly informed by this
grey literature.
This literature represents a broad scope of application domains, encompassing
disciplinary areas such as (operations) management and safety sciences that can be
brought to bear upon the issue of safety improvement and feedback processes. A
considerable quantity of potentially useful material of this type was scoped and
mapped through the review effort, in order to establish background information to the
question of what made an effective safety feedback system and define the existing
evidence base for developments in this area.
The reliance upon authors’ reports of implemented health care reporting systems as
a means of establishing best practices in an area has its own limitations. Omissions
on the part of the authors, differing conceptual focus of articles and imposed
restrictions such as those concerning article length in peer reviewed journals may
have influenced the accuracy of descriptions of the feedback processes implemented
for individual reporting systems. Future research might therefore productively seek
further elaboration on the feedback elements of the health care reporting systems
described within this report, through more objective means of analysis, such as direct
contact, case study site visits and interviews with key personnel responsible for each
system. Regarding consultation with subject matter experts on safety management
methods in high risk domains, there is much practical information to be gained
regarding the requirements for engineering effective feedback processes, which exists
in the form of operational knowledge gained through experience of managing incident
reporting systems. Such expertise represents a considerable resource for future
efforts to improve safety systems in health care.
The SAIFIR framework developed through synthesis of practical information gained
within the review proposes a model for effective integrated safety reporting and
feedback processes for local organisations’ risk management systems.
As a
reference for best practice, the features of the framework may contribute to future
audit or assessment criteria for appraising risk management systems. In this sense,
the presence of all feedback modes outlined within the SAIFIR framework, for
example, might provide an indicator of effective organisational learning processes.
The utility of the framework as a practical architecture for the design of reporting and
learning systems remains to be tested and future research might productively seek to
learn from instantiation of the model in a specific organisational scenario. Further
development and elaboration of the processes embodied within the framework could
then be undertaken based upon direct observation of operational issues and practical
feasibility.
Having established the research evidence base and conceptual underpinnings for
safety feedback, there is currently considerable scope for further empirical
investigation of the effectiveness of risk management processes and reporting
systems for promoting patient safety. The limited sophistication of current research
designs for organisational level interventions, such as the introduction of incident
reporting and quality improvement systems, may be redressed through the use of
more advanced designs for organisational studies and operations research. Such
research may exploit opportunistic sampling of current localised patient safety
improvement initiatives and new system implementations, in quasi-experimental
comparative and longitudinal designs.
Through these types of design, limited
experimental control may be exerted for quantitative measurement of key variables
pre- and post-implementation, and the effects observed upon certain outcome
measures such as voluntary reporting rates and available indicators of operational
safety. The impact of feedback mode and method of delivery on reporting behaviour
and operational safety may therefore be tested over time.
Further practical empirical work could productively be aimed at the comparative study
of the effectiveness of different formats and modes of delivery for safety information
feedback, to address the question of which type of feedback should be implemented,
in which work settings, for specific professional groups. Such research must seek to
overcome the inherent problems of assessing the effectiveness of interventions
designed to improve operational safety.
The difficulty lies in separating out the
effects of such initiatives from the incremental improvements in clinical practice and
technologies that must also have contributed to improved patient safety (AHRQ,
2001).
Other practical questions associated with the delivery of safety feedback and the
operation of the organisational learning process require further attention. What is
the optimum cycle time for the safety issue detection, investigation and improvement
process, which ensures adequate solutions are implemented within as short a time
period from the detection of system vulnerabilities as possible? How should these
vulnerabilities be assessed in terms of risks to patient safety in the interim period?
What metrics or process measures can be monitored as an indicator of the
effectiveness of the safety issue management and feedback process?
The
establishment and monitoring of integrated incident reporting and learning processes
within local level NHS trusts, of the type described within this report, should
contribute to the development of effective organisational memories for the
management of key safety knowledge concerning local operations. Such systems
might learn from not only the occurrence of specific failure modes in care delivery
systems, but from practical experience in the development of corrective actions over
time, so that specific types of solutions can be understood as appropriate for specific
types of systems failures.
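One way to make the cycle time question above operational is to record a timestamp as an issue passes each stage of the safety issue management process and to monitor the elapsed time from detection to implemented solution. The sketch below illustrates such a process measure; the stage names, dates and data structure are assumptions for illustration only.

# Illustrative sketch: monitoring cycle time through the safety issue management process.
# Stage names, dates and the data structure are assumptions, not from any specific system.
from datetime import date

# Timestamps recorded as a single (hypothetical) issue moves through the feedback loop.
issue_history = {
    "reported": date(2006, 1, 10),
    "analysed": date(2006, 1, 24),
    "solution_agreed": date(2006, 2, 14),
    "implemented": date(2006, 3, 20),
}

def cycle_time_days(history):
    """Elapsed days from detection of a vulnerability to an implemented solution."""
    return (history["implemented"] - history["reported"]).days

def stage_durations(history):
    """Days spent between successive stages, a possible process measure to monitor."""
    stages = list(history)
    return {f"{a} to {b}": (history[b] - history[a]).days for a, b in zip(stages, stages[1:])}

print(cycle_time_days(issue_history))  # 69 days for this hypothetical issue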
Just as future research work might build upon the current review to look at aspects of
safety feedback on a finer-grained level of analysis, feedback systems also fit into a
broader, macro-level view of safety in health care organisations. Such a programme
of work might take as its starting point established theory in the area of so-called
High Reliability Organisations (e.g. Weick & Sutcliffe, 2001; Roberts, 1989; 1990;
Reason, 1997), that maintain track records of consistent failure-free performance in
the face of high risk operations due to variable situational and task conditions. From
this perspective, effective safety issue detection, analysis and feedback systems
form one prerequisite for high operational safety capability or “resilience”, along with
other training, procedural and cultural components.
Whilst this current work has identified characteristics of effective safety feedback
mechanisms and processes for health care systems, it remains for future
investigation to establish how organisations may evolve over time to develop these
process capabilities in an effective manner. The important question here is one of
practical implementation and integration: how do health care organisations effectively
develop and deploy systems that promote continued operational safety?
Future
research efforts to answer this question might productively seek to establish a road
map to mature safety management processes for health care, based upon lessons
learnt in high risk industrial operations. Such efforts will encourage the development of commonly understood, effective, actionable and ultimately measurable organisational processes and systems for improving operational safety.
2.5 - Summary of findings and recommendations
In conclusion and in accordance with the original research aims, the review sought to
investigate the feedback processes by which health care organisations might
respond locally to reported patient safety incidents and effectively close the safety
loop. In response, the review focused upon organisational processes for learning
from the analysis of reported incidents, through the dissemination of information to
raise safety awareness and implementation of targeted improvements in front line
work systems. This was achieved through synthesis of information and description of
emerging best practice in this area, based upon the available literature and practical
experience in both high risk industry and international health care.
Aside from any specific conclusions and recommendations regarding current NHS
reporting and learning systems, the main outputs from the research effort include
several resources representing structured interpretation of the available information
and expertise in the safety management domain. The main review outputs contained
within this report are summarised within figure 2.27 below.
Key review aim
To consider the characteristics of effective safety feedback from
incident reporting schemes and the requirements for effective incident
monitoring and learning systems at the level of local health care
organisations.
Principal research outputs
1. Scoping review and classification of 193 individual sources, relevant to
safety feedback from incident reporting, identified through a
comprehensive search of the available literature (involving screening of
some 2000 records for relevance).
2. Refinement and description of 15 requirements for the design of
effective safety feedback systems for high risk operations, based upon
subject matter expertise from multiple application domains.
3. Development of a framework for Safety Action and Information
Feedback from Incident Reporting (SAIFIR), including description of five
distinct modes of action and information feedback and how they map
onto a generic safety issue management process for organisational
level risk management systems.
4. Description and analysis (using the SAIFIR framework) of 23 healthcare
incident reporting systems that include feedback processes for learning
from failure.
Figure 2.27: Key aim and outputs from the review
The review establishes that the topic of organisational level feedback processes is an
important one for effective patient safety management in healthcare.
From
information gained through consultation with subject-matter experts in the areas of
safety management and reporting systems in high risk domains, it was possible to
identify 15 individual requirements for effective feedback systems based upon
practical knowledge of the operation of effective safety management systems. These
requirements comprise a series of recommendations for organisational level safety
feedback processes, based upon expert opinion regarding best practices in this area.
As such they are generic, independent of any specific domain of application, and apply equally to health care and to industrial or transport operations. The
requirements for the design of effective safety feedback processes are summarised
in the 15 statements within figure 2.28 below. It should be noted that, in accordance
with the research rationale, the term “feedback” refers to both corrective action and
information on operational risks.
System requirements for effective safety feedback
01 Feedback loops must operate at multiple levels of the organisation or system
02 Feedback should employ an appropriate mode of delivery or channel for
information
03 Feedback should incorporate relevant content for local work settings
04 Feedback processes should be integrated within the design of safety information
systems
05 Feedback of information should be controlled and sensitive to the requirements
of different user groups
06 Feedback should empower front line staff to take responsibility for improving
safety in local work systems
07 Feedback should incorporate rapid action cycles and immediate comprehension
of risks
08 Feedback should occur directly to reporters and key issue stakeholders as well
as broadly to all front line staff
09 Feedback processes should be well-established, continuous, clearly defined and
commonly understood
10 Feedback of safety issues should be integrated within the working routines of
front line staff
11 Feedback processes for specific safety improvements are visible to all front line
staff
12 Feedback is considered reliable and credible by front line staff
13 Feedback preserves confidentiality and fosters trust between reporters and
policy developers
14 Feedback includes visible senior level support for systems improvement and
safety initiatives
15 Feedback processes are subject to double-loop learning to improve the
effectiveness of the safety control loop
Figure 2.28: 15 requirements for effective safety feedback systems
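As noted in the discussion of future research above, these requirements could contribute to audit or assessment criteria for appraising a trust's risk management system. The minimal sketch below shows one way such a checklist might be scored; the abbreviated requirement labels paraphrase the statements in figure 2.28, and the yes/no scoring approach is an assumption for illustration.

# Minimal sketch: scoring a risk management system against the 15 feedback
# requirements as an audit checklist. The yes/no scoring approach is illustrative only.
REQUIREMENTS = {
    1: "Feedback loops operate at multiple levels",
    2: "Appropriate mode of delivery or channel",
    3: "Relevant content for local work settings",
    4: "Feedback integrated within safety information systems",
    5: "Controlled and sensitive to different user groups",
    6: "Empowers front line staff to improve local systems",
    7: "Rapid action cycles and immediate comprehension of risks",
    8: "Direct feedback to reporters and stakeholders as well as all staff",
    9: "Well-established, continuous, clearly defined, commonly understood",
    10: "Integrated within working routines of front line staff",
    11: "Specific improvements visible to all front line staff",
    12: "Considered reliable and credible by front line staff",
    13: "Preserves confidentiality and fosters trust",
    14: "Visible senior level support",
    15: "Subject to double-loop learning",
}

def audit_score(assessment):
    """Percentage of the 15 requirements judged to be met (True) in an assessment."""
    met = sum(1 for req_id in REQUIREMENTS if assessment.get(req_id, False))
    return 100 * met / len(REQUIREMENTS)

# Hypothetical example: a system judged to meet requirements 1 to 9 only.
example_assessment = {req_id: req_id <= 9 for req_id in REQUIREMENTS}
print(f"{audit_score(example_assessment):.0f}% of requirements met")  # 60%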
Drawing upon the requirements for effective feedback systems and review of
research literature reporting various case implementations of reporting and learning
systems, a best practice framework for this area was developed (the framework for
Safety Action and Information Feedback from Incident Reporting, SAIFIR).
The potential value of the SAIFIR framework lies in its adoption of a holistic position
on integrated risk management processes for learning from reported safety incidents.
In so doing, it offers an integrated view of three important elements: 1) local reporting
system functions, 2) committee investigation, quality improvement and action
mechanisms, and 3) the multiple feedback functions that ensure the risk
management system effectively interacts with front line operations. The framework
includes both action and information feedback processes: action to address vulnerabilities in work systems identified through analysis of reported incidents, and information delivered to front line personnel to raise awareness of current risks to patient safety. In terms of individual feedback processes developed from the review,
five specific feedback modes are identified within the framework (see figure 2.29
below).
Feedback modes from the SAIFIR framework
Mode A
Bounce back information: Acknowledgement and debriefing of reporter
immediately following report submission.
Mode B
Rapid response actions: Measures taken against immediate and serious
threats to safety (issue is fast-tracked through the process).
Mode C
Risk awareness information: Broad dissemination of safety awareness
information to all front line staff on current system vulnerabilities (through
newsletters and other channels of distribution).
Mode D
Inform staff of actions taken: Reporting back to the reporter and reporting
community on issue progress and actions taken based upon reports.
Mode E
Systems improvement actions: Development and implementation of specific
action plans for improvements to work systems that address specific
contributory factors identified through analysis of reported issues.
Figure 2.29: Five modes of feedback from incident monitoring
The framework and requirements developed through the research represent a
synthesis of the available research evidence and established best practices in safety
feedback. This provides guidance for healthcare towards the development of mature
and capable risk management systems that include effective organisational systems
for learning from reported incidents. In this sense, the SAIFIR framework represents
a possible implementation of the safety feedback or control loop for continuous
improvement in health care organisations.
A further important feature of the
framework is that it provides a structured process for dialogue between the reporting
community and those responsible for incident monitoring, investigation and corrective
action.
Twenty-three reported implementations of exemplary incident reporting and feedback
systems within the international health care literature were identified during the
review, thirteen of which were found to employ four or more of the five feedback
modes described within figure 2.29 above. From analysis of these systems it was
possible to identify many possible feedback mechanisms for safety actions and
information, which might be implemented within the context of UK NHS
organisations. It was found that the most popular form of reported feedback from
incident reporting was through the distribution of safety newsletters (91% of the
systems reviewed, e.g. Beasley, Escoto & Karsh, 2004; Joshi, Anderson & Marwaha,
2002; Parke, 2003; Tighe et al., 2005, amongst others). Examples of other forms of
feedback are included within figure 2.30 below.
Safety action and information feedback mechanisms for incident monitoring systems
• Targeted and timely systems improvement solutions to address gaps identified through incident analysis (Joshi, Anderson & Marwaha, 2002).
• Implementation of urgent improvement actions for high risk issues within a short timescale (Nakajima et al., 2005).
• In-depth analysis and corrective action undertaken by quality assurance committee (Oulton, 1981).
• Policy development on basis of incident data fed into clinical steering group process (Westfall et al., 2004).
• Feedback notes for medical devices (Amoore & Ingram, 2002).
• Automated feedback of individual performance data to the reporting physician (Bolsin, 2005; Bolsin et al., 2005).
• Email distribution to all front line staff of summary improvements made (Gandhi et al., 2005).
• Staff bulletin board postings with safety issues raised and actions taken (Lubomski et al., 2004).
• Targeted staff education programmes linked to professional accreditation scheme (Takeda et al., 2003).
• Patient safety seminars to inform staff of findings from reporting systems (Nakajima et al., 2005).
• Clinical risk management monthly meetings and cascade to staff (Nakajima et al., 2005).
• Benchmarking of frequency of incident types between comparable institutions and with national averages (Oulton, 1981).
• Development of manuals on error prevention based upon lessons learnt from reporting schemes (Wilf-Miron et al., 2003).
• Periodic conferences or presentations of findings from incident monitoring within specific departments (Parke, 2004).
• Targeted training schemes and error awareness programmes (Schaubhut & Jones, 2000).
• Immediate notification reports of incident occurrence circulated throughout the institution (Silver, 1999).
• One-to-one telephone debriefing with reporter (Wilf-Miron et al., 2003).
Figure 2.30: Examples of further forms of safety feedback mechanisms
for incident monitoring
In order to further validate the framework and investigate specific requirements for
safety feedback in current NHS risk management systems, an expert workshop event
for health care was organised and undertaken as part of the review work.
The
SAIFIR model proved to be a useful framework for identification and discussion of
potential gaps between best practices in operational safety feedback and current
NHS practices. This was found to be due to a range of issues, both concerning
practical barriers and capability limitations associated with current systems. Figure
2.31 below outlines the main limitations of current NHS safety feedback systems, as
identified through the expert workshop and related activities.
Limitations of current NHS safety feedback systems
1. Considerable attention has been given to reporting systems and processes; it is
less clear as to how the information from incident reporting and analysis can be
used effectively to improve patient safety in care delivery processes, through the
development of safety monitoring systems that employ integrated feedback
mechanisms.
2. A narrow view of safety feedback as information newsletters distributed to care
providers is often adopted, rather than considering the use of information to
support the corrective action and quality improvement process.
3. Further consideration of the uses of reported incident data in safety feedback
should be undertaken, as interpretation of reporting rates is problematic,
especially for benchmarking between organisations or monitoring trends within an
organisation over time.
4. Current systems for feeding back information to non-trust employees and the
public are limited.
5. The impact of feedback content and mode of delivery upon risk awareness,
learning and operational safety is not currently understood or based upon
evaluative evidence.
6. Currently the NHS engages in overly localised and isolated improvement efforts in
specific systems of care delivery, thus limiting application of safety lessons both
throughout single organisations and across multiple organisations.
Figure 2.31: Current limitations of NHS safety feedback systems
Expert opinion considered that presently, feedback was initiated on an ad hoc basis
and not effectively integrated within the design of current risk management and
reporting systems within the NHS.
With attention focused upon reporting
mechanisms, feedback was largely considered retrospectively and processes for
feedback were not an integral part of the risk management and reporting systems
established in the majority of trusts. A further problem was identified due to a narrow
view of feedback in the NHS, which was considered to focus upon the dissemination
of information concerning adverse events, rather than a comprehensive solution
development, implementation and evaluation process designed to deliver targeted
improvements in work systems.
In accordance with a recent study of systems for learning from failure in health care
organisations, undertaken by the National Audit Office (2005), the issue of multiple
reporting and feedback channels in current systems was also raised as a potential
problem area. It was suggested that simplification of information flows and systems
for reporting and feedback should be considered, along with use of common
channels for feeding back information from both organisational and supra-organisational/external incident databases, at a local level.
Through consideration of the aforementioned research findings and the key themes
emerging from the review it is possible to identify a series of recommendations to
promote effective safety feedback in future NHS systems. These recommendations
are summarised within figure 2.32 below, and are discussed in terms of specific recommended actions for NHS trusts, SHAs and the NPSA in the integrative
summary section of this report, including a checklist for trust risk managers.
Recommendations for future NHS systems
a) Comprehensive information and action feedback processes, such as those
described within the SAIFIR framework, to be integrated within all local NHS risk
management and reporting systems, according to a common framework.
b) Focus upon development of a common, defined process, with defined
responsibilities and structures, which effectively closes the safety loop in NHS
organisations. This process must include reporting, incident analysis and
investigation, solutions development, implementation and continued monitoring of
effectiveness of corrective actions.
c) Simplification of current system of multiple reporting and feedback channels to
organisational, supra-organisational and external agencies. Use of common
feedback channels for local and national level information and alerts.
d) Development of clear policies with supporting criteria for decision making
concerning what level of feedback or action to instigate in response to specific
safety incidents, including definition of which incidents require a rapid local
response to prevent further harm to patients.
e) Definition of clear protocols governing local and trust-wide action to be taken for
rapid response scenarios.
f) Integration of popular local level quality review processes, such as morbidity and
mortality conferences, within a common safety improvement process that includes
incident reporting.
g) Reallocate effort spent in local level risk management systems from collecting
reports to the development of effective improvements for implementation.
h) Safety feedback should be flexible in content and mode of delivery, and tailored to the
target audience, to ensure ease of uptake by varying professional and stakeholder
groups with differing information and practical requirements.
i) Feedback channels should be rapid, repeatable and visible.
j) Information fed back to front line staff must include examples of changes resulting
from the investigation of reported incidents and their impact upon safety, in
addition to reported incident rates, if future reporting is to be encouraged.
k) Effective integration of local organisational safety feedback loops with centralised
systems at the supra-organisational level must be achieved if all health care
organisations are to benefit from operational experience in a single organisation.
This should reduce the incidence of repeated identical failures in multiple care
settings, which is evidence of a current inability to achieve system-wide learning.
l) Use of information technology to facilitate access, for all relevant stakeholders, to
incident data, analyses and lessons learnt. Also to promote integrated, automated
feedback as far as is practically possible into the design of existing risk
management information systems.
m) Visible support of feedback process from both senior management and local
clinical leadership to demonstrate the importance of safety issues generally and in
achieving effective uptake and implementation of specific systems solutions and
improvements.
Figure 2.32: Recommendations for enhanced safety feedback systems in
UK healthcare based upon the review findings
In summary, the field of effective feedback from safety information systems is an
important area for development and is central to the ultimate effectiveness and
viability of incident reporting systems in health care organisations.
The specific
organisational processes that define how incident reporting, investigation and data
analysis translate into safer care delivery require further attention from those that
would seek to engineer safer organisations. It is hoped that the practical findings and
recommendations contained within this report will support this objective and
contribute to the establishment of effective safety monitoring systems with integrated
feedback, for learning and incremental improvement in health care systems.
2.6 - References and Included Articles
The following bibliography incorporates all articles and literature sources included
within the review, following screening. Further description of included articles may be
found within Appendices G and H.
Adams, T. D. & Burleson, K. W. (1992). Continuous quality improvement in a
medication error reporting system. P & T.Vol.17(6), 1992..
Ahluwalia, J. & Marriott, L. (2005). Critical incident reporting systems. [Review] [15
refs]. Seminars In Fetal & Neonatal Medicine.10(1):31-7.
AHRQ (2001). Making Health Care Safer: A Critical Analysis of Patient Safety
Practices (Report 43) Rockville, MD: Agency for Healthcare Research and Quality.
Amoore, J. & Ingram, P. (2002). Quality improvement report: Learning from adverse
incidents involving medical devices.[see comment]. BMJ.325(7358):272-5.
Anderson, D. J. & Webster, C. S. (2001). A systems approach to the reduction of
medication error on the hospital ward. Journal of Advanced Nursing, 35, 34-41.
Anon. (1997). Minimize risk and costly adverse events with data gathering, reporting
program. Data Strategies & Benchmarks. 1(3):44-8.
Anon. (2005). Survey finds hospitals lagging behind on safety. Healthcare
Benchmarks & Quality Improvement. 12(1):8-9.
Aspden, P., Corrigan, J. M., Wolcott, J., & Erickson, S. M. (2004). Patient Safety:
Achieving a New Standard for Care. Washington: Institute of Medicine.
Avery, J., Beyea, S. C., & Campion, P. (2005). Active error management: use of a
Web-based reporting system to support patient safety initiatives. Journal of Nursing
Administration.35(2):81-5.
Bagian, J. P., Lee, C., Gosbee, J., DeRosier, J., Stalhandske, E., Eldridge, N. et al.
(2001). Developing and deploying a patient safety program in a large health care
delivery system: you can't fix what you don't know about. [Review] [7 refs]. Joint
Commission Journal on Quality Improvement. 27(10):522-32.
Barach, P. & Small, S. D. (2000). Reporting and preventing medical mishaps:
Lessons from non-medical near miss reporting systems. British Medical Journal, 759-763.
Barach, P. & Small, S. D. (2000). How the NHS can improve safety and learning. By
learning free lessons from near misses. BMJ.320(7251):1683-4.
Battles, J. B., Kaplan, H. S., Van der Schaaf, T. W., & Shea, C. E. (1998). The
attributes of medical event-reporting systems: experience with a prototype medical
event-reporting system for transfusion medicine.[see comment]. Archives of
Pathology & Laboratory Medicine. 122(3):231-8.
Beasley, J. W., Escoto, K. H., & Karsh, B. T. (2004). Design elements for a primary
care medical error reporting system. WMJ. 103(1):56-9.
Beckmann, U. & Runciman, W. B. (1996). The role of incident reporting in continuous
quality improvement in the intensive care setting.[comment]. Anaesthesia & Intensive
Care. 24(3):311-3.
Beckmann, U., West, L. F., Groombridge, G. J., Baldwin, I., Hart, G. K., Clayton, D.
G. et al. (1996). The Australian Incident Monitoring Study in intensive care: AIMS-ICU. The development and evaluation of an incident reporting system in intensive
care. Anaesthesia and Intensive Care, 24, 314-319.
Benner, L. & Sweginnis, R. (1983). System Safety's Open Loops. Hazard Prevention,
19.
Benson, J. (2000). Incident reporting: a vital link to organizational performance.
[Review] [0 refs]. Home Healthcare Nurse Manager. 4(4):6-10, -Aug.
Billings, C. (1998). Some hopes and concerns regarding medical event-reporting
systems: lessons from the NASA Aviation Safety Reporting System.[comment].
Archives of Pathology & Laboratory Medicine. 122(3):214-5.
Billings, C. (1998). Incident Reporting Systems in Medicine and Experience With the
Aviation Safety Reporting System. In R.I.Cook, D. D. Woods, & C. Miller (Eds.), A
Tale of Two Stories: Contrasting Views of Patient Safety: Report from a Workshop on
Assembling the Scientific Basis for Progress on Patient Safety (National Patient
Safety Foundation, AMA.
Bird, D. & Milligan, F. (2003). Adverse health-care events: Part 3. Learning the
lessons. [Review] [7 refs]. Professional Nurse. 18(11):621-5.
Bird, D. & Milligan, F. (2003). Adverse health-care events: Part 2. Incident reporting
systems. Professional Nurse. 18(10):572-5 .
Bolsin, S. N. (2005). Using portable digital technology for clinical care and critical
incidents: a new model. Australian Health Review, 29, 297-305.
Bolsin, S. N., Patrick, A., Colson, M., Creatie, B., & Freestone, L. (2005). New
technology to enable personal monitoring and incident reporting can transform
professional culture: the potential to favourably impact the future of health care.
Journal of Evaluation in Clinical Practice, 11, 499-566.
Boyce, T., Howard, R., King, K., Milliken, B., O'Donnell, S., Redpath, J. et al. (2004).
Illustrations of strategies to reduce medication errors and near misses. Pharmacy in
Practice. Vol.14(5) (pp 134-136), 2004., 134-136.
Boyce, T. J., King, K. S., Milliken, B. M., O'Donnell, S. T., Redpath, J. L., & Smith, L.
A. (2004). Developing an open and fair culture can improve incident reporting.
Pharmacy in Practice. Vol.14(3)()(pp 64-67), 2004., 64-67.
Bradbury, K., Wang, J., Haskins, G., & Mehl, B. (1993). Prevention of Medication
Errors - Developing a Continuous Quality-Improvement Approach. Mount Sinai
Journal of Medicine, 60, 379-386.
Bradley, E. H., Holmboe, E. S., Mattera, J. A., Roumanis, S. A., Radford, M. J., &
Krumholz, H. M. (2004). Data feedback efforts in quality improvement: lessons
learned from US hospitals. Quality & Safety in Health Care, 13, 26-31.
Brown, I. D. (1990). Accident reporting and analysis. In (.
Busse, D. K. & Holland, B. (2002). Implementation of Critical Incident Reporting in a
Neonatal Intensive Care Unit. Cognition, Technology and Work, 4, 101-106.
Chamberlain, J. M., Slonim, A., & Joseph, J. G. (2004). Reducing errors and
promoting safety in pediatrics emergency care. Ambulatory Pediatrics. Vol.4(1)()(pp
55-63), 2004., 55-63.
Chem, C. H., How, C. K., Wang, L. M., Lee, C. H., & Graff, L. (2005). Decreasing
clinically significant adverse events using feedback to emergency physicians of
telephone follow-up outcomes. Annals of Emergency Medicine, 45, 15-23.
Chiang, M. (2001). Promoting patient safety: creating a workable reporting system.
Yale journal on regulation. 18(2), -408.
Chua, D. K. H. & Goh, Y. M. (2004). Incident causation model for improving feedback
of safety knowledge. Journal of Construction Engineering and Management-Asce,
130, 542-551.
Cohen, M. M., Kimmel, N. L., Benage, M. K., Cox, M. J., Sanders, N., Spence, D. et
al. (2005). Medication safety program reduces adverse drug events in a community
hospital. Quality & Safety in Health Care. Vol.14(3)()(pp 169-174), 2005., 169-174.
Cohoon, B. D. (2003). Learning from near misses through reflection: a new risk
management strategy. Journal of Healthcare Risk Management. 23(2):19-25.
Coles, G., Fuller, B., Nordquist, K., & Kongslie, A. (2005). Using failure mode effects
and criticality analysis for high-risk processes at three community hospitals. Joint
Commission Journal on Quality & Patient Safety. 31(3):132-40.
Coles, J., Pryce, D., & Shaw, C. (2001). The reporting of adverse clinical incidents - achieving high quality reporting: the results of a short research study. London:
CASPE Research.
Cook, R. I., Woods, D. D., & Miller, C. (1998). A Tale of Two Stories: Contrasting
Views of Patient Safety: Report from a Workshop on Assembling the Scientific Basis
for Progress on Patient Safety National Patient Safety Foundation, American Medical
Association.
Cooper, M. D., Phillips, R. A., Sutherland, V. J., & Makin, P. J. (1994). Reducing
Accidents Using Goal-Setting and Feedback - A Field Study. Journal of
Occupational and Organizational Psychology, 67, 219-240.
Cornell, J. & Silvester, N. (2000). Critical incident analysis: A case report. Clinician in
Management. Vol.9(4)()(pp 219-227), 2000., 219-227.
Cox, P. M., D'Amato, S., & Tillotson, D. J. (2001). Reducing medication errors.
American Journal of Medical Quality, 16, 81-86.
CRD (2001). CRD's Guidance for those Carrying Out or Commissioning Reviews - Report Number 4 (2nd Edition). University of York: CRD Publications Office.
Croskerry, P. (2000). The feedback sanction. Academic Emergency Medicine, 7,
1232-1238.
D'Souza, D. C., Koller, L. J., Ng, K., & Thornton, P. D. (2004). Reporting, review and
application of near-miss prescribing medication incident data. Journal of Pharmacy
Practice & Research. Vol.34(3)()(pp 190-193), 2004., 190-193.
Davies, H. T. O. (2001). Exploring the pathology of quality failings: Measuring quality
is not the problem - changing it is. Journal of Evaluation in Clinical Practice, 7(2), 243-251.
Dean, B. (2002). Learning from prescribing errors. Quality & Safety in Health Care,
11, 258-260.
Dejong, D., Brookins, L. H., & Odgers, L. (1998). Multidisciplinary redesign of a
medication error reporting system. Hospital Pharmacy. Vol.33(11)()(pp 1372-1377),
1998., 1372-1377.
Del Mar, C. B. & Mitchell, G. K. (2004). Feedback of evidence into practice. Medical
Journal of Australia, 180, S63-S65.
Dept.Health (2000). An Organisation with a Memory. London: The Stationery Office.
Dept.Health (2001). Building a Safer NHS for Patients: Implementing an Organisation
with a Memory. London: The Stationery Office.
Dept.Health & NPSA (2001). Doing Less Harm London: National Patient Safety
Agency, Department of Health.
Donaldson-Myles, F. (2005). Nurses' experiences of reporting a clinical incident: A
qualitative study informing the management of clinical risk. Clinical Risk.
Vol.11(3)()(pp 105-109), 2005., 105-109.
Dunn, D. (2003). Incident reports - correcting processes and reducing errors.
[Review] [21 refs]. AORN Journal. 78(2):212, 214-6, 219-20, passim; quiz 235-8.
Dunn, D. (2003). Incident reports - their purpose and scope. [erratum appears in
AORN J. 2003 Aug;78(2):202]. [Review] [23 refs]. AORN Journal. 78(1):46, 49-61,
65-6; quiz 67-70.
Edmondson, A. C. (2004). Learning from failure in health care: frequent
opportunities, pervasive barriers. Quality & Safety in Health Care, 13, 3-9.
Edmondson, A. C. (1996). Learning from mistakes is easier said than done: Group
and organizational influences on the detection and correction of human error.
Edmondson, A. C. (2004). Learning from mistakes is easier said than done: group
and organizational influences on the detection and correction of human error. Journal
of applied behavioral science. 40(1), -90.
Etchells, E. & Shumak, S. (2004). Improving Patient Safety: Sharing Experiences to
Develop and Implement Solutions. Canadian Journal of Diabetes, 28(1), 79-86.
Feied, C. F., Handler, J. A., Smith, M. S., Gillam, M., Kanhouwa, M., Rothenhaus, T.
et al. (2004). Clinical information systems: Instant ubiquitous clinical data for error
reduction and improved clinical outcomes. Academic Emergency Medicine, 11, 11621169.
Firth-Cozens, J., Redfern, N., & Moss, F. (2001). Confronting Errors in Patient Care:
Report on Focus Groups.
Firth-Cozens, J., Redfern, N., & Moss, F. (2004). Confronting errors in patient care:
the experiences of doctors and nurses. Clinical Risk, 10, 184-190.
Forza, C. & Salvador, F. (2000). Assessing some distinctive dimensions of
performance feedback information in high performing plants. International Journal of
Operations and Production Management, 20, 359-385.
France, D. J., Miles, P., Cartwright, J., Patel, N., Ford, C., Edens, C. et al. (2003). A
chemotherapy incident reporting and improvement system. [erratum appears in Jt
Comm J Qual Saf. 2003 May;29(5):209]. Joint Commission Journal on Quality &
Safety. 29(4):171-80.
France, D. J., Cartwright, J., Jones, V., Thompson, V., & Whitlock, J. A. (2004).
Improving pediatric chemotherapy safety through voluntary incident reporting:
lessons from the field.[see comment]. Journal of Pediatric Oncology Nursing.
21(4):200-6, -Aug.
Frankel, A., Gandhi, T. K., & Bates, D. W. (2003). Improving patient safety across a
large integrated health care delivery system. International Journal for Quality in
Health Care, 15, i31-i40.
Frey, B., Buettiker, V., Hug, M. I., Waldvogel, K., Gessler, P., Ghelfi, D. et al. (2002).
Does critical incident reporting contribute to medication error prevention? European
Journal of Pediatrics, 161, 594-599.
Gandhi, T. K., Graydon-Baker, E., Huber, C. N., Whittemore, A. T., & Gustafson, M.
(2005). Reporting Systems - Closing the Loop: Follow-up and Feedback in a Patient
Safety Program. Joint Commission Journal on Quality and Patient Safety, 31, 614-621.
Gaynes, R., Richards, C., Edwards, J., Emori, T. G., Horan, T., Alonso-Echanove, J. et
al. (2001). Feeding back surveillance data to prevent hospital-acquired infections.
Emerging Infectious Diseases, 7, 295-298.
Gordon, R., Flin, R., & Mearns, K. (2005). Designing and evaluating a human factors
investigation tool (HFIT) for accident analysis. Safety Science, 43(3), 147-171.
Gordon, R. P. E. (1998). The contribution of human factors to accidents in the
offshore oil industry. Reliability Engineering & System Safety, 61, 95-108.
Greengold, N. L. (2002). A Web-based program for implementing evidence-based
patient safety recommendations. Joint Commission Journal on Quality Improvement.
28(6):340-8.
Greenhalgh, T. (1997). How to read a paper: Papers that summarise other papers
(systematic reviews and meta-analyses). British Medical Journal, 315, 672-675.
Grzybicki, D. M. (2004). Barriers to the implementation of patient safety initiatives.
Clinics in Laboratory Medicine, 24, 901-+.
Gysels, M., Hughes, R., Aspinal, F., Addington-Hall, J. M., & Higginson, I. J. (2004).
What methods do stakeholders prefer for feeding back performance data: a
qualitative study in palliative care. International Journal for Quality in Health Care, 16,
375-381.
Harr, D. S. & Balas, A. (1994). Managing physician practice patterns: providing
information feedback to improve quality care & reduce cost. Missouri
Medicine.91(3):138-9.
Harrison, P., Joesbury, H., Martin, D., Wilson, R., & Fewtrell, C. (2002). Significant
Event Audit and Reporting in General Practice University of Sheffield: School of
health and Related Research (ScHARR).
Hart, E. & Hazelgrove, J. (2001). Understanding the organisational context for
adverse events in the health services: the role of cultural censorship. Quality in
Health Care, 10, 257-262.
Helmreich, R. L. (2000). On error management: lessons from aviation. British Medical
Journal, 320, 781-785.
Higgins, J. P. T. & Green, S. (2005). Cochrane Handbook for Systematic Reviews of
Interventions 4.2.5 [updated May 2005] Chichester, UK: Wiley & Sons, Ltd.
Hobgood, C. D., Ma, O. J., & Swart, G. L. (2000). Emergency medicine resident
errors: Identification and educational utilization. Academic Emergency Medicine, 7,
1317-1320.
Hofmann, D. A. & Stetzer, A. (1998). The role of safety climate and communication in
accident interpretation: implications for learning from negative events. Academy of
Management journal. 41(6), -657.
Holzmueller, C. G., Pronovost, P. J., Dickman, F., Thompson, D. A., Wu, A. W.,
Lubomski, L. H. et al. (2005). Creating the Web-based Intensive Care Unit Safety
Reporting System. Journal of the American Medical Informatics Association, 12, 130-139.
House of Commons Committee of Public Accounts (2006). A Safer Place for
Patients: Learning to Improve Patient Safety (51st report of Session 2005-6) London:
The Stationery Office.
Hudson, P. (2003). Applying the lessons of high risk industries to health care. Quality
and Safety in Health Care, 12(Suppl 1), i7-i12.
James, B. C. (1997). Every defect a treasure: Learning from adverse events in
hospitals. Medical Journal of Australia, 166, 484-487.
Jamtvedt, G., Young, J. M., Kristoffersen, D. T., Thomson O'Brien, M. A., & Oxman,
A. D. (2005). Audit and Feedback: effects on professional practice and health care
outcomes (Review) The Cochrane Library, Issue 3, 2005.
Johnson, C. (2000). Software support for incident reporting systems in safety- critical
applications. Computer Safety, Reliability and Security, Proceedings, 1943, 96-106.
Johnson, C. (2002). Software tools to support incident reporting in safety-critical
systems. Safety Science, 40, 765-780.
Johnson, C. W. Architectures for incident reporting.
Johnson, C. W. (2003). How will we get the data and what will we do with it then?
Issues in the reporting of adverse healthcare events. Quality & Safety in Health Care,
12, II64-II67.
Johnson, C. W. (2003). Failure in safety critical systems: A handbook of incident and
accident reporting. Glasgow: Glasgow University Press.
Joshi, M. S., Anderson, J. F., & Marwaha, S. (2002). A systems approach to
improving error reporting. Journal of Healthcare Information Management.16(1):40-5.
Kaplan, H. S. & Fastman, B. R. (2003). Organization of event reporting data for
sense making and system improvement. Quality and Safety in Health Care, 12(Suppl
II), ii68-ii72.
Killen, A. R. & Beyea, S. C. (2003). Learning from near misses in an effort to promote
patient safety. AORN Journal.77(2):423-5 .
King, N. (1998). Template Analysis. In G.Symon & C. Cassell (Eds.), Qualitative
Methods and Analysis in Organizational Research ( London: Sage.
Kingston, M. J., Evans, S. M., Smith, B. J., & Berry, J. G. (2004). Attitudes of doctors
and nurses towards incident reporting: a qualitative analysis. Medical Journal of
Australia, 181, 36-39.
Kirchsteiger, C. (1999). Status and functioning of the European Commission's major
accident reporting system. Journal of Hazardous Materials, 65, 211-231.
Kirchsteiger, C. (1999). The functioning and status of the EC's major accident
reporting system on industrial accidents. Journal of Loss Prevention in the Process
Industries, 12, 29-42.
Kjellen, U. (2000). Prevention of accidents through experience feedback. London:
Taylor & Francis.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (2000). To Err is Human: Building a
Safer health System. Washington: National Academy Press.
Koornneef, F. (2000). Organised learning from small-scale incidents. Delft University
of Technology, Netherlands.
Krieg, M. M., Mason, P., Hemann, R. A., Alsip, J., & Kresowik, T. (1996). The impact
of different data feedback methods on provider response to the Cooperative
Cardiovascular Project. Clinical Performance & Quality Health Care.4(2):86-9, -Jun.
Landry, M. D. & Sibbald, W. J. (2002). Changing physician behavior: a review of
patient safety in critical care medicine. [Review] [26 refs]. Journal of Critical
Care.17(2):138-45.
Langslow, A. (1997). Incident reports: exploring the do's and don'ts. Australian
Nursing Journal.4(11):32-3, 37.
Larson, E. B. (2002). Measuring, monitoring, and reducing medical harm from a
systems perspective: A medical director's personal reflections. Academic Medicine,
77, 993-1000.
Lawton, R. & Parker, D. (2002). Barriers to incident reporting in a healthcare system.
Quality & Safety in Health Care, 11, 15-18.
Le Duff, F., Daniel, S., Kamendje, B., Le Beux, P., & Duvauferrier, R. (2005).
Monitoring incident report in the healthcare process to improve quality in hospitals.
International Journal of Medical Informatics, 74, 111-117.
Leape, L. L. (2000). Reporting of medical errors: Time for a reality check. Quality and
Safety in Health Care, 9, 144-145.
Leape, L. L. (2002). Reporting of adverse events. New England Journal of Medicine.,
347, 1633-1638.
Lesch, M. F. (2005). Remembering to be afraid: Applications of theories of memory
to the science of safety communication.
Locke, K. (2002). The Grounded Theory Approach to Qualitative Research. In
F.Drasgow & N. Schmitt (Eds.), Measuring and Analyzing Behavior in Organizations:
Advances in Measurement and Data Analysis ( San Francisco, CA: Jossey-Bass.
Lubomski, L. H., Pronovost, P. J., Thompson, D. A., Holzmueller, C. G., Dorman, T.,
Morlock, L. L. et al. (2004). Building a better incident reporting system: Perspectives
from a multisite project. Journal of Clinical Outcomes Management.Vol.11(5)()(pp
275-280), 2004., 275-280.
Maidment, I. D. & Thorn, A. (2005). A medication error reporting scheme: Analysis of
the first 12 months. Psychiatric Bulletin.Vol.29(8)()(pp 298-301), 2005., 298-301.
March, J. G., Sproull, L. S., & Tamuz, M. (2003). Learning from samples of one or
fewer. Quality & Safety in Health Care, 12, 465-472.
Marcus, A. A. & Nichols, M. L. (1999). On the Edge: Heeding the Warnings of
Unusual Events. Organization Science, 10, 482-499.
Mavroudakis, K. G., Katsikas, S. K., & Gritzalis, D. A. (1997). Forming a health care
incident reporting scheme. Studies in Health Technology & Informatics. 43 Pt B:839-43.
Mays, N., Roberts, E., & Popay, J. (2001). Synthesising Research Evidence. In
N.Fulop, P. Allen, A. Clarke, & N. Black (Eds.), Methods for Studying the Delivery
and Organisation of Health Services ( London: Routledge.
McIlwain, J. (2001). A conceptual approach for the rapid reporting of clinical
incidents: A proposal for benchmarking. Clinician in Management, 10(3), 153-159.
Michie, S. & Johnston, M. (2004). Changing clinical behaviour by making guidelines
specific. British Medical Journal, 328, 343-345.
Milligan, F. & Dennis, S. (2004). Improving patient safety and incident reporting.
Nursing Standard., 33-36.
Mills, P. D., Weeks, W. B., & Surott-Kimberly, B. C. (2003). A multihospital safety
improvement effort and the dissemination of new knowledge. Joint Commission
Journal on Quality & Safety. 29(3):124-33.
Moher, D., Cook, D. J., Eastwood, S., Olkin, I., Rennie, D., & Stroup, D. F. (1999).
Improving the quality of reports of meta-analyses of randomised controlled trials: the
QUOROM statement. Quality of Reporting of Meta-analyses. Lancet, 354, 1896-1900.
MOOSE Group (2000). Meta-analysis of Observational Studies in Epidemiology: A
Proposal for Reporting. JAMA, 283, 2008-2012.
Morath, J. M. & Turnbull, J. E. (2005). To Do No Harm. San Francisco: Jossey-Bass.
Morrison, L. M. (2005). Best practices in incident investigation in the chemical
process industries with examples from the industry sector and specifically from Nova
Chemicals. Journal of Hazardous Materials, 111, 161-166.
Nakajima, K., Kurata, Y., & Takeda, H. (2005). A web-based incident reporting
system and multidisciplinary collaborative projects for patient safety in a Japanese
hospital. Quality & Safety in Health Care, 14, 123-129.
NAO (2005). A Safer Place for Patients: Learning to Improve Patient Safety. London:
The Stationery Office.
NHSLA (2005). CNST General Clinical Risk Management Standards London:
National Health Services Litigation Authority.
NPSA (2004). Seven steps to patient safety: an overview guide for NHS staff.
London: National Patient Safety Agency.
NPSA (2005). Building a memory: preventing harm, reducing risks and improving
patient safety. The first report of the National Reporting and Learning System and
the Patient Safety Observatory (July, 2005). London: National Patient Safety Agency.
O'Leary, M. & Chappell, S. L. (1996). Confidential incident reporting systems create
vital awareness of safety problems. ICAO Journal. 51(8):11-3, 27.
O'Leary, M. (2002). The British Airways human factors reporting programme.
Reliability Engineering & System Safety, 75, 245-255.
Orlander, J. D., Barber, T. W., & Fincke, B. G. (2002). The morbidity and mortality
conference: the delicate nature of learning from error. Academic
Medicine.77(10):1001-6.
Oulton, R. (1981). Use of incident report data in a system-wide quality assurance/risk
management program. Quality Review Bulletin.Vol.7(6)()(pp 2-7), 1981., 2-7.
Parke, T. (2003). Critical incident reporting in intensive care - experience in a
District General Hospital. Care of the Critically Ill, 19(2), 42-44.
Pena, J. J., Schmelter, W. R., & Ramseur, J. E. (1981). Computerized incident
reporting and risk management. Hospital & Health Services
Administration.Vol.26(5)()(pp 7-11), 1981., 7-11.
Peshek, S. C. & Cubera, K. (2004). Nonpunitive, voice-mail-based medication error
reporting system. Hospital Pharmacy.Vol.39(9)()(pp 857-863), 2004., 857-863.
Piotrowski, M. M., Saint, S., & Hinshaw, D. B. (2002). The Safety Case Management
Committee: expanding the avenues for addressing patient safety. Joint Commission
Journal on Quality Improvement..
Poniatowski, L., Stanley, S., & Youngberg, B. (2005). Using information to empower
nurse managers to become champions for patient safety. Nursing Administration
Quarterly.29(1):72-7, -Mar.
Pronovost, P. J., Weast, B., Bishop, K., Paine, L., Griffith, R., Rosenstein, B. J. et al.
(2004). Senior executive adopt-a-work unit: a model for safety improvement. Joint
Commission Journal on Quality & Safety.30(2):59-68.
PSRP (2006). PSRP RCA programme report.
Ray, N. K. (1995). From Paper Tigers to Consumer-Centered Quality Assurance
Tools - Reforming Incident-Reporting Systems. Mental Retardation, 33, 239-247.
Reason, J. (1990). Human Error. Cambridge: Cambridge University Press.
Reason, J. (1995). A Systems Approach to Organizational Error. Ergonomics, 39,
1708-1721.
Reason, J. (1997). Managing the Risks of Organizational Accidents. Aldershot:
Ashgate.
Rejman, M. H. (1999). Confidential reporting systems and safety-critical information.
In.
Render, M. L. & Hirschhorn, L. (2005). An irreplaceable safety culture. Critical Care
Clinics.Vol.21(1)()(pp 31-41), 2005., 31-41.
Restuccia, J. D. & Holloway, D. C. (1982). Methods of control for hospital quality
assurance systems. Health Services Research.17(3):241-51.
Revuelta, R. (2004). Operational experience feedback in the World Association of
Nuclear Operators (WANO). Journal of Hazardous Materials, 111, 67-71.
Roberts, K. H. (1989). New challenges in organizational research: high reliability
organizations. Industrial crisis quarterly, 3, 111-125.
Roberts, K. H. (1990). Managing High-Reliability Organizations. California
management review, 32, 101-113.
Rooksby, J. & Gerry, R. M. (2004). Incident Reporting Schemes and the Affordance
of a Good Story. In.
Rosen, H. M. & Feigin, W., Sr. (1983). Quality assurance and data feedback. Health
Care Management Review.8(1):67-74.
Runciman, W. B. (2002). Lessons from the Australian Patient Safety Foundation:
setting up a national patient safety surveillance system - is this the right model?
Quality & Safety in Health Care, 11, 246-251.
Sarter, N. B. & Alexander, H. M. (2000). Error types and related error detection
mechanisms in the aviation domain: An analysis of Aviation Safety Reporting System
incident reports. International Journal of Aviation Psychology, 10, 189-206.
Schaubhut, R. M. & Jones, C. (2000). A systems approach to medication error
reduction. Journal of Nursing Care Quality, 14, 13-27.
Schneider, P. J. & Hartwig, S. C. (1994). Use of severity-indexed medication error
reports to improve quality. Hospital Pharmacy, 29(3), 208-211.
Schobel, M. & Szameitat, S. (2000). Experience feedback and safety culture as
contributors to system safety. Human Error and System Design and Management,
253, 47-50.
Shaw, C. & Coles, J. (2001). The reporting of adverse clinical incidents - international views and experience. CASPE Research, London.
Shaw, R., Drever, F., Hughes, H., Osborn, S., & Williams, S. (2005). Adverse events
and near miss reporting in the NHS. Quality & Safety in Health Care, 14, 279-283.
Shojania, K. G., Duncan, B. W., McDonald, K. M., & Wachter, R. M. (2001). Making
Health Care Safer: A Critical Analysis of Patient Safety Practices University of
California: Agency for Healthcare Research and Quality (AHRQ).
Short, T. G., ORegan, A., Jayasuriya, J. P., Rowbottom, M., Buckley, T. A., & Oh, T.
E. (1996). Improvements in anaesthetic care resulting from a critical incident
reporting programme. Anaesthesia, 51, 615-621.
Silver, M. S. (1999). Incident review management: a systemic approach to
performance improvements. Journal for healthcare quality, 21, 21.
Simpson, R. L. (2005). Error reporting as a preventive force. [Review] [18 refs].
Nursing Management. 36(6):21-4, 56.
Smith, D. & Toft, B. (2005). Towards an organization with a memory: Exploring the
organizational generation of adverse events in health care. Health Services
Management Research. Vol.18(2)()(pp 124-140), 2005., 124-140.
Smith, D. S. & Haig, K. (2005). Reduction of adverse drug events and medication
errors in a community hospital setting. Nursing Clinics of North America. 40(1):25-32.
Stanhope, N., Crowley-Murphy, M., Vincent, C., O'Connor, A. M., & Taylor-Adams, S.
E. (1999). An evaluation of adverse incident reporting. Journal of Evaluation in
Clinical Practice, 5, 5-12.
Strauss, A. & Corbin, J. (1990). Basics of Qualitative Research: Grounded Theory
Procedures and Techniques. Newbury Park, CA: Sage Publications.
Suksi, S. (2004). Methods and practices used in incident analysis in the Finnish
nuclear power industry. Journal of Hazardous Materials, 111, 73-79.
Suresh, G., Horbar, J. D., Plsek, P., Gray, J., Edwards, W. H., Shiono, P. H. et al.
(2004). Voluntary anonymous reporting of medical errors for neonatal intensive care.
Pediatrics, 113, 1609-1618.
Takeda, H., Matsumura, Y., Nakajima, K., Kuwata, S., Yang, Z. J., Ji, S. M. et al.
(2003). Health care quality management by means of an incident report system and
an electronic patient record system. International Journal of Medical Informatics, 69,
285-293.
Ternov, S., Tegenrot, G., & Akselsson, R. (2004). Operator-centred local error
management in air traffic control. Safety Science, 42, 907-920.
Tighe, C. M., Woloshynowych, M., Brown, R., Wears, B., & Vincent, C. (2005).
Incident reporting in one UK accident and emergency department. Accident and
Emergency Nursing, In Press.
Trooskin, S. Z. (2002). Low-technology, cost-efficient strategies for reducing
medication errors. American Journal of Infection Control, 30, 351-354.
Uth, H. J. & Wiese, N. (2004). Central collecting and evaluating of major accidents
and near-miss events in the Federal Republic of Germany - results, experiences,
perspectives. [Review] [16 refs]. Journal of Hazardous Materials. 111(1-3):139-45.
Van der Schaaf, T. W., Lucas, D. A., & Hale, A. R. (1991). Near Miss Reporting as a
Safety Tool. Oxford: Butterworth-Heinemann.
Van der Schaaf, T. W. & Kanse, L. (2004). Biases in incident reporting databases: An
empirical study in the chemical process industry. Safety Science, 42(1), 57-67.
Van der Schaaf, T. W. & Wright, L. B. (2005). Systems for near miss reporting and
analysis. In J.R.Wilson & N. Corlett (Eds.), Evaluation of Human Work (3rd ed.,
London: Taylor & Francis.
Vincent, C., Stanhope, N., & Crowley-Murphy, M. (1999). Reasons for not reporting
adverse incidents: an empirical study. Journal of Evaluation in Clinical Practice, 5,
13-21.
Vincent, C., Taylor-Adams, S., Chapman, E. J., Hewett, D., Prior, S., Strange, P. et
al. (2000). How to investigate and analyse clinical incidents: Clinical Risk Unit and
Association of Litigation and Risk Management protocol. British Medical Journal, 320,
777-781.
Vincent, C. (2006). Patient Safety. London: Elsevier Churchill Livingstone.
Vredenburgh, A. G. (2002). Organizational safety: Which management practices are
most effective in reducing employee injury rates? Journal of Safety Research, 33,
259-276.
Wallace, L. M., Spurgeon, P., & Earll, L. (2006). Patient Safety Research Programme
(DoH) - Evaluation of the NPSA 3 Day Root Cause Analysis Training Programme:
Final Report Coventry University: HSRC.
Walshe, K. (1995). Using adverse events in health-care quality improvement: results
from a British acute hospital. International journal of health care quality assurance, 8,
7.
Walshe, K. & Freeman, T. (2002). Effectiveness of quality improvement: learning
from evaluations. Quality and Safety in Health Care, 11, 85-87.
Walshe, K. (2003). Understanding and learning from organisational failure. Quality &
Safety in Health Care, 12, 81.
Walshe, K. & Higgins, J. (2002). The use and impact of inquiries in the NHS. BMJ,
325, 895-900.
Warburton, R. N. (2005). Patient safety - how much is enough? Health
Policy.71(2):223-32.
Waring, J. J. (2004). A qualitative study of the intra-hospital variations in incident
reporting. International Journal for Quality in Health Care, 16, 347-352.
Waring, J. J. (2005). Beyond blame: cultural barriers to medical incident reporting.
Social Science & Medicine, 60, 1927-1935.
Webb, R. K., Currie, M., Morgan, C. A., Williamson, J. A., Mackay, P., Russell, W. J.
et al. (1993). The Australian Incident Monitoring Study: an analysis of 2000 incident
reports. Anaesthesia & Intensive Care. 21(5):520-8.
Webster, C. S. & Anderson, D. J. (2002). A practical guide to the implementation of
an effective incident reporting scheme to reduce medication error on the hospital
ward. [Review] [43 refs]. International Journal of Nursing Practice. 8(4):176-83.
Weick, K. E. & Sutcliffe, K. M. (2001). Managing the Unexpected: Assuring High
Performance in an Age of Complexity. San Francisco, CA: Jossey-Bass.
Weinberg, N. S. & Stason, W. B. (1998). Managing quality in hospital practice.
International Journal for Quality in Health Care, 10, 295-302.
Weissman, J. S., Annas, C. L., Epstein, A. M., Schneider, E. C., Clarridge, B., Kirle,
L. et al. (2005). Error reporting and disclosure systems: views from hospital leaders.
JAMA.293(11):1359-66.
Westfall, J. M., Fernald, D. H., Staton, E. W., VanVorst, R., West, D., & Pace, W. D.
(2004). Applied strategies for improving patient safety: a comprehensive process to
improve care in rural and frontier communities. Journal of Rural Health., 355-362.
Wilf-Miron, R., Lewenhoff, I., Benyamini, Z., & Aviram, A. (2003). From aviation to
medicine: applying concepts of aviation safety to risk management in ambulatory
care. Quality & Safety in Health Care, 12, 35-39.
Willeumier, D. (2004). Advocate health care: a systemwide approach to quality and
safety. Joint Commission Journal on Quality & Safety. 30(10):559-66.
Williams, J. S. (2005). Adverse event reporting: making the system work. Biomedical
Instrumentation & Technology. 39(2):111-8, -Apr.
Wilson, J. (1998). Incident reporting. British Journal of Nursing. 7(11):670-1, -24.
Wojner, A. W. (2002). Capturing error rates and reporting significant data. Critical
Care Nursing Clinics of North America. 14(4):375-84.
Woloshynowych, M., Rogers, S., Taylor-Adams, S., & Vincent, C. (2005). The
investigation and analysis of critical incidents and adverse events in health care.
Health Technology Assessment, 9.
Woodward, S. (2002). Learning from mistakes and near misses. Journal of Clinical
Excellence. Vol.4(3)()(pp 323-324), 2003., 323-324.
Woodward, S. (2004). Achieving a safer health service: Part 3. Investigating root
causes and formulating solutions. [Review] [18 refs]. Professional Nurse., 390-394.
Wu, A. W., Pronovost, P., & Morlock, L. (2002). ICU incident reporting systems.
Journal of Critical Care, 17, 86-94.
Yong, H. & Kluger, M. T. (2003). Incident reporting in anaesthesia: A survey of
practice in New Zealand. Anaesthesia and Intensive Care, 31, 555-559.
Ziegenfuss, J. T., Jr. (2000). Quality management data - the feedback challenge.
American Journal of Medical Quality. 15(2):47-8, -Apr.
3.0 - Reporting Systems: a scoping study of methods of providing feedback
within an organisation - Survey of NHS Trusts in England and Wales.
Research Team:
Professor Louise M Wallace
Health Services Research Centre
Coventry University
Professor Peter Spurgeon
Health Services Management Centre,
University of Birmingham,
and
West Midlands Deanery &
University of Warwick
Dr Louise Earll
Consultant Psychologist
Researchthatworks Limited
Research support by staff from the Health Services Research Centre, Coventry
University:
Ms. Denise James
Research Project Manager
Ms. Julie Bayley
Senior Research Assistant
Tan Ur Rehman
Research Assistant
Dr. Jemma Edmunds
Senior Research Assistant
With advice from the PSRP Feedback study collaborators:
Dr Maria Koutantji
Dr Jonathan Benn
Professor Charles Vincent
Clinical Safety Research Unit
Imperial College London
Steering Group members:
Mr David Lusher, Terema/ Independent Aviation consultant
Dr Michael Rejman, NPSA
3.1 - Background to the survey:
Empirical study of healthcare staff's experience of using patient safety reporting
systems is a very new field, since these national systems are new to healthcare
worldwide, as described in the introduction and Section A. Research on the patterns
and determinants of reporting, and on comparative systems of reporting, is beyond
the scope of this study. At the time the study was commissioned, there had been no
national studies of risk reporting systems in UK healthcare, although there were
several reports of empirical studies of clinical governance in some trusts (Wallace et
al., 2001 (a) (b); Walshe et al., 2001; Wallace et al., 2004), as well as studies of the
nature of error in primary care (e.g. Sanders and Esmail, 2001) and of risk
management systems in GP practices (Wallace et al., 2007). Independent sector
primary care risk reporting systems are beyond the scope of this study, and these
two studies suggest that such systems are even less well developed than in trusts. At
the time of writing this report (2006), General Practitioners (GPs) are not required to
report directly to Primary Care Trusts (PCTs), but are encouraged to do so, and can
report directly to the NPSA's NRLS (NPSA, 2005; PAC, 2006).
Planning the survey:
Given that there were no national surveys of risk reporting systems, our proposal
included a substantial commitment to a survey of risk management leads in all 607
trusts in England and Wales. However, while seeking approval of the study via the
NHS gateway system, we were informed that the NAO were in the process of
conducting a very similar survey in the provider trusts in England. This resulted in a
considerable period of negotiation, and caused unanticipated delay, as it was
essential that the two surveys were seen to add value and not to duplicate effort. We
therefore planned to include items directly as used by the NAO, in order that we
could take existing data from those trusts which already supplied responses, and so
reduce the burden to these trusts. We obtained NAO agreement to access these
data, provided the trusts themselves gave permission individually for the de-anonymising of the selected items in their survey responses. This meant that some
trusts which had responded to the NAO's survey might not give active consent to
data sharing and would, in effect, be asked to provide the data twice. In all, 197 of
260 trusts gave permission for us to access their data, but we were not informed
which trusts had consented, so we were unable to target these trusts directly. In addition, the NPSA informed us that some
items would overlap with a new survey in relation to the guidance “Being Open”
conducted by the patient organisation CRUSE with the NPSA, so we had to adapt
our wording accordingly. This required Multi site Research Ethics Committee (MREC)
approval. Some months into these negotiations, during which time we received
MREC approval, we were also informed that the NAO were required to re-survey the
trusts to get more up to date data, and this would occur as we were about to launch
our survey. We re-adapted our survey instrument for the third time, briefing and
distribution materials, and obtained MREC approval for these changes. In addition,
the NAO undertook a survey of English PCTs in clinical governance including risk
reporting with data collection in the Autumn of 2005, which again will have resulted in
the same clinical governance staff being involved in somewhat similar survey
responses.
In addition to obtaining MREC approval and the approval of the NPSA, we had to
obtain approval from each trust via its research governance process. This proved to
be enormously difficult, since the Research and Development (R&D) department is
not always part of a trust, many operate on a consortium basis, and there was no
central database available with which to reconcile each trust with its R&D office.
therefore used the databases obtained via the DH R&D Forum, and also asked trusts
themselves to forward the forms to the relevant department. Because of the
complexity of the MREC forms and the briefing process, the pack contained 10
documents. We also found that many trusts and departments raised numerous
queries, including asking us to complete their own documentation, including honorary
contract forms. In one case, we were asked to complete Criminal Records Board
checks, at our own cost, despite the fact this was a postal survey. We have reason to
believe, from comments from key respondents, that the combination of repeated
surveys of an apparently similar nature, and inordinate requirements to gain trust
clearance, had an adverse impact on our response rates.
As we had access to some of the NAO data from their 2004 survey (n=197 of the 267
trusts) before we sent out our survey, we were able to adjust ours to get more
focused information on feedback from reporting systems. For example, questions in
their survey included many open-ended items, which we re-analysed to create
categorical responses in our survey, thereby improving precision. Further, we believe
the NAO survey did not differentiate between feedback from aggregated reports in
the incident system and feedback from investigations, so we had specific distinctions
in our items.
The survey is appended (Appendix 2). It is the result of the above process, in
addition to the usual process of piloting and of seeking expert advice from the
Steering Group.
Survey administration:
The survey questionnaire was mailed out from November 2005, with two re-mailings
until the end of March 2006, to the Lead for Risk management in each of 607 trusts,
with a covering letter. A full questionnaire was also mailed to the Trusts’ Chief
Executive Officer (CEO). The option was given for web site completion on a secure
site (www.hsrc.co.uk). We obtained 351 replies (57.8%). Response rates were not
uniform geographically, with only 5 from Wales, 38 from London, 98 from the South,
101 from the North and 109 from Midlands and East.
The results of simple frequencies are presented below. As there was some missing
data for almost all questions, we have reported exact numbers and percentages of
these respondents, rather than percentages of the whole sample. We report the
results and give an explanatory commentary in each section, also drawing where
possible comparisons with the NAO 2004 survey (NAO, 2005), as this will enable a
fuller picture to be given of the empirical results from both pieces of work. However,
it must be noted that the original NAO survey was completed in August 2004 (n=267
trusts), and a partial follow up carried out in the Summer of 2005 (to which we were
not given access). Our study was conducted from November 2005 to March 2006, so
the data are not directly comparable. Data reported from the two NAO surveys are
taken from the appendices of the report (NAO, 2005).
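As an illustration only (this is not the analysis code used in the study, and the file and column names, such as "survey_responses.csv" and "q1_board_lead", are hypothetical), the following Python sketch shows how simple frequencies of this kind can be computed so that the percentage denominator is the number of trusts answering each question rather than the whole sample of 351:

    # Illustrative sketch: per-question counts and percentages, excluding missing answers
    import pandas as pd

    responses = pd.read_csv("survey_responses.csv")  # hypothetical file: one row per responding trust

    def frequency_table(series: pd.Series) -> pd.DataFrame:
        """Counts and percentages for one question, ignoring trusts that did not answer it."""
        answered = series.dropna()  # the denominator excludes missing responses
        counts = answered.value_counts()
        return pd.DataFrame({
            "No.": counts,
            "%": (100 * counts / len(answered)).round(1),
        })

    # len(answered) corresponds to the "No. out of N" figure quoted against each question
    print(frequency_table(responses["q1_board_lead"]))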
Aims:
To determine the progress that NHS trusts in England and Wales have made in using
incident reporting systems for organisational learning and improving patient safety.
Specifically to understand:
• how trusts are developing a learning culture;
• the development of reporting systems;
• the analysis and use of information from incidents, and the place of incident investigation in this process;
• how solutions are formulated;
• how changes are implemented;
• the use of feedback and methods of dissemination.
3.2 - Part 1: Patient Safety Culture:
Q1: Who has lead board level executive responsibility for patient safety in your trust? (tick one) (No. out of 336)
Chief Executive: 46 (13.7%)
Medical Director: 67 (19.9%)
Director of Nursing: 124 (36.9%)
Director of Human Resources: 4 (1.2%)
Other Executive Director: 91 (27.1%)
Lead responsibility is not at executive board level: 4 (1.2%)
Q1: Responsible board officer was most often the Director of Nursing Service (DNS)
Q2: Has the trust made a clear statement to its staff of its support for an "open and fair culture" (e.g. as defined in the NPSA's "Seven Steps to Patient Safety guidance")? (No. out of 346 for Q2 and 324 for Q3; NAO 2004 % shown for comparison)
Yes: 328 (94.8%); NAO 2004: 97%
Q3: If yes to Q2, how was that communicated?
Team meetings: 146 (45.1%); NAO 2004: 41%
Through the management cascade: 158 (48.8%); NAO 2004: 75%
Via letter, poster, newsletter, intranet: 144 (44.4%); NAO 2004: 73%
Incorporated in trust risk management/incident reporting policies: 308 (95.1%); NAO 2004: not asked
Other: 81 (25.0%); NAO 2004: 58%
Q2 & 3: While most trusts had communicated the “open and fair” policy,
communication is most often via policies.
The NAO data in 2004 gives a similar high percentage of compliance with the
requirement. Our options included the additional fixed choice item of “incorporating in
trust risk management/incident reporting policies", having analysed the "other"
category in the NAO data and drawn on insight into local trust procedures, which had
suggested that the predominant means of dissemination is via policies. This may be
accompanied by awareness raising via other management channels such as team
briefing. Perhaps what cannot be answered from both surveys is the more important
issue of how often the message is reinforced by management actions.
Q4: How would your organisation describe its safety culture? (No. out of 345; NAO 2004 % and NAO 2005 % shown for comparison)
The Trust has an open and fair safety culture throughout: 140 (41.9%); NAO 2004: 23%; NAO 2005: 31%
The Trust has a predominantly open and fair safety culture but there are some small pockets where there is a tendency towards a blaming and closed culture: 176 (52.7%); NAO 2004: 72%; NAO 2005: 65%
The Trust is moving towards an open and fair safety culture but there are substantial areas where there is a tendency towards a blaming and closed culture: 17 (5.1%); NAO 2004: 5%; NAO 2005: 1%
The Trust has a predominantly blaming and closed culture: 1 (0.3%); NAO 2004: 0%; NAO 2005: -
Q4: The safety culture is largely open and fair, with most Trusts agreeing that it is at
least predominantly so. The NAO data suggested a somewhat lower proportion
believed the culture is open and fair throughout, but with a modest improvement
between the two surveys.
Q5: Does your organisation report Patient Safety Incidents (PSIs) to the NPSA? (No. out of 345)
Yes, currently reporting to the NPSA: 304 (88.1%)
No, but have implementation plan to do so: 38 (11.0%)
No, and have no plans to do so: 3 (0.9%)
Q6: If your organisation reports PSIs to the NPSA, does the Trust receive feedback? (No. out of 310)
Yes: 118 (38.1%)
Q5/6: Nearly all Trusts are reporting to the NPSA, or have plans to do so in the next
year. This was not asked in a similar form in either NAO survey, but the 2005 survey
showed 16% of trusts were not yet sending data from the trusts’ system to the NPSA
NRLS, i.e. similar levels to above.
Q7: Does your organisation report PSIs to the SHA/region? (No. out of 342)
Yes: 286 (81.9%)
No, but have implementation plan to do so: 5 (1.5%)
No, and have no plans to do so: 57 (16.7%)
Q8: If your organisation reports PSIs to the SHA/region, does the Trust receive feedback? (No. out of 281)
Yes: 176 (62.6%)
Q7 & 8: Most trusts report to the SHA/region, but only two thirds receive feedback.
This issue was not directly investigated by the NAO surveys.
In discussion with several trusts in the phase of selecting case studies and at the
Expert Workshop, “soft intelligence” suggested that for many trusts, reporting to
region was a bureaucratic rather than a learning function, i.e. no feedback was given
that might influence reporting quality, remedial action or prevention of future adverse
events.
Q9: Does your organisation investigate the Trust's reporting and learning culture using safety assessment surveys? (No. out of 340)
Yes: 53 (15.6%)
Q10: Which safety culture assessment tools has your Trust used?
Checklist for Assessing Institutional Resilience (CAIR): 0 (0%)
Manchester Patient Safety Assessment Tool (MaPSaT): 13 (3.8%)
Advancing Health in America (AHA) and Veterans Health Association (VHA) Strategies for Leadership: 0 (0%)
Safety Attitudes Questionnaire (SAQ): 14 (4.1%)
Stanford Patient Safety Centre of Inquiry Culture Survey: 1 (0.3%)
Other: 30 (8.8%)
Q9, 10: Formal assessment of learning culture by surveys is practised by a small
minority of trusts: 13 are using the NPSA-sponsored MaPSaT, 28 in total are using
well known methods, and a further 30 are using a range of other tools.
The NAO survey in 2004 asked simply whether the safety culture was evaluated,
yielding 42% “yes”. We undertook more detailed questions as above to test the rigour
of methods used. Of note, the House of Commons Committee of Public Accounts
(usually known as “the PAC”) was critical of the failure to use the more robust
methods advised by the NPSA, such as MaPSaT (HCPAC, 2006). Our data suggest
that most trusts do not evaluate culture and are largely not using recommended
methods.
Main issues raised by section one:
The vast majority of trusts are reporting externally, but at least a third received no
feedback from the SHA/ region. At the time of this report, there is no direct channel of
feedback to Trusts, except for some Trusts receiving aggregated reports on numbers
reported, with no direct feedback on the quality of the report or investigation, nor
safety advice. This point was made very firmly by the NAO report (NAO, 2005) and
the PAC report (HCPAC, 2006).
Internal culture is said to be at least predominantly open and fair, but very few Trusts
perform formal assessments to validate these claims. Again, the NAO and PAC
reports noted these findings from the NAO survey, and drew attention to the
importance of action at Trust level to address organisational safety culture, built on
robust methods of evaluation.
3.3 - Part 2: Reporting Systems:
Q11: Is the patient safety reporting system part of an integral reporting system for all incidents, clinical and non-clinical, and those affecting patients, staff and visitors? (No. out of 343; NAO 2004 % shown for comparison)
There is a separate system specifically for incidents affecting patients: 7 (2.0%); NAO 2004: 2%
Patient safety reporting is a part of an integrated reporting system for incidents affecting staff, patients and visitors: 335 (97.7%); NAO 2004: 97%
Other: 2 (0.6%); NAO 2004: 1%
Q12: Does your organisation have a patient safety incident reporting system that is designed to be accessed by: (No. out of 343)
Staff: 332 (96.7%); NAO 2004: -
Patients: 68 (19.8%); NAO 2004: -
Public: 45 (13.1%); NAO 2004: -
No system in place: 11 (3.5%); NAO 2004: -
Q11 & 12: Patient safety reporting systems cover incidents involving patients and
staff in nearly all trusts. The system is designed to be accessed by staff, but only around one in
five trusts allow access by patients and the public, while 11 trusts (3.5%) reported that no
system was in place.
Q13: Is the patient safety incident reporting system anonymous, i.e. it is not possible to trace the respondent? (No. out of 343; NAO 2004 % shown for comparison)
No: 306 (89.2%); NAO 2004: 95%
Q14: Is the patient safety incident reporting system confidential, i.e. the respondent's name is known to the person receiving the report, but not disclosed to others? (No. out of 342)
(NAO question: Although the system is anonymous, are there any mechanisms for the Patient Safety team to identify staff that knowingly depart from agreed protocols or safe procedures?)
Yes: 270 (78.9%); NAO 2004: 79%
Q15: How can patients, or members of the public, report their concerns about patient safety issues? (No. out of 349; NAO 2004 % shown for comparison)
By reporting incidents in the same way as hospital staff: 124 (35.5%); NAO 2004: 26%
By completing patient complaints forms/complaints system: 323 (92.6%); NAO 2004: 87%
By informing staff members who then complete incident forms: 314 (90.0%); NAO 2004: 94%
Through the PALS (Patient Advice and Liaison Service) system: 333 (95.4%); NAO 2004: 97%
NPSA NRLS: 202 (57.9%); NAO 2004: -
We do not have a route through which they can raise patient safety issues: 16 (4.6%); NAO 2004: 0%
Other: 30 (8.6%); NAO 2004: 3%
Q13 &14: One tenth of Trusts stated the system was anonymous, while nearly all
stated it was confidential. Of interest is the question related to Q13 in the second
NAO survey (2005 sample), asking if the trust operates any anonymous reporting
systems, but excluding the national NPSA e-form that had been introduced in the
period between the two surveys. There were 38% of trusts who agreed. This
suggests that there may be parallel systems in place in some trusts, or that some
systems allow for anonymity in certain circumstances. It may also refer to "whistle-blowing"
systems via personnel departments or to the CEO, outside the patient safety
reporting systems. The value of additional channels in increasing reporting, balanced
against the responsibility of the trust to investigate (which may include questions to
the reporter to gain a full picture), has not been investigated in the NHS.
Q15: Reporting by the public is facilitated in a variety of ways, including complaints,
PALS and by informing staff, although a minority acknowledged there was no such
system in their trust.
Q16: How effective do you think the incident reporting systems implemented in your Trust/organisation are at detecting safety incidents/dangerous conditions? (No. out of 344)
Low effectiveness: a large proportion of safety-critical incidents go unreported: 16 (4.7%)
Moderate effectiveness: at least half of safety-critical incidents are reported: 138 (40.1%)
High effectiveness: a large proportion of safety-critical incidents are reported: 190 (55.2%)
Q16: The effectiveness of the incident reporting system at detecting safety incidents/
dangerous conditions is said to be low in only a few Trusts, while just over half
believe their system is highly effective, i.e. that a large proportion of safety-critical
events are reported. We are unable to ascertain the veracity of this rating, and accept
that it is open to social desirability bias. It was included in the survey to act as a possible
filter, combined with other questions, in selecting trusts which might yield case study
sites for good practice.
Different questions were used in the two NAO surveys. Of the 20% of trusts in the NAO 2004 sample that could offer an estimate (in deciles) of the proportion of reports against actual incidents, the median is that 50-60% are reported. In the 2005 survey, with larger categories of response (quintiles) and therefore a less fine-grained analysis, the median is that 41-60% are NOT reported.
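For illustration only, the median category can be read off banded responses such as these with a simple cumulative count. The helper function below is ours, and the decile-style frequencies are invented placeholders, not figures from either NAO survey.

    # Illustrative sketch: find the band containing the median respondent from grouped answers.
    def median_band(band_counts):
        """Return the band holding the median respondent, given ordered (band, count) pairs."""
        total = sum(count for _, count in band_counts)
        midpoint = (total + 1) / 2.0
        running = 0
        for band, count in band_counts:
            running += count
            if running >= midpoint:
                return band
        return None

    # Hypothetical decile-style estimates of the proportion of incidents reported:
    deciles = [("0-10%", 3), ("10-20%", 5), ("20-30%", 6), ("30-40%", 8), ("40-50%", 10),
               ("50-60%", 12), ("60-70%", 9), ("70-80%", 6), ("80-90%", 4), ("90-100%", 2)]
    print(median_band(deciles))  # coarser quintile bands would locate the median less precisely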
Arguably these estimates are less an indication of actual risk events and more an indication that the trusts' patient safety staff are beginning to understand the scale of the problem. It seems unlikely that these estimates are based on actual figures for the denominator, since there would have to be evidence that trusts regularly obtain monitoring or audit data on the actual levels of reporting. This is known to be a problem worldwide (W.B. Runciman, personal communication).
Chi-squared analyses were conducted to see if there were differences in the responses in this section by trust type, since partnership trusts/mental health/learning disability trusts may be expected to have longer-term relationships with clients and to make more use of methods for including user views in service improvements. No significant differences were found. However, caution in interpretation is required, since some trusts classified as partnership trusts were, at the time of the survey, also combined with PCTs, so there is not a simple comparison to be made with this data set.
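As an illustration only, the kind of chi-squared test of independence described above can be sketched as follows; the trust-type groupings and counts are invented placeholders rather than survey data, and the scipy library is assumed to be available.

    # Illustrative sketch of a chi-squared test of independence by trust type.
    # The counts below are hypothetical placeholders, not figures from this survey.
    from scipy.stats import chi2_contingency

    observed = [
        [40, 10],  # e.g. acute trusts: yes / no to one reporting-system item (hypothetical)
        [25, 12],  # e.g. partnership / mental health / learning disability trusts (hypothetical)
        [18, 9],   # e.g. other trust types (hypothetical)
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
    # A p-value above the chosen significance level (e.g. 0.05) would correspond to the
    # report's finding of no significant differences by trust type.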
Conclusions:
Trust-wide systems are most usually confidential rather than anonymous, although the NAO's 2005 data suggested some trusts permit some anonymous reporting outside, or in addition to, the confidential system. While nearly all the incident systems include incidents involving staff and patients, most are not accessed directly by the public, and in many cases a report from a patient will be lodged as a complaint rather than an incident.
3.4 - Part 3: Analysis of incidents and patient safety information:
Q17: Is the Trust's patient safety incident investigative system initiated as: (No. out of 348; %)
  A response to a specific serious incident report: 335 (96.3%)
  An analysis of a pattern of minor incident reports: 306 (87.9%)
  A preventive strategy promoted by a series of "near miss" reports: 234 (67.2%)
  None of these: 6 (1.7%)
Q17: Investigations are, in most trusts, initiated in response to specific serious
incidents, and nearly as many trusts initiate investigations from analysis of patterns of
minor incidents. The majority of trusts also trigger investigations preventatively via
near miss reports.
Q18: What proportion of all reported incidents are subject to a patient safety investigation, i.e. beyond recording and describing the nature of the incident? (No. out of 336; %)
  None: 2 (0.6%)
  1-5%: 96 (28.6%)
  6-10%: 69 (20.5%)
  11-25%: 41 (12.2%)
  26-50%: 53 (15.8%)
  More than 50%: 75 (22.3%)
Q18: The proportions of incident reports analysed and investigated varied, with a modal response of 1-5% of incidents. There are no equivalent data in the NAO surveys. Interpretation of these results depends upon a common definition of a patient safety investigation, which was only defined in the negative, i.e. beyond recording and describing the nature of the incident. Future research might usefully employ more exact definitions, perhaps with several levels of depth of investigation, since it is unlikely that the depth of investigation typically required of a SUI (Serious Untoward Incident) is possible for more than a small minority of incidents.
Q19: What criteria are used to trigger a patient safety investigation? (No. out of 345; %; NAO 2004 %)
(NAO question: What criteria does your organisation use to decide which patient safety incidents to investigate?)
  Potential/actual severity of the outcome/impact of the incident: 327 (94.8%) [NAO 2004: 94%]
  Likely frequency of occurrence of the incident: 278 (80.6%) [NAO 2004: 83%]
  Potential for learning from the incident: 262 (75.9%)
  Potential risk to the trust or patient (NAO option only): [NAO 2004: 86%]
  Other: 30 (8.7%) [NAO 2004: 18%]
Q19: The trigger for analysis and investigation is most often the potential outcome/severity/impact of the incident, cited by almost all trusts, with the likelihood of occurrence also important for most, as is the potential for learning and, according to the NAO 2004 survey, the potential risk to the trust or patient.
Q20: Apart from incident reports, does the Trust systematically examine any other data sources in the context of identifying potential safety issues? (No. out of 345; %)
  Complaints: 333 (96.5%)
  PEAT (environment/hygiene audits): 183 (53.0%)
  Equipment maintenance logs: 98 (28.4%)
  Staff training records: 135 (39.1%)
  Infection rates or other routine clinical data: 225 (65.2%)
  Clinical outcome data such as unplanned return to theatres/re-admissions: 106 (30.7%)
  Other routinely available information: 132 (38.3%)
  There are no relevant additional sources of information: 4 (0.1%)
Q20: Other data sources used for identifying safety issues most often include complaints, with two thirds also using routine clinical data such as infection rates, and a sizeable minority using other quality indicators, clinical outcome data and staff training records.
Q21: Does your Trust use data other than incident reports to analyse patient safety information? (No. out of 341; %)
  Yes: 291 (85.3%)
Q22: Please specify which Department or Function analyses this information: (No. out of 341; %)
  Risk management department: 243 (71.3%)
  Patient Safety department: 54 (15.8%)
  Clinical Governance department: 229 (67.1%)
  Other internal department: 78 (22.9%)
  External experts/consultancy: 33 (9.7%)
  Other: 38 (11.1%)
Q21 & 22: Data other than patient safety incident reports are used to analyse patient safety information by the majority of trusts. The vast majority of these other investigations are conducted in the risk management or clinical governance department, which are likely to be the same department in many trusts. About a tenth of trusts use outside consultancies or experts.
Q23: In your experience, in what proportion of individual investigations is the level of information submitted adequate for the purposes of identifying contributory factors and prioritising remedial actions? (No. out of 343; %)
  None: 1 (0.3%)
  Only a few reports: 96 (28.0%)
  At least half of all reports: 136 (39.6%)
  Nearly all reports: 92 (26.9%)
  All reports: 18 (5.2%)
Q23: The quality of reports on which investigations are based is adequate for identifying contributory factors and for prioritising remedial actions in about half or more of reports in most trusts, but just over a quarter of trust leads report that only a few reports are adequate, and one trust lead reported that none are adequate.
Q24: In your organisation, how are patient safety incidents categorised in your incident reporting system? (No. out of 348; %)
  By clinical service or department: 325 (93.4%)
  By clinical process, e.g. diagnostics, medical or surgical procedures: 156 (44.8%)
  By specific technology/equipment/medication/intervention: 179 (51.4%)
  By patient group, e.g. children: 157 (45.1%)
  By error/failure types: 216 (62.1%)
  By contributory factors/causes: 134 (38.5%)
  By severity of outcome: 304 (87.4%)
  By likelihood of recurrence: 218 (62.6%)
  No categories used: 2 (0.6%)
  Other: 32 (9.2%)
Q25: In your organisation, are patient safety incidents analysed to produce reports using the categories listed in question 24? (No. out of 337; %)
  Yes: 319 (94.7%)
Q26: If yes, is the information presented with sufficient analysis to allow meaningful judgements of errors/failures and possible solutions? (No. out of 328; %)
  Yes, reports are analysed and presented so that we have confidence in forming meaningful judgements of errors and possible solutions: 83 (25.3%)
  Although most are analysed sufficiently, we recognise that this needs more time/expertise to allow meaningful judgements of errors/failures and possible solutions: 223 (68.0%)
  No, we are not confident that information is presented with sufficient analysis to allow meaningful judgements of errors and possible solutions: 22 (6.7%)
Q24, 25 & 26: The answers to these questions show that a variety of categorisations are used within the same trust, most often involving actual or possible severity, with a majority including likelihood of recurrence and error types. These categories are used for analysis by all but 18 trusts. There is sufficient analysis for meaningful identification of errors/failures and possible solutions in a quarter of trusts, but for the majority more time or expertise is often needed, while very few are not confident that information is presented with sufficient analysis.
Q27: In your organisation, are reports of similar categories of incidents synthesised to gain insight into possible causes? (No. out of 343; %)
  Yes: 291 (84.5%)
Q28: If yes, is the synthesis by… (No. out of 287; %)
  General discussion between staff involved in investigations: 204 (71.1%)
  Written text summarising key points: 176 (61.3%)
  Numerical summaries (e.g. tables, graphs): 218 (76.0%)
  Staff in addition to those involved in investigations are involved in discussions about causes and solutions: 166 (48.4%)
  Other: 21 (7.3%)
Q27 & 28: Similar categories of incidents are synthesised to gain insight into possible
causes by most trusts. This is achieved by several means including discussion
between staff involved in the investigation, written text summarising key points,
numerical summaries and discussion with other staff.
Q29: Does your Trust have a patient safety committee? (No. out of 337; %; NAO 2004 %)
  Yes: 144 (42.7%)
Q30 (a): If yes, to which group does your organisation's Patient Safety Committee (or equivalent) report? (No. out of 144; %; NAO 2004 %)
  Trust Board: 95 (65.9%) [NAO 2004: 33%]
  Risk Management Committee: 90 (62.5%) [NAO 2004: 40%]
  Clinical Governance Committee: 123 (85.4%) [NAO 2004: 47%]
  Health and Safety Committee: 59 (41.0%) [NAO 2004: 12%]
  Executive Board: 44 (30.6%) [NAO 2004: 9%]
  Trust Management Committee: 23 (16.0%)
  Clinical Risk Forum: 27 (18.6%)
  Quality Committee: 14 (9.7%) [NAO 2004: 1%]
  Assurance Committee: 23 (16.0%)
  Audit Committee: 18 (12.5%) [NAO 2004: 3%]
  Other: 22 (15.3%)
NB: The NAO 2004 survey has other response options (e.g. Clinical Risk Forum), all with very low frequency, which were not used in our survey.
Q30 (b): If yes, to which other groups does your organisation's Patient Safety Committee (or equivalent) provide information? (No. out of 144; %; NAO 2004 %)
  Trust Board: 85 (59.0%) [NAO 2004: 48%]
  Risk Management Committee: 49 (34.0%) [NAO 2004: 27%]
  Clinical Governance Committee: 88 (61.1%) [NAO 2004: 35%]
  Health and Safety Committee: 64 (44.4%)
  Executive Board: 50 (34.7%)
  Trust Management Committee: 33 (22.9%)
  Clinical Risk Forum: 32 (22.2%)
  Quality Committee: 18 (12.5%)
  Assurance Committee: 24 (16.7%)
  Audit Committee: 39 (27.1%)
  Other: 22 (15.3%) [NAO 2004: 20%]
Q29 & 30: Just under half of trusts have a Patient Safety Committee; however, it is likely that others use committees which include this within a wider remit, such as Clinical Governance or Risk Management. Where a Patient Safety Committee exists it most often reports to a clinical governance or related committee, and two thirds report to the Trust Board. In the NAO survey only one third reported directly to the Board. In this survey, slightly under three fifths provide information to the Trust Board. While a small difference, it may be that where there is a mediated reporting relationship to the Board, there is less resource and oversight of patient safety in these trusts.
Q31: Does your organisation produce formal summarised reports from analyses of patient safety incident reports? (No. out of 347; %)
  Yes: 333 (96.0%)
Q32: If yes, what type of reports? (No. out of 339; %)
  Summary statistics of incidence of various types of incidents over a recent period: 325 (95.9%)
  Cumulative statistics of incidence of various types of incidents to date: 265 (78.2%)
  Total number of reports of incidents submitted in last period compared with previous periods: 298 (87.9%)
  Focus upon a specific area/type of incident each issue: 182 (53.7%)
  Textual accounts/examples of specific incidents: 201 (59.3%)
  Safety-relevant information from sources other than incident reporting: 140 (41.3%)
  Other: 10 (2.9%)
Q31 & 32: Nearly all trusts produce summarised reports from analyses of safety incident reports. These most often include summary statistics of gross numbers and types of incidents, most also include some textual account, while around two fifths report safety-relevant information from other sources. Textual accounts of specific incidents may be of greater interest to others not familiar with the event, and invite discussion of possible causes, solutions and preventive measures.
The NAO 2004 survey examined the type of information contained in formal summarised patient safety reports. Response options differed from the above, with the most frequently endorsed options being: analysis of trends (88%); analysis by category (91%); and analysis by directorate (81%). Of more interest perhaps is the finding that only 34% report analyses by outcome or underlying cause, and only 6% by action taken/recommendation. These latter are arguably most relevant to determining remedial or preventive actions.
Taken together, these results suggest that information is largely produced in ways
that are relevant for performance review and reporting externally, but not for
organisational learning by elaborating on causes and making recommendations for
preventive and remedial action.
Q33: At what forum are patient safety reports discussed and actions prioritised? (No. out of 336; %; NAO 2004 % discussed; NAO 2004 % action prioritised)
  Trust Management Board: 156 (46.4%) [NAO 2004: 68% discussed; 22% prioritised]
  Health and Safety Committee: 194 (57.7%) [NAO 2004: 67% discussed; 51% prioritised]
  Patient Safety Committee: 73 (21.7%) [NAO 2004: 2% discussed; 36% prioritised]
  Clinical Governance Committee: 223 (66.4%) [NAO 2004: 11% discussed; 55% prioritised]
  Risk Management/Clinical Risk Management Committee: 265 (78.9%) [NAO 2004: 81% discussed; 67% prioritised]
  Complaints Committee/Claims Committee: [NAO 2004: 45% discussed; 33% prioritised]
  Other: 76 (22.6%)
  Service Directorate/Department/Local Group: 172 (51.2%)
  None/don't know: 5 (1.5%)
Q33: Reports are discussed and actions prioritised largely at the clinical governance and risk management committees, but with some trusts showing further discussion at local sub-areas, and some at trust management board level (which is likely to mean an operational executive or other formal board sub-committee in larger trusts).
Conclusions:
Analysis is confined to actual incidents in a third of trusts, suggesting that in this sizeable minority of trusts near misses are either not reported or are largely ignored. There is wide variation in the proportions investigated, with most trusts investigating less than a quarter of incidents, but with some investigating more than half. Triggers to investigation are mainly associated with severity of the incident, likelihood of recurrence and potential for learning. In addition to complaints data, clinical and other quality and training records are used by between a half and two thirds of trusts to identify safety issues.
The quality of reports is varied: while most trust leads judge at least half of reports to be adequate, over a quarter report that only a few are. This implies that central expertise is required to conduct even preliminary investigations.
Categorisation includes severity of outcome, recurrence and errors, but also local
departmental and service categories necessary for targeting remedial action. There
is evidence that most trusts use multiple categories to facilitate analyses. Synthesis
of similar events is provided by most trusts, most using discussion among investigators or other staff, or textual or numerical summaries. This may add context and improve recommendations, a process described by Weick (1995) as sensemaking. See also Battles et al. (2006).
Trusts vary as to the formal reporting structure of safety committees, with some
having dedicated Patient Safety Committees; others subsume this within a committee
with a wider risk management remit.
Formal reports to these bodies include numerical analyses of numbers and types of incidents, but only a minority combine other sources of safety-relevant information, suggesting a restricted understanding will be achieved of factors relevant to prevention. Only a minority report that analyses include outcome and type of recommendation, which may mean that insufficient analysis has been undertaken on most incidents to identify these factors. This means that likely causes, remedial actions and preventive measures, and the monitoring of recommendations, are unlikely to be identified.
Prioritisation occurs at a number of levels in different trusts, with most via a risk/
clinical governance committee. Whether this has sufficient authority to ensure actions
and resources follow is not possible to determine, but some insights can be gained
from the next section.
3.5 - Part 4: Formulating solutions and recommendations for change:
The NAO 2004 survey examined, in a general question, the extent to which several sources of information were used to learn about patient safety. The most frequently endorsed item was complaints, endorsed at least "to some extent" by all trusts, and similarly by nearly all trusts with regard to clinical negligence claims. The question posed was too general for our purposes of examining solution generation. Further, the generation of solutions from individually investigated incidents, as opposed to reports of aggregated incidents, was not investigated in the NAO survey. We therefore asked how solutions are generated from these investigations.
Q34: In your organisation, how are solutions generated to address errors or contributory factors/causes from each incident investigation? (No. out of 348; %)
  Solutions are generated by those involved in the investigation: 313 (89.9%)
  Recommendations are made by staff undertaking the investigation: 324 (93.1%)
  Solutions are generated by discussion among staff in the affected service: 277 (79.6%)
  External advice is sought before making recommendations: 110 (31.6%)
  The risk management/risk investigation staff generate recommendations: 253 (72.7%)
  The patient safety committee or equivalent generates recommendations: 154 (44.3%)
  Solutions are not generated at present: 9 (2.6%)
  Other means are used: 17 (4.9%)
Q 34: Solutions are most often generated by staff involved in the investigation, who
also generate recommendations. A third of trusts also use external advice.
Q35: In your organisation, how are solutions generated to address errors or contributory factors/causes from synthesis of similar incidents? (No. out of 338; %)
  Solutions are generated by those involved in the investigation: 206 (60.9%)
  Recommendations are made by staff undertaking the investigation: 227 (67.2%)
  Solutions are generated by discussion among staff in the affected service: 224 (66.3%)
  External advice is sought before making recommendations: 91 (26.9%)
  The risk management/risk investigation staff generate recommendations: 249 (73.7%)
  The patient safety committee or equivalent generates recommendations: 147 (43.5%)
  Solutions are not generated at present: 14 (4.1%)
  Other means are used: 22 (6.5%)
Results for questions 34 and 35 are very similar, suggesting that there are similar processes in trusts for generating solutions from individual incidents and from patterns of similar incidents.
Q36: In your organisation, are solutions generated from consideration of data other than incident reports? (No. out of 343; %)
  Yes: 291 (84.8%)
Q37: If yes, what sources are used? (No. out of 292; %)
  Complaints: 275 (94.2%)
  PEAT reports (hospital environment/hygiene audits): 144 (49.3%)
  Equipment maintenance logs: 94 (32.2%)
  Staff training records: 131 (44.9%)
  Infection rates or other routine clinical data: 193 (66.1%)
  Clinical outcome data such as unplanned return to theatres/re-admissions: 103 (35.3%)
  Other routinely available data: 127 (43.5%)
  Other: 36 (12.3%)
  There are no relevant additional sources of data: 8 (2.7%)
Q36 & 37: Solutions are apparently also generated from sources of data other than incident reports, with complaints being the most often cited source, along with routine clinical data such as infection rates.
In the NAO 2004 survey, there was a question about the extent to which patients and the public are involved in the design and development of any patient safety solutions in the trust, to which 58% reported some input. This was via the patient forum (17%), representation on boards or committees (12%) or consultation about specific issues (12%).
It would seem that there is infrequent systematic use of patients’ views in generating
solutions and recommendations, which is an important part of testing the feasibility of
solutions.
Conclusions:
The formulation of solutions and recommendations for change from investigations
and from analyses of patterns of similar events relies mainly on core risk
management staff and those involved in investigations, with about two thirds
involving staff in affected areas. It is perhaps surprising this percentage is not higher, since these staff may have valuable insights, and their commitment to change may be weaker if they are not involved in suggesting solutions. This process is known as sensemaking. There is some evidence for the use of other sources of data in
generating solutions such as complaints and routine clinical information, which
should result in a wider understanding of possible solutions and how suggested
change could be monitored. The systematic use of patients’ views is rare. There are
evident gaps in the extent that all sources of relevant information and opinion are
used to make feasible safety solutions.
3.6 - Part 5: Implementing recommendations:
Q38: In your organisation, how are changes implemented? (No. out of 335; %)
  Written plans are submitted to the risk management department and reviewed periodically: 216 (64.5%)
  Service managers/clinicians are expected to implement the recommendations and no formal system is in place to monitor this: 177 (52.8%)
  Staff are encouraged/rewarded for demonstrating that recommendations are implemented: 66 (19.7%)
  Staff are congratulated when changes are demonstrated to have improved patient safety: 118 (35.2%)
  No systems are in place to monitor implementation of recommendations: 25 (7.5%)
  Other: 38 (11.3%)
Although two thirds of trusts report that written plans are made and reviewed, a half
also indicated that implementation is devolved to local managers and clinicians with
no formal monitoring, while a small minority (7.5%) indicated that no system is in
place at all to monitor implementation of recommendations. Only a third congratulate
staff on safety improvement.
Q39: Is the patient safety incident reporting system used to monitor the effectiveness of solutions by: (No. out of 342; %)
  Overall number of incidents: 188 (55.0%)
  Changes in number of incidents relevant to the solutions: 179 (52.3%)
  No indicators are used at present: 97 (28.4%)
  Other indicators: 20 (5.8%)
Q39: A key part of monitoring the effectiveness of solutions is the review of indicators linked to the impact of the solutions. Most trusts have some systems in place, but over a quarter do not.
Q40: Are any additional sources of information beyond the incident reporting system used to help monitor the impact of changes intended to improve patient safety? (No. out of 316; %)
  Complaints: 267 (84.5%)
  PEAT reports (hospital environment/hygiene audits): 114 (36.1%)
  Equipment maintenance logs: 70 (22.2%)
  Staff training records: 105 (33.2%)
  Infection rates or other routine clinical data: 166 (52.5%)
  Clinical outcome data such as unplanned return to theatres/re-admissions: 89 (28.2%)
  Other routinely available information: 50 (15.8%)
  There are no relevant additional sources of information: 26 (8.2%)
However, there is some evidence that other sources of information may be used to
monitor the impact of safety solutions, again with particular emphasis on using
patient complaints as an indicator. This may not be wholly satisfactory since none of
these sources may be causally linked to safety issues.
Conclusions:
There is only weak evidence that recommendations for action are acted upon by the relevant service managers, since around half of trusts delegate implementation to managers and clinicians without also ensuring that the impact of these recommendations is monitored (Q38). There is also evidence that systems for directly monitoring the impact of solutions are often not in place: just over a quarter of trusts stated they used no indicators for monitoring the impact of recommended changes (Q39). Other sources of clinical and performance data are in use, but the extent to which they are causally linked to safety issues is unknown.
3.7 - Part 6: Feedback and dissemination
Q41: When staff report patient safety incidents or near misses, is it the Trust's practice to provide: (No. out of 336; %; NAO 2004 %)
  An acknowledgement thanking them for reporting the incident: 106 (31.5%) [NAO 2004: 33%]
  Feedback on how the report will be dealt with (i.e. investigated, or recorded): 143 (42.6%) [NAO 2004: 52%]
  Feedback on the outcome of the investigation (if one is undertaken): 244 (72.6%) [NAO 2004: 84%]
  No feedback is provided: 40 (11.9%)
  Other feedback: 68 (20.2%) [NAO 2004: 32%]
Q41: Given that staff who have reported an incident can be expected to have an interest in the response by management, including the progress and outcome of any investigation, it is surprising that only about two thirds of trusts directly feed back the result, only a third send an acknowledgement, fewer than half indicate how the report will be dealt with, and 11.9% give no feedback.
Q42: How is this feedback provided? (No. out of 309; %)
  Individual letter to those who reported it: 106 (34.3%)
  Presented at meetings: 209 (67.6%)
  Posted on the intranet: 42 (13.6%)
  Summarised in reports/other regular documents: 233 (75.4%)
  Published in newsletters/leaflets: 167 (54.0%)
  Other: 50 (16.2%)
Q42: Methods of feedback are generally by reports or presentations at meetings, and by letter in a minority of trusts. From this question it is not possible to ascertain whether this feedback is tailored to specific incidents that a reporter might recognise, which may be important in influencing their confidence in the integrity and effectiveness of reporting and investigation. Teasing this out would require more detailed questioning than was possible in this survey.
Q43: If the public report patient safety incidents or near misses, is it the Trust's practice… (No. out of 307; %; NAO 2004 %)
  Not to provide information for patients and family: 15 (4.9%)
  To provide an acknowledgement thanking them for reporting the incident: 210 (68.4%) [NAO 2004: 51%]
  To provide feedback on how the report will be dealt with (i.e. investigated, or recorded): 236 (76.9%) [NAO 2004: 50%]
  To provide feedback on the outcome of the investigation (if one is undertaken): 239 (77.9%) [NAO 2004: 67%]
  To provide other feedback: 190 (61.9%) [NAO 2004: 16%]
Q43: Reporting by the public results more often in an acknowledgement, and a similar proportion of trusts as for staff reports (above) also report the outcome of investigations to patients who reported the incident. Given that most trusts treat reports by patients as complaints rather than incidents, which are subject to mandatory response times and types of response, it is unsurprising that feedback to patients is at least this high. Arguably this also shows that the system has the capacity to implement specific processes to provide feedback to patients; these processes could be adapted to ensure that feedback to staff and patients is provided in similar ways for incidents.
Q44: How does your organisation disseminate lessons learnt across the Trust? (No. "does apply"; % of valid responses; NAO 2004 %)
  Through discussion at relevant hospital-wide meeting groups (e.g. Clinical Governance Committee): 258 (78.7%) [NAO 2004: 97%]
  Through discussion by local meeting groups who are designated as responsible for local patient safety issues: 230 (82.7%) [NAO 2004: 86%]
  Via Team Brief meetings: 206 (74.1%) [NAO 2004: 80%]
  By mail/email to local managers (e.g. ward sisters) who disseminate relevant findings: 199 (81.9%) [NAO 2004: 59%]
  Through the Trust's intranet/electronic alerts: 167 (74.9%) [NAO 2004: 77%]
  Through newsletters/reports: 254 (88.0%)
  Through bulletins/notice boards: 135 (69.2%)
  Via training programmes/workshops: 254 (90.7%)
  Other: various
NB: Numbers of missing values vary for each item, so the actual numbers in the "does apply" column do not relate directly to the percentages shown, which are percentages of all valid responses, i.e. excluding missing values.
The next table shows the data in a different way, to show the extent to which each method was found useful, in relation to the proportion of trusts with experience of the method.
Q44: How does your organisation disseminate lessons learnt across the Trust? (No. and % for: does not apply; not useful; quite useful; very useful; then % very useful out of those to whom it applies)
  Through discussion at relevant hospital-wide meeting groups (e.g. Clinical Governance Committee): does not apply 70 (21.3%); not useful 3 (0.9%); quite useful 146 (44.5%); very useful 109 (33.2%); very useful among those to whom it applies 42.2%
  Through discussion by local meeting groups who are designated as responsible for local patient safety issues: does not apply 48 (17.3%); not useful 0 (0%); quite useful 100 (36.0%); very useful 130 (46.8%); very useful among those to whom it applies 56.5%
  Via Team Brief meetings: does not apply 51 (19.8%); not useful 7 (2.7%); quite useful 121 (47.1%); very useful 78 (30.4%); very useful among those to whom it applies 37.8%
  By mail/email to local managers (e.g. ward sisters) who disseminate relevant findings: does not apply 44 (18.1%); not useful 9 (3.7%); quite useful 121 (49.8%); very useful 69 (28.4%); very useful among those to whom it applies 34.7%
  Through the Trust's intranet/electronic alerts: does not apply 56 (25.1%); not useful 7 (3.1%); quite useful 107 (48.0%); very useful 53 (23.8%); very useful among those to whom it applies 31.7%
  Through newsletters/reports: does not apply 29 (10.2%); not useful 9 (3.2%); quite useful 149 (52.7%); very useful 96 (33.9%); very useful among those to whom it applies 37.8%
  Through bulletins/notice boards: does not apply 60 (30.8%); not useful 12 (6.2%); quite useful 78 (40.0%); very useful 45 (23.1%); very useful among those to whom it applies 33.3%
  Via training programmes/workshops: does not apply 26 (9.3%); not useful 2 (0.7%); quite useful 108 (38.6%); very useful 144 (51.4%); very useful among those to whom it applies 56.7%
  Other: does not apply 22 (61.1%); not useful 1 (2.8%); quite useful 3 (8.3%); very useful 10 (27.8%)
For those trust leads who endorsed that the method applies, discussions at local meetings concerned with patient safety, and training events, were most often found to be very useful. While newsletters were not an explicit response option in the NAO surveys, this survey shows there is widespread use of newsletters and intranet messages, but these attracted somewhat fewer ratings of being very useful, although they have the potential to reach more staff at lower cost. There is some evidence that email and intranet methods are more used in this later survey than in the NAO 2004 survey.
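The final column of the table above ("% very useful out of those to whom it applies") simply excludes "does not apply" responses from the denominator. A short sketch of that calculation, using the hospital-wide meeting group counts from the table (the helper function name is ours, not the report's):

    # Percentage rating a dissemination method 'very useful', among trusts to whom it applies.
    def pct_very_useful(not_useful, quite_useful, very_useful):
        applies = not_useful + quite_useful + very_useful  # 'does not apply' responses excluded
        return 100.0 * very_useful / applies

    # Hospital-wide meeting groups row: 3 not useful, 146 quite useful, 109 very useful.
    print(round(pct_very_useful(3, 146, 109), 1))  # -> 42.2, matching the table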
Q45: Guidelines are a common form of recommendation arising from analysis of patient safety information. We asked how focused and practical they are in practice.
Q45: If guidelines/recommendations are specified from the analysis of patient safety incident reports or other patient safety information, how are they focused? (No. out of 337; %)
  The recommendations are general and unrelated to what staff need to do to be really practical for the staff involved: 14 (4.2%)
  The recommendations are tied to very specific circumstances, so sometimes they may be relevant to only a few events: 103 (30.6%)
  The recommendations are practical and aimed at the right level of the organisation so as to have a widespread and beneficial impact upon safety: 220 (65.3%)
Q45: Guidelines are thought to be fit for purpose in about two thirds of trusts. This suggests some trusts are aware that effort is required to ensure guidelines are focused, credible and practical. Again, this question was used in the NAO surveys, and was included here as an indicator of possible good practice. However, it may also be subject to social desirability bias, so results are treated with caution.
Q46: Which of the following best describes the feedback provided from patient safety incident investigations and analyses of information in your incident report database? (No. out of 343; %)
  Little direct feedback reaches staff, and few actions result: 16 (4.7%)
  Some feedback information (regarding vulnerability/safety risks) reaches staff, but it is not reliably acted upon: 233 (67.9%)
  Feedback is regular, relevant, is reviewed for its effectiveness in improving patient safety processes and is regularly acted upon: 94 (27.4%)
Q46: However, feedback is thought to be regular, reviewed and acted upon in only about a quarter of trusts.
Q47: Does your organisation share lessons learnt with any of the following external organisations? (No. out of 329; %; NAO 2004 %)
  Strategic Health Authority (SHA)/Welsh Regional Office: 240 (68.4%) [NAO 2004: 91%]
  NPSA: 227 (64.7%) [NAO 2004: 34%]
  Primary Care Trusts: 200 (57.0%) [NAO 2004: 68%]
  Local Trusts: 168 (47.9%) [NAO 2004: 63%]
  Other NHS body: 70 (21.3%) [NAO 2004: various]
  Other local authorities: 45 (13.7%) [NAO 2004: 5%]
  Medicines and Healthcare products Regulatory Agency (MHRA): 152 (46.2%) [NAO 2004: 51%]
  Health and Safety Executive (HSE): 153 (46.5%) [NAO 2004: 60%]
  Other: 39 (11.9%) [NAO 2004: 20%]
Q47: Sharing lessons externally occurs with several bodies for most trusts. It must be
noted that at the time of data collection not all trusts had reporting systems in place,
and some were not able to report electronically to the SHA or NPSA. Of interest is
the low rate of reports to local authorities, given that many incidents involve those in
receipt of social care / vulnerable groups.
Every trust potentially has access to one of the NPSA’s 34 Patient Safety Managers
(PSMs), as each is allocated to a geographical area. Contact with PSMs is described
in Q48, 49:
Q48: Has your organisation contacted the NPSA Patient Safety Manager responsible for your Strategic Health Authority [or Welsh Regional Office] area in relation to any of the following? (No. out of 341; %; NAO 2004 %)
  To seek advice or expertise in the investigation of patient safety issues: 164 (48.1%) [NAO 2004: 49%]
  To seek advice/provide training on investigation of patient safety incidents: 262 (76.8%)
  To seek advice on methods of feedback from patient safety incidents: 68 (19.9%)
  To bring to their attention local patient safety concerns or effective solutions to problems: 133 (39.0%) [NAO 2004: 38%]
  To seek support and advice about the rollout of the National Reporting and Learning System: 272 (79.8%) [NAO 2004: 92%]
  To seek advice on other wider patient safety issues: 164 (48.1%) [NAO 2004: 49%]
  No, we have not had any contact with them about patient safety: 10 (2.9%) [NAO 2004: 2%]
  To seek advice about other issues: 38 (11.1%)
Q48: Contact with NPSA PSMs varies between trusts, and only 10 had had no contact at all. The predominant response in this survey concerned linking to the NRLS, which was a major priority for the NPSA at this time. This may have detracted from the wider advisory roles within the PSMs' remit.
Q49: Has the local Patient Safety Manager contacted your organisation in relation to any of the following? (No. out of 344; %; NAO 2004 %)
  To introduce themselves and explain their role: 332 (96.5%) [NAO 2004: 97%]
  To provide advice/provide training on investigation of patient safety incidents: 300 (87.2%)
  To provide advice on training methods of feedback from patient safety incidents: 148 (43.0%)
  To provide advice or expertise in the investigation of patient safety incidents: 180 (52.3%) [NAO 2004: 49%]
  To seek information about local patient safety concerns or effective solutions to problems: 166 (48.3%) [NAO 2004: 38%]
  To provide support and advice about the rollout of the National Reporting and Learning System: 296 (86.0%) [NAO 2004: 89%]
  To provide advice on other wider patient safety issues: 193 (56.1%) [NAO 2004: 45%]
  No, they have not contacted us about patient safety: 3 (0.9%) [NAO 2004: 2%]
Q49: Proactive contact by PSMs is evident over a variety of issues, with only 3 trusts stating they have not had contact from a PSM. Some 43% of trusts report that advice in relation to training trust staff in methods of feedback has been provided.
3.8 - Conclusions:
Within trusts, there is evidence that staff who report incidents are not routinely
thanked, acknowledged, nor informed of progress, although most will be informed
about the outcome of any investigation, via a formal report or less often, a personal
letter. Patients too are generally informed at the start and end of the process, but this
is most often in the context of complaint procedures. Formal means are used internally to disseminate the outcomes of investigations, with meetings and training, where discussion is possible, preferred over newsletters and intranet
messages. There is some evidence that the use of email, intranet and newsletters is
becoming more common.
Guidelines as a means of improving safety are generally felt to be practical and
focussed, but feedback from the incident investigations and information from the
database with lessons learnt is generally felt to be poor.
External sharing of the outcome of investigations occurs with many national and
regional bodies, although very little locally. NPSA PSMs are a resource for many
aspects of patient safety.
Methodological issues arising from this study
First, we recognise there are methodological weaknesses in this study. These
include the extent to which a survey of trust risk management leads can give a valid
picture of risk reporting systems, processes and culture. Intentional presentational
bias of “faking good” was perhaps reduced compared to the NAO surveys, as unlike
the NAO studies, this study was confidential.
However, bias introduced by non-response cannot be so easily ruled out. Unlike the NAO studies, which can achieve a near complete response rate since provision of information to the NAO is a statutory requirement, this study was voluntary. It is likely that there is some relationship between the willingness of trust risk managers to respond to this survey and the trusts' overall level of confidence in their risk management systems. We have no means of undertaking retrospective analyses of responders and non-responders on the extent to which they are regarded as excellent at risk management. We did consider such an analysis, using CNST (Clinical Negligence Scheme for Trusts) ratings. However, systems in a trust are not comprehensively assessed (high ratings can be obtained for achievement by only some services) and many of the components of the overall rating do not relate to risk reporting systems. Also, these data are not captured for every part of each trust to achieve a rating. CNST ratings would be obtained at different times in each trust, and so, as a census of risk management effectiveness at the time the survey data were collected, they would not be of equivalent recency for each trust. Similar issues arise from consideration of Healthcare Commission ratings.
With these limitations in mind, we turn now to examine the implications of the survey
for the emerging model of feedback, and comment on the extent that NHS trusts
have the elements in place to achieve effective individual and organisational learning
from their clinical risk reporting systems.
Overall conclusions from the survey in relation to the model of feedback from
patient safety incident reporting systems:
The survey was designed to map how progress in risk management and incident
reporting systems in the NHS in England and Wales reflects the emerging model
from the review and interviews with experts, and informed the Expert Workshop and
case study phases of work. We examine first the levels of feedback (A-E) and then the 15 system requirements in the SAIFIR model, as described in the review section.
Feedback modes in the SAIFIR model (Feedback; Type; Content and Examples):
A: Bounce-back information (Information to reporter)
  • Acknowledge report filed (e.g. automated response)
  • Debrief reporter (e.g. telephone debriefing)
  • Provide advice from safety experts (feedback on issue type)
  • Outline issue process (and decision to escalate)
B: Rapid response actions (Action within local work systems)
  • Measures taken against immediate threats to safety or serious issues that have been marked for fast-tracking
  • Temporary fixes/workarounds until the in-depth investigation process can complete (withdraw equipment; monitor procedure; alert staff)
C: Risk awareness information (Information to all front line personnel)
  • Safety awareness publications (posted/online bulletins and alerts on specific issues; periodic newsletters with example cases and summary statistics)
D: Inform staff of actions taken (Information to reporter and wider reporting community)
  • Report back to reporter on issue progress and actions resulting from their report
  • Widely publicise corrective actions taken to resolve safety issues to encourage reporting (e.g. using visible leadership support)
E: Systems improvement actions (Action within local work systems)
  • Specific actions and implementation plans for permanent improvements to work systems to address contributory factors evident within reported incidents
  • Changes to tools/equipment/working environment, standard working procedures, training programmes, etc.
  • Evaluate/monitor effectiveness of solutions and iterate
Mode A is feedback to the reporter that provides acknowledgement and outlines the
issue process and is covered by Q41 (items 1 & 2). The review highlights that
dialogue with reporters is important for several reasons: a) to demonstrate that
actions are taken on the basis of reports and in so doing to stimulate future reporting
to the system, b) to provide the opportunity for clarification and validation of reported
information coming into the system, and c) to gain the input of local front line
expertise concerning the causes of failures and how these might be addressed with
practical and workable safety solutions. We found that only approximately a third of
NHS trusts surveyed provide a specific acknowledgement to the reporter or any
further information on how the issue is to be handled. One in 10 trusts provided no
direct information to the reporter at all regarding their reported issue, although the
extent to which staff might learn from reports presented on specific incidents or
aggregated data relating to the issue of concern could not be ascertained from this
survey.
Mode C is concerned with the dissemination of information concerning lessons learnt
from analysis of reported incidents. Data from Q44 indicates that the more popular
forms of dissemination were newsletters, hospital-wide group discussions and
training programmes/workshops. Of the Trusts using each form of feedback, only
around 38% reported newsletters to be of the highest level of usefulness, in contrast
to other forms of dissemination which were more highly regarded, such as training
programmes (57%), local patient safety group discussions (57%) and hospital-wide
group meetings, such as clinical governance committees (42%).
Mode D refers to the communication of information concerning issue outcome to the
original reporter and hence closing the loop and demonstrating the utility of reporting
to those that originally reported the issue. Q41 (item 3) indicates that only around
two thirds of Trusts directly feed back the results of the investigative process to
reporters. Although it is possible that reporters may learn about how the incident was
addressed via access to reports presented via the patient safety and related
committees, it cannot be assumed that most staff will have such access nor that it is
possible to identify the response to the incident(s) of interest to the reporter. We
suggest that this may have implications for staff’s future motivation to report.
Mode E feedback is concerned with the implementation of actions to improve
systems safety. Q46 indicates that the majority of Trusts (68%) believe that some
safety feedback does reach staff but that it is not reliably acted upon. Only 27% of
Trusts could report that feedback was regularly offered, acted upon and evaluated in
terms of effectiveness in improving safety. Mode E feedback also includes the
capability to monitor the implementation and evaluate the impact of patient safety
improvements. Q38 indicates that in around half of all Trusts, service
managers/clinicians are expected to implement recommendations and yet there is no
formal system in place to monitor this process. Twenty five Trusts reported that there
was no system in place whatsoever governing the implementation of improvements.
We turn now to the 15 system requirements of effective safety feedback, as
described in the review section.
System requirements for effective safety feedback
01 Feedback loops must operate at multiple levels of the organisation or
system
02 Feedback should employ an appropriate mode of delivery or channel for
information
03 Feedback should incorporate relevant content for local work settings
04 Feedback processes should be integrated within the design of safety
information systems
05 Feedback of information should be controlled and sensitive to the
requirements of different user groups
06 Feedback should empower front line staff to take responsibility for
improving safety in local work systems
07 Feedback should incorporate rapid action cycles and immediate
comprehension of risks
08 Feedback should occur directly to reporters and key issue stakeholders
as well as broadly to all front line staff
09 Feedback processes should be well-established, continuous, clearly
defined and commonly understood
10 Feedback of safety issues should be integrated within the working
routines of front line staff
11 Feedback processes for specific safety improvements are visible to all
front line staff
12 Feedback is considered reliable and credible by front line staff
13 Feedback preserves confidentiality and fosters trust between reporters
and policy developers
14 Feedback includes visible senior level support for systems improvement
and safety initiatives
15 Feedback processes are subject to double-loop learning to improve the
effectiveness of the safety control loop
1. Feedback loops must operate at multiple levels of the organisation or
system
“Fulfilment of this requirement ensures that organisations are networked regarding
patient safety information, allowing generalisation of lessons learnt across
organisational boundaries and the ability to detect rarely occurring events that may
only be experienced by a handful of organisations” (from review section above).
We found that reporting externally appears to occur routinely to SHAs and NPSA, or
was planned to be in place once infrastructure issues were addressed (Q5-7), but
only a third received feedback from the NPSA and two thirds from SHAs, so external
reporting loops are not uniformly successful.
Sharing lessons from incident reports occurs with a number of organisations (Q47), but mostly the SHA, with only 47% sharing across local trusts and 13.7% with local authorities. The PSM has been seen by many trusts as a source of advice and a means of sharing lessons learnt (Q48), but this has depended upon the role taken by each PSM, who have had considerable autonomy and a range of competing activities, so the extent to which this is a reliable means of external sharing is not known.
Within the trust, a range of committees, including the Trust Board itself, takes responsibility for patient safety (Q30 a), and a diverse range of responses shows that reports emanate from this committee to other senior committees (Q30 b). Two thirds of trusts had patient safety committees that report to the Trust Board, and around three fifths provide information to the Trust Board. For the remaining third of trusts, this may mean some dilution of reporting is occurring, or it may reflect migration to integrated governance, e.g. patient safety is fed through an Integrated Governance Committee to the Board, affording cross reference and consideration of financial, personnel and other corporate risks.
Reports of analyses of patient safety incidents are made by almost all trusts (Q32), but they appear to include little analysis by human-factors-related categories, being more relevant for performance review than learning (Q33).
However in relation to the model, it is probably safe to say that feedback loops are in
place at Board level regarding patient safety issues but we cannot ascertain the
extent of ownership of the feedback from reporting systems from this survey.
Feedback at the level of individual reporters (Q41 & 42) occurs in at least three quarters of trusts, but 11.9% stated there is no feedback at all to staff, and 4.9% said there was no direct feedback to patients at all (Q43).
2. Feedback should employ an appropriate mode of delivery or channel for
information
A variety of means are used (Q44), but the survey can shed no light on how
appropriate and effective these channels are. As noted in the case study section, it
may be more effective to use channels that involve dialogue between informants and
informers so that relevance to local circumstances can be increased and informers
may gauge how the feedback is received.
3. Feedback should incorporate relevant content for local work settings
Question 45 is relevant here, with almost a third of Trusts finding that the guidelines/recommendations fed back are too narrowly tied to preventing specific events, and a handful of trusts (14) reporting that guidelines were impractical. Q46 (item 3) is also important, with only 27% of Trusts concluding that safety feedback was relevant enough to be reliably acted upon.
Answers to questions Q34 and Q35 suggest that staff involved in the investigation and the incident are involved in identifying causes and in formulating solutions, but with little use of outside expertise. For most trusts there is evidence of use of some routine data sources (Q36 & 37), such as complaints, infection rates, environmental quality standards and staff training records, which could be used to help increase the relevance of recommendations for the local context and in the monitoring of recommendations (Q39 & 40). But the comparatively lower frequency of input from patients and the public than from these other sources may mean that some important contextualisation is not included.
4. Feedback processes should be integrated within the design of safety
information systems
Safety information systems in the NHS are nearly all designed to capture incidents
related to staff, patients and visitors (Q11), although prior to the NRLS there was
much greater diversity. Nearly all are not anonymous (Q 13) but are confidential (Q
14), which means that feedback can be made to the reporter. However, there is little
evidence that feedback to reporters is integral in these systems, i.e. it has not been
designed to issue direct feedback to reporters (modes A, B, D).
5. Feedback of information should be controlled and sensitive to the
requirements of different user groups
The survey showed a surprising result, in that there was more feedback to patients
(Q43) than staff (Q41), although this may be confounded with the complaints
systems which in many trusts are now integrated with risk reporting systems.
6. Feedback should empower front line staff to take responsibility for
improving safety in local work systems
Q38 (items 3 & 4) showed that a third of trusts (35.2%) congratulate staff when changes in patient safety are demonstrated as a result of recommendations from incident investigation, and in one fifth (19.7%) staff are encouraged or rewarded for implementing recommendations. There is considerable scope for most trusts to have more overt systems for ensuring front line staff members are incentivised to take action within their local work systems.
The survey showed there was more emphasis on information forms of feedback (newsletters etc.) than on action feedback. An important outcome from analyses of multiple incidents and investigations is guidelines. Q45 showed that a third of trust leads do not think their trust uses guidelines effectively. Further, feedback from safety incident systems is not reliably acted upon by most staff in over two thirds of trusts (Q46).
7. Feedback should incorporate rapid action cycles and immediate
comprehension of risks
As stated under requirements 1 and 8, there is little evidence of feedback mode A, and the survey did not directly assess rapid response information (mode B). As explained in the introduction, the survey was conducted in parallel with the work to develop the model. However, this was a requirement that one of the case studies was selected to reflect upon.
8. Feedback should occur directly to reporters and key issue stakeholders as
well as broadly to all front line staff
As described under requirement no. 1, (Q 41) shows there is evidence of feedback to
front line staff and reporters. But direct and rapid feedback to the reporter (mode A)
only occurs as an acknowledgement in 31.5% of trusts, issue progress (42.6%),
outcome of investigation (72.6%). Methods are largely by reports and at meetings (Q
42), i.e. not particularly rapid feedback, nor targeted feedback.
Feedback of wider lessons learnt (modes C, D, E) is most often by training
programmes and meetings, e mail, reports and newsletters (Q 44), some of which
include a more active dialogue with those informed (e.g. team brief).
9. Feedback processes should be well-established, continuous, clearly defined
and commonly understood
The survey was conducted in parallel with the work to develop the SAIFIR model.
This is, however, a requirement that two of the case studies were selected to reflect upon.
10. Feedback of safety issues should be integrated within the working routines
of front line staff
The survey was conducted in parallel with the work to develop the model. This is,
however, a requirement that two of the case studies were selected to reflect upon.
11. Feedback processes for specific safety improvements are visible to all front
line staff
The survey was conducted in parallel with the work to develop the model. This is,
however, a requirement that two of the case studies were selected to reflect upon.
12. Feedback is considered reliable and credible by front line staff
This was not assessed directly by the survey, although Q46 shows that participants thought that in two thirds of trusts feedback was partial and not reliably acted upon by staff, which may be a consequence of not providing reliable, credible feedback. This is a requirement that two of the case studies were selected to reflect upon.
13. Feedback preserves confidentiality and fosters trust between reporters and
policy developers
The reporting systems are confidential, but it is unknown how far feedback is
provided in confidence.
14. Feedback includes visible senior level support for systems improvement
and safety initiatives
In Q1, we found that although all but 4 trusts had an Executive Director responsible
for Patient Safety, only 13.7% attributed this to the CEO. While arguably the CEO is
responsible for every major function, and cannot lead on all of them, it suggests that
patient safety leadership is often delegated, and hence may not have that visible
senior level support relative to other activities. It is unknown from this survey how credible the leadership of patient safety is perceived to be by clinical staff. The survey
question on safety culture (Q4) suggests that in at least 58% of trusts there is
significant progress to be made on embedding a safety culture across the trust, and
that very few are using any recognised safety culture assessment tools (Q9 &10).
15. Feedback processes are subject to double-loop learning to improve the
effectiveness of the safety control loop
Q9 & 10 suggest there is scant use of recognised safety culture assessment. The survey did not assess the extent to which feedback is evaluated. Sharing outside the trust (Q47), from which feedback about the process of reporting and investigation may be gleaned, occurs with the SHA in about two thirds of trusts and with trusts in the local area in about half. It is not possible to ascertain the extent to which learning was achieved by this sharing. Some comments suggested that the dialogue via SHAs and, at the time of the survey, with the NPSA via the NRLS, was largely one way.
The overall conclusions and recommendations from the survey are integrated within
the discussion section of this report.
References:
Battles JB, Dixon NM, Borotkanics RJ, Rabin-Fastman B and Kaplan HS (2006) Sensemaking of patient safety risks and hazards. Health Services Research, 41 (4, part II), 1555-1575.
Comptroller and Auditor General (2005) A Safer Place for Patients: Learning to improve patient safety. National Audit Office. London: The Stationery Office.
House of Commons Committee of Public Accounts. A safer place for patients: learning to improve patient safety. Fifty-first Report of Session 2005-06. London: The Stationery Office.
NPSA (2005) Seven steps to patient safety for primary care. London: National Patient Safety Agency.
Sanders J and Esmail A (2001) Threats to patient safety in primary care: a review of the research into the frequency and nature of error in primary care. University of Manchester.
Weick KE (1995) Sensemaking in Organisations. Thousand Oaks, CA: Sage Publications.
Wallace LM, Spurgeon P, Latham L, Freeman T and Walshe K (2001a) Clinical governance, organisational culture and change management in the new NHS. Clinician in Management, 10 (1), 23-31.
Wallace LM, Freeman T, Latham L, Walshe K and Spurgeon P (2001b) Organisational strategies for changing clinical practice: how Trusts are meeting the challenges of clinical governance. Quality in Health Care, 10, 76-82.
Wallace LM, Boxall M and Spurgeon P (2004) Organisational change through clinical governance: the West Midlands three years on. Clinical Governance: An International Journal, 9 (1), 17-30.
Wallace LM, Boxall M, Spurgeon P and Barlow F (2007) Organisational interventions to promote risk management in primary care: the experience in Warwickshire, England. Health Services Management Research, 20, 84-93.
Walshe K, Wallace L, Latham L, Freeman T and Spurgeon P (2001) The external review of quality improvement in healthcare organisations: a qualitative study. International Journal for Quality in Health Care, 13 (5), 367-374.
4.0 - Expert Workshop:
Synthesis of work streams and testing with industry and NHS trust experts
Rationale:
The research proposal envisaged that the synthesis of non-NHS high risk industry
expertise would result in a set of system requirements, and possibly some examples of
good practice directly from, or relevant to, the NHS. The empirical study, via a survey of
all NHS trusts in England and Wales, would indicate some of the realities of current NHS
incident reporting systems and feedback processes, and showcase some emerging good
practice. An event that allowed both sets of information to be tested with experts from
high risk industries and healthcare would therefore help to focus the scoping review on
its intended outcomes: identifying how the NHS can tailor its systems to maximise
feedback, and hence learning, from incident reporting systems at the level of individuals
as well as local trust organisations.
Conduct and design of the Expert Workshop:
At the outset, the team sought the advice of Mr Laurie MacMahon, Director of the
Office of Public Management, who has a considerable track record in designing and
conducting large scale interactive events. The principles established were:
• Experts should be invited from key constituencies with (a) influence on the NHS and its risk systems, or influence via the professions, (b) expertise in incident reporting systems relevant to national policy, and/or (c) high level and front line expertise in operating clinical adverse event reporting systems in NHS trusts.
• The format of the event would be maximally interactive; participants would therefore be briefed to expect this and be provided with an overview briefing in advance. Presentations would focus on key facts rather than an academic rationale.
• The event would be recorded in such a way that key outcomes could be used for the research, with an ethos of sharing and learning.
The programme and attendees are appended (Appendix 3). A briefing document was
circulated to all attendees two weeks prior to the event. It is not included here as it
contains no material beyond that now reported in the main review section. In addition to
the members of the Steering Group and research team (n=12), some 71 invitees
attended, including 12 invited experts. Forty-six attendees were invited because of their
risk management roles in trusts or at the Strategic Health Authority, and 7 members of
the National Patient Safety Agency attended, including 5 Patient Safety Managers
(PSMs).
Methods:
The discussions in the interactive phases of the day were recorded by one or two
members of the research team. Notes were written up and circulated within the next
week, and discussed at the Steering Group meeting after the event. The points
described here are a summary of key points relevant to developing the model and
describing the implications for the NHS. In addition, the discussion contributed to the
criteria used to select case study sites. Participants were also asked to nominate
examples of good practice for case studies. Although two members nominated their own
trusts, the practices identified were not, on further inspection and advice from other
parties, found to be sufficiently exemplary. The key results are summarised below.
4.1 - Key Results:
Reporting:
There is a huge volume of reports, especially in acute trusts:
- Both trusts and the NPSA are overloaded with incident reports. There is a need for prioritisation for analysis and investigation.
- Interest among NHS staff is in increasing reporting, but reporting is selective by professional group and type of incident, which means that consistent information is not available to maximise learning.
- Prioritisation systems within trusts focus more on severity of outcome than on likelihood of recurrence and potential for learning.
There is a focus on reporting rather than learning:
- Without denominator information, further analysis and comparative data, it is unclear what increased numbers of reports show, although they are assumed to indicate a more open culture.
- Risk reporting rates are difficult to interpret, whether by comparison with other services within trusts or across trusts (“apples and pears”).
- In the absence of data on the underlying clinical processes (e.g. adherence to guidance) or clinical outcomes, it is difficult to know what incident reporting rates mean, especially if investigation reports do not clearly identify the underlying causes within specific services.
- It was felt there should be a move away from a simple goal of reporting towards establishing a baseline of safe performance.
Putting known risks on corporate trust risk registers:
This can be a means for holding the organisation to account. However, it was felt
that this can be manipulated by staff wanting to commandeer resources for their
service over others.
Speciality based Incident Reporting Systems (IRS) versus generic IRS:
- It was noted from the presentations that anaesthetics has had systems within
specialities for many years.
- Many attendees viewed mandatory reporting as more effective than voluntary
systems, whether in the NHS or other industries.
- Speciality-specific systems had the disadvantage that information was not always
shared outside that speciality.
- It was felt by those with experience of systems such as those in anaesthetics
prior to the establishment of the NPSA NRLS that there were problems mapping
their systems to the new one, and that there may have been a reduction in near
miss reporting in these specialities.
- It was recommended that generic adverse event indicators relevant to the
specialities should be mandatory, to improve cross speciality comparisons.
- Speciality systems would still need a central or national system for analysis,
learning and dissemination.
What should be reported:
- The debate centred on whether everything should be reported to gain coverage, or whether effort should be targeted at specified near miss/harm events. While systems were immature, it was suggested that greater coverage should be the aim, but that refinements would be necessary to target resources effectively.
- The current system in the NHS allows for trusts to create their own forms. Some made a plea for a generic enforceable form to reduce variation, while others felt this would stifle reporting, particularly if it had many predetermined categories that were not seen as relevant to the reporter. A compromise model in use by the Civil Aviation Authority (CAA) was suggested, which uses a list of what should be reported to its national mandatory system (which is in addition to each airline's own systems).
Feedback from incident reporting systems:
- Within NHS trusts, feedback from incident reporting was felt to be ad hoc, retrospective, and not “built in” to the system.
- The UK maritime industry body, CHIRP, uses a form of dialogue with the reporter once the report is received to flesh out the detail, which provides a level of feedback.
Risk Management in NHS trusts:
- It was felt that the role of risk manager was, in some larger trusts, quite remote from front line staff, where much of the risk management (RM) investigation and feedback is via Directorates.
Who reports?
- It was noted that in airline reporting, the commonest reports were of other staff's incidents.
- Across industries it was noted that anonymity reduces the quality of the report and negates direct feedback to the reporter.
- Public reports of incidents are regarded as complaints and handled separately from staff incident reports in the civil aviation system. It was noted that this was somewhat similar in the NHS, as complaint reports beyond the trust level go to the Healthcare Commission and Health Ombudsman, and incidents go to the SHA/NPSA. However, at trust level, many risk management departments are now alongside or integrated with complaints and litigation departments.
What gets investigated out of the overall number of reports?
- It was reported that CHIRP (maritime industry) analyses two thirds of reports, and the Civil Aviation Authority (CAA) analyses a quarter. The equivalent number is unknown in the NHS; although the NAO report included some estimates, there was very considerable inter-trust variation.
Solution development:
- An advantage of the independence of the CAA and CHIRP (maritime and aviation industries) was reported to be that this gives credence and force to the proposed solutions. The NPSA's role in this respect was felt to be unclear, as although it is a special health authority, it is neither independent of the NHS nor are its recommendations directly enforceable.
4.2 - Feedback levels in the SAIFIR model:
A: Bounce back:
The group sessions identified three levels of immediate feedback within the level A:
1. Acknowledgement, thank you and encouragement to report in future.
2. Clarification and classification of the incident, which can be achieved more
accurately when the reporter interacts with the Risk Manager.
3. Safety enhancing information could be sent back. Of note, attendees could not
identify any examples where this happened in the initial dialogue with the
reporter.
Other points that impinge on the model were noted:
1. Feedback is sometimes given by the department manager or by the line manager of
the reporter before the reporter submits their report.
2. A report may concern an incident where the reporter is not the problem owner.
3. Systems are not set up for reports from patients or the public, or from those who
are not trust employees.
B: Immediate / rapid action feedback:
1. Some feedback to the reporter may occur prior to the report being submitted, i.e.
a departmental manager is involved before the report is submitted.
2. NHS risk managers' expressed preference is for this level of feedback to apply to
more serious or rarer events, because of their workloads; they felt the NPSA
decision tree is relevant in deciding whether this is warranted.
3. Safety recovery actions could include the removal of equipment and information
about substitution of equipment, and reference to known protocols.
4. It was also felt that giving feedback at this point may appear to prejudge the
seriousness and root causes, and so feedback may be precluded or attenuated at
this stage.
5. NHS attendees reported being very concerned about the removal of staff from
work or suspension of staff prematurely, unlike what appears to be the case in
aviation. Debriefing in the NHS was reported to be seen sometimes as punitive.
6. Dissemination of information that an adverse event has occurred to others may
be possible via Safety Alert Briefings from the Department of Health (SABS) and
NHS Newsletters. It was suggested an audit of the effectiveness of
communication routes is needed to establish the most effective routes.
Information on SABS was sought in the second NAO survey.
7. It was felt that, in general, guidance and judgement are needed on which incidents
require what rapid action.
8. It was recommended that feedback action could be supported by checklists, by
access to specific staff such as those trained to undertake Serious Untoward
Incident reviews and conduct investigations, or by escalation to the CEO and a
rapid action team.
9. Incidents involving external agencies may present a challenge for any form of
rapid response by a trust, especially if the incident will result in external experts
needing access to the trust.
10. Morbidity and mortality meetings are also a source of investigation that could
trigger the need for rapid action.
C: Feedback to raise awareness of specific safety issues:
Newsletters
1. NHS staff reported that the Health and Safety Executive issues large volumes of
information, which is resource intensive and of unknown impact.
2. Evaluation of the impact of learning from these is very sparse. This topic was
suggested by the participants as a useful focus for case studies in which audit or
other evidence is regularly sought to understand the impact of the feedback on
staff learning and practice.
3. Newsletters are a popular mode of communication for awareness raising, but it
was felt their impact is seldom tested.
4. Feedback formats should be matched to the preferences of the audience; for
example it was suggested that vignettes would suit some staff while doctors may
prefer “facts”.
Alternative methods in use in parts of the NHS were noted:
• Action learning sets within an SHA area, supported by the NPSA's PSMs. Some risk managers had found these helpful for a number of reasons, including growing their own skills and sharing the learning from incident investigations.
• Roadshows within trusts on risk management and reporting.
• Debriefing from SUIs: this was felt to be very useful within trusts. But it was also pointed out that SHAs have accumulated information on SUIs, and vary as to how they use it locally.
• Regular review of incidents via management meetings.
Feedback definitions:
It was felt by those from non-healthcare high risk industries that the NHS tends to
view feedback narrowly, i.e. as information about the occurrence of an event and the
actions taken, rather than as the whole action, implementation and evaluation cycle.
Raising the profile of risk management within NHS Trusts:
1. Patient safety champions were seen as a means of increasing the
visibility of Risk Management in the distributed organisation of a primary
care trust (PCT).
2. The Royal College representatives commented that patient safety should
be a Continuing Professional Development appraisal requirement, and that
it should feature in job descriptions.
For all types of feedback at this level it was suggested:
1. The effect on staff awareness and behaviour of these forms of feedback is
unknown.
2. The sustainability of solutions needs to be evaluated.
3. Feedback should include examples of changes resulting from the
investigation and its impact to demonstrate the learning loop.
D: Communicating and acting on national alerts, SUIs, multiple events.
- Experience within the NHS was discussed. Reference to the results of the survey for this report and the NAO's study shows that the current Safety Alert Briefings (SABS) are not having the expected impact. There needs to be quality control and testing for impact: while a person has been identified in each trust with a designated role as SABS officer, there may be little training to do the job, and trusts vary in the routes used and the extent to which they assess the effectiveness of communication and actions taken.
- Investigations: learning from investigations was felt to require wider dissemination in trusts than at present.
- Multiple event analysis summaries should be disseminated along with an analysis of causes, solutions and means of assessing the impact of changes on the pattern of incidents and other safety indicators.
E: Developing solutions and implementation:
- It was felt that so far in the NHS there were overly localised solutions, with little sharing of learning across trusts, or from the NPSA.
- A common system of classification is needed to facilitate wider learning and comparative system level learning (via the NPSA).
- Reporting in the NHS is still seen as an end in itself.
- It was felt that the NHS micro-manages solutions, while it should give more general guidance and discretion as to how the guidance is applied.
4.3 - Outcome from the Expert Workshop:
The above is a distillation of comments from the detailed reports from each part of the
workshop, and comments on the initial report given to attendees within two weeks of the
event. The main product of the workshop was to inform the final work on the SAIFIR
model described in the review section above. The second main product was to influence
the selection of case study sites as described in the case study section. Overall
conclusions from this work are therefore integrated within the conclusions from the
review and case study sections and the overall report discussion.
5.0 - CASE STUDY SECTION: OVERVIEW
Purpose:
In the original submission, we described the case study phase as an opportunity to
examine in depth examples of excellent, or unsuccessful, attempts at developing
effective feedback systems in NHS healthcare trusts. This would include any examples
that were under development as well as those already tested. The methodology would
be tailored to the case example, but would likely include a number of interviews with
senior staff and clinicians, possibly the use of safety culture measures, and if time
allowed, some tracking of the development over a period of months. We expected that
selection methods would use intelligence from a range of sources including agencies,
Strategic Health Authorities (SHAs and the Regional Board for Wales), publications in
professional journals or healthcare newsletters, and key informants, as well as
responses to the survey of all NHS trusts.
Methods:
A letter was sent to all SHA leads for risk management inviting nominations. The advice
of the NPSA was sought directly via Dr Mike Rejman, Dr Sally Adams and Mrs. Suzette
Woodward. A letter and e-mail were sent to all 34 Patient Safety Managers (PSMs)
seeking nominations. We also undertook an analysis of the survey returns, selecting
trusts by their responses to key questions, particularly those concerning the range of
feedback to staff (e.g. Q44) and ratings of the effectiveness of key systems (Q25, Q45),
and we considered nominations made via the Expert Workshop. From all sources, 18
trusts were long listed.
Contact was made with each trust to determine the nature of the innovation. The final
case studies were selected to reflect some aspects of the five types of feedback and the
15 system requirements, as depicted below, arising from the review section and Expert
Workshop. Each is described briefly below in relation to these features. More detail is
available in the appendices. However, both for brevity and to respect the confidentiality
of interviewees, the appendices do not contain detail of the interview transcripts, nor all
the trust documentation inspected on each site. Details are available from the Principal
Investigator on request and with the prior permission of the trusts.
Types of feedback:
A: Bounce back information (information to reporter)
• Acknowledge report filed (e.g. automated response)
• Debrief reporter (e.g. telephone debriefing)
• Provide advice from safety experts (feedback on issue type)
• Outline issue process (and decision to escalate)
B: Rapid response actions (action within local work systems)
• Measures taken against immediate threats to safety or serious issues that have been marked for fast-tracking
• Temporary fixes/workarounds until the in-depth investigation process can complete (withdraw equipment; monitor procedure; alert staff)
C: Risk awareness information (information to all front line personnel)
• Safety awareness publications (posted/online bulletins and alerts on specific issues; periodic newsletters with example cases and summary statistics)
D: Inform staff of actions taken (information to reporter and wider reporting community)
• Report back to reporter on issue progress and actions resulting from their report
• Widely publicise corrective actions taken to resolve safety issues to encourage reporting (e.g. using visible leadership support)
E: Systems improvement actions (action within local work systems)
• Specific actions and implementation plans for permanent improvements to work systems to address contributory factors evident within reported incidents
• Changes to tools/equipment/working environment, standard working procedures, training programs, etc.
• Evaluate/monitor effectiveness of solutions and iterate
Figure 5: Types of Feedback
The first case study illustrates the incident reporting system in one trust, and the way
that the system currently gives feedback to the reporter, particularly at modes A and C,
and examines how a new electronic and mobile system may improve this, and provide a
platform for mode B feedback. The subsequent three case studies focus on the use of
Patient Safety Newsletters. These are case studies in two Partnership Trusts, and an
audit of 90 trusts’ newsletters. These three case studies illustrate mode C feedback, i.e.
information to all front line staff, and aspects of mode D feedback where the information
is relevant to publicising corrective actions from reported incidents. Newsletters could
also be relevant to mode E feedback, where changes to working systems are described,
and the impact of monitoring changes on safety outcomes, or links to prospective
hazards methods are made.
The 15 requirements for effective patient safety feedback are also used to evaluate each
case study (Figure 5b in the review). The requirements are used throughout each case
study, and in a table which compares each case study in the final comparative section of
this report.
System requirements for effective safety feedback
01 Feedback loops must operate at multiple levels of the organisation or
system
02 Feedback should employ an appropriate mode of delivery or channel for
information
03 Feedback should incorporate relevant content for local work settings
04 Feedback processes should be integrated within the design of safety
information systems
05 Feedback of information should be controlled and sensitive to the
requirements of different user groups
06 Feedback should empower front line staff to take responsibility for
improving safety in local work systems
07 Feedback should incorporate rapid action cycles and immediate
comprehension of risks
08 Feedback should occur directly to reporters and key issue stakeholders
as well as broadly to all front line staff
09 Feedback processes should be well-established, continuous, clearly
defined and commonly understood
10 Feedback of safety issues should be integrated within the working
routines of front line staff
11 Feedback processes for specific safety improvements are visible to all
front line staff
12 Feedback is considered reliable and credible by front line staff
13 Feedback preserves confidentiality and fosters trust between reporters
and policy developers
14 Feedback includes visible senior level support for systems improvement
and safety initiatives
15 Feedback processes are subject to double-loop learning to improve the
effectiveness of the safety control loop
5.2 - CASE STUDY NO. 1: DEVELOPING A PERSONAL DIGITAL ASSISTANT (PDA)
PLATFORM FOR CLINICAL RISK REPORTING TO INCLUDE FEEDBACK TO THE
REPORTER:
Case Study authors: Professor Louise Wallace, Dr Louise Moody, Dr Jacqui
Bleetman, Coventry University.
Context:
In 2004, the National Patient Safety Agency (NPSA) launched the National Reporting
and Learning System (NRLS) to improve organisational learning from patient safety
incidents by assimilating reports from health professionals. Two years later, the data
suggest a significant reporting gap between the estimated levels of adverse events and
the levels of reporting (O’Dowd, 2006), and comparative under-reporting amongst junior
doctors (Wareing, 2005). As junior doctors are key future healthcare leaders, it is
essential to engage this group in reporting to maximise the reliability of, and learning via,
the NRLS and to support the development of an open and fair culture. Some exploration
of reporting barriers, with reference to junior doctors, has been reported (Firth-Cozens et
al, 2002; Lawton and Parker, 2002), but there are few successful interventions, and the
scope of these is limited (Bent et al, 2002), although a new training programme to raise
awareness among doctors in training has been launched (Matthew, 2006).
Currently, the bulk of incident reporting in trusts uses paper based systems with all the
attendant risks of inaccurate data being recorded, delays arising from logistical problems
of paper handling, and poor lines of communication with the existing system. The NAO
survey in 2004 (NAO, 2005) showed the quality of initial reports seldom allows
identification of causes. Where systems are paper based, there may be considerable
time lags for receipt of the form in the clinical risk department. Evidence from our case
study sites and from the Expert Workshop showed that trusts vary in how they respond
to these reports. A crucial issue for the accuracy and timeliness of reports is the extent
to which a reporter gains advice from their manager before they submit the form, and whether the
risk manager who receives it provides any “bounce back” information (level A in our
model). Secondly, trusts vary as to whether the risk manager undertakes any preliminary
enquiry with the reporter or local staff to clarify issues raised, and whether the risk
manager undertakes a preliminary assessment of the degree of risk and need for
investigation and for escalation to an executive level. At this point, the reporter and staff
in the relevant work areas might be advised about remedial or recovery actions
(mode B in our model), although evidence from the Expert Workshop
indicated that this was generally not taken as an opportunity to give feedback, in part
because of the workload on risk managers using existing paper based systems.
The context in the University Hospitals Coventry and Warwickshire NHS Trust:
This case study shows the problems experienced in a part of a large teaching hospital
NHS trust. The case study site has benefited from a programme of risk management
training by Terema Limited, an aviation industry consultancy. The Clinical Governance
department is lead by a Director with evidence of visible leadership and day to day
support for risk reporting and investigation by a team of Clinical Risk managers. The
trust was one of the first to produce a trust wide quarterly bulletin of all investigated
patient safety incidents. The case study focuses on an area with enthusiastic clinical
leadership by the Clinical Director.
At University Hospitals Coventry and Warwickshire NHS Trust (UHCW), incident data
are provided in long hand reports to the Clinical Governance team (consisting of a
manager and six staff) whose job it is to code and enter the data into their incident
reporting system, Datix. This is a time consuming task and there is often a backlog of
reports to be coded. An alternative approach taken by some trusts has been to introduce
a more complex form which requires the reporter to determine categories related to the
department, and incident type and severity, for example. This was rejected at UHCW on
the basis that this may be a barrier to reporting, and that it can introduce a level of
inaccuracy or a need for recoding. However, UHCW acknowledge that there is value in
all staff having a clearer understanding of incident definitions and grading, especially of
the most commonly occurring incidents. This could potentially reduce the Clinical
Governance workload and also forms a valuable part of organisational understanding
and development of the reporting process.
Experience with a more limited system in use for anaesthetics trainees in Australia (Bent
et al., 2002; Bolsin et al., 2005) was the stimulus for looking at a new system that would
better meet the trust's requirements, and would potentially be relevant throughout the
NHS.
Initial discussions defined the development project objective as producing a
demonstration system that would enable staff to report incidents in the same way as
with the paper version, but using electronic methods to facilitate reporting and
learning. This work was funded by a grant from Advantage West Midlands and Coventry
University’s Applied Research Centre in E Working.
Exploration of the systems in UHCW led to the agreement to pursue a mobile platform,
using Personal Digital Assistants (PDAs) which may have important advantages over
either paper or intranet web based options:
1. A mobile device for clinical risk reporting may reduce barriers to submitting
reports by being available to the reporter all the time.
2. It would allow for various types of feedback to be communicated rapidly to both
the reporter and others such as team managers who may need to act.
3. It could significantly reduce the heavy workload of skilled staff at the initial stage
of reporting and cataloguing reports.
4. It would support organisational learning and process ownership, providing
“bounce back” information (mode A in our model), e.g. an automated response
that an incident has been logged, followed by further enquiries to the reporter to
clarify the report, and to provide information on how the incident is initially
classified and what actions will follow.
5. It could provide further feedback at mode B in our model, e.g. through the
provision of system recovery advice once a report has been submitted. This
might suggest to the reporter or their manager likely recovery actions, or refer
them to relevant resources including guidelines, or previous incident reports and
investigation reports.
6. Events with a serious outcome are likely to be picked up within the current
system; therefore it would be of added value if a new process was able to drive
up the cycle of reporting and learning from more minor events and near misses.
Over the past two years the reporting of clinical adverse events has increased from 40 to
500-700 reports per month. About 90% of these events involve minimal harm. Many minor
events and near misses are still not reported, particularly by medical staff and in areas
where senior medical commitment to reporting is weakest.
The reporting process, from the report being made to being closed is detailed in
appendix 4 (a).
There are four levels of feedback resulting from a clinical adverse event report:
1. A letter is generated to acknowledge receipt, to thank the reporter and to inform
them what action has been taken in response to the event reported (mode A feedback)
2. When the investigation is complete, a further letter confirms that it has been
investigated and what action has been taken (mode C feedback).
3. Each month a report is sent by email to the specialties summarising the events
during that month. The report summarises the results of each investigation.
Clinical governance staff attend meetings with each speciality to discuss adverse
event reports and their status (mode D feedback)
4. A quarterly bulletin of adverse events across the trust is produced and made
widely available, (mode D, with possible mode E action).
Methods:
In addition to three of the six Clinical Governance department staff, clinical staff in an
area with high frequency of incident reporting (intensive care) and the intensive care
outreach team (high work mobility) were selected for interview or focus group
discussion. There were 23 staff overall. Key topics included the current incident reporting
system, the drivers and barriers to reporting, feedback that is currently provided after an
incident report is submitted and the type of feedback that would be valuable. Familiarity
with electronic devices was sought, along with the user requirements for a mobile
electronic reporting system. Of relevance to this scoping study are the views expressed
by staff about reporting, and how this might be addressed by a revised system using
mobile technology. These specific questions are included in appendix 4 (a). Analysis of
transcribed notes was conducted thematically, using word tables for each theme by staff
group and job role.
Key findings:
Only one of the 23 interviewees admitted to never having made a clinical adverse event
report. Most however, acknowledged that not enough reports were made. The
frequency of reports for individuals ranged from 5 a day to 1 every six months. The
experience of receiving (or perception of receiving) feedback was varied. Some staff
reported that they received feedback and a compliments slip to indicate the report had
been received, as well as feedback on the resulting action taken.
For example:
“We get email confirmation that the incident report has been received”, and “The
reports do make you think about things more sometimes e.g. needle stick injuries.”
Others claimed there was not a feedback process. For example: “The form goes to
some department somewhere, we don’t find out what happens.”
More respondents stated they would find it useful to receive more feedback: For
example: “ We need a structured feedback time, there are no real meeting times on the
wards now, there’s a breakdown in communication, and people only really talk to you
when you’ve done something wrong.”
Reasons for reporting incidents included reference to a more supportive culture in
some departments. For example: “Critical care is more open and understanding”, and
“The Terema (team resource management) course has made a big difference, those
who have been on accept reporting more readily.”
But there was evidence that reporting was not always seen as a non judgemental and
positive action: For example: “I report when on the warpath to get something changed”,
and “Some people use them to vent frustration and highlight logistic issues.”
Barriers to reporting:
Barriers to reporting were reported by all respondents. They are summarised under key
themes:
Scepticism of the system in terms of organisational learning and change:
“Some recurring incidents are not reported because they are seen to be “part of the
system” and “nothing will change”.
Frustration with the reporting process
“The forms are all over the place”.
“Some reports get lost or hide.”
“Some e.g. not hand washing, should be highlighted to the individual at the time.”
Concerns over confidentiality
“Everyone knows you’ve filled it out as you go hunting for it, then ask where to put it,
and it’s left lying around for all to read.”
Potential training /education need
“I have no idea how critical it has to be in order to report, I’ve been to all the inductions
and I still don’t know. The nurses are better trained to fill in.”
Perception of the need to report
“What is counted as a critical incident is an individual matter based on judgement and
experience.”
Existence of a blame culture
“They fear the results – especially if the report is about them or their consultant.”
Work culture
“Outreach had been openly challenged before on a ward for putting in a report by a
senior sister.”
“Junior doctors don’t want to isolate themselves from ward staff.”
“Porters and cleaners don’t report – hierarchical still, they think the sister should do it.”
“It has to be sold to the nursing staff, they feel they are getting another slap in the face
and criticism again.”
Leadership impact
“I know I should be a role model– if I did it so would the junior doctors”
Characteristics of the job and work pressures
“Incidents always happen at a bad time when you’re under pressure and its hard to
report at that time”
Suitability of a mobile reporting system:
In the ICU, a PC or paper-based form was widely viewed as more appropriate than the
use of a mobile device due to the ease of use and access. However the Outreach team
were far more positive about the potential of such a system. The Critical Care Outreach
Service follows up patients after they leave the ICU to improve the speed and quality of
recovery. They consisted of a team of 6 mobile workers (five nurses and a
physiotherapist). Reporting rates were high, with one member of team completing about
five reports a day, others two to three per day, but there are more that could be reported.
The opportunity to trial mobile reporting was welcomed, provided it was integrated with
other activities, i.e., not a bolt on, and another device to carry.
Interviews showed that the introduction of a mobile device for clinical adverse incident
reporting should aim to support organisational learning and process ownership. One
means of contributing to this is through the provision of recovery advice once a report
has been submitted (mode A). In addition, interviewees identified that feedback might
suggest to the reporter or their manager likely investigatory or recovery actions, or refer
them to relevant resources, guidelines, or previous reports (mode B feedback), i.e.
feedback that is currently not available with the paper based system.
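To make these two kinds of feedback concrete, the sketch below illustrates, in Python, one way a mobile reporting platform of the kind envisaged here might respond to a submitted report: an immediate acknowledgement back to the reporter (mode A), followed by the kind of system recovery advice described as mode B feedback in this case study, drawn from a simple lookup of known guidance. This is a minimal, hypothetical illustration only; the function names, incident categories and advice text are illustrative assumptions and are not taken from the UHCW, Datix or NRLS systems.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical guidance lookup; real content would come from trust protocols.
RECOVERY_ADVICE = {
    "equipment_fault": "Withdraw the equipment from use, label it, and follow substitution guidance.",
    "medication": "Check the patient, inform the prescriber, and consult the medicines policy.",
}

@dataclass
class IncidentReport:
    reporter: str
    category: str
    description: str

def acknowledge(report: IncidentReport) -> str:
    """Mode A: bounce-back information confirming the report has been logged."""
    return (f"Thank you, {report.reporter}. Your report has been logged under "
            f"'{report.category}' and passed to the clinical governance team.")

def recovery_advice(report: IncidentReport) -> Optional[str]:
    """Mode B: immediate recovery advice, where known guidance exists for the category."""
    return RECOVERY_ADVICE.get(report.category)

def handle_submission(report: IncidentReport) -> List[str]:
    """Messages that could be pushed back to the reporter's mobile device."""
    messages = [acknowledge(report)]
    advice = recovery_advice(report)
    if advice:
        messages.append(advice)
    return messages

if __name__ == "__main__":
    example = IncidentReport("A. Nurse", "equipment_fault", "Infusion pump alarm failed to sound.")
    for message in handle_submission(example):
        print(message)
```

In practice the advice text, escalation rules and audit trail would sit within the trust's risk management system rather than on the handheld device itself.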
The clinical governance team identified that a mobile reporting device could have a role
in reducing the follow-up actions by their central team, since they could have a rapid
dialogue with the reporter and team members to gain a wider picture once the report
was logged. In addition, the provision of mode B feedback, for example automated
follow-up advice and feedback at the level of minor incidents and near misses, would
also reduce their workload.
The next phase of the development project will be to trial a PDA based reporting and
feedback system and examine the impact on the quality and quantity of reports, and the
workload of clinical staff and risk managers, and the extent of individual and
organisational learning achieved compared to those not using the device. The
subsequent phases will include teams with medical staff, particularly those whose work
requires high mobility within the hospital and between hospital sites.
Conclusions:
The case study shows a number of features of the model developed in the review
section. In particular, it shows how current systems are designed to give feedback,
primarily by e mail or letter acknowledgement (mode A), discussion at clinical meetings
of the progress of incident reports and investigations, and corrective actions arising from
investigations (mode D) and some further feedback of incident reports and analyses
trust wide. Problems with the system in practice include the credibility and confidentiality
of paper based reporting, and the relevance and timeliness of feedback. The user needs
analysis phase of the study has indicated that an electronic and mobile system may
improve the quality and quantity of reporting. Further, it may enable mode A feedback to
be more complete and timely, and will give a means of providing mode B feedback
which is currently absent, i.e., opportunity to give advice on current guidance, “work
arounds” and other recovery actions. There is scope for other levels of feedback to be
delivered to PDAs also, although it is likely that this will duplicate channels established
for all staff (such as Newsletters).
5.3 - THE NEWSLETTER CASE STUDIES: Overview
In the final three case studies, we examined the ways that Newsletters might be used to
provide feedback, primarily related to mode C in the model (risk awareness information),
and also explored whether they are used for mode D feedback (informing reporters
about issue progress and the actions arising from it, and informing other staff about the
issue and recommended actions). Mode E feedback might also be relevant if Newsletters
contribute to feedback on the impact of safety actions.
The 15 requirements of a safety feedback system which are most likely to be relevant to
safety newsletters are summarised in the table below:
System requirements for effective safety feedback, and their relevance to newsletters:
1. Feedback loops must operate at multiple levels of the organisation or system: extent of reach to internal and external audiences?
2. Feedback should employ an appropriate mode of delivery or channel for information: does the medium convey the message?
3. Feedback should incorporate relevant content for local work settings: relevance to staff group, site, team, topic?
4. Feedback processes should be integrated within the design of safety information systems: fit with other patient safety media?
5. Feedback of information should be controlled and sensitive to the requirements of different user groups: fit with the knowledge, culture and working systems of users?
6. Feedback should empower front line staff to take responsibility for improving safety in local work systems: does the content link the messages to current working practices?
7. Feedback should incorporate rapid action cycles and immediate comprehension of risks: unlikely to be relevant.
8. Feedback should occur directly to reporters and key issue stakeholders as well as broadly to all front line staff: is the newsletter audience issue specific or wider staff groups?
9. Feedback processes should be well-established, continuous, clearly defined and commonly understood: are newsletter users informed how to input to or change the newsletter or suggest topics? Do Newsletters also convey information about reporting systems and other feedback types?
10. Feedback of safety issues should be integrated within the working routines of front line staff: do newsletters lend themselves to being part of normal work information channels, or are they an added extra to the working day?
11. Feedback processes for specific safety improvements are visible to all front line staff: are newsletters useful for adding visibility to safety improvements?
12. Feedback is considered reliable and credible by front line staff: is the newsletter content credible and how is this established?
13. Feedback preserves confidentiality and fosters trust between reporters and policy developers: not directly relevant, except in so far as material presented is anonymised.
14. Feedback includes visible senior level support for systems improvement and safety initiatives: do newsletters give an opportunity for senior staff to show visible leadership?
15. Feedback processes are subject to double-loop learning to improve the effectiveness of the safety control loop: are newsletters evaluated for content, format, distribution generally, and by audience focus? Can individual staff assess their understanding of the content? Does the trust learn from feedback about the Newsletter and about topics covered?
5.4 - CASE STUDY NO. 2: LEICESTERSHIRE PARTNERSHIP NHS TRUST
Case study authors:
Professor Peter Spurgeon and Professor Louise Wallace
Background
The Trust had been identified as a potential case study site for good practice around
feedback processes concerning incident reporting. The focus of particular interest was
the newsletter, known as TRAIL, developed in the Adult Mental Health Services. The full
case study material is contained in the appendices. This report is based on a series of
on-site interviews with five staff plus supporting documents provided by the Trust. The interview
schedule is in appendix 4 (a).
We present the results of the case study in relation to the SAIFIR model's 15
requirements of an effective risk incident reporting feedback system, and the associated
five types of feedback. Agreement to participate was sought from trust staff and the CEO,
and MREC procedures and research governance approval sought from the trust. All staff
interviews were preceded by formal briefing and consent procedures.
Leicestershire Partnership NHS Trust: TRAIL
TRAIL was initiated in 2003 and reflected the Trust's desire to move beyond just reporting
incidents, which some staff reported “seemed to go into a black hole”, while clinical
governance staff felt their work on analysis of incidents and investigations was not
impacting on front line staff.
The TRAIL model advocates a five stage process to
support teams to consider what they can do locally to improve patient safety. For an
example see appendix 4 (b).
The five stages of the TRAIL process are:
• Talk – create a regular opportunity for open discussion about incidents and adverse events within your team.
• Reflect – take time as a team to reflect on the key themes identified and the implications for your clinical practice.
• Act – consider simple changes in working practices to reduce the likelihood of things going wrong.
• Improve – develop your team's focus on reportable incidents and awareness of safety units.
• Learn – ensure that all staff know when and how to report an incident and disseminate learning from other teams and services.
TRAIL is produced within the adult mental health service of the trust on a quarterly basis
and aims to raise awareness of the importance of the reporting processes, to provide
feedback on incidents that have been reported and to disseminate learning points
identified from investigations. TRAIL is e-mailed to all managers, heads of services and
professions. Some hard copies are placed around the Trust for those with limited e-mail
access. Complaints are also incorporated and are issued alongside TRAIL in a quarterly
report, with TRAIL learning points identified to make the link with improving safety and
quality of care.
Innovation: An active modality of feedback is included, whereby the informer, for
example a mental health team charge nurse, ensures time is allocated within each
service to reflect on the issues and agree changes in working practices. This stimulates
an active modality of communication in those who are informed, and provides an
opportunity for double loop learning by feeding back on how the feedback has been
received. It also affords exploration of implementation issues and tailoring of feedback to
the specific conditions of the individual and team. The team manager completes a short
feedback questionnaire in each issue to summarise actions taken as a result. An audit
was undertaken in August 2006 of a wide range of front line staff to give overall
feedback on this channel of communication. This showed that 39% of staff surveyed
reported that TRAIL had increased their understanding of the need to report and
investigate incidents, and 95% of respondents felt that it is constructive to share
information about incidents that have occurred in other clinical areas.
Conclusions:
The TRAIL case study shows the place that a simple newsletter can have within a wider
cascade communication process with active face to face dialogue between informers
and front line staff. The focus on reflection and the identification of practical changes in
working practice, together with closing the organisational loop by auditing the impact of
TRAIL, has met several of the requirements of effective safety feedback systems; this is
discussed in more detail in the comparative table across all case studies at the end of this section.
5.5 - CASE STUDY NO.3: LANCASHIRE PARTNERSHIP NHS TRUST BLUELIGHTS
NEWSLETTER CASE STUDY
Case study authors:
Dr Louise Earll and Professor Louise Wallace
Rationale for selection:
The trust was recommended by NPSA and SHA risk management contacts, and from a
presentation given at the NPSA conference in 2006. Preliminary conversations with SHA
and Trust staff revealed an innovative approach, developed and reviewed over a
sustained period of time, and hence likely to show examples of good practice and key
learning points. Agreement to participate was sought from trust staff and the CEO, and
MREC procedures and research governance approval sought from the trust. All staff
interviews were preceded by formal briefing and consent procedures.
Method of data collection:
Face to face interviews were conducted by Dr. Louise Earll with the Lead for Risk
Management at the SHA, the Clinical Governance Team manager, and with one front
line clinical and one estates member of staff, and one further interview was conducted
by e mail with a clinical member of staff. The author/editor of Bluelights was on leave.
The interview schedule is contained in appendix 4 (a) and is the same as for the
Leicestershire Partnership Trust case study.
The interview data were collected by contemporaneous notes and analysed according to
a schedule drawn from the review above, with the categories of feedback and 15 criteria
of effective feedback from incident reporting systems described above.
The generation and description of the Bluelight initiative:
In 2004, the trust had undergone a visit from the Healthcare Commission. Some criticism
was levelled at the management of Serious Untoward Incidents (SUIs) and risk
reporting. The SHA adopted the Strategic Executive Information System (Steis), and
implemented it with the aid of group of clinical risk management staff from across
several trusts. This was part of a plan to ensure local systems of risk management were
more integrated (for example including complaints) and that clinical risk incident action
plans were linked to corporate risk via risk registers.
At SHA level, a “lessons learnt” group, which in turn issued a “Lessons Learnt”
newsletter, was formed to take forward actions arising from national guidance, e.g. from
the NPSA, and from local investigations. The group inherited a large number of reported
incidents from 1996, many of which it closed. A new, more robust performance
management system was put into place. This replaced a system which focussed in
minute detail on a small number of incidents that took a very long time to progress.
The SHA took a ‘helicopter’ view, looking across SUIs to pick out trends from which
others could learn. One example was a series of choking incidents, which was linked to
the work the NPSA had done on dysphagia and to some existing, well written guidelines
in use by the ambulance service. The SHA lead then went back to the Trusts after the
introduction of the new guidelines to see how they had worked and whether any choking
incident had been dealt with differently as a result.
The “Lessons learnt” group was still in evidence at the time of the case study (August
2006), and one of its main concerns was the interface between different services.
By the time of our case study, there had been 18 Bluelights, and an audit report was
disseminated in September 2006. (See appendix 4 (c)).
The Bluelight was a short, often one page, newsletter. It used graphics to highlight
lessons learnt from an incident investigation, the analysis of patterns of incidents, or the
application of national guidance. It was a tool for sharing, quickly and in a timely manner,
learning from incidents about which something could be done relatively quickly. The
Clinical Governance department received issues for consideration and drew up a
shortlist for consultation with a multi-disciplinary panel of staff, users, carers and clinical
governance members. See appendix 4 (d) for an example of Bluelight.
It was disseminated to all staff by e mail, although some staff might see paper copies
on notice boards. Staff members were encouraged to discuss Bluelights at regular staff
meetings, team handovers, and the Directors' Team briefings.
The Lancashire Partnership Trust's Bluelight initiative was described in a poster
presentation to the NPSA’s biennial conference in February 2006: “Sharing important
learning quickly, broadly and effectively Bluelight are a fast efficient and dynamic
communication method helping the Trust to learn from safety incidents. The Bluelight
uses a combination of graphical e-mail and www technologies to communicate key
learning from safety incidents. The Bluelight highlights particular care and service
delivery issues arising from incidents in one area of Lancashire Care Trust (LCT) and
communicates them across the whole organisation and beyond. In short the LCT
Bluelight programme is a systematic way of sharing the learning from incidents quickly
and broadly, increasing awareness of safety issues and improving safety and minimising
the likelihood of reoccurrence”. See appendix 4 (e).
Impact on actions: Staff members were expected to take actions arising from the lessons
learnt, and if they were not adopted, this was documented in the corporate risk register
(for review by the Trust Board).
Examples from the Drugs service team:
One risk issue arising from a near miss incident was the ‘home’ production of a
banned substance. The recipe was available on the internet and ingredients available
from the chemists. The presentation of symptoms had not been seen before and
there appeared to be no accepted treatment, despite efforts to find information
regarding treatment from other services. Bluelight was used to raise awareness of
the psychotic presentation to alert Adult Mental Health services and A&E among
others. The other time this service used Bluelight was to rapidly publicise the
availability and risks of contaminated heroin. They also presented the Bluelight
information in poster form in all patient areas to enable their clients to have access to
the information at the same time as the staff.
Comments from front line staff interviewed were very positive:
“Bluelight is an extremely useful way of getting information across”; it often contains
things they hadn’t thought of, and makes them think ‘out of the box – its very responsive
as a tool’.
“Its very well presented, have to work on getting the message across succinctly and
accessibly – they help with that”.
“Very impressed – used to circulate ‘boring memos” following an investigation which
by and large would be ignored as they were seen as management memos”
“Its not management ‘gobbledy gook’ – not a management memo.”
“It’s the learning that’s the relevant bit, and it is that which is useful.”
“Compared to the DoH ‘Hazard Warning” - no one ever reads them – they are
obscure and difficult to relate to your setting.”
One weakness identified by the interviewees was the lack of a system to routinely
check that these actions were taken and to assess their impact on patient safety. Adaptations to
improve targeting of information were being considered, e.g. to code the article by
department or topic.
Innovative features:
The Newsletter was presented via e mail so it could potentially have very wide and
rapid staff coverage. Staff members were encouraged to complete a set of questions
about the impact on their practice on the intranet: in this way there was feedback to
the clinical governance team about the reach of the Bluelight and planned actions.
The Bluelight was audited every 12 months. Results from the most recent audit, with
217 respondents, showed that 97% of respondents reported regularly receiving
Bluelight. The content was thought to be useful by 98% of staff. Views about whether
front line staff have time to read it varied, with 25% believing this was a problem, but
some 76% had used information from Bluelight in their practice.
Comments from the audit showed overall very positive responses. Suggested
improvements arising from the audit data are, we believe, worthy of note, as they are
relevant to how information- and action-oriented feedback can be made more
effective:
Improving Bluelight, and other similar patient safety newsletters:
First, the need for relevance to a practitioner’s area should be flagged:
“I really like the format and layout and find it really easy to read through. The main
issue is that often the problems highlighted are not relevant to my practice so I often
choose not to read it because I don't have much time. So perhaps the bulletins could
have a code on (or specific colour etc.) to say whether they are relevant to clinical
practice administration, security issues etc.?”
Second, the need for clarity as to who is expected to take actions was raised
(although this might be a matter resolved via the discussion of content at staff
meetings and handovers):
“It may be more relevant a) if each incident was given a priority rating dependent on
need for subsequent action b) that the relevant managers were charged with
imparting the information to their staff e.g., as an integral part of staff meetings, team
briefs etc..”.
Third, it was suggested that more regular and wide ranging audits were necessary:
“ …… that regular reviews took place with various staff groups to see if the
information was getting through...possibly with User/Carer reps (as advisory)?”
Fourth, there was a need for monitoring of recommended actions:
“Blue light is a good system for disseminating this type of information. However,
because the information uptake and any subsequent action taken is not monitored,
reliance on it alone leads to a degree of inaction in areas of risk.”
Conclusions:
The Bluelight case study shows how a patient safety newsletter can be used to
convey targeted information to front line staff, with credible content, supported by
visible risk management leadership and systems. The use of dialogue between
teams using the prompts of learning points, and self assessment questions on the
web questionnaire, gives an opportunity for the clinical governance department to
assess the uptake and likely actions arising from the feedback. Further opportunities
for feedback would include active follow up of planned actions and feedback on the
impact of those actions on patient safety. Reviewing the initiative against the 15
systems requirements is undertaken in the table below (Figure 5.1). We note in particular the absence of explicit visible examples of senior staff “buy in”, although this may well be implicit, given that the initiative has been high profile for the trust in the local healthcare economy as an example of good practice, sustained for over two years.
5.6 - CASE STUDY NO.4: SURVEY OF TRUST NEWSLETTERS
Case study author: Professor Louise M Wallace, with research support from
Julie Bayley, Rhiannon Carroll, Stephanie Ashford and Denise James at
Coventry University.
Background:
We gained the impression from several sources that patient safety newsletters were
increasingly common practice in NHS trusts over the period of the research project.
The Survey had highlighted in Q44 that newsletters and reports were used by 88% of
responding trusts. During the short listing phase for case study sites, we gained
corroborating advice during conversations with NPSA Patient Safety Managers
(PSMs) and SHA clinical governance leads about the growth of newsletters as a
major means of communication with staff about patient safety issues. During the
workshop groups, particularly those focussing on level C and D feedback, several
NHS Risk Managers commented that they wished to draw our attention to the many
examples, and widely differing formats, in use in their localities. After consultation
within the Steering Group, it was decided to undertake a desktop survey of at least one newsletter per trust; all trusts would be invited to send us their newsletters.
Method:
In June 2006, all 607 trusts in England and Wales were e mailed a request to send us a copy of their patient safety newsletter, by paper or e mail. The request was addressed to the head of clinical governance or risk management, with a copy to the CEO.
Response:
We received responses from 90 trusts, some giving more than one newsletter. In
these cases a random process (using random numbers and blind selection) was
used to select a newsletter for that trust. Information about use was also offered in
many cases in a supporting letter or e mail. An evaluation of the case study against
the 15 systems requirements for effective safety feedback is presented below.
Evaluation method:
From consultation with risk managers in the other three case study sites, and within our local trusts, we drew up a simple form for reviewing the content and format of each newsletter and its covering information (see Appendix 4(f)). The newsletters were evaluated against this form by three research staff, each working on a set of newsletters. A random
sample of nine newsletters was given a second review by the lead researcher, and
the few discrepancies resolved by discussion. This double check took place in three
stages, after the first 20 newsletters were reviewed (one per researcher), then when
the first 50 had been reviewed, and when all newsletters had been reviewed once, so
that any deviations in evaluation methods by the team could be rectified as early as
possible.
Response rate:
Responses were received from 90 trusts. Where more than one Newsletter was
received, the Newsletter with most relevance to incident reporting was selected.
The profile of the trusts which supplied Newsletters:
The majority of respondents were PCTs (n=42), which is unsurprising since these made up almost 50% of all trusts. The next most numerous were general/acute hospital trusts (n=28), followed by mental health/partnership trusts (n=9), combined PCTs and mental health trusts (n=6) and ambulance services (n=5). The proportions are similar to the overall distribution of types of trusts in England. There was one respondent in this sample from Wales; all others were English trusts.
Results:
Results are analysed using a framework that incorporates features from the SAIFIR 15 system requirements and the five modes of feedback. Results are presented as valid percentages, i.e. excluding data from cases where the response could not be determined, so denominators vary between items.
Format results:
Most newsletters were produced periodically, i.e. without pre set publication dates (n=72/80, 90.0%), and were aimed at all staff (n=48/77, 62.3%) rather than a speciality or department. The modal length was three or more A4 pages (n=52/90, 57.8%). Distribution was not easy to determine from the responses, but for some it included e mail, intranet and face to face options as well as internal post. All went to staff, one went to the SHA or other trusts, one went to the PPI Forum, but none went to non NHS organisations such as the local council. This suggests that newsletters are seen almost exclusively as a means of communicating with the trusts’ own staff, and that coverage may not be complete as most appear to rely on internal post as the main method of distribution. Perhaps because of cost, the large majority were in house designed and printed publications (80/90, 88.9%), although two thirds used some photographs or graphics, and 36/89 (40.4%) used web sources. Evidence of audit of “reach” was provided by only 9/90 (10.0%) trusts, which again suggests that this channel of communication may not be optimised in most trusts.
Content results
Review of the content of the single issue per trust showed that slightly more than half were focussed on one main topic per issue (n=49/90, 54.4%). The majority were specific to risk management or patient safety (n=68/90, 75.6%) rather than including other trust management news. Specific aspects related to the SAIFIR model were also assessed.
First, questions relating to the modes of feedback were assessed. There was evidence of mode C feedback, i.e. analysis of patterns of incidents, in 38/90 (42.2%) of newsletters. Newsletters were used to alert staff to changes in local policies and procedures in 42/90 (46.7%).
The extent to which newsletters identified suggested root causes of single incidents or patterns of incidents was assessed, and 23/90 (23.3%) showed evidence of this mode D feedback. Recommended changes in working practices were made in 45/89 (50.0%), showing mode E feedback. These recommended changes were often specific to targeted work groups or work systems (51/89, 56.7%). For example, regarding a problem of patient identification, one newsletter exhorted staff in a general way to “always check you have the right patient before giving treatment”, while, on the same issue, another trust’s newsletter was more specific. This trust gave a list of actions that might be relevant in different contexts, e.g. ensuring all in patients have a wrist band, generally on the dominant arm; the procedure for remediation if patients have no band; the use of bands on an upper and a lower limb for surgery; and the reinforcement of checking by all staff at each patient transfer.
Scenarios or vignettes of investigated incidents were used in 30/90 (33.3%) of newsletters. An example from a Mental Health trust, “Proof Positive”, gave two short vignettes with clearly sign posted changes in working practices for clinical staff, including reference to evidence based guidelines and specific actions for members of the community mental health team to take for suicide prevention.
The extent of senior staff “buy in”, through explicit reference to endorsement by senior staff such as the CEO, was evident in only 5/85 (5.6%) of newsletters. It was difficult to determine whether staff members were congratulated for involvement in risk reporting, investigation and implementing changes. There was evidence of positive comment about staff reporting incidents in 5/85 (5.6%), about their involvement in investigations in 6/84 (6.7%), and about making successful changes in working systems in 3/90 (3.3%). Newsletters were also used to alert staff to relevant training (32/90, 35.6%) and resources such as web links (36/89, 40.4%), both of which are relevant to mode C in the SAIFIR model.
Newsletters are used for several modes of feedback simultaneously. Mode C refers to communication of system wide alerts, and mode E refers to required changes in working practices. A third of newsletters described alerts, which is to be expected since it is a requirement assessed by the Healthcare Commission that all trusts disseminate SABS. Only 3/87 (3.3%) described results of published patient safety research. There is some evidence of double loop learning: audit of the content of the newsletter was reported in 5/85 (5.5%), invitations for suggestions on the content of the newsletter were evident in 23/90 (25.6%), and invitations for suggestions for improving patient safety systems in 14/90 (15.6%). However, most (70/90, 77.8%) made it clear who to contact about the newsletter.
5.7 - Overall conclusions from case study section:
The first case study showed that, even among staff led by an enthusiast for risk reporting, there is scepticism about the importance and relevance of risk reporting by clinical staff. The option of a mobile device that would both make reporting a more normal part of daily work and offer the possibility of immediate feedback (modes A and B), as well as the more usual modes of feedback, shows real promise.
The newsletter survey shows that there is great variability in the extent to which newsletters are used for risk reporting feedback (modes C and E). There are many ways that newsletters could be used more effectively, by following suggestions from the 15 requirements above, as well as following good practice in designing attractive and targeted newsletters. Perhaps the most obvious features likely to affect motivation to report and to make changes towards safer practices are the credibility and relevance of information presented, the extent of senior staff endorsement, and the positive recognition of staff for involvement in risk reporting and learning activities. Closing the loop, by ensuring staff views of the newsletter are sought, by self tests of learning points (Bluelight), and by including a face to face dialogue between risk managers, line managers and front line staff (TRIAL), is likely to improve the effectiveness of newsletters as channels of mode C and E feedback.
System requirements for effective safety feedback, compared across the case studies (UHCW NHS trust PDAs; Leicestershire TRAIL; Lancashire Bluelight; audit of 90 trusts’ newsletters):

1. Feedback loops must operate at multiple levels of the organisation or system
UHCW NHS trust PDAs: PDA use is being trialled for clinical staff initially.
Leicestershire TRAIL: Evidence of reach across staff groups in adult mental health.
Lancashire Bluelight: Evidence of reach across staff groups trust wide.
Audit of 90 trusts’ newsletters: Not clear that all channels are used effectively to achieve “reach” to all staff in most trusts. Most give contact details re originator of newsletter but few audit content.

2. Feedback should employ an appropriate mode of delivery or channel for information
UHCW NHS trust PDAs: PDA system is being developed from user requirements.
Leicestershire TRAIL: Electronic medium supplemented by paper versions.
Lancashire Bluelight: Audit shows electronic medium accepted by staff.
Audit of 90 trusts’ newsletters: Channels of communication seldom audited.

3. Feedback should incorporate relevant content for local work settings
UHCW NHS trust PDAs: PDA user requirements include relevance of feedback to reporter’s workplace.
Leicestershire TRAIL: Vignettes are relevant as incidents are from the target service.
Lancashire Bluelight: Content was largely relevant, but some staff wanted more targeting of topic.
Audit of 90 trusts’ newsletters: Minority contain workplace specific recommendations.

4. Feedback processes should be integrated within the design of safety information systems
UHCW NHS trust PDAs: It is one of several modes of feedback.
Leicestershire TRAIL: It is one of several modes of feedback.
Lancashire Bluelight: It is one of several modes of feedback.
Audit of 90 trusts’ newsletters: -

5. Feedback of information should be controlled and sensitive to the requirements of different user groups
UHCW NHS trust PDAs: PDA user requirements include advice to recover the problem and improve working practices.
Leicestershire TRAIL: Tailoring relies on verbal discussion between staff of learning points, with feedback to central risk department on actions to be taken on each newsletter.
Lancashire Bluelight: Tailoring relies on verbal discussion between staff of learning points.
Audit of 90 trusts’ newsletters: -

6. Feedback should empower front line staff to take responsibility for improving safety in local work systems
UHCW NHS trust PDAs: Feedback to those responsible for action is a user requirement.
Leicestershire TRAIL: Action points are directed at front line staff taking local action.
Lancashire Bluelight: Action points are directed at front line staff taking local action.
Audit of 90 trusts’ newsletters: Some evidence in vignettes of specific work practice change recommendations.

7. Feedback should incorporate rapid action cycles and immediate comprehension of risks
UHCW NHS trust PDAs: PDA system is suitable for rapid and iterative feedback.
Leicestershire TRAIL: Not relevant.
Lancashire Bluelight: Not relevant.
Audit of 90 trusts’ newsletters: -

8. Feedback should occur directly to reporters and key issue stakeholders as well as broadly to all front line staff
UHCW NHS trust PDAs: PDA system is designed to give feedback to reporter and others that need to act.
Leicestershire TRAIL: TRAIL vignettes contain examples of feedback relevant to staff in addition to reporters.
Lancashire Bluelight: Bluelight contains examples of feedback relevant to staff in addition to reporters.
Audit of 90 trusts’ newsletters: Relevance is often balanced against wide coverage to all staff.

9. Feedback processes should be well-established, continuous, clearly defined and commonly understood
UHCW NHS trust PDAs: PDA users will need to be trained to use this mode of reporting and feedback.
Leicestershire TRAIL: TRAIL is well established, as is the short feedback questionnaire issued to managers for each issue, and audit shows good reach, but there is no ongoing feedback on TRAIL as a medium.
Lancashire Bluelight: Bluelight process is well established, and the system for feedback on Bluelight is by audit, as well as questions on the intranet on learning points.
Audit of 90 trusts’ newsletters: Newsletters are mostly periodic, with uncertain feedback loops as to content and format by users.

10. Feedback of safety issues should be integrated within the working routines of front line staff
UHCW NHS trust PDAs: PDA feedback will be part of using PDAs for other work tasks. Other feedback, e.g. at team meetings, will continue.
Leicestershire TRAIL: Feedback via e mail and discussion at staff meetings, handovers and briefings is routine.
Lancashire Bluelight: Feedback via e mail and discussion at staff meetings, handovers and briefings is routine.
Audit of 90 trusts’ newsletters: Most trusts rely on internal post, with some e methods.

11. Feedback processes for specific safety improvements are visible to all front line staff
UHCW NHS trust PDAs: Not relevant to PDA part of system.
Leicestershire TRAIL: TRAIL vignettes can highlight safety actions taken as a result of RCA.
Lancashire Bluelight: Bluelight can highlight safety actions taken as a result of RCA.
Audit of 90 trusts’ newsletters: Feedback recommendations vary in specificity and focus; often unclear who is the main target.

12. Feedback is considered reliable and credible by front line staff
UHCW NHS trust PDAs: PDA system may include advice from Risk Managers, some of which can be automated.
Leicestershire TRAIL: Audit shows high credibility.
Lancashire Bluelight: Audit shows high credibility.
Audit of 90 trusts’ newsletters: Credibility would have to be established by audit.

13. Feedback preserves confidentiality and fosters trust between reporters and policy developers
UHCW NHS trust PDAs: PDA system can be more secure than paper systems.
Leicestershire TRAIL: TRAIL vignettes are anonymised.
Lancashire Bluelight: Bluelight cases are anonymised, and efforts are made to be sensitive to staff in areas that are identified in the reports.
Audit of 90 trusts’ newsletters: Case materials reviewed were anonymised.

14. Feedback includes visible senior level support for systems improvement and safety initiatives
UHCW NHS trust PDAs: Relevant to the overall safety system in which the PDA system will exist.
Leicestershire TRAIL: Buy in of senior staff above service department level is not clear.
Lancashire Bluelight: No explicit statements in the Bluelights reviewed refer to senior staff endorsement.
Audit of 90 trusts’ newsletters: Only 5 newsletters contained a CEO or other senior staff endorsement.

15. Feedback processes are subject to double-loop learning to improve the effectiveness of the safety control loop
UHCW NHS trust PDAs: PDA feedback should be subject to audit, with feedback on the usefulness of feedback; this could also be sent to the Risk Management dept.
Leicestershire TRAIL: Questions asked on a simple survey form of managers for each issue, and annual audit; but regular monitoring of the impact of action points on safety is not built into this system.
Lancashire Bluelight: Questions on the intranet and regular audits occur, but regular monitoring of the impact of learning points on safety is not built into this system.
Audit of 90 trusts’ newsletters: Minority of newsletters have audit of content and format. Contact person for comment is clear in the majority of newsletters.
Figure 5.1: Comparative analysis of the case studies against SAIFIR framework
requirements
5.8 - References:
Bent, P.D., Bolsin, S.N., Creati, B.J., Patrick, A.J. and Colson, M.E. (2002) Professional monitoring and critical incident reporting using personal digital assistants. Medical Journal of Australia, 177(9), 496-499.
Bolsin, S.N., Faunce, T. and Colson, M. (2005) Using portable digital technology for clinical care and critical incidents: a new model. Australian Health Review, 29(3), 297-305.
Firth Cozens, J., Redfern, N. and Moss, F. (2002) Confronting errors in patient care: report on focus groups. Report to DH Patient Safety Research Programme, University of Birmingham, Birmingham, UK.
Lawton, R. and Parker, D. (2002) Barriers to incident reporting in a healthcare system. Quality and Safety in Health Care, 11, 15-18.
Matthew, L. (2006) Encouraging doctors to report. Presentation at Patient Safety 2006, 1-2 February 2006, ICC.
O'Dowd, A. (2006) Adverse incidents in NHS are still under-reported. British Medical Journal, 333, 59.
Waring, J.J. (2005) Beyond blame: cultural barriers to medical incident reporting. Social Science and Medicine, 60(9), 1927-1935.
6.0 - OVERALL DISCUSSION, RECOMMENDATIONS AND CONCLUSIONS
FROM THE SCOPING STUDY OF FEEDBACK FROM PATIENT SAFETY
INCIDENT REPORTING SYSTEMS:
Lead author:
Professor Louise M Wallace
Coventry University
Dr Maria Koutantji
Dr Jonathan Benn
Professor Charles Vincent
Imperial College London
Professor Peter Spurgeon
Universities of Birmingham
& Warwick
Discussion and Recommendations:
The scoping study consists of the parallel elements of the review, survey, case studies and expert workshop. As the outcome from each element has been discussed in its own section, this section attempts to integrate the outcomes, both for their implications for the UK NHS and for research into patient safety.
In what follows, we first examine how UK NHS incident reporting and learning systems operate against the SAIFIR model, and second identify issues for further research.
Modes of feedback:
The Expert Workshop and the case study of user requirements for a mobile (PDA) reporting and feedback system (UHCW NHS trust case study) showed that UK NHS trusts currently rely on paper based reporting and, although they are moving to intranet forms, these systems currently operate as one way post boxes with long time delays, with, for example, paper reports sitting in pigeon holes on wards for unpredictable periods, often exposed for others to see. Strictly speaking, feedback occurs in some trusts before an issue is reported, since front line staff members are often required to discuss a reportable incident with their immediate manager before they or their manager completes the report. It is unknown how much this process impacts on the willingness of staff to report, and whether it adds to the quality of the report or detracts from a full account by the introduction of social desirability bias. The case study using PDAs versus standard paper based systems could provide opportunities to test the impact of these work organisation variables on reporting quantity and quality.
A further issue that arose from the Expert Workshop, and that is not immediately apparent from the linear flow chart representations often used to describe reporting systems, is that staff very often report on incidents in which the chief actors are not themselves, and sometimes the incident originates in a completely different clinical team or, more rarely, another health or social care organisation (such as a nursing home). While this should not negate the importance
of providing incident processing and outcome information to the reporter, it is
probable that others may be more centrally involved and therefore become the focus
of feedback. Feedback flows are not likely in many instances to be single and linear.
It is important to recognise that our description of feedback modes does not imply
only one reporter is involved per incident, nor that this person will necessarily be the
important focus for outcome and workplace system change feedback.
With these caveats in mind, we turn now to examine the modes of feedback
described in the SAIFIR model and how they apply to the UK NHS.
Mode A is feedback to the reporter that provides acknowledgement and outlines the
incident categorisation and management process. The review highlights that dialogue
with reporters is important for several reasons: a) to demonstrate that actions are
taken on the basis of reports and in so doing to stimulate future reporting to the
system, b) to provide the opportunity for clarification and validation of reported
information coming into the system, and c) to gain the input of local front line
expertise concerning the causes of failures and how these might be addressed with
practical and workable safety solutions. In the survey, we found that only
approximately a third of NHS trusts surveyed provided a specific acknowledgement
to the reporter or any further information on how the issue is to be handled. One in
10 trusts provided no direct information to the reporter at all regarding their reported
incident. Positive acknowledgement, i.e. thanking staff for reporting, is practiced
routinely by only a minority of trusts surveyed, and even in the audit of newsletters,
this part of describing an incident scenario was almost always absent. It is unknown
how much dialogue there is with reporters to clarify the nature of the incident. This in
part may be a function of the extent that trusts use pre-categorised forms, since at
the Expert Workshop some participants argued this is intended to reduce the burden
on central reporting and management staff who use the information on the form
almost exclusively to categorise the incident for action. They also use it for
aggregated incident reports. However, trusts such as UHCW NHS Trust argued that
categorised forms deter reporting, are often inaccurate and do not reduce the need
for dialogue with the reporter.
NHS Recommendation no.1: In terms of the SAIFIR model, we suggest that
methods of reporting should be designed to maximise mode A feedback
(acknowledgement, thanking for reporting, clarification), and that it should explicitly
include a dialogue with the reporter and other affected staff to gain a wider picture of
the incident, and to engage staff in reporting and learning, since it is likely these staff
will be those subsequently involved in an investigation, development of solutions,
implementation and monitoring of changes in work systems.
Research recommendation no. 1: In relation to mode A, research should examine whether pre categorisation or free form reporting impacts on reporting quality and quantity, along with the technology of reporting (paper, intranet, mobile technology).
Mode B refers to rapid response actions to recover from the incident, to put in
immediate remedial and preventive actions to the affected services. This aspect of
the model was discussed in some depth in the Expert Workshop. It was shown that
many industries other than healthcare aspire to this mode of feedback, but few
concrete examples emerged. NHS participants voiced concern that such action
advice might be prematurely based on assumptions of causality that might
subsequently be found to be incorrect, thereby compounding the problem. It was
argued by others that as systems of reporting are refined, clear patterns and
indicators of incident type and causation will begin to emerge, making it more likely
that this mode of feedback will be acceptable. The UHCW NHS trust PDA project is
designed to allow the capability of this mode of feedback, providing a means of
testing the utility of rapid action feedback.
Recommendation for the NHS no. 2: The NHS reporting systems should be
designed at trust level to encourage identification of patterns of incidents for which
causes are readily established, and for which rapid recovery action can be provided.
Research recommendation no.2: Evaluation of the impact of incident reporting and
recovery actions by the provision of rapid action advice, including by electronic
means (e.g. e mail advice or referral to trust pathways of care or clinical guidance)
should be undertaken. This could include analogue as well as naturalistic
experiments.
Mode C feedback is concerned with the dissemination of information concerning
lessons learnt from analysis of reported incidents. Data from the survey indicates
that the more popular forms of dissemination were newsletters, hospital-wide group
discussions and training programmes and workshops. The case studies of
newsletters showed that of those 90 trusts that submitted newsletters for this study,
most were not using this medium in ways that are likely to optimise risk awareness.
Some are used mainly to disseminate information about national alerts (e.g. NICE
guidance, NPSA policies, CNST and Healthcare Commission patient safety
standards). There were some examples of good practice in the provision of local
safety incident scenarios, with clearly identified learning outcomes. The Leicestershire Partnership NHS Trust TRAIL initiative, by including both face to face feedback between managers and front line staff and one way methods, allows staff to identify more clearly the changes in practice that they can make. The use of
regular on line self assessment of learning, as evidenced in the Bluelight initiative of
Lancashire Partnership NHS trust, shows how this mode of feedback can also
incorporate double loop learning and mode E feedback, i.e. a means of monitoring
that changes in working practices are identified.
Recommendation no.3 for the NHS is that all trusts should establish comprehensive,
multi modality, audited feedback to all staff, particularly regarding the outcomes and
changes required in working systems arising from incidents reported and investigated
within the trust. Further, on going feedback from the staff as to the usefulness of this
feedback and to whether they have identified and made changes in systems and
practices as a result of the feedback, should be incorporated into these channels of
communication – e.g. by newsletters having tear off suggestion slips, face to face
discussion of learning points as evidenced by the TRAIL initiative, dedicated e mail
addresses for suggestions, regular self test assessments via the intranet as
evidenced by the Bluelight initiative.
Recommendation for research no.3: Identification of commonly recurring causes of
incidents and the safety solutions should be evaluated in both naturalistic designs,
and analogue tests. For example, given the same scenario and solutions and
recommendations, there may be more or less effective methods of ensuring staff are
made aware of the issues and enabled to make improvements in the safety of
working practices and systems.
Mode D refers to the communication of information concerning incident outcomes to
the original reporter, and as argued above, to others in the affected services, and
hence closing the loop and demonstrating the utility of reporting. The survey results
indicated that only around two thirds of trusts directly feed back the results of the
investigative process to reporters.
Recommendation no. 4 for the NHS is that trusts ensure they routinely identify those to whom the outcome of an incident report, and of the investigation if one is triggered, should be made known. The outcome information should be routinely shared with them, and
with others such as the patient, and with health and social care organisations
involved in the pathway of care. The effectiveness of chosen communication
channels (e mail, alerts, newsletters etc) should be audited.
Research recommendation no. 4: The impact of incident outcome reports on safety
culture, reporting behaviour and on safer working systems should be evaluated.
Again, naturalistic experiments may arise through changes in the communication
channels within a trust (e.g. the introduction of an e form versus paper systems of
reporting and feedback). Analogue experiments could use healthcare staff in training
as participants to assess the cognitive and motivational impact of different types of
outcome report. For example, if vignettes are more often welcomed by staff than quantitative data reports, is this also true for all types of staff and incidents, and what information is recalled from such reports?
Mode E feedback is the implementation of actions to improve systems safety, e.g. equipment and the care environment, working practices and processes of care delivery. The survey showed that the majority of responding trusts’ risk management leads believe that some safety feedback does reach staff but that it is not reliably acted upon. Only a quarter of these trusts could report that feedback was regularly offered, acted upon and evaluated in terms of effectiveness in improving safety.
Newsletters are a potential format for communicating these recommendations widely. The two case studies of best practice showed that they were welcomed by staff as a useful vehicle for this mode, and the audits showed some impact on self reported staff behaviour. The newsletter audit case study, however, showed that most trusts do not use newsletters for this purpose, nor do they build in means of assessing the impact of recommendations communicated in this way.
Mode E feedback also includes the capability to monitor the implementation and evaluate the impact of patient safety improvements. The survey results indicate that in around half of all responding trusts, service managers and clinicians are expected to implement recommendations and yet there is no formal system in place to monitor this process, and in some there is a recognition that guidelines are inadequately specified to be practicable or capable of being monitored. Twenty five trusts reported that there was no system in place at all governing the implementation of improvements.
Recommendation no. 5 for the NHS is that for every reported incident, and especially
for those that trigger an investigation, there should be specific recommendations to
the affected service and related services in the care pathway to remediate and
prevent reoccurrence. Communication channels should be tailored to the needs of
receiving staff, as “one size fits all” is unlikely to be effective. Communication that
requires active processing by the respondent to generate a commitment to changing
practice is more likely to be effective. Recommended changes should be monitored,
not only so that action plans are completed, but so that impacts on safety are
monitored.
Recommendation no. 5 for research is to establish common templates or other
approaches to streamlining the monitoring of safety changes. If there were similar
measures of safety in use across services and trusts (e.g. process measures such as
avoidance of return to theatre after planned surgery) or outcomes such as wound
healing rates for common procedures, then learning may be achieved about what
solutions appear to impact most on safety performance monitoring data. Further
refinements that might lead to understanding attributions of causality would be to
distinguish between skill or knowledge errors and intentional breaches of protocols.
We turn now to the 15 system requirements of effective safety feedback, as
described in the review section.
1. Feedback loops must operate at multiple levels of the organisation or
system
In the survey we found that reporting externally appears to occur routinely to SHAs and the NPSA, or was planned to be in place once infrastructure issues were addressed, but only a third received feedback from the NPSA and two thirds from SHAs, so external reporting loops are not uniformly successful. Sharing lessons from incident reports occurs with a number of organisations, but mostly the SHA, with only half sharing across local trusts and about a sixth with local authorities.
Within the trust, a range of committees including the Trust Board itself takes
responsibility for patient safety, and the effectiveness of these communication
channels is not known. The nature of what is reported reflects performance review
data rather than in depth understanding of causation. We cannot ascertain the extent
of ownership of the feedback from reporting systems from this survey. As discussed
under Modes A, B and E above, feedback occurs to reporters more often at the end
of the process and there is no direct feedback to patients except via the complaints
system.
Recommendation no. 6 for the NHS is that feedback to the Board and to reporters
should be informed by understanding of causality rather than the current focus on
performance data.
Research recommendation no. 6: Establishing common patterns of causality of
similar incidents across trusts could lead to new diagnostic tools and techniques
which may make incident investigation more efficient, or even obviate the need for
many stages in the investigative process, giving more opportunity to focus on
establishing prospective hazards assessment processes.
2. Feedback should employ an appropriate mode of delivery or channel for
information
The survey and case studies show a variety of channels are used, but it is unknown
just how appropriate and effective these channels are.
Recommendation no. 7 for the NHS is that active dialogue between risk managers,
line managers and front line staff, as exemplified in the TRAIL case study, are
employed to achieve active solution generation and buy in to changing practices by
front line staff.
Research recommendation no. 7: The possibilities of new technology to provide systematically varied format and content of feedback should be exploited to see what factors can optimise the motivation and competence of staff to make changes in their practice. For example, PDAs may be more suitable for staff who move between locations, but add no value for those with access to stationary workstations or offices.
3. Feedback should incorporate relevant content for local work settings
The survey shows that in almost a third of trusts feedback guidelines and recommendations are poorly designed, and only a quarter of trusts concluded that safety feedback was relevant enough to be reliably acted upon. Sources of “sensemaking” information, including input from front line staff and patients, are not reliably used in investigations.
Recommendation no. 8 for the NHS is to use e technology and available databases
such as PEAT reports and complaints databases to integrate sources of information
relevant to causation and solution generation, as well as unquantifiable but essential
contextual information from staff and patients.
Recommendation no. 8 for research is to establish the criteria for relevant content of
feedback that will be comprehensible to those intended to use it.
4. Feedback processes should be integrated within the design of safety
information systems
The survey shows that safety information systems in the NHS are nearly all designed
to capture incidents related to staff, patients and carers, but there is little evidence
that feedback to reporters is integral to these systems, i.e. they have not been designed to issue direct feedback to reporters (modes A, B, D).
Recommendation no. 9 for the NHS is to integrate feedback to reporters and their
managers within all local incident reporting systems.
Recommendation no. 9 for research is to establish ways that technology, such as e
working via PDAs, can assist with ensuring that feedback reaches those that have
knowledge of the event and its likely causation, solution generation, remediation and
preventive action in a timely and accurate way.
5. Feedback of information should be controlled and sensitive to the
requirements of different user groups
The survey showed that trust systems for managing incidents known to patients are
more mature than those which are staff initiated incident reports.
Recommendation no. 10 for the NHS is to encourage active reporting by patients and
carers of incidents and near misses, to reduce reliance on the complaints system and
engage patients as partners in organisational learning.
Recommendation no. 10 for research is to establish the psychological and sociocultural factors that may influence the willingness of patients and carers to use a “fair blame” system rather than complaints and litigation oriented systems.
6. Feedback should empower front line staff to take responsibility for
improving safety in local work systems
Both the survey and the newsletter case study showed that only a small minority of trusts actively encourage reporting by thanking the reporter, and still fewer explicitly publicise the successful actions of staff to improve safety. There is
considerable scope for moving beyond simple recognition to include incentives. This
may be controversial, since in some trusts reporting is another of many duties. The
survey showed there was more emphasis on information sources of feedback
(newsletters etc) than action feedback.
Recommendation no. 11 for the NHS is to routinely acknowledge and congratulate
staff for reporting and improving safety.
Recommendation no. 11 for research is to establish whether overt incentives are a
motivator or detractor of reporting and safety implementation behaviour.
7. Feedback should incorporate rapid action cycles and immediate
comprehension of risks
This requirement and recommendations arising from our research are described
above under modes A and B.
8. Feedback should occur directly to reporters and key issue stakeholders as
well as broadly to all front line staff
As described under requirement no. 1 above, the survey showed there is evidence of feedback to front line staff and reporters. But direct and rapid feedback to the reporter (mode A) only occurs as an acknowledgement in a third of trusts, as issue progress in just under half of trusts, and as the outcome of investigations in three quarters of trusts. The methods described were reports and meetings; that is, feedback is neither particularly rapid nor targeted. Feedback of wider lessons learnt (modes C, D, E) is most often by training programmes and meetings, e mail, reports and newsletters, some of which include a more active dialogue with those informed (e.g. team brief). The case studies, particularly TRIAL, showed the value of active face to face dialogue in feedback to staff.
Recommendations for the NHS and for research are outlined above in relation to
modes A, C, D and E.
9. Feedback processes should be well-established, continuous, clearly defined
and commonly understood
The case studies of the Bluelight newsletter and TRIAL showed, from the staff who were interviewed and the local audits undertaken, that in these trusts the systems for reporting were well known and expectations of feedback were largely being met; importantly, the impact of newly issued advice could be regularly assessed by staff themselves through completing a simple web based questionnaire linked to each Bluelight newsletter issued. Technologies such as web questionnaires, the intranet and e mail have the potential to reach all staff and offer mass communication, but currently access by many staff groups is restricted.
Recommendation no.12 for the NHS is that risk managers should undertake regular
audits of the communication channels to staff about reporting procedures, the
conduct and outcome of investigations and changes to working practices resulting
from reported incidents. Self assessment and aggregated reports of these
assessments should be used to monitor the impact of feedback prospectively, rather
than through periodic audit.
Recommendation no. 12 for research is to establish the efficacy of prospective self
assessment of risk reporting and safety information versus periodic audit on the
quality and quantity of risk reporting and on safety outcomes.
10. Feedback of safety issues should be integrated within the working routines
of front line staff
The survey was conducted in parallel with the work to develop the model. The two
case studies of newsletters showed that feedback via e mail and discussion at staff meetings, handovers and briefings is routine and welcomed by staff.
Recommendation no. 13 for the NHS is to use normal methods of communication, such as e mail, handovers and team briefing, in addition to special procedures such as Clinical Governance meetings and Risk Management Newsletters to all staff, to feed back on safety issues.
Recommendation no. 13 for research is to establish what communication styles are
most appropriate and motivating for staff to implement recommended changes in
their practice.
11. Feedback processes for specific safety improvements are visible to all front
line staff
Both Bluelight and TRIAL case studies showed how the initiatives disseminated
widely to staff the improvements implemented after RCA investigations.
Recommendation no. 14 for the NHS is for all trusts to adopt regular multi channel
forms of communication with staff about improvements in safety systems.
The research recommendation here is the same as for mode E above.
12. Feedback is considered reliable and credible by front line staff
The survey found that participants thought that in two thirds of trusts feedback was
partial and not acted upon by many staff, which may be a consequence of not
providing reliable, credible feedback. The TRAIL and Bluelight case studies had high
credibility as shown in their audit data. It is unknown what aspects of the message, or
of the implied authority of the sender, may influence credibility of feedback.
Recommendation no.15 for the NHS is to develop simple and reliable staff attitude
measures or rating scales to measure the credibility of safety feedback, and
determine what variables influence staff perceptions of credibility e.g. the implicit
authority of the source of feedback (Medical Director versus Risk Manager) or factors
within the feedback itself (e.g. how readily it can be operationalised). Research on
guideline development suggests the latter may be important in influencing
implementation (Michie and Johnston, 2003).
13. Feedback preserves confidentiality and fosters trust between reporters and
policy developers
The survey shows that NHS trust level systems are confidential but not anonymous, as the NRLS is at a national level. It is unknown how far feedback is provided in confidence.
14. Feedback includes visible senior level support for systems improvement
and safety initiatives
In the survey we found that although all but four trusts had an Executive Director responsible for Patient Safety, only a sixth attributed this role to the CEO. It is unknown from this survey how credible the leadership of patient safety is perceived to be by clinical staff. The survey question on safety culture suggests that most trusts recognise there is significant progress to be made on embedding a safety culture and that very few are using any recognised safety culture assessment tools.
Recommendation no. 16 for the NHS is to use the many opportunities that exist to
communicate with staff about safety to highlight the involvement and priority given to
patient safety by the CEO.
Recommendation no. 17 for the NHS is to ensure that regular assessments of
organisational culture are conducted using recognised measures, and that the results
are seen to be acted upon.
15. Feedback processes are subject to double-loop learning to improve the
effectiveness of the safety control loop
The survey suggests that sharing outside the trust, from which feedback about the process of reporting and investigation may be gleaned, occurs with the SHA in about two thirds of trusts and with trusts in the local area in about half. It is not possible to ascertain the extent to which learning was achieved by this sharing. Within trusts, the TRAIL and Bluelight initiatives gave evidence, from questions on the intranet and regular audits, that there is learning about the impact of feedback, but regular monitoring of the impact of learning points on patient safety does not routinely occur.
6.2 - Implications for practice in NHS trusts from the scoping project
Feedback can only be effective in the context of comprehensive risk management
systems, including leadership at the highest level, policies, processes and structures,
with competent risk management staff and IT support, and clinical and public
engagement. These are beyond the scope of this study, but it must be recognised
that these are of overriding importance to the success of any specific
recommendation made in this report. International research supports this view
(Runciman, et al, 2006).
The recommendations above explicitly use the SAIFIR model’s 15 requirements and
five modes of feedback (A-E) to create recommendations for how the NHS can
develop more effective feedback from incident reporting systems. In addition, the
research leads to some more general recommendations.
Feedback must be built into the regular reporting of patient safety issues at all levels within the trust, most crucially to the Board, and externally, and this feedback should be evaluated. This should include evidence of staff awareness of the importance of, and mechanisms for, reporting and investigating incidents, of safety culture, and of lessons learnt from incidents, rather than the current emphasis on reporting statistical data on incident types.
Feedback to reporters should be of all types in the model, using available IT, paper,
and face to face means. An emphasis on dialogue with reporters will improve
accuracy of reporting, effective recovery, and “buy in” to solution development and
action.
The content and formats of feedback should be integrated into working systems, not
a bolt on, and evaluated regularly by users.
Feedback involving both information and action should occur in all modes (A-E), with
much greater emphasis on modifying working practices, monitoring impact and
publicising personal and organisational success in improving patient safety.
6.3 - Implications for patient safety research
The survey and case studies could not capture all aspects of the emerging model.
There are gaps in our knowledge even against these criteria, e.g. the extent of rapid recovery information (mode B), the perceived credibility of recommendations, and the extent to which safer systems result (mode E).
Recommendation no. 14 for research is that effective methods of sharing incident investigation methods and results should be established on a healthcare economy basis, and whether this is best done via trusts with similar services or within a local healthcare economy should be investigated.
Recommendation no. 15 for research is that the various methods of trust wide
feedback (mode C) such as “newsletters” could be subject to more rigorous
evaluation. The case studies throw further light on how such methods can be used in
potentially more effective ways. Testing of any of these methods could be undertaken
in field trials as well as experimental analogue trials, e.g. using clinical students.
The extent to which patients and the public are willing to be partners in learning from
incident reporting systems, as opposed to complaints, is untested. It is well known
that many patients seek to avoid litigation but state they resort to litigation to prevent
event recurrence (Vincent, 2001). Recommendation no.10 for research above
applies.
Returning to points made at the start of this section on the generation of reports by
staff on incidents that involve staff outside their department or trust, recommendation
no.18 for the NHS is that the NHS includes the whole healthcare economy in
reporting and learning.
The final recommendation for research (no. 16) is that the links between retrospective investigation and analysis, and proactive hazard estimation and prevention methods, including detecting and building resilience, need further investigation to establish the relative merits and effectiveness of these methods in improving patient safety.
Level within UK: National Policy including via SHAs/Health Boards
Recommended action: Comprehensive information and action feedback processes, such as those described within the SAIFIR framework, to be integrated within all local NHS risk management and reporting systems, according to a common framework, e.g. via updating Standards for Better Health safety standards.
Suggested outcomes: Healthcare Commission is able to detect improvements in quality of incident reporting and learning from incidents within all NHS trusts.

Level within UK: National Policy including via SHAs/Health Boards
Recommended action: Focus upon development of a common, defined process, with defined responsibilities and structures, which effectively closes the safety loop in NHS organisations. This process must include reporting, incident analysis and investigation, solutions development, implementation and continued monitoring of effectiveness of corrective actions.
Suggested outcomes: Healthcare Commission is able to detect improvements in quality of incident reporting and learning from incidents within all NHS trusts. SHA/Health Board level is able to detect improvements within local trusts, and trusts can identify how SHA/Board level feedback has improved the trusts’ systems.

Level within UK: National Policy including via SHAs/Health Boards
Recommended action: Simplification of current system of multiple reporting and feedback channels to organisational, supra-organisational and external agencies. Use of common feedback channels for local and national level information and alerts.
Suggested outcomes: Reduction in number of channels of reporting, as recommended in the NAO (2005) report.

Level within UK: National Policy including via SHAs/Health Boards
Recommended action: Development of clear policies with supporting criteria for decision making concerning what level of feedback or action to instigate in response to specific safety incidents, including definition of which incidents require a rapid local response to prevent further harm to patients. Definition of clear protocols governing local and trust-wide action to be taken for rapid response scenarios. An audit of SUIs may show considerable variation due to local interpretation of thresholds. National policy to have formal investigation of all SUIs mandatory, as in New South Wales, Australia.
Suggested outcomes: Greater specificity of current definition of SUIs, with evidence that these criteria were used to determine upward reporting; the current system includes issues of political interest to DH which may not be relevant to the overall safety improvement objective.

Level within UK: NHS Trust
Recommended action: Visible support of feedback process from both senior management and local clinical leadership to demonstrate the importance of safety issues generally and in achieving effective uptake and implementation of specific systems solutions and improvements. Feedback to the Board and to reporters should be informed by understanding of causality rather than the current focus on performance data.
Suggested outcomes: Regular reports on incident analyses, not simply frequencies, to Boards and sub committees, where learning and review of actions are visible.

Level within UK: NHS Trust
Recommended action: IRS should be designed to maximise ease and accuracy of input, using paper, web and e based systems. E technology should also facilitate combining data from other available databases such as PEAT reports and complaints databases to integrate sources of information relevant to causation and solution generation, as well as unquantifiable but essential contextual information from staff and patients. These multiple modes may also be used for communicating feedback information.
Suggested outcomes: Integration of multiple means of reporting, analyses and feedback.

Level within UK: NHS Trust
Recommended action: Mode A feedback (acknowledgement, thanking for reporting, clarification) should occur for all incidents, with explicit dialogue with the reporter and others affected for more serious incidents.
Suggested outcomes: Improved quality of reports and engagement of staff in investigation and actions.

Level within UK: NHS Trust
Recommended action: The IRS should be designed at trust level to encourage identification of patterns of incidents for which causes are readily established, and for which rapid recovery action can be provided.
Suggested outcomes: Use of guides to identify common contributory factors where menu driven forms are used.

Level within UK: NHS Trust
Recommended action: Establish comprehensive, multi modality, audited feedback to all staff, particularly regarding the outcomes and changes required in working systems arising from incidents reported and investigated within the trust. Feedback on feedback is needed from the staff as to the usefulness of this feedback and to whether they have identified and made changes in systems and practices.
Suggested outcomes: Risk Management Newsletters having tear off suggestion slips, face to face discussion of learning points as evidenced by the TRAIL initiative, dedicated e mail addresses for suggestions, regular self test assessments via the intranet as evidenced by the Bluelight initiative.

Level within UK: NHS Trust
Recommended action: Trusts ensure they routinely identify those to whom the outcome of an incident report, and of the investigation if one is triggered, should be made known.
Suggested outcomes: The outcome information should be routinely shared with them, and with others such as the patient, and with health and social care organisations involved in the pathway of care. The effectiveness of chosen communication channels (e mail, alerts, newsletters etc) should be audited.

Level within UK: NHS Trust
Recommended action: For every reported incident, and especially for those that trigger an investigation, there should be specific recommendations to the affected service and related services in the care pathway to remediate and prevent reoccurrence. Communication that requires active processing by the respondent to generate a commitment to changing practice is more likely to be effective.
Suggested outcomes: Recommended changes, and the effectiveness of the channels of communication, should be monitored, so that action plans are completed and impacts on safety are shown to occur.

Level within UK: NHS Trust
Recommended action: Active dialogue between risk managers, line managers and front line staff, as exemplified in the TRAIL case study, to achieve active solution generation and buy in to changing practices by front line staff. A similar process can be achieved by integrating feedback on incidents into existing mortality and morbidity reviews in the acute sector.
Suggested outcomes: Solutions generated would have credibility with front line staff.

Level within UK: NHS Trust
Recommended action: Trusts should routinely acknowledge and congratulate staff for reporting and improving safety.
Suggested outcomes: Integrate recognition for safety improvement by staff into trust systems of communication and reward.

Level within UK: NHS Trust
Recommended action: Risk managers should undertake regular audits of the communication channels to staff about reporting procedures, the conduct and outcome of investigations and changes to working practices resulting from reported incidents.
Suggested outcomes: Self assessment and aggregated reports of these assessments should be used to monitor the impact of feedback prospectively.

Level within UK: NHS Trust
Recommended action: Trust staff should use normal methods of communication, such as e mail, handovers and team briefing, in addition to special procedures such as Clinical Governance meetings and Risk Management Newsletters to all staff, to feed back on safety issues, and particularly on the impact of suggested changes on their own services.
Suggested outcomes: Staff would be aware of how reporting and recommended solutions had improved safety.

Level within UK: NHS Trust
Recommended action: Staff safety culture measures, recommended by the NPSA, should be used alongside specific questions about the IRS and learning from incidents, to establish the impact on safety culture, identify areas where some staff members are not engaged, and ensure that results of the surveys are seen to be acted upon.
Suggested outcomes: Improved reporting and learning across the trust.

Level within UK: NHS Trust
Recommended action: Trusts should encourage active reporting by patients and carers of incidents and near misses, to reduce reliance on the complaints system and engage patients as partners in organisational learning.
Suggested outcomes: Fewer complaints, more patient reported incidents, better quality solutions.
Figure 6: A checklist of actions to implement the recommendations from the scoping study of feedback from incident reporting systems.
Note: IRS = Incident Reporting System
References:
Runciman, W.B., Williamson, J.A.H., Deakin, A., Benveniste, K.A., Bannon, K. and Hibbert, P.D. (2006) An integrated framework for safety, quality and risk management: an information and incident management system based on a universal patient safety classification. Quality and Safety in Health Care, 15 (Supplement), 82-90.
Michie, S. and Johnston, M. (2003) Changing clinical behaviour by making guidelines specific. BMJ, 328, 343-345.
Vincent, C. (2001) Caring for patients harmed by treatment. In: Vincent, C. (ed.) Clinical risk management: enhancing patient safety. Second edition. BMJ Books, London.