A literature review about indicators and their uses
Aboriginal Health & Medical Research Council of New South Wales
Acknowledgements
The following AH&MRC staff contributed to this report and the literature review project:
Pip Duncan, Senior Project Officer
Jenny Hunt, Public Health Medical Officer
Lucy McGarry, Senior Project Officer
Tania Waitokia, Manager (CQI)
We sadly acknowledge the passing of Pip Duncan prior to the publication of this review.
This project was funded by NSW Ministry of Health, as part of the AH&MRC CQI Program.
ISBN: 978-0-9805159-1-6
Suggested Citation: Aboriginal Health & Medical Research Council. (2013) A literature review
about indicators and their uses. Aboriginal Health & Medical Research Council. Sydney, Australia.
Contents
Acronyms
1. Executive Summary
2. Introduction
3. Methods
4. The Review
4.1. What are ‘indicators’ and their purposes?
4.1.1. Definitions
4.1.2. Indicator Purposes
4.1.3. Indicator Types
4.1.4. Who uses indicators?
4.2. What characteristics are considered important for good indicators?
4.3. What are the strengths and limitations of using indicator approaches?
4.3.1. Strengths
4.3.2. Limitations
4.4. Is there evidence to support ‘best practice’ approaches for developing and implementing indicators and indicator sets?
5. Conclusions
6. Appendices
6.1. Appendix One
6.2. Appendix Two
7. References
Acronyms
ACHS – Australian Council on Healthcare Standards
ACCHS – Aboriginal Community Controlled Health Service
ACSQHC – Australian Commission on Safety and Quality in Health Care
AH&MRC – Aboriginal Health and Medical Research Council
CEE – Centre for Epidemiology and Evidence
CQI – Continuous Quality Improvement
NHS – National Health Service (United Kingdom)
nKPIs – National Key Performance Indicators
QAIHC – Queensland Aboriginal and Islander Health Council
RACGP – Royal Australian College of General Practitioners
WHO – World Health Organisation
1. Executive Summary
The Aboriginal Health & Medical Research Council (AH&MRC) Continuous Quality Improvement (CQI)
program has undertaken this literature review to learn from the Australian and international literature
about the nature, use, and development of indicators, in order to inform CQI efforts in primary health
care to improve Aboriginal health.
The increasing cost of health care, a greater requirement for evidence-based policy making and
accountability in funding arrangements, a focus on quality improvement within health care systems
and the need to assess the impacts of health care on populations have led to the demand for, and
increasing currency of, indicators as measuring tools (Australian Council on Healthcare Standards,
2013; Josif, 2011; Mattke, 2006).
The reviewed literature highlights that the multidimensional nature of quality requires multiple
measures, and that measurement can lead to improvement. However, no indicator is error-free or
suitable for all purposes, and even carefully selected indicators may produce unintended negative
consequences.
Therefore, developing appropriate, useful and meaningful indicators is a priority for those who want
to accurately judge, monitor and improve the way in which health care is designed and delivered.
Developing and/or selecting effective indicators requires a focus on the metadata (ie indicator
definitions and specifications), as well as:
• A clear articulation of purpose
• Defining the dimensions of quality being evaluated
• Consideration of the evidence base and the opinions of relevant experts, and
• A participatory development process that involves users in meaningful engagement
Several ‘best practice’ frameworks for indicator development and selection have been identified, and
could be adapted for an ACCHS and Aboriginal health context. The ACCHS sector is well placed to
develop indicators for use by ACCHSs, as well as for the health system more broadly to drive quality
improvement efforts in Aboriginal health.
2. Introduction
The overall aim of the Aboriginal Health & Medical Research Council (AH&MRC) Continuous Quality
Improvement (CQI) program is to build the capacity of Aboriginal Community Controlled Health
Services (ACCHSs) to undertake quality improvement. The program is multi-faceted with a broad
range of delivery strategies and is funded by NSW Health. One area of focus is the use of indicators
for quality improvement purposes by ACCHSs. This literature review forms part of the AH&MRC CQI
Program work in this area.
The purpose of undertaking this literature review is to learn from the Australian and international
literature about the nature, use, and development of indicators, in order to inform CQI efforts in
primary health care to improve Aboriginal health.
In particular, the review sought to answer four key questions:
• What are ‘indicators’ and their purposes?
• What characteristics are considered important for effective indicators?
• What are the strengths and limitations of using indicator approaches?
• Is there evidence to support ‘best practice’ approaches for developing and implementing
indicators and indicator sets?
The literature review findings will be used by the AH&MRC to guide their work to support member
ACCHSs in their development and use of indicator sets that are relevant and useful for CQI
purposes. Those involved in developing indicators for other health provider groups delivering
services to Aboriginal people, or working at regional, state or national levels may also find this review
useful.
This report describes the methods used to conduct the literature review and provides an analysis
of the key findings that relate to the chosen research questions as outlined above. The final section
includes a brief discussion and conclusion of the key points highlighted in the literature review and
the implications for consideration by the AH&MRC and the CQI program.
3. Methods
Australian and international health literature relating to indicator sets and indicator frameworks in
primary health care was searched.
Combinations of the following key search terms were used: ‘primary health care, indicators’,
‘data indicators’, ‘clinical indicators’, ‘performance indicators’, ‘quality indicators’, ‘performance
measures’, ‘quality measures’, ‘Aboriginal and Torres Strait Islander primary health care’, ‘Aboriginal
Community Controlled Health Services’.
The literature search focused on the following sources:
Electronic databases: Medline (via PubMed) and Australian Indigenous Health Infonet (http://www.healthinfonet.ecu.edu.au/)
Relevant websites and grey literature: The websites of relevant Australian State and Federal
Government departments, Royal Australian College of General Practitioners (RACGP), Queensland
Aboriginal and Islander Health Council (QAIHC), Aboriginal Medical Services Alliance Northern
Territory and other organisations involved in CQI and indicator work in primary health care in Australia
were searched for relevant unpublished documents and links to other sources.
Bibliographies of published research: Snowballing techniques were used to cross-reference
bibliographies of selected literature to identify any material that was not provided by database and
website searches.
Searches were limited to literature published after 1999, although key seminal earlier studies have
been included in the review. The search was also limited to English language sources.
Initial searches identified a large volume of potentially relevant literature, with more than 1500 papers
being identified. From an initial scan of titles and abstracts, 75 papers were selected as being most
relevant to the research questions.
4. The Review
This section discusses the reviewed literature and is set out using the four primary questions.
4.1. What are ‘indicators’ and their purposes?
Indicators or ‘measures’ to assess quality, safety and performance in health care have existed since
the 1960s, when the concept of measuring quality in health care was first accepted and adopted
(Donabedian, 2005).
Since that time, means for evaluating aspects of health care systems have been developed and
applied in every corner of the globe. The increasing cost of health care, a greater requirement for
evidence-based policy making and accountability in funding arrangements, a focus on quality
improvement within health care systems and the need to assess the impacts of health care on
populations have led to the demand for, and increasing currency of, indicators as measuring tools
(Australian Council on Healthcare Standards, 2013; Josif, 2011; Mattke, 2006).
Quality health data is a key aspect of strengthening health systems (Nutley & Reynolds, 2013;
WHO, 2007) and there are now thousands of indicators designed to measure the structures,
processes and outcomes of care in both generic and disease specific contexts, providing
information to policy makers, funders, administrators, service providers, communities and individual
patients. The reviewed literature asserts that the multidimensional nature of quality requires
multiple measures (Mainz, 2003a) and that measurement can lead to improvement (Kwedza, 2009;
McDonald, 2009).
4.1.1. Definitions
Defining quality in primary health care is a complex and debated topic which is not the primary focus
of this review but is necessary to provide a basis for applying indicator approaches. Included in this
section are definitions of ‘quality’, ‘performance’ and ‘indicators’, as understanding these terms
assists in navigating what indicators are and their purposes.
There is no universally accepted definition of quality (Australian Institute of Health and Welfare, 2009).
It is consistently stated that quality is a broad term that includes many dimensions, and that defining
quality depends on whose perspective is being represented (Campbell, 2000; Goodwin, Dixon,
Poole, & Raleigh, 2011; Rubin, 2001a; Zeitlin, 2003).
The Institute of Medicine defines quality as:
“The degree to which health services for individuals and populations increase the likelihood
of desired health outcomes and are consistent with current professional knowledge”
(Lohr & Schroeder, 1990, p. 21)
The World Health Organisation (2006, pp. 9-10) defines quality by identifying six areas or dimensions
of quality; requiring that health care be:
• “efficient, delivering health care in a manner which maximises resource use and avoids
waste,
• effective, delivering health care that is adherent to an evidence base and results in improved
health outcomes for individuals and communities, based on need;
• accessible, delivering health care that is timely, geographically reasonable, and provided
in a setting where skills and resources are appropriate to medical need,
• acceptable/patient-centred, delivering health care which takes into account the
preferences and aspirations of individual service users and the cultures of their communities,
• equitable, delivering health care which does not vary in quality because of personal
characteristics such as gender, race, ethnicity, geographical location, or socioeconomic
status, and
• safe, delivering health care which minimizes risks and harm to service users”
Campbell et al (2000, p. 1614) use the following more patient-centred perspective:
“whether individuals can access the health structures and process of care which they need
and whether the care received is effective”
Defining performance is less complex. Pencheon (2008, p. 31) sums it up as:
“The degree to which a system delivers, as measured against specific and agreed standards
and criteria”
Throughout the reviewed literature, there are common terms used to define indicators.
Mainz (2003a, p. 524) outlines the definitions of indicators in the following three ways:
• “As measures that assess a particular health care process or outcome,
• As quantitative measures that can be used to monitor and evaluate the quality of important
governance, management, clinical, and support functions that affect patient outcomes, and
• As measurement tools, screens or flags that are used as guides to monitor, evaluate, and
improve the quality of patient care, clinical support services, and organisational function that
affect patient outcomes”
The NHS Institute for Innovation and Improvement provides a guide titled “The Good Indicators
Guide: Understanding how to use and choose indicators”, which defines an indicator (Pencheon,
2008, p. 5) as a:
“succinct measure that aims to describe as much about a system as possible in as few points as
possible. Indicators help us to understand a system, compare it and improve it”
The reviewed literature demonstrated some variation in perspective when specifically considering
definitions of ‘quality indicators’ and ‘performance indicators’. For example, Majeed (2007) claims
that the terms ‘quality’ and ‘performance’ are interchangeable and refer to the same purpose
of measuring how well health services improve health outcomes and meet current best practice
standards. In contrast, Campbell et al (2003, p. 816) distinguish between:
• “quality indicators, which infer a judgement about the quality of care provided, and
• performance indicators which are statistical devices for monitoring performance without any
necessary inference about quality”
4.1.2. Indicator Purposes
Throughout the reviewed literature, authors identify that indicators can be used for different
purposes, depending primarily on where the user is located in the system being assessed. Funders,
administrators, providers and consumers each have different roles and responsibilities within the
health care system, and are likely to use indicators for different reasons (Ibrahim, 2001; Josif, 2011).
The reviewed literature demonstrated broad consensus that, in a primary health care context,
indicators can be used for measurement of the structures and processes involved in care, as well
as assessment of the impacts care has on patient outcomes (Campbell, 2003; Donabedian, 2005;
Kwedza, 2009; Mainz, 2003a).
Mainz (2003a) outlines several purposes for indicators:
• “to measure and compare performance against set targets (benchmarking),
• to support accountability, regulation and accreditation processes,
• to set service or system priorities,
• to support quality improvement initiatives, and
• to support patient choice of providers”
Pencheon (2008, p. 5) identifies three uses for indicators:
• “understanding: to learn more about how a system operates and how it might be improved –
a research role
• performance: measuring if a system is performing to an agreed standard – a performance/
managerial/improvement role, and
• accountability: allowing a system to be accountable to funders and the public and to be
openly scrutinised – an accountability/democratic role”
Throughout the reviewed literature, the described purposes and classifications of indicators are
often multiple and overlapping; it is therefore difficult to categorise them definitively. For example,
in the purposes outlined by Pencheon (2008) above, the performance and quality improvement
applications of indicators are grouped together as part of the same ‘performance’ role. In contrast,
other authors consider these two applications as having separate and distinct purposes (Goodwin,
2011; Ibrahim, 2001; Wollersheim, 2007) which may conflict (Kwedza, 2009).
Wollersheim et al (2007) identify another way of understanding indicator purposes: determining
at what level of comparison indicator data is being used. The authors propose the following three
levels:
1. “Internal comparisons: Comparing performance results over time within a system/
organisation
2. External comparisons: Comparing performance with other organisations/systems
and measuring performance against best practice, and
3. Standard comparisons: Comparing performance with a predetermined standard
or benchmark”
Whether indicators are being used for performance management, quality improvement, or for
external or internal comparisons, the reviewed literature consistently identifies the importance of
determining the intended audience and the overall aim and purpose of the indicators (AIHW, 2009;
Kelley, 2006; Pencheon, 2008).
Indicators are commonly organised and grouped as ‘sets’ or ‘suites’. In some cases, these sets
are designed to provide a comprehensive assessment of care, combining a set of complementary
measures to provide information about the different aspects of care within a system or the
range of services provided by an organisation (Ibrahim, 2001). These are referred to as ‘core’ or
‘interdependent’ indicator sets, and are defined by the overall objectives or target areas chosen for
focus and measurement (Pencheon, 2008).
Indicator sets can also be designed to be used individually or in bundles chosen by providers,
to measure specific, or unrelated aspects of care (Ibrahim, 2001), and can be referred to as
‘independent’ (Australian Commission on Safety and Quality in Health Care, 2011). Kwedza (2009)
supports the use of a selection of multiple indicator sets within one health organisation in order to
cover all aspects of its services and in order to allow staff to choose those indicators most relevant
to their particular role.
Overall, the reviewed literature asserts that indicators do not give a definitive answer but rather
provide information that provokes further questioning and can lead to a better understanding of the
system under assessment (Donabedian, 1997, 2005; Mainz, 2003a; Pencheon, 2008). Indicators
alone cannot improve health outcomes, but, by promoting action, they are a key tool in broad quality
improvement processes (Ferlie, 2001; NSW Health, 2001; RACGP, 2012; Wollersheim, 2007).
4.1.3. Indicator Types
Indicators are classified in different ways and may be categorised in terms of their purpose, for
example as ‘performance’ or ‘quality’ measures (Campbell, 2011; Ibrahim, 2001; Majeed, 2007)
or for quality improvement (Pencheon, 2008). Additionally they may be classified in terms of the
aspect of health or health care which they are designed to measure. Examples include population
health indicators and health status indicators (Centre for Epidemiology and Evidence, 2012; Pope,
2003), clinical indicators (Wollersheim, 2007), and structure, process or outcome indicators (Kringos,
2010). The reviewed literature has particular focus on the different uses of clinical indicators and
Donabedian’s classification of process, structure and outcome indicators. These indicator types are
discussed in more detail in the following sections.
Clinical Indicators
Clinical indicators are commonly applied in primary and other health care settings to measure
particular aspects of clinical care and used to assess whether specified standards are being met
(Mainz, 2003a). They are often rate-based and measure the rate of occurrence of an event (ACHS,
2009). They may also be classified as structure, process or outcome indicators (see below) and may
be specific to a particular disease or applied to generic aspects of care including patient safety and
clinical governance (Department of Health Victoria, 2010). Clinical indicators are not exact standards;
rather they are designed to be ‘flags’ which can alert users to possible problems and/or areas for
improvement in patient care (NSW Health, 2001).
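To make the idea of a rate-based clinical indicator concrete, the following minimal Python sketch computes a hypothetical indicator rate and raises a ‘flag’ when it falls below a nominal target, echoing the flagging role described above. This is an illustration only; the record fields, data and target are invented and do not come from the reviewed literature.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    has_diabetes: bool            # hypothetical field: patient has diabetes
    hba1c_tested_past_year: bool  # hypothetical field: HbA1c test recorded

def indicator_rate(records):
    """Rate-based indicator: proportion of patients with diabetes who had
    an HbA1c test recorded in the past year (numerator / denominator)."""
    denominator = [r for r in records if r.has_diabetes]
    if not denominator:
        return None  # the indicator cannot be calculated for this population
    numerator = [r for r in denominator if r.hba1c_tested_past_year]
    return len(numerator) / len(denominator)

# Invented example data: three patients with diabetes, two of them tested.
records = [
    PatientRecord(True, True),
    PatientRecord(True, True),
    PatientRecord(True, False),
    PatientRecord(False, False),
]
rate = indicator_rate(records)
TARGET = 0.80  # nominal target, for illustration only
if rate is not None and rate < TARGET:
    print(f"Flag: indicator at {rate:.0%} is below the {TARGET:.0%} target")
```

Consistent with the literature above, the flag is not a verdict on care quality; it simply marks an area worth investigating.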
Clinical indicators may be used for many purposes, in particular for performance assessment and
quality improvement, and are often labelled as, and included in, sets of performance, quality and
safety indicators (Josif, 2011; RACGP, 2012; Wollersheim, 2007).
There are different perspectives within the reviewed literature about the particular uses of clinical
indicators. Travaglia (2009, p. 4) claims that clinical indicators “are a form of performance
measurement”, while Wollersheim et al (2007, p. 15) claim that clinical indicators are “more suitable
for internal quality improvement”.
Structure, Process and Outcome Indicators
Donabedian (2005) identifies three categories for indicators – structure, process and outcome – in
his early work on measuring quality in health care, and this categorisation is now extensively used
in healthcare. Some authors focus on using either process or outcome measures (Campbell, 2003;
Evans, 2009; Rubin, 2001a; Rubin, 2001b) and there is much debate about the pros and cons of
each type and about which are the most valid and useful (Donabedian, 1997; Evans, 2009; Heath,
2007; Rubin, 2001b). Below is a brief discussion and examples of how these three categories of
indicators can be applied.
Structure indicators relate to the material and human resources that are in place at the point where
care takes place. This type of indicator includes facilities, supplies, funding, infrastructure and
equipment, the number of staff, their qualifications, and access to training. Structure indicators may
also measure organisational elements of a health care setting such as availability of guidelines and
peer assessment (Donabedian, 1997).
Mainz (2003a) provides examples of structure indicators as: access to specific technologies (e.g.
MRI scan); proportion of specialists to other doctors; clinical guidelines revised every second year.
Process indicators relate to what is done in giving and receiving care. They are used increasingly
to assess and improve quality of care (Campbell, 2003; Dept Health Victoria, 2010; Rubin,
2001b) and include both a patient’s actions involved in accessing and receiving care and carrying
out recommendations, as well as the processes employed by a practitioner in diagnosing and
implementing treatment (Donabedian, 1997). Mainz (2003a) provides examples of process
indicators as: proportion of patients with diabetes given regular foot care; proportion of patients
with myocardial infarction who received thrombolysis; proportion of patients treated according to
clinical guidelines.
Outcome indicators measure the effects of care on the health status of patients and populations.
These include recovery, restoration, survival, disease, discomfort, disability,
(dis)satisfaction and death (Donabedian, 2005; Mainz, 2003b). Mainz (2003a) provides examples of
‘intermediate’ outcome indicators as: HbA1c results for diabetics; lipid profile results for patients with
hyperlipidaemia; blood pressure results for hypertensive patients. Examples of ‘end result’ outcomes
indicators also given by Mainz are: mortality, morbidity, health status measurement, and patient
satisfaction.
Donabedian (1997) considers outcome indicators to be valid, stable and often ‘concrete’ measures
of the impacts of health care, but also recognises many limitations associated with their use. In
particular, multiple factors influence health outcomes, many of which are outside the control of
health care providers. To isolate factors that relate to clinical care would require controlling for the
main confounding factors that impact on the outcome, which is very challenging in practical terms.
Furthermore, although producing concrete results, outcome indicators do not reveal the particular
part of the care process which led to the outcome.
Gardner (2012) points out that, by measuring different aspects of care, structure, process and
outcome indicators are intrinsically linked: good structures increase the likelihood of good
processes, which in turn lead to more positive outcomes.
4.1.4. Who uses indicators?
This section outlines some of the ways indicators are used by different stakeholders at different
levels within and across health care systems. These different levels are broadly identified as: systems
level, organisational level and community and client/patient level.
Systems level
The World Health Organisation (WHO) uses indicators to measure, observe and compare health
statistics and health outcomes across and within countries. The Millennium Development Goals
(MDG) include a set of targets with indicators which are interdependent measures of progress (WHO,
2010). The indicators are not meant to limit priorities in health, nor to determine how programmes
should be organised and funded.
Governments, meanwhile, use indicators to understand population health issues, to set priorities
within the health system and to guide policy (AIHW, 2009; Josif, 2011). They are used as
accountability measures (Ibrahim, 2001), and to measure the performance of different levels of the
health system in improving health outcomes in the population (Campbell, 2000; Goodwin, 2011;
Mcloughlin, Leatherman, Fletcher, & Wyn Owen, 2001). The National Health Service (NHS) Quality
Outcomes Framework indicators are used at a national level across the United Kingdom to monitor
the performance and quality of care provided by general practices (NHS Employers, 2013).
Researchers and research organisations can use indicators to analyse and understand the major
health issues and burdens affecting populations, and to measure and assess the impacts and
effectiveness of interventions (Goodwin, 2011; Josif, 2011).
Organisational level
Across regions, organisations can use indicators to share information and solutions, plan
interventions, set guidelines and compare performance with other jurisdictions (Josif, 2011). The
Royal Australian College of General Practitioners (RACGP) Quality Indicators for General Practice are
an example of indicators being recommended for use to improve quality and safety within general
practices nationwide (RACGP, 2013).
Accreditation organisations use indicators to measure compliance by health care providers and
services with recognised best-practice standards in primary health care (Ibrahim, 2001). The
Australian Council on Healthcare Standards Clinical Indicators are an example of indicators used at
this level (ACHS, 2013).
At a health service level, indicators are used to measure the quality of care provided (Majeed, 2007)
and to engage in quality improvement initiatives (Goodwin, 2011; Ibrahim, 2001; Marley, 2012;
Pencheon, 2008). Indicators can support program planning and allow services to better understand
the needs of their client populations, and the ability of their service to meet those needs. Indicators
are also used to meet reporting and accountability requirements to governments and funders, and
to communities (Goodwin, 2011; Pencheon, 2008). The Queensland Aboriginal and Islander Health Council (QAIHC)
describes using their Core Performance Indicators at a service level for both internal and external
comparison purposes (QAIHC, 2010).
Community and clients/patients level
Patient-centredness is a core characteristic of primary health care and considered fundamental in
assessments and definitions of quality of care (Gardner, 2012; Goodwin, 2011; Pettersen, 2004;
Rubin, 2001a; WHO, 2006). Despite this focus, a review of indicator lists from primary health care
settings around the globe reveals the difficulties experienced in developing effective and meaningful
ways of measuring patient experience, and including patient feedback in analyses of the quality of
care (Kötter, 2013; NHS Commissioning Board, 2013; Pettersen, 2004; Sitzia, 1999).
There is an increasing call for the public to have access to indicator data measuring the performance
of health services and systems (Majeed, 2007). Communities and patients want more responsive
services and to play a more active role in decision making and monitoring and evaluating the quality
of the care they receive (Goodwin, 2011). There are also increasing efforts to design indicators that
measure patient experience, such as the Practice-Level Indicators being developed by the Australian
Commission on Safety and Quality in Health Care (ACSQHC, 2012).
Overall, the reviewed literature points out that access to indicator data increases the accountability
of health systems and services to the public, providing greater transparency and allowing patients
to make informed choices about which health care providers may meet their particular needs and to
judge how well the system is performing (Goodwin, 2011; Majeed, 2007; Pencheon, 2008).
4.2. What characteristics are considered important for good
indicators?
In the Good Indicators Guide, Pencheon (2008) proposes that no indicator in the history of
measurement and improvement has been perfect for all purposes. However, the guide identifies a
range of desirable characteristics, proposing six key criteria to assist in determining a ‘good’
indicator: importance, relevance, validity, reliability, meaningfulness, and an assessment of the
implications of what might be done with the indicator results.
Additionally, the Good Indicators Guide highlights that making good decisions requires the most
appropriate indicator populated with the best available data. Pencheon (2008, p. 9) makes a
distinction between indicators, or metadata, and data, stating that:
“In operational terms the indicator is known as a metadata, referring to the title, the rationale, and
the information about how it is actually constructed. This is different from the information that is
fed into the indicator which is called the data.”
The metadata, or indicator definitions and specifications, is used to assess if a particular indicator
is important and relevant, is able to be populated with reliable data, and is likely to have a desired
effect when communicated well. The Good Indicators Guide provides a list of ten key questions to
use when assessing the metadata, and recommends that each of these questions be considered
and assessed systematically, and that any compromises be judged acceptable and made explicit. See
Appendix One for the list of ten key questions and examples of how they might be applied.
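As a hedged illustration of this metadata/data distinction, the following minimal Python sketch holds an indicator’s title, rationale and construction rules separately from the data fed into it for a reporting period. The field names and values are invented and loosely modelled on the diabetes example in Appendix One; they are not taken from the guide itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IndicatorMetadata:
    """The indicator itself: title, rationale and construction rules."""
    title: str
    rationale: str
    numerator_definition: str
    denominator_definition: str

@dataclass
class IndicatorData:
    """The data fed into the indicator for one reporting period."""
    numerator_count: int
    denominator_count: int

    def value(self) -> float:
        return self.numerator_count / self.denominator_count

# Hypothetical example for illustration only.
spec = IndicatorMetadata(
    title="Recorded diabetes prevalence",
    rationale="A serious disease with serious consequences",
    numerator_definition="Patients with diabetes recorded in general practice",
    denominator_definition="All resident persons of Aboriginal background",
)
period_data = IndicatorData(numerator_count=42, denominator_count=1000)
print(f"{spec.title}: {period_data.value():.1%}")
```

Keeping the specification separate from the data makes it possible to assess the metadata against questions such as those in Appendix One before any data are collected.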
Mainz (2003a, p. 524) provides an alternative guide to assessing indicators, outlining that an ideal
indicator has the following key characteristics (point 2 is illustrated in the sketch after the list):
1. “The indicator is based on agreed definitions, and described exhaustively and exclusively;
2. The indicator is highly or optimally specific and sensitive, i.e. it detects few false positives
and false negatives;
3. The indicator is valid and reliable;
4. The indicator discriminates well;
5. The indicator relates to clearly identifiable events for the user (e.g. if meant for clinical
providers, it is relevant to clinical practice);
6. The indicator permits useful comparisons; and
7. The indicator is evidence-based. Each indicator must be defined in detail; with explicit data
specifications in order to be specific and sensitive.”
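Criterion 2 above can be made concrete with a small amount of arithmetic. The following Python sketch, with invented data (no such calculation appears in Mainz), compares an indicator’s flags against a reference judgement, such as a chart review, and computes the indicator’s sensitivity and specificity:

```python
def sensitivity_specificity(flags, reference):
    """Sensitivity and specificity of an indicator's flags against a
    reference ('true') judgement; True means 'problem present'."""
    pairs = list(zip(flags, reference))
    tp = sum(f and r for f, r in pairs)              # true positives
    fn = sum((not f) and r for f, r in pairs)        # false negatives
    tn = sum((not f) and (not r) for f, r in pairs)  # true negatives
    fp = sum(f and (not r) for f, r in pairs)        # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Invented data: the indicator flagged 4 of 10 cases; a reference review
# found 3 genuine problems.
flags     = [True, True, True, True, False, False, False, False, False, False]
reference = [True, True, False, False, False, False, False, False, False, True]
sens, spec = sensitivity_specificity(flags, reference)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

A highly sensitive and specific indicator produces few false negatives and few false positives, which is exactly what the criterion asks for.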
Similarly, Campbell et al (2011) write that all measures should be tested for acceptability, feasibility,
reliability, sensitivity to change, and validity. See Appendix Two for definitions and examples of each
of these characteristics.
Couzos and Murray (2008), Hancock (2006) and Peiris (2012) provide perspectives on
indicators from the Aboriginal Community Controlled Health Service (ACCHS) sector, noting that
particular cultural factors influence what is considered best practice in primary health care for
Aboriginal people, including many critical intangibles that are important in assessing quality. These
include Aboriginal community ownership, pride, advocacy, cultural affirmation, and ACCHSs as
Aboriginal ‘public space’. For quality measurement to be relevant and applicable to the ACCHS
sector, indicators in these dimensions are required (Couzos & Murray, 2008).
Additionally, the Australian Council on Healthcare Standards (2013) outlines that for indicators to be
used effectively for quality improvement, they need to be meaningful and relevant to the staff involved
in the daily data collection and reporting procedures.
In specific health care contexts, indicator development processes and selection decisions may give
priority to some attributes over others, as the following three examples demonstrate.
1. Kringos (2010) describes the development of the indicators of a Primary Care Monitoring
System (PC Monitor) for application in 31 European countries. In this context, the authors
reviewed indicators against the following criteria:
• Relevance
• Precision
• Flexibility
• Discriminating power, and
• Suitability
2. The RACGP (2012) developed a “dashboard” set of 22 clinical indicators for Australian
general practice. These were not designed as performance measures but rather to be used internally
and on a voluntary basis by individual general practices to monitor and improve the quality
and safety of their services. The RACGP chose their initial set of indicators on the basis of:
• Health burdens within Australia
• Health priorities and impact of indicators
• Existing evidence based indicators
• Scope of general practice – recognising the diversity of general practice
• Minimising administrative burdens
• Concordance with current priorities and health burden within Australian primary care
• Equity, evidence, reliability, ease of collection and acceptability
• Taskforce expertise
3. The Queensland Aboriginal and Islander Health Council (2010) Aboriginal and Islander
Community Controlled Clinical Excellence (ACE) program identified the following key
indicator characteristics to guide their work:
• Attributable: critical for both accountability and showing the effectiveness of the services
• Sensitivity: for quality improvement, it is essential that indicators respond fairly rapidly to
changes in policy and practice
• Comparability: by being able to compare with other services the effectiveness of the
community controlled services can be demonstrated
• Feasibility: indicators that can be reported either with existing systems or with reasonable
modifications to existing systems
In summary, while the reviewed literature demonstrated several different approaches to indicator
development and evaluation, common criteria or characteristics of good indicators were: relevance,
sensitivity to change, validity, reliability, specificity, acceptability, and being clearly defined and
understandable (Campbell, 2003; Kringos, 2010; Mainz, 2003b; Pencheon, 2008).
The reviewed literature was also consistent about ‘good’ indicators being relevant to the context, fit
for purpose, and tested systematically against an agreed set of criteria, as well as being formulated
from the perspective of the people involved in their use.
4.3. What are the strengths and limitations of using indicator
approaches?
Throughout the reviewed literature there is considerable discussion about the strengths and
limitations of indicator approaches, both in broad application to primary health care and in the more
specific context of Aboriginal Community Controlled Health Services. A synthesis of these discussions is
provided below.
4.3.1. Strengths
The strengths of indicator approaches are evident throughout most of the reviewed literature which
promotes the use of indicators; Mainz (2003a) claims that measuring and monitoring the quality of
health care is impossible without indicators.
A particular strength in the use of indicators is that the measuring and monitoring of aspects of the
health system and its services, by patients, staff and funders, may result in better outcomes for all
concerned (Kwedza, 2009; Pencheon, 2008; Wollersheim, 2007). Using indicators may contribute to
increasing the level of public engagement in care practices, and increasing professional pride when
performance results improve and targets are met (Goodwin, 2011). Indicators can measure the
effectiveness of quality improvement initiatives and allow poor performance/quality to be identified
and accountability to be improved (Goodwin, 2011; Pencheon, 2008; RACGP, 2009).
Indicators can be designed to be specific to a particular health context, and can be flexibly applied
in different national settings despite cultural differences and health system variations (Engels, 2005;
Kringos, 2010; Mattke, 2006).
Campbell et al (2000) and Goodwin et al (2011) propose that using indicator approaches allows for
a faster and more cost-efficient way of measuring quality and performance than other methods such
as peer-review.
4.3.2. Limitations
The reviewed literature includes considerable discussion of the limitations of using indicators; the key
points are discussed below.
Indicators provide measurements which, on their own, do not create improvement. Indicators are
only effective if their use provokes questions, leads to further understanding of the problem and
promotes action (Mainz, 2003a; Pencheon, 2008). Therefore indicator approaches are limited by the
extent to which the data’s relevance and potential is understood by staff and subsequently acted
upon (Pencheon, 2008).
Donabedian (1997, p. 1145) warned against expecting measurements of quality in health care that
are “easy, precise and complete”. He, and other authors (Goodwin, 2011; Wright, 2012), discuss
the importance of ‘hard to measure’ more qualitative or subjective elements of care, including the
impact that the inter-personal relationship between patient and caregiver has on the outcomes of
care.
The reviewed literature outlines that the quantitative nature of most indicator data can inhibit
accurate reflection of more complex aspects of care like patient-centredness, cultural
appropriateness and equity, and can be harder to apply in areas of health care such as nutrition and
social and emotional wellbeing (Anderson, 2006; Kringos, 2010; NHS Commissioning Board, 2013;
QAIHC, 2010).
Some authors have noted that other forms of assessment, such as peer review and interviews with
patients providing qualitative data, may be needed to supplement indicator approaches (Campbell,
2003; Donabedian, 1997; Goodwin, 2011).
Concerns have been expressed that Aboriginal cultural perspectives on important elements of
primary health care are not reflected in current indicator sets and data (Anderson, 2006; Couzos
& Murray, 2008; Peiris, 2012). In particular for the ACCHS sector, there has been a call for using
qualitative data to provide a clearer understanding of how ACCHSs operate and to shed light on
crucial aspects of quality such as appropriateness, accessibility and continuity of primary health care
for Aboriginal people (Steenkamp, 2010; Hancock, 2006).
Additionally, Kwedza (2009) and Zaslavsky (2001) point out that indicators that are reliable and valid
can still produce data that is misinterpreted or misrepresented because of the method of analysis or
a lack of understanding of the limitations of the data and data analysis methods. Pencheon (2008)
and Ong et al (2004) support this notion, stating that in order for indicator data to be accurately read
and analysed, it is important to have a clear understanding of the particular health context within
which the measurement was taken.
Couzos and Murray (2008) observe that confusion can arise when an indicator is purported to
measure the ‘performance’ of a care provider, yet the indicator data reflects aspects of health status
related more to population health and the social determinants of health than to the performance of a
particular service.
Additionally, there is concern about the potential for unintended negative impacts of indicator
approaches on several aspects of primary health care processes and outcomes. These
consequences are particularly relevant to pay-for-performance models where financial incentives
are provided to meet performance targets, and there is considerable discussion about the pros and
cons of these models of care in the reviewed literature (Heath, 2007; Lester, 2006; Wright, 2012).
In relation to performance indicators, it is argued that the drive to meet performance targets, or
‘measure fixation’, may undermine the value and investment in the communicative and continuous
patient-caregiver relationship (Elliot-Smith, 2010; Hannon, 2012; Heath, 2007; Lester, 2011).
There is also the risk that focusing on indicators, or “tunnel vision”, may divert attention and
resources away from other aspects of care (Elliot-Smith, 2010; Hannon, 2012; Lester, 2011; RACGP,
2009). Campbell et al (2000) suggest that the use of any indicator set needs to be tempered with
an awareness of what care is not being measured.
The risk of “gaming” has been documented, particularly in pay-for-performance models of primary
care where providers or practitioners may manipulate indicator results to improve performance, or
may change behaviour only sufficiently to achieve better performance results rather than to address
underlying issues that the indicator is designed to measure (Mcloughlin, 2001; Pencheon, 2008;
Wollersheim, 2007; Wright, 2012).
Moreover, the reviewed literature indicates that there may be resistance to the use of indicators by
health care providers based on a fear of being judged or blamed for bad performance (Pencheon,
2008; RACGP, 2009), and due to suspicion that funding will be affected by performance
assessments (Mcloughlin, 2001).
Werner and Asch (2005) assert that some of these unintended negative consequences of indicator
approaches are less likely to occur if care providers are assessed against indicators which they
themselves have developed, as opposed to those which have been externally developed and
imposed as performance measures.
4.4. Is there evidence to support ‘best practice’ approaches for
developing and implementing indicators and indicator sets?
The reviewed literature identifies a range of ‘best practice’ approaches to developing and
implementing indicators for use in primary health care. This section highlights and discusses
these elements.
Overall, the literature asserts that the backbone of a ‘best practice’ indicator development and
selection process is formed by the systematic combination of a robust scientific evidence base
(systematic reviews, clinical guidelines or similar) that identifies ‘best practice’ in patient care, and
the consensual approval by a panel of experts (Kötter, 2012; Kringos, 2010; Mainz, 2003b;
Wollersheim, 2007).
Campbell et al (2000) and Kelley et al (2006) recommend that a conceptual framework be
established to guide the development, implementation and ongoing use of indicators. Key aspects
of this early stage are to identify and clearly define what constitutes ‘quality’ in the particular service/
organisation/context under assessment (Donabedian, 1997; Goodwin, 2011), which stakeholder
views are being represented (Campbell, 2003; Evans, 2009), and the aims, overarching priorities
and principles that will guide the use of the indicators (Pencheon, 2008).
Mainz (2003b) outlines the following six-step process for indicator development in the clinical
context:
1. Choose the clinical area to evaluate
2. Organise the measurement team
3. Provide an overview of existing evidence and practice
4. Select clinical indicators and standards
5. Design indicator specifications
6. Perform pilot testing
Wollersheim et al (2007) and Mainz (2003b) assert the importance of establishing the relevance of
the indicators by choosing areas of care which have a high incidence rate or are of high priority,
which need improvement, and where there are opportunities for intervention.
Campbell et al (2011) and Pencheon (2008) claim that the key to successful indicator development
is in the metadata which includes the exploration of the precise meaning and specifications of the
indicator. This approach to defining and specifying indicators helps to establish the validity of the
indicator and its ability to measure what it is designed to measure.
Additionally, a participatory approach is recommended to allow the perspectives of diverse
stakeholders to inform the development of indicators and to encourage the involvement of health
services and the community (Si, 2007; Vargo, 2012). The reviewed literature consistently asserts that
broad consultation in the process of selecting, developing, implementing and reviewing indicators
will result in better quality indicators, greater engagement with the data they generate as well as with
associated activities for improving quality within local health services (Anderson, 2006; Braa, 2012;
Campbell, 2003; First Nations Centre, 2011; Gilson, 2007; Hancock, 2006; Kwedza, 2009; Werner
& Asch, 2005; WHO, 2006).
As developing new indicators is expensive and resource intensive, recommendations are made that,
wherever possible, existing indicators should be reviewed, adapted, and tested for application in
a new setting (Campbell, 2003; Wollersheim, 2007).
The particular importance of pilot testing indicators (whether newly developed or selected from existing
sets) is emphasised in several papers (Campbell, 2011; Evans, 2009; Lester, 2011; Wollersheim,
2007). Pilot testing can determine the measurability and acceptability of an indicator and identify
unintended consequences and other issues with implementation which may lead to further
clarification and refinement of the indicator definitions prior to introduction. Several authors (AIHW,
2009; Lester, 2007; Rubin, 2001a) emphasised the need for ongoing review and modification
of indicators due to changes in the health needs of populations and advances in the science of
medicine.
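One feasibility check a pilot test might include is data completeness, since an indicator that most records cannot populate is unlikely to be measurable in practice. The following Python sketch is a hypothetical illustration only; the field names and records are invented and not drawn from any of the cited papers.

```python
def completeness(records, required_fields):
    """Proportion of records in which every field the indicator needs is
    populated; a low value in a pilot suggests the indicator is not yet
    feasible with existing data systems."""
    usable = sum(
        all(record.get(field) is not None for field in required_fields)
        for record in records
    )
    return usable / len(records)

# Invented pilot extract: the indicator needs both fields populated.
pilot_records = [
    {"diabetes_status": True, "hba1c_date": "2013-02-01"},
    {"diabetes_status": True, "hba1c_date": None},
    {"diabetes_status": None, "hba1c_date": None},
]
rate = completeness(pilot_records, ["diabetes_status", "hba1c_date"])
print(f"{rate:.0%} of pilot records can populate this indicator")
```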
5. Conclusions
This literature review has been undertaken as part of the AH&MRC CQI Program, to learn from the
Australian and international literature about the nature, use, and development of indicators, to inform
CQI efforts in primary health care to improve Aboriginal health. The following points provide an
overview of the key findings derived from the literature review, and consider the relevance of these
findings for future AH&MRC CQI program work.
Overall, the reviewed literature highlighted the importance of indicators in the contemporary health
system context, and their potential usefulness for improving the quality of health service care
and outcomes. Indicators are measuring tools that help users to understand a system, to compare
it and to improve it; they do not give definite answers but provide information that can provoke further
questioning and action, and can lead to a better understanding of the system being assessed.
The reviewed literature highlighted that indicator effectiveness depends on drawing on the
evidence base and expert opinion, using a participatory development approach, and engaging
local staff and communities in the process of developing indicators. AH&MRC and Member services
are well placed to provide the evidence base, expert opinion, participation and engagement required
for the development, implementation and monitoring of effective quality improvement indicators
for ACCHS, as well as for the broader health system to support efforts to improve the health and
wellbeing of Aboriginal people.
Several conceptual frameworks that can be used to guide the development, implementation and
ongoing use of indicators were identified through this literature review. The Good Indicators Guide
(Pencheon, 2008) provides details of a development process that could be adapted for application
within the ACCHS context, and possibly for use to develop Aboriginal health indicators for the
broader health system, to support quality improvement efforts in Aboriginal health.
6. Appendices
6.1. Appendix One
The following table is a direct extract from Pencheon (2008, p. 10) which provides a list of ten key
questions to use when assessing the metadata relevant to identifying a good indicator.
Table 1: Critical questions to ask around data strategy (answers are examples only)
1. What is being measured? Example: levels of diabetes in the Aboriginal population.
2. Why is it being measured? Example: it is a serious disease with serious consequences.
3. How is this indicator actually defined? Example: from recorded levels in general practice.
4. Who does it measure? Example: all persons of Aboriginal background; all ages.
5. When does it measure it? Example: which day/month/year?
6. Will it measure absolute numbers or proportions? Example: proportions; numbers of cases per thousand resident population.
7. Where does the data actually come from? Example: through records of general practice? PIMS? Other sources, depending on the system’s reporting capabilities?
8. How accurate and complete will the data be? Example: depends on the system’s reporting capabilities.
9. Are there any warnings or problems? Example: potential for errors in collection, collation and interpretation (such as an under-sampling of Aboriginal populations, young people).
10. Are particular tests needed, such as standardisation, significance tests, or statistical process control, to test the meaning of the data and the variation they show? Example: when comparing small numbers, in small populations, or to distinguish inherent (common cause) variation from special cause variation.
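Question 10 mentions statistical process control as one way to distinguish common cause from special cause variation. As a hedged sketch only (the 2.66 factor is the standard constant for individuals (XmR) control charts; the monthly values are invented and not from the guide), control limits for a run of indicator values might be computed as follows:

```python
def xmr_limits(values):
    """Control limits for an individuals (XmR) chart: the mean plus or minus
    2.66 times the average moving range between consecutive points."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Invented monthly indicator values (cases per thousand population).
monthly = [21, 23, 22, 24, 22, 21, 23, 35, 22, 23]
low, high = xmr_limits(monthly)
for month, value in enumerate(monthly, start=1):
    if not (low <= value <= high):
        print(f"Month {month}: {value} lies outside ({low:.1f}, {high:.1f})"
              " - possible special cause variation")
```

Points inside the limits reflect inherent (common cause) variation and do not, on their own, warrant action.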
6.2. Appendix Two
The following section is extracted from Campbell et al (2003, pp. 818-819) detailing the criteria for
the development and testing of indicators.
Acceptability: The acceptability of the data collected depends on whether the findings are
acceptable to both those being assessed and their assessors. For example, doctors and nurses can
be asked about the acceptability of review criteria being used to assess their quality of care.
Feasibility: Information about quality of care is often driven by availability of data. Quality is difficult
to measure without accurate and consistent information, which is often unavailable at both the
macro (health organisations) and micro (individual medical records) level. Quality indicators must
also relate to enough patients to make comparing data feasible—for example, by excluding those
aspects of care that occur in less than 1% of clinical audit samples.
Reliability: Reliability refers to the extent to which a measurement with an indicator is reproducible.
This depends on several factors relating to both the indicator itself and how it is used. For example,
indicators should be used to compare organisations or practitioners with similar organisations or
practitioners. The inter-rater reliability refers to the extent to which two independent raters agree on
their measurement of an item of care.
Sensitivity to change: Quality measures need to detect changes in quality of care in order to
discriminate between and within subjects. This is an important and often forgotten dimension of a
quality indicator. Little research is available on sensitivity to change of quality indicators using time
series or longitudinal analyses.
Validity: Content validity in this context refers to whether any criteria were rated valid by panels
contrary to known results from randomised controlled trials. The validity of indicators has received
more attention recently. Although little evidence exists of the content validity of the Delphi and
nominal group techniques in developing quality indicators, there is some evidence of validity for
indicators developed with the RAND method. There is also evidence of the predictive validity of
indicators developed with the RAND method.
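Inter-rater reliability, as described in the extract above, is often summarised with Cohen’s kappa, which corrects raw agreement for chance. The following minimal Python sketch uses invented ratings for illustration; it is not part of the Campbell et al extract.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary judgements of the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement estimated from each rater's marginal rate of 'yes'.
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Invented example: two reviewers rate eight records against one criterion.
rater_a = [1, 1, 1, 0, 0, 1, 0, 0]
rater_b = [1, 1, 0, 0, 0, 1, 0, 1]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 1.0 = perfect agreement
```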
7. References
Anderson, M., Smylie, J., Anderson, I., Sinclair, R., & Crengle, S. (2006). First Nations, Inuit and
Métis Health Indicators in Canada. A background paper for the project ‘Action-oriented
indicators of health and health systems development for indigenous peoples in Australia, Canada
and New Zealand’. (Vol. Discussion Paper 18). Melbourne: Onemda VicHealth Koori Health Unit,
University of Melbourne.
Australian Commission on Safety and Quality in Health Care. (2011). Practice-level indicators of
safety and quality for primary health care: Consultation Paper.
Australian Commission on Safety and Quality in Health Care. (2012). Practice-level indicators of
safety and quality for primary health care specification. Sydney: ACSQHC.
Australian Council on Healthcare Standards. (2009). Clinical indicators. Sydney: Australian Council of
Health Care Standards.
Australian Council on Healthcare Standards. (2013). Clinical Indicator Program Information:
Australian Council of Healthcare Standards.
Australian Institute of Health and Welfare. (2009). Towards national indicators of safety and quality in
health care. Canberra: AIHW.
Braa, J., Heywood, A., & Sahay, S. (2012). Improving quality and use of data through data-use
workshops: Zanzibar, United Republic of Tanzania. Bull World Health Organ, 90, 379-384.
Campbell, S., Braspenning, J., Hutchinson, A., & Marshall, M. (2003). Improving the quality of health
care: Research methods used in developing and applying quality indicators in primary care. BMJ,
326.
Campbell, S. M., Kontopantelis, E., Hannon, K., Burke, M., Barber, A., & Lester, H. E. (2011).
Framework and indicator testing protocol for developing and piloting quality indicators for the UK
quality and outcome framework. BMC Family Practice, 12(85).
Campbell, S. M., Roland, M. O., & Buetow, S. A. (2000). Defining Quality of Care. Social Science &
Medicine, 51(11), 1622-1625. doi: 10.1016/S0277-9536(00)000
Centre for Epidemiology and Evidence. (2012). The health of Aboriginal people of NSW: Report of
the Chief Health Officer. Sydney: NSW Ministry of Health.
Couzos, S., & Murray, R. (2008). Aboriginal Primary Health Care. Melbourne: Oxford University
Press.
Department of Health Victoria. (2010). Understanding Clinical Practice Toolkit. Melbourne: Victorian
State Government.
Donabedian, A. (1997). The quality of care. How can it be assessed? 1988. [Biography Classical
Article Historical Article]. Arch Pathol Lab Med, 121(11), 1145-1150.
Donabedian, A. (2005). Evaluating the quality of medical care. 1966. [Biography Classical Article
Historical Article]. Milbank Q, 83(4), 691-729. doi: 10.1111/j.1468-0009.2005.00397.x
Elliot-Smith, A., & Morgan, M. A. (2010). How do we compare? Applying UK pay for performance
indicators to an Australian general practice. AFP, 39(1/2).
Engels, Y., Campbell, S., Dautzenbergm, M., van den Hombergh, P., Brinkmann, H., Szécsényi,
J., Grol, R. (2005). Developing a framework of, and quality indicators for, general practice
management in Europe. Fam Pract, 22(2), 215-222.
Evans, S. M., Lowinger, J. S., Sprivulis, P. C., Copnell, B., & Cameron, P. A. (2009). Prioritizing
quality indicator development across the healthcare system: identifying what to measure.
[Review]. Intern Med J, 39(10), 648-654. doi: 10.1111/j.1445-5994.2008.01733.x
Ferlie, E., & Shortell, S. M. (2001). Improving the quality of healthcare in the United Kingdom and the
United States: A framework for change. Milbank Q, 79, 281-315.
First Nations Centre. (2011). Ownership, Control, Access and Possession (OCAP). Sanctioned
by the First Nations Information Governance Committee, Assembly of First Nations. Ottawa:
National Aboriginal Health Organisation.
Gardner, K., & Mazza, D. (2012). Quality in general practice: Definitions and frameworks. Australian
Family Physician, 41(3), 151-154.
Gilson, L., Doherty, J., Loewenson, R., Francis, V., et al. (2007). Final Report: Knowledge Network
on Health Systems, June 2007, WHO Commission on the Social Determinants of Health
(Knowledge Network on Health Systems, Trans.): World Health Organisation.
Goodwin, N., Dixon, A., Poole, T., & Raleigh, V. (2011). Improving the quality of care in general
practice. London: The King’s Fund.
Hancock, H. (2006). Aboriginal women’s perinatal needs, experiences and maternity services:
A literature review to enable considerations to be made about quality indicators: Ngaanyatjarra
Health Service.
Hannon, K., Lester, H. E., & Campbell, S. M. (2012). Patients’ views of pay for performance in primary
care: a qualitative study. [Research Support, Non-U.S. Gov’t]. Br J Gen Pract, 62(598), e322-e328. doi: 10.3399/bjgp12X641438
Heath, I., Hippisley-Cox, J., & Smeeth, L. (2007). Measuring performance and missing the point?
BMJ, 335.
Ibrahim, J. (2001). Performance indicators from all perspectives. International Journal for Quality in
Health Care, 13(No. 6), 431-432.
Josif, D. (2011). Universal Core Service Framework, Performance Indicators and Workforce
Implications (S. P. a. A. P. Division, Trans.). Darwin: Department of Health, Northern Territory.
Kelley, E. T., Arispe, I., & Holmes, J. (2006). Beyond the initial indicators: lessons from the
OECD Health Care Quality Indicators Project and the US National Healthcare Quality Report.
[Comparative Study]. Int J Qual Health Care, 18 Suppl 1, 45-51. doi: 10.1093/intqhc/mzl027
Kötter, T., Blozik, E., & Scherer, M. (2012). Methods for the guideline-based development of quality
indicators - a systematic review. Implementation Science, 7(21).
Kötter, T., Schaefer, F. A., Scherer, M., & Blozik, E. (2013). Involving patients in quality indicator
development - a systematic review. Patient Preference and Adherence, 7, 259-268.
Kringos, D., Boerma, W. G. W., Bourgueil, Y., Cartier, T., Hasvold, T., Hutchinson, A., Lember, M.,
Oleszczyk, M., Pavlic, D. R., Svab, I., Tedeschi, P., Wilson, A., Windak, A., Dedeu, T., & Wilm, S. (2010).
The European primary care monitor: structure, process and outcome indicators. BMC Family Practice, 11(81).
Kwedza, R. (2009). Creating a framework for key performance indicator use in primary health care.
Paper presented at the 10th National Rural Health Conference, Cairns. http://www.ruralhealth.
org.au/10thNRHC/10thnrhc.ruralhealth.org.au/program/index731e.html?IntCatId=4
Lester, H., & Roland, M. (2007). Measuring quality through performance: Future of quality measurement. BMJ, 335, 1130-1131.
Lester, H., Sharp, D. J., Hobbs, F. D. R., & Lakhani, M. (2006). The Quality and Outcomes
Framework of the GMS contract: a quiet evolution for 2006. Br J Gen Pract, 56(525), 244-246.
Lester, H. E., Hannon, K. L., & Campbell, S. M. (2011). Identifying unintended consequences of quality indicators: a qualitative study. BMJ Qual Saf, 20, 1057-1061. doi: 10.1136/bmjqs.2010.048371
Lohr, K. N., & Schroeder, S. A. (Eds.). (1990). Medicare: A Strategy for Quality Assurance (Vols. 1 & 2). Washington DC: National Academy Press.
Mainz, J. (2003a). Defining and classifying clinical indicators for quality improvement. International
Journal for Quality in Health Care, 15(6), 523-530.
Mainz, J. (2003b). Developing evidence-based clinical indicators: a state of the art methods primer. International Journal for Quality in Health Care, 15(Suppl 1), 5-11.
Majeed, A., Lester, H., & Bindman, A. B. (2007). Improving the quality of care with performance indicators. BMJ, 335(7626), 916-918.
Marley, J. V., Nelson, C., O’Donnell, K., & Atkinson, D. (2012). Quality indicators of diabetes care: an
example of remote area Aboriginal primary health care over 10 years. MJA, 197(7), 404-408.
Mattke, S., Epstein, A. M., & Leatherman, S. (2006). The OECD Health Care Quality Indicators Project: history and background. International Journal for Quality in Health Care, 18(Suppl 1).
McDonald, K. M. (2009). Approach to improving quality: the role of quality measurement and a case study of the Agency for Healthcare Research and Quality pediatric quality indicators. Pediatr Clin North Am, 56(4), 815-829.
McLoughlin, V., Leatherman, S., Fletcher, M., & Wyn Owen, J. (2001). Improving performance using indicators. Recent experiences in the United States, the United Kingdom, and Australia. Int J Qual Health Care, 13(6), 455-462. doi: 10.1093/intqhc/13.6.455
NHS Commissioning Board. (2013). Quality and Outcomes Framework guidance for GMS contract
2013/14: NHS.
NHS Employers. (2013). Quality and Outcomes Framework. Retrieved 8th May 2013, from http://www.nhsemployers.org/PayAndContracts/GeneralMedicalServicesContract/QOF/Pages/QualityOutcomesFramework.asp
NSW Health. (2001). The Clinician’s Toolkit For Improving Patient Care (First Edition). Sydney: NSW
Department of Health.
Nutley, T., & Reynolds, H. W. (2013). Improving the use of health data for health system
strengthening. Glob Health Action, 6.
Peiris, D. (2012). Building better systems of care for Aboriginal and Torres Strait Islander people:
findings from the Kanyini health systems assessment. BMC Health Services Research, 12(369).
Pencheon, D. (2008). The Good Indicators Guide: Understanding how to use and choose indicators:
NHS Institute for Innovation and Improvement, Association of Public Health Observatories.
Pettersen, K. I., Veenstra, M., Guldvog, B., & Kolstad, A. (2004). The Patient Experiences
Questionnaire: development, validity and reliability. Int J Qual Health Care, 16(6), 453-463. doi:
10.1093/intqhc/mzh074
Pope, J. (2003). Selecting health indicators in population health: Notes on choosing health indicators for a National Biomedical Risk Factor Survey. AHMS Working Paper Series No. 2. Adelaide: The University of Adelaide.
Queensland Aboriginal and Islander Health Council. (2010). Core Performance Indicators: Version 2, updated September 2010.
Royal Australian College of General Practitioners. (2009). Clinical indicators and the RACGP. Policy
endorsed by the 51st RACGP Council May 2009. Melbourne: RACGP.
Royal Australian College of General Practitioners. (2012). RACGP Clinical Indicators for Australian General Practice: A proposed set of clinical indicators for stakeholder comment by 30 July 2012. Melbourne: RACGP.
Royal Australian College of General Practitioners. (2013). Practice Guides and Tools. Retrieved 2nd May 2013, from http://www.racgp.org.au/your-practice/business/tools/support/indicators/
Rubin, H., Pronovost, P., & Diette, G. B. (2001a). From a process of care to a measure: the development and testing of a quality indicator. Int J Qual Health Care, 13(6).
Rubin, H., Pronovost, P., & Diette, G. B. (2001b). The advantages and disadvantages of process-based measures of health care quality. Int J Qual Health Care, 13(6), 469-474.
Si, D., Bailie, R. S., Dowden, M., O'Donoghue, L., Connors, C., Robinson, G. W., Cunningham, J., & Weeramanthri, T. (2007). Delivery of preventive health services to Indigenous adults: response to a systems-oriented primary care quality improvement intervention. Med J Aust, 187(8), 453-457.
Sitzia, J. (1999). How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J
Qual Health Care, 11(4), 319-328. doi: 10.1093/intqhc/11.4.319
Steenkamp, M., Bar Zeev, S., Rumbold, A., Barclay, L., & Kildea, S. (2010). Pragmatic indicators for remote Aboriginal maternal and infant care: why it matters and where to start. Australian and New Zealand Journal of Public Health, 34(Suppl 1).
Travaglia, J., & Debono, D. (2009). Clinical Indicators: a comprehensive review of the literature: The Centre for Clinical Governance Research in Health, University of New South Wales.
Vargo, A. C., Sharrock, P. J., Johnson, M. H., & Armstrong, M. I. (2012). The use of a participatory approach to develop a framework for assessing quality of care in children's mental health services. Adm Policy Ment Health.
Werner, R. M., & Asch, D. A. (2005). The unintended consequences of publicly reporting quality
information. JAMA, 293, 1239-1244.
Wollersheim, H., Hermens, R., Hulscher, M., Braspenning, J., Ouwens, M., Schouten, J., & Grol, R. (2007). Clinical indicators: development and applications. Netherlands Journal of Medicine, 65(1), 15-22.
World Health Organisation. (2006). Quality of care: A process for making strategic choices in health
systems. Geneva: World Health Organisation.
World Health Organisation. (2007). Everybody’s business, strengthening health systems to improve
health outcomes: WHO’s framework for action. Geneva: World Health Organisation.
World Health Organisation. (2010). Accelerating progress towards the health-related Millennium Development Goals. Geneva: World Health Organisation.
Wright, M. (2012). Pay-for-performance programs: Do they improve the quality of primary care? AFP, 41(12), 989-991.
Zaslavsky, A. (2001). Statistical issues in reporting quality data: small samples and casemix variation. Int J Qual Health Care, 13(6), 481-488.
Zeitlin, J., Wildman, K., Breart, G., Alexander, S., Barros, H., Blondel, B., & Macfarlane, A. (2003). Selecting an indicator set for monitoring and evaluating perinatal health in Europe: criteria, methods and results from the PERISTAT project. Eur J Obstet Gynecol Reprod Biol, 111(Suppl 1), S5-S14.