
Evaluating Better Care Together
Dr Tom Grimwood
Health and Social Care Evaluations (HASCE)
University of Cumbria
The Evaluation Project
• Aims, objectives and outcomes
• The methods we use
• Feedback loops
• What we have been finding
Aims, objectives and outcomes
• Working alongside quantitative metrics collected by UHMB Business Intelligence, HASCE are
evaluating the qualitative aspects of BCT.
• We are:
• Conducting semi-structured interviews and focus groups
• Surveying populations
• Working on contextual analyses
• Examining different forms of resource use
• Facilitating workshops on evaluation and its role in BCT
• ...in order to provide formative and summative evaluation of the New Care Model, and to ensure that evaluation remains central to the programme.
The questions we are answering
• What is the context (e.g. history, culture, economic and social circumstances, relationships, health
inequalities, local and national policies, national legislation) in each vanguard into which new care
models have been implemented?
• What key changes have the vanguards made and who is being affected by them? How have these
changes been implemented? Which components of the care model are really making a difference?
• What is the change in resource use and cost for the specific interventions that encompass the new
care models programme locally? How are vanguards performing against their expectations and how
can the care model be improved? What are the unintended costs and consequences (positive or
negative) associated with the new models of care on the local health economy and beyond?
The questions we are answering (cont.)
• What expected or unexpected impact is the vanguard having on patient outcomes and experience, the health of the local population, and the way in which resources are used in the local health system (e.g. equality)?
• What is causing the outcomes demonstrated in particular elements of the programme, systems,
patients or staff?
• What are the ‘active ingredients’ of a care model? Which aspects, if replicated elsewhere, can be
expected to give similar results and what contextual factors are prerequisites for success?
The methods we use
• The aim of the evaluation is not to tell us simply ‘what works’. Rather, it should tell us:
“What works in which circumstances, and for whom?”
• Or:
“What works, for whom, in what respects,
to what extent, in what contexts, and how?”
• Frontline evaluation is interlinked with workshops and an online webfolio, both to ensure its robustness and to develop a longer-term, sustainable evaluation culture within the Morecambe Bay footprint.
Data collection
• Contextual – desk-based, using the VICTORE model (December 2016, updated regularly)
• Volitions, Implementation, Contexts, Time, Outcomes, Rivalries, Emergence (see the tagging sketch after this slide).
• Process – semi-structured interviews and focus groups (ongoing)
• ‘Snowball’ process: following particular projects through from strategic to ground-level (where possible).
• ‘Joining up’ different small-scale activities to identify patterns and themes.
• Outcomes – large-scale survey (June 2017)
• Resources – synthesis of qualitative and existing data (August 2017)
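As a concrete illustration of how desk-research notes might be coded against the seven VICTORE dimensions, here is a minimal Python sketch. It is not part of the evaluation's actual tooling: the Victore enum, the sample notes and the grouping logic are hypothetical, introduced purely to show the idea of tagging contextual material by dimension.

```python
from enum import Enum
from collections import defaultdict

class Victore(Enum):
    """The seven VICTORE lenses used for desk-based contextual analysis."""
    VOLITIONS = 1
    IMPLEMENTATION = 2
    CONTEXTS = 3
    TIME = 4
    OUTCOMES = 5
    RIVALRIES = 6
    EMERGENCE = 7

# Hypothetical desk-research notes, each tagged with one VICTORE dimension.
notes = [
    ("History of previous funding rounds shapes local expectations", Victore.TIME),
    ("Parallel service redesigns compete for the same staff time", Victore.RIVALRIES),
    ("New partnerships are appearing that no one planned for", Victore.EMERGENCE),
]

# Group notes by dimension so gaps in the contextual picture stand out.
by_dimension = defaultdict(list)
for text, dimension in notes:
    by_dimension[dimension].append(text)

for dimension in Victore:
    print(f"{dimension.name:<15} {len(by_dimension[dimension])} note(s)")
```

A dimension with zero notes flags an area of context the desk research has not yet covered.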
Realist evaluation
• We are interested in evaluating BCT in terms of its contexts, mechanisms (enabling or disabling),
and outcomes.
• These enable us to theorise the programme:
• An intervention works (or not) because actors do (or do not) make particular decisions in response to it.
• If the right processes operate in the right conditions, then the right outcome appears.
• E.g. “In this context, that particular mechanism engaged these actors, generating those outcomes. In that
context, this other mechanism happened, generating these different outcomes.”
• This allows for a ‘ground up’ evaluation of multi-faceted change programmes, alert to local needs.
• We identify theories from qualitative analysis, and then test them via larger-scale surveys (see the sketch below).
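To show how context-mechanism-outcome (CMO) configurations can be handled systematically, here is a minimal sketch. The CMOConfiguration class, the group_by_mechanism helper and the sample excerpts are hypothetical, invented for illustration rather than drawn from the BCT evaluation itself.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class CMOConfiguration:
    """One realist hypothesis: in this context, this mechanism yields this outcome."""
    context: str    # e.g. "high workload ward"
    mechanism: str  # e.g. "improved ground-level communication"
    outcome: str    # e.g. "staff wariness eased"
    source: str     # interview/focus-group ID the coding came from

def group_by_mechanism(configs):
    """Collect the context/outcome pairs attached to each mechanism, so rival
    theories ('this mechanism works here but not there') stand out."""
    grouped = defaultdict(list)
    for c in configs:
        grouped[c.mechanism].append((c.context, c.outcome))
    return grouped

# Illustrative coded excerpts (invented for this sketch):
configs = [
    CMOConfiguration("high workload ward", "improved ground-level communication",
                     "staff wariness eased", "interview-03"),
    CMOConfiguration("newly merged team", "improved ground-level communication",
                     "confusion over roles persisted", "focusgroup-01"),
]

for mechanism, pairs in group_by_mechanism(configs).items():
    print(mechanism)
    for context, outcome in pairs:
        print(f"  in '{context}' -> {outcome}")
```

Grouping by mechanism mirrors the realist reading quoted above: the same mechanism generates different outcomes in different contexts, and those contrasts become candidate theories to test in the larger-scale survey.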
Creating hypotheses
• The aim is not to replicate a ‘scientific experiment’, but to capture the dynamics of transformative change and to inform its development.
• Realist evaluation is an ongoing and dynamic
‘effectiveness cycle’ (Kazi 2003: 30)
Feedback loops
• Due to the nature of BCT and its implementation, the evaluation must be iterative.
• As such, the evaluation design contains several feedback loops:
• Within the research team
• Between researchers and participants/stakeholders
• Via informal and scoping discussion
• Via discussion with BCT Research and Evaluation Group
• Via workshops
• Via online webfolio
[Diagram: the evaluation cycle, linking mixed-methods data collection; data analysis and emergent findings; formative workshops; findings informing further data collection; programme-wide analysis; synthesis of data into overall themes; engagement workshops; a programme-wide response to the evaluation questions; and synthesis of the programme evaluation with national metrics.]
What we have been finding
• What key changes have the vanguards made and who is being affected by
them?
• As might be expected, each change tends to have both enabling and disabling aspects.
• Positive views about multi-disciplinary partnerships; wariness of increasing workloads in
some areas.
• Clarity of roadmap and direction of travel, but less clarity on implementation and endpoint.
• Some of the bigger changes involve smaller mechanisms: e.g. improved communications
on the ground level. How can the effects of these be captured?
What we have been finding (cont.)
• What is causing the outcomes demonstrated in particular elements of the
programme, systems, patients or staff?
• A lot of discussion from participants around what the right kind of outcomes are. For
example, cultural change (e.g. a change of focus to health and wellbeing) may not be
reflected in targets. How can these successes be recognised, identified, and validated?
• Importance of cultural perception of place: past interventions, funding distribution,
funding longevity, etc. How much causal effect is this having?
• Importance of leadership at every level.
• Importance of personalities (harder to replicate!).
What we have been finding (cont.)
• What are the ‘active ingredients’ of a care model?
• There is a tension between the time (and the measures) that change requires and more immediate pressures (e.g. financial). How is this balanced?
• One active ingredient is anxiety: including localised concerns around resources,
recruitment and attrition, etc.
• Another very active ingredient is the role of technology, and how new ways of working
are implemented.
Contexts
• Demographics across ICCs
• Roles and levels of engagement with BCT
• Recruitment and attrition
• Enthusiasm for BCT
• Resistance/cynicism to change
• Perceived distance from decision-making
• Time available
• Funding
Mechanisms
• Intervention logic
• Integrating care
• Patient-centred approaches
• Working with local populations
• Relationships (inter- and intra-organisational), including information sharing
• Leadership
• Use of technology
• Organisational architecture
• Communication
Outcomes
• Quantitative metrics (e.g. reductions in outpatient appointments)
• Accountability
• Measures of change
• Types of ‘success’
• Shared understanding of roles, responsibilities and goals
• Positive feedback from staff/patients
• Sustainability
• Shifting care to the community / increased pressure outside of acute care