Reporting Outcomes with MHCT

A Categorical Change ‘Dashboard’
Outcomes Analyses
We want to:
• Embed the routine measurement, analysis and feeding back of clinical outcomes to frontline teams
• Improve clinical effectiveness through reflective practice, shared learning, identifying gaps in service, training needs etc.
• Support re-organisation and changing priorities
Is there any evidence we make a difference?
• For several years we have been recording HoNOS (MHCT) scores at key times during patients' pathways through our services:
1. At first assessment
2. When there is a significant change in need, e.g. admission
3. At CPA
4. At discharge
• Comparing a patient's scores at these points gives us a measure of our effectiveness
3 ways of showing change in HoNOS
• Mean total score
– But conflates scales getting worse with scales getting better (see the sketch after this list)
• HoNOS Four Factor
– But conflates change within each scale getting worse and getting better
– Changes from a score of 4 to 3 are equated with, and neutralised by, changes from 0 to 1
• Categorical change method
– But relies on an arbitrary cut-off point
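The conflation problem is easiest to see with a toy example. The sketch below uses hypothetical scores on two scales (not real data): one scale improves by a point while another deteriorates by a point, so the total (and therefore mean) score reports no change at all.

```python
# Toy example with hypothetical scores on two HoNOS/MHCT scales:
# one scale improves (4 -> 3) while another deteriorates (0 -> 1).
assessment = {"scale_1": 4, "scale_2": 0}   # start of episode
discharge  = {"scale_1": 3, "scale_2": 1}   # end of episode

# Total (and therefore mean) score reports no change at all.
total_change = sum(discharge.values()) - sum(assessment.values())
print(total_change)  # 0

# Looking at each scale separately keeps both movements visible,
# which is what the categorical approach sets out to do.
per_scale_change = {k: discharge[k] - assessment[k] for k in assessment}
print(per_scale_change)  # {'scale_1': -1, 'scale_2': 1}
```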
Categorical approach to showing change in each HoNOS scale score
• Each MHCT scale score is classified by severity:
– Low Severity: 0 (no problem), 1 (subclinical problems) or 2 (mild problems)
– High Severity: 3 (moderate problems) or 4 (severe to very severe problems)
• Comparing the classification at the start and end of an episode gives one of four outcomes:
– Low Severity to Low Severity: Remained stable
– Low Severity to High Severity: Reliable deterioration
– High Severity to Low Severity: Reliable improvement
– High Severity to High Severity: Remained highly unwell
• Predicated upon a reliable gulf between scores of 2 and below and scores of 3 and above
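A minimal sketch of this rule in Python, assuming the 2/3 cut-off described above (the function name and category strings are illustrative, not the Trust's actual implementation):

```python
def classify_threshold(start: int, end: int) -> str:
    """Categorical outcome for one HoNOS/MHCT scale, using the 2/3 cut-off.

    Scores 0-2 are treated as Low Severity, 3-4 as High Severity.
    """
    start_high = start >= 3
    end_high = end >= 3

    if not start_high and not end_high:
        return "Remained stable"
    if not start_high and end_high:
        return "Reliable deterioration"
    if start_high and not end_high:
        return "Reliable improvement"
    return "Remained highly unwell"


print(classify_threshold(2, 3))  # Reliable deterioration (a 1-point move across the cut-off)
print(classify_threshold(4, 1))  # Reliable improvement
```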
Reassessing the cut-off
• Does basing 'change' on a scoring threshold of 2 or below to 3 or above impact on reported outcomes?
– Is an improvement in MHCT score from 2 to 0, or a deterioration from 0 to 2, significant?
– If clinicians tend to 'under-score' then this change would be missed
– Would 'change' based on 2 MHCT points be more appropriate and reliable? (A change from 3 to 2 may not be 'significant' but the result of poor inter-rater reliability; one reading of this rule is sketched below.)
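One possible reading of the 2-point criterion, sketched as a variant of the function above. The handling of small movements, and of the 'remained highly unwell' category, is an assumption made for illustration:

```python
def classify_two_point(start: int, end: int) -> str:
    """Variant rule: 'reliable' change requires a movement of at least 2 MHCT points.

    Assumption for illustration: smaller movements count as stable, unless both
    scores sit in the 3-4 range, in which case the patient remains highly unwell.
    """
    change = end - start
    if change >= 2:
        return "Reliable deterioration"
    if change <= -2:
        return "Reliable improvement"
    if start >= 3 and end >= 3:
        return "Remained highly unwell"
    return "Remained stable"


# The two rules disagree exactly where the slide suggests: small movements across the cut-off.
print(classify_two_point(3, 2))  # Remained stable (the threshold rule would say improvement)
print(classify_two_point(0, 2))  # Reliable deterioration (the threshold rule would say stable)
```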
Calculating change based on a vlookup
MHCT Score (start-end)    Categorical Change
0-0                       Remains Stable
0-1                       Remains Stable
0-2                       Reliable Deterioration
0-3                       Reliable Deterioration
0-4                       Reliable Deterioration
1-0                       Remains Stable
1-1                       Remains Stable
1-2                       Remains Stable
1-3                       Reliable Deterioration
1-4                       Reliable Deterioration
2-0                       Reliable Improvement
2-1                       Remains Stable
2-2                       Remains Stable
2-3                       Remains Stable
2-4                       Reliable Deterioration
3-0                       Reliable Improvement
3-1                       Reliable Improvement
3-2                       Remains Stable
3-3                       Remains Highly Unwell
3-4                       Remains Highly Unwell
4-0                       Reliable Improvement
4-1                       Reliable Improvement
4-2                       Reliable Improvement
4-3                       Remains Highly Unwell
4-4                       Remains Highly Unwell
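In Excel this lookup is typically a VLOOKUP against the two-column table above, keyed on the concatenated start and end scores. A Python equivalent of the same lookup is sketched below; the mapping is copied verbatim from the table:

```python
# Lookup-table version of the categorical change calculation, mirroring the
# vlookup table above: the key is "<start score>-<end score>" for one scale.
CATEGORICAL_CHANGE = {
    "0-0": "Remains Stable",         "0-1": "Remains Stable",
    "0-2": "Reliable Deterioration", "0-3": "Reliable Deterioration",
    "0-4": "Reliable Deterioration",
    "1-0": "Remains Stable",         "1-1": "Remains Stable",
    "1-2": "Remains Stable",         "1-3": "Reliable Deterioration",
    "1-4": "Reliable Deterioration",
    "2-0": "Reliable Improvement",   "2-1": "Remains Stable",
    "2-2": "Remains Stable",         "2-3": "Remains Stable",
    "2-4": "Reliable Deterioration",
    "3-0": "Reliable Improvement",   "3-1": "Reliable Improvement",
    "3-2": "Remains Stable",         "3-3": "Remains Highly Unwell",
    "3-4": "Remains Highly Unwell",
    "4-0": "Reliable Improvement",   "4-1": "Reliable Improvement",
    "4-2": "Reliable Improvement",   "4-3": "Remains Highly Unwell",
    "4-4": "Remains Highly Unwell",
}


def lookup_change(start: int, end: int) -> str:
    """Return the categorical change for one scale, vlookup-style."""
    return CATEGORICAL_CHANGE[f"{start}-{end}"]


print(lookup_change(2, 3))  # Remains Stable
print(lookup_change(0, 2))  # Reliable Deterioration
```

Note that the table amounts to the 2-point variant discussed earlier: movements of two or more points are reported as reliable improvement or deterioration, one-point movements across the old 2/3 cut-off (e.g. 2-3 or 3-2) are reported as stable, and patients whose scores stay in the 3-4 range remain classified as highly unwell.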
The resulting Excel table
The finished Dashboard
What are the issues?
• We have not yet completed a comparison between the approaches to determine the impact of changing the 'cut-off'
• This data extract was based on service users discharged between 1/4/15 and 30/9/15, however:
– Of 5577 service users discharged, only 2803 (50%) had 2 MHCTs completed
– Of those, only 527 had a discharge MHCT completed within 30 days of the recorded discharge date
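A quick check of the attrition implied by these figures, using only the numbers quoted above:

```python
discharged = 5577             # service users discharged 1/4/15 - 30/9/15
with_two_mhcts = 2803         # had at least 2 MHCTs completed
with_timely_discharge = 527   # discharge MHCT within 30 days of discharge date

print(f"{with_two_mhcts / discharged:.0%}")            # 50% of discharges had 2 MHCTs
print(f"{with_timely_discharge / with_two_mhcts:.0%}") # 19% of those had a timely discharge MHCT
print(f"{with_timely_discharge / discharged:.1%}")     # 9.4% of all discharges usable for outcomes
```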
Concluding Thoughts
• Would basing 'change' on a 2-point difference in MHCT scores give a more accurate reflection of change?
• Do we have the right data extract?
• How often do we run the report?
• How do we close the loop? Is this dashboard accessible to clinicians?
• To what level do we drill down into it?
• How do we deal with the data issues?