Real-world coaching evaluation
A guide for practitioners

September 2010
This guide was written by Dr John McGurk, Adviser: Learning and Talent Development, CIPD.
CONTENTS
Overview
Part 1: Introduction
Part 2: The coaching evaluation knowing–doing gap
Part 3: Return on investment: rigorous evidence or pointless distraction?
Part 4: Coaching evaluation data: strengths and limitations
Part 5: Coaching evaluation: an integrated approach
Part 6: Conclusion and recommendations for practice
Sources of information
OVERVIEW
Coaching is a powerful and enabling tool for
development and performance. It is one of the key
arrows in the learning and talent development quiver
and is well regarded by many people in organisations
at all levels. Coaching helps to raise performance
and align people and their goals to the organisation,
cements learning and skills, and is a powerful agent
for culture change and agility. Coaching is, as we
have indicated in previous CIPD Learning and Talent
Development surveys, becoming a ‘must have’ for
organisations and part of normal management
practice. However, coaching has an ‘Achilles’ heel’ in
that evaluation of coaching is largely neglected. Only
36% of respondents to the 2010 Learning and Talent
Development survey report evaluating coaching, with
the majority who do any evaluation not going beyond
the reactions or ‘happy sheet’ level. The CIPD believes
that if this evaluation gap persists, coaching will come
under threat since learning and talent development
(LTD) professionals still cannot demonstrate its impact.
When every item and line of expenditure is being
acutely scrutinised, as in the current climate, that need
is even more urgent.
The key messages from this practical guide are:
• Our surveys show that coaching is not being
effectively evaluated.
• Our 2010 survey shows that only 36% of organisations are doing any evaluation of coaching.
• The minority who do evaluate are focusing on
qualitative evaluation, such as reaction, stories
and testimony.
• Where quantitative evaluation is used, it is
often a crude use of return on investment
(ROI).
• Evaluation is not being grounded in a
capability perspective in terms of its links to
the organisation and people plan.
• Good practice is out there and a more
systematic mindset would deliver a step
change in evaluation performance.
• A wide range of data can be used for coaching
evaluation and practitioners need to access
these data streams.
The guide is structured as follows:

• Part 1: reviews the evidence on the evaluation gap and suggests reasons why this is so in terms of LTD mindsets, the perception that coaching is too abstract to manage, and failings in evaluation models and methods.
• Part 2: looks at the basic tools of evaluation for coaching and the issues and problems with them, including Kirkpatrick levels, ROI, return on expectation (ROE), and so on.
• Part 3: addresses the issues with inappropriate use of the ROI model.
• Part 4: examines the data sources for evaluation and how we can use them.
• Part 5: looks at how we can develop an integrated approach to coaching evaluation using the OPRA framework.
• Part 6: conclusion and recommendations.
We’ve prepared online thought tools to accompany
this guide and help practitioners to evaluate
coaching.
PART 1
Introduction
‘Evaluation is really more a way of thinking and working than a set of methods and practices. It involves continuously improving designs, responding to changing needs and situations, regularly reviewing costs and benefits, and, of course, reporting on progress.’
These words featured in the CIPD book The Case for
Coaching in 2006. The evaluation gap in coaching,
however, continues. This is concerning, because
coaching, as the CIPD has pointed out in subsequent
surveys and reports, is becoming part of normal
management practice (CIPD 2009a). It’s becoming
embedded as a key aspect of learning and talent
development, performance management and
leadership development (CIPD 2008, 2009b, 2010).
Yet while the numbers who report the use of coaching
are steadily rising, the real value of coaching is not
being captured. Only 36% of organisations, according
to our 2010 Learning and Talent Development survey,
evaluate coaching and, within that overall figure, there
is little evidence of rigorous evaluation (CIPD 2010).
Coaching of course occurs at different levels, from
bespoke executive coaching at the executive level
or ‘C-suite’ to the kind of line manager coaching
identified by our Coaching at the Sharp End report
(Anderson et al 2009). Our Developing Coaching
Capability in Organisations project in 2008 looked
deep into the organisational development of coaching
and sought to explain how, and through what stages,
coaching and its counterpart, mentoring, emerged
within organisations. Our Taking the Temperature of
Coaching survey in autumn 2009, one year into the
global financial crisis and deep UK recession, probed
into the health of coaching. These various research
outputs all showed that coaching was here to stay and
that engagement with the practice was increasing.
The research behind this guide
This guide draws on several sources:
• Taking the Temperature of Coaching, an online
survey published in September 2009, which
looked in detail at coaching practice (sample
size: 521)
• the CIPD 2010 Learning and Talent Development
survey, which looked in more depth at those
issues (sample size: 624)
• detailed discussion with practitioners through
various forums:
– Northern Ireland branch event on coaching
(March 2010)
– London CIPD forum on coaching evaluation
(July 2010)
– Midlands University network (July 2010).
Figure 1: Coaching incidence, 2005–10 (% of respondents reporting the use of coaching): 2005: 63; 2006: 64; 2007: 71; 2008: 78; 2009: 90; 2010: 82
For example, in Taking the Temperature of Coaching we
found that about 90% of respondents to this online
poll reported using coaching. The 2010 Learning and
Talent Development survey showed that figure at
82%. This confirms a long-term trend, with coaching
increasing in use since 2005. At the same time other
organisations have begun to ponder the evaluation
problem, from the Association for Coaching and the
European Mentoring and Coaching Council (EMCC) to
the academic literature (Grant et al 2010). The problem
was clear. Coaching had become widespread but
many coaching interventions were poorly evaluated
for a number of reasons. The data in our 2010 survey
provides some insight into this (see Figure 2).
Figure 2: Evaluation priorities, 2010 (% of those who evaluate coaching using each practice): happy sheets: 58; stories and testimony: 56; linked to KPIs: 44; ROI/ROE: 40
Of the 36% who do evaluate coaching, so-called happy sheets, and stories and testimony,
were the most collected forms of data used by
well over half. Well under half used approaches
such as linking coaching outcomes to key
performance indicators (KPIs). More quantitative
and mixed-method approaches, such as return
on expectation (ROE) and return on investment
(ROI), were less common. Few linked coaching
evaluation to performance and only 13%
frequently discussed evaluation at management
meetings. Only about 20% frequently collected
and analysed data on the impact of coaching. We
looked at the ways coaching was monitored and
discussed. These are captured in Figure 3.
Figure 3: Monitoring progress during coaching (2010 survey)
• One-third frequently discuss coaching and link it to performance.
• One-fifth frequently collect and analyse data.
• Only 13% frequently discuss the progress of coaching at management meetings.
• One-third ask participants to keep diaries and records.
Poor evaluation: a clear and present danger to
coaching
These shortcomings could make coaching vulnerable
as cost pressures mount. In difficult times, when
learning and development and other HR interventions
compete for scarce organisational resource, anything
that cannot prove its value will be increasingly
vulnerable. Coaching cannot claim a unique
contribution to organisational performance and impact
if its practitioners and champions assume its value
rather than prove it. We need to build a convincing
evaluation narrative and most organisations are failing
to do this. As an intervention, coaching is similar
to culture change and organisational development
initiatives. We cannot begin to really gauge the value
of these contributions until we think systematically
about their impact on the organisation. Practitioners
with good evaluation practice know this. We
now need to move towards a systematic approach
based on a thorough review of the coaching process.
The CIPD sees evaluation as a cornerstone of effective
coaching and we want to assist practitioners in
developing best evaluation practice.
Most learning and talent professionals know that
coaching should be evaluated, yet we have a yawning
gap between the practitioners who do and those who
don’t. The issue often lies in the behaviours and approaches we deploy. To step back from this, we consider some of the likely reasons for the ‘knowing–doing’ gap in evaluation in Part 2.
Practice points 1: the coaching evaluation gap
1 Our surveys show that currently around 64% of
practitioners are not evaluating coaching at all.
2 Are you one of the 64%? If so, think about how
you can start to develop effective evaluation. This
guide should get you started.
3 If you are one of the 36% who do some
evaluation of coaching, think about how you
do this. Use the online tools to improve your
thinking about evaluation.
4 Looking at Figure 3, where would you stand in
terms of evaluation practice?
PART 2
The coaching evaluation knowing–doing gap
Effective evaluation is a recurring challenge for
organisations and especially for HR. What we do and
how it works is a major preoccupation for all of us.
Yet all too often we do not evaluate effectively or
assess the evidence base for our interventions. This
is the situation with coaching. Pfeffer and Sutton
(2000) highlighted this ‘knowing–doing gap’ in respect
of interventions that we know work but we fail to
implement. We are very good at introducing coaching
– patently, we are not so good at understanding the
need to evaluate. This can have consequences, as the
box below indicates.
Coaching called to account
You are in the lift with the finance director and
she asks, ‘What is the return on investment we
are getting from coaching?’ You are stumped
for an answer and you say, ‘Well we just know
it works because…’. ‘Because of what?’ she
retorts. ‘Where can I get that information?’ You
reply, ‘Eh, I think…’. You should be nervous about
her approving your next tranche of funding for
coaching.
On the other hand, if you say, ‘We don’t use ROI,
as in the costs minus benefits, because it’s not
the right approach but I can show you how we
do evaluate it,’ she may well be reassured that
coaching is delivering.
According to our survey, roughly seven out of every ten
lift conversations would not go well. In coaching we
concentrate on raising awareness and responsibility.
So, if we assume that people want to and should
evaluate coaching, we need to ask ourselves:
• Why is this not being done and what’s getting in
the way?
• What can we do about it so that we can enable
good evaluation?
• How can we make sure we have that information
for that lift encounter with the finance director?
Thinking about the issues that might be contributing
to our knowing–doing gap, several come to mind:
• our focus on delivering coaching to the
organisation as a method of development and
sometimes our own role in delivering as coaches
• the fact that our skill set as LTD professionals may
not be best suited to evaluation and that we need
to recognise that and develop ourselves in that area
• the idea that coaching is too abstract and diffuse
an intervention for it to be properly evaluated,
unlike, say, the process issues around training
• our reliance on some basic evaluation models that
may not be fit for the purpose of effective coaching
evaluation.
Delivery focus can detract from evaluation
In reality, stakeholders won’t expect us to produce
a spreadsheet with scenario forecasts for coaching
and ROI. In fact, they are more likely to be convinced
if we can tell them how many people are coached,
how much we spend on external coaches, the length
of assignments and some data on the impact of
coaching: perhaps some engagement scores shown
before and after coaching, or maybe some anonymous
360 feedback on people’s ability to complete projects.
If we are also generous about other interventions and
their impact and we can apportion some of the effect
to coaching, we have some compelling evidence.
That’s not happening enough.
Many practitioners think that developing and delivering
coaching is enough. This ‘delivery focus’ can lead
people to believe that the process of introducing
coaching is enough. As Jarvis et al pointed out in their
2006 book, The Case for Coaching, there is often an
assumption that time spent in any learning activity
such as coaching always has a positive payback (Jarvis
et al 2006). They also suggest that evaluation may
not be addressed because we might uncover negative
results that could threaten coaching – much better,
then, to carry on delivering. We know from our surveys
that coaching is being used primarily for performance
management and leadership development, but we have
less evidence of how it is actually impacting those areas.
Could delivery focus blur value?
In our 2010 Learning and Talent Development
survey, practitioners reported that they split their
time evenly between delivery and planning and
gaining insight for learning interventions. That
in itself may be an issue; perhaps as LTD people
who should be planning and strategising about
coaching we spend a lot of time delivering actual
coaching sessions. It is of course good that we are
‘sleeves up’, sometimes delivering, but if we do
too much delivery we can see coaching through
a fairly narrow lens. Evaluation is certainly part
of that wider perspective that makes us effective
practitioners.
Evaluation skills need a different approach
Evaluation requires systems, reporting, records and,
eventually, numbers. These are distinct skills. Some
people in LTD are very comfortable with process but
it may not be the strongest skill set for coaches and the
learning and talent community who oversee coaching.
Professionals in learning and talent development
perhaps overspecialise on the creative side of the skills
dimension. Perhaps our activist learning styles make us
reluctant evaluators.
Point to ponder: could our strengths in LTD
be our weakness in evaluation?
In terms of the widely used Myers-Briggs Type Indicator (MBTI), coaches tend to be over-represented in the FP category of feeling and
perceiving and much less in the TJ category for
thinking and judging usually associated with
analytical activities such as evaluation (Passmore
2008). For example, just under a fifth of coaches
are ENFP compared with just under 3% of the
general population. Most managers and executives
lean towards the TJ side of the skills/behaviour
dimension. Most learning and talent professionals
come out on the FP side and many are also
coaches. So there could be an issue about how we
have effective dialogue with managers on these
issues. We should be mindful about our learning
mind-sets and behaviours.
Evaluation is also very much – to use the Belbin team
roles (2008) approach – a ‘monitor evaluator’ activity;
many in LTD tend to be on the resource investigator/
plant side of the skills divide.
Myers-Briggs, like other personality tests, is only a
guide and the states describe preferences rather than
hard-wired behaviours, but there is some truth in the idea that
our preferences can make us less disposed towards
evaluation or less likely to resist when others such
as managers downplay it. Perhaps we can play to
our strengths and learning biases and simply deliver
coaching, leaving it to others to evaluate. When we
do evaluate, it’s much easier to go with the happy
sheet or reaction levels because we can relate better
to these outputs. However, in coaching and learning
we know that by working on any skill deficit we can
help people raise their game. The CIPD Profession Map
and diagnostic tools will help you to address your skills
gaps in this area. The diagnostic tool My HR Map will
give you an instant report.
Coaching is seen as abstract and difficult to measure
‘It’s absurd to even try to measure so abstract
and evanescent an intervention as coaching.’
(Response to Taking the Temperature of Coaching
survey 2009)
As the remark above implies, some believe the value of coaching can’t really be defined because it’s too abstract. But coaching is not abstract, though people and conversations can be. Nor is it ‘evanescent’. Conversations, relationships, feelings, goals and objectives can all be difficult to pin down, but measuring them is not rocket science.
Points to ponder: measuring the abstract
A point-blank refusal to evaluate coaching was
once put to the author in terms of Heisenberg’s
uncertainty principle. This is a concept in science
where any attempt by a human agent to measure,
say, the movement of atoms, changes the
outcome because the human agent has ‘violated’
the current state. It’s the best argument I have
heard for not evaluating coaching, though it’s not
a sufficient one. There is an important point here
about measuring abstract phenomena.
When we put abstract goals and objectives into
a concrete form, we can measure them. If we
consider for example corporate culture, which can
be measured with surveys, cultural barometers,
focus groups and many other forms of data
collection, we can see that coaching is not alone.
Consider how major companies value themselves
through something as abstract as ‘brand’. Issues
such as brand and goodwill are complex to
calculate because they are abstract, but since
they carry a huge amount of real value businesses
persist in trying to measure them. Now it’s become
normal practice to have brand intangibles and
goodwill (the assumed value of an acquisition) on
a balance sheet. Coaching has the same quality
of being abstract but measurable (see Human
Capital Evaluation: Developing performance
measures).
Structured conversations and discussions such as those
around coaching can also be measured. If in reflecting
and recording we can think about how the objectives
and goals meet wider goals, we can gauge the impact
of interventions and link that to the resource invested.
Investment decisions should not be undertaken
without appraisal and the investment in coaching is no
different. Incidentally, people’s time is also an investment and should be accounted for in terms of
resource costs. So it’s no excuse to say that a coaching
programme was cost-free if it carried the costs of
employee or manager time. The benefits should also,
of course, be assessed. In weighing the benefits we
often use standard models, which is the subject of our
next evaluation gap.
Our evaluation models are stale and overused
Our existing models of evaluation are a bit tired and
dated. In 1959, Kirkpatrick first outlined four levels for
training evaluation:
• reactions – ‘liking or feelings for a programme’
• learning – ‘principles, facts, and so on, absorbed’
• behaviour – ‘using learning on the job’
• results – ‘increased production, reduced costs, and
so on’.
Kirkpatrick is a plausible model of evaluation first
designed in 1959, later developed by Kirkpatrick
himself and later still augmented by others such
as Phillips and Phillips (2007). The model allows
practitioners to progress through a connected process
for evaluation. Like the Morris Minor, however, a design of the same vintage, it is no longer appropriate for today’s climate. Furthermore, the Kirkpatrick levels
that are used by most practitioners tend to simply
address reaction and behaviour. The final Kirkpatrick level is results. This dimension is often
missing as it can be difficult to get coaching outcomes
linked to KPIs and key business outcomes.
The CIPD has challenged the overuse of the Kirkpatrick
model of evaluation with its four levels (see our Value
of Learning report (Anderson 2007) and Holton
(1996)). Criticism of the Kirkpatrick model suggests
that the levels often don’t relate to each other and
amount to a ‘taxonomy’ (ordering system).
The CIPD strengthened and simplified its evaluation
strategy in The Value of Learning (Anderson 2007). In
this report, we boiled down evaluation of any learning
and development outcome, including coaching, to
three simple issues. We highlighted this again in our
Promoting the Value of Learning in Adversity report
(CIPD 2009b) and we think this simple model, which
we outline in Figure 4, is a useful ‘thought tool’ for
coaching.
Information point: RAM model

Relevance allows practitioners to envisage a ‘structured thought process’ for how they develop learning interventions that have real organisational impact. Relevance stops us from simply doing process and routine, and makes us think about where the business is and where it needs to be. Alignment means that we ensure that our coaching interventions have real business impact, that we talk to the key stakeholders and that we know that what is being delivered fits with the needs of stakeholders. Measurement means that we embed evaluation into the entire process by ensuring that we collect the necessary data appropriate to the type of coaching that is being delivered. LTD departments have a key role in holding coaching consultants and contractors to account on this and ensuring that individual departments are ‘teed up’ through the alignment process.

Figure 4: The RAM model of evaluation
• Relevance (the business case for intervention): what problem/opportunity does it address? How will it drive the business? (business-ready people; productive performance; cost-effective people; talent management)
• Alignment (dialogue and measurement): stakeholders and budgets; time and cost; indicators of business outcomes; benchmarking and calibrating internally and externally.
• Measurement (quantitative and qualitative): performance and improvement against criteria; broad evaluation (learning, efficiency/productivity); business metrics (KPI, ROI, non-HR); return on expectation.

The alignment process ensures that we are talking to stakeholders, particularly senior sponsors and line managers, and that we are thinking about the costs and resource around coaching. We are also ensuring that we are benchmarking and ‘calibrating’, an important concept, that is, looking outside to measure against competitors and ensuring that what we are doing is appropriate and fit for purpose. On measurement we counsel against the obsession with crude ROI and take an approach that captures the richness of learning interactions but which links these clearly to business deliverables and outcomes.
Arguably one of the biggest problems is the fact that
evaluation is often assumed to be about finding a
single magic number, the so-called ROI. In the next
section we address this issue separately, given the
amount of confusion and difficulty it can cause.
Practice points 2: the coaching evaluation knowing–doing gap
1 Imagine that lift conversation with the finance director. How would yours go?
2 Consider your own and your team’s evaluation skill sets. If they need work, use the new My HR Map.
3 Consider which evaluation models and approaches you use/overuse and the level you use. Use the CIPD’s Value of Learning tool to examine your evaluation thinking.
4 Consider the perceptions getting in the way of effective evaluation. Reflect on and examine whether these are real or just thinking patterns. Is it too abstract, difficult, and so on?
PART 3
Return on investment: rigorous evidence or pointless distraction?
When measuring outcomes, people often assume that
a compelling number will convince others of the value
of an intervention. Return on investment (ROI) is often
seen as the ultimate measure in evaluation. When
people such as the finance director in the lift use ROI,
they can mean different things. They may mean an overall assessment of benefits weighed against costs, which should involve looking at a range of data. In learning and talent circles, however, the term ROI is often reduced to a single calculation of benefits minus costs.
In fact, many of the key quoted ‘results’ on coaching
and its impact come from a small number of studies of
executive coaching. The often cited MetrixGlobal case
study (Anderson 2001) is a case in point. This study
of 43 executives in a US firm highlights the pitfalls
with using ROI as a narrative for coaching evaluation.
MetrixGlobal, like the Manchester study (below) and
the Cologne study of a financial services firm, focuses
on executives (see Jarvis et al 2006). Executives are
in a position to impact the direction of a firm, and it’s difficult to gauge whether any improvement in the
‘bottom line’ was down to coaching or myriad other
issues. We should also be aware that many studies are
carried out by consultants anxious to prove the value of
their own intervention.
The MetrixGlobal study was actually a consultancy
study of a leadership development programme, but
coaching was identified as a significant factor. The
study identified a return on investment (ROI) from
coaching. According to the study, coaching conducted
with executives was responsible for an ROI of about
500%. Evaluation was based upon self-report questionnaires administered by the coach; the
idea of it being a definitive financial demonstration of
the impact of coaching is questionable. In reality, the
description of how the study examined perceptions of improved productivity, increased skill and better performance was actually more meaningful, but
the headline ROI figure appeals to a notion that the
exercise was more ‘scientific’.
Information point: ROI in action
Another famous evaluation case, the Manchester
study (McGovern et al 2001), is a more credible
attempt at ROI. It used a well-demonstrated
method and, though again based on self-report,
it did link to business outcomes such as increased
sales or better project delivery. The study examined
how coaching was translated into individual and organisational effectiveness, how this increased capability had a business impact, and how that impact could be quantified and maximised.
The authors then translated this into the
Kirkpatrick levels with an additional step for ROI.
However, arguably, this is still rather limited for a
coaching perspective and is still very much within
the Kirkpatrick straitjacket. We have developed
an alternative framework based on RAM (outlined
above), which encapsulates the key aspects of the
CIPD’s return on expectations framework. Such a
framework is the most effective way to develop an
ROI and the key to effective integrated evaluation
(see the CIPD tool Value of Learning).
Many ROI exercises like this say more about the size of
the relative numbers than the quality or impact of the
intervention. The coaching psychologist Tony Grant
demonstrated his great humour at a recent conference
when he said:
‘ROI rocks. I can put my coaching into a really big
project and get a massive ROI, and all I need to
do to make it even bigger is to bring my charges
down.’ (Remarks to Association for Coaching
Conference, 8 July 2010)
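To see Grant’s point in numbers, here is a minimal sketch using the standard headline formula (net benefits as a percentage of costs) and entirely hypothetical figures:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Headline ROI: net benefits as a percentage of costs."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: a large project credited with 100,000 of benefit.
project_benefit = 100_000

print(roi_percent(project_benefit, 20_000))  # 400.0 with a 20,000 coaching fee
print(roi_percent(project_benefit, 10_000))  # 900.0: halving the fee more than
                                             # doubles the headline number, with
                                             # no change to the coaching itself
```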
For evaluation to be effective we need a whole
supporting cast of evidence, not just a headline number.
Jack Phillips, the leading practitioner on business
evaluation, explains:
‘Value is not defined as a single number. Rather,
its definition is composed of a variety of data
points. Value must be balanced with quantities
and qualitative data, as well as financial and nonfinancial perspectives.’ (Phillips and Phillips 2007)
Phillips and Phillips (2008) have developed a coherent
framework for business evaluation that also measures
coaching but, unlike many of the studies quoted above,
they try to measure the impact coaching has relative
to other aspects of business performance. In a major
study of the ROI gained by a hotel chain from business
coaching, they develop a range of techniques to
demonstrate the benefits of coaching, which returned
roughly $3 for every $1 invested. Their model uses
estimates of probability such as confidence intervals to
adjust for the subjective judgements of executives, and
collects data at all levels. For a detailed ROI, these robust
approaches are critical.
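As an illustration of that adjustment idea, here is a minimal sketch with invented figures, broadly in the spirit of the Phillips approach rather than a reproduction of their instrument: each executive’s estimate of the monetary benefit is discounted both by the proportion they attribute to coaching and by their stated confidence in that attribution.

```python
# Invented survey returns: (estimated monetary benefit, proportion
# attributed to coaching, confidence in that attribution).
estimates = [
    (40_000, 0.50, 0.70),
    (25_000, 0.80, 0.60),
    (60_000, 0.30, 0.90),
]

# Discount each estimate twice, then total: a deliberately conservative
# benefit figure to set against the cost of the coaching.
adjusted_benefit = sum(b * share * conf for b, share, conf in estimates)

costs = 20_000
print(f"Adjusted benefit: {adjusted_benefit:,.0f}")             # 42,200
print(f"ROI: {(adjusted_benefit - costs) / costs * 100:.0f}%")  # 111%
```

Only the discounted residue is claimed, which is what gives figures produced this way their credibility.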
Finally, we should also be aware that the best
evidence for the impact of coaching comes not from
business but from the extensive literature on therapy.
Coaching as a helping behaviour can be seen to have
the same structures – powerful conversations, active
listening, probing questions and goal-directed action.
Nevertheless, as De Haan (2008) explains, the real value
of coaching – like therapy – lies in the relationship. An extensive study of the effectiveness of therapy isolated that factor, above all, as the strongest predictor, and it applies to coaching as well.
The key to integrated evaluation is data. Most of the
data needed is already around the organisation and
can easily be collected. The protocol for using this
data should be grounded appropriately in company
confidentiality issues. However, we do not need to be
bureaucratic and prescriptive. What sorts of data are
available? These range far and wide and are the subject
of part 4.
Practice pointers 3
1 Ask the question of whether ROI represents a stepped process or a one-off calculation in your view.
2 Investigate a detailed ROI process and see if it works for your organisation (see Phillips and Phillips 2007, 2008 and Anderson 2007). [This could be a great development project for LTD talent.]
3 Think about how you capture abstract processes and put a value on them.
4 Check out the return on expectation (ROE) model (see Anderson 2007). Would it work better?
PART 4
Coaching evaluation data: strengths and limitations
The data needed to drive effective coaching evaluation
is all around us in the organisation. The key is to be
able to understand it, put it in context and make use
of it. Often evaluation is undertaken without a firm
grounding of data. There can, of course, be obstacles,
but learning and talent professionals should be able to
secure the data stream. We will discuss and use data
to test the impact of coaching. We should of course
respect confidentiality and other protocols, such as data
protection, but we do need to press our case for the use
of data such as:
• psychometrics
• 360-degree feedback and other performance appraisal records
• individual diagnostics, such as learning styles
• team diagnostics and performance data
• employee surveys and polls
• HR systems data on absence, retention, talent management, learning attainments, and so on.
We look at these data sources in detail below. For
reasons of scope, we have omitted hard production
data such as six sigma, quality data and lean production
metrics. These should be used where the organisational
context drives their use. Similarly, we have omitted
the sort of target and attainment data commonly
used in the healthcare and education sectors. Again, in
context these and other business metrics such as sales
and customer retention are critical. We focus for the
purposes of this guide on the key organisational data
about employee performance and development usually
held in HR departments and therefore easier to access
and use. We also look briefly at the data that is used in
the coaching process itself.
Psychometrics

Psychometric testing is widespread in organisations and is used both for pre-employment selection screening and for continuing development and assessment. Since the data tends to be well validated, robustly sampled and normally involves external comparators, it is useful data to inform the coaching evaluation process.
There are many psychometric tests, gauging everything
from psychological fitness to cognitive ability, skills and
performance. In many organisations such tests are used as
a pre-recruitment screening exercise. The data is sensitive
and often confidential and the time period of the testing
needs to be taken into account. The fact that people
learn and change means the often fixed and deterministic
approach of psychometric tests should not be taken at
face value. Psychologists can administer instruments
such as MBTI and Saville Wave during recruitment and
development programmes. Normally these require expert
or at least trained feedback. Many look at personality
in terms of dimensions and seek to find people’s core
behaviours and skills (see Saville and Hopton (2009) for
an accessible guide to the Saville Wave technique through
tests on sports and business personalities).
Information point: example Myers-Briggs Type Indicator (MBTI)

The MBTI has probably been most used in coaching assignments because it describes personality in positive, non-threatening terms. It’s especially useful for issues such as influencing, problem-solving, team development and addressing character issues. One problem with psychometric testing is that it tends to sit in the domain of the psychological profession, who control and advise on tests, though non-psychologists can be trained in these instruments.
However, tests such as Myers-Briggs and Saville
Wave have been robustly validated with hundreds
of thousands taking the same test, allowing
credible comparison (see Passmore 2008).
Psychologists tend to have appropriate training
and operate to higher regulated standards than
many coaches. Some learning and talent specialists
and many coaches are suspicious of psychometrics
in coaching, but they can provide rigorous and
evidence-based data that can counteract hearsay
and perception and can provide a platform
for discussion and development. The book
Psychometrics in Coaching (Passmore 2008)
provides an excellent and accessible summary as
well as good practice guidance so that these tests
can be used by practitioners in an informed way.
360-degree and qualitative performance appraisal
360-degree appraisal is often used to inform coaching
assignments precisely because it provides feedback
on performance and behaviour from all levels of the
organisation. The instrument should be well designed
and properly worded and the emphasis should be
developmental, not punitive or performance focused
(see the CIPD factsheet 360 Feedback).
If an individual has issues, such as a failure to deliver on time or an inability to accept feedback, the data can be fed into a coaching assignment. Since, as with many psychometrics and other appraisal data, the individual will already know about their 360-degree profile, this can be a productive stepping stone for the coaching process. Performance appraisal is usually based on written records of discussion, held as a structured report.
Individual diagnostics such as learning styles
Many learning and talent specialists are familiar with
instruments such as the Honey and Mumford learning
inventory and the Kolb learning styles approach. These
tend to use questionnaires to develop a ‘construct’
of the individual’s learning preference, such as their
tendency towards activist, reflector, theorist or pragmatist styles.
These states might describe activists as better at getting
on with things and less good at assessing their impact
or paying attention to detail. On the other hand,
someone displaying theorist bias might over-engineer
a project but be less good at progressing it. Some have
criticised this and the original Kolb approach as being
poorly validated, but used as indicative tools they can
help people to develop awareness, take responsibility
and pay attention to their learning style.
Team diagnostics and performance data
Team performance data can be garnered from
everything from six sigma data in an engineering
operation to team development models such as the
Belbin Team Role Inventory. They can range from basic
questionnaires to sophisticated psychometrics. Although
team coaching is likely to be the province of highly
qualified coaches and leaders, the information from
team interactions, where relevant, can be very useful in
assisting coaching conversations.
Information point: team diagnostics
For example, someone in a depleted project team who is a well-known ‘plant’ or ideas generator (in the Belbin framework) could work on becoming more of a completer/finisher to help develop the team’s capability. This is likely to enhance team performance while minimising the conflict in teams. Team ‘sociomapping’ is a technique that ‘locates’ people in a three-dimensional map according to their characteristics,
skills and preferences. It allows teams or individuals
to be developed together. This sort of diagnostic
can help teams with performance or personality
issues and focus on improvement. Again, like
psychometric tools, these need to be administered
and analysed by trained facilitators, though these
don’t always need to be psychologists, and for a fee
organisations can have their own people trained to
administer and analyse such instruments.
Employee surveys

Most organisations use surveys of some kind to gauge the opinions and views of their employees. Engagement surveys range from self-designed instruments to the well-known Gallup Q12, which uses 12 ‘critical’ questions to gauge satisfaction and engagement. Employee engagement is increasingly seen as a key driver of sustainable organisational performance (see Sustainable Organisation Performance: What really makes the difference).
Engagement surveys have also been a feature of much
work in government and have a pedigree in successful
organisations such as Standard Chartered Bank and the
boutique hotel group Malmaison. Engagement scores can
often provide detailed information on how leaders are
engaged with their reports. They can be used as a basis
of discussion for coaching in leadership development. An
employee survey can also be used to identify disengaged
and burnt-out employees, especially those in key areas,
to ‘re-motivate’ them and to identify personality conflicts
where they ‘down rate’ their manager so much that they
stand out from other team members. Looking at the
manager’s data may identify a personality clash. Given its
often subjective nature, such data should be used not as a
decision tool but as a discovery tool.
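As a sketch of how such discovery might work (invented ratings, and a threshold chosen purely for illustration), a team member whose rating of the manager sits far below the team average can be flagged for a conversation:

```python
from statistics import mean, stdev

# Invented survey data: each team member's 1-5 rating of their manager.
ratings = {"A": 4, "B": 5, "C": 4, "D": 1, "E": 4}

avg = mean(ratings.values())
sd = stdev(ratings.values())

# Flag ratings well below the team average: a prompt for a discovery
# conversation, not a verdict on anyone.
flagged = [name for name, r in ratings.items() if r < avg - 1.5 * sd]
print(flagged)  # ['D']
```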
HR data on absence, retention, exit interviews, talent planning, and so on
HR departments have a wealth of other information
that can be used to evaluate the impact of
interventions such as coaching. HR information systems, storing data on such issues as absence management and retention, job levels, promotions and vacancies, can all be used
to some degree as data for gauging the impact of
coaching and other learning and talent interventions.
Coaching content data
We should also capture the key information within
the coaching process. Each coaching conversation, for
example, will use standard models such as GROW or
solutions-focused tools to develop the structure of the
conversation. In the GROW conversation, for example, we can, at the ‘way forward’ stage, discuss in a ten-minute evaluation period whether the coach and coachee think progress has been made. We can then quickly reflect back on this data with other parties to the coaching relationship. That would be a way of ensuring that the ‘reaction’ level of the Kirkpatrick model was
being used. Scaling, where we use the numbered scale
to determine how much progress has been made and
target future improvements, is another useful approach
for gathering data within the coaching conversation.
Tony Grant, one of the leading coaching academics, says
this would be preferable to a ‘pointless ROI exercise’
(remarks to Association for Coaching Conference,
London, 8 July 2010). If we think about how we could
collate this data across coaching assignments, it could
provide us with useful data. It requires a systematic
approach using well-designed tools such as the
conversation capture thought tool.
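As a minimal sketch of such collation (invented records and field names; the conversation capture thought tool itself is not shown), scaling scores logged at the start and the latest session of each assignment can be summarised in one line:

```python
# Invented scaling records: one per assignment, scores on a 1-10 scale
# captured at the first and the most recent session.
assignments = [
    {"goal": "delegation",      "start": 3, "latest": 7},
    {"goal": "presenting",      "start": 4, "latest": 6},
    {"goal": "time management", "start": 2, "latest": 3},
]

# Average movement across assignments: one simple, reportable number
# to sit alongside the qualitative record of each conversation.
moves = [a["latest"] - a["start"] for a in assignments]
print(f"{len(moves)} assignments, average scaling gain: {sum(moves) / len(moves):.1f}")
# -> 3 assignments, average scaling gain: 2.3
```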
Having addressed the issue of evaluation data, we
will now develop a useful framework that helps us to
integrate insight about coaching process and capability
to inform the evaluation process.
Information point: stories and testimony
Stories and testimony are another important data
dimension. It is often assumed these are merely anecdotal, but when they are collected from a variety of individuals they can add up to a rich and coherent picture. The
reflective notes that are integral to a coaching
relationship don’t need to be long. They can be an
email from coach to coaching stakeholders on what
took place and what the coachee is aiming for, but
they provide a rich source of information. With
a bit of attention we can assess the conversation
content for verbal indicators of progress or look for
indications that people might be stuck in patterns,
and so on. This can work especially well when linked
with coaching conversation tools such as those
discussed above. The key issue is to get data early
and often but to avoid being too process driven and
therefore driving a ‘box-ticking’ approach.
Practice pointers 4
1 Think about developing an inventory of the data available to you through the HR route.
2 Is there other data from business processes, such as production metrics and quality data?
3 Think about your current data and what is available to inform coaching assignments.
4 If using psychometrics, do you understand what their results mean or do you have advice on this?
5 Think about how employee surveys can be used and think about including questions on the effect of coaching and other LTD issues.
PART 5
Coaching evaluation: an integrated approach
One of the key issues in coaching evaluation is to
ensure that the process is grounded in credible data
and organisational insight from the beginning. By
ensuring that we are clear at the outset, we can
develop a continuous appraisal of the value and
impact of coaching. CIPD evaluation experts Roland
and Frances Bee stress this key point, as does our
Case for Coaching book (Jarvis et al 2006). In order
to help practitioners to do this, we will use a thinking
tool known as OPRA to understand how these issues
fit together. OPRA is an approach that allows us to
link insight with data to drive effective evaluation. Its
components are outlined below. OPRA stands for:
• ownership
• positioning
• resourcing and procurement
• assessment and evaluation.
We can also use two key CIPD research projects,
Developing Coaching Capability in Organisations
(Knights and Poppleton 2008) and Coaching at
the Sharp End (Anderson et al 2009). The key issue
is to ensure that coaching is treated as a specific intervention, just as we would treat any learning
and development or leadership initiative. Sometimes
coaching is viewed as an episodic intervention going
on in the background, and because it is not like a
training course with a defined time period, it is less
easy to pin down. Yet we can make sure that we get a
systematic view on coaching by ensuring that:
• Coaching fits with the business context.
• Coaching relationships are properly designed
through robust contracting.
• The expectations and issues are specified clearly.
• Coaching links to the key processes of the
organisation.
• We are clear about the outputs required.
The steps in the OPRA approach can happen at
different times, but good coaching development takes
account of all of them. We start with the issue of
ownership and sponsorship, which is critical.
Ownership and sponsorship
Ownership and sponsorship are key and often
overlooked aspects of developing a coherent approach
towards coaching. When developing a coaching
intervention we should be aware of who owns and
sponsors coaching in terms of people and at an
organisational level:
• Who are the key people you need to consult?
• Are executives and organisational leaders involved
in the coaching programme? Are other senior
leaders acting as sponsors?
• Is coaching used in some parts of the organisation
but not others?
• Is there an open or closed approach to the
coaching that takes place?
• Are line managers and others regularly using
coaching and mentoring in managing their staff?
Figure 5: The OPRA coherent coaching thinking tool outline
• Ownership: people; organisation
• Positioning: context; purpose
• Resourcing and procurement: make/buy; manage costs
• Assessment and evaluation: contracting; results
• Is coaching/mentoring seen purely as an HR/LTD
intervention? If so, what are the reasons for that
perception?
• Does coaching have a high or low profile?
• What’s the frequency of coaching and the extent
and reach of coaching within the organisation?
These issues and how to develop an effective
mapping process on coaching capability are
addressed in our tool Developing Coaching Capability.
The key point is that we understand the ownership and sponsorship aspect from various dimensions (see the above tool for a stakeholder analysis map). How coaching is used within an organisation, and how it fits in with the context and purpose of the organisation, is the positioning piece. We address this aspect below.
Positioning
Once we are aware of the ownership issues around
coaching, we should then consider the positioning issues
around coaching. These issues are about context and
purpose directly aligned to organisational goals. We
should consider those issues in turn using the Developing
Coaching Capability tool and our coaching context
spidergram thought tool:
• Why do we want to introduce coaching? What
change do we want to see?
• What do we expect coaching to deliver?
• What’s the main purpose (consider performance,
engagement, change, agility, skills, and so on)?
• How does coaching fit with organisational priorities?
We can then use the context and purpose tools to look at the issue. The context issue is about scanning the coaching landscape within the organisation and asking the critical questions about the nature of the organisation and its environment, both internal and external.

In considering how coaching is positioned, we should pay particular attention to the following organisational questions:

• Is it fruitful (or otherwise) for coaching to be linked to organisational development and change programmes?
• Is the learning and development climate conducive
to coaching? Questions to consider include:
– Is LTD mainly based on classroom delivery or
blended approaches?
– Is the LTD department itself well disposed
towards coaching and able to manage the
process?
• Is the approach towards learning based on a
knowledge-sharing environment or a closed system/
silo mentality?
• What’s the talent/performance story around
coaching? Is coaching considered as rocket fuel for
high-flyers? A recovery track for the derailed? A
prod for poor performers? Or is coaching ‘the way
we do things’?
• Is the organisational culture likely to support or
reject coaching?
Information point: developing coaching capability
The Developing Coaching Capability project (2008)
was carried out with Ashridge Management
School, who developed the coaching context
spidergram. This allows us to think about the
various contextual issues that will impact the
success of coaching and how it relates to business
priorities. For example, a move from targets
to outcomes in a school management system
might mean an entirely different context for the
development of a coaching/mentoring programme
for teachers. Similarly, if a jewellery chain wishes
to develop a more quality-focused sales strategy,
the type of coaching will need to take account of
that business driver. Culture is another dimension.
It’s pointless introducing a coaching solution into
a top–down culture without accounting for the
impact on the existing business organisation.
Having outlined ownership and positioning, we move
now to an often neglected dimension of coaching
evaluation: resourcing and procurement, the R in OPRA.
Resourcing and procurement
Resourcing and procurement may seem a long way
from evaluation, but if you do not know your coaching resource (for example, the ratio of delivery by external coaches to internal, or the coaching skills picture in your organisation), you may end up with unreliable data. It’s also important
when resourcing and procuring coaching services,
whether external or internal, to ensure that the coaches
understand the importance of evaluation. So in the
resourcing and procurement aspect of OPRA, consider:
• How is coaching bought and paid for?
• What resource does the organisation have to deliver coaching?
• What’s the external and internal mix?
• How much coaching capability do you have internally?
• How will it be deployed to obtain the best value?
• How do you address training and supervision for large internal programmes?
A clear view of resourcing will allow you to get to the
core of what sort of coaching your organisation is
prepared to make available. The CIPD has provided a
range of tools and support for those buying coaching
services. Our Coaching and Buying Coaching Services
guide was updated in 2008, providing a wealth of
information on the critical issues around selecting
coaching services. Everything from how to interview
prospective coaches to how to run an assessment
centre is covered. You can download the guide at
cipd.co.uk/atozresources
Another aspect of intelligent coaching resourcing is to
conduct a make or buy exercise. This will demonstrate,
especially in straitened times, that you have thought
about the costs and benefits of coaching (see Coaching:
make or buy matrix thought tool).
Information point: the line manager and internal resource
One key aspect of evaluating coaching capability
is to know how ready and equipped line managers are to deliver coaching if, on the make or buy pathway, your organisation decides to source coaching internally.
The Coaching at the Sharp End online tool
provides a method of assessing line managers’
readiness to coach and their coaching skills
levels. This is based on a validated study using a
questionnaire of 521 managers identifying the
factors that drove line manager coaching (see
CIPD 2009a).
Line manager coaching can range from using the basic GROW conversation in a routine one-to-one with staff to simply adopting a more listening, less directive approach. This is defined
as primary coaching in the Coaching at the
Sharp End model in Figure 6 overleaf. Such skills
tend to focus on performance. Line managers
and internal coaches can also develop deeper
coaching approaches designed to encourage
shared decision-making or unleash creativity and
skills. At an even higher level, with the right skills
internal coaches can deliver team coaching and
even internal consultancy. These are the mature
coaching skills and are set out in the model,
tending towards an empowerment focus.
Figure 6: Coaching characteristics of line managers. The model maps line manager coaching along two dimensions: from a performance orientation to a development orientation, and from a performance focus to an empowerment focus. Primary coaching (performance focus) centres on planning/goal-setting and effective feedback; mature coaching (empowerment focus) centres on powerful questioning, using ideas, shared decision-making and encouraging problem-solving.
Assessment and evaluation
The final aspect is assessment and evaluation, which
needs to be driven by appropriate data and insight
and informed by the other aspects of the model. Good
evaluation comes from an integrated approach.
Practice points 5: OPRA
1 Put your organisational coaching through the OPRA process to get a quick insight into how to deliver integrated ‘evaluation ready’ coaching.
2 Use the coaching capability tool to look at ownership and positioning.
3 Use our Coaching and Buying Coaching Services guide and our ‘make or buy’ matrix to think about coaching resource.
4 Use the Coaching at the Sharp End tool to look at your internal coaching capability with fresh eyes.
5 Look at the case studies in the Appendix to see how your own practice compares.
PART 6
Conclusion and recommendations for practice
Coaching evaluation is increasingly critical and needs to be addressed. The evaluation gap is clear from successive CIPD surveys and from practical day-to-day experience. Evaluation is a must-have and we need to engineer it into coaching and embed the process all the way through. In order to do that, we need first of all to understand why evaluation is not taking place. We addressed that issue in Part 2: coaching is seen as a soft and abstract thing to measure, it is often difficult to collect usable information, and the skills and mindsets needed for evaluation may not be commonly found in coaches and LTD practitioners. We also addressed the problems around ROI and the fixation with its end-point calculation.
We discussed the shortcomings of the Kirkpatrick model
and positioned the return on expectations model as
a richer and more grounded approach to evaluation.
We then discussed what we needed to do to develop
capability in that area by outlining the OPRA framework
of looking at the key issues of:
• ownership and sponsorship
• positioning and context
• resourcing and procurement
• assessment and evaluation.
We examined each aspect of the OPRA model in turn, introducing online toolkits, ‘thought tools’ and models to help practitioners find a way of collating information throughout the process on these critical dimensions of coaching capability. In terms of positioning, the coaching context ‘spidergram’ was used to help practitioners find a simple way of assessing the key context issues in coaching. For resourcing and procurement we introduced the ‘make or buy’ matrix to allow the nature of coaching resourcing to be examined. For assessment and evaluation, we introduced a stepped evaluation model around our RAM framework, addressing the integrated steps and data sources necessary for effective evaluation. We then looked in detail at the application of the OPRA framework. Taken together, our discussion of the evidence base for poor evaluation practice, and our suggestions as to why that may be the case, drove us towards a framework for coherent evaluation. We considered the paucity of evaluation methods and the focus on stale and overused models, then the shortcomings of crude ROI as a methodology, and then the data sources we should use, suggesting an integrated evaluation process. This raises
both the awareness of coaching and the responsibility
for evaluation among practitioners. It is our view
that this model will help embed effective practice.
The model allows key streams of business data to be
integrated in a way that captures coaching evaluation
without undue complexity. We make a number of
recommendations and suggestions for practice.
Recommendations
• Ownership and sponsorship are critical, as is
business context, and should be at the forefront.
• Coaching can only be effectively evaluated if it
is properly positioned and aligned with business
priorities.
• Attention to resourcing and procurement ensures
that we can deliver effective coaching resource and
contributes to evaluation.
• Evaluation should move beyond the Kirkpatrick
levels approach, especially ‘happy sheets’ and
‘warm words’, towards more integrated approaches
that utilise both qualitative and quantitative data.
• The reflective notes and conversation tools that
drive coaching such as GROW and scaling data
provide excellent raw material for coaching
evaluation, as do 360-degree feedback data,
psychometrics, learning inventories, team
diagnostics, appraisal tools, engagement surveys,
HR metrics and KPIs.
• A financial return on investment (ROI), though
often seen as the ‘Holy Grail’, is on its own seldom
appropriate.
• Useful testimony of the impact of coaching
from managers and fellow employees is another
appropriate evaluation resource if systematically
recorded and appraised.
• The best evaluation has explicit links between
coaching and key business metrics, such as KPIs,
organisational targets and service levels.
• Metrics such as 360-degree feedback, performance
appraisal, psychometrics and strengths/learning
inventories should be used more.
• Line manager capability is a critical resource in
coaching and we offer tools and insights to help
develop this.
• Successful evaluation is about developing links
between ownership and sponsorship, positioning
and context, resourcing and procurement, and
assessment and evaluation.
SOURCES OF INFORMATION
CIPD sources
ANDERSON, V. (2007) The value of learning: from
return on investment to return on expectation.
Research into practice. London: Chartered Institute of
Personnel and Development.
ANDERSON, V., RAYNER, C. and SCHYNS, B. (2009)
Coaching at the sharp end: the role of line managers
in coaching at work. Research into practice. London:
Chartered Institute of Personnel and Development.
CHARTERED INSTITUTE OF PERSONNEL AND
DEVELOPMENT. (2008) Coaching and buying coaching
services [online]. Guide. 2nd ed. London: CIPD.
Available at: www.cipd.co.uk/AtoZresources
[Accessed 24 August 2010].
CHARTERED INSTITUTE OF PERSONNEL AND
DEVELOPMENT. (2009a) Innovative learning and talent
development: positioning practice for recession and
recovery [online]. Hot topic. London: CIPD. Available at:
www.cipd.co.uk/AtoZresources
[Accessed 24 August 2010].
CHARTERED INSTITUTE OF PERSONNEL AND
DEVELOPMENT. (2009b) Promoting the value of
learning in adversity [online]. Guide. London: CIPD.
Available at: www.cipd.co.uk/AtoZresources
[Accessed 24 August 2010].
CHARTERED INSTITUTE OF PERSONNEL AND
DEVELOPMENT. (2009c) Taking the temperature
of coaching [online]. Survey report. London: CIPD.
Available at: www.cipd.co.uk/AtoZresources
[Accessed 24 August 2010].
CHARTERED INSTITUTE OF PERSONNEL AND
DEVELOPMENT. (2010) Learning and talent
development [online]. Survey report. London: CIPD.
Previous editions of this survey from 2005 to date are
also available at: www.cipd.co.uk/surveys
[Accessed 24 August 2010].
JARVIS, J., LANE, D. and FILLERY-TRAVIS, A. (2006)
The case for coaching: making evidence-based
decisions. London: Chartered Institute of Personnel
and Development.
KNIGHTS, A. and POPPLETON, A. (2008) Developing
coaching capability in organisations. Research into
practice. London: Chartered Institute of Personnel and
Development.
Books and articles
ANDERSON, M. (2001) Case study on return on
investment in executive coaching. Executive briefing.
Des Moines, Iowa: Metrix Global.
BELBIN, M. (2008) The Belbin guide to succeeding at
work. Cambridge: Belbin Books.
DE HAAN, E. (2008) Relational coaching: journeys
towards mastering one-to-one learning. Chichester:
Wiley.
GRANT, A., PASSMORE, J., CAVANAGH, M. and
PARKER, H. (2010) The state of play in coaching today:
a comprehensive review of the field. International
Review of Industrial and Organizational Psychology.
Vol 25. pp125–167.
HOLTON, E.J. (1996) The flawed four-level evaluation
model. Human Resource Development Quarterly.
Vol 7, No 1. Spring.
KIRKPATRICK, D. and KIRKPATRICK, J. (2005)
Transferring learning to behavior: using the four levels
to improve performance. San Francisco, CA:
Berrett-Koehler.
MCGOVERN, J., LINDEMAN, M., VERGARA, M.,
MURPHY, S., BARKER, M. and WARRENFELTZ, R.
(2001) Maximising the impact of behavioural coaching,
behavioural change, organizational outcomes and return
on investment. The Manchester Review. Vol 6, No 1.
PASSMORE, J. (2008) Psychometrics in coaching: using psychological and psychometric tools for development. London: Kogan Page.

PFEFFER, J. and SUTTON, R. (2000) The knowing-doing gap: how smart companies turn knowledge into action. Boston, MA: Harvard Business School Press.

PHILLIPS, J. and PHILLIPS, P. (2007) Show me the money: how to determine ROI in people, projects and programs. San Francisco, CA: Berrett-Koehler.

PHILLIPS, J. and PHILLIPS, P. (2008) ROI in action casebook. New York: Wiley.

SAVILLE, P. and HOPTON, T. (2009) Talent: psychologists personality test elite people. St. Helier: Saville Consulting.

Links

Association for Coaching: www.associationforcoaching.com

European Mentoring and Coaching Council: www.emccouncil.org
Incorporated by Royal Charter. Registered charity no. 1079797
Issued: September 2010 Reference: 5350 © Chartered Institute of Personnel and Development 2010
Chartered Institute of Personnel and Development
151 The Broadway London SW19 1JQ
Tel: 020 8612 6200 Fax: 020 8612 6201
Email: [email protected] Website: cipd.co.uk