AdaptME toolkit
ADAPTATION MONITORING & EVALUATION
Acknowledgements

© UKCIP, November 2011

This report should be referenced as:
Pringle, P. 2011. AdaptME: Adaptation monitoring and evaluation. UKCIP, Oxford, UK.

UKCIP, School of Geography and the Environment, OUCE, South Parks Road, Oxford OX1 3QY
T: 01865 285717
F: 01865 285710
E: [email protected]
www.ukcip.org.uk
www.ukcip.org.uk/adaptme-toolkit/

Written by Patrick Pringle, with contributions from Kate Lonsdale, Megan Gawith, Mark Goldthorpe and Roger Street.

Many thanks to the following people who contributed thoughts and ideas which informed this toolkit:

Molly Anderson, Environment Agency
Emily Beynon, Adaptation Sub-Committee
Muriel Bonjean, Global Climate Adaptation Partnership
Emily Boyd, Leeds University
Ruth Butterfield, SEI Oxford
Tom Downing, SEI Oxford
Jens Evans, Environment Agency
Saminder Gaheer, Defra
Kristen Guida, Climate SE
Adrian Hilton, Climate NE
Lisa Horrocks, AEA
Ceris Jones, National Farmers Union
Simon Jude, Cranfield University
Peter Kaufman, University of Sussex
Katharine Knox, Joseph Rowntree Foundation
Joseph Lowe, HM Treasury
Nick MacGregor, Natural England
Heather McGray, World Resources Institute
Michael Mullan, Defra
Olivia Palin, Acclimatise
Steve Rayner, Saïd Business School
Christine Roehrer, DFID
Margaret Spearman, World Resources Institute
Swenja Surminski, ABI
Jenny Swift, SQW Consulting
Anna Taylor, SEI Oxford
Emma Tompkins, Leeds University
Nikki van Dijk, Atkins
Alan Watson, National Trust
Contents
1. Why a toolkit?
2. What is the purpose of my evaluation?
3. What is it that I’m evaluating?
4. What logic and assumptions underpin the intervention I will be evaluating?
5. What are the challenges I might face when evaluating adaptation performance?
6. What limitations are placed upon my evaluation?
7. Measuring progress and performance
8. Establishing evaluation criteria: indicators and metrics
9. How do I evaluate the unintended and unexpected?
10. Who should I involve in the evaluation?
11. How should I communicate the findings?
1. Why a toolkit?
Organisations that have undertaken adaptation activities now
face the challenge of evaluating what worked, what didn’t,
how and why. This toolkit responds to a growing demand
for practical support in evaluating adaptation progress and
performance, and enhances Step 5 of the UKCIP Adaptation
Wizard (Monitor and Review).
PURPOSE OF THE TOOLKIT
AdaptME will help you to think through some of the factors that can make an evaluation of
adaptation activities inherently challenging, and equip you to design a robust evaluation.
You will be able to ‘tweak’ a single part of your evaluation design or use multiple tools to
build a new approach – that is for you to decide!
This toolkit will help you to:
• Refine your evaluation purpose and objectives
• Reflect on what you are trying to evaluate and the logic behind this
• Understand how specific traits of climate adaptation can make evaluation challenging and how you can overcome these challenges
• Draw out, understand and re-evaluate your assumptions
• Consider how progress and performance might be best measured and evaluated
• Identify examples, good practice and techniques which may help ensure your evaluation is robust in the context of climate change
• Prioritise your evaluation activities, recognising that evaluations need to be proportionate to the investment and are resource limited.
AdaptME does not seek to provide a comprehensive evaluation framework as it is clear that
there is no one-size-fits-all approach to evaluating adaptation. Often climate adaptation
may be just one aspect of a broader evaluation. Although the toolkit focuses on the process
of evaluation, it is also relevant to the development of monitoring frameworks.
HOW THE TOOLKIT CAME ABOUT
Workshops with practitioners and experts in adaptation, monitoring and evaluation were
the starting point for the toolkit. The outputs from these workshops, a review of key literature, and experience of adaptation and evaluation practice were used to inform the final
AdaptME toolkit.
WHY IS MONITORING AND EVALUATION IMPORTANT IN THE CONTEXT OF ADAPTATION?
We are still at an early stage in understanding how best to adapt to future climate change,
how vulnerability can be most effectively reduced and resilience enhanced, and what the
characteristics of a well-adapting society might be. Learning what works well (or not), in
which circumstances and for what reasons, is critical. It raises two key questions:
• Are we doing things right? and
• Are we doing the right things?
The complex and long-term nature of climate change places a strong emphasis on embedding monitoring and evaluation as a continuous and flexible process (UNFCCC, 2010). Such
an approach can stimulate a process of ongoing improvement, and help you to understand
adaptation from different perspectives. It can also help you to understand the process of
adaptation as well as how well a particular intervention worked. Most importantly, it will
enable you to make better, more informed decisions in the future and strengthen future
adaptations.
2. What is the purpose of my evaluation?
In order to get the most out of the evaluation process it is vital
to understand the purpose of undertaking such work. More
specifically, what are the reasons for the evaluation, what do you hope to get out of it,
and what type of evaluation are you planning?
As you might expect, the purpose of your evaluation will relate closely to your objectives;
why you do something is likely to inform what you want to achieve. It will be important
that before designing your evaluation approaches you are clear about the purpose and
objectives and maintain a focus upon these during the design, delivery and dissemination
of your work.
The reasons for your evaluation may be complementary or conflicting. By understanding
these potential synergies and tensions at the planning stage, a more balanced and effective
evaluation approach can be developed. Some of the most common reasons for evaluations
are explored below. This list is by no means exhaustive but can be used to reflect on why
you are undertaking the evaluation and what you hope to achieve.
TO EVALUATE EFFECTIVENESS
Evaluations often seek to discover whether or not an intervention has achieved the outputs
and outcomes it originally intended. In order to do this it is essential that the objectives
(outputs and outcomes) are clearly specified at the start. Understanding effectiveness is
important in the context of adaptation, as we are still learning which interventions are
most effective, in what circumstances and why. It is also important to consider whether
what was intended was actually appropriate or needed.
TO ASSESS EFFICIENCY
Evaluators may want to determine the efficiency of the intervention including assessing the
costs, benefits and risks involved and the timeliness of actions. This may mean employing
economic evaluation techniques in which the costs and benefits are calculated in financial
terms. Such an approach may be required in the context of adaptation, as additional
investments will need to be assessed and justified.
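Where such economic techniques are used, a discounted cost-benefit comparison is often the starting point. The sketch below is illustrative only: the flood-defence scheme, its cash flows and the 25-year horizon are invented, and the 3.5% discount rate follows the HM Treasury Green Book convention for shorter appraisal horizons.

    # A minimal sketch of a discounted cost-benefit comparison for an adaptation
    # intervention. All cash-flow figures are purely illustrative; the 3.5%
    # discount rate follows HM Treasury Green Book convention.

    def npv(cash_flows, discount_rate=0.035):
        """Net present value of (year, amount) cash flows, discounted to year 0."""
        return sum(amount / (1 + discount_rate) ** year for year, amount in cash_flows)

    # Hypothetical flood-defence scheme: capital cost now, avoided losses later.
    costs = [(0, -250_000)]                               # year 0 investment
    benefits = [(year, 20_000) for year in range(1, 26)]  # avoided losses, years 1-25

    print(f"NPV over 25 years: £{npv(costs + benefits):,.0f}")

A positive NPV suggests the intervention is efficient in narrow financial terms, but who bears the costs, who receives the benefits, and any non-economic costs and benefits still need separate consideration.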
TO UNDERSTAND EQUITY
The impacts of climate change will be experienced unevenly, both spatially and temporally,
and the consequences of climate change will also vary as a result of the differing vulnerability of individuals and communities. Thus equity and justice are important factors to
consider when evaluating the appropriateness and effectiveness of adaptation interventions. This may raise questions about the effects of the project on different social groups
and their ability to engage in (procedural justice) and benefit from the intervention;
whether the intervention has targeted the ‘right’ people; and whether certain groups are
exposed to disproportionate risks, bear additional costs or suffer disbenefits as a result of
the intervention.
TO PROVIDE ACCOUNTABILITY
There may be a contractual or procedural requirement to undertake an evaluation to ensure
that commitments, expectations, and standards are met. This is especially true where public
money has been invested in adaptation and evidence is needed to illustrate the achievements and challenges of the project. Accountability may overlap with efficacy and efficiency
considerations, for example to account for an investment in terms of its costs and benefits.
TO ASSESS OUTCOMES
An evaluation may seek to provide an understanding of the outcomes of an intervention
and the impacts that it has had. This can be challenging as there is a need to disentangle
those outcomes which can be attributed to the intervention, as opposed to those resulting
from a range of other variables. In the context of climate change, this can be made harder
as there may be a long time period before outcomes can be assessed. In addition, the
avoidance of negative consequences can be a successful outcome in adaptation yet can
be hard to measure and assess, precisely because they have been avoided! The assessment
of outcomes tends to be associated with summative evaluation approaches and the use of
impact indicators.
TO IMPROVE LEARNING
Learning should permeate all reasons for undertaking an evaluation; we should always be
looking to improve our understanding of adaptation interventions, what works and why.
However, the reality is that the creativity and time invested in learning can vary considerably between evaluations. This can be the result of:
• a tension between learning (‘what happened and why?’) and accountability (‘have we done what we said we would?’)
• the limitations placed upon monitoring and evaluation processes.
Recognising these tensions and identifying who should be learning what and how can help
you to achieve your learning objectives.
Figure 1: Internal and shared learning through monitoring and evaluation
Learning can occur in different spaces, within and between organisations, communities
and sectors. Given the complex nature of adaptation, we should look to combine our own
learning objectives with broader societal learning about adaptation. While some information may be commercially sensitive, much of the time sharing knowledge and experience
of adaptation makes sound business sense, helping to make future adaptation interventions
more efficient and cost effective. Figure 1 illustrates how this learning can have benefits for
specific interventions through an iterative process of review and improvement. In addition,
learning and experience can ‘spin out’ of project or programme-specific monitoring evaluation processes to benefit other projects, programmes and organisations. Good evaluations
make use of this ‘shared space’ to gather knowledge and inform future adaptation actions.
Such an approach will help society to learn how best to adapt to a changing climate much
more efficiently than working in isolation (a ‘trial and error’ approach).
[Figure 1 depicts individual projects, each with an iterative cycle of monitoring & evaluation and project improvement, linked through a shared space of comparative learning, cross-sectoral working and diffusion of ideas.]
TO IMPROVE FUTURE INTERVENTIONS
The purpose of your evaluation may be to strengthen future activities and interventions
either at the end of a project (to inform future projects) or mid-way through an on-going
project. This would suggest a strong focus on learning in the design of your evaluation
and, where appropriate, the use of a formative methodology. Given that we are at an early stage
in adapting to climate change this should be a strong consideration for all evaluation
processes.
TO COMPARE WITH OTHER SIMILAR INTERVENTIONS
You may wish to undertake a comparative evaluation to understand how the impact of
an adaptation intervention varies in different locations or communities, or to compare the
implementation and outputs of one adaptation option with those of another.
QUESTIONS TO CONSIDER
• What is the purpose of your evaluation and what would you like to learn?
• How might you maximise the synergies between these purposes or
manage conflicting purposes?
• What trade-offs might you have to make and can these be justified?
• Have you defined the learning objectives of your evaluation? Who should
be learning what and how?
FURTHER INFORMATION ON ECONOMIC EVALUATION TECHNIQUES
• World Bank, 2010. Economic Evaluation of Climate Change Adaptation Projects: Approaches for the Agricultural Sector and Beyond. World Bank, Washington. http://siteresources.worldbank.org/ENVIRONMENT/Resources/DevCC1_Adaptation.pdf
TYPES OF EVALUATION
There are many different categories and types of evaluation. One particularly useful distinction which can be made is between formative and summative evaluations. A formative evaluation focuses on ways of improving a project or programme while it is still happening, and
is often associated with ex-ante and mid-term evaluations. In contrast, a summative evaluation seeks to judge the overall effectiveness of an intervention, usually after a project or
programme has been completed (ex-post). The difference between formative and summative evaluations has been neatly summarised below:
“When the cook tastes the soup, that’s formative; when the guests taste the soup,
that’s summative” (Robert Stake)
As recognised by UNDP, ‘adaptation is not generally an outcome, but rather consists of a
diverse suite of ongoing processes (including social, institutional, technical and environmental processes) that enable the achievement of development objectives (UNDP, 2007)’.
If adaptation is viewed as an ongoing process in response to a changing baseline of climate
impacts and societal consequences, the forward-looking nature of formative evaluations can
often provide a rich source of techniques to inform future adaptation strategies. However,
it is important to recognise that for accountability purposes, summative approaches can
provide a valuable evidence base by reflecting on past experience.
The type of evaluation is inextricably linked to the purposes of the evaluation. For example,
if efficiency and value for money are key purposes of your evaluation, then you may well
favour an economic evaluation with a strong focus on the quantification of costs and benefits of an adaptation intervention. However, such an approach may be less effective at
considering who bears the costs or experiences the benefits or what the non-economic
costs and benefits might be, unless this is factored into your evaluation design. As will
become evident, designing an evaluation usually results in trade-offs being made between
competing and, at times, conflicting purposes and objectives.
QUESTIONS TO CONSIDER
• Given the purposes you have identified, what are the benefits of
formative and summative approaches?
• What balance do you need to strike between these two types of
evaluation? What can you learn from other types of evaluation?
FURTHER INFORMATION ON TYPES OF EVALUATION
• HM Treasury, 2001. Chapter 2: Identifying the right evaluation for the policy (p. 21). The Magenta Book: Guidance for Evaluation. www.hm-treasury.gov.uk/d/magenta_book_combined.pdf
• National Science Foundation, 2002. Evaluation and Types of Evaluation. www.nsf.gov/pubs/2002/nsf02057/nsf02057_2.pdf – an easy-to-read introduction to why evaluations are useful and to different types of evaluation.
3. What is it that I’m evaluating?
Having established the purposes of your evaluation and
considered the type of evaluation you are seeking to deliver, we
shall now turn to what it is that you are evaluating.
There is an almost limitless diversity of possible adaptation interventions, relating to all
sectors and subjects (biodiversity, health, the built environment, transport infrastructure,
agriculture, energy, etc.) and responding to a range and combination of possible impacts
(heat, rainfall, wind speed, sea level rise, etc.). Particular sectors and disciplines may also
have existing data or standards which should be accounted for during the design of an
evaluation. Setting clear boundaries around what you are going to evaluate (and what you
are not) will enable you to select the most appropriate methodology.
ADAPTIVE CAPACITY OR ADAPTATION ACTION?
It can be useful to consider what you are evaluating in terms of two broad categories of
planned adaptation: Building Adaptive Capacity (BAC) and Delivering Adaptation Actions
(DAA). In practice, your intervention may involve activities relating to both adaptive capacity
and adaptation actions, but this distinction may provide a practical way of thinking about
what you are evaluating and therefore how performance and progress can be best assessed.
Building Adaptive Capacity (BAC) involves developing the institutional capacity to
respond effectively to climate change. This means compiling the information you need and
creating the necessary regulatory, institutional and managerial conditions for adaptation
actions to be undertaken. BAC activities include:
• Gathering and sharing information (e.g. undertaking research, monitoring data and company records, and raising awareness through education and training initiatives);
• Creating a supportive institutional framework (changing standards, legislation, and best practice guidance, and developing appropriate policies, plans and strategies);
• Creating supportive social structures (changing internal organisational systems, developing personnel or other resources to deliver the adaptation actions, and working in partnership).
In order to evaluate the development of adaptive capacity, it is necessary to identify the
capacities which are required to facilitate adaptation in the context of your intervention.
This then enables you to establish monitoring systems and to evaluate progress. Process
indicators (see Section 8) can be a useful tool in this context, as they provide a means of
measuring the ways in which a service or intervention has been delivered.
Adaptive capacity can be framed in different ways and there is no single definitive list
to determine which capacities you may wish to consider. However, the following studies
provide a useful introduction to the concept of adaptive capacity and how it may be measured or evaluated.
FURTHER INFORMATION ON ADAPTIVE CAPACITY
• Berkhout, F., Hertin, J. & Arnell, N. 2004. Business and Climate Change:
Measuring and Enhancing Adaptive Capacity. Tyndall Centre Technical
Report 11. Oxford: Tyndall Centre for Climate Change Research. http://www.tyndall.ac.uk/sites/default/files/it1_23.pdf
• Gupta, J. et al. 2010. The Adaptive Capacity Wheel: a method to assess the
inherent characteristics of institutions to enable the adaptive capacity of
society. Environmental Science and Policy 13, 459–471.
• Lonsdale, K.G., Gawith, M.J., Johnstone, K., Street, R. B., West, C.
C. & Brown, A. D. 2010. Attributes of Well-Adapting Organisations: A
report prepared by UK Climate Impacts Programme for the Adaptation
Sub-Committee. UKCIP, Oxford. http://www.ukcip.org.uk/wordpress/wp-content/PDFs/UKCIP_Well_adapting_organisations.pdf
• PACT framework – a potentially useful tool for assessing and improving
your organisation’s response to the challenges posed by climate change,
structured around six response levels. http://www.pact.co/home
• UKCIP, 2005. Principles of good adaptation. UKCIP Guidance Note, UKCIP,
Oxford. http://www.ukcip.org.uk//essentials/adaptation/goodadaptation/
• World Resources Institute, 2009. The National Adaptive Capacity
Framework: Key Institutional Functions for a Changing Climate. http://pdf.wri.org/working_papers/NAC_framework_2009-12.pdf
• Smith, T.F., Brooke, C., Meacham, T.G., Preston, B., Gorddard, R.,
Withycombe, G., Beveridge, B., & Morrison, C. 2008. Case Studies of
Adaptive Capacity: Systems approach to Regional Adaptation Strategies.
Prepared for the Sydney Coastal Counties Group, Sydney, Australia.
• Yohe, G. & Tol, R.S.J. 2002. Indicators for social and economic coping
capacity – moving toward a working definition of adaptive capacity.
Global Environmental Change 12, 25–40.
Delivering Adaptation Actions (DAA) involves taking practical actions to either reduce
vulnerability to climate risks or to exploit positive opportunities and may range from simple
low-tech solutions to large scale infrastructure projects. DAA can include:
• Accepting the impacts, and bearing the losses that result from those risks (e.g. managed retreat from sea level rise)
• Off-setting losses by sharing or spreading the risks or losses (e.g. through insurance)
• Avoiding or reducing exposure to climate risks (e.g. building new flood defences, or changing location or activity)
• Exploiting new opportunities (e.g. engaging in a new activity, or changing practices to take advantage of changing climatic conditions).
Adaptation Actions can be evaluated using a range of standard evaluation approaches.
However, the complex and long term attributes of some adaptation actions can create
particular challenges for evaluators which are explored later (‘What are the challenges I
might face when evaluating adaptation performance?’).
QUESTIONS TO CONSIDER
• Does the intervention you are evaluating involve building adaptive
capacity, adaptation actions or both?
• Does your evaluation focus on a particular sector or discipline? If so, are
there particular data sources or standards which might be applicable to
your evaluation?
4. What logic and assumptions underpin the intervention I will be evaluating?
Efforts to evaluate an adaptation intervention require a clear
sense of what a particular action was, and is, expected to
achieve or deliver, reflected in unambiguous objectives.
However, given the complexity of climate change and the importance of learning what
works and why, an effective evaluation needs to examine the thinking or logic which lies
behind these objectives. We also need to look beyond the objectives to capture the unexpected and unintended outcomes.
A useful method of exploring these issues is to map out an adaptation logic model. A logic
model provides a graphical description of the adaptation process, project or programme
that has been planned. A logic model draws upon many of the same concepts as a ‘theory
of change’ or ‘logical framework’ (logframe) which you may be familiar with. These
approaches can help when designing the intervention, but are equally important when
designing the evaluation approach.
ADAPTATION LOGIC MODEL
A sound Adaptation Logic Model can form the foundation of an effective evaluation by
providing a clear, concise view of the adaptation intervention you are planning to evaluate.
It lays out clearly what the intervention hopes to achieve; who will be affected by it, how
and when; and what resources will be provided. By understanding the underlying logic
behind the intervention it is possible to evaluate:
• The connections between inputs, activities and outputs
• The planned impacts and outcomes (project objectives)
• The appropriateness of the logic behind the intervention, including the assumptions that were made
• If and how unexpected or unintended outcomes may have come about.
Figure 2: An example of an Adaptation Logic Model: Commercial Property Over-heating Project
The example in Figure 2 illustrates how such a logic model can be presented, using the
hypothetical example of a project to address overheating in commercial buildings. Of critical importance is the identification of assumptions. Any theory of change or logical model
requires certain assumptions to be made about how inputs can generate activities that
will result in the desired outputs, outcomes and impacts. A good evaluation must explore
and challenge these assumptions. This is particularly true in the case of climate adaptation
where there can be considerable uncertainty. One commonly made assumption involves
determining an ‘acceptable level of risk’, e.g. a coastal defence project may be developed
on the basis that not all properties can realistically be protected from a major flood event.
An evaluation should examine the basis of the assumption in order to understand whether
the project is ‘doing the right thing’ and whether it is ‘doing things right’. A good evaluation must also explore any unintended and unexpected outcomes and impacts. This is
especially important in a relatively new field such as climate adaptation.
ADAPTATION LOGIC MODEL (hypothetical Commercial Property Over-heating Project)

Inputs: funding; expertise & knowledge; information
Activities: events; research; training
Outputs: business information sessions; comparative cost report; new skills & knowledge
Outcomes: businesses have improved understanding of adaptation and of its economic impact, enabling them to make better adaptation decisions
Impacts: local economy more robust; increased productivity
Unintended outcomes: participating businesses share knowledge; greater willingness to incorporate adaptation features
Unexpected impacts: site-based adaptation plans developed; adaptation levy developed for business tenants
Assumptions: information will remain valid; there is sufficient incentive to engage; information alone provides sufficient incentive; businesses are able to influence future building design
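To make the model concrete in evaluation planning, the sketch below shows one way such a logic model might be captured as a simple data structure, so that assumptions sit alongside the stages they connect and can be revisited during the evaluation. The class and field names are illustrative, not part of the toolkit.

    # A minimal sketch of an Adaptation Logic Model as a data structure.
    # Field names and example entries are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class AdaptationLogicModel:
        inputs: list[str]
        activities: list[str]
        outputs: list[str]
        outcomes: list[str]
        impacts: list[str]
        assumptions: list[str]  # recorded explicitly so the evaluation can test them
        unintended_outcomes: list[str] = field(default_factory=list)
        unexpected_impacts: list[str] = field(default_factory=list)

    overheating_project = AdaptationLogicModel(
        inputs=["funding", "expertise & knowledge", "information"],
        activities=["events", "research", "training"],
        outputs=["business information sessions", "comparative cost report"],
        outcomes=["improved understanding of adaptation and its economic impact"],
        impacts=["local economy more robust", "increased productivity"],
        assumptions=["information will remain valid",
                     "information alone provides sufficient incentive to engage"],
    )

Recording unintended outcomes and unexpected impacts as first-class fields, rather than as afterthoughts, makes it harder for an evaluation to overlook them.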
QUESTIONS TO CONSIDER
• How does the Adaptation Logic Model help you focus your evaluation?
• What assumptions lie behind the Adaptation Logic Model and how might
these be tested? Are these assumptions valid?
RESOURCES ON LOGIC MODELS AND THEORY OF CHANGE
• Kellogg Foundation, 2004. Using Logic Models to Bring Together Planning, Evaluation, and Action: Logic Model Development Guide. www.wkkf.org/knowledge-center/resources/2006/02/WK-Kellogg-Foundation-LogicModel-Development-Guide.aspx
• Learning for Sustainability (LfS). Theory of change and logic models.
http://learningforsustainability.net/evaluation/theoryofchange.php
• Taylor-Powell, E. & Henert, E. 2008. Developing a Logic Model: Teaching
and Training Guide. University of Wisconsin. www.uwex.edu/ces/pdande/evaluation/pdf/lmguidecomplete.pdf
• Innovation Network. Logic model workbook. www.innonet.org/client_docs/File/logic_model_workbook.pdf
5. What are the challenges I might face when evaluating adaptation performance?
Based on the theory of change you have described, you are likely
to face particular challenges in understanding what has and
hasn’t worked. UKCIP has worked with adaptation practitioners
to identify a number of ‘tricky issues’ which can arise when
evaluating climate adaptation interventions. These are not
unique to adaptation but are often experienced when evaluating
adaptation interventions.
COPING WITH UNCERTAINTY
Uncertainty is an inherent attribute of future climate and, inevitably, our understanding of
climate science, impacts and risks is dynamic. The result is that the goalposts for adaptation
M&E may appear to be continually shifting, making it hard to establish appropriate objectives and measures. If we don’t know the extent to which the climate will change and how
society might respond, it can seem difficult to evaluate the success or appropriateness of
an intervention.
POSSIBLE RESPONSES
• Use formative evaluation approaches which focus on strengthening future adaptation interventions. This is consistent with a view of adaptation as a continuous process rather than an end point to be reached.
• Establish baselines so it is possible to track what has changed from when the intervention was first developed. Such baselines may relate to climate and weather data but can also apply to public perception, economic conditions (e.g. the cost of interventions may reduce as technology advances) or scientific knowledge. Note that establishing baselines can be time consuming, so they should be developed only where it is proportionate to the intervention.
• Ensure that the evaluation challenges assumptions but also examines the conditions in which such assumptions were made.
• Where uncertainty is high, the flexibility of the intervention should become an important success measure for the intervention. Robustness of the intervention and its outcomes to a variety of possible futures may become a key factor to consider in the evaluation.
DEALING WITH LONG TIMESCALES
Significant time lags exist between interventions and measurable impacts. The timescales
over which the effectiveness of an adaptation may need to be measured are such that there
can be a substantial gap between taking action (or making an investment) and measurable
impact (or the return on the investment).
POSSIBLE RESPONSES
• View adaptation as an iterative, formative process; ‘climate change in the foreseeable future will not be some new stable ‘equilibrium’ climate, but rather an ongoing ‘transient’ process’ (Pittock and Jones, 2000); our adaptation responses must also be viewed in this way.
• Make your assumptions clear through your Adaptation Logic Model. Long time frames mean your assumptions are likely to change with circumstances.
• Ensure regular monitoring and evaluation is in place to enable progress to be tracked.
• Use process indicators to determine whether progress is on track even if impacts cannot be determined yet.
• Understand the decision lifetime of your adaptation intervention. The decision lifetime is the sum of the lead time (the period from the first idea to the execution of the project) and the consequence time (the time period over which the consequences of the decision emerge). For example, the use of drought-tolerant crops may have a long lead time through a careful programme of plant breeding, but a relatively short consequence time as the farmer may only choose to grow the crop for a single season (see Stafford Smith et al., 2011). By understanding the decision lifetime it will be possible to phase your M&E work more effectively; a simple sketch follows this list.
• Retaining flexibility (thus avoiding becoming ‘locked in’ to a potentially maladaptive response) is an important attribute of effective adaptation. Consider how you might evaluate whether the adaptation intervention has successfully ‘retained flexibility’ in response to a range of futures.
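The decision lifetime idea reduces to simple arithmetic, sketched below with invented figures: a crop choice with a long breeding lead time but a single-season consequence time, against a sea wall with the opposite profile.

    # Minimal sketch of 'decision lifetime' = lead time + consequence time
    # (after Stafford Smith et al., 2011). All figures are illustrative.

    def decision_lifetime(lead_time_years: float, consequence_time_years: float) -> float:
        """Total period over which M&E of the decision may need to be phased."""
        return lead_time_years + consequence_time_years

    print(decision_lifetime(lead_time_years=15, consequence_time_years=1))   # crop: 16
    print(decision_lifetime(lead_time_years=5, consequence_time_years=80))   # sea wall: 85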
WHAT WOULD HAVE HAPPENED ANYWAY?
The assessment of the appropriateness, or otherwise, of an adaptation policy or action
relies, to some extent, on our understanding of what would have happened without this
action (known as the counterfactual). This can be difficult to establish given the uncertainty
highlighted above and the myriad of possible changes to societal attitudes, scientific knowledge, the economy and technology, all of which might shape the consequences of climate
change and our responses to them.
POSSIBLE RESPONSES
• Consider the purpose of your evaluation. How important is the establishment of a counterfactual(s) to meeting these objectives? For example, if accountability or efficiency are key objectives, then establishing the additionality of an intervention above and beyond a ‘do nothing’ scenario may be important (a minimal sketch of this calculation follows the list). The UK Treasury Green Book provides useful guidance on this (pp. 53–54).
• Your Adaptation Logic Model can be useful in developing a counterfactual. By following the logic for an intervention, what might you realistically expect to happen without the intervention? What are the variables that are likely to influence this and what assumptions can be made about these variables? Are data sources available to back up such assumptions? However, be aware that climate change is likely to be non-linear, making existing baseline data a poor basis for determining future conditions. As with the Adaptation Logic Model, record all assumptions which are made about the counterfactual you establish.
• Recognise that establishing a counterfactual may not always be appropriate. It may be more effective to consider the intervention as one of an infinite number of ‘adaptation pathways’. The job of the evaluator is then to test the effectiveness of the chosen pathway (as defined in the Adaptation Logic Model) in the context of a dynamic set of social, economic and environmental variables rather than against a single counterfactual. A counterfactual should be developed only when it is proportionate to do so (i.e. the investment of doing so is proportionate to the scale of the intervention).
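The additionality calculation itself is straightforward once a counterfactual has been agreed; the hard part, as discussed above, is justifying the counterfactual. The figures below are invented.

    # Minimal sketch of additionality against a 'do nothing' counterfactual.
    # Both figures are illustrative; the counterfactual would need to be
    # modelled and its assumptions recorded, as discussed in the text.

    observed_damage = 400_000          # £ losses experienced with the intervention
    counterfactual_damage = 1_100_000  # £ losses modelled under 'do nothing'

    additionality = counterfactual_damage - observed_damage
    print(f"Avoided losses credited to the intervention: £{additionality:,}")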
ATTRIBUTION
Attribution of the costs and benefits of adaptation interventions can be problematic for a
number of reasons, linked to many of the other ‘tricky issues’. For example, long time lags
mean that a variety of factors may have shaped the outcomes, not just the adaptation intervention. Furthermore, for sound reasons we are often encouraged to embed adaptation
within existing processes and M&E systems, yet this can make attribution difficult.
POSSIBLE RESPONSES
• Think in terms of contribution rather than attribution. Instead of attempting to demonstrate that a specific outcome or impact is due to an intervention, it may be more appropriate to record the contribution to that outcome. Such an approach recognises that there are often many influences which shape the attainment of outcomes; this is especially true in the case of a complex and often long term issue such as climate adaptation. Instead of seeking to attribute impacts and outcomes to an intervention, the evaluator can focus on gathering evidence to determine the type, nature and level of contribution the activities have made to (a) developments consistent with the Adaptation Logic Model and (b) any additional unplanned impacts.
• Where adaptation is embedded within a broader set of organisational objectives, it may be useful to frame your evaluation in different terms. For example, instead of making adaptation the subject of your evaluation you may wish to examine the contribution a particular project is making to ‘the achievement of xxxx (organisational/project objective) in a changing climate’.
• Economic impact approaches can provide a way of determining economic costs and benefits which can be attributed to the project. While this can be useful, it is important to examine why benefits have accrued rather than becoming focused solely upon their value.
IDENTIFYING APPROPRIATE ‘SUCCESS’ MEASURES
Identifying success measures is not easy. An action which aids adaptation in one location or community may increase vulnerability or inequality elsewhere – so who gets to
define success? Adaptation can create winners and losers, or at least winners and non-winners, making success hard to define and measure. Linked to this, adaptation actions are
often characterised by trade-offs, determined by assessments of risk, which recognise that
accepting loss may be part of an adaptation strategy. Thus, an extreme event leading to
damage is not necessarily an indication of adaptation failure. Furthermore, in the context
of uncertainty, does success mean planning for all eventualities (higher costs?) or backing
a winner (risky?)?
POSSIBLE RESPONSES
• Engage a wide range of groups in the design and delivery of your evaluation (see Section 10, ‘Who should I involve in the evaluation?’). This will provide you with a broader view of what success means to different people and help you to develop a wider range of success measures.
• Identify who will benefit and who will not benefit (or be adversely affected) by the intervention when developing the Adaptation Logic Model. Examine and test the assumptions that were made in relation to these beneficiaries and non-beneficiaries. What trade-offs or ‘acceptable levels of loss’ have been assumed in the Adaptation Logic Model? For example, it may be assumed that it was not viable to protect the residents of a particular road from storm surge because of excessive costs. In this case, an evaluation will need to determine whether these were acceptable and reasonable assumptions on which to judge the success of the intervention.
• Where uncertainty is high, the flexibility of the intervention should become an important success measure for the intervention. Robustness of the intervention and its outcomes to a variety of possible futures may become a key factor to consider in the evaluation.
LEARNING
Monitoring and evaluation of adaptive capacity and adaptation actions needs to be undertaken in the spirit of continual improvement and learning rather than only in strict terms of
economic justification or judging success or failure. But how do we ensure that this learning
informs future decision-making within a particular organisation and how do we enable
this learning to add to society’s broader understanding of how to adapt?
POSSIBLE RESPONSES
• Adaptation to climate change is a relatively new and complex challenge – we must share our learning. Consider mechanisms for sharing across and between organisations, sectors and disciplines.
• Given the complex nature of adaptation, we should look to combine organisational objectives with broader societal learning about adaptation and think ‘outside of the project box’ (Spearman & McGray, 2011).
• Ensure mechanisms are in place for both formative and summative evaluations to inform future decisions. Critically, the proposed timing of the evaluation must fit well with the timing of key decisions about future investments. For example, an evaluation of a flood defence scheme needs to report before future flood management budgets are decided.
• Learning must be an objective of all evaluations – make sure it does not become subordinate to your other evaluation objectives.
• In designing an evaluation, consider the application of learning – where, when and to whom do the key learning messages need to be articulated to maximise the efficacy of the evaluation process?
QUESTION TO CONSIDER
• Which ‘Tricky Issues’ are likely to be relevant to the evaluation you will
be undertaking? Review the possible responses in the context of your
project.
6. What limitations are placed upon my evaluation?
It is important to acknowledge any limitations placed upon your
evaluation at the outset. These might include:
FINANCIAL CONSTRAINTS
Any evaluation should be proportionate to the investment in the intervention. It would
be nonsensical to spend £50,000 on evaluating a £20,000 project; your evaluation cloth
must be cut accordingly.
TIME CONSTRAINTS (TOTAL DAYS AVAILABLE AND TIMEFRAME IN WHICH THE
EVALUATION NEEDS TO BE DELIVERED)
The time available to contribute to an evaluation is not only defined by finances. In designing
an evaluation, consider how much time it is realistic to expect participants to be able to
give – ensure the proposed methodology uses their time wisely. A well designed evaluation
should allow sufficient time for the evaluation to be completed effectively. The delivery of
findings must be timely and dovetail with key decision-making processes. This is especially
important where a decision may ‘lock in’ an organisation to a particular investment for a
long period of time.
LIMITATIONS OF SCOPE, LINKED TO RESOURCES OR THE INTEREST AND
RESPONSIBILITIES OF THE COMMISSIONING BODY
An evaluation may also be limited by the scope of interest of the commissioning body
which may be defined in an evaluation brief. However, it is important that more peripheral
impacts and outcomes are documented wherever possible, even if they cannot be explored
in detail. This is important, as we are still learning about how best to adapt and the potential
impacts of adaptation which may cut across traditional sectors and areas of responsibility.
Thus a well designed evaluation must balance a pragmatic approach focussed on its specific
purpose with the flexibility to explore unexpected avenues, at least in a ‘light touch’ way.
QUESTIONS TO CONSIDER
• List the limitations which exist in relation to your evaluation. Does the
proposed evaluation appear proportionate?
• Consider which limitations are likely to have the greatest adverse effect on
understanding what has worked and why. Can these be overcome (e.g.
by adjusting the timescale of the evaluation), and how might you make
a convincing case to decision-makers (e.g. to expand the scope of an
evaluation)?
• What trade-offs have been made? Can these be justified?
7. Measuring progress and performance
Assessing progress and performance is fundamental to most
evaluations and, where possible, there is an understandable
keenness to quantify and measure.
Before considering in detail how we might measure adaptation progress and performance
with reference to a specific intervention, we need to consider what we are measuring against.
In this guidance, we examine three different ways of measuring adaptation performance:
• Against the objectives of the intervention
• Against emerging understanding of good adaptation
• Against a baseline
Having examined the adaptation intervention you will be evaluating through these three
lenses you should be able to establish a robust set of evaluation criteria.
MEASURING PERFORMANCE AGAINST OBJECTIVES
One of the most straightforward means of evaluating performance is to compare outputs
and outcomes against what your project or programme intended to achieve (its purpose
and objectives). This might include evaluating changes in behaviour and practice which
support these objectives (adaptive capacity). However, such an approach does not
consider whether these objectives were right in the first place. This is important given that
as a society we are still learning how best to adapt to a changing climate. By developing an
Adaptation Logic Model, it is possible to examine the assumptions that underlie the intervention and test the logic of the objectives in addition to evaluating whether the objectives
have been met.
MEASURING PERFORMANCE AGAINST ‘GOOD ADAPTATION’
The characteristics of ‘good’ adaptation – distilled from the emerging lessons of what seems
to enable effective adaptation – can also be a useful way to measure performance. These
characteristics tend to be broad but can form the basis of evaluation criteria alongside the
assessment of project-specific objectives. The 6 ‘guiding principles’ of good adaptation
developed by Defra (2010) provide a useful starting point and emphasise that adaptation
interventions should be:
Sustainable – Sustainable development will ensure that we are best placed both to
minimise the threats posed by the impacts of climate change and to capitalise on
potential opportunities presented by it.
Proportionate and integrated – Assessing climate risks should become ‘business
as usual’ and part of normal risk management. Action must relate to the level of
risks and the desired outcomes, and will need to be taken at the most appropriate
level and timescale.
Collaborative and open – Adapting to climate change is a challenge for the whole
of our economy and society, and will require action from a range of individuals and
organisations, within and across sectors working together.
Effective – Actions should be context specific, implementable, and enforceable.
They should incorporate flexibility to adjust to a range of future climate scenarios,
as well as socio-economic, technical and other changes.
Efficient – Actions should weigh costs, benefits and risks involved. Measures should
be timed appropriately.
Equitable – The distributional consequences of different options should be considered to inform decision makers of the effects of the activity on the natural environment and different social groups, especially vulnerable ones, to ensure that
individuals or groups do not bear a disproportionate share of those costs or residual
risks.
In addition, it is worth considering the degree of flexibility preserved or promoted
through the adaptive actions taken (Defra, 2010). This may be an important evaluation criterion as we may not yet be able to evaluate if our decisions will be optimal or appropriate.
Given this uncertainty, it is important to evaluate whether we have retained flexibility in our
systems to ‘change direction’ at a later point in time.
QUESTION TO CONSIDER
• How can the principles of good adaptation be best reflected in your
evaluation criteria?
RESOURCES ON MEASURING PROGRESS AND ‘GOOD ADAPTATION’
• Defra, 2010. Measuring Adaptation to Climate Change – A Proposed Approach. http://archive.defra.gov.uk/environment/climate/documents/100219-measuring-adapt.pdf
• Lonsdale, K.G., Gawith, M.J., Johnstone, K., Street, R. B., West, C.
C. & Brown, A. D. 2010. Attributes of Well-Adapting Organisations: A
report prepared by UK Climate Impacts Programme for the Adaptation
Sub-Committee. UKCIP, Oxford. www.ukcip.org.uk/wordpress/wp-content/PDFs/UKCIP_Well_adapting_organisations.pdf
• PACT framework – a potentially useful tool for assessing and improving
your organisation’s response to the challenges posed by climate change,
structured around six response levels. www.pact.co/home
• UKCIP, 2005. Principles of good adaptation. UKCIP Guidance Note, UKCIP,
Oxford. www.ukcip.org.uk//essentials/adaptation/good-adaptation/
MEASURING PERFORMANCE AGAINST A BASELINE
A commonly used approach in evaluations is to assess performance against a baseline:
a ‘snapshot’ of conditions, established before the start of an intervention, from which
progress can be assessed. A baseline describes conditions prior to an intervention using a
set of indicators relevant to your objectives. Progress can then be determined by comparing
the indicators at a set point during implementation with the original set of conditions.
Establishing a baseline in the context of adaptation is inherently tricky as the baseline against
which you are measuring may ‘move’ over time, especially for projects with a long lifetime. The result is that underlying conditions which underpin an intervention may change
dramatically irrespective of that intervention. For example, if a project is aiming to conserve
a natural wetland environment then what is ‘natural’ if baseline conditions are changing?
In establishing a baseline for an adaptation intervention, it becomes important to challenge
assumptions you may have about prevailing conditions and what these mean for your
objectives (‘are we doing the right thing?’). It may also mean that a broader range of
indicators is required to capture changes in climatic and socio-economic conditions which
may later become pertinent. This is potentially time consuming and expensive, hence it
may be appropriate to use secondary data wherever possible.
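In practice, a baseline comparison is often as simple as holding a snapshot of indicator values and differencing against it at review points. The sketch below is illustrative: the indicators, values and review point are all invented.

    # Minimal sketch: assessing progress by comparing indicator values at a
    # review point against a baseline snapshot. All values are illustrative.

    baseline = {"properties_at_flood_risk": 1200, "residents_aware_of_risk_pct": 35}
    year_3   = {"properties_at_flood_risk": 950,  "residents_aware_of_risk_pct": 60}

    for indicator, start_value in baseline.items():
        now = year_3[indicator]
        print(f"{indicator}: {start_value} -> {now} (change {now - start_value:+})")

As the text cautions, a shifting baseline means such differences mix the effect of the intervention with changes in underlying conditions, so the raw numbers need qualitative interpretation.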
QUESTIONS TO CONSIDER
• The following questions may be useful in thinking about a baseline and
the selection of indicators. As mentioned before, ensure your investment
in baseline data is proportionate.
• Will your baseline provide a clear picture of the type and nature of both
climate and non-climate vulnerabilities and impacts? As climate change is
unlikely to be considered as a single issue, it is important to understand
non-climate issues too. For example, understanding local economic
conditions during a project may help you to understand the changing
ability of community members to contribute to adaptation investments as
the project develops.
• For medium and long term interventions, does the mix of metrics chosen
for your baseline enable you to tease out the differences resulting
from your actions and changes in baseline conditions? For example, an
agricultural adaptation project may enable wheat crops to be harvested
and achieve good market prices, yet due to changes in the baseline
climate net yields may actually reduce.
• How often should you revisit your baseline to assess how conditions
have changed? This will be influenced by the length of the proposed
intervention (both in terms of delivery and expected impacts), the timing
of key decision points during the project and the likely rate of change
from the baseline.
• How will data availability change during the course of the project? Can
new data be incorporated into your baseline?
• Critically, do you think your baseline will help you make better decisions
during and after the intervention?
8. Establishing evaluation criteria: indicators and metrics
Evaluation criteria form a benchmark or standard against which
progress and achievement can be measured. These criteria
are usually encapsulated in specific evaluation indicators and
metrics and can be used in conjunction with a baseline study of
existing conditions.
• An indicator provides evidence that a certain condition exists or that certain results have or have not been achieved, and can be either quantitative or qualitative.
• A metric refers to a unit of measurement that is quantitative.
PROCESS AND OUTCOME INDICATORS
Two distinct types of indicators can be used with reference to adaptation evaluation:
“A process-based approach seeks to define the key stages in a process that would
lead to the best choice of end point, without specifying that point at the outset”.
(Harley et al., 2008.)
“An outcome-based approach seeks to define an explicit outcome, or end point, of
the adaptation action”. (Harley et al., 2008.)
Process indicators are often used in the context of adaptation as we have often not yet reached
the point where the outcome of adaptation can be evaluated; hence it can be challenging
to apply a purely outcome-based approach. By using process indicators it is possible to
consider whether the ‘direction of travel’ is correct given the information we have at this
point in time. For example, we may not be able to determine whether a 20 year project
will deliver adaptation benefits in a socially equitable way in Year Three; however, we could
evaluate the nature of engagement in the design of the project to assess whether all social
groups have had their voice heard.
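One way to keep this distinction visible in an evaluation plan is to tag each indicator with its type, as in the minimal sketch below; the example indicators are invented, not prescribed by the toolkit.

    # Minimal sketch distinguishing process and outcome indicators.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        kind: str            # "process" or "outcome"
        quantitative: bool   # metric-based (True) or qualitative (False)

    indicators = [
        Indicator("all social groups engaged in project design", "process", False),
        Indicator("milestones delivered on schedule", "process", True),
        Indicator("reduction in properties at significant flood risk", "outcome", True),
    ]

    # Early in a long project, only the process indicators can meaningfully be
    # assessed; outcome indicators come into play as impacts emerge.
    early_checks = [i for i in indicators if i.kind == "process"]
    print([i.name for i in early_checks])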
DEVELOPING EVALUATION METRICS
Metrics are attractive as evaluation criteria as they are objective and transparent and can
be easily reproduced. The use of sound metrics can allow comparison with other types of
adaptation actions or those delivered in other places and can enable the comparison of
adaptation across spatial and temporal scales. Metrics can also be used to provide easy-to-understand ‘progress checks’ and snapshots of adaptation progress which can be understood by a wide range of users.
However, metrics also have disadvantages in the context of adaptation. There are no direct
metrics which measure the adaptation process itself, thus proxy indicators are used as
measures of progress. This can be problematic if inappropriate metrics are used or interpreted incorrectly. Such measures can also be influenced by a range of social, economic
and environmental factors outside the adaptation process. For example, a reduction in
insurance claims in a flood-prone area may reflect the fact that insurance companies are
refusing to insure properties, rather than that the properties are now better protected as the
result of a project to adapt buildings and reduce financial losses from floods. Thus metrics
need to be chosen carefully, often as part of a balanced package of indicators. The use of
metrics and indicators needs to be supported by a more detailed evaluation of the reasons
behind these data and a more qualitative analysis of the impacts and reasons behind an
adaptation intervention. It is essential that we monitor what is important in improving our
understanding, not only what is measurable.
When identifying potential indicators and metrics you may find it useful to consider the
following questions:
QUESTIONS TO CONSIDER
• Refer back to the objectives and purposes of the intervention and
the Adaptation Logic Model – do the metrics and indicators you are
proposing help you to understand whether the objectives have been
achieved?
• Consider and thoroughly test the logic behind your chosen indicators.
Are they fit for purpose? Might they be more robust if worked into a
‘package’ of indicators?
• How might changes in availability of data over the study period affect
what can be measured and when (and therefore which metrics to
choose)?
• Resist the temptation to distil your findings into a ‘single number’ – this
may be attractive to policy makers but does it tell them the full story?
• Remember that while metrics may be objective, the choice of indicators
and metrics is not; these may reflect a particular framing of climate
change. For example, a business may develop metrics which help
determine the economic viability of an adaptation action rather than
those which help to identify the social distribution of benefits. Consider
and challenge your own framing to ensure it provides you with as full a
picture as possible and meets your organisational needs.
• Quantitative metrics are attractive – but should be balanced with
qualitative data which examines the reasons behind the figures.
• Do the metrics you have chosen reflect a particular ‘framing’ or
perception of success? Do you need to consider success from the point
of view of other stakeholders or community members? For example,
the success of a project to increase green space in urban areas could be
measured in terms of ‘reduced impact of the urban heat island effect’,
‘increased biodiversity’ or ‘increased recreational space’. All may be valid
success measures depending on an individual’s perception.
RESOURCES ON METRICS
• Rosenzweig, C. & Tubiello, F.N. 2006. Developing Climate Change Impacts and Adaptation Metrics for Agriculture. Climate Impacts Group, NASA-GISS and Columbia University. Produced for the Global Forum on Sustainable Development on the Economic Benefits of Climate Change Policies, 6–7 July 2006, Paris, France. www.oecd.org/dataoecd/43/53/37117548.pdf
• Harley, M., Horrocks, L., Hodgson, N. & van Minnen, J. 2008. Climate
Change Vulnerability and Adaptation Indicators. ETC/ACC Technical Paper
2008/9.
9. How do I evaluate the unintended and unexpected?
There are often good reasons for the unexpected happening and
things not going to plan, especially when tackling long term
challenges involving a high degree of uncertainty.
However, it is easy to close the door on unexpected outcomes in an attempt to stick to
budget or in order to keep things simple. When dealing with a complex issue such as
climate change, much of the richness and learning may be found by examining the unexpected and unintended. The following tips can help you to gather and benefit from unexpected findings:
• Ensure your evaluation design is sufficiently flexible to explore
unexpected outcomes; do not focus all your resources on examining only
whether what you thought would happen did happen.
• Consider an iterative design or an initial scoping stage in your evaluation
which allows you to identify areas of interest and reallocate resources if
unexpected areas of interest emerge.
• Where possible, use ‘open’ questions and participatory techniques which
allow participants to explain what they think worked well (or not) and
why.
• Flag up areas of interest or findings which are beyond the scope of your
study to those in other organisations or sectors. You may not be able to
pursue these findings further but others may. In turn, ‘peripheral’ findings
from other studies may provide valuable insights for your work.
10. Who should I involve in the evaluation?
Adaptation occurs at multiple scales, from international and
national policies to actions by individuals in local communities.
Yet as the impacts of climate change are experienced locally,
so adaptation itself tends to occur at a local level, even if the
intervention you are evaluating operates at an international or
national scale.
An effective evaluation should seek to engage stakeholders from a range of levels and relationships with the intervention. These might include policy makers; project and programme
staff; direct beneficiaries and the broader community who may be indirectly affected by the
project. Evaluations which engage a wide range of stakeholders throughout the process are
more likely to gain a complete picture of how different groups are vulnerable to climate
change and how adaptation interventions can be made most relevant to their needs. It is also
more likely that issues of social justice and unequal distribution of benefits (and disbenefits)
will be identified and can be addressed accordingly.
PARTICIPATION AND PARTICIPATORY METHODS
An extensive literature exists on participatory methods which can be used to
engage communities in considering adaptation and in the evaluation process specifically.
Participatory monitoring and evaluation is a partnership approach to evaluation whereby
stakeholders actively engage in developing the evaluation and all phases of its implementation (Zukoski & Luluquisen, 2002). “Participatory monitoring and evaluation is not just a
matter of using participatory techniques within a conventional monitoring and evaluation
setting. It is about radically rethinking who initiates and undertakes the process, and who
learns or benefits from the findings.” (Estrella & Gaventa, 1998). This approach requires a
significant commitment from those involved but can be extremely useful in:
• Identifying locally relevant evaluation questions
• Empowering participants
• Building capacity
• Improving organisational learning
• Extending learning beyond traditional parameters (ensuring the learning from the evaluation reaches and is used by a wider audience)
• Dealing with ‘blockages’ in local implementation
• Addressing questions about effects on beneficiaries
• Gathering information and views from stakeholders.
QUESTIONS TO CONSIDER
• Who needs to be engaged in the evaluation process, at what point and
how?
• Would engaging particular groups help you to better understand your
assumptions or to explore unexpected or unintended outputs and
outcomes?
• Could you incorporate adaptation into existing engagement processes?
RESOURCES ON PARTICIPATION AND PARTICIPATORY METHODS
• Estrella, M. & Gaventa, J. 1998. Who Counts Reality? Participatory
Monitoring and Evaluation: A literature Review. IDS Working Paper no. 70,
IDS, Sussex.
• Holstein, A.N. 2010. GRaBS Expert Paper 2: Participation in Climate Change Adaptation. Town and Country Planning Association, UK. www.grabs-eu.org/downloads/Expert_Paper_Climate_Participation_FULL_VERSION(mk3).pdf
• Jackson, E.T. & Kassam, Y. 1998. Eds. Knowledge Shared: Participatory
Evaluation in Development Cooperation. Kumarian Press, West Hartford, CT.
• Reid, H., Alam, M., Berger, R., Cannon, T., Huq, S. & Milligan, A. 2009.
Participatory Learning and Action 60 – Community-based Adaptation to
Climate Change. www.planotes.org/pla_backissues/60.html
• USAID, 1996. Performance Monitoring and Evaluation Tips. Conducting A
Participatory Evaluation. http://pdf.usaid.gov/pdf_docs/PNABS539.pdf
• Whitmore, E. 1998. Understanding and Practicing Participatory Evaluation:
New Directions for Evaluation. Jossey Bass, San Francisco.
• Zukoski, A. & Luluquisen, M. 2002. Participatory Evaluation: What is it? Why
do it? What are the challenges? Community-based Public Health Policy and
Practice. http://depts.washington.edu/ccph/pdf_files/Evaluation.pdf
11. How should I communicate the findings?
There are numerous ways to communicate evaluation findings
including written reports, meetings, public events, online fora
and social media. As climate adaptation is a relatively new area
of study, it is important that learning is communicated as widely
as possible and shared beyond the organisation commissioning
the evaluation.
QUESTIONS TO CONSIDER
• Think about when to communicate. Communication should not just be about
dissemination at the end of the evaluation process – consider how
emerging lessons can be communicated, shared and tested effectively
with stakeholders throughout the process.
• Communication should be two-way – have you set up mechanisms to
gather feedback?
• Determine the purpose for communicating and the probable audiences.
• Once you have established the likely audiences, consider their preferred
media, the time they have available, their level of engagement, the
amount they would be prepared to read, the type of language they use
(technical, non-technical). Ensure this is factored into the approaches you
are using.
• Consider how your evaluation can contribute to (a) wider understanding
of climate adaptation and (b) wider understanding of how to evaluate
climate adaptation. This can often be overlooked, but setting time aside
for these considerations can be important – innovation often happens
at the boundaries between sectors and disciplines.
• Linked to the above, are there Communities of Practice with whom your
findings can be shared?
GENERAL RESOURCES AND REFERENCES
• Evaluation Checklists Project. www.wmich.edu/evalctr/checklists
• Pittock, A.B. & Jones, R.N. 2000. Adaptation to what and why?
Environmental Monitoring and Assessment 61 (1), 9–35.
• Spearman, M. & McGray, H. 2011. Making Adaptation Count: Concepts and
Options for Monitoring and Evaluation of Climate Change Adaptation. World
Resources Institute. http://pdf.wri.org/making_adaptation_count.pdf
• Stafford Smith, M., Horrocks, L., Harvey, A. & Hamilton, C. 2011.
Rethinking adaptation for a 4°C world. Philosophical Transactions of the
Royal Society A 369, 196–216.
• World Bank, 2009. Guidance Notes: Mainstreaming Adaptation to Climate Change in Agriculture and Natural Resources Management Projects. World Bank, Washington. http://siteresources.worldbank.org/EXTTOOLKIT3/Resources/3646250-1250715327143/GN8.pdf
• UNDP, 2007. Monitoring and Evaluation Framework for Adaptation to Climate Change. www.undp.org/climatechange/adapt/downloads/Adaptation_ME_DRAFT_July.pdf
• UNFCCC, 2010. Synthesis report on efforts undertaken to monitor
and evaluate the implementation of adaptation projects, policies and
programmes and the costs and effectiveness of completed projects,
policies and programmes, and views on lessons learned, good practices,
gaps and needs. Thirty-second session, Bonn, 31 May to 9 June 2010.
http://unfccc.int/resource/docs/2010/sbsta/eng/05.pdf
• van den Berg, R.D., & Feinstein, O. (Eds.) 2010. Evaluating Climate
Change and Development. World Bank Series on Development, Volume 8.
Transaction Publishers, New Brunswick, New Jersey, USA.