A useful framework for understanding ethics in the evaluation function of development organisations
by Joseph Barnes
March 2010
We conducted case studies of two development organisations, based on literature review and key informant
interviews. One was a bilateral donor and one a multilateral agency. We looked at whether these
organisations were up to the task of conducting ethically robust randomised evaluations. As part of this we
built and tested an analytical framework to give us better insight into the values of the two case study
organisations.
The framework proved very useful when we applied it to the two case studies. It enabled us to demonstrate
that both organisations valued ethical evaluation, but that they held different worldviews of what this
meant. This allowed us to pinpoint the different strengths and weaknesses of each case study organisation,
and to trace the practical implications for ethical evaluation of adopting different worldviews within
organisations.
Theory
Cortina (2007 p5-6) establishes that ‘development work’ gets its legitimacy from the end goals that it
produces. So, whilst theory may provide a ‘compass’ for ethical development, we can only reveal the moral
‘roadmap’ that people use in real life by looking at practice. Because evaluators explicitly attempt to
determine the worth of what is being researched, interrogating their practice should reveal the values that
they hold as important, and the worldviews behind these values (Wolf et al 2009 p171).
Again, Cortina (2007 p7-8) provides the clues we need to look for in order to do this: three features of
development whose definitions reveal their philosophical foundations. These are the agreed internal goods
(ends), the ethical principles established by social cooperation, and the institutions built to serve them.
Identifying goods, principles and institutions is complex. One challenge is that demands for evaluation take
place simultaneously at multiple levels, in multiple subject areas, and in a dynamic interactive way
(Tannahill 2008 p381).
Framework
One option to address this is suggested by Sumner (2007 p60). His analysis splits development into the
levels of approach (technology and processes), purpose (outcomes and goals), and focus (nature and
meaning). Sumner’s framework promises to help us understand at what level practitioners position their
moral reasoning.
We might also need to consider which aspect, as well as which level, of evaluation practitioners refer to in
their reasoning. One framework that would appear to fit our requirements is used by Bonell et al (2006).
This considers the utility (usefulness), feasibility and internal ethics of a methodology to determine its moral
validity in a particular context.
Combined, these two lenses provide us with a framework that we can test by cross-examining the goods,
principles and institutions of development evaluation to try to reveal the philosophical foundations of its
practice. We reasoned that if such an analysis proved fruitful, then we should be in a position to compare
what is (capable of) being done with what different worldviews say should be done.
A framework for moral reasoning in development evaluation
(derived from Bonell et al 2006 and Sumner 2007)
EVALUATION    Methodology
              Utility                       Feasibility               Internal Ethics
Focus         Will influence institutions   Is universally viable     Defines a philosophy
Purpose       Will influence policy         Is politically viable     Fulfils ethical principles
Approach      Will influence programmes     Is operationally viable   Passes ethical standards
When we applied this framework to our two case studies, we found that a predominantly rights-based
organisation was strong in terms of ‘internal ethics’ but struggled to achieve ‘utility’. Conversely, a
predominantly economics-based organisation was strong in terms of ‘utility’ but faced severe challenges in
relation to ‘internal ethics’.
Rather more tellingly, we discovered that both case study organisations were weak in terms of ‘feasibility’,
so good theory was unlikely to be translated into good practice. These findings will be elaborated on in a
future IOD PARC learning product. The main finding relevant to this learning note is that the analytical
framework presented here demonstrated potential as a useful tool for better understanding ethical
evaluation in development organisations.
References
Bonell, Christopher, James Hargreaves, Vicki Strange, Paul Pronyk and John Porter 2006. Should structural
interventions be evaluated using RCTs? The case of HIV prevention. Social Science and Medicine, 63: 1135-1142.
Cortina, Adela 2007. Development Ethics: A Road to Peace. Working Paper 339. Kellogg Institute for
International Studies.
Sumner, Andrew 2007. What are the Ethics of Development Studies? IDS Bulletin, 38 (2): 59-67.
Tannahill, Andrew 2008. Beyond Evidence – to Ethics: a Decision-Making Framework for Health
Promotion, Public Health and Health Improvement. Health Promotion International, 23 (4): 380-390.
Wolf, Amanda, David Turner and Kathleen Toms 2009. Ethical Perspectives in Program Evaluation. In
Donna Mertens and Pauline Ginsberg (eds), The Handbook of Social Research Ethics. Thousand Oaks: Sage.