Do Developers Really Care if Development Works?
Complex Adaptive Systems, Impact Assessment, and the Politics of Knowledge

Kent Glenzer, Ph.D.
Director of Learning, Evaluation, and Accountability
Oxfam America
[email protected]
Research Associate, Center for the Study of Public Scholarship, Emory University

Glenzer, Berkeley University Lecture, October 2008. DRAFT. PLEASE REQUEST PERMISSION BEFORE CITING OR QUOTING.

Abstract

Methods in development impact evaluation have never been as political as they are today. Two diametrically opposed discourses of impact evaluation have arisen simultaneously in the past decade. The first is one that argues that developers have been scientifically remiss for the past fifty years, and calls for the application of mainstream logical positivism for impact evaluation. This discourse places a premium on expert knowledge, the scientific method, and on quantitative proof. The second maintains that such approaches are, in fact, at the root of why development hasn’t worked. It calls for rights-based approaches to both development and measurement, approaches which take on the challenge of assessing impact in the long term and on knotty, structural problems of injustice, discrimination, and exclusion. This paper analyzes the simultaneous rise of these two discourses related to impact evaluation in the development enterprise. It questions not the validity but the politics and epistemology of logical positivism in the current moment, and looks at the opportunities for impact assessment rooted in alternative politics and ways of knowing.

Methods in development impact evaluation have never been as political as they are now. This paper is an attempt to understand why this is so. It is also an attempt to explore whether or not the development enterprise1 is at an important crossroads, what that crossroads might be, and what it might mean for the future of development practice, resource flows and, ultimately, what is considered to be “good development.”
My reflections are occasioned by the simultaneous emergence of two diametrically opposed discourses around development’s effectiveness. The first one is the explosion of discourse around “rigorous impact evaluation” over the past four or so years, one frequently associated in this country with MIT’s Poverty Action Lab. The second is the less explosive but still persistent rise of a discourse around rights-based approaches, on changing power relations, and a commitment to addressing underlying causes of poverty and injustice. That both of these discourses arise at roughly the same time in the history of development – the latter seems to me to have preceded the former, actually, and I’ll have more to say about the meaning of that later in this paper – warrants scrutiny. Both make interesting – albeit incommensurable – claims on what constitutes good development. Both make arguments – albeit different – about unconscionable gaps between rhetoric and practice in the development enterprise.
1 “Development enterprise” is a phrase I borrow from Uvin (1998). I use it quite deliberately in opposition to the phrase “development industry,” found often in the literature and carrying with it overtones of Western hegemony, mechanicism, Fordism, etc. While I do not disagree with such connections between the discourse of development and political economy, I find Uvin’s phrase much more descriptive of the professional world I’ve worked in for most of the last 25 years. Here is the definition of “enterprise” in Merriam-Webster’s:
1 : a project or undertaking that is especially difficult, complicated, or risky
2 : readiness to engage in daring or difficult action
3a : a unit of economic organization or activity
b : a systematic purposeful activity
(see http://www.merriam-webster.com/dictionary/enterprise)
Gaps between rhetoric and practice are a long-standing reality in the development
enterprise. Indeed, some critics of development might say such slippages constitute the very
practice of development.2 New goals or priorities (“sustainable development,” “gender equity,”
“participatory development,” “Millennium Development Goals”) only rarely occasion
substantive changes in donor implementation styles. Means rarely match the goals declared.3
Deeper, qualitative social changes that any reasonable definition of “poverty eradication” must
include – such as changes in gender relations – are frequently ignored by monitoring and
evaluation strategies and indicators (Narayan 2006).4 Crucial qualitative phenomena are often
given short shrift in planning tools such as logical frameworks or excluded entirely, as in the
case of much of USAID’s “results based management” approach.5 And most of these problems
have a long pedigree, dating back at least to colonialism.6
Academia’s late-20th century epistemological skirmishes between positivism,
constructivism, and postmodernism are not incidental to the gap between development goals,
practices, and evaluation methods.7 I’ve been encountering these tensions frequently over the
last five years, first as Director of Impact Measurement and Learning for CARE USA and now
as Director of Oxfam America’s Learning, Evaluation, and Accountability Department (LEAD).
2 I would argue that the works of Arturo Escobar (1994), James Ferguson (1994), Jonathan Crush (1998), Mitchell (1995), and Comaroff & Comaroff (1999) lend themselves to such a reading, while perhaps no single one of them makes this explicit argument. In all of these works, “development” is analyzed as a curiously “as if” kind of phenomenon, in which hard realities and contradictions are covered over with buzzwords and hollow, nice-sounding phrases. The very actions of development actors – in their full, plural, multi-dimensional complexity – border on shadow theater. Meanwhile, careful ethnographic and actor-oriented studies of development processes (Long & Long 1992; Hobart 1993) reveal that to some extent, the only way that development projects can actually move forward is through the sequestering, off stage, of many forms of contestation and disagreement.
3 The following quote regarding the achievement of the MDGs can be found in similar forms at halfway points of nearly every major, international agreement regarding development assistance since the 1940s. The fact that we who work in development are not utterly embarrassed by the predictability of such mid-term findings is an interesting ethnographic fact.
There is a large delivery gap in meeting commitments towards the MDG target of addressing the special needs of the least developed countries … [and to provide] more generous official development assistance for countries committed to poverty reduction. (United Nations, Millennium Development Goal 8: Delivering on the Global Partnership for Achieving the Millennium Development Goals, MDG Gap Task Force Report, New York: UN, 2008, vii.)
4 One of the most stunning aspects of this excellent edited volume on women’s empowerment is that it was published in the mid ‘00s, nearly three decades after international development actors began talking about the importance of gender and power in the construction of global and local poverty.
5 As with any tool or approach, the people actually using it make a difference. RBM can, of course, help get at qualitative changes when used in the right hands. Radelet (2005), Senior Fellow at the Center for Global Development, in testimony before Congress, underscored the systemic weaknesses of the US government’s approach to monitoring and evaluation, a view shared by many inside USAID:
The DFA office has introduced a large number of new indicators to track progress. However, there appear to be far too many indicators, and most of these emphasize immediate outcomes rather than output or actual impact. As of yet there is no independent process to verify results and to evaluate the connection between short- and medium-term results and impact.
6 Two general histories of development practice that address this issue are Cowen and Shenton (1996) and Rist (2005).
7 For a concise overview of the tensions created by these skirmishes inside the field of development evaluation, see Khakee (2003).
For the past three years, both organizations have been trying to make an important shift in development practice, from short-term projects meant to produce immediate improvements in human conditions to long-term programs that, over the course of 10-15 years, seek to change power relations that underpin poverty, vulnerability, exclusion, and rights denial.
Such programs require “monitoring and evaluation” systems that capture impacts on
underlying causes of poverty, exclusion, and injustice. They challenge us to honor social change
in its full complexity, to eschew fatuous proxy measures. These programs do, however, require
rigorous approaches for holding NGOs and their partners accountable for such audacious goals,
measurement systems that go beyond the anecdotal, the subjective, and the fleeting.
This shift towards longer term, more structural approaches to poverty eradication – which we can view as rooted in rights-based discourse – coincides not only with the emergence of the discourse of “rigorous impact evaluation” (which, in truth, is a refurbishment of 1950s-1960s ideas around program evaluation). It also coincides with the re-emergence of discourses of development in terms of “take off”, i.e., that economies will grow if we a) get the basic infrastructure in place (roads, electricity, education, health services), b) ensure enough up-front capital investment, and c) have the right macroeconomic policies in place. This is seen most clearly in the thinking of Jeff Sachs (2005), who touts 1950s and 1960s development approaches
as if they were new or innovative. What we have witnessed over the past five years or so is a
subversive resurgence of the scientific method and positivist social science into the heart of
development policy and practice.8 Such an epistemology, set of measures of success, and
practices have, however, a very uneasy relationship to understanding impacts on power,
injustice, and exclusion, to rights-based approaches to development.
At the broadest level, then, this paper is an attempt to understand why there has emerged
– at least in the US – a consensus that we need to return to Auguste Comte at a time when the
language, goals, and public utterances of leaders of powerful global development agencies have
shifted to ever more complex, contextual, relational, nonlinear, and structural changes in human
societies as the goal of their development policies.9 Why, in short, have these emerged at the
same time?
In the paper that follows, I will:
1. Discuss the nature of the two “returns” just mentioned, and put them quickly into historical context of thinking about the role of evaluation or impact assessment in development over the past 50 years or so;
2. Briefly introduce the concept of complex adaptive systems and the challenges of measuring – and attributing – changes in them, and relate these to changes in development discourses over the past decade or so;
3. Summarize the kinds of research methods, approaches, and processes that we are finding most valuable for understanding programs’ impacts on underlying causes of poverty, injustice, and rights denial; and
4. Analyze the different kinds of changes needed in the institutional structure of the global development enterprise that sections 1 and 3 imply.
8 See, in particular, MIT’s Poverty Action Lab web site for this argument (www.povertyactionlab.com).
9 Here I’d just note, in the past 15 years, the rise of good governance, social justice, human rights, so-called pro-poor social and economic policies, the resurgence of gender and gender equity, to name a few. All of these take the goals of development far beyond previous generations of development paradigms and point to a very interesting discursive demise of the pre-eminence of the economic in development thinking.
I conclude the paper by returning to the question in this paper’s title, “Do Developers Really Care if Development Works?” There are many underspecified terms in that question, but of them all, “works” is the one I find most intriguing. For in the end, ideas of what is “success” or “failure” – what “works” or does not “work” – both shape and are shaped by and within a social system itself, one made of policy makers, politicians, experts of almost uncountable disciplines, those who do development, oversee it, plan it, evaluate it, and those who are its targets (object(ive)s, participants).
I started off this paper with a claim that impact evaluation methods have never been as
political as they are now. The core of my argument is as follows: The resurgence of normal
social science as a “new commitment to accountability” (as some western/northern development
actors like to describe it) can be seen, in important ways, as a rampart erected to protect expert
knowledge and the ‘00s vestiges of a colonial development enterprise. The return to 1950s and
1960s ideas about development and the methods for measuring its success are not just technical
arguments (although they are, too, exactly that): they are battles for the control of global
development processes and practice, of the intellectual, social, and political capital that the
power to define what counts – and how to count it – entails. The return to the 1950s and 1960s –
and the strong element of “common sense” that one hears in the arguments of proponents –
represents a direct challenge to the intellectual work of developing nations themselves and the
current and next generation of development scholars and practitioners from those nations who
have, for quite some time, been wresting authority for advancing development theory, practice,
and policy. It’s a move that allows those who formerly controlled not just the purse strings of
development but also its intellectual agenda to devalue goals, processes, and trajectories that are
deeply important but not easily counted, quantified, or measured. This dismissal of the
importance of hard-to-measure, qualitative changes in poor people’s lives makes those very
changes the responsibility not of the north/west but of the south/east. It is, in short, an act of
disciplinary power to determine what success is, what counts, and therefore how to measure
it…and who has authority to do so. That is why it is not, perhaps, surprising that we are experiencing a surge in discourses about the apolitical, purely technical nature of research methods.
I. Plus ça change? Two Returns in Development Discourse
Analyzing the recycling of development discourses, as I’ve argued elsewhere (Glenzer
2002), needs to be done carefully. We should strongly question claims that nothing has changed
over the past 30, 50, or 150 years in development thinking, practice, or the power structures that
underpin global development as an organizational and institutional field. At the very least,
identical utterances and discourses are happening in very different contexts across time, space
and place and this creates interesting interpretive fissures in seemingly identical texts. More
interesting, identical discourses can come to have opposite meanings over time or shift their
meaning significantly in any case. Identical discourses uttered by different organizational or
human actors in different eras are also worth a nuanced and careful analysis, because their
intentionality and the action that they seek to provoke can be very different.
In this first section I will focus on similarities in two discourses that have arisen at
roughly the same time in the past 50 or so years. That rhythm itself I find intriguing, a puzzle
that deserves to be explained rather than naturalized. The two domains of discourse that are of
interest are 1) the scientific method as central to development and 2) development as “take off”,
one that needs a solid, stolid, unsexy and basic infrastructural foundation, after which
individuals’ own ingenuity and natural human tendencies will be unleashed.
Program evaluation as an activity, a professional discipline, and a focus for scholars is
actually very young. There are lots of ways of defining “evaluation” but for my purposes here
I’ll just go ahead and adopt a definition that is very widespread in the evaluation literature itself:
Evaluation research is the systematic application of social research procedures in
assessing the conceptualization, implementation and utility of social intervention
programs (Rossi and Freeman 1993: 5).
Program evaluation was a child of post-World War II industrial and social change, one
that began with firm roots in the academy (particularly in organizational studies and social
work), was deeply tied to government economic and social policy in the United States and,
through the 50s and 60s, relied quite heavily – exclusively wouldn’t be too strong a word – on
mainstream notions of the scientific method borrowed from the physical sciences.10 Hallmarks
of the approach include randomization, control groups, quantification, and statistical
significance. In its early years, evaluation was really about economic accountability and
efficiency of social programs. It was also unregulated, unstructured. In development, two key
mileposts – emblems of the field of evaluation undergoing Weberian rationalization processes in
a strengthening institutional field (Powell and DiMaggio 1991) -- during the 1970s were 1) the
adoption of evaluation by USAID and by the Organization for Economic Cooperation and
Development (OECD) and 2) the emergence of the Logical Framework Approach (LFA). Then, in the 1980s, the evaluation field witnessed the rise of organizations like the Evaluation Network and the Evaluation Research Society and then, later in the decade, the American Evaluation Association. At the
same time that evaluators were creating a profession for themselves, they also were challenging
some of the long standing assumptions about social research: During the 1980s, evaluators
began pushing back in interesting ways at the hegemony of normal social science, quantification,
and statistical methods for telling us anything truly useful about complex social, political, or
cultural change.
As a result, the 1980s-1990s saw a decentering of evaluation discourses. Qualitative
methods – never absent even from the early days – acquired new status. A plethora of
participatory approaches to evaluation was pioneered. Many built on extant yet peripheral
methods experimented with in the 1940s-1970s. Action research, action science, participatory
learning and action, and many more labels came into existence and captured developers’
imaginations.
Often linked discursively with critiques of power within development, liberatory-participatory approaches (as far back as the 60s) at first challenged the very foundation of who
had the right to decide what “success” was when it comes to processes of social change, and who
had the right to declare if success happened or not. The 1980s-90s was a time of methodological
and epistemological ferment in the academy too, of course, with postmodern, poststructural, and
postcolonial critiques of knowledge unsettling the foundations of positivist social science.
Strong arguments arose that processes of social change are deeply complex, never the same, never replicable, always nonlinear and recursive, never comparable in any way that is needed for Comtean social science to really function. In short, the nature of the phenomenon needing study or explanation excluded the use of normal social science on methodological grounds. But development evaluation had also acquired strong institutional backers and supporters in the 1980s, and most of what passed for ‘participatory’ approaches to evaluation was little more than opening a small door to poor people to talk a bit more with expert consultants and, perhaps, have the opportunity to hear and comment on conclusions and findings.
10 This historical summary is adapted from Iverson (2003).
In the midst of this transition, many professional evaluators also began pointing out, from
a purely pragmatic standpoint, that the kinds of designs and evaluations needed for normal social
science to demonstrate causality and attribution were so costly as to eliminate them from the
budgets of most donors, except in very rare and “strategic” cases. This is still, for the most part,
the overall approach of the World Bank’s Evaluation Unit, DFID’s evaluation group, and most
other development organizations, public or private, government or nongovernmental, academic
or practitioner: only very few and strategically important programs get the full “social science
treatment.” The rest are cursorily assessed for compliance. What was (and is) needed, these
pragmatists say, are smaller scale, more human, more intuitive, and easily used evaluative
processes that are “good enough”.11 Part and parcel of this argument is that evaluation should be
an ongoing, reflective process, done frequently, done by poor people and other project
participants as well as by developers. This discourse links very smoothly – for developers – with
participatory ideologies. Qualitative methods are highly valued, and in the late 90s and early 00s we have witnessed a mushrooming of interest in evaluation as “storytelling,” as subjective impressions of “most significant changes” in people’s lives (Davies and Dart 2004), and the democratization of quantitative methods in the form of “participatory numbers” in which participants themselves identify the forms of quantification that they – rather than experts, though with experts’ participation – think are indicative of important changes (Barahona and Levy 2002).
All these shifts occurred at a time in development’s institutional history when the high level
goals and objectives of development were altering significantly:
The whole aid business is changing in significant ways: there are fewer discrete
projects now and more emphasis on sectors and programmes and on types of aid
that are intrinsically difficult to evaluate such as good governance, community
empowerment, poverty alleviation, human rights, etc. (Cracknell 2000: 48)
Some developers, in other words, were asking the industry to take itself more seriously,
to stop addressing symptoms, to take on what some organizations – like CARE, my own – call
“underlying causes of poverty.” Stop treating symptoms is a current admonition. Admit that
development IS political and don’t shy away from trying to alter social structures, norms, and
values that discriminate and exclude certain actors. Stop making the silly claim that you are
contributing to “gender equity” by training a few women how to sew. Get serious about poverty,
about people’s rights…and hold yourselves accountable for deep changes and not just superficial
outputs.
And at the same moment, two discourses resurged with a vengeance: the need to return
to positivist social science in order to BE accountable in development, and the idea that what
places like Africa need is a basic, A-B-Cs kind of “take off” approach of road building,
electricity infrastructure, health services, and education. The former is represented by the
emergence of MIT’s Poverty Action Lab as an influential actor in high-level development policy
circles. The Lab (it’s been around now for more than a decade but didn’t get much traction until the ‘00s) touts randomized controlled trials as prerequisite to any honest form of accountability – and credible evidence of impact – in social change programs.
11 A major international consortium of NGOs active in humanitarian relief – Oxfam Great Britain was part of the consortium – released a set of guidelines about monitoring and evaluation in emergencies, titled in part “The Good Enough Guide” (ECB 2007).
Fully aware of many of the technical
arguments against randomization and the subsequent complex of ideas that go with it in terms of
erecting valid research that can demonstrate relationships between action and result, and of the
ethical arguments about experimenting on the world’s poorest, PAL leadership and the network
of academics of which PAL is composed are unflagging in arguing that other forms of impact
evaluation are insufficient: helpful and useful, yes, for all sorts of other objectives…but not for
establishing cause and effect and attributing them to some actor or set of actors. And most
recently, the Gates Foundation and the Hewlett Foundation funded a Washington think tank, the
Center for Global Development, to facilitate the establishment of a new, independent evaluation
institute that would put randomized controlled trials (RCTs) at the center of its work (Savedoff
and Levine 2006).12 The International Initiative for Impact Evaluation (3ie) is now well along
and should start funding impact evaluations soon. Gates remains a strong financier of the new
organization.13
The new “take off” discourse for Africa comes from two directions: from the likes of Jeffrey Sachs but also from the Presidents of South Africa, Nigeria, and Senegal and their “New Partnership for Africa’s Development.”14 Sachs’ argument is, I believe, actually a bit more nuanced than some give him credit for. But it and its complement, the Millennium Development Goals and
indicators, are at their heart opposed to the forms of measurement and methodological
complexity that addressing underlying causes of poverty requires. Both the “new Marshall Plan”
and MDG discourses ask developers not to dig deep and come up with creative new ways to be
accountable for the things they say they are trying to do but, rather, to roll back their
expectations and only do things that are easily counted and measured.
Interestingly enough, few professional evaluators that I know are emotionally wrought by Jeff Sachs or the MDGs. They might disagree with all or some of the discourses, but they don’t get angry or overly excited about them. PAL and the CGD work, however, are a different story. Members of the American Evaluation Association responded very emotionally to the ideas of PAL, to the resurgence of positivism. PAL and CGD academics are accused by those who consider themselves professional evaluators (many of whom are, actually, also academics, so the lines here are predictably fuzzy) of “methodological fundamentalism.”15
12 I was one of about two dozen global “evaluation experts” whom CGD asked to be on its “Leading Edge Group.” Our task was to finalize 3ie’s strategy, mission, vision, etc. In doing so, the group moved significantly away from an over-reliance on randomized controlled trials as the gold standard for effective evaluation of development projects. In professional evaluation circles, however, such as the African Evaluation Association, deep mistrust and anger still preside with regard to the CGD initiative. Many southern academics, researchers, and development professionals view the CGD initiative as but an extension of the US government’s obsession and narrow-minded focus on RBM.
13 A third ‘return’ is implicit here: that of the supposed utility of ‘hard-headed’ business thinking about success for the soft, touchy-feely world of do-gooders who just can’t seem to get their precious little heads to think in pragmatic and measurable terms about their work. Articles on the “new philanthropy” have spread across local and national newspapers, on NPR, in national magazines such as Newsweek in the past three years or so. What goes unsaid here is that the Drucker Institute, devoted to nonprofit management, has existed since the 1970s. One of Peter Drucker’s comments from the 1980s is revealing. He claimed that after having spent many years looking at nonprofit management and nonprofit managers, he was sure that the very best of them had much to teach the for-profit world and little to learn. (He added, of course, that the vast majority of nonprofit managers could really use a good, standard MBA program!)
14 See www.nepad.org.
15 A barb tossed by an anonymous conference-goer or three in a meeting in late September of the European professional evaluators association in the UK, aimed at academics who have led the resurgence of the scientific method in development evaluation over the past ten years or so. Personal communication, Jim Rugh, former Chair of InterAction’s Evaluation Committee.
It is an interesting situation: professional evaluators accusing academics of ignoring social complexity, of making
untrue claims for their methods, of implying that there is only one right path. Interestingly, the
professional evaluators have a pretty good argument on their side. But they have done very little
to propose any better pathway when it comes to research methods than those who currently tout a
return to orthodox positivism. It is to this slippage that I turn in sections II and III of this paper.
II. Complex Adaptive Systems, Evaluation, and Accountability16
Simple (mostly) closed systems are all around us. An automobile engine is one; an
arrangement of billiard balls is another; so is a watch. These kinds of systems are linear with
predictable and reliable cause and effect. One ball hits another and the effect is that the second
ball goes into the pocket. Every time you have the same arrangement of two balls, and you hit
the cue ball in the same way, the struck ball will do the same thing. Every time you turn the key
in your ignition, it starts your car. (Or it fails to. But one thing’s sure: turning the key in your
car will not clean your laundry). Complex systems, on the other hand, are composed of many
interacting elements. The interaction itself changes the system so that the relationships between
elements are changed by the interaction, changing the nature of causes and effects between them.
Such systems are open, by definition: you can never be completely sure what variables are part
of or not part of the system. In some ways, the idea of isolating “dependent” and “independent”
variables in complex systems is actually nonsensical. (It can still be useful, of course, but
actually tells you little about the level you are interested in: the future state of the complex
system as a whole). The openness and nonlinearity of complex systems make it difficult to
predict future interactions from the initial state. Examples of complex systems are well known:
The weather, an ecosystem, human groups, oceans.
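To give a concrete, if toy, flavor of why such systems defeat long-run prediction, here is a minimal sketch – my own illustration in Python, not drawn from the development or evaluation literature – using the logistic map, a standard one-line nonlinear system from complexity science:

# Logistic map: x -> r * x * (1 - x). A deliberately tiny, fully
# deterministic nonlinear rule, used here only as an illustration.
def logistic_map(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.400000000)   # one initial condition
b = logistic_map(0.400000001)   # the "same" condition, off by one part in a billion

for t in (0, 10, 30, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f} (gap {abs(a[t] - b[t]):.6f})")

# By step 30 or so the two trajectories bear no resemblance to each
# other, even though the rule and (almost) the starting point were
# identical -- the flavor of the "sensitive dependence" discussed below.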
Why is this of any interest to developers? First, let me suggest an axiom: Trying to
achieve something called “poverty eradication” or “social justice” or “rights fulfillment” means
you are operating in a complex adaptive system and not a simple, closed, linear system.
Development programs and projects -- efforts to alter the calculus or relations of poverty, justice,
rights, or capital –– are therefore subject to the following characteristics of any complex adaptive
system:
1. “Sensitive Dependence”: Even the tiniest differences in initial conditions can produce huge
differences over time. Further to this, very small initial influences (like, say, training five
women how to sew) can have very large long-term impacts as they interweave with other
elements of the system, form new configurations and patterns. (The women, perhaps, eventually
create sub-Saharan Africa’s largest clothing factory 25 years later). And very large initial
influences (like, say, the eruption of Mt. St. Helens or a Poverty Reduction Strategy Paper) can
have very small long-term impacts.
2. Causes and effects can be separated very widely in space and time. Complex systems are
“discontinuous”: they may appear stagnant (at some level of observation) over long periods and
suddenly take a great leap. They may roil and bubble for a time without seeming to provoke any
substantive permanent change. An effect today might be a major cause of its own negation 10
years hence.
3. The determination of long-term effects is contingent upon the interaction, over time, of the
entire set of elements in the system.
16 This section is adapted from Eoyang and Berkas (1998).
4. Complex systems are “massively entangled,” and important changes can happen at many levels (micro, meso,
macro). You may not be paying attention to the levels where change is underway, and those
changes can leap levels quickly…or not at all.
What arises are a couple of interesting conclusions. First, it’s clear that the more
complex the system, the less possible it is to actually “prove” attribution, i.e., that the action of a
single actor – be it an individual or a government – is responsible for important changes in the
system. There are simply too many possible intervening, confounding, or just unknown
variables. While it’s really clear that a single actor – the driver – is responsible for turning a key
that makes an engine run, it’s much less clear that a butterfly over Tokyo produces atmospheric
waves that become a hurricane in the Gulf of Mexico. Or, to be less flip: It’s very unclear that
one can attribute gains in economic indicators to the World Bank. Or to any other single
actor…who, we shall presume for the sake of politeness, wants to be accountable for
development work.
The second interesting conclusion is that from a complex adaptive systems perspective,
“sustainability” is an achievement of the entire system. Sustainability inheres in the complexity
of multiple, interacting, nonlinear elements -- not in individual elements. At a fundamental,
axiomatic level, no single actor or organization can make legitimate claims to having
“sustainably” reduced poverty, or “sustainably” ensured full enjoyment of human rights by any
particular marginalized group of people.
If you are working in complex systems – and readily admitting that this is the case – then
you actually stop using the language of positivist evaluation. Rather than attribution, you need
to start looking at claims of probable contribution.17 In complex adaptive systems, participants’
opinions and insights regarding what has changed and why take on much more importance: they
can identify intervening, confounding, or otherwise unknown forces much better than outside
experts. Accountability in this kind of context is not a one-off achievement: rather, it is a
narrative, a story, and a set of social relationships developed over time. For, at the end of the
day, if we are honest, there is always the spectre of doubt when it comes to what, exactly, has
produced “sustainable reduction in poverty”, or permanent changes in relations of power
between human actors. There is always the fact that any deep, significant social change for the
benefit of poor people requires the combined work of thousands of actors and scores of years.
I’ve been trying to get managers in NGOs like CARE and Oxfam to get their minds
around this over the past few years, to confront rather than avoid the fact that we don’t operate in
simple systems. This is hard, when much in mainstream development approaches (from the
RFA/RFP structure, to logframes, to end-of-project compliance evaluations, to the lack of funds
for ex-post evaluations) says that we do and that our performance will be assessed based on our
ability to magically conjure linear cause-and-effect in a complex, nonlinear world. I have taken
to saying to senior managers that “in complex adaptive systems, demonstrating impact on
underlying causes of poverty = having effective learning processes in place that continuously construct
and achieve shared meanings among social actors.”
17 Just so I’m not misunderstood at this point in the argument: Nothing I’ve said up to now means that there is anything wrong with our standard arsenal of social science methods. Indeed, a short list of things that any reputable NGO should be doing if it wants to be accountable in complex adaptive operating environments will sound familiar to social scientists: a) do multiple studies (no single study will be persuasive), b) carefully compare your impact research to the work of others with a very keen attention to similarities and differences, c) have a clear, explicit theory of change that can be challenged and refuted, and d) carefully describe your context. But all of these are very rare in mainstream NGO, donor, or UN development work.
III. Nice Thoughts. Where’s the Methodological Beef? NGO Accountability in a Complex World
NGO accountability in a post-post-Washington Consensus development era,18 for many
organizations,19 is a new beast. There have emerged in the past decade or so many developers
who believe that a prime problem with development over the past 40 years is that donors and
other organizations are not actually accountable to anybody other than themselves. This is one
factor in the rise of “rights-based approaches” that view participants as rights-bearers who hold
claims to minimum levels of treatment, services, and opportunity, and who exist in a wider
societal context within which such claims are either respected or ignored. Poverty reduction is
about altering relationships of power among rights holders and duty bearers and, so,
development is reconstrued as inextricably political. But rights-based approaches also shift the
development model from one based on meeting poor people’s needs to one of supporting the
poor to claim what is rightfully theirs, from a model in which NGO accountability is upwards (to
donors, to governments, to academia) to one in which the fundamental accountability is to the
poor. All well and good, except that there is deep ambiguity about how to monitor this type of
change, how to evaluate whether power relationships have changed for the better, and whether
deep structural causes of poverty and rights denial are ameliorated.
The vision of rights-based programs is riveting for many developers. It gives them new
energy and commitment. How to root such programs in an accountability system is very blurry,
however. What seems clear to many who have thought about this problem is that standard
approaches to program evaluation fall short. Of particular concern are donor approaches – such
as USAID’s heavy use of RFAs/RFPs – that lock an award recipient into a rigid set of required
outputs that, at best, serve as distant proxies for the kinds of changes in human relations and
social positions that global poverty eradication requires.
NGOs face not only the moral imperative to turn accountability on its head but also the
theoretical challenge to understand and measure change in complex adaptive systems. At present
they are generally meeting the challenge with approaches to understanding and measuring social
change suited to simple, closed, and linear systems. What might characterize accountability
systems if developers really cared about the changes they now talk about under the banner of
rights-based development, social justice, and poverty eradication? I think we would see five
crucial differences.
First, contrary to long standing norms and standards in the professional monitoring and
evaluation literature and guidelines, we actually must look at building a much wider evidentiary
net in our projects and programs, and also look at changes at multiple levels. We need to
constantly seek intervening, confounding, or new variables that we have not considered or did
not know about. In other words, we need to pay attention to noise around our projects rather
than filter that noise out through “focused strategy” and a small set of proxy indicators. This
runs counter, of course, to all current common sense about how to do a good logframe, to construct an efficient set of measures and monitoring procedures, and to spend as little money as possible on the accountability system.
18 Another way of saying that the brightest, best, most creative ideas about sub-national development are now coming from citizens of the very places that donors wish to develop. It’s the age of, perhaps, the Bahía or Bangalore consensus.
19 “Many” because, despite what some critics of development claim, there are tremendous spaces, large room for maneuver, within development paradigms. Even in the age of the Washington consensus, some organizations were post-consensus. In fact, they were post-consensus before the consensus ever emerged. Conversely, there are organizations that are firmly enmeshed in the Washington Consensus, still, even though their leaders would deny it (like USAID).
And such an accountability system needs to gather and process data much more regularly than most projects or programs currently do: if complex adaptive systems are discontinuous, if they are nonlinear, if small changes can lead to large effects, then systems that only capture and mull over data every 6-12 months are not sufficient.20
Second, we must find ways to see, grasp, and discuss changes at the level of emergent
patterns in the entire system (or at least significant swathes of it), rather than persist in
methodological individualism of different sorts. We need monitoring and evaluation systems
that can track patterns over time, and flag emerging relationships, and we need measures for such
changes in the system and not just its component parts. Instead of, for example, asking whether
women as a result of training and support organize into solidarity groups (a measure straight
from methodological individualism and linear hypotheses), we might, instead, seek to understand
the ways that their relationships, influences, and power have shifted with a wide net of other
social actors, and what gender norms are being eroded and what are being strengthened or newly
created. The first is really easy to measure; the second is not, although it is not an insoluble
problem if we don’t expect simple indicators for complex changes.
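As a rough sketch of what this shift in unit of analysis might look like in practice – the names, ties, and numbers below are invented for illustration, offered only as a flavor of pattern-level measurement rather than a worked method – one might track relationship networks rather than individual counts, here using Python’s networkx library:

# Invented illustration: contrast an individual-level indicator with a
# pattern-level one. All names, ties, and figures are hypothetical.
import networkx as nx

# Methodological individualism: count members of a solidarity group.
members_year1, members_year3 = 12, 19   # easy to measure, says little

# Pattern level: who exchanges advice, credit, or labor with whom?
year1 = nx.Graph([("Awa", "Binta"), ("Binta", "Coumba"), ("Awa", "Coumba")])
year3 = nx.Graph([("Awa", "Binta"), ("Binta", "Coumba"), ("Awa", "Coumba"),
                  ("Awa", "village chief"), ("Coumba", "grain trader"),
                  ("Binta", "clinic nurse"), ("grain trader", "village chief")])

# Betweenness centrality: who now brokers between previously unconnected
# actors? In year 1 the women form a closed triangle (all scores 0); by
# year 3 they bridge to a wider net of social actors.
for year, g in (("year 1", year1), ("year 3", year3)):
    scores = nx.betweenness_centrality(g)
    print(year, {n: round(s, 2) for n, s in sorted(scores.items())})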
A third implication follows: In complex adaptive systems, knowledge of how the system
works is often tacit, embedded in the actors who are enmeshed in the system. I am not saying
that outsiders do not have important comparative knowledge to bring to bear across contexts.
What I am arguing is that the open, nonlinear, and highly unpredictable nature of change in
complex systems means that no matter how small, or circumscribed, or tight our project,
program, or research is, it can have far-reaching impacts that are unknown. It can also have large
short-term impacts that dissipate and disappear as the larger system closes around those changes.
NGOs and other developers need to bring a much wider group of “insiders” into their reasoning,
plans and logic models, and recruit insiders to help cast the widest net possible on a) intervening
variables, b) confounding variables, c) unintended positive effects, and d) unintended harms.
One could simplify this into the word “participation” but that would be a gross simplification.
We need to think more about a sustained dialogue across different forms of knowledge that
challenges the mental models and assumptions about what actions will produce what effects.
This is a dialogue that will push all actors implicated in a project or program to think more
systemically, to structure and engage in an ongoing process of hypothesis generation, testing,
agreement on what constitutes evidence -- and then repeat the process. In a world where “proof”
of “causality” is actually unreachable, then we must strive for different resting points, ones in
which careful dialogue and debate lead us to generate a set of shared meanings amongst many
actors – local, nonlocal, educated, illiterate, foreign, national, etc. – about what is happening and
why. The views of the poor are not optional in this conversation. In a complex system, beliefs
and consensus about causation are social achievements as much as technical breakthroughs and
so the views of the poor – their evidence, rationales, and theories -- are essential to determining
success…and holding developers to account.21
20 Another pragmatic conclusion, but one more about how we design and manage programs: in such a system, there may be no linear relationship between the amount of time and resources that have been consumed by an initiative and movement towards a goal. Anybody who has ever worked with donors knows how important “burn rates” are to the forms and norms of desk management that donors (and senior NGO officials in their global headquarters) engage in.
21 There are similarities of my points here to what some analysts have termed “fourth generation” evaluation research. These developed in reaction to positivist paradigms and include naturalistic responsive approaches (Guba and Lincoln, 1989), the multiplist model (Cook, 1985) and the design approach (Bobrow and Dryzek, 1987). Critical ethnography also offers a useful set of approaches, methods, and epistemological stances that differ in fundamental ways from the positivism of the 1950s and 1960s.
The fourth ramification: I noted above that qualitative methods are needed that a) are capable of measuring changes not in things but in processes (of social change, changing social relations, reallocations of power), b) facilitate collective sensemaking around what is and is not changing in the social world around us, and c) are able to bridge academic, practitioner, and local insider knowledge and mental models.
But this is not in any simple way a question purely of methods. If this is the new world of
project/program accountability, we quickly have to start thinking about the additional resources –
human and otherwise – needed to actually do this kind of bridging, producing the kinds of
knowledge products that each audience can understand and engage with, and plying the social
pathways that are so critical to building trust. I find “building trust” to be an extraordinarily important part of this because in the end, there’s too much data, too much information, and if an NGO is going to be able to broker conversations about the worth of its programs, then naively relying on “the data” to tell our tale will not work. We must find ways to pierce – at least in places like rural Africa, where people have learned very well that the development game only rarely calls for honest conversation and exchange of opposing views between “participants” and
developers – the pathological theater of development that allows us to substitute countable
proxies for measures and understandings of change in social relations and processes. And these
methods simply cannot be expert-driven; they cannot rely on specialist outsiders to determine
whether development programs are doing what they claim. They must represent careful
compromises that allow different actors to have both an empirical and a values-based
conversation about what constitutes success and how to know it when you see it.
Fifth, and perhaps counterintuitively: we need not to abandon but, rather, make much
better use of long-standing, mainstream approaches to accountability. Far from jettisoning things
like explicit theories of change, logical frameworks, positivist social science, tangible measures
of shorter-term success, tangible proxies of longer-term change, ex-post evaluations, external
evaluation, and so forth, we need to invest much more in these as a global industry.22 In CARE,
we are calling this cluster of basic competencies and core business processes “the new basics.”
The difference, however, will be that a major internal measure of success of these elements of
good development programming is that they all will change over the course of a program. If
there is one clear sign, I’d argue, of a poor rights fulfillment or “underlying cause of poverty”
program it would be that 100% -- or even 75% -- of its originally identified desired outputs or
outcomes are actually found to be relevant and worth accomplishing.
The two linked discourses that I described in section I and the ideas about what ‘success’ should mean in development programs that I outlined in Sections II and III do not fit together easily. They imply different pathways forward if the development enterprise is to be more accountable. They also represent two different epistemological stances as well as philosophical cleavages about the meaning and role of development and the global organizations, institutions, and relationships that comprise development action. That these discourses have all arisen at roughly the same time is, I think, no coincidence.
22 For example, I would suggest that such innovations in mainstream, positivist methods as the Adjusted Interrupted Time Series Method could be very helpful for producing certain kinds of persuasive evidence of changes in social process, relations, power relations, etc. See Galster et al. (2004). White (2005) mentions a number of approaches and methods that might be of use in complex adaptive systems. Sherman and Strang’s (2004) “experimental ethnography” also offers promise in this regard.
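For readers unfamiliar with the family of methods note 22 gestures at, the following is a minimal sketch of a basic segmented (interrupted) time series regression on synthetic data – my own simplified illustration in Python, not the specific “adjusted” estimator of Galster et al. (2004):

# Synthetic illustration of a basic interrupted time series design:
# estimate a level shift and a slope change at a program's start date.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24)                   # e.g., 24 quarters of observations
program = (t >= 12).astype(float)   # program begins at quarter 12

# Fabricated outcome: mild trend, then a jump and a steeper slope.
y = (10 + 0.1 * t + 3.0 * program
     + 0.4 * program * (t - 12) + rng.normal(0, 0.5, t.size))

# Segmented regression:
#   y = b0 + b1*time + b2*program + b3*(time since program start)
X = np.column_stack([np.ones_like(t), t, program, program * (t - 12)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated level shift at program start: {beta[2]:.2f}")
print(f"estimated change in slope afterwards:   {beta[3]:.2f}")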
Discourses like those coming from Jeff Sachs or the MDGs simplify the development
challenge for external developers but certainly not for governments or people in the developing
world. Sachs’ vision, the New Partnership for Africa, the MDGs all, at some important level,
say that there are a small handful of “common sense” changes that all countries need to
experience. The common sense is based, as many scholars have already pointed out, on
teleologies derived from how the west developed. Development, in these discourses, may well
be messy, chaotic, nonlinear, etc., at some level far below the ‘strategic’ level that global
development thinkers ask themselves and others to aim for. What a relief for donors: no more
do they have to really think hard about context, about particular histories and cultures and social
trajectories. No more do they need historians, anthropologists, rural sociologists, interpretive
political scientists, religion scholars, gender experts, etc.: They really just need economists,
quantitative sociologists, demographers, and engineers. And they just need to do what they’ve
always – really – thought was necessary: build bridges, roads, schools, clinics. All perfectly
countable, tangible. Add in some new elements to the take-off model – again, perfectly
countable and tangible – such as multi-party political systems, more level economic playing
fields, and civil society groups and federations. Stir and let sit for a few decades.
The re-emergence of normal social science methods is not identical to the renascent
Marshall-style plan but dovetails with it in fortuitous ways. First, Marshall Plan-style results for
international donors lend themselves better to positivist, quantitative social science: hence, the
methods discourse gains power and influence by linking itself with not just a (teleological)
theory of what constitutes development but also to expert knowledge about what is needed to
prove causality and attribution, and then links these quite seamlessly to much broader – and
philosophically and theoretically fraught – discourses of donor, NGO, and government
accountability. It just seems a little too convenient that by combining the two discourses, long-standing seats of power in the development industry can at once make their own job easier, raise their own forms of expertise to the highest rung, and tacitly delegate all of the hard, nonlinear, complex, emergent, mucky, and murky changes in control and access to capital in all its forms to
others. To be fair: PAL network members and others arguing for a stricter use of the scientific
method in development programs do tackle complex issues, issues that are of deep concern to
those who identify more with the Bahía rather than the Washington consensus. And every
member of the PAL network I’ve talked to is passionate, I will say, about sustainable and deep
changes in the structures of poverty in the developing world. But the return to the scientific
method means that questions and programs must be narrowed into very thin slices of effort,
intent, and outcomes. Science, as we know, tends towards the minuscule, towards
compartmentalized knowledge and expertise, towards knowing a great deal about extremely
small patches of intellectual ground. This is where the new scientism in development points.
And finally, the two discourses together, in their second incarnations, make for good domestic politics from Washington’s perspective, as Paul Wolfowitz – speaking in his role as World Bank President – recently made clear in what might qualify as the most jaw-droppingly hypocritical statement of Americans’ interest in foreign aid in recent memory:
In my eyes, Americans as well as other tax payers are quite ready to show more generosity. But one must convince them that their generosity will bear fruit, that there will be results (CGD Evaluation Working Group 2006).
The two discursive returns form a powerful front: a) simplify the role and goals of
external donors into a handful of ‘common sense’, easily counted results; b) insist that expert
knowledge of the scientific method and reliable and valid research processes is desperately
needed for reasons of public accountability in development and that the only valid form of
knowledge derives from normal, positivist social science; and c) developers who might identify
more with a Bahía rather than Washington consensus are left either having to purchase the
costly, expert knowledge and competencies of consultants in order to be taken seriously by
powerful organizations, or reject the discourses and so become saddled with standards and norms
and values of proof for effecting fundamental changes in power and social relations that they can
never meet. The two discursive returns, as a result, become a rampart erected against the
south’s/east’s intellectual leadership and influence over the past two decades regarding
development’s purpose, processes, and ultimate goals. Is it too strident to call this, as I did in the
introduction, the last throes of colonial development? Or too optimistic?
Discourses of rights-based approaches, social justice, underlying causes of poverty, etc.,
imply that a whole series of changes is needed in how influential global actors act, allocate
funds, devise Official Development Assistance strategy and agree on and evaluate what
constitutes ‘success.’ Development funding would have to become longer-term, more open-ended, more focused on process quality assurance rather than on countable outputs. New kinds
of dialogic methods for agreeing on what has actually happened and why would need to become
standard operating procedure rather than intriguing side shows posted on the web sites of the
more forward leaning, committed, and innovative agencies. Important bridges would need to be
built between technical experts, evaluation experts, managers, other stakeholders, and the poor
themselves…and this bridge-building would have to be as normal a line item in donor budget
requirements as the production of quarterly financial reports is now. Much more investment
would have to be made in harvesting and synthesizing knowledge from around the world rather
than in the rather headlong and blinkered rush to launch always-new projects (because, of course,
there are always new scholars, intellectuals, and young professionals who want to do something
original). Much more attention would be paid than is currently to quality standards of project
and program designs. We would see much more activity in – and much greater influence
accorded to – comparative historical studies that help us understand much better processes of
long-term social change and how relations of power, control of resources, and equality-fomenting processes occur (if anybody knows of a development agency with a comparative
historical sociologist or political scientist on board, let me know). We would see new structures
and organizations inside the development apparatus: knowledge management, storage, and
dissemination would occupy much greater budgetary space, for example. The issuance of RFAs and RFPs would subside or would change greatly in pace and goals. Donor and NGO staff
would remain longer in particular contexts instead of the current norm of moving them from site
to site to keep them “fresh” and uncynical. And we would see much more emphasis and
investment in the generation of knowledge and new learning processes, social learning loops, at
the rural dog-ends of the global system.
All of the above changes mean changes in a) resource allocation, b) staffing, c) required
skills, d) relationships between global organizations, e) who gets to define the ‘success’ of
development programs, and f) who gets to tell the stories, where, and to whom. These changes
are inherent in the paradigm of rights-based approaches, of development that is openly political
and targets inequitable power relations, inequitable access to and control of material and cultural
capital, and social exclusion. They are deeply threatening to the status quo.
Conclusion
Do developers really care if development works? In some ways, of course, this is a dumb question. Of course they care. It’s just that there is an almost infinite variety of actors we can label a ‘developer,’ an equally infinite number of ways we can define ‘care,’ and let’s not even get into the morass around what ‘works’ means. A more interesting discussion can open,
however, if instead of getting trapped into the particular, the individual, we ask if, through its
actions, the development enterprise (or apparatus, or industry) shows that it is concerned with
careful learning about what works and what does not, devotes significant resources to both
mainstream and innovative methods for uncovering promising practices and approaches, and
encourages its many members to be transparent and public about successes and failures.
Here the answer is decidedly mixed. A large number of new, participatory, continuous,
and deeply dialogic methods for learning what is working and not working in development
programs has arisen in the past 15 years. Proponents sometimes reject all mainstream research
methods on the grounds that they are so laden with discourses of knowledge and power that they
are of no value to those who wish to transform social and material relations in the developing
world. There is a deep commitment to what some call “social knowledge” and not “expert
knowledge” and, in ways very similar to the work of Freire, monitoring and evaluation is positioned as a process that raises people’s consciousness and their ability to analyze and act on their own situations. It is, actually, a deep and passionate commitment to agreeing on what works, for
whom, and against whom. Meanwhile, in the U.S., and roughly over the same period, a much
smaller group of highly credentialed scholars has staked its claim to the technical high ground,
arguing that development needs to return to the basic social science methods at which those scholars are particularly adept. Unsurprisingly, the latter are attracting substantial funding and contracts and stirring significant interest within the US government and philanthropic world. In many ways, this is a reproduction of colonial forms of development, power, and control, a kind of methodological panopticon. In saying that, I mean to imply no moral or ethical judgment of the proponents of a return to the scientific method, of Jeff Sachs, or of the many architects who erected the MDGs.
I began many months ago contemplating the phenomenon of complex adaptive systems
because it felt to me that these two sets of actors were both a) caught in long-standing discursive regimes that militate against building bridges between them, and b) in need of looking to each other to move their own ideas forward. Neither of the two camps is allowing the challenge we face – that is, poverty reduction – to be as rich, complex, contradictory, difficult to pin down and
measure, maddening, and humbling as it patently is. Neither of the two camps is putting in place
the kinds of questioning and learning processes essential for making progress at the pace they
would like. Neither is trying to develop new methods for the measurement challenges they face.
And as a result, we are missing an opportunity to alter the norms, practices, and policies of the larger development industry in ways that would not just allow but force together these forms of knowledge – these opinions about what counts and how to count it – in provocative but productive ways. This is a vision of researchers and methodologists contributing to the end of
colonial development rather than manipulating the development enterprise machine while
commanding us to pay no attention to the man behind the curtain.
Bibliography
Barahona, Carlos and Sarah Levy. How to Generate Statistics and Influence Policy Using Participatory Methods in Research. Statistical Services Centre Working Paper. Reading, UK: University of Reading, November 2002.
Bobrow, D.B. and J.S. Dryzek. Policy Analysis by Design. Pittsburgh, PA: University of Pittsburgh Press, 1987.
Comaroff, John L. and Jean Comaroff. “Introduction.” In John L. Comaroff and Jean Comaroff
eds., Civil Society and the Political Imagination in Africa: Critical Perspectives, 1-43.
Chicago and London: University of Chicago Press, 1999.
Cook, T.D. “Postpositivist Critical Multiplism.” In R.L. Shotland and M.M. Mark eds., Social Science and Social Policy, 129-46. Newbury Park, CA: Sage, 1985.
Cowen, M.P. and R.W. Shenton. Doctrines of Development. London and New York: Routledge, 1996.
Cracknell, Basil Edward. Evaluating Development Aid: Issues, Problems, and Solutions. New
Delhi: Sage, 2000.
Crush, Jonathan ed. Power of Development. London: Routledge, 1998.
Davies, Rick and Jess Dart. The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use. April 2005. Available at http://www.mande.co.uk/docs/MSCGuide.pdf
ECB. Impact Measurement and Accountability in Emergencies: The Good Enough Guide.
Oxford UK: Oxfam Publishing, 2007.
Eoyang, Glenda H. and Thomas H. Berkas, “Evaluation in a Complex Adaptive System,” April
30, 1998, http://www.winternet.com/~eoyang/EvalinCAS.pdf.
Escobar, Arturo. Encountering Development: The Making and Unmaking of the Third World. Princeton, NJ: Princeton University Press, 1995.
Evaluation Working Group. “When Will We Ever Learn? Improving Lives Through Impact Evaluation.” Washington, DC: Center for Global Development, May 2006.
Ferguson, James. The Anti-Politics Machine: “Development,” Depoliticization, and
Bureaucratic Power in Lesotho. Minneapolis, MN: University of Minnesota Press, 1994.
Galster et al. “Measuring the Impacts of Community Development Initiatives: A New Application of the Adjusted Interrupted Time-Series Method.” Evaluation Review 28, 6 (2004): 502-538.
Glenzer, Kent. “La Sécheresse: The Social and Institutional Construction of a Development Problem in the Malian (Soudanese) Sahel, c.1900-1982.” Canadian Journal of African Studies 36, 1 (2002): 1-34.
Guba, Egon G. and Yvonna S. Lincoln. Fourth Generation Evaluation. Newbury Park, CA: Sage, 1989.
Hobart, Mark ed., An Anthropological Critique of Development: The Growth of Ignorance.
London and New York: Routledge, 1993.
Iverson, Alex. “Attribution and Aid Evaluation in International Development: A Literature
Review.” Toronto: International Development Research Centre, May 2003.
Khakee, Abdul. “The Emerging Gap Between Evaluation Research and Practice.” Evaluation 9,
3 (2003): 340-352.
Long, Norman and Ann Long eds. Battlefields of Knowledge: The Interlocking of Theory and Practice in Social Research and Development. London and New York: Routledge, 1992.
Narayan, Deepa. “Conceptual Framework and Methodological Challenges.” In Deepa Narayan
ed., Measuring Empowerment: Cross Disciplinary Perspectives, 3-38. Washington DC: The
World Bank, 2005.
Powell, Walter W. and Paul J. DiMaggio eds. The New Institutionalism in Organizational Analysis. Chicago: University of Chicago Press, 1991.
Radelet, Stephen. “Foreign Assistance Reforms: Successes, Failures, and Next Steps.”
Testimony for the Senate Foreign Relations Subcommittee on International Development,
Foreign Assistance, Economic Affairs, and International Environmental Protection. June 12,
2007.
Rist, Gilbert. The History of Development: From Western Origins to Global Faith. London:
Zed Books, 2005.
Rossi, Peter H. and Howard E. Freeman. Evaluation: A Systematic Approach. Newbury Park, CA: Sage Publications, 1993.
Sachs, Jeffrey. The End of Poverty: Economic Possibilities for Our Time. New York: Penguin Press, 2005.
Savedoff, William D. and Ruth Levine, “Learning from Development: the Case for an
International Council to Catalyze Independent Impact Evaluations of Social Sector
Interventions,” CGD Brief, Washington, DC: Center for Global Development, May 2006.
Sherman, Lawrence and Heather Strang. “Experimental Ethnography: The Marriage of Qualitative and Quantitative Research.” The Annals of the American Academy of Political and Social Science 595, 1 (2004): 204-222.
United Nations. Millennium Development Goal 8: Delivering on the Global Partnership for Achieving the Millennium Development Goals. MDG Gap Task Force Report. New York: UN, 2008.
White, Howard. Challenges in Measuring Development Effectiveness. IDS Working Paper 242.
Brighton UK: Institute of Development Studies, March 2005.