
World Development Vol. 60, pp. 69–83, 2014
© 2014 Elsevier Ltd. All rights reserved.
0305-750X/$ - see front matter
www.elsevier.com/locate/worlddev
http://dx.doi.org/10.1016/j.worlddev.2014.03.014
Does Information Lead to More Active Citizenship?
Evidence from an Education Intervention in Rural Kenya
EVAN S. LIEBERMAN
Princeton University, USA
DANIEL N. POSNER
University of California at Los Angeles, USA
and
LILY L. TSAI *
Massachusetts Institute of Technology, Cambridge, USA
Summary. — We study a randomized educational intervention in 550 households in 26 matched villages in two Kenyan districts. The
intervention provided parents with information about their children’s performance on literacy and numeracy tests, and materials about
how to become more involved in improving their children’s learning. We find the provision of such information had no discernible impact on either private or collective action. In discussing these findings, we articulate a framework linking information provision to
changes in citizens’ behavior, and assess the present intervention at each step. Future research on information provision should pay
greater attention to this framework.
© 2014 Elsevier Ltd. All rights reserved.
Key words — Kenya, Africa, information, accountability, education, field experiments
* We gratefully acknowledge Jessica Grody for serving as our Project Manager; Ruth Carlitz, Kelly Zhang, Angela Kiruri, and Richard Odhiambo for serving as field coordinators; and Jason Poulos and Yue Hou for additional research assistance. We also appreciate the steadfast cooperation of various Uwezo staff, including Sara Ruto, John Mugo, James Angoye, Joyce Kinyanjui, Conrad Watola, Amos Kaburu, and Ezekiel Sikutwa. We thank Claire Adida, Guy Grossman, Daniel de Kadt, Thad Dunning, and seminar participants at the University of Florida, Harvard, Columbia, the University of California at San Diego, and the World Bank for helpful comments. This study received human subjects review board approvals from Princeton University IRB (Protocol #: 0000005241) and MIT (COUHES #: 1104004442), and was conducted with a permit from the Kenya National Council for Science and Technology Research (NCST/RRI/12/1/SS-011/674). The research was funded by Hivos/Twaweza as part of an evaluation study. Final revision accepted: March 8, 2014.

1. INTRODUCTION

Providing information to citizens about the quality of the government services they receive has been seized upon by development organizations in recent years as a key lever for improving the welfare of the world’s poorest people. The logic is straightforward: Poverty can be reduced by improvements in governance and service delivery (World Bank, 2004). In turn, governance and service delivery can be strengthened by increasing bottom-up pressure from citizens (Bruns, Filmer, & Patrinos, 2011). And, in keeping with the rich scholarly literature on the role of asymmetric information in principal–agent relationships (Besley, 2006; Fearon, 1999; Ferejohn, 1986), bottom-up pressure can be increased by providing citizens with comprehensible information about what their governments and elected representatives are (or are not) doing on their behalf. The causal chain runs from information to citizen pressure to improved service delivery to welfare improvements.

This logic has motivated donors to support hundreds of millions of dollars of interventions designed to alleviate the presumed informational constraints faced by citizens in developing countries. And yet, these projects risk proving as unproductive as the ones they supplanted in the absence of a deeper understanding of the conditions under which information is likely to change people’s behavior. Indeed, various researchers have begun to shed light on this plausible development strategy through a series of experimental interventions to study the effects of information provision, including through the distribution of report cards on local health service provision, school quality, and legislators’ performance. 1 Others have involved media campaigns to publicize the leakage of development funds. 2 Still others have disseminated information about municipal spending, corruption, and other outcomes. 3 However, to date, the results from these studies have been mixed, and clearly more research is needed to draw stronger conclusions about the logic and assumptions undergirding the recent enthusiasm for information campaigns as a development strategy.

This paper aims to further this understanding by evaluating and then unpacking the results of a large-scale informational intervention designed to generate both citizen activism and private behavioral change on behalf of improved educational outcomes in Kenya. Our study is unique with respect to most impact evaluation research in this area in that we manage to avoid many of the typical tradeoffs between internal and external validity: We study a largely “natural” intervention in the sense that we, as investigators, did not influence the formulation of the treatment materials, the sampling, or any aspect of
its implementation; but we are also able to make relatively
strong inferences about causal relationships because villages
and households were randomly sampled for inclusion in the
program. That said, there are also some limitations associated
with our study that are distinct from field experiments designed purely for research purposes: the intervention combined multiple treatments in ways that make it difficult to
assess their independent impact; the information provided to
citizens only indirectly addresses the outcomes the intervention seeks to produce; and the theory of change structuring
the intervention relied on a set of relatively optimistic assumptions about people’s willingness to take costly actions to
achieve collective ends. These characteristics, however, make
the project no different from many other initiatives launched
by major development organizations and, in some respects,
make it more, not less, important to try to determine whether
the project achieved its goals (and, if not, why).
We employ a post-treatment field study conducted in
matched villages in two rural Kenyan districts. Using multiple
measures, we evaluate both public citizen activism and private
actions taken by members of households that did and did not
receive a randomized informational intervention. The intervention involved two different kinds of information: the
reporting to parents of the results of literacy and numeracy
tests administered to their school-aged children, and the provision of materials describing strategies parents might employ to
improve their children’s learning. The objective of the former
was to provide parents with factual information from which
they could make an inference about the performance of their
local primary school, and hence the need to take action to improve it. 4 The goal of the supplementary materials was to expand parents’ repertoires of action by providing them with
ideas for concrete steps they could take in order to hold
schools and government accountable for better education.
We find that these informational interventions did not have
any substantial impact on parents’ public or private behavior.
Parents that received the informational treatments were no
more likely than other parents to take actions at school or
in the public sphere to improve the quality of their children’s
schooling or to adopt behaviors at home that might have a positive impact on their children’s learning. Nor were they more
likely to increase their levels of citizen activism or community
participation in areas outside education.
Although disappointing from the standpoint of those who
have embraced the link between information provision and
service delivery improvements, our null findings provide an
opportunity for exploring some of the (usually unarticulated)
conditions that may be necessary for information provision to
generate real behavioral change. Specifically, we suggest that
for information to generate citizen action it must be understood; it must cause people to update their prior beliefs in
some manner; and it must speak to an issue that people prioritize and also believe is their responsibility to address. In addition, the people at whom the information is directed must
know what actions to take and possess the skills for taking
these actions; they must believe that authorities will respond
to their actions; and, to the extent that the outcome in question requires collective action, they must believe that others
in the community will act as well. And, of course, they cannot
already be doing everything that is possible for them to do.
Either these conditions must already be met prior to the
informational intervention or the intervention itself must produce these conditions. The absence of any of these conditions
may be enough to interrupt the presumed link between information and both private and public actions. Our articulation
of these key conditions has implications not just for making
sense of our findings but for the assessment, design, and
understanding of informational interventions more broadly.
2. RELATION TO THE LITERATURE
The hypothesized link between information and citizen
activism for improved service delivery has been subjected to
a growing number of empirical tests. Multiple studies have
found that informed citizens are more likely to be involved
in civic and political action and to engage in participatory
activities such as voting, attending political meetings, contacting officials, and protesting (Brady, Verba, & Schlozman,
1995; Gerber & Green, 2000; Neuman, 1986; Zaller, 1992).
Studies have also shown that such participation is associated
with higher levels of service provision (e.g., Bjorkman Nyqvist,
de Walque, & Svensson, 2013; Heller, 2001). Yet, a great deal
of empirical work has found little substantive impact from the
provision of information to poor citizens. This is true both
among studies (like ours) that test for a link between the provision of information and changes in citizens’ public and private behaviors and among those that investigate the reduced
form relationship between information and the improved public service provision that these behaviors are thought to promote. Little empirical consensus has emerged.
A number of studies in this literature focus on the impact of
information on voting. Among these, Banerjee et al. (2011) find
that slum dwellers in Delhi increase turnout and select for better performing candidates when equipped with pre-election report cards on incumbent performance and candidate
qualifications. However, Chong et al. (2012) find that the provision of information on municipal spending and corruption to
Mexican voters has no impact on turnout or vote choices.
Humphreys and Weinstein (2012) also find no effect on voters’
electoral behavior in Uganda 2 years after the dissemination of
report cards detailing their MP’s performance. De Figueiredo,
Hidalgo, and Kasahara (2011) investigate the impact on turnout, ballot spoilage, and electoral support of publicizing a candidate’s conviction on corruption charges in Brazil. They find
that the effect of providing such information is conditional
on the convicted candidate’s party connection, presumably because of the differing dispositions of each party’s support base
vis-a-vis corruption.
These mixed findings are echoed in studies that emphasize
the impact of information on citizen actions outside of voting.
Banerjee et al. (2010) find that providing information to citizens in Uttar Pradesh about the role of the local village education committee and about the quality of learning in local
schools had no impact on parental involvement in the school
system. Keefer and Khemani (2011) employ a natural experiment in Benin built around within-commune variation in access to community radio programming to evaluate the effects
of information dissemination on literacy, government inputs
to education, citizen involvement in Parent-Teacher Association meetings, and private investments in children’s learning.
They find that increased radio access has no impact on community-level participation, although it does seem to affect private
behavior supportive of children’s learning, such as purchasing
books or making informal or private tuition payments to
schools. Bjorkman and Svensson (2009) find that providing
communities in Uganda with information about the performance of their local health facilities and encouraging community members to become more involved in monitoring their
performance is associated with greater citizen involvement.
Another set of studies sidesteps the intermediate link between information and citizens’ public or private actions and
tests for a direct connection between information provision
and the quality of government service delivery. Besley and
Burgess (2002) examine government responsiveness to food
production shortfalls in India and find that state governments
respond more aggressively where newspaper circulation (and
presumably, the flow of information) is higher. Reinikka and
Svensson (2005) report that a newspaper campaign in Uganda
aimed at reducing the capture of public funds by providing
information about local officials’ handling of a large education
grant program had a strong impact on both enrollment and
test scores. Andrabi, Das, and Khwaja (2009) find that the distribution of report cards with information on student performance on math and language tests in Pakistan led to test
score increases in subsequent years. Jensen (2010) finds that
providing students in the Dominican Republic with information about the returns to schooling significantly increases the
number of years of schooling they complete. Both Keefer
and Khemani (2011) and Bjorkman and Svensson (2009), described above, find positive effects of information provision on
pupil test scores and infant mortality rates, respectively.
Almost all of the interventions described thus far, as well as
much of the existing theoretical literature, focus on the provision of factual information that increases citizens’ appreciation of the (usually deficient) quality of government services.
But information provision might also generate citizen activism
through other channels: by informing citizens about the
importance of taking action and providing ideas about the
specific actions they might take in order to improve the quality
of government service provision (or substitute for it). One of
the advantages of the intervention we study here is that it combines these different types of information. Like Banerjee et al.
(2010) and Pandey et al. (2009), the interventions we evaluate
involve information about both the quality of children’s learning and how citizens might improve it.
3. INTERVENTION, CONTEXT, AND RESEARCH DESIGN

(a) The intervention

The particular intervention we study, the Uwezo initiative, is a large-scale, multi-country, information-based intervention that seeks to promote citizen action toward the improvement of children’s learning in East Africa. 5 It does this in three linked steps: First, by providing parents with reliable, easily understood information with which they might be able to make an inference about how much their children are (or are not) learning in school; second, by providing concrete suggestions about steps that parents might take to improve education outcomes for their children; and finally, by facilitating a broad public discussion of the state of education in the country. Our study focuses on the impact of the first two of these steps during the second Uwezo assessment round in Kenya, which took place in 2011. The specific interventions we study involve a series of informational treatments that provide parents with feedback about their children’s performance (the “assessment”) and guidelines for action (the “instruction materials”). Our research exploits the random implementation of these treatments across households and villages to estimate their effects on parents’ willingness to take public and private actions on behalf of improved educational outcomes for their children, and on their degree of citizen activism more generally.

Villages and households were selected for assessment and information dissemination (hereafter described as “treatment”) as described in the next section. 6 Selected households received the following treatment during the months of February and March of 2011:
1. Assessment: An Uwezo volunteer administered tests of basic literacy, numeracy, and reading comprehension—in both English and Swahili—to every child in the household aged 6–16. Parents were presented with the results of these tests at the conclusion of the assessment and told that the test provided an indication of whether or not their children had mastered basic skills in reading and math.
2. Instruction materials: An Uwezo volunteer presented assessed households with materials that outlined strategies that parents might pursue to improve their children’s learning. These included: a wall calendar with statements about the value of education; a poster with a checklist of strategies parents might take to improve their children’s learning; 7 a sign-up sheet to become a “friend of education” and to receive periodic text messages from Uwezo on education themes; stories in English and Swahili intended to be read by children; and a “citizen’s flyer” with recommendations about how to get involved in local and national efforts to improve educational outcomes. The volunteers took time to talk through the checklist of strategies listed on the poster, but left the other materials for household members to consider on their own.

During the 2011 round of the Uwezo initiative, 124 districts were randomly selected (from a total of 158 districts nationwide), weighted such that the number of districts selected in each province would be proportional to the province’s population. Thirty villages were then randomly selected for treatment from each district. In each selected village, 20 households were chosen to receive the assessment and instruction materials. 8 In total, 72,106 households were visited by Uwezo volunteers in 2011, and a total of 134,243 children were administered the literacy and numeracy tests (Uwezo Kenya, 2011).
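The national sampling procedure just described is, mechanically, a standard multi-stage draw. The sketch below is purely illustrative (it is not Uwezo’s code; the province populations, district counts, and village frames are hypothetical placeholders); it shows one way the proportional-to-population allocation of 124 districts and the subsequent village draw could be implemented.

# Illustrative sketch of a multi-stage draw like the one described above.
# Not Uwezo's actual code: the province populations, district lists, and the
# village frame below are hypothetical placeholders.
import random

random.seed(2011)

province_pop = {"Province A": 5_000_000, "Province B": 3_000_000, "Province C": 2_000_000}
districts = {"Province A": [f"A-{i}" for i in range(70)],
             "Province B": [f"B-{i}" for i in range(50)],
             "Province C": [f"C-{i}" for i in range(38)]}  # 158 districts in total

TOTAL_DISTRICTS = 124
total_pop = sum(province_pop.values())

# Allocate the 124 districts across provinces in proportion to population,
# using largest-remainder rounding so the allocation sums to exactly 124.
quota = {p: TOTAL_DISTRICTS * pop / total_pop for p, pop in province_pop.items()}
alloc = {p: int(q) for p, q in quota.items()}
leftover = TOTAL_DISTRICTS - sum(alloc.values())
for p in sorted(quota, key=lambda p: quota[p] - int(quota[p]), reverse=True)[:leftover]:
    alloc[p] += 1

# Randomly select districts within each province, then 30 villages per district.
selected_villages = {}
for province, n_districts in alloc.items():
    for district in random.sample(districts[province], min(n_districts, len(districts[province]))):
        village_frame = [f"{district}-village-{j}" for j in range(100)]  # placeholder frame
        selected_villages[district] = random.sample(village_frame, 30)
# Within each selected village, 20 households would then be drawn from its household list.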
(b) The context
Although the Uwezo initiative covers three East African
countries (Kenya, Tanzania, and Uganda), we focus our evaluation on Kenya. Kenya is a democratic, multi-ethnic country
with a historically solid educational system that, according to
the 2011 UNDP report, provides an average of seven years of
schooling to children (high compared to 4.5 for sub-Saharan
Africa as a whole). In 2003, the government introduced universal free primary education. Since that time, primary school
enrollment rates have risen dramatically. However, many
observers believe that due to the absence of a commensurate
infusion of new funds, children’s learning has suffered (Hornsby, 2012; Sifuna, 2007, pp. 702–703). Corruption, mismanagement, and a lack of resources also may have undermined
educational outcomes over the past two decades (Wrong,
2009).
While the country’s high baseline levels of schooling might
bias against finding a strong effect of the intervention, this
may be counterbalanced by the country’s historic propensity
for citizen activism, which should make Kenya especially fertile soil for the kind of social mobilization that the Uwezo initiative was designed to inspire. Moreover, we study the
intervention in areas with both relatively low and relatively
high overall levels of educational attainment.
(c) Research design
We employ a post-treatment, matched village research design for estimating the effects of the Uwezo informational
treatment. We selected two specific districts that provide variation in socio-economic status and baseline literacy rates for
intensive study—Kirinyaga, in the Central Province, and Rongo, in the Nyanza Province. 9 Kirinyaga ranks in the top third
of districts in terms of advanced schooling completed (secondary or higher); Rongo ranks in the bottom third. 10 Within
each district, we designed our study to maximize the possibility of detecting a treatment effect. First, as described above, we
study a compound treatment of student-specific information
about performance and process-oriented information concerning how to participate as an active citizen. 11 Moreover, to
minimize contamination (and associated attenuation bias)
from the first Uwezo assessment round in 2010, we selected
both study districts in part because they were not included
in the 2010 Uwezo assessment. 12 To reduce the likelihood
of information diffusion from the earlier Uwezo intervention
and follow-up dissemination campaign, we also selected the
districts so as to be distant from Nairobi and other large population centers, and as far as possible from districts that had
been included in the 2010 assessment round.
Within Kirinyaga and Rongo, we selected six villages from
among the 30 that had received the Uwezo assessment
(“treated villages”). We selected these villages so as to be physically distant (at minimum, nonbordering) from one another.
From among the hundreds of untreated villages in each district, we then selected as control villages the six villages that
offered the closest matches with the treated villages that we
had already chosen. Matching was accomplished using data
from the 2009 Kenyan census on a number of village-level
characteristics that we hypothesized might influence the impact of the Uwezo intervention: population size, number of
households in the village, number of people currently attending school, percentage of population that had finished primary
and secondary school, percentage of population with radio
and mobile phone service, and percentage of households with
a mobile phone. Matches were also chosen so as to contain villages from the same electoral constituency and from adjacent,
albeit different, sub-locations. Because we discovered that one
of the treated villages in Kirinyaga contained only four treated
households (discussed more below), we selected an additional
treated village and matched pair in that district, for a total of
seven village pairs in Kirinyaga and six in Rongo. Table 1
summarizes the characteristics of the village pairs across the
covariates on which we matched.
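Although the matching itself was done with standard tools, the logic is simple enough to sketch. The code below is our illustration only (not the study’s replication code); the covariate names are hypothetical stand-ins for the census variables listed above, and it pairs each treated village with its closest untreated village by Euclidean distance on standardized covariates, without replacement.

# Illustrative sketch of nearest-neighbor matching of control villages to treated
# villages on standardized census covariates. Not the study's replication code;
# column names are hypothetical.
import numpy as np
import pandas as pd

COVARIATES = ["pop_2009", "n_households", "n_in_school", "pct_primary_only",
              "pct_secondary_only", "pct_radio", "pct_mobile_service", "pct_mobile_phone"]

def match_controls(treated: pd.DataFrame, candidates: pd.DataFrame) -> pd.DataFrame:
    """Greedily pair each treated village with its closest untreated candidate
    (Euclidean distance on z-scored covariates), without replacement.
    Assumes the two frames are indexed by distinct village identifiers."""
    pooled = pd.concat([treated[COVARIATES], candidates[COVARIATES]])
    z = (pooled - pooled.mean()) / pooled.std(ddof=0)  # standardize on the pooled sample
    z_treated, z_candidates = z.loc[treated.index], z.loc[candidates.index]
    available = list(candidates.index)
    pairs = []
    for village in treated.index:
        dist2 = ((z_candidates.loc[available] - z_treated.loc[village]) ** 2).sum(axis=1)
        best = dist2.idxmin()
        pairs.append({"treated": village, "control": best,
                      "distance": float(np.sqrt(dist2[best]))})
        available.remove(best)  # match without replacement
    return pd.DataFrame(pairs)

Constraints of the kind described above—drawing matches from the same electoral constituency and from adjacent sub-locations—would simply be applied by filtering the pool of candidate villages before calling such a function.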
Our main interest was measuring differences in parents’
behavior vis-a-vis their children’s learning in households that
received the assessment and instruction materials and those
that did not (“treated” and “untreated” households, respectively). Since treated households could only be located in treated villages, and since our control villages contained, by
definition, no treated households, this meant comparing
households located in treated and untreated villages. However,
we were also interested in ascertaining whether the impact of
the Uwezo intervention spilled over within treated villages
from households that received the assessment and instruction
materials to those that did not. Hence, we administered our
household questionnaires (described below) in three different
types of households: treated households in treated villages, untreated households in treated villages, and untreated households in untreated villages. We selected these households in
the following manner:
Treated households in treated villages: Uwezo’s protocol
had called for conducting assessments in 20 households in
each treatment village. However, these 20 households were
selected from the village household lists before the Uwezo
volunteers had been able to confirm that the households
contained school-aged children (thus suitable for assessment). Approximately one-third of the time, the preselected households did not contain children aged 6–16.
Since the Uwezo protocol did not provide for the replacement of such households with new ones, the number of
households in which the assessment was actually carried
out was far below the target of 20 per village: on average,
only 12. In order to maximize the number of treated households in our study, our protocol was to include all of them
in our sample.
Untreated households in treated villages: We also sought
to study households in treated villages that had not themselves received the assessment and accompanying instruction materials. To do this, we returned to the original
household list developed during the assessment exercise
and randomly sampled 15 households from among those
that had not received the treatment. We also selected a
set of replacement households using the same procedures.
Households in untreated villages: In the control villages
we had no household lists to draw upon, so our field coordinators worked with village elders to develop them, following the same procedures Uwezo used in their original
sampling plan. From those lists, we randomly sampled 15
households, along with an additional set of replacements.
The final size of our sample, tabulated in terms of our three
treatment conditions and village pairs, is presented in Table 2.
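The household draws described above are simple random samples without replacement from a village household list, with additional draws set aside as replacements. A minimal sketch follows (illustrative only; the list and the number of replacements are placeholders, not the study’s actual values).

# Minimal, illustrative sketch of the per-village household draw described above.
# The household list and the number of replacement households are placeholders.
import random

random.seed(7)
household_list = [f"household_{i:03d}" for i in range(1, 181)]  # hypothetical village list

draw = random.sample(household_list, 15 + 5)  # 15 study households plus 5 replacements
study_households, replacements = draw[:15], draw[15:]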
To select respondents to interview within each sampled
household, we employed the following protocol: Enumerators
were instructed to greet the first adult they encountered when
they approached the household. They were to mention that
they were conducting a survey on democracy and family
behavior and ask if there were children aged 6–16 living in that
household. If the answer was no, then the enumerator would
not proceed with the interview. If the answer was yes, then
the enumerator would ask the adult if he/she was the direct
caregiver of a school-aged child living in the household. If
the answer was again yes, then the enumerator would continue
with the survey. If the adult indicated that she/he was not a
direct caregiver of a school-aged child, then the enumerator
would ask to identify another adult living in the household
who was. Once a direct caregiver was identified, the enumerator would interview that person. Enumerators were instructed
to return two times to households in which it was not initially
possible to conduct an interview, and to select a household
from the replacement list when a suitable respondent could
not be identified. 13
During the interview, the enumerator would ask the respondent to list the names of every member of the household and
to identify the children for whom she/he had direct responsibility. If the respondent cared for more than one child, the
enumerator would roll a die to select just one child to be the
subject of questions later in the survey.
4. DATA
In order to gather information about outcomes and covariates of interest, we developed an extensive household survey,
which was translated into Swahili and Luo and administered
by trained enumerators fluent in the appropriate local language. Interviews were conducted face-to-face with adult
household members in June and July 2011, selected as described
above. Households that had received the Uwezo assessment
were told that we were following up on a survey conducted in
March; households that had not received the assessment were
told that their names had been selected at random.
Table 1. Characteristics of Matched Village Pairs

Columns: (a) 2009 population; (b) # of households; (c) # children at school; (d) % adults only primary school; (e) % adults only secondary school; (f) % households w/ radio service; (g) % households w/ mobile phone service; (h) % households w/ mobile phone.

District    Pair  Status    (a)     (b)   (c)   (d)   (e)   (f)   (g)   (h)
Kirinyaga   A     Treated     800    200   200   0.5   0.2   0.9   1     0.8
            A     Control     700    200   200   0.5   0.3   0.9   0.8   0.8
            B     Treated     200    100   100   0.6   0.2   0.9   0.7   0.6
            B     Control     200    100   100   0.6   0.2   0.8   0.4   0.6
            C     Treated     400    100   100   0.4   0.4   1     0.7   0.8
            C     Control     400    100   100   0.5   0.2   0.9   0.9   0.7
            D     Treated   1,400    500   500   0.5   0.3   0.8   0.6   0.8
            D     Control   1,200    400   400   0.6   0.2   0.9   0.8   0.8
            E     Treated     800    200   200   0.6   0.2   0.8   0.4   0.7
            E     Control     700    200   200   0.6   0.1   0.8   0.5   0.6
            F     Treated     900    300   200   0.6   0.2   0.8   0.5   0.5
            F     Control   1,000    300   300   0.6   0.1   0.8   0.7   0.7
            F2    Treated     300    100   100   0.6   0.2   0.7   0.4   0.6
            F2    Control     200    100   100   0.5   0.2   0.9   0.5   0.6
Rongo       G     Treated     200     50   100   0.5   0.2   0.9   0.4   0.5
            G     Control     200     50   100   0.5   0.1   0.8   0.4   0.4
            H     Treated   1,200    200   700   0.3   0.4   0.9   0.8   0.7
            H     Control     700    100   300   0.6   0.1   0.9   0.4   0.8
            I     Treated   1,300    200   700   0.4   0.3   0.4   0.3   0.7
            I     Control   1,000    200   400   0.5   0.1   0.7   0.4   0.6
            J     Treated     900    200   400   0.6   0.1   0.6   0.4   0.4
            J     Control     800    200   400   0.5   0.1   0.6   0.5   0.4
            K     Treated     300    100   100   0.6   0.1   0.6   0.4   0.4
            K     Control     300    100   100   0.5   0.1   0.6   0.4   0.4
            L     Treated     300    100   200   0.5   0.2   0.8   0.5   0.6
            L     Control     400    100   200   0.5   0.1   0.7   0.4   0.6

Note: Village names are not provided and numbers are rounded to protect the identities of respondents in villages, following our protocol for the protection of human subjects.
Table 2. Distribution of Survey Households across Treatments

                        Control villages       Treated villages                           Total
District    Pair        Control households     Untreated households   Treated households
Kirinyaga   A           16                     20                     13                  49
            B           15                     12                     10                  37
            C           15                     14                     11                  40
            D           17                     13                      4                  34
            E           16                     15                     11                  42
            F           15                     16                     18                  49
            F2          15                     15                     10                  40
            Total       109                    105                    77                  291
Rongo       G           15                     16                     11                  42
            H           16                     16                     10                  42
            I           14                     20                     10                  44
            J           15                     17                     16                  48
            K           16                     14                     11                  41
            L           15                     16                     11                  42
            Total       91                     99                     69                  259
Total                   200                    204                    146                 550
In neither instance did the enumerators specifically mention the Uwezo initiative while introducing the survey. The household survey took approximately 1.5 hours to administer.
In order to minimize desirability bias, we wrote the introduction to the survey to reduce the possibility that respondents
would be primed to think about the Uwezo assessment or that
they would think that the researchers and interviewers were
principally interested in education (and thus might be assumed
to support efforts to improve children’s learning outcomes).
We embedded questions on education in the latter part of
the survey, alongside questions about governance, health,
and water service provision.
(a) Outcomes
We were interested in outcomes of two sorts: actions that
parents take at home to improve their own children’s learning;
and actions that parents take in public arenas to improve
education provided in school. Both types of actions may have
positive impacts on one’s own children’s welfare, but the latter
is subject to free riding and hence may be perceived as more
costly relative to private benefits. Although the Uwezo initiative was designed principally to generate public citizen action
with potential externalities (i.e., the latter sort), private actions
to help one’s own family members are a critical complementary (or perhaps even substitutive) response that must be studied alongside the more “civic” reactions.
Thus, one logical way that parents may have responded to the
information they received about their children’s literacy and
numeracy was by increasing their efforts at home to help children with their schoolwork. We therefore asked parents
whether they helped their children with reading, writing, and
math. We also asked them whether they had read to their children in the past week, whether they had asked their children
about their teacher’s presence at school, and whether they had
ever considered switching their children to another school in order to improve the quality of the education they were receiving.
In addition to these specific questions, we also asked parents
for a subjective assessment of their level of involvement in
their children’s education. We first asked parents: “Overall,
how involved would you say that you are in trying to improve
the quality of your child’s education?” We then asked: “Has
this level of involvement changed during the past 3 months?”
(i.e., since the time of the Uwezo assessment).
In addition to taking action at home or becoming involved
in their own children’s education, parents may also undertake
more public activities in support of improved learning at their
children’s school. From a policy perspective, such activities are
particularly desirable, as they will likely have positive externalities that will benefit other children as well as one's own.
To measure activities of this type, we asked parents whether,
in the prior 3 months, they had discussed their child’s performance with their teacher, attended a parent-teacher meeting,
organized school activities for children, assisted teaching at
school, provided extra lessons outside school, provided teaching materials to school, helped with school maintenance, provided food or water to the school, or discussed learning quality
with their child’s teacher.
We also asked respondents a series of questions about their
participation in education-related groups and/or associations.
We measured such participation in three different ways. The
first indicator was a dichotomous measure recording whether
or not the individual had participated in any groups or associations dealing with education issues in the last 3 months.
The second was a more fine-grained measure based on the
number of meetings the individual had attended on the subject during this period, if they in fact had participated in such
a group. The third recorded whether or not the individual
had contributed any money in the last 3 months to the
group. To get a sense of whether the Uwezo assessment triggered broader collective action at the village level, we asked
respondents how often in the last 3 months members of the
village had jointly approached village officials or political
leaders, such as MPs and councilors, to improve their
schools.
Because we were interested in the possible spillover of the
impact of the Uwezo intervention beyond the educational
realm, we also asked questions that indicated the extent of citizen activism on behalf of improvements in the delivery of
public services more broadly. We therefore asked individuals
whether or not they had taken any of a series of actions to improve the delivery of education, health, or water services. We
also asked a series of questions about village-level collective
action over and above education.
In all, we studied a total of 14 outcomes, the full details of
which are provided in Appendix A.
(b) Balance of covariates
Our strategy for making meaningful comparisons between
treated and untreated households rested upon the assumption
that the respondents from our treatment villages were not
markedly different in aggregate from respondents in the control villages with which they were matched. While some confidence for this assertion comes from the balance in village
characteristics summarized in Table 1, further confidence
comes from a comparison across treated and untreated households of the mean values of the key covariates collected in the
household surveys (see Appendix A for a description of these
variables). As Table 3 indicates, the differences in the mean
values across the three treatment groups are statistically insignificant for all covariates (see especially column 4, which compares treated and untreated households). 14
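The comparisons in Table 3 are simple difference-of-means tests. The sketch below is our illustration only, not the study’s replication code; the simulated data merely mimic the approximate group sizes for one covariate.

# Illustrative sketch of one row of a covariate balance table: group means, the
# difference of means, its standard error, and a Welch t-test. Not replication code.
import numpy as np
from scipy import stats

def balance_row(x_group1, x_group2):
    x1, x2 = np.asarray(x_group1, float), np.asarray(x_group2, float)
    diff = x1.mean() - x2.mean()
    se = np.sqrt(x1.var(ddof=1) / len(x1) + x2.var(ddof=1) / len(x2))
    t_stat, p_value = stats.ttest_ind(x1, x2, equal_var=False)  # Welch t-test
    return {"mean_1": x1.mean(), "mean_2": x2.mean(),
            "difference": diff, "se": se, "p_value": p_value}

# Example with simulated ages for groups of roughly the study's sizes.
rng = np.random.default_rng(0)
print(balance_row(rng.normal(43.3, 15, 197), rng.normal(42.5, 15, 146)))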
5. FINDINGS
Do we observe different levels of involvement and citizen
activism among parents whose children received the Uwezo
assessment and the accompanying instruction materials and among those whose children did not? The answer is
an unambiguous no: no matter how we analyze the data, we
find no evidence of a substantively or statistically significant
average treatment effect on any of the outcomes we investigated. Figure 1 reports public actions taken by parents at
school or in the public sphere that have potential externalities;
Figure 2 reports private actions taken by parents at home for
the sole benefit of their own children. Although the threshold
for what constitutes a meaningful effect size is somewhat arbitrary, it is reasonable to think that an average effect of at least
0.5 standard deviations—equivalent to reporting taking one
additional action (out of nine possible actions) to help to improve one’s child’s school—is a defensible benchmark. This
threshold also strikes us as appropriate given the Uwezo’s initial ambitious goal of increasing primary school literacy and
numeracy by 10 percentage points. However, as the figures
make clear, none of the point estimates exceed even 0.2 standard deviations, and since the 95% confidence intervals include
zero in every case, we cannot reject the null hypothesis of no
effect for any of the aspects of citizen activism we measure.
Figures 1 and 2 only compare average responses across treated and untreated units, but our (non-) findings are robust to
an alternative regression specification in which we control for
a host of covariates that might have differed slightly across
these two populations. 15
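For readers interested in the mechanics behind Figures 1 and 2, the plotted quantity is a difference in group means expressed in standard-deviation units, with a 95% confidence interval. The following is a minimal sketch (our illustration, not the replication code; the example data are simulated):

# Illustrative sketch of a standardized average treatment effect with a 95% CI,
# in the spirit of Figures 1 and 2. Not the study's replication code.
import numpy as np

def standardized_ate(y_treated, y_control):
    y1, y0 = np.asarray(y_treated, float), np.asarray(y_control, float)
    pooled_sd = np.sqrt(((len(y1) - 1) * y1.var(ddof=1) + (len(y0) - 1) * y0.var(ddof=1))
                        / (len(y1) + len(y0) - 2))
    ate = (y1.mean() - y0.mean()) / pooled_sd  # effect size in standard deviations
    # Approximate standard error (treats the pooled SD as known).
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)) / pooled_sd
    return ate, (ate - 1.96 * se, ate + 1.96 * se)

# Example: an outcome with no true effect, for groups of the study's approximate sizes.
rng = np.random.default_rng(1)
effect, ci = standardized_ate(rng.normal(0, 1, 146), rng.normal(0, 1, 200))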
We also find no evidence for conditional effects. In further
analyses we estimated a series of models in which we interacted the highest grade achieved by the household head, the
literacy status of the household head, and the number of meals
that household members consume each day (a standard, if
rough, proxy for household income) with the treatment variable, and found no impact of these interaction terms on any
of our measures of private action or citizen activism. (All additional analyses are available upon request.)
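Mechanically, each of these conditional-effects models adds an interaction between the treatment indicator and the moderator of interest, with standard errors clustered by village (as in Table 4 below). A minimal sketch using statsmodels, with simulated data and hypothetical variable names:

# Illustrative sketch of a heterogeneous-treatment-effect model with village-clustered
# standard errors, in the spirit of Table 4. Simulated data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 350
df = pd.DataFrame({
    "village": rng.integers(0, 26, n),       # 26 villages used as clusters
    "treated": rng.integers(0, 2, n),        # treatment indicator
    "knows_actions": rng.integers(0, 2, n),  # moderator, e.g., knows what actions to take
})
df["actions_taken"] = rng.poisson(1.5, n)    # an outcome such as # actions taken

model = smf.ols("actions_taken ~ treated * knows_actions", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["village"]})
print(model.params["treated:knows_actions"])  # the interaction coefficient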
In addition to our household survey, we spent 2 months carrying out in-depth fieldwork in each study village, including
interviews with village elders and head teachers, and focus
groups with village elites. These studies were designed to help
us identify the mechanisms linking the informational treatment to changing attitudes and behaviors.
Table 3. Covariate Balance across Treatment Groups

Columns: (1) untreated households; (2) treated households; (3) untreated households in treated villages; (4) difference of means (1)–(2); (5) difference of means (1)–(3); (6) difference of means (3)–(2).

Covariate           (1)           (2)           (3)           (4)              (5)              (6)
Gender              0.41 (197)    0.33 (146)    0.37 (194)    0.077 (0.05)     0.040 (0.05)     0.037 (0.05)
Age                 43.3 (196)    42.5 (146)    40.28 (192)   0.832 (1.63)     3.010* (1.46)    -2.178 (1.61)
Highest grade       4.08 (193)    3.97 (145)    4.06 (192)    0.110 (0.20)     0.026 (0.18)     0.085 (0.19)
Mother's educ.      1.8 (181)     1.79 (134)    2.14 (186)    0.005 (0.18)     -0.344 (0.18)    0.349 (0.20)
Can write           0.76 (195)    0.76 (145)    0.74 (204)    0.000 (0.05)     0.019 (0.04)     -0.018 (0.05)
Reading materials   7.63 (196)    7.81 (146)    7.68 (199)    -0.176 (0.16)    -0.051 (0.13)    -0.125 (0.16)
Meals per day       2.73 (196)    2.66 (146)    2.71 (204)    0.077 (0.06)     0.024 (0.05)     0.053 (0.06)
Information         10.1 (191)    10.04 (145)   9.99 (201)    0.063 (0.11)     0.115 (0.10)     -0.051 (0.11)
Ethnic outsider     0.07 (195)    0.03 (145)    0.06 (204)    0.032 (0.02)     0.008 (0.02)     0.024 (0.02)
Social capital      2.24 (190)    2.21 (141)    2.23 (197)    0.033 (0.11)     0.007 (0.10)     0.026 (0.11)
Request index       23.83 (193)   24.37 (146)   23.85 (201)   -0.536 (0.29)    -0.017 (0.28)    -0.519 (0.28)
Pretest active      8.51 (192)    8.56 (143)    8.56 (202)    -0.054 (0.23)    -0.059 (0.17)    0.005 (0.21)

Mean values reported in columns 1–3, with sample size in parentheses. Difference of means reported in columns 4–6, with standard errors in parentheses.
* p < 0.05.
Figure 1. Average Treatment Effects: Public Actions. [Point estimates with 95% confidence intervals, in standard deviations, for: involved in improving education; # actions taken to improve school; participated in education groups; # meetings attended on education; contributed to education groups; villagers approached officials; villagers asked officials to improve government; # actions taken to improve outcomes.]
Figure 2. Average Treatment Effects: Private Actions. [Point estimates with 95% confidence intervals, in standard deviations, for: do you help with reading; did you read to child last week; do you help with writing; do you help with math; do you ask about teacher presence; if possible consider switching schools.]
In the absence of any detectable treatment effect at the household level, we used
these studies to confirm that there was also no effect at the village level. If there had been any new substantial discussions or
outbreaks of citizen activism in any of the treated villages, we
almost certainly would have learned about these outcomes
through this research, but this was not the case.
What about spillover effects? Thus far, we have focused on
the differences in citizen activism between members of households that received the Uwezo assessment/instruction materials and members of households in control villages that were
not part of the Uwezo assessment. However, our research design also makes it possible to identify spillover of the treatment effect from treated households to nontreated
households located in the treated villages. 16 This involves
comparing the 200 control households with the 204 households that were located in treatment villages but did not receive the assessment or instruction materials. If there is
spillover, we should see a difference in average outcomes
across these two categories of households (although given
the lack of a treatment effect among households that actually
received the treatment, a finding of a treatment effect here
would have been quite surprising). We can also compare outcomes among the 146 households that received the treatment
and the 204 households that were located in the same villages
but did not. There is, again, no evidence of an average treatment effect in any of the models, and thus no evidence of spillover.
6. UNDERSTANDING THE LACK OF TREATMENT
EFFECT
What, then, explains the lack of a treatment effect? As
noted, the project we were evaluating was a high-profile intervention that embodied the new wisdom of how to improve the
welfare of citizens of developing countries. It is thus important
to try to figure out why we find no effects of the treatments. Of
course, with hindsight, it might appear “obvious” that a particular treatment might not have an effect on desired outcomes, but we seek to contribute to the development of
future interventions by providing a more systematic consideration of the potential limitations of the intervention we studied
(and also our own study of its impact).
First, it may be that our analysis is simply underpowered. A
sample of 146 treated and 200 control units should be sufficient to identify treatment effects of 0.5 standard deviations given an intensive treatment such as the multi-pronged Uwezo
intervention. But we cannot rule out the possibility that a larger sample of households might have made it possible to detect
substantively smaller effect sizes, and hence altered our conclusions.
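A back-of-the-envelope calculation supports this judgment about power. Under a normal approximation with a two-sided test at alpha = 0.05 and outcomes measured in standard-deviation units, the standard error of the treated–control difference is roughly the square root of (1/146 + 1/200), about 0.11, which implies near-certain detection of a 0.5 standard deviation effect and a minimum detectable effect of roughly 0.3 standard deviations at 80% power. The arithmetic is sketched below (our illustration only):

# Back-of-the-envelope power check for the comparison of 146 treated and 200 control
# households (normal approximation, two-sided alpha = 0.05); illustrative only.
from scipy.stats import norm

n1, n0, alpha = 146, 200, 0.05
se = (1 / n1 + 1 / n0) ** 0.5          # SE of a difference in means, in SD units (~0.109)

power_at_half_sd = norm.cdf(0.5 / se - norm.ppf(1 - alpha / 2))      # roughly 0.99
mde_at_80_power = (norm.ppf(1 - alpha / 2) + norm.ppf(0.80)) * se    # roughly 0.31 SD

print(round(power_at_half_sd, 3), round(mde_at_80_power, 3))

This is consistent with the caveat above: only effects substantially smaller than our 0.5 standard deviation benchmark could plausibly have gone undetected.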
A second possible explanation is that the number of treated households in each treated village was too small—that
is, that the treatment itself was insufficiently powerful. To
the extent that real behavioral change requires collective
action, a critical mass of citizens must be mobilized to act.
It may be that the number of treated households in each
village (an average of just 12 out of between 50 and 1,400
households in the village as a whole) was too few to generate
this critical mass.
A third possibility is that inadequate time had elapsed between the assessment and our measurement of its impact. Real
behavioral change may require reflection, discussions with
other community members, and a rearrangement of commitments and prior obligations to make room for new activities
and behaviors. Three months may simply have been too short
an interval for these processes to work themselves through. Of
course, it is also possible that the impact of the intervention
was extremely short-lived, in which case 3 months may have
been too long an interval.
However, given our “clean” matched-village design, along
with the confirmation provided by our subsequent qualitative
fieldwork, we do not believe that we have missed a causal effect
that actually exists. In light of the enthusiasm for such interventions among scholars and practitioners, our null finding
thus demands further explanation.
We propose that a potentially important explanation for the
absence of a strong treatment effect was the likely widespread
absence of key conditions that logically ought to be present for
an informational intervention to have a plausible chance of
generating citizen activism. In Figure 3, we offer a systematic
framework for articulating these conditions. The framework
can be thought of as responding to the question: What must
be true for us to expect the provision of information to citizens
to have a reasonable probability of causing a change in their
behavior? Although we present it as a flow chart, the framework can be equally—and perhaps more usefully—understood
simply as an enumeration of key constraints that might, alone
or in combination, dilute the impact of an informational intervention.

Figure 3. When Might Information Generate Citizen Action? [Flow chart of the conditions discussed below.]

The framework can be used in two ways. Here, we use it to structure an investigation into the possible explanations for our null findings. However, the framework can also be used for broader purposes to design more successful informational interventions and, more fundamentally, to better understand how, why, and under what conditions information might affect
behavior—a contribution for which development scholars and
practitioners have recently been calling (Booth, 2011; Foresti,
Guijt, & Sharma, 2007; Joshi, 2013; McGee &
Gaventa, 2010; O’Meally, 2013). Indeed, despite the growing
awareness that pre-existing conditions matter for understanding whether informational interventions are likely to affect citizen action, there has been little progress on identifying exactly
which factors actually matter (Joshi, 2013; O’Meally, 2013).
The factors we identify in Figure 3 fall into two categories.
The first category focuses on the relationship of individuals to
the specific content of the information provided by the treatment: Is the informational treatment easily understood? Does
it provide new information that causes people to update their
beliefs about the quality of service delivery? The second category focuses on the attitudes and beliefs of individuals about
their political environment more generally: Do people prioritize the given issue (in our case, education)? Do they feel
responsible for doing something about it? Do they have the
knowledge and skills to take action? Do they feel their actions
can have an impact? The key insight of the framework is that
if, after the provision of the information, the answer to any of
these questions is no—either due to the pre-existing characteristics of the people receiving the treatment or due to the contents of the treatment itself—then the intervention is less likely
to have a significant effect on their behavior.
For each of the steps in Figure 3, we draw upon data from
our study population to assess whether the answer to the question is likely to have been “yes” or “no” in the context we
study. Where possible, we also evaluate whether the characteristic at issue is in fact correlated with differential treatment
outcomes. Ideally, we would also evaluate whether treatment
effects depend on the joint presence of all nine conditions listed
in Figure 3, but our limited sample size makes this impossible.
First, we note that the people at whom the information is directed must understand its content. As Fox (2007) has argued,
if the information is provided in a form that is “opaque”—not
understandable or not actionable—then it is unlikely to result
in behavioral change. We are unable to assess this condition
with our survey data since we did not administer a test of comprehension, but our extensive qualitative research suggests
that parents did seem to have a relatively good grasp of the
assessment and the ideas for action. We therefore do not believe that a lack of understanding was the source of our null
findings.
Assuming that parents understood the information they received, the information should also be new for it to increase
the probability of behavioral change. “Newness” may not be
absolutely necessary. Mere repetition (Allport & Lepkin,
1945; Schwarz, Sanna, Skurnik, & Yoon, 2007) or receiving
the information from a particularly trustworthy source (Malhotra, Michelson, & Valenzuela, 2012) may increase its impact, even if the information has already been received
previously. In addition, if people receiving the information
know that others are receiving it as well, it may play a coordinating role that is independent of its novelty. This said,
whether the receipt of information leads to changes in behavior is plausibly much more powerfully related to whether it
causes people to update their prior beliefs.
For our study population, this condition was largely unmet.
In our household survey, we asked parents of assessed children
whether their children’s test scores were higher or lower than
they expected them to be. Among those who could remember
the results, fully 60% reported that their child’s scores were
about the same as they expected. 17 Parents also seemed fairly
well informed about the performance of their children’s
schools more generally. All Kenyan schoolchildren take the
Kenya Certificate of Primary Education (KCPE) examination
upon completion of primary school, and all schools are ranked
by their students’ performance on this test. More than 72% reported that they knew the KCPE rank quartile of their child’s
school. 18 It therefore seems unlikely that the Uwezo intervention was providing most parents with entirely new information
about their children’s school performance. 19
Closely linked to the question of whether the information
causes people to update their priors is the question of the
direction in which those priors are updated. Most informational interventions in the service delivery sector are designed
to cause people to believe that services are worse than they had
thought—that is, they are designed to provide “bad news.”
This is because it is usually assumed that bad news is what galvanizes people and motivates them to take action. 20
It turns out, however, that for many of the households in
our study, the Uwezo assessment may have inadvertently given parents the impression that their children were learning
at a level beyond their actual capabilities. The assessment
tested literacy and numeracy for all children 6–16 years old
but only at a Class 2 level, which is typically attended by children 6–7 years old. Since the majority of the assessed children
were well beyond Class 2, most received passing scores on the
assessment—despite the fact that many were almost certainly
performing below grade level and would have received failing
scores had the Uwezo tests been grade appropriate. As a consequence, many parents received (erroneous) positive information about their children’s performance. In our study villages,
61% of children passed the English assessment, 62% passed the
Swahili assessment, and 62% passed the numeracy assessment.
More than half of the children—54%—received passing scores
on all three tests. Not surprisingly, passage rates were especially high among older pupils. As shown in Figure 4, more
than three quarters of assessed children above the age of 12 received passing grades on the literacy and numeracy assessments.
Consistent with the failure of the assessment to provide bad
news, large numbers of parents reported high levels of satisfaction with the quality of their children’s learning. More than
half of parents in untreated households reported being very
satisfied with the quality of English teaching at their child’s
school and more than 85% reported being at least somewhat
satisfied. These percentages are nearly identical among parents in treated households, which suggests that the treatment may have generated little motivation for the vast majority of parents to expend energy on improving their children's learning.

Figure 4. Percent of Children Receiving Passing Scores on Uwezo Assessments, by Age.
But were parents who received “bad” news more likely to
take action? We are hamstrung in answering this question
by the fact that parents in control households received no news
about their children’s performance. Hence we cannot estimate
treatment effects by conditioning on whether parents in treated
households received “bad” news. What we can do, however, is
look at the treatment effect only among parents of children under 10 years old, who failed the literacy and numeracy assessments at rates of greater than 50%. This is tantamount to
limiting the sample to households where, with reasonable
probability, parents received (or, in the case of parents in control households, we can infer would have received) bad news
about their children’s performance on the assessment. 21 To
the extent that the intended treatment of the Uwezo campaign
was not just the provision of information but the relaying of
“bad” news about one’s children’s learning achievements,
the analysis of this limited sample can be thought of as an
analysis of the “treatment-on-the-treated” rather than (as in
the results presented earlier) an analysis of the “intent-to-treat.”
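Mechanically, this restricted comparison simply re-computes the treated–control difference on the subset of households whose focal child is under 10. A minimal sketch follows (our illustration; the data frame and variable names are hypothetical, and the example data are simulated):

# Illustrative sketch of the restricted comparison described above: the treated-control
# difference in an outcome, computed only for households whose focal child is under 10.
# The data frame and variable names are hypothetical; the example data are simulated.
import numpy as np
import pandas as pd

def under10_difference(df: pd.DataFrame, outcome: str) -> float:
    sub = df[df["child_age"] < 10]
    return (sub.loc[sub["treated"] == 1, outcome].mean()
            - sub.loc[sub["treated"] == 0, outcome].mean())

rng = np.random.default_rng(3)
df = pd.DataFrame({"child_age": rng.integers(6, 17, 400),
                   "treated": rng.integers(0, 2, 400),
                   "helped_with_reading": rng.integers(0, 2, 400)})
print(under10_difference(df, "helped_with_reading"))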
We find no treatment effect on any of our 14 outcomes in
this analysis (results available on request). While not definitive, these findings cast doubt on the hypothesis that the lack
of a treatment effect in our study was due primarily or exclusively to the improper calibration of the Uwezo test. Although
most parents in the intervention did not receive “bad news,”
there is no strong evidence that bad news led parents to increase dramatically the priority given to education as an issue
for action.
Next, we move to assessing the attitudes and beliefs of individuals about their political priorities and the political environment. Even if the information is new, action to effect
change is only likely if the recipients of the information prioritize the issue in question. Citizens in developing countries face
problems of many kinds. Information about a particular problem, no matter how compelling, may not inspire action if people rank its importance below that of other concerns. Citizens
also may not be aware of the potential benefits of action, and
this may reduce the extent to which they care about the outcome in question. 22
Data from our survey suggested that many parents do value
education, but not significantly more than other public goods
such as health care and drinking water infrastructure. In a
supplementary survey conducted in one of our two research
districts (Kirinyaga), but in villages that had not received the
Uwezo assessment, we asked 261 parents what they would
do if they were given 1,000 shillings to spend on improving
the local health clinic, school, or village well. Most tended to
split the contributions relatively evenly. On average, respondents allocated 380 shillings for education, 343 for health,
and 272 for water improvement. Forty-three percent of
respondents allocated the most money to education.
Assuming the information is well understood, is relatively
new, and speaks to an issue that parents prioritize, then the
next question is whether parents think it is their responsibility
to do something about the problem. Feeling a sense of responsibility for an issue (or, more generally, a sense of civic duty) is
especially important when one individual’s action is unlikely
to have an effect by itself. 23 In many cases, citizens in poor
countries believe that monitoring the government and applying pressure to improve the quality of public services is the
responsibility of local leaders, NGOs, professional inspectors,
journalists or other individuals, but not citizens themselves.
Table 4. Heterogeneous Treatment Effects (Treated vs. Untreated Households)

Outcomes (columns): (1) helped children with school; (2) involvement in improving education; (3) # actions taken to improve school; (4) participated in education groups; (5) # meetings attended on education; (6) contributed to education groups; (7) villagers approached officials; (8) # actions to improve outcomes; (9) villagers asked officials to improve governance.

Row 1. Respondent knows, or could figure out, what specific actions to take to improve child's school:
  (1) 0.012 (0.23); (2) 0.245 (0.21); (3) 0.478 (0.83); (4) 0.048 (0.15); (5) 0.091 (0.41); (6) 0.119 (0.19); (7) 0.143 (0.20); (8) 0.172 (0.76); (9) 0.237 (0.23)
Row 2. Respondent believes that someone like myself can have a lot or some influence over local government decisions that affect my village:
  (1) 0.153 (0.11); (2) 0.151 (0.14); (3) 0.329 (0.33); (4) 0.227 (0.11); (5) 0.622 (0.33); (6) 0.271 (0.16); (7) 0.208 (0.13); (8) 0.392 (0.71); (9) 0.161 (0.17)
Row 3. Respondent believes that people like myself can have a lot or some influence in making this village a better place to live:
  (1) 0.026 (0.22); (2) 0.144 (0.17); (3) 0.022 (0.30); (4) 0.120 (0.12); (5) 0.565 (0.33); (6) 0.197 (0.15); (7) 0.447* (0.17); (8) 0.290 (0.80); (9) 0.207 (0.13)
Row 4. Respondent believes that people who speak out or complain about corruption at the school or clinic are not very likely to be punished:
  (1) 0.409* (0.17); (2) 0.208 (0.16); (3) 0.556 (0.49); (4) 0.104 (0.15); (5) 0.463 (0.32); (6) 0.025 (0.19); (7) 0.137 (0.15); (8) 0.423 (0.89); (9) 0.161 (0.17)
Row 5. Share of the hypothetical KSh 10,000 in relief payments that respondent believes a person in their village would actually receive:
  (1) 0.000 (0.00); (2) 0.000 (0.00); (3) 0.000 (0.00); (4) 0.000 (0.00); (5) 0.000 (0.00); (6) 0.000 (0.00); (7) 0.000 (0.00); (8) 0.000 (0.00); (9) 0.000 (0.00)
Row 6. Respondent lives in a community with above average levels of social capital:
  (1) 0.136 (0.21); (2) 0.132 (0.17); (3) 0.657 (0.48); (4) 0.002 (0.11); (5) 0.030 (0.31); (6) 0.265 (0.13); (7) 0.059 (0.20); (8) 0.724 (0.62); (9) 0.086 (0.16)

Note: Reported coefficient is the interaction term between the treatment dummy and the variable listed at left. Observations vary in each cell. Robust standard errors, clustered by village, are in parentheses. All specifications include the following controls: Gender, Age, Highest grade attained, Mother's education, Can write, Reading mat'ls, Meals per day, Information, Ethnic outsider, Social capital, Request index, Pretest active; the treatment dummy, and the variable of interest.
* p < 0.05.
To the extent that this is the case, the recognition that something needs to be done may not produce behavior by citizens to do something about it. In such an environment, we should not expect the provision of information about service delivery failures to generate citizen action.
There is evidence that this consideration may have been relevant in our study. Only six percent of our respondents reported that parents were most responsible for making sure
that teachers come to school and teach the children, while
83% thought the school or headmaster was most responsible
for doing so.
Next, if all these criteria are satisfied, the would-be actors
must have useful knowledge about the concrete actions they
could take. They need to know whom to contact, how the
political and educational systems work, and where they can
most effectively apply pressure for improvements (Tarrow,
1998). If they do not have ideas for concrete actions and some
knowledge of the public decision-making process, then they
may take actions that have no impact or, anticipating their
inability to effect meaningful change, may not take any actions
at all. This roadblock appears to have been salient
in our study population: When asked whether they knew what
action to take when addressing problems with their child’s
school, the vast majority of parents in our study—72%—said
they would not know, or would not know how to figure out,
what specific actions to take.
To test whether the low level of political knowledge and
sophistication among members of our study population was
responsible for the lack of impact of the informational intervention (independently of the presence or absence of other
conditions), we again looked for heterogeneous treatment effects. However, as we report in row 1 of Table 4, we find that
even when we limit the sample to parents who do report knowing what actions to take, parents who received the informational treatment were no more likely than parents who did
not to become actively engaged in their children’s learning.
Again, this result suggests either that the source of our null finding lies elsewhere or that a lack of political sophistication contributes to the null impact only jointly with the absence of other key conditions.
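To make the heterogeneity test behind Table 4 concrete, the following is a minimal sketch of how an interaction specification with village-clustered standard errors could be estimated. The input file, variable names, trimmed control set, and statsmodels workflow are illustrative assumptions, not the authors' actual estimation code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household-level data: 'treatment' is the information-treatment dummy,
# 'knows_actions' flags parents who report knowing what actions to take, and
# 'actions_school' counts actions taken to improve the school (names are illustrative).
df = pd.read_csv("households.csv")

# Interaction specification: the coefficient on treatment:knows_actions corresponds
# to the kind of quantity reported in each cell of Table 4 (controls trimmed here).
model = smf.ols(
    "actions_school ~ treatment * knows_actions + gender + age + highest_grade",
    data=df,
)

# Robust standard errors clustered by village, as described in the table notes.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["village"]})
print(result.params["treatment:knows_actions"], result.bse["treatment:knows_actions"])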
Next, individuals must possess the skills to take these actions. As Brady et al. (1995) have highlighted in their resource
model of political participation, citizens need to be able to
voice their concerns, either verbally or in writing. They need
to be able to organize themselves and others to take action.
These skills are typically learned through schooling and
through participation in nonagricultural employment and civic and religious organizations. In our sample, these skills appear to have been lacking. Only 17% of our respondents had
experience contacting an official, and only 21% had written a
letter as part of a community group. Only about a quarter
had planned a meeting, although a somewhat larger share (41%) claimed to have given a speech in a community group.
In addition to knowledge about actions to take and the skills
for these actions, the next consideration is whether citizens
have a subjective sense of efficacy and think that their efforts
will have an impact. Even if they know who to contact, when
and where to hold the meeting, and which buttons to push,
they may still believe that their appeals will fall on deaf ears,
that their pressure will generate no change, or that their efforts
will, in the end, do nothing to change the status quo. 24 If this
is the case, then skills and knowledge alone will not be enough
to lead to citizen action.
Parents in our sample displayed a reasonable level of internal efficacy or confidence in their ability to affect their environment: 65% said that people like them had “a lot” (30%) or
“some” (35%) influence in making their village a better place
to live. Forty-seven percent said that people like them had
“a lot” (16%) or “some” (31%) influence over local government decisions that affect their village.
Parents, however, expressed significant reservations about
their external efficacy (i.e., the government’s trustworthiness
and capacity to respond to their demands). For example,
39% of parents thought that it was very likely that they would
be punished if they spoke out or complained about corruption
at the village school or clinic, and another 28% thought punishment was somewhat likely. When asked a question about
whether a hypothetical businessman who supported the opposition would receive equal treatment in his application for a
business license from the local government, only 25% answered in the affirmative; the other 75% thought he would face
at least some problems.
Respondents also estimated the general level of corruption in
governments to be very high, which we interpret as a likely barrier to citizen efficacy. When asked how much money people
would actually receive if the government initiated a program
that was supposed to provide each Kenyan with 10,000 shillings in drought relief payments, 85% thought that the average
person would receive half or less than half of the payment. On
average, people thought that only 2,678 out of 10,000 shillings
allocated by the government would make it into their hands—
in other words, they anticipated that almost three-quarters of
the allocation would be siphoned off by corrupt officials.
The results from our surveys are echoed in the 2008 Afrobarometer data. On a question that asked respondents how much
they thought an ordinary person could do to improve problems
with how local government is run in their community, 72% said
“nothing” or “a small amount.” When asked how likely it was
that people would be punished by government officials if they made complaints about poor-quality services or the misuse of funds,
34% said “somewhat” or “very likely.” When respondents were
asked how easy or difficult they thought it was for an ordinary
person to have his/her voice heard between elections, 53% said
“very difficult” and another 24% said “somewhat difficult.”
There was some evidence to suggest that higher levels of efficacy were associated with greater citizen activism. Respondents
who indicated confidence in their ability to shape outcomes in
their village and to influence the local government were more
likely to take action. Expectations of punishment were negatively associated with willingness to speak out. People who thought they
would receive a greater percentage of the relief payments were
also more likely to have taken actions to improve the school
and to report that people in the village had asked officials to improve governmental performance in these areas.
But is greater efficacy associated with differential response
rates to the information treatment? The results reported in
Table 4 (rows 2–5) provide little evidence to support such a
conclusion. If a lack of efficacy among our subjects does, independently and exclusively, lie behind the absence of impact of the Uwezo intervention, then our measures and estimation procedures were insufficient to capture this channel.
Finally, for citizens to act on behalf of change, they must believe either that their own individual actions can make a difference or, if they think that generating real change will require
collective action, that others in the community will act with
them. This is a key insight in Barr, Mugisha, Serneels, and
Zeitlin (2012), which shows that information alone has a
weaker impact on the success of community-based monitoring
of schools in Uganda than information plus engagement in a
dialog with other members of the school monitoring committee, which the authors suggest aids community members in
overcoming collective action problems. 25
To the extent that collective action is a real constraint, we
might see stronger treatment effects on private actions to improve learning, like reading to one’s own children or helping
them with their math and writing, and weaker effects on participation in group activities, like going to a meeting. As we
saw in Figures 1 and 2, however, there is no evidence for this
pattern: the informational intervention generated no more private actions than public ones. To the extent that collective action problems are binding, we might also expect to find
stronger treatment effects in places where respondents say they
live in communities where there is higher “social capital.” 26
As the results reported in Table 4, row 6 make clear, however,
respondents in such communities are no more likely to have
responded to the Uwezo intervention than respondents in
communities with lower levels of social capital.
We are left, once again, with a plausible explanation for the
lack of treatment effect but little evidence in our data that it
independently and exclusively drives the null impact of the
intervention. It could be that we simply lack the appropriate
measures or statistical power to estimate these effects properly.
Or it could be that several of the factors we investigated are
complicit, but that each alone is insufficient to generate effects
that are large enough to be detected in our analyses. Whatever
the reason, the framework we applied to guide our inquiry
strikes us as the right way to go about understanding the possible source of our null findings.
The framework also has utility beyond the uses to which we
put it here. By identifying the factors that mediate the effects
of an informational intervention on behavioral change, the
framework provides a template for investigating why informational interventions frequently fail to generate the behaviors
they are hypothesized to produce. The framework can also
be used to design and to target informational interventions
that have a higher likelihood of success. And it can be used
as a structure for thinking more deeply and theoretically about
the relationship between information and citizen action.
7. CONCLUSIONS
A large and growing number of interventions in developing
countries are built around the assumption that information deficits are responsible for poor governance and service delivery failures. In recent years, development agencies have spent hundreds
of millions of dollars underwriting projects designed to improve
welfare by filling this purported information deficit. In this paper,
we evaluate the impact of a prominent example of such an intervention and, consistent with the findings of many similar studies,
we find no substantial impact on any of a range of outcomes associated with public or private citizen activism.
Our findings lead us to be skeptical of the transformative effects of supplying citizens in poor countries with information
about the quality of service provision. As the framework we develop makes clear, many factors are likely to mediate—and
frankly, to dissipate—the effects of even very well-designed
and well-implemented informational interventions on citizen
attitudes and behaviors. Both efforts to promote citizen action
through the provision of information and attempts to understand the failures of previous informational interventions would
benefit from a more explicit engagement with this framework.
NOTES
1. Examples include Bjorkman and Svensson (2009), Banerjee, Banerji,
Duflo, Glennerster, and Khemani (2010), Pandey, Goyal, and Sundararaman (2009), Humphreys and Weinstein (2012), and Banerjee, Kumar,
Pande, and Su (2011).
2. Examples include Reinikka and Svensson (2005) and Chong, De La O,
Karlan, and Wantchekon (2012).
3. For reviews, see McGee and Gaventa (2010) and Pande (2011).
4. Assessment scores provide imperfect information about school
quality, inasmuch as they reflect both a child’s own aptitudes and the
quality of the schooling they have received. Nonetheless, parents who
were informed that their children failed the assessments should, on
average, have had lower opinions of the quality of their children’s
schooling, and hence be more motivated to take actions to improve it,
than parents whose children were not assessed.
5. Uwezo is a multi-year project involving annual assessments of
children’s learning for five years, beginning in 2010. The initiative is a
sub-project of Twaweza, a non-governmental organization based in Dar es
Salaam, Tanzania. For a fuller discussion of the theory of change
underlying the Twaweza project, see Twaweza (2011, chap. 2).
6. Throughout the paper, when we refer to “villages” we mean village or
urban areas, as the latter were also included in Uwezo’s random sample.
Our own evaluation, however, was limited to rural districts and included
no urban locations.
7. These were in the form of questions: Do you read to your kids? Do
you tell them stories? Do you ask your children to read to you? Do your
children see you reading?
8. To select these households, an Uwezo volunteer worked with the
village elder to draw up a list of all the households in the village. The
volunteer then divided the total number of households by 20 to
generate a value n, and selected every nth household on the list. Five
additional households were selected as alternates using a similar method.
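As a rough illustration of the systematic sampling this note describes, the sketch below selects 20 households plus five alternates from a village list by taking every nth household. The list, helper name, starting point, and handling of remainders are hypothetical assumptions; the actual Uwezo protocol may differ in those details.

# Illustrative systematic sampling following the every-nth-household logic in this note.
def systematic_sample(households, n_main=20, n_alternates=5):
    step = max(len(households) // n_main, 1)       # n = total households divided by 20
    main = households[::step][:n_main]             # every nth household on the list
    remaining = [h for h in households if h not in main]
    alternates = remaining[::step][:n_alternates]  # alternates chosen by a similar method
    return main, alternates

village_list = [f"household_{i:03d}" for i in range(1, 138)]  # hypothetical list of 137 households
sampled, alternates = systematic_sample(village_list)
print(len(sampled), len(alternates))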
9. In 2011, Kenya adopted a new system of devolved government in which
counties replaced districts as the key units of sub-national administration
below the province level. However, for ease of exposition, we refer to these
units as districts. Also, Rongo district became part of a broader Migori
county, so adopting the “county” designation would lead to ambiguity as to
the boundaries of our research site, which corresponds to the boundaries of
the old Rongo district. Kirinyaga district became Kirinyaga county, so there
would be no loss of precision if we adopted the new label.
10. Based on 2009 census data.
11. In a supplementary study of villages and households that received
only the assessment treatment, the substantive (null) findings were largely
the same as those described below (results available upon request).
12. Uwezo’s sampling protocol calls for the progressive expansion of
assessed districts in each assessment round to ensure that each round (after
the first) will contain a combination of districts that had previously
received the assessment/information materials and those that had not.
13. We contacted a total of 732 households and completed 550 interviews, for an overall response rate of 75.1%. The vast majority of non-completed interviews were in households that were disqualified from our protocol because the household did not contain a child aged 6–16 years. Among households that did contain children in this age range, our overall response rate was 91.2%. This rate was slightly higher in Uwezo villages, which had been previously contacted during the Uwezo assessment, than in control villages (92.9% vs. 88.4%).
14. Note that, since all covariates were measured post-treatment,
questions regarding behaviors that might have been affected by treatment
were phrased to refer to the period “before the start of this year’s rainy
season” (i.e., before the assessment).
15. We can be fairly certain that the lack of a treatment effect is not due
to a poorly implemented intervention. Fully 84% of treated
households had at least one of the instruction materials visible in their
home at the time of our enumerators’ follow-up visit 3 months after the
intervention, and about half of the households had three or more (out of
six) instruction materials visible.
16. Because control villages were selected so that they did not border
villages in the Uwezo assessment, and since our household survey work
was conducted prior to the launch of Uwezo’s dissemination campaign,
spillover to households in the control villages should be minimal.
17. Only about half of the parents in our survey could recall how their
children had performed on the assessment, suggesting either that they did
not grasp the import of the information or that they felt it was not worth
remembering—perhaps because it confirmed their prior beliefs.
18. We did not, however, assess the accuracy of their knowledge of their
child’s school’s KCPE ranking. Here, and in all subsequent analyses in this
section, we report averages for the control group only.
19. Were the parents who received “new” information more likely to take
action? In answering this question we are handicapped by the fact that
while our data permit us to sort parents in treated households into those
who received “new” information and those who did not, we cannot
similarly sort parents in the control households, who did not receive
information of any kind (the receipt of the information being the
treatment). This means that we cannot estimate the treatment effect on
subjects who received “new” information. Nor can we meaningfully
compare levels of activism among treated parents that did and did not
receive “new” information, since “new” information was not randomized
within the treatment villages. Unobservable background characteristics
that explain the gap between the parents’ expectations and the child’s
actual performance on the test (which determines whether the information
was “new”) may also be correlated with the parents’ levels of activism. We
are thus left with a plausible candidate explanation for our null findings
that, unfortunately, we cannot test.
20. This assumption may not, however, be correct. It is at least possible
that an equally powerful way of motivating behavior is by providing
information that conditions are better than people had imagined. This
might occur if, for example, “good news” caused people to have a greater
sense of pride in the outcome, made them feel like part of a “winning
team,” increased feelings of efficacy, or triggered an aversion to seeing
service delivery decline.
21. The key, and we believe plausible, assumption here is that children in
control households would have failed the test at the same rate as children
in the treatment households.
22. Jensen’s (2010) finding that providing students with information
about the returns to schooling has a dramatic positive impact on the
number of years of schooling they ultimately attain is consistent with this
argument if we interpret the response as being due to an increase in the
extent to which students care about schooling.
23. Downs (1957), Riker and Ordeshook (1968), Blais and Achen (2010),
Blais (2000), and Campbell (1960, p. 156).
24. Researchers distinguish between two types of efficacy: internal and
external. Internal efficacy refers to an individual’s confidence in their
ability to understand and participate in the world around them. External
efficacy refers to an individual’s belief that the government will be
responsive to their demands. For reasons of data availability, we focus
here mainly on external efficacy.
25. Bjorkman and Svensson’s (2010) finding that community-based monitoring of health facilities is more effective in ethnically homogeneous districts is consistent with this argument, insofar as citizens in ethnically homogeneous districts are better able to overcome collective action problems.
26. Our measure of social capital is an index, as described in Appendix A.
REFERENCES
Afrobarometer Data, Kenya, Round 4 (2008). Available from http://
www.afrobarometer.org.
Allport, F. H., & Lepkin, M. (1945). Wartime rumors of waste and special
privilege: why some people believe them. The Journal of Abnormal and
Social Psychology, 40(1), 3.
Andrabi, T., Das, J., & Khwaja, A. I. (2009). Report cards: The impact of
providing school and child test-scores on educational markets. Unpublished paper.
Banerjee, A., Banerji, R., Duflo, E., Glennerster, R., & Khemani, S.
(2010). Pitfalls of participatory programs: Evidence from a randomized
evaluation in education in India. American Economic Journal:
Economic Policy, 2(1), 1–30.
Banerjee, A., Kumar, S., Pande, R., & Su, F. (2011). Do informed voters
make better choices? Experimental evidence from India. Working paper.
Barr, A., Mugisha, F., Serneels, P., & Zeitlin, A. (2012). Information and
collective action in community-based monitoring of schools: Field and lab
experimental evidence from Uganda. Unpublished paper.
Besley, T. (2006). Principled agents? The political economy of good
government. Oxford University Press.
Besley, T., & Burgess, R. (2002). The political economy of government
responsiveness: Theory and evidence from India. Quarterly Journal of
Economics, 117(4), 1415–1451.
Bjorkman, M., & Svensson, J. (2009). Power to the people: Evidence from
a randomized field experiment on community-based monitoring in
Uganda. Quarterly Journal of Economics, 124(2), 735–769.
Bjorkman, M., & Svensson, J. (2010). When is community-based
monitoring effective? Evidence from a randomized experiment in
primary health in Uganda. Journal of the European Economic
Association, 8(2–3), 571–581.
Bjorkman Nyqvist, M., de Walque, D., & Svensson, J. (2013). Information is power: Experimental evidence of the long-run impact of community-based monitoring. Unpublished paper.
Blais, A. (2000). To vote or not to vote?: The merits and limits of rational
choice theory. University of Pittsburgh Press.
Blais, A., & Achen, C. H. (2010). Taking civic duty seriously: Political
theory and voter turnout. Unpublished manuscript.
Booth, D. (2011). Aid, institutions and governance: What have we
learned?. Development Policy Review, 29(s1), s5–s26.
Brady, H. E., Verba, S., & Schlozman, K. L. (1995). Beyond SES: A
resource model of political participation. The American Political
Science Review, 89(2), 271–294.
Bruns, B., Filmer, D., & Patrinos, H. A. (2011). Making schools work: New
evidence on accountability reforms. Washington, D.C.: World Bank
Publications.
Campbell, A. (1960). The American voter. University of Chicago Press.
Chong, A., De La O, A., Karlan, D., & Wantchekon, L. (2012). Looking
beyond the incumbent: The effects of exposing corruption on electoral
outcomes. NBER working paper 17679.
De Figueiredo, M., Daniel Hidalgo, F., & Kasahara, Y. (2011). When do
voters punish corrupt politicians? Experimental evidence from Brazil.
Unpublished paper.
Downs, A. (1957). An economic theory of political action in a democracy.
The Journal of Political Economy, 135–150.
Fearon, J. (1999). Electoral accountability and the control of politicians:
Selecting good types versus sanctioning poor performance. In B.
Manin, A. Przeworski, & S. Stokes (Eds.), Democracy, accountability,
and representation. Cambridge: Cambridge University Press.
Ferejohn, J. (1986). Incumbent performance and electoral control. Public
Choice, 50, 5–25.
Foresti, M., Guijt, I., & Sharma, B. (2007). Evaluation of citizens’ voice and accountability evaluation framework: Methodological guidance for country case studies. London: ODI.
Fox, J. (2007). The uncertain relationship between transparency and
accountability. Development in Practice, 17(4–5), 663–671.
Gerber, A., & Green, D. (2000). The effects of canvassing, telephone calls
and direct mail on voter turnout: A field experiment. American
Political Science Review, 94(3), 653–663.
Heller, P. (2001). Moving the state: The politics of democratic decentralization in Kerala, South Africa, and Porto Alegre. Politics & Society,
29(1), 131–163.
Hornsby, C. (2012). Kenya: A history since independence. London: I.B. Tauris.
Humphreys, M., & Weinstein, J. (2012). Policing politicians: Citizen
empowerment and political accountability in Uganda. Unpublished paper.
Jensen, R. (2010). The (perceived) returns to education and the demand
for schooling. Quarterly Journal of Economics, 125(2), 515–548.
Joshi, A. (2013). Context matters: A causal chain approach to unpacking
social accountability interventions. Work in progress paper produced by the IDS-DLGN framework for the Face to Face Meeting, Aswan, Egypt. Institute of Development Studies.
Joshi, A. (2013). Do they work? Assessing the impact of transparency and
accountability initiatives in service delivery. Development Policy Review.
Keefer, P., & Khemani, S. (2011). Mass media and public services: The
effects of radio access on public education in Benin. World Bank policy
research working paper 5559 [February].
Malhotra, N., Michelson, M. R., & Valenzuela, A. A. (2012). Emails from
official sources can increase turnout. Quarterly Journal of Political
Science, 7(3), 321–332.
McGee, R., & Gaventa, J. (2010). Synthesis report: Review of impact and
effectiveness of transparency and accountability initiatives. In Transparency and accountability initiative. Institute of Development Studies.
Neuman, W. R. (1986). The paradox of mass politics: Knowledge and
opinion in the American electorate. Cambridge, MA: Harvard University Press.
O’Meally, S. C. (2013). Mapping context for social accountability: A
resource paper. Washington, DC: Social Development Department,
World Bank.
Pande, R. (2011). Can informed voters enforce better governance?
Experiments in low-income democracies. Annual Review of Economics,
3(1), 215–237.
Pandey, P., Goyal, S., & Sundararaman, V. (2009). Community participation in public schools: Impact of information campaigns in three
Indian states. Education Economics, 17(3), 355–375.
Reinikka, R., & Svensson, J. (2005). Fighting corruption to improve
schooling: Evidence from a newspaper campaign in Uganda. Journal of
the European Economic Association, 3(2–3), 259–267.
Riker, W. H., & Ordeshook, P. C. (1968). A theory of the calculus of
voting. The American Political Science Review, 25–42.
Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacognitive
experiences and the intricacies of setting people straight: Implications
for debiasing and public information campaigns. In P. Z. Mark (Ed.),
Advances in experimental social psychology. Academic Press.
Sifuna, D. (2007). The challenge of increasing access and improving
quality: An analysis of universal primary education interventions in
Kenya and Tanzania since the 1970s. International Review of Education, 53(5–6), 687–699.
Tarrow, S. (1998). Power in movement: Social movements and contentious
politics. Cambridge University Press.
Twaweza (2011). Twaweza strategy 2011–2014.
UNDP (2011). Human development report 2011.
Uwezo Kenya (2011). Are our children learning? Annual learning assessment report. Uwezo Kenya.
World Bank (2004). World development report 2004: Making services work
for poor people. Washington, D.C.: The World Bank.
Wrong, M. (2009). It’s our turn to eat: The story of a Kenyan whistle-blower. Harper Collins.
Zaller, J. (1992). The nature and origins of mass opinion. New York:
Cambridge University Press.
APPENDIX A. SUPPLEMENTARY DATA
Supplementary data associated with this article can be
found, in the online version, at http://dx.doi.org/10.1016/
j.worlddev.2014.03.014.