LONG-TERM EFFECTS OF COMPUTER USE IN SCHOOLS: EVIDENCE FROM COLOMBIA
CATHERINE RODRIGUEZ 1
Department of Economics, Universidad de los Andes, Calle 19A No. 1‐37 Este, Bloque W, Bogotá (Colombia). Email: [email protected]

FABIO SANCHEZ
Department of Economics, Universidad de los Andes, Calle 19A No. 1‐37 Este, Bloque W Of. 915, Bogotá (Colombia). Fax: 3324492. Tel: (571) 3324494. Email: [email protected]

TATIANA VELASCO
Department of Economics, Universidad de los Andes, Calle 19A No. 1‐37 Este, Bloque W, Bogotá (Colombia). Email: t‐[email protected]

JULIANA MARQUEZ
Cifras y Conceptos, Cra. 3 No. 62‐21, Bogotá (Colombia). Email: [email protected]

1 Corresponding Author
Abstract
This paper evaluates the long-run impact of the Computers for Education
(Computadores para Educar—CPE) program on students’ scholastic achievement,
measured by a national standardized high school graduation exam. This supply-side
program provides computers to Colombian public schools and trains their teachers
in order to ensure that information and communication technology (ICT) is
integrated into the schools’ daily activities. Positive and significant effects of the
program were found on the total standardized scores under two estimation and
comparison group methodologies. Moreover, placebo tests and estimations using
data from a randomized control trial provide evidence of the robustness of these
results.
Keywords: computers, education, program evaluation, long-run effects
JEL: C2, I21, I28
1. INTRODUCTION
It is widely recognized that high-quality education is one of the main channels through
which poverty and inequality in developing countries can be reduced.2 Unfortunately, the
evidence suggests that the quality of schools in these countries is poor and well behind that
of developed countries.3 Given this situation, the design and implementation of effective
policies that can influence educational outcomes are of particular interest to governments
and policymakers. One such policy that is increasingly being introduced around the world
is the use of information and communication technology (ICT) to aid educational
instruction.
Evidence on the impact of school computer use on educational outcomes is,
however, limited and mixed.4 For developed countries, Angrist and Lavy (2002) and
Leuven et al. (2009) find that computer-aided instruction may have a negative effect on
math test scores and girls’ academic achievement, respectively. Machin et al. (2007) and
Barrow et al. (2009), however, find positive and significant effects on English, science, and
math test scores. Finally, Dynarski et al. (2007) and Rouse et al. (2004) find no effect of
ICTs on educational outcomes. The evidence for developing countries, although more
limited, suggests that positive impacts may be obtained from the use of these technologies
2
Using standardized test scores, studies for developing countries have shown that, among other effects, the
quality of education significantly increases personal income (Hanushek and Woessmann, 2007), is positively
correlated with scholastic attainment (Hanushek et al., 2008), and increases the rate of economic growth of
the country (Hanushek and Woessmann, 2009).
3
For example, Glewwe and Kremer (2006) report that according to TIMSS scores, the disparities between
developed and developing countries amount to a three-year education gap. See Lockheed and Verspoor
(1991), Harbison et al. (1992), Hanushek (1995), Glewwe (1999), and Filmer et al. (2006) for further
evidence.
4
A closely related literature that studies the causal impact of home computer use on educational outcomes
includes studies conducted by Malamud and Pop-Eleches (2011), Vigdor and Ladd (2010), and Santiago et al.
(2010).
in schools. Positive short-term effects are found by Banerjee et al. (2007) and He et al.
(2008). Linden (2008) also finds positive effects when ICTs are used as a complement to
the normal curriculum, but when used as a substitute they have a negative and significant
effect. Finally, Barrera and Linden (2009), who evaluate the short-run effects of Computadores para Educar, the program evaluated in this paper, find a positive but not statistically significant effect on test scores.
A common characteristic of these previous studies is that they all evaluate the
impact of ICT use in schools after only one or two years of exposure. However, there are
no studies in the literature on the long-run effects of using this technology to aid classroom
instruction. This may hinder our understanding of the impact of such interventions, given
the results from studies such as Banerjee et al. (2007), Muralidharan and Sundararaman (2007,
2008, 2011), and Andrabi et al. (2009), which show that outcomes of educational programs
may differ depending on when they are evaluated.
This paper fills this gap by empirically estimating the long-term impact of
Computadores para Educar (CPE) on Colombian students’ academic achievement. CPE is
a nationwide, well-organized program which, since 2000, has equipped more than 26,000
schools and 4 million students with computers. The program’s objective is to deploy
computers as an important tool in the educational process. Hence, a crucial element of the
program is its education component, under which nearly 160,000 teachers from beneficiary
schools have received approximately one year of training in the use of ICTs in computer-aided instruction.
We use the results of a national standardized exam taken by more than three million
Colombian students who graduated high school between 2000 and 2010 to estimate the
long-run impact of the program. In particular, we use census information of all students
who graduated from public high schools in the country during the period of analysis. We
follow Imbens and Wooldridge (2009) and estimate the causal impact of the program under
two assumptions: unconfoundedness and selection into the program on unobservables.
Under the first assumption, we estimate the effect of CPE using OLS regressions and
controlling for the socioeconomic characteristics of the students, year, and school fixed
effects, as well as fixed effects at the state and year level. Under the second assumption, selection on unobservables, we use an instrumental variable approach as an alternative estimation methodology. The
instruments are chosen based on the specific rules that guided the selection of the
program’s beneficiary schools.
Under both methodologies, we find that exposure to the program increases the
scholastic achievement of the students benefited by CPE. However, this impact is not
constant over time; rather, it increases with the exposure of the student to the program.
Specifically, under the unconfoundedness assumption, while in the first three years the
program has positive but small effects, after the ninth year of exposure there is a significant
improvement in test scores that reaches 0.13 standard deviations. We argue that this delay
is natural, given the characteristics of the program. It takes approximately one year for the
computers to be installed in beneficiary schools and another to complete the entire teacher
training. When the IV methodology is used, evidence of possible self-selection emerges,
and the impact of the program is slightly lower, reaching 0.12 standard deviations. All of
these results are robust to different specifications and the use of alternative control groups.
Moreover, the results are maintained under two different robustness checks. First,
using data from 1996 to 2000 we estimate a placebo effect assuming that the treated
schools started the program six years before the true treatment began. Using the same
specifications, we find no impact of the placebo, suggesting that it is indeed the effect of the program that drives the positive and significant coefficients, and not the specifications, period of analysis, or methodologies used. Second, we use the schools that took part in the
Barrera and Linden (2009) randomized control trial (RCT) and estimate the long-run
impact of the program under unconfoundedness. We complement their findings by using
the results of the SABER 11 test scores and evaluate the effect on students who attend beneficiary schools after eight years of exposure. In this smaller sample, we also find
positive and significant effects of the program, evidence that corroborates our main results
using census information.
This paper complements the existing literature in several respects. First, this is the
first long-term impact evaluation of ICT use in schools in the literature. This is particularly
important because, as our results suggest, outcomes of educational interventions change
over time; hence, evaluating programs only a short time after implementation may not
give a complete picture of their possible effects. Second, unlike the aforementioned studies,
which are based on small samples of schools or students, we estimate the impact on all
those who have benefited from this program nationwide. Finally, CPE has been in
place for more than ten years and has been expanded across the country in both urban and
rural areas. Thus, the program could be applied on a massive scale in other developing
countries and has proved to be sustainable over time.
The remainder of the paper is organized as follows. Section two presents a succinct
description of the program, and section three presents our identification strategies. Section
four describes the data sets used and some basic descriptive statistics. Sections five and six
present the main results and some robustness checks, respectively. Section seven concludes.
2. COMPUTERS FOR EDUCATION (CPE)
CPE was established in Colombia at the end of the 1990s based on the experiences of the
Canadian programs Schoolnet and Computers for Schools.5 The program benefited from a
massive donation of computers by public organizations, private companies, and private
citizens to the country’s public schools. The aim of these donations was to incorporate
computers as an important tool in the education of Colombian students. The specific
functions of the various participating entities (of which the Ministry of Information and
Communication Technologies—MTIC—was the most heavily involved) and the resources
to implement the program were regulated by Decree 2324 of November 15, 2000, the date
that the program officially began.
The program was designed to be implemented in five stages, the first of which is the
acquisition and adaptation of the computers donated to the schools. Regardless of the
source of the computers, they are all subject to inspection and adaptation by CPE’s
specialized personnel, who ensure that they meet minimum quality standards.6 This process
provides assurances to the beneficiary schools that the computers are in optimum condition
for use by their teachers and students.
In the second stage, the MTIC selects the schools to be served each year. All
benefited institutions must meet three basic criteria: i) they must be public schools; ii) they
must not have benefited from any IT equipment program; and iii) they must have the
5
In 1999, the National Council of Economic and Social Policy, through a document entitled CONPES 3063, defined the general characteristics of the program and approved its launch.
6
The quality requirements have been well established since the program's inception. Today, among other requirements, the equipment needs to have at least a Pentium III 300 processor, 128 MB of memory, a 10 GB hard disk, a 3.5" HD diskette drive, a CRT or LCD color monitor, multimedia capability, and a 10/100 network card or a 54 Mbps wireless card.
infrastructure required to adapt a classroom where the computers are to be installed
(namely electrical outlets, electricity with voltage regulation, and proper grounding).
The program is not limited to any specific range of grades (primary, secondary, or
both) or number of students. Any school interested in participating must fill out a registration form, online or on paper, and submit it to CPE headquarters at the MTIC.
Each year, CPE selects the beneficiary schools using a set of specific criteria that have been
relatively constant through the years. Until 2010, schools were selected on the basis of seven characteristics, each of which received a score (from 0 to 10) that was then multiplied by a given weight to obtain a final score. The seven criteria and their
respective weights were: time elapsed since the request to participate in the program was
made (20%), number of students per computer in the state (5%), percent of benefited
schools in the state (10%), rural schools (15%), electric power at school (10%),
commitment of the mayor of the municipality and students’ parents (30%), and the use of
the school by the community (10%). The first four criteria are easily verified by the CPE team; the remaining three are based on subjective assessments by the school's director. Those schools with
the highest score are selected for participation, which initiates the third stage of the
program.7,8
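The weighted scoring rule can be illustrated as follows; the weights are those listed in the text, while the example school's raw criterion scores are hypothetical:

```python
# Sketch of CPE's school-selection score: each criterion is scored 0-10
# and multiplied by its weight. The weights follow the text; the example
# school's raw scores below are made up for illustration.
WEIGHTS = {
    "time_since_request": 0.20,
    "students_per_computer_in_state": 0.05,
    "pct_benefited_schools_in_state": 0.10,
    "rural_school": 0.15,
    "electric_power": 0.10,
    "mayor_and_parents_commitment": 0.30,
    "community_use": 0.10,
}

def selection_score(raw_scores: dict) -> float:
    """Weighted sum of the seven criteria, each scored from 0 to 10."""
    return sum(WEIGHTS[c] * raw_scores[c] for c in WEIGHTS)

example_school = {
    "time_since_request": 8,
    "students_per_computer_in_state": 5,
    "pct_benefited_schools_in_state": 3,
    "rural_school": 10,
    "electric_power": 10,
    "mayor_and_parents_commitment": 7,
    "community_use": 6,
}
print(selection_score(example_school))  # approximately 7.35; maximum is 10
```

Schools would then be ranked by this score and selected from the top until the year's capacity is exhausted.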
In the third stage, CPE personnel meet with the school directors, teachers, and the
municipality’s mayor in order to introduce the program and begin adapting the
infrastructure so that the computers can be installed. The minimum requirements requested
by CPE to support the operation and maintenance of the equipment are: a specific
7
The program tries to select as many schools as it is able to assist. For instance, in 2009, out of 6,123 applications, it selected a total of 4,562 schools.
8
Details on this selection process were not available. Hence, although an RD design could have been
appropriate, the lack of information on the scores each school attained did not allow us to carry out such an
estimation strategy.
classroom with a minimum level of security against robbery and fire, adequate illumination
and ventilation, an electrical system with three voltage stabilizers, and suitable computer
furniture. Once the classroom is fully adapted, the MTIC delivers the computers to the
schools, and specialized technicians install them and verify that they are in working order.
The number of computers donated by CPE is standard for all schools. Specifically, during
the period under study, the program strove for each school to have one computer for every
20 students. If the beneficiary school already had some computers, CPE provided the exact
number of computers to reach this target. This third stage, from the time a school is selected until the computers are delivered, takes ten to twelve months on average.
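The one-computer-per-20-students target described above implies a simple top-up rule; a sketch (the rounding convention is an assumption, since the text does not specify one):

```python
import math

def computers_to_donate(enrollment: int, existing_computers: int) -> int:
    """Top-up rule: CPE donates enough machines to reach one computer
    per 20 students, counting any computers the school already has.
    Rounding up is an assumption; the program's exact convention is
    not described in the text."""
    target = math.ceil(enrollment / 20)
    return max(target - existing_computers, 0)

print(computers_to_donate(400, 0))   # 20 computers for 400 students
print(computers_to_donate(400, 8))   # school already has 8, so 12 more
```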
The fourth stage of the program—teacher training—is crucial. It lasts approximately
one academic year and is specifically designed to develop the pedagogical and motivational skills that teachers from the beneficiary schools need in order to incorporate the use of computers into the subjects they teach. The training, which promotes the integration of ICTs into the learning process as computer-aided instruction (CAI), is delivered through alliances with regional universities. The training stage encompasses two phases: the first is field
instruction, in which representatives from the universities go to the schools and impart
specialized training. It includes theory as well as the development of classroom activities
that, through specific teaching projects and the development of networks, ensure the
adequate adoption of ICTs by the teachers. In the second phase, teachers acquire additional
skills through a battery of pedagogical support alternatives that include permanent
telephone assistance delivered by skilled personnel, a website where teachers receive
additional suggestions for the development of school projects, and virtual forums in which
knowledge is shared and constructed. Even though the two phases (and the time assigned to each) are standard across universities, the specific material taught and implemented
differs across regions. Each university develops specialized material and training in
accordance with the needs of the population to be served and focuses on the subjects in
which it has expertise.
The fifth and final stage is related to the maintenance, service, and replacement of
the equipment donated by CPE. In the first year after the computers are delivered, CPE
offers technical support directly via telephone and/or electronic mail. In the second year, a
technician visits all of the schools once to provide preventive maintenance and repair of the
donated equipment. Also in the second year, the teachers are given technical training.
Specifically, they are taught how to perform basic preventive maintenance and repair that
will enable them to keep the equipment in good working order. In the third year, preventive
maintenance continues. At the start of the fourth year, outmoded computers are replaced.
Replacement of old computers falls under the responsibility of the municipal mayors.
From its inception in 2000, CPE has steadily expanded throughout the country. As
shown in Maps 1-3, the program has reached nearly every municipality in the country. By
2010 CPE had received donations of almost 170,000 computers, trained 160,000 teachers,
and reached 33 percent of all Colombian public schools. In other words, by 2010, CPE had reached 43.28 percent of all Colombian public school students, or more than 4 million students.
Table 1 presents summary statistics of the expansion of the program.
3. IDENTIFICATION STRATEGY
As in many empirical program evaluations, when estimating the average effect of
CPE on the academic achievement of students attending beneficiary schools (the average
impact of treatment on the treated—ATT) we suffer from a missing data problem. To
illustrate, we follow the common notation in the literature and let D be a zero-one indicator
variable that equals one if child i attended a beneficiary CPE school s; Y_{i,s,t,1} the outcome of interest that student i obtains if she attends an intervened school s in period t; and Y_{i,s,t,0} the outcome obtained if she does not attend a CPE school in period t. Then, the outcome observed for student i in period t is given by Y_{i,s,t} = D·Y_{i,s,t,1} + (1−D)·Y_{i,s,t,0}, and the average gain for children who attended CPE beneficiary schools and have characteristics X_{i,s,t} is given by:

ATT = E(Y_{i,s,t,1} − Y_{i,s,t,0} | D = 1, X_{i,s,t}).
Given that Yi,s,t,0 is not observed for students who attended CPE schools, we need an
econometric methodology that allows us to obtain a reliable estimate of this counterfactual.
Given that CPE was not introduced in the country following a randomized assignment,9 we
are forced to evaluate the long-run effects of the program using retrospective observational
data. According to Imbens and Wooldridge (2009), under such a scenario, the assumptions
regarding selection into the program are crucial to determining the appropriate estimation
methodology.
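The missing-data problem can be made concrete with a small simulation (all numbers hypothetical): only one potential outcome is ever observed per student, and when treated schools differ systematically from untreated ones, a naive difference in means does not recover the ATT.

```python
import numpy as np

# Toy illustration of the evaluation problem; the data-generating
# process and all parameters are hypothetical.
rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                  # socioeconomic index
# Negative selection: schools with lower x are more likely to be treated.
d = (rng.random(n) < 0.5 - 0.2 * np.tanh(x)).astype(int)
y0 = 0.5 * x + rng.normal(size=n)       # potential outcome without CPE
y1 = y0 + 0.10                          # true gain of 0.10 for everyone
y = d * y1 + (1 - d) * y0               # only one outcome is observed

att = (y1 - y0)[d == 1].mean()          # infeasible: uses both outcomes
naive = y[d == 1].mean() - y[d == 0].mean()
print(att, naive)  # the naive difference understates the true effect
```

Under negative selection the naive comparison is biased downward, which is why the estimation strategy must either adjust for the relevant covariates (unconfoundedness) or instrument for treatment.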
Among the greatest concerns researchers face when estimating impacts is self-selection into the program. Two assumptions can be made. The first is unconfoundedness,
under which it is assumed that by adjusting treatment and control groups for differences in
observed covariates, there are no unobserved factors associated with treatment and its
potential outcomes. The second possibility is that selection into the program may have been
driven by unobservables that are also determinants of the potential outcomes.
Since we are interested in estimating the impact of CPE on students' academic
achievement, and given that the CPE program chose the participating schools (and not
students) on the basis of observable characteristics, unconfoundedness could be a
9
The exception is, of course, the sample of 97 schools from Barrera and Linden (2009).
reasonable assumption. After controlling for students' personal and family characteristics,
and time and school fixed effects, it is difficult to foresee unobserved variables that may
invalidate such an assumption.
However, other possible channels may invalidate unconfoundedness. For example,
given the program’s objective of bridging the technological divide across the country, CPE
could have chosen to benefit schools with poor infrastructure, that is, those schools with the
fewest resources. This could also imply that students attending these schools come from
less wealthy families and may be more poorly prepared on average. If the control
variables do not completely capture these possibilities, unconfoundedness may be an
invalid assumption. Hence, in this paper we present the results under both scenarios.
Under the unconfoundedness assumption, we follow Imbens and Wooldridge (2009)
and estimate the impact of the program using a combination of different control groups and
linear regressions, which is, according to the authors, currently the best practice in the field.
Specifically, we estimate the impact of CPE using the information from two distinct
samples. First, we use information from all public schools in the country. Second, we take
advantage of the gradual expansion of CPE across time and space in Colombia and estimate
all our specifications using only information from schools that were actually treated at
some point in time in the period under study. Hence, with this second alternative control
group, our preferred one, we take schools and students that should be more similar to each other and evaluate the impact of the program on them. For both groups, however, we
estimate the normalized differences in all observable covariates and analyze whether they
are lower than 0.25. According to Imbens and Wooldridge (2009), differences below this
threshold imply that linear regression estimates will not be sensitive to the specification
chosen; hence, specification bias concerns are significantly reduced.
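The normalized-difference diagnostic from Imbens and Wooldridge (2009) can be sketched as below, using a hypothetical covariate; it scales the difference in group means by the pooled standard deviation and compares it with the 0.25 threshold:

```python
import numpy as np

def normalized_difference(x_treat, x_control):
    """Imbens-Wooldridge (2009) normalized difference:
    (mean_t - mean_c) / sqrt((var_t + var_c) / 2)."""
    x_t = np.asarray(x_treat, dtype=float)
    x_c = np.asarray(x_control, dtype=float)
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return (x_t.mean() - x_c.mean()) / pooled_sd

# Hypothetical covariate (e.g., mother's years of schooling) for
# illustration only.
rng = np.random.default_rng(1)
treat = rng.normal(6.0, 3.0, size=5000)
control = rng.normal(6.5, 3.0, size=5000)
nd = normalized_difference(treat, control)
print(nd, abs(nd) < 0.25)  # below the 0.25 rule-of-thumb threshold
```

Unlike a t-statistic, this measure does not grow mechanically with sample size, which is why it is the preferred balance diagnostic with census-scale data.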
Under these conditions, in the first estimation framework used in this paper, we
assume that the academic achievement of student i is given by:
Y_{i,s,t} = Σ_{k=1}^{9} β_k A_{k,i,s,t} + δ X_{i,s,t} + λ_s + φ_t + ε_{i,s,t}        (1)
where Y_{i,s,t} is the educational outcome of interest obtained by student i in school s at time t, measured in our case by a standardized national high school graduation exam, SABER 11. A_{k,i,s,t} are nine dummy variables, where the kth dummy takes a value of one for a child i attending school s in year t that has benefited from the CPE program for k years, and zero otherwise. X_{i,s,t} is a vector of student-level control variables, such as age, gender,
mother’s education, and socioeconomic strata dummies.10 Our specification also includes
fixed effects at the school level (λ_s). These are important controls given that, as mentioned, the program selects schools, not students; hence, such fixed effects control for any unobservable, time-constant differences across schools that could invalidate the unconfoundedness assumption. Furthermore, we also include time fixed effects (φ_t) to
capture differences that could emerge over time and influence all students’ scores. Finally,
ε_{i,s,t} represents the error term, which we cluster at the school level, the level of treatment.
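As a rough sketch of how specification (1) can be estimated, the snippet below fits a fixed-effects regression with school-clustered standard errors on synthetic data. For brevity it collapses the nine exposure dummies into a single linear exposure term; all variable names, parameters, and data are illustrative, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data loosely mimicking the setting: schools adopt CPE in
# different years, and scores improve with years of exposure.
rng = np.random.default_rng(2)
rows = []
for school in range(60):
    school_fe = rng.normal(0.0, 0.3)          # school fixed effect
    start = int(rng.integers(2001, 2008))     # year the school enters CPE
    for year in range(2000, 2011):
        exposure = min(max(year - start, 0), 9)
        for _ in range(30):                    # 30 test-takers per school-year
            score = (0.05 * exposure           # true effect per exposure year
                     + school_fe
                     + 0.02 * (year - 2000)    # common time trend
                     + rng.normal(0.0, 1.0))
            rows.append((school, year, exposure, score))
df = pd.DataFrame(rows, columns=["school", "year", "exposure", "score"])

# School and year fixed effects, with errors clustered at the school
# level (the level of treatment), as in the text.
fit = smf.ols("score ~ exposure + C(school) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]})
print(round(fit.params["exposure"], 3))  # close to the true 0.05
```

The paper's actual specification uses the nine dummies A_{k,i,s,t} instead of a linear term, which would simply replace `exposure` with `C(exposure)` in the formula.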
To acknowledge the possibility that unconfoundedness could be an erroneous
assumption, we use an instrumental variables approach to estimate program impact. From
specification (1) we observe that there are nine treatment variables (one for each period of
time the school could have been served by the program). Thus, we need nine different
instruments. To choose an appropriate instrument for each year, we follow the CPE
10
Public services in Colombia are cross-subsidized through six socioeconomic strata that vary by neighborhood in each municipality. Less wealthy households normally belong to the lower strata (1, 2, and 3), medium-income households belong to strata 4 and 5, and the richest households in the country belong to stratum 6.
selection process. One of the criteria used by the program to select beneficiary schools was
the percentage of schools served by the program in the state. However, in order to have
variation at the municipal level, we use one closely related instrument for the kth year that
school s has participated in the program. In particular, we use the percentage of public
schools in all other municipalities in the country that were benefited by CPE in year t-1 for
this specific kth period of time adjusted by the distance to the school of interest.
Specifically, for each year t and period k, we first build a vector containing the percentage
of public schools served by CPE in each municipality of Colombia for kth years in year t-1.
This vector’s size is 1,122 (corresponding to the total number of municipalities in the
country) times one and takes the form of:
P_{k,t-1} = [ %CPE_{k,t-1,1}, %CPE_{k,t-1,2}, …, %CPE_{k,t-1,1122} ]′        (2)

where %CPE_{k,t-1,j} is the percentage of public schools in municipality j that had been served by CPE for k years as of year t−1.
We then construct a normalized distance matrix (M) between all municipalities in
the country where the distances are expressed in radians and are normalized so that the sum
of each row is equal to one. Hence, we have:
        | d_{1,1}       d_{1,2}       ⋯   d_{1,1122}    |
M  =    | d_{2,1}       d_{2,2}       ⋯   d_{2,1122}    |        (3)
        | ⋮             ⋮             ⋱   ⋮             |
        | d_{1122,1}    d_{1122,2}    ⋯   d_{1122,1122} |
As seen, matrix M is a square matrix of 1,122 by 1,122, where each cell d_{i,j} is the normalized distance between municipalities i and j. For instance, d_{1,1} is the distance between municipality 1 and itself, d_{1,2} is the distance between municipalities 1 and 2, and similarly for the remaining municipalities. Note that matrix M's diagonal is equal to zero, corresponding to the distance between each municipality and itself.
We then multiply M by P_{k,t-1} and obtain the vector P*_{k,t-1}, which contains the proportion of public schools that have been served by CPE for k years in all neighboring municipalities in the previous year (t−1), weighted by the normalized distance between each municipality j and all other municipalities. Vector P*_{k,t-1} is the variable employed as an instrument for each A_{k,i,s,t} in specification (1) and is expressed as follows:

P*_{k,t-1} = M · P_{k,t-1} = [ %CPE*_{k,t-1,1}, %CPE*_{k,t-1,2}, …, %CPE*_{k,t-1,1122} ]′        (4)
Such weighted percentages will therefore be highly correlated with the number of years school s has benefited from CPE, reflecting both the conditions for participation and the program's expansion throughout the country over time. This information, which varies by school and over time, complies with the first (relevance) requirement of a good instrument.
Moreover, given the difficulty of foreseeing how the knowledge attained by a student (after controlling for personal, school, and municipality characteristics) might be related to the percentage of schools benefited by CPE for k years in all other municipalities in the country in the previous year, we argue that the instruments are exogenous to our variable of interest.11
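The construction in equations (2)-(4) can be sketched with a toy example, using four municipalities instead of 1,122 and hypothetical treated shares:

```python
import numpy as np

# Toy version of equations (2)-(4): 4 municipalities instead of 1,122.
# p[j] = share of public schools in municipality j treated by CPE for
# k years as of year t-1 (hypothetical values).
p = np.array([0.40, 0.10, 0.00, 0.25])

# Raw pairwise distances (symmetric, zero diagonal), e.g. in radians.
dist = np.array([
    [0.0, 1.0, 2.0, 3.0],
    [1.0, 0.0, 1.0, 2.0],
    [2.0, 1.0, 0.0, 1.0],
    [3.0, 2.0, 1.0, 0.0],
])

# Row-normalize so each row sums to one, as in equation (3). The zero
# diagonal means a municipality's own share never enters its instrument.
M = dist / dist.sum(axis=1, keepdims=True)

# Equation (4): the instrument is the distance-weighted combination of
# the other municipalities' treated shares.
p_star = M @ p
print(p_star)
```

The true matrix is 1,122 by 1,122, built one vector P*_{k,t-1} per exposure length k and year t, yielding the nine instruments the specification requires.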
11
Naturally, we have no instruments for schools that were treated in 2001, the first year the program started, given that the percentage of schools in neighboring municipalities treated in 2000 is always zero. Hence, we exclude from the regressions these 40 schools, which amount to 3.01% of treated schools and 1.14% of public schools in the country.
Rodríguez et al. (2011) show that students attending CPE beneficiary schools are
almost four percentage points less likely to drop out of school two years after the program
was implemented. Given that in order to present the SABER 11 examination students must
stay in school until their senior year, the distribution of test scores between treated and nontreated students may not be comparable. If one assumes that the program most likely
induced those students who would otherwise have dropped out to remain in the system,
then the estimates obtained under the above-described methodologies may represent a
lower bound of the true effect. In order to take this into account, we follow Angrist et al.
(2006), who face a similar problem and estimate an upper bound effect using a nonparametric approach. Specifically, based on the results from Rodríguez et al. (2011), we
drop the lowest 0.5 and 1 percent of SABER 11 scores for students attending a CPE school
and run all our estimations on this restricted sample. Such estimates in turn could provide
an upper bound effect of the program.
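The trimming exercise behind the upper bound can be sketched as follows (column names and data are illustrative):

```python
import numpy as np
import pandas as pd

def trim_lowest_treated(df, score_col="score", treat_col="treated", pct=0.005):
    """Drop the lowest `pct` share of scores among treated students only,
    in the spirit of the Angrist et al. (2006) bounding exercise.
    Column names are illustrative, not the paper's."""
    cutoff = df.loc[df[treat_col] == 1, score_col].quantile(pct)
    keep = (df[treat_col] == 0) | (df[score_col] > cutoff)
    return df[keep]

# Hypothetical standardized scores and treatment indicator.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "score": rng.normal(size=10_000),
    "treated": rng.integers(0, 2, size=10_000),
})
trimmed = trim_lowest_treated(df, pct=0.01)     # drop the lowest 1 percent
print(len(df) - len(trimmed))  # roughly 1 percent of treated students
```

Re-running the main specifications on this trimmed sample removes the marginal stayers induced by the program, which is what makes the resulting estimate an upper bound.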
Finally, we carry out two different robustness checks. First, we do a placebo test and
assume that the program started in 1996 instead of 2001. We assume the exact same selection process for benefited schools but use scholastic achievement information from 1996 to 1999, years in which the CPE program had not yet started. We estimate the
placebo effect under both unconfoundedness and selection on unobservables using the same
specifications used before. However, unlike the true estimates, due to data restrictions we
can only estimate it for a maximum of four years for such placebo treatment.12 The second
robustness check uses data from the RCT conducted by Barrera and Linden (2009). In their
original study, 100 schools were randomly selected to receive the benefits of CPE, of which
12
As when estimating the true effect, we are obliged to exclude those schools that began treatment in 2001, since no instruments exist for them.
half were assigned to the treatment group and the other half to the control group. Since we
have census data, we are able to identify the schools that were selected in the RCT and
estimate the impact of CPE on the scholastic achievement of their students measured by
SABER 11 scores, a question not addressed in the original paper.13
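A sketch of how the placebo timing can be encoded, using a five-year shift to match the 2001-to-1996 example in the text (the exposure convention and column names are assumptions):

```python
import pandas as pd

def placebo_exposure(adoption_year: int, exam_year: int, shift: int = 5) -> int:
    """Years of placebo 'treatment' at exam time, pretending the school
    adopted CPE `shift` years earlier than it actually did. The 5-year
    default reproduces the 2001 -> 1996 example in the text; capping at
    four reflects the 1996-1999 placebo window. Conventions are assumed."""
    placebo_start = adoption_year - shift
    return min(max(exam_year - placebo_start, 0), 4)

# Hypothetical schools and their true CPE adoption years.
schools = pd.DataFrame({"school": [1, 2], "adopted": [2001, 2004]})
for exam_year in range(1996, 2000):
    schools[f"exp_{exam_year}"] = [
        placebo_exposure(a, exam_year) for a in schools["adopted"]]
print(schools)
```

The main specifications are then re-estimated on 1996-1999 data with these placebo exposure dummies in place of the true ones; finding no effect supports the causal interpretation.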
4. DATA
The data used in this paper come from four sources, which had to be carefully
merged in order to have census data on all schools and students in the Colombian public
school system. The first source comes from what is known as the SABER 11 exam for
2000 through 2010. The SABER 11 is a government-administered exam that evaluates
approximately 90 percent of Colombian senior high school students. This exam evaluates
students in math, language, social studies, science, and an elective subject chosen by the
student or the school. A total score is then calculated as the sum of the result obtained in
each area. However, because the exam questions change in each round, we standardize it in
order to make it comparable throughout the period, so that each year the mean and the
standard deviation are equal to zero and one, respectively. We will use this standardized
measure throughout the empirical exercises. Conveniently, SABER 11 also has basic
socioeconomic characteristics of students which include the age, gender, mother’s
education and socioeconomic strata. We use these variables as controls in all our
estimations.
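The within-year standardization described above can be sketched with pandas (column names and data are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical raw SABER 11 totals whose scale drifts across rounds.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "year": rng.integers(2000, 2011, size=20_000),
    "raw_total": rng.normal(250, 30, size=20_000),
})

# Standardize within each exam year so that every year has mean 0 and
# standard deviation 1, making scores comparable across rounds.
df["z_total"] = df.groupby("year")["raw_total"].transform(
    lambda s: (s - s.mean()) / s.std())

check = df.groupby("year")["z_total"].agg(["mean", "std"])
print(check.round(6))  # means ~0 and standard deviations ~1 every year
```

Because standardization is done year by year, the estimated program effects are expressed in standard deviations of each cohort's score distribution.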
The second source of information comes directly from CPE management, which has
detailed information on all the schools that were benefited and the year they were chosen to
enter into the program. As the schools are identified by an official code, we were able to
13
We thank MTIC and CPE for providing the complete list of schools that were included in the Barrera and
Linden (2009) RCT.
merge this information with the data from the SABER 11 scores. By merging all of the data
sets, we obtained information on 3,760,261 students attending 5,836 public schools who took the SABER 11 exam in their senior year in the period 2000-2010.
Table 2 provides descriptive statistics of the high school seniors in our data, according to whether or not they attended a beneficiary school. The table
presents the mean and standard deviation of each variable for both the treatment and control
groups for two distinct samples: the complete sample and a restricted one based only on
those students who attended a school that was benefited by CPE. These last students are
further divided according to whether they graduated from a school that had been treated by CPE for between one and four years or for between five and nine years, respectively. Moreover,
for each sample, we also present the normalized difference of each characteristic by treatment status.
When using the complete sample, it can be observed that the schools served by CPE
have, on average, students belonging to lower socioeconomic strata. Students from CPE
schools have less educated mothers and belong to households in lower income strata. This
is probably closely related to the fact that they also have lower average SABER 11 scores.
At first glance, this would suggest that, if anything, there could be negative self-selection of
schools into the program. The same conclusion emerges when analyzing only students who
attend schools benefited by CPE: students attending schools that have been benefited for a
longer period of time, between five and nine years, have a slightly lower socioeconomic
background than those attending CPE schools benefited for a shorter period. Notably,
however, all normalized differences are below 0.25 (except for the proportion of students
belonging to the lowest income stratum), which reduces specification bias concerns under
our linear specifications.
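The normalized difference used in Table 2 follows the convention of Imbens and Wooldridge (2009): for a characteristic $x$ with treated and control means and standard deviations,

```latex
\Delta_x = \frac{\bar{x}_T - \bar{x}_C}{\sqrt{s_T^2 + s_C^2}}
```

As a rule of thumb, values of $|\Delta_x|$ below roughly 0.25 suggest that linear regression adjustments are unlikely to be sensitive to specification, which is the threshold invoked above.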
5. LONG-TERM CPE IMPACT ON STUDENT ACHIEVEMENT
5.1 Results under Unconfoundedness
As explained above, we first assume that, after adjusting for observable personal
characteristics as well as school and year fixed effects, there are no unobserved factors
associated both with students attending a school served by CPE and with their potential
SABER 11 scores. We believe this is a reasonable assumption given that CPE selects
schools and not students; that is, the unit of treatment is the school, not the individual
student. Moreover, we have a set of control variables that are normally thought to be
highly correlated with school performance and, probably, with parents' school choice.
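Estimation under this assumption amounts to an OLS regression of standardized scores on exposure dummies plus school and year fixed effects. A minimal numpy sketch on simulated data (the data and the single `treated` dummy are illustrative; the paper's specification (1) uses a full set of exposure-year dummies and clusters errors by school):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_schools, n_years = 400, 20, 5
school = rng.integers(0, n_schools, n)
year = rng.integers(0, n_years, n)
treated = rng.integers(0, 2, n)                # stand-in exposure dummy
score = 0.05 * treated + rng.normal(0.0, 1.0, n)

# Design matrix: exposure dummy, school dummies, year dummies (one dropped)
X = np.column_stack([
    treated,
    np.eye(n_schools)[school],                 # school fixed effects
    np.eye(n_years)[year][:, 1:],              # year fixed effects
])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
effect = beta[0]                               # coefficient on the dummy
```

With real data, the dummy-variable approach above is equivalent to within-school demeaning, and the standard error on `effect` would be computed with school-level clustering.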
Table 3 presents the results obtained after estimating specification (1) under three
different models, each one including a different set of control variables and all with
clustered errors at the school level. The first model is the simplest one, in which only
school and year fixed effects are included and all information from SABER 11 is used. In
the second model we include students’ socioeconomic controls which have been found to
be highly correlated with their scholastic achievement. Finally, the third model reproduces
model two’s specification but uses the restricted sample of schools that have been
eventually treated by CPE.
The results suggest that CPE has positive and significant impacts on student
achievement. Moreover, the effect differs according to the number of years each student is
exposed to the program. It is interesting to note the stability of all coefficients of interest
irrespective of the controls or the information used when estimating them. This suggests
that the unconfoundedness assumption may indeed be valid and that, after controlling for
school and time-constant characteristics, treatment status is orthogonal to the students'
socioeconomic characteristics. Results using information on all students who graduated
from public schools in the country suggest that attending a school that has been benefited
by CPE for one year has a positive and significant effect on SABER 11 standardized
scores, increasing them by 0.03 standard deviations. Attending a school that has been
benefited for nine years has a positive and significant effect of 0.09 standard deviations.
As explained before, given the variation in the expansion of the program across the
country and over time, we can estimate its impact using only treated schools. That is,
model 3 presents the impact of the different years of treatment only for those students
attending benefited schools at some point in time; hence, the number of observations is
reduced by more than one million. Under this preferred specification we find that the
impact becomes even stronger. Students who attend a school that has benefited from CPE
for one year obtain a total SABER 11 score 0.04 standard deviations higher than students
attending a CPE school that has not yet been benefited. This impact increases to 0.13
standard deviations after the ninth year.
It is worth analyzing why the positive impacts increase over time after a certain
number of years have elapsed since the acquisition of the computers. According to
Downes (2001), cited in Blackmore (2003), there are four levels of integration of ICTs
into schools and classrooms. At the first level, ICT technology arrives at the school, but
teaching practices remain unchanged; students are taught basic ICT skills as a separate
subject. At the second level, the integration of ICT into the daily work of some teachers
begins. At the third level, the use of ICT changes content as well as teaching practices.
Finally, at the fourth level, ICT integration leads to changes in organizational and
structural features of schooling. We argue that this sequence of ICT integration closely
reflects both what was found in this evaluation and the way CPE is organized.
5.2 Results under Selection on Unobservables
In this section we acknowledge the possibility that unconfoundedness could be an
erroneous assumption and hence use an instrumental variables approach to estimate the
impact of the program. Table 2 provided some evidence for the hypothesis that there is
negative self-selection of schools and students into the program. To account for this
possibility, we undertake an instrumental variables approach using the instruments
described in Section III. The first-stage results are presented in Table A1 in the appendix.
As can be seen, our instruments are highly correlated with our independent variables of
interest. Moreover, all F tests associated with the excluded instruments are well above
standard minimum levels; hence, weak instruments are not a concern.
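The logic of the two-stage procedure can be sketched on simulated data (numpy sketch; the instrument loosely mimics neighbours' lagged CPE coverage, and the true effect of 0.5 is invented for the simulation):

```python
import numpy as np

def two_sls(y, d, z, x):
    """Basic 2SLS: regress d on (z, x), then y on (fitted d, x)."""
    first = np.column_stack([z, x])
    g, *_ = np.linalg.lstsq(first, d, rcond=None)
    d_hat = first @ g                        # fitted (exogenous) treatment
    second = np.column_stack([d_hat, x])
    b, *_ = np.linalg.lstsq(second, y, rcond=None)
    return b[0]

rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                       # unobserved school quality
z = rng.normal(size=n)                       # instrument (e.g. lagged coverage)
d = 0.8 * z + u + rng.normal(size=n)         # exposure, correlated with u
y = 0.5 * d + u + rng.normal(size=n)         # outcome; true effect is 0.5
ones = np.ones((n, 1))                       # intercept as the only control

naive = np.linalg.lstsq(np.column_stack([d, ones]), y, rcond=None)[0][0]
iv = two_sls(y, d, z, ones)                  # close to 0.5; OLS is biased up
```

The naive OLS coefficient absorbs the unobserved confounder and overstates the effect, while the instrumented estimate recovers something close to the true parameter, which is the direction of bias discussed in the text.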
The second-stage results using different control groups are presented in Table 4.
The first column uses as a control group the same students as in models one and two of
Table 3; that is, all students attending public schools in Colombia not served by CPE.
Comparing these coefficients with those obtained under unconfoundedness, there is
evidence of some negative self-selection of schools on unobservables. Once we control for
selection on unobservables, there is a positive and significant, yet slightly smaller, effect of
CPE for every year of exposure. Moreover, the IV results also reproduce the pattern of a
continuous and increasing appropriation of ICT for educational use in the classroom. By the
end of the ninth year of exposure, the effect on SABER 11 standardized scores amounts to
0.06 standard deviations.
Model 2 presents the results using our preferred control group, which includes only
students from schools that at some point have been served by CPE. As can be observed,
with this alternative control group all coefficients of interest increase again, and the rising
trend with duration of exposure continues. Using this group of students, we find that by
the ninth year of exposure SABER 11 scores increase by 0.11 standard deviations. This
impact, however, is still slightly lower than that obtained under the unconfoundedness
assumption.
5.2.2 Upper Bounds of the Impact of CPE
The estimates presented above could represent a lower bound on the long-term
impact of CPE on student achievement, because there is evidence suggesting that students
attending CPE beneficiary schools are almost four percentage points less likely to drop out
of school two years after entering the program. This may mean that the distributions of
test scores of treated and non-treated students are not comparable. Following Angrist et al.
(2006), we estimate an upper bound using a nonparametric approach, dropping the lowest
0.5 and 1 percent of SABER 11 scores of students attending a CPE school. Models 3 and 4
in Table 4 present the results after this correction is implemented under the IV
methodology, using information from all public schools in the country. As expected, all
estimates are larger than those in the previous models and provide an upper bound for the
program's true impact.14
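The trimming behind models 3 and 4 can be sketched as follows (numpy sketch; the function name and data are illustrative — the actual bounding exercise also re-runs the full IV specification on the trimmed sample):

```python
import numpy as np

def trim_lowest(scores, pct):
    """Drop the lowest `pct` percent of scores (Angrist et al. 2006-style
    bounding: trims the treated group to offset its lower dropout rate)."""
    cutoff = np.percentile(scores, pct)
    return scores[scores > cutoff]

treated_scores = np.arange(200.0)               # illustrative treated scores
upper_sample = trim_lowest(treated_scores, 1)   # drop the bottom 1 percent
```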
14 First-stage results for this restricted group are presented in Table A2 in the appendix.
6. ROBUSTNESS CHECKS
As robustness checks, we implement two different strategies. In the first one, we
estimate a placebo effect. Using SABER 11 data from 1996 until 1999, we assume that
CPE started its implementation in Colombian public schools in 1996 instead of 2001. We
estimate specification (1) under unconfoundedness and under selection on unobservables,
assuming the exact same expansion of the program, yet six years before it actually
occurred. For example, in 1996 we assign a dummy equal to one to all public schools that
started to be benefited by CPE in 2002 and zero otherwise. In 1997 these schools are
assigned a value of one in a dummy variable that identifies that they had been treated for
two years, and zero otherwise.15 Panel A in Table 5
presents the impact for four years of treatment under this placebo. As can be observed,
none of the coefficients of interest is significant under either the OLS or the IV
estimation methodology. This provides evidence that the positive and significant
effects previously found can be attributed to CPE and not to any particular pre-trend that
the first CPE schools were experiencing in terms of the scholastic achievement of their
students.
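The placebo recoding can be illustrated as follows (pure-Python sketch; the school identifiers are invented):

```python
# Shift the actual rollout six years back: a school first treated in 2002
# is coded as if treated from 1996, one first treated in 2004 from 1998, etc.
actual_entry_year = {"school_A": 2002, "school_B": 2004}   # illustrative
SHIFT = 6

def placebo_years_treated(school, exam_year):
    """Years of (fictitious) treatment by `exam_year` under the placebo."""
    entry = actual_entry_year.get(school)
    if entry is None:
        return 0                       # never a CPE school
    placebo_entry = entry - SHIFT      # e.g. 2002 -> 1996
    return max(0, exam_year - placebo_entry + 1)
```

A school that actually entered in 2002 is thus coded as treated for one year in 1996 and for two years in 1997, exactly mirroring the true expansion six years early.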
The second robustness check uses data from the Barrera and Linden (2009) RCT to
estimate the long-run impact the program has had on the SABER 11 test scores of its
students, a research question that was not addressed in the original paper. Three details
must be highlighted at this point. First, for these schools we were able to obtain SABER 11
test scores for the period 2005-2013. Second, of the 100 original RCT schools we use a
subsample of 86 schools, with information on 4,758 students who took the SABER 11
exam. It is unsurprising that SABER 11 score information is available for only 86 of the
100 schools, for two main reasons. On the one hand, the exam is taken by students in the
eleventh grade, and only 34 of the original RCT schools offer the complete high school
cycle. On the other hand, even though all high school graduates take the exam, the dropout
rate in the country is extremely high: less than 60% of the students who start first grade
finish secondary education.
15 As when estimating the true effect, we do not take into account the schools first benefited in 2001, since no instrument is available for them.
Finally, as time has
elapsed many of the control schools have been benefited by the program since 2008. Thus,
the information used must be analyzed with caution and we rely on it only as a simple
robustness check in this paper.
In order to accommodate the low number of schools and students in the RCT, we
estimate the impact of CPE on total test scores using a simplified version of our previous
regressions. Specifically, we define a dummy variable equal to one if student i attends
school s that was originally selected as a treatment school in the RCT. Panel B in Table 5
presents the results under unconfoundedness using this RCT subsample. Model 3 presents
the results with no controls, model 4 includes student socioeconomic characteristics and
year fixed effects, and model 5 additionally clusters the errors at the school level. Under
this last specification we find a positive and significant effect, implying that students who
graduated from the RCT schools initially benefited by CPE obtain higher scholastic
achievement, thus corroborating the results obtained with the census information.
7. CONCLUSIONS
Using census information on Colombian high school graduates and different empirical
methods, this study finds that CPE has been a successful supply-side intervention. By
providing computers to public schools and training teachers to incorporate ICT into
their teaching methodologies, CPE has helped increase the quality of education, as
measured by a standardized national high school test. Under both the unconfoundedness
assumption and selection on unobservables, we find positive and significant impacts
that increase over time. Such results are congruent with Downes (2001) and in principle
point to the importance of teacher training, which helps ensure that these technologies
are used correctly and appropriately in the classroom. Additionally, our results are
consistent with other studies in the literature and provide further evidence that
evaluating educational programs over a short period may not give the full picture of
their possible effects.
The program’s organization and its success over a ten-year period provide evidence
that its replication and expansion are possible and beneficial in other developing countries.
The results also suggest the need for certain improvements. For example, computers should
be integrated into the classrooms sooner so as to shorten the time before important positive
impacts are realized. Further research is also needed to understand the mechanisms that
enable the results found here to occur.
ACKNOWLEDGMENTS
We wish to acknowledge the generous collaboration of the entire technical team in
charge of supervising Computadores para Educar at the Colombian Ministry of
Information and Communication Technologies. We are grateful to Felipe Barrera and
the participants in Seminario CEDE at Los Andes, the Education Across the Americas
Conference, and XX Jornadas de la Asociación de Economía de la Educación, for their
valuable comments.
REFERENCES
Andrabi, T., J. Das, A. I. Khwaja, and T. Zajonc. 2009. "Here Today, Gone Tomorrow? Examining the Extent and Implications of Low Persistence in Child Learning." Working Paper Series rwp09-001. John F. Kennedy School of Government. Cambridge, MA: Harvard University.
Angrist, J. and V. Lavy. 2002. "New Evidence on Classroom Computers and Pupil Learning." The Economic Journal 112: 735-765.
Angrist, J., E. Bettinger, and M. Kremer. 2006. "Long-Term Educational Consequences of Secondary School Vouchers: Evidence from Administrative Records in Colombia." American Economic Review 96(3): 847-862.
Banerjee, A., S. Cole, E. Duflo, and L. Linden. 2007. "Remedying Education: Evidence from Two Randomized Experiments in India." Quarterly Journal of Economics 122: 1235-1264.
Barrera, F. and L. Linden. 2009. "The Use and Misuse of Computers in Education: Evidence from a Randomized Experiment in Colombia." Policy Research Working Paper. Washington, DC: World Bank.
Barrow, L., L. Markman, and C. E. Rouse. 2009. "Technology's Edge: The Educational Benefits of Computer-Aided Instruction." American Economic Journal: Economic Policy 1: 52-74.
Blackmore, J., L. Hardcastle, E. Bamblett, and J. Owens. 2003. Effective Use of Information and Communication Technology (ICT) to Enhance Learning for Disadvantaged School Students. Deakin Centre for Education and Change, Koorie Institute of Education. Melbourne, Australia: Institute of Disability Studies, Deakin University.
Documento CONPES 3063. 1999. Departamento Nacional de Planeación. República de Colombia.
Downes, T. 2001. "Remaining Focused on a Moving Target: New Challenges for the Progression of Professional Development for ICT Use in Primary and Secondary Education." http://www.aare.edu.au/
Dynarski, M., et al. 2007. Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort. Institute of Education Sciences Report to Congress. Washington, DC: U.S. Department of Education.
Filmer, D., A. Hasan, and L. Pritchett. 2006. "A Millennium Learning Goal: Measuring Real Progress in Education." Working Paper 97. Center for Global Development. Washington, DC: The Brookings Institution.
Glewwe, P. 1999. The Economics of School Quality Investments in Developing Countries. New York, NY: St. Martin's Press.
Glewwe, P. and M. Kremer. 2006. "Schools, Teachers, and Education Outcomes in Developing Countries." In Hanushek, E. A., S. Machin, and L. Woessmann (Eds.), Handbook of the Economics of Education, Vol. 2. Amsterdam: Elsevier North-Holland.
Hanushek, E. A., J. B. Gomes-Neto, and R. W. Harbison. 1992. "Self-Financing Educational Investments: The Quality Imperative in Developing Countries." RCER Working Papers 319, Center for Economic Research. Rochester, NY: University of Rochester.
Hanushek, E. A. 1995. "Interpreting Recent Research on Schooling in Developing Countries." World Bank Research Observer 10: 227-46.
Hanushek, E. A. and L. Woessmann. 2007. "The Role of Education Quality for Economic Growth." Policy Research Working Paper Series 4122. Washington, DC: World Bank.
Hanushek, E. A. and L. Woessmann. 2008. "The Role of Cognitive Skills in Economic Development." Journal of Economic Literature 46(3): 607-668.
He, F., L. Linden, and M. MacLeod. 2008. "How to Teach English in India: Testing the Relative Productivity of Instruction Methods within the Pratham English Language Education Program." Draft paper.
Imbens, G. and J. Wooldridge. 2009. "Recent Developments in the Econometrics of Program Evaluation." Journal of Economic Literature 47: 5-86.
Leuven, E., M. Lindahl, H. Oosterbeek, and D. Webbink. 2009. "The Effect of Extra Funding for Disadvantaged Pupils on Achievement." The Review of Economics and Statistics 89: 721-36.
Linden, L. L. 2008. "Complement or Substitute? The Effect of Technology on Student Achievement in India." Working Paper.
Lockheed, M. E. and A. M. Verspoor. 1991. Improving Primary Education in Developing Countries. New York: Oxford University Press for the World Bank.
Machin, S., S. McNally, and O. Silva. 2007. "New Technology in Schools: Is There a Payoff?" Economic Journal 117: 1145-67.
Malamud, O. and C. Pop-Eleches. 2011. "Home Computer Use and the Development of Human Capital." The Quarterly Journal of Economics 126(2): 987-1027.
Muralidharan, K. and V. Sundararaman. 2007. "Teacher Incentives in Developing Countries: Experimental Evidence from India." Mimeographed document. Washington, DC: World Bank.
Rodríguez, C., F. Sánchez, and J. Márquez. 2011. "Impacto del Programa Computadores para Educar en Deserción Estudiantil, Logro Escolar e Ingreso a la Educación Superior." Documento CEDE No. 15. Bogotá, Colombia: Universidad de los Andes.
Rouse, C. E., A. B. Krueger, and L. Markman. 2004. "Putting Computerized Instruction to the Test: A Randomized Evaluation of a 'Scientifically-Based' Reading Program." Economics of Education Review 23: 323-38.
Santiago, A., et al. 2010. "Experimental Assessment of the Program 'One Laptop per Child' in Peru." Briefly Noted No. 5. IDB Education. Washington, DC: Inter-American Development Bank.
Vigdor, J. and H. Ladd. 2010. "Scaling the Digital Divide: Home Computer Technology and Student Achievement." NBER Working Paper 16078. Cambridge, MA: National Bureau of Economic Research.
TABLES
Table 1: Summary Statistics of CPE

Year | Computers donated to date* | Accumulated % of public schools served by CPE | Accumulated % of students attending public schools served by CPE
2001 | 1,904 | 0.56% | 1.17%
2002 | 8,819 | 2.06% | 5.13%
2003 | 20,281 | 4.16% | 9.63%
2004 | 33,392 | 6.36% | 10.41%
2005 | 45,818 | 11.02% | 17.57%
2006 | 61,391 | 14.50% | 23.35%
2007 | 83,989 | 21.83% | 32.35%
2008 | 97,714 | 29.58% | 45.74%
2009 | 99,193 | 38.61% | 59.56%
2010 | 163,711 | 49.77% | 65.31%

Source: SIMCE, 166 Resolution, CPE.
* We take into account the number of computers replaced every year by CPE.
Table 2. Descriptive Statistics
Means with standard deviations in parentheses. The first three columns compare the complete sample (treated vs. control schools); the last three compare, within the treated sample, schools benefited for between one and four years vs. between five and nine years.

Variable | Treated | Controls | Norm. Diff. | CPE 1-4 years | CPE 5-9 years | Norm. Diff.
No. Schools | 4,239 | 1,597 | | 2,367 | 1,872 |
No. Students | 2,554,823 | 1,205,438 | | 1,860,977 | 693,846 |
Student's age | 17.87 (3.70) | 17.76 (3.76) | 0.02 | 17.80 (3.69) | 18.06 (3.73) | -0.05
Men | 0.45 (0.50) | 0.45 (0.50) | 0.01 | 0.45 (0.50) | 0.46 (0.50) | -0.02
Mother's education:
  Primary | 0.53 (0.50) | 0.47 (0.50) | 0.08 | 0.50 (0.50) | 0.60 (0.49) | -0.15
  Secondary | 0.37 (0.48) | 0.40 (0.49) | -0.05 | 0.39 (0.49) | 0.32 (0.47) | 0.10
  Non-professional | 0.07 (0.25) | 0.09 (0.28) | -0.06 | 0.07 (0.26) | 0.04 (0.21) | 0.09
  Professional | 0.04 (0.20) | 0.04 (0.20) | 0.00 | 0.04 (0.21) | 0.03 (0.18) | 0.04
Socio-economic stratum:
  1 | 0.39 (0.49) | 0.20 (0.40) | 0.30 | 0.33 (0.47) | 0.53 (0.50) | -0.28
  2 | 0.48 (0.50) | 0.53 (0.50) | -0.08 | 0.52 (0.50) | 0.37 (0.48) | 0.21
  3 | 0.12 (0.33) | 0.26 (0.44) | -0.25 | 0.14 (0.34) | 0.09 (0.29) | 0.10
  4 | 0.01 (0.07) | 0.01 (0.10) | -0.04 | 0.01 (0.08) | 0.00 (0.06) | 0.02

School characteristics* (school-level standardized SABER 11 score; number of SABER 11 test takers in the school) and municipality characteristics* (number of schools with 11th grade in the municipality) are also reported for each group.
* Variables measured in 2000.
Table 3. Impact of CPE on Student Achievement - Results under Unconfoundedness
OLS Regressions - Dependent Variable is SABER 11 Standardized Scores

Number of years served by CPE | (1) | (2) | (3)
One year | 0.031*** (0.005) | 0.034*** (0.005) | 0.038*** (0.006)
Two years | 0.039*** (0.007) | 0.043*** (0.007) | 0.054*** (0.008)
Three years | 0.060*** (0.008) | 0.063*** (0.008) | 0.075*** (0.010)
Four years | 0.057*** (0.010) | 0.057*** (0.010) | 0.069*** (0.013)
Five years | 0.084*** (0.012) | 0.079*** (0.011) | 0.092*** (0.014)
Six years | 0.076*** (0.013) | 0.076*** (0.012) | 0.091*** (0.016)
Seven years | 0.069*** (0.016) | 0.067*** (0.015) | 0.092*** (0.019)
Eight years | 0.084*** (0.017) | 0.083*** (0.016) | 0.117*** (0.021)
Nine years | 0.092*** (0.021) | 0.093*** (0.019) | 0.132*** (0.025)
Year FE | Yes | Yes | Yes
School FE | Yes | Yes | Yes
Student controls | No | Yes | Yes
Only CPE schools | No | No | Yes
Number of Students | 3,431,087 | 3,431,087 | 2,322,715
Number of schools | 5,755 | 5,755 | 4,160

Notes: Student controls include gender, age, mother's level of education, and socio-economic stratum. Models (1) and (2) use all available information. Model (3) uses only CPE schools, where the controls are schools that will eventually be served by CPE. Clustered standard errors by school in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%.
Table 4: Impact of CPE on Student Achievement - Results under Selection on Unobservables
Second Stage IV Regressions - Dependent Variable is SABER 11 Standardized Scores

Number of years served by CPE | (1) | (2) | (3) | (4)
One year | 0.028*** (0.006) | 0.038*** (0.008) | 0.041*** (0.006) | 0.048*** (0.006)
Two years | 0.032*** (0.007) | 0.052*** (0.011) | 0.052*** (0.007) | 0.066*** (0.007)
Three years | 0.051*** (0.009) | 0.075*** (0.014) | 0.078*** (0.009) | 0.099*** (0.009)
Four years | 0.036*** (0.011) | 0.063*** (0.016) | 0.076*** (0.011) | 0.104*** (0.010)
Five years | 0.051*** (0.012) | 0.083*** (0.019) | 0.100*** (0.012) | 0.136*** (0.012)
Six years | 0.036*** (0.013) | 0.074*** (0.021) | 0.091*** (0.013) | 0.130*** (0.013)
Seven years | 0.028* (0.017) | 0.075*** (0.026) | 0.088*** (0.017) | 0.134*** (0.017)
Eight years | 0.039** (0.018) | 0.093*** (0.028) | 0.105*** (0.017) | 0.156*** (0.017)
Nine years | 0.058*** (0.020) | 0.118*** (0.031) | 0.127*** (0.020) | 0.180*** (0.020)
Year effects | Yes | Yes | Yes | Yes
School effects | Yes | Yes | Yes | Yes
Student controls | Yes | Yes | Yes | Yes
Observations | 3,431,080 | 2,322,711 | 3,412,471 | 3,394,001
Number of schools | 5,748 | 4,156 | 5,748 | 5,748

Notes: Regressions include all controls and FE from model 2 in Table 3. The instruments are the proportions of schools served by CPE in the neighbour municipalities at year t-1 for the different durations (first-stage results available in Table A1 in the Appendix). Model (1) uses all CPE schools and, as controls, all schools without CPE; Model (2) uses only CPE schools, where the controls are schools that will eventually be served by CPE; Models (3) and (4) use all CPE schools and, as controls, all schools without CPE, but take into account the effect of CPE on student drop-out rates. Clustered standard errors by school in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%.
Table 5. Robustness Checks

Panel A: Placebo 1996 - 1999
Number of years served by CPE | (1) OLS | (2) IV
One year | -0.005 (0.013) | -0.018 (0.015)
Two years | -0.012 (0.018) | -0.005 (0.021)
Three years | 0.009 (0.023) | -0.013 (0.025)
Four years | 0.011 (0.027) | 0.000 (0.029)

Panel B: RCT Barrera & Linden
 | (3) OLS | (4) OLS | (5) OLS
Original treatment schools in the RCT | 0.177*** (0.023) | 0.133*** (0.022) | 0.133* (0.070)

 | (1) | (2) | (3) | (4) | (5)
Student controls | Yes | Yes | No | Yes | Yes
Year FE | Yes | Yes | No | Yes | Yes
Clustered standard errors by school | Yes | Yes | No | No | Yes
School FE | Yes | Yes | No | No | No
Observations | 656,351 | 656,349 | 4,758 | 4,758 | 4,758
Number of schools | 3,101 | 3,099 | 86 | 86 | 86

Student controls: gender, age, mother's level of education. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
MAPS
MAPS
Map I. Percentage of benefited schools in 2001
Map II. Percentage of benefited schools in 2005
Map III. Percentage of benefited schools in 2008
APPENDIX
Table A1: First Stage results using as controls all students attending public schools not served by CPE
Dependent variables: Number of years served by the CPE program. Each row is an instrument: the proportion of CPE-treated schools in neighbour municipalities at t-1 for the stated duration or more.

Instrument | One year | Two years | Three years | Four years | Five years | Six years | Seven years | Eight years | Nine years
One year or more | 3.530*** (0.035) | -0.011*** (0.004) | -0.020*** (0.004) | 0.013*** (0.002) | 0.013*** (0.002) | -0.022*** (0.003) | -0.055*** (0.004) | -0.058*** (0.006) | -0.003*** (0.001)
Two years or more | 0.010* (0.005) | 3.833*** (0.029) | 0.022*** (0.005) | 0.047*** (0.003) | 0.037*** (0.003) | 0.009*** (0.002) | -0.033*** (0.003) | -0.067*** (0.008) | -0.007*** (0.002)
Three years or more | -0.130*** (0.010) | -0.066*** (0.010) | 4.824*** (0.055) | 0.071*** (0.005) | 0.054*** (0.004) | 0.020*** (0.003) | -0.025*** (0.003) | -0.075*** (0.010) | -0.011*** (0.002)
Four years or more | -0.459*** (0.020) | -0.447*** (0.025) | -0.419*** (0.027) | 7.501*** (0.082) | 0.073*** (0.006) | 0.019*** (0.006) | -0.061*** (0.007) | -0.119*** (0.015) | -0.017*** (0.004)
Five years or more | -0.824*** (0.027) | -0.829*** (0.036) | -0.914*** (0.040) | -0.421*** (0.031) | 10.410*** (0.125) | 0.015 (0.012) | -0.092*** (0.012) | -0.198*** (0.024) | -0.023*** (0.005)
Six years or more | -1.090*** (0.038) | -1.186*** (0.050) | -1.346*** (0.062) | -0.888*** (0.050) | -0.463*** (0.044) | 13.652*** (0.206) | -0.104*** (0.021) | -0.258*** (0.033) | -0.035*** (0.007)
Seven years or more | -1.518*** (0.063) | -1.660*** (0.084) | -2.317*** (0.096) | -1.748*** (0.098) | -1.533*** (0.093) | -1.385*** (0.109) | 20.942*** (0.426) | -0.416*** (0.063) | -0.056*** (0.012)
Eight years or more | -2.600*** (0.141) | -2.082*** (0.192) | -3.304*** (0.187) | -3.485*** (0.209) | -3.152*** (0.225) | -3.544*** (0.268) | -2.993*** (0.317) | 35.584*** (1.477) | -0.086*** (0.025)
Nine years or more | -11.444*** (0.444) | -14.960*** (0.435) | -14.117*** (0.594) | -7.674*** (0.716) | -4.548*** (0.601) | -7.387*** (0.554) | -18.713*** (0.733) | -20.133*** (1.074) | 131.071*** (2.834)
F - Excluded Instruments (9, 5747) | 2006.46 | 3543.74 | 3732.42 | 3028.77 | 2197.28 | 918.93 | 539.1 | 531.28 | 562.32
Observations | 3,431,087 | 3,431,087 | 3,431,087 | 3,431,087 | 3,431,087 | 3,431,087 | 3,431,087 | 3,431,087 | 3,431,087

Notes: All regressions include all controls and FE from model 1 in Table 4. Clustered standard errors by school in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%.
Table A2: First Stage results using as controls only students in schools eventually treated by CPE
Dependent variables: Number of years served by the CPE program. Each row is an instrument: the proportion of CPE-treated schools in neighbour municipalities at t-1 for the stated duration or more.

Instrument | One year | Two years | Three years | Four years | Five years | Six years | Seven years | Eight years | Nine years
One year or more | 3.739*** (0.038) | 0.206*** (0.013) | 0.100*** (0.016) | 0.177*** (0.009) | 0.133*** (0.008) | -0.003 (0.008) | -0.160*** (0.012) | -0.256*** (0.025) | -0.025*** (0.005)
Two years or more | 0.371*** (0.016) | 4.261*** (0.044) | 0.268*** (0.028) | 0.353*** (0.017) | 0.260*** (0.014) | 0.069*** (0.012) | -0.184*** (0.017) | -0.403*** (0.042) | -0.048*** (0.010)
Three years or more | 0.393*** (0.028) | 0.580*** (0.033) | 5.203*** (0.085) | 0.528*** (0.025) | 0.386*** (0.021) | 0.115*** (0.017) | -0.237*** (0.023) | -0.560*** (0.060) | -0.073*** (0.015)
Four years or more | 0.328*** (0.048) | 0.518*** (0.063) | 0.145** (0.065) | 8.186*** (0.097) | 0.568*** (0.030) | 0.157*** (0.026) | -0.387*** (0.035) | -0.849*** (0.090) | -0.108*** (0.023)
Five years or more | 0.316*** (0.066) | 0.544*** (0.090) | -0.114 (0.098) | 0.560*** (0.055) | 11.121*** (0.144) | 0.206*** (0.038) | -0.568*** (0.053) | -1.256*** (0.131) | -0.154*** (0.033)
Six years or more | 0.579*** (0.096) | 0.849*** (0.128) | -0.154 (0.144) | 0.562*** (0.089) | 0.593*** (0.069) | 13.950*** (0.228) | -0.792*** (0.078) | -1.802*** (0.189) | -0.228*** (0.048)
Seven years or more | 1.203*** (0.159) | 1.667*** (0.197) | -0.369 (0.226) | 0.613*** (0.153) | 0.193 (0.135) | -0.870*** (0.141) | 19.859*** (0.432) | -2.921*** (0.310) | -0.370*** (0.078)
Eight years or more | 2.293*** (0.366) | 4.192*** (0.370) | 0.423 (0.375) | 0.899*** (0.248) | 0.040 (0.243) | -2.512*** (0.327) | -4.841*** (0.479) | 31.097*** (1.668) | -0.668*** (0.148)
Nine years or more | 8.321*** (1.166) | 16.859*** (1.258) | 5.509*** (1.229) | -24.520*** (0.986) | 3.352*** (0.802) | -4.226*** (0.698) | 10.452*** (0.737) | -34.755*** (2.241) | 129.109*** (3.099)
F (9, 4155) | 2140 | 4457.28 | 4719.45 | 525.71 | 3225.25 | 2605.98 | 1009.86 | 408.14 | 539.64
Observations | 2,322,715 | 2,322,715 | 2,322,715 | 2,322,715 | 2,322,715 | 2,322,715 | 2,322,715 | 2,322,715 | 2,322,715

Notes: All regressions include all controls and FE from model 3 in Table 4. Clustered standard errors by school in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%.