Proposal for a Randomized Evaluation of Laptops in Nigerian Schools

Faisal Anwar and Lauren Velazquez
A-165 (Economics of Education) Final Paper
May 9, 2008

Contents

EXECUTIVE SUMMARY
1. Introduction and Problem Statement
2. The Nigerian Context
   2.1 Education in Nigeria
   2.2 Education Finance and Governance
   2.3 Nigerian Access to ICT
3. Our Proposed Intervention
   3.1 About the XO Laptop from OLPC
   3.2 Organizational Model for the Intervention
4. The Promise of ICTs in Nigerian Education and Society
5. Literature Review of Other Educational Interventions
   5.1 Technology-Based Interventions
   5.2 Other Interventions that Demonstrate Best Practices for Evaluation
   5.3 Key Results from Impact Evaluation
6. Why a Large-Scale Evaluation of Laptops Makes Sense
   6.1 Setting the Context for Cost-Benefit Analysis of the Laptop
   6.2 Rationale for a Large-Scale Study of the XO Laptop
7. Experimental Design
   7.1 Fundamental Research Questions
   7.2 Outcomes of Interest
   7.3 Defining Our Target and Sample Populations
   7.4 The Different Treatment Groups
   7.5 Randomization Strategy
   7.6 Important Operations Details
   7.7 Data Analysis
   7.8 Extensions
8. Conclusion
Appendix A: Evaluation Timeline for Single-Year Study
   Summer 2008 (Prior to School Year)
   School Year 2008-2009 (9th Graders)
   Summer 2009
Bibliography
EXECUTIVE SUMMARY

The following proposal outlines an evaluation model to assess the impact of giving laptops and internet access to secondary school children in Nigeria.

Background

Nigeria is currently one of the poorest nations in the world. According to the non-profit data organization "Gapminder,"1 the average per capita income in Nigeria is only $1.33 a day. As part of a global effort to eradicate poverty in nations like Nigeria, the UN has created targeted priorities for government intervention. These "millennium development goals" ask nations to meet a broad range of goals that relate to poverty and economic development. The millennium development project specifically discusses the importance of enhancing educational attainment and access to information and communication technology (ICT) in developing nations.2

Nigeria is currently struggling with both of these issues. Education in Nigeria today is woefully inadequate: less than 70% of the adult population is literate and less than 40% of the population finishes the 9th grade. In addition to having a weak education system, Nigeria has also been unable to provide widespread access to ICT: less than 0.6% of the population has computers. Our proposal evaluates the potential benefits of government-funded procurement of XO $100 laptops in secondary schools to address these concerns. We have created an evaluation model that will assess the impacts of this intervention on student achievement, school attendance, teacher attendance and ICT literacy. In addition to looking at these outcomes, we can track the future wages of program participants to understand the larger economic ramifications of this intervention.

Proposed Intervention

To achieve gains in schooling and ICT access, we propose a program that will place laptops in secondary schools throughout Nigeria. Some laptops will come with internet access, others will come with educational software, while others will be provided to students as is. Students are free to use the laptops as they see fit, and teachers can choose to integrate the laptops into their teaching if they desire.

1 www.gapminder.org
2 http://www.un.org/millenniumgoals/#

Promise of ICT

Employment of computers and other ICT in schools is based on the premise that access to technology and global communication facilitate constructivist learning.
Constructivist learning is based on the premise that student-directed learning is the best way to achieve comprehension. Students can use the built-in software on the laptops to complete class assignments in innovative ways. They can also tap into the informational power of the Internet to access knowledge beyond their immediate localities. Thus, laptops provide an opportunity to complement classroom instruction with innovative student use of a new technology.

A Randomized Evaluation for Laptops in Schools

In order to understand the impact of laptops on schooling and ICT literacy, we propose establishing a randomized trial that will track cohorts of students within schools. Given the high stakes associated with any scalable laptop program, we believe that investing in a thorough and relatively costly evaluation process is warranted. Furthermore, making such an investment in a select few countries (like Nigeria) can be done in a way that provides external validity to the broader developing world.

We will choose a random sample of high schools from across Nigeria. From this random sample, 6/7 of the schools will receive some form of treatment: laptops configured with varying capabilities and distributed at different ratios of laptops per student. Ultimately, we hope that our randomized design will help policymakers to answer some fundamental questions about the laptop program:

- Do laptops in schools improve learning outcomes for students? Which outcomes (attendance, literacy levels, ICT literacy, project-based and self-directed learning capacity) do they help to improve the most?
- In terms of achieving desirable learning outcomes, can laptops be shared among a group of 5 students, or must each child receive an individual unit?
- How essential is internet access to improved education through laptops? Should policymakers ensure that any laptop program starts with internet access, or are the educational gains through internet connectivity insignificant?

In addition to gathering and analyzing this quantitative data, we will survey members of all sample groups to assess other measures like ICT literacy, internet usage, satisfaction, time spent on work, etc. By using a combination of test and survey data, we can measure a broad range of education outcomes in order to assess the true revolutionary potential of laptops in schools.

Finally, we will outline several additional extensions to our experimental design that can help address larger issues that may be of interest to policymakers. One important extension is a proposed 10-year study that would try to link laptop usage to employment outcomes. In a sense, this expanded study would measure the final outcome that any educational intervention seeks to identify: the additional value, in labor market outcomes, of schooling with the intervention.

1. Introduction and Problem Statement

Our proposed evaluation is intended to determine the impact of laptop computers in improving educational outcomes. There are two fundamental problems that the laptop-driven intervention is meant to address. First, we shall show that the education system in Nigeria is in serious need of improvement, especially at the secondary school level. Second, there is an acute need to develop competencies in information and communications technologies (ICTs) among Nigerian students, since this may be a critical path to opening up economic opportunity for many of this nation's poor.
We shall review research that shows the promise of computers in improving educational outcomes, including technological literacy. This background will set the stage for motivating our proposed intervention. The latter half of this proposal is devoted to detailing a randomized evaluation that measures educational outcomes as defined by test scores, attendance rates and other variables. This study will allow policymakers to decide whether a laptop-based intervention is indeed worthy of investment in Nigerian secondary schools.

2. The Nigerian Context

Nigeria is currently not on target to fully meet its millennium development goals. One of the UN's priorities for promoting global partnerships is to have nations work "in cooperation with the private sector, make available the benefits of new technologies— especially information and communications technologies". Nigeria is one of the 25 poorest nations in the world today, and has the 3rd largest population of people living in poverty (defined as living below $1 a day), behind only India and China. As of 2004, 54% of Nigeria's population lived below this poverty threshold, and the national per capita GDP is only $393 USD. To address poverty and promote growth, the UN has advocated for the integration of technology into developing societies and for improved education.

2.1 Education in Nigeria

Historically, Nigeria has had three separate education systems: a system for indigenous education, a system for religious Islamic education and a system for secular "European style" education.3 Rural communities often relied on indigenous systems that stressed agrarian and trade education. As formal education increasingly became a necessity, Nigeria grew its investment in education. Eventually, schooling became the largest social investment made by the government. By the 1990s, more than 17 million children attended primary or secondary school in Nigeria.4 With respect to teacher salaries, Nigeria spends significantly less than its neighbors, but it has dramatically improved pay over the past decade.

While Nigeria has stressed education in recent years, levels of school completion and school attendance remain worrisome. According to the World Bank, less than 40% of the Nigerian population stayed in school until grade 9.5 A strong indicator of Nigeria's education troubles is the adult literacy rate: less than 70% of the adult population is literate today in Nigeria.6

3 http://www.onlinenigeria.com/education/index.asp
4 Ibid.
5 http://www.worldbank.org/research/projects/edattain/edattain.htm
6 http://www.unicef.org/infobycountry/nigeria.html

2.2 Education Finance and Governance

The funding (and hence governance) of public education in Nigeria varies according to level. Primary education has received the bulk of its support from local governments, especially in recent years as the federal government has pulled back its subsidies and local governments have been asked to bear their constitutional responsibility for basic education. Our intervention and evaluation focuses on secondary education, which is largely funded by state governments. The federal government also provides some support, but local governments, in general, do not contribute sizeable funds to secondary schools in Nigeria (Hinchliffe, 2002). Figure 1 breaks down education funding for secondary schools in Nigeria.

2.3 Nigerian Access to ICT

As a developing nation, Nigeria has had limited access to technology. As of 2004, slightly over 1% of Nigerians had access to the internet.
Nigeria ranks near the bottom quartile of nations with regard to access to information and communication technology. While the national population exceeds 140 million, there are only an estimated 860,000 computers in the country. Nigeria is beginning to distribute computers, but there is no guarantee that, once individuals have access to computer technology, they will be able to utilize it and integrate it into their communities in ways that improve the efficiency of Nigerian society. Research beyond our proposed evaluation is needed to determine how capable Nigerians are of utilizing ICTs and whether specific educational interventions can address this need.

3. Our Proposed Intervention

The intervention we will evaluate is to supply schoolchildren in Nigeria with laptops that are preloaded with educational software. A core purpose of the evaluation is to test several different hardware and software features that will determine the final configuration for any intervention that is rolled out at a large scale. The one fixed element of the final intervention (and of all treatment groups in the evaluation) is the XO laptop supplied by the One Laptop Per Child (OLPC) foundation.

3.1 About the XO Laptop from OLPC

XO laptops have been designed specifically for schoolchildren in developing countries and therefore have many features that will be useful in the Nigerian context. Each laptop is capable of networking with other computers and the wider internet through a mesh network mechanism. Such connectivity is unique because it allows users to communicate even in the absence of a wider internet infrastructure. Such a capability will be essential in places like rural Nigeria. The fact that each laptop can be manually charged in the absence of an electrical outlet is also important for underdeveloped regions of Nigeria (OLPC, 2008b).

Beyond the specific features of the laptop, the philosophy behind its development ensures that the laptop is optimized for the best educational outcomes. While the OLPC project is associated with introducing computers in schools, an essential part of the organization's mission is to facilitate student learning beyond the school walls: "We believe the emerging world must leverage this resource by tapping into the children's innate capacities to learn, share, and create on their own. Our answer to that challenge is the XO laptop, a children's machine designed for 'learning learning.'" (OLPC, 2008a)

It is clear that a mobile computing solution for ICT literacy allows children greater opportunity to learn on their own and to disseminate the benefits of educational technology to their families and local communities. Furthermore, such a technology allows students to test their creativity and intellectual abilities by applying the tools in the laptop to real problems in their communities. Our intervention will encourage schools to use the laptops beyond the school building, but we may limit students' ability to take laptops home with them based on logistical factors (especially during the evaluation stage).

3.2 Organizational Model for the Intervention

The basic business model for our intervention is to work with OLPC to procure laptops, and then to define the program vision and manage implementation through a specialized non-profit agency.
This model is similar to the Open Learning Exchange project in Nepal, where a non-profit organization has created a curriculum program for Nepalese schools that is distributed through laptops purchased from OLPC (OLE, 2008). Figure 2 illustrates the arrangement we envision. The job of the evaluation team will be to work with the Laptops for Nigeria non-profit and the state and national governments of Nigeria to execute an evaluation that will test the viability of such an intervention. Since state governments provide the majority of secondary school funding in the public sector (Hinchliffe, 2002), it is essential that any evaluation and large-scale intervention involve them from the start in order to succeed.

Another important question about the intervention itself is what the actual laptop capabilities will be. As mentioned earlier, the configuration for any final program involving the XO laptop will depend on what our evaluation finds regarding the value of different configurations. Our evaluation is designed to test research questions that pertain to the efficacy of the following features in a large-scale laptop distribution program:

- Individual laptops for each student.
- Internet access for laptops in the school facility.
- Learning software for different subject areas.

With respect to software, there are millions of options and configurations available. In our evaluation, we will focus on a leading suite of math-enrichment software tools that will be bundled with the laptops, and we will test whether these have a significant impact on math scores. All laptops will be equipped with a basic suite of productivity software as well (word processing, calculators, web browsers, etc.).

4. The Promise of ICTs in Nigerian Education and Society

XO laptops hold the promise of improving the quality of education in Nigeria by widening access to education and learning, promoting retention and reducing inequalities between urban and rural localities. The goal of our evaluation is to determine whether this potential is a reality for secondary school students in Nigeria. Several sources in the literature support our hypothesis that new learning technologies, like the XO laptop provided by One Laptop Per Child (OLPC), will improve educational outcomes.

One of the most relevant lines of research for the XO laptop concerns constructivist learning. Constructivist learning is a pedagogy based on the idea that humans learn best through learner-directed exploration in which teachers act as facilitators. This model of learning was championed by the pioneering developmental psychologist Jean Piaget. Computers and laptops may hold unique potential to promote constructivist learning, since students can program according to their own interests and, once online, can seek out the information they desire when they desire it. XO laptops come equipped with standard software (such as the Scratch programming environment) that may facilitate constructivist learning. Several studies have found that constructivist learning approaches do promote higher achievement in math and reading literacy (Law, Chanand, & Sachs, 2008; White-Clark, DiCarlo, & Gilghriest, 2008; Zhang, 2008).

5. Literature Review of Other Educational Interventions

Beyond the research on constructivist learning, there is also significant research on a variety of other educational interventions.
This literature includes technological innovations (such as flip charts and specially designed electronic devices) as well as policy innovations (such as contract teachers). Below, we describe some of the most prominent studies and then list the key findings in a table. Ultimately, policymakers will need to compare the relative costs and benefits found in the table at the end of this section with the costs and benefits of an XO laptop program. Comparing both the magnitude and breadth of improvement in educational outcomes across these interventions should guide decision makers as to what approach is best for their schools.

5.1 Technology-Based Interventions

Below are some studies that evaluate how technology interventions in classrooms affect student achievement and other educational outcomes (school attendance, etc.). All of the studies below evaluate programs in developing nations much like Nigeria. While the numbers may not translate exactly to the outcomes and costs that would occur in Nigeria, these studies can provide insight into which programs have proven to be the most effective and efficient in developing regions. When evaluating OLPC, we will need to compare the anticipated change in measured outcomes against the cost per child to assess relative cost-effectiveness.

5.1.1 Pic Talk

In India, students were given access to simple machines that pronounced words and displayed pictures to match the words. This tool was a supplement to language learning programs in which teachers were often weak in English. These machines were associated with positive gains in student test scores (.29 SD). When calculated on a per-student basis, this intervention cost approximately $1.31 per SD gain in test scores (Muralidharan, 2008). While this program is extremely cheap, its gains are significantly smaller than those of other programs, like the Computer Assisted Learning (CAL) program discussed below.

5.1.2 Flip Charts

In an evaluation of classroom inputs in Kenya, researchers assessed the impact that adding simple flip charts to classrooms had on student achievement. Flip charts, like textbooks, combine visual aids and information as a teaching tool. Flip charts were seen as a desirable intervention because they are cheaper than textbooks (Glewwe, Kremer, Moulin, & Zitzewitz, 2000). In fact, providing wall charts to 4th grade classrooms would cost approximately $80 per school, while providing textbooks to each child in an average 4th grade school would cost $800. In this study, 89 schools were randomly chosen to receive flip charts that had drawings and explanations for specific subjects. Another 89 schools were used as controls. The researchers compared test score data of schools that did and did not receive the flip charts and teacher training. The estimated effects of these tools as an input for student learning range from .05 to .20 SD (depending on whether a retrospective OLS or a DID model is used). When a prospective analysis was used (to control for omitted variable bias), the findings were not statistically significant. Even at only $80 per school, it would not be cost-effective to spend money on a program that has no significant effect.

5.1.3 Laptop Usage in Israel

Unlike the other studies referenced here, Israel's "Tomorrow 98 Programme" is the only program that occurred in a developed nation. While Israel is not a developing nation, its experience is still relevant because it had already invested heavily in computer access in schools.
The study done in Israel provides useful insight into different types of computer interventions and their relative effectiveness. In this program, treatment schools received funding for computer training programs, specific software and additional hardware and computer upgrades (Angrist & Lavy, 2002). To receive funding, schools had to apply and demonstrate a preexisting computer program and a need to expand. Researchers drew a stratified sample of 200 schools, 122 of which were chosen to receive treatment. In treatment schools, one 4th grade class and one 8th grade class were chosen to take a math and language test. In addition to test score data, researchers gave a survey to teachers to assess their use of other technology, like overhead projectors, in the classroom. Each computer cost $3,000 (computer, training, hardware and software). This study found a negative correlation between the intervention and gains in math scores. Other subjects had no statistically significant gains and no spillovers were detected. Given the high cost of this intervention and the negative effects, the researchers concluded that the intervention was not cost-effective.

5.1.4 Computer Assisted Learning with Pratham in India

In the Indian city of Vadodara, researchers evaluated the impact that a computer assisted learning program, run by the NGO Pratham, had on student achievement. In this study, treatment students were given 2 hours of shared computer time weekly, during which they played math computer games. 100 schools in the city received 4 computers as part of a government program. Two years after schools received these computers, Pratham randomly chose half of the primary schools that had received computers to participate in the computer assisted learning (CAL) program. CAL schools were chosen from a stratified sample that accounted for a school's grade level, gender, language of instruction and Balsakhi treatment status (Banerjee, Cole, Duflo, & Linden, 2005). CAL treatment schools received specific teacher training on computers and were given the specially designed math software, and the games adjusted to the individual student's ability and progress. Control schools had computers but received no training or specialized software. In year 2 of the study, the assignment switched so that schools that did not receive treatment in year 1 did. Evaluations of this program reveal that when the computer games were introduced, math scores increased by .35 SD in year 1 and .47 SD in year 2 (Banerjee et al., 2005). The intervention was shown to have no impact on school attendance or dropout rates. The CAL program cost approximately $15.18 per student (divided by the first-year gain of .35 SD, this comes to roughly $43 per SD).

5.2 Other Interventions that Demonstrate Best Practices for Evaluation

5.2.1 De-worming

Hookworm infections affect more than 740 million people worldwide and can lead to anemia, gastrointestinal diseases and difficulties in childbirth (Hotez et al., 2004). School-aged children are most vulnerable to hookworm infection. Researchers have long hypothesized that children do not attend school or do not perform well in school because they are suffering from the symptoms of hookworm infection. One of the most notable studies evaluating this question occurred in Kenya in 1998. In this study, the school-related effects of deworming medication were studied for more than 30,000 school-aged children.
Treatment was assigned at the school level to minimize externalities (since hookworm is so easily transmitted, deworming one student in a school is likely to reduce the rate of transmission to the other children in that school). In this study, absenteeism fell by 25%, but there were no notable gains in test score achievement (Miguel & Kremer, 2004).

5.2.2 Contract Teachers

Several studies have been conducted to assess the impact on student achievement of using contract teachers (either college educated or not) in classrooms. The rationale behind these evaluations is that teacher salaries are extremely costly and schools often suffer from overcrowded classrooms and teacher absenteeism. By hiring non-certified teachers, districts hope to reduce class size, provide additional help to children and motivate teachers to attend school. The results of these studies vary and are summarized in Table 1.

5.3 Key Results from Impact Evaluation

Table 1 below provides a condensed summary of many of the prominent evaluations discussed above.

Table 1: Summary of prominent educational impact evaluations.

Intervention | Region | Cost | Effect | Effect Size
De-worming (hookworm treatment) | Kenya | $3.50 per child | Increased health and school attendance | .076 (females) and .088 (males) increase in years of schooling
Contract teachers | India | | Gains in math scores in both evaluation years, with smaller gains in literacy | .284 mean increase in scores
Pic Talk | India | $1.31 per SD | Gains in test scores | .29 SD
Flip charts | Kenya | $81 per school | No notable gains in test scores | .05-.2 SD (not significant in prospective analysis)
Subsidized computer software and hardware upgrades | Israel | $3,000 per computer | Negative effect on math scores; no significant gains in other subjects |
Computer Assisted Learning | India | $43 per SD | Increased math scores; no increase in attendance | .35 SD

This table provides some context in which to assess the impact of our own intervention in Nigeria. We acknowledge that results may not be directly comparable because of different national contexts and research designs, but we feel it is important to use these data as a starting point for any policy discussion.

Up to this point, we have stated the basic educational problem that our XO laptop intervention seeks to address. We have also established why there is strong reason to believe that this intervention will have a positive effect on many different educational outcomes. We have reviewed related literature and provided a context within which cost-benefit analysis for the laptop intervention can occur. We can now turn our attention to describing the evaluation that will help determine the actual value of our proposed intervention. In the next section, we provide an argument as to why a sizable and potentially costly evaluation is warranted for the type of intervention proposed. We then detail the key features of our impact evaluation.

6. Why a Large-Scale Evaluation of Laptops Makes Sense

Before describing in detail the implementation of our evaluation, it is important to step back and explore the high-level goals that such an evaluation would serve. In particular, we would like to describe the balance we are setting between the costs of an actual rollout of the XO laptop and the costs of an evaluation that studies the benefits of a laptop-based program.

6.1 Setting the Context for Cost-Benefit Analysis of the Laptop

In the developing nation context, the cost of any program involving computers will be quite significant. In sub-Saharan Africa, it is estimated that per-pupil spending on education is less than $40 per year (Brown, 2005).
Limited data are available on education spending in Nigeria, partly because reporting of costs and enrollment figures is unreliable across many of the states. However, we can make some basic estimates of spending using data provided by Hinchliffe (2002). Annual per-pupil spending at the secondary level in Nigeria ranges from about $65 in public schools to approximately $210 in private schools.7 Households in Nigeria spend around $30 yearly to supply materials for children attending secondary school in government institutions.

7 Per-pupil spending at public secondary schools was reported to be N 3,080, although there was more than a doubling of spending across the board in education because of higher teacher salaries by 2002. We thus used a conservative estimate of spending of N 7,500 in 2002 and then converted to U.S. dollars. Per-pupil spending at private schools also varied by state, but was roughly N 24,000, which we likewise converted to U.S. dollars.

Looking at the current levels of expenditure on secondary education, we can see why any laptop initiative will need to be justified by exceptional results. Even an arrangement that provides one laptop for every 5 children and no internet connection will cost at least $20 per child, not including maintenance and administrative costs. This would increase household spending on school materials by roughly two-thirds if families are asked to bear the cost of the laptops. If, instead, the government must bear the cost as part of its larger funding for public schools, then this will represent at least a 30% increase in per-pupil spending at the secondary level. Of course, the $20 per pupil figure is a very optimistic scenario, since maintenance and administrative costs have yet to be accounted for. Not only are the per-pupil costs of the laptop very high, but many current arrangements for introducing the XO laptop have involved extremely large orders to bring the cost of each individual laptop into the realm of affordability. A recent scheme in Nigeria called for the purchase of 1 million XO laptops, but it has stalled due to politics and questions about whether such a large investment is warranted (IRIN, 2007).
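To make the comparison explicit, the short sketch below simply restates the arithmetic above using the figures cited in this section (the $100 unit price, the one-laptop-per-5-children sharing ratio and the Hinchliffe-based spending estimates). It is illustrative only and ignores maintenance, administration and replacement costs.

```python
# Illustrative per-pupil cost comparison using the figures cited in section 6.1.
# All amounts are approximate annual USD figures taken from the text above.
laptop_unit_cost = 100           # XO laptop unit price
children_per_laptop = 5          # most optimistic sharing ratio considered
public_per_pupil_spending = 65   # public secondary schools (Hinchliffe, 2002)
household_materials_spending = 30

per_pupil_laptop_cost = laptop_unit_cost / children_per_laptop        # $20 per child
print(per_pupil_laptop_cost / public_per_pupil_spending)    # ~0.31, i.e. >30% of current public spending
print(per_pupil_laptop_cost / household_materials_spending) # ~0.67, i.e. roughly two-thirds of household spending
```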
6.2 Rationale for a Large-Scale Study of the XO Laptop

Policymakers have a responsibility to identify which interventions will be the best use of very limited resources in education. The previous section makes it clear that any meaningful rollout of a laptop program at the national level will be extremely expensive. It is also clear that, for such an initiative to be reasonable policy, there must be substantial and broad gains in educational outcomes (especially in comparison to the costs and benefits of other educational interventions, such as those listed earlier in Table 1). Our evaluation is intended to help policymakers make this essential cost-benefit analysis before they invest precious resources in a laptop-based program. Given the high stakes associated with any scalable laptop program, we believe that investing in a thorough and relatively costly evaluation process is warranted. Furthermore, making such an investment in a select few countries (like Nigeria) can be done in a way that provides external validity to the broader developing world. Such a model is similar to the literature on contract teachers, where a few high-quality studies in developing countries have helped to shed light on the efficacy of contract teachers across the developing world (Banerjee et al., 2005; Duflo, Dupas, & Kremer, 2007; Duthilleul, 2005; Muralidharan & Sundararaman, 2006). Our vision is therefore to have a fairly comprehensive and large study in Nigeria that will not only shed light on whether purchasing millions of laptops in that country is reasonable, but also on whether such a policy makes sense in other developing nations. If funding is available, we also suggest extending the evaluation to a ten-year longitudinal analysis of the laptop program so that long-term benefits in labor market employment and wages can be accounted for along with shorter-term educational outcomes.

7. Experimental Design

Given the background and intervention described earlier, we can now detail the plans for our evaluation. The goal of this evaluation is to determine whether the Nigerian national government, the Laptops for Nigeria NGO and Nigerian state governments should go ahead with a laptop program in their schools. This evaluation also seeks to inform these entities about what a final intervention should look like by determining which features are essential to meaningful learning outcomes through the laptop and which features are not worth the cost. As argued in the previous section, our plan is ambitious, since it is meant to inform extremely large and disruptive investments in Nigeria and in the broader developing world.

Our basic evaluation framework is a randomized control trial, since this technique is currently one of the most robust methods for identifying causal impacts on educational outcomes. We considered alternative approaches, such as difference-in-differences analysis or regression discontinuity designs, but felt that randomized trials are the most robust mechanism to determine causality. Given our willingness to spend a sizable amount of money on the evaluation itself, we believe that the potentially higher cost of randomized trials is not an issue. Under the randomized trial framework, we will need to test our research questions by randomly assigning units to different treatment and control groups and then measuring our outcomes across our sample.

The rest of this section provides details on our randomized study. We describe an ambitious study, but leave the most complex extensions out of the main experimental design. In the last part of this section, we list additional research questions that can be answered through some of these extensions and describe how our study would need to be modified to answer them.

7.1 Fundamental Research Questions

Our evaluation is designed to answer several research questions that are all intended to help policymakers decide whether laptops are a cost-effective mechanism for improving learning outcomes, especially in comparison to other educational interventions that have been studied in the developing country context (de-worming, contract teachers, voucher programs, etc.):

- Do laptops in schools improve learning outcomes for students? Which outcomes (attendance, literacy levels, ICT literacy, project-based and self-directed learning capacity) do they help to improve the most?
- In terms of achieving desirable learning outcomes, can laptops be shared among a group of 5 students, or must each child receive an individual unit?
- How essential is internet access to improved education through laptops? Should policymakers ensure that any laptop program starts with internet access, or are the educational gains through internet connectivity insignificant?

Through these research questions, we not only tackle the broader issue of laptop viability in developing nations' schools, but we also seek to understand which components of computing technology are most vital to improving student gains.

7.2 Outcomes of Interest

Table 2 below lists the different outcomes that our evaluation will attempt to measure. The central outcome for our study will be the score on a composite math and reading examination designed to measure basic and higher-level skills. We can then split this score across the reading and math sections to get our outcome values for the individual math and reading categories. We intend to hire consultants to design an appropriate test that will measure these outcomes, similar to the strategy employed by Muralidharan (2006).

Despite the focus on a single test for our core outcome, this evaluation seeks to examine the impact of laptops across a broad range of possible changes in learning. Potentially expensive and revolutionary interventions like computers in the classroom should be evaluated based on their impact across a wide spectrum of learning outcomes, not just a single test score. This is why we intend to closely track all of the outcomes listed in Table 2 and to present the whole breadth of our findings to policymakers at the conclusion of the evaluation.

Table 2: Outcomes of interest for OLPC evaluation.

Outcome | Measurement Tool | Description
Attendance | School records |
Reading Scores | Examination | Obtained by disaggregating a composite score on a comprehensive examination.
Math Scores | Examination | Obtained by disaggregating a composite score on a comprehensive examination.
ICT Literacy | Examination | Test will ask students to complete basic tasks (writing a letter, copying and moving files, etc.) on computers other than the types they have been provided.
Team-Oriented Learning | Random classroom visits and survey data | Visits and surveys will help to reinforce the final conclusion regarding the level of teamwork.
Project-Based Learning | Random classroom visits and survey data | Visits and surveys will help to reinforce the final conclusion regarding the amount of project-based learning.
Self-Directed Learning | Classroom visits and observations | Field workers will sit in on a class and then ask students if they can think of ways to extend the concepts being learned (how they would try to learn about something they have not yet been exposed to).
Disciplinary Incidents | Random classroom visits and survey data | Sample process variable that is also measured by field workers.
Homework Assignment Rate | Survey data | Sample process variable that is also measured by field workers.
Homework Completion Rate | Survey data | Sample process variable that is also measured by field workers.
Teacher Attendance | Survey data | Sample process variable that is also measured by field workers.

To help us ensure that our sample is balanced prior to running our evaluation and to increase the power of our experimental design, we intend to collect baseline data for test scores and attendance rates. If the funds are available, then we will collect baseline data for the survey-measured outcome variables as well. Appendix A gives a rough timeline of how we anticipate carrying out our measurements.
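As a rough illustration of how this disaggregation could be organized, the sketch below computes the composite score and the subject-level subscores from hypothetical item-level exam data; the file name and column names are assumptions for the example, not part of the actual instrument design.

```python
import pandas as pd

# Hypothetical item-level exam results: one row per student per question,
# with each question tagged by subject ('math' or 'reading') and whether
# the student answered it correctly (1) or not (0).
items = pd.read_csv("exam_items.csv")   # columns: student_id, subject, correct

# Composite score: overall fraction of questions answered correctly.
composite = items.groupby("student_id")["correct"].mean().rename("composite")

# Subject subscores: fraction correct within the math and reading sections.
by_subject = items.pivot_table(index="student_id", columns="subject",
                               values="correct", aggfunc="mean")

scores = by_subject.join(composite)   # one row per student: math, reading, composite
print(scores.head())
```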
7.3 Defining Our Target and Sample Populations

We are primarily concerned with answering our research questions for individual students. However, we are choosing to cluster at the school level for several important reasons. First, evaluating our outcomes at the school level helps to control for one common threat to experimental validity: spillover effects from students who do receive the laptop to those who do not. Second, we may not even be sure who is using the laptop in a student distribution scheme, since students may share laptops with their peers in school. If there were treatment and control students in the same school, then it would be quite likely that students in the treatment group would use their laptops less than assumed, and that students in the control group would manage to receive some of the treatment (access to the laptops). Furthermore, distributing laptops individually within schools would be logistically difficult and expensive. Classes would have some students who have the laptop and others who do not, so instruction could not be re-optimized to make the best use of the new technology. Finally, some students and teachers may perceive a random assignment of laptops to only a subset of the students in a school as unfair. Thus, there is ample justification to distribute laptops at the school level.

Our evaluation will focus on public secondary schools in Nigeria. Secondary schools across the developing world (and in Nigeria in particular) have much lower rates of attendance than primary schools while costing the government more money per pupil to operate. It is thus more likely that a laptop intervention will be identified as cost-effective at the secondary school level. Furthermore, we also believe that the laptop may benefit secondary students more than primary school students. The educational goals of secondary school go beyond just introducing basic math and reading literacy. Such students should begin to develop higher-order thinking skills while also sharpening the basic skills that they learned in primary school. Thus, we can test this population for its proficiency in the basic skills carried over from primary school and in the additional skills (teamwork, project-based learning, self-directed learning, etc.) that are developed at the secondary level.

Our evaluation sample will be randomly drawn from the available public secondary schools in Nigeria. This sample will then be split randomly into one control group and several treatment groups. To determine the optimal size for the control and treatment groups, we conducted a power analysis using Optimal Design software. We estimated the average number of pupils per 9th grade cohort in each school to be 150 students. Our key outcome variable is the composite examination score, although we would also like to detect differences in our other outcome variables as well. For simplicity, we set our target effect size to be 0.2 standard deviations for our key outcome and for the other additional outcomes. We would like to achieve a power of 80% at a significance level of 0.05. We estimate the intra-cluster correlation (within each school) to be 0.1. Given these parameters, we would require a minimum of 86 schools for a single-treatment study (43 schools in the control group and 43 in the treatment group). We expect to have 6 treatment groups in addition to 1 control group, so this would require 7 * 43 = 301 participating schools.
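As a cross-check on these figures, the sketch below applies the standard two-level formula for the minimum detectable effect size (MDES) of a cluster-randomized comparison. It is a minimal illustration under the stated assumptions (43 schools per cell, 150 pupils per cohort, intra-cluster correlation of 0.1), not a substitute for the Optimal Design analysis.

```python
import math
from scipy.stats import norm

def mdes_cluster_rct(clusters_per_arm, pupils_per_cluster, icc,
                     alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size (in SD units) for a
    two-arm comparison when treatment is assigned at the cluster (school) level."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Variance of the difference in arm means, expressed in effect-size units.
    variance = 2 * (icc + (1 - icc) / pupils_per_cluster) / clusters_per_arm
    return multiplier * math.sqrt(variance)

# Parameters from the text: 43 schools per cell, 150 pupils per cohort, ICC = 0.1.
print(round(mdes_cluster_rct(43, 150, 0.10), 2))   # ~0.2 SD
```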
Since our effect size is fairly conservative (at 0.2 SD instead of something much larger like 0.5 SD), we believe that this sample size will give us more than enough power to identify differential impacts on all our outcome variables, including the composite test scores.

7.4 The Different Treatment Groups

To help answer our research questions, we will break up our final sample into one control group and several treatment groups. Our final breakdown will look like what is shown in Table 3.

Table 3: Control and treatment groups for randomized evaluation.

Configuration | No Laptop | One Laptop per Child | One Laptop per 5 Children
No Internet or Addt'l Software | Control group (43 schools) | 43 schools | 43 schools
Internet | | 43 schools | 43 schools
Additional Learning Software | | 43 schools | 43 schools

7.4.1 Individual or Shared Laptop Distribution

Having separate treatments for one laptop per child and one for every 5 children will allow us to determine whether individual and shared laptop arrangements result in different student outcomes. We can also compare these two treatment groups to the comparison group to test whether there is indeed a positive impact of laptops at all.

7.4.2 Internet Access

We also want to test whether Internet access is an essential component of computing technology in the classroom and whether an investment in computers must be coupled with an investment in Internet capacity. The Internet has the potential to expose students to information from across the world, both supplementing and complementing the learning opportunities they have through their instructors and curricular materials. We can determine whether students do indeed take advantage of the information available on the Internet for learning by comparing students who receive internet access with those who have laptops without internet.

7.4.3 Additional Learning Software

There are literally millions of software configurations that we could test on any computing machine. For the sake of cost and simplicity, we will choose one leading software package intended to support high school mathematics. We will test the effect of this software on the various learning outcomes, focusing specifically on math literacy. Our goal is not to provide an exhaustive set of conclusions about the value of all software, but to use this evaluation as a starting point for further study of specific software packages and their value to education.

For Internet access and additional learning software, we could have created a design that tests all possible combinations of the two features. For the purposes of simplicity and cost, we chose to test each individual feature alone and the control status of having neither feature. This leaves out the case where a computer is equipped with both Internet access and additional learning software. We feel that testing each feature alone will provide a good sense of its individual value, and there is little reason right now to believe that learning software will be greatly enhanced by Internet access or vice versa. If there is software that requires internet access to be effective (such as software that downloads questions or materials from the web), or if there is extra funding to test the combination, then we can easily extend the design above to also include cells with both additional software and Internet connectivity.

7.5 Randomization Strategy

Figure 3 below outlines our basic randomization strategy. As discussed earlier, our target population is 9th graders in all Nigerian public high schools. In order to conduct the evaluation, we will need the cooperation of the national and state governments as well as the individual schools themselves. We presume that we will get permission from the national government, since otherwise this study will simply not be possible. For state governments and individual schools, we intend to seek out the permission of as many of them as possible. This is to help us ensure that our pool of potential participants is close to the larger target population. By having most or all schools willing to participate, we can then randomize across schools and ensure that our final evaluation will have external validity (at least for Nigerian public high schools). In such a situation, our evaluation sample of 301 schools would be randomly selected from the schools that have agreed to participate.

Once we have our evaluation sample, we will then randomize the assignment of each school to either the control group or one of the 6 treatment categories listed in Table 3. We will randomize and assign right after we measure our baseline data (refer again to our timeline in Appendix A). In this way, we can use the baseline data to ensure that our randomized samples are balanced across baseline outcome characteristics and any other variables of interest. We would check for this balance before assigning each randomly selected group to treatment or control status. If we are comfortable with the sample balance across the groups, then we will assign each group to one of the 7 cells in our study. If not, then we will re-randomize until we do have 7 balanced samples, as sketched below.
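A minimal sketch of this re-randomization procedure is shown below, assuming a school-level table of baseline composite scores. The file name, column names, balance criterion (largest pairwise gap in cell means below 0.1 SD) and tolerance are illustrative assumptions rather than final design choices.

```python
import numpy as np
import pandas as pd

def assign_cells(schools, n_cells=7, tol=0.10, max_tries=1000, seed=0):
    """Randomly assign schools to cells, re-randomizing until baseline
    composite scores are balanced across all cells.
    `schools` is a DataFrame with columns 'school_id' and 'baseline_score'."""
    rng = np.random.default_rng(seed)
    sd = schools["baseline_score"].std()
    for _ in range(max_tries):
        # 301 schools split evenly into 7 cells of 43 each, then shuffled.
        cells = rng.permutation(np.arange(len(schools)) % n_cells)
        candidate = schools.assign(cell=cells)
        means = candidate.groupby("cell")["baseline_score"].mean()
        # Accept the draw only if the largest gap between cell means is small.
        if (means.max() - means.min()) / sd < tol:
            return candidate
    raise RuntimeError("No balanced assignment found; relax tol or check the data.")

# Hypothetical usage with the 301 sampled schools:
# sample = pd.read_csv("baseline_schools.csv")   # school_id, baseline_score
# assigned = assign_cells(sample)
```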
7.6 Important Operations Details

At this point, we have laid out the basic research questions for our experiment and the broad theoretical framework (randomized experiments) under which we will evaluate the impact of laptops in schools. To ensure that our conclusions are valid, we must now address essential details about operations and anticipated threats to validity.

7.6.1 Data Collection

Our evaluation timeline provides a clear description of our data collection strategy. We will be using three key instruments to measure our outcome variables: examinations, class observations by field researchers, and surveys distributed to participants. We will focus on the cohort of 9th graders at each high school. Since many students change schools once they jump to the secondary level, it is not feasible to collect student data prior to their beginning the 9th grade. We will therefore distribute baseline exams and conduct any additional baseline analysis within the first month of the school year.

Once we have collected our baseline data, we will randomize our evaluation sample into 7 cells. Then we will check for sample balance across all the cells and re-randomize if the sample is unbalanced on one of the variables we are tracking. Once we are sure that our cells are balanced, we will randomly assign them to treatment and control groups and distribute the treatment. We will then give schools some time to get accustomed to the use of the laptops before we conduct class observations and distribute post-treatment surveys and exams.

Our key outcome measure will be a composite score on a math and reading skills exam that we will design with the help of education researchers and local assessment firms. As is the case with exams like the SAT in the United States, the scores can be disaggregated by subject as well.
Unlike the SAT, however, our exam will not be a norm-referenced test; rather, it will report a raw score that measures proficiency in math and verbal skills. We intend to also categorize each question based on the content that it tests and the type of skill that it measures (basic knowledge or higher-order thinking). In our final analysis, we can then drill down and see how different control and treatment groups did on different types of questions.

Later in this proposal, we describe several extensions to the study that would answer additional research questions, and we note how these extensions may affect the operations of our evaluation. The next parts of this section address specific threats to validity that further inform our data collection and operations plans.

7.6.2 Piloting our Study in a Select Few Secondary Schools

If time and money allow, we would like to delay the actual evaluation outlined in Appendix A by one year and use the 2008-2009 cohort of 9th graders to ensure the quality of our operations and data collection procedures. In such a scenario, we would select 7 schools that will not be in the evaluation sample and run a test experiment on them for one year. Such a pilot would allow us to ensure that exams are administered fairly, laptops work correctly, support for educators can be made available, and any non-compliance is identified, prevented and accounted for in the real study. Furthermore, we can use this first year to help us get better estimates for parameters like the intra-cluster correlation that go into our power analysis and sample size selection.

7.6.3 Preventing Spillover Effects of Treatments

Spillover effects, where students in the control schools are exposed to students in the treatment schools, may lead to underestimation of treatment effects if the laptops do indeed improve performance. We have tried to prevent such a scenario by clustering at the school level, so students within a school all have the same treatment status. We hope that a pilot will help us to determine whether there are still problems with students sharing laptops when they take them home, or whether issues of lost or stolen laptops arise that may present attrition biases. Based on what we see during our pilot study, we will determine whether students should be allowed to take laptops home with them or whether the computers must be kept within the school building (in secured cases) and used only during class.

7.6.4 Managing Attrition in Participating Schools

Several scenarios can lead to students in a treatment group not receiving exposure to laptops. One issue will arise when students lose their laptops. In such a case, our policy will be to ask the student to look on with others in class. In our final analysis, we will also keep track of how many students lost their laptops as an outcome. This will allow us to calculate additional costs for the laptop program and also to adjust our findings based on whether many students in treatment groups did not have access to their computers.

Lost or stolen laptops that are not replaced are not the only risks for attrition or non-compliance. The biggest source of attrition will be students dropping out of school. This can threaten the validity of our experimental conclusions, especially if there is some systematic relationship between dropouts and the various cells in our study. Typically, the weakest or most disadvantaged students can be expected to drop out at the highest rates and struggle the most on assessments.
If, however, the presence of laptops keeps more of these students in school, then we risk underestimating our treatment effects as weaker students drop out of the control schools in greater numbers than from treatment schools. We will use incentives to minimize the number of students who participate in the study but do not complete post-treatment assessments and surveys. All students who complete the post-treatment assessments (whether they are still in school or not) will receive a relatively sizeable monetary reward (in the range of $5, which is substantial in Nigeria). This reward will be communicated to students only after everyone has taken the baseline exams and been assigned to treatment or control groups. We believe this incentive structure is optimal for measuring the impact of the laptop without substantially altering the dropout dynamics at participating schools. Students may still drop out under this arrangement, but we can ensure that close to everyone is measured whether or not the student has remained in school.

7.6.5 Preventing Unauthorized Use of Computers

Another possible issue to consider is whether students in some groups will end up playing games or using the computers for other unauthorized activity. We could potentially lock the computers to prevent the installation of new programs and include only a specific predefined suite of software on each laptop. However, the majority of games and distractions that students can engage in are online, especially with more advanced Web 2.0 technologies that allow software to run within web browsers. So, cells that have internet access will have access to a limitless supply of sites that cannot all be tracked at a reasonable cost. We believe the best approach is to acknowledge this as a potential phenomenon among our treatment groups, especially those that are given internet access, and to include it as part of our research inquiry. We can still lock computers that do not have internet access so that students do not install unauthorized software. However, we must expect that students with internet access will be able to preoccupy themselves with both educational and non-educational websites. Our results for the group of schools that have internet access will guide further research on the impact of the internet on student outcomes. If their learning outcomes are lower than those of the other groups in the study (including the control), then this would suggest that internet usage is a distraction from school work. In that situation, further measures would need to be devised to determine why students are not performing as well and how they can be motivated to use the internet for positive gains.

7.7 Data Analysis

Once we have completed our evaluation, we can conduct data analysis to determine the effects of the treatments. We have already discussed how we will use our baseline scores, as well as any additional variables (particularly any that are chosen to stratify our sample), to determine whether our sample is balanced. A simple approach to quantifying effect sizes is to measure the means of the different outcome variables in each cell and then perform t-tests to determine statistical differences from other cells. We can also implement this approach through a more sophisticated regression framework, in which we create dummy variables for the different treatment options and then calculate parameter estimates that represent the average treatment effect for each scenario (Duflo, Dupas et al., 2007; Muralidharan & Sundararaman, 2006), as sketched below.
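A minimal sketch of this regression approach is shown below, assuming a hypothetical student-level dataset with endline and baseline composite scores, the assigned cell and a school identifier; the file and column names are placeholders. Standard errors are clustered by school because treatment is assigned at the school level.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level results: one row per student, with the school's
# assigned cell (e.g. 'control', 'laptop_1to1', 'laptop_1to5', 'internet_1to1', ...),
# the school identifier and the baseline/endline composite scores.
df = pd.read_csv("evaluation_results.csv")

# Dummy variables for each treatment cell (relative to the control group),
# controlling for baseline scores; each coefficient estimates the average
# treatment effect of that cell.
model = smf.ols(
    'endline_score ~ C(cell, Treatment("control")) + baseline_score',
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(result.summary())
```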
It will also be helpful to drill down into our data across relevant subgroups. One possible avenue for exploration would be to group students according to baseline characteristics and see whether there are heterogeneous treatment effects based on initial levels of performance and attendance. For example, we could follow the students who scored among the top 10% on baseline exams and those who were among the bottom 10%, and determine how their scores changed after the treatment. This would allow us to answer whether any of the treatments disproportionately benefit weaker students over stronger students (or vice versa). As the next section on extensions discusses, we can define several different variables on which to stratify our initial evaluation sample so that we can specifically track differential effects across different subpopulations.

7.8 Extensions

We have presented a detailed outline of a randomized trial to test the effect of laptop computers in schools. We posed a specific set of research questions that have important policy implications, and then described a design that allows us to test each of the questions in our study. Due to budget constraints and for the sake of simplicity, we did not include several additional research questions that can also be answered with a few extensions to the study. In this section, we outline some additional research questions that may be of interest to policymakers and provide a brief synopsis of how our study would need to be modified to answer them.

7.8.1 Evaluating How Laptops Impact the Long Term Benefits of Education

It can be argued that all the outcome variables we have defined for our study are really process variables, with the ultimate outcomes being wage and quality of life measures that are not included in typical evaluations. This evaluation can be extended over a period of 10 years to determine the impact of laptop computers on life outcomes for a cohort of students. At the most fundamental level, we are asking here whether laptops in school improve the employment and wage prospects of students. Answering this question would allow policymakers to determine whether the net present value of the returns to additional years of secondary education increases when schools have laptops. They could then perform a more accurate cost-benefit analysis for procuring laptops that accounts for future returns. To answer such a question, the control and treatment cohorts could be assigned and managed in the same way as we have already described. The primary difference from our current study would be the length of time over which we measure outcomes for students and the precise outcomes that we measure. The core outcomes would no longer be test scores, but student incomes. We could measure these incomes for a specific set of years (perhaps 4, 6, 8 and 10 years after the study). Our analysis would focus on determining whether the average incomes for students with and without laptops differ significantly (through a regression framework). Since we have 6 different treatment groups, we could also determine whether any of the specific configurations that were set up during high school had a particularly strong impact on incomes.
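To show how measured income differences would feed back into the cost-benefit analysis, the sketch below discounts a stream of annual wage gains to the year the laptops are purchased and nets out the program cost. Every figure in the example (the annual wage gain, the horizon, the discount rate and the per-student cost) is a placeholder, not an estimate.

```python
def npv_of_wage_gains(annual_gain, years, discount_rate, program_cost):
    """Net present value per student of a constant annual wage gain
    attributed to the laptop program, net of the up-front cost."""
    pv_gains = sum(annual_gain / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
    return pv_gains - program_cost

# Placeholder numbers: a $25 annual wage gain over 10 working years,
# a 10% discount rate, and a $100 laptop plus $50 of support costs.
print(round(npv_of_wage_gains(annual_gain=25, years=10,
                              discount_rate=0.10, program_cost=150), 2))
```

A positive value would indicate that the long-run returns justify the procurement cost; in the extended study, the placeholder wage gain would be replaced by the income differences estimated from the follow-up surveys.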
7.8.2 Evaluating the Short and Long Term Impacts of Laptop Access

Another research question is whether exposure to laptops, the internet and learning software has long term as well as short term educational benefits. Do any gains in student achievement persist after students have not used the laptop for some time? To answer this question, we could continue the laptop program for half of the treated schools in the 10th grade. In such a scenario, we would have three groups to compare as the initial cohort of students in the study completes the 10th grade:

1. Students in the comparison group who never received a laptop.
2. Students who received laptops as 9th graders, but did not have any during the 10th grade.
3. Students who received laptops in both the 9th and 10th grades.

In such a setup, we could compare groups (1) and (2) to determine whether students who have not used laptops recently retain any educational gains over students who never received them to begin with. We could also compare the outcome means for groups (2) and (3) to determine how much more benefit continuous use of laptops provides compared with use during a single year. We could continue to track students across the 6 different treatment cells to determine whether there is any differential across treatment types as well. Finally, we could compare groups (1) and (3) to determine whether laptops have significant benefits after students have had them for longer than one year (rather than only during the 9th grade).

7.8.3 Stratifying on Other Relevant Variables

We can use stratification to ensure that certain underrepresented schools are included in our study. Another benefit of stratification, however, is that it helps us compare the effects of the treatment on different types of schools (Duflo, Glennerster, & Kremer, 2007). Policymakers may want to know whether the gains from laptops are the same in rural and urban schools. To test such a question, we would have two evaluation samples: one for urban schools and one for rural schools. We would randomize as described earlier, except over both of these samples. Then we would conduct our experiment and use our final results to determine the impact on rural and urban schools separately and on Nigerian public high schools as a whole; the latter would require combining the weighted means from our individual strata, as sketched below. There are a multitude of other variables along which we can stratify, depending upon the specific wishes of the state and local governments and the OLPC community: income levels for students in schools, computer access outside of school, and so on.
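The sketch below illustrates the weighted combination just mentioned: stratum-specific treatment effects (for example, one estimated in rural schools and one in urban schools) are averaged using each stratum's share of the target population. The effect sizes and population shares shown are hypothetical.

```python
def pooled_effect(stratum_effects, stratum_shares):
    """Combine stratum-specific treatment effects into one estimate for
    the target population, weighting by each stratum's population share."""
    assert abs(sum(stratum_shares) - 1.0) < 1e-9
    return sum(e * w for e, w in zip(stratum_effects, stratum_shares))

# Hypothetical estimates: a 0.12 SD gain in rural schools and a 0.20 SD
# gain in urban schools, with 60% of 9th graders in rural schools.
print(pooled_effect([0.12, 0.20], [0.60, 0.40]))
```

The same weighting applies to any other stratifying variable, so long as the evaluation sample within each stratum is drawn to represent that stratum of the target population.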
8. Conclusion

Computing technology for education, as promoted by OLPC, holds great promise for improving short and long term educational outcomes. However, such technology remains extremely expensive in the developing world, even with the cost reduced to $100 per unit. We have proposed a randomized evaluation that can help policymakers determine whether laptops are truly a worthy investment for secondary schools. Upon completion of this evaluation, we hope that policymakers will have a clear sense of the breadth and magnitude of the benefits of computers in school and will be able to compare these benefits to other educational interventions that have been tried in the developing world. It is important at this point to step back and remember that computers are not meant simply to improve a single targeted educational outcome. Indeed, they have revolutionized nearly every other sector where they have been employed. There is reason to believe, therefore, that a similar redefinition will occur for schooling.

We have tried to design our evaluation to account for these systematic changes by tracking a diverse set of outcome measurements. We have also proposed an extension to our evaluation that would track long-term student outcomes in terms of employment. After all, the best way to determine whether a revolutionary approach to education is actually working is to let the labor market judge the quality of its graduates.

Appendix A: Evaluation Timeline for Single-Year Study

This timeline is specific to an experiment that tests one cohort of students starting in the 9th grade. It does not apply to the suggested extension in which several grade-based cohorts are used to test the short-term and long-term effects of the laptop on learning outcomes. We also assume that a standard school year runs from September through June.

Summer 2008 (Prior to School Year)
Potential participants list compiled by gathering permission contracts from all relevant parties. Randomized selection of evaluation sample from list of potential participants. Schools in evaluation sample notified of their involvement in the study.

School Year 2008-2009 (9th Graders)

September, 2008
Baseline exams given to 9th graders. Other baseline data also collected. Initial randomization of evaluation sample into 7 cells; no assignment to treatment or control made. Baseline data compiled and entered into the database by the end of the month.

October, 2008
Ensure sample balance across baseline data and any additional relevant variables. Once the sample is balanced, randomly assign each cell to the control group or to one of the 6 treatment groups. Distribute laptops and any additional equipment or software to all the treatment groups.

November, 2008 to March, 2009
Allow schools time to integrate the technology and become accustomed to its use in the classroom. Provide basic support for treatment schools to ensure that they know how to use the hardware and software available.

April, 2009 to May, 2009
Send field teams to random schools unannounced. Have them record outcome measurements that depend on classroom observation (project-based learning, discipline issues, etc.).

June, 2009
Conduct endline examinations for math and reading skills as well as for ICT literacy. Distribute and collect surveys for teachers and students to measure additional outcomes.

Summer 2009
Upload all exam, observation and survey data into the database and begin analysis. Depending on available funds, determine whether to continue the evaluation for another year and whether to also plan for a longer longitudinal analysis to follow wages and life outcomes (the 10-year study).

Bibliography

Angrist, J., & Lavy, V. (2002). New Evidence on Classroom Computers and Student Learning. The Economic Journal, 112, 735-765.

Banerjee, A., Cole, S., Duflo, E., & Linden, L. (2005). Remedying Education: Evidence from Two Randomized Experiments in India.

Brown, G. (2005). Remarks by the Rt Hon Gordon Brown MP, Chancellor of the Exchequer on Education in Africa. Retrieved May 1, 2008, from http://www.hmtreasury.gov.uk/newsroom_and_speeches/speeches/chancellorexchequer/speech_chx_140105_education.cfm

Duflo, E., Dupas, P., & Kremer, M. (2007). Peer Effects, Pupil-Teacher Ratios, and Teacher Incentives: Evidence from a Randomized Evaluation in Kenya.

Duflo, E., Glennerster, R., & Kremer, M. (2007). Using Randomization in Development Economics Research: A Toolkit. Centre for Economic Policy Research.

Duthilleul, Y. (2005). Lessons Learnt in the Use of Contract Teachers. Paris, France: UNESCO International Institute for Educational Planning.
Glewwe, P., Kremer, M., Moulin, S., & Zitzewitz, E. (2000). Retrospective versus Prospective Analysis of School Inputs: The Case of Flip Charts in Kenya. Cambridge, MA: National Bureau of Economic Research.

Hinchliffe, K. (2002). Public Expenditures on Education in Nigeria: Issues, Estimates and Some Implications. Washington, D.C.: The World Bank.

Hotez, P., Brooker, S., Bethony, J., Bottazzi, M., Loukas, A., & Xiao, S. (2004). Hookworm Infection. The New England Journal of Medicine, 351(18), 799-807.

IRIN. (2007). Nigeria: Laptops-in-schools debate turns messy. Retrieved May 2, 2008, from http://www.irinnews.org/Report.aspx?ReportId=76023

Law, Y.-K., Chan, C., & Sachs, J. (2008). Beliefs about learning, self-regulated strategies and text comprehension among Chinese children. British Journal of Educational Psychology, 78, 51-73.

Miguel, E., & Kremer, M. (2004). Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities. Econometrica, 72(1), 159-217.

Muralidharan, K. (2008). A-165 Course Lecture for April 25, 2008.

Muralidharan, K., & Sundararaman, V. (2006). Teacher Incentives in Developing Countries: Experimental Evidence from India.

OLE. (2008). Open Learning Exchange Nepal Website. Retrieved April 2, 2008, from http://olenepal.org/

OLPC. (2008a). One Laptop Per Child Mission Statement. Retrieved March 31, 2008, from http://laptop.org/vision/mission/

OLPC. (2008b). One Laptop Per Child Wiki. Retrieved March 31, 2008, from http://wiki.laptop.org/go/The_OLPC_Wiki

White-Clark, R., DiCarlo, M., & Gilghriest, N. (2008). Guide on the Side. The High School Journal (April-May 2008).

Zhang, L. (2008). Constructivist pedagogy in strategic reading instruction: exploring pathways to learner development in the English as a Second Language (ESL) classroom. Instructional Science, 36, 89-116.