Never Put Off Till Tomorrow?

Ignacio Martinez
Mathematica Policy Research

This paper identifies the causal effect of procrastination on achievement in a Massive Open Online Course (MOOC). I use two approaches: instrumental variables (IV) and a randomized control trial. I show that rain and snow affect when a student takes a quiz and can therefore be used as instruments for procrastination. I find that taking the course’s first quiz on the day it is published, rather than procrastinating, increases the probability of course completion by 15.4 percentage points. With the randomized control trial, I show that a very low-cost intervention can increase student achievement. I sent an email (a directive nudge) encouraging a randomly selected group of students to procrastinate less. Students assigned to the treatment group were 16.85 percent more likely to complete the course. I also find that the effects are heterogeneous across countries, suggesting that it may be advisable to customize nudges to country characteristics. This online experiment may also provide valuable lessons for traditional classrooms.

Keywords: procrastination, achievement, MOOCs, nudge, online education

Thomas Jefferson’s axioms for personal conduct famously include “Never put off till tomorrow what you can do to-day.”1 Conventional wisdom agrees with him and holds that procrastination is bad for student achievement: without deadlines, students may procrastinate on their work, which may reduce learning. On the other hand, some people thrive under pressure; for that type of person, active procrastination could be the optimal choice. Procrastination is difficult to measure and endogenous to most outcomes of interest. Most papers rely on self-reporting, which could cause a Hawthorne effect: students may change their behavior and procrastinate less simply because they are asked to record it.
Also, low-ability students may be more likely to procrastinate; for these students, changing their work habits might not improve their outcomes. Massive Open Online Courses (MOOCs) provide an ideal laboratory to study procrastination. The rich data that MOOC providers collect include both when the course material is published and when the students interact with it. Therefore, researchers can observe procrastination directly without relying on self-reported measures. Using data from Coursera, a MOOC provider, Diver and Martinez (2015) provide descriptive evidence of the strong negative correlation between procrastination and achievement. As shown in Figure 1, students who delay making their first attempt at quiz 1 perform, on average, worse than those who do not procrastinate. In this paper, I use two approaches to estimate the causal effect of procrastination on achievement. First, I use weather data for an instrumental variables (IV) approach; this is possible because MOOCs collect information on individual IP addresses. Second, I use directive nudges for an experimental approach. Weather shocks provide a source of variation that predicts procrastination. I show that rain and snow affect when a student takes a quiz and, therefore, can be used as an IV. For example, on a day with rainfall, a student is 2.3 percentage points less likely to attempt the first quiz the day it is published, relative to a day with neither rain nor snow. On a day with snowfall, a student is 5.0 percentage points more likely to do so. Next, I show that a directive nudge can affect students’ choices and help them improve their achievement. These results are more important than the weather IVs because the intervention can be replicated in all types of classrooms. Students randomly assigned to the treatment group received an email that provided information about the negative correlation between procrastination and achievement.
These students were 17 percent more likely (relative to a very low base rate) to successfully complete the course than students in the control group. Additionally, I show that the effect of the treatment is heterogeneous across countries. For example, Germans, Spaniards, and Indians assigned to the treatment group were more likely to obtain the course certificate, with rates that were 166 percent, 69 percent, and 37 percent higher, respectively. On the other hand, the treatment had a negative effect in Japan and Denmark. The remainder of this paper is organized as follows. In the next section, I discuss the relevant literature. Following that, I present the economic model. I next describe the data, and then show that weather can be used as an instrument. Following that is a section describing the randomized control trial and its results, and then my conclusions.

Literature

In this section, I discuss how psychologists, economists, and behavioral economists have studied procrastination. I then discuss how weather has been previously used as an instrument for procrastination. Finally, I discuss how the literature is taking advantage of MOOCs as a massive research laboratory.

Psychology literature: Psychologists have studied procrastination since the 1970s. Ellis and Knaus (1977) claim that “procrastination constitutes an emotional hang-up that does you considerable damage.” Based on their work as psychotherapists, they also claim that about 95 percent of college-level individuals procrastinate. They never consider that they are basing their “guesstimate” on a highly selected sample (that is, their patients). Neither do they consider that procrastination could be correlated with some other unobserved characteristic that is the real cause of a patient’s problems.
Knaus (2001) describes procrastination as our “ancient nemesis.” He claims procrastination may have originated as early as 2.5 million years ago, when our ancestors first grouped into small clans and someone decided to needlessly put off doing something beneficial for the clan. These hypotheses are founded on small surveys that rely on indirect measures of procrastination. Moreover, none of these studies address the problem of procrastination being an endogenous choice. Chun Chu and Choi (2005) were the first to consider that “active” procrastination could be good: some people choose to procrastinate because they know they will do better under pressure. For their study, they invited students to respond to a questionnaire titled “Survey of University Students’ Time Use.” Some 230 undergraduate students filled out the questionnaire; however, the paper does not mention how these students compared to their peers who chose not to participate in the study, raising concerns about selection bias and external validity. In this paper, I use a direct measure of procrastination and both an IV and an experimental approach to deal with the endogeneity concerns. Economic Literature: The first paper to address procrastination in the economic literature is Akerlof (1991). Akerlof argues that although procrastination might initially appear to be outside the appropriate scope of economics, it affects the performance of individuals and institutions in the economic and social domains. He proposes an economic model in which procrastination occurs when present costs are unduly salient in comparison with future costs, leading individuals to postpone tasks until tomorrow without foreseeing that, when tomorrow comes, the required action will be delayed yet again. This model challenges the common assumption in economics that individuals are rational maximizers. 
Anderson and Block (1995) contend that the examples Akerlof offers can be explained within the framework of the standard economic model. They argue that Akerlof confuses later regret with prior irrationality, which is parallel to confusing ex post with ex ante. O’Donoghue and Rabin (2001) developed a model in which an individual chooses from a menu of options and is partially aware of her self-control problems. Their model predicts that additional options can induce procrastination, and that a person may procrastinate more in pursuing important goals than unimportant ones. They argue that their second result arises because the greater the effort the individual intends to incur, the more likely she is to procrastinate in executing those plans. Instead of using the standard economic assumption that preferences are time consistent (that is, a person’s relative preference for well-being is the same whenever she is asked), they model individuals with present-biased preferences. Ericson (2014) presents a theoretical framework to examine how the interaction between present bias and limited memory shapes the value of reminders and deadlines. The model shows that the way to deliver a reminder depends on whether an individual is present biased: anticipated reminders are good for time-consistent individuals, whereas they can be bad for present-biased ones. The paper also shows that the optimal timing of reminder delivery depends on the degree of present bias. Finally, Siegfried (2001) argues that combating procrastination is essential in order to have a successful undergraduate economics honors program. He contends that getting students to work on their thesis early is the key to success. In order to achieve this, his university uses a series of short-term deadlines and the “fear of personal embarrassment.” Banerjee and Duflo (2014) use data from edX to show a discontinuity in grades associated with “on time” enrollment.
They argue that this suggests that “disorganization” is negatively correlated with performance.

Behavioral Economics Literature: Behavioral economists merge insights from economics and psychology (Kahneman, 2003). Just as our financial resources and our time are limited, we also have important limitations in our capacity to process and retain information, in our ability to control our behavior and follow through on intended actions, and in the extent to which we respond to the individual incentives that we face. This “psychology of scarcity” generates three important behavioral principles. First, our cognitive resources are limited and can be overwhelmed. Second, we do not act with perfect self-control. Third, our decisions and choices can be influenced by non-economic factors. These insights can be used to design effective interventions. For example, Castleman and Page (2015) show that reminders via text messages can increase college matriculation by 7.1 percentage points.

Weather and procrastination: One approach I use to control for the endogeneity of procrastination is to use weather as an instrument. The IV method is a signature technique in the econometrics toolkit, as discussed in Angrist and Krueger (2001). Connolly (2008) links the American Time Use Survey to rain data from the National Climatic Data Center (NCDC). She finds that on rainy days, men shift on average 30 minutes from leisure to work, suggesting that rain raises the marginal value of work. In this paper, I link data from the NCDC on rain and snowfall to student IP addresses from Coursera data. Using the IP addresses, I geolocate students and assign each one to an NCDC weather station. Assuming that procrastinators do not choose their location in response to the weather, rain and snow are an exogenous source of variation for procrastination that allows me to identify the causal effect of procrastination on achievement.
MOOCs as a massive research laboratory: Diver and Martinez (2015) explore the opportunities and challenges that MOOCs generate for research. Using data from Coursera, they show a strong negative correlation between procrastination and achievement. In this paper, I go beyond studying the correlation to determine the causal effect of procrastination on achievement by using both an IV and an experimental approach. The experimental approach consists of nudging students by sending information in an email, similar to Martinez (2014a). The term “nudge” was first used by Thaler and Sunstein (2008) to describe “any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any option or significantly changing their economic incentives.” It is possible that the results from the informational nudge in Martinez (2014a) and the directive nudge in this paper are a form of Hawthorne effect; however, in Martinez (2014b), the author explores and rejects this hypothesis.

Model

In this section, I present a simple model in which a student chooses whether to take a quiz in the first period, $t = 1$; delay and take it in the second period, $t = 2$; or not take it at all.2 Imagine that this student has the option to buy a lottery ticket each period, where buying the lottery ticket represents taking the quiz. However, the price that the student has to pay is a function of his or her characteristics (that is, the student’s ability), the environment that period (the weather), and the time (the ticket in the second period is cheaper). In theory, a very high-ability student, whose ticket will win for sure, will buy a lottery ticket only in the period with the lowest price. A medium-ability student, whose ticket has a lower probability of winning, will buy a ticket each period in order to maximize his or her chances of winning.
Finally, a low-ability student will only buy a ticket (if at all) at a very low price, because he or she has a low probability of winning. The tradeoff in the model is the known cost of taking the quiz versus the expected payoff of succeeding in the course. Taking the quiz in the first period will increase the probability of success, but a student may choose to delay until the second period if he or she knows (or expects) that the cost will be lower then.1 However, in reality, high-ability students are less likely to delay taking the quiz and are also more likely to score higher on any given attempt. Therefore, there is an inference problem.

1. With the lottery analogy, the probability of success if you take the quiz in the first period is the probability that the ticket you bought in the first period wins, plus the probability that it does not win but your second-period ticket does.

Each period the student gets utility $u$ from non-Coursera activities (that is, leisure and work). Non-Coursera utility is a function of ability $a$, weather $X_t$, and effort $e_t$.3 Attempting the quiz, $e_t = 1$, costs the student non-Coursera utility. That is, $u(a, X_t, e_t = 1) < u(a, X_t, e_t = 0)$. The model assumes that the student can take the quiz only once per period. When a student attempts the quiz in period $t$, he or she receives the uncertain grade $g_t$. The model predicts that if high-ability students enjoy non-Coursera activities relatively less than low-ability students,

$$u(a_{high}, X_t, e_t = 1) - u(a_{high}, X_t, e_t = 0) > u(a_{low}, X_t, e_t = 1) - u(a_{low}, X_t, e_t = 0),$$

then they will take the quiz earlier.4 In the second period, if the student’s best grade, $g^{\max} = \max\{g_1, g_2\}$, is greater than the threshold $\bar{g}$, the student gets a payoff equal to $W$. Not attempting the quiz in a given period yields a grade equal to zero. On the other hand, attempting the quiz yields an uncertain grade that is an increasing function of ability and an unobserved random variable, $\varepsilon_t$ (that is, a productivity shock):
$$g_t = f(a, \varepsilon_t)$$

The utility of attempting the quiz in period 2, $e_2 = 1$, for a student with a grade from period 1 equal to $g_1$, ability $a$, and exposure to weather $X_2$, is the utility from non-Coursera activities when attempting the quiz, plus the expected payoff of succeeding in the course:

$$V(g_1, X_2, a, e_2 = 1) = u(a, X_2, e_2 = 1) + \Pr[\max(g_1, g_2) \geq \bar{g} \mid a] \, W.$$

The utility of not attempting the quiz in period 2 is the utility from non-Coursera activities when not attempting the quiz, plus the payoff of succeeding in the course if the grade in period 1 is greater than the threshold:

$$V(g_1, X_2, a, e_2 = 0) = u(a, X_2, e_2 = 0) + \mathbf{1}(g_1 \geq \bar{g}) \, W.$$

Therefore, the period 2 problem can be written as:

$$V(g_1, X_2, a) = \max\{V(g_1, X_2, a, e_2 = 1),\; V(g_1, X_2, a, e_2 = 0)\}.$$

This, in turn, indicates that the period 1 problem, in which the student decides whether to attempt the quiz or delay, can be written as:

$$V(X_1, a) = \max\{\underbrace{u(a, X_1, e_1 = 0) + E[V(0, X_2, a)]}_{\text{Not taking the quiz}},\; \underbrace{u(a, X_1, e_1 = 1) + E[V(g_1, X_2, a)]}_{\text{Taking the quiz}}\}$$

This problem can be solved recursively. A student with $g_1 \geq \bar{g}$ will not attempt the quiz in period 2.5 A student with $g_1 < \bar{g}$ will attempt the quiz if the expected payoff of attempting the quiz is greater than the loss in non-Coursera utility. That is:

$$\Pr[g_2 \geq \bar{g} \mid a] \, W > u(a, X_2, e_2 = 0) - u(a, X_2, e_2 = 1)$$

Therefore, in the first period, a student will attempt the quiz if the difference between the expected utility of going to period 2 with a grade equal to $g_1$, $E[V(g_1, X_2, a)]$, and of going to period 2 with a grade equal to zero, $E[V(0, X_2, a)]$, is greater than the loss in non-Coursera utility in period 1 due to taking the quiz, $u(a, X_1, e_1 = 0) - u(a, X_1, e_1 = 1)$. In the lottery analogy, a student will buy a ticket in period 1 if the expected value of entering period 2 with a possible lottery payoff is greater than the price the student has to pay.
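The backward induction just described can be made concrete with a small numerical sketch. Everything below — the utility function, the grade process, and all parameter values — is an illustrative assumption, not the paper’s specification; only the structure mirrors the model (effort costs non-Coursera utility, weather shifts that utility, and the quiz pays $W$ if the best grade clears the threshold).

```python
W = 100.0    # payoff for clearing the grade threshold (hypothetical value)
G_BAR = 0.7  # passing threshold g-bar (hypothetical value)

def u(a, rain, e):
    # non-Coursera utility: rain lowers the value of outside activities,
    # and attempting the quiz (e = 1) costs utility, less so at high ability
    return (1.0 - 0.3 * rain) - (0.5 - 0.2 * a) * e

def pass_prob(a, best):
    # Pr[max(best, g_t) >= G_BAR] with g_t = a + eps, eps ~ U(-0.2, 0.2)
    if best >= G_BAR:
        return 1.0
    lo, hi = a - 0.2, a + 0.2
    return min(max((hi - G_BAR) / (hi - lo), 0.0), 1.0)

def v2(g1, a, rain):
    # period-2 problem: attempt iff the expected payoff beats the utility cost
    take = u(a, rain, 1) + pass_prob(a, g1) * W
    skip = u(a, rain, 0) + (W if g1 >= G_BAR else 0.0)
    return max(take, skip)

def attempt_in_period_1(a, rain1, rain2):
    # period-1 problem by backward induction; E[V(g1, X2, a)] is
    # approximated on a grid of productivity shocks eps in [-0.2, 0.2]
    draws = [a + eps / 50.0 for eps in range(-10, 11)]
    ev_take = sum(v2(g1, a, rain2) for g1 in draws) / len(draws)
    ev_skip = v2(0.0, a, rain2)
    return u(a, rain1, 1) + ev_take > u(a, rain1, 0) + ev_skip
```

Under these made-up parameters the sketch reproduces the model’s comparative statics: a medium-ability student attempts in period 1 to maximize the chance of clearing the threshold, while a low-ability student, whose ticket is very unlikely to win, never pays the cost.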
Formally, the student will attempt the quiz in period 1 if:

$$E[V(g_1, X_2, a)] - E[V(0, X_2, a)] > u(a, X_1, e_1 = 0) - u(a, X_1, e_1 = 1)$$

If a student receives new information that increases the expected value of attempting the quiz in period 1, then $E[V(g_1, X_2, a)]$ rises; similarly, if the expected value of going into period 2 with a grade of zero decreases, then $E[V(0, X_2, a)]$ falls. In either case, students who were at the margin of attempting the quiz in period 1 will take the quiz and not delay. We can interpret the empirical strategies that I employ in terms of changes like these in key parameters of the model. The intention of the informational experiment is to increase the expected value of attempting quizzes earlier. Similarly, if the utility of non-Coursera activities, $u(a, X_t, e_t)$, decreases, students at the margin will procrastinate less. In a later section, I show that rain and snow affect procrastination.

Data

I use data from two sources. The first is Coursera, an education platform that partners with top universities and organizations worldwide to offer free online courses to the public. The second is the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA).6 Coursera data provide me with a measure of procrastination and a platform to run an experiment. I linked these data with NCDC data about weather conditions (rain and snow) for each Coursera participant, whose geographic locations I identified by their IP addresses in the Coursera data.

Coursera

I use data from the third edition of the Foundations of Business Strategy (FBS) course by Michael J. Lenox of the University of Virginia. The course initially ran from January 13 to February 25, 2014, and enrolled 75,180 students. FBS explores the underlying theory and frameworks that provide the foundations of a successful business strategy. The class is divided into weekly modules.
Each module consists of an introductory video, a reading from the strategist’s toolkit, a series of video lectures, a quiz, and a case study to illustrate points in the lectures. Students wishing to receive a Statement of Accomplishment must satisfy the following criteria:

1. Complete the six quizzes. Students can take each quiz up to three times; their best score on each quiz is the one recorded. Quizzes have 10 questions with four choices each.

2. Submit a final project, a strategic analysis for an organization of their choice.

3. Assess five peers’ strategic analyses using the peer assignment function.

The maximum final grade is based on 100 points: 50 points for the final project, 42 points for the quizzes (spread evenly over the six quizzes), and 8 points for post and comment up-votes. Those who receive 70 points or more and assess five peers’ strategic analyses receive a Statement of Accomplishment. In this paper, I use the 24,122 students who, at the time of enrollment, expressed their intention to complete all the course work necessary to obtain the Statement of Accomplishment.7 These students were randomly assigned to treatment and control groups to test whether a directive nudge would affect their behavior and achievement. Only 1,212 of these students received a Statement of Accomplishment. Figure 2 shows when each of the quizzes was published and the timing of the directive nudge intervention, which occurred the day quiz 6 was published. Two questions are difficult to answer with Coursera’s data: (1) who are the students, and (2) why did they enroll in this MOOC? Figures 3 to 7 attempt to address these questions using survey data. As discussed in Diver and Martinez (2015), one should be cautious using these data, because only a small fraction of the students complete surveys. In this case, of the 24,122 students who expressed the intention of obtaining a Statement of Accomplishment, only 2,332 completed the course survey.
The selection problem is evidenced by the fact that the probability of obtaining a certificate is just 3 percent for someone who did not complete the survey, but 20 percent for those who did complete it. Merging data from different sources can mitigate this problem. For example, by merging the course survey with the Coursera Standardized survey, I was able to observe the level of education for 4,972 out of the 24,122 students in this experiment. For those whose education level I could not observe, the probability of obtaining a certificate is 3 percent, whereas for those I could observe, it is 12 percent. The majority of students were male (68 percent). The median age was 32, with an age range from 15 to 88. Most were employed (74 percent), 15 percent were unemployed, and 5 percent were full-time students. The survey also shows that the students were educated: 42 percent had a B.A., 21 percent a master’s degree, and 16 percent a professional degree. The survey shows that obtaining the Statement of Accomplishment was very important for 40 percent of the respondents, important for 37 percent, a bit important for 17 percent, and not at all important for only 7 percent. Finally, we know that students were taking the course for a great number of different reasons. The most frequently selected survey responses are “to improve my skill set for the job I want in the future” (11 percent), “to deepen my knowledge of this topic” (11 percent), “for personal enjoyment and love of learning” (11 percent), and “because I want to add the successful completion of the course to my CV/resume” (9 percent).

Weather Data

The NCDC has data from more than 90,000 weather stations around the world.8 The data include maximum and minimum daily temperatures, rain, and snowfall. Using stations’ and students’ latitude and longitude, I match the students to their nearest weather station.
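The matching step amounts to a nearest-neighbor search over great-circle distances. A minimal sketch, in which the function names and station tuples are hypothetical and a brute-force scan stands in for whatever spatial index the actual matching used:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in kilometers between two (lat, lon) points
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearest_station(student_lat, student_lon, stations):
    # stations: iterable of (station_id, lat, lon) tuples;
    # returns the id of the station closest to the student
    return min(
        stations,
        key=lambda s: haversine_km(student_lat, student_lon, s[1], s[2]),
    )[0]
```

With ~90,000 stations, a linear scan per student is workable at this sample size, though a k-d tree on projected coordinates would be faster.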
Connolly (2008) shows that, for men, a rainy day shifts about half an hour from leisure and home production to work, suggesting that rain raises the marginal value of work.9 As in her paper, I define a rainy day as one with at least 0.10 inches of rain. Additionally, I take snow into consideration and define a snowy day as one with at least 0.10 inches of snow. Rain and snow may have different effects on student choices. For example, on a rainy day, a student may stay at work until later and be less likely to do Coursera work when he or she gets home. On the other hand, on a snowy day, a student may stay home instead of going to work, increasing the likelihood of doing Coursera work.

The Effect of Procrastination Using Weather as an Instrument

I estimate linear regressions using rain and snow as IVs for procrastination. The course started on Monday, January 13, and the first quiz was published that day (Figure 2). The ordinary least squares (OLS) estimate in the first column of Table 1 shows that a student who took the quiz on day 1 is 15.4 percentage points more likely to have obtained a Statement of Accomplishment than a student who did not take the quiz that day, relative to the very low base of 0.6 percent who obtained the credential. If procrastination is correlated with some other unobserved characteristic (such as ability), this estimate would be biased, probably upward. To deal with this endogeneity bias, I estimate a first-stage relationship that shows that, relative to a day with neither rain nor snow, a student is 2.3 percentage points less likely to take the quiz on the first day if it is raining, and 5 percentage points more likely if it is snowing. Connolly (2008) suggests that on a rainy day, people are more likely to spend more time at work. If this is so, they would have less time for their Coursera activities. On the other hand, on a snowy day, people are more likely to stay home and do their Coursera work.
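The two-stage procedure can be sketched end to end on simulated data. Everything in the data-generating process below is made up for illustration (the true effect is set to 0.15); only the design mirrors the paper: rain and snow shift day-1 quiz-taking, unobserved ability confounds OLS, and 2SLS is computed by hand with two least-squares fits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated data (illustrative coefficients, not the paper's estimates).
# Unobserved ability raises both early quiz-taking and completion, so
# OLS overstates the effect; rain and snow move quiz timing only.
ability = rng.normal(size=n)
rain = rng.binomial(1, 0.3, size=n)
snow = rng.binomial(1, 0.1, size=n)

p_take = np.clip(0.5 + 0.1 * ability - 0.2 * rain + 0.3 * snow, 0, 1)
took_day1 = (rng.random(n) < p_take).astype(float)

p_done = np.clip(0.3 + 0.15 * took_day1 + 0.08 * ability, 0, 1)  # true effect: 0.15
completed = (rng.random(n) < p_done).astype(float)

def ols(y, X):
    # least-squares coefficients, with an intercept prepended internally
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

ols_effect = ols(completed, took_day1.reshape(-1, 1))[1]  # biased upward

# 2SLS by hand: the first stage fits quiz timing on the instruments; the
# second stage regresses completion on the first stage's fitted values
Z = np.column_stack([rain, snow])
first = ols(took_day1, Z)
fitted = np.column_stack([np.ones(n), Z]) @ first
iv_effect = ols(completed, fitted.reshape(-1, 1))[1]
```

With the fixed seed, `iv_effect` lands near the true 0.15 while `ols_effect` sits above it, reproducing in miniature the pattern the paper reports (the IV estimate shrinks relative to OLS). In practice one would also report instrument-robust standard errors, which this sketch omits.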
The second-stage regression shows that being induced to take the quiz on the day it was published rather than procrastinating increases the probability of obtaining the Statement of Accomplishment by 13.8 percentage points. Thus, the estimate shrinks, because some good students self-select.10 Table 2 shows that, when controlling for country fixed effects, the effects of the weather on procrastination are statistically insignificant. Remember that students from almost 200 countries are taking this course; it is therefore not surprising that adding so many covariates has this effect. Nevertheless, if rain and snow remain valid instruments for procrastination, the second-stage regression shows that taking the quiz the day it is published increases the probability of obtaining the certificate by almost 20 percentage points. Finally, notice that my IV strategy only considers the first day of the course. To incorporate the other quizzes into the model, I would need a dynamic structural model. That type of model can account for the weather and the decisions made every day. For example, to model the second day, I would need to consider how the choices made the first day affected the choices made the second day. That additional complication is not present when only day 1 is considered. Although a dynamic structural model can be more realistic, it can also be computationally intractable. This type of model is solved by backward induction: we solve the problem for the last day given all the decisions made before, which determine the current state space. Given the size of that state space, many simplifying assumptions would have to be made to estimate the model. It is true that my simple IV strategy is more limited than a dynamic structural model. Nevertheless, it serves the goal of motivating the randomized control trial described below.

Randomized Email Nudge

I divided the group of 24,122 students into randomly assigned treatment and control groups.
On February 17, 2014, the day quiz 6 was published, I sent an email to the students in the treatment group, as shown in Figure 8. This email was a low-cost directive nudge encouraging students not to procrastinate.11 Following What Works Clearinghouse (WWC) standards, Table 3 shows that this design satisfies baseline equivalence. The WWC defines effect size as the difference in means of the intervention and comparison groups over the pooled standard deviation. The table shows an effect size of 0.016 on the pre-test (quiz 1, before the intervention), which satisfies baseline equivalence. Table 4 shows the effect of the treatment on quiz 6 outcomes. In order to deal with attrition, I assign a grade of zero to students who did not attempt the quiz. “Took Q6” shows that students in the treatment group were 0.8 percentage points, or 7.74 percent, more likely to take the quiz. “maxGrade” shows that students assigned to the treatment group scored seven points, or 8.92 percent, better on their best attempt on quiz 6 than students in the control group. “Procrastination” suggests that, conditional on taking the quiz at some point, students assigned to the treatment group procrastinated 2.1 fewer hours on average; this difference is not statistically significant. The nudge affected not only students’ behavior on quiz 6, but also their choices to go back and take quizzes 1 to 5 for the first time. For example, Table 5 shows that students receiving the email nudge were about half a percentage point more likely to attempt quizzes 1 to 5 for the first time after the intervention than students in the control group. The students who were nudged into taking the quizzes increase my measure of procrastination in the treatment group.12 Students assigned to the treatment group were more likely not only to complete quiz 6, but also to obtain a Statement of Accomplishment.
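The WWC effect size used above for the baseline-equivalence check is straightforward to compute. A sketch with made-up score lists (not the experiment’s data):

```python
import math

def pooled_effect_size(treat, control):
    # WWC-style effect size: difference in group means divided by the
    # pooled standard deviation of the two groups
    n1, n2 = len(treat), len(control)
    m1 = sum(treat) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treat) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

A pre-test difference as small as the 0.016 reported here comfortably clears the WWC's equivalence bar; under those standards, larger baseline differences (roughly 0.05 to 0.25 standard deviations) would instead require statistical adjustment.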
Table 6 shows that students assigned to the treatment group were 0.8 percentage points, or 16.85 percent, more likely to obtain the course certificate. For an intervention with negligible cost, this is an important effect. Finally, Table 7 shows that the effect on outcomes is heterogeneous across countries. Germans assigned to the treatment group were 166 percent more likely to obtain the certificate, Spaniards were 69 percent more likely, and Indians were 37 percent more likely. On the other hand, the treatment had a negative effect in Japan and Denmark, with rates 84 percent and 81 percent lower, respectively. Finally, in most countries, including the US, there is no effect.

Conclusions

This paper examines the role of procrastination in achievement. Understanding whether inducing procrastinators to take action causally improves their learning is fundamental to designing a course with incentives that maximize student achievement. First, I show that a student is less likely to take a quiz on a rainy day and more likely to take a quiz on a snowy day. Using weather as an instrument, I show that attempting the first quiz on the day it is published increases the probability of obtaining the Statement of Accomplishment by 13.8 percentage points. Second, I show that a directive nudge can strongly increase achievement. For example, students in the treatment group were 0.8 percentage points, or 16.85 percent, more likely to obtain the course certificate. For an intervention with negligible cost, this is an important effect. However, the effects are heterogeneous across countries. For example, Germans, Spaniards, and Indians assigned to the treatment group were more likely to obtain the course certificate, with rates that were 166 percent, 69 percent, and 37 percent higher, respectively. On the other hand, the treatment had a negative effect in Japan and Denmark, with rates 84 percent and 81 percent lower, respectively.
In order to understand the causes of this heterogeneity, more data are needed. For example, it is possible that the level of education or employment status differs among the countries, or the variance may simply be caused by cultural differences. Current Coursera survey data are not good enough to address this question, but better data may be available in the future, as Coursera is constantly improving its platform. Future research should explore other directive nudges to improve achievement. For example, Martinez (2014a) shows evidence that telling students how they are performing relative to their peers can improve ranking by 8.43 percentage points, on average. Future interventions could test the impact of telling students how those at the top of the class are using their time before attempting a quiz. Additional questions include whether there is a critical time for nudging and whether follow-ups help.

Acknowledgements

I am grateful to Sarah Turner, Leora Friedberg, Kristin Palmer, James McMath, Jacalyn Huband, and Katherine Holcomb. I am solely responsible for any errors. The most recent version of this paper is available at http://goo.gl/QPJpDo.

Notes

1. Jefferson offered this advice on more than one occasion. See http://www.monticello.org/site/jefferson/canons-conduct

2. In this model, the student is rational and has time-consistent preferences. The student will choose to procrastinate if that is the optimal choice given his information set. An alternative model could generate procrastination by assuming agents have time-inconsistent preferences, as in Choi et al. (2003).

3. For simplicity, I assume perfect information about the weather.

4. This is equivalent to assuming that the cost of exertion is lower for a high-ability student.

5. Adding direct utility from grades would allow this model to explain students attempting the quiz even though they do not need to do so in order to obtain W.

6. These weather data are publicly available at http://goo.gl/x2UrXq.

7.
At the time of enrollment students had to answer the following question: “How many of the assignments and quizzes do you intend to do?” Only students who selected “all” were considered for this study. Other students may have little or no intention of taking the quizzes. 8. In this paper I use their daily data, but it is possible to access hourly records. 9. Results for women are weak. A rainy day is associated with three more minutes at work. 10. Note that Table 1 uses a linear probability model. Wooldridge (2012) says “… the linear probability model is useful and often applied in economics. It usually works well for values of the independent variables that are near the average in the sample.” Angrist and Pischke (2008) give several empirical examples where the marginal effects of a dummy variable estimated by LPM and probit are indistinguishable. IV probit cannot be used here because the endogenous regressor is discrete. Vytlacil and Yildiz (2007) propose nonparametric identification and estimation of the average effect of a dummy endogenous regressor in models where the regressors are weakly but not additively separable from the error term. A dynamic discrete choice model with 41 periods and new choices appearing every week could be a better specification, but it would be more costly to solve and does not provide any clear advantage over this simple one. 11. I inserted a unique white pixel in the body of the email; when it loaded from the server, it confirmed that the email was opened. Thus, I can confirm that 43.02 percent of these emails were opened. Because images are blocked by default on some email clients, these numbers are lower bounds for the open rates. The open rate is crucial to calculating the effect of the treatment on the treated. 12. For example, imagine that there are only two students in the treatment and two in the control group. One student in each group took quiz 1 before the intervention. 
After the intervention, the student in the treatment group decides to stop procrastinating and takes the quiz, but the student in the control group keeps on procrastinating. This will increase my measure of procrastination for the treatment group.

References

Akerlof, G. A. (1991). Procrastination and obedience. American Economic Review, 81(2), 1–19.
Anderson, G. M., & Block, W. (1995). Procrastination, obedience, and public policy: The irrelevance of salience. American Journal of Economics and Sociology, 54(2), 201–215.
Angrist, J. D., & Krueger, A. B. (2001). Instrumental variables and the search for identification: From supply and demand to natural experiments. The Journal of Economic Perspectives, 15(4), 69–85.
Angrist, J. D., & Pischke, J. (2008). Mostly harmless econometrics: An empiricist's companion. Princeton University Press.
Banerjee, A. V., & Duflo, E. (2014). (Dis)organization and success in an economics MOOC. The American Economic Review, 104(5), 514–518.
Castleman, B. L., & Page, L. C. (2015). Summer nudging: Can personalized text messages and peer mentor outreach increase college going among low-income high school graduates? Journal of Economic Behavior & Organization, 115, 144–160.
Choi, J. J., Laibson, D., Madrian, B. C., & Metrick, A. (2003). Optimal defaults. American Economic Review, 180–185.
Chun Chu, A. H., & Choi, J. N. (2005). Rethinking procrastination: Positive effects of active procrastination behavior on attitudes and performance. The Journal of Social Psychology, 145(3), 245–264.
Connolly, M. (2008). Here comes the rain again: Weather and the intertemporal substitution of leisure. Journal of Labor Economics, 26(1), 73–100.
Diver, P., & Martinez, I. (2015). MOOCs as a massive research laboratory: Opportunities and challenges. Distance Education.
Ellis, A., & Knaus, W. J. (1977). Overcoming procrastination: Or how to think and act rationally in spite of life's inevitable hassles. Institute for Rational Living.
Ericson, K. M. M. (2014).
On the interaction of memory and procrastination: Implications for reminders. Working Paper 20381, National Bureau of Economic Research. URL http://www.nber.org/papers/w20381.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5), 1449–1475.
Knaus, W. J. (2001). Procrastination, blame, and change. Journal of Social Behavior and Personality, 15(5), 153–166.
Martinez, I. (2014a). The effects of informational nudges on students' effort and performance: Lessons from a MOOC.
Martinez, I. (2014b). The Hawthorne effect in MOOCs.
O'Donoghue, T., & Rabin, M. (2001). Choice and procrastination. The Quarterly Journal of Economics, 116(1), 121–160.
Siegfried, J. J. (2001). Principles for a successful undergraduate economics honors program. The Journal of Economic Education, 32(2), 169–177.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
Vytlacil, E., & Yildiz, N. (2007). Dummy endogenous variables in weakly separable models. Econometrica, 75(3), 757–779.
Wooldridge, J. (2012). Introductory econometrics: A modern approach. Cengage Learning.

Table 1. Regression results, taking quiz 1 the day published and achievement

                                          Dependent variable
                        Certificate                   Took quiz 1 on Jan. 13    Certificate
                        OLS (1)                       First stage (2)           Second stage (3)
Took quiz 1 on Jan. 13  0.154*** (0.003)                                        0.138** (0.063)
Rain on Jan. 13a                                      −0.023*** (0.006)
Snow on Jan. 13b                                      0.050** (0.020)
Longitude                                             −0.0002*** (0.00004)
Constant                0.006*** (0.002)              0.291*** (0.004)          0.011 (0.018)
Observations            23,463                        23,463                    23,463
R2                      0.102                         0.002                     0.101
Adjusted R2             0.102                         0.002                     0.101
Residual std. error     0.206 (df = 23461)            0.451 (df = 23459)        0.206 (df = 23461)
F statistic             2,678.940*** (df = 1; 23461)  17.446*** (df = 3; 23459)

Note. *p < .1; **p < .05; ***p < .01.
a Dummy variable equal to 1 if it rained more than 0.1 inches the day quiz 1 was published.
b Dummy variable equal to 1 if it snowed more than 0.1 inches the day quiz 1 was published.

Table 2. Regression results, taking quiz 1 the day published and achievement with country fixed effects

                                          Dependent variable
                        Certificate                  Took quiz 1 on Jan. 13      Certificate
                        OLS (1)                      First stage (2)             Second stage (3)
Took quiz 1 on Jan. 13  0.153*** (0.003)                                         0.198*** (0.023)
Rain on Jan. 13a                                     0.005 (0.011)
Snow on Jan. 13b                                     0.035 (0.024)
Constant                −0.002 (0.003)               0.322*** (0.006)            −0.007 (0.007)
Country-level FE        X                            X                           X
Observations            23,463                       23,463                      23,463
R2                      0.111                        0.018                       0.094
Adjusted R2             0.104                        0.010                       0.094
Residual std. error     0.206 (df = 23292)           0.449 (df = 23291)          0.207 (df = 23461)
F statistic             17.055*** (df = 170; 23292)  2.433*** (df = 171; 23291)

Note. *p < .1; **p < .05; ***p < .01.
a Dummy variable equal to 1 if it rained more than 0.1 inches the day quiz 1 was published.
b Dummy variable equal to 1 if it snowed more than 0.1 inches the day quiz 1 was published.

Table 3. Baseline equivalence

                 Effect size  Treatment  Control  90% confidence interval  p-value
Pre-test         0.016        2.0436     1.9906   (−0.0171, 0.123)         0.2135
Average age      0.0211       34.6108    34.386   (−0.3971, 0.8464)        0.5522
Proportion male  0.032        0.6916     0.6767   (−0.0066, 0.0364)        0.2552

Table 4. Quiz 6

            Effect size  Treatment  Control  90% confidence interval  p-value
maxGrade    0.0272       0.8506     0.7809   (0.0154, 0.124)          0.0347
firstGrade  0.0183       0.5278     0.4966   (−0.005, 0.0673)         0.1561
Attempts    0.0289       0.2461     0.2241   (0.0059, 0.0381)         0.0247
Took Q6     0.0244       0.1002     0.093    (0.001, 0.0135)          0.058
n                        12,061     12,061

Table 5. Attempting quizzes 1–5

        Effect size  Treatment  Control  90% confidence interval  p-value
Quiz 1  0.0237       0.0196     0.0164   (0.0003, 0.006)          0.0657
Quiz 2  0.025        0.0268     0.0229   (0.0006, 0.0072)         0.0518
Quiz 3  0.0286       0.0337     0.0287   (0.0013, 0.0087)         0.0262
Quiz 4  0.0184       0.0409     0.0373   (−0.0005, 0.0077)        0.1532
Quiz 5  0.025        0.0564     0.0507   (0.0009, 0.0104)         0.0518
n                    12,061     12,061

Table 6. Achievement

             Effect size  Treatment  Control  90% confidence interval  p-value
Certificate  0.0357       0.0541     0.0463   (0.0032, 0.0124)         0.0056
n                         12,061     12,061

Table 7. Achievement by country

Country               Effect size  Treatment  Control  90% confidence interval  p-value  n Treatment  n Control
India                 0.0795       0.0697     0.0507   (0.0041, 0.0338)         0.0356   1421         1360
Germany               0.2299       0.0844     0.0317   (0.01, 0.0954)           0.0427   154          189
Bulgaria              0.4189       0.1304     0.0204   (0.0195, 0.2006)         0.0468   46           49
Japan                 −0.3326      0.0137     0.0857   (−0.1325, −0.0116)       0.0508   73           70
Saudi Arabia          0.2975       0.0732     0.0125   (0.0084, 0.113)          0.0568   82           80
Spain                 0.1521       0.101      0.0596   (0.004, 0.0788)          0.0687   287          285
Denmark               −0.3493      0.02       0.1042   (−0.1657, −0.0027)       0.0896   50           48
Czech Republic        0.4877       0.2174     0.0571   (−0.0034, 0.3239)        0.1069   23           35
Slovenia              −0.6737      0.0667     0.3333   (−0.541, 0.0077)         0.109    15           12
Belgium               −0.2851      0          0.0435   (−0.0945, 0.0076)        0.1596   39           46
Bangladesh            −0.2947      0          0.0513   (−0.1116, 0.009)         0.16     26           39
Croatia               −0.4291      0          0.0833   (−0.1821, 0.0154)        0.1617   28           24
Trinidad and Tobago   0.4044       0.0952     0        (−0.018, 0.2084)         0.1623   21           14
Syrian Arab Republic  0.6649       0.2        0        (−0.0444, 0.4444)        0.1679   10           11
Malaysia              0.2279       0.0725     0.0244   (−0.0113, 0.1075)        0.1823   69           82
Portugal              0.1632       0.063      0.029    (−0.0089, 0.0769)        0.1916   127          138
Brazil                0.058        0.0378     0.0275   (−0.0027, 0.0233)        0.1927   1006         1019
Turkey                0.2046       0.0526     0.0149   (−0.0117, 0.0871)        0.2081   76           67
Greece                −0.1733      0.0556     0.102    (−0.1091, 0.0161)        0.2214   108          98
Argentina             −0.3253      0.0278     0.1071   (−0.1901, 0.0313)        0.2343   36           28
Thailand              0.2612       0.0732     0.0196   (−0.0227, 0.1298)        0.245    41           51
Ghana                 0.1896       0.0847     0.0385   (−0.0291, 0.1217)        0.3105   59           52
Ireland               0.2176       0.0256     0        (−0.0176, 0.0689)        0.3236   39           33
Egypt                 −0.1549      0.0179     0.0465   (−0.0767, 0.0194)        0.3247   56           86
…
United States         0.0021       0.0476     0.0472   (−0.0088, 0.0096)        0.9364   2898         2883

Figure 1. Procrastination and achievement
Figure 2. Timeline
Figure 3. I am … years old
Figure 4. What ONE of the following best describes your current situation?
Figure 5. My highest level of education is
Figure 6. How important is it for you to obtain the Statement of Accomplishment at the end of the course?
Figure 7. People have different reasons for taking the course. Please check all of the reasons that STRONGLY influenced your decision to take Foundations of Business Strategy.
Figure 8. Treatment