Direct Evidence on Sticky Information from the Revision Behavior of Professional Forecasters Karlyn Mitchell Department of Business Management Poole College of Management North Carolina State University Raleigh, NC 27695-7229 [email protected] Douglas K. Pearce* Department of Economics Poole College of Management North Carolina State University Raleigh, NC 27695-8110 [email protected] * Corresponding author April 1, 2017 Direct Evidence on Sticky Information from the Revision Behavior of Professional Forecasters Karlyn Mitchell Department of Business Management Poole College of Management North Carolina State University Douglas K. Pearce Department of Economics Poole College of Management North Carolina State University ABSTRACT We provide evidence on the sticky-information model of Mankiw and Reis (2002) by examining how often individual professional forecasters revise their forecasts. We draw interest rate and unemployment rate forecasts from the monthly Wall Street Journal surveys. We find evidence that forecasters frequently leave forecasts unchanged but revise more often the larger the changes in the information set; additionally, the information sensitivity of revision frequencies increased after 2007. We also find that, on average, forecasters in our sample revise more frequently than found in previous research but that revised forecasts are not consistently more accurate. Keywords: Expectations, Sticky Information, Forecasts, Survey Data JEL Codes: C53, D83, D84, E37, E47 1. Introduction How economic agents process information to form expectations continues to be a central issue in macroeconomics. Recent work proposes alternatives to the full information, rational expectations model that presumes agents form expectations from complete information and revise them when relevant new information appears. Woodford (2003) relaxes the full information assumption to develop a model in which agents extract signals from noisy information (the noisy-information model). Sims (2003) considers limits to information processing which lead rational agents to form expectations from incomplete information (the rational inattention model). Reis (2006) and Mankiw and Reis (2002) posit significant costs of acquiring and processing information that deter agents from updating their information sets and revising their expectations every time new information arrives (the sticky-information model). The sticky-information model has received empirical support from Mankiw, Reis, and Wolfers (2003) and Coibion and Gorodnichenko (2015), who examine indirectly the frequency with which professional forecasters revise their forecasts. Mankiw, Reis, and Wolfers first simulate inflation forecasts of agents who asynchronously collect information and revise their forecasts using a sticky-information model. They then compare dispersion in the simulated forecasts to dispersion in the actual forecasts of professional forecasters (consumers) and find that the simulated series mirrors the actual series most closely when the agents revise their inflation expectations about every 10 months (12.5 months).1 Coibion and Gorodnichenko (2015) assume that professional forecasters make full information, rational expectations forecasts but that costs prevent some from revising their forecasts every period. 
They estimate the frequency of forecast revision by regressing the average forecast error for a specific horizon on 1 Mankiw, Reis, and Wolfers (2003) use the Livingston Survey for professional forecasts and the Michigan Survey of Consumer Attitudes and Behavior for consumer expectations. Like Mankiw, Reis, and Wolfers, Carroll (2003) also finds that households revise their expectations about once a year, based partially on professional forecasts. 1 the revision of the average forecast. They conclude that forecasters revise their inflation forecasts once every 6 to7 months, on average.2 Later work investigates the sticky-information model using more direct methods. Andrade and LeBihan (2013) measure the fraction of forecasters revising their forecasts each quarter in the European Survey of Professional Forecasters. They find that, on average, forecasters update their inflation forecasts about every 4 months, more frequently than found by Mankiw, Reis, and Wolfers (2003) and Coibion and Gorodnichenko (2015).3 Pfajfar and Santoro (2013) examine numbers of households revising their inflation forecasts in the Michigan Survey. They find that households are more likely to revise their expectations the more newspaper reports about inflation have appeared recently. Dovern et al. (2015) examine monthly GDP forecasts by individual professional forecasters in thirty-six countries assembled by Consensus Economics and find that forecasters revise their forecasts about every three months. These findings challenge the sticky-information model (Coibion, 2015). In this paper, we produce new evidence on the sticky-information model by studying monthly forecasts of three economic variables made one to twelve months ahead by individual, professional economists in the Wall Street Journal (WSJ) Economic Forecasting Survey from 2003 to 2014. Our evidence is new because forecasts from the WSJ survey have not, to our knowledge, been used for this purpose. We begin by documenting properties of the economists’ forecasts and forecast revision behavior. Then, we estimate models of their revision behavior to study how forecast horizon and economic changes affect the economists’ propensities to revise 2 Coibion and Gorodnichenko (2015) use the quarterly Survey of Professional Forecasters (SPF). Mertens and Nason (2015) estimate a model similar to that of Coibion and Gorodnichenko (2015) on forecasts of the GDP deflator from the SPF and find that forecasters reduced their revision frequency from about every 5 months in the 1970s to about every 7-8 months after 2000. 3 Andrade and LeBihan (2013) use the quarterly European Survey of Professional Forecasters. Armantier et al. (2016) conduct an experiment on how households revise their inflation expectations and find that 42-47 percent do not revise their expectations when given the opportunity. 2 forecasts. The dependent variable in our models is the fraction of forecasters not revising their forecasts since the last survey because it implies a revision frequency measure comparable to that of Coibion and Gorodnichenko (2015). We also examine whether the economists’ revision behavior changed after the financial crisis. Then, we estimate the forecast revision model of Coibion and Gorodnichenko (2015) on our data and compare its estimated forecast revision rates with the revision rates we observe directly for the WSJ economists. Finally, we examine whether economists who revise their predictions forecast more accurately. Our use of the WSJ surveys has two main advantages. 
First, the surveys are monthly, allowing forecasters to revise forecasts more frequently than possible with either the quarterly (US and European) Surveys of Professional Forecasters or the semi-annual Livingston Survey. Second, the surveys identify each forecaster by name, allowing us to associate forecasters with their individual forecasts. Thus we can construct direct measures of forecast revision frequency and conduct direct tests of whether forecasts are state-independent and constant through time, as assumed by Coibion and Gorodnicheko (2015). Our study uses the WSJ economists’ forecasts of three variables: the 10-year Treasury bond rate, the fed funds rate, and the unemployment rate. We choose these variables because they are rarely revised after being observed, avoiding ambiguity about whether forecasters intended to forecast initially reported or revised values of a variable. We also choose them because the WSJ survey asks for forecasts of the values of these variables on specific days or months rather than asking for forecast averages over rolling horizons, like many other surveys. Compared with forecast averages, single-date values yield cleaner measures of revision frequency by avoiding revisions which correct for earlier forecast errors.4 4 For example, if we are forecasting the average annual inflation rate for 2014 and we make our new forecast in, say, May 2014 after observing the actual monthly inflation rate for April 2014, we may change our forecast by replacing 3 To preview our results, we find some support for the sticky-information model in that substantial numbers of forecasters do not revise their forecasts of the three variables we study at every opportunity. The fraction of non-revisers varies with the variable forecasted. The fraction is also state dependent: forecasters are more likely to revise their forecasts of a variable the greater the change in that variable since the prior survey. This finding is significant because indirect methods of studying revision frequency presume state-independent forecaster behavior. We document how one such method misestimates the directly observed forecast revision frequency in the WSJ survey. While we find that the WSJ forecasters do not revise their forecasts at every opportunity, we also find that the average time between revisions is shorter than reported by most previous studies, casting some doubt on how well the sticky-information model can account for the persistence of macro-economic shocks. Additionally, we find that recently revised forecasts are not consistently more accurate than unrevised forecasts.5 The rest of the paper is organized as follows. Section 2 details our data. Section 3 describes our tests of the state dependency of forecast revision behavior and reports our results. Section 4 presents extensions of our basic model. Section 5 concludes our paper. 2. The Data 2.1. The Wall Street Journal Survey We take our data from the Wall Street Journal Economic Forecasting Survey from March 2003 through December 2014. The forecasters include the chief economists from large commercial banks and investment banks, heads of forecasting firms, and prominent business our previous expectation of the April 2014 inflation rate with the actual value. We would be classified as revising our forecast even if we did not change our expectations of monthly inflation for months from May to December. 5 Pfajfar and Santoro (2013) report a similar finding. 
The inability of professional forecasters to forecast more accurately despite updating may support the noisy- information model. Dräger et al. (2016) find that professional forecasters and consumers who form expectations consistent with economic theory forecast more accurately. 4 economists from industry. The economists submit forecasts of several economic variables in the first or second week of each month and the WSJ publishes them on-line shortly thereafter.6 Economists’ names and employers appear along with their forecasts, unlike the Livingston Survey, the (US and European) Surveys of Professional Forecasters and Consensus Economics. This is important because it permits us to follow individual economists as they change their employers and ensures that we record forecast revisions only for economists participating in consecutive surveys.7 Over our sample period the number of economists in each survey ranges from 45 to 60, averaging about 54. A total of 101 economists appear in our sample. Features of the WSJ survey make it well-suited for investigating the sticky-information model. The WSJ asks economists to predict the June 30 and December 31 values for the 10-year Treasury bond rate and the fed funds rate and the June and December unemployment rate, inter alia. (Before June 2007, the WSJ requested forecasts of the unemployment rate for May and November). Since end dates are fixed, economists’ forecast horizons decline over time, from twelve months to one. The economists can potentially access very recent information before making their forecasts: current interest rate data are available almost contemporaneously and the unemployment rate for a given month is announced on the first Friday of the next month, generally before the new WSJ forecasts are due.8 6 Monthly surveys began in March 2003 but until 2008 no forecasts were collected at the start of January or July. The WSJ Economic Forecasting Survey web site is: http://projects.wsj.com/econforecast/#ind=gdp&r=20. Prior to March 2003, the WSJ surveyed economists twice a year. For an analysis of the semi-annual forecasts, see Mitchell and Pearce (2007). 7 Engelberg, Manski, and Williams (2011, footnote 9) note that id numbers in the Survey of Professional Forecasters need not identify the same individuals over time. As a referee noted, following institutions might be preferable to following individuals if forecasts are made by institution-specific models without forecaster adjustments. 8 Survey results posted at the WSJ survey web site contain some apparent errors. In instances where a forecaster’s prediction is substantially different from the prediction for the same target date in the preceding and succeeding surveys, we consider the prediction a probable transcription error. For example, one forecaster predicted that on December 31, 2008 the 10-year bond rate would be 3.88 % in the September survey, 1.27 % in the October survey and 3.68 % in the November survey. Appendix A lists the probable errors. We omit the questionable data points in the results reported here, but including them has little effect. 5 Figure 1 plots the surveyed economists’ 4-months-ahead forecasts of the 10-year bond rate, the fed funds rate, and the unemployment rate by target date.9 Horizontal bars denote actual rates on the target dates. 
The plots show that the economists differ in their opinions, often substantially, as is typical for forecast surveys.10 The sticky-information model explains differing opinions as differences in the dates on which economists updated their forecasts, an explanation which assumes economists make full-information, rational predictions whenever they update. In general, differences in forecasts may reflect differential access to information, differences in forecasting models, different loss functions, and/or differing prior beliefs (Manzan, 2011). The economists’ 10-year Treasury bond rate forecasts exhibit roughly the same degree of dispersion and accuracy throughout the 12-year sample period. The spread of forecasts in a typical survey is about 150 basis points. In about half the surveys, the forecasts cluster mainly above or mainly below the actual bond rate on the target date. The economists did not foresee the plunge in the bond rate to 2.25% in December 2008 from around 4% during most of 2008. The economists’ fed funds rate forecasts seem to reflect their reading of monetary policy. Specifically, between March 2003 and mid-December 2008 the Federal Reserve set singlevalued funds rate targets which it moved in multiples of 25 basis points. In this same period the economists’ forecasts are also in multiples of 25 basis points, with a spread of about 100 basis points. The spread widened to 250 basis points in the March 2008 survey asking for the funds rate on June 30, 2008, reflecting greater uncertainty. No one predicted the end-of-year drop in the funds rate. In December 2008, the Fed simultaneously abandoned the single-valued target for a target range of 0 to 25 basis points and issued forward guidance indicating it would hold the 9 The 4-month horizon is representative of middle range forecasts. For comparison, we show 10-months-ahead and 2-months-ahead forecasts in Appendix B. We do not show 12-months-ahead and 1-month-ahead forecasts because they were not collected before 2008. 10 Mankiw and Reis (2003) use differences of opinion in forecast surveys to motivate their sticky-information model. Dovern et al. (2015) presents a rigorous analysis of differences in opinion across three surveys of professional forecasters. 6 funds rate within this range for a considerable time (Campbell et al., 2012). These changes may have reduced the frequency of subsequent forecast revisions. Post-2008 most forecasts are inside the target range, with a few forecasts above 50 basis points in multiples of 25 basis points. Forecasts of the unemployment rate resemble forecasts of the bond rate in dispersion and accuracy. The range of forecasts in a given survey is 1 to 1.5 percentage points. In about half the surveys, the forecasts are roughly evenly divided between under- and over-predicting the actual unemployment rate on the target date. Just as the economists failed to foresee declining interest rates, they failed to foresee climbing unemployment in late 2008 and early 2009. 2.2. Forecast Revisions We construct a direct measure of economists’ forecast revision behavior to investigate information rigidity. We presume an economist revising a prior forecast updated his information set before announcing the revision. Andrade and Le Bihan (2013, p. 973) observe that a forecaster could update his information and not revise his forecast, thus forecast revision frequency should be viewed as a lower bound on information updating frequency. 
Since actual and predicted interest rates (unemployment rates) are reported to 2 decimal places (1 decimal place), forecasters would revise their forecasts only if they predicted interest rate (unemployment rate) changes of at least 1 basis point (10 basis points). We compute our direct measure of forecast revision behavior as follows. We identify the number of economists who supplied forecasts on both survey dates t-1 and t and then compute the fraction of those economists who did not revise their forecasts, a fraction we call Nochanget; we do this for every survey date. Nochanget is comparable to the proportion of forecasters not updating, λ, that Coibion and Gorodnichenko (2015) estimate. Unlike λ, Nochanget is a direct 7 measure of the fraction of forecasters who do not update which requires no assumptions about forecasting method or forecaster rationality.11 Figure 2 displays Nochanget for forecasts of the 10-year bond rate, fed funds rate, and unemployment rate at each forecast horizon averaged across all surveys. For bond rate forecasts, Nochanget averages about 0.35 for most horizons but appears lower for the one- and sevenmonth horizons. Economists in the WSJ survey make forecasts for these two horizons at the same time, since the start of June (December) is one month before June 30 (December 31) and seven months before December 31 (June 30), the dates for which they predict the bond rate. However, a formal test that the Nochanget averages are equal across horizons does not reject this hypothesis.12 Nochanget averages of about 0.35 imply that the economists revised their bond rate forecasts roughly twice every three months. For unemployment rate forecasts, the economists’ forecast revision behavior resembles their behavior for bond rate forecasts: Nochanget ranged between 0.30 and 0.45 with no apparent relationship between revision rate and forecast horizon, implying economists revised their unemployment rate forecasts about twice every three months. The surveyed economists revised their fed funds rate forecasts less frequently than their bond rate or unemployment rate forecasts. For the fed funds rate forecasts, Nochanget averages about 0.65, implying a revision rate of no more than once every three months. If instead of predicting the actual fed funds rate the economists were predicting the fed funds rate target, this revision rate suggests they expected a target change at about every other meeting of the Federal Open Market Committee (FOMC), which meets roughly twice every three months. The 11 Changes in revision frequency could arise from changes in the panel of forecasters. While there is turnover in the panel, about two-thirds of all revisions come from participants who responded to about eighty percent of the surveys. See Engelberg Manski, and Williams (2011) for a discussion of how changes in the panel could affect the usefulness of mean or consensus forecasts. 12 Hotelling T2 tests indicate that the average values of Nochanget are not significantly different for the three variables, with F(10,1) values of 2.81, .64, and 10.01 for the bond rate, unemployment rate, and fed funds rate, respectively. Coibion and Gorodnichenko (2015) report that their measure of information rigidity does not appear to vary across forecast horizon. 8 economists may, in fact, have been updating their information sets more frequently than their fed funds rate forecasts if new information was insufficient to predict a change in Fed policy. 
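As an illustration, Nochanget can be computed from a long-format panel of individual forecasts along the following lines. The sketch is schematic: the column names (forecaster, survey_date, target_date, forecast) are placeholders rather than the labels in our files, and consecutive surveys are matched by their order in the sample to allow for the missing January and July surveys before 2008.

    import pandas as pd

    # Sketch of the Nochange_t calculation for one forecasted variable: pair each
    # forecaster's forecast of a given target date with the forecast the same person
    # submitted in the immediately preceding survey, then take the fraction of
    # unchanged forecasts at each survey date.
    def nochange_series(d):
        d = d.copy()  # d: pandas DataFrame, one row per forecaster, survey, and target date
        # Consecutive surveys are identified by their order rather than by calendar
        # month, since no surveys were taken in January or July before 2008.
        order = {date: i for i, date in enumerate(sorted(d["survey_date"].unique()))}
        d["survey_no"] = d["survey_date"].map(order)
        prev = d[["forecaster", "target_date", "survey_no", "forecast"]].copy()
        prev["survey_no"] += 1
        paired = d.merge(prev, on=["forecaster", "target_date", "survey_no"],
                         suffixes=("", "_prev"))   # keeps only respondents to consecutive surveys
        paired["unrevised"] = (paired["forecast"] == paired["forecast_prev"]).astype(int)
        return paired.groupby("survey_date")["unrevised"].mean()   # Nochange_t, one value per survey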
The behavior of Nochanget both contrasts with and confirms findings of Coibion and Gorodnichenko (2015). The average revision rates we observe for economists in the WSJ survey forecasting the bond, fed funds and unemployment rates are greater than the revision rates they estimate for economists in the Survey of Professional Forecasters forecasting the inflation rate and other variables. These differences may reflect the difference between monthly and quarterly surveys.13 Our finding that substantial proportions of forecasters forgo revising their forecasts at every opportunity supports Coibion and Gorodnichenko’s interpretation of their results as originating from costly revision rather than noisy information. However, the higher average revision rates we observe also cast doubt on whether infrequently revised expectations can account for the persistent effects of shocks at a quarterly frequency. Figure 3 documents heterogeneity in the forecast revision behavior of the WSJ economists with histograms of non-revision frequency. Specifically, for each economist having at least 25 chances to revise a prior forecast of a variable we compute the percentage of forecasts not revised; we do this for economists’ forecasts of the bond rate, the fed funds rate and the unemployment rate. We then group the percentages into ten categories (0-10%, 11-20%, etc.) and compute the percentage of economists in each category. (Economists who revised every forecast (no forecasts) are in the 0-10% (91-100%) category.) The economists show considerable heterogeneity in revising forecasts of all three variables, but especially the bond rate. About one-third of economists revised their bond rate 13 Estimates reported in Figure1, Panel B of Coibion and Gorodnichenko (2015) imply forecast revision times for a long-term interest rate (the AAA bond rate), a short-term interest rate (the three-month Treasury Bill rate), and the unemployment rate of about once every 3.6 months, 4.5 months, and 5 months respectively. Andrade and LeBihan (2013) report that forecasters in the European Survey of Professional Forecasters revise their forecasts about once every four months. Of course, quarterly data restrict the minimum forecast revision time to once every 3 months. Mankiw, Reis, and Wolfers (2003) report substantially less frequent revisions, once every 10 to 12 months. 9 forecasts frequently, leaving 20% or less of their forecasts unchanged (Panel A). Nearly half left from 31% to 60% of their forecasts unchanged, while the remaining one-sixth left 61% to 80% of their forecasts unchanged. The economists showed generally more reluctance to revise their fed funds rate forecasts, likely reflecting the timing of FOMC meetings. Over the full sample period, about sixty percent of economists did not revise between 51% and 70% of their forecasts (Panel B). They behaved similarly over the 2003-2008 sub-period when the Fed used a singlevalue funds rate target (Panel C). The economists’ behavior is most homogeneous in revising unemployment rate forecasts (Panel D): their unrevised forecast percentages span one less category than their interest rate forecasts and the distribution of economists is fairly symmetric.14 The foregoing evidence on forecast revision frequency is consistent with the notion that the costs of acquiring and processing information prevent forecasters from updating their forecasts whenever new information becomes available. Heterogeneity in revision behavior suggests that costs and/or benefits vary across forecasters. 
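The Figure 3 histograms can be reproduced from the same kind of revision-opportunity table (one row per forecaster, survey, and target date for a given variable, with the 0/1 unrevised flag constructed above); a minimal sketch, with illustrative names and the 25-observation cutoff used in the figure, follows.

    import pandas as pd

    # Per-forecaster non-revision rates for one variable, restricted to forecasters
    # with at least 25 revision opportunities and binned into the deciles of Figure 3.
    def nonrevision_distribution(paired, min_obs=25):
        stats = paired.groupby("forecaster")["unrevised"].agg(["mean", "size"])
        stats = stats[stats["size"] >= min_obs]
        pct_unrevised = 100 * stats["mean"]
        bins = list(range(0, 101, 10))                 # 0-10%, 11-20%, ..., 91-100%
        counts = pd.cut(pct_unrevised, bins=bins, include_lowest=True)
        counts = counts.value_counts().sort_index()
        return 100 * counts / counts.sum()             # percent of forecasters in each bin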
This evidence begs the question of whether forecast revision rate is independent of the size of recent changes in the variable being forecasted. Coibion and Gorodnichenko (2015) assume in their framework that the revision rate is not state dependent, although they find evidence that more volatile periods exhibit less information stickiness.15 We address this question in the next section. 14 We also investigated the role of employer type in forecast revision behavior. We defined ten employer types: commercial banks, investment banks, investment-advising firms, forecasting and research firms, insurance companies, other financial institutions (e.g., Fannie Mae), bond-rating firms, academia, professional associations, and nonfinancial institutions. Using a subsample of economists who responded to at least 25 surveys we computed the mean frequency of non-revision by employer type. Only economists at “other” financial institutions and bondrating firms have significantly different mean revision rates, revising their forecasts more frequently than economists at other employer types. They represent only about 5 percent of the WSJ economists, however. 15 Coibion and Gorodnicheko (2015) report evidence that forecasters revise less frequently during the Great Moderation. They note that “recessions, as periods of increased volatility, should be times when economic agents update and process information faster than in expansions since the (relative) cost of ignoring macroeconomic shocks in recession rises.” (page 2674) 10 3. Is the Degree of Information Stickiness State Dependent? 3.1. The Model Empirically testing the state dependency of forecasters’ forecast revision processes requires us to model changes in the information set for the economy. While we cannot measure all incoming information forecasters might access, we can measure one seemingly important piece of information: the amount of recent change in the variable a forecaster is predicting. In an efficient Treasury bond market, for example, bond rate changes since the last survey should be a good measure of new information which embeds itself in the current rate. Analogous arguments can be made about changes in the funds rate and the unemployment rate. A practical advantage of representing changes in the information set by recent changes in the variables forecasted is that actual values of these variables are available to all economists at virtually no cost. Some extreme examples illustrate the effect of information set changes on forecasts. Specifically, after the Fed lowered the fed funds rate target by 125 basis points in January 2008 all economists in the February 2008 survey revised their fed funds rate predictions for June 30, 2008 and nearly all revised their predictions for December 31, 2008. Similarly, after seeing the funds rate target fall by 100 basis points during October 2008 nearly all economists in the November 2008 survey revised their fed funds rate forecasts for December 31, 2008. We use the timing of the WSJ survey to define our change variables. While we observe neither the exact date an economist submits a forecast nor the most recent value of the forecasted variable he observed prior to submission, we do know that the WSJ assembles its surveys in the first or second week of each month. This fact leads us to compute the change in the actual bond rate, fed funds rate, and fed funds rate target from the last business day of the month before the prior survey to the last business day of the month before the current survey. 
Analogously, we compute the change in the unemployment rate as the difference in unemployment rates announced at the starts of the prior and current months.16

Our forecast revision model relates the fraction of economists not revising their forecasts of a variable (Nochanget) to the absolute change in that variable in the prior month (bond rate, |∆it-1|; fed funds rate, |∆ffrt-1|; or unemployment rate, |∆Ut-1|) and to the forecast horizon. We allow the horizon to have a nonlinear effect by including indicator variables for each horizon:

Nochanget = α + β |∆variablet-1| + Σj=s,…,S γj Djt + et      (1)

where Djt is a zero-one indicator for a forecast horizon of length j.17 We expect larger values of |∆it-1|, |∆ffrt-1|, and |∆Ut-1| to cause more economists to revise their forecasts, leading β to be negative. The signs of the γj are unclear: Figure 2 shows that the unconditional means of Nochanget may rise or fall as the target date grows more distant, but differences in the unconditional means by horizon are not statistically significant, as noted earlier.

The design of the WSJ survey leads us to estimate equation (1) for two different sets of forecast horizons. At each survey, participants make shorter-horizon (1- to 6-month-ahead) and longer-horizon (7- to 12-month-ahead) forecasts of each variable. For example, the March survey reports bond rate, fed funds rate and unemployment rate forecasts the economists made at the start of March for the ends of June and December, four and ten months ahead, respectively.18 New information arriving between the February and March surveys may affect economists' June and December forecasts. Given this survey design, we estimate equation (1) separately on data for shorter- and longer-horizon forecasts. In estimates on shorter-horizon data, j = {2,3,4,5,6} with j=1 being the omitted category; in estimates on longer-horizon data, j = {8,9,10,11,12} with j=7 being the omitted category.

16 The Bureau of Labor Statistics announces the unemployment rate on the first Friday of a month for the previous month. Thus, for example, we presume that economists submitting March 2010 unemployment rate forecasts for June 2010 have observed the change in the unemployment rate from January 2010 to February 2010. We use the announced unemployment rates in the real-time data set from the Federal Reserve Bank of Philadelphia (see Croushore and Stark, 2001) to ensure that survey participants had access to this information, since there are slight adjustments subsequent to the initial unemployment rate announcements.
17 Since our dependent variable ranges from zero to one, OLS could give misleading results as it does not impose this restriction. Consequently, we also estimated the models using the quasi-maximum-likelihood estimation method of Papke and Wooldridge (1996). The results, which are very similar to the OLS results, appear in Appendix C.
18 Before June 2007 the WSJ survey reported economists' unemployment rate forecasts for May and November.

3.2. Model Estimates for the Full Sample Period

Table 1 reports estimates of equation (1) on data for the 2003-2014 sample period. Initial estimates produced F-tests favoring constrained versions of equation (1); Table 1 reports these F-tests and estimates of the constrained models. (Unconstrained estimates are available upon request.)
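A minimal sketch of how equation (1) can be estimated for one variable and one horizon group appears below. The data frame and its column names (nochange, abs_change, horizon) are illustrative, heteroskedasticity-robust (HC1) standard errors are one common choice, and the fractional-response estimator of Papke and Wooldridge (1996) noted in footnote 17 could be substituted by a binomial GLM with a logit link.

    import statsmodels.formula.api as smf

    # Unconstrained equation (1): Nochange_t regressed on the absolute prior-month
    # change in the forecasted variable plus a full set of horizon indicators, with
    # the shortest horizon in the subsample (1 or 7 months) as the omitted category.
    def estimate_eq1(data, short_horizons=True):
        lo, hi = (1, 6) if short_horizons else (7, 12)
        d = data[data["horizon"].between(lo, hi)]
        return smf.ols("nochange ~ abs_change + C(horizon)", data=d).fit(cov_type="HC1")

    # Example: print(estimate_eq1(bond_data).summary())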
Specifically, F-tests on the unconstrained equation (1) estimate using shorter-horizon bond rate forecasts imply that the coefficients of Dj, j={2,…,6}, are all non-zero but jointly equal; an analogous statement applies to the equation (1) estimate using longer-horizon forecasts. Column 1.1 (1.2) reports the constrained model estimate on shorter-horizon (longer-horizon) forecasts. The constraint is imposed by replacing Dj, j={2,…,6}, with D1, an indicator for a one-month horizon. (The constraint is imposed analogously in column 1.2.) When |∆it-1| = 0, the constrained estimate implies that 38% (44%) of forecasters do not change their prior-month bond rate forecasts with horizons of 2 to 6 months (8 to 12 months), and 23% (32%) do not change forecasts with horizons of one month (seven months). When |∆it-1| ≠ 0, non-revisions decline significantly: a two-standard-deviation change in the 10-year bond rate, about 38 basis points, reduces the percentage of non-revisers by about 11 percentage points at all horizons (-.30 x .38).

Columns 1.3-1.6 report estimates of equation (1) on fed funds rate forecasts with no horizon effects. (In unconstrained model estimates, F-tests cannot reject the hypothesis that all γj = 0, j={2,…,6} and j={8,…,12}.) We report model estimates for two alternative information variables: |∆ffrt-1| and |∆ffrtargett-1|, the absolute change in the effective funds rate and the Fed's funds rate target, respectively. We use the latter on grounds that economists may consider target changes as well as actual funds rate changes when forecasting. When |∆ffrt-1| = 0 or |∆ffrtargett-1| = 0, about two-thirds of forecasters do not revise their shorter-run forecasts (columns 1.3 and 1.5) and about sixty percent do not revise their longer-run forecasts (columns 1.4 and 1.6). Twenty-five-basis-point changes in the actual and target rates reduce Nochanget for shorter-horizon forecasts by 7 and 13 percentage points, respectively, and reduce it for longer-horizon forecasts by 7 and 9 percentage points, respectively.19

The last three columns of Table 1 report estimates of equation (1) on unemployment rate forecasts. (In unconstrained model estimates, F-tests cannot reject the hypothesis that all γj = 0, j={2,…,6}, but can reject the hypotheses that all γj = 0 and that all γj are equal, j={8,…,12}; further F-tests show that γ8 = γ9 = 0 and γ10 = γ11 = γ12 < 0.) Column 1.7 reports an estimate of equation (1) on shorter-horizon unemployment rate forecasts without horizon indicators. With no change in the actual unemployment rate from the prior month, about 46% of forecasters leave their shorter-horizon forecasts unrevised. A two-standard-deviation change in the unemployment rate, about 24 basis points, reduces this fraction by about 16 percentage points. A similarly constrained model estimated on longer-horizon forecasts yields nearly identical results (column 1.8). Adding the horizon indicator D10+, defined as D10 + D11 + D12, reveals a small horizon effect (column 1.9). Specifically, with an unchanged unemployment rate, 50% of economists leave their unemployment rate forecasts unrevised 7 to 9 months before the target date whereas only 42% leave forecasts unrevised 10 to 12 months before the target date.
A two-standard-deviation change in the unemployment rate reduces both percentages by about 14 percentage points.20 19 When we include both rate changes in the same model, only the funds rate target change has a significant coefficient in the model estimate using shorter-horizon forecasts while neither rate change has a significant coefficient in the model estimate using longer-horizon forecasts. 20 As noted earlier, the WSJ economists make six forecasts at each survey – three variables and two horizons. To study the possibility that an economist makes joint forecasts, we computed correlation coefficients between pairs of Nochanget measures. All fifteen coefficients are non-negative with the highest correlations between Nochanget measures of the same variable at shorter and longer horizons (about .85). Less correlated are Nochanget measures of the bond rate and the funds rate (.35-.50). Coefficients are smaller for the remaining pairs of measures; half are statistically insignificant. To accommodate possibly joint forecasts, we estimated the models reported in Table 1 by seemingly unrelated regressions. Since SUR estimation requires balanced panels, we lose some observations. These 14 In summary, the evidence in Table 1 reveals three patterns. First, changes in the variables economists forecast reduce the percentages of unrevised forecasts, consistent with state dependency of forecast revisions. Second, recent changes in the variables economists forecast do not push the percentages of unrevised forecasts to zero, consistent with the sticky information model. Third, forecast horizon has little measurable effect on forecast revision frequency. 3.3. Did Forecaster Behavior Change After the Financial Crisis? We put the hypothesis of state-dependent forecast revisions to a stronger test by exploiting the presence in our sample period of both the end of the Great Moderation and the 2007-2009 financial crisis and its aftermath. Prior academic research shows that volatility in many economic variables increased starting in 2007 (Clark, 2009; Stock and Watson, 2012). The unanticipated bankruptcy of Lehman Brothers in September 2008 radically changed perceptions about “too big to fail,” the reliability of government interventions into financial markets, and financial market fragility. Andrade and LeBihan (2013) find greater dispersion in forecasts of professional European economists after 2007 and Dovern (2013) reports higher probabilities that international forecasters revised their forecasts during recessions. With greater economic uncertainty post-2008, we expect that forecasters revised their forecasts more frequently following changes in the information set. We test this theory by comparing estimates of constrained versions of equation (1) produced by forecasts from 2003-2007 and from 2008-2014. Table 2 reports model estimates from bond rate and unemployment rate forecasts but not from fed funds rate forecasts, since the Fed’s funds rate target remained unchanged after December 2008.21 estimates are reported in Appendix D, Tables D1 and D2. Breusch-Pagan tests indicate contemporaneously correlated residuals. Nevertheless, the SUR estimates are qualitatively very similar to the OLS estimates reported in Table 1. 21 Model estimates on data from before the December 2008 decision are very similar to those reported in Table 2 for the whole period and are reported in Appendix D, Table D3. 
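In Table 2 we estimate the two sub-periods separately and test coefficient equality with F tests; an alternative way to implement a similar comparison is to pool the data and interact every regressor with a post-2007 indicator, as in the sketch below (all variable names are placeholders).

    import numpy as np
    import statsmodels.formula.api as smf

    # Pooled version of the sub-period comparison: each regressor in equation (1) is
    # interacted with an indicator for surveys from 2008 on, and the post-2007 shift
    # and interaction terms are tested jointly.
    def subperiod_break_test(d):
        d = d.copy()
        d["post"] = (d["survey_date"] >= "2008-01-01").astype(int)
        res = smf.ols("nochange ~ (abs_change + C(horizon)) * post",
                      data=d).fit(cov_type="HC1")
        names = list(res.params.index)
        post_cols = [i for i, n in enumerate(names) if "post" in n]
        R = np.zeros((len(post_cols), len(names)))     # one restriction per post term
        for row, col in enumerate(post_cols):
            R[row, col] = 1.0
        return res, res.f_test(R)                      # rejection implies a post-2007 break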
Model estimates from bond rate forecasts show that post-2007, fewer economists revised prior forecasts with an unchanged bond rate and more revised with a changed rate (Table 2, Panel A). F-tests show significant differences in the model estimates for the two sub-periods. For shorter-horizon forecasts pre-2008, an unchanged bond rate yields a Nochanget estimate of about 33% for 2- to 6-month forecast horizons and about 7% for 1-month horizons; post-2007 the estimates are 40% and 25%. Pre-2008, a 38-basis-point (two-standard-deviation) bond rate change from the prior month reduces Nochanget by about 5 percentage points; post-2007 the reduction is about 14 percentage points. Longer-horizon forecasts show a similar pattern.

Model estimates from unemployment rate forecasts show that forecasters were more sensitive to unemployment rate changes after 2007 (Table 2, Panel B). Pre-2008, an unemployment rate change the month before a survey has no significant effect on Nochanget for either shorter- or longer-horizon forecasts; post-2007, a 24-basis-point (two-standard-deviation) change reduces Nochanget by about 15 percentage points for both forecast horizons. With no change in the unemployment rate the month before a survey, the estimate of Nochanget is roughly the same for both sub-periods and both forecast horizons: between 42% and 49%.22

22 The estimate of Nochanget is about 35% for forecast horizons of ten or more months in the 2003-2007 sub-period.

4. Extensions

4.1. Estimates of the Coibion-Gorodnichenko Model

In this section we compare our direct method of testing the sticky-information model using estimates of equation (1) with the indirect method developed by Coibion and Gorodnichenko (2015), hereafter CG. CG develop a model to infer the average forecast revision frequency for a sample of forecasters whose individual forecasts are unobserved. CG assume that forecasters form full information, rational expectations predictions whenever they forecast but that frictions prevent continuous updating. CG also assume the probability of an individual forecaster revising a forecast on a given date is (1-λ), making the average time between revisions 1/(1-λ). CG derive a relationship between the average forecast error and the change in the average forecast:

xt+h – Ft xt+h = [λ/(1-λ)] (Ft xt+h – Ft-1 xt+h) + vt+h,t      (2)

where xt+h is the actual value of the variable forecasted h periods ahead and Ft xt+h is the average forecast at time t across all forecasters in the survey. CG note that an estimate of equation (2) on aggregate data yields an estimate of λ/(1-λ), from which the average time between revisions may be inferred (a sketch of this calculation appears below). Support for the sticky-information model comes from evidence that λ > 0 so that 1/(1-λ) > 1, that is, the average time between forecast revisions exceeds the time between surveys.

Since the probability of non-revision, λ, is analogous to our directly observed measure of non-revision, Nochanget, we compare values of the two measures produced by forecasts from the WSJ survey. Table 3 reports our results. Panel A shows estimates of equation (2) on shorter- and longer-horizon forecasts of the bond rate, the funds rate and the unemployment rate. Statistically insignificant estimates of β = λ/(1-λ) in the model estimates using bond rate forecasts and shorter-horizon funds rate forecasts imply λ estimates of zero (columns 3.1a – 3.3a).
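As an implementation aside, estimation of equation (2) and the implied revision interval can be sketched as follows; the consensus-level data frame and its column names (actual, mean_forecast, mean_forecast_prev) are illustrative, no constant is included because equation (2) has none, and the mapping from β to λ follows footnote 23.

    import statsmodels.formula.api as smf

    # CG regression: the average forecast error on the revision of the average
    # forecast, run separately by variable and horizon group (1-6 or 7-12 months).
    def cg_revision_rate(agg):
        a = agg.copy()
        a["error"] = a["actual"] - a["mean_forecast"]
        a["revision"] = a["mean_forecast"] - a["mean_forecast_prev"]
        res = smf.ols("error ~ revision - 1", data=a).fit()   # "- 1" drops the intercept
        beta = res.params["revision"]
        lam = beta / (1.0 + beta)             # implied probability of not updating
        months_between = 1.0 + beta           # = 1/(1 - lambda)
        return res, lam, months_between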
Conversely, statistically significant estimates of β in the estimates using longer-horizon funds rate forecasts and unemployment rate forecasts imply λ estimates exceeding zero (columns 3.4a – 3.6a). Panel B reports average numbers of months between forecast revisions produced by the two methods. Values reported for the CG model are inferred from the β estimates in Panel A; values reported for the Nochanget model are computed from the average values of Nochanget at each forecast horizon, displayed as Figure 2.23 The direct and indirect methodologies produce 23 In the CG model, β = [λ/(1- λ)], λ = β/(1+β). The average time between revisions, 1/(1-λ), is 1+ β when the β estimate is statistically significant, and one otherwise. Nochanget is conceptually similar to λ. The average number of months between forecast revision is 1/(1- avg Nochange) where avg Nochange is the value of Nochange t for a 17 different estimates of forecast revision frequency. For the bond rate, the CG model estimates imply monthly revision of forecasts whereas the Nochanget averages imply revisions closer to every month and a half (columns 3.1b and 3.2b). For the fed funds rate, the CG model estimates imply revisions once a month for shorter-horizon forecasts and once every two and two-thirds months for longer-horizon forecasts; the Nochanget averages imply revisions closer to once every three months for both forecast horizons. For the unemployment rate, the CG model estimates imply revisions about every one and three-quarter months for shorter-horizon forecasts and about every four months for longer-horizon forecasts; the Nochanget averages imply revisions about every one and two-thirds months for both horizons. In summary, although the indirect and direct estimates of average revision frequency are similar in magnitude, the former can understate or overstate the observed degree of information rigidity. 4.2. Are Recently Revised Forecasts More Accurate? The heterogeneity in forecast revision behavior documented in Figure 3 may reflect differential rewards to forecast accuracy, leading us to investigate whether recently revised forecasts are more accurate than unrevised forecasts. To test this hypothesis, we compute the squared forecast error for each economist for every target date and horizon and then regress the squared forecast errors on a binary indicator variable coded one if the economist’s forecast is unchanged from the prior survey. Only forecasters who responded to both the current and previous surveys are included. Each regression has a sample size of about 50, roughly the average number of economists per survey in our 12-year sample period. Table 4 reports the given variable 1 to 6 months or 7 to 12 months before the target date averaged over all of the surveys in our sample period. 18 outcome of this experiment by reporting the percent of surveys in which revised forecasts are significantly more or less accurate than unrevised forecasts by variable and forecast horizon.24 Revised forecasts are often more accurate than unrevised forecasts but not consistently so. Differences in accuracy are greatest for the 10-year bond rate. At a one-month forecast horizon, revised forecasts are significantly more accurate than unrevised forecasts in 64% of the surveys and are significantly less accurate in 0% of the surveys; analogous metrics at a twomonth horizon are 48% and 4%. At the remaining horizons, revised forecasts are significantly more (less) accurate than unrevised forecasts in at most 30% (5%) of the surveys. 
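The survey-by-survey comparison described above can be sketched as follows for a single survey date, horizon, and variable; the column names (forecast, forecast_prev, actual) are placeholders, and the sign convention matches footnote 24.

    import statsmodels.formula.api as smf

    # Regress each respondent's squared forecast error on an indicator for leaving the
    # forecast unchanged. A significantly positive coefficient means revisers were more
    # accurate in that survey; a significantly negative coefficient means less accurate.
    def accuracy_comparison(survey_slice):
        s = survey_slice.copy()
        s["sq_error"] = (s["forecast"] - s["actual"]) ** 2
        s["unrevised"] = (s["forecast"] == s["forecast_prev"]).astype(int)
        res = smf.ols("sq_error ~ unrevised", data=s).fit()
        return res.params["unrevised"], res.pvalues["unrevised"]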
Declining forecast accuracy at longer horizons is consistent with increasingly noisy information about the target date. Differences in the accuracy of revised and unrevised unemployment rate forecasts are smaller. At a one-month horizon, revised unemployment rate forecasts are significantly more accurate than unrevised forecasts in just 28 % of the surveys and are significantly less accurate in 9% of the surveys. At the eleven other forecast horizons, revised forecasts are significantly more (less) accurate than unrevised forecasts in at most 20% (13%) of the surveys. Revised fed funds rate forecasts are significantly more accurate than unrevised forecasts only slightly more often than they are less accurate. For the majority of surveys, however, revised and unrevised forecasts of the three variables are statistically indistinguishable in accuracy. 5. Conclusions The sticky-information model predicts that forecasters will not revise their forecasts when new information arrives if the costs exceed the benefits. This paper contributes to the evidence on sticky information in several ways. First, we test the model using data from the WSJ 24 Revisers are significantly more (less) accurate than non-revisers in a survey if the coefficient estimate of the indicator variable is statistically significant at the 10%-level or better and positive (negative). The coefficient estimates appear in Appendix E. 19 Economic Forecasting Survey, which publishes the names and forecasts of professional forecasters. From these data we can see precisely when forecasters revise their forecasts and measure rates of forecast revision without making assumptions about forecast rationality, as researchers must do when testing the sticky-information model using datasets without individuals’ forecasts. Additionally, the WSJ Survey is monthly, permitting a higher frequency investigation than prior research using quarterly, semi-annual or infrequent surveys. The paper is, to our knowledge, the first to use the WSJ Survey to evaluate the sticky-information model. Second, we investigate the state dependency of forecast revision frequency by testing whether frequency changes after an increase in the volatility of the variables forecasted. Third, we compare direct estimates of forecast revision frequency with indirectly inferred estimates produced using a technique from the literature. Finally, we examine whether forecast revision improves forecast accuracy. Our results both support the sticky-information model and cast doubt on the model’s adequacy as an explanation for the persistence of macro-economic shocks. While we find that many forecasters revise their forecasts only every other month or less frequently, we also find that forecasters revise their estimates somewhat more frequently than other researchers have found. Given that our measure of forecast revision frequency is likely a lower bound to the frequency of information updating, our results suggest that frictions other than the costs of acquiring and processing information likely play a role in the responses to economic shocks. Forecasters in the WSJ Survey revise their forecasts of the fed funds rate less frequently than forecasts of the 10-year U.S. Treasury bond rate or the unemployment rate, perhaps due to the timing of FOMC meetings. Forecast horizon appears to exert little influence on the frequency of forecast revision. 
Forecasters exhibit considerable heterogeneity in their revision frequencies, consistent with substantial variation in the costs and benefits of revising across forecasters. We 20 find evidence that forecast behavior is state dependent, with forecasters revising their forecasts more frequently in more volatile times. Our direct measures of revision frequency are similar in magnitude to those estimated indirectly, but the latter can understate or overstate the observed degree of information rigidity. Finally, we find only weak evidence that revising forecasts improves forecast accuracy, particularly at longer horizons. 21 References Andrade, P. and H. Le Bihan. “Inattentive Professional Forecasters.” Journal of Monetary Economics, 60(8), 2013, 967-82. Armantier, O., S. Nelson, G. Topa, W. Van der Klaauw and B. Zafar. “The Price is Right: Updating Inflation Expectations in a Randomized Price Information Experiment.” Review of Economics and Statistics, 98(3), 2016, 503-523. Campbell, J.R., C.L. Evans, J.D. Fisher, A. Justiniano, C.W. Calomiris and M. Woodford. “Macroeconomic Effects of Federal Reserve Forward Guidance” [with comments and discussion]. Brookings Papers on Economic Activity, Spring 2012, 1–80. Carroll, C. “Macroeconomic Expectations of Households and Professional Forecasters.” Quarterly Journal of Economics, 118(1), 2003, 269-298. Clark, T.E. “Is the Great Moderation Over? An Empirical Analysis.” Federal Reserve Bank of Kansas City Economic Review, Fourth Quarter 2009, 5-42. Coibion, O. “Comments on Dovern, Fritsche, Loungani and Tamarisa.” International Journal of Forecasting, 31(1), 2015, 155-6. Coibion, O. and Y. Gorodnichenko. “Information Rigidity and the Expectations Formation Process: A Simple Framework and New Facts.” American Economic Review, 105(8), 2015, 2644-78. Croushore, D. and T. Stark. "A Real-Time Data Set for Macroeconomists." Journal of Econometrics, 105(1), 2001, 111-30. Dovern, J. “When are GDP forecasts updated? Evidence from a large international panel.” Economics Letters, 120(3), 2013, 521-524. Dovern, J. “A Multivariate Analysis of Forecast Disgreement: Confronting Models of Disagreement with Survey Data.” European Economic Review, 80(1), 2015, 16-35. Dovern, J, U. Fritsche, P. Loungani and N. Tamirisa. “Information Rigidities: Comparing Average and Individual Forecasts for a Large International Panel.” International Journal of Forecasting, 31(1), 2015, 144-54. Dräger, L., M.J. Lamla and D. Pfajfar. “Are Survey Expectations Theory-Consistent? The Role of Central Bank Communication and News.” European Economic Review, 85(1), 2016, 84-111. Engelberg, J., C.F. Manski and J. Williams. “Assessing the Temporal Variation of Macroeconomic Forecasts by a Panel of Changing Composition.” Journal of Applied Econometrics, 26(7), 2011, 1059-78. 22 Mankiw, N. and R. Reis. “Sticky Information Versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve.” Quarterly Journal of Economics, 117(4), 2002, 1295-328. Mankiw, N., R. Reis and J. Wolfers. “Disagreement about Inflation Expectations.” NBER Macroeconomic Annual, 18, 2003, 209-48. Manzan, S. “Differential Interpretation in the Survey of Professional Forecasters.” Journal of Money, Credit and Banking, 43(5), 2011, 993-1017. Mertens, E. and J.M. Nason. “Inflation and Professional Forecast Dynamic: An Evaluation of Stickiness, Persistence, and Volatility.” Working Paper 06/2015, Centre for Applied Macroeconomic Analysis, Crawford School of Public Policy, Australian National University, 2015. Mitchell, K. 
and D. Pearce. “Professional Forecasts of Interest Rates and Exchange Rates: Evidence from the Wall Street Journal's Panel of Economists.” Journal of Macroeconomics, 29(4), 2007, 840-54. Papke, L.E. and J.M. Wooldridge. “Econometric Methods for Fractional Response Variables with an Application to 401(k) Plan Participation Rates.” Journal of Applied Econometrics, 11(6), 1996, 619-32. Patton, A.J. and A. Timmermann. “Why Do Forecasters Disagree? Lessons from the Term Structure of Cross-Sectional Disagreement.” Journal of Monetary Economics, 57(7), 2010, 80320. Pfajfar, D. and E. Santoro. “News on Inflation and the Epidemiology of Inflation Expectations.” Journal of Money, Credit and Banking, 45(6), 2013, 1045-67. Reis, R. “Inattentive Producers.” Review of Economic Studies, 73(3), 2006, 793-821. Sims, C.A. “Implications of Rational Inattention.” Journal of Monetary Economics, 50(3), 2003, 665-90. Stock, J.H. and M.W. Watson. “Disentangling the Channels of the 2007-2009 Recession.” Brookings Papers on Economic Activity, Spring 2012, 81-135. Woodford, M. “Imperfect Common Knowledge and the Effects of Monetary Policy.” In Knowledge, Information, and Expectations in Modern Macroeconomics: In Honor of Edmund Phelp, edited by P. Aghion, R. Frydman., J. Stiglitz, and M. Woodford, Princeton University Press, 2003, 25-58. 23 Table 1 Forecast Revision and Recent Changes in Forecasted Variable General Model: Nochanget = α + β |Δxt-1| + Σ γj Djt + et Forecast Revisions of: Horizon Length: Column: 10-year Bond Rate 1-6 (1.1) 7 - 12 (1.2) -.298** (.059) -.150** (.028) -.297** (.055) Fed Funds Rate Effective Target 1-6 7 - 12 1-6 7 - 12 (1.3) (1.4) (1.5) (1.6) Unemployment Rate 1-6 (1.7) 7 - 12 (1.8) 7 - 12 (1.9) -.655** (.089) -.629** (.083) -.597** (.080) Explanatory Variable: |Δxt-1| D1 -.282** (.053) -.260** (.044) -.536** (.047) -.351** (.071) -.117** (.023) D7 -.079** (.023) D10+ Constant F tests: all γj=0 all γj= R2 Sample size .379** (.016) .442** (.023) .681** (.017) .614** (.016) .678** (.016) .605** (.016) 4.65** 0.70 5.20** 0.36 .44 .43 .74 .36 .88 .87 .72 .51 .333 121 .279 121 .117 126 .088 125 .274 126 .125 125 .463** (.018) .471** (.018) 1.79 2.20 2.61* 3.21* .221 131 .272 119 .500** (.019) .337 119 The table reports OLS estimates of the forecast revision model shown for the 2003-2014 sample period. Nochange is the fraction of forecasters in the current WSJ survey with forecasts unchanged from the prior survey. Forecasts are of the 10-year Treasury bond rate, the fed funds rate and the unemployment rate on a target date (30 June or 31 December). Separate estimates are reported for surveys 1-6 months and 7-12 months before the target date. |Δxt-1| is the absolute change in x from the last business day of the month before the prior survey to the last business day of the month before the current survey; x is the bond rate, the effective fed funds rate, the target fed funds rate and the unemployment rate in columns (1.1)-(1.2), (1.3)-(1.4), (1.5)-(1.6), and (1.7)-(1.9), respectively. Dj = 1 if j is the number of months until the forecast target date (30 June or 31 December) and 0 otherwise. D10+ =1 if the number of months until the forecast target date is 10 or more and 0 otherwise. Robust standard errors appear in parentheses. The F-tests are from unconstrained estimates of the models in which the full set of horizon indicators appear (Dj, j={2,...,6} or j={8,…,12}). ** and * denote statistical significance at the 05. and .10 levels, respectively. 
24 Table 2 Constancy of Forecaster Revision Behavior, 2003-2007 versus 2008-2014 Panel A: 10-year Bond Rate Forecast Revisions: Nochanget = β0 + β1 |∆it-1| + β2 Djt + et Horizon Length: Sample Period: Column: Explanatory Variable: |∆it-1| D1 1-6 months 2003-07 2008-14 (2.1a) (2.2a) -.140* (.077) -.256*** (.030) -.361*** (.088) -.146*** (.031) D7 Constant .328** (.023) F tests across time: β0, β1, β2 = R2 Sample size .400** (.021) 7-12 months 2003-07 2008-14 (2.3a) (2.4a) -.158** (.064) -.383*** (.088) -.111*** (.035) .375*** (.024) -.103*** (.033) .471*** (.021) 5.69*** .235 37 .389 84 3.75** .193 45 .341 76 Panel B: Unemployment Rate Forecasts Revisions: Nochanget = β0 + β1 |∆Ut-1| + β2 Djt + et Horizon Length: Sample Period: Column: Explanatory Variable: |∆Ut-1| 1-6 months 2003-07 2008-14 (2.1b) (2.2b) -.114 (.301) -.639*** (.094) .468*** (.035) .428*** (.024) D10+ Constant F tests across time: β0, β1, β2 = R2 Sample size 7-12 months 2003-07 2008-14 (2.3b) (2.4b) .052 (.240) -.136*** (.034) .489*** (.028) 8.27*** .003 49 .272 82 -.617*** (.088) -.047 (.030) .470*** (.028) 4.74*** .261 41 .350 78 The table reports OLS estimates of the forecast revision models shown. Forecasts are of the 10-year Treasury bond rate or the unemployment rate on 30 June or 31 December. Separate estimates are reported for surveys 1-6 and 7-12 months before the target date. In Panel A, Nochange is the fraction of forecasters in the current WSJ survey with bond rate forecasts unchanged from the prior survey. |∆i t-1| is the absolute change in the bond rate from the last business day of the month before the prior survey to the last business day of the month before the current survey. D1 =1 (D7 =1) if the number of months until the forecast target date is 1 (7) and 0 otherwise. In Panel B, Nochange is the fraction of forecasters in the current survey with unemployment rate forecasts unchanged from the prior survey. |∆U t-1| is the absolute change in the unemployment rate from the last business day of the month before the prior survey to the last business day of the month before the current survey. D10+ =1 if the number of months until the forecast target date is 10 or more and 0 otherwise. In both panels robust standard errors appear in parentheses. ***, ** and * denote statistical significance at the .01, .05 and .10 levels, respectively. 25 Table 3 The Coibion-Gorodnichenko Model Panel A: CG Model estimates: xt+h – Ft xt+h = β [Ft xt+h – Ft-1 xt+h] + εt+h Average Forecast Errors of: Horizon Length: Column: Explanatory Variable: 10-year Bond Rate 1-6 7-12 (3.1a) (3.2a) Fed Funds Rate 1-6 7-12 (3.3a) (3.4a) [Ft xt+h – Ft-1 xt+h] .238 .216 .100 .286 .517 .340 F tests, horizon effects: R2 Sample size 0.60 .007 130 0.57 .001 108 1.07 .031 135 1.650*** .356 1.35 .171 112 Unemployment Rate 1-6 7-12 (3.5a) (3.6a) .783** .355 1.26 .080 135 3.081*** .559 0.60 .304 112 Panel B: Average Number of Months between Forecast Revisions: Indirect and Direct Estimates Forecast Revisions of: Horizon Length: Column: Average number of months between forecast revisions: 10-year Bond Rate 1-6 7-12 (3.1b) (3.2b) Fed Funds Rate 1-6 7-12 (3.3b) (3.4b) Unemployment Rate 1-6 7-12 (3.5b) (3.6b) CG Model 1.00 1.00 1.00 2.65 1.78 4.08 Nochanget 1.41 1.54 3.35 2.75 1.61 1.62 Panel A reports estimates of the sticky-information model of Coibion and Gorodnichenko (2015) using the WSJ forecasts of the 10-year Treasury bond rate, the fed funds rate and the unemployment rate on 30 June or 31 December. 
xt+h is the actual value of the variable forecasted h periods ahead and Ft xt+h is the average forecast across all forecasters at time t. xt and Ft xt+h refer to the bond rate, the fed funds rate and the unemployment rate in columns (3.1a)-(3.2a), (3.3a)-(3.4a), and (3.5a)-(3.6a), respectively. Separate estimates are reported for surveys 1-6 months and 7-12 months before the target date. *** and ** denote statistical significance at the .01 and .05 levels, respectively. F tests are for unreported model estimates which include dummy variables permitting different intercepts and slope coefficients by forecast horizon; the F tests are for the hypothesis that these coefficients are jointly zero. Panel B compares the average number of months between forecast revisions from the CG model estimates in Panel A and the direct measures of forecast revision plotted in Figure 2. In the CG model the average number of months between forecast revisions is 1/(1-λ) = 1+β if β is statistically significant, and zero otherwise. Nochanget is conceptually similar to λ. The average number of months between forecast revisions is 1/(1- avg Nochange) where avg Nochange is the average value of Nochange for a given variable 1 to 6 months or 7 to 12 months before the target date averaged over all of the surveys in our sample period. 26 Table 4 Forecast Accuracy of Revised versus Unrevised Forecasts, by Variable Forecasted and Forecast Horizon Horizon, in months: 1 2 3 4 5 6 7 8 9 10 11 12 10-year bond rate: Revisers more accurate Revisers less accurate Number of surveys (8/2003 – 12/2014) 64 0 14 48 4 23 30 9 23 30 0 23 26 4 23 21 0 14 30 4 22 14 5 22 18 5 22 23 0 22 14 0 22 9 0 11 Fed funds rate: Revisers more accurate Revisers less accurate Number of surveys (1/2003 – 6/2008) 0 50 2 23 18 11 18 18 11 9 18 11 20 0 10 0 0 1 20 0 10 20 0 10 10 10 10 10 20 10 10 10 10 Unemployment rate: Revisers more accurate Revisers less accurate Number of surveys (1/2003 – 12/2014) 28 9 22 13 0 23 13 4 24 8 13 24 20 7 15 9 5 22 9 4 23 13 9 23 0 9 23 13 9 23 14 0 14 Variable forecasted: % of surveys: 15 8 13 This table summarizes statistically significant differences between the mean squared forecast errors of forecasters who did and did not revise their forecasts from the previous survey. For each target date and forecast horizon we first compute the squared forecast error of every economist and then regress the squared forecast errors on a binary indicator variable coded one if the economist’s forecast is unchanged from the prior survey. Each regression has a sample size of about 50. (1-, 5-, 6-, 11- and 12-months-ahead forecasts of some variables are unavailable because the WSJ did not consistently request them. For the fed funds rate, comparisons of forecast accuracy stop after mid-2008 when the Federal Reserve pegged the funds rate target.) Revised forecasts are more (less) accurate if the estimated coefficient of the indicator variable is statistically significant at the 10% level or better and positive (negative). The estimated coefficients are reported in Appendix E. 
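The conversions reported in Panel B of Table 3 follow directly from these formulas. The short sketch below uses figures taken from the table and is purely illustrative; note that an insignificant β corresponds to updating every survey, i.e., an implied interval of one month, which is how the 1.00 entries in Panel B arise.

def months_from_cg_beta(beta):
    # CG model: average months between revisions = 1/(1 - lambda) = 1 + beta.
    return 1.0 + beta

def months_from_nochange(avg_nochange):
    # Direct measure: average months between revisions = 1/(1 - average Nochange).
    return 1.0 / (1.0 - avg_nochange)

# Fed funds rate, 7-12 month horizon (columns 3.4a/3.4b): beta = 1.650, so 1 + 1.650 = 2.65 months.
# Unemployment rate, 7-12 month horizon (columns 3.6a/3.6b): beta = 3.081, so about 4.08 months.
# An insignificant beta is treated as zero, giving 1 + 0 = 1.00 month (revision every survey).
# Working backward, the 3.35-month entry for the fed funds rate at the 1-6 month horizons
# implies an average Nochange of about 1 - 1/3.35, or roughly 0.70.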
Figure 1
4-Months-Ahead Forecasts of the Bond Rate, Fed Funds Rate and Unemployment Rate
Panel A: 10-year bond rate. [Figure: 4-months-ahead forecasts and the actual 10-year bond rate, in percent.]
Panel B: Fed funds rate. [Figure: 4-months-ahead forecasts and the actual fed funds rate, in percent.]
Panel C: Unemployment rate. [Figure: 4-months-ahead forecasts and the actual unemployment rate, in percent.]
Figure 2
Nochanget, the Fraction of Forecasters Not Revising Forecasts, by Forecast Horizon
[Figure: Nochange plotted against the forecast horizon (1-12 months) for the 10-year bond rate, the fed funds rate and the unemployment rate.]
Figure 3
Distribution of Forecasters by Percent of Unrevised Forecasts
Panel A: 10-year bond rate forecasts, 2003 - 2014
Panel B: Fed funds rate forecasts, 2003 - 2014
Panel C: Fed funds rate forecasts, 2003 - 2008
Panel D: Unemployment rate forecasts, 2003 - 2014
[Each panel plots the percent of forecasters against the percent of unrevised forecasts.]
Appendix A
Questionable Entries in the WSJ Survey Data
1. 10-year Bond Rate Forecasts
Survey: June 2008. Target date: June 2008. Questionable data: Prakken & Varvares forecast is 1.65 with previous forecast of 3.55. Correction: omitted.
Survey: June 2008. Target date: Dec 2008. Questionable data: Prakken & Varvares forecast is 2.13 with previous forecast of 4.1 and subsequent forecast of 4.2. Correction: omitted.
Survey: Oct 2008. Target date: Dec 2008. Questionable data: Prakken & Varvares forecast is 1.27 with previous forecast of 3.88 and subsequent forecast of 3.68. Correction: omitted.
Survey: Nov 2008. Target date: Dec 2008. Questionable data: Sterne forecast is .9 with previous forecast of 3.70 and subsequent forecast of 3.00. Correction: omitted.
Survey: Oct 2008. Target date: June 2009. Questionable data: Prakken & Varvares forecast is 1.13 with previous forecast of 4.35 and subsequent forecast of 3.36. Correction: omitted.
Survey: Nov 2008. Target date: June 2009. Questionable data: Sterne forecast is 1.3 with previous forecast of 3.70 and subsequent forecast of 3.50. Correction: omitted.
Survey: Dec 2008. Target date: June 2009. Questionable data: Wilson forecast is 1.65 with previous forecast of 2.89 and subsequent forecast of 2.80. Correction: omitted.
Survey: April 2009. Target dates: June and Dec 2009. Questionable data: Wyss forecasts recorded as .028 and .03 with previous forecasts of 2.9 and 3.1 and subsequent forecasts of 3.2 and 3.5. Correction: changed to 2.8 and 3.0.
Survey: July 2012. Target date: June 2013. Questionable data: Leamer/Shulman forecast reported as 25 with previous and subsequent forecasts of 2.2 and 2.5. Correction: changed to 2.5.
2. Fed Funds Rate Forecasts
Survey: Feb 2003. Target date: June 2003. Questionable data: Shilling forecast is .05 with previous forecast of .75 and subsequent forecast of .5. Correction: omitted.
Survey: Sept 2009. Target date: Dec 2009. Questionable data: Johnson forecasts are recorded as -.125 instead of .125. Correction: corrected to .125.
Survey: June 2011. Target date: Dec 2011. Questionable data: Maki forecast is .0125 with previous forecast of .125 and subsequent forecast of .125. Correction: corrected to .125.
Survey: June 2011. Target date: June 2012. Questionable data: Maki forecast is .0125 with previous forecast of .125 and subsequent forecast of .125. Correction: corrected to .125.
Survey: July 2012. Target dates: all. Questionable data: several cases of forecasts recorded as .0125 with previous and subsequent forecasts of .125. Forecasters whose reported forecasts for Dec 2012 and June 2013 are .0125, while the forecasts in the surveys before and after are .125, are Behravesh, Carey, Coronado, Fiorini-Ramirez, Ethan Harris, Maury Harris, Maki, Prakken/Varvares, Resler, and Soss. These were changed to .125.
Daane also reported a forecast of .0125, but this entry is repeated in subsequent surveys.
3. Unemployment Rate Forecasts
Survey: Feb 2006. Target date: May 2006. Questionable data: Swonk forecast of 3.4 with previous forecast of 5.0 and subsequent forecast of 4.8. Correction: omitted.
Survey: Feb 2006. Target date: Nov 2006. Questionable data: Swonk forecast of 3.6 with previous forecast of 5.0 and subsequent forecast of 4.7. Correction: omitted.
Survey: August 2006. Target date: Nov 2006. Questionable data: Duncan forecast of 2.8 with previous forecast of 4.8 and subsequent forecast of 4.8. Correction: corrected to 4.8.
Survey: May 2008. Target date: June 2008. Questionable data: Sterne forecast is 2.9 with previous forecast of 5.2. Correction: omitted.
Survey: May 2008. Target date: Dec 2008. Questionable data: Sterne forecast is 2.4 with previous forecast of 5.0 and subsequent forecast of 5.1. Correction: omitted.
Survey: Dec 2008. Target date: Dec 2008. Questionable data: Brinkman forecast of 8.3 with previous forecast of 6.9. Correction: left in since subsequent surveys are also high.
Survey: Feb 2009. Target date: June 2009. Questionable data: Meil forecast is 5.8 with previous forecast of 8.3 and subsequent forecast of 9.0. Correction: omitted.
Survey: Nov 2013. Target date: Dec 2013. Questionable data: Handler forecast is 1.7 and probably should be 7.1. Correction: corrected to 7.1.
Appendix B
2- and 10-Months-Ahead Forecasts
Panel A: 2-month-ahead bond rate forecasts. [Figure; vertical axis in percent.]
Panel B: 10-month-ahead bond rate forecasts. [Figure; vertical axis in percent.]
Panel C: 2-month-ahead Federal funds rate forecasts. [Figure; vertical axis in percent.]
Panel D: 10-month-ahead Federal funds rate forecasts. [Figure; vertical axis in percent.]
Panel E: 2-month-ahead unemployment rate forecasts. [Figure; vertical axis in percent.]
Panel F: 10-month-ahead unemployment rate forecasts. [Figure; vertical axis in percent.]
Appendix C
Papke-Wooldridge Estimates of Forecast Revision and Recent Changes in Forecasted Variable
General Model: Nochanget = α + β |Δxt-1| + Σ γj Djt + et
Forecast Revisions of: Horizon Length: Column: 10-year Bond Rate 1-6 (C.1) 7 - 12 (C.2) -1.750*** (.365) [-.358] -.935*** (.222) [-.160] -1.494*** (.293) [-.339] Fed Funds Rate Effective Target 1-6 7 - 12 1-6 7 - 12 (C.3) (C.4) (C.5) (C.6) Unemployment Rate 1-6 (C.7) 7 - 12 (C.8) 7 - 12 (C.9) -3.036*** (.469) [-.714] -2.872*** (.422) [-.680] -2.743*** (.404) [-.650] Explanatory Variable: |Δxt-1| D1 -1.243*** (.301) [-.284] -1.144*** (.263) [-.278] -2.546*** (.398) [-.583] -1.510*** (.386) [-.367] -.566*** (.118) [-.120] D7 -.344*** (.102) [-.081] D10+ Constant F tests: all Dj=0 all Dj= Sample size -.424*** (.076) 20.49** 3.94 121 -.212** (.072) 24.55** 1.44 121 .758*** (.076) .471*** (.069) .751*** (.073) .428** (.066) -.125 (.078) 2.29 1.84 126 3.88 1.54 125 4.93 3.97 126 3.78 2.19 125 9.38 9.20 131 -.095 (.075) 13.27** 13.14** 119 .029** (.080) 119
The table reports Papke-Wooldridge estimates of the forecast revision model shown for the 2003-2014 sample period. Nochange is the fraction of forecasters in the current WSJ survey with forecasts unchanged from the prior survey. Forecasts are of the 10-year Treasury bond rate, the fed funds rate and the unemployment rate on a target date (30 June or 31 December). Separate estimates are reported for surveys 1-6 months and 7-12 months before the target date.
|Δxt-1| is the absolute change in x from the last business day of the month before the prior survey to the last business day of the month before the current survey; x is the bond rate, the effective fed funds rate, the target fed funds rate and the unemployment rate in columns (C.1)-(C.2), (C.3)-(C.4), (C.5)-(C.6), and (C.7)-(C.9), respectively. Dj = 1 if j is the number of months until the forecast target date (30 June or 31 December) and 0 otherwise. D10+ =1 if the number of months until the forecast target date is 10 or more and 0 otherwise. Robust standard errors appear in parentheses. Numbers in brackets are the marginal effects of the variable, comparable to the OLS estimates in Table 1. *** and **denote statistical significance at the .01 and .05 levels. 38 Appendix D Table D1 SUR Estimates of Forecast Revision and Recent Changes in Forecasted Variable General Model: Nochanget = α + β |Δxt-1| + Σ γj Djt + et Forecast Revisions of: Horizon Length: Column: 10-year Bond Rate Fed Funds Rate Unemployment Rate 1-6 (D1.1) 7 - 12 (D1.2) Effective 1-6 7 - 12 (D1.3) (D1.4) 1-6 (D1.5) 7 - 12 (D1.6) -.209** (.054) -.164** (.032) -.216** (.056) -.359** (.067) -.671** (.119) -.624** (.096) Explanatory Variable: |Δxt-1| D1 -.317** (.063) -.109** (.033) D7 -.035** (.014) D10+ Constant R2 Sample size .365** (.016) .421** (.016) .752** (.018) .684** (.018) .457** (.020) .487** (.018) .310 111 .232 111 .194 111 .163 111 .217 111 .267 111 The table reports SUR estimates of the forecast revision model shown for the 2003-2014 sample period. Nochange is the fraction of forecasters in the current WSJ survey with forecasts unchanged from the prior survey. |Δxt-1| is the absolute change in x from the last business day of the month before the prior survey to the last business day of the month before the current survey; x is the bond rate, the effective fed funds rate, and the unemployment rate in columns (D1.1)-(D1.2), (D1.3)-(D1.4) and (D.5)-(D.6), respectively. Dj = 1 if j is the number of months until the forecast target date (30 June or 31 December) and 0 otherwise. D10+ =1 if the number of months until the forecast target date is 10 or more and 0 otherwise. ** denotes statistical significance at the .05 level. Breusch-pagan test χ2(15) = 322.582 with a p value =0.000. 39 Appendix D -- continued Table D2 SUR Estimates of Forecast Revision and Recent Changes in Forecasted Variable General Model: Nochanget = α + β |Δxt-1| + Σ γj Djt + et Forecast Revisions of: 10-year Bond Rate Fed Funds Rate Unemployment Rate Target Horizon Length: Column: 1-6 (D2.1) 7 - 12 (D2.2) 1-6 (D2.3) 7 - 12 (D2.4) 1-6 (D2.5) 7 - 12 (D2.6) -.211** (.054) -.164** (.032) -.215** (.056) -.497** (.072) -.349** (.073) -.663** (.119) -.602** (.096) Explanatory Variable: |Δxt-1| D1 -.111** (.033) D7 -.035** (.015) D10+ Constant R2 Sample size .366** (.016) .421** (.016) .738** (.015) .667 (.017) .456** (.020) .484** (.018) .310 111 .232 111 .298 111 .174 111 .217 111 .267 111 The table reports SUR estimates of the forecast revision model shown for the 2003-2014 sample period. Nochange is the fraction of forecasters in the current WSJ survey with forecasts unchanged from the prior survey. |Δxt-1| is the absolute change in x from the last business day of the month before the prior survey to the last business day of the month before the current survey; x is the bond rate, the target fed funds rate and the unemployment rate in columns (D2.1)-(D2.2), (D2.3)-(D2.4), and (D2.5)-(D2.6), respectively. 
Dj = 1 if j is the number of months until the forecast target date (30 June or 31 December) and 0 otherwise. D10+ =1 if the number of months until the forecast target date is 10 or more and 0 otherwise. ** denotes statistical significance at the .05 level. Breusch-pagan test χ2(15) = 317.249 with a p value =0.000. 40 Appendix D -- continued Table D3 Estimates of Forecast Revision and Recent Changes in Forecasted Variable Prior to the Zero Lower Bound General Model: Nochanget = α + β |Δxt-1| + Σ γj Djt + et Forecast Revisions of: Horizon Length: Column: Fed Funds Rate Effective Target 1-6 7 – 12 1-6 7 - 12 (D3.1) (D3.2) (D3.3) (D3.4) Explanatory Variable: |Δxt-1| -.223** (.065) -.168** (.078) -.503** (.070) -.287** (.091) Constant .668** (.034) .564** (.029) .668** (.034) .566** (.028) .094 53 .058 53 .271 53 .141 53 R2 Sample size The table reports OLS estimates of the forecast revision model shown for the 2003-2007 sample period. Nochange is the fraction of forecasters in the current WSJ survey with forecasts unchanged from the prior survey. Forecasts are of the fed funds rate on a target date (30 June or 31 December). Separate estimates are reported for surveys 1-6 months and 7-12 months before the target date. |Δxt-1| is the absolute change in x from the last business day of the month before the prior survey to the last business day of the month before the current survey; x is the effective fed funds rate and the target fed funds rate in columns (D3.1)-(D3.2) and (D3.3)-(D3.4). Robust standard errors appear in parentheses. ** denotes statistical significance at the .05 level, respectively. 41 Appendix E Difference between Average Squared Forecast Errors of Non-Revisers and Revisers, by Horizon Panel A: 10-year Bond Rate Forecasts Horizon, in months: Target Date: Dec 2003 June 2004 Dec 2004 June 2005 Dec 2005 June 2006 Dec 2006 June 2007 Dec 2007 June 2008 Dec 2008 June 2009 Dec 2009 June 2010 Dec 2010 June 2011 Dec 2011 June 2012 Dec 2012 June 2013 Dec 2013 June 2014 Dec 2014 1 2 3 4 5 .043** 1.308*** .022 -.023 .187** .271*** .051 -.017 .170*** .019** .229** .231* .103** .032 .025 .015 -.015 .278*** .028 .028** .029** .025 .148** .035 -.340** .463*** .043 .285 .233** .121*** .009 .291*** .076 .024 .055 .224** .275** .052 -.061* .116* .055 .013 .051** .015 .041 -.060 .006 .659*** .047 .106 .016 .232* .031 .255*** .074 .169 -.113** .122 .184** .249** -.092 -.129 .112 .106 .085* .001 .018 .084 -.208 .006 -.227 .294* .097 .044 -.080 .042 .892*** .223* .200* .022 .036 .236** .435** .009 -.011 .246** .316** .131* -.002 -.045 .010 -.002 -.144 -.803** .250 .045 .034 .018 .087 .459 -.065 .217* .141** -.003 .420 .472** 6 7 8 9 10 11 12 -.054 -.242 .776*** -.024 .039 .111 -.175 .260 .196** -.048 .085 .247** .365 .284 .012 .015 .377** .336*** .097 .088 .084 -.904** .077 -.699 -.402 .272 -.002 .361** .090 .994** .092 1.471*** .001 -.006 .531 .706*** .097 -.079 .138 .182** .059 -.069 .115 -.121 -.063 -.391 .051 .132 .213 -.184 -.032 .245 -.155 -.454 .270** -.202* .388 .689*** .120 .251** .447** -.022 .145 .063 .041 -.005 -.070 -.367 -.001 .203 .054 .065 .214** -.026 .493 -.010 .244** -.208** .327 .280 -.010 .209 .704*** .155 .200* .061 .116 -.183 .082 .382 .194 .103 .058 .273 .265 -.243 .899 .579* .226** -.089 .238 .451* .231 .131 .373 .216 -.114 .242*** -.033 -.222 -.199 .390 -.388 .103 .517* .453 .215 .102 1.339 -.154 .064 .027 .0001 1.024** -.200 ---‡ .143 -.088 .125 -1.606 -.411 .103 -.110 .113 .601** .457 This panel reports differences in mean squared forecast errors of forecasters who did and 
did not revise their forecasts of the 10-year government bond rate from the previous survey for the target date shown, by months until the target date. Positive differences indicate larger mean squared errors for non-revisers. The WSJ survey did not request 1-, 6- and 12-months-ahead forecasts of the bond rate until June 2008. ***, ** and * indicate differences significantly different from zero at the .01, .05, and .10 levels. ‡ No forecast revision data are available for this date. 42 Appendix E-- continued Panel B: Fed Funds Rate Forecasts Horizon, in months: Target Date: June 2003 Dec 2003 June 2004 Dec 2004 June 2005 Dec 2005 June 2006 Dec 2006 June 2007 Dec 2007 June 2008 1 2 -.035 -.011* -.039* .008 .038** .027** .001 -.002 .024** -.010 -.023** .004 .003 3 -.076* .004 .026** .032** -.020** .019 .016 -.001 -.015 .018 -.046 4 -.039 .011 -.006 .016 -.006 -.087* .058* .012 -.150** .101 .049 5 -.064 .017 -.009 -.068 -.036 .093** .040 .015 -.124 .239** ---† 6 7 8 9 10 11 .265 .014 -.044 .066* .035 -.034 .080 .040** .209 .210 .975 .201 -.136 .214** .075 -.021 .109 -.024 .132 .051 .356* .276 -.058 .226 -.199** .181** .153 -.025 .127 -.319 -.322 .549** .006 .035 -.304** .099 -.376* -.043 .240 .019 1.210 .292 .110 .204 -.209 -.210** .524** .197 .151 -.185 .474 12 This panel reports differences in the mean squared forecast errors of forecasters who did and did not revise their forecasts of the fed funds rate from the previous survey for the target date shown, by months until the target date. Positive differences indicate larger mean squared errors for non-revisers. The WSJ survey did not request 1-, 6- and 12-months-ahead forecasts of the fed funds rate until June 2008. ***, ** and * indicate differences significantly different from zero at the .01, .05, and .10 levels. † All forecasters revised their forecasts. 
43 Appendix E-- continued Panel C: Unemployment Rate Forecasts Horizon, in months: 1 2 3 4 5 6 7 8 9 10 11 12 Target Date May 2003 -.003 .020 .060* -.031 Nov 2003 -.007 -.015 -.012 -.185 .288 -.106 -.043 .262 .269 May 2004 -.007** .010 .007 .004 .003 -.013 .082 -.129 .031 Nov 2004 -.001 -.011 -.010 .012 -.017 .024 .045* .024 .006 May 2005 -.004 .009 .003 .012 .002 -.011 .011 .007 -.043 Nov 2005 .001 -.004 -.005 -.003 .016 -.009 .007 .026 .015 May 2006 .008** .011 -.022 .047 .019 .023 -.051* -.060* .028 Nov 2006 .024** .015 .001 -.096 -.030 .005 -.004 -.069 .104* ** May 2007 .004 .033 -.020 .037 .021 .060 .100 -.070 -.142* Nov 2007 .011 -.006 .040 .048 Dec 2007 .004 -.003 .004 -.014 -.035 June 2008 -.204 -.042* .085* -.050 .072 .038 .029 -.010 .056 .239* ** Dec 2008 .003 ---† .016 .425 .106 .083 -.199 -.391 .017 .655 -.079 .251 June 2009 .234** -.179 .417 -.355 -.671 .935 12.987** .060 .915 -.331 1.178 Dec 2009 .040* -.158 .014 -.034 -.017 .110 -.202 .483 -.444 3.430** -.666 .072 * ** June 2010 -.033 -.038 -.011 -.087 .038 .085 -.195 .166 .010 .247 -.017 .599* * Dec 2010 -.039 .008 .053 -.043 -.021 -.026 .018 .067 .124 .025 .062 .042 June 2011 -.010 .048 .044 .030 .123* .074 .033 -.115* -.046 .061 -.003 .080 Dec 2011 .160** -.050 .062 -.165** -.074 -.075 .097** .039 .030 .075 .176** -.037 June 2012 .008* -.014 .006 .016 .023 .070 .052 -.089 -.052 -.312* .064 .053 ** Dec 2012 -.003 .006 .018 -.015 -.010 -.092 .028 .047 .002 .094** .081 .141 June 2013 -.003 .007 .021* .011 .001 .039* .027 .127** .071 .075 -.069 -.128** Dec 2013 .066** -.022 -.053 -.068** .101* -.005 -.147** -.029 -.110 .119 -.160 .091 ** ** June 2014 .013 -.001 -.056 .028 .115 .243 .128* -1.044 -.181** -.116 .001 -.009 Dec 2014 -.006 .026** -.015 -.028 -.068*** .069 -.027 -.031 -.029 .054 .010 .494** This panel reports differences in the mean squared forecast errors of forecasters who did and did not revise their forecasts of the unemployment rate from the previous survey for the target date shown, by months until the target date. Positive differences indicate larger mean squared errors for non-revisers. The WSJ survey switched from requesting unemployment rate forecasts for May and November to June and December starting in June 2007. The WSJ survey did not consistently request 1-, 6- and 12-months-ahead forecasts of the unemployment rate until after June 2009. The large outlier at the 8-month horizon for the June 2009 target date is because all but one forecaster revised their forecasts following a 40-basis-point drop in the unemployment rate from the prior month. ***, ** and * indicate differences significantly different from zero at the .01, .05, and .10 levels. † All forecasters revised their forecasts. 44
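The entries in Table 4 and in the panels of this appendix come from survey-by-survey regressions of individual squared forecast errors on an indicator for an unchanged forecast. The sketch below shows one such cell-level regression; it is illustrative only, and the DataFrame and column names (sq_error, norevise) are hypothetical rather than the authors' own.

import statsmodels.formula.api as smf

def accuracy_gap(cell_df):
    # cell_df: all economists for one target date and one horizon (roughly 50 observations).
    # norevise = 1 if the economist's forecast is unchanged from the prior survey, 0 otherwise.
    res = smf.ols("sq_error ~ norevise", data=cell_df).fit()
    # A positive coefficient means non-revisers had larger mean squared errors,
    # i.e., revisers were more accurate; significance is judged at the 10 percent level.
    return res.params["norevise"], res.pvalues["norevise"]

# Example usage:
# gap, pval = accuracy_gap(cell_df)
# revisers_more_accurate = (gap > 0) and (pval < 0.10)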