
Prepared by
Ryan T. Kennelly, Economic Analyst
Center for Business and Economic Research
Lee Business School
University of Nevada, Las Vegas
October 2012
The Center for Business and Economic Research
University of Nevada, Las Vegas
4505 S. Maryland Parkway
Las Vegas, Nevada 89154-6002
(702) 895-3191
[email protected]
http://cber.unlv.edu
Copyright © 2012, CBER
ABSTRACT:
In this paper, we create new economic indexes for the metropolitan area of Las Vegas, Nevada. We first
construct a coincident index, using two employment series of Las Vegas as proxies for the current state
of the economy. We create the coincident index using the method outlined by The Conference Board
(2001).
After producing the coincident index, we construct a leading index. First, a rigorous set of criteria
ranging from traditional, proven methods to modern day econometric techniques determine which data
series lead the Las Vegas economy. The resulting series contain elements from the national, local, and
neighboring states’ economies. Once again, we employ The Conference Board method, intending to yield a suitable leading index, but are not satisfied with the results.
Therefore, we develop a new process, using regression analysis to weight each variable. Furthermore,
we restrict our data to different time periods producing three different sets of weights. Our most
successful leading index comes from limiting the data to the most recent business cycle. This captures
currently intact and relevant economic relationships, resulting in an accurate, reliable leading index.
Table of Contents
1. Introduction
2. Previous Work on Economic Indicators
3. Constructing a Coincident Index for Las Vegas
   3.1 Conference Board Method
4. Constructing a Leading Index for Las Vegas
   4.1 Criteria for Leading Economic Indicators
   4.2 Causality Testing
   4.3 Constructing a Leading Index
      4.3.1 Conference Board Method
      4.3.2 Leading Index – A New Method
   4.4 Leading Index Evaluation
5. Summary of Findings
6. Concluding Remarks
Bibliography
List of Figures
Graph 1: Las Vegas Coincident Index
Graph 2: Las Vegas Coincident Index vs. Las Vegas Real GMP
Graph 3: Leading Index – Conference Board Method
Graph 4: Leading Index – Conference Board Method
Graph 5: Leading Index – All Data
Graph 6: Leading and Coincident Indexes – All Data
Graph 7: Leading Index, Financial Crisis – All Data
Graph 8: Leading Index – 1992 Data
Graph 9: Leading and Coincident Indexes – 1992 Data
Graph 10: Leading Index, Financial Crisis – 1992 Data
Graph 11: Leading Index – 2002 Data
Graph 12: Leading and Coincident Indexes – 2002 Data
Graph 13: Leading Index, Financial Crisis – 2002 Data
List of Tables
Table 1: List of Economic Indexes
Table 2: Augmented Dickey-Fuller Results
Table 3: Granger Causality Testing
Table 4: Regression Results – Intermediate Table
Table 5: Regression Results – All Data
Table 6: Leading Index Regression Results – 1992 Data
Table 7: Leading Index Regression Results – 2002 Data
Table 8: Leading Index Weights – 2002 Data
Table 9: Correlation Testing
New Economic Indexes for Las Vegas, Nevada
Ryan T. Kennelly*
1. Introduction
Nowadays, businesses and policymakers use economic indexes to track and predict the
economy. They do this to make informed decisions, ones that will best serve themselves or their
constituents. At the national level, gross domestic product (GDP) represents the current state of the
economy. However, GDP is only available quarterly, has reporting lags, and is revised frequently.
Because of these problems, The Conference Board publishes two monthly indexes, one that coincides
with and one that leads the U.S. economy. These are useful in assessing the current state of the
economy and predicting where it is heading, an invaluable tool to policymakers and businesses.
Lately, criticism has plagued these indexes, primarily the leading index, for its performance after the end of the Great Recession in June 2009.1 Nevertheless, many regional economists have attempted to construct indexes similar to The Conference Board’s. The Conference Board employs the method
developed by the Department of Commerce (1977, 1984). As with any method, there are pros and cons.
We will later discuss these in detail.
The method used by The Conference Board has been around a long time, and although
successful, it doesn’t have much of an econometric appeal. As econometric theory advanced, other
methods emerged that made use of the new techniques. Most notably, Stock and Watson (1989)
created an index model using vector autoregressive (VAR) techniques. Their main assumption is that the current state of the economy is unobserved. Although econometrically advanced, their leading index
* The author would like to thank Stephen P.A. Brown, Nasser Daneshvary, Rennae Daneshvary, Stephen M. Miller, and Reza Torkzadeh for helpful comments and insights.
1 The U.S. Leading Index published by The Conference Board has indicated a quick recovery since the end of the Great Recession; in reality, the economy has been recovering at a slow pace.
for the United States performed poorly during the early 1990s. An overview of their method is in section 2.
Although neither The Conference Board’s nor Stock and Watson’s method is perfect, many
economists created variants of them, realizing the potential rewards of having information about the
economy. Some created indexes for the nation, others for state and regional areas. In this paper, we do
two things. First, we create a coincident index for the Las Vegas metro area using The Conference Board
method. Secondly, we develop a new method for creating a leading index, combining the best qualities
of the above methods into one. We first use economic theory to determine a list of data series that may
lead our coincident index. Then we employ Granger causality testing to actually show which series lead
the “current state.” Furthermore, we run a series of regressions that finalize the selections for our
leading index. In our method, we are able to keep the traditional, proven methods of The Conference
Board and infuse them with the most modern econometric techniques. By merging the old with the
new, we create a leading index that will satisfy both the statisticians and the econometricians2, all while
hopefully predicting the Las Vegas economy in an accurate manner.
2. Previous Work on Economic Indicators
The roots of creating economic indexes can be traced back to Mitchell and Burns (1938). They
were among the first to construct series that lead, lag, and coincide with the national economy. Their
philosophy was to let the data speak for themselves, and not make underlying assumptions about which
data series should lead the economy. Observation and careful statistical work were at the core of their
methods. Not surprisingly, being the forerunners of this practice, their efforts were met with substantial
criticism. Koopmans (1947) claimed it was “measurement without theory,” and reprimanded the
2 Statisticians are more in favor of letting the data speak for themselves, while econometricians prefer to have a solid base of economic theory and then use statistics to validate it.
authors for “observing and summarizing the cyclical characteristics of a large number of economic
series” without a formal theoretical framework.
At that time, Koopmans spoke on behalf of the Cowles Commission, located at the University of
Chicago. The heart of the Cowles Commission’s approach was that economic theory suggests testable hypotheses.
Their method was to use theory to build a model of the economy, test the suggested hypotheses, and
subsequently reject or fail to reject the underlying theory. Koopmans’ criticism of Mitchell and Burns
represented the Commission’s disdain for work that wasn’t solid in statistical and economic theory.
After Mitchell’s death in 1948, Vining (1949) – a fellow economist at the National Bureau of
Economic Research (NBER) – issued a rebuttal on behalf of his colleague. He claimed that Koopmans’
and the Cowles Commission’s methods put a “straightjacket on economic research.” He also argued that
the current macroeconomic theory wasn’t strong enough to construct systems of equations that
accurately represent the economy. In addition, he asserted that Koopmans’ approach overlooked the
benefits of simple observation.
These remarks started a feud between the “statistical economists” represented by NBER and the
“econometricians” represented by the Cowles Commission. However, after the Great Depression and
subsequent advances in statistical theory and econometric modeling, the empirical-inductive methods of the NBER were pushed aside for models based on Keynesian economics.
It wasn’t until the 1970s that the debate between empirical-inductive and theoretical-deductive
methods resurfaced. The models developed in the 1950s and 1960s couldn’t predict or explain the high
inflation and high unemployment witnessed in the 1970s. The “statistical economists” argued that this
unpredicted result was a symptom of the fundamental problems underlying the Keynesian model.
In conjunction with the mounting criticism of the theoretical-deductive methods, Sims (1980a,
1982) created a process of tracking the economy with roots in Mitchell’s empirical-inductive methods. It
consisted of a dynamic time series model, containing properties of an unrestricted system of equations
to summarize business cycle facts. This is more commonly known now as vector autoregressions. VAR
models are used to capture linear dependence between a set of variables. By not focusing on individual
coefficients within the model, the data were able to “speak for themselves”. While Sims’ method had far
more formal theoretical work than Mitchell’s, the underlying message remained the same – use careful
statistical work with minimal theory to provide insights into economic behavior. Once again, Sims’ work
was met with the same criticism as Mitchell’s.
Sims’ work with VAR models leads us to the last major contribution in measuring business
cycles, Stock and Watson (1989). In summary, the methodology is as follows. First, they construct a
coincident index. Their main assumption is that “current state of the economy” is unobserved and
reflected in several indicators. Each indicator is influenced by past values of the unobservable current
state along with other forces. So for an indicator I, we have:
$$I_t = a + b(L)\, S_t + u_t \qquad (1)$$

where $S$ is the unobservable state, $u$ the error term, $a$ a constant, and $b(L)$ a polynomial in the lag operator. In addition, it is assumed that $S$ and $u$ follow autoregressive processes:

$$S_t = \phi(L)\, S_{t-1} + e_t \qquad (2)$$

$$u_t = \psi(L)\, u_{t-1} + z_t \qquad (3)$$

where $e_t$ and $z_t$ are error terms. Combining (1), (2), and (3) for many monthly indicators produces a system of equations that can estimate the change in the unobservable state, $\Delta S_t$. They estimate this model using maximum likelihood in standardized log differences, set the index equal to 100 at some point in time, and construct the coincident index with the estimated changes. To construct a leading index, indicators determined to lead the economy serve as the $I$’s, equation (1) is modified so that the indicators relate to future values of the state, and the system is used to predict future changes in the unobservable state.
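To make the mechanics concrete, the following is a minimal sketch of a single-factor state-space model in this spirit. It is not Stock and Watson’s exact specification: statsmodels’ DynamicFactor class stands in for their model, and the file name and its columns are hypothetical.

```python
# A minimal single-factor sketch in the spirit of Stock and Watson (1989).
# Assumptions: "indicators.csv" is a hypothetical file of monthly coincident
# indicators, and DynamicFactor is used as a stand-in for their model.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

indicators = pd.read_csv("indicators.csv", index_col=0, parse_dates=True)

# Work in standardized log differences, as the text describes.
std = np.log(indicators).diff().dropna()
std = (std - std.mean()) / std.std()

# One unobserved common factor S_t with AR(2) dynamics; AR(1) idiosyncratic errors.
model = DynamicFactor(std, k_factors=1, factor_order=2, error_order=1)
result = model.fit(disp=False)

# The smoothed common factor proxies the change in the unobserved state.
delta_s = pd.Series(result.smoothed_state[0], index=std.index)

# Cumulate the estimated changes and set the index to 100 at the first month.
coincident = 100 * (1 + delta_s / 100).cumprod()
```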
While econometrically advanced, the model is not without faults. The Stock and Watson model for the United States completely missed the early 1990s recession (the first recession after its construction), neither predicting nor representing it. Furthermore, Phillips (1999) compared and contrasted the Stock and Watson model with The Conference Board’s method, and concluded that The Conference Board’s method was superior at predicting turning points of the economy.
While all these methods were developed to construct indexes for the nation, indexes can be
useful at the local level as well. Many regional economies mirror the national economy to an extent, but
fluctuate on their own. For example, the economy of Las Vegas is heavily influenced by tourism. While
tourism plays a role nationally, it is much less significant than in Las Vegas. Following that thought, we
would want to use variables and data that capture tourism in indexes for Las Vegas. The same process
applies to other regional economies where manufacturing or oil may play a large role. Many economists
have realized the value of indexes and created them for their own area. Table 1 is a list of some regional
indexes. Under the “type of index” column, C stands for coincident and L for leading. In addition, the last
column denotes the method used, either as Conference Board (CB), Stock and Watson (SW) or a variant
(V).
Table 1: List of Economic Indexes

Author(s)               Region            Year   Type of Index   Method
Dua and Miller          Connecticut       1996   C, L            CB
Phillips                Texas             2005   C, L            SW-V
Gazel and Potts         Southern Nevada   1995   C, L            CB-V
Balcilar et al.         Nevada            2010   C, L            CB
Crone                   All 50 States     2006   C, L            SW
Kurre and Riethmiller   Erie, PA          2005   L               CB-V
Crane                   Milwaukee, WI     1993   L               CB-V
Slaper                  Indiana           2009   L               CB-V
Since Mitchell and Burns began tracking the economy with indexes in 1938, economists have developed many methods for “best” measuring or predicting changes in the economy.
prevalent of these techniques either stem from The Conference Board or Stock and Watson. While The
Conference Board method has a better track record and is more storied, the Stock and Watson method
has great econometric appeal. As it stands, when constructing a coincident or leading index for a region,
one must carefully weigh the pros and cons of each before settling on a particular process.
3. Constructing a Coincident Index for Las Vegas
Before creating a leading index, we start by creating a coincident index to capture the current
state of the economy. For the national and state level, GDP can serve this purpose. Even some metros –
Las Vegas included – have gross metropolitan product (GMP) compiled by the Bureau of Economic
Analysis (BEA), but it is only released on an annual basis. Since GMP is released infrequently, we need a
new measure.
Now the question becomes, how do we measure the economy? In essence, we are looking for
series that consistently move with the local economy. For Las Vegas, the economy is best modeled using
two proxy series: Las Vegas establishment employment and Las Vegas household employment.3 Both
are reported monthly on a timely basis and collected from the Department of Employment, Training and
Rehabilitation (DETR). Current Employment Statistics (CES) reports total nonfarm employment, whereas
Local Area Unemployment Statistics (LAUS) reports household employment. For a metro area, data
series that represent the current state of the economy are rare. Employment is generally considered a
good proxy for how well the economy is performing even if it may lag a little at the national level. We
also tried including the unemployment rate, but it made the index too volatile.
3 Employment is commonly used in coincident indexes at the state or regional level. Nationally, however, employment may be a lagging indicator.
We utilize the procedure outlined by The Conference Board (2001) to combine the two series
into an index. We chose The Conference Board method over Stock and Watson for its proven record of
accuracy not only at the national level, but also at the state and regional levels. This is built upon the
method outlined by the U.S. Department of Commerce (1977, 1984).
3.1 Conference Board Method
First we compute the symmetric monthly changes for each variable. This ensures that positive
and negative changes in a series receive the same weight. For a series in levels, this amounts to:
$$c_{i,t} = 200 \times \frac{X_{i,t} - X_{i,t-1}}{X_{i,t} + X_{i,t-1}} \qquad (1)$$

where $X_{i,t}$ is the data for month $t$ of component $i$.

The second step takes each component’s symmetric changes, multiplies them by their standardization factor4 ($w_i$), and sums them together:

$$s_t = \sum_i w_i \, c_{i,t}$$

Now we can compute the raw index. The series $s_t$ represents the symmetric changes in the index ($r_t$), so we can use equation (1), replacing $X$ with $I$ and $c$ with $r$. Then we have:

$$r_t = 200 \times \frac{I_t - I_{t-1}}{I_t + I_{t-1}}, \qquad r_t = s_t$$
4 Standardization factors determine how monthly changes in each component contribute to the index. The standardization factor of each component is the inverse of the standard deviation of its symmetric changes. This allows each component to contribute to the monthly change in the index. Also, the standardization factors are normalized to sum to one. For us, the standardization factor on Las Vegas MSA Employment is 0.5162, while the factor on Las Vegas Household Employment is 0.4838.
Solving for $I_t$ yields:

$$I_t = I_{t-1} \times \frac{200 + r_t}{200 - r_t}$$
Step four is to rebase the index to the desired year. For us, this is 1982, for reasons we will explain later when developing our leading index. To do this, the index levels obtained in step three are multiplied by 100 and then divided by the preliminary index level of the desired base year.
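A compact sketch of steps one through four, assuming a pandas DataFrame with a monthly DatetimeIndex holding the two employment series (the function and column names are our own):

```python
import pandas as pd

def symmetric_change(x: pd.Series) -> pd.Series:
    """Step 1: symmetric monthly change, 200*(x_t - x_{t-1})/(x_t + x_{t-1})."""
    return 200 * (x - x.shift(1)) / (x + x.shift(1))

def conference_board_index(components: pd.DataFrame, base_year: str) -> pd.Series:
    changes = components.apply(symmetric_change)
    # Step 2: standardization factors (inverse standard deviations, normalized
    # to sum to one), then the weighted sum of symmetric changes s_t.
    w = 1 / changes.std()
    w = w / w.sum()
    s = (changes * w).sum(axis=1)
    # Step 3: invert the symmetric-change formula to recover index levels,
    # I_t = I_{t-1} * (200 + s_t) / (200 - s_t), starting from 100.
    raw = 100 * ((200 + s) / (200 - s)).cumprod()
    # Step 4: rebase so the average level in the base year equals 100.
    return 100 * raw / raw.loc[base_year].mean()

# e.g. ci = conference_board_index(employment_df, base_year="1982")
```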
Using this method, we create a coincident index for the Las Vegas economy. Two employment
series (Las Vegas MSA and household employment) serve as proxies for the current state of the
economy. The coincident index is illustrated in Graph 1 below. Our data constrain us to the dates from
1980 onward, and Las Vegas recessions5 as indicated by the index are shaded gray. The recessions in the
early 1980s and 2000s are represented, along with the financial crisis of 2008. However, what about the
recession in the early 1990s? During the 1990s, the Las Vegas economy was booming due to increased
tourism. Consequently, the effect of the 1991-1992 national recession was hardly noticed locally, as
represented by the index.
For comparison, we can use Real GMP data from the BEA for Las Vegas. These are shown in
Graph 2. Unfortunately, the BEA only has GMP for the Las Vegas metro area back to 2001, but even
looking at this small time period, we can be comfortable with our choices to accurately represent the
current state of the economy. One potential problem with this index could be volatility, but throughout
the approximately 30 years, the recessions in the early 1980s and 2000s along with the financial crisis of
2008 are clearly defined.
5 We define recessions in a manner similar to the method used by the NBER. While there is no fixed rule, generally five to six months of growth or decline will signify a trough or peak. However, the decision has to be a judgment call in the end.
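As a rough illustration of the shading rule just described, the sketch below flags a turning point after a sustained run of monthly declines or gains; the five-month threshold is an assumption, and in practice the call remains a judgment.

```python
import pandas as pd

def turning_point_signals(index: pd.Series, run: int = 5) -> pd.Series:
    """Label the month at which `run` consecutive declines (a peak behind us)
    or gains (a trough behind us) are first observed. run=5 is an assumption."""
    sign = index.diff().pipe(lambda d: (d > 0).astype(int) - (d < 0).astype(int))
    # Length of the current run of same-signed monthly changes.
    streak = sign.groupby((sign != sign.shift()).cumsum()).cumcount() + 1
    hits = (streak == run) & (sign != 0)
    # A confirmed run of declines implies a peak `run` months earlier, and vice versa.
    return sign[hits].map({-1: "peak", 1: "trough"})
```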
Graph 1: Las Vegas Coincident Index
Graph 2: Las Vegas Coincident Index vs. Las Vegas Real GMP
4. Constructing a Leading Index for Las Vegas
The intuition of leading economic indicators is fairly straightforward: although we know the current state of the economy through our coincident index, there must be measurable changes that occur in the economy before peaks and troughs. The hard part is identifying and representing those changes.
But why not create a model of the economy, and then use it to predict the future? There are a
variety of reasons, as outlined by Kurre and Riethmiller (2005). For starters, a full-blown econometric
model takes significantly more time and resources than a leading index. Also, a large amount of data is
needed to create a model. Much less data are needed to create an index, something especially
important for a metro area, as well-maintained local data series are rare. Lastly, since a model is built
upon past patterns, it can easily miss turning points. So not only is creating an index easier to do, it is
also more reliable.
In this section, we first address how to choose leading economic indicators with older methods,
ones not based in modern econometrics. After obtaining that list of candidate variables, we use Granger
causality testing to further narrow our choices. Following that, we make an attempt at combining the
series using The Conference Board method. Unsatisfied with the results, we add another step, creating a
new process. The methodology consists of running a series of regressions to finalize the variables to be
used in our leading index and summing the coefficients on individual data series to weight each variable
in the final leading index.
4.1 Criteria for Leading Economic Indicators
If we can find series that lead the economy, The Conference Board method detailed above could
give us a way to create an index. However, how do we choose the series?6 Kurre and Weller (1999)
along with Kurre and Riethmiller (2005) outline the following criteria for choosing leading indicators.
1) Ocular Regression – This method relies on visually inspecting the data, to see if they lead the
coincident index. Before modern day econometric techniques, this was the most prominent
way to choose series.
2) Using Previous Economic Indicator Work – Sometimes the best place to start is by looking at
the work of others. We have mentioned a few of the many indexes that have been created,
and building upon another’s success can be very effective.
3) Economic Theory – Does this variable make sense in terms of the economy? In Las Vegas,
tourism is a strong part of our local economy. Would it make sense that an index of
Southern California’s economy (where a large part of the tourism comes from) may lead our
economy? That is, if Southern California’s economy is doing well, should we expect to reap
the benefits in Las Vegas?
While helpful, the above criteria are very broad, and do little to narrow down what could be an
extremely long list of candidates. Crane (1993) along with Kurre and Weller (1999) also suggest the
following criteria:
1) Data Availability and Lags – We don’t want to use data that are not available frequently or that have just begun to be collected. Ideally, we want at least one full business cycle of data to
6 It is desirable to use more than one series to either predict or represent the economy. The more series that we use, the less likely it is that exogenous factors unrelated to the economy’s performance will distort our index. (This is truer for leading indexes, where we might have series exclusive to one sector of the economy. We want an index that represents the economy as a whole, not just one particular piece.)
determine relationships within the economy. Additionally, we don’t want the data to be
revised repeatedly or have large publication lags. Having to wait six months before
publishing an index that leads the economy by six months is counterproductive.
2) Substitutability – Does the data series potentially substitute for some national series used in
the U.S. Leading Economic Indicator (LEI – published by The Conference Board)? For many
regional indexes, the U.S. LEI serves as a base model for their own index. The LEI contains
average weekly initial claims for unemployment insurance. State-level initial claims for
unemployment insurance would be a substitute for that variable.
3) Missed Turning Points – An ideal series will never miss a turning point, but such series are
rare.
4) False Turning Points – A series with a lot of false signals may never miss a turning point,
satisfying criterion number 3, but is of no use – having a similar effect to the “Boy Who Cried
Wolf”. How would we know if we were actually headed into a recession if it were predicted
every few months?
5) Volatility – This is an extension of criterion number 4, an extremely volatile (bumpy) series
will make it harder to determine whether or not we are at a true turning point.
6) Length of the Lead – A four to six month lead is ideal; anything less serves little purpose, and anything longer becomes vague because the consistency of the lead becomes variable.
7) Consistency of the Lead – If the length of the lead varies, it is difficult to predict when the
economy will reach its peak or trough. However, if the indicator always leads by the same
amount, we are able to make a good prediction.
4.2 Causality Testing
All the criteria that we discussed thus far can be classified as traditional methods (none rely on
econometric tests). These were used before Stock and Watson paved the way on using econometrics to
create indexes. For us, the list of possible leading economic indicators was still large after eliminating
many possibilities using traditional criteria. It was clear that we needed some way to further evaluate
indicators, and that we should look to econometrics for help. Granger causality testing became our
solution.
Granger causality testing determines whether one variable is useful in predicting future values
of another. The first step is to check whether the series is integrated or stationary. If a series is
integrated, a shock to it will be permanent, whereas a stationary series will eventually return to its
trend. As per common econometric practice, an Augmented Dickey-Fuller (ADF) test was used. The
format of an ADF test is as follows:

$$\Delta y_t = \alpha + \gamma\, y_{t-1} + \sum_{i=1}^{p} \delta_i\, \Delta y_{t-i} + \varepsilon_t$$

for a data series $y$, where $\alpha$ is a constant, $\varepsilon_t$ is the error term, and $p$ is the lag order of the autoregressive process. The lag order can be found using an Akaike Information Criterion (AIC).7 The null hypothesis of an ADF test is $\gamma = 0$, while the alternative is $\gamma < 0$. If we fail to reject, the data series is
said to have a unit root (is integrated). We first run the ADF test with the series in levels, and if we fail to
reject the null, the test is done after first differencing the series. We continue differencing until the
series is stationary. In Table 2 are results of selected variables.8
7 An AIC test measures the relative goodness of fit of a model by determining the tradeoff between bias and variance in model construction. The general form of the test is AIC = 2k − 2ln(L), where k is the number of parameters in the model and L is the maximized value of the likelihood function. So for each n = 1…p, we have an AICn; min{AICn} denotes the preferred model, where n gives the lag order of the autoregressive process.
8 A large number of national and local data series were tested after clearing the criteria in section 4.1. However, to avoid long lists of results, only selected variables are shown.
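A minimal sketch of this screening step, using adfuller from statsmodels with the lag order chosen by AIC; the 10% cutoff mirrors the loosest significance level reported in Table 2, and the helper name is our own.

```python
from statsmodels.tsa.stattools import adfuller

def integration_order(y, alpha=0.10, max_diff=2):
    """Difference a series until the ADF test rejects a unit root.
    Returns the number of differences taken and the stationary series."""
    for d in range(max_diff + 1):
        # autolag="AIC" picks the lag order p of the augmented terms by AIC.
        stat, pvalue, *_ = adfuller(y.dropna(), autolag="AIC")
        if pvalue < alpha:
            return d, y
        y = y.diff()
    raise ValueError("still non-stationary after max_diff differences")
```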
Table 2: Augmented Dickey-Fuller Results

Variable        Levels    1st Difference
AZ              -2.79*    -
CA              -         -2.65*
Coincident      -         -5.044***
M2              -         -4.20***
McCarran        -         -28.21***
Occu            -         -18.51***
S&P 500         -         -15.98***
Taxable Sales   -         -10.05***
Visitor         -         -19.48***

*, **, and *** denote significance at the 10%, 5%, and 1% level, respectively
All data are reported monthly and are seasonally adjusted. AZ and CA are the Arizona and
California Leading Indexes, respectively, as compiled by the Philadelphia Fed9. Coincident is the
coincident index created previously in this paper. M2 is collected by the Board of Governors of the
Federal Reserve System. McCarran is the total passengers enplaned and deplaned at McCarran airport, as reported by the airport. Occu is the hotel/motel occupancy rate in Las Vegas provided by the Las Vegas Convention and Visitors Authority. SP500 is the S&P 500 Index, provided by Standard & Poor’s. Tax is Clark County taxable sales as reported by the Nevada Department of Taxation, and Visitor is the
Las Vegas visitor volume collected by Las Vegas Convention and Visitors Authority. All data predate 1982
except for the Arizona and California Leading Indexes, making analysis from that year forward intuitive.
As seen above, AZ is stationary in levels, whereas the remaining variables are stationary in first
differences.
9 The Arizona and California leading indexes are comprised of state-level housing permits, state initial unemployment insurance claims, the interest rate spread between the 10-year Treasury bond and the 3-month Treasury bill, and delivery times from the Institute for Supply Management (ISM) manufacturing survey.
The second step of Granger causality testing is to run the following regression:

$$Coin_t = c + \sum_{i=1}^{n} \alpha_i\, Coin_{t-i} + \sum_{i=1}^{n} \beta_i\, X_{t-i} + \varepsilon_t$$

where $Coin_t$ is the coincident index and $X_t$ is the economic indicator we are testing for month $t$, both in stationary form. For our purposes, n was chosen to be six, as a six-month lead is most desirable.10
After obtaining the regression results, an F-test is run on all of the chosen indicator’s lags to determine
significance. If an indicator’s lags are statistically significant, it is said to Granger cause the coincident
index. This tells us that the indicator is useful in predicting future movement of the coincident index –
confirming that it is a leading economic indicator. Table 3 shows the data series that were significant in
predicting future values of the coincident index.
10 A six-month lead provides time for businesses and policymakers to make adjustments but lacks the problems of longer leads. Generally, as you increase the predicted lead time, the variability of that lead increases as well. A six-month lead will consistently lead by around six months, whereas a twelve-month lead might result in an actual lead of between seven and fifteen months. In addition, tests were run for n = 3 all the way up to n = 12, with similar results in terms of relative significance.
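This step can be scripted with grangercausalitytests from statsmodels, which reports exactly this kind of joint F-test on the indicator’s lags. A hedged sketch (in the library’s convention, the test asks whether the second column helps predict the first):

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_f(coincident: pd.Series, indicator: pd.Series, lags: int = 6):
    """Joint F-test of the indicator's lags in a regression of the coincident
    index on lags of itself and of the indicator (both already stationary)."""
    data = pd.concat([coincident, indicator], axis=1).dropna()
    results = grangercausalitytests(data, maxlag=lags, verbose=False)
    # "ssr_ftest" holds (F statistic, p-value, df_denom, df_num) at each lag.
    f_stat, p_value, _, _ = results[lags][0]["ssr_ftest"]
    return f_stat, p_value
```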
Table 3: Granger Causality Testing

Variable                  F-Statistic   Probability
AZ                        5.01433       0.00006***
CA                        5.18454       0.00004***
M2                        1.79475       0.0991*
McCarran                  5.52141       0.00002***
Occupancy                 4.36805       0.0003***
S&P 500                   5.46449       0.00002***
Taxable Sales             8.96316       0.00***
Visitor Volume            4.098         0.0005***
Gaming11                  4.98336       0.00005***
Housing Permits12         1.71645       0.1161
Convention Attendance13   1.287981      0.2619

*, **, and *** denote significance at the 10%, 5%, and 1% level, respectively
11 The numbers shown for total monthly gaming revenue in Clark County are actually for reverse causality: growth in the coincident index causes growth in gaming revenue. Causality from gaming revenue to the coincident index was insignificant.
12 Clark County housing permits, monthly.
13 Las Vegas convention attendance, monthly.
4.3 Constructing a Leading Index
After the methods described in 4.1 and 4.2, we have a list of variables that lead the “current
state” of the economy. In this section, we first make an attempt to create a leading index using The
Conference Board method, but fail. We then develop a new method, consisting of multiple “hollowed-out” regressions in three different time periods, with the final result being our leading index for Las Vegas, Nevada.
4.3.1 Conference Board Method
Our first thought after identifying leading economic indicators was to use The Conference Board
method once again. As previously mentioned, this method has a history of accuracy, both at the national
and regional level. The variables used for this leading index are:
• Arizona and California Leading Indexes
• M2 money supply
• Total McCarran enplaned and deplaned passengers
• Las Vegas hotel and motel occupancy rate
• S&P 500 Index
• Clark County taxable sales
• Las Vegas visitor volume
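Since the combination procedure is identical to section 3.1, constructing this candidate index amounts to reusing the earlier sketch on the leading set (a hypothetical DataFrame `leaders` holding the seven series above):

```python
# Hypothetical reuse of conference_board_index() from the section 3.1 sketch;
# `leaders` holds one column per series listed above, indexed by month.
lei_cb = conference_board_index(leaders, base_year="1982")
```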
The resulting index is portrayed in Graph 3 (shaded areas are recessions indicated by the
coincident index). Also, Graph 4 shows the leading index along with the coincident index.
Graph 3: Leading Index – Conference Board Method
Graph 4: Leading Index – Conference Board Method
There are a number of flaws with this leading index, the most prominent being that there is no
lead before recessions. The index drops after the recession has begun for both the early 2000s
recession and the 2008 financial crisis – a quality of a coincident index. Having no early warning about
upcoming recessions defeats the purpose of a leading index.
4.3.2 Leading Index – A New Method
The results from The Conference Board method suggest that we need a new way to create the
leading index. Traditional techniques and Granger causality testing have provided a list of variables, but
we must combine them in a different fashion. The method proposed here uses regression analysis, along
with subsequent F-tests to determine the significance of each variable. Before we run the regression,
each variable is indexed with December 1981 = 100.14 Now we run the following regression:
$$Coin_t = c + \sum_{i=n}^{m} \alpha_i\, Coin_{t-i} + \sum_{j} \sum_{i=n}^{m} \beta_{j,i}\, X_{j,t-i} + \varepsilon_t$$

where $Coin_t$ is the coincident index for month $t$, $X$ is a vector of all the significant economic indicators in Table 3, and $c$ a constant. In addition, $j$ and $i$ represent the indicator and month, respectively. For this particular index, n = 4 and m = 6. The reasoning behind this is that we are looking for a four to six-month lead on the coincident index.15
Note that this regression is very similar to a Granger causality test. We deem this a “hollowed-out” regression, referring to the first few lags, included in the Granger test, that we omit from our equation. This “hollowing out” tells us which variables are able to lead our coincident index in the desired window of four to six months.
Next, a series of F-tests was run on each triplet of variables. Of the groups that were not significant, the least significant was dropped from the equation, and the regression was run again. Table 4 is
14 The Arizona and California Leading Indexes begin in 1982, making this date an intuitive choice.
15 Other values of n and m were tried, ranging from 2 to 8, but these values provided the best fit.
an intermediate table; in this case, we would eliminate Occu from the equation and run the regression
again. The same process was repeated until only significant variables remained; the final results are
shown in Table 5.
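A sketch of the “hollowed-out” regression and the pruning loop, assuming pandas series already indexed to December 1981 = 100 with syntactically valid column names; the helper names are hypothetical and the triplet tests use statsmodels’ f_test.

```python
import pandas as pd
import statsmodels.api as sm

def hollowed_out(coin: pd.Series, X: pd.DataFrame, n=4, m=6, alpha=0.05):
    """Regress Coin_t on lags n..m of itself and of each indicator in X, then
    repeatedly drop the least significant lag triplet (coincident lags kept)."""
    base = pd.concat([coin.rename("COINCIDENT"), X], axis=1)
    rhs = pd.DataFrame({f"{c}_l{i}": base[c].shift(i)
                        for c in base.columns for i in range(n, m + 1)}).dropna()
    y = coin.loc[rhs.index]
    candidates = [c for c in base.columns if c != "COINCIDENT"]
    while True:
        fit = sm.OLS(y, sm.add_constant(rhs)).fit()
        # Joint F-test that all of a variable's lag coefficients are zero.
        pvals = {v: float(fit.f_test(
                     ", ".join(f"{v}_l{i} = 0" for i in range(n, m + 1))).pvalue)
                 for v in candidates}
        worst = max(pvals, key=pvals.get)
        if pvals[worst] <= alpha:
            return fit                      # all remaining triplets significant
        candidates.remove(worst)
        rhs = rhs.drop(columns=[f"{worst}_l{i}" for i in range(n, m + 1)])
```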
Table 4: Regression Results – Intermediate Table

Variable         Coefficient   t-Statistic   Joint Significance
COINCIDENT(-4)   0.930026      7.194023      0.0000
COINCIDENT(-5)   0.190887      1.140515
COINCIDENT(-6)   -0.179986     -1.414815
AZ(-4)           0.389606      1.442082      0.0000
AZ(-5)           -0.232628     -0.445364
AZ(-6)           -0.189127     -0.711194
CA(-4)           1.345443      3.55211       0.0000
CA(-5)           -1.707921     -2.365805
CA(-6)           0.441561      1.159865
M2(-4)           -0.109994     -0.879361     0.0000
M2(-5)           0.162127      0.833961
M2(-6)           -0.091268     -0.729266
MCCARRAN(-4)     0.021539      1.402653      0.0327
MCCARRAN(-5)     -0.000153     -0.009087
MCCARRAN(-6)     -0.038171     -2.586364
OCCU(-4)         0.049161      0.770945      0.879
OCCU(-5)         -0.026387     -0.371538
OCCU(-6)         0.004026      0.065449
SP500(-4)        0.013199      2.708033      0.0000
SP500(-5)        -0.000602     -0.081608
SP500(-6)        -0.016321     -3.31846
TAX(-4)          0.02475       2.912818      0.0021
TAX(-5)          -0.006748     -0.750408
TAX(-6)          -0.001783     -0.201218
VISITOR(-4)      0.000118      0.004423      0.035
VISITOR(-5)      0.015212      0.481762
VISITOR(-6)      0.036079      1.357567
C                -4.947006     -0.778915

R-squared: 0.999623   Adjusted R-squared: 0.999592
Table 5: Regression Results – All Data

Variable         Coefficient   t-Statistic   Joint Significance
COINCIDENT(-4)   0.932064      7.267581      0.0000
COINCIDENT(-5)   0.178844      1.076606
COINCIDENT(-6)   -0.182983     -1.482249
AZ(-4)           0.400223      1.490975      0.0000
AZ(-5)           -0.246003     -0.473379
AZ(-6)           -0.183583     -0.693205
CA(-4)           1.317544      3.513204      0.0000
CA(-5)           -1.689709     -2.350372
CA(-6)           0.451552      1.19594
M2(-4)           -0.115677     -0.932516     0.0000
M2(-5)           0.168258      0.872001
M2(-6)           -0.092467     -0.742446
MCCARRAN(-4)     0.024172      1.641814      0.0253
MCCARRAN(-5)     0.000163      0.009824
MCCARRAN(-6)     -0.037636     -2.608692
SP500(-4)        0.013256      2.732441      0.0000
SP500(-5)        -0.000815     -0.111104
SP500(-6)        -0.016195     -3.306797
TAX(-4)          0.025071      2.978394      0.0005
TAX(-5)          -0.006247     -0.699496
TAX(-6)          -0.001385     -0.158401
VISITOR(-4)      0.0119        0.567229      0.0285
VISITOR(-5)      0.004524      0.198381
VISITOR(-6)      0.0359        1.751307
C                -1.870116     -1.532513

R-squared: 0.999622   Adjusted R-squared: 0.999594
In this regression, all the available data were used (December 1981 – May 2011). Now we must
determine the weight of each variable for the index. This was done by taking the sum of the absolute values of the coefficients of an individual triplet and dividing by the sum of the absolute values of all the coefficients:

$$w_j = \sum_{i=n}^{m} |\beta_{j,i}| \Big/ \sum_{j} \sum_{i=n}^{m} |\beta_{j,i}|$$

For month $t$, each series was multiplied by its weight and then summed to yield the final leading index:

$$LI_t = \sum_{j} w_j \, X_{j,t}$$
Before continuing, an explanation of the weights is warranted. We were faced with a few choices for
constructing the weights. For example, we could have not taken absolute values and allowed for some
components to have negative weights as opposed to the above. The intuition behind our method is that
we want to know the magnitude of the effect for each component. Although each component is indexed
to 100 in December 1981, they grow at different rates. Weighting by the magnitudes automatically
adjusts for the different growth rates when constructing the leading index.
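A sketch of this weighting and combination step, continuing from the hollowed-out regression sketch earlier; excluding the coincident lags and the constant from the weights is our reading, consistent with the weights later reported in Table 8.

```python
import pandas as pd

def combine_leading_index(fit, indexed: pd.DataFrame, sep: str = "_l") -> pd.Series:
    """Weight each surviving indicator by the summed absolute values of its lag
    coefficients (normalized to one), then combine the indexed series
    (December 1981 = 100). Coincident lags and the constant are excluded,
    an assumption consistent with Table 8."""
    params = fit.params.drop("const")
    params = params[~params.index.str.startswith("COINCIDENT")]
    # Sum |coefficients| within each variable's lag triplet, then normalize.
    sums = params.abs().groupby(lambda name: name.rsplit(sep, 1)[0]).sum()
    weights = sums / sums.sum()
    cols = [c for c in weights.index if c in indexed.columns]
    return (indexed[cols] * weights[cols]).sum(axis=1)
```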
4.3.2.1 Leading Index: Full Sample
The final result is displayed in Graph 5, with recessions indicated by the coincident index shaded
in grey. In Graph 6, we see both the coincident and leading index together to get a better picture. Upon
first glance, it seems that the leading index serves all of its purposes well. There is a significant lead on
all peaks and troughs in all three recessions; it’s not volatile and is consistent. However, looking at the
very end of the index, we can see that the leading index begins to rise quickly, whereas the coincident is
struggling. This is pictured in Graph 7. Although this leading index is a good fit for the previous
recessions, it doesn’t seem to represent the last couple of years well. This problem leaves us unsatisfied
and looking for a new solution.
Graph 5: Leading Index – All Data
Graph 6: Leading and Coincident Indexes – All Data
Graph 7: Leading Index, Financial Crisis – All Data
4.3.2.2 Leading Index: 1992 Data
Because the problem occurs within the most recent data, perhaps we are putting too much
weight on older data. After every business cycle, the economy restructures itself. So it is possible that, by constructing the leading index with data back to 1982, we are capturing relationships that no longer exist.
To combat this, we use the same method two more times. The first time we use data from April
1992 (the end of the early 90’s recession as reported by NBER) until the most recent. The second time
we use data from December 2001 (the end of the early 2000’s recession as reported by NBER) onward.
This is an attempt to capture the most recent and intact economic relationships among the variables.
For simplicity, we will call these variations the 1992 and 2002 leading indexes, respectively.
Note that in both cases, we have data that span at least one business cycle, the minimum for
evaluating interactions among economic variables. First, we look at the 1992 leading index in the same
format as we looked at the all data leading index. Regression results are reported in Table 6, and Graphs
8, 9, and 10 display the results.
Table 6: Leading Index Regression Results – 1992 Data

Variable         Coefficient   t-Statistic   Joint Significance
COINCIDENT(-4)   0.679876      4.753338      0.0000
COINCIDENT(-5)   0.107933      0.585553
COINCIDENT(-6)   -0.005852     -0.04081
AZ(-4)           0.942234      2.32879       0.0000
AZ(-5)           -0.408533     -0.52998
AZ(-6)           -0.521127     -1.32916
CA(-4)           1.096243      2.355805      0.0000
CA(-5)           -1.252997     -1.44989
CA(-6)           0.367067      0.786028
M2(-4)           -0.246002     -1.92106      0.0000
M2(-5)           0.292813      1.509301
M2(-6)           -0.097663     -0.75651
MCCARRAN(-4)     0.04364       2.778842      0.0000
MCCARRAN(-5)     0.022084      1.226023
MCCARRAN(-6)     0.000929      0.057325
SP500(-4)        0.006776      1.364842      0.0000
SP500(-5)        -0.001418     -0.19346
SP500(-6)        -0.01501      -2.93151
VISITOR(-4)      -0.011234     -0.44404      0.0012
VISITOR(-5)      -0.04979      -1.81817
VISITOR(-6)      -0.030394     -1.17592
C                -17.75056     -6.81302

R-squared: 0.999205   Adjusted R-squared: 0.999127
Note that in this version of the leading index, Tax is no longer included. We have no economic intuition or theory as to why this is, other than that the economy of Las Vegas was restructured in the early 1990s.
The 1992 leading index still has all the good qualities of a reliable leading index, such as smoothness and consistency. Although the end of the financial crisis is still not modeled perfectly, it is definitely better than its predecessor, as the increase at the end is less steep.
Graph 8: Leading Index – 1992 Data
Graph 9: Leading and Coincident Indexes – 1992 Data
Graph 10: Leading Index, Financial Crisis – 1992 Data
4.3.2.3 Leading Index: 2002 Data
Although the 1992 leading index models the end of the financial crisis better, there is still room
for improvement. Constructing the 2002 index allows us to capture the most recent economic
relationships. Table 7 contains the regression results; Table 8 shows the weights, while Graphs 11, 12
and 13 give us closer looks.16
Table 7: Leading Index Regression Results – 2002 Data

Variable         Coefficient   t-Statistic   Joint Significance
COINCIDENT(-4)   0.435439      2.405474      0.0000
COINCIDENT(-5)   0.055073      0.2479
COINCIDENT(-6)   0.146041      0.767815
AZ(-4)           1.191803      1.986601      0.0000
AZ(-5)           -0.939418     -0.823958
AZ(-6)           -0.008065     -0.013734
CA(-4)           0.425271      0.746419      0.0000
CA(-5)           -0.280396     -0.257141
CA(-6)           -0.291863     -0.460029
MCCARRAN(-4)     0.04229       3.012942      0.008
MCCARRAN(-5)     0.00289       0.200433
MCCARRAN(-6)     0.009558      0.697116
SP500(-4)        0.009723      1.57964       0.0961
SP500(-5)        -0.002149     -0.264387
SP500(-6)        -0.009086     -1.614556
C                7.96521       0.67018

R-squared: 0.995629   Adjusted R-squared: 0.994992
16 In addition to the “hollowed-out” approach, we also ran the model including lags one through three of the coincident index. This allowed the coincident index to fully explain itself within the time period and subsequently eliminated the S&P 500 Index from our final model. This is not surprising, considering the S&P 500 Index was only marginally significant in the “hollowed-out” model. However, we feel the “hollowed-out” approach is superior, since when given the data at time period t (looking to forecast four to six months into the future), lags one through three of the coincident index aren’t available.
Table 8: Leading Index Weights – 2002 Data

Variable   Weight
AZ         0.66
CA         0.31
McCarran   0.02
SP500      0.01
Total      1
In this version of the leading index, even more variables are eliminated. One interesting note is that M2 was dropped from the index. Although M2 has been a strong leading economic indicator in the past, we can probably attribute its absence to the Federal Reserve beginning to pay interest on banks’ reserves, causing a huge spike in the money supply. This spike makes M2 a poor indicator, and it was therefore eliminated from the regression due to insignificance. We can infer that the strong past relationship between M2 and the economy kept it in the previous two leading indexes.
Graph 11: Leading Index – 2002 Data
Graph 12: Leading and Coincident Indexes – 2002 Data
Graph 13: Leading Index, Financial Crisis – 2002 Data
This final version of the leading index provides everything we are looking for. Just like the previous two, it is smooth and predicts all turning points with a consistent lead of 4-6 months. On top of
that, it models 2010-2011 well, not quickly increasing but showing a long, slow recovery, as seen in
Graph 13. By limiting our data to the most recent business cycle, we are able to capture the relevant
economic relationships needed to produce a respectable leading index.
4.4 Leading Index Evaluation
Now that we have found a useful leading index, let’s revisit the criteria laid out earlier. The following
criteria are considered traditional methods, as they don’t rely on modern econometric techniques.
1) Data Availability and Lags – Because we choose series for the leading index with these
criteria in mind, they are satisfied.
2) Substitutability – We have a few components in our leading index that are potential
substitutes for components in the U.S. LEI. Both the Arizona and California leading indexes
have state-level initial unemployment insurance claims and new building permits. The U.S.
LEI includes these variables at the national level. Also, our index has the S&P 500 Index,
common to the U.S. LEI as well.
3) Missed Turning Points – Our index doesn’t miss any turning points; it leads all peaks and troughs of the recessions.
4) False Turning Points – Defining a turning point by three to four consecutive months of
increase or decline in the index, the index doesn’t give false turning points.
5) Volatility – Looking at Graph 11 we can see that the leading index is smooth, not jagged.
6) Length of the Lead – The index leads peaks by an average of 5 months and troughs by an
average of 6.5 months. Earlier we commented that a 4-6 month lead is most desirable.
7) Consistency of the Lead – The length of the lead for peaks is very consistent, with each one
being exactly five months. However, the lead on troughs is a bit more variable. For the early 80’s recession, we have a lead of 4 months. The lead for the trough of the early 2000s recession is 3 months. For the ’08 financial crisis, we have almost a double-dip structure in the coincident index. An argument can be made for the true trough being either in November 2010 or June 2011. Either way, the leading index models this period well, with a trough in December 2009 (a lead of 11 months) followed by a period of stagnant growth resulting in another trough in October 2010 (a lead of 8 months).
Based on traditional methods, our leading index satisfies nearly all of the criteria. The only
blemish occurs in the consistency of leading troughs, specifically during the financial crisis. However,
looking at the period from 2007 until now, in Graph 13, we still conclude that the leading index provides
an accurate forecast of the coincident index.
Continuing the trend of combining traditional methods with econometrics, we evaluate the
leading index based on correlation testing as well. Taking the first differences of the coincident and
leading indexes, we run a correlation test between lags of the first differenced leading index and the
first differenced coincident index. We do this for three different time periods. The first time period
consists of all data, from 1982 to 2011. Our highest correlation coefficient is 0.49, indicating a relevant
but only moderate correlation between the indexes. We then proceed in a manner similar to creating
our leading index, restricting the data to the two most recent business cycles, and then only the most
recent. Our correlation coefficients increase in both cases, reaching a high of 0.74 for our 3rd and 4th lag
in the most recent data.
Is it surprising that the correlation coefficients increased drastically? Recall that when we
created our leading index, we used data restricted to the most recent business cycle. Our intuition was
that this captured the most recent and relevant economic relationships and will therefore be a better
predictor in the future. A critique of our leading index could be its low full-sample correlation coefficient, attributable to a looser connection in the past. However, a leading index is not created to model the past; it is
created to help predict the future, and that is where ours excels.
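A sketch of the correlation exercise behind Table 9, assuming monthly pandas series for the two indexes; the subsample start dates mirror the three periods in the table.

```python
import pandas as pd

def lag_correlations(ci: pd.Series, li: pd.Series, lags=range(3, 8), start=None):
    """Correlate the first-differenced coincident index with lagged first
    differences of the leading index, optionally on a subsample."""
    out = {}
    for k in lags:
        pair = pd.concat([ci.diff(), li.diff().shift(k)], axis=1).dropna()
        if start is not None:
            pair = pair.loc[start:]          # e.g. start="2001-12"
        out[f"dLI(-{k})"] = pair.iloc[:, 0].corr(pair.iloc[:, 1])
    return pd.Series(out)

# lag_correlations(ci, li)                    # full-sample row of Table 9
# lag_correlations(ci, li, start="2001-12")   # most recent business cycle
```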
Table 9: Correlation Testing

Period                    ∆LI(-3)   ∆LI(-4)   ∆LI(-5)   ∆LI(-6)   ∆LI(-7)
∆CI, Jan '82 - Aug '11    0.51      0.52      0.49      0.48      0.48
∆CI, Apr '92 - Aug '11    0.67      0.68      0.65      0.64      0.63
∆CI, Dec '01 - Aug '11    0.74      0.74      0.69      0.67      0.65
5. Summary of Findings
Our first task was to create a coincident index for Las Vegas to represent the “current state of
the economy.” Since employment is generally used in regional indexes as a measure of the well-being of
the economy, we combined two employment series of Las Vegas and created an index using The
Conference Board method. We chose this method over Stock and Watson’s after examining their results
and the literature comparing the two methods (which was in favor of The Conference Board).
After constructing our coincident index, we were faced with many choices for constructing our
leading index. Instead of pursuing a purely empirical-inductive or theoretical-deductive approach, we
chose a path that allowed us to infuse the traditional, proven methods with modern day econometric
techniques, such as Granger causality testing, to discover leading economic indicators. Following our
approach from the coincident index, we employ The Conference Board method again, but are left with
unsatisfactory results.
Because our previous results suggested we needed a new method, we ran a series of “hollowed-out” regressions, enabling us to truly see what combination of leading indicators best predicts the coincident index. After calculating weights based upon our last regression, we have a vastly improved leading index, but are not convinced by its performance, particularly in the last few years.
By restricting the time periods of data used in the regressions, we are able to capture the structural changes in the economy that happen over time, and redefine which indicators do and don’t belong in the index. Our final restriction to only the most recent business cycle provides an index
unmatched by any other period. This is our final leading index.
6. Concluding Remarks
Since Burns and Mitchell’s work in 1938, there have been countless debates on how to properly
construct indexes that represent and predict the economy. First, the empirical-inductive approaches of the NBER prevailed, until they were pushed aside by models based on Keynesian theory. But when unpredicted high inflation and high unemployment arose during the 1970s, some claimed
the full-blown econometric models were dead and turned away from those theoretical-deductive
methods. Then Stock and Watson developed their own method, using modern-day econometric
techniques never before seen in this field, fueling the debate between the two approaches.
In this paper, we attempt to find a harmonious middle ground in creating new economic indexes
for the metropolitan area of Las Vegas, Nevada. We first construct a coincident index, using two
employment series of Las Vegas as representatives for the current state of the economy. We combine
these series using the method outlined by The Conference Board, developed from the statistical
approaches at NBER and the Department of Commerce.
After producing the coincident index, we construct a leading index. A rigorous set of criteria
ranging from traditional, proven methods to modern day econometric techniques determine which
series lead the Las Vegas economy. After finding our candidate variables, we employ The Conference
Board method with little success. Therefore, we develop a new method of creating a leading index,
using regression analysis to weight each data series. Restricting our data to specific time periods
produces three different sets of indicators and weights. Limiting data to the most recent business cycle
captures the currently intact and relevant economic relationships, resulting in an accurate, reliable
leading index.
As with all economic indexes, we can’t be entirely sure that ours will perform as well as desired. Its fate could be much the same as Stock and Watson’s index, failing to predict the first recession after its construction. If that were the case, each part of the method would have to be revisited and reevaluated. That being said, we feel that, given the economic theory and econometric practices available now, this method represents the best possible way to construct an economic index.
The stance taken here, using both empirical-inductive and theoretical-deductive methods, is much like that of Hoover (1994). He drew an analogy between economics and astronomy.
Astronomers don’t point their telescopes in random directions hoping to make a discovery. Although a
telescope is an extremely valuable tool, without knowing where to look, it becomes useless. It is through
observations and careful calculations that an astronomer knows where to look. Economics is much the
same. Observation and careful statistical work, such as that done by Mitchell and Burns, can tell us
where to look. Only then can we use our telescope, econometrics, to discover and create something
new.
Bibliography
Auerbach, A.J. (1982), The Index of Leading Economic Indicators: “Measurement Without
Theory,” Thirty-Five Years Later, The Review of Economics and Statistics, 64, 589-595.
Balcilar, M., Gupta, R., Majumdar, A. and Miller, S.M. (2010), Forecasting Nevada Gross Gaming
Revenue and Taxable Sales Using Coincident and Leading Employment Indexes,
University of Connecticut Working Paper.
Burns, A.F. and Mitchell, W.C. (1938), Statistical Indicators of Cyclical Revivals, National Bureau
of Economic Research Bulletin 69, New York. Reprinted as Chapter 6 of G.H. Moore, ed.
Business Cycle Indicators. Princeton: Princeton University Press. 1961.
Crane, S.E. (1993), Developing a Leading Indicator Series for a Metropolitan Area, Marquette
University, Economic Development Quarterly, 7, 267-281.
Crone, T.M. (1988), Using State Indexes to Define Economic Regions in the U.S., Journal of
Social and Economic Measurement, Special Issue on Regional Economic Models, 25.
Crone, T.M. (2000), A New Look at the Economic Indexes for the States in the Third District,
Federal Reserve Bank of Philadelphia.
Crone, T.M. (2006), What a New Set of Indexes Tells Us About State and National Business
Cycles, Federal Reserve Bank of Philadelphia.
Dua, P. and Miller, S.M. (1996). Forecasting and Analyzing Economic Activity with Coincident and
Leading Indexes: The Case of Connecticut. Journal of Forecasting, 15, 509-526.
Gazel, R.C. and Potts, R.D. (1995), Southern Nevada Index of Leading Economic Indicators,
Center of Business and Economic Research at University of Nevada, Las Vegas.
Gazel, R.C. (1995), The Southern Nevada Index of Leading Indicators: A Good Performance in
1995, Center of Business and Economic Research at University of Nevada, Las Vegas.
Gazel, R.C., Potts, R.D. and Schwer, R.K. (1997), Using a Regional Index of Leading Economic
Indicators to Revise Employment Data, Center of Business and Economic Research at
University of Nevada, Las Vegas.
Koopmans, T.C. (1947), Measurement Without Theory, The Review of Economics and Statistics,
29, 161-172.
Kurre, J.A. and Weller, B.R. (1999), Is Your Economy Ready to Turn on You? Constructing a
Leading Index for a Small Metro Area, Penn State Erie.
Kurre, J.A. and Riethmiller, J.J. (2005), Creating an Index of Leading Indicators for a Metro Area,
Economic Research Institute of Erie.
Lucas, Robert E., Jr (1976), Econometric Policy Evaluation: A Critique, The Phillips Curve
and Labor Markets, Carnegie-Rochester Conference Series on Public Policy 1, eds.
Brunner, Karl and Allan H. Meltzer, Amsterdam: North-Holland, 19-46.
Niemira, M.P. and Klein, P.A. (1994), Forecasting Financial and Economic Cycles,
New York: John Wiley.
Phillips, K.R. (1999), The Composite Index of Leading Economic Indicators: A Comparison of
Approaches, Journal of Economic and Social Measurement, 25.
Phillips, K.R. (2005), A New Monthly Index of the Texas Business Cycle, Federal Reserve Bank of
Dallas – San Antonio Branch, Journal of Economic and Social Measurement, 30, 317-333.
Shiskin, J. (1961), Signals of Recession and Recovery, National Bureau of Economic Research.
Simkins, S. (1999), Measurement and Theory in Macroeconomics, North Carolina A&T State
University.
Sims, Christopher A. (1980a), Comparison of Interwar and Postwar Business Cycles: Monetarism
Reconsidered, American Economic Review 70:2, 250-257.
Sims, Christopher A. (1980b), Macroeconomics and Reality, Econometrica 48:1, 1-48.
Sims, Christopher A. (1980c), Scientific Standards in Econometric Modeling, Discussion Paper
No. 82-160, Center for Economic Research, Department of Economics, University of
Minnesota.
Sims, Christopher A. (1982), Policy Analysis with Econometric Models, Brookings Papers on
Economic Activity 1, 107-152.
Slaper, T.F. and Cohen, A.W. (2009), The Indiana Leading Economic Index: Indicators of a
Changing Economy, Indiana Business Research Center, Indiana Business Review.
Stock, J.H. and Watson, M.W. (1989), New Indexes of Coincident and Leading Economic
Indicators, National Bureau of Economic Research Macroeconomics Annual, 351-394.
The Conference Board (2001), Components and Construction of Composite Indexes, Business
Cycle Indicators Handbook, 47-63.
United States Department of Commerce (1977), Composite Indexes of Leading, Coincident, and
Lagging Indicators: A Brief Explanation of Their Construction, Handbook of Cyclical
Indicators, A Supplement to the Business Conditions Digest, Bureau of Economic
Analysis, 73-76.
United States Department of Commerce (1984), Composite Indexes of Leading, Coincident, and
Lagging Indicators: A Brief Explanation of Their Construction, Handbook of Cyclical
Indicators, A Supplement to the Business Conditions Digest, Bureau of Economic
Analysis, 65-70.
Vining, Rutledge (1949), Koopmans on the Choice of Variables to Be Studied and the Methods
of Measurement, The Review of Economics and Statistics 31, No. 2, 77-86.
Vining, Rutledge (1951), Economic Theory and Quantitative Research: A Broad Interpretation
of the Mitchell Position, The American Economic Review 41, No. 2, Papers and
Proceedings of the Sixty-third Annual Meeting of the American Economic Association,
106-118.
An affirmative action/equal opportunity institution.