An Empirical Estimation of the Rational Expectations Competitive Storage Model∗

Fabio Gabrieli†
Università degli Studi di Verona

First (very preliminary) draft: December 20, 2005
Please do not quote without permission
Abstract
This study estimates by Simulated Method of Moments (SMM), using
price data alone, the commodity storage model following the standard
competitive framework based on Williams and Wright (1991), a simple
but intuitive economic model of price determination. Results show that
the SMM estimator presents small bias and variance. However, particular
attention must be paid to the choice of starting values, since restricting
their choice to economically reasonable figures may not be a sufficient
strategy to follow. In fact, whenever not accurately chosen they can eventually
mislead parameter estimates. Finally, in efficiency terms the competing
Pseudo-Maximum Likelihood (PML) estimator seems to perform better than this
simulation estimator when considering the storage model framework.
JEL classification: C15, B4, Q11.
Keywords: Simulated Method of Moments, Commodity Prices, Storage,
Rational Expectations, Monte Carlo Analysis.
1 Introduction
This study is concerned with the estimation of what has come to be known as the
rational expectations competitive storage model.1 As an equilibrium structural
model, it is explicitly derived from economic relations by incorporating the
microeconomics of supply, demand and storage, and aims at replicating the
equilibrium price for storable commodities.
∗ I have strongly benefited from conversations with Geert Dhaene (K.U.Leuven).
† Ph.D. candidate, Department of Economics.
In the last decade, following the outstanding work of Williams and Wright
(1991), there has been an increasing interest in understanding the complex nonlinear relationship between storage and commodity price dynamics. The nonlinearity here results from the fact that it is not possible to carry negative amounts
of stock, which in turn generates “price cycles characterized by flat bottoms and
sharp peaks” (Gilbert, 1995, p.385).
Although the modern storage theory can trace its origins to the works of
Williams (1935), Kaldor (1939) and Working (1949), to date only a few publications have been devoted to its empirical estimation. In fact, the lack of an
analytically tractable closed form solution of the model ruled out, for decades, any
kind of econometric inference.2
The first successful attempt dates back to a series of pathbreaking papers
by Deaton and Laroque (1992, 1995, 1996), who apply numerical methods to
estimate the rational expectations storage model, when only prices are observed.
This is due to the fact that “the absence of reliable quantity data renders our
rational expectations storage model unestimable by the Generalized Method of
Moments” (Miranda and Rui, 1999, p.2), henceforth GMM. However, recently
developed simulation-based econometric methods may be considered as a valid
alternative estimation strategy when facing the abovementioned difficulties.
Three main examples of simulation estimation frameworks are: Efficient
Method of Moments, Indirect Inference, and Simulated Method of Moments,
henceforth EMM, II and SMM, respectively. These estimation methods have
become very popular recently and have been successfully applied to various
time-series as well as panel data models in many different fields in economics.3
A common feature of all simulation estimators is that a set of moment conditions
is approximated using simulations from the structural model in a GMM-type
minimum distance criterion function framework. These estimators differ only
in their choice of moments. It is basically an indirect estimation strategy in its
use of the information contained in the data.
The simulation-based estimation approach to the storage model problem can
be outlined as follows: summarize the information contained in prices in an
auxiliary (reduced-form) model and match the parameters of the auxiliary model
obtained from observed prices to the parameters of the same model obtained
from simulated prices. This particular estimation procedure is known as the
indirect inference methodology.
The main purpose of this study is, therefore, to estimate through simulation-based econometric techniques the competitive storage model using price data alone in a simple but intuitive economic model of price determination. In practice, it relies on two other works: Williams and Wright (1991), as regards the method used to solve the storage model, and Michaelides and Ng (2000), who investigate the finite sample properties of the three abovementioned simulation-based estimators in the context of Deaton and Laroque's (1996) version of the speculative storage model under rational expectations. However, it contributes to the existing literature in that it applies one of these recent methods to the estimation of the storage model using empirical data.

2 As pointed out by Gilbert (1995, p.385): "The price of nonlinearity is that it is not, in general, possible to derive an analytic representation for the RE function relating the commodity price to the state variables even if the supply and demand equations are known precisely."

3 Among these are ARMA models (Ghysels, Khalaf and Vodonou, 1994; Chumacero, 1997, 2001; De Luna and Genton, 2001), commodity price and storage models (Michaelides and Ng, 2000), exchange rate target zone models (Chung and Tauchen, 2001), models with lagged latent variables (Laroque and Salanie, 1993), one dimensional SDE models (Cleur and Manfredi, 1999), real business cycle models (Smith, 1993) and stochastic volatility models (Monfardini, 1998; Andersen, Chung and Sorensen, 1999; Dhaene, 2004).
The work is arranged as follows. Section 2 introduces the theoretical rational expectations model of competitive storage. Section 3 briefly discusses
the simulation-based estimation approach and the choice of the method used
herein. Section 4 describes the SMM estimator, while section 5 provides a controlled simulation experiment to assess the performance of the chosen estimation
methodology. Section 6 presents the results of new estimates of the simple competitive storage model on the same twelve series of annual commodity prices
analyzed in the literature (see Deaton and Laroque (1992, 1995, 1996), Miranda
and Rui (1999), Revoredo (2000, 2001) and Cafiero (2002, 2003)). A concluding
section summarizes the findings.
2 The Competitive Storage Model
In other areas of applied economics researchers usually start with an economic
model which is believed to describe the problem under study well. Given available data researchers then choose an econometric technique best suited to estimate the model parameters and to test the propositions. Price determination
for equilibrium storage models is not different. It is based on a structural model
describing the behavior of a set of prices, which are the only observable variable
available.
The basic competitive storage model can be represented by the following
equations:
$$p_t + \kappa - \frac{E_t[p_{t+1}]}{1+r} = 0; \qquad s_t > 0 \tag{1}$$

$$p_t + \kappa - \frac{E_t[p_{t+1}]}{1+r} \geq 0; \qquad s_t = 0 \tag{2}$$

$$p_t = f(c_t \mid \theta_d); \qquad f'(\cdot) < 0 \tag{3}$$

$$h_t + s_{t-1} = c_t + s_t \tag{4}$$
where p_t is the price in period t, E_t[p_{t+1}] represents expectations, in period t, of prices in period t+1, κ the marginal cost of storage services (assumed constant), r the interest rate, s_t the carryover, f(·) the inverse consumption demand function, c_t the level of consumption, θ_d a vector of unknown parameters, h_t the quantity produced, and s_{t−1} the quantity carried out of period t−1 into period t.
Equations (1) and (2) represent the so-called competitive intertemporal arbitrage condition. It states that, whenever stocks are held, the difference between
the price expectation (conditional on the current information) for the commodity, discounted at the prevailing interest rate, and its current price has to be
equal to the cost of storage (1), and smaller than or equal to the cost of storage when stocks are not held (2). It also represents the first order condition of
the stockholding activity, thus generating the derived demand for commodity
stocks. This asserts that the dynamic equilibrium in the commodity markets
will be guaranteed by the action of price-taking, expected-profit-maximizing,
risk-neutral agents (or stockholders), who, whenever expected appreciation exceeds the marginal cost of storage, will increase their stockholdings until the equilibrium is restored.
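As a hypothetical numerical illustration of the arbitrage condition (the figures are chosen for exposition only): with r = 0.05 and κ = 0.05, a stockholder expecting E_t[p_{t+1}] = 1.10 finds storage profitable only while p_t < 1.10/1.05 − 0.05 ≈ 0.998; competitive stockholding then bids up p_t (and, by adding to future supply, lowers E_t[p_{t+1}]) until (1) holds with equality.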
Other than the behavior of stockholder agents there are in principle two
other structural equations in the model: the consumption equation (3), which,
in this case, is a simple decreasing inverse linear demand function, such as:
$$p_t = a + b\,c_t, \tag{5}$$
where a > 0 and b < 0 (other variables, such as income variation of consumers,
are suppressed for simplicity), and the production equation that can be either
elastic or inelastic with respect to expected prices.4
In its simplest formulation, adopted here, production is a stochastic, i.i.d. process, where the volume of commodity produced (harvested), determined exogenously, is affected by a continuous normally distributed (with a mean of zero) productivity shock (harvest yields), ν^j, which, for convenience, is approximated by a discrete symmetric distribution with nine points (see Table 1). The distribution of this random disturbance (shared by all producers) is assumed to be common knowledge, and in each period the value of ν^j is an independent draw from that distribution. Thus, realized production is:

$$h_t = \bar{h}_t (1 + \nu_t^j), \tag{6}$$

where \bar{h}_t is constant (perfectly inelastic); in other words, there is no supply response. An improvement of this assumption can be made by determining production (harvest) endogenously, as storage affects producers' (farmers') price expectations, hence production (planting) decisions (see Scheinkman and Schechtman, 1983; Williams and Wright, 1991).5 Yet, this procedure would add more complexity to the model to be simulated, thus increasing further the time required to do each simulation.
The model is completed by the market clearing (or equilibrium) condition
(4), where aggregate supply for the commodity, or its total availability, xt (given
by the sum of current production plus previous year carryover), has to be equal
to its aggregate demand, qt (given by the sum of the derived demand for storage
plus the demand for current consumption):
$$h_t + s_{t-1} \equiv x_t = q_t \equiv s_t + c_t.$$
While (1) and (2) provide conditions that must hold at an equilibrium, nothing has been said about expectations formation.
4 On this point it is worth noting that this standard basic structure of the model is predominant in agricultural markets, but not in all markets. In fact, as pointed out by Gilbert (1995,
p.390): “In agricultural markets it is conventional to regard production as responding to
lagged expected prices and consumption as exhibiting a current period price response—see
Williams and Wright (1991) and Deaton and Laroque (1992). This reflects the fact that
production of agricultural food products is time consuming while consumption requires no
advance planning. This situation is reversed in metals industries,” where “[. . . ] consumption
decisions are based on lagged prices,” and, “[. . . ] in an annual model, production may be seen
as dependent on current price. So while in agricultural markets, the price brings demand into
line with variable supply, in metals markets the price brings supply into line with variable
demand.”
5 Other versions of the theoretical model allow for the possibility of considering non i.i.d.
production (harvest) processes, such as when it follows a first-order autoregression, as in
Deaton and Laroque (1995, 1996), or periodic disturbances, as in Chambers and Bailey (1996).
Table 1: Approximate discrete distribution of a continuous normally distributed shock

  j            1       2       3       4       5       6       7       8       9
  Prob[ν^j]   .0401   .0659   .1212   .1745   .1966   .1745   .1212   .0659   .0401
  (1 + ν^j)   0.80    0.85    0.90    0.95    1.00    1.05    1.10    1.15    1.20
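To make the discretization concrete, the following minimal Python sketch (the paper's own routines are written in GAUSS; the names here are illustrative) draws harvest shocks from the nine-point distribution of Table 1 via a Uniform(0,1) indicator, as done in section 5.2, and forms realized production according to (6):

    import numpy as np

    # Nine-point discrete approximation of the harvest shock (Table 1)
    prob = np.array([.0401, .0659, .1212, .1745, .1966, .1745, .1212, .0659, .0401])
    one_plus_nu = np.array([0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20])
    cdf = np.cumsum(prob)            # cumulative probabilities, ending at 1.0

    def realized_production(h_bar, n, rng):
        # Draw a Uniform(0,1) indicator and map it to one of the nine points,
        # then apply (6): h_t = h_bar * (1 + nu_t)
        u = rng.uniform(size=n)
        return h_bar * one_plus_nu[np.searchsorted(cdf, u)]

    harvests = realized_production(h_bar=1.0, n=100, rng=np.random.default_rng(0))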
In fact, in order to solve the model it is required to define the way expectations are formed. Since price
in any period depends on the amount of carrying-out stocks, s_t (notice that p_t = f(h_t + s_{t−1} − s_t | θ_d)), expectations about prices in a future period will
depend on anticipated storage decisions made in that period, which in turn will
be affected by anticipated storage decisions in subsequent periods. Thus, the
stockholders’ price expectations Et [pt+1 ] must be estimated using a numerical
technique based on the recursive logic of dynamic programming. The approach
used in this work is that of polynomial approximation (see Williams and Wright,
1991), where the function that relates the expected future price (appearing in
the arbitrage equations) with current storage is approximated by a cubic polynomial, such as:
$$E_t[p_{t+1} \mid s_t] \approx \psi_0 + \psi_1 s_t + \psi_2 s_t^2 + \psi_3 s_t^3. \tag{7}$$
Such a function, as argued by Cafiero (2003, p.5), “[. . . ] is a legitimate description of the equilibrium, given that the amount of carrying-out stock, st , completely characterizes the state of the system at the end of period t and knowledge
of the Et [pt+1 ] allows to compute all other current optimal decisions.”6
In practice, the resulting function Et [pt+1 |st ] is approximately consistent
with the profit-maximizing arbitrage condition (1). In other words, the expected
future price resulting from storage is the same price used by stockholders while
deciding the amount to be stored; in that sense, stockholders’ expectations
are internally consistent (see Appendix herein for a detailed description of the
procedure).
6 Actually, different algorithms have been proposed for the numerical solution of the storage
model (see Wright and Williams, 1984; Williams and Wright, 1991; Deaton and Laroque,
1995, 1996; Judd, 1999). However, as pointed out by Cafiero (2003, p.4), they all share a
common basic strategy: “[. . . ] to find a solution amounts to defining a solution function—
which expresses one of the ‘controls’ of the model as a function of the ‘state’—and then
approximating it with a flexible form through iterative procedures. The natural ‘state’ for
this model would be the total availability, xt ,” and “[. . . ] the ‘control’ would naturally be
thought of as the amount of carrying-out stocks, st , even though one could also consider, as
alternative controls, the quantity consumed, ct , or even the equilibrium price, pt .”
Once the expected price function has been estimated, equilibrium storage
st for any given value of the continuous variable xt can be found by solving
(1) through (3) for p_t and s_t; hence a time series of prices can be simulated.
Non-negativity conditions on storage mean that price spikes can occur during
stock-outs, where the price of current period consumption is much higher than
the expected marginal value of future consumption. Since the storage rule
and probability distribution of prices simulated using the rational expectations
model provide a result that is consistent with optimizing behavior, it is not
optimal to store more than that indicated by the solution to the arbitrage rule.
This is because higher levels of storage imply that the opportunity cost of current
consumption is higher than the expected value of future consumption, after
taking into account the full cost of carry.
Slightly different versions of this standard approach, entirely described in
Williams and Wright (1991, Chapter 3, Appendix), can also be found in many
other studies focusing on establishing an equilibrium price model for primary
commodities as, for instance, Deaton and Laroque (1992, 1995, 1996), Miranda
and Rui (1999), Routledge, Seppi and Spatt (2000), Revoredo (2000, 2001)
and Cafiero (2002, 2003). All of them share the same modeling features: supply is determined by speculative storage and random behavior of production
(harvests), and price is obtained through numerical approximations that relate
supply, demand and storage. However, these works differ by the choice of the
function to be approximated and the method of approximation used. For the
former, one may “[. . . ] define the solution in terms of the price, treating it as
a function of the available supply, i.e. pt = f (xt ),” as suggested by Gustafson
(1958), and adopted later by Deaton and Laroque (1992, 1995, 1996) and by
Miranda and Rui (1999); or define the solution “[. . . ] based on approximating
the function that expresses the expected, next period price conditional on the
amount of carrying-out stock, i.e. ψ(st ) ≡ Et [pt+1 |st ]” (Cafiero, 2003, p.4-5),
as in Wright and Williams (1982, 1984), Miranda and Helmberger (1988), and
Williams and Wright (1991). For the latter, “possible candidates include simple
low order polynomials, splines, and more sophisticated linear combinations of
orthogonal polynomials” (Cafiero, 2003, p.6).7
Although the focus of this study is on the econometric estimation of the model, it is worth noting that any estimation method, which already makes use of an optimization procedure, will necessarily nest the solution to the storage model within some other optimization procedure. Hence, when choosing the estimation method to be applied, given the computationally demanding requirements for empirical implementation of the model, one has to take this into account in order to find the best balance between accuracy and speed of the algorithm.

7 See Judd (1999) and Miranda and Fackler (2002) for a comprehensive discussion on function approximation methods in computational economics.
3 A Note on Simulation-Based Estimation

Simulation-based estimation methods aim at estimating the parameters of a structural model by simulating data from that model and determining the parameters that minimize a GMM-type criterion function for some chosen set of moment conditions. The parameter vector of interest, θ, is estimated by finding
the global optimum of the objective function, which basically consists of one of two calibration criteria: path calibration, which simply measures the distance between the observed path, y_t, and the simulated path, ỹ_t; or moment calibration, which measures the distance between a set of empirical moments computed using y_t and ỹ_t, respectively. Since, as shown by Gourieroux and Monfort (1996), the path calibrated estimator will not generally be consistent, it is usually preferred to employ one of the many consistent moment calibrated estimation methods. These methods differ in their choice of the calibration moments.
Assume that yt (θ) is a vector stochastic process generated by a specified
structural equilibrium model. Sometimes, due to inherent nonlinearities and/or
the presence of latent variables in the structural model, the objective function
becomes intractable for classical estimation. However, if it is possible to simulate
values of yt from given values of the structural parameters, θ, then the estimation
becomes feasible. Hence, simulation-based GMM-type models consist of two
parts: a structural economic model that describes the mapping from parameters,
shocks and exogenous variables to endogenous variables; and the set of moment
conditions used for estimation.
The estimation is carried out by minimizing a criterion function of the following form:

$$\hat{\theta} = \arg\min_{\theta \in \Theta}\; \xi(\theta)' W^* \xi(\theta), \tag{8}$$
where θ is a p × 1 vector of structural parameters, ξ(θ) is a q × 1 moment vector
and W ∗ is a q × q optimal positive definite weighting matrix. In practice, then,
simulation based estimation methods differ only in their choice of ξ(θ) (hence
the criterion function).
Although there are different moment calibrated estimation methods such as
EMM, II and SMM, for the reasons to be described below, I will only focus on
the last one, that is, SMM.
As already mentioned, Michaelides and Ng (2000) compare the performance
of EMM, II and SMM in small samples of 100 or 200 observations. They use
the same auxiliary model for EMM and II. For SMM, after choosing a first set
of moment conditions which they call “[. . . ] naive as they do not explicitly
recognize the non-linear nature of the price process,” they end up by preferring a second set of moment conditions that, other than the first two central
moments and the first-order autocovariance, “[. . . ] incorporates more relevant
information about the structural model by taking into account the skewness and
kurtosis of the price process” (Michaelides and Ng, 2000, p.245), for a total of
five empirical moments to be matched. After investigating various numbers of
simulations, they conclude that the small sample performance of II is best for
the estimation of an MA(1), while EMM beats II and SMM for the estimation of
a rational expectations storage model. However, with respect to computational
time, SMM is the fastest and II is by far the slowest. In fact, for SMM “the
simulations for a given configuration (of T for various numbers of N ) take at
most one day to compute" while it takes twice as much for the simplest EMM case and "[. . . ] several days just to do 100 Monte Carlos with N = 500" for II
with the non-linear auxiliary models proposed. Finally, they claim that their
“[. . . ] experience with applying the Indirect Inference estimator to non-linear
auxiliary models has been discouraging,” so as “[. . . ] from a pure computational standpoint, SMM comes out the winner, followed by EMM, and then II”
(Michaelides and Ng, 2000, p.254-255).
Considering that the storage model is computationally costly on its own, as it already requires a demanding optimization procedure to be solved, the abovementioned findings suggest that the choice of SMM may be the best balance
between accuracy and speed of the estimation procedure.
4 Simulated Method of Moments
This section describes the chosen moment-based estimation method, namely
SMM. This estimator was proposed by McFadden (1989) and Pakes and Pollard (1989) to estimate discrete-choice problems, and by Lee and Ingram (1991) and Duffie and Singleton (1993) to estimate time-series models.
The Simulated Method of Moments estimator of θ uses a simple moment
matching scheme, as it tries to match the moments from the observed data (empirical moments) to the moments from the simulated data (theoretical moments)
as closely as possible. Thus the form of the calibration moment conditions is

$$E\left[m(y_t) - m(\tilde{y}_t(\theta))\right], \tag{9}$$

where m(y_t) and m(ỹ_t(θ)) are moment functions from observed and simulated data respectively. Replacing the population moment conditions with their sample counterparts, it reads

$$\xi(\theta) = \frac{1}{T}\sum_{t=1}^{T} m(y_t) - \frac{1}{N}\sum_{t=1}^{N} m(\tilde{y}_t(\theta)), \tag{10}$$

where N = T · H is the simulation sample size and H ≥ 1. Now the SMM estimator, θ̂_SMM, can be defined as the solution to the following minimization problem:

$$\hat{\theta}_{SMM} = \arg\min_{\theta \in \Theta}\; \xi(\theta)' W^* \xi(\theta). \tag{11}$$
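For concreteness, a minimal Python sketch of the criterion (10)-(11) follows; the simulate and moments functions are hypothetical placeholders for the model simulator and the chosen moment vector, and the paper's actual implementation uses GAUSS with the BFGS routine of the OPTMUM library:

    import numpy as np
    from scipy.optimize import minimize

    def xi(theta, y_obs, simulate, moments, H=10):
        # Moment distance (10): empirical moments minus the average of the
        # moments over N = T*H simulated observations
        y_sim = simulate(theta, len(y_obs) * H)
        return moments(y_obs) - moments(y_sim)

    def smm_objective(theta, y_obs, simulate, moments, W):
        e = xi(theta, y_obs, simulate, moments)
        return e @ W @ e             # criterion (11): xi(theta)' W* xi(theta)

    # theta_hat = minimize(smm_objective, theta_start,
    #                      args=(y_obs, simulate, moments, W_star),
    #                      method="BFGS").x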
"The simulations can be seen as approximating the actual population moments of the structural model by performing Monte Carlo integration as determined by $N^{-1}\sum_{t=1}^{N} m(\tilde{y}_t(\theta))$" (Michaelides and Ng, 2000, p.234). In addition, Lee and Ingram (1991) show that under certain regularity conditions

$$\frac{1}{T}\sum_{t=1}^{T} m(y_t) \xrightarrow{a.s.} E\left[m(y_t)\right] \quad \text{as} \quad T \to \infty,$$

and

$$\frac{1}{N}\sum_{t=1}^{N} m(\tilde{y}_t(\theta)) \xrightarrow{a.s.} E\left[m(\tilde{y}_t(\theta))\right] \quad \text{as} \quad N \to \infty.$$

Under the null hypothesis that the economic model is correct at θ_0, these expectations will be equal to each other:

$$E[m(y_t)] = E[m(\tilde{y}_t(\theta_0))]. \tag{12}$$

As stated by Lee and Ingram (1991, p.199), this means that at the true value of θ, the simulated series, {ỹ_t(θ_0)}_{t=1}^{N}, will be drawn from the same distribution as the observed data, {y_t}_{t=1}^{T}.
Furthermore, Lee and Ingram (1991) and Duffie and Singleton (1993) show that, under certain regularity conditions, the distribution of the SMM estimator is asymptotically normal, that is:

$$\sqrt{T}\,(\hat{\theta}_{SMM} - \theta_0) \longrightarrow N(0, \Omega),$$

where the p × p covariance matrix is given by

$$\Omega = \left(1 + \frac{1}{H}\right)\left[E\!\left(\frac{\partial m(\tilde{y}_N(\theta))}{\partial \theta'}\right)' I_0^{-1}\, E\!\left(\frac{\partial m(\tilde{y}_N(\theta))}{\partial \theta'}\right)\right]^{-1}_{\theta=\theta_0}, \tag{13}$$

where the derivatives ∂m(ỹ_N(θ))/∂θ' are computed numerically and the expectation approximated by the average over the N simulated data points, and I_0 is the long run covariance matrix of m(y_t), whose inverse is the optimal weighting matrix when q > p:

$$W^* = I_0^{-1} = \left[\lim_{T \to \infty} Var\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} m(y_t)\right)\right]^{-1}, \tag{14}$$

where W* is a positive definite matrix. As usual, by using the optimal weighting matrix, a larger weight is given to the moments that are most informative. It is worth noting that the asymptotic covariance matrix of the moments is estimated using only observed data.
To conclude this brief description of the SMM estimation method, a goodness-of-fit test, i.e. the overidentifying test statistic (OID) proposed by Hansen (1982), is presented. It can be formed using the criterion function and the optimal weighting matrix, and converges to a Chi-square distribution with q − p degrees of freedom:

$$OID_{SMM} = T\left[1 + \frac{1}{H}\right]^{-1} \xi(\hat{\theta}_{SMM})' W^* \xi(\hat{\theta}_{SMM}) \xrightarrow{d} \chi^2(q-p), \tag{15}$$

where ξ(θ̂_SMM)' W* ξ(θ̂_SMM) is the value of the objective function at the optimum.
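A sketch of the OID computation in Python, assuming the moment distance ξ(θ̂) and the optimal weighting matrix W* have already been obtained (hypothetical names; SciPy's chi2 supplies the p-value):

    from scipy.stats import chi2

    def oid_test(xi_hat, W_star, T, H, q, p):
        # Statistic (15) and its asymptotic p-value under chi2(q - p)
        stat = T / (1.0 + 1.0 / H) * float(xi_hat @ W_star @ xi_hat)
        return stat, chi2.sf(stat, df=q - p)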
5 A Monte Carlo Experiment
This section provides a controlled simulation experiment to assess the performance of the chosen estimation methodology outlined above. In order to gain
a better understanding of the properties of the estimator and the algorithm to
solve and estimate the storage model, I first apply the SMM estimator within
the framework of a first-order moving average model, MA(1), as an illustrative
example, and then consider the much more complex storage model.
5.1 The MA(1) Model

Assume we know that the DGP for a series {y_t}_{t=1}^{T} can be represented by a structural model, such as the MA(1):

$$y_t = \mu + \varepsilon_t - \phi\,\varepsilon_{t-1}; \qquad \varepsilon_t \sim N(0, 1).^8 \tag{16}$$
8 The only difference here with respect to Michaelides and Ng (2000) is the addition of a
constant term in the structural model.
The objective here is the estimation of the vector of structural parameters θ = (μ, φ)' by the SMM estimator. In the current example the mean, variance and the first three autocovariances of y_t are considered; hence the chosen moment function is given by:

$$m(y_t) = \left[\,y_t,\; (y_t - \bar{y})^2,\; (y_t - \bar{y})(y_{t-j} - \bar{y})\,\right]', \qquad j = 1, 2, 3,$$

where ȳ is the mean of {y_t}_{t=1}^{T}. After simulating a total of N = T · H points, the procedure tries to match the empirical moments of the observed series, m(y_t), with the moments of the simulated series, m(ỹ_t). In practice, for SMM the Monte Carlo experiment is given by the following steps:
1. specify the vector of true parameters as, for instance, θ_0 = (1, 0.5)' and generate a Monte Carlo sample series (which is treated as "observed" data) {y_t}_{t=1}^{T} for each replication using (16). Three sample sizes are considered herein: T = 100, 200, 500. Standard normal errors are obtained by the rndns() command in GAUSS using a seed value (here seed = 1010 + j, where j indexes the replication);

2. determine the simulation size, N. Given H = 10, three different simulation sizes are considered: N = 1000, 2000, 5000. In each case one simulated path is obtained for the error series to be used in simulating the endogenous variable and moments. The same set of error series (but with a seed value different from that of step 1; here seed = 2011 + j) is used within the minimization algorithm;

3. estimate the model parameters, given a guessed set of initial values for θ (here (0, 0)');
4. calculate OID test statistics.9
Steps 1 to 4 are replicated 500 times (using a different seed for each replication), yielding 500 estimates of μ and φ. Besides, in order to limit the effect of the starting values used to simulate the series, twice as many observations are generated in every simulation, while for estimation purposes the first half is discarded.
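For illustration, a minimal Python rendering of steps 1-3 and of the burn-in just described (the original experiment uses GAUSS and its rndns() generator, so the draws here will not reproduce the paper's figures):

    import numpy as np

    def simulate_ma1(mu, phi, n, rng):
        # MA(1) path (16); generate 2n points and keep the last n,
        # discarding the first half to limit startup effects
        e = rng.standard_normal(2 * n + 1)
        y = mu + e[1:] - phi * e[:-1]
        return y[n:]

    def ma1_moments(y):
        # Mean, variance and first three autocovariances
        yb = y - y.mean()
        acov = [np.mean(yb[j:] * yb[:-j]) for j in (1, 2, 3)]
        return np.array([y.mean(), np.mean(yb ** 2), *acov])

    rng = np.random.default_rng(1010)          # seed mimicking step 1
    y_obs = simulate_ma1(1.0, 0.5, 100, rng)   # "observed" data, theta0 = (1, 0.5)'
    m_hat = ma1_moments(y_obs)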
Recall that the optimal weighting matrix is given by W* = I_0^{-1}. In this experiment, as well as in the empirical estimation of section 6, the long-run covariance matrix, I_0, is estimated using the Newey-West heteroskedasticity and autocorrelation consistent procedure, with appropriate weights. That is,

$$\hat{I}_0 = \hat{\Gamma}_0 + \sum_{j=1}^{l}\left[\hat{\Gamma}_j + \hat{\Gamma}_j'\right]\omega(j, l), \tag{17}$$
9 The OID is the test for overidentifying restrictions and has a χ2 distribution with q − p
degrees of freedom, where q is the number of moments, and p is the number of structural
parameters. In these experiments the OID test statistic tests if the model is correctly specified,
so the null hypothesis is that the structural model is the true model. Hence, the number of
times OID test statistics rejects the true null hypothesis is calculated. This is done by counting
the number of times the p-value from step 4 is smaller than 0.05 (conventional significance
level) and dividing the resulting number by the number of MC replications, hence yielding
the percentage of rejections.
where Γ̂_j is the j-th order estimated autocovariance matrix

$$\hat{\Gamma}_j = \frac{1}{T}\sum_{t=j+1}^{T} \hat{m}_t\, \hat{m}_{t-j}' \qquad (j = 0, 1, \ldots, T-1), \tag{18}$$

ω(j, l) are the weights and l(T) is the bandwidth.10 Following Newey and West (1987) the bandwidth, which depends on the sample size T, is given by:

$$l = \left\lfloor 4\,(T/100)^{2/9} \right\rfloor.$$
As regards ω, several weighting structures, also called kernels or lag windows, have been proposed in the literature.11 The main difference between them
is in the way they generate the weights. In this first experiment, in order to
test the performance of different weighting specifications, I make use of two lag
windows: Bartlett’s and Parzen’s.
Newey and West (1987) suggest using the Bartlett kernel, which assigns linearly decreasing weights:12

$$w_j = 1 - \frac{j}{l+1}.$$
This is similar to a Triangular window, thus simply a tent function. The other
widely used specification for the weighting function is the so-called Parzen’s
window, a piece-wise cubic approximation of the Gaussian window:

$$w_j(x) = \begin{cases} 1 - 6x^2 + 6|x|^3, & \text{if } 0 \leq |x| \leq 0.5; \\ 2(1 - |x|)^3, & \text{if } 0.5 < |x| \leq 1, \end{cases}$$

where x = j/(l + 1).
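The long-run covariance estimator (17)-(18), the Newey-West bandwidth rule and the two kernels can be sketched in a few lines of Python (an illustrative rendering, assuming the moment-function values are supplied as a T × q array, demeaned where appropriate by the caller):

    import numpy as np

    def bartlett(j, l):
        # Triangular (tent) weights, linearly decreasing in j
        return 1.0 - j / (l + 1.0)

    def parzen(j, l):
        # Piece-wise cubic approximation of the Gaussian window
        x = j / (l + 1.0)
        return 1 - 6 * x ** 2 + 6 * x ** 3 if x <= 0.5 else 2 * (1 - x) ** 3

    def long_run_cov(m, kernel=bartlett):
        # HAC estimate of I0 as in (17)-(18); m is a T x q array of
        # moment functions m_t
        T = m.shape[0]
        l = int(4 * (T / 100.0) ** (2.0 / 9.0))   # Newey-West bandwidth
        I0 = m.T @ m / T                          # Gamma_0
        for j in range(1, l + 1):
            G = m[j:].T @ m[:-j] / T              # Gamma_j
            I0 += kernel(j, l) * (G + G.T)
        return I0

    # W_star = np.linalg.inv(long_run_cov(m_matrix))   # optimal weights (14)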
The Monte Carlo experiment was run using GAUSS 6.0. To minimize the
SMM criterion function (11), the estimation routine makes use of the BFGS
(Broyden, Fletcher, Goldfarb, and Shanno) algorithm available in the OPTMUM GAUSS library.13 Results are summarized in Table 2. In this table (as
in Table 3), Mean is the average (over the 500 replications) of the estimated
parameter values and ASE is the average asymptotic standard error, while Median and SD are the median and standard deviation of the empirical parameter
distribution. Taken singly or in pairs, these statistics are an important
source of information about the small sample distribution of the estimated parameters. The Mean, for instance, indicates the downward/upward (in the case
it is well below/above the true parameter values) bias of the estimate. A significant difference between Mean and Median denotes the skewness of the small
10 Actually, in econometric analyses, the practitioner is often not interested in the covariance matrix Ω̂ per se, but mainly wants to compute it for use in inferential procedures. This is because price data described by econometric models typically contain autocorrelation and/or heteroskedasticity of unknown form, and for inference in such models it is essential to use covariance matrix estimators that can consistently estimate the covariance of the model parameters.
11 Other kernel functions considered in the literature are the truncated, Tukey-Hanning and
quadratic spectral kernel.
12 For many data structures, it is a reasonable assumption that the autocorrelations should
decrease with increasing lags, so that it is rather intuitive that the weights wj should also
decrease.
13 For more on this issue, please refer to section 5.2.
sample distribution of the estimates. Also, if a large difference exists between SD and ASE, for instance SD ≫ ASE, this may suggest the inappropriateness of using the asymptotic formulation to compute the standard error, as it might understate the actual variability of the estimates in a small sample context.
Overall, it seems that, at least for this specific example, the most crucial
improvement of estimates comes from increasing the sample size, not the simulation size. In fact, raising the sample size T from 100 to 200 and 500, while
keeping N fixed to 1000, 2000 or 5000, increases the accuracy and efficiency of
all cases treated here. On the other hand, increasing the number of simulations
N has no effect on mean and median bias, even though it slightly increases the
efficiency of the estimator.
For the two weighting specifications analyzed, the SMM estimator seems to
overestimate the constant term, μ, while it underestimates the MA(1) parameter, φ. Also, in efficiency terms, there seem to be no apparent gains from
using a particular kernel. In fact, the ASEs of both specifications are reasonably close to the reported standard deviations in the Monte Carlo sample. It
is particularly interesting to notice that, although all OID test statistics have
reasonable size properties, the test does not seem to falsely reject the true model
less frequently as the sample size increases, as one may expect. Actually, as far
as the χ2 statistic is concerned, only in one case, where T = 200, N = 5000,
and using Bartlett weights, the SMM estimator has a size below the nominal
size of 5%, i.e. 4.6%, while in all other cases it is oversized. As both weighting
structures yield very similar results (see Table 2), based on the performance of the OID test, the long-run covariance matrix of sections 5.2 and 6 is calculated using the Newey-West procedure with a Bartlett kernel.
5.2 The Storage Model
This kind of analysis, also regarding the storage model, has already been done
in Michaelides and Ng (2000). Yet, because the model approach used here
differs from the one presented by them, in both the choice of the function to be
approximated and the method of approximation applied, I decided to re-do it in
order to gain more insight into the properties of the selected estimator and its
potential drawbacks before proceeding with the model estimation using actual
data. The experiment is based on generating a set of artificial price series from
a given parameterized competitive storage framework with known stochastic
properties.
In its simplest case, i.e. with inelastic supply, a simple linear demand, such
as (5), and constant marginal costs, the storage model implies the estimation of
four structural parameters: θd (say a and b, to stick to the prevailing notation), κ
and r. Following the literature, the interest rate is fixed at 5%, so the structural
parameter vector to be estimated is limited to three elements: θ = (a, b, κ)0 .
At this point, three crucial choices need to be made: the type of function
to be approximated and the approximation and estimation methods. Following
the motivations described in previous sections I now present a Monte Carlo
experiment estimation of the competitive storage model based on Williams and
Wright (1991) by applying SMM.
While the choice of SMM as the best estimation method for this specific
case was made on the basis of reasonable motivations, as regards the type
of function to be approximated and the approximation method to be used it
Table 2: Performance of the SMM estimator of MA(1)

                      μ̂ (true value 1)                  φ̂ (true value 0.5)
  T      N      Mean   Median   ASE     SD        Mean   Median   ASE     SD       OID

Results using Bartlett kernel
 100   1000    1.039   1.021   0.137   0.138     0.465   0.466   0.096   0.097    0.124
 100   2000    1.035   1.023   0.134   0.136     0.467   0.471   0.094   0.095    0.094
 100   5000    1.034   1.017   0.132   0.131     0.470   0.472   0.093   0.093    0.074
 200   1000    1.016   1.006   0.105   0.090     0.486   0.484   0.078   0.075    0.120
 200   2000    1.012   1.005   0.101   0.085     0.487   0.485   0.075   0.072    0.070
 200   5000    1.012   1.005   0.098   0.084     0.490   0.487   0.073   0.071    0.046
 500   1000    1.010   1.007   0.076   0.064     0.491   0.492   0.056   0.054    0.244
 500   2000    1.006   1.004   0.070   0.060     0.493   0.492   0.051   0.050    0.132
 500   5000    1.006   1.006   0.065   0.057     0.496   0.498   0.048   0.048    0.072

Results using Parzen kernel
 100   1000    1.023   1.011   0.132   0.116     0.466   0.470   0.101   0.097    0.118
 100   2000    1.020   1.013   0.129   0.114     0.468   0.471   0.099   0.095    0.090
 100   5000    1.020   1.013   0.127   0.111     0.471   0.473   0.097   0.093    0.092
 200   1000    1.008   1.000   0.101   0.075     0.487   0.487   0.082   0.076    0.130
 200   2000    1.005   0.998   0.097   0.071     0.489   0.487   0.078   0.073    0.080
 200   5000    1.005   0.998   0.094   0.070     0.492   0.488   0.076   0.072    0.052
 500   1000    1.005   1.002   0.073   0.051     0.491   0.491   0.059   0.054    0.260
 500   2000    1.001   0.999   0.066   0.048     0.493   0.494   0.054   0.050    0.136
 500   5000    1.002   1.001   0.062   0.045     0.496   0.499   0.051   0.048    0.078

Note: starting values are μ_0 = 0 and φ_0 = 0. T is the length of the actual data and N is the length of the simulated series. ASE denotes asymptotic standard error and SD denotes the standard deviation of the estimated parameters in 500 replications. OID is the test for overidentifying restrictions. The null is rejected if the test statistic is greater than the critical value of a χ²(3) at a significance level of 5%.
becomes less clear. In fact, although it is intuitive why Et [pt+1 ] is a function of
st , “admittedly, it is by no means obvious [. . . ] why it is amenable to approximation by a polynomial” (Williams and Wright, 1991, p.81). The main idea
behind it is that, regardless of the conditions of supply and demand in period
t + 1, price in period t + 1 will be a decreasing, presumably smooth, function of
the amount of stocks carried over, st , which therefore should be well represented
by a polynomial.14
Regarding the algorithm built to run the Monte Carlo experiment, it basically consists of two distinct procedures: the procedure to solve the storage model (detailed in the Appendix), which strongly relies on Williams and Wright (1991) and is embedded within the second one; and the optimization procedure to solve the minimization problem of equation (11), which in turn relies on the scheme presented in Michaelides and Ng (2000). To solve the model one has to numerically solve the arbitrage equation (1) for positive storage, making use of, for instance, Newton's Method to search for the solution. For the second optimization problem the so-called OPTMUM library of GAUSS is used. The standard setup applies the BFGS, a secant-type method of Broyden, Fletcher, Goldfarb, and Shanno. The choice is guided by the fact that, as it uses an approximation of the Hessian, the computational requirements, hence time, are considerably reduced (Aptech, 2005). The gradient of the objective function is used as the convergence criterion. In practice, when the elasticity of the criterion with respect to each parameter is less than 10^{-4}, OPTMUM exits the j-th iteration, hence yielding the j-th set of estimated coefficients.15
The first part of the computer routine follows the structure presented in
Williams and Wright (1991, p.82-84), and has been discussed in section 2.
The second part of the routine, actually the outer loop part, based on Michaelides and Ng (2000, p.249), can be summarized as follows (for the j-th Monte Carlo replication):16
1. let θ_0 = (0.5, −0.5, 0.05) and compute the expected price function (the same for all j);

2. simulate {ε_t}_{t=1}^{T} standard normal variates using seed 1010 + j;

3. simulate 100 + T points of p_t, and let the last T points be treated as the data;

4. compute m(p_t) = [p_t, (p_t − p̄)^i, (p_t − p̄)(p_{t−1} − p̄)]', i = 2, 3, 4;

5. simulate {ε̃_t}_{t=1}^{100+N} standard normal variates using seed 2011 + j, where N ≡ T · H;

6. start at the initial guess θ_0 = (0.6, −0.4, 0);
7. begin with iteration i = 1,
14 See Judd (1999) for more detailed explanations about the use of polynomials as an approximation method.
15 Please refer to the computer routines, available upon request, and to Optimization Toolbox
3.1 for GAUSS Manual for more detailed information on the procedures.
16 A few features of this description, such as notation and the parameter values, have been
adapted to the specific case treated herein.
(a) for a given θ_i, compute the expected price function and simulate 100 + N points of p̃_t using {ε̃_t} as production levels (harvests); the last N points are retained;

(b) evaluate the criterion function ξ(θ)'W*ξ(θ) and stop if convergence is achieved;

(c) update θ_i and return to (7a) (for the same j) until ξ(θ)'W*ξ(θ) is minimized.
8. update j and go back to 2.
Here the solution of the storage model is laid out in points 1, 3 and 7(a), while the remaining points form the backbone of the estimation algorithm. In particular, point 4 illustrates the chosen set of moment conditions which, in addition to the first two central moments and the first-order autocovariance, recognizes the non-linear nature of the price process under investigation by taking into account the skewness and kurtosis of prices. This choice is driven by the fact that, as emphasized by Duffie and Singleton (1993), the moments should have enough variation to allow for identification of the structural parameters.17 As regards points 2 and 5, notice that {ε_t} and {ε̃_t} can take one of the nine values shown in Table 1. This is basically done by simulating a Uniform(0,1) variable to be used as an indicator to select one of the nine points that approximate the normal distribution (Michaelides and Ng, 2000).
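A sketch of the five calibration moments of point 4 in Python (illustrative; the paper's routine is in GAUSS):

    import numpy as np

    def price_moments(p):
        # Point 4: level, second to fourth central moments, and
        # first-order autocovariance of the price series
        pb = p - p.mean()
        return np.array([p.mean(),
                         np.mean(pb ** 2),            # variance
                         np.mean(pb ** 3),            # captures skewness
                         np.mean(pb ** 4),            # captures kurtosis
                         np.mean(pb[1:] * pb[:-1])])  # first-order autocovariance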
Also here the effect of the starting values is limited by simulating extra observations. However, in order to reduce the computational burden, only 100 extra observations are generated in every simulation and the Monte Carlo experiment consists of only 100 replications. In practice just one case, where T = 100 (the original data sample has only 88 observations) and N = 500, is investigated. This is because "increasing N from 500 to 1000 and 2500 did little to improve the bias and efficiency of the estimates" (Michaelides and Ng, 2000, p.250).
Results, reported in Table 3, are in line with Michaelides and Ng’s (2000).
The SMM estimator tends to underestimate both a and b and overestimates
κ. Although the mean bias may be considered acceptable for the first two
parameters, it is considerably high (or non-negligible) for κ. Compared to the
results presented in Michaelides and Ng (2000, p.250, Table 3), the estimates
herein are slightly more efficient, and, except for κ, they exhibit the same,
relatively low level of bias. At this point, an important issue deserves particular
attention. As noticed by Michaelides and Ng (2000, p.251), “the asymptotic
standard errors are sometimes less than half the standard deviations. Because
such discrepancies are not observed for the simple MA(1) model reported in
Table 1 [Table 2 here], one possible explanation is that the SMM estimates are
more variable in practice than what is predicted by theory, as the structural
model becomes increasingly complex.” In fact, as already mentioned in section
5.1 and further stressed by Ruge-Murcia (2003, p.18), “if S.D. is much larger
than A.S.E., this indicates that using the asymptotic formula to compute the
standard error might understate the true variability of the estimate in small
samples.” For what concerns the test of overidentifying restrictions, the χ2 test
is slightly undersized.
17 Indeed, "ignoring the higher-order moments (such as skewness and kurtosis) when the actual process is non-linear, for instance, will lead to both higher mean and median biases and less efficient estimates" (Michaelides and Ng, 2000, p.251).
Table 3: Performance of the SMM estimator of the storage model

                   â (true value 0.5)           b̂ (true value −0.5)           κ̂ (true value 0.05)
  T     N     Mean   Median   ASE     SD     Mean    Median   ASE     SD     Mean   Median   ASE     SD      OID
 100   500   0.469   0.468   0.031   0.047  -0.470  -0.462   0.029   0.045   0.094   0.065   0.042   0.079   0.044
Note: starting values are equal to a0 = 0.6, b0 = −0.4, κ0 = 0.
In addition, it is worth noting that, as pointed out by Michaelides and Ng (2000, p.251), "the estimates are improved on both counts [smaller bias and smaller variance], however, as T increases from 100 to 200." This is something to be kept in mind when analyzing results, mainly those of the next section, since in the empirical case treated therein the original time series has only 88 observations.
6 Empirical Estimation
In this section we turn to the estimation of the commodity storage model by means of the SMM estimator using empirical data.

Commodity price data (available on request) were obtained from the Commodity Division of the World Bank, and consist of world prices for twelve selected commodities from 1900 to 1987, deflated by the U.S. consumer price index to 1980 constant dollars. These are the same data and deflation procedure as employed by Deaton and Laroque (1992, 1995, 1996), Miranda and Rui (1999), Revoredo (2000, 2001) and Cafiero (2002, 2003) in their studies.18
Results are shown in Table 4. Also, in order to allow for comparison of estimates, I reproduce in the same table the results obtained by Revoredo (2000, p.17, Table 1, model 3), who estimates Williams and Wright's (1991) version of the storage model using the Pseudo-Maximum Likelihood (PML) estimator.19
This particular empirical estimation exercise has also revealed an important
issue: attention must be paid to the choice of starting values, since restricting
their choice to economically reasonable values may not be a sufficient strategy
to follow. In fact, whenever not accurately chosen they can eventually mislead
parameter estimates. In order to avoid this unfortunate event, I eventually
decided to make use of the same estimated parameters obtained by Revoredo (2000) as initial, guessed values.20
18 As in Miranda and Rui (1999), each deflated price series may be further normalized to
have a historical mean of one by dividing by the sample average in order to allow easier
comparisons of parameters estimates across commodity price series.
19 Further comparisons can also be done with results obtained by Deaton and Laroque
(1996), Miranda and Rui (1999) and Cafiero (2003), where the PMLE is applied to slightly
different versions of the storage model. For the first and third works the cost of storage is
given by the interest rate and a proportional shrinkage coefficient (γ), while storage costs for
the second work are represented by a function based on the supply of storage theory (see
Working, 1949; Brennan, 1958; Telser, 1958), such as κ(st ) = θ0 + θ1 ln(st ).
Overall, results are relatively close to the ones computed by Revoredo (2000), who uses the PML estimator. Most parameters are statistically significant, although this feature of the estimates must be considered carefully, taking into account the possibility that the ASEs have been inappropriately computed. Where significant differences arise, these may be due to the fact that estimates are limited to only 100 Monte Carlo replications, and particularly to the fact that the benchmark model has not been calibrated to each specific commodity market. In effect, it is reasonable to believe that even slightly different storage capacity and supply shock assumptions may lead to considerably different price dynamics, hence different estimates. It is also worth remembering that results are sensitive to the choice and number of moments used.21
7 Conclusion
This work focuses on the application of a recently developed simulation-based estimation method, i.e. the Simulated Method of Moments (SMM), to the empirical estimation of the so-called commodity storage model within a competitive framework. This simulation method can come in handy because of its ease of application even for estimating considerably complex economic models, e.g. those characterized by the presence of latent variables or nonlinearities, which lead to an intractable log-likelihood function and/or GMM moment conditions that are not explicitly available. Commodity storage models fall into this category. Therefore this particular estimation technique provides applied researchers with a new set of tools that enable econometricians to estimate parameters of interest which are otherwise not estimable by conventional methods.
This study also provided two experiments utilizing Monte Carlo simulation methods in order to analyze how SMM works in a controlled setting. These examples are illustrative in that it was possible to compare the chosen estimator's performance across different model specifications, i.e. a straightforward MA(1) and the challenging storage model framework.
Preliminary results suggest that the SMM estimator presents small bias and variance, hence performing well, especially when dealing with relatively simple structural models, while these properties seem to weaken when considering demanding non-linear models. The illustrative examples also implied that the major improvement in terms of precision and accuracy of parameter estimates comes from increasing the sample size. Increasing the simulation size can provide some improvement, but it is fairly negligible.
Especially critical in the empirical case analyzed is the choice of starting values, which, whenever not accurately chosen, can eventually mislead parameter estimates.
Finally, in efficiency terms the competing Pseudo-Maximum Likelihood (PML)
estimator (see Revoredo, 2000, p.17) seems to perform slightly better than this
simulation estimator when considering the storage model framework.
20 By so doing, the range of feasible values the set of parameters may take is considerably reduced, hence helping to guarantee that the optimization algorithm reaches the global optimum.
21 In this respect it may be interesting to verify the applicability of tests to consistently determine the moments to be chosen, as in Andrews (1999) for the case of GMM, with regard to its simulation counterpart.
Table 4: Estimated parameters of the storage model

                  â       ASE      b̂       ASE      κ̂       ASE

Results using SMM
Cocoa           0.445    0.105   -0.344    0.084    0.273    0.161
Coffee          0.429    0.086   -0.307    0.093    0.105    0.041
Copper          0.685    0.088   -0.474    0.117    0.092    0.038
Cotton          0.684    0.091   -0.448    0.093    0.165    0.069
Jute            0.763    0.091   -0.496    0.095    0.130    0.051
Maize           1.363    0.205   -0.940    0.178    0.318    0.182
Palm Oil        1.100    0.806   -0.758    0.319    0.294    0.245
Rice            0.906    0.195   -0.625    0.171    0.156    0.092
Sugar           1.570    0.407   -1.270    0.310    0.401    0.252
Tea             0.659    0.071   -0.370    0.057    0.137    0.048
Tin             0.406    0.076   -0.275    0.092    0.059    0.024
Wheat           1.021    0.096   -0.732    0.104    0.245    0.049

Results using PML*
Cocoa           0.142    0.027   -0.234    0.036    0.003    0.002
Coffee          0.255    0.031   -0.240    0.052    0.008    0.003
Copper          0.552    0.048   -0.376    0.066    0.011    0.004
Cotton          0.681    0.063   -0.351    0.051    0.030    0.008
Jute            0.634    0.039   -0.348    0.053    0.033    0.012
Maize           0.690    0.100   -0.602    0.107    0.017    0.008
Palm Oil        0.595    0.608   -0.686    0.245    0.013    0.036
Rice            0.768    0.064   -0.392    0.077    0.048    0.022
Sugar           0.545    0.118   -0.758    0.116    0.029    0.016
Tea             0.510    0.019   -0.196    0.023    0.041    0.010
Tin             0.397    0.036   -0.430    0.116    0.010    0.003
Wheat           0.757    0.042   -0.348    0.032    0.045    0.007

ASEs have been computed using a Newey-West adjusted variance-covariance matrix with Bartlett kernel to correct for heteroskedasticity and autocorrelation.
*Results reproduced from Revoredo (2000, p.17).
References
Andersen, T., H. Chung and B. Sorensen (1999). “Efficient Method of Moments
Estimation of a Stochastic Volatility Model.” Journal of Econometrics 91, 61-87.
Andrews, D. W. (1999). “Consistent Moment Selection Procedures for Generalized
Method of Moments Estimation.” Econometrica 67 (3), 543-564.
Aptech Systems, Inc. (2005). Optimization for GAUSS™. Maple Valley, WA.
Brennan, M. J. (1958). “The Supply of Storage.” American Economic Review 48,
50-72.
Cafiero, C. (2002). Estimation of the Commodity Storage Model. Ph.D. thesis, Department of Agricultural and Natural Resource Economics, The University of
California at Berkeley.
Cafiero, C. (2003). “Computational Methods for the Analysis of the Dynamics of
Prices for Storable Commodities.” Working paper 2/2003, Dipartimento di
Economia e Politica Agraria, Università degli Studi di Napoli Federico II.
Chambers, M. J. and R. E. Bailey (1996). “A Theory of Commodity Price Fluctuations.” Journal of Political Economy 104 (5), 924-957.
Chumacero, R. A. (1997). “Finite Sample Properties of the Efficient Method of
Moments.” Studies in Nonlinear Dynamics and Econometrics 2 (2), 35-51.
Chumacero, R. A. (2001). “Estimating ARMA Models Efficiently.” Studies in
Nonlinear Dynamics and Econometrics 5 (2), 103-114.
Chung, C. and G. Tauchen (2001). “Testing Target-Zone Models Using Efficient
Method of Moments.” Journal of Business and Economic Statistics 19 (3),
255-269.
Cleur, E. and P. Manfredi (1999). “One Dimensional SDE Models, Low Order
Numerical Methods and Simulation-Based Estimation: a Comparison of Alternative Estimators.” Computational Economics 13, 177-197.
Deaton, A. and G. Laroque (1992). “On the Behaviour of Commodity Prices.”
Review of Economic Studies 59 (1), 1-23.
Deaton, A. and G. Laroque (1995). “Estimating a Nonlinear Rational Expectations Commodity Price Model with Unobservable State Variables.” Journal of
Applied Econometrics 10, S9-S40.
Deaton, A. and G. Laroque (1996). “Competitive Storage and Commodity Price
Dynamics.” Journal of Political Economy 104 (5), 896-923.
De Luna, X. and M. Genton (2001). “Robust Simulation-Based Estimation of
ARMA Models.” Journal of Computational and Graphical Statistics 10 (2),
370-387.
Dhaene, G. (2004). “Indirect Inference for Stochastic Volatility Models via the Log-Squared Observations.” Tijdschrift voor Economie en Management 49 (3), 421-440.
Duffie, D. and K. J. Singleton (1993). “Simulated Moments Estimation of Markov
Models of Asset Prices,” Econometrica 61 (4), 929-952.
Ghysels, E., L. Khalaf and C. Vodonou (1994). “Simulation Based Inference in
Moving Average Models.” CIRANO working paper series, University of Montreal.
Gilbert, C. L. (1995). “Modelling Market Fundamentals: a Model of the Aluminium Market,” Journal of Applied Econometrics 10 (4), 385-410.
Gourieroux, C. and A. Monfort (1996). Simulation-Based Econometric Methods. New York: Oxford University Press.
Gustafson, R. L. (1958). “Carryover Levels for Grains: a Method for Determining Amounts that are Optimal under Specified Conditions.” USDA, Technical
Bulletin, #1178.
Hansen, L. (1982). “Large Sample Properties of Generalized Method of Moments
Estimators.” Econometrica 50, 1029-1056.
Judd, K. L. (1999). Numerical Methods in Economics. Cambridge, MA: MIT Press.
Kaldor, N. (1939). “Speculation and Economic Stability.” Review of Economic
Studies 7, 1-27.
Laroque, G. and B. Salanie (1993). “Simulation-Based Estimation of Models with
Lagged Latent Variables.” Journal of Applied Econometrics 8, S119-S133.
Lee, B. and B. F. Ingram (1991). “Simulation Estimation of Time Series Models.” Journal of Econometrics 47, 197-205.
McFadden, D. (1989). “A Method of Simulated Moments for Estimation of Discrete
Response Models without Numerical Integration.” Econometrica 57 (5), 995-1026.
Michaelides, A. and S. Ng (2000). “Estimating the Rational Expectations Model
of Speculative Storage: A Monte Carlo Comparison of Three Simulation Estimators.” Journal of Econometrics 96, 231-266.
Miranda, M. J. and P. G. Helmberger (1988). “The Effects of Price Band Buffer
Stock Programs.” American Economic Review 78, 46-58.
Miranda, M. J. and P. L. Fackler (2002). Applied Computational Economics and
Finance. Cambridge, MA: MIT Press.
Miranda, M. J. and X. Rui (1999). “An Empirical Reassessment of the Commodity Storage Model.” Unpublished working paper, Ohio State University.
Monfardini, C. (1998). “Estimating Stochastic Volatility Model through Indirect
Inference.” Econometrics Journal 1, C113-C128.
Muth, J. F. (1961). “Rational Expectations and the Theory of Price Movements.”
Econometrica 29, 315-35.
Newey, W. K. and K. D. West (1987). “A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix.” Econometrica, 55 (3), 703-708.
Pakes, A. and D. Pollard (1989). “Simulation and Asymptotics of Optimization
Estimators.” Econometrica 54, 755-785.
Revoredo, C. (2000). “The Commodity Storage Model in the Presence of Stockholdings by Speculators and Processors.” Paper presented at the sixth conference
on Computing in Economics and Finance, Barcelona, Spain.
Revoredo, C. (2001). Storage and Commodity Price Behavior. Ph.D. thesis, Department of Agricultural and Resource Economics, The University of California
at Davis.
Routledge, B., D. J. Seppi and C. S. Spatt (2000). “Equilibrium Forward Curves for Commodities.” The Journal of Finance 55 (3), 1297-1338.
Ruge-Murcia, F. J. (2003). “Methods to Estimate Dynamic Stochastic General
Equilibrium Models.” Cahiers de recherche 2003 (23), Departement de Sciences
Economiques, Université de Montréal.
Scheinkman, J. and J. Schechtman (1983). “A Simple Competitive Model with
Production and Storage.” Review of Economic Studies 50 (3), 427-441.
Smith, A. A. (1993). “Estimating Nonlinear Time Series Models using Simulated
Vector Autoregressions.” Journal of Applied Econometrics 8, S63-S84.
Telser, L. G. (1958). “Futures Trading and Storage of Cotton and Wheat.” The
Journal of Political Economy 66 (3), 233-255.
Williams, J. B. (1935). “Speculation and the Carryover.” Quarterly Journal of
Economics 6.
Williams, J. C. and B. D. Wright (1991). Storage and Commodity Markets. Cambridge: Cambridge University Press.
Working, H. (1949). “The Theory of Price of Storage.” American Economic Review
39, 1254-62.
Wright, B. D. and J. C. Williams (1982). “The Economic Role of Commodity
Storage.” The Economic Journal 92, 596-614.
Wright, B. D. and J. C. Williams (1984). “The Welfare Effects of the Introduction of Storage.” Quarterly Journal of Economics 99, 169-191.
Appendix: derivation of the storage rule
Because the amount stored cannot be negative, derivation of competitive storage behavior is analytically intractable. However, by making use of numerical methods, a
quite accurate calculation of the relationship between equilibrium storage and current
availability, the so-called storage rule, becomes feasible.
This algorithm relies on the description for the derivation of the storage rule presented in Williams and Wright (1991, p.82-84). The objective here is to choose, for
any given total amount available, xt , levels of storage, st , that would result from the
actions of profit-maximizing stockholders who have rational expectations about price
in period t + 1.
The competitive storage behavior is implicit in the arbitrage equation (1):

$$p(x_t - s_t) + \kappa - \frac{E_t[p_{t+1} \mid s_t]}{1+r} = 0; \qquad s_t > 0. \tag{19}$$
Instead of directly approximating the storage rule, one first numerically derives the function E_t[p_{t+1}|s_t], which relates the expected future price to current storage, and then solves the arbitrage condition (19), thus obtaining the s_t associated with any x_t. The derivation of E_t[p_{t+1}|s_t] has the following structure:
1. choose a first guess ψ(s_t) for E_t[p_{t+1}|s_t], where ψ(s_t) is a third-order polynomial in s_t;

2. choose an N × 1 vector s̃_t of equally spaced discrete values s_t^i of s_t, i = 1, . . . , N;22

3. multiply h̄_t by (1 + ν^j), j = 1, . . . , M, where the elements of the ν vector of shocks are also equally spaced and each ν^j has probability prob[ν^j];

4. add s_t^i to each of the realized values of production generated in the previous step to produce a vector x_{t+1}^i, M elements long, of amounts available in the next period;

5. for each element x_{t+1}^{ij} of x_{t+1}^i numerically solve the implicit function

$$p(x_{t+1}^{ij} - s_{t+1}^{ij}) + \kappa - \frac{\psi(s_{t+1}^{ij})}{1+r} = 0 \tag{20}$$

for s_{t+1}^{ij}. If the solution is negative, set s_{t+1}^{ij} equal to zero. The search for the solution s_{t+1}^{ij} is made by making use of Newton's Method, which, by means of repeated loops, searches over possible values;

6. for each pair (x_{t+1}^{ij}, s_{t+1}^{ij}) that solves the arbitrage equation (20), given certain values for the parameters of the vector θ = (a, b, κ)', calculate the associated market price p(x_{t+1}^{ij} − s_{t+1}^{ij}) from the inverse demand function;

7. using the vector of these prices, calculate the expected price:23

$$E_t[p_{t+1} \mid s_t^i] = \sum_{j=1}^{M} p(x_{t+1}^{ij} - s_{t+1}^{ij})\, \text{prob}[\nu^j]; \tag{21}$$
22 As stated in Williams and Wright (1991, p.83): “N should be at least 10 or 12 and s_t^N should be close to the maximum storage seen in relatively long simulations. Inasmuch as this long-run maximum depends on the solution for the storage rule, it must be guessed initially. If the selection for s_t^N proves to be markedly off, it must be changed and the routine begun again.”
23 This is, as pointed out by Williams and Wright (1991, p.84), “[. . . ] the expected price,
from the perspective of period t, given that the carryout from period t is st and that storage
in period t + 1 uses ψ as its estimate of Et+1 [pt+2 ].”
8. when steps 3 to 7 have been repeated for each of the elements of s̃_t, the associated values of E_t[p_{t+1}|s_t^i], i = 1, . . . , N are fitted to, say, a third-order polynomial ψ*(s), where the s_t^i are the independent variables. If the fitted values of this regression differ by less than a certain chosen small amount from the values obtained using the guess ψ(s_t) from the first step, ψ*(s) is adopted as the function approximating the equilibrium E_t[p_{t+1}|s_t]; otherwise ψ*(s) is adopted as the new guess, replacing ψ(s), and steps 2 to 8 are repeated.
To conclude, notice that, as pointed out by Williams and Wright (1991, p.84):
“the initial guess for ψ in this procedure is not consistent with rational storage. Only
when a revised ψ reproduces itself are the expectations of future storage behavior
consistent with current storage. Thus, although the storage in all but the last iteration
is suboptimal, the program deduces the rational (i.e. optimal) behavior.”
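The following Python sketch mirrors steps 1-8 under the linear demand (5) and the nine-point shock of Table 1; the parameter values, grid, bisection (a stand-in for the paper's Newton search) and tolerances are illustrative assumptions, not the author's GAUSS implementation:

    import numpy as np

    prob = np.array([.0401, .0659, .1212, .1745, .1966, .1745, .1212, .0659, .0401])
    one_plus_nu = np.array([0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20])

    a, b, kappa, r, h_bar = 0.6, -0.4, 0.0, 0.05, 1.0  # illustrative parameters

    def price(c):
        # inverse demand (5): p = a + b*c
        return a + b * c

    def solve_storage(x, psi):
        # Step 5: solve p(x - s) + kappa - psi(s)/(1+r) = 0 for s >= 0.
        # Bisection assumes the left-hand side is increasing in s and
        # positive at s = x, which holds for b < 0 and a decreasing psi.
        f = lambda s: price(x - s) + kappa - np.polyval(psi, s) / (1 + r)
        if f(0.0) >= 0.0:          # corner solution: storage is unprofitable
            return 0.0
        lo, hi = 0.0, x
        for _ in range(60):        # bisect to high precision
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
        return lo

    # Steps 1-2: initial cubic guess psi (np.polyval order: s^3 ... s^0)
    # and a grid of carryout values
    psi = np.array([0.0, 0.0, -0.5, price(h_bar)])
    s_grid = np.linspace(0.0, 0.5 * h_bar, 12)

    for _ in range(200):           # iterate steps 3-8 until psi reproduces itself
        Ep = np.empty_like(s_grid)
        for i, s in enumerate(s_grid):
            x_next = h_bar * one_plus_nu + s                            # steps 3-4
            s_next = np.array([solve_storage(x, psi) for x in x_next])  # step 5
            Ep[i] = prob @ price(x_next - s_next)                       # steps 6-7
        psi_new = np.polyfit(s_grid, Ep, 3)                             # step 8
        done = np.max(np.abs(np.polyval(psi_new, s_grid)
                             - np.polyval(psi, s_grid))) < 1e-8
        psi = psi_new
        if done:
            break

    # psi now approximates E_t[p_{t+1} | s_t]; a price series can then be
    # simulated by drawing harvests and solving the arbitrage equation
    # period by period.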