SFB 649 Discussion Paper 2010-020
Aggregate Hazard Function in Price-Setting: A Bayesian
Analysis Using Macro Data
Fang Yao*
Humboldt University of Berlin
First draft: Oct. 20, 2009
This version: March 26, 2010
Abstract
This paper presents an approach to identify aggregate price reset hazards from the joint dynamic behavior of inflation and macroeconomic aggregates. The identification is possible because inflation is composed of current and past reset prices and the composition depends on the price reset hazard function. The derivation of the generalized NKPC links those composition effects to the hazard function, so that only aggregate data are needed to extract information about the price reset hazard function. The empirical hazard function is generally increasing with the age of prices, but with spikes at the 1st and 4th quarters. The implication of this finding for sticky price modeling is that the pricing decision is characterized by both time- and state-dependent aspects.
JEL classification: E12; E31
Key words: Sticky prices, Aggregate hazard function, Bayesian estimation
* I am grateful to Michael Burda, Carlos Carvalho, Harald Uhlig, Lutz Weinke, Alexander Wolman and other seminar participants in Berlin for helpful comments. I acknowledge the support of the Deutsche Forschungsgemeinschaft through the SFB 649 "Economic Risk". All errors are my sole responsibility. Address: Institute for Economic Theory, Humboldt University of Berlin, Spandauer Str. 1, Berlin, Germany, +493020935667, email: [email protected].
1 Introduction
In the current generation of monetary models, the effects of monetary policy are closely related to the speed at which the aggregate price level reacts to a nominal disturbance. The adjustment of the aggregate price in turn depends on two factors: the optimal reset price an adjusting firm chooses, and the fraction of firms changing their prices. With the exception of a few micro-founded state-dependent models[1], the majority of research on sticky prices is limited to addressing the optimal reset price decision, leaving the adjustment timing to be exogenously given by some simplified assumption, e.g. models incorporating the Calvo (1983) or Taylor (1980) approaches. In more technical terms, this amounts to restricting the price reset hazard function to a specific shape and studying other issues on the basis of this assumption.
Until recently, the aggregate price reset hazard function remained a largely ignored topic in the macro literature. It has begun, however, to draw more attention, because competing theoretical models of sticky prices deliver clear mappings between specific aggregate hazard functions and implications for macro dynamics and monetary policy. Pioneering work by Wolman (1999) and Kiley (2002) demonstrated that aggregate dynamics are sensitive to the hazard function underlying different pricing rules. For this reason, the aggregate hazard function provides a new metric for selecting theoretical models and identifying the most relevant propagation mechanism for monetary policy shocks.
Despite its uses, empirical studies of the aggregate hazard function are rare in the macro literature. By contrast, fast-growing evidence from micro data sets has become available in recent years[2]. However, I want to argue that distinguishing between the macro and micro hazard function is important, because it is the aggregate hazard function that is of great interest to macroeconomists. The aggregate hazard is defined as the probability of the aggregate price adjustment reacting to aggregate shocks. In theoretical macro models, those hazard rates can be clearly mapped into impulse responses of aggregate variables. By contrast, the mapping between micro hazard functions and macro dynamics is much trickier[3]. For example, Caplin and Spulber (1987) demonstrated that, when the selection effect is present, the aggregate economy is completely immune to price stickiness at the micro level, so that monetary policy has no real effect. Hazard functions estimated from micro data are therefore not a perfect substitute for the aggregate hazard function defined in the theoretical models. Besides this theoretical consideration, there are also empirical pitfalls that call for caution in interpreting micro hazard rates. First, micro hazard rates are typically higher than aggregate hazard rates, because individual prices react to both idiosyncratic and aggregate shocks, which are very difficult to disentangle. Second, evidence on the shape of the hazard function from microeconometric studies is not conclusive[4]. Micro data sets differ substantially in the range of goods included and in the countries and time periods covered, which makes it difficult to compare their results; and, even though comprehensive micro data sets have now become available, they are usually short compared to aggregate time series data. Most CPI or PPI data sets for the U.S. or Europe are only available from the late 1980s[5]. It is reasonable to think that the shape of hazard functions could depend on the underlying economic conditions, and would therefore change over the time periods of the collected data.

[1] See, e.g., Caplin and Spulber (1987), Dotsey et al. (1999), Caballero and Engel (2007) and Golosov and Lucas (2007). The strength of those models is to endogenize both the optimal reset price decision and the adjustment timing decision in the same framework. However, due to the complexity of this approach, few of them are actually applied in policy analysis.
[2] See, e.g., Bils and Klenow (2004), Alvarez et al. (2006), Midrigan (2007) and Nakamura and Steinsson (2008), among others.
[3] See Mackowiak and Smets (2008) for elaboration on this point.
[4] Some find strong support for increasing hazard functions (e.g. Cecchetti, 1986 and Fougere et al., 2005), while others find evidence in favour of decreasing hazards (e.g. Campbell and Eden, 2005, Alvarez, 2007 and Nakamura and Steinsson, 2008).
[5] For more details, see Table 2 in Alvarez (2007).
The objective of this paper is to estimate the aggregate price reset hazard function directly from time series data. To do that, I first construct a fully-specified DSGE model featuring nominal rigidity that allows for a flexible hazard function of price setting, derive the generalized New Keynesian Phillips curve (NKPC hereafter), and then estimate this model with the Bayesian approach. The identification of the aggregate hazard function is achieved due to the fact that the inflation rate can be decomposed into current and past reset prices, and this composition is determined by the hazard rates. The derivation of the generalized NKPC links those composition effects to the hazard function, so that only aggregate data are needed to extract information about the price reset hazard function. This identification method is based on a generic assumption about firm-level pricing behavior, making the mapping between the hazard function and aggregate dynamics robust. However, this method is not free from other typical identification problems which prevail in estimated New Keynesian models[6], e.g. observational equivalence with respect to the labor supply elasticity. For those poorly identified structural parameters, I conduct various sensitivity tests to check the robustness of the hazard function estimates.

[6] For a recent discussion on this topic, see Canova and Sala (2009) and Rios-Rull et al. (2009).
I estimate the hazard function using U.S. quarterly data on inflation, the growth rate of real output and the effective federal funds rate from 1955 to 2008. The empirical aggregate hazard function has a U-shape with a spike at the fourth quarter. The interpretation of this finding is that price setting is characterized by both time- and state-dependent aspects. On the time-dependent side, one quarter and four quarters appear to be the most important frequencies of aggregate price adjustment: about 34.2% of prices hold for less than one quarter, while 12.4% of prices have a mean duration of four quarters. Besides this time-dependent pricing pattern, the upward-sloping hazard function indicates that state-dependent pricing also plays an important role in price decisions, especially when a price becomes outdated. In fact, this generalized time-dependent model can be viewed as a shortcut for the more micro-founded state-dependent model when we consider a relatively stable economy. The hazard function of the deviation from the optimum then largely coincides with the hazard function of time-since-last-adjustment: the longer a price is fixed, the more likely it deviates significantly from the optimum, and hence its probability of being adjusted rises. Since the annual inflation rates in my data set stay under 2% for most of the sample period, except for the 1970s, it is arguable that the time elapsed since the last adjustment is a good proxy for the deviation from the optimum; therefore the increasing part of the hazard function gives the pricing decision a state-dependent aspect.
This paper is broadly related to progress in developing empirical models of sticky prices based on the New Keynesian framework. The early empirical models of sticky prices were based solely on the NKPC under the Calvo pricing assumption (see, e.g., Gali and Gertler, 1999, Sbordone, 2002
and Linde, 2005). These authors estimated the NKPC with GMM and found a considerable degree of price rigidity in the aggregate data: the empirical price reset hazard rate is around 20% per quarter for the U.S. and 10% for Europe. These results, however, are at odds with the increasingly available micro evidence in two ways. First, recent micro studies generally conclude that the average frequency of price adjustment at the firm level is not only higher, but also differs substantially across sectors of the economy[7]. Second, the Calvo assumption implies a constant hazard function, meaning that the probability of adjusting prices is independent of the length of time since the last price revision, and the flat hazard function has been largely rejected by empirical evidence from micro-level data (see, e.g., Cecchetti, 1986, Campbell and Eden, 2005 and Nakamura and Steinsson, 2008). Given these discrepancies between the macro and micro evidence, empirical models allowing for more flexible price durations or hazard functions have become popular in the recent literature. Jadresic (1999) presented a staggered pricing model featuring a flexible distribution over price durations and used a VAR approach to demonstrate that the dynamic behavior of aggregate data on inflation and other macroeconomic variables provides information about the disaggregated price dynamics underlying the data. More recently, Sheedy (2007) constructed a generalized Calvo model and parameterized the hazard function in such a way that the resulting NKPC implies intrinsic inflation persistence when the hazard function is upward sloping. Based on this hazard function specification, he estimated the NKPC using GMM and found evidence of an upward-sloping hazard function. Coenen et al. (2007) developed a staggered nominal contracts model with both random and fixed durations, and estimated the generalized NKPC with an indirect inference method. Their results showed that price rigidity is characterized by a very high degree of real rigidity, as opposed to modest nominal rigidity, with an average duration of about 2-3 quarters. Carvalho and Dam (2008) estimated a semi-structural multi-price-duration model with the Bayesian approach, and found that allowing for prices to last longer than 4 quarters is crucial to avoid underestimating the relative importance of nominal rigidity.
The remainder of the paper is organized as follows. In section 2, I present the model with generalized time-dependent pricing and derive the New Keynesian Phillips curve. Section 3 introduces the empirical method and the data I use to estimate the model; at the end of that section, the results regarding the hazard function are presented and discussed. Section 4 contains some concluding remarks.
[7] See, e.g., Bils and Klenow (2004), Alvarez et al. (2006), Midrigan (2007) and Nakamura and Steinsson (2008), among others.
2 The Model
In this section, I present a DSGE model of sticky prices due to nominal rigidity. I introduce nominal rigidity by means of a general form of the hazard function[8]. A hazard function of price setting is defined as the probability of price adjustment conditional on the spell of time elapsed since the last price change. In this model, the hazard function is a discrete function taking values between zero and one on its time domain. Many well-known models of price setting in the literature can be shown to imply hazard functions of one form or another. For example, the most prominent pricing assumption, that of Calvo (1983), implies a constant hazard function over an infinite horizon.
2.1 Representative Household
A representative, infinitely-lived household derives utility from the composite consumption good C_t and its labor supply L_t, and it maximizes a discounted sum of utility of the form:

$$\max_{\{C_t, L_t, B_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t \left( \frac{C_t^{1-\sigma}}{1-\sigma} - \eta_H \frac{L_t^{1+\phi}}{1+\phi} \right).$$

Here C_t denotes an index of the household's consumption of each of the individual goods C_t(i), following a constant-elasticity-of-substitution aggregator (Dixit and Stiglitz, 1977):

$$C_t \equiv \left( \int_0^1 C_t(i)^{\frac{\varepsilon-1}{\varepsilon}} \, di \right)^{\frac{\varepsilon}{\varepsilon-1}}, \qquad (1)$$

where ε > 1, and it follows that the corresponding cost-minimizing demand for C_t(i) and the welfare-based price index P_t are given by

$$C_t(i) = \left( \frac{P_t(i)}{P_t} \right)^{-\varepsilon} C_t \qquad (2)$$

$$P_t = \left( \int_0^1 P_t(i)^{1-\varepsilon} \, di \right)^{\frac{1}{1-\varepsilon}}. \qquad (3)$$

For simplicity, I assume that households supply homogeneous labor units (L_t) in an economy-wide competitive labor market.
The flow budget constraint of the household at the beginning of period t is

$$P_t C_t + \frac{B_t}{R_t} \le W_t L_t + B_{t-1} + \int_0^1 \Pi_t(i) \, di, \qquad (4)$$

where B_t is a one-period nominal bond and R_t denotes the gross nominal return on the bond. Π_t(i) represents the nominal profits of a firm that sells good i. I assume that each household owns an equal share of all firms. Finally, this sequence of period budget constraints is supplemented with a transversality condition of the form $\lim_{T \to \infty} E_t \left[ B_T / \prod_{s=1}^{T} R_s \right] \ge 0$.
[8] In the theoretical literature, the general time-dependent pricing model was first outlined in Wolman (1999), who studied some simple examples and found that inflation dynamics are sensitive to different pricing rules. Similar models have also been studied by Mash (2004) and Yao (2009).
The solution to the household's optimization problem can be expressed in two first-order necessary conditions. First, optimal labor supply is related to the real wage:

$$\eta_H L_t^{\phi} C_t^{\sigma} = \frac{W_t}{P_t}. \qquad (5)$$

Second, the Euler equation gives the relationship between the optimal consumption path and asset prices:

$$1 = E_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right)^{\sigma} \frac{R_t P_t}{P_{t+1}} \right]. \qquad (6)$$

2.2 Firms in the Economy

2.2.1 Real Marginal Cost
The production side of the economy is composed of a continuum of monopolistically competitive firms, each producing one variety of product i by using labor. Each firm maximizes real profits subject to the production function

$$Y_t(i) = Z_t L_t(i)^{1-a}, \qquad (7)$$

where Z_t denotes an aggregate productivity shock. Log deviations of the shock, ẑ_t, follow an exogenous AR(1) process ẑ_t = ρ_z ẑ_{t-1} + ε_{z,t}, where ε_{z,t} is white noise and ρ_z ∈ [0, 1). L_t(i) is the demand for labor by firm i. The parameter a measures the degree of decreasing returns to scale of the production technology.

Following equation (2), demand for intermediate goods is given by

$$Y_t(i) = \left( \frac{P_t(i)}{P_t} \right)^{-\varepsilon} Y_t. \qquad (8)$$

In each period, firms choose the optimal demand for labor inputs to maximize their real profits given the nominal wage, market demand (8) and the production technology (7):

$$\max_{L_t(i)} \; \Pi_t(i) = \frac{P_t(i)}{P_t} Y_t(i) - \frac{W_t}{P_t} L_t(i). \qquad (9)$$

Real marginal cost can be derived from this maximization problem in the following form:

$$mc_t = \frac{W_t / P_t}{(1-a) Z_t}.$$

Furthermore, using the production function (7), the output demand equation (8), the labor supply condition (5) and the fact that in equilibrium C_t = Y_t, I can express real marginal cost in terms of aggregate output and the technology shock only:

$$mc_t = Y_t^{\sigma+\phi} \, Z_t^{-(1+\phi)}. \qquad (10)$$
2.2.2 Pricing Decisions under Nominal Rigidity

In this section, I introduce a general form of nominal rigidity, which is characterized by a set of hazard rates depending on the spell of time since the last price adjustment. I assume that monopolistically competitive firms cannot adjust their price whenever they want. Instead, opportunities for re-optimizing prices are dictated by the hazard rates h_j, where j denotes the time since the last adjustment and j ∈ {0, ..., J}. J is the maximum number of periods for which a firm's price can be fixed.
Dynamics of the Price-duration Distribution. In the economy, firms' prices are heterogeneous with respect to the time since their last price adjustment. Table 1 summarizes the key notation concerning the dynamics of the price-duration distribution.

Duration j | Hazard rate h_j | Non-adj. rate α_j | Survival rate S_j              | Distribution θ(j)
0          | h_0 = 0         | α_0 = 1           | S_0 = 1                        | θ(0)
1          | h_1             | α_1 = 1 - h_1     | S_1 = α_1                      | θ(1)
...        | ...             | ...               | ...                            | ...
j          | h_j             | α_j = 1 - h_j     | S_j = Π_{i=0}^{j} α_i          | θ(j)
...        | ...             | ...               | ...                            | ...
J          | h_J = 1         | α_J = 0           | S_J = 0                        | θ(J)

Table 1: Notation for the dynamics of the price-duration distribution.
Using the notation defined in Table 1, and denoting the distribution of price durations at the beginning of each period by θ_t = {θ_t(0), θ_t(1), ..., θ_t(J)}, we can derive the ex-post distribution of firms after price adjustments, θ̃_t, as

$$\tilde{\theta}_t(j) = \begin{cases} \sum_{i=1}^{J} h_i \, \theta_t(i), & \text{when } j = 0 \\[4pt] \alpha_j \, \theta_t(j), & \text{when } j = 1, \ldots, J. \end{cases} \qquad (11)$$
Intuitively, those firms reoptimizing their prices in period t are labeled with 'Duration 0', and the proportion of those firms is given by the hazard rates of all duration groups multiplied by their corresponding densities. The firms left in each duration group are the firms that do not adjust their prices. When period t is over, this ex-post distribution θ̃_t becomes the ex-ante distribution for the new period, θ_{t+1}: all price duration groups move to the next one, because all prices age by one period.

As long as the hazard rates lie between zero and one, the dynamics of the price-duration distribution can be viewed as a Markov process with an invariant distribution θ, which is obtained by solving θ_t(j) = θ_{t+1}(j). This yields the stationary price-duration distribution θ(j):

$$\theta(j) = \frac{S_j}{\sum_{j=0}^{J-1} S_j}, \quad \text{for } j = 0, 1, \ldots, J-1. \qquad (12)$$
Here, I give a simple example. When J = 3, the stationary price-duration distribution is

$$\theta = \left\{ \frac{1}{1+\alpha_1+\alpha_1\alpha_2}, \; \frac{\alpha_1}{1+\alpha_1+\alpha_1\alpha_2}, \; \frac{\alpha_1\alpha_2}{1+\alpha_1+\alpha_1\alpha_2} \right\}.$$
Let us assume that the economy converges to this invariant distribution fairly quickly, so that, regardless of the initial price-duration distribution, I only consider the economy with the invariant distribution of price durations.
The Optimal Pricing Decision under the General Form of Nominal Rigidity. Given the general form of nominal rigidity introduced above, the only heterogeneity among firms is the time j at which they last reset their prices. Firms in price duration group j share the same probability of adjusting their prices, h_j, and the distribution of firms across durations is given by θ(j).

In a given period when a firm is allowed to reoptimize its price, the optimal price chosen should reflect the possibility that it will not be re-adjusted in the near future. Consequently, adjusting firms choose optimal prices that maximize the discounted sum of real profits over the time horizon in which the new price is expected to be fixed. The probability that the new price will remain fixed for at least j periods is given by the survival function S_j defined in Table 1. Here, I set up the maximization problem of an adjuster as follows:

$$\max_{P_t^*} \; E_t \sum_{j=0}^{J-1} S_j \, Q_{t,t+j} \left( Y_{t+j|t}^d \, \frac{P_t^*}{P_{t+j}} - \frac{TC_{t+j}}{P_{t+j}} \right),$$

where E_t denotes the conditional expectation based on the information set in period t, Q_{t,t+j} is the stochastic discount factor appropriate for discounting real profits from t to t+j, and TC_{t+j} is the nominal total cost. An adjusting firm maximizes these profits subject to the demand for its intermediate good in period t+j, given that the firm reset its price in period t:

$$Y_{t+j|t}^d = \left( \frac{P_t^*}{P_{t+j}} \right)^{-\varepsilon} Y_{t+j}.$$
This yields the following first-order necessary condition for the optimal price:

$$P_t^* = \frac{\varepsilon}{\varepsilon - 1} \; \frac{\sum_{j=0}^{J-1} S_j \, E_t \left[ Q_{t,t+j} Y_{t+j} P_{t+j}^{\varepsilon-1} MC_{t+j} \right]}{\sum_{j=0}^{J-1} S_j \, E_t \left[ Q_{t,t+j} Y_{t+j} P_{t+j}^{\varepsilon-1} \right]}, \qquad (13)$$

where MC_t denotes nominal marginal cost. The optimal price is equal to the markup multiplied by a weighted sum of future marginal costs, whose weights depend on the survival rates. In the Calvo case, where S_j = α^j, this equation reduces to the Calvo optimal pricing condition.
Finally, given the stationary distribution θ(j), the aggregate price can be written as a distributed sum of all optimal prices. Defining the optimal price set j periods ago as P*_{t-j} and following the aggregate price index from equation (3), the aggregate price is obtained as

$$P_t = \left( \sum_{j=0}^{J-1} \theta(j) \, (P_{t-j}^*)^{1-\varepsilon} \right)^{\frac{1}{1-\varepsilon}}. \qquad (14)$$
2.3 New Keynesian Phillips Curve

In this section, I derive the New Keynesian Phillips curve for this generalized model. To do that, I first log-linearize equation (13) around the constant-price steady state. The log-linearized optimal price equation is

$$\hat{p}_t^* = E_t \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \left( \widehat{mc}_{t+j} + \hat{p}_{t+j} \right) \right], \qquad (15)$$

where

$$\Phi \equiv \sum_{j=0}^{J-1} \beta^j S_j \quad \text{and} \quad \widehat{mc}_t = (\sigma+\phi) \hat{y}_t - (1+\phi) \hat{z}_t.$$

In a similar fashion, I derive the log deviation of the aggregate price by log-linearizing equation (14):

$$\hat{p}_t = \sum_{k=0}^{J-1} \theta(k) \, \hat{p}_{t-k}^*. \qquad (16)$$

After some algebraic manipulation of equations (15) and (16), I obtain the New Keynesian Phillips curve as follows[9]:
$$\hat{\pi}_t = \sum_{k=0}^{J-1} \frac{\theta(k)}{1-\theta(0)} \, E_{t-k} \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \widehat{mc}_{t+j-k} + \sum_{i=1}^{J-1} \sum_{j=i}^{J-1} \frac{\beta^j S_j}{\Phi} \hat{\pi}_{t+i-k} \right] - \sum_{k=2}^{J-1} \Theta(k) \, \hat{\pi}_{t-k+1}, \qquad (17)$$

where

$$\Theta(k) = \frac{\sum_{j=k}^{J-1} S_j}{\sum_{j=1}^{J-1} S_j} \quad \text{and} \quad \Phi = \sum_{j=0}^{J-1} \beta^j S_j.$$

[9] The detailed derivation of the NKPC can be found in the technical Appendix A.
At first glance, this Phillips curve is quite different from the one in the Calvo model. It involves not only lagged inflation but also lagged expectations that were built into pricing decisions in the past. All coefficients in the NKPC are derived from structural parameters, which are either hazard function parameters or preference parameters. When J = 3, for example, the NKPC takes the following form:
$$\hat{\pi}_t = \frac{1}{\alpha_1 + \alpha_1\alpha_2} \, E_t \left[ X_t \right] + \frac{1}{1+\alpha_2} \, E_{t-1} \left[ X_{t-1} \right] + \frac{\alpha_2}{1+\alpha_2} \, E_{t-2} \left[ X_{t-2} \right] - \frac{\alpha_2}{1+\alpha_2} \, \hat{\pi}_{t-1}, \qquad (18)$$

with

$$X_{t-k} \equiv \frac{1}{\Phi} \left[ \widehat{mc}_{t-k} + \beta\alpha_1 \widehat{mc}_{t+1-k} + \beta^2\alpha_1\alpha_2 \widehat{mc}_{t+2-k} + \left( \beta\alpha_1 + \beta^2\alpha_1\alpha_2 \right) \hat{\pi}_{t+1-k} + \beta^2\alpha_1\alpha_2 \, \hat{\pi}_{t+2-k} \right],$$

where

$$\Phi \equiv 1 + \beta\alpha_1 + \beta^2\alpha_1\alpha_2.$$
As we can observe, all coefficients in equation (18) are expressed in terms of the non-adjustment rates (α_j = 1 - h_j) and the subjective discount factor β; the coefficients of the generalized NKPC (GNKPC hereafter) thereby link the dynamic effects of reset prices on inflation to the hazard function. As a result, information about the price reset hazard rates can be extracted from aggregate data, such as inflation, through the dynamic structure of the Phillips curve.

The economic reason why those lagged dynamic components appear in the GNKPC but are missing in the Calvo model is that they exert two opposing effects on current inflation, through p̂_t and p̂_{t-1} respectively. The magnitudes of these effects depend on the price reset hazard function. In the general case, the effect of past optimal prices on the current aggregate price p̂_t differs from their effect on the lagged aggregate price p̂_{t-1}. As a result, lagged expectations and lagged inflation appear in the generalized NKPC. Conversely, in the Calvo case, the constant hazard function leads the relevant reset prices to exert exactly the same impact on both p̂_t and p̂_{t-1}, and thereby causes them to cancel out.
2.4 The Final System of Equations

The general equilibrium system consists of the equilibrium conditions derived from the optimization problems of economic agents, market clearing conditions and a monetary policy equation. Market clearing requires that real prices clear the factor and goods markets, while monetary policy determines the nominal value of the real economy. I choose a Taylor rule to close the model:

$$I_t = I_{t-1}^{\rho_i} \left[ \left( \frac{P_t}{P_{t-1}} \right)^{\eta_\pi} \left( \frac{Y_t}{Y_{t-1}} \right)^{\eta_y} \right]^{1-\rho_i} e^{q_t}. \qquad (19)$$

Equation (19) is motivated by the interest-rate-smoothing specification of the Taylor rule[10], which specifies a policy rule that the central bank uses to determine the nominal interest rate in the economy, where η_π and η_y denote the short-run responses of the monetary authority to log deviations of inflation and the output growth rate, and q_t is a sequence of i.i.d. white noise with zero mean and finite variance σ_q².

[10] See the empirical work by Clarida et al. (2000).
After log-linearizing the equilibrium conditions around the flexible-price steady state, the log-linearized general equilibrium system consists of the NKPC, equation (20), real marginal cost, equation (21), the household's intertemporal optimization condition, equation (22), the Taylor rule, equation (23), and the exogenous stochastic processes. In the IS curve, I add an exogenous shock d_t to represent real aggregate demand disturbances[11]:

$$\hat{\pi}_t = \sum_{k=0}^{J-1} W_1(k) \, E_{t-k} \left[ \sum_{j=0}^{J-1} W_2(j) \, \widehat{mc}_{t+j-k} + \sum_{i=1}^{J-1} W_3(i) \, \hat{\pi}_{t+i-k} \right] - \sum_{k=2}^{J-1} W_4(k) \, \hat{\pi}_{t-k+1} \qquad (20)$$

$$\widehat{mc}_t = (\sigma+\phi) \, \hat{y}_t - (1+\phi) \, \hat{z}_t \qquad (21)$$

$$E_t[\hat{y}_{t+1}] = \hat{y}_t + \frac{1}{\sigma} \left( \hat{\imath}_t - E_t[\hat{\pi}_{t+1}] \right) + d_t \qquad (22)$$

$$\hat{\imath}_t = (1-\rho_i) \left[ \eta_\pi \hat{\pi}_t + \eta_y (\hat{y}_t - \hat{y}_{t-1}) \right] + \rho_i \, \hat{\imath}_{t-1} + q_t \qquad (23)$$

$$\hat{z}_t = \rho_z \hat{z}_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma_z^2) \qquad (24)$$

$$d_t = \rho_d d_{t-1} + \xi_t, \qquad \xi_t \sim N(0, \sigma_d^2) \qquad (25)$$

$$q_t \sim N(0, \sigma_q^2), \qquad (26)$$

where the weights (W_1, W_2, W_3, W_4) in the NKPC are defined in equation (17): W_1(k) = θ(k)/(1 - θ(0)), W_2(j) = β^j S_j/Φ, W_3(i) = Σ_{j=i}^{J-1} β^j S_j/Φ, and W_4(k) = Θ(k). I collect the structural parameters into a vector θ = (σ, φ, η_π, η_y, ρ_i, {α_j}, ρ_z, ρ_d, σ_z², σ_d², σ_q²). In the empirical study, I am interested in estimating values for those structural parameters using the Bayesian approach.

[11] Introducing this shock is not necessary for the theoretical model but, in the Bayesian estimation, due to the singularity problem I need three shocks for three observables.
3 Estimation

In this section, I solve and estimate the New Keynesian model described above using the Bayesian approach. The full-information Bayesian method has some appealing features compared to methods employed in the literature. As pointed out by An and Schorfheide (2007), this method is system-based, meaning that it fits the DSGE model to a vector of aggregate time series. Through a full characterization of the data-generating process, it provides a formal framework for evaluating misspecified models on the basis of the data density. In addition, the Bayesian approach provides a consistent method for dealing with rational expectations, one of the central elements in most DSGE models.
3.1 Bayesian Inference

I apply the Bayesian approach, set forth by DeJong et al. (2000), Schorfheide (2000) and Fernandez-Villaverde and Rubio-Ramirez (2004) among others, to estimate the structural parameters of the DSGE model. Bayesian estimation combines information gained from maximizing the likelihood of the data with additional information about the parameters (the prior distribution). The main steps of this approach are as follows.

First, the linear rational expectations model is solved using standard numerical methods (see, e.g., Uhlig, 1998 and Sims, 2002) to obtain the reduced-form equations in its predetermined and exogenous variables.
For example, the linearized DSGE model can be written as a rational expectations system of the form

$$\Gamma_0(\theta) S_t = \Gamma_1(\theta) S_{t-1} + \Psi(\theta) \epsilon_t + \Pi(\theta) \omega_t. \qquad (27)$$

Here, S_t is a vector of all endogenous variables in the model, such as ŷ_t, π̂_t, î_t, etc. The vector ε_t stacks the innovations of the exogenous processes, and ω_t is composed of one-period-ahead rational expectations forecast errors. The entries of the Γ(θ) matrices are functions of the structural parameters of the model. The solution to (27) can be expressed as

$$S_t = \Lambda_1(\theta) S_{t-1} + \Lambda_\epsilon(\theta) \epsilon_t. \qquad (28)$$

The second step involves writing the model in state-space form, that is, augmenting the solution equation (28) with a measurement equation that relates the theoretical variables to a vector of observables Y_obs_t:

$$Y\_obs_t = A(\theta) + B S_t + C V_t, \qquad (29)$$

where A(θ) is a vector of constants capturing the mean of S_t, and V_t is a set of shocks to the observables, including measurement errors.
Third, assuming that all shocks in the state-space form are normally distributed, we can use the Kalman filter (Sargent, 1989) to evaluate the likelihood function of the observables, L(θ|Y_obs^T). In contrast to other maximum likelihood methods, the Bayesian approach combines the likelihood function with the prior density p(θ), which includes all extra information about the parameters of interest. The posterior density p(θ|Y_obs^T) is obtained by applying Bayes' theorem:

$$p(\theta \mid Y\_obs^T) \; \propto \; L(\theta \mid Y\_obs^T) \; p(\theta). \qquad (30)$$

In the last step, θ is estimated by maximizing the likelihood function L(θ|Y_obs^T) reweighted by the prior density p(θ): numerical optimization methods are used to find the posterior mode of θ and the inverse Hessian matrix. Finally, the posterior distribution is generated using a random-walk Metropolis sampling algorithm[12].
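To make the last step concrete, here is a minimal sketch of a random-walk Metropolis sampler (in Python; the paper itself uses the MATLAB-based package DYNARE, so this is illustrative only). The function `log_posterior`, the proposal scale `c` and the initialization at the posterior mode are assumptions of the sketch, not objects from the paper.

```python
import numpy as np

def rw_metropolis(log_posterior, theta_mode, hessian_inv,
                  n_draws=500_000, c=0.3, seed=0):
    """Random-walk Metropolis: Gaussian proposals around the current draw,
    scaled by the inverse Hessian evaluated at the posterior mode."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(c**2 * hessian_inv)   # proposal covariance factor
    draws = np.empty((n_draws, theta_mode.size))
    theta, logp = theta_mode.copy(), log_posterior(theta_mode)
    accepted = 0
    for s in range(n_draws):
        prop = theta + chol @ rng.standard_normal(theta.size)
        logp_prop = log_posterior(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
            accepted += 1
        draws[s] = theta
    return draws, accepted / n_draws
```

Tuning `c` so that the acceptance rate lands in the usual 0.2-0.4 range mirrors the acceptance rate of about 0.31 reported below.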
3.2 Data and Priors

In line with the empirical framework and the research questions addressed in this paper, I choose the following three observables: the growth rate of real GDP per capita, the annualized inflation rate calculated from the consumer price index (CPI), and the nominal interest rate, all for the U.S. over the period 1955.Q1-2008.Q4[13]. The output growth rate and inflation are detrended with the Hodrick-Prescott filter. Based on the definition of the model's variables and the observables, the measurement equations are defined as follows:

$$y\_obs_t = \hat{y}_t - \hat{y}_{t-1}, \qquad \pi\_obs_t = \hat{\pi}_t, \qquad i\_obs_t = \hat{\imath}_t.$$

[12] I implement the Bayesian estimation procedure discussed above using the MATLAB-based package DYNARE, which is available at http://www.cepremap.cnrs.fr/dynare/
[13] Details on the construction of the data set are provided in Appendix B.
The priors I choose are in line with the mainstream values used in the Bayesian literature (e.g. Smets and Wouters, 2007 and Lubik and Schorfheide, 2005). They are centered around the average values of micro and macro estimates, with fairly loose standard deviations. I fix two parameters in advance. The discount factor β is set to 0.99, implying an annual steady-state real interest rate of 4%. The elasticity of substitution between intermediate goods ε is set to 10, implying an average markup of around 11%. Both values are common in the literature.
The key structural parameters in this model are the hazard rates, or equivalently the non-adjustment rates with respect to time-since-last-adjustment, α_j. I choose the priors for these parameters based on micro evidence on the mean frequency of price adjustment reported by Bils and Klenow (2004). They find that U.S. sectoral prices on average last only two quarters, which implies a hazard rate of 0.5. Because the main goal of this study is to find out which shape of the hazard function best fits the macro data, I set all non-adjustment rates to the same prior mean of 0.5 and a very loose standard deviation of 0.28. This prior implies a 95% inter-quantile range that covers almost the whole interval between 0 and 1 quite evenly, except for the two extremes. In addition, identical priors for all α_j reflect the view of a pricing model with a constant-hazard assumption. By choosing a large standard deviation for the prior, I allow the data to speak quite freely about the shape of the hazard function over the time horizon, so that I can evaluate theoretical models from the point of view of the hazard function.
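A mean/standard-deviation pair pins down the two shape parameters of a beta distribution, which makes it easy to verify how loose this prior is. The following sketch (Python with SciPy; not from the paper) performs the conversion for the beta prior with mean 0.5 and standard deviation 0.28.

```python
from scipy.stats import beta

def beta_shapes(mean, sd):
    """Convert a (mean, sd) parameterization into beta shape parameters."""
    var = sd**2
    nu = mean * (1 - mean) / var - 1          # "prior sample size" analogue
    return mean * nu, (1 - mean) * nu         # (a, b)

a, b = beta_shapes(0.5, 0.28)                 # a = b ~ 1.09: nearly flat on (0, 1)
lo, hi = beta.ppf([0.025, 0.975], a, b)
print(a, b, (lo, hi))   # the 95% interval covers most of the unit interval
```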
Moving to the other structural parameters, the prior for the relative risk aversion σ is set to follow a gamma distribution with mean 1.5 and a standard error of 0.375. This prior covers a wide range of values from the experimental and macro literature. The inverse elasticity of labor supply φ is difficult to calibrate because there is a wide range of evidence in the literature. I choose the prior for this parameter to be normally distributed around a mean of 1.5, a value commonly estimated in micro labor studies (see, e.g., Blundell and Macurdy, 1999), with a large standard error of 1.0. In the sensitivity analysis, I also check the robustness of my results to other values of the prior mean.
Proceeding to the parameters of the Taylor rule, the priors for η_π and η_y are centered at values commonly associated with a Taylor rule. Following Smets and Wouters (2007), I set the prior for the response coefficient to deviations of annualized inflation, η_π, to be centered at 1.5 with a standard error of 0.1, and the prior for the response coefficient to the output growth rate, η_y, to be centered at 0.5 with a standard error of 0.1. The rule also allows for interest rate smoothing, with a prior mean of 0.5 and a standard deviation of 0.1 for ρ_i.

Finally, I assume that the standard errors of the innovations follow an inverse-gamma distribution with a mean of 0.1 and two degrees of freedom. The persistence of the AR(1) process of the productivity shock is beta-distributed with a mean of 0.8 and a standard deviation of 0.1, and the persistence of the AR(1) process of the aggregate demand shock is beta-distributed with a mean of 0.5 and a standard deviation of 0.1.
3.3 Results

Applying the methodology described above, I proceed to gauge the degree of nominal rigidity in terms of the estimated structural parameters based on the prior distributions discussed above. The posterior modes of the parameters are calculated by maximizing the log likelihood function of the data, and the posterior distributions are then simulated using the Metropolis-Hastings algorithm. The results presented here are based on 500,000 out of 1 million total draws, and the average acceptance rate is around 0.31. The draws achieve convergence and relative stability in all measures of the parameter moments. The posterior mode, mean and 5%, 95% quantiles of the 17 estimated parameters are reported in Table 2, and the prior-posterior distributions are plotted in the figure appendix. The data provide strong information about most of the structural parameters, except for the inverse of the elasticity of labor supply and one of the Taylor rule parameters. For those cases, I conduct various sensitivity tests to check the robustness of the hazard function estimates to changes in those poorly identified structural parameters.
Parameter | Prior: Dist.  Mean  S.D.  | Posterior (M-H 500,000): Mode  Mean   5%      95%
σ         | gamma        1.5   0.375  |                          4.311  4.149  3.379   4.994
φ         | normal       1.5   1.0    |                          1.459  1.282  -0.072  2.553
η_π       | normal       1.5   0.1    |                          1.914  1.938  1.796   2.083
η_y       | normal       0.5   0.1    |                          0.745  0.740  0.579   0.899
ρ_i       | beta         0.5   0.1    |                          0.634  0.622  0.572   0.674
α_1       | beta         0.5   0.28   |                          0.403  0.454  0.334   0.571
α_2       | beta         0.5   0.28   |                          0.941  0.855  0.717   0.998
α_3       | beta         0.5   0.28   |                          0.991  0.931  0.848   0.999
α_4       | beta         0.5   0.28   |                          0.663  0.676  0.431   0.962
α_5       | beta         0.5   0.28   |                          0.978  0.833  0.648   0.995
α_6       | beta         0.5   0.28   |                          0.975  0.801  0.620   0.983
α_7       | beta         0.5   0.28   |                          0.641  0.590  0.247   0.994
ρ_z       | beta         0.8   0.1    |                          0.992  0.988  0.978   0.998
ρ_d       | beta         0.5   0.1    |                          0.850  0.849  0.811   0.887
σ_z       | invgam       0.1   2.0    |                          1.663  1.859  1.142   2.602
σ_m       | invgam       0.1   2.0    |                          1.176  1.211  1.069   1.347
σ_d       | invgam       0.1   2.0    |                          0.721  0.735  0.591   0.873

Table 2: Posterior Distributions of Parameters (U.S. 55-08)
The estimate of relative risk aversion is high (4.149) but well in line with benchmark values in macroeconomic studies, while the inverse of the elasticity of labor supply is not well identified in this model: its prior and posterior distributions are very close to each other, indicating that the data provide little information on this parameter under the current identification scheme. The estimated monetary policy reaction function is consistent with the common view of the Taylor rule. Monetary policy responds strongly to deviations of inflation (1.938), but not as much to the output growth rate (0.74). There is a considerable degree of interest rate smoothing, as the posterior mean of ρ_i is around 0.62. The response of the nominal interest rate to the output growth rate, however, is also not well identified.
I turn now to the nominal rigidity that is represented by the estimates of the non-adjustment rates α_j. Contrary to the prior distributions, which are motivated by the Calvo model where all hazard rates are constant over time, the estimates reveal that the hazard function changes shape over time, and the data strongly advocate a non-constant hazard function. The mean frequency of price adjustment is 32% per quarter, which implies a mean price duration of 9.2 months. This result is consistent with what Nakamura and Steinsson (2008) find using micro data. More importantly, price reset hazards vary substantially around this mean, depending on how long the price has been fixed. I will discuss the hazard function in more detail after the sensitivity analysis.
3.4 Robustness Tests

In this section, I test the robustness of the estimates of the structural parameters, especially of the hazard function. Tables 3 and 4 report the posterior modes under alternative priors, different model setups and data detrended by different methods. In Table 3, I summarize results using the Hodrick-Prescott-filter-detrended data. I conduct the sensitivity analysis in the following steps. First, I check the prior sensitivity by altering the prior mean of φ from 1.5 to 0.5. I choose to check this parameter because the estimation results show that the inverse of the labor supply elasticity is poorly identified, and there is, in addition, no consensus about the calibration value for this parameter in the literature. The first three columns of the table compare the results from three alternative priors: 0.5 is a typical value motivated in the macro literature, while φ = 1.5 is commonly estimated in micro labor studies (see, e.g., Blundell and Macurdy, 1999). I also check the value φ = 1, which can often be found in the RBC literature. I find that changing the prior for φ mainly affects the posterior mode of φ itself, leaving the estimates of the other preference and monetary policy parameters qualitatively unchanged. These results manifest an observational equivalence problem commonly found in estimating DSGE models: the log likelihood function is mostly flat over the choices of priors for φ, so the posterior estimates are mainly driven by the prior instead of the data. As for the non-adjustment rates, the choice of the prior for the labor supply elasticity affects their magnitudes at all frequencies. Interestingly, making labor supply more elastic, i.e. decreasing the value of φ, leads to more frequent estimated price adjustments: the non-adjustment rates are significantly lower at all frequencies except the 7th. Despite these changes in magnitude, the general pattern of the hazard function remains the same.
In the next two columns, I change the model setup. Adopting a standard Taylor rule with the output gap instead of the output growth rate results in a large change in the estimated η_y, which becomes very small, indicating that the central bank reacts less to the output gap in its monetary policy decision making, because it is unobservable in the economy. Estimates of the non-adjustment rates, however, are almost identical to the benchmark case. I also change the model setup by fixing the hazard rates to the value of 0.5, implying an average duration of 2 quarters[14]. As seen in the last column, fixing the hazard rates has implications for the estimates of the other structural parameters. For example, it leads to a much lower estimate of the intertemporal elasticity of substitution and of the inverse of the labor supply elasticity. In addition, in terms of log marginal likelihood, both the output-gap Taylor rule model and the fixed-hazard setup are clearly less favored by the data. In the last row of each table, I report the log marginal likelihood of the data for each model. It shows that changing the prior for φ only marginally affects the data density, but the data give strong support to the flexible-hazard model as opposed to the fixed-hazard model and the output-gap Taylor rule. The Bayes factor in favor of the flexible-hazard model is approximately of the order of 10^5.

[14] I call it the pseudo-Calvo model because, in this case, I truncate the hazard function at the 7th quarter. As a result, it is not exactly equivalent to the Calvo model, which implies an infinite horizon for the hazard function. This pseudo-Calvo model can be viewed as an approximation of the real Calvo model. I also estimated the pseudo-Calvo model with longer horizons, but it does not change the main conclusion.
Parameter            | φ prior 1.5 | φ prior 1 | φ prior 0.5 | Taylor rule (gap) | pseudo-Calvo
σ                    | 4.311       | 4.012     | 4.097       | 4.790             | 2.949
φ                    | 1.46        | 1.063     | 0.618       | 1.687             | 0.315
η_π                  | 1.914       | 1.864     | 1.859       | 1.956             | 2.052
η_y                  | 0.745       | 0.794     | 0.793       | 0.111             | 0.704
α_1                  | 0.403       | 0.050     | 0.051       | 0.469             | 0.5 (fixed)
α_2                  | 0.941       | 0.772     | 0.781       | 0.918             | 0.5
α_3                  | 0.991       | 0.968     | 0.968       | 0.992             | 0.5
α_4                  | 0.663       | 0.408     | 0.407       | 0.624             | 0.5
α_5                  | 0.980       | 0.914     | 0.916       | 0.973             | 0.5
α_6                  | 0.980       | 0.926     | 0.926       | 0.978             | 0.5
α_7                  | 0.641       | 0.737     | 0.741       | 0.586             | 0.5
Log marg. likeli.    | 907.32      | 914.91    | 912.63      | 935.57            | 918.04

Table 3: Sensitivity Check for H-P Detrended Data (posterior modes)
Parameter            | φ prior 1.5 | φ prior 1 | φ prior 0.5 | Taylor rule (gap) | pseudo-Calvo
σ                    | 4.352       | 4.357     | 4.364       | 5.226             | 3.671
φ                    | 2.10        | 1.72      | 1.373       | 2.305             | 1.942
η_π                  | 1.912       | 1.911     | 1.911       | 1.908             | 1.952
η_y                  | 0.669       | 0.669     | 0.670       | 0.258             | 0.662
α_1                  | 0.504       | 0.493     | 0.482       | 0.563             | 0.5 (fixed)
α_2                  | 0.716       | 0.710     | 0.705       | 0.789             | 0.5
α_3                  | 0.980       | 0.979     | 0.979       | 0.969             | 0.5
α_4                  | 0.143       | 0.124     | 0.106       | 0.397             | 0.5
α_5                  | 0.193       | 0.182     | 0.180       | 0.601             | 0.5
α_6                  | 0.462       | 0.446     | 0.437       | 0.785             | 0.5
α_7                  | 0.329       | 0.359     | 0.379       | 0.221             | 0.5
Log marg. likeli.    | 792.12      | 792.44    | 792.79      | 837.09            | 797.88

Table 4: Sensitivity Check for Linearly Detrended Data (posterior modes)
I conduct the same tests using linearly detrended data, reported in Table 4. All results from the previous exercises are confirmed, but a drawback of using linearly detrended data is that it does not deliver accurate information about the hazard function. As seen in the table, the non-adjustment rates are quite different from those obtained with the HP-detrended data, and those after the 3rd quarter are all statistically insignificant. The reason could be that linear detrending mixes macro dynamics at business cycle frequencies with those from other frequencies, which biases the estimates and also reduces the efficiency of the estimation.
3.5 Aggregate Hazard Function and Implications for Macro Modeling

In this part, I evaluate the new evidence on the aggregate price reset hazard rates obtained from my empirical analysis and discuss its implications for macro modeling of sticky prices.
Figure 1: The Price Reset Hazard Function (U.S. 55-08): posterior mean with 10% and 90% quantiles, by quarter since the last price adjustment.
I plot the estimates of the hazard rates in Figure 1. The hazard rate is high one quarter after the last price adjustment (55%), drops in the next two consecutive quarters to around 10%, and rises again in the 4th quarter. Afterwards, hazard rates are largely increasing with the age of the price. Overall, the hazard function has a U-shape with a spike at the fourth quarter. I also calculate the distribution of price durations from the estimated hazard rates by
using formula (12). It yields that around 34.2% of prices last less than one quarter. From micro data studies, we learn that prices of apparel, unprocessed food, energy and travel are the most frequently adjusted, with median durations of less than three months. Moreover, 12.4% of prices have a mean duration of four quarters; examples are services such as hairdressers or public transportation.
This finding has important implications for the macro modeling of sticky prices. Overall, I find that the new evidence cannot be explained by any single theory of sticky prices. For the first half of the hazard function (from the first to the fourth quarter), it appears that the pricing decision is mainly characterized either by flexible price setting or by a time-dependent aspect (e.g. Taylor, 1980). Survey evidence has shown that many firms conduct yearly price revisions due to costly information. This kind of behavior can also be motivated by the theory of customer markets, which indicates that long-term customer relationships are an important consideration in pricing decisions (see, e.g., Rotemberg, 2005). On the other hand, the upward-sloping part of the hazard function indicates that state-dependent pricing also plays an important role in the pricing decision, the more outdated a price becomes. In fact, this generalized time-dependent model can be viewed as a proxy for the more micro-founded state-dependent model. More micro-founded pricing models, such as Dotsey et al. (1999), show that state-dependent pricing behavior implies an increasing hazard function. If we consider a relatively stable economy, the hazard function of the deviation from the optimum largely coincides with the hazard function of time-since-last-adjustment. The longer a price is fixed, the more likely it deviates significantly from the optimum, and hence its probability of being adjusted rises. Since in my data set the annual inflation rates stay under 2% for most of the sample period, except for the turbulent decade between the early 1970s and early 1980s, it is arguable that the time elapsed since the last adjustment is a good proxy for the deviation from the optimum; therefore the increasing part of the hazard function gives the pricing decision a state-dependent aspect.
To summarize these results, the new evidence on the aggregate hazard function reveals that, for the less sticky prices with durations from one to four quarters, time-dependent pricing plays the major role, while for stickier prices with durations longer than four quarters, state-dependent pricing dominates.
4 Conclusion
In this paper, I document new evidence on the shape of the aggregate hazard function. I construct a DSGE model featuring nominal rigidity that allows for a flexible hazard function of price setting. The generalized NKPC possesses a richer dynamic structure, with which I can infer the shape of the hazard function underlying aggregate dynamics. Identifying the hazard function from aggregate data is a useful exercise because, firstly, estimating hazard rates directly from a DSGE model provides the most consistent way to compare the theoretical concept with the empirical evidence. Secondly, it overcomes some weaknesses of estimates using micro data, such as contamination by idiosyncratic effects and the limited availability of long time series. As a result, this study delivers some useful insights for macroeconomists, which can readily be used to guide macro modeling.

Finally, a caveat should be noted: the identification method relies on scrutinizing the effects of past reset prices on current inflation, and those effects decay over time. Consequently, the information content of aggregate data about hazard rates deteriorates quickly with the length of time since the last price adjustment. After the fourth quarter, the estimated hazard rates become imprecise. One remedy for this problem is to use more volatile monthly data.
A Derivation of the New Keynesian Phillips Curve
Starting from equation (15),

$$\hat{p}_t^* = E_t \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \left( \widehat{mc}_{t+j} + \hat{p}_{t+j} \right) \right] \qquad (31)$$

$$= E_t \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \widehat{mc}_{t+j} \right] + E_t \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \hat{p}_{t+j} \right]. \qquad (32)$$

The last term can be expressed in terms of future rates of inflation. Using p̂_{t+j} = p̂_t + Σ_{i=1}^{j} π̂_{t+i} and collecting terms,

$$\sum_{j=0}^{J-1} \beta^j S_j \, \hat{p}_{t+j} = \left( \sum_{j=0}^{J-1} \beta^j S_j \right) \hat{p}_t + \sum_{i=1}^{J-1} \left( \sum_{j=i}^{J-1} \beta^j S_j \right) \hat{\pi}_{t+i} = \Phi \, \hat{p}_t + \sum_{i=1}^{J-1} \sum_{j=i}^{J-1} \beta^j S_j \, \hat{\pi}_{t+i}.$$

The optimal price can therefore be expressed in terms of inflation rates, real marginal cost and the aggregate price:

$$\hat{p}_t^* = \hat{p}_t + E_t \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \widehat{mc}_{t+j} \right] + E_t \left[ \sum_{i=1}^{J-1} \sum_{j=i}^{J-1} \frac{\beta^j S_j}{\Phi} \hat{\pi}_{t+i} \right]. \qquad (33)$$

Next, I derive the aggregate price equation as the sum of past optimal prices. Lagging equation (33) and substituting it for each p̂*_{t-k} in equation (16) gives

$$\hat{p}_t = \sum_{k=0}^{J-1} \theta(k) \left( \hat{p}_{t-k} + F_{t-k} \right), \qquad (34)$$

where F_t summarizes all current and lagged expectations formed at period t:

$$F_t \equiv E_t \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \widehat{mc}_{t+j} + \sum_{i=1}^{J-1} \sum_{j=i}^{J-1} \frac{\beta^j S_j}{\Phi} \hat{\pi}_{t+i} \right].$$

Finally, we derive the New Keynesian Phillips curve from equation (34). Define Q_t ≡ Σ_{k=0}^{J-1} θ(k) F_{t-k}, so that

$$\hat{p}_t - \sum_{k=0}^{J-1} \theta(k) \, \hat{p}_{t-k} = Q_t.$$

Since Σ_{k=0}^{J-1} θ(k) = 1, the left-hand side telescopes into inflation rates:

$$\hat{p}_t - \sum_{k=0}^{J-1} \theta(k) \, \hat{p}_{t-k} = \sum_{k=1}^{J-1} \theta(k) \left( \hat{p}_t - \hat{p}_{t-k} \right) = \sum_{i=0}^{J-2} \left( \sum_{k=i+1}^{J-1} \theta(k) \right) \hat{\pi}_{t-i}.$$

The coefficient on π̂_t is 1 - θ(0), so collecting terms and dividing by 1 - θ(0),

$$\hat{\pi}_t = \frac{Q_t}{1-\theta(0)} - \sum_{i=1}^{J-2} \frac{\sum_{k=i+1}^{J-1} \theta(k)}{1-\theta(0)} \, \hat{\pi}_{t-i}.$$

Using θ(j) = S_j / Σ_{j=0}^{J-1} S_j and re-indexing the lag sum with k = i + 1, the generalized New Keynesian Phillips curve is

$$\hat{\pi}_t = \sum_{k=0}^{J-1} \frac{\theta(k)}{1-\theta(0)} \, E_{t-k} \left[ \sum_{j=0}^{J-1} \frac{\beta^j S_j}{\Phi} \widehat{mc}_{t+j-k} + \sum_{i=1}^{J-1} \sum_{j=i}^{J-1} \frac{\beta^j S_j}{\Phi} \hat{\pi}_{t+i-k} \right] - \sum_{k=2}^{J-1} \Theta(k) \, \hat{\pi}_{t-k+1}, \qquad (35)$$

where

$$\Theta(k) = \frac{\sum_{j=k}^{J-1} S_j}{\sum_{j=1}^{J-1} S_j} \quad \text{and} \quad \Phi = \sum_{j=0}^{J-1} \beta^j S_j.$$
B Data

The data used in this paper are taken from FRED (Federal Reserve Economic Data), maintained by the Federal Reserve Bank of St. Louis.

Growth rate of real GDP per capita: based on Real Gross Domestic Product (series GDPC1), in billions of chained 2005 dollars, quarterly frequency, seasonally adjusted. To construct per capita GDP, I use the Civilian Noninstitutional Population (series CNP16OV) from the Bureau of Labor Statistics, U.S. Department of Labor; the monthly data are converted to quarterly frequency by arithmetic averaging. Per capita real output growth is defined as 100 [ln(GDP_t / POP_t) - ln(GDP_{t-1} / POP_{t-1})]. Finally, the data are detrended with the Hodrick-Prescott filter.

Inflation rate: calculated from the Consumer Price Index for all urban consumers: all items (series CPIAUCSL), seasonally adjusted. The monthly data are converted to quarterly frequency by arithmetic averaging. The annualized inflation rate is defined as 400 ln(P_t / P_{t-1}). Finally, the data are detrended with the Hodrick-Prescott filter.

Nominal interest rate: the Effective Federal Funds Rate (series FEDFUNDS). The monthly data are converted to quarterly frequency by arithmetic averaging. The data are detrended with the trend inflation calculated using the Hodrick-Prescott filter and then mean adjusted.
C Figure

[Figure: prior and posterior distributions for the data 1955-2008; panels show SE_e, SE_q, SE_d, delta, phi, eta_p, eta_y, alpha_1 through alpha_7, rho, rho_ps and vi.]
References
Alvarez, L. J. (2007), What do micro price data tell us on the validity of the new keynesian
phillips curve?, Kiel working papers, Kiel Institute for the World Economy.
Alvarez, L. J., E. Dhyne, M. Hoeberichts, C. Kwapil, H. L. Bihan, P. Lünnemann,
F. Martins, R. Sabbatini, H. Stahl, P. Vermeulen, and J. Vilmunen (2006), Sticky
prices in the euro area: A summary of new micro-evidence, Journal of the European Economic
Association, 4(2-3), 575–584.
An, S. and F. Schorfheide (2007), Bayesian analysis of dsge models, Econometric Reviews,
26(2-4), 113–172.
Bils, M. and P. J. Klenow (2004), Some evidence on the importance of sticky prices, Journal
of Political Economy, 112(5), 947–985.
Blundell, R. and T. Macurdy (1999), Labor supply: A review of alternative approaches, in:
O. Ashenfelter and D. Card (eds.), Handbook of Labor Economics.
Caballero, R. J. and E. M. Engel (2007), Price stickiness in ss models: New interpretations
of old results, Cowles Foundation Discussion Papers 1603, Cowles Foundation, Yale University.
Calvo, G. A. (1983), Staggered prices in a utility-maximizing framework, Journal of Monetary
Economics, 12(3), 383–98.
Campbell, J. R. and B. Eden (2005), Rigid prices: evidence from u.s. scanner data, Working
paper series, Federal Reserve Bank of Chicago.
Canova, F. and L. Sala (2009), Back to square one: Identification issues in dsge models,
Journal of Monetary Economics, 56(4), 431–449.
Caplin, A. S. and D. F. Spulber (1987), Menu costs and the neutrality of money, The
Quarterly Journal of Economics, 102(4), 703–25.
Carvalho, C. and N. A. Dam (2008), Estimating the cross-sectional distribution of price
stickiness from aggregate data, Ssrn elibrary working papers, SSRN.
Cecchetti, S. G. (1986), The frequency of price adjustment: A study of the newsstand prices
of magazines, Journal of Econometrics, 31(3), 255–274.
Clarida, R., J. Galí, and M. Gertler (2000), Monetary policy rules and macroeconomic
stability: Evidence and some theory, The Quarterly Journal of Economics, 115(1), 147–180.
Coenen, G., A. T. Levin, and K. Christoffel (2007), Identifying the influences of nominal
and real rigidities in aggregate price-setting behavior, Journal of Monetary Economics, 54(8),
2439–2466.
DeJong, D. N., B. F. Ingram, and C. H. Whiteman (2000), A bayesian approach to
dynamic macroeconomics, Journal of Econometrics, 98(2), 203–223.
Dixit, A. K. and J. E. Stiglitz (1977), Monopolistic competition and optimum product
diversity, American Economic Review , 67(3), 297–308.
Dotsey, M., R. G. King, and A. L. Wolman (1999), State-dependent pricing and the
general equilibrium dynamics of money and output, The Quarterly Journal of Economics,
114(2), 655–690.
Fernandez-Villaverde, J. and J. Rubio-Ramirez (2004), Comparing dynamic equilibrium
models to data: a bayesian approach, Journal of Econometrics, 123(1), 153–187.
Fougere, D., H. L. Bihan, and P. Sevestre (2005), Heterogeneity in consumer price stickiness - a microeconometric investigation, Working paper series, European Central Bank.
Gali, J. and M. Gertler (1999), Inflation dynamics: A structural econometric analysis, Journal of Monetary Economics, 44(2), 195–222.
Golosov, M. and R. E. Lucas (2007), Menu costs and phillips curves, Journal of Political
Economy, 115, 171–199.
Jadresic, E. (1999), Sticky prices - an empirical assessment of alternative models, Imf working
papers, International Monetary Fund.
Kiley, M. T. (2002), Partial adjustment and staggered price setting, Journal of Money, Credit
and Banking, 34(2), 283–98.
Linde, J. (2005), Estimating new-keynesian phillips curves: A full information maximum likelihood approach, Journal of Monetary Economics, 52(6), 1135–1149.
Lubik, T. and F. Schorfheide (2005), A bayesian look at new open economy macroeconomics,
Economics working paper archive, The Johns Hopkins University,Department of Economics.
Mackowiak, B. and F. Smets (2008), On implications of micro price data for macro models,
Working Paper Series 960, European Central Bank.
Mash, R. (2004), Optimising microfoundations for inflation persistence, Tech. rep.
Midrigan, V. (2007), Menu costs, multi-product firms, and aggregate fluctuations, CFS Working Paper Series 2007/13, Center for Financial Studies.
Nakamura, E. and J. Steinsson (2008), Five facts about prices: A reevaluation of menu cost
models, The Quarterly Journal of Economics, 123(4), 1415–1464.
Rios-Rull, J.-V., F. Schorfheide, C. Fuentes-Albero, M. Kryshko, and R. Santaeulalia-Llopis (2009), Methods versus substance: Measuring the effects of technology shocks on hours, NBER Working Papers 15375, National Bureau of Economic Research, Inc.
Rotemberg, J. J. (2005), Customer anger at price increases, changes in the frequency of price
adjustment and monetary policy, Journal of Monetary Economics, 52(4), 829–852.
Sargent, T. J. (1989), Two models of measurements and the investment accelerator, Journal
of Political Economy, 97(2), 251–87.
Sbordone, A. M. (2002), Prices and unit labor costs: a new test of price stickiness, Journal
of Monetary Economics, 49(2), 265–292.
Schorfheide, F. (2000), Loss function-based evaluation of dsge models, Journal of Applied
Econometrics, 15(6), 645–670.
Sheedy, K. D. (2007), Intrinsic inflation persistence, CEP Discussion Papers dp0837, Centre
for Economic Performance, LSE.
Sims, C. A. (2002), Solving linear rational expectations models, Computational Economics,
20(1-2), 1–20.
Smets, F. and R. Wouters (2007), Shocks and frictions in us business cycles: A bayesian
dsge approach, American Economic Review , 97(3), 586–606.
Taylor, J. B. (1980), Aggregate dynamics and staggered contracts, Journal of Political Economy, 88(1), 1–23.
Uhlig, H. (1998), A toolkit for analysing nonlinear dynamic stochastic models easily.
Wolman, A. L. (1999), Sticky prices, marginal cost, and the behavior of inflation, Economic
Quarterly, (Fall), 29–48.
Yao, F. (2009), The cost of tractability and the calvo pricing assumption, SFB 649 Discussion
Papers 042, Humboldt University, Berlin, Germany.