
ESSAYS ON MODEL UNCERTAINTY IN MACROECONOMICS

DISSERTATION Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University By Mingjun Zhao, B.A., M.A., M.A.S. ***** The Ohio State University 2006

Dissertation Committee:
Bill Dupor, Adviser
Paul Evans
Pok-sang Lam

Approved by

Adviser Graduate Program in Economics

UMI Number: 3226392

Copyright 2006 by Zhao, Mingjun All rights reserved.


© Copyright by Mingjun Zhao 2006

ABSTRACT

My dissertation grapples with issues of model uncertainty in macroeconomics and analyzes their consequences for monetary policy. It consists of three essays. In the first essay (Chapter 1), Monetary Policy under Misspecified Expectations, I examine policy choices for a central bank that faces uncertainty about how economic agents form expectations. The economy contains both rule-of-thumb agents, who base their expectations on recent observations, and agents who have rational expectations. The central bank is uncertain about the fraction of rule-of-thumb agents. This uncertainty concern enables me to partially rationalize the overly cautious policy stance of the Fed: empirically observed policy in the past two decades involves much weaker responses than optimal policies derived from various micro-founded models. It is well understood that when the economy is more forward-looking, the central bank displays more aggressive responses to inflation and output. But an uncertainty-averse central bank evaluates policies by their performance in the worst case, which in my economy has a high fraction of backward-looking agents. The best policy the central bank can choose thus involves moderate responses. That is to say, this minimax policy moves closer toward the actual, less responsive policy. In the second essay (Chapter 2), Phillips Curve Uncertainty and Monetary Policy, I investigate the effect of model uncertainty on policy choices employing a more general approach, which nests the minimax and Bayesian approaches as limiting cases.

The central bank is uncertain about whether the economy has a sticky price Phillips curve or a sticky information Phillips curve. I argue that how the central bank chooses a policy depends both on its perception of the uncertainty environment and on its attitude towards uncertainty. I find that as the central bank either becomes more uncertainty-averse or considers sticky information more plausible, its response to inflation decreases and its response to output increases. The third essay (Chapter 3) is entitled Optimal Simple Rules in RE Models with Risk Sensitive Preferences. This paper provides a useful method for solving optimal simple rules under risk sensitive preferences in macro models with forward-looking behavior. An application to a new Keynesian model with lagged dynamics is offered, and risk sensitive preferences are found to amplify policy responses.


This is dedicated to my parents.


ACKNOWLEDGMENTS

First and foremost, I would like to thank Bill Dupor for years of invaluable guidance, mentoring and encouragement. He is a superb researcher, teacher and advisor, whose brilliant insights have constantly inspired me throughout my research. I'm indebted to him for most of my intuition and knowledge in macroeconomics. Without his help, I could never have completed this dissertation. I would also like to thank Paul Evans, whose advice and comments led to great improvement of earlier versions of the essays. I am also grateful to Pok-sang Lam and Huston McCulloch for their stimulating discussions and valuable suggestions. My thanks also go to Joseph Kobaski, Lung-fei Lee and Masao Ogaki in the department. They have devoted their precious time to discussions on drafts of the essays. I also thank Leonard Kiefer, Hyeongwoo Kim, Virgiliu Midrigan and Takayuki Tsuruga for many useful suggestions. I owe so much to so many friends at The Ohio State University that I cannot list all I should acknowledge. I thank them sincerely. I am deeply grateful to my parents, Taorong and Junhu, and my sister, Mingmei. Without their love, support and encouragement, none of my achievements would have been possible. Special thanks go to my fiancée, Lei Cheng, for her unconditional support in countless aspects. She is the best blessing of my life.

VITA

February 23, 1978 . . . . . . . . Born - Anhui, P. R. China
2000 . . . . . . . . . . . . . . B.A. Economics, Renmin University of China
2002 . . . . . . . . . . . . . . M.A. Economics, The Ohio State University
2005 . . . . . . . . . . . . . . Master of Applied Statistics, The Ohio State University
2002-present . . . . . . . . . . Graduate Teaching Associate, The Ohio State University

FIELDS OF STUDY
Major Field: Economics
Studies in: Macroeconomics, Monetary Policy, Applied Econometrics


TABLE OF CONTENTS

Abstract
Dedication
Acknowledgments
Vita
List of Tables
List of Figures

Chapters:

1. Monetary Policy under Misspecified Expectations
   1.1 Introduction
   1.2 The Model
   1.3 Monetary Policy and Model Detection
   1.4 Computation Details and Results
       1.4.1 Benchmark Case
       1.4.2 Inertial Policy Rules
       1.4.3 Sensitivity Analysis
   1.5 Conclusions

2. Phillips Curve Uncertainty and Monetary Policy
   2.1 Introduction
   2.2 The Model
   2.3 Policy Evaluation
   2.4 Solution Method
       2.4.1 Calibration
       2.4.2 Model Solution
   2.5 Results
   2.6 An Economic Interpretation of Degree of Uncertainty Aversion
   2.7 Conclusion

3. Optimal Simple Rules in RE Models with Risk Sensitive Preferences
   3.1 Introduction
   3.2 Problem Formulation
   3.3 Optimal Simple Rules
   3.4 An Application

4. Concluding Remarks

Appendices:

A. Rational Expectations Solution Characterization to the Hybrid Model
B. Loss Function Characterization
C. Detection Error Probability
D. State Space Representation of the DSGE Models
   D.1 Sticky Price DSGE
   D.2 Sticky Information DSGE
   D.3 Solution Method
   D.4 Loss Function

Bibliography


LIST OF TABLES

Table
1.1 Benchmark Parameters
1.2 Detection Error Probability
2.1 Parameter Values
2.2 Various Rules
2.3 Uncertainty Premium Interpretation (λ = 0.1 and ν = 0.5)

LIST OF FIGURES

Figure
1.1 Model-specific optimal rules and their performances (λ = 0.6, ν = 1)
1.2 Loss functions for different model-specific optimal rules (λ = 0.6, ν = 1)
1.3 Model-specific optimal inertial rules and their performances (λ = 0.6, ν = 1)
1.4 Loss functions for different model-specific optimal rules (λ = 0.6, ν = 1)
1.5 Model-specific optimal rules with varying preference weight on interest rate (λ = 0.6)
1.6 Model-specific optimal rules with varying preference weight on the output gap (ν = 1)
2.1 How Taylor-type rule coefficients vary with degree of model uncertainty aversion with only a demand shock, λ = 0.1 and ν = 0.5
2.2 How Taylor-type rule coefficients vary with degree of model uncertainty aversion having both demand and supply shocks, λ = 0.1 and ν = 0.5
2.3 How Taylor-type rule coefficients vary with plausibility of the SP model, λ = 0.1 and ν = 0.5

CHAPTER 1

MONETARY POLICY UNDER MISSPECIFIED EXPECTATIONS

1.1 Introduction

Since [59], a considerable literature has emerged that characterizes desirable monetary policy in terms of simple interest-rate feedback rules, i.e., guidelines under which the central bank sets the interest rate, its policy instrument, in response to economic conditions. Unfortunately, the optimal policy rules derived from various micro-founded models, such as those collected in [19], typically imply much stronger responses to inflation and output than what is empirically estimated from US data. One branch of the literature has tried to resolve this puzzle by introducing uncertainty and robustness concerns. Broadly speaking, there are two distinctive ways of introducing model uncertainty. The first research strategy, advocated by Bennett T. McCallum, takes a small set of particular models and evaluates the performance of a given rule across the models, and thus characterizes monetary policy rules that work reasonably well in a variety of plausible quantitative models. [40] and [41] investigate the performance of monetary policy rules using five macroeconomic models that reflect a wide range of views on

aggregate dynamics. But they do not try to actually find the robust optimal rule. The second line of research views model uncertainty as uncertainty about parameters of the structural model, while policymakers remain confident that the reference model is the right one for the true economy. [47] and [50], for example, consider parameter uncertainty in a backward-looking model, and [28] studies it in a forward-looking macromodel. In both cases, they model policymakers as knowing all parameters except two, for which they know only bounds. From a methodological point of view, there are two main ways to treat model uncertainty. The first is to employ traditional Bayesian decision theory. The other applies the ideas of least favorable prior decision theory ([30]) combined with advances in robust control theory. The Bayesian approach to parameter uncertainty posits that policymakers have priors over all parameters in the model. The optimal policy is derived from minimizing an expected loss, integrating over the parameters with respect to the prior density. Brainard's analysis ([8]) is a classical example, and since this work it has generally been accepted that parameter uncertainty can lead to less aggressive policy. Although the Bayesian approach has intuitive appeal in decision theory, many people doubt its usefulness for practical policymaking. The key to such analysis is the existence of prior distributions, which requires an unrealistic amount of information from policymakers. The alternative approach assumes no prior distribution¹ over the true model and assumes the policymaker is uncertainty-averse. Thus the policymaker evaluates policies by their worst-case performance across the various models under consideration. Such a rule is designed to avoid an especially
¹ Or, equivalently, we can say that it assumes multiple prior distributions.

poor performance of monetary policy in the event of an unfortunate model specification. However, most existing research along this line finds that misspecification tends to amplify rather than attenuate the response of the interest rate to inflation and output ([28]; [52]; [56] etc.). [17] find that whether uncertainty concerns amplify or attenuate the interest rate response depends on the interaction between shocks and the source of uncertainty. They then develop a non-attenuation principle, which not only explains existing amplification findings, but also stipulates the necessary conditions for obtaining the desired attenuation results. Although this spate of work has produced thought-provoking results, it has also incurred some criticisms. As [53] points out, to some extent, the existing studies employing the minimax approach pay a lot of attention to uncertainty about the coefficients of the structural equations, while giving little consideration to the real structural dynamics. In this paper, we propose a model of the economy that nests both forward-looking and backward-looking ingredients. At one extreme, it is a forward-looking model derived from first principles; at the other extreme, it is a variant of a backward-looking model with many empirical justifications. These two extremes imply different dynamics. The exact dynamics of this nested model depend on the share of rule-of-thumb economic agents in the economy. Those agents simply take recent economic data to be their expectations for the future. Thus in our model economy, uncertainty about one of the coefficient parameters represents underlying structural uncertainty. We further discipline the set of models in consideration by using detection error

probability as proposed by [33]. Each model under consideration is hard to distinguish from the others in the set. We also take a worst-case approach². [42] also study the robustness of simple policy rules under model uncertainty that includes purely forward-looking and purely backward-looking models as well as a hybrid model, but they mainly apply Bayesian analysis to three competing non-nested models. Under our nested model with partially misspecified expectations, we find that the central bank does not display as strong a response as in the rational expectations models. When the economy is more forward-looking, the central bank will be more aggressive in combating inflation. Since the uncertainty-averse central bank evaluates policies by their performance in the worst case, policy will have moderate responses to expected inflation and the output gap. This is because the worst case entails a large fraction of agents who are rule-of-thumbers. Finally, if the central bank commits itself to inertial rules, concerns about misspecified expectations also prompt the central bank to reduce its degree of policy inertia. The paper also uncovers the implications of uncertainty about expectations formation for policy choices. It complements recent research that analyzes monetary policy with adaptive learning expectations ([48]). In the next section of the paper we describe our nested model and its justification. Section 1.3 defines the relevant monetary policy rules. Section 1.4 contains computation details and our interpretation of the results. Section 1.5 concludes.
² The formal rationale for the worst-case approach is that minimax behavior resolves the Ellsberg paradox in decision theory. Or, humorously, the policymaker follows the alleged Murphy's Law: if anything can go wrong, it will.

1.2 The Model

We start from a standard New Keynesian model ([64]) characterized by two structural equations: the consumption Euler equation (1.1) and the dynamic pricing equation (1.2). They can be derived as log-linear approximations to equilibrium conditions of an underlying dynamic equilibrium model with sticky prices.

y_t = Ê_t y_{t+1} − σ⁻¹(R_t − Ê_t π_{t+1}) + ε_t   (1.1)
π_t = β Ê_t π_{t+1} + κ y_t + u_t   (1.2)

Here y_t denotes the output gap (defined as the deviation of output from its natural level, i.e., the equilibrium level of output under flexible prices), π_t is the inflation rate, and R_t is the deviation of the short-term nominal interest rate from its steady-state value. Ê represents economic agents' expectations. ε_t and u_t are the structural shocks. The structural parameters σ and κ are both positive by assumption. The parameter σ represents the inverse of the intertemporal elasticity of substitution, and κ, which is the slope of the short-run aggregate supply curve, can be interpreted as a measure of the speed of price adjustment. β is the time discount factor of the price-setters, assumed to be the same as the discount factor of the representative household. We deviate from the standard forward-looking model by assuming that there are two types of economic agents. In forming their expectations, one type has rational expectations and the other follows adaptive expectations. We assume that each agent's type is determined in period 0 and agents know their types. The central bank is uncertain about their types³. Assume a fraction a of economic agents have rational
³ This framework is similar to [28].

expectations, and for them

Ê_t π_{t+1} = E_t π_{t+1}
Ê_t y_{t+1} = E_t y_{t+1}

Possibly because of the high cost of collecting sufficient information to form correct expectations, or because of the existence of optimization costs, the remaining fraction of agents just follow a backward-looking rule of thumb

Ê_t π_{t+1} = φ₁ π_{t−1}   (1.3)
Ê_t y_{t+1} = φ₂ y_{t−1}   (1.4)

where φ₁ and φ₂ are rule-of-thumb coefficients. Although this adaptive expectations behavior is admittedly simplistic and justified only on tractability grounds, we believe that the presence of such agents captures an important aspect of actual economies which is missing in standard forward-looking models. For example, historical data suggest that inflation displays significant persistence in the face of shocks, while the standard dynamic pricing equation allows current inflation to be a jump variable that can respond immediately to any disturbance. Rule-of-thumb behavior incorporates endogenous persistence by including lagged variables in the structural equations. [26] introduced this rule-of-thumb behavior to analyze macroeconomic dynamics. Empirical support for such behavior among a substantial fraction of households in the U.S. and other industrialized countries can be found in [9]. In this way, we get a hybrid model that nests both forward-looking and backward-looking models as special cases. More generally, we could allow the fraction of firms using a rule of thumb to set prices to differ from the fraction of consumers using a rule of thumb to decide spending. This kind of generality can bring much richer

inflation and output dynamics into the nested model. Now, the structural equations for our economy are

y_t = a E_t y_{t+1} − a σ⁻¹(R_t − E_t π_{t+1}) + (1 − a) φ₂ y_{t−1} − (1 − a) σ⁻¹(R_t − φ₁ π_{t−1}) + ε̃_t   (1.5)
π_t = a β E_t π_{t+1} + (1 − a) φ₁ π_{t−1} + κ y_t + ũ_t   (1.6)

where ε̃_t and ũ_t now include both the structural shocks ε_t and u_t and the forecast errors from the rule-of-thumb behaviors (1.3) and (1.4); we term them collectively exogenous shocks. When the fraction of rule-of-thumb agents approaches zero, our economy reverts to the standard, theoretically forward-looking model of [64]; when the fraction approaches 1, it is a variant of [51], an empirically justified backward-looking model. Depending on the value of a, our model exhibits substantial differences in inflation and output dynamics, partially reflecting ongoing theoretical and empirical controversies about macroeconomic models. More recently, [27] discuss the determinacy of interest rate rules in the presence of rule-of-thumb households in their macromodel. We assume that the policymaker commits at the beginning of period 0 to a policy rule, which is set according to the reaction function proposed by [15]:
R*_t = ψ_π E_t(π_{t+1}) + ψ_y y_t   (1.7)
R_t = ρ R_{t−1} + (1 − ρ) R*_t   (1.8)

where R*_t represents the well-known Taylor rule and the lagged interest rate captures the tendency of the central bank toward smoothing interest rates. We assume that the commitment to such a rule is credible, and in particular that the policymaker

does not revise it at later dates using additional information he might have gathered about the unknown fraction of rational agents. Although in the standard Taylor rule ([59]) the federal funds rate responds to lagged inflation and output rather than their expected future values, this forward-looking rule nests it as a special case: lagged inflation or a linear combination of lagged inflation could be a sufficient statistic for forecasting future inflation. Since the central bank does not know the true a, it does not know which equilibrium⁴ will be realized and therefore cannot form the unique E_t π_{t+1} that equation (1.7) requires. To simplify the analysis, we assume the central bank targets inflation expectations in the financial markets, whose participants are presumably forming rational expectations, so that this expectation is uniquely determined. This forward-looking specification also makes the central bank's behavior consistent with our model setting. We can rationalize the consumption Euler equation (1.5) and the dynamic pricing equation (1.6) by referring to habit persistence in the utility function of the representative household as in [24]⁵, certain relative wage contracts as in [25], or a form of price indexation different from the standard Calvo specification as in [13] and in [29]. In [1], two similar equations are derived by introducing rule-of-thumb economic agents. Each of the equations (1.5), (1.6) and (1.8) comprising our structural economy exhibits endogenous persistence, which allows for more realistic dynamics. This overcomes a drawback of forward-looking models by allowing us to explain the empirical persistence in inflation, the output gap and the interest rate. In purely forward-looking
⁴ Given the known policy rule and the structural equations (1.5) and (1.6), a rational expectations equilibrium can be determined for the (rational) agents. Appendix A provides the rational expectations solution to our economy.
⁵ In this case, the parameter a would be a function of the degree of habit persistence and would index the importance of lagged consumption (output) relative to current consumption (output).

models, structural errors have to be assumed to be serially correlated to fit the data. Also, when a non-structural VAR approach is used, the Akaike information criterion (AIC) or the Schwarz information criterion (SIC) often leads to choosing higher-order VARs⁶. Although such models fit the data better, it is a daunting task to justify macroeconomic models including more than one lag ([14]).
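To make the dynamics concrete, the backward-looking limit a = 0 of (1.5)-(1.6) can be iterated forward directly, since it contains no forward-looking terms. The sketch below is ours, not the dissertation's solution method (the general case requires the rational expectations solution of Appendix A): it uses the Table 1.1 parameters, substitutes the rule-of-thumb forecast φ₁π_{t−1} for expected inflation in the policy rule, and assumes illustrative i.i.d. standard normal shocks and Taylor-type coefficients.

```python
import random
from statistics import pvariance

# Parameters from Table 1.1 (sigma^{-1} = 1, kappa = 0.3, phi1 = 0.97, phi2 = 0.88).
sigma, kappa = 1.0, 0.3
phi1, phi2 = 0.97, 0.88
lam, nu = 0.6, 1.0            # loss weights (lambda, nu)
psi_pi, psi_y = 1.5, 0.5      # illustrative Taylor-type coefficients (assumed, not the paper's)

def simulate(T=5000, seed=1):
    """Iterate the a = 0 limit of (1.5)-(1.6) under the rule R_t = psi_pi*phi1*pi_{t-1} + psi_y*y_t."""
    rng = random.Random(seed)
    pi, y, R = [0.0], [0.0], [0.0]
    for _ in range(T):
        eps, u = rng.gauss(0, 1), rng.gauss(0, 1)
        # Substituting the rule into the IS curve leaves one simultaneity in y_t:
        # y_t*(1 + psi_y/sigma) = phi2*y_{t-1} - (psi_pi - 1)*phi1*pi_{t-1}/sigma + eps
        y_t = (phi2 * y[-1] - (psi_pi - 1) * phi1 * pi[-1] / sigma + eps) / (1 + psi_y / sigma)
        pi_t = phi1 * pi[-1] + kappa * y_t + u
        R_t = psi_pi * phi1 * pi[-1] + psi_y * y_t
        y.append(y_t); pi.append(pi_t); R.append(R_t)
    return pi, y, R

def loss(pi, y, R):
    """Loss (1.9): Var(pi) + lambda*Var(y) + nu*Var(R_t - R_{t-1})."""
    dR = [b - a for a, b in zip(R, R[1:])]
    return pvariance(pi) + lam * pvariance(y) + nu * pvariance(dR)

pi, y, R = simulate()
```

With these coefficients the a = 0 system is stable (the transition matrix has complex eigenvalues of modulus about 0.75), so the unconditional variances in (1.9) are finite.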

1.3 Monetary Policy and Model Detection

As is standard in the monetary policy literature, we assume that the objective of the central bank is to minimize the weighted sum of the unconditional variances of inflation, the output gap and the change in the interest rate:

E[L_t] = Var(π_t) + λ Var(y_t) + ν Var(R_t − R_{t−1})   (1.9)

where λ and ν are the weights on output stabilization and interest rate smoothing respectively⁷. Appendix B shows how to express this loss function in terms of the model solution. For our purposes, the central bank sets the interest rate R_t (chooses the coefficients ψ_π, ψ_y and ρ) to minimize the loss function, but the central bank does not know the exact value of a. So the central bank faces real structured model uncertainty, not just parameter uncertainty as discussed in [56], [47], [28], etc. Changes in the parameter a affect two elements of the model. First, smaller values of a increase the degree of endogenous persistence in inflation and the output gap. That is, the importance of
⁶ Both model selection criteria strike a balance between a better fit and model parsimony, and the SIC penalizes extra lag terms more heavily.
⁷ In a more general framework, the weights λ, ν are related to deep parameters in the structural equations. So the central bank should take loss-function uncertainty into account too. In this paper, however, sensitivity analysis with respect to λ and ν in Section 1.4.3 reveals that our findings are robust to this uncertainty.

lagged inflation and the output gap relative to expected inflation and the expected output gap increases. Second, changes in a complicate the channel through which monetary policy influences the economy, as shown in equation (1.5). Inspired by [33], we will further discipline the models in consideration with a statistical theory of detection. The models under consideration should not be easy to distinguish with the available data. The detection error probability approach takes an agnostic position on whether the true data generating process is given by one model or the other, and specifies the probability of making the wrong choice between the two models on the basis of in-sample fit with a given sample size. By calibrating the boundary of the set so that the model on the boundary is hard to distinguish from the reference model, the detection error probability is argued to define a reasonable degree of robustness that an uncertainty-averse decision maker desires. Obviously, the set calibrated depends on the reference model chosen. We will generalize this method and calibrate a set in which the models are hard to distinguish from each other, rather than calibrating only the model on the boundary against a specific reference model. Appendix C describes the computation in some detail. What is the optimal monetary policy when the central bank faces such model uncertainty? We consider two methods to identify the robust optimal rule. The first approach argues that model uncertainty aversion prompts the policymaker to adopt a worst-case model to devise a robust rule. Therefore the central bank's extremization problem is:

min_{ψ_π, ψ_y, ρ} max_{a} E[L_t]   (1.10)

subject to (1.5), (1.6) and (1.8).

The model-specific optimal rule is the rule in which the parameters ψ_π, ψ_y and ρ minimize the loss given a particular value of a. The minimax rule minimizes the expected loss over all the models considered. The second method takes a Bayesian perspective and weighs the outcomes from the different models according to priors over the models. In this case, the central bank's problem is to

min_{ψ_π, ψ_y, ρ} E_a E[L_t]   (1.11)

subject to (1.5), (1.6) and (1.8), where E_a denotes Bayesian weighting over different values of a.
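The two selection criteria (1.10) and (1.11) differ only in how losses are aggregated across models: worst case versus prior-weighted average. A generic sketch, with our own helper names and a purely illustrative quadratic stand-in for the model-implied loss:

```python
def minimax_rule(rules, models, loss):
    # Problem (1.10): minimize the worst-case loss across candidate models.
    return min(rules, key=lambda r: max(loss(r, a) for a in models))

def bayes_rule(rules, models, prior, loss):
    # Problem (1.11): minimize the prior-weighted expected loss.
    return min(rules, key=lambda r: sum(prior[a] * loss(r, a) for a in models))

# Toy stand-in loss: the best response scales with the model index a.
rules = [0.0, 1.0, 2.0]
models = [0.0, 1.0]
toy_loss = lambda r, a: (r - 2 * a) ** 2

worst_case_pick = minimax_rule(rules, models, toy_loss)                 # -> 1.0 (hedges both models)
bayes_pick = bayes_rule(rules, models, {0.0: 0.9, 1.0: 0.1}, toy_loss)  # -> 0.0 (tracks the likely model)
```

The example illustrates the qualitative point of the chapter: the minimax choice compromises across all models in the set, while the Bayesian choice tilts toward whichever model the prior favors.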

1.4 Computation Details and Results

Typically this kind of problem has to be solved numerically, if it can be solved at all. In this section, we apply numerical methods to solve our problem.

1.4.1 Benchmark Case

Table 1.1 concisely summarizes the values assumed for the different non-policy parameters; they were chosen so that one period equals a quarter. We set the utility discount factor β equal to 0.99, implying a steady-state real annual return of 4 percent. The elasticity of intertemporal substitution, σ⁻¹, is set to 1 following [15]. As to κ, the output elasticity of inflation, values found in the literature range from 0.05 ([58]) to 1.22 ([12]). We follow [15] and set it equal to 0.3, which is consistent with the empirical findings in [49]. By our assumption of simple rule-of-thumb behavior, φ₁ and φ₂ can be estimated from the univariate time series models (1.3) and (1.4)⁸. We use quarterly data from 1949Q1 to 2004Q2. The inflation rate is the annualized quarterly change in the GDP deflator (seasonally adjusted). The output gap is the percent deviation of real GDP (measured in chained 2000 dollars, seasonally adjusted) from potential GDP (as calculated by the Congressional Budget Office). We obtain φ₁ = 0.97 and φ₂ = 0.88⁹. In the benchmark case, λ equals 0.6, which indicates inflation variability is more distasteful than output variability. The weight on interest rate smoothing, ν, is set to 1, so the variability of nominal interest rate changes is as costly as inflation variability. Empirical studies (e.g., [55]) reveal that ν must be larger than λ to match US data. A value of 0 is assigned to the inertia parameter ρ in the benchmark case, a simplification that enables a compelling comparison between our results and the standard Taylor rule. Otherwise, the central bank would face a tradeoff between policy aggressiveness and its degree of inertia. Here, to capture the central bank's inertial behavior as shown in the US data, we place a large weight on interest rate variability in its objective function. We defer consideration of inertial interest rate rules to the next subsection. Subsection 1.4.3 looks at the consequences of placing relatively more or less emphasis on output stabilization and interest rate smoothing in the objective function. As discussed in Appendix A, given these parameter values, P and F depend highly nonlinearly on the policy parameters ψ_π and ψ_y as well as on the value of a¹⁰. This nonlinear dependence of the reduced-form parameters on a implies that uncertainty
⁸ These rule-of-thumbers are convinced by [46] that the prediction performance of a parsimonious univariate time series model could outweigh that of an elaborate structural model.
⁹ Data are from the FRED II database of the Federal Reserve Bank of St. Louis at http://research.stlouisfed.org/fred2/
¹⁰ There could be no stationary solutions to our structural economy. Trials reveal that ψ_π > 1 and ψ_y > 0 guarantee the existence of stationary solutions for the different a's in our case, which is consistent with the vast majority of the monetary policy literature.


Parameter   Value   Description
β           0.99    utility discount factor
σ⁻¹         1       elasticity of intertemporal substitution
κ           0.3     output elasticity of inflation
φ₁          0.97    rule-of-thumb inflation coefficient
φ₂          0.88    rule-of-thumb output coefficient
λ           0.6     weight on output in objective function
ν           1       weight on interest rate change in objective function
ρ           0       degree of inertia in the interest rate rule

Table 1.1: Benchmark Parameters

about a truly reflects structured model uncertainty. [60], on the other hand, start from a reduced form and consider model uncertainty by augmenting the transition matrix P with a matrix of perturbations. The rule that guarantees stability of the economy for the maximum perturbation is pursued by the robust central bank. Based on possible discrete values of a, Table 1.2 reports the detection error probabilities for each combination of models under the standard Taylor rule in a sample of 20 observations¹¹. The larger the sample, the lower the detection error probability. As suggested by [33], the central bank is assumed to design robust rules that work well for alternative models whose detection error probabilities are .15 or greater. So the uncertainty-concerned central bank will examine possible a's from [0, 0.1, ..., 0.7], because this constitutes the maximum set of models in which each model is hard to distinguish from the others. To keep things as simple as possible, we characterize the monetary policy response to unit exogenous shocks. A simulated annealing algorithm is used to search for the optimal coefficients ψ_π and ψ_y directly.
Footnote 11: With model-specific optimal rules, we obtain similar results.


  a     0    0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9   1
  0    .50  .50  .50  .50  .49  .47  .41  .26  .12  .07  .04
  0.1  .50  .50  .50  .50  .49  .46  .38  .24  .10  .05  .04
  0.2  .50  .50  .50  .50  .49  .46  .36  .19  .07  .04  .05
  0.3  .50  .50  .50  .50  .49  .45  .35  .16  .06  .06  .16
  0.4  .49  .49  .49  .49  .50  .46  .33  .15  .09  .19  .32
  0.5  .46  .46  .46  .45  .46  .50  .36  .22  .25  .39  .46
  0.6  .39  .37  .36  .34  .34  .37  .50  .40  .46  .49  .50
  0.7  .27  .22  .19  .15  .15  .20  .39  .50  .50  .50  .50
  0.8  .13  .10  .07  .06  .09  .26  .45  .50  .50  .50  .50
  0.9  .06  .05  .04  .06  .18  .38  .49  .50  .50  .50  .50
  1    .06  .03  .05  .15  .32  .46  .50  .50  .50  .50  .50

Table 1.2: Detection Error Probability

The Minimax Analysis

Figure 1.1 shows the various model-specific optimal policy rules and their performances. When the share of rule-of-thumb economic agents becomes smaller (i.e., a is larger), the optimal rules call for stronger responses to inflation. The increases in the coefficients of the Taylor rule as the share of forward-looking agents rises are quantitatively obvious. With nominal rigidities present, by varying the nominal rate the central bank can effectively change the short-term real rate. The larger a is, the more leverage the central bank gains over the real economy by increasing the inflation response coefficient. In addition, when most of the agents are forward-looking, beliefs about how the central bank will set the interest rate in the future matter a lot. A larger response to expected inflation not only makes this classic mechanism work better; its strong signal of keeping inflation low also helps to reduce current inflation at a smaller loss. On the other hand, as argued in [14], the central bank only responds to the output gap to the extent that it has predictive power for inflation. And as equation (1.6) indicates, this predictive power does not directly depend on the share of rule-of-thumbers. So it is not surprising to see the irregular and quantitatively insignificant changes in the response coefficient φy.

It is straightforward to see that as the economy becomes more forward-looking, inflation and output variabilities are more under control. The variances of the interest rate change display a hump shape as the forwardness of the economy increases, as shown in the lower part of Figure 1.1. Note that as a becomes very large, the optimal rules involve more aggressive responses. At the same time, inflation and the output gap are less persistent, so the economy transitions faster to the steady state, making the unconditional variance of the interest rate change smaller. This beneficial effect can outweigh the penalty on instrument variability in the more forward-looking economy. The minimized total losses share a similar pattern.

Since the central bank is uncertain about a, by the minimax criterion it chooses the optimal rule in the worst case a = 0.4, which has response coefficients φπ = 2.50 and φy = 0.37. This minimax rule is the best policy the robust central bank can adopt. To see this more clearly, let us look at slices of the loss function surface for the various model-specific optimal policy rules, as in Figure 1.2. The solid line shows the performance of the minimax rule; each of the dotted lines is the loss function for a policy that is optimal for a particular value of a (indicated beside each line). Several observations are apparent. First, the minimax rule achieves the best performance in the least favorable case a = 0.4, but in other cases it is generally not optimal. Second, if the central bank sets the interest rate according to the optimal rules associated with a small value of a, the resulting losses will be large if a turns out to be high. The central bank could have achieved a much smaller loss by responding more

Figure 1.1: Model-specific optimal rules and their performances (λ = 0.6, ν = 1). [Upper panel: response coefficients φπ and φy; lower panel: variances and loss; horizontal axis: a, the fraction of forward-looking agents.]

aggressively. But when the central bank adopts an aggressive policy (e.g., the optimal rules corresponding to large values of a) to stabilize inflation and output fluctuations, the strong stabilization effect can be undone by the realization of a small a: when the true value of a is small, inflation and output dynamics are mainly determined by the rule-of-thumbers. Aggressive responses cannot effectively influence those dynamics; instead the central bank has to adjust nominal interest rates frequently in transition periods, making the overall performance of aggressive rules worse. To guard against these unfavorable situations, the central bank chooses to respond moderately to expected inflation and the output gap. The relatively flat solid line suggests that the minimax rule performs acceptably well across different values of a, and the central bank can successfully insulate itself against models in the broad range. Therefore, anxiety about misspecified expectations leads the central bank to weaken its responses to expected inflation and the output gap relative to the optimal responses in the more forward-looking economy. This conforms with the findings in [60]. As mentioned earlier, they work with the reduced form based on the structural model and pursue robust rules sustaining maximum structural perturbations. They find that rules that are robust to structured model uncertainty are less aggressive. Misspecified expectations can be viewed as a specific case of structured model uncertainty. [47] also contend that optimal policies become less aggressive as more structure is placed on uncertainty.

The Bayesian Analysis

In order to conduct Bayesian analysis, the central bank has to assign a prior over the distribution of a. Taking a flat prior of a over {0, 0.1, ..., 0.7}, the optimal rule has φπ = 2.51 and φy = 0.59. Compared with the model-specific optimal rules derived

Figure 1.2: Loss functions for different model-specific optimal rules (λ = 0.6, ν = 1). [Loss against a, the fraction of forward-looking agents, for each model-specific optimal rule; the rule for a = 0 has φπ = 1.98, φy = .54, the minimax rule for a = .4 has φπ = 2.50, φy = .37, and the rule for a = .6 has φπ = 3.12, φy = .49. The Stackelberg equilibrium is also marked.]

above, the Bayesian robust rule averages inflation responses over the different models in consideration. In some sense, it confirms Brainard's conservatism principle. We can obtain more attenuated responses to inflation by placing more weight on backward-looking models. However, this is an unpleasant feature of the Bayesian approach, since in practical policy decision making, policymakers do not have sufficient information to form a unique prior over the models in consideration ([56]). [37] also study optimal monetary policy in the presence of an uncertain fraction of rule-of-thumbers in the economy; their Bayesian analysis yields more aggressive policy responses with uncertainty concerns. The difference between their conclusions and ours (and also many in the previous literature) results mainly from their consideration of cross-parameter restrictions on the loss-function uncertainty.
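The two selection criteria at work in this section can be stated compactly. Given a grid of candidate rules and a loss for each rule under each candidate model a, the minimax central bank minimizes the worst-case loss, while the Bayesian bank with a flat prior minimizes the average loss. The numbers below are made up for illustration (they are not this chapter's computed losses); only the selection logic is the point.

```python
import numpy as np

# Loss of each candidate rule under each model in the grid of a values
# (hypothetical numbers: the aggressive rule is better on average but
# performs badly in its worst case).
losses = {
    "cautious":   np.array([4.0, 4.1, 4.2, 4.3]),
    "aggressive": np.array([3.0, 3.2, 3.6, 5.0]),
}

minimax_rule = min(losses, key=lambda r: losses[r].max())   # worst-case criterion
bayes_rule = min(losses, key=lambda r: losses[r].mean())    # flat-prior criterion
```

The two criteria disagree here: the aggressive rule wins on average, but the cautious rule has the better worst case. That is exactly the pattern behind the attenuated minimax responses in the text.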

1.4.2 Inertial Policy Rules

We now turn to inertial policy rules. The inertial character of central bank behavior shows up in various estimated central bank reaction functions and is hotly debated in the literature (e.g., [65]). Figure 1.3 displays the model-specific optimal inertial rules and their performances (footnote 12). As in the benchmark case, a larger fraction of forward-looking agents in the economy makes the responses to inflation more aggressive. Although the interest rates are now much more responsive to changes in expected inflation and the output gap than under non-inertial rules, the variances of the interest rate changes are actually smaller because of the inertia. Given the same weights in the objective function, the
Footnote 12: Here ρ is bounded between 0 and 0.7. When ρ takes a value greater than 0.7, our macro system is not always stable under the specified range of φπ and φy, and our algorithm fails to find the optimal policy. [42] reach a similar restriction on ρ for the pure backward-looking model they analyze.


Figure 1.3: Model-specific optimal inertial rules and their performances (λ = 0.6, ν = 1). [Upper panel: response coefficients φπ, φy and inertia ρ; lower panel: variances and loss; horizontal axis: a, the fraction of forward-looking agents.]

reduction of the variance of the change in the interest rate contributes much to the better performance of the inertial rules relative to the rules in the benchmark case.

The optimal degree of inertia also rises as the economy becomes more forward-looking. For forward-looking agents, inertial behavior adds to predictability and allows them to form better expectations. Rule-of-thumbers instead take the lagged interest rate as a guide to the central bank's stance on monetary policy, which exacerbates the deviation of the economy from the optimal paths. The more rule-of-thumbers, the more persistent the deviation will be. Therefore, the smaller a, the less inertia the central bank will choose, since the benefit of predictability diminishes when the economy is less forward-looking. As the economy becomes more forward-looking, the predictability benefit of an inertial rule increases and the deviation from the optimal paths is less persistent. In fact, when the expectations of a certain fraction of the agents are not misspecified, the central bank displays as much inertia as it can. This is consistent with the argument in [65]. In the pure forward-looking economy he considered, he contended that super-inertial policy rules (i.e., ρ > 1) may be desirable (footnote 13). The uncertainty-averse central bank will choose its policy rule based on the worst case, which corresponds to a = 0.2. As before, concerns about misspecified expectations prompt the central bank to dampen its response to expected inflation compared to the optimal responses in the more forward-looking economy. Furthermore, the large fraction of backward-looking agents forces the central bank to decrease the degree of policy inertia (footnote 14). Figure 1.4 displays the performances of various model-specific

Footnote 13: Inertial policy rules perform quite poorly in backward-looking models such as [51]. This is also in line with our argument.

Footnote 14: [28] finds that with parameter uncertainty the central bank increases the degree of its policy inertia. However, in his model, parameter uncertainty does not truly reflect dynamic uncertainty.

optimal inertial rules in different models. The solid line is for the minimax rule, and each of the dashed lines is for another model-specific optimal rule. If the central bank chooses highly inertial rules with aggressive responses, which would be optimal if a were large, it becomes vulnerable to an economy with a small a. If the central bank chooses less inertial rules, which would be optimal when a is small, it is potentially penalized in an economy with a large fraction of forward-looking agents; in that case the central bank would have done better by selecting a more inertial rule. The relatively flat solid line indicates that the minimax rule performs acceptably well across the different models in consideration, and the central bank insulates itself against the malevolent moves of nature. Thus we again conclude that concerns about misspecified expectations induce the central bank to reduce its policy inertia and attenuate its responses to expected inflation. The Bayesian analysis can yield muted responses, but once again it is hard to justify any unique prior we would have to assign for the distribution of a.

1.4.3 Sensitivity Analysis

In this subsection, we move back to non-inertial policy rules and look at optimal rules for several illustrative parameterizations of the central bank's objective function. Figure 1.5 shows how the optimal rule varies with the fraction of forward-looking agents as we vary the preference weight on the change in the interest rate among ν = 0.6, ν = 1 and ν = 1.7. The weight on the output gap remains at its benchmark value λ = .6. Solid lines are replications of the benchmark results shown in the upper part of Figure 1.1. Crosses on the lines indicate the minimax rules.


Figure 1.4: Loss Functions for Different Model-specific Optimal Rules (λ = 0.6, ν = 1). [Loss against a, the fraction of forward-looking agents; the rule for a = 0 has φπ = 2.79, φy = .94, ρ = .42, the minimax rule for a = .2 has φπ = 4.96, φy = 1.52, ρ = .61, and the rule for a = .6 has φπ = 9.34, φy = 1.05, ρ = .7.]

Figure 1.5: Model-specific optimal rules with varying preference weight on interest rate (λ = 0.6). [Response coefficients φπ and φy against a, the fraction of forward-looking agents, for ν = 0.6, ν = 1 and ν = 1.7.]

As the penalty on instrument variation becomes large, unsurprisingly, the paths of the responses to expected inflation and the output gap shift downwards. Furthermore, these paths share the same pattern as in the benchmark case: the larger the fraction of forward-looking agents, the more aggressive the response to inflation. Facing uncertainty about this fraction, the central bank devises its policy rule based on the worst case. Each worst case involves a large fraction of rule-of-thumbers for the different preference weights on the change in the interest rate. When ν is small, the central bank generally responds more aggressively to exogenous shocks, and the worst case corresponds to an economy that is more backward-looking. But a larger fraction of backward-looking agents causes the central bank to attenuate its policy responses to a larger degree.

Next we keep ν = 1 and examine the effect of varying the importance of output stabilization. As in Figure 1.5, the solid lines in Figure 1.6 replicate the benchmark result. The stars on the lines denote the worst cases. Intuitively, as the central bank puts more weight on output smoothing, the paths of the optimal response to the output gap shift upwards, but the changes in the responses to inflation are quantitatively negligible. With uncertainty concerns, the central bank once again dampens its policy response. Varying the preference for output stabilization affects the degree of attenuation, for reasons similar to those for varying the interest rate stabilization weight. As λ becomes large, the central bank is in general more responsive to the output gap while showing little difference in its response to inflation. The corresponding worst case then moves to a small a. These results strengthen our argument that uncertainty about misspecified expectations indeed leads to weakened policy responses. It is clear that the particular

Figure 1.6: Model-specific optimal rules with varying preference weight on the output gap (ν = 1). [Response coefficients φπ and φy against a, the fraction of forward-looking agents, for λ = 0.2, λ = 0.6 and λ = 1.]

numerical coefficients derived above for the optimal rules deserve little emphasis. Nonetheless, our basic argument for attenuated responses under concerns about misspecified expectations seems likely to extend to more general settings.

1.5 Conclusions

This paper proposes a nested model and employs a general method based on a minimax strategy to derive robust optimal monetary policy rules: the best rules among those that yield an acceptable performance in various realizations of the model when the true dynamics of the model are unknown. Model uncertainty is captured


as uncertainty about the underlying structure of the model. We combine a forward-looking model and a backward-looking model into one by introducing rule-of-thumb economic agents. The unknown share of the rule-of-thumb agents is the source of uncertainty. While most of the existing studies with model uncertainty concerns find that robust policy should be more responsive, we have shown that the opposite is likely to be true in the model considered. The robust optimal Taylor rule requires the interest rate to respond less aggressively in general to fluctuations in inflation or the output gap compared to the optimal rules derived from pure rational expectations models. This constitutes a modest empirical success, as the model with uncertainty moves closer towards reconciling the theory with the less responsive Taylor rule parameters estimated from US data. The paper also captures the importance of expectation formation in designing optimal monetary policy. The critical role of the formation of inflation and output expectations for understanding the effectiveness of monetary policy should receive more attention in policy analysis.


CHAPTER 2

PHILLIPS CURVE UNCERTAINTY AND MONETARY POLICY

2.1 Introduction

Originating from a negative statistical relation between the unemployment rate and wage inflation, the Phillips curve has evolved into a core element of macroeconomic theory. Since it characterizes the dynamic relation between output and inflation, one real variable and one nominal variable, it plays a central role in monetary policy analysis. Ironically, despite its importance, which specification it should take remains controversial among macroeconomists. A number of influential Phillips curves have been proposed historically; some feature permanent tradeoffs and others temporary tradeoffs. Instead of examining each of them, this paper considers two Phillips curves that are derived from microfoundations. The sticky price Phillips curve (SPPC), derived under the assumption that prices are sticky (footnote 15), says that inflation is determined by past inflation, current expectations about future inflation, and the current output gap. On the other hand, the sticky information Phillips curve (SIPC), which is derived under the assumption

Footnote 15: Dynamic indexation as in [13] is incorporated to account for inflation inertia.

that information is sticky, says that inflation is determined by the current output gap and past expectations about the current inflation and output gap. As shown in [61], in a DSGE framework, both the model with the SPPC and the model with the SIPC have plausible implications for the effects of monetary policy: disinflations result in recessions ([4]) and monetary policy shocks have a delayed and gradual effect on inflation ([13]). So the central bank may well be undecided between them. Its problem is to choose a Taylor-type rule which works reasonably well in a DSGE model given its uncertainty about the Phillips curve, or equivalently, uncertainty about the stickiness assumption in price setting. Model uncertainty is thus captured as uncertainty about the Phillips curve. To account for model uncertainty, we adapt a framework due to [39], which nests both the worst-case approach and the Bayesian approach as limiting cases. The key feature of our approach is that how the central bank chooses the policy rule depends both on its perception of each model's plausibility and on its attitude towards model uncertainty (footnote 16). The Bayesian approach, as in [41], takes into account only the plausibility of the different competing models, in effect assuming that the central bank does not care about model uncertainty. The worst-case approach, on the other hand, takes the degree of model uncertainty aversion to infinity and totally ignores the different priors over the candidate models (footnote 17). Our general approach, with its separation of uncertainty and uncertainty attitude, instead allows us to investigate (a) how the rule chosen changes when the central bank's attitude towards model uncertainty varies and (b) how the rule chosen changes when the central bank's perception of model

Footnote 16: [11] uses the same approach to analyze policy choices when the central bank is undecided between a forward-looking ([64]) and a backward-looking model ([51]).

Footnote 17: Uncertainty, or ambiguity, as we perceive it, arises when the decision maker is unable or unwilling to assign a single prior over the different competing models.

plausibility varies. We find under our framework that when the central bank becomes more uncertainty averse, the response to inflation decreases and the response to output increases. When the SIPC is deemed more likely, we also find that the response to inflation decreases and the response to output increases. These results stem from an asymmetry between households' consumption decisions and firms' pricing decisions, as well as from the difference between sticky prices and sticky information. Households are assumed to be able to adjust their consumption without any stickiness, while firms face either sticky prices or sticky information, making output more sensitive than inflation to exogenous shocks and monetary policy. Under the sticky information scheme, a given policy takes effect much more slowly, since only a fraction of firms update their information and respond to the policy each period. Therefore, if the central bank thinks the SIPC is more plausible, it is more effective to stabilize the economy by responding more to output and less to inflation. Stronger uncertainty aversion prompts the central bank to pursue a policy that is more robust across the models. Given identical output dynamics and distinct inflation dynamics, the robust policy is thus more output responsive and less inflation responsive. To gauge the economic significance of the degree of aversion to model uncertainty, we relate it to the premium that the central bank would pay to be indifferent between facing model uncertainty and achieving the average loss under the SPPC and SIPC for sure. We find that the same degree of uncertainty aversion implies different premiums under different views of the uncertainty environment. It is also shown that in our framework the premium is more important than the magnitude of the change in the policy rule.


The rest of this paper is organized as follows. Section 2.2 describes a dynamic stochastic general equilibrium model with either sticky price or sticky information price setting. Section 2.3 describes how the central bank evaluates its monetary policy. Section 2.4 sketches the model solution, and Section 2.5 contains our interpretation of the computational results. An economic interpretation of the degree of model-uncertainty aversion is offered in Section 2.6. Section 2.7 concludes.

2.2 The Model

The skeleton of the model draws heavily from [61]; it is a stylized but fully specified DSGE model with intertemporally optimizing households, firms that have either sticky information or Calvo sticky prices, and a central bank. Rather than work through the details of the derivation, we instead directly introduce the key equations. The first structural equation is the consumption Euler equation, which is obtained by log-linearizing a set of equilibrium conditions:

y_t = E_t y_{t+1} - σ(R_t - E_t π_{t+1}) + g_t        (2.1)

where y_t denotes the output gap (defined as the difference between distorted output and flexible-price, full-information output), E_t π_{t+1} is the expected inflation rate, and R_t is the nominal interest rate. With government expenditures normalized to zero in steady state, the parameter σ would represent the intertemporal elasticity of substitution. The disturbance g_t depends on present and expected future values of government expenditures and changes in potential output. We interpret it as a demand shock, since it shifts the consumption Euler equation. For convenience, assume that g_t is an AR(1) process:

g_t = ρ_g g_{t-1} + ε_{g,t}        (2.2)

where ε_{g,t} is a white noise process with variance σ_g².
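The AR(1) specification (2.2) implies a stationary variance of σ_g²/(1 - ρ_g²), which a short simulation can confirm. The sketch below is purely illustrative, using the calibration values that appear later in the chapter.

```python
import numpy as np

def simulate_ar1(rho, sigma, T, seed=0):
    """Simulate g_t = rho * g_{t-1} + eps_t with eps_t ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    g = np.zeros(T)
    for t in range(1, T):
        g[t] = rho * g[t - 1] + sigma * rng.standard_normal()
    return g

rho_g, sigma_g = 0.95, 0.01                 # calibration used later in the chapter
g = simulate_ar1(rho_g, sigma_g, T=200_000)
analytic_var = sigma_g**2 / (1 - rho_g**2)  # stationary variance of an AR(1)
sample_var = g[1000:].var()                 # drop a burn-in, then compare
```

With ρ_g = 0.95 the shock is highly persistent, which is why a demand disturbance keeps shifting the Euler equation for many quarters.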

As proposed in the recent literature, there are two plausible pricing behaviors for firms. In the first setting, prices are assumed to be sticky: in each period, with probability λ1, firms cannot adjust their prices, and these firms have to apply a rule of thumb to update them, P_t(i) = π_{t-1} P_{t-1}(i). Under this framework, we arrive at the Sticky Price Phillips Curve (SPPC)

π_t = (1/(1+β)) π_{t-1} + (β/(1+β)) E_t π_{t+1} + (κ/(1+β)) y_t        (2.3)

where β is the time discount factor of the price setters, assumed to be the same as the discount factor of the households, and κ = (1-λ1)(1-βλ1)ω/λ1, where ω measures the degree of real rigidity or the degree of strategic complementarity. If ω is small, each firm pays more attention to its relative price than to macroeconomic conditions, so a small ω is interpreted as a high degree of real rigidity or a high degree of strategic complementarity. Because of the presence of both forward-looking and backward-looking components, inflation behaves inertially in response to various shocks.

In the second setting, information is assumed to be sticky: with probability λ2, firms cannot update their information in each period, and a firm that last updated k periods ago sets the price P_t(i) = E_{t-k} P*_t(i). So here every firm can adjust its price each period, but the expectations used to set that price are updated only sporadically. Under this scenario, we can derive the so-called Sticky Information Phillips Curve (SIPC)

π_t = ((1-λ2)/λ2) ω y_t + (1-λ2) Σ_{k=1}^∞ λ2^{k-1} E_{t-k}(π_t + ω Δy_t)        (2.4)

When new information arrives, only a fraction of firms are informed and incorporate the new information in their price setting, while most firms still set prices

based on old information. As time passes, more and more firms set prices using the new information. Therefore, inflation behaves inertially in response to new information.

The nominal interest rate is taken as the instrument of monetary policy by the central bank:

R_t = ρ R_{t-1} + (1 - ρ) R*_t        (2.5)

R*_t = φπ π_t + φy y_t        (2.6)

where R*_t represents the well-known Taylor rule and ρ is called the smoothing parameter, indicating how inertial the central bank's policy is. As shown in [61], both the model with a sticky price Phillips curve and the model with a sticky information Phillips curve do equally well in delivering the conventional hump-shaped inflation response to a monetary policy shock ([13]) and contractionary announced and credible disinflations ([4]). So through some process of theorizing and data analysis, the central bank is undecided between these two models.
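The partial-adjustment logic of the smoothed rule (2.5)-(2.6) can be sketched in a few lines. The response coefficients below are illustrative placeholders, with the smoothing parameter at its calibrated value of 0.8.

```python
def policy_rate(R_prev, pi, y, rho=0.8, phi_pi=1.5, phi_y=0.5):
    """Smoothed Taylor rule: R_t = rho*R_{t-1} + (1-rho)*(phi_pi*pi + phi_y*y)."""
    R_star = phi_pi * pi + phi_y * y   # the static Taylor-rule target, as in (2.6)
    return rho * R_prev + (1 - rho) * R_star

# With rho = 0.8, only 20% of the gap to the target closes each quarter.
R = 0.0
for _ in range(3):                     # hold pi = 1, y = 0 for three quarters
    R = policy_rate(R, pi=1.0, y=0.0)
```

After three quarters the rate has closed roughly half the distance to the static target of 1.5, which is the inertial behavior the smoothing parameter is meant to capture.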

2.3 Policy evaluation

As is standard in the monetary policy literature, the central bank is assumed to minimize the weighted sum of the unconditional variances of inflation, the output gap, and the change in the interest rate:

L = Var(π_t) + λ Var(y_t) + ν Var(ΔR_t)        (2.7)

For our purpose, the problem of the central bank is to set the interest rate according to the Taylor-type rule (2.6) such that it works well in both the SP and SI models. Let q be the subjective probability it attaches to the SP model.

Given its perception of model uncertainty, the central bank is going to solve

min_{φπ, φy}  q T(L_SP) + (1 - q) T(L_SI)        (2.8)

subject to the structural and behavioral equations (2.1), (2.3), (2.4) and (2.5) above. Here L_SP denotes the loss under the SP Phillips curve and L_SI the loss under the SI Phillips curve. For the remainder of the paper, we will assume the transformation T to be the increasing function

T(L) = (1/θ) e^{θL},  θ ≠ 0        (2.9)

where the parameter θ measures the degree of model uncertainty aversion (model uncertainty love if θ < 0; footnote 18). For θ = 0 one recovers the standard problem, i.e., T(L) = L (footnote 19), so θ = 0 corresponds to model uncertainty neutrality.

There are two features of the objective (2.8) worth mentioning. First, as in the Bayesian approach, the priors over the models matter in the decision process. Second, different from the Bayesian approach, the objective is a weighted average of each model's loss transformed by T, which captures the attitude of the central bank towards model uncertainty. With a separation between the effect of a change in attitude towards model uncertainty and the effect of a change in the uncertainty environment (characterized by the central bank's subjective beliefs), we are going to address the following two questions: (i) does a higher degree of model-uncertainty aversion make monetary policy more
Footnote 18: Heath and Tversky ([34]) show that decision makers tend to be uncertainty loving in their competence hypothesis experiments.

Footnote 19: T is unique up to affine transformations, taking the limit of T(L) = (e^{θL} - 1)/θ as θ → 0.

Parameter  Value  Description
β          0.99   Subjective discount factor
σ          0.35   Pseudo elasticity of intertemporal substitution
ω          0.32   Degree of real rigidity
λ1         0.75   Degree of price stickiness
λ2         0.75   Degree of information stickiness
ρ          0.8    Interest smoothing coefficient
ρg         0.95   Autocorrelation of demand shock

Table 2.1: Parameter Values
or less aggressive? And (ii) how will the central bank's changing views about the plausibility of the models affect its policy choice? Intuitively, we can vary θ to investigate (i) and vary q to probe (ii).
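The role of the uncertainty-aversion parameter in (2.8)-(2.9) (written θ here) can be seen in a two-model toy example. The loss pairs below are made up for illustration; the point is that as θ → 0 the criterion ranks rules by the Bayesian average, while for large θ the worst-case loss dominates.

```python
from math import exp

def objective(L_sp, L_si, q, theta):
    """q*T(L_sp) + (1-q)*T(L_si) with T(L) = exp(theta*L)/theta, theta != 0."""
    T = lambda L: exp(theta * L) / theta
    return q * T(L_sp) + (1 - q) * T(L_si)

# Two hypothetical rules: A is better on average, B has the better worst case.
rule_losses = {"A": (1.0, 5.0), "B": (3.5, 4.0)}   # (L_sp, L_si) pairs

def chosen_rule(theta, q=0.5):
    return min(rule_losses, key=lambda r: objective(*rule_losses[r], q=q, theta=theta))

near_neutral = chosen_rule(theta=0.01)   # nearly Bayesian: picks A (lower average)
very_averse = chosen_rule(theta=5.0)     # nearly worst-case: picks B (lower maximum)
```

This is the sense in which the framework nests the Bayesian and minimax approaches of Chapter 1 as limiting cases of a single objective.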

2.4 Solution Method

2.4.1 Calibration

Table 2.1 summarizes the calibration of our model. The discount factor β equals 0.99, implying a steady state real annual return of 4 percent (footnote 20). In the presence of positive steady state autonomous spending (i.e., spending that is not interest-sensitive, as represented by government expenditures), σ is interpreted as a pseudo-EIS, the EIS discounted by the consumption-output ratio. Setting the steady state consumption-output ratio to the US empirical estimate of 0.7 and the EIS to 1/2 (footnote 21), we arrive at σ = 0.35. As to ω, the degree of real rigidity, there is no agreement about which value it takes. In the partial equilibrium sticky information model that [43] proposed, their successes are very vulnerable to this real rigidity parameter (ω = 0.1 in that paper). In the general equilibrium

Footnote 20: The parameters were chosen so that one period is a quarter.

Footnote 21: This is a compromise value given the micro and macro evidence.

model with sticky information that [36] developed, he argues that the optimal parameterization of ω should be greater than 1 (footnote 22) and concludes that the sticky information assumption fails to generate plausible implications for monetary policy. However, the assumption of a specific labor market would generate a high degree of real rigidity, as shown in [61], in which both the sticky price DSGE and the sticky information DSGE display sound dynamic implications for various macro variables. So we follow [61] and set ω to 0.32. The degree of price stickiness λ1 is set to 0.75; thus, firms set optimal prices on average once a year in the sticky price model. The degree of information stickiness λ2 is also set to 0.75, so firms obtain new information on average once a year in the sticky information model (footnote 23). ρ, the smoothing coefficient for the interest rate rule, is assigned a value of 0.8, close to the estimate in [16]. The demand shock is assumed to have an autocorrelation of 0.95.

2.4.2 Model Solution

Here we briefly sketch the model solution. Both the sticky price DSGE and the sticky information DSGE, combined with the monetary policy rule (2.5), can be cast into the general form (footnote 24)

A E_t X_{t+1} = B X_t + C e_t        (2.10)

e_t = ρ e_{t-1} + G ε_t        (2.11)

Footnote 22: [12] also find that ω is above 1 in a sticky price DSGE.

Footnote 23: [10] provides a similar estimate by examining the spreading speed of macroeconomic expectations.

Footnote 24: See Appendix D for details.

We can then employ standard methods to solve our models and obtain recursive laws of motion. Appendix D further shows how to express the loss functions in terms of the model solutions derived. Determining the weights in the loss function is not without controversy. Under the sticky price assumption, [64] shows λ ≈ 0.01 with Calvo-style pricing, while [22] find λ ≈ 1 with Taylor-style pricing. Under the sticky information assumption, [5] argue that the loss function can be approximated using only the output gap variation term. Although the theoretical motivation of interest rate smoothing in the loss is a difficult task, empirical studies reveal that ν must be larger than λ to match US data (e.g., [55]). Based on these considerations (footnote 25), we will set λ = 0.1 and ν = 0.5 in the benchmark analysis and vary them to check robustness as well. To economize on computation, we set σ_g = 1% (footnote 26). Simulated annealing is employed to search for optimal solutions over the parameter space 1.1 ≤ φπ ≤ 10 and 0.1 ≤ φy ≤ 10 (footnote 27).
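Once a model's recursive law of motion X_t = F X_{t-1} + G ε_t is in hand, the unconditional variances entering the loss (2.7) solve the discrete Lyapunov equation Σ = F Σ F' + G G'. The state-space matrices below are arbitrary stable stand-ins, not the solved DSGE; the sketch only illustrates the variance computation.

```python
import numpy as np

def unconditional_cov(F, G, n_iter=1000):
    """Solve Sigma = F Sigma F' + G G' by fixed-point iteration
    (valid when F is stable, i.e. its spectral radius is below 1)."""
    Q = G @ G.T
    Sigma = Q.copy()
    for _ in range(n_iter):
        Sigma = F @ Sigma @ F.T + Q
    return Sigma

# Hypothetical stable law of motion for X_t = (pi_t, y_t, dR_t)'.
F = np.array([[0.8, 0.1,  0.0],
              [0.1, 0.7, -0.1],
              [0.2, 0.1,  0.3]])
G = np.array([[0.5], [1.0], [0.2]])

Sigma = unconditional_cov(F, G)
lam, nu = 0.1, 0.5                 # benchmark weights lambda and nu
loss = Sigma[0, 0] + lam * Sigma[1, 1] + nu * Sigma[2, 2]
```

Evaluating the loss this way for each candidate (φπ, φy) is what the simulated annealing search would repeat at every trial point.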

2.5 Results

To start, let us take each model in isolation. The optimal rule in the sticky price model has = 10.00 and y = 3.98. In the sticky information model, however, the optimal responses involve mildest response to ination and a stronger response to output ( = 1.10, y = 6.27). Given a demand shock, the output gap increases by equation (2.1). Since the shock is assumed to be highly persistent, increased expected
Even when we have confident calibration values, we are not relieved here, since our central bank faces model uncertainty, while calibration exercises are usually conducted under the assumption that the calibrated model is the right one. We also divide the T transformation (2.9) by a constant when θ becomes very large; this manipulation speeds up the computation without affecting the optimal solutions for φ_π and φ_y. In the case of multiple equilibria for the SP model or the SI model, we follow [7] in selecting the smallest eigenvalues.


output further pushes up current output. Inflation will increase as well under both the sticky price and sticky information settings. However, the demand shock is built into inflation very slowly under the sticky information scheme. Each period, only a fraction of firms can update their information and adjust their prices. Furthermore, a high degree of strategic complementarity will urge those informed firms to respond less to the shock. Under the sticky price setting, though only a fraction of firms can adjust their prices upon experiencing the shock, these firms will front-load their prices, i.e., set prices equal to the weighted average of their optimal prices until the next expected adjustment opportunity. The remaining firms also incorporate the rising inflation fairly quickly by using rule-of-thumb price updating. Therefore, to stabilize inflation and output variations effectively, the sticky price model calls for a much stronger response to inflation, and a weaker response to output, than the sticky information model. In view of the different inflation dynamics involved, uncertainty about the Phillips curves is likely to have important effects on the choice of the optimal rule. Figure 2.1 shows how the Taylor-type rule coefficients vary with the degree of model uncertainty aversion θ as we vary the weight that the central bank attaches to the sticky price model among q = 0.2, q = 0.5 and q = 0.8. As the central bank becomes more model-uncertainty averse, it will respond less to inflation and more to output, though the magnitudes of the changes in φ_y are negligible. This is because uncertainty concern prompts the central bank to respond more to common factors in both models, and in the economy we consider here output shares the same dynamics. As the central bank gives less weight to the sticky price model, the path of the response to inflation shifts downwards and that of output upwards. This is because as the central bank

[Figure 2.1 here: two panels plot the rule coefficients φ_π (top) and φ_y (bottom) against θ, the degree of model uncertainty aversion, for q = 0.2, q = 0.5 and q = 0.8.]

Figure 2.1: How Taylor-type rule coefficients vary with the degree of model uncertainty aversion with only a demand shock, λ = 0.1 and ν = 0.5

believes less in the sticky price model, it becomes more important to choose rules which perform well in the sticky information model.

In monetary policy analysis, a markup shock is usually attached to the Phillips curve (e.g., [64]; [14]); such a shock can be justified in various ways, e.g., variable taxation or shifts in the degree of collusion in an industry. These shocks are different from the demand shock introduced above; their key characteristic is that they change the equilibrium level of output under flexible and full-information prices but do not change the efficient level of output. With this supply shock, the model can generate variation in inflation that arises independently of movements in excess demand, as is seen in the data (e.g., [25]). Here, let us add u_t to (2.3) and (2.4) and similarly assume it follows an AR(1) process

u_t = ρ_u u_{t−1} + ε_{u,t}   (2.12)

where ε_{u,t} is a white noise with standard deviation σ_u; in the calculations ρ_u takes the same value as ρ_g and σ_u the same value as σ_g.

In this case, the rule chosen not only has to neutralize demand shocks as explained above; it also has to strike the optimal balance between output and inflation, since supply shocks bring about a tension between them. As before, our general conclusion still holds: as the central bank becomes more uncertainty averse, it tends to decrease the response to inflation and increase the response to output; if the central bank gives less weight to the sticky price model, the path of the response to inflation shifts downwards and that of output upwards. Comparing Figure 2.2 to Figure 2.1, however, the magnitudes of the responses to inflation are much larger and those to output much smaller. The optimal rules now reflect a tradeoff between the goals of stabilizing inflation and stabilizing output. A higher supply (markup) shock means


[Figure 2.2 here: two panels plot the rule coefficients φ_π (top) and φ_y (bottom) against θ, the degree of model uncertainty aversion, for q = 0.2, q = 0.5 and q = 0.8.]

Figure 2.2: How Taylor-type rule coefficients vary with the degree of model uncertainty aversion with both demand and supply shocks, λ = 0.1 and ν = 0.5


[Figure 2.3 here: two panels plot the rule coefficients φ_π (top) and φ_y (bottom) against q, the plausibility of the SP model, for θ = 0, θ = 0.5 and θ = 1.]

Figure 2.3: How Taylor-type rule coefficients vary with the plausibility of the SP model, λ = 0.1 and ν = 0.5


a higher inflation (since firms desire higher prices) for a given output level, so low output is needed to support low inflation. Given the higher priority to stabilizing inflation than to stabilizing output (λ = 0.1) and this tradeoff, optimal policy calls for a stronger response to inflation and a weaker response to output. We also find that, as the central bank becomes more uncertainty averse, the effect of plausibility perceptions shrinks. Figure 2.3 illustrates this more clearly, indicating how the Taylor-type rule coefficients vary with the plausibility of the sticky price model. If the central bank is model uncertainty neutral, different perceptions of model plausibility have a large impact on the choice of policy rule. The rules derived are then the Bayesian optimal rules extensively studied by [41]. In this case, the levels of loss associated with the different models matter in the decision-making: a rule can be allowed to perform poorly in a particular model as long as that model is not very plausible. As the central bank deems the sticky price Phillips curve more plausible, it becomes more important to choose rules which perform well in the sticky price model. Therefore, as q increases, φ_π increases and φ_y decreases, since the sticky price model demands the strongest response to inflation and a very mild response to output (see Table 2.2). However, the relative responsiveness of the losses to the chosen rules matters when the central bank displays constant model uncertainty aversion. Thus, when we make the central bank more uncertainty averse, it gives more implicit weight to the sticky information model, because the loss in the sticky price model is relatively insensitive to changes in the rules chosen. The uncertainty environment then has less influence on the choice of policy rules, since a central bank with strong uncertainty aversion will still seek rules which perform decently in the sticky information model even though it thinks that the sticky information model is not very plausible.

(λ = 0.1, ν = 0.5)  Optimal rule in SP: φ_π = 10.00, φ_y = 0.44.  Optimal rule in SI: φ_π = 2.57, φ_y = 5.31.  Minimax rule: φ_π = 2.83, φ_y = 4.96.

  θ       q = 0.2         q = 0.5         q = 0.8
          φ_π    φ_y      φ_π    φ_y      φ_π    φ_y
  0       3.77   3.67     4.43   3.03     5.55   2.69
  0.5     3.12   4.51     3.30   4.27     3.51   3.99
  1       3.00   4.69     3.09   4.56     3.19   4.41
  5       2.86   4.89     2.88   4.87     2.90   4.84
  10      2.84   4.92     2.85   4.91     2.86   4.89
  20      2.83   4.94     2.84   4.93     2.84   4.92

(λ = 0.1, ν = 0.1)  Optimal rule in SP: φ_π = 10.00, φ_y = 0.45.  Optimal rule in SI: φ_π = 4.03, φ_y = 10.00.  Minimax rule: φ_π = 4.50, φ_y = 8.62.

  θ       q = 0.2         q = 0.5         q = 0.8
          φ_π    φ_y      φ_π    φ_y      φ_π    φ_y
  0       5.81   6.00     6.78   4.93     8.13   4.05
  0.5     4.95   7.53     5.22   6.96     5.56   6.37
  1       4.77   7.95     4.91   7.61     5.08   7.25
  5       4.57   8.47     4.60   8.39     4.63   8.31
  10      4.54   8.55     4.55   8.51     4.57   8.47
  20      4.52   8.59     4.53   8.57     4.54   8.55

Table 2.2: Various Rules (θ denotes the degree of model uncertainty aversion)
Table 2.2 displays the optimal rules for selected (θ, q) and for two different combinations of λ and ν, the weights on output stabilization and interest change stabilization respectively. First, both the left panel and the right panel deliver the message emphasized above: both a change in the ambiguity perception and a change in the ambiguity attitude affect the choice of policy rule. As the central bank becomes more uncertainty averse while holding the same perception of the economic environment, it attenuates its response to inflation and amplifies its response to output. When the central bank thinks the economy is more likely to feature sticky information while its attitude towards model uncertainty is fixed, it likewise calls for gentler responses to inflation and stronger responses to output. In this sense, our findings are robust to the parametrization of the weights in the objective function. Next, when the central bank cares more about interest change variation than about output variation relative to inflation variation, the rules chosen involve smaller adjustments with respect to changes in inflation and output. Compared to those in the right panel, the rules in the left panel are less aggressive, since it is more costly for the central bank to vary the interest rate much when ν is larger. From Table 2.2 we can also see that, as the central bank becomes more and more model uncertainty averse, the prescribed policy gets closer to the minimax rule proposed by [30]. In fact, when θ goes to infinity, we obtain exactly the minimax rule. Since the worst case in our economy corresponds to the SIPC, the less weight is attached to the sticky price Phillips curve, the faster the rules converge to the minimax rule. Now suppose we observe a change in the central bank's behavior, i.e., the rules differ at two points in time; what can we infer from the change? From the above analysis, we know that a change in either the uncertainty environment or the uncertainty

attitude can result in such a transition in the chosen rule. Can we single out each of these effects, especially the model uncertainty aversion effect? In the following two sections, we attempt to link the Fed's behavior to evolving economic conditions and shed some light in this direction.
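The limiting behavior discussed above (Bayesian averaging under uncertainty neutrality, minimax as the aversion grows) can be illustrated numerically. The exponential aggregator T(L) = exp(θL) below is an assumed functional form for a [39]-style certainty equivalent, and the two model losses are made-up numbers, not model output.

```python
import math

def ambiguity_adjusted_loss(losses, probs, theta):
    """Certainty-equivalent loss under an exponential aggregator T(L) = exp(theta*L).

    theta = 0 recovers the Bayesian expected loss; as theta grows, the
    certainty equivalent approaches the worst-case (minimax) loss.
    Computed with a log-sum-exp shift for numerical stability.
    """
    if theta == 0.0:
        return sum(p * L for p, L in zip(probs, losses))
    m = max(losses)
    s = sum(p * math.exp(theta * (L - m)) for p, L in zip(probs, losses))
    return m + math.log(s) / theta

losses, probs = [2.0, 5.0], [0.8, 0.2]                  # hypothetical SP and SI losses
bayes = ambiguity_adjusted_loss(losses, probs, 0.0)     # plain expected loss
averse = ambiguity_adjusted_loss(losses, probs, 50.0)   # close to the worst loss, 5.0
```

Even with only 0.2 weight on the bad model, a strongly averse evaluator ranks rules almost entirely by their worst-case loss, which is the convergence pattern seen in Table 2.2.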

2.6 An Economic Interpretation of Degree of Uncertainty Aversion

In analogy to risk theory, we develop an uncertainty premium, defined by

E_q T(L) = T((1 + h) E_q(L))   (2.13)

where h is the proportional premium that would make the central bank indifferent between facing model uncertainty and achieving the average of the models' losses for sure. Then, in the neighborhood of the average loss, we obtain

h = (θ/2) · Var_q(L) / E_q(L)   (2.14)

where θ is the degree of model uncertainty aversion.
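The premium calculation can be checked in a few lines. The losses 10.19 and 1.07 are the Volcker and Greenspan values reported in Table 2.3, and θ = 0.17 reproduces the "at least 0.17 for a 40% premium" figure discussed below; the grid search is an illustrative implementation choice.

```python
def premium_ratio(loss_a, loss_b, q):
    """Var_q(L)/E_q(L) for a two-model lottery over losses, with P(model a) = q."""
    mean = q * loss_a + (1 - q) * loss_b
    var = q * loss_a ** 2 + (1 - q) * loss_b ** 2 - mean ** 2
    return var / mean

# Historical losses under the two tenures (Table 2.3)
volcker, greenspan = 10.19, 1.07

# find the probability of the Volcker regime that maximizes the ratio
grid = [i / 1000 for i in range(1, 1000)]
qstar = max(grid, key=lambda q: premium_ratio(volcker, greenspan, q))
max_ratio = premium_ratio(volcker, greenspan, qstar)

# premium implied by (2.14) at aversion theta = 0.17
h = 0.5 * 0.17 * max_ratio
```

The maximizing q is about 0.24 with a ratio of about 4.66, so θ = 0.17 indeed delivers a premium of roughly 40% of the average loss.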

To gauge the economic significance of this uncertainty premium, we consider the value of the loss in US historical data (source: http://research.stlouisfed.org/fred2/). Table 2.3 reports the historical loss under the tenures of Paul Volcker (1979Q3-1987Q2) and Alan Greenspan (1987Q3-2004Q2). If each tenure is treated as one model, the maximum ratio Var_q(L)/E_q(L) is 4.66, attained when the probability of the Volcker regime is 0.24. We also calculate this ratio under the empirically observed Taylor rule over 1979Q3 to 2004Q2, R_t = 2.21π_t + 0.95y_t (obtained by an OLS regression pre-specifying ρ = 0.8), and under the optimal Bayesian rules (θ = 0) given λ, ν and each q in our model.


Loss under Volcker: 10.19.  Loss under Greenspan: 1.07.

                        Empirical Taylor Rule           Optimal Bayesian Rules          Two regimes
                        (φ_π = 2.21, φ_y = 0.95)        (i.e. θ = 0 in Table 2.2)
                        q = 0.2   q = 0.5   q = 0.8     q = 0.2   q = 0.5   q = 0.8     q* = 0.24
  Var_q(L)/E_q(L)       2.81      6.47      7.89        1.04      3.97      7.55        4.66

Note: Losses under the Volcker and Greenspan tenures are approximated by the sample value of Var(π_t) + λ Var(y_t) + ν Var(R_t). q* maximizes Var_q(L)/E_q(L) given the losses under Volcker and Greenspan.

Table 2.3: Uncertainty Premium Interpretation (λ = 0.1 and ν = 0.5)


Now suppose that the central bank is willing to pay a premium of h = 40% of the average loss. What degree of model uncertainty aversion corresponds to this premium? For the two historical regimes, the degree of model uncertainty aversion which yields a premium of 40% is at least 0.17. In our model economy, when the central bank gives equal weight to the SPPC and the SIPC (q = 0.5), the ratio Var_q(L)/E_q(L) under the empirical Taylor rule is 6.47. This implies that the degree of model uncertainty aversion which yields a premium of 40% is 0.12. With θ = 0.12, the optimal rule has φ_π = 3.89 and φ_y = 3.53, while the optimal rule when θ = 0 has φ_π = 4.43 and φ_y = 3.03. So with an uncertainty aversion coefficient of 0.12 we have a significant decline in the response to inflation and an increase in the response to output, together with a 40% premium. Alternatively, we can quote the premium that the central bank would pay for a given degree of model uncertainty aversion. For example, if the central bank believes much more in the SIPC (q = 0.2), θ = 0.5 means a premium of 70.3% and the optimal rule is φ_π = 3.12 and φ_y = 4.51. Compared to the optimal rule when θ = 0, the inflation response decreases by 17.2% and the output response increases by 22.9%. In contrast, if the central bank assigns equal weight to the SPPC and the SIPC, θ = 0.5 corresponds to a premium of 197.3%, and the optimal responses to inflation and output relative to those when θ = 0 fall by 25.5% and rise by 40.9% respectively. Clearly, the premium matters more than the magnitude of the change in the policy rule.

2.7 Conclusion

This paper considers a central bank that finds two Phillips curves plausible. One is derived under the assumption that firms face sticky prices, and the other from the sticky information assumption. The central bank accounts for its model uncertainty in a framework proposed by [39], and its task is to design a Taylor-type rule which works reasonably well in both models given its perception of the uncertainty environment and its degree of aversion to model uncertainty. We investigate how the rule changes when the central bank's belief about model plausibility varies, and how it changes when the central bank's attitude toward uncertainty varies. Our analysis indicates that both being more uncertainty averse and perceiving the sticky information model as more plausible lead to a weaker response to inflation and a stronger response to output. Under our framework, output is more policy-responsive than inflation due to an asymmetry between households and firms. Furthermore, policy is much slower to penetrate the economy under the sticky information assumption. Therefore, robustness concerns make the central bank respond more aggressively to output and less to inflation.


CHAPTER 3

OPTIMAL SIMPLE RULES IN RE MODELS WITH RISK SENSITIVE PREFERENCE

3.1 Introduction

In dynamic economic applications, it is very useful to incorporate forward-looking rational expectations elements. This paper extends the version of risk-sensitive control in [31], developed for backward-looking systems, and solves for optimal simple rules. The solution of optimal policies in a linear-quadratic framework for a class of RE macromodels is extensively discussed in [54] and [32]; the jump variables in that class of models do not respond to contemporaneous exogenous shocks. The economies we consider are much broader, and the unconstrained optimal commitment rules are generally not easy to derive. However, it is straightforward to pursue some kind of simple rule. The paper illustrates the extension with an application to optimal monetary policy in a DSGE model. The central bank is assumed to have a risk-sensitive preference and to commit to a nominal income growth rule. Compared to the standard intertemporally time-consistent preference, the risk-sensitive preference makes the central bank undertake a precautionary policy stance. It will respond more aggressively,

accepting a higher current-period loss in exchange for tighter control of future uncertainty.

3.2 Problem Formulation

Many dynamic linear rational expectations models evolve according to

A E_t x_{t+1} = B x_t + C u_t + D e_t   (3.1)

where x_t is an n×1 vector of endogenous variables, u_t is a k×1 vector of policy instruments, and e_t is an m×1 Gaussian noise with mean zero and covariance matrix Σ. Some of the endogenous variables are predetermined (backward-looking), and we assume that they are ordered first in the vector x_t. For notational convenience, partition the vector x_t accordingly into (x_{1t}', x_{2t}')', where x_{1t} is n_1×1 and x_{2t} is n_2×1 with n_1 + n_2 = n and x_{10} given. The policy maker chooses the policy instruments u_t to minimize the fixed point L_0 of

L_t = x_t' Q x_t + u_t' R u_t + β (2/σ) log E[exp((σ/2) L_{t+1}) | t]   (3.2)

where β ∈ (0, 1) is a discount factor and σ < 0 is the risk-sensitivity parameter. L_{t+1} indicates the continuation value of the loss. In the limit σ = 0 we obtain the standard specification, because in that case L_t = x_t' Q x_t + u_t' R u_t + β E[L_{t+1} | t]. So σ < 0 reflects an additional aversion to continuation-loss risk beyond that represented by the expectation operator (by the convexity of the exponential function, E[exp((σ/2)L) | t] ≥ exp((σ/2) E[L | t])).

The current formulation of discounted, risk-adjusted losses is similar to [31], and I generalize the stable dynamic system they considered to incorporate rational expectations. This additional risk adjustment can be viewed as a special case of Epstein


and Zin's ([21]) recursive preference specification. More general policy rules are considered as well. In standard control theory, we anchor the policy to the predetermined variables, quantities known in the current period. Recently, policy rules that target expected variables have also drawn a lot of attention, as in [15].

3.3 Optimal Simple Rules

The policy maker is assumed to be able to commit to a simple decision rule of the form u_t = −F_1 x_t − F_2 E_t x_{t+1}; there may be restrictions on the elements of F_1 and F_2. Augmenting equation (3.1) with the policy rule gives

[A, 0_{n×k}; F_2, 0_{k×k}] E_t (x_{t+1}', u_{t+1}')' = [B, C; −F_1, −I_{k×k}] (x_t', u_t')' + [D, 0_{n×k}; 0_{k×m}, 0_{k×k}] (e_t', 0_{1×k})'   (3.3)

which in terms of y_t = (x_t', u_t')' can be written as

A* E_t y_{t+1} = B* y_t + D* v_t   (3.4)

where v_t = (e_t', 0_{1×k})'. This can be solved using the generalized Schur decomposition as in [38]. Given the square matrices A* and B*, the decomposition gives unitary square complex matrices Q and Z (this is also called the QZ decomposition, and this Q is different from the one in (3.2)) such that

A* = Q^H S Z^H   and   B* = Q^H T Z^H   (3.5)

where Q^H and Z^H are the transposes of the complex conjugates of Q and Z, and S and T are upper triangular. Reorder the decomposition so that the stable generalized eigenvalues come first; generalized eigenvalues are defined as t_{ii}/s_{ii}, i = 1, ..., n, where t_{ii} and s_{ii} are the diagonal elements of T and S, and those with modulus less than one are called stable. Count the number of stable generalized eigenvalues, n_s. When n_s = n_1, there is a unique solution to (3.3). If there are more stable eigenvalues
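The stacking in (3.3) can be sketched in code. This is an illustrative construction for the special case of one policy instrument (k = 1), using plain nested lists; the toy matrices at the bottom are made-up numbers.

```python
def augment_system(A, B, C, D, F1, F2):
    """Stack A E_t x_{t+1} = B x_t + C u_t + D e_t with the simple rule
    u_t = -F1 x_t - F2 E_t x_{t+1} into Astar E_t y_{t+1} = Bstar y_t + Dstar v_t,
    where y_t = (x_t, u_t) and v_t = (e_t, 0).  One instrument: k = 1."""
    n, m = len(A), len(D[0])
    # first block row: the structural equations, with a zero column for u_{t+1}
    astar = [A[i] + [0.0] for i in range(n)]
    bstar = [B[i] + [C[i][0]] for i in range(n)]
    dstar = [D[i] + [0.0] for i in range(n)]
    # last row: the policy rule rewritten as F2 E_t x_{t+1} = -F1 x_t - u_t
    astar.append(list(F2) + [0.0])
    bstar.append([-f for f in F1] + [-1.0])
    dstar.append([0.0] * (m + 1))
    return astar, bstar, dstar

# toy 2-variable system, one shock, and a rule u_t = -(1.5 x_{1t} + 0.5 x_{2t})
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.9, 0.1], [0.0, 0.8]]
C = [[0.2], [0.1]]
D = [[1.0], [0.0]]
Astar, Bstar, Dstar = augment_system(A, B, C, D, F1=[1.5, 0.5], F2=[0.0, 0.0])
```

Note that Astar is singular by construction (its last row is F_2 padded with a zero), which is exactly why the generalized Schur decomposition, rather than a plain eigendecomposition of Astar^{-1} Bstar, is needed.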


than there are predetermined variables (n_s > n_1), we can follow Blanchard and Kahn and select the smallest eigenvalues needed. Following [38], the solution to (3.3) can be written as

y_{2t} = M_1 y_{1t} + M_2 v_t   (3.6)

y_{1,t+1} = N_1 y_{1t} + N_2 v_t   (3.7)

where y_{1t} = x_{1t}, y_{2t} = (x_{2t}', u_t')', and

M_1 = Z_{21} Z_{11}^{-1}   (3.8)

M_2 = (Z_{22} − Z_{21} Z_{11}^{-1} Z_{12}) T_{22}^{-1} Q_2 D*   (3.9)

N_1 = Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1}   (3.10)

N_2 = Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} Z_{12} T_{22}^{-1} Q_2 D* + Z_{11} S_{11}^{-1} [T_{12} T_{22}^{-1} Q_2 D* + Q_1 D*]   (3.11)

and S_{ij}, T_{ij}, Z_{ij} and Q_i (i, j = 1, 2) are conformable partitioned matrices of S, T, Z and Q. (Equivalently, the solution can be expressed as y_{2t} = M_1 y_{1t} + M̃_2 e_t and y_{1,t+1} = N_1 y_{1t} + Ñ_2 e_t, where M̃_2 and Ñ_2 are the first m columns of M_2 and N_2.) As in [31], it is desirable to represent losses recursively in terms of the true state variables at each period. Suppose the problem has been solved for time t+1 and future periods, and the loss at time t+1 can be written as a quadratic form in the true state variables at t+1, L_{t+1} = x_{1,t+1}' V_{t+1} x_{1,t+1} + d_{t+1}. Then, using the lemma in [35] (the formula works only when I − σ N_2' V_{t+1} N_2 > 0),

R(L_{t+1} | t) ≡ (2/σ) log E[exp((σ/2)(x_{1,t+1}' V_{t+1} x_{1,t+1} + d_{t+1})) | t]
             = d_{t+1} + (1/σ) log det(I − σ N_2' V_{t+1} N_2) + x_{1t}' N_1' Ṽ_{t+1} N_1 x_{1t}   (3.12)


where Ṽ_{t+1} = V_{t+1} + σ V_{t+1} N_2 (I − σ N_2' V_{t+1} N_2)^{-1} N_2' V_{t+1}. Given

y_t = (y_{1t}', y_{2t}')' = [I_{n_1}; M_1] y_{1t} + [0_{n_1×m}; M̃_2] e_t ≡ M̄_1 y_{1t} + M̄_2 e_t   and   W = [Q, 0_{n×k}; 0_{k×n}, R]

we can further compute the current loss in terms of the true state variables x_{1t}:

L_t = E[y_t' W y_t | t] + R(L_{t+1} | t) = x_{1t}' V_t x_{1t} + d_t   (3.13)

where

V_t = M̄_1' W M̄_1 + β N_1' Ṽ_{t+1} N_1   (3.14)

and

d_t = trace(M̄_2' W M̄_2) + β d_{t+1} + (β/σ) log det(I − σ N_2' V_{t+1} N_2)   (3.15)

Note that, since the jump variables depend on the Gaussian noise, we take expectations conditional on information at period t for the first term in the current loss. Thus, we map a translated quadratic loss measure for next period's losses into a translated quadratic loss measure today, both expressed in terms of the true state variables at the respective periods. Accordingly, the initial period loss is x_{10}' V x_{10} + d, where V is the fixed point of (3.14) and

d = (1 − β)^{-1} [trace(M̄_2' W M̄_2) + (β/σ) log det(I − σ N_2' V N_2)]   (3.16)

By choosing the elements of the committed rule F_1, F_2 to minimize the initial period loss, we can find the optimal simple rule. Numerical non-linear optimization algorithms such as Nelder-Mead or simulated annealing can be employed. The resulting rules are sometimes termed handcrafted feedback rules; in a broad sense, handcrafted feedback rules mean all feedback rules that are not computed from formal optimizing procedures such as dynamic programming and Riccati equations.

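The value recursion (3.14) can be illustrated in the scalar case, where the risk adjustment collapses to Ṽ(V) = V/(1 − σ n₂² V) (a one-dimensional simplification of the matrix formula). All parameter values below are made-up numbers for illustration, not the model's calibration.

```python
def value_fixed_point(w, m1, n1, n2, beta, sigma, tol=1e-12, max_iter=10000):
    """Scalar analogue of (3.14): V = m1^2*w + beta*n1^2*Vtilde(V), where the
    risk adjustment Vtilde(V) = V + sigma*V*n2*(1 - sigma*n2*V*n2)^(-1)*n2*V
    simplifies to V / (1 - sigma*n2**2*V) in one dimension."""
    V = 0.0
    for _ in range(max_iter):
        Vtilde = V / (1.0 - sigma * n2 ** 2 * V)
        V_new = m1 ** 2 * w + beta * n1 ** 2 * Vtilde
        if abs(V_new - V) < tol:
            return V_new
        V = V_new
    raise RuntimeError("value recursion did not converge")

# illustrative numbers: a risk-sensitive case versus the standard case sigma = 0
V_risk = value_fixed_point(w=1.0, m1=1.0, n1=0.9, n2=0.5, beta=0.95, sigma=-0.2)
V_std = value_fixed_point(w=1.0, m1=1.0, n1=0.9, n2=0.5, beta=0.95, sigma=0.0)
```

Starting from V = 0 and iterating the map backward mimics solving the recursion for the fixed point V used in the initial-period loss; the iteration is a contraction here and converges in a few dozen steps.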

3.4 An Application

To illustrate the properties of this rational-expectations-augmented risk-sensitive preference and the proposed solution approach, I apply it to a well-discussed monetary model: a modified version of the new Keynesian model with lagged dynamics. In the presence of habit formation, the consumption Euler equation takes the form

y_t = (1 − α) y_{t−1} + α E_t y_{t+1} − φ (R_t − E_t π_{t+1}) + e_{yt}   (3.17)

where y_t is the current output gap, π_t is the inflation rate and R_t is the nominal interest rate. The coefficients α and φ are closely related to the index of the importance of habit formation in the utility function. Habit formation has recently been studied by [24] and [45] in models for the analysis of monetary policy. An alternative specification for lagged output dynamics is the introduction of rule-of-thumb consumers, as in [9] and [1]. Analogous to output, inflation also features lagged dynamics; dynamic indexation as in [13] and rule-of-thumb price setting as in [26] are two alternative ways to obtain lagged inflation in the formation of current inflation:

π_t = κ (y_t + y_{t−1}) + (1 − δ) π_{t−1} + δ E_t π_{t+1} + e_{πt}   (3.18)

Both the output shock e_{yt} and the inflation shock e_{πt} are assumed iid normal with mean 0 and variances σ_y² and σ_π² respectively.

The central bank sets short interest rates to minimize the loss

L_t = E_t{π_t² + λ_y y_t² + λ_R R_t²} + β (2/σ) log E[exp((σ/2) L_{t+1}) | t]   (3.19)

As emphasized earlier, the central banker adjusts continuation losses to reflect an additional sensitivity to risk. It is then straightforward to rewrite the model in the form of (3.1) and (3.2), with x_{1t} = (y_{t−1}, π_{t−1})', x_{2t} = (y_t, π_t)' and u_t = R_t. We solve for an optimal nominal GDP growth rule

R_t = θ [y_t − y_{t−1} + π_t]   (3.20)

that is to say, the commitment rule is subject to the restriction F_1 = [θ, 0, −θ, −θ] and F_2 = [0, 0, 0, 0]. As shown in [45], nominal income growth rules describe US monetary policy practice since 1979 as well as, if not better than, the influential Taylor-type rules. In addition, unlike Taylor-type rules, nominal income growth rules avoid the need to measure capacity or potential output. We now look at the effects of the risk-sensitive preference on the choice of policy rule. The parameters in the model are set as follows: β = 0.98, α = 0.75, φ = 0.2, κ = 0.3, δ = 0.5, λ_y = 0.5, λ_R = 0.2 and σ_π = σ_y = 1. Initially the economy is in the steady state. The risk-sensitive preference leads to a stronger reaction: the policy coefficient is θ = 2.65 for the standard preference (i.e. σ = 0), and 2.82 for the risk-sensitive preference with σ = −0.01. As |σ| increases, the policy becomes more responsive (the condition for the Jacobson lemma underlying (3.12) fails once |σ| gets too large for the given variances of the shocks). So the risk-sensitive central bank engages in a precautionary policy. Risk-sensitive preferences imply that the central bank is not indifferent to when uncertainty is resolved: it is willing to take a policy action today that tries to remove the possibility of a future bad outcome. By responding more aggressively to nominal income growth, the current change in the interest rate is larger, inducing a higher current loss. On the other hand, future paths of the economy are under better control. Just as a precautionary consumer would sacrifice part of current consumption and save more to guard against future uncertainty, the risk-sensitive central bank chooses to


vary the interest rate more, in the hope that the future economy is in good hands. In this sense, the risk-sensitive preference and robust control deliver similar messages. [33] demonstrate the equivalence between these two approaches for a backward-looking economic system. Although a formal proof of this equivalence or non-equivalence in forward-looking models is lacking, as hinted by [63], the risk-sensitive preference approach makes more sense than the model uncertainty aversion argument in practice, especially with rational expectations in the model. In the robust control argument, the central banker has a standard loss preference and is uncertain about the structure of the economy. Robust policies are designed from a distorted model. This obviously eliminates the separation between the forecasters and the decision makers: different worst-case scenarios result in different forecasts about forward-looking variables, so the staff needs to incorporate the policymaker's uncertainty aversion into the forecasting exercise. With a risk-sensitive preference, in contrast, the policy decision and the forecasting exercise are based on one structural economy that the decision maker is confident about. The staff can provide fairly accurate and timely forecasts and the choice among alternative paths for the decision makers; the risk-sensitive preference then determines which instrument path is eventually chosen. In the current setting, the relative variances of the shocks hitting the economy also matter in the design of monetary policy. First, they are useful for private agents in forming rational expectations and thus affect the path of the economy. More importantly, the relative variances are a good measure of the additional risk that the risk-sensitive central bank cares about beyond that represented by expectation operators, which basically reflect first-moment risk. For example, if σ_π = 1 and σ_y = 1.5, θ = 2.53 for the standard preference and θ = 2.84 when σ = −0.01. The same increase in the degree of risk sensitivity now prompts a larger amplification of the response coefficient, since the different variabilities of the shocks imply more uncertainty for the central bank.


CHAPTER 4

CONCLUDING REMARKS

In my view, phenomena attributable to model uncertainty abound in many economic situations, including empirical conundrums in both micro- and macroeconomics. I am excited about a research program in which I can pursue both applications and methodological innovations in modeling model uncertainty. Chapter 1 uses a worst-case scenario to examine the effect of model uncertainty aversion on the choice of monetary policy rule. The policymaker evaluates policies by their worst-case performance across the various models under consideration. The resulting rule is designed to avoid an especially poor performance of monetary policy in the event of an unfortunate model specification. We have applied this methodology to the standard New Keynesian macroeconomic model and found that the minimax rule partially rationalizes the actual conservative policy stance of the Fed observed in the past two decades. This worst-case scenario analysis is applicable to other economic issues as well. For example, in [18], we study the savings problem of a person who is uncertain about his own model specifications in forecasting his future income path. This concern can help explain the excess smoothness puzzle, i.e., that people insufficiently increase consumption


in response to positive, persistent income growth shocks. With misspecification concerns, a person dampens his consumption increase in response to this shock in order to protect against the worst-case possibility that future income increases fail to materialize. For a plausible degree of misspecification concern, the addition of robustness explains 35 to 40 percent of the excess smoothness puzzle. Chapter 2 employs a more general approach to deal with model uncertainty. Adapted from [39], our approach nests both the worst-case approach and the Bayesian approach as limiting cases. The key feature of our approach is that the central bank's choice of policy rule depends both on the plausibility of each model and on its attitude towards model uncertainty. This general approach allows us to investigate (i) how the rule chosen changes when the central bank's attitude towards model uncertainty varies and (ii) how the rule chosen changes when the central bank's perception of model plausibility varies. Faced with both the sticky price Phillips curve and the sticky information Phillips curve, the central bank in Chapter 2 chooses different policies for different degrees of uncertainty aversion and different perceptions of Phillips curve plausibility. [39] relax the reduction between first- and second-order probabilities to accommodate ambiguity-sensitive preferences. This seminal work has ample applications in macroeconomics and can provide thought-provoking implications for our study; for example, it enables us to formally incorporate learning into model ambiguity.

addition, I argue that risk sensitive preference framework is more desirable in the actual monetary policymaking process. The risk sensitive preference can be viewed as a special case of Epstein and Zins ([21]) recursive preference specication. Recursive preference turns out to be quite successful in accounting for many empirical puzzles in nancial economics. [6], for example, nds that rst-order risk aversion substantially increases excess return predictability, though this increased predictability is insucient to match the data. Along the diverse lines of applications of model uncertainty aversion and robustness, there are still several interesting areas to be explored in future work. In the model misspecication and asset pricing, [2] develop a quartet of semigroups and address the pricing of risk and misspecied diusion process. However, the eects of concern about robustness are likely to be especially important in environments with large shocks that occur infrequently, so that I believe that modelling of robustness in the presence of jump components will be very promising. I am also interested to see how the model misspecication concern and robustness help explain many puzzles documented in international economics (for example, [20] and [3]).


APPENDIX A

RATIONAL EXPECTATIONS SOLUTION CHARACTERIZATION FOR THE HYBRID MODEL

In this Appendix, we characterize the rational expectations solution of the model. Our macroeconomic system for X_t = (π_t, y_t, R_t)', consisting of the hybrid Phillips curve, the hybrid IS curve and the interest rate rule, can be expressed in matrix form (A.1), with coefficient entries built from the structural parameters (including the parameter a governing the mix of backward- and forward-looking behavior) and the policy coefficients. Compactly,

A X_t = B E_t X_{t+1} + C X_{t−1} + D u_t,   u_t ∼ (0, Ω)   (A.2)

where X_t = (π_t, y_t, R_t)', A, B, C, D are coefficient matrices of structural parameters, and u_t is the vector of exogenous shocks. Since we are interested in the optimal responses of monetary policy to exogenous shocks, i.e., the systematic component of the monetary policy rule, we do not include any policy shocks in the system. Ω is the diagonal variance matrix. Following the standard undetermined-coefficients approach, an RE solution to the system (A.2) can be written as

X_t = P X_{t−1} + F u_t   (A.3)

Inserting this into the dynamic system above, we can get

B P² − A P + C = 0    (A.4)
B P F − A F + D = 0    (A.5)

For P satisfying (A.4) to be admissible as a solution, it must be real-valued and exhibit stationary dynamics. The singularity of the matrix B makes the standard Blanchard and Kahn ([7]) method inapplicable in our case. Instead, we utilize the generalized Schur (QZ) decomposition adapted to the method of undetermined coefficients ([62]). Following [62], define the 2m × 2m matrices

Ξ = [A, −C; I_m, 0_{m,m}],    Δ = [B, 0_{m,m}; 0_{m,m}, I_m],

where m is the number of predetermined endogenous variables, which in our case equals 3. Find unitary 2m × 2m matrices Q and Z such that QΞZ = T and QΔZ = S are both upper triangular. Assuming that Z_21 and Q_21 are invertible, the matrix

P = Z_21^{−1} Z_22    (A.6)

solves the matrix quadratic equation (A.4), where Q_ij and Z_ij are the m × m ij-th submatrices of Q∗ and Z∗, the complex conjugate transposes of Q and Z. We can characterize stationarity, uniqueness and real-valuedness as follows. If the number of ratios t_ii/s_ii that are smaller than 1 in modulus is less than the number of predetermined variables, there are no stable solutions; if it is the same, there exists a unique solution; if it is greater, we have multiple solutions. Here the ratios t_ii/s_ii are in effect the generalized eigenvalues of Ξ with respect to Δ. In the case of multiple stationary solutions, several alternative criteria have been proposed for selecting a bubble-free path.39 We will follow the popular saddle path or stability criterion first proposed by [7], which is equivalent to choosing the smallest m eigenvalues if there are more than m stable eigenvalues (i.e., with modulus less than 1). Given P, we can get F by taking the columnwise vectorization:

vec(F) = [I_n ⊗ A − I_n ⊗ (BP)]^{−1} vec(D)    (A.7)

where n is the dimension of the exogenous shocks, which equals 2 in our case.

39 Among these are Taylor's ([57]) minimum-variance criterion, the expectational-stability criterion of [23], and the minimal-state-variable criterion ([44]). In many cases, the last two point to the same solution as the saddle path criterion.
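As an illustration, this procedure can be sketched in Python. The fragment below is a minimal sketch using the eigenvector form of the companion-pencil solution (equivalent to (A.6) when the stable solution is unique); the matrices in the usage note are hypothetical 2 × 2 placeholders rather than the model's calibrated ones:

```python
import numpy as np
from scipy.linalg import eig

def solve_quadratic(A, B, C, D):
    """Stable RE solution X_t = P X_{t-1} + F u_t of
    A X_t = B E_t X_{t+1} + C X_{t-1} + D u_t:
    solve B P^2 - A P + C = 0 (A.4) via the companion pencil (Xi, Delta),
    then F from B P F - A F + D = 0 (A.5), i.e. (A - B P) F = D."""
    m = A.shape[0]
    Xi = np.block([[A, -C], [np.eye(m), np.zeros((m, m))]])
    Delta = np.block([[B, np.zeros((m, m))], [np.zeros((m, m)), np.eye(m)]])
    lam, V = eig(Xi, Delta)                  # generalized eigenpairs: Xi v = lam Delta v
    stable = np.where(np.abs(lam) < 1)[0]    # saddle path (stability) criterion
    assert len(stable) == m, "no unique stable solution"
    Omega = V[m:, stable]                    # lower blocks of eigenvectors, s = (lam x; x)
    P = np.real(Omega @ np.diag(lam[stable]) @ np.linalg.inv(Omega))
    F = np.linalg.solve(A - B @ P, D)
    return P, F
```

For instance, with illustrative diagonal matrices A = I, B = diag(0.5, 0.4), C = diag(0.3, 0.2) and D = I, each component has one stable and one unstable root, so the stable-eigenvalue count equals m and the residuals of (A.4) and (A.5) vanish to machine precision.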


APPENDIX B

LOSS FUNCTION CHARACTERIZATION

It is convenient to collect all the variables appearing in the loss function in X̃_t = (π_t, y_t, r_t, r_{t−1})′ = (X_t′, r_{t−1})′. Define H_t = (π_t, y_t, r_t − r_{t−1})′ = H X̃_t, where

H = [1 0 0 0; 0 1 0 0; 0 0 1 −1].

Let K = diag(1, v, ν) be the weighting matrix (with v and ν the loss weights on output and interest rate smoothing); then

E[L_t] = E[H_t′ K H_t] = trace(K Σ_H),    (B.1)

where Σ_H is the unconditional covariance matrix of the goal variables and Σ_H = H Σ_X̃ H′. To get Σ_X̃, first note that the transition law (A.3) can be equivalently written as X̃_t = P̃ X̃_{t−1} + F̃ ũ_t, where

P̃ = [P, 0_{3×1}; (0 0 1), 0],    F̃ = [F; 0_{1×2}],

and ũ_t = u_t. Now it is easily seen that Σ_X̃ can be obtained by solving the Sylvester equation

Σ_X̃ = P̃ Σ_X̃ P̃′ + F̃ Σ F̃′.    (B.2)
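Equation (B.2) is a standard discrete Lyapunov equation, so the loss (B.1) can be computed directly with off-the-shelf routines. A minimal Python sketch (the matrices in the usage note are placeholders, not the model's calibration):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def expected_loss(P_til, F_til, Sigma_u, H, K):
    """E[L_t] = trace(K Sigma_H) with Sigma_H = H Sigma_X H' as in (B.1),
    where Sigma_X solves Sigma_X = P Sigma_X P' + F Sigma_u F' as in (B.2)."""
    Sigma_X = solve_discrete_lyapunov(P_til, F_til @ Sigma_u @ F_til.T)
    return np.trace(K @ H @ Sigma_X @ H.T)
```

As a check, for a two-variable system with P̃ = diag(0.5, 0.3), F̃ = H = K = I and unit shock variances, the loss is 1/(1 − 0.25) + 1/(1 − 0.09).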

APPENDIX C

DETECTION ERROR PROBABILITY

Detection error probabilities can be calculated using likelihood ratio (LR) tests. Consider two alternative models, i and j, with data generating processes

X_t = P_i X_{t−1} + F_i u_t    (C.1)
X_t = P_j X_{t−1} + F_j u_t    (C.2)

where u_t is assumed to be a Gaussian disturbance. For a sample of size T, the (average) log likelihood under model i is

log L_ii = (1/T) Σ_{t=1}^{T} { log[(2π)^{−n/2} |Σ|^{−1/2}] − (1/2) u_t′ Σ^{−1} u_t }    (C.3)

The log likelihood for model j, given that model i is the DGP, is40

log L_ij = (1/T) Σ_{t=1}^{T} { log[(2π)^{−n/2} |Σ|^{−1/2}] − (1/2) (F_j^+ (X_t^i − P_j X_{t−1}^i))′ Σ^{−1} (F_j^+ (X_t^i − P_j X_{t−1}^i)) }    (C.4)

When model i generates the data, the log likelihood ratio r_i = log L_ii − log L_ij should be positive. Calculate the probability of the mistake

p_i = Prob(mistake|i) = freq(r_i ≤ 0)
40 F^+ is the Moore-Penrose pseudoinverse of F, and X_0 = 0 for simplicity.


Similarly, p_j = Prob(mistake|j) = freq(r_j ≤ 0) is the frequency of making a mistake when model j is true. Attaching equal priors to models i and j, the detection error probability is defined as

p = (1/2)(p_i + p_j)    (C.5)

To compute this detection error probability, we simulate a large number of samples and calculate the empirical counterpart.
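This simulation can be sketched as follows in Python; the scalar models in the usage note are hypothetical, and the sample size T and replication count are likewise illustrative:

```python
import numpy as np

def detection_error_prob(Pi, Fi, Pj, Fj, Sigma, T=200, n_sim=200, seed=0):
    """Monte Carlo detection error probability (C.5): simulate each model,
    evaluate the log likelihood ratio from (C.3)-(C.4), count mistakes."""
    rng = np.random.default_rng(seed)
    m, n = Fi.shape
    Sigma_inv = np.linalg.inv(Sigma)
    const = -0.5 * np.log(np.linalg.det(2.0 * np.pi * Sigma))

    def avg_loglik(U):
        quad = np.einsum('ti,ij,tj->t', U, Sigma_inv, U)
        return np.mean(const - 0.5 * quad)

    def mistake_freq(P_true, F_true, P_alt, F_alt):
        mistakes = 0
        for _ in range(n_sim):
            U = rng.multivariate_normal(np.zeros(n), Sigma, size=T)
            X = np.zeros((T + 1, m))                      # X_0 = 0 for simplicity
            for t in range(T):
                X[t + 1] = P_true @ X[t] + F_true @ U[t]
            # residuals the alternative model attributes to the data (pseudoinverse)
            U_alt = (np.linalg.pinv(F_alt) @ (X[1:].T - P_alt @ X[:-1].T)).T
            mistakes += (avg_loglik(U) - avg_loglik(U_alt)) <= 0
        return mistakes / n_sim

    p_i = mistake_freq(Pi, Fi, Pj, Fj)   # freq(r_i <= 0)
    p_j = mistake_freq(Pj, Fj, Pi, Fi)   # freq(r_j <= 0)
    return 0.5 * (p_i + p_j)
```

With two hypothetical scalar laws of motion, say P_i = 0.9 and P_j = 0.5 with unit shock variance, the models are easy to tell apart at T = 200 and the detection error probability is close to zero; as P_j approaches P_i it rises toward 1/2.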


APPENDIX D

STATE SPACE REPRESENTATION OF THE DSGE MODELS

D.1 Sticky Price DSGE

Let X_t = (π_t, y_t, R_t, π_{t−1}, R_{t−1})′ be the vector of endogenous variables and e_t = (u_t, g_t)′ the vector of exogenous variables. Then, given the Taylor-type rule R_t = ρR_{t−1} + (1 − ρ)(ψ_π π_t + ψ_y y_t), the Sticky Price DSGE can be cast into the form

A E_t X_{t+1} = B X_t + C e_t    (D.1)

where A and B are 5 × 5 and C is 5 × 2, with entries built from the structural parameters and the policy coefficients (ρ, ψ_π, ψ_y).

D.2 Sticky Information DSGE

Since the Sticky Information Phillips curve involves an infinite sum of lagged expectations, it generates a potentially infinite state space. We choose to truncate the infinite sum when adding more terms has little effect on the recursive equilibrium law of motion; for more details, see [61]. In order to cast the model in the form of the general linear difference system, we replace terms of the form E_{t−1}π_t with s_t = E_{t−1}π_t and add to the system a dummy equation s_{t+1} = E_t π_{t+1}. For example, combined with the Taylor-type rule, the Sticky Information DSGE when k = 1 can be written as

A E_t X_{t+1} = B X_t + C e_t    (D.2)

by defining X_t = (π_t, y_t, R_t, R_{t−1}, E_{t−1}π_t, E_{t−1}y_t, E_{t−1}y_{t−1})′, where A and B are 7 × 7 and C is 7 × 2 coefficient matrices.

D.3 Solution Method

We employ the generalized Schur decomposition to solve the model in the form of (D.1). The idea is to reduce (D.1) into blocks of equations; the stable solution is then found by solving the unstable block forward and the stable block backward. Partition X_t into (d_t′, k_t′)′, where d_t collects the endogenous nonpredetermined variables and k_t the predetermined state variables. Find upper triangular matrices S and T and unitary matrices Q and Z such that QAZ = S and QBZ = T.41 Equation (D.1) can then be written as

S E_t W_{t+1} = T W_t + C̃ e_t    (D.3)

where W_t = Z∗X_t and C̃ = QC. Correspondingly, the partitioned version of (D.3) is

[S_11 S_12; 0 S_22] [E_t W_{1,t+1}; E_t W_{2,t+1}] = [T_11 T_12; 0 T_22] [W_{1,t}; W_{2,t}] + [C̃_1; C̃_2] e_t    (D.4)

By construction,42 S_11 and T_22 are invertible. Considering the second block of the above system, we can proceed recursively line by line to obtain W_{2,t} in terms of e_t (this is computationally much more efficient than inverting T_22 directly for a large system). Inserting this back into the first block, we can then solve for W_{1,t}. Having solved for the full vector W_t, it is straightforward to invert the transformation to find X_t. Consolidating the VAR(1) exogenous variables, the solution has the following state space form:

(X_t′, e_t′)′ = Θ M_t,    M_t = (k_t′, e_t′)′    (D.5)
M_t = F M_{t−1} + G ε_t    (D.6)
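The deterministic core of this block-triangular procedure can be sketched in Python with an ordered QZ decomposition. Note that the sketch follows the predetermined-variables-first ordering of [38], rather than the (d_t, k_t) ordering above, and omits the exogenous shock block for brevity; the toy matrices in the usage note are hypothetical:

```python
import numpy as np
from scipy.linalg import ordqz

def klein_homogeneous(A, B, n_k):
    """Stable solution of A E_t x_{t+1} = B x_t with x_t = (k_t', d_t')',
    the n_k predetermined variables ordered first.
    Returns (L, N) such that k_{t+1} = L k_t and d_t = N k_t."""
    # scipy's 'ouc' sort puts |alpha/beta| > 1 first; since the dynamic roots
    # are beta/alpha = t_ii/s_ii, this places the stable block on top.
    S, T, alpha, beta, Q, Z = ordqz(A, B, sort='ouc', output='complex')
    n_stable = int(np.sum(np.abs(alpha) > np.abs(beta)))
    assert n_stable == n_k, "saddle-path condition fails"
    Z11, Z21 = Z[:n_k, :n_k], Z[n_k:, :n_k]
    S11, T11 = S[:n_k, :n_k], T[:n_k, :n_k]
    N = np.real(Z21 @ np.linalg.inv(Z11))                              # jump variables
    L = np.real(Z11 @ np.linalg.solve(S11, T11) @ np.linalg.inv(Z11))  # state transition
    return L, N
```

For the toy system k_{t+1} = 0.5 k_t and d_t = 0.5 E_t d_{t+1} + k_t, exactly one generalized eigenvalue corresponds to a stable dynamic root, and the routine recovers d_t = (4/3) k_t and k_{t+1} = 0.5 k_t.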

D.4 Loss function

We can express the goal variables as H_t = (π_t, y_t, R_t − R_{t−1})′ = H (X_t′, e_t′)′. Define the weighting matrix K = diag(1, v, ν); then L = E[H_t′ K H_t] = trace(K Σ_H), where Σ_H = H Σ_X H′ and Σ_X is the unconditional covariance matrix of the stacked vector (X_t′, e_t′)′. From the model solution (D.5) and (D.6), we know Σ_X = Θ Σ_M Θ′ and Σ_M = F Σ_M F′ + G Σ_ε G′, where Σ_ε is the variance-covariance matrix of ε_t. Solving the last Sylvester equation gives us Σ_M, and with a few more steps of algebra, the loss.

41 Provided that |Az − B| is not identically zero in z, this decomposition always exists.
42 For the detailed procedure, see [38].


BIBLIOGRAPHY

[1] J. D. Amato and T. Laubach. Rule-of-thumb behaviour and monetary policy. European Economic Review, 47(5):791–831, OCT 2003.
[2] E. W. Anderson, L. P. Hansen, and T. J. Sargent. A quartet of semigroups for model specification, robustness, prices of risk, and model detection. Mimeo, APR 2003.
[3] D. K. Backus, S. Foresi, and C. I. Telmer. Affine term structure models and the forward premium anomaly. Journal of Finance, 56(1):279–304, FEB 2001.
[4] L. Ball. Credible disinflation with staggered price setting. American Economic Review, 84(1):282–289, MAR 1994.
[5] L. Ball, N. G. Mankiw, and R. Reis. Monetary policy for inattentive economies. Journal of Monetary Economics, 52(4):703–725, MAY 2005.
[6] G. Bekaert, R. J. Hodrick, and D. A. Marshall. The implications of first-order risk aversion for asset market risk premiums. Journal of Monetary Economics, 40(1):3–39, SEP 1997.
[7] O. J. Blanchard and C. M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–1311, 1980.
[8] W. C. Brainard. Uncertainty and the effectiveness of policy. American Economic Review, 57(2):411–425, 1967.
[9] J. Y. Campbell and N. G. Mankiw. Consumption, income, and interest rates: reinterpreting the time series evidence. In O. J. Blanchard and S. Fischer (Eds.), NBER Macroeconomics Annual 1989, pages 185–216. MIT Press, Cambridge, MA, 1989.
[10] C. D. Carroll. The epidemiology of macroeconomic expectations. In L. Blume and S. Durlauf (Eds.), The Economy as an Evolving Complex System, III. Oxford University Press, 2006.


[11] G. Cateau. Monetary policy under model and data-parameter uncertainty. Bank of Canada Working Paper 2005-6, 2005.
[12] V. V. Chari, P. J. Kehoe, and E. R. McGrattan. Sticky price models of the business cycle: Can the contract multiplier solve the persistence problem? Econometrica, 68(5):1151–1179, SEP 2000.
[13] L. J. Christiano, M. Eichenbaum, and C. L. Evans. Nominal rigidities and the dynamic effects of a shock to monetary policy. Journal of Political Economy, 113(1):1–45, FEB 2005.
[14] R. Clarida, J. Gali, and M. Gertler. The science of monetary policy: A new keynesian perspective. Journal of Economic Literature, 37(4):1661–1707, DEC 1999.
[15] R. Clarida, J. Gali, and M. Gertler. Monetary policy rules and macroeconomic stability: Evidence and some theory. Quarterly Journal of Economics, 115(1):147–180, FEB 2000.
[16] B. Dupor and T. Conley. The fed response to equity prices and inflation. American Economic Review, 94(2):24–28, MAY 2004.
[17] B. Dupor and W. F. Liu. Robust policy and non-attenuation. Mimeo, 2005.
[18] B. Dupor and M. Zhao. Consuming robustly. Mimeo, 2005.
[19] J. B. Taylor (Ed.). Monetary Policy Rules. University of Chicago Press, Chicago, 1999.
[20] C. Engel. The forward discount anomaly and the risk premium: A survey of recent evidence. Journal of Empirical Finance, 3:123–192, 1996.
[21] L. G. Epstein and S. E. Zin. Substitution, risk aversion, and the temporal behavior of consumption and asset returns: a theoretical framework. Econometrica, 57(4):937–969, JUL 1989.
[22] C. J. Erceg and A. T. Levin. Optimal monetary policy with durable and nondurable goods. European Central Bank Working Paper No. 179, 2002.
[23] G. Evans. Expectational stability and the multiple equilibria problem in linear rational expectations models. Quarterly Journal of Economics, 100(4):1217–1233, 1985.
[24] J. C. Fuhrer. Habit formation in consumption and its implications for monetary-policy models. American Economic Review, 90(3):367–390, JUN 2000.


[25] J. C. Fuhrer and G. Moore. Inflation persistence. Quarterly Journal of Economics, 110(1):127–159, FEB 1995.
[26] J. Gali and M. Gertler. Inflation dynamics: A structural econometric analysis. Journal of Monetary Economics, 44(2):195–222, OCT 1999.
[27] J. Gali, J. D. Lopez-Salido, and J. Valles. Rule-of-thumb consumers and the design of interest rate rules. Journal of Money, Credit and Banking, 36(4):739–763, AUG 2004.
[28] M. P. Giannoni. Does model uncertainty justify caution? Robust optimal monetary policy in a forward-looking model. Macroeconomic Dynamics, 6(1):111–144, FEB 2002.
[29] M. P. Giannoni and M. Woodford. Optimal interest-rate rules: I. General theory. NBER Working Papers 9419, 2003.
[30] I. Gilboa and D. Schmeidler. Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18(2):141–153, 1989.
[31] L. P. Hansen and T. J. Sargent. Discounted linear exponential quadratic Gaussian control. IEEE Transactions on Automatic Control, 40(5):968–971, MAY 1995.
[32] L. P. Hansen and T. J. Sargent. Robust control of forward-looking models. Journal of Monetary Economics, 50(3):581–604, APR 2003.
[33] L. P. Hansen and T. J. Sargent. Robustness. Princeton University Press, forthcoming edition, 2006.
[34] C. Heath and A. Tversky. Preference and belief: ambiguity and competence in choice under uncertainty. Journal of Risk and Uncertainty, 4(1):5–28, JAN 1991.
[35] D. H. Jacobson. Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games. IEEE Transactions on Automatic Control, 18(2):124–131, 1973.
[36] B. D. Keen. Sticky price and sticky information price setting models: what is the difference? Mimeo, 2004.
[37] T. Kimura and T. Kurozumi. Optimal monetary policy in a micro-founded model with parameter uncertainty. Journal of Economic Dynamics & Control, forthcoming.
[38] P. Klein. Using the generalized Schur form to solve a multivariate linear rational expectations model. Journal of Economic Dynamics & Control, 24(10):1405–1423, SEP 2000.

[39] P. Klibanoff, M. Marinacci, and S. Mukerji. A smooth model of decision making under ambiguity. Econometrica, 73(6):1849–1892, NOV 2005.
[40] A. Levin, V. Wieland, and J. C. Williams. Robustness of simple monetary policy rules under model uncertainty. In J. B. Taylor (Ed.), Monetary Policy Rules, pages 263–299. University of Chicago Press, Chicago, 1999.
[41] A. Levin, V. Wieland, and J. C. Williams. The performance of forecast-based monetary policy rules under model uncertainty. American Economic Review, 93(3):622–645, JUN 2003.
[42] A. T. Levin and J. C. Williams. Robust monetary policy with competing reference models. Journal of Monetary Economics, 50(5):945–975, JUL 2003.
[43] N. G. Mankiw and R. Reis. Sticky information versus sticky prices: A proposal to replace the new keynesian Phillips curve. Quarterly Journal of Economics, 117(4):1295–1328, NOV 2002.
[44] B. T. McCallum. On non-uniqueness in rational expectations models: an attempt at perspective. Journal of Monetary Economics, 11(2):139–168, 1983.
[45] B. T. McCallum and E. Nelson. Nominal income targeting in an open-economy optimizing model. Journal of Monetary Economics, 43(3):553–578, JUN 1999.
[46] C. R. Nelson. The prediction performance of the FRB-MIT-PENN model of the US economy. American Economic Review, 62(5):902–917, DEC 1972.
[47] A. Onatski and J. H. Stock. Robust monetary policy under model uncertainty in a small model of the US economy. Macroeconomic Dynamics, 6(1):85–110, FEB 2002.
[48] A. Orphanides and J. C. Williams. Imperfect knowledge, inflation expectations, and monetary policy. NBER Working Paper No. 9884, 2003.
[49] J. M. Roberts. New keynesian economics and the Phillips curve. Journal of Money, Credit and Banking, 27(4):975–984, NOV 1995.
[50] G. D. Rudebusch. Is the fed too timid? Monetary policy in an uncertain world. Review of Economics and Statistics, 83(2):203–217, MAY 2001.
[51] G. D. Rudebusch and L. E. O. Svensson. Policy rules for inflation targeting. In J. B. Taylor (Ed.), Monetary Policy Rules, pages 203–253. University of Chicago Press, Chicago, 1999.
[52] T. J. Sargent. Comment. In J. B. Taylor (Ed.), Monetary Policy Rules, pages 144–154. University of Chicago Press, Chicago, 1999.
[53] C. A. Sims. Pitfalls of a minimax approach to model uncertainty. American Economic Review, 91(2):51–54, MAY 2001.
[54] P. Soderlind. Solution and estimation of RE macromodels with optimal policy. European Economic Review, 43(4-6):813–823, APR 1999.
[55] U. Soderstrom, P. Soderlind, and A. Vredin. New-keynesian models and monetary policy: A re-examination of the stylized facts. Scandinavian Journal of Economics, 107(3):521–546, 2005.
[56] J. H. Stock. Comment. In J. B. Taylor (Ed.), Monetary Policy Rules, pages 253–259. University of Chicago Press, Chicago, 1999.
[57] J. B. Taylor. Conditions for unique solutions in stochastic macroeconomic models with rational expectations. Econometrica, 45(6):1377–1385, SEP 1977.
[58] J. B. Taylor. Aggregate dynamics and staggered contracts. Journal of Political Economy, 88(1):1–23, 1980.
[59] J. B. Taylor. Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy, 39:195–214, 1993.
[60] R. J. Tetlow and P. von zur Muehlen. Robust monetary policy with misspecified models: Does model uncertainty always call for attenuated policy? Journal of Economic Dynamics & Control, 25(6-7):911–949, JUN-JUL 2001.
[61] M. Trabandt. Sticky information vs. sticky prices: a horse race in a DSGE framework. Mimeo, 2003.
[62] H. Uhlig. A toolkit for analyzing nonlinear dynamic stochastic models easily. In R. Marimon and A. Scott (Eds.), Computational Methods for the Study of Dynamic Economies, pages 30–61. Oxford University Press, Oxford, 1997.
[63] C. E. Walsh. Implications of a changing economic structure for the strategy of monetary policy. In Monetary Policy and Uncertainty: Adapting to a Changing Economy, pages 297–348. Federal Reserve Bank of Kansas City, 2003.
[64] M. Woodford. Inflation stabilization and welfare. Contributions to Macroeconomics, 2:1–51, 2002.
[65] M. Woodford. Optimal interest-rate smoothing. Review of Economic Studies, 70(4):861–886, OCT 2003.

