Lars E.O. Svensson
Sveriges Riksbank, CEPR, and NBER
Noah Williams
University of Wisconsin and NBER
Federal Reserve Bank of St. Louis Review 90(4), 2008, 275-293
We study the design of optimal monetary policy under uncertainty using a Markov jump-linear-quadratic (MJLQ) approach. We approximate the uncertainty that policymakers face by different discrete modes in a Markov chain and by taking mode-dependent linear-quadratic approximations of the underlying model. This allows us to apply a powerful methodology with convenient solution algorithms that we have developed. We apply our methods to analyze the effects of uncertainty and the potential gains from experimentation for two sources of uncertainty in the New Keynesian Phillips curve. Our examples highlight that learning may have sizeable effects on losses and that, while it is generally beneficial, it need not always be so. The experimentation component typically has little effect, and in some cases it can lead to attenuation of policy.
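The MJLQ idea described above can be illustrated with a minimal sketch: a scalar economy whose dynamics switch between two modes according to a Markov chain, with a mode-dependent linear-quadratic control rule obtained from the coupled Riccati-type recursion. All numbers and the scalar model here are illustrative assumptions, and the sketch covers only the observed-modes, no-learning benchmark, not the learning and experimentation components analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "modes" of a scalar economy, each with its own linear dynamics:
# x_{t+1} = a[j] * x_t + b[j] * u_t + shock, where the mode j follows
# a Markov chain with transition matrix P. All parameter values are
# hypothetical, chosen only for illustration.
a = np.array([0.9, 1.1])     # persistence of the state in each mode
b = np.array([0.5, 0.3])     # impact of the policy instrument in each mode
P = np.array([[0.95, 0.05],  # P[j, k] = Prob(next mode = k | current mode = j)
              [0.10, 0.90]])
q, r, beta = 1.0, 0.1, 0.99  # quadratic loss weights and discount factor

# Mode-dependent feedback rules u_t = -f[j] * x_t, found by iterating the
# coupled recursion for the mode-dependent quadratic value v[j] * x^2:
#   v[j] = min_u { q x^2 + r u^2 + beta * E[v[j'] | j] * (a[j] x + b[j] u)^2 }
v = np.ones(2)
for _ in range(1000):
    ev = P @ v                                       # expected continuation value
    f = beta * ev * a * b / (r + beta * ev * b**2)   # first-order condition
    v = q + r * f**2 + beta * ev * (a - b * f)**2    # updated value coefficients

# Simulate the switching economy under the mode-dependent rule and
# accumulate the discounted quadratic loss.
x, j, loss = 1.0, 0, 0.0
for t in range(200):
    u = -f[j] * x
    loss += beta**t * (q * x**2 + r * u**2)
    x = a[j] * x + b[j] * u + 0.01 * rng.standard_normal()
    j = rng.choice(2, p=P[j])
```

Because the continuation value in each mode averages over the transition probabilities, the rule in the stable mode already hedges against switching into the unstable one; this coupling across modes is what distinguishes the MJLQ solution from solving two separate LQ problems.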
JEL Classification: E42, E52, E58
Keywords: Optimal monetary policy, learning, recursive saddlepoint method