Lars E.O. Svensson
CEPR and NBER
University of Wisconsin and NBER
First version: November 2007
This version: March 2008
We study the design of optimal monetary policy under uncertainty in dynamic stochastic general equilibrium models. We use a Markov jump-linear-quadratic (MJLQ) approach to study policy design, approximating the uncertainty by discrete modes in a Markov chain and by taking mode-dependent linear-quadratic approximations of the underlying model. This allows us to apply a powerful methodology with convenient solution algorithms that we have developed. We apply our methods to a benchmark New Keynesian model, analyzing how policy is affected by uncertainty, and how learning and active experimentation affect policy and losses.
JEL Classification: E42, E52, E58
Keywords: Optimal monetary policy, learning, recursive saddlepoint method
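To make the MJLQ idea in the abstract concrete, the following is a minimal sketch (not the paper's own code) of the coupled Riccati iteration that arises when modes follow a Markov chain and the dynamics and loss are mode-dependent. It assumes the simplest benchmark in which the current mode is observed and there is no learning or experimentation; all names and parameter values are hypothetical.

```python
import numpy as np

def solve_mjlq(A, B, Q, R, P, beta=0.98, tol=1e-10, max_iter=10_000):
    """Solve a Markov jump-linear-quadratic control problem by iterating
    on the coupled Riccati equations.

    Mode j in {0, ..., J-1} follows a Markov chain with transition
    matrix P.  In mode j the state evolves as x' = A[j] x + B[j] u and
    the period loss is x'Q[j]x + u'R[j]u, discounted by beta.  The value
    in mode j is x'V[j]x; the modes are coupled through the expected
    continuation value W[j] = sum_k P[j,k] V[k].

    Returns the list of value matrices V and feedback matrices F
    (optimal policy u = -F[j] x in mode j).
    """
    n_modes = len(A)
    n = A[0].shape[0]
    V = [np.zeros((n, n)) for _ in range(n_modes)]
    for _ in range(max_iter):
        # Expected continuation value across next period's modes.
        W = [sum(P[j, k] * V[k] for k in range(n_modes))
             for j in range(n_modes)]
        V_new, F = [], []
        for j in range(n_modes):
            BtW = B[j].T @ W[j]
            # Mode-dependent optimal feedback from the first-order condition.
            Fj = np.linalg.solve(R[j] + beta * BtW @ B[j],
                                 beta * BtW @ A[j])
            Acl = A[j] - B[j] @ Fj  # closed-loop dynamics in mode j
            Vj = Q[j] + Fj.T @ R[j] @ Fj + beta * Acl.T @ W[j] @ Acl
            V_new.append(Vj)
            F.append(Fj)
        if max(np.max(np.abs(Vn - Vo))
               for Vn, Vo in zip(V_new, V)) < tol:
            return V_new, F
        V = V_new
    return V, F

# Hypothetical two-mode scalar example: mode 1 has more persistent dynamics.
A = [np.array([[1.0]]), np.array([[1.2]])]
B = [np.array([[1.0]]), np.array([[1.0]])]
Q = [np.array([[1.0]]), np.array([[1.0]])]
R = [np.array([[0.5]]), np.array([[0.5]])]
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

V, F = solve_mjlq(A, B, Q, R, P)
```

Because the value matrices of the two modes enter each other's Bellman equations through W[j], the policy in each mode internalizes the probability of switching, which is the sense in which the MJLQ solution differs from solving two separate LQ problems.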