Bayesian and Adaptive Optimal Policy under Model Uncertainty – Abstract


Lars E.O. Svensson
Sveriges Riksbank,
Princeton University,
CEPR, and NBER

Noah Williams
University of Wisconsin and NBER

First draft: June 2006
This version: September 2007

We study the problem of a policymaker who seeks to set policy optimally in an economy where the true economic structure is unobserved, and who learns optimally from observations of the economy. This is a classic problem of learning and control, variants of which have been studied in the past, but seldom with the forward-looking variables that are a key component of modern policy-relevant models. As in most Bayesian learning problems, the optimal policy typically includes an experimentation component reflecting the endogeneity of information. We develop algorithms to solve numerically for the Bayesian optimal policy (BOP). However, computing the BOP is feasible only in relatively small models, and thus we also consider a simpler specification we term adaptive optimal policy (AOP), which allows policymakers to update their beliefs but shortcuts the experimentation motive. In our setting, the AOP is significantly easier to compute, and in many cases it provides a good approximation to the BOP. We provide some simple examples to illustrate the role of learning and experimentation in a Markov jump-linear-quadratic (MJLQ) framework.
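To fix ideas, the following is a minimal sketch of the AOP idea in a hypothetical two-mode scalar example, not the authors' algorithm (which handles forward-looking variables and the full MJLQ structure). For illustration it replaces the mode-dependent MJLQ solution with a certainty-equivalent belief-averaged model: each period the policymaker solves a standard LQ problem under current beliefs, applies the resulting rule with no experimentation motive, and updates beliefs by Bayes' rule. All parameter values are assumptions chosen for the example.

```python
# Hypothetical two-mode backward-looking example: x' = a_j x + b_j u + e,
# with the mode j unknown to the policymaker.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.9, 0.6])     # assumed persistence in each mode
b = np.array([0.5, 1.0])     # assumed policy multipliers
sigma = 0.2                  # assumed shock standard deviation
Q, R, beta = 1.0, 0.1, 0.98  # loss weights on x^2, u^2; discount factor

def lq_feedback(a_bar, b_bar):
    """Scalar Riccati iteration for min E sum beta^t (Q x^2 + R u^2)."""
    P = Q
    for _ in range(500):
        denom = R + beta * b_bar**2 * P
        P_new = Q + beta * a_bar**2 * P - beta**2 * (a_bar * b_bar * P)**2 / denom
        if abs(P_new - P) < 1e-12:
            break
        P = P_new
    return beta * a_bar * b_bar * P / (R + beta * b_bar**2 * P)  # u = -K x

true_mode, p, x = 0, 0.5, 1.0  # true mode unobserved; prior Pr(mode 1) = 0.5
for t in range(50):
    # AOP step: solve under current beliefs, ignoring how policy moves beliefs
    a_bar = p * a[0] + (1 - p) * a[1]
    b_bar = p * b[0] + (1 - p) * b[1]
    u = -lq_feedback(a_bar, b_bar) * x
    x_next = a[true_mode] * x + b[true_mode] * u + sigma * rng.standard_normal()
    # Bayesian belief update from the likelihood of the observed transition
    lik = np.exp(-0.5 * ((x_next - a * x - b * u) / sigma) ** 2)
    p = p * lik[0] / (p * lik[0] + (1 - p) * lik[1])
    x = x_next

print(f"final belief Pr(mode 1) = {p:.3f}")
```

A Bayesian optimal policy would instead treat the belief p as a state variable and internalize how today's u sharpens tomorrow's inference, which is the experimentation motive the AOP deliberately shortcuts.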

JEL Classification: E42, E52, E58
Keywords: Optimal policy, multiplicative uncertainty