BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:iCalendar-Ruby
BEGIN:VEVENT
CATEGORIES:Class/Workshop
DESCRIPTION:Federico Bugni - Duke University\n\nOn the Iterated Estimation
of Dynamic Discrete Choice Games (joint with J. Bunting)\n\nAbstract: We
study the asymptotic properties of a class of estimators of the structural
parameters in dynamic discrete choice games. We consider K-stage policy ite
ration (PI) estimators\, where K denotes the number of policy iterations em
ployed in the estimation. This class nests several estimators proposed in t
he literature. By considering a “maximum likelihood” criterion function\, o
ur estimator becomes the K-ML estimator in Aguirregabiria and Mira (2002\,
2007). By considering a “minimum distance” criterion function\, it defines
a new K-MD estimator\, which is an iterative version of the estimators in P
esendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007). First\, we
establish that the K-ML estimator is consistent and asymptotically normal f
or any K. This complements findings in Aguirregabiria and Mira (2007)\, who
focus on K = 1 and K large enough to induce convergence of the estimator.
Furthermore\, we show that the asymptotic variance of the K-ML estimator ca
n exhibit arbitrary patterns as a function of K. Second\, we establish that t
he K-MD estimator is consistent and asymptotically normal for any K. For a
specific weight matrix\, the K-MD estimator has the same asymptotic distri
bution as the K-ML estimator. Our main result provides an optimal sequence
of weight matrices for the K-MD estimator and shows that the optimally weig
hted K-MD estimator has an asymptotic distribution that is invariant to K.
This new result is especially unexpected given the findings in Aguirregabir
ia and Mira (2007) for K-ML estimators. Our main result implies two new and
important corollaries about the optimal 1-MD estimator (derived by Pesendo
rfer and Schmidt-Dengler (2008)). First\, the optimal 1-MD estimator is opt
imal in the class of K-MD estimators for all K. In other words\, additional
policy iterations do not provide asymptotic efficiency gains relative to t
he optimal 1-MD estimator. Second\, the optimal 1-MD estimator is at least
as asymptotically efficient as any K-ML estimator for all K.
DTEND:20180418T171000Z
DTSTAMP:20181212T123603Z
DTSTART:20180418T154000Z
GEO:42.447296;-76.482254
LOCATION:Uris Hall\, 498
SEQUENCE:0
SUMMARY:Joint Econometrics & Industrial Organization Workshop: Federico Bug
ni
UID:tag:localist.com\,2008:EventInstance_3133237
URL:http://events.cornell.edu/event/econometrics_workshop_federico_bugni
END:VEVENT
END:VCALENDAR