Cornell University

This is a past event. Its details are archived for historical purposes.


Joint Econometrics & Industrial Organization Workshop: Federico Bugni

Wednesday, April 18, 2018, 11:40am to 1:10pm

Uris Hall, 498
Central Campus

Federico Bugni - Duke University

On the Iterated Estimation of Dynamic Discrete Choice Games (joint with J. Bunting)

Abstract

We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature. By considering a “maximum likelihood” criterion function, our estimator becomes the K-ML estimator in Aguirregabiria and Mira (2002, 2007). By considering a “minimum distance” criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007).

First, we establish that the K-ML estimator is consistent and asymptotically normal for any K. This complements findings in Aguirregabiria and Mira (2007), who focus on K = 1 and on K large enough to induce convergence of the estimator. Furthermore, we show that the asymptotic variance of the K-ML estimator can exhibit arbitrary patterns as a function of K.

Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-ML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-ML estimators.

Our main result implies two new and important corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators for all K; in other words, additional policy iterations provide no asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is asymptotically at least as efficient as any K-ML estimator for all K.
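The K-stage policy iteration idea in the abstract can be sketched in code. The toy model below is a minimal, hypothetical single-agent example (the paper treats games): the utility specification, transition matrices, discount factor, sample size, and the grid-search optimizer are all illustrative assumptions, not the paper's implementation. Each stage maximizes a pseudo-likelihood at the current choice probabilities, then applies one policy-iteration (logit best-response) update, in the spirit of the K-ML construction of Aguirregabiria and Mira.

```python
import numpy as np

np.random.seed(0)
BETA = 0.9                    # discount factor (assumed value)
EULER = 0.5772156649          # Euler-Mascheroni constant (type-1 EV shocks)
S, A = 2, 2                   # toy model: 2 states, 2 actions

def util(theta):
    """Flow utility u(s, a; theta) -- a hypothetical toy specification."""
    u = np.zeros((S, A))
    u[:, 1] = theta * (np.arange(S) - 0.5)  # action 1's payoff varies with the state
    return u

# Transitions F[a, s, s']: action 0 keeps the state, action 1 flips it w.p. 0.1
F = np.zeros((A, S, S))
F[0] = np.eye(S)
F[1] = np.array([[0.9, 0.1], [0.1, 0.9]])

def psi(theta, P):
    """One policy-iteration (logit best-response) update P -> Psi(theta, P)."""
    u = util(theta)
    e = EULER - np.log(np.clip(P, 1e-12, 1.0))       # E[shock | choice] under logit
    Fbar = np.einsum('sa,ast->st', P, F)             # state transitions induced by P
    b = (P * (u + e)).sum(axis=1)                    # expected flow payoff under P
    V = np.linalg.solve(np.eye(S) - BETA * Fbar, b)  # ex-ante value function
    v = u + BETA * np.einsum('ast,t->sa', F, V)      # choice-specific values
    ev = np.exp(v - v.max(axis=1, keepdims=True))
    return ev / ev.sum(axis=1, keepdims=True)        # logit choice probabilities

def k_ml(states, actions, P0, K, grid):
    """K-stage estimator: maximize the pseudo-likelihood, update P, repeat K times."""
    P = P0
    theta_hat = grid[0]
    for _ in range(K):
        ll = [np.log(np.clip(psi(t, P), 1e-12, 1.0))[states, actions].sum()
              for t in grid]                  # grid search stands in for a real optimizer
        theta_hat = grid[int(np.argmax(ll))]
        P = psi(theta_hat, P)                 # one policy iteration
    return theta_hat

# Simulate data from the fixed point P* = Psi(theta_true, P*)
theta_true = 1.0
P_star = np.full((S, A), 0.5)
for _ in range(500):
    P_star = psi(theta_true, P_star)

n = 5000
states = np.random.randint(S, size=n)
actions = (np.random.rand(n) < P_star[states, 1]).astype(int)

# First-stage choice probabilities from raw frequencies, then K = 2 stages
p1 = np.array([actions[states == s].mean() for s in range(S)])
P0 = np.column_stack([1.0 - p1, p1])
theta_hat = k_ml(states, actions, P0, K=2, grid=np.linspace(-1.0, 3.0, 81))
print(theta_hat)
```

Replacing the pseudo-likelihood criterion with a weighted minimum-distance criterion in the same loop would give the K-MD analogue; the abstract's main result is that, with the optimal weight matrices, that estimator's asymptotic distribution does not depend on K.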

Event Type

Class/Workshop




Tags

EconMetrics, economics, EconSeminar, EconIO



Contact Name

Amy Moesch

Speaker

Federico Bugni

Speaker Affiliation

Duke University

Open To

Cornell Economics Community (List Serve Members)
