Bayesian posteriors for arbitrarily rare events

Bibliographic Details
Main Authors: Fudenberg, Drew (Contributor), He, Kevin (Author), Imhof, Lorens A. (Author)
Other Authors: Massachusetts Institute of Technology. Department of Economics (Contributor)
Format: Article
Language: English
Published: National Academy of Sciences, 2018-01-19T20:28:12Z.
Subjects:
Online Access: Get fulltext
Description
Summary: We study how much data a Bayesian observer needs to correctly infer the relative likelihoods of two events when both events are arbitrarily rare. Each period, either a blue die or a red die is tossed. The two dice land on side 1 with unknown probabilities p_1 and q_1, which can be arbitrarily low. Given a data-generating process where p_1 ≥ c q_1, we are interested in how much data are required to guarantee that, with high probability, the observer's Bayesian posterior mean for p_1 exceeds (1 - δ)c times that for q_1. If the prior densities for the two dice are positive on the interior of the parameter space and behave like power functions at the boundary, then for every ϵ > 0 there exists a finite N so that the observer obtains such an inference after n periods with probability at least 1 - ϵ whenever n p_1 ≥ N. The condition on n and p_1 is the best possible. The result can fail if one of the prior densities converges to zero exponentially fast at the boundary.
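
The stated result lends itself to a quick numerical illustration. The following is a minimal Monte Carlo sketch in Python, under assumptions chosen for demonstration only: Beta(1, 1) priors on p_1 and q_1 (one example of priors that are positive on the interior and behave like power functions at the boundary), an equal chance of tossing either die each period, and illustrative values of p_1, q_1, c, and δ. None of these specifics are taken from the paper.

# Monte Carlo sketch of the setup in the abstract (assumptions: Beta(1, 1)
# priors, each period one die is tossed with equal probability, illustrative
# parameter values).
import numpy as np

rng = np.random.default_rng(0)

def posterior_ratio_ok(n, p1, q1, c, delta, a=1.0, b=1.0):
    """Simulate n periods and check whether the posterior mean for p_1
    exceeds (1 - delta) * c times the posterior mean for q_1."""
    # Each period, toss the blue die or the red die with equal probability.
    n_blue = int((rng.random(n) < 0.5).sum())
    n_red = n - n_blue
    # Count how often each die lands on side 1.
    ones_blue = rng.binomial(n_blue, p1)
    ones_red = rng.binomial(n_red, q1)
    # Posterior means under independent Beta(a, b) priors on p_1 and q_1.
    mean_p1 = (a + ones_blue) / (a + b + n_blue)
    mean_q1 = (a + ones_red) / (a + b + n_red)
    return mean_p1 >= (1 - delta) * c * mean_q1

# Illustrative parameters: a rare event with p_1 = c * q_1.
c, delta, q1 = 2.0, 0.1, 1e-3
p1 = c * q1
trials = 2_000

for n in [1_000, 10_000, 100_000]:
    hits = sum(posterior_ratio_ok(n, p1, q1, c, delta) for _ in range(trials))
    print(f"n = {n:>7}, n*p_1 = {n * p1:>6.1f}: "
          f"estimated probability of the inference = {hits / trials:.3f}")

As n p_1 grows, the estimated probability that the posterior mean for p_1 exceeds (1 - δ)c times that for q_1 should climb toward 1, consistent with the stated result that the condition n p_1 ≥ N suffices.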