Uniform and Subuniform Posterior Robustness: The Sample Size Problem
November 30, 1991
The following general question is addressed: given iid realizations X1, X2, ..., Xn from a distribution Pθ with parameter θ, where θ has a prior distribution π belonging to some family T, is it possible to prescribe a sample size n0 such that for n ≥ n0, posterior robustness is guaranteed to obtain for any actual data we are likely to see, or even for all possible data? Formally, we identify a "natural" set C such that P(the observation vector X ∉ C) ≤ ε for all possible marginal distributions implied by T, and protect ourselves for all X in the set C. Such a set C typically exists if T is tight. The plausibility of such a preposterior guarantee of postexperimental robustness depends on many things: the actual decision problem, the nature of the loss, whether the loss function is known, the variety of priors in T, whether the model is regular or nonregular, the dimension of the parameter θ, etc. We explore a variety of these questions.
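The two-stage guarantee described above can be written schematically as follows (our notation, not the abstract's: ρ(π, X) stands for whatever posterior quantity is under study, and δ is a target accuracy):

```latex
% Stage 1: choose C so that every marginal distribution m implied by T satisfies
\sup_{m}\; m(X \notin C) \le \varepsilon ,
% Stage 2: prescribe n_0 so that, for all n \ge n_0, the range of the posterior
% quantity over the prior family T is small on all of C:
\sup_{X \in C}\;\sup_{\pi_1, \pi_2 \in T}
  \bigl|\, \rho(\pi_1, X) - \rho(\pi_2, X) \,\bigr| \le \delta .
```

The first condition is the "likely data" clause, and the second is the uniform posterior robustness clause over that data.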
There are two aspects to these results. The first is to establish the plausibility itself; this is done by showing uniform convergence to zero of ranges of posterior quantities, and it forms the mathematical foundation of the program. The second is to provide actual sample size prescriptions for a specific goal to be attained; this forms the applied part of the program.
For instance, for testing that the mean of a multinormal distribution belongs to some (measurable) set B, the range of the posterior probability of the hypothesis converges to 0, uniformly for all likely X and uniformly in B, at a rate 1/√n. In the one-dimensional case, the range of the posterior mean converges to 0 uniformly for all likely X, uniformly over the class of all Lipschitz functions, at a rate 1/n. These results assume conjugate priors. If log π has a bounded gradient, then in any arbitrary dimension, a remarkably strong robustness obtains. For instance, any pair of HPD credible sets of a given level are guaranteed to be visually identical for large n. This is proved by showing that uniformly in X, the Hausdorff distance between the two sets goes to 0 at a rate 1/√n.
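As a quick numerical illustration of the 1/n rate for the range of the posterior mean in the one-dimensional conjugate case, the sketch below uses a normal model with known variance and a small, purely illustrative class of conjugate N(μ0, τ²) priors (the specific priors and constants are our assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # known sampling standard deviation

# Hypothetical finite class of conjugate N(mu0, tau^2) priors, standing in
# for the family T; entries are (mu0, tau).
priors = [(-1.0, 1.0), (0.0, 2.0), (1.0, 0.5)]

def posterior_mean(xbar, n, mu0, tau):
    # Conjugate normal update: precision-weighted average of xbar and mu0.
    w_data = n / sigma**2
    w_prior = 1.0 / tau**2
    return (w_data * xbar + w_prior * mu0) / (w_data + w_prior)

def range_of_posterior_means(n):
    # Draw one dataset of size n, then compute the spread of posterior
    # means across the prior class -- the "range" whose rate is 1/n.
    x = rng.normal(0.0, sigma, size=n)
    xbar = x.mean()
    means = [posterior_mean(xbar, n, mu0, tau) for mu0, tau in priors]
    return max(means) - min(means)

for n in (100, 1000, 10000):
    print(n, range_of_posterior_means(n))
```

Since each posterior mean equals x̄ plus a prior-dependent correction of order 1/n, the printed ranges shrink by roughly a factor of 10 for each tenfold increase in n, matching the 1/n rate claimed above.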
It is demonstrated that, much as in classical theory, the rates and the calculations are different in nonregular models. In particular, the classical rate for the regular case can be maintained uniformly over a broad class of loss functions. These results are for the uniform case.
Keywords: Prior, posterior, uniform robustness, confidence sets, risks, hypothesis tests, nonregular, nonconjugate