A blog post by Professor Christian Robert mentioned a paper by Professors James Berger and Tom Sellke, which I downloaded several years back but never got around to reading.
J. O. Berger, T. M. Sellke, “Testing a Point Null Hypothesis: The Irreconcilability of P Values and Evidence”, Journal of the American Statistical Association, March 1987, 82(397), Theory and Methods, 112-122.
I even overlooked the paper when I lectured at my former employer about the American Statistical Association’s statement on statistical significance and p-values. But it’s a great paper. The abstract is below.
The problem of testing a point null hypothesis (or a “small interval” null hypothesis) is considered. Of interest is the relationship between the P value (or observed significance level) and conditional and Bayesian measures of evidence against the null hypothesis. Although one might presume that a small P value indicates the presence of strong evidence against the null, such is not necessarily the case. Expanding on earlier work [especially Edwards, Lindman, and Savage (1963) and Dickey (1977)], it is shown that actual evidence against a null (as measured, say, by posterior probability or comparative likelihood) can differ by an order of magnitude from the P value. For instance, data that yield a P value of .05, when testing a normal mean, result in a posterior probability of the null of at least .30 for any objective prior distribution. (“Objective” here means that equal prior weight is given the two hypotheses and that the prior is symmetric and nonincreasing away from the null; other definitions of “objective” will be seen to yield qualitatively similar results.) The overall conclusion is that P values can be highly misleading measures of the evidence provided by the data against the null hypothesis.
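The “.05 p value versus roughly .30 posterior probability” gap in the abstract can be checked numerically. Any symmetric prior that is nonincreasing away from the null is a mixture of symmetric uniform distributions, so the lower bound over that class is attained by minimizing over uniform priors U(-k, k) on the alternative. Here is a small sketch of that calculation (my own illustration, not code from the paper):

```python
# Lower bound on P(H0 | z) for a two-sided test of a normal mean,
# z = 1.96 (p-value ~ .05), prior weight 1/2 on H0, with the prior
# on the alternative ranging over uniforms U(-k, k). Minimizing over k
# recovers the bound over all symmetric, nonincreasing priors.
from scipy.stats import norm
from scipy.optimize import minimize_scalar

z = 1.96  # observed statistic; two-sided p-value = 2 * (1 - Phi(1.96)) ~ .05

def post_prob_null(k):
    """Posterior probability of H0 given z, alternative prior U(-k, k)."""
    f0 = norm.pdf(z)                                     # likelihood at theta = 0
    m1 = (norm.cdf(z + k) - norm.cdf(z - k)) / (2 * k)   # marginal under H1
    return f0 / (f0 + m1)                                # equal prior weights

res = minimize_scalar(post_prob_null, bounds=(0.01, 20), method="bounded")
print(round(res.fun, 3))
```

The minimum comes out near .29 (the paper’s Table reports .290 for this class), an order of magnitude above the p value of .05, which is the abstract’s point.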