Richard D. Morey
Jan 5, 2018


Hi Zoltan,

I don’t have any problem, in principle, with lowering the default alpha, though I’m not convinced it is necessary. My main problem is with the argument itself, which I think misconstrues the relationship between Bayesian evidence (formal, concerning changes in credences) and frequentist evidence (informal, concerning error rates). There’s no general mapping between the two, and whether p=.05 is “weaker” than some amount of Bayesian evidence will depend on the models. So any general claim that such a mapping exists misses critical aspects of both classical statistics (the relationship between one- and two-sided tests) and Bayesian statistics (the importance of the prior models), muddying the waters.
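
To make the model-dependence concrete, here is a minimal Python sketch (my illustration, not anything from the original exchange) computing the JZS Bayes factor of Rouder et al. (2009) for a one-sample t-test whose result lands exactly at p=.05, under several Cauchy prior scales on the effect size. The sample size and prior scales are arbitrary choices; the point is only that the same p-value corresponds to different Bayes factors depending on the prior model.

```python
import numpy as np
from scipy import stats, integrate

def jzs_bf10(t, n, r):
    """BF10 for a one-sample t-test with a Cauchy(0, r) prior on effect size."""
    nu = n - 1
    # Marginal likelihood of the data under H0 (up to a shared constant).
    null_lik = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        # Under H1, delta | g ~ N(0, g) with g ~ InvGamma(1/2, r^2/2),
        # which makes the effect-size prior delta ~ Cauchy(0, r).
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * stats.invgamma.pdf(g, a=0.5, scale=r**2 / 2))

    alt_lik, _ = integrate.quad(integrand, 0, np.inf)
    return alt_lik / null_lik

n = 50                                  # hypothetical sample size
t_crit = stats.t.ppf(0.975, df=n - 1)   # t landing exactly at p = .05, two-sided
for r in (0.2, 1 / np.sqrt(2), 1.0, np.sqrt(2)):
    print(f"prior scale r = {r:.3f}: BF10 at p = .05 is {jzs_bf10(t_crit, n, r):.2f}")
```

Depending on r, the Bayes factor at the same p=.05 can fall on either side of 1, which is exactly why no model-free claim about p=.05 being “weak Bayesian evidence” goes through.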

You argue that there is nowhere principled to stop setting an ever lower alpha.

This bit was only meant to refer to the argument from the empirical replication rates. That argument doesn’t single out any particular alpha: whatever alpha you choose, the same argument can be run against that alpha, even without new empirical evidence, because smaller p-values will always be associated with higher replication rates (this is a general property of correlations).
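
The regress-forever structure of that argument shows up in a toy simulation (again mine, with arbitrary numbers): among studies significant at any alpha you pick, the subset with smaller p-values replicates at a higher rate, so the same “lower it further” argument is available against every threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_studies = 30, 100_000
# Arbitrary mix: half true nulls, half modest real effects (d = 0.4).
d = np.where(rng.random(n_studies) < 0.5, 0.0, 0.4)

def simulate_p(d):
    # One-sided one-sample t-test; t is noncentral with nc = d * sqrt(n).
    t = stats.nct.rvs(df=n - 1, nc=d * np.sqrt(n), size=d.size, random_state=rng)
    return stats.t.sf(t, df=n - 1)

p_orig, p_rep = simulate_p(d), simulate_p(d)   # original studies and replications

for alpha in (0.05, 0.005, 0.0005):
    sig = p_orig < alpha          # "significant" originals at this alpha
    strict = p_orig < alpha / 10  # the stricter subset of those results
    print(f"alpha = {alpha}: replication rate {np.mean(p_rep[sig] < alpha):.2f} "
          f"(all significant) vs {np.mean(p_rep[strict] < alpha):.2f} (p < alpha/10)")
```

At every alpha, the stricter subset replicates better, so the replication-rate argument never tells you where to stop.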

You imply costs and benefits should be rigorously argued to get to this default

Actually, I wouldn’t argue this, as you’ll see in Part 3 :). I don’t think one can justify a choice of alpha at all.

put people off doing single experiments to make dramatic claims, which is precisely what we want to put people off, right?

Yes, but what we put in its place is important. Making dramatic claims on the basis of a single p=.0049 result is not an improvement; at least, not if we want a robust psychological science! Give me a chain of four or five well-designed, transparently planned experiments showing parametric manipulation of an effect on a well-validated, relevant dependent variable, and forget the default criterion.
