Andrew Gelman points to an article by Irene Pepperberg, "The Fallacy of Hypothesis Testing." I'm not sure I agree with everything in the article, but this paragraph caught my eye:
Third, I've learned that the scientific community's emphasis on hypothesis-based research leads too many scientists to devise experiments to prove, rather than test, their hypotheses. Many journal submissions lack any discussion of alternative competing hypotheses: Researchers don't seem to realize that collecting data that are consistent with their original hypothesis doesn't mean that it is unconditionally true. Alternatively, they buy into the fallacy that absence of evidence for something is always evidence of its absence.
Gelman responds:
... I imagine many of my social science colleagues could present a defense of hypothesis testing. (Just to be clear, I think we're talking here about the idea of posing and testing hypotheses, not the textbook statistical methods called "hypothesis testing." The hyp testing that Pepperberg is talking about could just as easily be done using confidence intervals or whatever; her real distinction, I think, is between studies that are exploratory and studies that are designed to test particular scientific theories.)
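As an aside, Gelman's parenthetical remark that the same test "could just as easily be done using confidence intervals" can be made concrete with a small simulation. The sketch below is purely illustrative; the simulated sample, the null value, and the significance level are all assumptions. It shows the textbook fact that a two-sided one-sample t-test at level alpha rejects the null exactly when the corresponding (1 - alpha) confidence interval excludes the hypothesized value.

```python
# Illustrative sketch only: simulated data, assumed null value and alpha.
# A two-sided one-sample t-test at level alpha rejects H0: mean = mu0
# exactly when the (1 - alpha) confidence interval excludes mu0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical sample
mu0, alpha = 0.0, 0.05

# Textbook one-sample t-test of H0: mean = mu0
t_stat, p_value = stats.ttest_1samp(x, popmean=mu0)

# The matching (1 - alpha) confidence interval for the mean
se = stats.sem(x)
ci_low, ci_high = stats.t.interval(1 - alpha, df=len(x) - 1,
                                   loc=x.mean(), scale=se)

print(f"p-value            : {p_value:.3f}")
print(f"95% CI for the mean: ({ci_low:.3f}, {ci_high:.3f})")
print("test rejects H0    :", p_value < alpha)
print("CI excludes mu0    :", not (ci_low <= mu0 <= ci_high))
```

The last two lines always agree, which is the sense in which the test and the interval carry the same information about the null.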
The general criticism seems to be that hypothesis testing is conducted in the absence of competing models. But if different models lead to the same hypothesis test, then rejecting the null tells you little about which model is right; the real question is how to differentiate between the alternative models.
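A hedged sketch of that point, with details chosen purely for illustration: suppose a linear-trend model and a quadratic-trend model would both lead to rejecting the same null of "no trend." The rejection alone does not tell the two alternatives apart; comparing the fitted models directly (here via AIC under an assumed Gaussian error model) does. The data, models, and comparison criterion below are all assumptions made for the example.

```python
# Illustrative sketch: two competing models that imply the same null rejection,
# distinguished by a direct model comparison (AIC for Gaussian least squares).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(scale=0.3, size=x.size)  # simulated data

def gaussian_aic(y, fitted, n_params):
    """AIC (up to an additive constant) for a least-squares fit with Gaussian errors."""
    n = len(y)
    rss = np.sum((y - fitted) ** 2)
    return n * np.log(rss / n) + 2 * n_params

results = {}
for name, degree in [("linear", 1), ("quadratic", 2)]:
    coefs = np.polyfit(x, y, degree)        # fit the competing model
    fitted = np.polyval(coefs, x)
    results[name] = gaussian_aic(y, fitted, n_params=degree + 2)  # coefficients + sigma

for name, aic in results.items():
    print(f"{name:9s} AIC = {aic:.1f}")
# Either model would reject a flat-line null; the AIC comparison addresses
# the separate question of which alternative fits better.
```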