Comments on PanCrit.Org: Stephen Ziliak and Deirdre McCloskey: The Cult of Statistical Significance
(blog by Chris Hibbert)

Deirdre McCloskey (2009-12-13):

Dears,

We're glad to get support from Algosome (and even from the original review, which is strangely harsh considering that he agrees with us: with such friends, who needs enemies?). But we beg to disagree on one important point. It is not true that statistical significance tells us that an effect is real. Realness is not an on/off characteristic in a science: in math and philosophy, yes; in history or physics, no. In a science we always want to know How Big. In some contexts an insignificant coefficient can be important, and very commonly a significant one (the correlation of GDP with ice-cream fat content) is not important. Bigness is (you can tell from its name) quantitative. It is never qualitative, to be decided by some characteristic inhering in the number itself, independent of an exercised human judgment. And Algosome would probably agree that always setting p = .05 is therefore irrational. But Algosome is quite right that there is then a second step (having gotten a regression coefficient), which is a judgment for the particular scientific or policy purpose. It is to ask: How Big is Big?

Sincerely,

Deirdre McCloskey

Anonymous (2009-08-19):

The previous commentator says that "Statistical significance is an indicator of whether the effect is real." That is simply not true.

Statistical significance merely says that the probability of sampling error is low. But that is far from implying that the effect is real. Many other errors, such as omitted-variable bias and measurement error, can keep the effect from being real. There is no alternative but to look at the size of the effect to judge whether it is real.

Algosome (2009-05-17):

This isn't really hard, but it's amazing how many people don't get it. It's a two-step process: statistical significance is an indicator of whether an effect is <I>real</I>, while statistical power is an indicator of <I>potential</I> importance. With a small sample, you may find a big effect purely by accident. If a result fails the reality criterion, you should ignore it regardless.

Unfortunately, the publish-or-perish paradigm of academic advancement requires researchers to promote their results no matter how small or useless those results actually are. This puts them effectively in collusion with a marketplace that wants to promote products even if the products are totally worthless. Nobody is motivated to take the second step and differentiate products by effectiveness except the poor consumer, who is not in a position to know that better information could be obtained, and who is usually not educated enough to understand what it would mean even if it were made available.

<I>Caveat emptor.</I>
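The distinction the commenters are arguing over, that statistical significance and practical importance are separate questions, can be shown with a small numeric sketch. The numbers below are hypothetical, chosen only for illustration; they are not from the book or the comments. A simple z statistic (effect divided by its standard error) makes a trivially small effect "significant" when the sample is huge, while a large effect in a small pilot study fails the conventional 5% threshold:

```python
import math

def z_statistic(effect, sd, n):
    """z = effect / standard error, for a one-sample mean comparison."""
    return effect / (sd / math.sqrt(n))

# Hypothetical case 1: tiny effect, enormous sample.
# A 0.02-standard-deviation effect is negligible for most purposes,
# yet with n = 1,000,000 it is wildly "statistically significant".
z_tiny = z_statistic(effect=0.02, sd=1.0, n=1_000_000)  # z = 20.0

# Hypothetical case 2: large effect, small pilot study.
# A half-standard-deviation effect could matter a great deal,
# yet with n = 10 it misses the conventional cutoff (|z| < 1.96).
z_big = z_statistic(effect=0.5, sd=1.0, n=10)  # z ~ 1.58

print(f"tiny effect, n=1,000,000: z = {z_tiny:.2f} (significant, unimportant)")
print(f"big effect,  n=10:        z = {z_big:.2f} (insignificant, possibly important)")
```

Significance answers only "could sampling error alone plausibly produce this?"; McCloskey's second step, "How Big is Big?", is a separate judgment about the magnitudes (0.02 vs. 0.5 here) for the scientific or policy purpose at hand.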