HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives (Economics, Cognition, And Society)

Stephen T. Ziliak, Deirdre N. McCloskey · 4 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives (Economics, Cognition, And Society)" by Stephen T. Ziliak, Deirdre N. McCloskey.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
“McCloskey and Ziliak have been pushing this very elementary, very correct, very important argument through several articles over several years and for reasons I cannot fathom it is still resisted. If it takes a book to get it across, I hope this book will do it. It ought to.” ―Thomas Schelling, Distinguished University Professor, School of Public Policy, University of Maryland, and 2005 Nobel Prize Laureate in Economics

“With humor, insight, piercing logic and a nod to history, Ziliak and McCloskey show how economists―and other scientists―suffer from a mass delusion about statistical analysis. The quest for statistical significance that pervades science today is a deeply flawed substitute for thoughtful analysis. . . . Yet few participants in the scientific bureaucracy have been willing to admit what Ziliak and McCloskey make clear: the emperor has no clothes.” ―Kenneth Rothman, Professor of Epidemiology, Boston University School of Public Health

The Cult of Statistical Significance shows, field by field, how “statistical significance,” a technique that dominates many sciences, has been a huge mistake. The authors find that researchers in a broad spectrum of fields, from agronomy to zoology, employ “testing” that doesn’t test and “estimating” that doesn’t estimate. The facts will startle the outside reader: how could a group of brilliant scientists wander so far from scientific magnitudes? This study will encourage scientists who want to know how to get the statistical sciences back on track and fulfill their quantitative promise. The book shows for the first time how wide the disaster is, and how bad for science, and it traces the problem to its historical, sociological, and philosophical roots.

Stephen T. Ziliak is the author or editor of many articles and two books. He currently lives in Chicago, where he is Professor of Economics at Roosevelt University. Deirdre N. McCloskey, Distinguished Professor of Economics, History, English, and Communication at the University of Illinois at Chicago, is the author of twenty books and three hundred scholarly articles. She has held Guggenheim and National Humanities Fellowships. She is best known for How to Be Human* Though an Economist (University of Michigan Press, 2000) and her most recent book, The Bourgeois Virtues: Ethics for an Age of Commerce (2006).
HN Books Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
I just finished a pretty interesting book on this topic:

"The Cult Of Statistical Significance":

http://www.amazon.com/Cult-Statistical-Significance-Economic...

It basically goes through a bunch of examples, mostly in economics but also in medicine (Vioxx), where statistical significance has failed us and people have died as a result. As someone who works with statistics for a living, I found the book interesting, but it was pretty depressing to learn that most scientists use t-tests and p-values because that's the status quo and the easiest way to get published. The authors suggest a few remedies: publishing the size of your coefficients and using a loss function. In the end, they make the point that statistical significance is different from economic significance, political significance, etc.
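To make the point concrete, here is a minimal sketch (my own illustration, not an example from the book) of how statistical significance diverges from practical significance: with a large enough sample, a two-sample t-test flags a negligible difference as highly significant.

    import numpy as np
    from scipy import stats

    # Two groups whose true means differ by a trivial 0.005 units.
    # With a million observations each, the t-test still calls this
    # "significant": the p-value measures detectability, not size.
    rng = np.random.default_rng(0)
    n = 1_000_000
    a = rng.normal(loc=0.000, scale=1.0, size=n)
    b = rng.normal(loc=0.005, scale=1.0, size=n)

    t, p = stats.ttest_ind(a, b)
    print(f"p-value:     {p:.4f}")                    # typically well below 0.05
    print(f"effect size: {b.mean() - a.mean():.4f}")  # ~0.005, practically nil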

Deirdre McCloskey (an economist) has an entire book devoted to this[1]. Her article here: http://www.deirdremccloskey.com/docs/jsm.pdf covers the main argument in the book. One important point she makes is that not all fields misuse p-values and statistical significance. In physics significance is almost always used appropriately, while in social sciences (including economics) statistical significance is often conflated with actual significance.

[1]: http://www.amazon.com/The-Cult-Statistical-Significance-Econ...

sukilot
That difference is likely because reality won't believe you if you get the significance wrong, but people will.
Feb 02, 2015 · RA_Fisher on Science’s Biggest Fail
I have a simple criterion for a summary judgement of the reliability of results:

a) Is the data made available?
b) Is it a Bayesian analysis?
c) Has a power study been offered?

As a statistician, I have a keen awareness of the ways that p-values can depart from truth. You can see Optimizely's effort to cope (https://www.optimizely.com/statistics). You can read about it in The Cult of Statistical Significance (http://www.amazon.com/The-Cult-Statistical-Significance-Econ...). This Economist video captures it solidly (http://www.economist.com/blogs/graphicdetail/2013/10/daily-c...).

The key missing component is the bias towards positive results. Most scientists have taken only two statistics classes. In those classes they learn a number of statistical tests, but much less about how things can go wrong. Classic "just enough to be dangerous."
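As a quick illustration of that bias (a simulation of my own, not taken from the linked sources): if only positive, significant results get published, the published literature systematically overstates the true effect.

    import numpy as np
    from scipy import stats

    # 2,000 small studies of the same true effect (0.1), but only the
    # positive, significant ones get "published".
    rng = np.random.default_rng(1)
    true_effect, n = 0.1, 50
    published = []
    for _ in range(2000):
        sample = rng.normal(true_effect, 1.0, size=n)
        t, p = stats.ttest_1samp(sample, 0.0)
        if p < 0.05 and t > 0:  # the filter the literature applies
            published.append(sample.mean())

    print(f"true effect:           {true_effect}")
    print(f"mean published effect: {np.mean(published):.2f}")  # roughly 3x too large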

To cope, I have a personal set of criteria for a quick first sort of papers, a heuristic for quality. I assign some degree of belief (Bayes, FTW!) that authors who offer the full data set alongside their conclusions are confident in their own analysis; that if they're using Bayesian methods, they've had more than two stats classes; and that if they do choose frequentist methods, offering a power study shows they understand the finite nature of real data in the context of asymptotic models and assumptions.
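For concreteness, here is a hedged sketch of what such a power study looks like, using statsmodels (the effect size and power targets are illustrative assumptions of mine):

    from statsmodels.stats.power import TTestIndPower

    # How many observations does a two-sample t-test need to detect a
    # small effect (Cohen's d = 0.2) with 80% power at alpha = 0.05?
    n_per_group = TTestIndPower().solve_power(
        effect_size=0.2,  # smallest effect considered worth detecting
        alpha=0.05,
        power=0.8,
    )
    print(f"observations needed per group: {n_per_group:.0f}")  # ~394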

I suspect other statisticians feel this way, because I've heard as much privately. What do you think of my criteria?

Fomite
I've encountered way too much "It must be good because it's Bayes!", too much "It's Bayes because I used MCMC, ignore my flat uninformative prior...", etc. to put much stock in that as a metric.

I'm also involved in enough medical research where data just can't ethically be made available that, well...you and I clearly disagree.

xrange
R A Fisher is now promoting Bayesian analysis???

https://www.google.com/?gws_rd=ssl#q=Fisher+%22prominent+opp...

tjradcliffe
These are reasonable criteria.

I also tend to be very sensitive to failure to correct for multiple hypotheses, particularly when people start sub-setting data: "We looked for an association between vegetables in the diet and cancer incidence, but only found it between kale and lung cancer." This happens all the time, and people report such associations as if theirs were the only experiment being run, whereas in fact they have run some combinatorially huge list of alternative hypotheses and, unsurprisingly, have found one that looks significant at p=0.05 (which is a ridiculously lax acceptance criterion).
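A small simulation (mine, not tjradcliffe's) makes the problem vivid: test enough null associations and one will clear p=0.05 by chance alone.

    import numpy as np
    from scipy import stats

    # 20 vegetable/cancer pairs with NO true association: at p = 0.05 we
    # still expect about one spurious "discovery" from chance alone.
    rng = np.random.default_rng(2)
    false_hits = 0
    for _ in range(20):
        vegetable = rng.normal(size=200)  # intake, pure noise
        cancer = rng.normal(size=200)     # incidence, pure noise
        r, p = stats.pearsonr(vegetable, cancer)
        false_hits += p < 0.05

    print(f"spurious 'significant' associations: {false_hits}")
    # A Bonferroni correction would instead require p < 0.05 / 20.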

I also pretty much categorically reject case-control studies: http://www.tjradcliffe.com/?p=1745 They are insanely over-sensitive to confounding factors. They can and do have legitimate uses to guide further research, but should never be used as the basis of policy or action beyond that.
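Here is a toy simulation of that over-sensitivity (the coffee/smoking scenario is my own hypothetical, not from the linked post): a confounder that drives both exposure and outcome manufactures an association out of nothing.

    import numpy as np

    # Smoking raises both coffee drinking and cancer risk; coffee itself
    # does nothing. The crude odds ratio is badly inflated, and
    # stratifying on the confounder makes the "association" vanish.
    rng = np.random.default_rng(3)
    n = 100_000
    smoker = rng.random(n) < 0.3
    coffee = rng.random(n) < np.where(smoker, 0.8, 0.3)
    cancer = rng.random(n) < np.where(smoker, 0.10, 0.01)

    def odds_ratio(exposed, outcome):
        a = np.sum(exposed & outcome)
        b = np.sum(exposed & ~outcome)
        c = np.sum(~exposed & outcome)
        d = np.sum(~exposed & ~outcome)
        return (a * d) / (b * c)

    print(f"crude odds ratio:       {odds_ratio(coffee, cancer):.2f}")                    # ~3
    print(f"among non-smokers only: {odds_ratio(coffee[~smoker], cancer[~smoker]):.2f}")  # ~1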

There's also a sense one gets from many papers that the researchers are black-boxing their statistical analysis: that they have plugged the numbers into some standard package and take the results at face value. While I appreciate that maybe not everyone can have a solid technical grasp of this stuff, it always bothers me when I see that because it is far too easy to generate garbage if you don't understand precisely what you're doing.

[Disclaimer: I am an experimental and computational physicist who has never taken a stats course, but believe myself to be competently self-educated in the subject and have spent part of my career doing data analysis professionally using primarily Bayesian methods.]

The field receives a pretty scathing review in The Cult of Statistical Significance. The summaries offered there are pretty damning: http://www.amazon.com/The-Cult-Statistical-Significance-Econ...
HN Books is an independent project and is not operated by Y Combinator or Amazon.com.