Kevin Drum mulls an annoying trend in social science research and harrumphs.
Call me naive, but WTF? I have no training at all, and I’m keenly aware of the problems Gelman is talking about. How is it possible to complete a PhD program and not have this kind of thing drilled into your consciousness for all time? Can there really be people out there who are being trained that “statistically significant” = real, and nothing more? It’s mind boggling. Are there any PhD programs out there that would fess up to this?
I attended a scientific Bachelor’s program and earned graduate degrees in a few different fields, so I might as well weigh in here. Short answer: yes, there are a lot of them. Long answer: it depends on what field of science you have in mind. In ‘descriptive’ fields like ecology, where manipulating the thing you study ranges from hard to impossible, students tend to walk away with a grasp of statistics that is comprehensive and exacting. Most will learn the proper use of complex software packages like SAS that let you apply many levels of analysis to huge data sets. Laboratory fields, on the other hand, often hand out Ph.D.s to people who only know how to run a two-tailed t-test in Excel, maybe, and who give a blank, uncomprehending stare if you suggest they test for outliers or apply the stricter standards owed to a post-hoc hypothesis, one they thought up after the data came in. That tendency makes a kind of sense: in laboratory disciplines, tighter control over conditions lets you narrow the scope of an experiment to a couple of variables at most, so the statistics rarely need to get complicated.
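To make that contrast concrete, here is a minimal Python sketch of the gap in question: the bare t-test on one side, and the outlier check plus multiple-comparison correction a post-hoc hypothesis calls for on the other. The data, the 3-SD outlier cutoff, and the figure of ten implicit comparisons are all invented for illustration, not anyone’s published protocol.

```python
# A sketch, with made-up data, of the gap between the Excel-style
# t-test and the extra checks a post-hoc hypothesis calls for.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=20)  # hypothetical control measurements
treated = rng.normal(11.5, 2.0, size=20)  # hypothetical treated measurements

# Step 1: the lone two-tailed t-test, which is often where analysis ends.
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"two-tailed t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Step 2: a basic outlier screen (flag points more than 3 SD from the
# mean; the cutoff is an arbitrary choice for this example).
z = np.abs(stats.zscore(treated))
print("suspect outliers in treated group:", treated[z > 3])

# Step 3: if the hypothesis was dreamed up after eyeballing many possible
# comparisons, correct for them. Assuming ten implicit comparisons, a
# simple Bonferroni correction shrinks the significance threshold:
n_comparisons = 10
alpha = 0.05
adjusted = alpha / n_comparisons
print(f"Bonferroni-adjusted threshold: {adjusted:.4f}")
print("still significant after correction:", p_value < adjusted)
```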
In the end it always comes down to what reviewers expect. Professional scientists have to keep a lot of balls in the air at once, and publish or perish is a real thing with a potent ability to focus the attention. Most people thus do not have the energy to learn statistical rules that have no bearing on whether their papers or grants get accepted. During my oceanography years I got raked over the coals for some pretty arcane rules of correlation and inference, so I cracked the books in order to move my career forward. In my life as a cell biologist I have published in second-tier Nature journals and in PNAS, several times each, and nobody has yet asked for anything more demanding than printing my significance values in a larger font size. How did I get those numbers? Should I have used an ANOVA or a post-hoc test? Whatever, it’s all good. People in each field learn as much about statistics as they need to keep publishing and scoring grants. For us lab rats, the important stuff could fit on the margins of Arthur Laffer’s cocktail napkin.
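For what it’s worth, the textbook answer to my own rhetorical question is ‘both, in that order’: an ANOVA to ask whether any group mean differs, then a post-hoc test to ask which pairs differ while controlling for the multiple pairwise comparisons. A hedged sketch in Python, again with invented numbers (scipy.stats.tukey_hsd requires SciPy 1.8 or newer):

```python
# One-way ANOVA followed by a Tukey HSD post-hoc test, on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wild_type = rng.normal(100.0, 10.0, size=15)  # hypothetical readings
mutant_a = rng.normal(112.0, 10.0, size=15)
mutant_b = rng.normal(103.0, 10.0, size=15)

# Step 1: the ANOVA asks whether ANY group mean differs from the others.
f_stat, p_value = stats.f_oneway(wild_type, mutant_a, mutant_b)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Step 2: only if the ANOVA is significant do you ask WHICH pairs differ,
# using a post-hoc test that corrects for the multiple comparisons.
if p_value < 0.05:
    result = stats.tukey_hsd(wild_type, mutant_a, mutant_b)
    print("Tukey HSD pairwise p-values:")
    print(result.pvalue)
```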
Then again, cell biology is a relatively ‘hard’ field with a lot of turnover in any area interesting enough to make the big journals, so BS tends to get caught and refuted pretty fast. You can half-ass something and see it published, but most people don’t.
Regarding sociology, I read most of that stuff for entertainment value only. In most cases there are just too many variables for anyone to control. If something comes up with an unusually strong and consistent result, such as the Stanford marshmallow experiment or the Milgram obedience experiments, or if a number of independent groups consistently report the same thing, then I tend to pay attention. Other stuff occupies that dangerous nexus of being easy to do, having no established best practices yet, and being very hard to control properly. fMRI is my prime candidate for emergency bathroom reading only, followed by any field with -omics in its name that is younger than five years old. Best practices take a little while to develop. Any field of ‘science’ practiced by MDs is a tetchy topic for me, at least; everyone knows about the professional rivalry between MDs and the other kind of doctor, and I am a Ph.D. Let’s leave it at that.