How many sprinters does it take to change a lightbulb, or what about important but not significant?
I often have discussions with non-methodologists about statistics and research methods. What can I say, it's what I do. Statistics is one of those things that everyone has to learn, but most people feel they just don't understand. So, instead of trying to understand it, they create hard-and-fast rules for themselves to get around it. It's like chronic double-clickers; you know, that one person at work or in your family who double-clicks EVERYTHING and doesn't know how to right-click? Generally, it's people who don't understand, or aren't comfortable with, computers. They know that some things require double-clicks and it seems to work most of the time, so instead of figuring out that hyperlinks only need one click, they just double-click everything. And they use the menus for everything, including cutting and pasting, because they find CTRL-C and CTRL-V too advanced. Even after you show them several times that, "Hey, look how much faster it is to do it this way!" they say something like, "That's too complicated for me," won't relinquish the mouse or keyboard to you, and then still complain that "it takes forever to do that on a computer." And no, this is definitely not my dad. In any way whatsoever.

In the research world, it's similar: people who ONLY use ANOVAs, or who rely on normality statistics to figure out whether a distribution is normal (just graph it!), for instance... but I'm getting off topic.
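Since I brought it up, here's what I mean by "just graph it." This is a quick sketch in Python with made-up, simulated data (not anyone's real measurements): a formal test like Shapiro-Wilk will happily flag a practically trivial departure from normality in a big sample, while a histogram or Q-Q plot shows you the actual shape and lets you decide whether it matters.

```python
# A minimal sketch with simulated (made-up) data: a formal normality test can
# flag a departure that is practically trivial, while a histogram and Q-Q plot
# let you judge the shape directly.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
# Roughly normal data with a mild right skew -- visually unremarkable
x = rng.normal(loc=50, scale=10, size=5000) + rng.exponential(scale=5, size=5000)

w, p = stats.shapiro(x)  # Shapiro-Wilk normality test
print(f"Shapiro-Wilk: W = {w:.4f}, p = {p:.4g}")  # with n = 5000, p is usually < 0.05

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(x, bins=40)                       # look at the shape directly
ax1.set_title("Histogram")
stats.probplot(x, dist="norm", plot=ax2)   # points near the line = close to normal
ax2.set_title("Normal Q-Q plot")
plt.tight_layout()
plt.show()
```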
There is the argument that statistics limit progress, or that requiring an effect to be statistically demonstrated is restrictive: small but important effects in small populations can make it mathematically very difficult to obtain a p-value less than 0.05.
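To see why that argument has teeth, here's a quick simulation. It's just a sketch with invented numbers (the sprint times, effect size, and group sizes are made up purely for illustration), but it shows how a real, meaningful effect in a small group clears the p < 0.05 bar only a small fraction of the time: the study is underpowered, not the effect unreal.

```python
# A minimal sketch (simulated, hypothetical numbers) of the "important but not
# significant" problem: even when a real, meaningful effect exists, a small
# sample rarely yields p < 0.05 -- the study is simply underpowered.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 8          # e.g., a small pool of elite sprinters (hypothetical)
true_effect = 0.05       # a real 0.05 s improvement in 100 m time (hypothetical)
sd = 0.10                # between-athlete variability in seconds (hypothetical)

hits = 0
n_sims = 10_000
for _ in range(n_sims):
    control = rng.normal(10.00, sd, n_per_group)
    treated = rng.normal(10.00 - true_effect, sd, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    hits += p < 0.05

print(f"Power at n={n_per_group}/group: {hits / n_sims:.0%} of studies reach p < 0.05")
# Typically well under 50% -- the effect is real, just hard to "prove".
```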