Why not 0.06?
Steve Accera asked me an interesting question a few months back. Why is 0.05 the cut-off for a p-value to be considered statistically significant?
To understand the answer to this question, we first have to talk about how to interpret a p-value in the first place. Read More...
More is not better: when statistics turn bad (it's just not as entertaining as when animals do)
As with most things, more is not necessarily better. In statistics, more tests actually make you more prone to being accidentally wrong (or, in statistics lingo, to spurious findings). Today, I'd like to talk about a basic concept that is taught to students in their introductory statistics courses (and hence, you would think, known to most researchers): the effect of multiple significance testing. Read More...
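The core of the multiple-testing problem can be sketched in a few lines. Assuming every null hypothesis is actually true and the tests are independent (a simplifying assumption for illustration), the chance of at least one false positive grows rapidly with the number of tests:

```python
def familywise_error_rate(m: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across m independent
    significance tests, each run at level alpha, when every null
    hypothesis is in fact true."""
    return 1 - (1 - alpha) ** m

# One test at alpha = 0.05: a 5% chance of a spurious "finding".
print(round(familywise_error_rate(1), 3))   # -> 0.05
# Twenty tests: the chance of at least one spurious result balloons.
print(round(familywise_error_rate(20), 3))  # -> 0.642
```

In other words, run twenty comparisons on noise and you have roughly a two-in-three chance of "discovering" something that isn't there.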
This episode is brought to you by the letter p
What does the p-value actually mean? Read More...
Abstracts! Huh! What are they good for?
It's pretty safe to say that most fitness end-users, and even trainers, do not have the time or the interest (or, in some cases, the access) to read full papers. Most people have easy access to PubMed abstracts, and are quite happy to read the "chunk-style" format of an abstract (thanks, Lou) because abstracts are generally short and fairly easy to understand (because brevity forces simplicity most of the time). Read More...
Gymnastics makes you short.
Bias. It's everywhere. But one of the most annoying biases I've seen in a lot of the popular fitness publications, including websites, blogs, and print magazines, is what I call the "sport causality bias", or "elite athlete selection bias". Read More...
Different kinds of important.
In clinical research, there are two kinds of important: the important kind and the unimportant kind. Read More...
The beauty (and truth) of randomization
One very important distinguishing feature of a study is whether it is a randomized controlled trial or just a regular controlled trial. Read More...
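What randomization buys you can be sketched in a few lines. The helper below (a hypothetical illustration, not from any particular trial) shuffles the participant list and splits it in half, so that known and unknown confounders end up balanced across groups on average:

```python
import random

def randomize(participants: list, seed=None) -> tuple:
    """Randomly assign participants to treatment and control groups.

    Shuffling before splitting means assignment is independent of any
    participant characteristic, which is what licenses a causal reading
    of any difference between groups.
    """
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = [f"subject_{i}" for i in range(10)]
treatment, control = randomize(subjects, seed=1)
```

Contrast this with a non-randomized controlled trial, where whoever (or whatever) decides group membership can quietly smuggle in a confounder.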