If research isn’t the “real world”, then what can it show us?

When I was in physics class during my undergraduate degree, I remember doing an experiment to demonstrate the laws of momentum. I don’t remember the specifics of the experiment, but I do remember using an air-table (similar to an air hockey table, but smaller) and pucks (similar to air hockey pucks, but smaller) to demonstrate how conservation of momentum occurs in an almost frictionless environment.

Now, I don’t know about you, but I don’t live in a frictionless world. Those of you who live in air-hockey-table-like environments, I can’t speak for you. And back then, I probably thought very similarly to the way some people think about research now: “If nothing on Earth is truly frictionless, short of air-hockey tables and mag-lev devices, why do we have to do this experiment?” The answer back then was, “Because we’re telling you to do it, so quit bitching already and push the friggin’ puck.” But I’ve tried that answer on people who ask, “We never use supplement X/training technique Y in tightly controlled situations, so how do the results of these studies help us?” and a) it doesn’t go over very well, and b) they’re really confused about some mysterious puck.

The problem with dismissing research because it’s imperfect or impractical is that it throws the baby out with the bathwater. It’s analogous to saying that no workout program is perfect, so why bother working out at all?

So, if the real world isn’t a research study, how does research inform us about the so-called real world?

Most well-designed intervention research is, well…designed to look at the effect of a single change (or collection of changes) on some attribute. A particular diet’s effect on weight loss; a drug on death rates from heart attacks; a training program on strength. It’s also designed to isolate the change as much as possible so that a causal relationship can be established between the change and the effect. Failure to isolate the change (either methodologically or statistically) blurs the causal relationship and muddies the waters considerably.

A failure to establish a causal relationship means one of two things: 1) the change under investigation doesn’t cause any effect, or 2) the conditions are too muddled for the effect to show through the noise created by failing to isolate the change sufficiently. In the first case, the change of interest doesn’t do anything, so it’s worth abandoning as long as we’re sure that’s why we failed to find an effect.
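
To put some numbers behind that second case, here’s a toy simulation in Python (every value is invented purely for illustration; this isn’t any real study’s data) showing how the very same true effect can be obvious under tight control and invisible when uncontrolled noise piles up:

```python
# Toy illustration: the same true effect, detected or lost depending on
# how much uncontrolled noise surrounds it. All numbers are invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n = 30               # subjects per group
true_effect = 2.0    # the real benefit of the intervention

for label, noise_sd in [("well-isolated (low noise)", 2.0),
                        ("poorly isolated (high noise)", 10.0)]:
    control = rng.normal(0.0, noise_sd, n)
    treated = rng.normal(true_effect, noise_sd, n)
    _, p = ttest_ind(treated, control)
    print(f"{label}: p = {p:.3f}")
```

In the first run, the effect shows through; in the second, the identical effect drowns in the noise, which is exactly the “muddled conditions” scenario above.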

But in the second case, there are things we need to ask ourselves before we can justify abandoning a line of investigation. After all, we don’t want to accidentally throw away something that could actually be helpful. The first question is whether the effect we’ve observed is important enough to continue. Even muddled, if the signal is large enough to show through the noise, it may be worth pursuing. If the effect just isn’t that important (as we’ve already seen in several reviews), it may as well not be there, and therefore, it’s worth dropping and spending our energies elsewhere.

But noise notwithstanding, it is important not to discard things that are potentially helpful. This means that when we design trials on newer interventions, we want to give the intervention the best chance of showing us that it is capable of creating an effect.

So when it comes to training and diet studies, selecting the right group of people to study becomes vitally important. It is for this reason that a good designer looking to determine if a change works at all will choose a study population that has the most potential to change. That population is the one furthest away from the theorized “optimal effect”. Subjects who have the most distance to travel are the most likely to travel some distance. If everyone starts at 7 out of 10, there are only 3 points of room to move, and the chances of seeing an effect are statistically diminished.
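
A quick sketch of why that matters statistically (the 40%-gap-closing model and every number here are hypothetical, chosen only to illustrate the ceiling effect):

```python
# Toy model: the intervention closes a fixed fraction of each subject's
# remaining gap to a ceiling of 10, so subjects far from the ceiling
# produce a bigger, easier-to-detect signal. All numbers are invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n, noise_sd = 25, 1.5

for label, start in [("far from ceiling (3/10)", 3.0),
                     ("near ceiling (7/10)", 7.0)]:
    room = 10 - start                        # points available to gain
    gain = 0.4 * room                        # intervention closes 40% of the gap
    control = rng.normal(0.0, noise_sd, n)   # change score without intervention
    treated = rng.normal(gain, noise_sd, n)  # change score with intervention
    _, p = ttest_ind(treated, control)
    print(f"{label}: expected gain = {gain:.1f} points, p = {p:.4f}")
```

Same relative response in both groups, but the group with more room to move generates a larger absolute signal, and therefore a much better chance of a statistically detectable effect.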

“But all beginners improve no matter what they do,” is the most common argument against the utility of this type of research. However, this is where the comparison group comes into play. An appropriately designed trial accounts for “beginner’s luck” by creating two comparable groups. If the intervention is useful, then the intervention group should improve more than the control group that doesn’t receive it. If the improvement isn’t better, or is too small for us to care about, then the experiment is essentially over.
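
Here’s that logic as a minimal sketch (again, all numbers invented): both groups improve, and the only question is whether the intervention group improves more.

```python
# Toy illustration: everyone improves ("beginner gains"), but the control
# group lets us isolate the intervention's added effect. Numbers invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n = 30
beginner_gain = 5.0   # improvement everyone gets just from training at all
extra_effect = 2.0    # additional improvement from the intervention

control = rng.normal(beginner_gain, 2.0, n)
treated = rng.normal(beginner_gain + extra_effect, 2.0, n)

print(f"control improved by {control.mean():.1f}")
print(f"treated improved by {treated.mean():.1f}")
_, p = ttest_ind(treated, control)
print(f"between-group difference: {treated.mean() - control.mean():.1f} (p = {p:.4f})")
```

The control group improves plenty on its own; the intervention only earns its keep if the between-group difference is both real and big enough to matter.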

Once an intervention has been shown to work in individuals far from optimal, we can then work on generalizing to individuals who might actually use the intervention (i.e., non-beginners). And the same process applies.

Although somewhat less applicable to some exercise-based studies, testing individual components of a nutrition or training program/strategy under highly controlled conditions gives us an idea of whether the new supplement/exercise/diet/thing has a snowball’s chance in hell of working in the real world. If it can’t work under optimal conditions, then it’s not going to work in sub-optimal ones.

