STAHP! Please, just STAHP.
There’s a dark humour quote in surgery, “There is no surgery so simple that a resident can’t screw it up.” I’ve decided to make up an adage, “There is no concept or idea that the fitness/nutrition industry can’t corrupt,” because I never thought I would have to ever, ever, write a post like this.
It astonishes me that there is now a clear line in the sand between “evidence-based” trainers and non-“evidence-based” trainers. There’s even a third camp, which I’ll nickname the “anti-evidence-based” trainers, where it’s not just a passive I-don’t-really-read-much-research stance, but an active I-explicitly-ignore-and-deride-research one.
It’s astonishing because evidence-based practice is just a decision-making framework. It differs from inheritance-based practice because it involves questioning the inherent biases that exist in decision-makers, from educational bias (I do it this way because this is how my teachers were taught, and their teachers, and their teachers before them) to recall bias (I do it this way because I place more weight on the times this has worked than on the times it has not). The main vehicle for this questioning is usually research, where there are explicit attempts to minimize (or at least consider) bias (and there are a number of reasons for this), but the scientific literature is by no means the only method by which questions are asked and answered.
However, this weird schism between evidence-based and anti-evidence-based trainers is unique to the world of fitness, partly because there are virtually no accountability mechanisms for trainers and nutritionists. Litigation risk for trainers is low, as reflected by the fact that some certified trainers can buy $1 million in liability insurance for about $200 a year, while others don’t carry any at all. The choice to consider research in decision-making is voluntary, and as such the evidence-based approach has become a marketing tool, as has the choice to consciously disregard it.
There are two huge disadvantages to this phenomenon:
1) Splitting the industry along this line only serves to harm its overall ecosystem.
The major benefit of evidence-based practice is that it puts the patient/client first. Decisions are made with as much information as you can get at the time of the decision. Sometimes that means knowing that there isn’t any research to help your decision, and that your experience and the experiences of the colleagues with whom you come into contact are about as complete a picture as you’re going to get. But if you never search for the research in the first place, you can’t know that that particular scenario is the one that applies. This overall knowledge means that when you present your client/athlete with a decision, they’re going in with the most up-to-date knowledge from all sources. Ultimately, they’re the ones assuming the risk; you’re just the programmer. Shouldn’t they know as much as possible about what they’re getting into?
Defining oneself as ‘evidence-based’ alienates those trainers and practitioners who would probably eventually adopt the approach if they weren’t fighting a perceived loss of market share and circling the wagons to create a counter-movement. Using evidence-based practice as a marketing tool to lure clients away from those who are not actively advertising as evidence-based creates a perception of loss and an incentive to build a counterculture. Ultimately, the party that loses is the client caught in the middle. If you take an evidence-based approach, you should recognize that its widespread adoption is what spurs new evidence. The longer this schism exists, the slower this field progresses, because less than 100% of it is racing toward a common goal.
2) Adopting an evidence-based approach does not necessarily make you a better trainer; it just makes your decisions more informed.
Yeah, I said it. No, it’s not a typo. It doesn’t always make me a better surgeon either.
Investing in learning research-based knowledge (which, unfortunately, is what evidence-based practice seems to be most associated with; more on this in a later post, possibly) only makes you a better trainer if what you learn actually results in an advantageous outcome for your athlete or client. Sometimes it’s a measurable benefit; sometimes it’s an unmeasurable prevention. A lot of the time, the decision you make is exactly the same with or without research-based knowledge, assuming you had the same knowledge before you started reading and didn’t learn anything new from all that research.
A case of how this pans out in my field is carpal tunnel syndrome. The definitive surgical treatment for carpal tunnel syndrome (compression of the median nerve under the transverse carpal ligament) is to release the transverse carpal ligament. This has been done for decades and has been handed down from surgeon to resident for just as long. We now have lots of evidence on how effective it is and how it compares against non-surgical treatments, but amongst surgical options, that’s it. Whether I learned it from searching the literature or from my professors happens not to matter in this case: the outcome is the same. Experience and literature converge. The fact that I have searched and read the literature exhaustively has not, in this case, made me a better surgeon; only a well-informed one. I’m almost certain that there are no other surgical options, and I’m not blindly offering the surgery to my patients without knowing whether there is a better option out there.
If I told you that you could, in an afternoon, know how something was going to work in 30-50 more people than you’ve trained in your life, would you spend that afternoon? What about 100-200 more people than you’ve trained in your life? 1,000? The “advantage” that evidence-based practice provides is that it makes your decisions more informed. If your decisions are at the border of human understanding, you’ll know it. If your decisions are not backed up, you’ll know it. If your decisions are well-supported, you’ll know it. It allows you to use the data of hundreds, maybe thousands, of experiences that you will never have, to do what’s best for your client. It also prevents you from repeating the mistakes of the past (you are NOT the first person to discover Ancel Keys’ starvation paper) and from having to “re-invent the wheel” in your development as a trainer (you don’t have to go through a ‘linear periodization’ programming phase before you try undulating periodization).
But that doesn’t necessarily mean that it changes your decisions. There are at least two other factors in an evidence-based approach that can drive your decision: your own experience and education (“expertise,” as it’s called in EBP papers), and your client’s preferences and limitations. Hence, it doesn’t equate to definitively getting better results; it’s just a better approach. From a responsibility/ethics point of view, and under the presumption that you’re practicing an “athlete-centered” approach, you’re better. From an outcomes point of view, it doesn’t always follow.
I haven’t even touched on how many trainers who market themselves as “evidence-based” don’t really understand the approach itself; or how critical appraisal isn’t really a skill that you can just learn on your own; or how “reading thousands of scientific articles” is not actually a credential. More posts for another time.
Similar to how great craftsmanship should go unnoticed, a true evidence-based approach shouldn’t need an explicit statement to draw attention to it. So stop. Just stop using what is a baseline decision-making framework as a marketing tool. Just do it. Your excellence will come through either way.