HIIT vs Steady State–Again.

Thanks to Mike Knowles who suggested this week’s study (which, incidentally, was incorrectly cited in that publication you sent me, but that’s not your fault 🙂 ).

Commitment can be such a harsh word for some–especially scientists. I did my undergraduate degree at Queen’s University, where I was a part of the faculty of Arts and Science. I’m sure this isn’t what they meant when they named the faculty, but it’s apropos to my point today, because even in “Science”, there is Art. As researchers we strive so hard to be objective and reductionist that we lose sight of the qualitative judgments that are required to do our work. Too often, we are blinded by “objective” p-values, effect sizes and sample size “calculations”. I think it’s an advanced concept (and certainly one that I did not grasp fully until I had almost finished my PhD) that research design and statistical analysis are as much a subjective “art” as they are a “science”. Anyone can apply rigid principles and create _a_ study (for instance, “Is my brother smarter than a hamster?”). This is not to say that research cannot be objective, but that it takes a certain “finesse” to create a study that is not only objective and as unbiased as possible, but also practical and relevant to current practice (or, the “real world”).

Commitment in science is about sticking to your guns and justifying why you do what you do. It means not leaving the decision-making up to a p-value. It means deciding what effects are practically important BEFORE you run the significance tests to see if they’re “statistically” important. It means realizing that statistics are used to SUPPORT your hypothesis, not to prove it. And it means figuring out what you’re going to need to optimize your tests so that you will ACTUALLY answer the question, as opposed to leaving the type I and type II error rates up to chance.

So, that being said, the topic this week goes back to HIIT vs Steady state training (I know, it’s a recurring theme. I promise to review another topic next week).

Gibala MJ et al. Short-term sprint interval versus traditional endurance training: similar initial adaptations in human skeletal muscle and exercise performance. Journal of Physiology 575:901-911, 2006.

I’m not going to reprint the abstract here because you already know that reading abstracts on their own is not the most useful exercise.

Rationale:

There has been some evidence to show that short-term sprint interval training (SIT) improves factors that are associated with improvements in endurance activity performance, but no one has put them head-to-head in a standardized way to see if SIT might be a “…time-efficient strategy to induce muscle and performance adaptations similar to high volume endurance training.”

The question being asked by the researchers is, “Does SIT do as good a job as ET (endurance training) in improving exercise capacity and molecular and cellular adaptations in skeletal muscle?”

[So, right off the bat, this is NOT a fat-loss study, or a weight-reduction study. Let’s get that clear right away. What it is, though, is a study that may address the popular concern about HIIT: that by doing it, one does not accrue the “health benefits” that have been shown in ET studies.

The problem with this study is that, by its question, it’s really an equivalence (or non-inferiority) trial. That is, the researchers are trying to show that there is no difference, rather than trying to prove that SIT is better than ET. And the main limitation of “traditional” research design is its inability to positively show “no difference”, since you cannot accept the null hypothesis; you can only reject it or fail to reject it. If you reject it, you are saying that a difference as extreme as (or more extreme than) the one you observed would have been unlikely to arise by chance alone. If you fail to reject it, you are saying only that you lack sufficient evidence to show that the difference you observed is anything more than chance. But lack of evidence is not positive proof of no difference. As the saying goes, “Absence of proof is not the same as proof of absence.”

So in the best-case scenario, where the researchers fail to reject the null hypothesis (i.e. that there is no difference between the two groups with respect to exercise capacity and muscle adaptations), we will still be unable to conclude that there IS no difference, due to the overriding limitation of the study design itself. Equivalence (and non-inferiority) trials are a totally different animal from traditional superiority designs, and should generally not be attempted without some expert guidance.]
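
If you really wanted to claim “no meaningful difference”, the usual approach is an equivalence test such as the two one-sided tests (TOST) procedure: you decide, up front, the largest difference you’d consider trivial, and then test whether the observed difference falls entirely inside that margin. Here’s a minimal sketch of the idea with made-up numbers (the margin, the data and the group sizes are all hypothetical, not from this paper):

```python
# Two one-sided tests (TOST) for equivalence: a minimal sketch with
# hypothetical data -- NOT the study's data or analysis.
import numpy as np
from scipy import stats

# Hypothetical % improvements in time-trial performance for two groups
sit = np.array([9.0, 11.5, 8.2, 12.0, 10.3, 9.8, 11.1, 9.2])
et  = np.array([7.0,  8.1, 6.5,  9.4,  7.8, 6.9,  8.3, 7.5])

# Step 1 (the "commitment" step): decide the equivalence margin BEFORE testing.
# Here we (hypothetically) say any between-group difference smaller than
# +/- 3 percentage points is practically unimportant.
margin = 3.0

# Step 2: two one-sided t-tests against the margins.
# H0a: difference <= -margin   vs   H1a: difference > -margin
_, p_lower = stats.ttest_ind(sit + margin, et, alternative='greater')
# H0b: difference >= +margin   vs   H1b: difference < +margin
_, p_upper = stats.ttest_ind(sit - margin, et, alternative='less')

# Equivalence is claimed only if BOTH one-sided tests are significant.
p_tost = max(p_lower, p_upper)
print(f"TOST p-value: {p_tost:.3f} -> "
      f"{'equivalent within the margin' if p_tost < 0.05 else 'equivalence not shown'}")
```

The key point is that the “no difference” conclusion now rests on a margin you committed to in advance, not on a failed significance test.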

In their introduction, the authors state, “We hypothesized that both SIT and ET would increase muscle oxidative capacity and 750 kJ time trial performance, given the major contribution from aerobic metabolism during this task. In contrast, we hypothesized that SIT but not ET would increase muscle buffering capacity and 50 kJ time trial performance given the large contribution from non-oxidative metabolism during this task.”

[I like the second hypothesis, which is concrete and useful, but the language of the first one demonstrates the lack of commitment to answering the question. We are not interested in knowing that both SIT and ET increase oxidative capacity and time trial performance. We already know that they do. If we didn’t know that they did, then this study could not be justified. We are interested in knowing what they do COMPARED to each other. If you never COMPARE the two protocols directly, you CANNOT make generalizable statements about them.

It used to be common to see “classically bad” designs where researchers would put two groups through separate protocols, compare each group against itself, and say things like, “Well, group A improved significantly when compared to themselves, and group B didn’t improve significantly when compared to themselves. We didn’t bother comparing them to each other, but we’re going to conclude that group A is better than group B.” Absurd, eh? It’s a similar concept here, just with a slightly different twist.]
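
To see why that logic fails, here’s a toy illustration (the numbers are fabricated and have nothing to do with this study): group A’s change is significant against itself, group B’s isn’t, yet the direct comparison between the two groups shows nothing.

```python
# Toy illustration of why "A improved significantly, B didn't" is not the same
# as "A is better than B". All numbers are made up for demonstration.
import numpy as np
from scipy import stats

# Pre-to-post change scores (e.g., seconds improved) for two hypothetical groups
change_a = np.array([3, 4, 5, 4, 3, 5, 4, 4])    # consistent, modest changes
change_b = np.array([0, 6, -2, 8, 1, 7, -1, 5])  # similar average change, but noisier

# Within-group tests (each group compared against itself, i.e. change vs. zero)
_, p_a = stats.ttest_1samp(change_a, 0.0)
_, p_b = stats.ttest_1samp(change_b, 0.0)

# The comparison that actually answers "is A different from B?"
_, p_between = stats.ttest_ind(change_a, change_b)

print(f"Group A vs itself:  p = {p_a:.4f}")        # clearly significant
print(f"Group B vs itself:  p = {p_b:.4f}")        # not significant
print(f"A vs B directly:    p = {p_between:.4f}")  # nowhere near significant
```

The only test that actually compares the two protocols is the last one, and in this toy data it says the groups are indistinguishable, despite the “significant vs. not significant” pattern within groups.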

Methods:

Sixteen men were recruited to this study. They were physically active students who, “…took part in some form of recreational exercise two to three times per week (jogging, cycling, etc). None of the subjects were engaged in regular training for a particular sporting event.” The 16 men were randomly assigned to one of two groups: the SIT or the ET group.

[Insert my usual comment about lack of sample size justification, and lack of reporting on random allocation methods here.]
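
For what it’s worth, here’s the kind of back-of-the-envelope check I’d like to have seen: with 8 subjects per group, what’s the smallest standardized effect you have a reasonable chance of detecting? (This is my own illustration, not a calculation from the paper.)

```python
# Rough power check for n = 8 per group (my illustration, not from the paper).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Solve for the standardized effect size (Cohen's d) detectable with
# 80% power at alpha = 0.05, two-sided, with 8 subjects in each group.
detectable_d = analysis.solve_power(nobs1=8, alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
print(f"Smallest detectable effect with n=8/group: d ~ {detectable_d:.2f}")
# Anything smaller than roughly d ~ 1.5 (a very large effect) is likely to be
# missed, which is exactly why "no significant difference" is so uninformative here.
```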

Pre-testing: Subjects all went through VO2 max and Wingate testing. Also, they all practiced a 50 kJ and 750 kJ time trial to familiarize themselves with the testing protocol for the study itself. This all happened on separate days, and at least 3 days before baseline testing.

Baseline and post-testing: Subjects all did the 50 kJ and 750 kJ time trials, with no verbal, time or physiological feedback (i.e. no one was encouraging them to go faster, or harder; no one told them how long they had been pedaling for; no one told them what their physiological outcomes were). The only feedback they got was estimated distance on a computer monitor (50 kJ was approximately 2 km, 750 kJ was approximately 30 km). Subjects also had a resting needle muscle biopsy taken from the vastus lateralis muscle. The testing order was: 1) the muscle biopsy, 2) the 50 kJ time trial 1 h later, and 3) the 750 kJ time trial 48 h after that.

Training: Forty-eight hours after the 750 kJ test, subjects began the training protocol. Both groups trained 3 days a week (Monday, Wednesday, Friday) for 2 weeks. The SIT group’s protocol was 30 seconds of maximal cycling with 4 minutes of recovery between sprints. Escalation was 4 repetitions for sessions 1 and 2, 5 reps for sessions 3 and 4, and 6 reps for sessions 5 and 6. The ET group’s protocol was 90-120 minutes of continuous cycling at 60% VO2max. Escalation was 90 minutes for sessions 1 and 2, 105 minutes for sessions 3 and 4, and 120 minutes for sessions 5 and 6. All training sessions were supervised by study personnel.
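
A quick bit of arithmetic on those protocols (my own tally from the numbers above, counting the 4-minute recoveries between sprints as part of the SIT time commitment) makes the contrast in training time obvious:

```python
# Back-of-the-envelope tally of training time from the protocols described above.
sit_reps_per_session = [4, 4, 5, 5, 6, 6]               # 30 s all-out sprints
et_minutes_per_session = [90, 90, 105, 105, 120, 120]   # continuous cycling

# Actual hard exercise in the SIT protocol
sit_work_min = sum(r * 0.5 for r in sit_reps_per_session)             # 15 min total
# Time commitment including the 4 min recovery between sprints
sit_total_min = sum(r * 0.5 + (r - 1) * 4 for r in sit_reps_per_session)

et_total_min = sum(et_minutes_per_session)                            # 630 min

print(f"SIT: ~{sit_work_min:.0f} min of actual sprinting, "
      f"~{sit_total_min / 60:.1f} h total time commitment over 2 weeks")
print(f"ET:  ~{et_total_min / 60:.1f} h of cycling over 2 weeks")
```

So, by design, we’re comparing roughly 15 minutes of actual hard pedaling (a couple of hours in the lab, counting recoveries) against more than ten hours of continuous cycling.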

Additional data collected: Subjects kept diet logs.

Muscle analysis: I’m not trying to say that this stuff isn’t important, but unless you’re interested in the really fine details of this analysis, I’m not going to summarize things like the specific primer sequences and antibodies used in the mRNA and protein analyses. They measured muscle oxidative capacity by quantifying the amount of cytochrome c oxidase (COX, a mitochondrial enzyme) expressed in the muscle biopsy (both from a protein standpoint and an mRNA expression standpoint). Muscle buffering capacity was measured with a previously published protocol, and muscle glycogen was measured with fluorometry.

[The big take-home message here, and this really is the biggest strength of this study, is that there was deliberate planning to ensure that the SIT group did substantially less volume and spent substantially less time training than the ET group.]

Results:

Time trials: Both groups significantly improved their time trial times. The SIT group improved by 10.1% and the ET group improved by 7.5% in the 750 kJ time trial, but the actual time improvements were not reported, nor was any variance reported around the improvement numbers. No significant difference was found between the two groups with respect to this improvement. They did report the mean times (with standard error) for the 50 kJ test. The SIT group went from 117 s (SE 6 s) to 113 s (SE 6 s). There were some problems with the reporting of the ET group: the researchers reported that the ET group improved by 3.5% in the 50 kJ test, but reported pre-values of 115 s (SE 9 s) and post-values of 122 s (SE 10 s). I think there’s a typo in this article somewhere.
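
The arithmetic behind my suspicion, using the means as reported:

```python
# Percent changes in the 50 kJ time trial, computed from the reported means.
def pct_change(pre_s, post_s):
    """Negative = faster (an improvement), positive = slower."""
    return (post_s - pre_s) / pre_s * 100

print(f"SIT: 117 s -> 113 s = {pct_change(117, 113):+.1f}%")  # about -3.4%, i.e. faster
print(f"ET:  115 s -> 122 s = {pct_change(115, 122):+.1f}%")  # about +6.1%, i.e. SLOWER
# A 3.5% *improvement* can't come from 115 s -> 122 s, so something in the
# reported numbers doesn't add up.
```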

Muscle oxidative capacity: Maximal activity of COX improved within both groups. The researchers failed to detect a significant difference between the two groups, but stated, “No difference between groups”.

Muscle buffering capacity: Muscle buffering capacity increased by 7.6% (no raw numbers or variance reported) in the SIT group, vs. 4.2% in the ET group. No significant difference was detected between groups.

Muscle glycogen content: Same result as above–both groups improved when compared to themselves, no difference detected between the two groups.

[In this case, it just happened, by chance, that the multiple uncorrected significance tests didn’t produce a spurious significant finding.]
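
Just to put a number on that: if you run several uncorrected tests at alpha = 0.05, the chance of at least one spurious “significant” finding climbs quickly (a rough illustration assuming independent tests, which won’t hold exactly here):

```python
# Family-wise error rate for k uncorrected tests at alpha = 0.05,
# assuming (roughly) independent tests.
alpha = 0.05
for k in (1, 4, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>2} uncorrected tests -> ~{fwer:.0%} chance of at least one false positive")
```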

Interpretation:

The difficulty with interpreting this study is that it doesn’t actually definitively answer the research question it set out to answer. “Absence of proof is not proof of absence.” The real question is whether the 4 seconds that separate the SIT group from the ET group in the 750 kJ time trial are important, REGARDLESS of the statistics. Whether or not the p-value was greater than 0.05 is actually pretty irrelevant either way. If the 4 seconds are important, then they should have done the study with enough power to detect that difference. If they’re not important, then even with a p-value less than 0.05, the difference is STILL not important. Obviously, I don’t think being able to cycle 4 seconds faster over approximately 30 km is that big of a deal, particularly in a population of recreational athletes, so in some ways, the conclusion is the same. But the difference between my interpretation and their interpretation is that I’m not relying on the statistics to support or not support my argument. Four seconds isn’t important. That’s it. The statistics can say what they want; those 4 seconds aren’t going to get any more important. In THIS population, anyways.
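
To make “power the study for the difference you actually care about” concrete, here’s how that decision would translate into a design step. Both numbers below are hypothetical placeholders I made up (the paper doesn’t give us a minimally important difference or a between-subject SD for the 750 kJ trial); the point is the workflow, not the values.

```python
# Sketch of powering a study for a pre-specified, practically important difference.
# Both inputs below are hypothetical placeholders, NOT values from the paper.
from statsmodels.stats.power import TTestIndPower

meaningful_diff_s = 60.0   # hypothetical: smallest 750 kJ time difference we'd care about
between_subject_sd = 90.0  # hypothetical between-subject SD of time-trial times

effect_size = meaningful_diff_s / between_subject_sd  # Cohen's d
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          ratio=1.0, alternative='two-sided')
print(f"Cohen's d = {effect_size:.2f} -> ~{n_per_group:.0f} subjects per group for 80% power")
```

Decide what matters first, then recruit enough people to detect it; that’s the commitment I was talking about at the top.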

The problem, though, is that my interpretation also has no further back-up to demonstrate that it’s a valid one. And this is because the statistics and, more importantly, the design were never really set up to answer the question the researchers said they set out to answer.

Bottom line:

This study is weak evidence that, with respect to time-trial times and physiological muscle predictors of endurance performance, SIT has similar benefits to ET, but with substantially less time commitment. The good news, I suppose, is that if you’re already doing HIIT, at least it’s not evidence that you shouldn’t be. But I think that’s more a result of luck than of deliberate design.

P.S. I’m a little disappointed that no one seemed to get the title of last week’s tutorial and its parallel to the song, “War”, and the chorus, “War, what is it good for? Absolutely nothin’!” All that effort for a catchy title… (I even had the “Huh!” in it.)

