Screwed if you do, screwed if you don’t: People don’t want to read what they don’t want to hear


This WOULD be a catastrophe. This week, I was asked to write a commentary for Fitocracy on what has now become known as the “Red Meat Will Kill You” Study. The fallout in the blogosphere has been pretty dramatic (and by dramatic, I mean drama-filled and theatrical). There have been a few well-written, thoughtful commentaries, but by far, the bulk of the criticism has come from the general “correlation, not causation” crowd. While I think this is, by and large, a HYUGE step forward in general research literacy, it also makes me wonder if it’s just another sign of polarized, blinded thinking.

There is a fundamental difficulty in measuring long-term outcomes that are far removed from single-point events and from continuous, repeated exposures. The three mainstream ways to get at the question, “Does X make you live longer/shorter?”, are (a toy sketch contrasting all three follows the list):

1) A prospective cohort design: This is when you select subjects based on their exposure (usually the risk factors) and measure how the outcome (usually death) varies between different levels of exposure.

2) A case-control design: This is when you select subjects based on their outcome (usually death) and look backwards at how the exposure (usually the risk factors) varied across different levels of the outcome.

3) A randomized controlled trial: This is when you randomly assign subjects to one exposure out of two or more possible exposures (usually a risk-factor modification) and then see how the outcome varies between the different exposure levels.
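To make the selection logic concrete, here is a minimal Python sketch of how each design samples subjects. Everything in it is invented for illustration (the population, the risks, the sample sizes); it is not a reanalysis of anything.

```python
import random

random.seed(0)

# Toy population: each person has an exposure (eats a lot of red meat or not)
# and an outcome (died early or not), with a built-in association.
def make_person():
    exposed = random.random() < 0.5
    p_death = 0.30 if exposed else 0.20  # arbitrary illustrative risks
    return {"exposed": exposed, "died": random.random() < p_death}

population = [make_person() for _ in range(100_000)]

def risk(group):
    return sum(p["died"] for p in group) / len(group)

def odds_of_exposure(group):
    e = sum(p["exposed"] for p in group)
    return e / (len(group) - e)

# 1) Prospective cohort: select on exposure, then compare outcome rates.
exposed = [p for p in population if p["exposed"]][:5000]
unexposed = [p for p in population if not p["exposed"]][:5000]
print("Cohort risk ratio:", round(risk(exposed) / risk(unexposed), 2))

# 2) Case-control: select on outcome, then compare exposure odds.
cases = [p for p in population if p["died"]][:2000]
controls = [p for p in population if not p["died"]][:2000]
print("Case-control odds ratio:",
      round(odds_of_exposure(cases) / odds_of_exposure(controls), 2))

# 3) RCT: randomly assign the exposure, then observe the outcome
# (only possible here because the "lives" are simulated).
def trial_arm(assign_exposed, n=5000):
    p_death = 0.30 if assign_exposed else 0.20
    return sum(random.random() < p_death for _ in range(n)) / n

print("RCT risk ratio:", round(trial_arm(True) / trial_arm(False), 2))
```

The only thing the sketch is meant to show is the difference in what gets selected first: exposure, outcome, or a coin flip.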

In the case of both observational designs (prospective cohort or case-control), you’ll always run into the ‘correlation, not causation’ issue. It’s one of the fundamental limitations of any observational study. The cohort design usually satisfies the condition of temporal relationship when it comes to death (you ate more red meat before you died, and you can’t have eaten it after you died); sometimes satisfies the condition of strength, if it has both practical and statistical significance; and, depending on how the exposure is measured, satisfies the criterion of dose-response (the more red meat you eat, the more likely you are to die earlier than a lower-red-meat eater). Consistency comes only with repeated studies; plausibility and coherence depend on how well you can argue and back up your findings; alteration by experimentation can only be proven by a subsequent study; and consideration of alternate explanations is interpretation-dependent and not objectively defensible.
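For the dose-response criterion specifically, the check is essentially whether risk climbs as exposure climbs. A toy sketch with made-up rates (none of these numbers come from the study):

```python
import numpy as np

# Hypothetical death rates per 1,000 person-years by servings/day of red meat.
# Invented purely to illustrate what a dose-response check looks like.
servings = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
death_rate = np.array([12.0, 13.1, 14.3, 15.8, 17.2])

# A crude linear trend as a dose-response summary:
slope, intercept = np.polyfit(servings, death_rate, 1)
print(f"~{slope:.1f} extra deaths per 1,000 person-years per additional serving/day")

# Monotonicity check: does risk rise at every step up in exposure?
print("Monotonic increase:", bool(np.all(np.diff(death_rate) > 0)))
```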

Individuals too blinded by “levels of evidence” ratings and the like tend to dismiss the utility of observational studies for getting at questions of causality, which brings us to the randomized controlled trial, because everyone “knows” it’s THE SH!T.

The thing with a randomized controlled trial in behavioural studies (which includes nutrition) is that it’s not really feasible to keep people in a trial until they die, unless they’re going to die soon. That means you cannot directly measure death as an outcome; you have to select a proxy for death and make assumptions about it. You also have to assume that the “more/less favourable” outcome value can be maintained until death. There’s a randomized controlled trial (which I have not read and only use here as an example) where the authors concluded that converting type 2 diabetics to an ovo-lacto-vegetarian diet resulted in lower protein excretion in the urine (protein excretion is a sign of diabetic kidney damage). If we assume that this is a well-conducted trial, does that mean that type 2 diabetics should all go ovo-lacto? Opponents of this kind of recommendation would point out that the study does not measure mortality, and therefore we cannot conclude that reducing protein in the urine will actually make them live longer. And further, I would hazard to say that even if a study (which would have to be observational in nature) showed that elevated levels of protein in the urine were ASSOCIATED with early death, you’d have to throw that out too, because that’s just a CORRELATION.
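To make the proxy-outcome point concrete, here is roughly what the analysis of such a trial boils down to, using invented numbers rather than anything from the trial mentioned above:

```python
import random
import statistics

random.seed(1)

# Hypothetical end-of-trial urinary protein excretion (mg/day) in two arms.
# All values are made up for illustration.
def simulate_arm(mean, sd, n=60):
    return [random.gauss(mean, sd) for _ in range(n)]

vegetarian = simulate_arm(mean=140, sd=40)
control = simulate_arm(mean=170, sd=40)

diff = statistics.mean(control) - statistics.mean(vegetarian)
print(f"Mean difference in protein excretion: {diff:.0f} mg/day")

# The inferential leap described above: even a clean difference on this proxy
# says nothing, by itself, about whether anyone in either arm lives longer.
```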

Personally (and this is probably going to make me GROSSLY unpopular), I think taking pot shots at the fundamental limitations of any study design is easy pickings (something I was guilty of in my earlier days; good judgement comes from experience, and experience comes from bad judgement), particularly when the results of a study don’t agree with your own beliefs. It’s easy to spin a fundamental limitation into a complete rejection. No matter how the question is studied, a study is screwed if it measures death directly (because that’s probably not feasible, or at least prohibitively difficult outside of a cohort context), and screwed if it doesn’t (AND screwed again because, inevitably, the proxy can only be justified by an association).

The reality is that the “Red Meat Will Kill You” study shows that, after accounting for the contributions of factors such as BMI, gender, cholesterol, physical activity and smoking (with the limitations that come with the data collection method), there is STILL a portion of the variance in earlier death left over that can be explained by self-reported higher red meat consumption. The interpretation of this left-over association, in my opinion, is not entirely clear due to a lack of further analysis and reporting (which I alluded to in my Fitocracy post). To throw out the study because correlation does not imply causation is to practice blind science, and only highlights a fairly superficial understanding of what a study design can and cannot accomplish.
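For readers who want to see what “the variance left over after adjustment” looks like in practice, here is a minimal sketch on simulated data using the lifelines library and a Cox-type survival model of the general kind used for these questions. Every variable name, effect size, and number below is invented for illustration; it is not the study’s analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort with covariates loosely named after the adjustments above.
df = pd.DataFrame({
    "red_meat_servings": rng.gamma(shape=2.0, scale=0.5, size=n),
    "bmi": rng.normal(26, 4, n),
    "male": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "active": rng.integers(0, 2, n),
})

# Simulate survival times where red meat carries a small effect on top of
# a few of the other covariates (this is the "left-over" association).
log_hazard = (0.15 * df["red_meat_servings"] + 0.03 * (df["bmi"] - 26)
              + 0.5 * df["smoker"] - 0.3 * df["active"])
time = rng.exponential(scale=20 * np.exp(-log_hazard))
df["duration"] = np.minimum(time, 25)          # administrative censoring at 25 years
df["event"] = (time <= 25).astype(int)

# Multivariable Cox model: the red_meat_servings coefficient is the association
# that remains after the other covariates have claimed their share.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```

The point of the sketch is only that “adjusted for BMI, smoking, etc.” means the red meat coefficient is estimated alongside those covariates, not that adjustment settles the causal question.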

Is the study overhyped? Probably. But that’s the media’s job. However, in reading the criticisms of the study as they crop up, it feels more like people are trying to protect their current belief systems than to critically analyze what can truly be taken, or not taken, away from a study that, despite a daunting problem, has managed to add another piece to a very large and complicated puzzle. Being a scientist or an evidence-based practitioner means having to confront your beliefs and frame contrary evidence in an appropriate context. Your beliefs should be able to hold up to the substance of a challenge. Summarily dismissing a challenge on a superficial basis only does your belief a disservice.
