Non-nutritive sweeteners: The pain continues

It’s been 2 weeks since the systematic review and meta-analysis on non-nutritive sweeteners was published. Let’s look at how it was reported. If you are interested in my interpretation of the July 2017 systematic review and meta-analysis published in the Canadian Medical Association Journal (CMAJ), which is what these media examples were reporting on, the link is here.

To examine this, I Google-searched the term “nonnutritive OR artificial sweetener study 2017”. The first hit that did not have to do with the CMAJ study appeared around page 5, and that’s where I stopped tallying. My search date was July 31, 2017.

First off, I wanted to see how the top five hits on Google titled their articles. I also read each of the top five articles and compared them to CMAJ’s press release.

Science Daily:
Title: Artificial sweeteners linked to risk of weight gain, heart disease and other health issues
Article: Direct transcription of the press release.

NPR:
Title: Artificial sweeteners don’t help lose weight, review finds
Article: Appears to have interviewed the lead author. Cites one previous study for context. Attempts to balance the viewpoint with a statement from industry.

WNEP:
Title: Study suggests link between artificial sweeteners and weight gain
Article: Appears to have interviewed the lead author. Provides a brief history of artificial sweeteners.

Forbes:
Title: The irony of artificial sweeteners: Not a tool for weight loss, study finds.
Article: Did not interview the authors. Descriptive of the press release, but not a direct transcription.

Fortune:
Title: Artificial sweeteners won’t help you lose weight, according to a sad but important new study
Article: Did not interview the authors. Descriptive of the press release, but not a direct transcription. Some context on health halos and eating habits.

Of the first five pages of search results, 33/50 websites that reported on this study mentioned “weight gain” in the title or in the text appearing on the Google search page. What I found most disappointing, but not surprising, is that none of these media outlets seems to have done any fact-checking or verification beyond the press release or an interview with the authors (is interviewing the party who wrote the news release actually a fact-check?). Perhaps someone who is more media-savvy can address this.

I think the authors (when asked) did a good job of not overselling their study. The weight-gain results reported in media coverage did seem to be confined to the cohort study analysis, but with no real context for them. The graph responsible for the “weight gain” risk came down to three populations studied in the same paper:

Smith JD, Hou T, Hu FB, et al. A comparison of different methods for evaluating diet, physical activity, and long-term weight gain in 3 prospective cohort studies. J Nutr. 2015;145:2527-34.

However, if you read Smith et al (2015), it is a study examining contributing factors that might help explain very gradual weight gain in non-obese individuals over the course of decades. It is not a population looking to lose weight (nor, arguably, a population that needed to lose weight; parts of this population may have remained non-obese despite the gradual gain). The association of artificial sweeteners with weight gain should be placed in this context: non-obese people not looking to lose weight. This utterly changes the research question that can be answered, and therefore calls into question whether you can address risks associated with artificial sweetener use in what most medical experts would consider divergent populations. If you’re looking to comment on whether artificial sweeteners have a place in a weight loss strategy (not the explicit purpose of this meta-analysis, but the context in which many websites chose to frame their reports), can you really include a study in which weight loss was not a goal?

In fact, the goal of the Smith et al (2015) paper was to figure out what happens when you analyze dietary information in cohort data in three different ways:

1) When you measure eating habits at time 0 and then look at how weight changes over 4 years;

2) When you measure how eating habits change over 4 years and then look at how weight changes over 4 years, and

3) When you measure how eating habits change over one 4-year period and then look at how weight changes over the following 4-year period.

It turns out that the results can be VASTLY different, from “diet soda” being associated with about a 0.5 lb gain in weight every 4 years (yes, that is the weight gain being reported in these articles) to being associated with about a 0.3 lb LOSS of weight every 4 years, depending on which analysis you decide to use.
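To make the distinction between the three approaches concrete, here is a minimal sketch in Python using made-up toy data. This is not the authors’ code; the variable names, the random data, and the unadjusted one-predictor regression are all illustrative assumptions standing in for the full models in Smith et al (2015). The point is the structure of the three comparisons, not the numbers it prints.

```python
# A minimal sketch (not the authors' actual analysis) of the three
# cohort analysis strategies described above, using toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical panel: diet-soda servings/day and weight (lb) at three
# visits spaced 4 years apart (t0, t4, t8). All values are simulated.
soda_t0 = rng.poisson(1.0, n).astype(float)
soda_t4 = np.clip(soda_t0 + rng.normal(0, 0.5, n), 0, None)
weight_t0 = rng.normal(170, 25, n)
weight_t4 = weight_t0 + rng.normal(2, 5, n)
weight_t8 = weight_t4 + rng.normal(2, 5, n)

def slope(x, y):
    """Unadjusted OLS slope of y on x (a stand-in for the full models)."""
    return np.polyfit(x, y, 1)[0]

# 1) "Prevalent" analysis: baseline intake vs. weight change over the
#    next 4 years.
prevalent = slope(soda_t0, weight_t4 - weight_t0)

# 2) Concurrent change analysis: change in intake vs. weight change
#    over the same 4-year window.
concurrent = slope(soda_t4 - soda_t0, weight_t4 - weight_t0)

# 3) Lagged change analysis: change in intake over one 4-year window
#    vs. weight change over the following 4-year window.
lagged = slope(soda_t4 - soda_t0, weight_t8 - weight_t4)

print(f"prevalent:  {prevalent:+.2f} lb per serving/day")
print(f"concurrent: {concurrent:+.2f} lb per serving/day")
print(f"lagged:     {lagged:+.2f} lb per serving/day")
```

In real cohort data, these three slopes can point in different directions for the same exposure, which is exactly the divergence Smith et al (2015) documented.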

The authors of Smith et al (2015) conclude that studies using “prevalent analysis” (i.e., measuring diet at time 0 and then looking at weight 4 years later) should be interpreted “with caution” due to problems with misclassification and reporting errors. Ironically, “prevalent analysis” is the one used in the meta-analysis, despite evidence that this analysis is considered inferior and possibly erroneous. As a “fact check”, this basically renders the finding of even a “loose association” of artificial sweeteners with weight gain pointless, and certainly not worthy of a headline of any kind.

At the end of the day, I don’t think the meta-analysis helps to clarify whether artificial sweeteners are either effective or harmful. From the perspective of the randomized controlled trials, there is substantial missing data that would have affected the analysis; from the perspective of the cohort data, most of it rests on a paper whose main conclusion is that the approach used by the authors of the meta-analysis is, in fact, possibly error-prone, with proof of divergent findings under an alternate analysis.

If you don’t use artificial sweeteners, you don’t have to start. If you do use them, you don’t have to stop. I don’t think that this paper helps anyone in making the decision to change their current course of behaviour, and the media reporting on this paper is unsurprisingly misleading, with a clear lack of what I would consider standard fact-verification.

One case, a sample does not make. Is this a one-off, anomalous example of the discrepancy between thorough interpretation and careless reporting? Should media be held to a higher standard? Is this even a reasonable or realistic expectation? Do they have the ability to meet a higher standard? And most importantly, who is reading the science that you are being fed via media outlets?

