I want to start off by stating three really important things:
Fat-phobia is rampant, especially in the medical world, and causes significant mental and physical harm. Being fat is not a moral failing. Fat people should not be treated differently than “straight-sized” people.
Weight is not an indicator of health. Full stop.
There is still so much we don’t know about how adiposity and body composition impact human health. In order to make progress in this space, we need to abandon the entrenched anti-fat bias.
The reason that I embarked on this “fact-checking” project is not to dispute any of the above (which I believe Michael and Aubrey would agree with), but to highlight how much misinformation this podcast is spreading. While my initial goal was to provide a scientific review of the studies and data they reference, I was alarmed by how many basic factual errors the podcast contains. It is one thing for them to misinterpret data and misrepresent study findings because they lack a scientific background; I understand that some people won’t care about some of the misinformation here, or will find my corrections unnecessary. But even if you don’t listen to Maintenance Phase for the science, I think the lack of simple fact-checking is concerning. If it were a one-off or just a couple of minor details, that would be understandable. But as you’ll see below, it’s pervasive throughout the episode: basic numbers from papers are reported wrong, and some details are completely fabricated. There are things in this episode that are just flat-out made up. It’s irresponsible, sloppy journalism.
Ok, with that out of the way, I will get to the content of the episode. I am guessing that the title of this episode is intended to be click-bait, but, as is typical of click-bait titles, the content of the podcast does not actually address the question posed in the title. Here’s why:
This podcast is solely about longevity, not other health outcomes. “Bad for you” =/= “shortens your life.”
As Aubrey and Michael have already said, BMI =/= fat. The meta-analysis that is the focus of this podcast looks at the association between BMI and lifespan. Given that BMI is not an appropriate measure of adiposity, a study that uses it as an independent variable is not telling us anything about the health risks of being fat.
The podcast focuses on a single meta-analysis and the controversy surrounding it, rather than engaging with the larger body of literature about adiposity and health outcomes, including alternative measures to BMI.
A more accurate, albeit less click-baity, title for this episode would be: Two Academics Duke It Out Over Whether BMI Predicts Longevity. Again, if Aubrey and Michael wanted to address whether being fat is bad for you, they should have looked at the literature around adiposity and health complications, but they chose to focus on this Flegal vs. Willett piece instead.
What follows is a non-comprehensive (please point out things I missed) evaluation of the content of the “Is Being Fat Bad For You?” episode. Text in italics is quoted directly from the transcript (any grammatical errors are from the transcription).
But to me, it's very important to acknowledge that stigma against fat people and the belief that fat people are unhealthy, long predates any science showing that. This really good book called Fat: A Cultural History of Obesity, where they identify the first reference of an obesity epidemic was in 1620. The Catholic Church invented the Seven Deadly Sins, and one of them was Gluttony.
The book that Aubrey cites here actually explicitly states that the idea of an “obesity epidemic” in humans did not arise until the late 20th century. The text says, “The notion of an ‘epidemic’ of ‘obesity’ (in cattle) as a ‘form that chimes in with a rather artificial idea of animal beauty’ was part of the debate between ‘contagionists and non-contagionists’ as early as the 1860s. It is only in the very late 20th century that obesity becomes an ‘epidemic’ in humans.” (See screenshot from the book below.) I do not know where 1620 came from, but it’s troubling that Aubrey is distorting the facts here. Additionally, it is well-documented that Hippocrates wrote extensively about obesity in the Hippocratic Corpus in the 5th and 4th centuries BC. He is widely quoted as saying: “Those who are constitutionally very fat are more apt to die quickly than those who are thin.” (Aphorisms 2.44) Galen, Plutarch, and a variety of other ancient writers discussed obesity, as well. That is to say, obesity existed far before 1620. The stigma is not justified, nor were those ancient medical writings necessarily correct, but it is incorrect to suggest that the stigma developed without any basis. Additionally, the Seven Deadly Sins came about in the 6th century (way before 1620), and gluttony is not specific to food - it includes material goods, as well.
The idea was, there was something about Jewishness that gave people diabetes, it was like a genetic predisposition basically.
So yes, this was not a scientifically accurate observation, but I’m not sure why Michael presents this as if it’s a crazy idea. Jews are a diverse population, but there are diseases that are disproportionately prevalent among Ashkenazi Jews (see Tay-Sachs, for example). Jewishness giving people diabetes is not at all the same as Ashkenazi Jews of Eastern European heritage having a genetic predisposition for a specific disease. For Michael to present this as an absurd notion suggests a lack of understanding of how genetic diseases work.
If we think back to the BMI episode, the earliest connotation of the BMI was the fattest 15% of people would be considered overweight. It didn't have anything to do with health risks. It didn't have anything to do with anything, but the fattest among us need to be defined in this way. Then you create a bunch of funding streams to find out why it's so unhealthy and so terrible to be what we already think of as terrible.
This is not what the earliest connotation of BMI was. Ancel Keys put forth what we now call BMI in 1972 (the formula was originally proposed by Quetelet, a mathematician, purely because it was a more stable estimate than other formulas) because he was strongly opposed to the life insurance height and weight tables that were used at the time to define underweight and overweight. Even Keys said that it was a bad measure of body fat, though. Anyway, at its inception, BMI did not have any cut-points. It wasn’t until 1995 that the first uniform categories for BMI were established, by an Expert Consultation Group assembled by the WHO in 1993. The report published in 1995 explicitly says, “Because BMI does not measure fat mass or fat percentage and because there are no clearly established cut-off points for fat mass or fat percentage that can be translated in cut-offs for BMI, the Expert Committee decided to express different levels of high BMI in terms of degrees of overweight rather than degrees of obesity (which would imply knowledge of body composition).” In other words, the Expert Committee was well aware that BMI was not a measure of excess body fat, but rather a measure that relates height to weight. The cut-points proposed by this report are “based principally on the association between BMI and mortality.” The NIH cut-points at the time were notably higher than the WHO’s, classifying women and men as overweight if they had a BMI of ≥27.3 and ≥27.8, respectively. These cut-points were acknowledged to “have no particular relation to a specific increase in disease risk” and to be “somewhat arbitrary,” and were later standardized to align with the WHO guidelines. How were they derived? They were based on the 85th percentile of BMI for men and women aged 20-29. This does not mean that the “fattest 15% would be considered overweight,” as Aubrey says. This is a misinterpretation of statistics and highlights the lack of attention to detail that pervades this podcast.
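For readers who want the arithmetic: BMI is simply weight over height squared, and the categories discussed above are just fixed thresholds applied to that number. A minimal sketch (the thresholds are the standard WHO cut-points quoted above; the function names are mine):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Quetelet's index, adopted by Keys in 1972 under the name BMI."""
    return weight_kg / height_m ** 2

def who_category(b: float) -> str:
    """WHO-style cut-points (post-1995). Note: the NIH originally used
    higher, sex-specific overweight thresholds (>=27.3 women, >=27.8 men)."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

print(round(bmi(80, 1.75), 1), who_category(bmi(80, 1.75)))
```

Note that nothing in the formula involves body composition, which is exactly the Expert Committee's caveat: the cut-points relate weight to height, not fat mass.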
What they start doing in the 1940s and 1950s is they start getting these large groups of people and they get a representative sample of the country.
Actually, most cohort studies are not representative of the country; that isn’t a defining feature of a cohort study. You would be hard pressed to find a legitimate epidemiologist who would claim the Nurses’ Health Study is nationally representative.
The central issue with calculating these mortality rates is, you can't just look at the raw numbers.
This tangent is a bit strange, because Michael and Aubrey are suggesting that epidemiologists didn’t adjust for confounding variables in previous analyses, which is entirely false. The examples that Michael provides are absurdist. It’s a bad-faith argument (and disturbing, honestly) to equate Pew poll results about religion and life expectancy with a robust statistical analysis. Of course it is true, as Michael says, that “people become fat for all types of reasons.” That is why we adjust for confounding variables in epidemiology studies. It’s unclear what his point is here, except that he appears to be suggesting that studies currently look at obesity in a vacuum. That is misleading and a misrepresentation of the science to date.
This is the hardest thing about this, is because you have to keep in mind issues outside of the data. One of the most famous findings from these, is in the 1990s, the Journal of the American Medical Association published a study that said that left-handed people die nine years younger than right-handed people.
Not true. This study was actually published in the New England Journal of Medicine, and his summary is not even close to what the study entailed. It appears that Michael did not read this paper or even the news articles about the study. That’s not to say it wasn’t a terrible study - it’s just another example of bad journalism. Here’s what was actually done: the authors got death certificates for 987 individuals in southern California. They sent questionnaires to the next-of-kin to inquire about handedness. Then they calculated the average age at death for right-handed individuals and left-handed individuals. They found that the mean age at death was 75 years for right-handers and 66 years for left-handers. Thus, they concluded that right-handed individuals live 9 years longer. I have no idea what study Michael is referring to. This is also not a cohort study, it is a cross-sectional study. Anyway, the methods of this study are highly flawed, and a 1991 article in New Scientist does a great job explaining why. Essentially, the study does not take into account the age distributions in each handedness group. Because left-handed individuals tended to be younger, the apparent longevity gap was actually just an artifact of the different age distribution, not a real difference in lifespan. Again, this has nothing to do with what Michael explained, nor does Michael’s explanation make sense for this kind of finding.
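The age-distribution artifact that the New Scientist piece describes is easy to reproduce. Below is a small simulation (all numbers invented for illustration) in which handedness has zero effect on lifespan, yet a death-certificate design like the one in the NEJM paper still shows left-handers dying years younger, purely because recorded left-handedness is more common in later birth cohorts (older left-handers were often forced to switch as children):

```python
import random

random.seed(1)

# Simulated population: lifespan is drawn from the SAME distribution
# for everyone, but left-handedness is recorded more often in people
# born after 1940. Prevalences and lifespan parameters are made up.
people = []
for _ in range(500_000):
    birth = random.uniform(1890, 1990)
    left = random.random() < (0.02 if birth < 1940 else 0.12)
    lifespan = max(0.0, random.gauss(75, 12))  # identical for both groups
    people.append((birth, left, birth + lifespan))

# Cross-sectional "study": look only at deaths within a narrow window,
# as in pulling a few years of death certificates.
deaths = [(death - birth, left) for birth, left, death in people
          if 1988 <= death <= 1992]

left_ages = [age for age, l in deaths if l]
right_ages = [age for age, l in deaths if not l]
print(sum(right_ages) / len(right_ages), sum(left_ages) / len(left_ages))
```

The left-handed decedents come disproportionately from younger birth cohorts, so their mean age at death is several years lower even though no one's lifespan depends on handedness. That is the artifact, and it has nothing to do with Michael's explanation.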
In 2004, we get the infamous paper that is called Years of Life Lost to Obesity.
The paper Michael cites here is a 2003 paper, and it’s not from the CDC. I assume that he meant to refer to “Actual Causes of Death in the United States, 2000,” which was indeed a 2004 paper from the CDC. I would argue it is basic journalism to get the title of a paper you are reporting on correct. Importantly, this paper also doesn’t say that “obesity is poised to overtake smoking as the number one cause of death in America” (but that is what Flegal said about it). It also estimates 365,000 deaths due to overweight, not obesity. It is unclear how Michael or Aubrey could have “debunked” this study, because there is nothing to debunk. Flegal wrote a pretty extensive piece about the differences in methods contributing to the different estimates that I encourage people to read. As another aside, a good portion of what Michael says about this paper is ripped directly from Flegal’s paper, “The obesity wars and the education of a researcher: A personal account,” without any quotation or acknowledgment.
This paper comes out, gets a ton of media coverage. Less than a year later, Katherine Flegal puts out her paper and in her paper, instead of showing that obesity causes 365,000 deaths a year, her paper shows that obesity causes 112,000 deaths, but it also reduces deaths by 86,000 because slightly overweight people are actually less likely to die.
This is a great example of the self-proclaimed “methodology queens” revealing a lack of understanding of epidemiology (and perhaps of the scientific process in general). It is not uncommon for studies using different databases to get different results - that is why responsible scientists don’t say that a single study has definitively “proven” anything. You can see in Mokdad et al. (and in Allison et al.) that the estimated number of deaths attributable to overweight differs for each of the six data sources. While Michael and Aubrey are implying that the Mokdad et al. paper was totally wrong and fat-phobic, the reality is that Mokdad and Flegal used very different data sources and statistical methods. It’s not surprising for different statistical models to get pretty different results; think about all the wildly different projections of COVID deaths. Failing to acknowledge that science is a continual process of iteration is harmful to trust in scientific institutions. This is why we do multiple studies looking at the same research question and why we try to answer questions in different ways. Flegal even addresses this in her recent account of this whole incident and has a very nice table showing the differences between the papers. As I mentioned above, she even published a whole separate paper on why the estimates are different! Michael and Aubrey did not acknowledge this at all. Anyway, now we can delve into each of the two studies, for an actual look at the methods.
I will start by saying that the Mokdad et al. paper has some serious quality issues. There is a correction at the end of the text which, in combination with a correction published in 2005, explains that there was a mistake in their original calculations that led to them overestimating the number of deaths attributable to overweight. Michael raises this later in the episode but totally fabricates the reason for the correction. Mokdad et al. used prevalence estimates from the 1999 and 2000 NHANES surveys and estimated the annual deaths attributable to overweight (BMI >25) using a method previously published by Allison et al. in 1999. Hazard ratios for death came from six different cohort studies and were adjusted for age, sex, and smoking status. The reference category was BMI 23-25. As an aside, I recommend that anyone who is interested read the Allison et al. paper. You will see that they also reported some hazard ratios <1, resulting in the appearance of a “protective effect” of some BMI categories. This was likely due to sampling variation, as none of the hazard ratios <1 were statistically significant (all of the confidence intervals included 1). This is evidence that Flegal’s finding of an apparently “protective” BMI category was neither groundbreaking nor unique. But I digress.
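To spell out the statistical point about those hazard ratios: an estimate below 1 only suggests a genuine “protective effect” if its confidence interval excludes 1. A toy check (the interval values here are illustrative, not Allison et al.’s actual estimates):

```python
def ci_excludes_null(lo: float, hi: float, null: float = 1.0) -> bool:
    """True if the whole confidence interval sits on one side of the
    null value, i.e., the estimate is statistically significant."""
    return hi < null or lo > null

# An HR of 0.93 with CI (0.82, 1.05) looks "protective" but is not
# significant; an HR of 1.38 with CI (1.10, 1.73) is.
print(ci_excludes_null(0.82, 1.05))
print(ci_excludes_null(1.10, 1.73))
```

This is the sense in which hazard ratios below 1 with intervals spanning 1 are consistent with plain sampling variation.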
After performing the analysis, Mokdad et al. looked at the results and reasoned that since the effect of overweight on mortality would likely be delayed, the total number of deaths estimated from the 1999-2000 prevalence data would likely be an overestimate of the actual number of deaths. So they took the average of the 1991 and 2000 estimates. This is how they arrived at the 365,000 number. We can argue about whether that makes sense or not, but it seemed reasonable to them.
Flegal et al. used BMI prevalence estimates from NHANES 1999-2002 data (note that this is different from Mokdad et al.), and the relative risk of death came from NHANES I, II, and III (1971-1975 with follow-up to 1992, 1976-1980 with follow-up to 1992, and 1988-1994 with follow-up to 2000). They used a method previously published by Gail et al. which allowed them to adjust the hazard ratios for death by age, sex, smoking status, race, and alcohol consumption. The reference category was BMI 18.5 to <25 (again, note that these are different from Mokdad et al.). In fact, Flegal et al. highlighted this: “...using a reference category of 23 to less than 25 rather than the normal weight category would result in increased estimates of excess deaths for low weight and for obesity.” (Emphasis mine)
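Both papers ultimately turn BMI-category prevalences and relative risks into a count of attributable deaths. The sketch below shows the general shape of that calculation with made-up inputs; it is not the exact Allison or Gail formula, but it illustrates how a relative risk below 1 produces negative “excess” deaths, i.e., the “deaths averted” figure that appears alongside the excess-death estimates:

```python
# Toy attributable-deaths calculation. Prevalences, relative risks, and
# the death total are invented for illustration; they are NOT either
# paper's actual inputs or methods.
total_deaths = 2_400_000  # roughly annual US deaths, for scale
prevalence = {"under": 0.02, "normal": 0.38, "over": 0.34, "obese": 0.26}
rel_risk = {"under": 1.80, "normal": 1.00, "over": 0.95, "obese": 1.25}

# Deaths expected in each category, minus the deaths that category
# would have had at the reference (normal-weight) risk.
denom = sum(prevalence[c] * rel_risk[c] for c in prevalence)
excess = {c: total_deaths * prevalence[c] * (rel_risk[c] - 1) / denom
          for c in prevalence}
print({c: round(v) for c, v in excess.items()})
```

With an overweight relative risk just below 1, the overweight category contributes a negative number, so a net total across categories can come out far smaller than the obesity-only excess. Small shifts in relative risks near 1 swing these counts substantially, which is exactly Flegal’s own caveat quoted below.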
Michael neglects to mention a few things right off the bat that you can see without even reading the full Flegal paper. First, the 112,000 number is excess deaths among people with BMI ≥30, so the quantity being estimated in this paper is not directly comparable to that in Mokdad et al., which estimated deaths among individuals with BMI >25. Second, you will see that in the abstract Flegal et al. call out that “The relative risks of mortality associated with obesity were lower in NHANES II and NHANES III than in NHANES I.” This is very interesting, because Mokdad et al. only used NHANES I. Since NHANES I has a higher mortality risk associated with obesity, it might explain some of the discrepancy in the results. In fact, Flegal acknowledges this in the paper, saying, “However, the largest difference is due to the inclusion of the mortality data from NHANES II and NHANES III, which decreased estimates by 63% or more relative to NHANES I mortality data alone.” I am left wondering if Michael read either paper. If he had, or if he had a better understanding of study design and biostatistics, he might be slower to entirely dismiss Mokdad et al. as simply alarmist and fat-phobic.
In their conclusion, Flegal et al. said “Obesity is associated with a modestly increased relative risk of mortality, often in the range of 1 to 2. In this range, estimates of attributable fractions, and thus numbers of deaths, are very sensitive to minor changes in relative risk estimates.” This is exactly how we should be seeing these two papers. Different methods, different source data, a LOT of uncertainty.
Right. This is what they call The Obesity Paradox, which is my favorite thing to yell about, because it's only a fucking paradox if you can't imagine fat people living healthy lives.
This is, again, equating “healthy” with longevity, which is not true. Actually, Flegal’s own paper about the obesity paradox speaks to this. She highlights that weight guidelines for specific diseases are not based on mortality. For example, “obesity management guidelines for patients with diabetes are based on considerable evidence showing other benefits of weight loss for people with diabetes, including improved glycemic control and reduction in the need for blood pressure and lipid-lowering medications.” In other words, weight reduction is sometimes about morbidity, not just mortality.
Considering how many diseases cause you to sort of waste away as you get older, it makes sense, they would live longer from some of these conditions.
This is worth highlighting, because Michael reverses course on this later in the episode, where he says this isn’t really that common and is a bogus reason to remove sick people from an analysis.
Basically, the number one finding of her paper is that like people in the BMI overweight category are slightly less likely to die. So, a little bit of fat has some protective effect on mortality rates. The other big finding is that skinny people are more likely to die.
Neither of these is a big finding, and the “number one finding” is not that people in the overweight category are less likely to die. In fact, that’s not even what this study found. None of the hazard ratios for the BMI 25 to <30 category are statistically significant (all of the confidence intervals include 1). The confidence intervals for the hazard ratios in the “underweight” BMI category only exclude 1 for adults 60-69 and ≥70 (i.e., underweight adults aged 25-59 are not more likely to die than adults with BMI 18.5 to <25). Also, BMI =/= fatness, as we have already established.
In the fattest category, like the obese category, she logs 26,000 deaths. In the skinniest category, she logs 33,000 deaths.
The 26,000 number is for BMI ≥25, not the “obese” category.
We now find ourselves in 2005. There's this 2004 paper that finds that obesity is really bad for you, overweight people are going to die, fat people are totally going to die, it's just really, really, really obvious deep line, and we've got 365,000 deaths caused by obesity every year. And we've also got Katherine Flegal’s 2005 paper that says it like, “It's really not that many people. Once you subtract the lives that it saves from the lives that it takes, it's like 25,000 deaths a year due to obesity.”
False. Flegal et al. reported there would be ~112,000 excess deaths associated with obesity, not 25,000.
Yes. To this day, this is still framed as a scientific debate and two different ways to look at obesity data and who can say? But the important thing to know about these two estimates is that one of them is wrong.
Actually, both of the estimates are wrong; that’s why they are called “estimates”! It is also important to point out that 365,000 is the corrected estimate from the Mokdad paper.
They put the deaths in the wrong years. Human error in Microsoft Excel. Also, there's weird methodological stuff. Remember how I said earlier that when you look at tobacco deaths, you can't just like count up the smokers that die because you have to control for all this other stuff because it's not a representative sample of the population. What they did in this study, when they say obesity is about to overtake tobacco, they adjusted the tobacco deaths downward because, well, people who smoke are more likely to be poor. We have to artificially make that number smaller to make it more valid, but they didn't do that with obesity deaths.
False. I have no idea where Michael came up with this. You can read the correction and see that Mokdad et al. did not adjust the tobacco analysis for income. Additionally, adjusting for a confounder is not “artificially” doing anything. It is removing bias. This statement betrays Michael’s lack of comprehension of epidemiology. But the most important thing here is that Michael completely fabricated all of these details except for the Excel error, which is what the authors reported.
I can't believe they did this, some of their cohort studies ended in the 1970s. Some of the deaths were in 1970s, even though this paper is being published in 2004, but the heart attack, cardiovascular death rates in the 1970s were sky high.
This comment reveals a deep lack of understanding. Mokdad et al. sourced the relative mortality risk from six different cohort studies. The reason they did that was because they were updating an analysis from 1999 that estimated the same quantity for 1991. That is stated clearly in the text of the article. So while they certainly did not use the best data sources available, it is clear why they did it. But again, that is irrelevant, because that is just where the relative mortality risks came from. The baseline mortality was from 2000 CDC data. I am not sure why Michael says that “some of the deaths were in the 1970s,” because the paper is estimating deaths in 2000 using relative mortality risks from the older cohort data. When the deaths occurred in the cohort studies is not important, because Mokdad et al. aren’t counting those deaths. It’s also worth noting that Flegal’s paper also included NHANES I, so if Michael is concerned about the use of those data, her paper is flawed, too.
You're actually much less likely to die of a heart attack as a fat person now than you were as a normal weight person in the 50s and 60s. If you look at the death rates, they've all been declining for years, even as the population has gotten fatter.
Again, this is irrelevant to the results because the data were used for relative risks, not absolute.
Right. The other problem with these studies is that they're built exclusively around BMI categories. So, every single person in these big cohort studies is organized in normal way.
This is confusing, because Michael and Aubrey selected these studies to review. There are many other studies that don’t look at BMI, but they chose these ones. It is strange to specifically pick these papers and then complain that this is their problem. This issue with BMI is exactly why it doesn’t make sense for Michael and Aubrey to be analyzing this study to answer “is being fat bad for you?”
The problem with these categories for mortality research is that a lot of those cohort studies rely on self-reported BMI. So, you ask people their weight, you ask people their height, and then you calculate that they're overweight, or their normal weight, or whatever. The problem is that when you do this, a huge number of people end up in the wrong categories because people, I don't want to say lie about their weight because a lot of people don't know, accurate-- I do not know my weight. I have not weighed myself in five years. If somebody asked me my weight, I would be wrong.
Self-report is notoriously a problem. However, four of the six studies in Allison et al. measured height and weight and did not rely on self-report. Self-report really isn’t the big issue here. The big issue is that BMI is a problematic measure. Another aside, but that last bit seems pretty tone-deaf (and a strange flex?) for a thin white guy who is speaking to a bunch of listeners (and a host) who are often hounded about their weight.
Let's say you have two people, and they're both 5’8” and they both say that they're 180 pounds. So, that's the data in your spreadsheet. But in reality, one of them is 185 pounds, and they're cutting a little bit of their weight off. The other one is 200 pounds, and they're cutting a little bit more of their weight off. In the actual numbers, this isn't actually that big of a deal and people who defend self-reported data, they'll say like, “Well, when people lie about their weight, on average, they only really lie by, whatever 2%, 5%, 10%, something like that. It's not that big of a deal.” But it's not actually about how many pounds they're cutting off of themselves. The problem with those two people is that the cut off between overweight and obese is 190 pounds. The person who's 200 pounds, who says that he's 180, he just jumped from one category to the other.
Again, somewhat irrelevant given that these things were measured in most of the studies. But I also don’t exactly follow what Michael’s point is here. I’m not sure why the “real problem” is the cutoff. The “real problem” is that BMI categories are arbitrary. This, again, is why this whole episode makes no sense. If you want to talk about whether being fat is bad for you, you shouldn’t choose to look at papers about BMI and then just say “whelp, these papers are based on BMI so they don’t address the question.” You chose to look specifically at this paper!
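For what it’s worth, the mechanics of Michael’s 5’8” example mostly check out. Using the standard imperial BMI formula (703 × pounds / inches²), the BMI-30 threshold at 5’8” falls at about 197 pounds, close to the 190 he cites, so only the person whose true weight is 200 pounds jumps categories:

```python
def bmi_imperial(pounds: float, inches: float) -> float:
    """Standard imperial BMI formula: 703 * lb / in^2."""
    return 703 * pounds / inches ** 2

height = 68  # 5'8" in inches
reported = bmi_imperial(180, height)          # both people report 180 lb
for true_weight in (185, 200):
    actual = bmi_imperial(true_weight, height)
    print(f"true {true_weight} lb: reported BMI {reported:.1f}, "
          f"actual BMI {actual:.1f}")
```

The 185-pound person stays in the overweight band either way (BMI 28.1 vs. 27.4), while the 200-pound person crosses from overweight into obese (30.4 vs. 27.4). So the category-jump mechanism is real; what’s off is treating the cutoff, rather than the arbitrariness of the categories themselves, as the core problem.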
People have actually done studies where they compare data that's self-reported and data that's actually measured. Some of these studies, the normal weight participants, 30% of them should have been classified as overweight or obese.
So actually, this specific misclassification would skew the results in a way that makes overweight and obesity look less “bad” (i.e., mortality risk would appear more similar across categories due to the presence of overweight/obese people in the normal-weight category). Though Mokdad et al. used only two sources that relied on self-report, if this were a rampant issue, we’d expect their results to be a conservative estimate of the number of attributable deaths, because the hazard ratios would be spuriously closer to 1 than the truth. This is just another example of why Michael and Aubrey should not be discussing these things without an expert.
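The direction of this bias is simple arithmetic. If truly overweight people are mixed into the “normal” reference group, the reference group’s mortality rises, which shrinks the observed relative risk toward 1 (the risks below are hypothetical, chosen only to illustrate the mechanism):

```python
# Hypothetical annual mortality risks, for illustration only.
true_risk = {"normal": 0.010, "overweight": 0.015}
true_rr = true_risk["overweight"] / true_risk["normal"]  # 1.5

# Suppose 30% of people labelled "normal" via self-report are truly
# overweight (the figure Michael quotes for some validation studies).
observed_normal_risk = 0.7 * true_risk["normal"] + 0.3 * true_risk["overweight"]
observed_rr = true_risk["overweight"] / observed_normal_risk

print(round(true_rr, 2), round(observed_rr, 2))
```

The observed relative risk (about 1.30) sits between 1 and the true value (1.50): the contamination attenuates the association rather than inventing one, which is why this flavor of misclassification makes estimates conservative, not alarmist.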
Your entire study is garbage because you're not actually comparing different categories.
Largely incorrect. With an understanding of how the misclassification occurred, you can make some assertions about the direction of the bias. Michael wants to throw it all out, but conveniently only in this case, and not for the 2013 Flegal meta-analysis, which also included self-reported data. This rhetoric is really detrimental to science communication. All studies are flawed. That does not mean they are garbage (some definitely are, though!).
Another reason why Katherine's study is better than the earlier study is that she throws out all of the self-reported data. You can't just mix bad data and good data and then say anything valid about a phenomenon.
Not true. Again, four of the six data sources Mokdad used were measured, not self-reported. Flegal did not just “throw out the self-reported data”; she used different data sources entirely, relying exclusively on NHANES data, of which only NHANES I overlapped with Mokdad’s sources. None of these data were “bad.” And if this is Michael’s stance, Flegal’s 2013 meta-analysis would be “bad,” too, because it included many studies with self-reported height and weight.
What are the overall societal narratives for which we will put aside, these pretty basic methodological considerations. We won't apply as much scrutiny. The CDC and the Journal of the American Medical Association and these like very high-level public health institutions were willing to print something that in the methodology says, “We are assuming all deaths of fat people are because they are fat.”
It’s bizarre for Michael to say that these are “basic methodological considerations” when I have already demonstrated that he does not understand much about the methods he is attempting to critique. As to the second point, Mokdad et al. literally wrote that they were assessing the “impact of poor diet and physical inactivity,” and acknowledged that there are many factors they couldn’t account for in their analyses. The research question was about excess deaths due to obesity, but nowhere does the paper say that the deaths of fat people are because they are fat; I’m not sure what Michael is alluding to here. Also, something being published in JAMA does not mean that the AMA “endorses” it.
Note: I won’t delve too deeply into the whole Walter Willett vs. Katherine Flegal commentary. Suffice it to say that Walter Willett appears to be super problematic; he refuses to acknowledge being wrong and puts up absurd arguments trying to discredit Flegal. That being said, Michael and Aubrey entirely disregard any critiques of Flegal’s meta-analysis, some of which are quite valid. Ultimately, the podcast’s focus on the ridiculous antics of Walter Willett is not relevant to the actual issue at hand and distracts from the actual science. If BMI is a bad measure of health, which we know it is, none of the results of these meta-analyses are worth discussing. I do not know why Michael and Aubrey insist on covering this as if it addresses anything useful for the listeners about health and fatness.
Fast forward to 2013, partly in response to the criticisms of her 2005 article, Flegal starts working on a much bigger meta-analysis. Originally, she just had these two datasets from American data. But what she does is she looks for all other datasets. There're hundreds of studies going on about obesity and health at any given time. It's kind of absurd. You find these random Norwegian cohort studies and the South Korean nurse collaboration or whatever.
I’m not sure what Michael means by “these two datasets from American data,” because her 2005 paper used three different data sets. Also, these cohort studies are not specifically looking at obesity and health; they are longitudinal studies collecting many variables to enable researchers to investigate many health-related questions.
In this study, which is even bigger and has more data, you can go up to 210 pounds and still not really have any elevated health risk.
Again, mortality =/= health risk.
She's always been somebody that's extremely temperate about everything she says. I've seen other interviews with her, you cannot get her to go beyond the data.
Yes, that’s appropriate given that she knows her research is looking at BMI, which we’ve already established is not a good indicator of health. In this situation, Michael is the one trying to extrapolate beyond what the research says. If BMI is bunk, it’s bunk even when it suggests that higher BMIs are not detrimental to mortality.
Well, this is the problem, is that what you end up doing with removing all the smokers is you say that you're removing the effect of smoking. You want cleaner data, but what you're actually doing is you're removing a bunch of poor, uninsured, unhealthy, thin people because smokers are disproportionately thin and you're leaving in all the fat people that have those bad health outcomes. You're just removing all the sick thin people and leaving the sick fat people.
So yes, Walter’s meta-analysis was bad. A lot of issues. But this explanation is wrong, and Michael will also reverse course on this later. You can see with the sensitivity analyses that Flegal did that this actually had very little effect. And the small effect was in the opposite direction of what Michael suggests here.
Yeah. The second thing that he says is contaminating her work, is that she's not removing sick people. The idea basically is that the reason why you have these higher death rates among super skinny people isn't because they're super skinny people that post photos of themselves in bikinis. They're old people who are wasting away from some sort of preexisting disease. My grandfather died of Parkinson's. In his last two, three years of life, yeah, I think he weighed like 85 pounds when he died. In the data, he would count as a death among someone with a BMI of something like 17 or something that has nothing to do with his weight. It has to do with the fact that he has this preexisting illness that first made him thin and then killed him. The spike in mortality among thin people is because you're packing in all these people that have all kinds of diseases, like various cancers, leukemia. If you're in the late stages of a disease, you're going to have a very high mortality rate and you're going to be very thin. And that also affects the normal weight category and even that slightly higher weights, you still have all these people that basically have like wasting away due to disease.
Michael says this as if it’s baffling, but it is an important piece of study design that, depending on the research question, participants are excluded if they have the event of interest (death, in this case) within a short period after the initial exposure measurement. This is to ensure that the outcome is actually a result of the exposure, which is especially important for exposure-outcome relationships with long latency periods. For example, suppose I’m looking at the association between red meat consumption and cancer: if you eat a hamburger today and get diagnosed with cancer tomorrow, it’s really unlikely (probably impossible) that your cancer is from eating that burger. Hopefully that makes sense. Anyway, if the theory is that it takes a lot of time for increased BMI to result in death, it would make sense to exclude people who die within the first little bit of follow-up. That being said, the choice of excluded time is arbitrary, and this isn’t a setting in which it makes sense to do that (in my opinion). Although, as Flegal showed, this has minimal impact on the results. (Also, saying “spike in mortality” is misleading because all of these are teeny tiny effect sizes.)
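To make the exclusion logic concrete, here is a toy sketch with entirely made-up numbers (not from any of the studies discussed): dropping everyone who dies within the first few years of follow-up mechanically lowers the crude death rate in a group whose deaths cluster early, which is exactly the “sick thin people” reverse-causation argument.

```python
# Hypothetical mini-cohort: (group, year_of_death), with None meaning the
# person survived the whole follow-up. Numbers are invented for illustration.
cohort = [
    ("thin", 1), ("thin", 2), ("thin", None), ("thin", None), ("thin", None),
    ("fat", 4), ("fat", 9), ("fat", None), ("fat", None), ("fat", None),
]

def crude_death_rate(group, exclude_first_years=0):
    """Proportion who died, after dropping anyone who died within the first
    `exclude_first_years` of follow-up (the reverse-causation filter)."""
    kept = [year for g, year in cohort
            if g == group and not (year is not None and year <= exclude_first_years)]
    deaths = sum(year is not None for year in kept)
    return deaths / len(kept)

# With no exclusion, both groups look identical: 2 deaths out of 5.
print(crude_death_rate("thin"), crude_death_rate("fat"))        # 0.4 0.4
# Excluding deaths in the first 3 years drops both (early) thin deaths
# but neither (later) fat death:
print(crude_death_rate("thin", 3), crude_death_rate("fat", 3))  # 0.0 0.4
```

Whether this filter is appropriate depends entirely on the hypothesized latency of the exposure, and as Flegal’s sensitivity analyses showed, in the real data it made very little difference.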
Thin people are very likely to die according to these studies and then you watch people's minds go to, like, “Well, that's not because they're thin. It's probably because they have an illness, maybe they have a really severe eating disorder or something like that, you have all these conditions that make you thin and then kill you.” And it's like, “Wait until I tell you about fat people.” There're also medical conditions that make you fat.
Yes, being too thin is a result of some diseases but it also causes some health complications. Some medical conditions make you fat, and being fat can cause health complications. Both can be true. In fact, earlier in this episode, Michael fell into exactly that sick thin person trope.
Yeah. It's like okay obviously every fat person needs to lose weight because they're at like a 40% higher mortality risk, but why don't thin people need to gain weight?
Underweight people absolutely do need to gain weight! And most underweight people will tell you that they have been told that many times. This is a very naive statement.
And then this thing of removing everyone who dies in the first five years, you can actually check who are you removing and when you remove people, you're mostly removing fat people. Walter is saying that you have to do this to remove all these like sick, thin people. But then you actually end up removing a bunch of fat people, and of course, Katherine runs the numbers on a bunch of these studies. And it's like when you do this, you just raise the mortality rates for fat people.
It actually would not increase mortality rates for fat people if you remove all the fat people who die within 5 years. When Flegal runs the numbers (note that she removes 3 years, not 5), the opposite happens.
You start winnowing out like, okay, we can't count the smokers, and then we can't count people who were previously smokers, and then we can't count people who've had cancer or Parkinson's, and then also now we can't count people who have mental illnesses and have been treated for those, but we can include the people who haven't been treated for [crosstalk] what the fuck is this weird ass patchwork that we're coming up with here?
Nowhere does it say that people with chronic conditions who aren’t treated can be included. All people with chronic conditions were excluded, treated or not.
Exactly and also what this is like after you do all these exclusions, you're excluding everybody with a preexisting condition, everybody who's ever smoked, and everybody who dies within the first five years, Katherine finds a bunch of articles that were written by Walter and his colleagues, where they're removing 90% of the deaths. 90% of the data is gone.
Not sure where Michael got this number from. Flegal’s paper literally says: “...the authors deleted over 60% of the data that they considered and about 75% of the deaths to arrive at their final results.” This is evident even just in the abstract for Willett’s paper: “Of 10,625,411 participants in Asia, Australia and New Zealand, Europe, and North America from 239 prospective studies (median follow-up 13·7 years, IQR 11·4–14·7), 3,951,455 people in 189 studies were never-smokers without chronic diseases at recruitment who survived 5 years, of whom 385 879 died.”
Her meta-analysis had 3 million people in it. His meta-analysis has 10 million people in it.
Again, see Willett’s abstract above. His meta-analysis included ~4 million people. Not 10 million.
I also love that in this study, even with all of the manipulations that we'll get into, thin people are still more likely to die.
If by “thin” he means “the underweight BMI category,” that’s right. But in normal human conversation, “thin” =/= “underweight,” so this is a problematic way to phrase it.
The problem is among these studies that he's looking at, there's 239 studies that they're looking at, only 28 of them even have data on people with preexisting diseases.
Again, not true. The 28 number refers specifically to “complete data for three chronic diseases mentioned previously”. The full truth is: “Only 28 of the 239 studies even had complete data for the three chronic diseases mentioned previously (see eTable 2 in GBMC). Twenty studies had no information on any of the pre‐existing diseases. Only 15 of the 239 studies provided data on respiratory disease and only 68 had data on cancer. Only 19 studies had complete data on pre‐existing heart disease, stroke, and cancer. Only 56 studies had data on both cardiovascular disease and cancer.”
Once you get into BMIs above 35 or 40, it's somewhere between 2% and 5% of Americans.
This is not true. The CDC estimates that 9.2% of adult Americans have BMI 40+. This is based on 2017-2018 NHANES data.
Exactly. If we go to those people, when you look at those statistics, it's dire. They're half as likely to be college graduates, one quarter of them are earning less than $20,000 per year, they're twice as likely to be on Medicaid, the group with the highest prevalence of grade three obesity is black women who didn't complete high school.
I’m not disputing that immense disparities exist, but I would love to know where Michael got these statistics, because I don’t see these anywhere.
Medical institutions define successful weight loss as losing 10% of your body weight.
Not true. It’s things like this that make me think they just do a cursory Google search for some of their “research.” The truth is that there is no standard definition of successful weight loss. In 2001, some researchers proposed defining it as losing 10% or more of your body weight and maintaining that for 1 year. Some weight loss studies since then have also used that definition. But no medical societies or institutions adopted that definition. Here is a great article looking at different definitions: Comparison among criteria to define successful weight-loss maintainers and regainers in the Action for Health in Diabetes (Look AHEAD) and Diabetes Prevention Program trials - PMC (nih.gov).
That's not how weight cycling works. My mom weight cycled my entire fucking upbringing. It was not on five-year cycles, dude. It was on three months, one year.
Actually, there is no standard definition of weight cycling. That’s what makes this so hard to study. Michael doesn’t get to define what weight cycling is. Table 3 of this paper shows how different the definitions are in the literature.
Say it again, there's not a single method of weight loss that is non-surgical, that meets the standards of being an evidence-based treatment.
That’s false. I’m not sure what Aubrey’s definition of “evidence-based” is. But there are many evidence-based treatment strategies.
Also, the whole idea of boiling somebody down to their statistical mortality risk, that's a gross thing to do, regardless…
This, again, is perplexing. Michael and Aubrey are the ones who chose papers very explicitly about mortality in their “is being fat bad for you” episode. It’s not what all of academia is doing, nor is it what industry is doing. This is like taking a paper on cancer mortality and saying people only care about how soon cancer patients are going to die. No, you are just looking at a paper that is explicitly trying to estimate that.
It is, I imagine reading that email was not unlike reading the one sentence where they're like, “We just assumed every fat person died being fat.”
This line does not exist anywhere.
Yeah. It's not like people learned about the population level hazard ratios of adipose tissue and then decided to dislike fat people, it seems extremely obvious that people disliked fat people and then went looking for the hazard ratios. They were looking for a reason.
Not true. If you look back to the very early guidelines about BMI, they very clearly say that BMI is not to be used as a tool for intervention and is not to be interpreted without other context. The WHO Guidelines say: “The recommended cut-offs are appropriate for identifying the extent of overweight in individuals and populations, but do not imply targets for intervention.” They also say, “The cut-off points for degrees of overweight should not be interpreted in isolation but always in combination with other determinants of morbidity and mortality (disease, smoking, blood pressure, serum lipids, glucose intolerance, type of fat distribution, etc.).” These guidelines also call out the risks of weight cycling. I know it doesn’t fit the Maintenance Phase narrative to concede that there is nuance in the way that these things were created, but there was. That doesn’t mean that there aren’t huge problems with the way that BMI is used, but to act like it was intentionally an all-out war on fat people is just wrong.

Plus, Michael already acknowledged that being fat is associated with mortality. This is the circular logic that this podcast is predicated on - you cannot simultaneously say that there is no health issue associated with being fat and people are making this shit up while also saying that there are health issues associated with being fat but those aren’t caused by being fat, and we need to look into those more. Which is it? There were observations back in the days of Hippocrates that excess adiposity was not good for health. This didn’t come from nowhere. It doesn’t make it right, and it doesn’t mean that fat phobia isn’t disgusting and incredibly harmful, but it undermines the whole social movement when you put out messages like this that make no sense.
It's such a challenging thing to have people doing all this shit in the name of “the science” and then when you look at “the science,” it's very human, very flawed, very unreliable, [chuckles] and very disputed.
Of course the science is human! No one is ever disputing that. But it’s not VERY flawed and it’s not VERY unreliable or VERY disputed. This podcast episode is centered on one very disgusting academic debate. This is not representative of the body of science as a whole. This conclusion is like intentionally choosing to see a horror movie and then saying, “all movies are scary.” It’s rage-baiting and it is how Michael and Aubrey are promoting a dangerous anti-science narrative.
Concluding thoughts:
While there is a lot that we still don’t know, there is also a lot that we DO know, and some incredible researchers are working very hard to understand the health impact of excess adipose tissue. For anyone interested, I pulled together a couple of review articles that detail what we know about the biological mechanisms by which adipose tissue causes metabolic disease.
Adiposity and insulin resistance: Adipose tissue and insulin resistance in obese - ScienceDirect
Some of the language in this article repeats frustrating tropes about obesity and the obesity epidemic. Again, this is the kind of rhetoric that needs to be expunged in order to make real progress in this line of research. I would skip just to section 5: “Cellular dysfunctions associated with obesity-induced insulin resistance.” It provides a really comprehensive overview of what we know about adiposity and insulin resistance (or did in 2021, at the time of publication).
Metabolically Healthy Obesity: Metabolically Healthy Obesity | Endocrine Reviews | Oxford Academic (oup.com)
Again, some of this language is problematic, but this article covers a lot of what we know and don’t know about metabolic health.
Ruth Loos is a researcher who is doing really fascinating work looking at the genetics of obesity. She has a review article from 2021 which is quite comprehensive: The genetics of obesity: from discovery to biology - PMC (nih.gov). She also has a great article discussing genetic and environmental factors that influence metabolism: Metabolic consequences of obesity and type 2 diabetes: Balancing genes and environment for personalized care - PMC (nih.gov)
Another researcher to watch is I. Sadaf Farooqi. She has done some really interesting work looking at the genetics of obesity and thinness. A recent study from her group suggests that “persistent thinness” (BMI ≤18.5) has a similar heritability to “severe early onset obesity” (defined by BMI standard deviation score (SDS) > 3 and onset of obesity before the age of 10 years): Genetic architecture of human thinness compared to severe obesity - PMC (nih.gov).
I hope these resources are interesting and/or helpful to readers. They are by no means comprehensive. I remain hopeful for a future in which the medical world and society at large do not discriminate on the basis of body size.