DIETARY GUIDANCE -- But the study said... it might as well have been made up!

We all see new studies about diet and weight loss all the time (well, that was until Ozempic and Mounjaro came out).  And it turns out that the data is so flawed that you might as well have made it up yourself!

Setting aside one type we're not going to discuss, in which subjects are confined to a ward so that as many variables as possible can be controlled (super expensive, and rarely, if ever, done anymore), diet studies collect their data in two main ways: food survey questionnaires and food journals.

Food journals ask patients to record their intake of specific foods, usually along with their macronutrients: how much fat, carbohydrate, and protein they consume.  Judgments are then made as to which categories patients fall into.

Food survey questionnaires are more general: several times over a given period, patients are asked to recall what they ate and how much.

Let’s take the food recall survey first: most people can’t remember what they had for breakfast, let alone how many potatoes they ate over the last two weeks, two months, or two years!  Needless to say, some self-reporting bias is expected.  But it seems that bias may approach error rates of 50%!

You might think food journals are better.  But when patients were asked to follow specific diet guidelines (low carb or low fat, for example), between 77% and 96% of them were not actually following the diet they reported following!

If most of the reports are in error, would it be safer to assume the exact opposite of the conclusions being reported?  Okay, maybe that’s not the best idea, but it does lend credence to the notion that not every piece of advice or guidance on offer is worth paying attention to.

Having someone who knows you and your issues and needs might just be better than following the herd…just saying.

FROM MEDSCAPE INTERNAL MEDICINE / BY
YONI FREEDHOFF, MD

Disappointing Outcomes Show Futility of Weight Loss Studies

Self-reported diet adherence may be as useful and accurate as self-reported height and weight, where people somehow end up being taller and lighter than they are. At least that’s what the findings from a recent study would suggest. The study, “Are People Consuming the Diets They Say They Are? Self-Reported vs Estimated Adherence to Low-Carbohydrate and Low-Fat Diets: National Health and Nutrition Examination Survey, 2007-2018,” published in the Journal of the Academy of Nutrition and Dietetics, compared self-reported diet adherence among low-carb and low-fat dieters with two separate 24-hour recalls conducted with those same individuals.

The findings were striking. Of self-reported low-carb dieters — a diet with a generally higher degree of implementation difficulty — when compared with their 24-hour recall data, 95.9% were not meeting the low-carb threshold of consuming less than 26% of energy from carbohydrates. Of the self-reported low-fat dieters, though certainly better, 77% were found not to be consuming less than 30% of energy from fats.
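Adherence checks like these come down to simple arithmetic: convert grams of each macronutrient to calories using the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat) and compare each nutrient's share of total energy against the study's threshold. A minimal sketch of that calculation (the function names and the sample day's values are illustrative, not taken from the study):

```python
# Illustrative low-carb / low-fat adherence check using the thresholds
# cited in the study: <26% of energy from carbohydrate, <30% from fat.
# Standard Atwater factors: carbs 4 kcal/g, protein 4 kcal/g, fat 9 kcal/g.

CAL_PER_GRAM = {"carb": 4, "protein": 4, "fat": 9}

def energy_share(grams: dict) -> dict:
    """Return each macronutrient's share of total energy intake."""
    kcal = {m: g * CAL_PER_GRAM[m] for m, g in grams.items()}
    total = sum(kcal.values())
    return {m: k / total for m, k in kcal.items()}

def meets_low_carb(grams: dict, threshold: float = 0.26) -> bool:
    # Study threshold: less than 26% of energy from carbohydrates.
    return energy_share(grams)["carb"] < threshold

def meets_low_fat(grams: dict, threshold: float = 0.30) -> bool:
    # Study threshold: less than 30% of energy from fat.
    return energy_share(grams)["fat"] < threshold

# A hypothetical day's recall: 250 g carbs, 90 g protein, 70 g fat.
day = {"carb": 250, "protein": 90, "fat": 70}
print({m: round(s, 2) for m, s in energy_share(day).items()})
# ~50% of energy from carbs and ~32% from fat, so this day
# qualifies as neither low carb nor low fat.
print("low-carb?", meets_low_carb(day), "low-fat?", meets_low_fat(day))
```

Applying a check like this to a subject's 24-hour recall data is, in essence, how self-reported dieters can be found not to meet the diet they claim to follow.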

The implications of this are plain. Studies on free-range, non–metabolic ward dieters that base their conclusions on self-identified diet adherence should be evaluated with great skepticism. 

And it’s not just self-identified diet adherence that challenges diet study conclusions. We should also question the reliability of food frequency questionnaires (FFQs), especially in longer-term studies: the longer the study, the less plausible it is that a handful of FFQ measurements, sometimes just one, can stand in for a decade or more of dietary intake, as if eating patterns never change. This effect is compounded by inaccuracy among food self-reporters, as demonstrated by recent confirmatory evidence that, overall, we’re terrible food historians. The study, “Predictive Equation Derived From 6,497 Doubly Labelled Water Measurements Enables the Detection of Erroneous Self-Reported Energy Intake,” which used objective doubly labeled water measurements to evaluate the accuracy of subjective self-reports of energy intake, found that roughly 50% of people’s reported intakes didn’t mirror their measured total energy expenditure (TEE). The study also found that, looking specifically at people with obesity (the subjects of most diet studies), misreporting was associated with underreporting energy intake, overreporting protein intake, and underreporting fat intake, further skewing our ability to interpret diet impact studies.

Another generally unspoken challenge in evaluating diet studies is that most — including the most thoughtfully conducted and measured — study diets in the context of a commercial or a medically supervised dieting program. The challenge there is that dietary counseling and support is a service, not a product, and as a service its outcomes will be deeply influenced by the skills of those administering it. That means it’s impossible to disentangle the skills, or lack thereof, of a study or program’s administrators from the ease or enjoyability of the diet being studied and from the outcomes of its participants. Terrific clinicians are more likely than terrible clinicians to inspire change and adherence. And so a study of a particular commercial diet program’s outcomes is at best generalizable only to the centers and support staff who administered it.


Worse, especially for long-standing commercial dieting programs, is that the programs themselves change over time. Weight Watchers, for instance, has implemented at least six major changes to its plan just since 2000. How, then, to compare studies conducted before and after those changes, given the differences in the programs?

But again, as I’ve been saying for quite a while now, regardless of the diet or program studied, and despite decades of efforts and initiatives, there has yet to be a scalable, reproducible, years-long program that delivers clinically meaningful long-term weight loss. What are we even studying when we evaluate diets, and who benefits? It would appear we’re not actually studying the diets we report we’re studying, and given that no diet has proven itself superior to any other for long-term success, it’s not clear we’re benefiting anyone.

From the individual patient’s perspective, the diet they should be striving for is the healthiest one that they can actually enjoy. If not enjoyable, the likelihood of it being sustained long term is low to nonexistent. And one person’s best diet is another person’s worst. Which is why, when looking at waterfall plots of weight loss of individuals on a particular program or diet, you’re likely to see some patients with dramatic losses, some with outright weight gains, and then a whole bunch in a nonexciting middle (assuming excitement is determined by amount lost).

Moving forward, I’m hoping for fewer weight loss diet studies, given both the lack of any signal to date that diet studies demonstrate reproducible utility and the undeniable superiority of medication in conferring long-term, clinically meaningful losses. What remains to be seen is whether, for people taking obesity medications, there’s a best diet to pair with them, one where adherence is amplified by medication-induced decreases in hunger and cravings and increases in fullness, leaving individuals with a far greater chance of sticking with their chosen effort.

Source: https://www.medscape.com/viewarticle/disap...