Why Nutritional Science Can't Tell You What to Eat | Part I
A Three-Part Series Introducing Nutrition from First Principles
For seventy years, we’ve asked: “What should humans eat?”
The better question: “What can humans eat?”
The difference isn’t semantic. One approach generates observational studies where participants misreport their intake by 20-50%. It produces controlled trials lasting weeks when diseases develop over decades. It yields dietary guidelines that contradict themselves every five years, then wonders why nobody trusts nutrition science.
The other approach uses the methodology of harder sciences. Start with physical constraints that don’t change: biochemical limits identical across all humans, thermodynamic laws measurable in any laboratory, isotopic signatures in bones that can’t misreport their diet. Then reason forward from there.
I learned this distinction through necessity.
The Pattern That Breaks
I grew up when low-fat was moral law. By the time I was old enough to read ingredient labels, everyone knew Wheaties wasn’t actually health food. But the framework underneath—that dietary fat was dangerous, that whole grains were essential, that saturated fat caused heart disease—remained unquestioned. It was the water we swam in.
My father ate Wheaties every morning. A marathon runner who’d turned forty with a few extra pounds. His doctor recommended what every doctor recommended: run more, eat less fat, start your day with whole grain cereal. We rinsed ground beef with hot water after draining the fat. “I Can’t Believe It’s Not Butter” sat in our fridge like a health trophy.
When Crohn’s disease arrived, I did what you’re supposed to do. Much of it was bleeding-edge for the time. Months of systematic elimination. Every protocol variation: Mediterranean high-fiber, low-FODMAP, Specific Carbohydrate Diet, gut-healing approaches. All while staying within the low-fat paradigm. Some helped marginally. None stopped the pain.
I’d leave work early, drive across the street to a parking lot facing the ER entrance, and curl up in the driver’s seat. Calculating whether this time would be the time I’d actually walk through those doors. Eventually, the choice simplified: follow guidelines or survive the next four hours.
I started eating for immediate feedback. What reduced inflammation now? What let me function today? That experimentation led somewhere unexpected. I ended up eating foods I’d been taught would kill me.
But this essay isn’t about what I ate. It’s about how I figured out what I could eat.
The constraints that govern human nutrition don’t care about my experience or anyone else’s preferences. They operate the same in every human liver, every metabolic pathway, every bone that preserves its isotopic signature. What worked for me worked because it aligned with those constraints, not because it’s what you should do.
This constraint-based method is applied to nutrition here, but it works anywhere expertise has failed to produce clarity. When the standard approach generates more contradiction than convergence, the problem isn’t lack of data. It’s that the tools aren’t suited to what they’re measuring.
The confusion about what to eat isn’t a nutrition problem. It’s a methodology problem. And methodology problems have methodology solutions.
When Experts Can’t Agree
The scene repeats in millions of conversations. Someone asks what they should eat. One expert says low-fat. Another says low-carb. A third says it depends on your genes, your microbiome, your blood type, your dosha. Each arrives armed with studies. Each conclusion contradicts the previous.
It’s not just frustrating. It’s destabilizing. If the people tasked with knowing cannot agree, what chance does anyone else have?
But the disagreement itself reveals something. When physicists measure the speed of light, they converge on the same number. When chemists determine molecular weights, they get identical results. When nutritionists study diet and disease, they get contradictions.
This isn’t because the human body is more complex than physics. It’s because the tools being used aren’t suited to the question being asked.
Why the Standard Tools Fail
The machinery of modern nutritional research depends on methods that were never meant to bear the weight they’re asked to carry.
Food frequency questionnaires rely on memory. Participants report what they ate last week, last month, last year. Studies comparing these self-reports to measured intake show errors of 20-50%. Not small errors. Not correctable errors. Systematic misreporting that invalidates the entire data set.
Controlled feeding trials can measure accurately, but only for weeks. Diseases like diabetes and cardiovascular disease develop over decades. A trial showing improved cholesterol after six weeks of Diet X tells us almost nothing about whether Diet X prevents heart attacks over thirty years.
Epidemiological studies can span decades, but they observe correlations without controlling variables. People who eat more vegetables also exercise more, smoke less, and visit doctors regularly. When their health outcomes improve, which factor mattered? The studies can’t tell you. They can only gesture at associations.
Behind much of this research sits industry funding. Not conspiracy, just incentive structures. The companies that profit from specific dietary recommendations fund the studies that test those recommendations. The selection bias isn’t always conscious, but it’s pervasive. This funding bias has been with us since the inception of national nutritional guidelines. Looking back now, it was obvious.
Over time, this structure quietly trained a specific kind of helplessness. People stopped asking “what’s real?” and started asking “who said it?” Authority replaced verification. Even intelligent, well-meaning professionals found themselves trapped in the same pattern: endless deferral to meta-analyses whose internal contradictions they couldn’t resolve.
The problem isn’t lack of data. It’s that the data comes from tools designed for different questions.
A Different Starting Point
When the standard approach fails, we can shift from “should” to “can.”
Most scientific investigation starts with hypotheses about what should work, then tests them. Forward reasoning: “Maybe X causes Y. Let’s see if the data supports it.” This works when you can run clean experiments. It fails when clean experiments are impossible.
Ask “can” instead. Start with what can’t be otherwise: constraints that operate regardless of belief or preference. Then work forward from those constraints to see what they permit and what they eliminate.
Begin with the end in mind? No. Reason from what cannot be otherwise.
This is how you reason when experimentation is limited. Cosmologists can’t rerun the universe to test theories about its origin. They start with physical laws that must hold everywhere, then work backward to what initial conditions those laws permit. Evolutionary biologists can’t watch speciation happen in real time. They start with constraints like natural selection and genetic drift, then reconstruct what lineages those constraints could produce.
Nutrition faces a parallel limitation. An adequate dietary study to establish something approaching cause and effect would require observing large groups in controlled settings, measuring inputs and outputs precisely, for years if not decades. Even if we could justify the cost and logistics of housing hundreds of people in hospital-like conditions and feeding them measured diets, the very nature of such confinement introduces a critical variable: the psychological and physiological effects of restriction itself. Stress, circadian disruption, loss of autonomy—these alter the very metabolic processes we’re trying to study.
Epidemiological studies try to work around this by observing free-living populations. But they trade experimental control for real-world messiness. The result: associations that can’t establish causation, confounding variables that can’t be untangled, and decades of contradictory findings that leave both professionals and public confused.
Nutrition can use a different approach. Start with biochemical constraints about what humans can tolerate. Add physiological evidence that can’t misreport. Layer in metabolic mechanisms observable in any laboratory. Complete the loop with direct experience that individuals can verify.
When these constraints converge, clarity emerges. Not because someone decreed it, but because the constraints themselves eliminate contradiction.
What Constraints Reveal
The constraints that govern human nutrition operate the same in every liver, every metabolic pathway, every bone that preserves its isotopic signature. They don’t shift with dietary trends. They don’t bend to expert opinion. They simply are.
Some possibilities are mechanistically eliminated before any dietary preference enters the picture. The human body can only process so much protein before ammonia accumulates. It requires certain fatty acids it cannot synthesize. It needs specific amino acids or it dies. But there is no such thing as an essential carbohydrate.
These aren’t opinions. They’re physical realities, measurable and reproducible. The protein ceiling operates whether you believe in it or not. Energy density follows from thermodynamics. The gap between what the body requires and what it merely tolerates narrows the possibility space dramatically.
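The arithmetic behind two of these constraints fits in a few lines. The Atwater energy densities are standard conversion factors; the protein ceiling of roughly 3.5 g per kg of body weight per day is an estimate from the physiology literature (tied to the liver’s maximum rate of urea synthesis), used here purely for illustration, not as a hard law.

```python
# Standard Atwater energy densities (kcal per gram of macronutrient).
ATWATER = {"protein": 4, "carbohydrate": 4, "fat": 9}

def energy_kcal(protein_g, carb_g, fat_g):
    """Total energy of a day's intake, from standard Atwater factors."""
    return (ATWATER["protein"] * protein_g
            + ATWATER["carbohydrate"] * carb_g
            + ATWATER["fat"] * fat_g)

def protein_ceiling_g(body_mass_kg, g_per_kg=3.5):
    """Illustrative upper bound on daily protein before nitrogen-disposal
    capacity is exceeded; 3.5 g/kg/day is a literature estimate."""
    return g_per_kg * body_mass_kg

total = energy_kcal(150, 200, 80)   # a sample day's macros
ceiling = protein_ceiling_g(75)     # a 75 kg adult
print(f"energy: {total} kcal, protein ceiling: ~{ceiling} g/day")
```

The point is not the specific numbers but their character: these are fixed conversion factors and rate limits, the same in every human, not findings that flip with the next cohort study.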
Add historical evidence. Bones preserve isotopic signatures that reveal trophic level—where someone sat in the food chain. These signatures can’t misreport their diet. They can’t be influenced by recall bias or funding sources. They simply record what was consumed, readable long after death through mass spectrometry.
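The trophic-level reading can be sketched with the standard mixing formula from stable-isotope ecology. The ~3.4‰ nitrogen enrichment per trophic step and the plant baseline convention are established field practice; the sample values below are hypothetical.

```python
def trophic_level(d15n_sample, d15n_base, base_level=1.0, enrichment=3.4):
    """Estimate trophic level from nitrogen isotope ratios (in per mil).

    Uses the conventional ~3.4 per-mil 15N enrichment per trophic step;
    d15n_base is measured in a local baseline organism (e.g. plants,
    which sit at trophic level 1).
    """
    return base_level + (d15n_sample - d15n_base) / enrichment

# Hypothetical bone-collagen reading against a plant baseline:
tl = trophic_level(d15n_sample=9.8, d15n_base=3.0)
print(f"estimated trophic level: {tl:.1f}")  # 3.0: a meat-heavy diet
```

A bone doesn’t remember; it records. The mass spectrometer reads the ratio, and the formula places the individual in the food chain.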
Add metabolic mechanisms. The Randle cycle describes how glucose and fatty acids compete for oxidation. This competition creates distinct metabolic modes. Mix high fat with high refined carbohydrates, and you create metabolic confusion that has no precedent in human evolution. The mechanism explains why certain combinations work and others don’t, independent of belief.
Add individual verification. Genetic variation matters. Some people retain lactase production into adulthood. Some have more copies of genes for starch digestion. Some respond differently to dietary fat based on APOE variants. The constraints narrow possibility space, but within that space, personal calibration through direct experience completes the picture.
Layer these together: biochemical limits, historical evidence, metabolic mechanisms, individual verification. Contradiction dissolves. Not because someone declared victory, but because the constraints themselves leave little room for alternatives.
This is what “can” reveals when we stop asking what we should eat and start examining what humans can actually tolerate, what the evidence shows actually happened, and what metabolic mechanisms can actually sustain.
The method matters more than any single conclusion. But the conclusions that emerge from proper methodology tend to surprise people who’ve only encountered nutrition through “should.”
Next time: what those constraints actually show when we read them carefully. The biochemical limits that narrow the possibility space. The isotopic evidence from bones that can’t lie. The metabolic coherence that explains why modern dietary patterns create disease. And more.

