It's pretty frustrating, isn't it? Studies on nutrition seem to contradict themselves all the time, making it hard for us to know what is correct and what isn't. One day margarine is good, then it's bad, but it must be OK because the American Heart Association still recommends it, right? Eggs are good, then they are bad, now they're back on the upswing again.
Vitamins are supposed to protect against cancer, but now they seem to cause cancer — with this type of information, how are we to know what to recommend to our patients?
Just to give you some context, it is estimated that 90 percent of the published medical information that doctors rely upon for diagnoses and prescriptions is wrong.4 That shocking number should give you serious pause the next time you see a headline in the paper.
If you want to learn how to discern what is true and what is not, the first thing to do is stop getting your medical information from television and magazines. Reading headlines or a blurb in the paper, or hearing a 60-second report on television, often distilled through a reporter who has little medical knowledge, ensures you will never get the complete or accurate story.
Second, it helps to know something about the topic: if you do not know anything about, for example, soy, it would help to read more than just websites claiming how healthy it is. Reading conflicting points of view from reputable sources will allow you to gather information from all sides. You want to notice where you've decided you "know" something already and are therefore not open to hearing the whole story. And lastly, practice some critical thinking. This means not automatically believing everything you read and hear.
So what are some of the factors that influence studies? Bias is a big one, and bias comes from a variety of directions:
Researchers wanting a certain result will often find that result. At every point in the research, there are places to distort results, to make a stronger claim, or to select what is concluded or included.
Pressure to get funding and tenured positions, or to keep grants, creates opportunities to bias research. The pressure to discover something new and ground-breaking, which is far more likely to be published than a study that finds a theory to be false, can also cause results to be skewed.
Financial conflicts of interest, whether from pharmaceutical companies, sponsors, or simply wanting to maintain a grant, can distort results. The examples of this are, unfortunately, too numerous to mention.
Then there's the actual research. The following are the elements to look for.
Observational Studies Versus Randomized, Double-Blind, Placebo-Controlled Studies
Observational studies are just that: several elements were observed, and someone wrote a paper saying they were correlated (not causal, but correlated; those are two very different things and are often confused). If observation were all that was needed, we'd be quite certain that the Earth is flat and that the sun revolves around us. Be very careful with this. An observation should lead to a hypothesis, which should lead to testing, which nearly always leads to a failed test, since there are many wrong hypotheses for every right one, so the odds are against any one in particular being correct. And if the observational study is reported as "fact," most people assume it's correct, which gives us debacles such as hormone replacement therapy prescribed for preventive health that instead increased the risk and incidence of heart disease, stroke, blood clots, breast cancer and dementia, injuring or killing tens of thousands of women.
There is simply no way to observe a large group of people, balance all the confounders (smokers vs. non-smokers, vegetarians vs. meat eaters, different ages, different genders, income disparities, educational differences, exercise, overall initial health) and then say your interpretation is solid. It is like saying you can balance a vegetarian, non-smoking, exercising, female software engineer from Seattle against a construction worker in Alabama who eats at truck stops. Good luck with that. Researchers give people questionnaires, often years apart, and hope the respondents will tell the truth about what they do and eat. And yet we read about observational studies like this all the time and, being unable to distinguish good research from bad, assume the conclusions are solid. The book "The China Study" by T. Colin Campbell (not to be confused with the actual China study) is a good example; I'll be getting to that in a moment.
Statistics can be skewed many, many ways, but one good example is in risk calculations. Let's say 100 men take a statin, and 100 men take a placebo. At the end of the study, two men on the statin have a heart attack, and four men on the placebo do. Absolute risk calculation says that statins reduce the risk of heart attack by two percentage points (from 4% to 2%). But relative risk calculation sees that half as many men on the statin had a heart attack as on the placebo, and will therefore say statins reduce your risk of a heart attack by 50%. Which way do you think the study will be reported? (Let's just say that absolute risk is an unpopular number.) And while absolute risks are buried in the actual studies, relative risk numbers are more often in the abstracts, and who wants to read the whole study when the abstract (supposedly correctly) summarizes everything?
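For readers who like to see the arithmetic spelled out, here is a minimal sketch of the statin example above, using the hypothetical numbers from the text (2 of 100 on the drug, 4 of 100 on placebo), not data from any real trial:

```python
# Absolute vs. relative risk reduction, using the hypothetical
# statin example from the text (not real trial data).

def risk_reductions(events_treated, n_treated, events_placebo, n_placebo):
    """Return (absolute risk reduction, relative risk reduction) as fractions."""
    risk_treated = events_treated / n_treated   # 2/100 = 0.02
    risk_placebo = events_placebo / n_placebo   # 4/100 = 0.04
    arr = risk_placebo - risk_treated           # 0.02 -> "2 percentage points"
    rrr = arr / risk_placebo                    # 0.50 -> "50%"
    return arr, rrr

arr, rrr = risk_reductions(2, 100, 4, 100)
print(f"Absolute risk reduction: {arr:.0%}")
print(f"Relative risk reduction: {rrr:.0%}")
```

The same two event counts yield "2%" or "50%" depending on which denominator you choose, which is exactly why the relative figure tends to appear in abstracts and headlines.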
Ultimately, even when a study's impressive-sounding result is later disproven, many researchers have invested their careers in the area and continue to publish papers on the topic, "infecting" other researchers who read those studies in journals. Then there are researchers who continue to cite the original research as correct, often for years or decades after it was refuted. If you can't keep up with studies, what makes you think M.D.'s can?
Then we have the concrete problems with studies. There are studies that came to a "solid" conclusion based on testing 17 individuals. Or ones that fed vegetarian animals (rabbits) something wildly outside their normal diet (cholesterol dissolved in sunflower oil) and then concluded that eating cholesterol causes heart disease. Or the demonization of solid fats, with the research comparing hydrogenated fats to healthy, non-adulterated fats, and the predictable result that the hydrogenated fat caused health issues. Or conclusions like "meat causes a potassium deficiency," when it turns out the researchers fed the subjects boiled turkey meat, the potassium having been lost in the cooking water, which was thrown out. I could go on and on.
Bias in the media is particularly frustrating, as this is the source where most people get their health information. It starts with a research study that says, "A is correlated with B, given C, assuming D, and under E conditions," which turns into "Scientists find potential link between A and B" and "A Causes B!" and in less reputable news outlets becomes "What You Don't Know About A Can Kill You!" The reporter is often reading something second or third hand, and possibly adding to the bias.
The book "The China Study" is a good example of where many of these elements come into play. "The China Study" (the book, not the actual China study, which said nothing of the sort) purports that animal protein causes cancer. The studies were done feeding rats casein (a dairy protein), which is known to cause health problems. But you can't even say that dairy proteins cause cancer, because whey protein (another dairy protein) actually protects against cancer. Instead, Campbell (a known vegan, and this is important because it can contribute to bias) leaps directly to the statement that animal protein causes cancer. Really? So then how does one explain cultures like the traditional Inuit, who, living on permafrost, couldn't even grow vegetables, therefore ate basically only meat and fats, and had a cancer rate of 0.01%? (Of course, they weren't the only ones.)
What you also didn't read in the book was what happened to the rats: the low-protein diets actually prevented the rats' livers from detoxifying properly, and they died premature deaths of liver failure. This also occurred in the original research that Campbell replicated, but it is conveniently left out of the book. Oh, and you probably didn't read this: "An examination of the original China Study data shows virtually no statistically significant correlation between any type of cancer and animal protein intake." (That's from the actual China study.) I can go on quite a bit about this (I have a whole lecture on it), so if you'd like to read more about this topic, please email me.
The conclusion is this: Don't be a lemming and follow what everyone else says. Or a parrot, repeating poor information. Start reading more, and reading between the lines. Immediately become cautious when you read words like "possibly" or "may correlate" or "this observational study says…" and remember that an enormous number of the studies you read are wrong, and if you're just reading the headlines, that's even worse. Question everything and remember your common sense. Take everything with a grain of salt (a pound might be more helpful), and practice your critical thinking, if only because it will make you very entertaining at dinner parties and a better acupuncturist when it comes to helping your patients.
Omenn GS, Goodman GE, Thornquist MD, Balmes J, Cullen MR, Glass A, Keogh JP, Meyskens FL Jr, Valanis B, Williams JH Jr, Barnhart S, Cherniack MG, Brodkin CA, Hammar S: Risk factors for lung cancer and for intervention effects in CARET, the Beta-Carotene and Retinol Efficacy Trial.
The effect of vitamin E and beta carotene on the incidence of lung cancer and other cancers in male smokers. The Alpha-Tocopherol, Beta Carotene Cancer Prevention Study Group. N Engl J Med. 1994 Apr 14;330(15):1029-35.
Bjelakovic G, Nikolova D, Simonetti RG, Gluud C. Antioxidant supplements for preventing gastrointestinal cancers. Cochrane Database Syst Rev 2004;(4):CD004183
Ioannidis JP. Why most published research findings are false. PLoS Med. 2005 Aug;2(8):e124.
von Elm E, Egger M. The scandal of poor epidemiological research. BMJ. 2004 Oct 16;329(7471):868-869.
Smith GD, Ebrahim S. Epidemiology — is it time to call it a day? Int J Epidemiol. 2001;30(1):1-11.
Anitschkow N, Experimental Arteriosclerosis in Animals. In: Cowdry EV, Arteriosclerosis: A Survey of the Problem. 1933; New York: Macmillan. pp. 271-322.
Dehaven, J. et al, Nitrogen and sodium balance and sympathetic nervous system activity in obese subjects treated with a low-calorie protein or mixed diet. N Engl J Med, 1980. 302(9): p. 477-82
O'Connor A. Really? Milk thistle is good for the liver. New York Times, Jan 12, 2010.
Rambaldi A, Jacobs BP, Gluud C. Milk thistle for alcoholic and/or hepatitis B or C virus liver diseases. Cochrane Database of Systematic Reviews 2007, Issue 4.
Bounous G., et al. Whey proteins in cancer prevention. Cancer Lett. 1991 May 1;57(2):91-4.
Hakkak R., et al. Diets containing whey proteins or soy protein isolate protect against 7,12-dimethylbenz(a)anthracene-induced mammary tumors in female rats. Cancer Epidemiol Biomarkers Prev. 2000 Jan;9(1):113-7
Marlene Merritt, DOM, LAc, ACN