most recent

By Benzi Kluger 12 Jan, 2021
In a recent blog, we looked at the failure of Vitamin A to prevent lung cancer in human trials despite massive hype and other positive research. That study demonstrated the rule that we don’t know something is safe and effective in people until it has been adequately tested in people. In this and upcoming blogs, we are going to look at why this is the case, starting with the limitations of basic science and animal research. If you care about not falling for medical bullshit, this blog is important; when you go to the source of their claims, many news headlines, viral stories, and product claims rest solely on basic science or animal research. This blog is also important for understanding key differences between how medical science advances and how medical bullshit advances.

It is no secret in the scientific community that animal models do not reliably predict how treatments will work in people.1 Many things that are safe and work in animals aren’t safe and don’t work in people, and some things that work in people don’t work in animals.2 There are several reasons why animal models fail to predict how treatments will work in people, including:

Differences between species: Put another way, people are not simply large hairless rats (although there are some people I wonder about). People differ in many important ways from other animals, and these differences can affect how and whether treatments will work or be safe.

Differences between the model and the disease: Many human diseases don’t naturally occur in animals. When scientists try to create models of a human illness, there may be important ways in which the model fails to replicate the disease in people. For example, some Parkinson’s disease animal models involve giving massive doses of a neurotoxin, a scenario that bears little resemblance to how most people develop Parkinson’s.

Biases in animal research: Just as with human studies, animal research can suffer from biases ranging from a lack of appropriate blinding of investigators to publication bias (people are more likely to publish positive findings than research showing something doesn’t work); the short simulation further down in this post illustrates how publication bias alone can distort a body of results.

So why do we use animal studies at all? Because animal studies have led to advances in medical science and new treatments that would have been difficult, if not impossible, to achieve without them.3 Animal studies are an important step in developing and testing certain therapies, but they are no guarantee that a therapy will work in people. So what can we learn from the successes and failures of animal experimentation?

Promising results from studies in animals should lead to trials in people, not treatment in people. Looking at the Vitamin A and cancer example: when early animal studies looked promising, serious scientists called for large trials in people4 (which were conducted, and proved Vitamin A didn’t work). Meanwhile, news media, health books, and supplement manufacturers were ready to move straight to sales to the public. The problem here is not animal research, but how it is publicized. Until media and supplement manufacturers act more responsibly, it will be up to you to draw the appropriate conclusions.
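To make the publication-bias point concrete, here is a minimal, purely illustrative simulation (hypothetical numbers, not drawn from any study cited here). It assumes a treatment with zero real effect and a crude rule that only studies that happen to look positive and statistically significant get published, and shows how the published record can still suggest a clear benefit.

```python
import numpy as np

# Illustrative sketch with hypothetical numbers (not from the cited studies):
# the treatment has NO real effect, but only studies that happen to look
# "positive and significant" get published.
rng = np.random.default_rng(1)
n_studies, n_per_group = 1000, 10   # 1000 small animal studies, 10 animals per arm

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(0.0, 1.0, n_per_group)   # true effect is zero
    diff = treated.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n_per_group + treated.var(ddof=1) / n_per_group)
    if diff > 0 and diff / se > 1.96:             # naive "publish only positives" rule
        published_effects.append(diff)

print(f"Published {len(published_effects)} of {n_studies} studies")
print(f"Average effect in the published literature: {np.mean(published_effects):.2f}")
# Even though the true effect is zero, the published studies report a clear benefit.
```

Registering studies in advance and publishing negative results are two of the standard remedies for exactly this kind of distortion.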
There is room to improve the quality, reliability, and reproducibility of animal research. The scientific community is taking the failure of many animal models to lead to useful treatments quite seriously.5 This includes progress in understanding differences between species, improving disease models, and calls for increasing the rigor and reproducibility of animal studies.6 Improving the quality and focus of animal studies may also improve their ethical acceptance, along with progress in seeking alternatives to animal research and raising standards for the humane treatment of animal subjects.7

We can all play a role in reducing medical bullshit related to animal research. This includes being savvier readers of research, being more responsible about what we share, and always seeking out the source of claims in news and on products. If you work in news media, consider using more accurate headlines; if you are a media consumer, call out your media sources when they are misleading. As scientists and medical professionals, we also need to be responsible for how we communicate the results of animal studies and, if we perform such studies, ensure they are ethically justified and of the highest scientific rigor.

References:
1. Perel P, Roberts I, Sena E, et al. Comparison of treatment effects between animal experiments and clinical trials: systematic review. BMJ 2007;334:197.
2. Bracken MB. Why animal studies are often poor predictors of human reactions to exposure. J R Soc Med 2009;102:120-122.
3. Carbone L. The utility of basic animal research. Hastings Cent Rep 2012;Suppl:S12-15.
4. Peto R, Doll R, Buckley JD, Sporn MB. Can dietary beta-carotene materially reduce human cancer rates? Nature 1981;290:201-208.
5. Akhtar A. The flaws and human harms of animal experimentation. Camb Q Healthc Ethics 2015;24:407-419.
6. Frommlet F. Improving reproducibility in animal research. Sci Rep 2020;10:19239.
7. Gilbert S. Progress in the animal research war. Hastings Cent Rep 2012;Suppl:S2-3.
By Benzi Kluger 11 Jan, 2021
In a recent blog, I looked at the failure of Vitamin A to prevent lung cancer in human trials, despite massive hype and other positive research, to demonstrate the rule that we don’t know something is safe and effective in people until it has been adequately tested in people. In my last blog, I looked at some of the limitations of animal research in predicting human safety and efficacy. In this blog, we will look at how easy it is for correlations to be misleading, even when they are based on a large number of observations.

In contrast to much of medicine, which studies disease and health in individuals, epidemiology studies health and disease at a population level. As with animal research, there are certain advantages to this approach, such as being able to uncover the impact of certain environmental exposures on health, or to determine the impact of public health policy on pandemic spread. There are also limitations, particularly when looking at correlational studies. In a correlational study, researchers collect data on one or more health outcomes of interest (e.g. lung cancer, longevity, happiness) and several potential predictors of those outcomes (e.g. smoking, diet, TV watching, zip code) in a sample of people. Researchers then look for correlations between the predictors and the health outcomes. This seems like a pretty straightforward way to determine whether a certain predictor causes a certain health outcome or disease, but there are many ways this can go wrong:

There could be bias in the sample. If I’m interested in determining whether farm work is associated with certain diseases, but only sample English-speaking people, I could underestimate some significant risks that may affect more vulnerable non-English speakers.

There could be bias in who responds. If I send out a survey on “Cannabis and Happiness,” the people who respond are likely to have stronger feelings on the topic than the people who don’t.

The results could simply represent a statistical fluke. Ironically, the more predictors researchers look at, the more likely it is that they will come up with an erroneous conclusion. In fact, if you look at enough predictors, you can almost guarantee that you will make an error, as happened to a Swedish research group that sought to determine whether living close to power lines caused any of a list of over 800 diseases. (A short simulation at the end of this post shows how quickly chance findings pile up.)

Even if the correlation is real, it does not prove causation. Sometimes a correlation may arise because of a shared, but unmeasured, causal factor. For example, yellow teeth may be associated with lung cancer, but that is because both are associated with smoking; teeth whitening will not prevent cancer. Sometimes the conclusions drawn may actually reflect reverse causation. For example, one may see a correlation between smoking and schizophrenia, and conclude that smoking causes schizophrenia; however, it appears that at least some of this correlation may reflect persons with schizophrenia finding some symptom relief from smoking. Sometimes a correlation may simply reflect larger trends in society or other confounding factors. This website goes into this and other causation errors in depth, including a striking graph on the correlation (NOT CAUSATION) between U.S. spending on science and deaths by hanging.

The key takeaway here is that one must be skeptical of drawing strong conclusions, particularly about causation, from observational and correlational studies.
This happens all the time; many news headlines and medical bullshit books are based on very weak and spurious correlations when you track down the source of the claim.
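To see how the “statistical fluke” problem plays out, here is a minimal sketch using made-up random data (not the Swedish study’s): one random outcome is tested against hundreds of unrelated random predictors at the conventional p < 0.05 threshold.

```python
import numpy as np
from scipy import stats

# Illustrative sketch with made-up data: one random "health outcome" and
# 800 random, unrelated "predictors" measured in the same 1,000 people.
rng = np.random.default_rng(0)
n_people, n_predictors = 1000, 800

outcome = rng.normal(size=n_people)
predictors = rng.normal(size=(n_predictors, n_people))

# Count how many predictors look "significant" at the usual p < 0.05 cutoff.
false_positives = sum(
    stats.pearsonr(predictors[i], outcome)[1] < 0.05
    for i in range(n_predictors)
)
print(f"Spurious 'significant' correlations: {false_positives} of {n_predictors}")
# With 800 independent tests you expect about 0.05 * 800 = 40 chance hits,
# and the probability of at least one is about 1 - 0.95**800 (essentially 100%).
```

This is why careful epidemiological studies pre-specify their hypotheses or correct for multiple comparisons before claiming a finding.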
By Benzi Kluger 08 Jan, 2021
The Vitamin A and Lung Cancer Story

start here: the price of bullshit series

By Benzi Kluger 05 Jan, 2021
Question: is medical bullshit really a big problem? Or is this campaign against bullshit simply one more attempt by people in mainstream medicine to knock out any competition?
By Benzi Kluger 04 Jan, 2021
An excerpt from my book manuscript entitled Dangerous and Expensive Bullshit:
By Benzi Kluger 03 Jan, 2021
Now that we’ve covered expensive, let’s tackle the dangerous side of medical bullshit. There is a general misperception that alternative therapies are safer than conventional medicine. Part of this ties to the myth that things that are natural must be safe (see number 8 in the Medical Bullshit Detector). Part of this ties to the fact that when people are trying to sell you a health product or service, they emphasize only the potential benefits, many of which are unbelievable or untrue, and neglect to mention any side effects. It is a rule that if something has an effect, it must have side effects. Take exercise, for example. Despite its clear benefits, any exercise you choose carries some risk of injury. Even placebos can cause side effects, the so-called nocebo effect.

Myth 1: Alternative medicines don’t have side effects.
Truth: My initial motivation for getting into the world of anti-bullshit was seeing several of my own patients go to Mexico or Costa Rica and spend tens of thousands of dollars on stem cell treatments that had no proven benefits but had well-documented harms, particularly the risk that the injected cells could become a cancer. There were over 20,000 emergency room visits a year related to supplements in 2015,1 many of them serious enough to require hospitalization, and that number has certainly grown if you add cannabis-related products to the mix.

Myth 2: There is no harm in stopping conventional treatments.
Truth: When you stop treating something treatable or curing something curable, it gets worse. Some of the worst tragedies of medical bullshit are people who died from illnesses we now have reliable treatments for, because they stopped those treatments after falling victim to a charismatic quack. In my own practice I’ve had patients take a break from medications to try various diets and supplements, only to come back months (or, rarely, years) later much more disabled. The miracle treatments they hoped would cure their illness instead accelerated it, because stopping the conventional treatments that worked let the disease progress. Unfortunately, the damage of these experiments is often permanent.

Myth 3: My doctor doesn’t need to know about my alternative health choices.
Truth: Your doctor and healthcare team should know about your alternative health treatments and supplements. For one thing, supplements can interact with prescription medications and either decrease their effectiveness or increase toxicity. Second, side effects from supplements or other treatments can mimic symptoms of illness and may lead your doctor to order unnecessary tests or start unneeded treatments. As one common example, medical marijuana can cause low blood pressure, falls, confusion, and nausea. Finally, hiding things from your doctor erodes the openness of your relationship over time, which in turn makes it more difficult for you to get the most out of your visits.

Myth 4: Doctors don’t recommend medications or other treatments unless you really need them.
Truth: The rule that to a hammer everything looks like a nail applies here: doctors prescribe and surgeons cut.
In my own research I’ve found that some patients don’t share symptoms with their doctors for fear that it will result in another prescription.2 Common instances of overtreatment include prescribing antibiotics for self-limited viral infections, pushing surgery for back or knee pain in the absence of a surgically correctable cause,3 and almost any aggressive intervention in people with a terminal illness nearing the end of their life.4 These treatments are not merely expensive, but also dangerous, contributing to antibiotic resistance, unnecessary surgical complications, and avoidable suffering for dying patients and their families.

References:
1. Geller AI, Shehab N, Weidle NJ, et al. Emergency Department Visits for Adverse Events Related to Dietary Supplements. N Engl J Med 2015;373(16):1531-1540.
2. Boersma I, Jones J, Carter J, et al. Parkinson disease patients’ perspectives on palliative care needs: What are they telling us? Neurol Clin Pract 2016;6(3):209-219.
3. Stahel PF, VanderHeiden TF, Kim FJ. Why do surgeons continue to perform unnecessary surgery? Patient Saf Surg 2017;11:1.
4. Cardona-Morrell M, Kim J, Turner RM, Anstey M, Mitchell IA, Hillman K. Non-beneficial treatments in hospital at the end of life: a systematic review on extent of the problem. Int J Qual Health Care 2016;28(4):456-469.
By Benzi Kluger 02 Jan, 2021
It is one thing to see the statistics of medical bullshit: the billions of dollars wasted by people desperate for better health who often can’t afford it, the billions of dollars made by unscrupulous individuals and corporations off of the suffering of others, the thousands of people harmed or killed by side effects of products they were told were 100% safe and effective. It is another thing to open your eyes and heart to the personal suffering that is involved.

Myth 1: All illnesses can be cured.
Truth: Only an asshole would claim that all diseases are curable. If you buy into this line of thinking, then when you or your loved one fails to cure their Alzheimer’s disease, lung cancer, depression or _____ (fill in with an awful illness that no one would have if they could avoid it), the conclusion is that there must be something wrong with you. You also waste a lot of time (and money) on false cures that could be spent doing things that are really important to you with people you love.

Myth 2: Aging can be reversed.
Truth: Whether we will ever reach a stage of scientific advancement at which at least some aspects of aging can be slowed or stopped is an open question (there are great strides being made in understanding cellular aging and addressing age-related illness), but we aren’t there yet. In the meantime, our obsession with eternal youth contributes to a loss of reverence for the elderly, to desperate efforts to look young, including plastic surgery, and to a loss of the spiritual growth and wisdom that typically comes with the acceptance of time and change.

Myth 3: Death is optional.
Truth: Spoiler alert: one way or another your life will end; death is inevitable. When you have a medical system that pretends death is optional, or simply fails to acknowledge it, you end up with a lot of people getting unnecessary care in the last few weeks, days, and even hours of their life. This “care,” often consisting of tubes connecting people to various machines, is uncomfortable and causes suffering not only to the person dying but also to the family members who watch it, and who often feel responsible for pulling a plug that perhaps should never have been offered. By denying death, we also deny ourselves the opportunity to connect deeply to death and the dying, and to say those things that need to be said (I love you, thank you, I forgive you, forgive me) while we still can.

Myth 4: Perfect people live perfectly healthy lives.
Truth: Bad things happen to good people, no matter how you measure goodness. This myth allows, perhaps even encourages, people who haven’t yet gotten a serious illness to judge people who have. My first wife, a Deadhead with young-onset rheumatoid arthritis, was told by people at parties that she had a “dis-ease” because she wasn’t groovy enough or some such shit. By making her disease her fault, they could persist in the illusion that they would never get sick because they did yoga, took natural drugs, cleansed toxins from their aura, and the list goes on.

There is a difference between living and surviving. I am all for exercise, watching one’s diet, not smoking, and wearing bike helmets to try to improve one’s chances of living a long and healthy life. But I also know there are no guarantees. If we treat these things as matters of survival, we let health dogma control our life. One can also do these same activities with awareness of the precious and fragile gift of life and connect to freedom, gratitude, and joy. Thank you for reading.
Are there any costs to medical bullshit that you think are missing? How has medical bullshit affected you? Please send me some ideas in the comments.
