Take your favorite health-related podcast aimed at helping you optimize your body, brain, and mind. Pick a random episode. The podcast’s guru host may have invited an “expert” on the show to discuss the latest and greatest product to enhance some bodily asset. It could be a supplement, tool, technique, book, educational material, etc. You may feel ecstatic. “YES!” you exclaim with joy.
“This is it. This is what I have been waiting to find.” You are so overjoyed, you call in to the podcast and ask about the product: “Hey Podcast Bob (the podcast host), I have X condition. I’ve tried Z and Y but nothing helps. Do you think this could help me?”
Bob responds with the usual disclaimer, “Well, I’m not a doctor. Nothing I say here should be construed as medical advice (ahem)… But yes, I do think V could help you. Let me tell you exactly how to use it…”
Now, before you try product V, stop. Take a deep breath. Turn off the podcast. Reflect for a moment. Ask yourself “Who is this expert? What evidence do they actually have? How do I know what they said is true?”
You might as well go to the kitchen and start peeling an onion. Don’t bother with the candles. Just let your eyes burn with tears as you slowly exfoliate the layers of this herbaceous perennial flowering member of the Alliaceae family (the onion) with your bare hands. Layer by layer, remove the shells leading to its pearly glowing core of truth.
Trying to get to the root of who the expert is and the veracity of their claims is a painful endeavor. Podcasts, and social media in general, create nearly religious communities where the same thought patterns are perpetuated, weaving in and out of every information thread either blatantly or as subliminal undercurrents that burrow their way into your wakeful consciousness and dream-states. These themes ripple to neighboring social media communities with similar agendas. Before you know it, you have become fully entrenched in a paradigm riddled with biases, or at least one with a subtle deceptive sheen that obscures the truth.
“What is the problem with listening to (some) experts?” Let’s break down some issues.
Issues with expert opinions
Expert advice is the lowest level of evidence on the evidence hierarchy. Yes, they may be an “expert” in their field, but being an expert does not necessarily mean their perspective is true and impartial.
Degrees are earned in school, but “expert” diplomas are not handed out. Expertise can be earned, peers can recognize an expert as an expert, or expertise can be a self-proclaimed title. No matter how expertise is claimed, it can be a dubious title.
Experts may be extremely biased, harboring one dominant viewpoint. They may have worked in their field for years to decades, or they may be a new convert, schooled by a mentor who held one ideology and/or taught one methodology.
If experts hold a biased perspective rather than embracing multiple viewpoints and new evidence as it emerges, they may cling to a single paradigm with unwavering belief. Experts may be more likely to cherry-pick evidence that conforms to their belief systems, may be more prone to conflicts of interest, and may be so selectively focused on one field that they lose sight of the broader picture, which biases their overall perspective, amongst numerous other forms of bias. Your bias alarm should be screaming when you hear people wholeheartedly endorsing a single theory/opinion/product/etc. without recognizing evidence to the contrary or its limitations. Another red flag is an expert who only explains why counter viewpoints are wrong, rather than engaging with them.
A coin always has two sides, and a biased expert can only tell you what one side looks like in a certain light. A biased expert will:
- Be uninformed about, or ignore, the other side of the coin
- Provide results that are applicable to only one population (for example, a small pilot study with adults of one gender and a limited age range, making the external validity questionable), or worse, apply conclusions from a preclinical trial to a clinical context, which should be considered a sin
- Fail to recognize the overall literature base around a topic and focus only on the studies that confirm their beliefs
- Be dogmatic about a hypothesis they developed or support, clinging to it for dear life even when evidence exists to the contrary
- Prop themselves up and give the impression they have solved the mystery of X disease, and that the product/paradigm they endorse, and possibly discovered, is the secret solution
An unbiased expert will:
- Know the front and back of a topic
- Admit when they are wrong and admit when they do not know answers to questions
- Ask questions to clarify things they truly do not know instead of pretending to know an answer
- Understand the breadth of literature around a topic and acknowledge conflicting opinions
- Give credit where credit is due
- Refrain from inflating evidence around a subject
What is a surrogate outcome?
Aside from experts’ potential biases, it’s vital you understand what a surrogate outcome is and how experts commonly misuse them.
Briefly, a surrogate outcome is a substitute outcome that indirectly attempts to predict a hard clinical outcome. Surrogates are typically lab or radiology-based outcomes.
Examples of surrogate outcomes include:
- Cholesterol
- Blood pressure
- C-reactive protein
- Blood sugar
- White blood cell count
- Bone mineral density
- Premature ventricular contractions
Examples of hard clinical outcomes include:
- Death
- Myocardial infarction
- Stroke
- Acquired immunodeficiency syndrome
- Fractures
Essentially, any measurable factor on the path to a disease or death is a surrogate. The actual disease or death is the hard clinical outcome.
Why are surrogate outcomes used?
Surrogates are often used because they are much cheaper and easier to study. However, surrogates do not always correlate with clinical outcomes, and they often fail at predicting them. In other words, changing a surrogate may not actually change a clinical endpoint. Worse, it may change the clinical outcome in a counterproductive direction.
For example, one analysis examined 36 cancer drugs that were FDA approved based on surrogate outcomes in clinical trials. Once implemented into practice, 19% demonstrated no change in, or worsening of, health-related quality of life (HRQoL), despite an average annual cost of $87,922 for the drugs lacking HRQoL benefit. Similarly, out of 36 FDA-approved cancer drugs tested in trials examining surrogate outcomes, only 14% were found to improve overall survival, and over 50% failed to extend survival.
Surrogate outcomes are widespread
While these surrogate examples concern drugs, surrogate outcomes are rampant in clinical research. Much of what you hear in podcast ads concerns surrogate outcomes, especially for hot new gizmos and supplements.
For example, consider a hypothetical new supplement containing high levels of phenols and sterols isolated from a plant that has been found to lower cholesterol levels. The compounds have been isolated, placed into a supplement, and tested in a very small clinical trial with 20 participants.
The product dramatically lowers cholesterol levels and is marketed for its cholesterol-lowering actions. However, despite the reduction in cholesterol, it's possible the supplement could increase cardiovascular disease risk, an association that has been seen in some observational research. If the expert never studied the clinical endpoint (cardiovascular disease), the product may indeed lower cholesterol levels but could worsen health and quality of life for those who take it by increasing the prevalence of cardiovascular disease.
Again, this is simply a theoretical scenario. The point: making medical claims based on a surrogate outcome can be a dangerous game to play, especially for fringy biohacking supplements and toys. Aside from the potential dangers of surrogate outcomes, at minimum, fringy products may be a complete waste of money if they have no true impact on an important clinical outcome. Want to watch handfuls of Benjamins go round and round the toilet, flushed off to the sewer depths? Try buying a product based on an expert’s advice. You may get the same effect.
Experts who benefit financially
Experts with a financial agenda often sell products based on clinical research from small, underpowered studies. These studies lack the statistical power to detect whether true differences between groups exist, rendering the results inconclusive. Statistically significant, positive results from such trials are usually overestimates. If no significant effects are found, researchers may bury the studies since they didn’t show benefit, introducing publication bias, or incorrectly conclude that there were no differences between groups. The best that can be said for underpowered trials with non-significant effects is that there is an absence of evidence.
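To make the power problem concrete, here is a minimal Python sketch using a standard normal-approximation power formula. The sample sizes and effect size are made up for illustration; they are not from any specific trial discussed here.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_sample_power(d, n_per_group, alpha_z=1.96):
    """Approximate power of a two-sided, two-sample z-test for a
    standardized effect size d (normal approximation)."""
    noncentrality = d * sqrt(n_per_group / 2)
    return norm_cdf(noncentrality - alpha_z)

# A tiny trial chasing a moderate effect (d = 0.5) with 10 per group
# detects it only ~20% of the time; ~64 per group gives the usual 80%.
print(round(two_sample_power(0.5, 10), 2))
print(round(two_sample_power(0.5, 64), 2))
```

With so little power, a "significant" result from a tiny trial is more likely to be an exaggerated fluke than an accurate estimate of the true effect.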
Preclinical to clinical
Another deceptive trick experts love is directly applying evidence from preclinical to clinical studies. For example, if taking X product upregulates a protein in vitro and that protein has been linked to Y outcome in human studies, experts may claim that X product will work in humans to achieve Y outcome. This is only a hypothesis, not evidence of an effect, and an inappropriate tactic that propagates false beliefs.
Correlations between variables can be found for almost everything, and experts love hanging their hats on correlations to drive an agenda. The plethora of correlations between diet and disease is illustrated in the famous cookbook review: Schoenfeld and Ioannidis randomly selected foods from a cookbook and checked whether those foods had been studied for associations with cancer risk. Over 80% of the foods had published research examining their relationship with cancer risk. Out of 264 foods, an increased and a decreased cancer risk was reported for 39% and 33%, respectively. Large effects were observed in both directions: the median relative risks (IQRs) were 2.20 (1.60, 3.44) for studies reporting an increased risk and 0.52 (0.39, 0.66) for those reporting a decreased risk. However, when pooled in meta-analyses, the effects were null (median RR 0.96; IQR 0.85, 1.10). In other words, associations with cancer risk were found for most foods, and single-study effects were exaggerated, but when combined, the associations disappeared. The lesson: effects between variables are common, caution is needed when interpreting single studies with consequences for human health, and it is important to look at the whole (pooled results) in addition to the parts.
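The pooling step Schoenfeld and Ioannidis describe can be sketched in a few lines of Python: a fixed-effect, inverse-variance meta-analysis on the log-RR scale. All numbers below are hypothetical, invented to mimic the pattern of large but contradictory single-study effects.

```python
from math import log, exp, sqrt

def pooled_rr(rrs, ses):
    """Fixed-effect inverse-variance pooling of relative risks.
    rrs: single-study relative risks; ses: standard errors of log(RR)."""
    weights = [1 / se ** 2 for se in ses]
    log_pooled = sum(w * log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    return exp(log_pooled), sqrt(1 / sum(weights))

# Hypothetical studies of one food: some report a large increased risk,
# some a large decreased risk, with assumed standard errors on the log scale.
rrs = [2.2, 0.52, 1.8, 0.6, 1.1, 0.9]
ses = [0.4] * len(rrs)

rr, se = pooled_rr(rrs, ses)
print(round(rr, 2))  # the pooled estimate lands near the null (RR ~ 1)
```

Individually, each "large" effect looks alarming or miraculous; pooled, they cancel toward nothing, which is exactly the cookbook-review pattern.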
Other signs to watch out for
Some other hot ticket items that should set off your “expert alarm” include:
- People who use too many qualifiers (may, potentially, possibly, etc.). All outcomes derived from clinical research are estimations. The goal of clinical trials is to estimate a population mean from a sample mean. No conclusions are 100% “the truth,” as there is always the possibility of variability between intervention protocols, populations studied, methods used, etc. But if you hear an expert repeatedly use qualifiers to defend a conclusion, it increases the likelihood that either they don’t know what they are talking about or they are uncertain but masquerading as certain. In either case, the qualifier serves as a shield to hide a bias.
- People who say thousands of studies have been conducted and can be found in PubMed/MEDLINE. Experts use this statement to try to sell products. For example, they might say a supplement affects X outcome, and this is proven because there are over 1000 studies in PubMed documenting the association. This is completely bogus. The mere existence of a large body of research means nothing. The studies could completely contradict the expert’s opinion, or be so heterogeneous, biased, and riddled with methodological problems that, viewed individually or collectively, no definitive conclusion can be made.
- People who claim that because a medicine was used for thousands of years, it should be used today and is safe. For example, Traditional Chinese Medicine and Ayurveda are medical systems that have existed in China and India, respectively, for thousands of years. Their practices are still used today, and there is a body of evidence demonstrating effectiveness of these systems for some patients, diseases, and outcomes. But thousands of years of “use” means nothing clinically. We want to know whether an intervention is safe and effective, not merely whether it has been “used” for a condition. Some interventions could be unsafe; for example, herbs may be prescribed for a condition, but they are unregulated and can be contaminated with heavy metals, making them dangerous for human consumption.
- People who can’t change their opinion or question their beliefs should not be trusted. There is a famous phrase in the finance world that it is best to have “strong convictions, loosely held.” In other words, it's OK for experts to have strong faith in research and ideas, but if evidence to the contrary emerges, those experts should be capable of changing their opinion.
What can we do about issues with expert opinions?
How can we move beyond these issues so more people can trust expert opinions?
We need better studies, better data, better evidence synthesis, and less bias. These issues can be mitigated with proper training so that studies are properly designed, conducted, reported, and interpreted. That may sound simplistic, but the solution starts at square one: researchers need to know how to design, conduct, and interpret studies correctly.
You may ask, “Don’t researchers know how to conduct studies? They spent years attaining graduate degrees in clinical fields. And won’t the peer review process detect issues with papers before they are published?”
Examples of failed research
Unfortunately, there are numerous examples of failed research. For instance, studies examining the antioxidant properties of vitamins demonstrated inverse associations with all-cause mortality, cancer, and cardiovascular disease in observational research, but the apparent protective effect of antioxidant supplementation disappeared when multiple randomized controlled trials tested the same associations. Re-analysis of 37 randomized controlled trials demonstrated that 62% of results changed upon re-examination, including changes in which patients should be treated and in the direction, magnitude, or statistical significance of the treatment effect.
Similarly, out of 49 highly cited original clinical trials, subsequent studies contradicted 16% of the original research, while another 16% demonstrated weaker treatment effects. Discrepancies may be due to errors in conduct, reporting, and analysis methods. Re-analysis of 250 controlled trials demonstrated that treatment effects were overestimated (P < 0.001).
Compounding these research problems, the peer review “watchdog” system that is supposed to protect the biomedical literature is broken. Randomized trials examining reviewers’ ability to detect errors demonstrate that ~75% of reviewers fail to detect major errors in manuscripts and 68% do not recognize when conclusions are not supported by results. The end result of the research and publication processes is that between 35-90% of research investigating major domains such as psychology, genetics, oncology, and cardiovascular disease, amongst others, cannot be reproduced or replicated. The lack of reproducible and replicable outcomes is extremely concerning because individual studies form the basis for systematic reviews/meta-analyses and guidelines that inform clinical and healthcare policy decisions. If results cannot be corroborated, decisions about the application of research may be biased. Simply put, we don’t know if the research results are true.
How do we fix this?
Where does the science community turn to rectify the issue? While publishing and peer review practices can be improved, the research has already been completed by the time the manuscript arrives at the publisher's doorstep.
A bad study is a bad study. It might help to qualify a study as poor quality or biased, so that knowledge consumers reading the study understand its limitations. But that still may not deter people with an agenda from using that research to dubiously promote their claims. Furthermore, the time, resources, and personal investment from patients involved in clinical research have already been wasted on something that will contribute little to the overall literature pool around a topic, or feed into a bias that leads to deceptive and overstated health claims.
Researchers need to perform studies correctly and need to be trained to use appropriate methods that minimize the risk of bias. This starts with education. I’m not implying that the thousands of clinical graduate degrees are hogwash. I cringe at the implication that researchers are frauds who don’t have a clue about what they are doing. Yes, some researchers are frauds, and there are numerous examples of fraudulent research, too many to link here. Science is hard and imperfect. Human errors happen. But there are differences between fraud, errors, and a lack of appropriately designed and interpreted trials. While errors will always exist, ensuring trials are properly conducted can be achieved through adequate training. Clinical research programs need to ensure that education is provided on how to design trials that are free from bias.
As an adjunct to proper training, perhaps there should be a “methods committee” that ensures research has been appropriately designed before it is conducted; all clinical trials would need this committee’s approval before commencing, akin to an ethics committee ensuring a trial is safe and ethical before it begins. Bias could, of course, still be introduced during and after the trial. However, if the puzzle pieces of the study were in place at the outset, biases would be minimized and, theoretically, the results would be more reliable. This would not stop experts from making wild statements, inappropriately extrapolating evidence from basic research to human studies, or letting financial agendas drive deceptive claims. Just as a bridge needs a proper foundation to support the weight of passing freight, studies need to be designed correctly to hold the weight of their results and conclusions. It is researchers who design the trials, and researchers who are responsible for the quality of studies.
We live in a world where most things are commodified. Even science, a field that seeks to understand and advance the human condition, is a commodity. People want a legacy, to discover something no one else has, and trailblaze into the unknown. Unfortunately, the science-oriented biohacking trailblazers that want you to believe in their unique theory, product, and paradigm are most often biased and blinded by their inability to see and accept ulterior perspectives even when evidence contradicts their opinion. They always can explain how they are right, others are wrong, and how their paradigm “just needs a chance” to be proven correct.
Sometimes evidence is not available to answer a question. But if there is an absence of evidence beyond theory and anecdote, even crowdsourced anecdotes from thousands of people (aka, not proof), experts need to be honest about said absence. We need to cool down the ridiculous marketing campaigns that promote new health supplements with unproven claims based on shoddy research.