Bias is a “prejudice towards something” and a “deviation from the truth.”
If research is biased, the obtained results will deviate from an accurate estimate of the true results, leading to wrong conclusions. The objective of a clinical trial is to estimate the population mean from the sample mean. To do so, researchers perform sampling, a process where a small subset of a larger population is enrolled in a trial, allocated into groups, and an intervention is tested on the participants. When conducting a clinical trial, there are numerous opportunities for errors to be introduced into the study that distort the results.
Systematic errors, also known as bias, are among the most detrimental. Systematic errors are caused by actions of the researchers and/or the participants.
These errors arise when a study is designed or conducted in a way that favors one answer to the research question over another. In other words, the design and/or conduct of the trial is prejudiced (consciously or unconsciously) to produce incorrect results.
Overview of Risk of Bias
In clinical trials, there are seven main types of bias that can prejudice the results of a study:
- Bias due to confounding
- Bias due to selection of participants
- Bias in classification of interventions
- Bias due to deviations from intended interventions
- Bias due to missing data
- Bias in measurement of outcomes
- Bias in selective reporting
Researchers can critically appraise the risk that bias has been introduced into a clinical trial by using an assessment tool. An assessment tool is a guideline that walks researchers through a series of questions to evaluate if any of the above-listed types of bias have been introduced into the study. Assessment tools exist for each type of study design.
The gold-standard tools include:
- The RoB 2 assessment tool for evaluating randomized controlled trials.
- The ROBINS-I assessment tool for evaluating non-randomized (observational) studies of interventions.
- The ROBIS assessment tool for evaluating systematic reviews.
- The QUADAS-2 assessment tool for evaluating diagnostic accuracy studies.
- The SYRCLE assessment tool for evaluating preclinical animal studies.
Each of these tools has multiple domains that assess the various types of bias. Researchers assign a classification to each of the domains within the tool:
- High risk of bias
- Low risk of bias
- Unclear risk of bias
Typically, these tools are used in systematic reviews when assessing a collection of studies as a unit. The risk of bias across all studies is displayed in a graphic that shows the cumulative risk of bias across all trials included in the review, as well as the risk of bias for each domain of each study. Figures 1 and 2 show examples of these graphics.
Figure 1
Figure 2
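The summary graphics above are essentially stacked bars of judgment proportions per domain. As a rough sketch of how such a summary is tallied, the snippet below counts hypothetical low/unclear/high judgments for a few invented trials across invented domain names (none of these names or data come from a real review):

```python
from collections import Counter

# Illustrative domain names only; real tools define their own domains.
DOMAINS = ["randomization", "deviations", "missing data", "measurement", "reporting"]

# Hypothetical judgments, one per domain, for three invented trials.
studies = {
    "Trial A": ["low", "low", "unclear", "low", "high"],
    "Trial B": ["unclear", "low", "low", "low", "low"],
    "Trial C": ["high", "unclear", "unclear", "low", "low"],
}

def summarize(studies, domains):
    """Proportion of studies at each risk level per domain,
    as displayed in a risk-of-bias summary graphic."""
    summary = {}
    for i, domain in enumerate(domains):
        counts = Counter(judgments[i] for judgments in studies.values())
        total = sum(counts.values())
        summary[domain] = {level: counts[level] / total
                           for level in ("low", "unclear", "high")}
    return summary

# Print a crude text "bar" per domain: + low, ? unclear, - high.
for domain, props in summarize(studies, DOMAINS).items():
    bar = "".join({"low": "+", "unclear": "?", "high": "-"}[lvl] * round(props[lvl] * 10)
                  for lvl in ("low", "unclear", "high"))
    print(f"{domain:12s} {bar}")
```

Tools such as the robvis R package produce publication-quality versions of these plots from exactly this kind of per-domain judgment table.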
Ideally, all domains are at low risk of bias, implying the researchers conducted the study properly and that there is little risk that conscious or unconscious actions influenced the results.
However, it is rare to find studies where all domains have a low risk of bias. In fact, most systematic reviews find an unclear or high risk of bias across the majority of included trials, signaling deficiencies in how researchers are trained to conduct studies and/or how methods and results are reported in publications. Further, it calls into question the overall quality of the published clinical literature, which has led industry leaders to claim that most published research is false.
Caveats when interpreting risk of bias
The risk of bias is assessed based on the information reported in the publication, or provided by the authors when requested by the researchers evaluating the risk of bias. If information is missing from a manuscript, it is assumed that the researchers used an inadequate method, and the risk of bias is rightly downgraded.
However, authors may not know to include specific language in the manuscript that results in downgrading the risk of bias assessment despite using an adequate method. For example, authors may report that their study was “randomized” but not provide specific details about the exact randomization process:
Did they use central computer-generated randomization? A random number table? Dice rolls? Drawn cards?
Lack of specifics about the randomization process means this domain must be downgraded to unclear risk of bias.
Or, authors may not describe that they wrote and published a protocol prior to conducting the study, establishing a priori the methods and outcomes they would use to help prevent selective outcome reporting and exploratory analyses. Again, omitting information about the protocol would lead to a downgrade for this domain.
If the authors had known to include this information, their study would not have been downgraded.
All risk of bias tools are retrospective. Simple oversights, such as not knowing which details to report, may lead to incorrect estimates of bias, which in turn biases the risk of bias evaluation itself. This is a major flaw in risk of bias tools.
Ideally, these assessment tools should be used both before and after the study is completed. First, when the study is planned, authors should use the risk of bias tool as a checklist to ensure they design their trial with the highest standards. Addressing all domains will lead to the lowest risk of bias. If these factors are accounted for before conducting the study, the methodological quality of the trial will be increased.
Second, researchers need to use the tools retrospectively, as they do now, to ensure the conduct of the trial was held to the highest standards and lowest risk of bias. Many of the issues with the current state of clinical literature, such as those underlying the dissemination crisis, would be improved if trials were designed and conducted properly first, rather than retrospectively discovering that a trial is at high risk of bias, wasting the money, time, and hope that patients invested.
No score is applied to any individual study in a systematic review. In other words, the number of high, low, and unclear domains are not tallied into any final metric. A cumulative metric can be problematic for two reasons:
- Researchers would have to judge which outcomes are most critical to a decision, and many times, studies lack data for all outcomes. Hence, if data is available from some studies but not from others, the risk of bias would be unfairly weighted; the final metric would not provide an accurate representation of the risk of bias for the outcome and/or group of studies.
- Judgements about the importance of outcomes can vary based on different settings, societal values, and baseline risk. Systematic reviews collate the totality of evidence around a topic to inform decisions for a variety of settings. Thus, a single score applied to multiple contexts may not be appropriate.
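The first reason above can be made concrete with a toy example. Suppose one invented study skips an outcome entirely while another reports it honestly as high risk; a naive tally of "low" judgments over whatever happens to be reported rewards the study with missing data (all names and judgments here are hypothetical):

```python
# Hypothetical per-outcome judgments; None marks an outcome the study
# did not report at all.
study_x = {"mortality": "low", "pain": "low", "quality of life": None}
study_y = {"mortality": "high", "pain": "low", "quality of life": "low"}

def naive_score(judgments):
    """Fraction of *reported* outcomes judged low risk.
    Missing outcomes are silently skipped, which skews the score."""
    reported = [j for j in judgments.values() if j is not None]
    return sum(j == "low" for j in reported) / len(reported)

print(naive_score(study_x))  # 1.0 — looks perfect, but one outcome is simply missing
print(naive_score(study_y))  # ~0.67 — penalized for reporting more completely
```

This is why the tools keep judgments at the domain level rather than collapsing them into one number.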
Finally, the risk of bias should not be used as a proxy for “quality.” Quality is an ambiguous term; which components count toward it depends on contextual factors.
Overall, risk of bias assessment tools are powerful instruments to help understand how much trust can be placed in the results and conclusions of a study. The scientific community should consider using them prospectively to aid researchers when designing and conducting trials. Doing so would increase the ability to produce more reproducible, replicable, and reliable evidence, leading to improved evidence-based healthcare decisions.
Also read
Explainer: What is Classification Bias?
Explainer: What is Selective Reporting Bias?
Lost in Translation: The Clinical Failure of Preclinical Research
Explainer: What is a Systematic Review and Meta-analysis?