Overview of selective reporting bias
Selective reporting bias occurs when researchers present only a subset of the results from their study, omitting certain data points or entire outcomes. As a consequence, the reported effect estimate, or the range of outcomes reported, may be biased.
How to detect selective reporting bias
Most research generates large amounts of data that can be analyzed in multiple ways. Sometimes researchers will select a sample of results to present in their published manuscripts and neglect to provide the full breadth of results that were captured. When the presented results have been selected based on a favored conclusion, rather than on outcomes pre-specified in a protocol, selective reporting bias has been introduced into the study.
Selective reporting bias can be extremely detrimental. If data from some participants or time-points have been excluded, the true effect may be over- or underestimated, thereby distorting the study's conclusions.
Selectively biased results could have implications for clinical practice, giving healthcare providers inaccurate information about the efficacy or effectiveness of a particular treatment. Furthermore, if entire outcomes have been measured but left out of the published report, the community cannot assess the potential benefits and harms associated with the unreported outcome(s).
Example of selective reporting bias
Consider the following hypothetical example of selective reporting bias:
Researchers set out to study the effect of routine non-opiate painkiller A versus on-demand painkiller B on pain in people with severe discogenic low back pain. They measure pain via three scales:
- the visual analog scale (VAS)
- the numeric pain scale (NPS)
- the McGill Pain Questionnaire (MPQ)
After analyzing results from the study, the researchers discover that only the results from the VAS scale were statistically significant (P < 0.05), demonstrating that painkiller A was superior to painkiller B at reducing pain. The researchers decide that since the results from the NPS and MPQ were not significant, they would only report the results of the VAS in their publication. In choosing to present the VAS results while excluding the other scales, they bias the reported results.
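The inflation at work in this example can be illustrated with a small, purely hypothetical simulation (stand-in data, not from any real trial): two groups with no true difference are measured on three independent scales, and reporting whichever scale happens to reach P < 0.05 pushes the false-positive rate well above the nominal 5%.

```python
import math
import random

random.seed(1)

def two_sided_p(a, b):
    """Two-sided p-value from a Welch-style z approximation (fine for n = 50)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

TRIALS, N, SCALES = 2000, 50, 3   # simulated trials; 50 patients per group; 3 pain scales
honest_hits = cherry_picked_hits = 0

for _ in range(TRIALS):
    pvals = []
    for _ in range(SCALES):
        # Null is true: both groups drawn from the same distribution.
        group_a = [random.gauss(0, 1) for _ in range(N)]
        group_b = [random.gauss(0, 1) for _ in range(N)]
        pvals.append(two_sided_p(group_a, group_b))
    if pvals[0] < 0.05:        # honest: one pre-specified primary outcome
        honest_hits += 1
    if min(pvals) < 0.05:      # biased: report whichever scale "worked"
        cherry_picked_hits += 1

print(f"false-positive rate, single pre-specified outcome: {honest_hits / TRIALS:.3f}")
print(f"false-positive rate, best of three outcomes:       {cherry_picked_hits / TRIALS:.3f}")
```

With three independent outcomes, the chance that at least one crosses P < 0.05 by luck alone is roughly 1 − 0.95³ ≈ 14%, which the simulation reproduces.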
Ideally, researchers publish (or post in a repository) a protocol pre-specifying the methodology they will use when conducting the study, including the outcomes they plan to collect. When the study's results are published, researchers typically link the publication to this protocol. A pre-registered protocol therefore allows readers to spot selective reporting bias by comparing the outcomes pre-specified in the protocol with those reported in the publication.
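This protocol-to-publication comparison amounts to a set difference. A minimal sketch, using hypothetical outcome names (in practice these would be transcribed from the registered protocol and the published paper):

```python
# Hypothetical outcome lists, for illustration only.
protocol_outcomes = {"pain (VAS)", "pain (NPS)", "pain (MPQ)", "adverse events"}
reported_outcomes = {"pain (VAS)", "adverse events"}

# Outcomes pre-specified in the protocol but absent from the publication.
unreported = protocol_outcomes - reported_outcomes
# Outcomes published but never pre-specified (a related red flag).
unregistered = reported_outcomes - protocol_outcomes

print("Pre-specified but unreported:", sorted(unreported))
print("Reported but not pre-specified:", sorted(unregistered))
```

A non-empty "pre-specified but unreported" set is the signature of selective reporting; a non-empty "reported but not pre-specified" set suggests outcomes may have been added after seeing the data.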
Unfortunately, protocols are published far less often than they should be. In the absence of a protocol, the methods section of a paper can be used to compare the outcomes that were planned against those reported in the results.
Using the methods section as a proxy for a true protocol can be misleading, however, because the methods section itself may omit outcomes that were actually measured.
To minimize concerns about selective reporting bias and to maximize methodological rigor, researchers should specify their outcomes a priori, publish a protocol before conducting the study, and report all results regardless of the direction or significance of their effects.