False-positive results are an inherent risk in cancer research, particularly in observational epidemiology. According to a commentary in the July 8 online issue of the Journal of the National Cancer Institute, the frequency and impact of misleading results can be mitigated if researchers apply more exacting standards when interpreting and reporting findings. The impact of false positives could be further lessened if researchers and other stakeholders, including journal editors, the media, and consumers, evaluated results with a more critical eye.
In the commentary, Paolo Boffetta, M.D., of the International Agency for Research on Cancer in Lyon, France, and other senior researchers from Vanderbilt University in Nashville, Tenn., and the Mario Negri Institute in Milan, Italy, discuss some of the key pitfalls of epidemiological research. For example, the authors describe what has happened over the last decade and a half after one small study reported an apparently strong association between pesticide residues in women's blood and the risk of breast cancer. Since then, seven more studies have examined the association, and successive meta-analyses incorporating the accumulating data showed the summary relative risk falling from a more than four-fold increase to no increase at all. That pattern of shrinking summary risk indicates that the initial observation was incorrect.
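The shrinking-summary-risk pattern the authors describe can be illustrated with a cumulative fixed-effect meta-analysis, in which each new study's log relative risk is pooled with earlier ones, weighted by inverse variance. The sketch below uses purely hypothetical study values (not the actual pesticide studies): one small early study with a large effect followed by larger null studies.

```python
import math

def cumulative_meta(log_rrs, ses):
    """Fixed-effect inverse-variance cumulative meta-analysis.

    Returns the pooled relative risk after each successive study:
    each study is weighted by 1/SE^2 of its log relative risk.
    """
    pooled = []
    num = den = 0.0
    for lr, se in zip(log_rrs, ses):
        w = 1.0 / se ** 2
        num += w * lr
        den += w
        pooled.append(math.exp(num / den))
    return pooled

# Hypothetical inputs (illustrative only): a small first study reporting
# RR ~4.3 with a wide standard error, then larger near-null studies.
log_rrs = [math.log(4.3), 0.30, 0.10, 0.05, 0.00, -0.02, 0.01, 0.00]
ses     = [0.60,          0.30, 0.20, 0.15, 0.10,  0.10, 0.08, 0.08]

print([round(rr, 2) for rr in cumulative_meta(log_rrs, ses)])
```

With inputs like these, the pooled relative risk starts above four and drifts toward one as the precise null studies dominate the weights, mirroring the trajectory described in the commentary.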
Common problems that can lead to the occurrence and reporting of false positives include chance; systematic errors in study design or analysis that introduce bias; inadequate accounting for confounding variables; multiple subgroup analyses; and a preference among authors and editors for reporting positive, novel, or statistically significant results over those that suggest no association. Although researchers and readers may assume these factors have a relatively small effect on a study's outcome, the authors note that a previous analysis demonstrated that unmeasured and residual confounding alone can generate up to a two-fold apparent increase in risk, even when the variable under examination carries no real risk.
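How an unmeasured confounder can manufacture an apparent two-fold risk is easy to see in a minimal simulation (my own illustrative sketch, not an analysis from the commentary): a hidden factor both raises disease risk and makes exposure more likely, while the exposure itself does nothing, yet the crude relative risk comes out near 2.

```python
import random

random.seed(0)
n = 200_000

exposed_cases = exposed_total = 0
unexposed_cases = unexposed_total = 0

for _ in range(n):
    c = random.random() < 0.5                   # unmeasured confounder, 50% prevalence
    e = random.random() < (0.7 if c else 0.3)   # exposure is more common when c is present
    p_disease = 0.08 if c else 0.01             # only the confounder drives disease risk
    d = random.random() < p_disease             # exposure has no causal effect at all
    if e:
        exposed_total += 1
        exposed_cases += d
    else:
        unexposed_total += 1
        unexposed_cases += d

# Crude relative risk, ignoring the confounder (as an analysis that
# fails to measure it would): expected value here is about 1.9.
rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
print(round(rr, 2))
```

Stratifying on the confounder (computing the relative risk separately within the `c` and not-`c` groups) would recover a relative risk near 1.0, which is the point of adjusting for confounding when it can be measured.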
For each of these issues, the authors suggest ways to mitigate the problem. For example, they suggest that researchers should not give undue emphasis to the results of post-hoc and subgroup analyses or emphasize outcomes of borderline statistical significance. "Epidemiologists by training are most often aware of the methodological limitations of observational studies, particularly those done by others, yet when it comes to practice, and especially the interpretation of their own study results, methodological vigilance is often absent," the authors write.
Moreover, both researchers and readers should view novel results critically. "One of the simplest yet potentially most effective remedies involves increasing emphasis on skepticism when assessing study results, particularly when they are new."
Contact: Liz Savage
Journal of the National Cancer Institute