• 7/19/2005
  • Bethesda, MD
  • Elana Hayasaka
  • Journal of the National Cancer Institute, Vol. 97, No. 14, 1019, July 20, 2005

Selective reporting biases may taint many published studies of so-called prognostic factors—biologic or genetic markers thought to predict how a patient will fare during or after treatment, according to a new study in the July 20 issue of the Journal of the National Cancer Institute.

In recent years, researchers have raced to discover and publish papers on prognostic factors for cancer and other diseases. At least 116 published studies, for example, have focused on a single tumor suppressor protein (TP53) as a possible predictor of outcome in patients with head and neck cancer. As researchers attempt to make sense of the growing mountains of information, some are beginning to question the validity of, and possible biases in, many of these studies.

To examine the question of selective reporting bias, Panayiotis A. Kyzas and colleagues from the University of Ioannina in Greece analyzed information from studies that investigated the association between TP53 and mortality or survival rates. They collected three levels of information about the studies. The first level included studies indexed in the scientific databases MEDLINE and EMBASE under the key words “mortality” or “survival.” The other two levels included published studies not indexed under those terms and unpublished data retrieved from researchers.

The authors found that the association between TP53 status and head and neck cancer mortality weakened as they expanded their analysis from the 18 published and indexed studies to include 13 additional studies whose data were published but not indexed under those terms. The association vanished once they added unpublished data from another 11 studies.
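The pattern described here is what a cumulative meta-analysis would predict when studies with null results are harder to find. The sketch below is a minimal illustration in Python using entirely hypothetical hazard ratios and variances (none of the study's actual data): it pools log hazard ratios with a simple fixed-effect inverse-variance average and shows how the pooled estimate drifts toward the null as less-accessible studies are folded in.

import math

def pooled_log_hr(log_hrs, variances):
    """Fixed-effect inverse-variance pooling of log hazard ratios."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * lhr for w, lhr in zip(weights, log_hrs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical (log hazard ratio, variance) pairs -- illustrative only,
# not the values reported by Kyzas and colleagues.
indexed = [(math.log(1.8), 0.10), (math.log(1.6), 0.12), (math.log(2.0), 0.15)]
unindexed = [(math.log(1.1), 0.20), (math.log(1.2), 0.18)]
unpublished = [(math.log(0.95), 0.25), (math.log(1.0), 0.22)]

for label, extra in [("indexed only", []),
                     ("+ published, unindexed", unindexed),
                     ("+ unpublished", unindexed + unpublished)]:
    studies = indexed + extra
    lhr, se = pooled_log_hr([s[0] for s in studies], [s[1] for s in studies])
    lo, hi = math.exp(lhr - 1.96 * se), math.exp(lhr + 1.96 * se)
    print(f"{label:24s} pooled HR = {math.exp(lhr):.2f} (95% CI {lo:.2f}-{hi:.2f})")

Running the sketch prints a pooled hazard ratio that shrinks toward 1.0 as the hypothetical unindexed and unpublished studies are added, mirroring the attenuation the authors observed.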

Additionally, among 18 published meta-analyses of prognostic factors for various cancers, 72% did not use standardized end point definitions and 89% failed to retrieve data from all of the individual studies, instead including only a sample. Both shortcomings may have exaggerated the apparent associations between prognostic factors and cancer outcomes in these studies, according to the authors.

“These biases may create a spurious knowledge base of cancer predictors that may be of no use and may be potentially harmful,” conclude the authors. However, they caution that their study has several limitations, including the fact that much of the information could not be retrieved, making it impossible to determine the full extent of the biases.

“We believe that this study provides the most compelling evidence yet that the published prognostic literature is a serious distortion of the truth,” write Lisa M. McShane, Ph.D., of the National Cancer Institute in Bethesda, and colleagues in an accompanying editorial. They call for more transparent reporting, better study design, appropriate analysis methods, more complete data collection, a requirement that all data be made available, funding for collecting and annotating data, and a cultural change within the field of tumor marker research.

Citations:

Article: Kyzas PA, Loizou KT, Ioannidis JPA. Selective Reporting Biases in Cancer Prognostic Factor Studies. J Natl Cancer Inst 2005;97:1043–55.

Editorial: McShane LM, Altman DG, Sauerbrei W. Identification of Clinically Useful Cancer Prognostic Factors: What Are We Missing? J Natl Cancer Inst 2005;97:1023–25.

Note: The Journal of the National Cancer Institute is published by Oxford University Press and is not affiliated with the National Cancer Institute. Attribution to the Journal of the National Cancer Institute is requested in all news coverage.