The silent minority – unpublished data on cancer care

Daniel F. Hayes

From 1989 to 2003, 709 phase III trials evaluating systemic cancer treatments were presented at ASCO meetings. Tam and collaborators have now reported that 9% of these trials were never published, and a further 13% were published only after a delay of more than five years. More than half of the unpublished studies could have had clinical impact had they been published promptly.

Two key elements of the scientific method are transparency of methods and reproducibility of results by others. Traditionally, these elements have been facilitated by the well-entrenched system of peer-reviewed publication. This concept has enjoyed almost universal acceptance among the scientific community, although in the past few years there have been calls for open publication of all scientific results without the peer-review process. Some experts have advocated the creation of a type of ‘free-for-all’ post-publication peer review, arguing that classic, pre-publication peer review is selective (based on whom the editor knows and on who actually agrees to referee the article) and arbitrary (based on the respective biases of the reviewers).[1] A search in PubMed with the search terms “journal: Nature” and “all fields: peer review” yielded more than 300 articles, commentaries, and letters debating the virtues and weaknesses of the system.

Regardless of the outcome of this debate, at present the peer-reviewed manuscript remains the gold standard for establishing whether a scientific concept is worthy of further pursuit, and whether the accepted paradigm in the respective field should change. Although this principle holds in all areas of science, it is perhaps of most relevance in medicine, where acceptance of a new scientific concept leads to a change in clinical practice, thereby affecting the lives of patients afflicted with, or at risk of, a particular disease.

A recent article by Tam et al.[2] in the Journal of Clinical Oncology documents a worrisome failure to publish the results of phase III randomised trials that were previously reported in abstracts and presentations at the annual meeting of ASCO. They report that, of the 709 abstracts of phase III studies presented at ASCO meetings from 1989 to 2003, nearly a quarter (162 trials, including almost 24,000 enrolled patients) were not published in peer-reviewed journals within five years of the meeting at which they were presented. Even after 10 years of follow-up, 9% of the presentations remained unpublished. To determine what the relative impact of these studies might have been on clinical practice had they been published, the researchers queried experts in several of the major cancer types (such as breast, lung, gastrointestinal and haematologic cancers), who estimated that 38 of 54 (70%) of the unpublished studies “addressed important clinical questions.” Although none of the 38 studies was judged to have “critical impact,” 32 of them “may have had some impact on clinical practice if the results had been published shortly after presentation.”[2]

What can practising physicians learn from these data? Are there any unpublished results that are also unknown to the average physician, and is peer-reviewed publication actually necessary to guide clinical practice? In the days before rapid internet access and widespread attendance at major medical meetings, clinical practice was mostly driven by four factors: publication of data in peer-reviewed journals; expert opinion expressed in published reviews and/or continuing medical education (CME) meetings; pharmaceutical representatives providing drug information; and personal or colleagues’ experience. Today, a report presented at a major meeting can have a substantial impact on practice even before it appears in a peer-reviewed journal. Attendance at meetings has risen dramatically: nearly 30,000 people attended the ASCO annual meeting in 2010, compared with 3000 in 1980. Furthermore, results from ASCO and other major meetings are now made widely available, occasionally in real time, as webcasts or other media presentations for those unable to attend in person. The effects of these changes on practice are exemplified by the rapid acceptance of adjuvant trastuzumab for patients with HER2-positive breast cancer following reports of dramatic reductions in recurrence from four prospective randomised clinical trials at the May 2005 ASCO meeting.[3,4,5] In a survey of practising oncologists conducted in February 2005, fewer than 10% reported that they would recommend adjuvant trastuzumab for a patient with node-positive, HER2-positive breast cancer.[6] In August of that same year, just three months after the ASCO presentation, more than 95% of oncologists said they would recommend adjuvant trastuzumab — a shift that preceded the peer-reviewed publications by several months.
This sea change in practice was a result of physicians attending the ASCO meeting (36%), attending other meetings in which the ASCO results were provided (56%), and/or hearing about the data in either CME-like publications, audio series or in the lay press.[7]

These considerations, however, do not obviate the need for peer-reviewed publications. Meeting abstracts usually consist of only two to three paragraphs in a proceedings booklet. Often, they do not even include results, but rather a promise that results will be presented at the meeting. Abstracts cannot replace a complete report that details the design of the study, the inclusion and exclusion criteria, the doses and schedules used for the treatments and, most importantly, the nuances of the benefits and toxic effects of the treatments. Nor is this level of detail provided by a ten-minute presentation prepared solely by the author (sometimes with substantial influence from a supporting pharmaceutical company), followed by a five-minute question-and-answer period. Moreover, the media and public relations coverage at major meetings can inflate the apparent significance of the results, as such coverage is often fuelled by companies or individuals with vested or biased interests in the drugs under study. So, although it is appropriate to consider the immediate application in practice of paradigm-changing results presented at a meeting, research ethics demand rapid publication of the full details in a peer-reviewed journal to guide long-term clinical behaviour.

Furthermore, a single study alone may not change practice. Tam et al.[2] raise a second concern: that lack of publication may prevent inclusion of important results in meta-analyses, which often confirm or refute conclusions drawn from a single study. Some meta-analyses have attempted to identify trial reports from abstracts presented at major meetings, and others have been able to include patient source data from trials regardless of publication status.[8,9] By definition, however, meta-analyses that rely on identifying studies through publicly accessible databases are hindered by this lack of publication.

Studies that are published differ systematically from those that are not. It is well established that strongly positive studies are often published very quickly, whereas negative studies often languish on the investigators’ desks, or are not accepted by major journals and are relegated to journals with lesser impact. This publication bias, whether a consequence of authors’ recalcitrance or of editors’ decisions, is a major concern for clinical decision-making. Tam et al.[2] cite a number of previously recommended solutions to the problem of non-publication of clinical trial data: publication mandated as a condition of external funding or of ethics committee approval; acceptance by medical journals of studies with negative results, perhaps in special sections of the journal; and/or insistence on the inclusion of unpublished studies in meta-analyses and expert opinion reviews. They point out that transparent, publicly accessible trial registries already exist, enabling interested parties to determine which studies have been opened and/or completed and whether or not they have been published. A recently published article has called for such a registry in the field of clinical tumour marker studies, which suffers even more from publication bias than does the field of prospective therapeutic trials.[10]

In summary, peer review is not perfect, and could certainly benefit from reform, but to paraphrase Winston Churchill’s comment about democracy: “[it] is the worst form of government except all the others that have been tried.” It is reassuring that unpublished results represent a small minority of clinical trial results in oncology, but their silence is disturbing. The stakes are high. Patients who participated in these trials did so out of a sense of altruism, and we betray that trust if we do not handle the precious data generated in these studies appropriately. Perhaps more importantly, future patients’ well-being and even their lives are at risk, and clinical decisions affecting these patients should not be left to the whims and vagaries of poorly reported evidence. I strongly concur with the reform recommendations and urge those with roles as funders, ethics reviewers, and editors to endorse and enforce them.

References

1. R Smith (2010) Classical peer review: an empty gun. Breast Cancer Res 12 (Suppl. 4):S13

2. VC Tam, IF Tannock, C Massey et al. (2011) Compendium of unpublished phase III trials in oncology: characteristics and impact on clinical practice. J Clin Oncol 29:3133–3139

3. EH Romond et al. (2005) Trastuzumab plus adjuvant chemotherapy for operable HER2-positive breast cancer. N Engl J Med 353:1673–1684

4. MJ Piccart-Gebhart et al. (2005) Trastuzumab after adjuvant chemotherapy in HER2-positive breast cancer. N Engl J Med 353:1659–1672

5. H Joensuu et al. (2006) Adjuvant docetaxel or vinorelbine with or without trastuzumab for breast cancer. N Engl J Med 354:809–820

6. N Love (ed.) (2005) Patterns of care in medical oncology – Breast cancer edition, vol. 1. Research to Practice, Miami

7. N Love (ed.) (2005) Patterns of care in medical oncology – Breast cancer edition, vol. 2. Research to Practice, Miami

8. C Lefebvre, E Manheimer and J Glanville (2011) In: Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (JPT Higgins and S Green eds.), chap. 6. The Cochrane Collaboration, http://www.mrc-bsu.cam.ac.uk/cochrane/handbook/index.htm#chapter_6/6_7_chapter_information.htm

9. Early Breast Cancer Trialists’ Collaborative Group (EBCTCG) (2005) Effects of chemotherapy and hormonal therapy for early breast cancer on recurrence and 15-year survival: an overview of the randomised trials. Lancet 365:1687–1717

10. F Andre et al. (2011) Biomarker studies: a call for a comprehensive biomarker study registry. Nat Rev Clin Oncol 8:171–176

Practice points

  • Practising physicians need to keep up to date with data presented at major meetings as well as with peer-reviewed publications
  • Investigators need to accept the responsibility for publishing results that they present at meetings

This article was first published in Nature Reviews Clinical Oncology vol. 8, no. 11, and is published with permission. © 2011 Nature Publishing Group. doi:10.1038/nrclinonc.2011.148, www.nature.com/nrclinonc

Author affiliations: University of Michigan Comprehensive Cancer Center, Ann Arbor, Michigan, USA

Competing interests: Daniel F. Hayes declares associations with the following organisations and companies: Biomarker Strategies, Chugai Pharmaceuticals, Novartis, Oncimmune, Pfizer, Veridex

Acknowledgements: Supported in part by a grant from the Fashion Footwear Charitable Foundation of New York/QVC Presents Shoes on Sale

