This is an archived version of the Handbook. For the current version, please go to training.cochrane.org/handbook/current.

15.7  Addressing reporting biases

It is widely recognized that commercial and other pressures may affect the funding of studies, and the reporting of results of studies, that focus on the economic value of healthcare interventions (Drummond 1992). Despite this, publication and related biases in economic evaluation studies have, until recently, received relatively little research attention compared with the coverage of this issue for effectiveness studies. However, several recent studies have begun to examine the issue using systematic review and research synthesis methods.

Bell and colleagues undertook a systematic review of published cost-effectiveness studies in health care and found that studies sponsored by industry were more likely than studies sponsored by non-industry sources to report ratios that fall beneath, and cluster around, commonly proposed cost-effectiveness acceptability thresholds (Bell 2006). Miners and colleagues undertook a systematic review comparing the cost-effectiveness evidence submitted to the National Institute for Health and Clinical Excellence (NICE) by manufacturers of the relevant healthcare technologies with that submitted by contracted university-based assessment groups (Miners 2005). This study found that the incremental cost-effectiveness ratios estimated by manufacturers were, on average, significantly lower than those provided by the assessment groups for the same technology. Friedberg and colleagues found that published economic analyses of new oncology drugs funded by pharmaceutical companies were one eighth as likely to reach unfavourable quantitative conclusions (and 1.4 times as likely to reach favourable qualitative conclusions) compared with studies funded by non-profit organizations (Friedberg 1999). Other reviews focusing on this issue have reached broadly similar conclusions (Freemantle 1997, Azimi 1998, Lexchin 2003). A common theme in the discussion of these methodology review studies is the authors’ suspicion that reporting or publication biases are largely responsible for the observed patterns of results. The general hypothesis is that economic analyses whose results suggest an intervention may be economically unattractive are, consciously or unconsciously, withheld from publication by sponsors, authors, or journal editors.

However, all of the above methodology review studies are limited by their design (limitations that are usually acknowledged and discussed by the authors). The ideal and most robust study design for investigating the presence of reporting and publication biases would involve direct comparison of published and unpublished findings within studies, or direct comparison of the findings of published and unpublished studies (Song 2000). However, such a systematic, comprehensive comparison is clearly difficult to achieve, given the inherent difficulty of identifying all relevant unpublished economic analyses. In the absence of such data, it is not possible to rule out alternative explanations for the observed patterns of results (e.g. the results could reflect the true distributions of incremental cost-effectiveness ratios).

Methods for addressing publication bias in systematic reviews, which can be applied with the same caveats in systematic reviews of economic studies, are covered in Chapter 10. Proposals to help address publication and related biases in economic evaluation studies, such as those that may be encountered in Cochrane reviews, include:

  1. to encourage a more transparent, consistent approach to the conduct and reporting of economic analyses, through the promulgation of good practice guidelines and checklists for use in critical appraisal of such studies – in particular review-based studies and modelling studies;

  2. to increase scrutiny of journal submissions for potential conflicts of interest of study sponsors and authors; and

  3. to increase access to all the underlying data used in an economic evaluation in order to increase transparency of methods.