Case reports of adverse events are widely found in the published literature, and are also collated by regulatory agencies. There are specific methodological problems with the evaluation of such case reports. Review authors who are potentially interested in such data will need to consider the following issues.
Anecdotal reports may turn out on subsequent investigation to be false alarms, rather than genuine indicators of a link between the intervention and the adverse effect. Although one study claimed that three quarters of a collection of anecdotal case reports from 1963 were correct (Venning 1982), a more recent systematic survey of 63 suspected adverse reactions found that most (52 of 63, 82.5%) had not yet been evaluated in more detail (Loke 2006). Controlled study data supporting the postulated link between drug and adverse event were available in only three cases, while in two cases controlled studies failed to confirm the link. Nevertheless, product information sheets or drug monographs may have been amended to list these adverse events. It is thus not easy to tell whether a case report is a genuine alert or a false alarm. Still, case reports remain the cornerstone of the initial detection of new adverse effects (Stricker 2004). The removal of drugs from the market has been, and continues to be, based overwhelmingly on case reports and case series (Venning 1983, Arnaiz 2001); withdrawal of a drug because of a dramatic adverse effect does not require formal control groups (Glasziou 2007).
There is usually uncertainty as to whether the adverse event was caused by the intervention (particularly in patients who are taking a wide variety of treatments). Review authors must decide on the likelihood of the intervention having a causative role, or whether the occurrence of the adverse event during the intervention period was simply a coincidence. However, two independent review authors might not reach the same judgement from the same case report. Several studies have evaluated the responses of assessors who were asked to appraise reports of adverse events. In one study, complete agreement was obtained only 35% of the time between two observers who used causality criteria in an algorithm for assessing suspected adverse reactions (Lanctot 1995). In another study, three clinical pharmacologists, who evaluated 500 reports of suspected reactions, failed to agree on the culprit drug in 36% of the cases (Koch-Weser 1977).
A reported adverse event is more plausible if it can be explained by a well-understood biological mechanism. For example, amiodarone has an iodine-like chemical structure, which explains its commonly seen adverse effects on thyroid function.
One study looked at 1520 published case reports of suspected adverse reactions, and found substantial differences in the information provided in these reports (Kelly 2003). With regard to patient characteristics, only three patient variables were reported more than 90% of the time, while 12 others were reported less than 25% of the time. In assessing the culprit drug, Kelly found that only one drug variable (such as dose, duration, frequency, or exact formulation) was reported more than 90% of the time; six others were reported between 14% and 74% of the time. This substantial variation in reporting makes detailed appraisal difficult for review authors.
There is a trade-off between the desire to be ‘all-inclusive’ and the need to avoid publicizing biased or unreliable information that may trigger a false alarm. The MMR vaccination programme was disrupted by anecdotal reports published in a reputable journal, and scores of people in the UK were subsequently harmed in measles outbreaks resulting from decreased vaccine uptake (Asaria 2006). The inclusion of extra (but potentially unreliable) information on ‘adverse events’ can therefore have harmful effects, and review authors will need to consider carefully the negative impact and legal ramifications of conveying such information.