This is an archived version of the Handbook. For the current version, please go to or search for this chapter here.

16.9.4  Confidence intervals when no events are observed

It is possible to put upper confidence bounds on event risks when no events are observed, which may be useful when trying to ascertain possible risks of serious adverse events.  A simple rule termed the ‘rule of threes’ has been proposed: if no events are observed in a group, then the upper confidence limit for the number of events is three, and for the risk (in a sample of size N) is 3/N (Hanley 1983). The application of this rule has not directly been proposed or evaluated for systematic reviews. However, when looking at the incidence of a rare event that is not observed in any of the intervention groups in a series of studies (whether randomized trials, non-randomized comparisons or case series), it seems reasonable to apply it, taking N as the sum of the sample sizes of the arms receiving the intervention. However, the rule will not provide any information about the relative incidence of the event between two groups.
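As a minimal sketch (in Python; the function names are illustrative, not from the Handbook), the rule of threes can be compared with the exact one-sided 95% upper limit, which with zero observed events solves (1 − p)^N = 0.05 and hence equals 1 − 0.05^(1/N):

```python
# Sketch of the 'rule of threes' (Hanley 1983): with 0 events observed
# in N participants, the approximate one-sided 95% upper confidence
# limit on the risk is 3/N.

def rule_of_three_upper(n):
    """Approximate one-sided 95% upper limit on the risk: 3/N."""
    return 3 / n

def exact_upper(n, alpha=0.05):
    """Exact one-sided upper limit with zero events: solves (1-p)^n = alpha."""
    return 1 - alpha ** (1 / n)

for n in (10, 100, 1000):
    print(f"N={n}: 3/N={rule_of_three_upper(n):.4f}, "
          f"exact={exact_upper(n):.5f}")
```

For moderate to large N the approximation is close to the exact limit (e.g. for N = 100, 3/N = 0.03 against an exact limit of about 0.0295).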


The value 3 coincides with the upper limit of a one-tailed 95% confidence interval from the Poisson distribution (equivalent to a two-tailed 90% confidence interval).  To obtain the more standard one-tailed 97.5% confidence interval (equivalent to a two-tailed 95% confidence interval), 3.7 should be used in all calculations in place of 3 (Newcombe 2000).  An alternative recommendation, which gives similar values, is the ‘rule of fours’, which takes the upper limit of the risk to be 4/(N+4). Either of these options is recommended for use in Cochrane reviews. For example, if no events were observed out of 10, the upper limit of the confidence interval for the number of events is 3.7, and for the risk is 3.7 out of 10 (i.e. 0.37).  If no events were observed out of 100, the upper limit on the number of events is still 3.7, but for the risk it is 3.7 out of 100 (i.e. 0.037).
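The two recommended options can be sketched side by side (in Python; function names are illustrative). With zero observed events, the exact one-sided 97.5% (Clopper-Pearson) upper limit reduces to 1 − 0.025^(1/N), which both approximations track:

```python
# Comparing the two options recommended above against the exact
# one-sided 97.5% upper confidence limit when 0 events are observed
# in N participants.

def upper_3_7(n):
    """Approximation using 3.7 in place of 3 (Newcombe 2000): 3.7/N."""
    return 3.7 / n

def rule_of_fours(n):
    """'Rule of fours' upper limit on the risk: 4/(N+4)."""
    return 4 / (n + 4)

def exact_upper(n, alpha=0.025):
    """Exact one-sided upper limit with zero events: solves (1-p)^n = alpha."""
    return 1 - alpha ** (1 / n)

for n in (10, 100, 1000):
    print(f"N={n}: 3.7/N={upper_3_7(n):.4f}, "
          f"4/(N+4)={rule_of_fours(n):.4f}, exact={exact_upper(n):.4f}")
```

For example, for N = 100 the approximations give 0.037 and about 0.0385, against an exact limit of about 0.0362; the agreement improves as N grows.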