This is an archived version of the Handbook.

12.4.1  Confidence intervals

Results for both individual studies and meta-analyses are reported with a point estimate together with an associated confidence interval. For example, “The odds ratio was 0.75 with a 95% confidence interval of 0.70 to 0.80”.  The point estimate (0.75) is the best guess of the magnitude and direction of the experimental intervention’s effect compared with the control intervention. The confidence interval describes the uncertainty inherent in this estimate, and describes a range of values within which we can be reasonably sure that the true effect actually lies.  If the confidence interval is relatively narrow (e.g. 0.70 to 0.80), the effect size is known precisely.  If the interval is wider (e.g. 0.60 to 0.93) the uncertainty is greater, although there may still be enough precision to make decisions about the utility of the intervention.  Intervals that are very wide (e.g. 0.50 to 1.10) indicate that we have little knowledge about the effect, and that further information is needed.
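As a sketch of how such an interval is constructed, a confidence interval for an odds ratio is usually computed on the log-odds scale and then exponentiated. The standard error of 0.034 below is hypothetical, chosen only to reproduce an interval of roughly 0.70 to 0.80:

```python
import math

def odds_ratio_ci(or_est, se_log_or, z=1.96):
    """95% confidence interval for an odds ratio, computed on the log scale."""
    log_or = math.log(or_est)
    lower = math.exp(log_or - z * se_log_or)
    upper = math.exp(log_or + z * se_log_or)
    return lower, upper

# Illustrative values: OR = 0.75 with a hypothetical SE of 0.034
# on the log-odds scale gives an interval of about 0.70 to 0.80.
lo, hi = odds_ratio_ci(0.75, 0.034)
print(round(lo, 2), round(hi, 2))
```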


A 95% confidence interval is often interpreted as indicating a range within which we can be 95% certain that the true effect lies.  This statement is a loose interpretation, but is useful as a rough guide.  The strictly-correct interpretation of a confidence interval is based on the hypothetical notion of considering the results that would be obtained if the study were repeated many times. If a study were repeated infinitely often, and on each occasion a 95% confidence interval calculated, then 95% of these intervals would contain the true effect. 
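The repeated-sampling interpretation can be illustrated with a small simulation. The population mean, standard deviation and sample size below are arbitrary; the point is only that roughly 95% of the intervals constructed this way contain the true value:

```python
import random
import statistics

random.seed(1)
true_mean, sd, n, z = 10.0, 2.0, 50, 1.96
trials = 2000

covered = 0
for _ in range(trials):
    # Draw a fresh sample and compute a 95% CI for the mean
    sample = [random.gauss(true_mean, sd) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    if mean - z * se <= true_mean <= mean + z * se:
        covered += 1

# The coverage proportion should be close to 0.95
print(covered / trials)
```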


The width of the confidence interval for an individual study depends to a large extent on the sample size. Larger studies tend to give more precise estimates of effects (and hence have narrower confidence intervals) than smaller studies.  For continuous outcomes, precision depends also on the variability in the outcome measurements (the standard deviation of measurements across individuals); for dichotomous outcomes it depends on the risk of the event, and for time-to-event outcomes it depends on the number of events observed.  All these quantities are used in computation of the standard errors of effect estimates from which the confidence interval is derived.
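The dependence on sample size can be made concrete for a dichotomous outcome. The sketch below uses a simple Wald interval for a proportion, with an illustrative event risk of 0.3; note that quadrupling the sample size halves the interval width:

```python
import math

def prop_ci_width(p, n, z=1.96):
    """Width of a 95% Wald confidence interval for a proportion.

    The standard error depends on both the event risk p and the
    sample size n, as described in the text.
    """
    se = math.sqrt(p * (1 - p) / n)
    return 2 * z * se

for n in (50, 200, 800):
    print(n, round(prop_ci_width(0.3, n), 3))
```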


The width of a confidence interval for a meta-analysis depends on the precision of the individual study estimates and on the number of studies combined.  In addition, for random-effects models, precision will decrease with increasing heterogeneity and confidence intervals will widen correspondingly (see Chapter 9, Section 9.5.4).  As more studies are added to a meta-analysis the width of the confidence interval usually decreases.  However, if the additional studies increase the heterogeneity in the meta-analysis and a random-effects model is used, it is possible that the confidence interval width will increase.
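A sketch of why heterogeneity widens the interval, using inverse-variance pooling with hypothetical study estimates. A real analysis would estimate the between-study variance tau-squared from the data (e.g. by the DerSimonian and Laird method); here a value of 0.05 is assumed purely for illustration:

```python
import math

# Hypothetical study-level log odds ratios and their variances
effects = [-0.30, -0.10, -0.45, -0.20]
variances = [0.04, 0.06, 0.05, 0.03]

def pooled(effects, variances, tau2=0.0, z=1.96):
    """Inverse-variance pooled estimate with its 95% CI.

    tau2 = 0 gives the fixed-effect analysis; tau2 > 0 adds the
    between-study variance to each study's variance (random-effects).
    """
    weights = [1 / (v + tau2) for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est - z * se, est + z * se

fe_lo, fe_hi = pooled(effects, variances)             # fixed-effect
re_lo, re_hi = pooled(effects, variances, tau2=0.05)  # random-effects (assumed tau^2)

# The random-effects interval is wider than the fixed-effect interval
print(round(fe_hi - fe_lo, 3), round(re_hi - re_lo, 3))
```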


Confidence intervals and point estimates have different interpretations in fixed-effect and random-effects models.  While the fixed-effect estimate and its confidence interval address the question ‘what is the best (single) estimate of the effect?’, the random-effects estimate assumes there to be a distribution of effects, and the estimate and its confidence interval address the question ‘what is the best estimate of the average effect?’


A confidence interval may be reported for any level of confidence, although 95% is most common, with 90% and 99% sometimes used. For example, an odds ratio of 0.80 could be reported with an 80% confidence interval of 0.73 to 0.88; a 90% interval of 0.72 to 0.89; and a 95% interval of 0.70 to 0.92. As the confidence level increases, the confidence interval widens.
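This widening can be verified directly. The standard error of 0.07 on the log-odds scale below is hypothetical, chosen to give intervals close to those quoted above:

```python
import math
from statistics import NormalDist

def or_ci(or_est, se_log_or, level):
    """Confidence interval for an odds ratio at the given confidence level."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for level = 0.95
    log_or = math.log(or_est)
    return math.exp(log_or - z * se_log_or), math.exp(log_or + z * se_log_or)

# Hypothetical SE of 0.07 on the log scale for an odds ratio of 0.80
for level in (0.80, 0.90, 0.95):
    lo, hi = or_ci(0.80, 0.07, level)
    print(level, round(lo, 2), round(hi, 2))
```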


There is a logical correspondence between the confidence interval and the P value (see Section 12.4.2).  The 95% confidence interval for an effect will exclude the null value (such as an odds ratio of 1.0 or a risk difference of 0) if and only if the test of significance yields a P value of less than 0.05.  If the P value is exactly 0.05, then either the upper or lower limit of the 95% confidence interval will lie exactly at the null value.  Similarly, the 99% confidence interval will exclude the null if and only if the test of significance yields a P value of less than 0.01.
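The correspondence can be checked numerically; the log odds ratio and standard error below are hypothetical values chosen for illustration:

```python
import math
from statistics import NormalDist

# Hypothetical effect estimate: log odds ratio and its standard error
log_or, se = math.log(0.75), 0.12

# Two-sided P value from the z statistic
z = log_or / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

# 95% confidence interval on the odds ratio scale
lo = math.exp(log_or - 1.96 * se)
hi = math.exp(log_or + 1.96 * se)

# The interval excludes the null (OR = 1) exactly when P < 0.05
print(round(p, 3), round(lo, 2), round(hi, 2))
print((p < 0.05) == (not lo <= 1.0 <= hi))
```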


Together, the point estimate and confidence interval provide information to assess the clinical usefulness of the intervention.  For example, suppose that we are evaluating a treatment that reduces the risk of an event, and we decide that it would be useful only if it reduced the risk of an event from 30% by at least 5 percentage points, to 25% (these values will depend on the specific clinical scenario and outcome).  If the meta-analysis yielded an effect estimate of a reduction of 10 percentage points with a tight 95% confidence interval, say, from 7% to 13%, we would be able to conclude that the treatment was useful, since both the point estimate and the entire range of the interval exceed our criterion of a reduction of 5 percentage points for clinical usefulness.  However, if the meta-analysis reported the same risk reduction of 10 percentage points but with a wider interval, say, from 2% to 18%, we would still conclude that our best estimate is that the treatment is useful, but we could not be so confident, as we have not excluded the possibility that the effect lies between 2% and 5% and is therefore too small to be clinically important.  If the confidence interval were wider still, and included the null value of a difference of 0%, we would not have excluded the possibility that the treatment has no effect whatsoever, and would need to be even more sceptical in our conclusions.
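A decision rule like the one above can be written out explicitly. The 5-percentage-point threshold and the category labels below are illustrative, not a standard:

```python
def classify(lo, hi, threshold=5.0):
    """Classify a risk reduction (in percentage points) given its
    confidence interval, against a hypothetical minimal
    patient-important difference (threshold)."""
    if lo >= threshold:
        return "useful: entire interval exceeds the threshold"
    if hi <= 0:
        return "no benefit: interval excludes any risk reduction"
    if lo > 0:
        return "some effect: a clinically trivial effect is not excluded"
    return "no effect is not excluded"

print(classify(7, 13))   # the tight interval from the example
print(classify(2, 18))   # the wider interval
print(classify(-1, 21))  # an interval crossing the null value of 0
```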


Confidence intervals with different levels of confidence can demonstrate that there is differential evidence for different degrees of benefit or harm. For example, it might be possible to report the same analysis results (i) with 95% confidence that the intervention does not cause harm; (ii) with 90% confidence that it has some effect; and (iii) with 80% confidence that it has a patient-important benefit.  These elements may suggest both usefulness of the intervention and the need for additional research.


Review authors may use the same general approach to conclude that an intervention is not useful.  Continuing with the above example where the criterion for a minimal patient-important difference is a 5% risk difference, an effect estimate of 2% with a confidence interval of 1% to 4% suggests that the intervention is not useful: even the upper limit of the interval falls below the 5% threshold.