This is an archived version of the Handbook.

11.5.5  Statistical considerations in ‘Summary of findings’ tables

Here we describe how absolute and relative measures of effect for dichotomous outcomes are obtained. Risk ratios, odds ratios and risk differences are different ways of comparing two groups with dichotomous outcome data (see Chapter 9, Section 9.2.2). Furthermore, there are two distinct risk ratios, depending on which event (e.g. ‘yes’ or ‘no’) is the focus of the analysis (see Chapter 9). In the presence of a non-zero intervention effect, if there is variation in control group risks across studies, then it is impossible for more than one of these measures to be truly the same in every study.

It has long been the expectation in epidemiology that relative measures of effect are more consistent than absolute measures of effect from one scenario to another, and there is now empirical evidence to support this supposition (Engels 2000, Deeks 2001). For this reason, meta-analyses should generally use either a risk ratio or an odds ratio as a measure of effect (see Chapter 9). Correspondingly, a single estimate of relative effect is likely to be a more appropriate summary than a single estimate of absolute effect.

If a relative effect is indeed consistent across studies, then different control group risks will have different implications for absolute benefit. For instance, if the risk ratio is consistently 0.75, then treatment would reduce a control group risk of 80% to 60% in the intervention group (an absolute reduction of 20 percentage points), but would reduce a control group risk of 20% to 15% in the intervention group (an absolute reduction of only 5 percentage points).
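The arithmetic in this example can be sketched in a few lines (a minimal illustration; the function name is ours, not part of the Handbook):

```python
def absolute_reduction(control_risk, risk_ratio):
    """Absolute risk reduction, in percentage points, implied by
    applying a consistent risk ratio to a given control group risk."""
    return 100 * control_risk * (1 - risk_ratio)

# A consistent risk ratio of 0.75 implies different absolute reductions
# for different control group risks (values from the example above):
rr = 0.75
high = absolute_reduction(0.80, rr)  # control risk 80% -> reduction of 20 points
low = absolute_reduction(0.20, rr)   # control risk 20% -> reduction of 5 points
```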


‘Summary of findings’ tables are built around the assumption of a consistent relative effect. It is then important to consider the implications of this effect for different control group risks. For any assumed control group risk, it is possible to estimate a corresponding intervention group risk from the meta-analytic risk ratio or odds ratio. Note that the numbers provided in the ‘Corresponding risk’ column are specific to the ‘Assumed risks’ in the adjacent column.


For meta-analytic risk ratio, RR, and assumed control risk, ACR, the corresponding intervention risk is obtained as:

Corresponding intervention risk, per 1000 = 1000 * ACR * RR.

As an example, in Figure 11.3.a, the meta-analytic risk ratio is RR = 0.10 (95% CI 0.04 to 0.26). Assuming a control risk of ACR = 10 per 1000 = 0.01, we obtain:

Corresponding intervention risk, per 1000 = 1000 * 0.01 * 0.10 = 1,

as indicated in Figure 11.5.a.
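The calculation is easily scripted (a minimal sketch using the values from the example above; the function name is ours):

```python
def corresponding_risk_rr(acr, rr):
    """Corresponding intervention risk per 1000 people, given an assumed
    control risk ACR (expressed as a proportion) and a meta-analytic
    risk ratio RR."""
    return 1000 * acr * rr

# Example from the text: RR = 0.10 (95% CI 0.04 to 0.26), ACR = 10 per 1000.
point = corresponding_risk_rr(0.01, 0.10)  # 1 per 1000
lower = corresponding_risk_rr(0.01, 0.04)  # replacing RR by its lower limit
upper = corresponding_risk_rr(0.01, 0.26)  # ... and by its upper limit
```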


For meta-analytic odds ratio, OR, and assumed control risk, ACR, the corresponding intervention risk is obtained as:

Corresponding intervention risk, per 1000 = 1000 * (OR * ACR) / (1 − ACR + OR * ACR).
Upper and lower confidence limits for the corresponding intervention risk are obtained by replacing RR or OR by their upper and lower confidence limits, respectively (e.g. replacing 0.10 with 0.04, then with 0.26, in the example above). Such confidence intervals do not incorporate uncertainty in the assumed control risks.
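The odds ratio conversion can be sketched the same way (the OR value of 0.5 here is purely hypothetical, chosen for illustration):

```python
def corresponding_risk_or(acr, odds_ratio):
    """Corresponding intervention risk per 1000 people, given an assumed
    control risk ACR (expressed as a proportion) and a meta-analytic
    odds ratio OR."""
    return 1000 * (odds_ratio * acr) / (1 - acr + odds_ratio * acr)

# With ACR = 10 per 1000 and a hypothetical OR of 0.5, the corresponding
# intervention risk is just over 5 per 1000.
risk = corresponding_risk_or(0.01, 0.5)
```

Note that for small assumed control risks this gives almost the same answer as the risk ratio formula, reflecting the fact that odds ratios approximate risk ratios when events are rare.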


When dealing with risk ratios, it is critical that the same definition of ‘event’ is used as was used for the meta-analysis. For example, if the meta-analysis focused on ‘staying alive’ rather than ‘death’ as the event, then assumed and corresponding risks in the ‘Summary of findings’ table must also refer to ‘staying alive’.
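The sensitivity of the risk ratio to the choice of event can be seen with a hypothetical 2×2 table (these counts are illustrative only, not from the Handbook):

```python
# Hypothetical trial arms of 100 participants each.
deaths_int, n_int = 10, 100  # intervention group
deaths_con, n_con = 20, 100  # control group

# Risk ratio with 'death' as the event:
rr_death = (deaths_int / n_int) / (deaths_con / n_con)
# Risk ratio with 'staying alive' as the event:
rr_alive = ((n_int - deaths_int) / n_int) / ((n_con - deaths_con) / n_con)
# rr_death is 0.5, while rr_alive is 1.125: the two definitions of
# 'event' yield genuinely different relative effects.
```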


In the (rare) circumstances in which there is a clear rationale to assume a consistent risk difference in the meta-analysis, it is in principle possible to present this difference for relevant ‘assumed risks’ and their corresponding risks, and to present the corresponding (different) relative effects for each assumed risk.
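This converse presentation can be sketched as follows (the risk difference and assumed control risks are hypothetical):

```python
# Assume a consistent risk difference of -0.05 (an absolute reduction of
# 5 percentage points); the implied relative effect then varies with the
# assumed control risk.
rd = -0.05

for acr in (0.20, 0.10):
    corresponding = acr + rd     # corresponding intervention risk
    implied_rr = corresponding / acr
    # acr = 0.20 -> corresponding risk 0.15, implied RR 0.75
    # acr = 0.10 -> corresponding risk 0.05, implied RR 0.50
```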