This is an archived version of the Handbook.

16.5.3 Assessing risk of bias in studies with more than two groups

Bias may be introduced in a multiple-intervention study if decisions about the data analysis are made after seeing the data. For example, groups receiving different doses of the same intervention may be combined only after the results, including P values, have been seen. Similarly, different outcomes may be reported for different pairwise comparisons of groups, again potentially influenced by the findings.


Juszczak et al. reviewed 60 multiple-intervention randomized trials, of which over a third had at least four intervention arms (Juszczak 2003). They found that only 64% reported the same comparisons of groups for all outcomes, suggesting selective reporting analogous to selective outcome reporting in a two-arm trial. In addition, 20% reported combining groups in an analysis. However, provided that summary data are reported for each intervention group, it does not matter how the groups were combined in the reported analyses: review authors need not analyse the data in the same way as the study authors.


Some suggested questions for assessing risk of bias in multiple-intervention studies are as follows:

If the answer to the first question is ‘yes’, then the second question is unimportant (and could also be answered ‘yes’).