Details of all experimental and comparison interventions of relevance to the review should be collected, primarily for presentation in the ‘Characteristics of included studies’ table. Again, details are required for aspects that could affect presence or magnitude of effect, or that could help users assess applicability. Where feasible, information should be sought (and presented in the review) that is sufficient for replication of the interventions under study, including any co-interventions administered as part of the study.
For clinical trials of many non-complex interventions, such as drugs or physical interventions, routes of delivery (e.g. oral or intravenous delivery, surgical technique used), doses (e.g. amount or intensity of each treatment, frequency of delivery), timing (e.g. within 24 hours of diagnosis) and length of treatment may be relevant. For complex interventions, such as those that evaluate psychotherapy, behavioural and educational approaches, or healthcare delivery strategies, it is important to collect information about the content of the intervention, who delivered it, and the format and timing of delivery.
The degree to which specified procedures or components of the intervention are implemented as planned can have important consequences for the findings from a study. We describe this as intervention integrity; related terms include compliance and fidelity. Verification of intervention integrity may be particularly important in reviews of preventive interventions and complex interventions, which are often implemented in conditions that present numerous obstacles to idealized delivery (Dane 1998). Information about integrity can help determine whether unpromising results reflect a poorly conceptualized intervention or incomplete delivery of its prescribed components. Assessment of implementation also reveals important information about the feasibility of an intervention in real-life settings, and in particular how likely it is that the intervention can and will be implemented as planned. If full implementation is difficult to achieve in practice, the program will have low feasibility (Dusenbury 2003).
The following five aspects of integrity of preventive programs are described by Dane and Schneider (Dane 1998):
The extent to which specified intervention components were delivered as prescribed (adherence);
Number, length and frequency of implementation of intervention components (exposure);
Qualitative aspects of intervention delivery that are not directly related to the implementation of prescribed content, such as implementer enthusiasm, training of implementers, global estimates of session effectiveness, and leader attitude towards the intervention (quality of delivery);
Measures of participant response to the intervention, which may include indicators such as levels of participation and enthusiasm (participant responsiveness);
Safeguards against the diffusion of treatments, that is, checks to ensure that the participants in each experimental group received only the planned interventions (program differentiation).
The integrity of an intervention may be monitored during a study using process measures, and feedback from such an evaluation may lead to evolution of the intervention itself. Process evaluation studies are characterized by a flexible approach to data collection and by the use of numerous methods generating a range of different types of data, encompassing both quantitative and qualitative methods. Process evaluations may be published separately from the outcome evaluation of the intervention. When it is considered important, review authors should aim to address whether the trials accounted for, or measured, key process factors, and whether the trials that thoroughly addressed integrity showed a greater impact. Process evaluations can be a useful source of factors that potentially influence the effectiveness of an intervention. Note, however, that measures of the success of blinding (e.g. in a placebo-controlled drug trial) may not be valuable (see Chapter 8, Section 8.11.1).
An example of a Cochrane review evaluating intervention integrity is provided by a review of smoking cessation in pregnancy (Lumley 2004). The authors found that process evaluation of the intervention occurred in only some trials, and that in others (including some of the largest trials) implementation was less than ideal. The review highlighted how the transfer of an intervention from one setting to another may reduce its effectiveness if elements are changed or aspects of the materials are culturally inappropriate.