To correctly interpret an intervention effect, or the lack of one, we need to ensure that the intervention was carried out as designed (Perepletchikova & Kazdin, 2005
). In addition, intervention fidelity must be established for study replication and for the generalization of interventions to real-world settings (Borrelli et al., 2005
). Hence, improving intervention fidelity increases both internal and external validity (Borrelli et al., 2005
). Moreover, research shows that fidelity is significantly associated with the outcomes achieved by a program/intervention (Durlak & DuPre, 2008
). In their review, which was based on several meta-analyses that, together, had reviewed more than 500 implementation studies targeting children and adolescents, Durlak and DuPre (2008
) found that well-implemented programs with high fidelity achieved effect sizes three times greater than poorly implemented programs.
Adherence to the core program content as specified in intervention manuals and competent delivery
of the intervention have often been argued to be the two important dimensions of fidelity assessment (Forgatch & DeGarmo, 2011
). However, a more comprehensive treatment fidelity framework with five components was identified by the National Institutes of Health’s (NIH) Behavior Change Consortium: (a) study design (i.e., factors considered when designing, evaluating, and replicating a trial), (b) provider or facilitator training (i.e., information about how the facilitators are trained and whether the training is standardized across facilitators), (c) treatment or intervention delivery (i.e., processes to monitor and improve delivery so that the intervention is delivered as intended), (d) treatment or intervention receipt (i.e., processes to ensure that participants understand the information provided), and (e) enactment of treatment or intervention skills (i.e., processes to monitor and improve how participants use the skills from the intervention in their lives) (Bellg et al., 2004
; Borrelli, 2011
; Borrelli et al., 2005
). Furthermore, the NIH Behavior Change Consortium recommends that “treatment fidelity should become an integral part of the conduct and evaluation of all health behavior intervention research” (Bellg et al., 2004
, p. 451). Hence, the purpose of this paper was to examine the fidelity of the intervention across each of these five domains. In addition, we assessed the participants’ satisfaction with the intervention, since satisfaction may be considered one of the crucial validators of intervention quality.
This paper was a part of the evaluation of an RCT of the ACDC intervention that aims to decrease depressive symptoms among upper secondary school students (Idsoe & Keles, 2016
; Idsoe et al., 2019
; Keles & Idsoe, 2021
). We applied the comprehensive fidelity model developed by the NIH’s Behavior Change Consortium to examine how the implementation of the ACDC intervention met the fidelity strategies under five categories: study design; training; intervention delivery; receipt of the intervention; and enactment of intervention skills. We also evaluated the participants’ satisfaction with the intervention as one of the crucial validators of intervention quality.
Overall, our results revealed that the intervention achieved a fidelity of 71% according to the checklist of treatment fidelity strategies developed by Borrelli et al. (2005
). Our fidelity level approached the threshold that Borrelli et al. (2005
) define as high fidelity, namely 80% adherence to their checklist across all strategies. A recent meta-analysis (Reiser & Milne, 2014
) found that the included interventions achieved a mean overall adherence of 67% to the treatment fidelity framework, lower than the level achieved in this evaluation of the ACDC intervention. When we examined each of the five categories of the framework, we found that the strategies under study design and intervention delivery were the most likely to be met in this trial. While some reviews have found that the most commonly reported elements in the fidelity literature relate to intervention design (Reiser & Milne, 2014
), other reviews showed that intervention delivery is the most commonly reported category while other categories are less discussed (Gearing et al., 2011
; Slaughter et al., 2015).
With regard to the study design category, ACDC is based on an a priori defined theoretical model and on a standardized manual for implementing the intervention as intended; however, we still cannot guarantee that each participating adolescent received the same “operationalization” of the intervention across the multiple sites. Hence, further strategies should be developed to enhance treatment fidelity related to the study design of the intervention.
In terms of facilitator training, the same trainer (i.e., the course developer) runs the training program and certifies the intervention facilitators to maintain standardization across intervention providers. However, as the intervention evaluators, we had limited information about, and control over, how the course developer assessed facilitators’ skill acquisition during and after training in order to satisfy the criteria for certification. Moreover, even though there is a substantial focus on facilitator training at the beginning of intervention studies, there is less emphasis on monitoring and maintaining facilitator skills as studies progress (Bellg et al., 2004
). In addition, most of the facilitators in this evaluation study were newly recruited and thus may not have been highly skilled or experienced, which may limit the external validity of our findings. In future studies, facilitators’ skill acquisition should be part of the fidelity assessment of ACDC, and strategies should be developed to minimize change or decay in facilitator skills. Examples of important skills include the pedagogical ability to explain intervention content, delivering the intervention as originally conceptualized, following the protocol, and ensuring that all participants receive the same information. This is important for treatment efficacy. A recent meta-analysis found that CBT for subclinical depression containing components such as behavioral activation, challenging thoughts, and caregiver involvement produced better long-term outcomes for adolescents (Oud et al., 2019
). If facilitators differ in how they prioritize among these components, this could correlate with intervention outcomes and affect clinical efficacy. Relatedly, the ACDC manual suggests the use of co-facilitators in the delivery of the intervention, although this was not feasible in this study owing to limited resources. This should be considered in future studies to enhance fidelity.
For the delivery of the intervention, both the content and the form of delivery were assessed by self-reports in this fidelity analysis. The measures of intervention delivery evaluated whether the facilitators actually adhered to the intervention plan in terms of both content and delivery form. The results revealed that ACDC facilitators mostly reported either “often” or “most of the time” with regard to the coverage of the core components of the intervention; however, some components were emphasized more than others. Even though observation of intervention delivery is accepted as the gold standard for ensuring acceptable delivery (Bellg et al., 2004
) and even though self-report data are less correlated with intervention outcomes than observational data (Durlak & DuPre, 2008
), direct observation was not feasible in this study because of a lack of resources. At the study design stage, we, as the evaluators, considered using an observer to attend some of the sessions and evaluate adherence to the manual. However, this idea was discarded given the costs and time required across multiple sites in Norway. In future studies, more robust fidelity methods, such as independently rated audio or video recordings, should be considered.
The last two categories of the fidelity assessment focus on the participant rather than the facilitator. ACDC sessions not only encourage participants to learn and practice the new skills taught in the sessions, but also have them enact specific skills through role play. In the effectiveness evaluation of the intervention, we found changes in cognitive styles such as perfectionism and rumination after the intervention (Idsoe et al., 2019
), and these findings may also indirectly support the participants’ acquisition and enactment of skills. On the other hand, during intervention delivery, 6% of the intervention facilitators never assigned homework. This not only reflects issues with adherence to the manual, but also limits our information regarding intervention receipt, since homework completion is a suggested strategy for assessing intervention receipt. Moreover, in their meta-analysis, Stice et al. (2009
) showed that depression prevention programs with homework assignments produced significantly larger effects, mainly because homework provides increased opportunity to acquire and apply intervention skills in a real-world setting. It is a limitation that we did not collect data on homework completion, because such data could have told us whether homework completion was associated with improvement.
Satisfaction with the intervention was quite high: the participants perceived the course as helpful in their lives and said they would recommend the ACDC course to others in need. However, we are also aware that even if an adolescent participant is very satisfied with the help they received in the intervention, this does not necessarily mean that they actually learned the tools or techniques taught in the intervention or applied them in their lives. A short follow-up on the degree to which the participants used the techniques and tools they learned during the intervention after it ended would contribute to the validity of our results. Further studies should also use more qualitative methods (e.g., open-ended questions) to obtain in-depth information on intervention satisfaction. One issue here is that the participants’ reports of their satisfaction could have been biased by the fact that they were not given anonymously, which may especially have affected their willingness to report dissatisfaction. Future studies should try to reduce this potential methodological limitation.
This analysis has its own limitations. One of the most important is the lack of fidelity assessment in the control group in our evaluation study. Borrelli (2011
) suggests that we cannot see the true differences between intervention and control groups without monitoring fidelity in the usual care (UC) control group. However, in our evaluation study, participants in the control group received usual care as implemented at the different sites. This may involve referrals to very different care providers, such as psychologists, school nurses, or medical doctors, who may provide conversations, various standard treatments, pharmacotherapy, or no treatment at all. Hence, in the evaluation study, the UC facilitators and the adolescents in the UC group were asked to report who they were referred to and who they received care from. However, it was not possible to develop a structured fidelity scheme for this heterogeneous group, since no restrictions were placed on what kind of interventions the adolescents in the control group could receive. Another limitation of the evaluation study concerned the group size of each course. The manual of this group CBT-based course specifies that a group should consist of about 8–12 participants, since it is important that the group is not too small in terms of group dynamics (Børve, 2010
). The average group size in the evaluation study was six, and this should be kept in mind when evaluating both intervention delivery from the course facilitators’ perspective and intervention receipt from the participants’ perspective. Moreover, according to the dosage reports, the adolescents receiving the ACDC intervention attended, on average, 6.5 of the ten sessions. With regard to fidelity, especially the enactment-of-skills aspect, we checked the weekly dosage reports post hoc to examine whether there was a pattern of missing the last sessions (where the focus is mainly on practicing the new skills) or of absences concentrated in certain clusters. Fortunately, there was no clear pattern, but we still cannot rule out the possibility that some participants missed the last sessions. The lack of a pattern is also reassuring with regard to the effectiveness of the intervention, since systematically missed final sessions could have led to an under-estimation of the intervention effect.
Research also shows that higher levels of intervention fidelity are strongly associated with better intervention outcomes, mainly because high fidelity reduces unintended and random variability and increases the study’s power to detect the real effect (Borrelli et al., 2005
; Durlak & DuPre, 2008
). In future studies, investigating the effects of the core components, and of how they are implemented, on study outcomes may provide more reliable and accurate results regarding the validity and effectiveness of ACDC. We were unable to investigate these effects here because of the limited sample size and statistical power.
Finally, yet importantly, there is an extensive debate on whether interventions should be implemented with maximum fidelity or whether adaptation should be encouraged (Durlak & DuPre, 2008
). As our case also illustrates, studies show that fidelity levels do not reach 100% and that programs are modified by providers during implementation (Durlak & DuPre, 2008). Hence, this case-specific information on implementation fidelity also enables us to examine what does and does not work in real-world settings, to better inform intervention implementation in the long run, to make necessary changes to improve or modify implementation, and to examine how these modifications affect program outcomes.
In addition to fidelity, another important aspect of interventions is program reach, which is related to “the percentage of the eligible population who took part in the intervention, and their characteristics” (Durlak & DuPre, 2008
, p. 329). In the evaluation study of ACDC, one of the biggest challenges concerned the recruitment of participants, especially boys. This may indicate that it is difficult for adolescents, especially boys, to admit that they have problems for which they may need to seek help. Alternatively, we may speculate that, since the course facilitators were mostly female, it may have been harder for boys to ask for help. These recruitment challenges raise important issues not only about reaching those who need help but do not seek it, but also about starting interventions only after enough participants have been recruited to establish the minimum required group size. All of these issues may have affected the outcomes and may affect our interpretation of the results. More proactive strategies are needed to reach both genders, and these strategies should be part of facilitator training. Finally, future studies are also needed to examine whether fidelity declines over time.