Introduction

Concerns about the quality of health and mental health services have led to the development and prioritization of implementation science within the portfolio of health services research (Institute of Medicine 2009a; National Institutes of Health 2013). The science of implementation has advanced rapidly over the past decade (Chambers 2012). However, much of the empirical literature has focused on clinician- and organizational-level factors and strategies that influence the adoption, implementation, and sustainment of evidence-based practices (EBPs; Aarons et al. 2014; Aarons et al. 2012; Glisson and Williams 2015). Despite conceptual literature that points to the importance of system-level influences on adoption, implementation, and sustainment (e.g., Aarons et al. 2011; Damschroder et al. 2009; Flottorp et al. 2013), empirical work that attends to these influences remains scarce. This may be due to the inherent difficulty of studying contexts and strategies within large systems. However, as service systems face increasing pressure to promote the adoption, implementation, and sustainment of EBPs, particularly following the passage of the Patient Protection and Affordable Care Act, there is a critical need for empirical research that can inform system-level implementation efforts. The purpose of this special issue is to present a set of articles that can inform system-level implementation research and practice by suggesting how contexts and strategies can be leveraged to promote implementation and quality improvement in behavioral health, and to encourage dialogue and further empirical research in this area.

Exploration, Preparation, Implementation and Sustainment (EPIS) Framework

This special issue was informed by conceptual models that emphasize the contextual factors surrounding the implementation of innovations (e.g., Aarons et al. 2011; Damschroder et al. 2009; Raghavan et al. 2008). Given its focus on implementation in public service sectors, we used the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework (see Fig. 1) developed by Aarons et al. (2011) to organize the contributions made by each manuscript. The EPIS framework highlights both processes and determinants of implementation (Nilsen 2015) by outlining four phases of the implementation process (EPIS) and identifying domains that are important to implementation, including the inner context (e.g., organizational and therapist characteristics), the outer context (e.g., the service environment, inter-organizational environment, and consumer support), and the fit between the EBP being implemented and the inner and outer contexts. Within each phase, contextual variables relevant to the inner or outer context are posited. For example, during implementation, inner context variables hypothesized to be important to the implementation process include organizational culture (i.e., “the behavioral expectations and norms that characterize the way work is done in an organization;” Glisson et al. 2006, p. 858), organizational climate (i.e., the shared employee perception of the psychological impact of the work environment on their well-being; Glisson et al. 2008), and clinician attitudes toward EBPs. Outer context variables include funding, engagement with treatment developers, and leadership. A growing body of empirical work supports the importance of the contextual factors specified in the EPIS framework (e.g., Aarons et al. 2015; Beidas et al. 2015, 2016; Cook et al. 2015; Glisson and Williams 2015; Isett et al. 2007). We organize our discussion of the manuscripts in this special issue according to the phase(s) of the EPIS framework onto which they best map, though some manuscripts span multiple phases.

Fig. 1 The Exploration, Preparation, Implementation, and Sustainment (EPIS) framework. Note: Reproduced from Aarons et al. (2011).

Brief Summary of Articles

This special issue includes 12 articles, many of which draw upon system-level implementation efforts that serve as natural laboratories for the study of implementation processes and outcomes. We discuss the contributions of each of these articles relative to broad methodological issues and each of the four phases of the EPIS framework. The special issue concludes with a commentary written by leaders of a large public behavioral health system that highlights their perspectives on system-level implementation and needed areas for research inquiry.

Methodological Contributions

The special issue includes a number of manuscripts that focus on emergent methodologies relevant to studying system-level implementation across the four EPIS phases. Zimmerman and colleagues (this issue) note that implementation is too often driven by a trial-and-error approach, and demonstrate that participatory system dynamics modeling can be a powerful tool to improve the implementation planning process. Participatory system dynamics (Hovmand 2014) is a way of triangulating stakeholder expertise, data, and model simulations. It offers the opportunity to compare different implementation plans before starting a change effort, potentially saving the time and money associated with ill-conceived implementation plans. While this method has been suggested as a potentially useful means of selecting and tailoring implementation strategies (Powell et al. 2015), Zimmerman and colleagues (this issue) break new ground as they apply participatory system dynamics to address the challenge of implementing better care for posttraumatic stress disorder (PTSD) in the U.S. Department of Veterans Affairs mental health system. Walker and colleagues (this issue) take on the task of determining where systems should invest in EBP implementation by applying a Geographic Information Systems approach in Washington State. They identify need, service availability, and “service deserts,” in which the demand for services far outweighs delivery capacity. This compelling example illustrates how systems might allocate resources and implementation supports in a more targeted way.
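To make the “service desert” idea concrete, the brief sketch below shows how estimated demand might be compared with delivery capacity to flag regions where need far outweighs capacity. The regions, prevalence rates, capacities, and threshold are entirely hypothetical, and the sketch is an illustrative simplification rather than Walker and colleagues’ actual Geographic Information Systems workflow.

```python
# Minimal "service desert" screen: compare estimated demand for services with
# delivery capacity in each region and flag large gaps. All regions, rates, and
# the desert threshold are hypothetical illustrations, not values from Walker
# and colleagues' analysis.

regions = [
    # (region, population, estimated prevalence of need, annual client capacity)
    ("Region A", 120_000, 0.05, 2_500),
    ("Region B", 45_000, 0.07, 600),
    ("Region C", 300_000, 0.04, 11_000),
]

DESERT_RATIO = 3.0  # demand more than 3x capacity flags a "service desert"

for name, population, prevalence, capacity in regions:
    estimated_need = population * prevalence  # people likely to need services
    ratio = estimated_need / capacity         # demand relative to capacity
    flag = "  <- service desert" if ratio > DESERT_RATIO else ""
    print(f"{name}: need={estimated_need:,.0f}, capacity={capacity:,}, "
          f"need/capacity={ratio:.1f}{flag}")
```

In an actual spatial analysis, such a screen would be only a first step; geocoded provider locations, travel times, and demographic detail allow need and capacity to be characterized at a much finer grain than flat regional counts.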

Exploration Phase

During the exploration phase, organizations and individuals become aware of an area for growth in their organization and of the possibility that a more advantageous approach to service delivery exists (Aarons et al. 2011). Inner context factors such as an organization’s absorptive capacity (i.e., “organizational routines and processes by which firms acquire, assimilate, transform, and exploit knowledge to produce a dynamic organizational capability;” Zahra and George 2002), readiness to change (Weiner 2009), and the organizational context (e.g., organizational culture and climate; Glisson et al. 2008) are especially pertinent during this phase (Aarons et al. 2011). Particularly relevant to this special issue, outer context factors driving awareness of alternative service delivery approaches include system-level legislation or mandates around EBPs (e.g., Trupin and Kerns 2015), funding supporting EBPs (e.g., Hoagwood et al. 2014), client advocacy groups calling for the use of EBPs (Birkel et al. 2003), and interorganizational networks (e.g., direct networking with other organizations implementing EBPs; Bunger et al. 2014; Hurlburt et al. 2014). These inner and outer context factors may propel an organization to consider alternative service delivery approaches.

A number of articles in this special issue can inform the exploration phase. The aforementioned article by Walker and colleagues (this issue) addresses this phase most explicitly by showing how Geographic Information Systems mapping can be useful in exploring high-priority community targets for implementation investments. Kotte and colleagues (this issue) discuss the importance of engaging partners that span inner and outer contexts, and provide a useful example of how a longstanding partnership between the State of Hawaii and the University of Hawaii led to the exploration of more effective ways of implementing a measurement feedback system (Bickman 2008) in child and adolescent mental health services. Saldana and colleagues (this issue) discuss a partnership in which the New York City child welfare system engaged treatment developers to develop and pilot an innovative approach to improving the interactions between supervisors, caseworkers, and caregivers. A careful consideration of existing EBPs led to the conclusion that a new approach would need to be developed that could more feasibly be taken to scale across an entire child welfare system. Thus, Saldana and colleagues (this issue) developed the R3 model, which combines elements of EBPs with a supervision approach so that those elements could be more seamlessly integrated into the child welfare system. The partnership essentially ensured that the characteristics of the EBP would be carefully considered and that the intervention would be “designed for dissemination” (Brownson et al. 2013). Beidas and colleagues (this issue) report several concerns related to the exploration process from the perspective of system leaders, treatment developers, and organizational leaders in Philadelphia. One shared concern was that there was not always a thorough assessment of the fit between particular EBPs and the organizations in which they were implemented prior to initiating implementation. Powell et al. (this issue) detail how Philadelphia’s system leaders have responded to this concern and attempted to explore the fit between EBPs and organizations prior to implementation by piloting a new method of contracting with agencies based upon the Getting to Outcomes framework (Chinman et al. 2004; Community Behavioral Health 2015). This systematic approach has promise for improving communication between system and organizational stakeholders, clarifying expectations up front, and increasing the chances that the EBP(s) implemented will be acceptable, appropriate, and feasible. Collectively, these articles speak to the importance of partnership in implementation research (Chambers and Azrin 2013), as well as of deliberately considering the fit between the EBP(s) and the inner and outer contexts, particularly in the exploration phase.

Preparation Phase

The adoption and preparation phase is characterized by the initial decision to implement an EBP (Aarons et al. 2011; Rogers 2003). This phase can be influenced by inner context factors such as organizational characteristics, organizational structures, and leadership (Aarons et al. 2011). For example, larger organizations are more likely to adopt innovations (Damanpour 1991), and transformational leadership (i.e., leadership that is characterized by charisma, inspiration, intellectual stimulation, and consideration of individual staff members’ interests; Avolio et al. 1999) has been shown to be associated with more positive innovation climates (Aarons and Sommerfeld 2012). Outer context factors that may influence the adoption phase include legislation (e.g., definitions of evidence), funding (e.g., systems providing support for the adoption of a particular EBP), client advocacy (e.g., lawsuits such as the Felix Consent Decree; Chorpita and Donkervoet 2005), and interorganizational networks (e.g., networking with other agencies within a system implementing EBPs).

Although none of the manuscripts studied it explicitly, several shed light on the adoption decision and preparation phase. Beidas and colleagues (this issue) provide insights from system leaders, treatment developers, and organizational leaders about the factors that motivate and facilitate the adoption of EBPs; a number of stakeholders reported that they adopted EBPs because doing so was consistent with their mission to provide the best and most effective care. Kotte and colleagues (this issue) prepared for scaling up a measurement feedback system by piloting it with a smaller sample of care coordinators and conducting focus groups to assess barriers and facilitators to implementation. This helped them to optimize the measurement feedback system in preparation for adopting it statewide. Finally, Zimmerman et al. (this issue) show how participatory system dynamics can be useful in the preparation phase by simulating different implementation plans prior to actual implementation. Similar approaches could also be used to simulate the impact of other assessment or intervention approaches (e.g., Lyon et al. 2015).
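As a rough illustration of what simulating alternative implementation plans can look like, the sketch below tracks the “stock” of clinicians actively delivering an EBP under two hypothetical rollout plans. All rates and durations are invented, and the fragment is a minimal stand-in for, not a reproduction of, Zimmerman and colleagues’ participatory system dynamics model.

```python
# Minimal stock-and-flow sketch in the spirit of system dynamics modeling:
# track the stock of clinicians actively delivering an EBP over time.
# All parameters are invented for illustration.

def simulate(months, trained_per_month, turnover_rate, start=0.0):
    """Return the clinician stock at each month given inflow and outflow rates."""
    stock, history = start, []
    for _ in range(months):
        inflow = trained_per_month       # clinicians completing training each month
        outflow = stock * turnover_rate  # clinicians lost to turnover or drift
        stock = stock + inflow - outflow
        history.append(stock)
    return history

# Plan A trains many clinicians quickly; Plan B trains fewer but invests in
# ongoing consultation that (hypothetically) reduces turnover.
plan_a = simulate(months=48, trained_per_month=10, turnover_rate=0.04)
plan_b = simulate(months=48, trained_per_month=6, turnover_rate=0.01)

for month in (12, 24, 48):
    print(f"month {month}: plan A = {plan_a[month - 1]:.0f} clinicians, "
          f"plan B = {plan_b[month - 1]:.0f} clinicians")
```

In participatory system dynamics, the model structure and parameters would be developed and calibrated with stakeholders and local data before plans are compared (Hovmand 2014); even this toy example, however, shows how a plan that looks better early on can be overtaken once turnover accumulates.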

Implementation Phase

During the implementation phase, organizations and individuals are actively engaging in the steps necessary to implement an EBP in their setting. Aarons et al. (2011) suggest several salient inner context factors at this phase, including organizational readiness for change (Weiner 2009), organizational culture and climate (Glisson et al. 2008), and innovation-values fit (Klein and Sorra 1996). A host of outer context factors are also relevant, including funding (e.g., contracts and payment for training in EBPs), interorganizational networks (e.g., information sharing across organizations within a system), the role of intervention developers (e.g., engaging with organizations implementing EBPs), and leadership (e.g., effective leadership practices for system leaders; Aarons et al. 2011).

A number of the manuscripts focus on describing implementation through naturalistic observations of ongoing system-level efforts to implement EBPs. Beidas and colleagues (this issue) present barriers and facilitators to the implementation of four EBPs in the City of Philadelphia. The three stakeholder groups converged on the importance of inner context factors (e.g., competing agency demands) and outer context factors (e.g., funding) as barriers and facilitators to implementation. Kotte and colleagues (this issue) and Ross and colleagues (this issue) examine facilitators and barriers to the implementation of a measurement feedback system in the State of Hawaii (Kotte) and Veterans Affairs Canada (Ross). Both of these articles identify characteristics of the innovation as an important factor in successful implementation (Aarons et al. 2011; Rogers 2003). Olin and colleagues (this issue) investigate outer context and inner context factors that predicted clinician dropout from a statewide training on an EBP in the State of New York, finding that younger clinicians and those who practiced in upstate rural areas were less likely to drop out of training. Finally, Rosen and colleagues (this issue) present a review of research focusing on the implementation of two EBPs for PTSD in the U.S. Department of Veterans Affairs. The article serves as one model of how we can learn across multiple implementation efforts that focus on specific EBPs and settings.

Several articles go beyond naturalistic descriptions of the implementation process and describe the implementation strategies used in system-level implementation. Powell and colleagues (this issue) describe the approach taken by the City of Philadelphia to implement four EBPs, using the policy ecology framework (Raghavan et al. 2008) to emphasize the multi-level implementation strategies used. Saldana and colleagues (this issue) leverage a system-initiated effort to develop a supervisor-focused implementation approach and evaluate its feasibility in child welfare in the State of New York. Similarly, Nadeem and colleagues (this issue) present findings from an effort to develop and test a theory-based learning collaborative model as an implementation strategy in children’s behavioral health in the State of New York. Although preliminary, these pilot studies provide exemplars for how to partner with communities to develop interventions and implementation strategies that are acceptable, appropriate, and feasible in large behavioral health systems.

Sustainment Phase

The sustainment phase involves maintaining the use of EBPs so that they come to represent “treatment as usual” (Aarons et al. 2011). Little is known about the sustainment of EBPs (Wiltsey Stirman et al. 2012), despite the fact that many implementation science models include sustainment as a key consideration in the implementation process (Tabak et al. 2012). Aarons et al. (2011) suggest several inner context factors hypothesized to be important in the sustainment phase, including leadership, organizational culture, a critical mass of therapists using the EBP (Powell et al. 2013), ongoing fidelity monitoring and support, and adequate staffing. Outer context factors potentially impacting sustainment include leadership at the service system level, legislation that supports sustainment of EBPs, continued funding following the initial implementation investment, and public-academic collaborations that can support the continued process of maintaining EBPs in public settings (Aarons et al. 2011). Aarons and colleagues (this issue) empirically investigate the impact of leadership on sustainment of EBPs, using mixed methods to provide empirical support for their conceptual model. Their study suggests that sustainment was associated with leadership as hypothesized by the EPIS framework. Brookman-Frazee and colleagues (this issue) use administrative data to characterize the sustainment of EBPs implemented following a fiscal mandate in Los Angeles County over 6 years. This study makes an important contribution by focusing on the sustainment of multiple EBPs, whereas most studies focus on the implementation and/or sustainment of a single EBP (Chambers 2012).
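To illustrate the kind of sustainment question administrative data can answer, the sketch below computes, from invented service records, the share of initially adopting agencies that continue to deliver each EBP in later years. The agencies, practices, and years are hypothetical, and the metric is a simplified stand-in for, not a reproduction of, the penetration and sustainment indicators examined by Brookman-Frazee and colleagues.

```python
# Minimal sustainment metric from administrative records: for each EBP, the
# share of agencies delivering it in the first year that are still delivering
# it in later years. All records below are invented for illustration.

from collections import defaultdict

# (agency, EBP, fiscal year) tuples, as might be derived from claims data
records = [
    ("Agency 1", "EBP X", 2011), ("Agency 1", "EBP X", 2012), ("Agency 1", "EBP X", 2013),
    ("Agency 2", "EBP X", 2011), ("Agency 2", "EBP X", 2012),
    ("Agency 3", "EBP Y", 2011), ("Agency 3", "EBP Y", 2012), ("Agency 3", "EBP Y", 2013),
    ("Agency 4", "EBP Y", 2011),
]

delivering = defaultdict(set)  # (EBP, year) -> agencies billing for that EBP
for agency, ebp, year in records:
    delivering[(ebp, year)].add(agency)

FIRST_YEAR = 2011
for ebp in ("EBP X", "EBP Y"):
    adopters = delivering[(ebp, FIRST_YEAR)]
    for year in (2012, 2013):
        sustained = adopters & delivering[(ebp, year)]
        print(f"{ebp}, {year}: {len(sustained)}/{len(adopters)} "
              "adopting agencies still delivering")
```

In real claims data, decisions about what counts as “still delivering” (e.g., a minimum number of claims or trained clinicians per year) materially affect the resulting sustainment estimates.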

Emerging Themes

Several themes are highlighted in the articles in this special issue. First and foremost, we note the practical relevance of these articles for informing system leaders and policy makers. The included studies demonstrate a balance of rigor and relevance, and answer questions about where investments in EBPs should be made (Walker et al. this issue), which approaches most improve EBP timing and reach (Zimmerman et al. this issue), and how administrative data can be used to study the penetration and sustainment of multiple EBPs in a large behavioral health system (Brookman-Frazee et al. this issue). The studies also demonstrate laudable attempts to develop and test implementation strategies that are feasible, sustainable, and scalable in large systems (Nadeem et al. this issue; Saldana et al. this issue). The rich and actionable research questions posed in these studies are made possible by highly partnered efforts between system and organizational leaders, direct care providers, consumers of behavioral health services, treatment developers, and academics. Each of these stakeholder groups has important perspectives that need to be brought to bear in system-level implementation (Chambers and Azrin 2013). These studies testify to both the power of partnerships and the challenges and opportunities associated with aligning the visions and efforts of diverse stakeholders (Beidas et al. this issue; Powell et al. this issue).

Many of the included studies leveraged ongoing system-level implementation efforts as natural laboratories to study implementation determinants, processes, and outcomes. This is a pragmatic necessity given the cost of large-scale efforts; implementation researchers often need to take advantage of opportunities to study the implementation of services delivered outside of the context of well-controlled studies. Observational studies are both expected and encouraged (Wensing et al. 2005). However, developing, refining, and testing implementation strategies is a federal priority (Institute of Medicine 2009a, b; National Institutes of Health 2013), and we call for more studies that build the evidence base for multifaceted, multilevel, and tailored implementation strategies in behavioral health systems (Powell et al. 2015a, b; Weiner et al. 2012). Two articles describe the prospective development and testing of implementation strategies (Nadeem et al. this issue; Saldana et al. this issue), and, appropriately for their developmental stage, both are pilot studies. Future studies should test innovative implementation strategies using the most rigorous and pragmatic designs possible (Brown et al. in press).

A number of articles demonstrate the relevance of methods from other fields (e.g., participatory system dynamics modeling and geographic information systems) that can be leveraged to understand large systems and potentially make implementation more efficient (e.g., Walker et al. this issue; Zimmerman et al. this issue). Systems science methods appear to be particularly relevant to the study of implementation in large systems (Burke et al. 2015), though we encourage researchers to continue to draw from diverse theoretical and empirical traditions as appropriate.

The use of implementation frameworks such as the EPIS framework (Aarons et al. 2011) and the policy ecology framework (Raghavan et al. 2008) is tremendously helpful when considering system-level implementation. The EPIS framework provides a manageable scope for the special issue, grounds the work in the extant literature, and provides a common language to use across studies. Frameworks were also used effectively within the context of individual studies to provide insight into various implementation phases (e.g., Kotte et al. this issue) and provide conceptual framing for the use of multi-level implementation strategies (Powell et al. this issue). Implementation theories and frameworks serve multiple purposes (see Nilsen 2015); however, they have been underutilized in implementation research generally (Colquhoun et al. 2013; Davies et al. 2010) and in behavioral health (Powell et al. 2014). We urge researchers to use theories and frameworks to guide the planning, conduct, and reporting of their research in order to work toward more generalizable knowledge in the field of implementation. Utilizing theories and frameworks from other fields such as organizational behavior (Denis and Lehoux 2009; Weiner 2009) and policy implementation research (Nilsen et al. 2013) may generate new insights and supplement the growing number of theories (Grol et al. 2007) and frameworks (Tabak et al. 2012) in implementation science.

The power of mixed methods and multiple types of research participants was also evidenced in a number of articles. For example, Zimmerman and colleagues (this issue) use a participatory system dynamics method that is inherently mixed methods (Hovmand 2014), exemplifying how the approach can generate both buy-in and a nuanced understanding of implementation challenges. Aarons and colleagues (this issue) show how mixed methods can be used to determine how quantitative and qualitative results converge, as well as how qualitative findings can expand upon quantitative findings (Palinkas et al. 2011). The use of multiple types of respondents is also critical, as the perceptions of stakeholders often differ. Beidas and colleagues’ (this issue) study is innovative in its integration of system leader, treatment developer, and agency director perceptions of barriers and facilitators to implementing multiple EBPs across a large behavioral health system. Their perceptions converged and diverged in various ways, indicating that failing to account for different types of stakeholders may be a recipe for failure. This special issue concludes with an important commentary from Rubin and colleagues (this issue). The authors are system leaders affiliated with Philadelphia’s Department of Behavioral Health and Intellectual disAbility Services, and their perspective is important for implementation researchers to consider as they embark upon system-level implementation research.

Finally, while the articles in this special issue spanned all phases of the EPIS framework, the majority focused most explicitly on the implementation phase, with less emphasis on the exploration, preparation, and sustainment phases. This is consistent with the state of the literature in 2011 when the EPIS framework was published, and it indicates a need for more research on those phases. Specifically, there is a need to better understand the types of implementation determinants that are most salient at each phase, how those determinants can be measured and assessed in a pragmatic way, and whether some implementation strategies are more appropriate or have differential effects depending upon the phase of implementation. Answering these questions will require clear reporting in the published literature, as implementation determinants and strategies will need to be described in sufficient detail so that other systems and researchers can replicate implementation approaches (Albrecht et al. 2013; Neta et al. 2015; Proctor et al. 2013). It will also require more consistent tracking of implementation and clinical outcomes, which remains a significant stumbling block for the field, particularly when EBPs are taken to scale in large behavioral health systems.

Conclusion

This special issue highlights some of the exciting work being done to improve the quality of behavioral health services in large systems, and suggests several areas for improvement. We hope that it sparks dialogue and ongoing conceptual and empirical work that will contribute to a better understanding of: (1) the determinants of implementation effectiveness in large systems, (2) the processes and strategies that need to be applied to ensure that stakeholders have the support they need to deliver effective services, and (3) the best ways of capturing the implementation, service system, and clinical outcomes that are most meaningful to individuals with behavioral health disorders.