Introduction

The dissemination and implementation of evidence-based practices (EBPs) to improve the quality of mental health services and outcomes for children, adults, and families is a critical concern in the United States and abroad. Widespread adoption of EBPs may help to improve the quality of care in real-world human service settings (Hoagwood 2005). Considerable resources are being devoted to increasing the implementation of EBPs in community care settings (Magnabosco 2006). For example, the California Mental Health Services Act supports implementation of EBPs (Cashin et al. 2008); the New York State Evidence-based Treatment Dissemination Center supports EBP training and consultation (Bruns et al. 2008); and the State of Ohio Department of Mental Health (ODMH) has developed “Coordinating Centers of Excellence” to promote use of best practices and EBPs (ODMH 2009). As part of implementation efforts, it is important to consider mental health service provider attitudes toward adopting EBPs so that implementation efforts can be better tailored to the needs and characteristics of providers in community mental health agencies and programs. Previous work identified several dimensions of attitudes toward EBP and led to the development of the Evidence-Based Practice Attitude Scale (EBPAS; Aarons 2004), described in greater detail below. A number of studies have provided increasing evidence for the validity and reliability of the EBPAS in a variety of samples (Aarons 2004, 2006; Aarons et al. 2007; Aarons et al. 2010; Aarons and Sawitzky 2006).

Multiple factors at different system and organizational levels influence implementation of innovation in mental health care settings. These include the social, economic, and political context, characteristics of the innovation itself, characteristics of the organization attempting to implement the innovation, and characteristics of both the providers and clients (Aarons 2004, 2005; Glisson et al. 2008; Glisson and Schoenwald 2005; Greenhalgh et al. 2004; Grol and Wensing 2004). Mental health service providers’ attitudes toward change and innovation may influence the implementation of EBPs at several stages. First, the attitudes of providers toward innovation in general can be a precursor to the decision of whether or not to try a new practice. Second, if providers do decide to try a new practice, attitudes can impact decision processes regarding the actual implementation and use of the innovation (Aarons 2005; Candel and Pennings 1999; Frambach and Schillewaert 2002; Rogers 1995).

The Evidence-Based Practice Attitude Scale (EBPAS; Aarons 2004; Aarons et al. 2007, 2010) was developed to assess mental health provider attitudes toward adoption of innovation and EBPs in mental health and social service settings. The EBPAS assesses four dimensions of attitudes toward adoption of EBPs: intuitive Appeal of EBP, likelihood of adopting EBP given Requirements to do so, Openness to new practices, and perceived Divergence between research-based/academically developed interventions and current practice. In the most rigorous study of the EBPAS to date, 1,089 mental health service providers from 100 mental health programs in 26 states in the United States completed the scale, with results supporting its second-order factor structure and the reliability of the subscales and total scale, and providing scale norms (Aarons et al. 2010).

The development of the EBPAS, however, was only a first step toward understanding mental health and social service provider attitudes toward adopting EBPs. The current study was designed to further explore and identify additional dimensions of attitudes toward EBPs by generating items from novel content domains and subjecting them to exploratory factor analysis in order to discern their factor structure. The identified factors might then be used for research and applied purposes. For example, attitude domains could be used in developing models of innovation implementation in various service contexts. Attitudes might also be assessed in order to better inform implementation efforts while taking provider perspectives into account.

Methods

Item Generation

Item generation and domain identification proceeded in four phases. First, the first author and a project coordinator generated 63 items representing 12 potential content domains of attitudes toward EBP, based on a review of the literature, their experience with previous studies of provider attitudes toward adopting EBP (Aarons 2004, 2005; Aarons and Sawitzky 2006), and their experience with the 51 mental health programs in the previous scale development study (Aarons 2004). Second, a focus group was conducted with program managers (n = 6) from six different mental health programs in San Diego County in order to generate new items and to obtain feedback about the items and domains described above. A total of 33 items were added to the pool based on this focus group’s feedback. Third, a focus group was conducted with clinicians (n = 8) involved in an ongoing study of evidence-based practice in the public mental health system in order to generate new items and to obtain feedback about the previously developed domains and items. While no new domains were identified (indicating saturation), an additional 37 items were added to the pool, for a total of 133 items. Fourth, the first author, a post-doctoral fellow, and two research assistants worked together to eliminate redundant items and then sorted the items into piles based on item similarity until consensus was reached regarding the number of categories and the items within categories. This resulted in 127 items sorted into 19 categories. The categories were then sorted into eight broad domains, each containing related subdomains. The broad domains included: (1) attitudes toward supervision (monitoring/supervision, feedback/ongoing clinical support), (2) EBP fit with work responsibilities (workload, time, organizational support), (3) balancing professional growth versus the status quo (adequate skills, learning, job rewards, status quo), (4) arguments against EBP (EBP fit with real-world clients, art versus science, common factors, stigma, characteristics of EBP), (5) training and education (EBP fit with education/training, training), (6) research–practice partnership, (7) EBP effectiveness, and (8) consumer preference.

Quantitative Analysis

Participants

Participants were recruited from mental health clinics in San Diego County. Initially, 99 county-run and county-contracted programs providing mental health services for children, adolescents, and families were identified based on administrative data. Of these 99 programs, 72 were eligible because they provided either outpatient or day treatment mental health services to families, children, and/or adolescents. Twenty-six programs were considered ineligible because they were residential treatment facilities or lacked the appropriate organizational structure (i.e., no supervisor or program manager for the clinic), and one program was considered ineligible because research assistants were unable to contact it after repeated attempts over the course of 1 year. Of the 72 eligible programs, seven refused to participate, yielding a program-level response rate of 90.3%. The total number of eligible participants from the 65 participating programs was 440, of whom 435 agreed to participate (98.9% response rate). Fifteen individuals were administrative assistants and were not asked to respond to the EBPAS portion of the survey, resulting in a total sample size of 420.

Among the 420 participants, the mean age was 36.5 years (SD = 10.7; range = 21–66) and the majority of respondents were female (79%). The racial/ethnic distribution was 54% Caucasian, 6.7% African American, 23.4% Hispanic, 5% Asian American, 0.5% Native American, and 10% other. Participants had worked in the mental health services field for a mean of 8.5 years (SD = 7.7; range = 0–43), in child and/or adolescent mental health services for a mean of 7.5 years (SD = 7.6; range = 0–43), and in their present program for a mean of 3.4 years (SD = 4.3; range = 0–28.1). Highest level of education consisted of 7% with a Ph.D./M.D. or equivalent, 68% with a Master’s degree, 6.5% with graduate work but no degree, 12.2% with a Bachelor’s degree, 3% with some college but no degree, 0.5% with a high school diploma, and 0.2% with less than a high school diploma. Participants’ areas of primary discipline consisted of: 3% Child Development, 0.7% Drug/Alcohol Counseling, 2% Human Relations, 47% Marriage and Family Therapy, 1% Nursing, 0% Pediatrics, 0.5% Probation, 0.5% Psychiatry, 16% Psychology, 26% Social Work, and 3.1% other discipline. Participants had an average caseload of 14 clients per month (SD = 13.4; range = 0–80); one participant who worked in a youth correctional facility, where the average monthly caseload exceeded 1,800, was excluded from this calculation. Among the 65 programs, 11 were public mental health programs and the remaining 54 were either private not-for-profit or private for-profit programs.

Procedure

Research assistants administered the survey in paper format to participants in meetings at each of the participating program locations. Consent was obtained prior to administering surveys. Staff meetings included the entire team unless team members were on call and needed to leave the meeting to address client issues, were on vacation, were out sick, or refused to complete the survey. The survey took approximately 60 min on average (range 45–180 min). Respondents returned the completed surveys to research staff, who then checked the surveys for completeness. For five programs, program managers stated that 90 min would be too long for a meeting due to time constraints. In these instances, research staff consented team members, obtained signed consent forms, and designated a time (usually a week later) at which they would return to collect the completed surveys. Light refreshments were provided, but participants were not compensated for their participation in the survey. The first author also offered in-person feedback to each supervisor and team based on their team’s survey results.

Measures

Evidence-Based Practice Attitude Scale (EBPAS)

The EBPAS consists of 15 items measured on a 5-point Likert scale ranging from 0 (Not at all) to 4 (To a very great extent) (Aarons 2004; Aarons et al. 2007, 2010). The EBPAS is conceptualized as consisting of four lower-order factors/subscales and a higher-order factor (i.e., the total scale score), the latter representing respondents’ global attitude toward adoption of EBPs. Among the lower-order factors, the Appeal factor assesses the extent to which the provider would adopt an EBP if it were intuitively appealing, could be used correctly, or was being used by colleagues who were happy with it. The Requirements factor assesses the extent to which the provider would adopt an EBP if it were required by an agency, supervisor, or state. The Openness factor assesses the extent to which the provider is generally open to trying new interventions and would be willing to try or use more structured or manualized interventions. The Divergence factor assesses the extent to which the provider perceives EBPs as not clinically useful and as less important than clinical experience. Previous studies suggest adequate internal consistency reliability in three samples (Cronbach’s alpha ranging from .77 to .79 for the total scale and from .66 to .93 for the subscales; Aarons 2004; Aarons et al. 2007, 2010). Construct validity is supported by factor analyses in three previous scale development studies (Aarons 2004; Aarons et al. 2007, 2010), and convergent validity is suggested by studies of associations between the EBPAS and mental health clinic structure and policies (Aarons 2004), culture and climate (Aarons and Sawitzky 2006), and leadership (Aarons 2006).
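
For concreteness, the sketch below illustrates how Likert-type subscale scores and Cronbach's alpha estimates of the kind reported here are typically computed. It is a minimal illustration in Python/NumPy rather than the statistical software used in the study, and the item-to-subscale assignment shown is a placeholder, not the published EBPAS scoring key.

    # Minimal sketch of Likert subscale scoring and Cronbach's alpha (Python/NumPy).
    # The item indices below are placeholders, not the published EBPAS item key.
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of 0-4 Likert responses."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    def subscale_score(responses, item_columns):
        """Subscale score computed as the mean of its items for each respondent."""
        return np.asarray(responses, dtype=float)[:, item_columns].mean(axis=1)

    # Example with simulated data: 420 respondents, 15 items scored 0-4.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 5, size=(420, 15))
    openness_items = [0, 1, 2, 3]  # placeholder indices for one subscale
    openness = subscale_score(responses, openness_items)
    print(round(cronbach_alpha(responses[:, openness_items]), 2))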

Statistical Analyses

Exploratory factor analysis was used to evaluate the factor structure of the scale using SAS® 9.2. Factors were extracted using principal axis factoring with a promax oblique rotation (i.e., a correlated factors model). The number of factors was determined through parallel analysis, Velicer’s minimum average partial (MAP) test, and the interpretability of the factor structure as judged from the oblique rotated factor pattern matrix. Parallel analysis and the MAP test are among the better methods for determining the correct number of factors based on simulation studies (Zwick and Velicer 1986), and both were implemented using an existing SAS program (O’Connor 2000). Parallel analysis was based on 5,000 random data matrices (matching the number of observations and variables being factor analyzed), using the eigenvalues that correspond to the 95th percentile of the distribution of random data eigenvalues. An iterative process was used in which items with relatively low primary loadings (<.40) or with cross-loadings of .40 or higher were removed. The presence of missing data added to the complexity of conducting the analyses, although the amount of missing data was minimal: among the 420 respondents, 319 had complete data (76%), and of the 101 respondents with missing data, 49 (49%) were missing only one item, 24 (24%) were missing two to five items, and the remaining 28 (27%) were missing more than five items. We therefore created a single imputed data set using PROC MI (SAS® 9.2), in which the imputation model included all 127 of the original items. As a check on the sensitivity of the results to imputation, we re-ran the factor analysis with a different imputed data set, which yielded comparable results for both eigenvalues and factor loadings. Hierarchical linear models (Raudenbush and Bryk 2002) were used to regress individual subscales onto provider demographic characteristics, treating clinicians as nested within programs. Random effects were estimated for the intercept and each of the slopes, but not for their covariances.
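
To make the factor-retention procedure concrete, the following is a minimal re-implementation sketch of parallel analysis as described above. It is written in Python/NumPy and is not the SAS program (O’Connor 2000) actually used; for simplicity it compares eigenvalues of the full correlation matrix, whereas O’Connor’s program also supports eigenvalues based on principal axis factoring.

    # Sketch of parallel analysis: retain as many factors as there are observed
    # eigenvalues exceeding the chosen percentile of eigenvalues from random data
    # of the same dimensions. Illustrative only; not the authors' SAS program.
    import numpy as np

    def parallel_analysis(data, n_iter=5000, percentile=95, seed=0):
        rng = np.random.default_rng(seed)
        n_obs, n_vars = data.shape
        observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        random_eigs = np.empty((n_iter, n_vars))
        for i in range(n_iter):
            random_data = rng.standard_normal((n_obs, n_vars))
            corr = np.corrcoef(random_data, rowvar=False)
            random_eigs[i] = np.linalg.eigvalsh(corr)[::-1]
        threshold = np.percentile(random_eigs, percentile, axis=0)
        return int(np.sum(observed > threshold))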

Results

Exploratory Factor Analysis

An iterative approach was taken to conducting the factor analysis and item reduction. In the first iteration all 127 items were included. Parallel analysis suggested retaining 10 factors and the MAP test suggested 14 factors; examination of the pattern matrices of these two solutions indicated that the 14-factor solution was more interpretable. Applying the factor loading criteria described above led to the removal of 31 items, leaving 96 items. In the second iteration, parallel analysis suggested nine factors and the MAP test again suggested 14 factors; the 14-factor solution was again more interpretable. Applying the factor loading criteria resulted in the deletion of nine items, and three additional items were deleted because their content did not consistently reflect a single factor, leaving a total of 84 items. In the third iteration, parallel analysis suggested retaining eight factors and the MAP test suggested 13 factors; here the eight-factor solution was more interpretable. Applying the factor loading criteria resulted in the deletion of 10 items. To limit the number of items per factor, an additional 36 items were removed from three factors with a large number of items by dropping the items with the lowest factor loadings as well as those with inconsistent content, leaving a total of 38 items. In the fourth and final iteration, parallel analysis of the remaining 38 items suggested retaining eight factors and the MAP test suggested nine factors; examination of the pattern matrices indicated that the eight-factor solution was a better fit. Three items with low factor loadings and poor interpretability were removed, leaving the final 35 items. The factor loadings are presented in Table 1.
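
The item-retention rule applied at each iteration (a primary pattern loading of at least .40 and no cross-loading of .40 or higher) can be expressed as in the sketch below. This is illustrative only: it operates on absolute loadings, and the function and variable names are hypothetical rather than taken from the analysis code used in the study.

    # Keep an item only if its largest absolute pattern loading is >= .40 and its
    # second-largest loading (i.e., any cross-loading) is below .40.
    import numpy as np

    def retained_items(pattern_matrix, cutoff=0.40):
        """pattern_matrix: (n_items, n_factors) oblique-rotated pattern loadings."""
        loadings = np.abs(np.asarray(pattern_matrix, dtype=float))
        keep = []
        for i, row in enumerate(loadings):
            primary = row.max()
            largest_cross_loading = np.sort(row)[-2] if row.size > 1 else 0.0
            if primary >= cutoff and largest_cross_loading < cutoff:
                keep.append(i)
        return keep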

Table 1 Factor loadings from exploratory factor analysis

Examination of the items presented in Table 1 suggests that the content of items loading on factor one could best be labeled as ‘Limitations’ of EBPs and their inability to address client needs. Factor two addresses a dimension related to the ‘Fit’ of the EBP with the values and needs of the client and clinician. Factor three relates to negative perceptions of ‘Monitoring’ or oversight by supervisors. Factor four reflects content that addresses perception of skills and downplays the role of science in therapy; therefore we refer to this factor as ‘Balance’. Factor five relates to the time and administrative ‘Burden’ associated with learning EBPs. Factor six conveys the perceived likelihood of increased ‘Job Security’ or professional marketability provided by learning an EBP. Factor seven has content that addresses perceived ‘Organizational Support’ associated with learning an EBP. Finally, in contrast to the Monitoring factor, factor eight addresses positive perceptions of receiving ‘Feedback’ related to providing mental health services.

Table 2 displays the eigenvalues, proportion of variance explained, factor means, intercorrelations, and internal consistency reliabilities. Generally, internal consistencies were high, ranging from .77 to .92, and factor correlations were small to moderate, ranging from .01 to .56 in absolute value (Table 2).

Table 2 Eigenvalues, proportion of variance explained, subscale means, standard deviations, intercorrelations, and internal consistency reliabilities

Table 3 shows that the EBPAS-50 subscales correlated in expected directions with the original EBPAS subscales. The Limitations scale was correlated negatively with the EBPAS Requirements, Appeal, and Openness, and positively with the Divergence scale. The Fit scale correlated positively with the EBPAS Requirements, Appeal, and Openness scales. The Monitoring scale was correlated negatively with the Appeal, and positively with the Divergence scales. The Balance and Burden scales were positively correlated with the Divergence scale while the Burden scale was negatively correlated with the Requirements scale. The Job Security scale was positively correlated with the Requirements, Appeal, and Openness scales. The Organizational Support (for EBP) scale was positively correlated with the EBPAS Requirements scale. Finally, the Feedback scale was positively correlated with the EBPAS Requirements scale.

Table 3 Correlation of newly identified scale scores with original Evidence-Based Practice Attitude scores

Table 4 displays results for the relationships between clinician demographic characteristics and each of the subscales derived in this study. Below we address the association of each demographic characteristic with each of the new EBPAS subscales; only significant effects (p < .05) are reported.
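
As an illustration of the program-nested regression described under Statistical Analyses, the sketch below fits a random-intercept model for one subscale using Python/statsmodels rather than the software used in the study. The data file and variable names are hypothetical, and only a random intercept for program is shown, whereas the study also estimated random slopes (without their covariances).

    # Hypothetical sketch: regress one new EBPAS-50 subscale on clinician
    # characteristics with clinicians nested within programs (random intercept).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ebpas50_subscales.csv")  # hypothetical clinician-level data
    model = smf.mixedlm(
        "fit_score ~ female + years_experience + caseload + public_program",
        data=df,
        groups=df["program_id"],
    )
    result = model.fit()
    print(result.summary())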

Table 4 Regression analysis of EBPAS 50 new subscales onto provider demographic characteristics

Females, compared to males, had higher scores on the Fit and Feedback subscales and lower scores on the Burden subscale. Higher levels of experience providing mental health services were associated with higher Balance subscale scores, and clinicians with higher caseloads reported higher Burden subscale scores. Providers working in public programs, compared to those in private non-profit programs, reported lower Fit of EBP with their clinical practice.

We next examined clinician race/ethnicity in relation to subscale scores. Compared to Caucasians (the reference group), African-American respondents had lower scores on the Limitations and Balance scales. Also compared to Caucasians, Hispanic clinicians had lower scores on the Fit and Burden subscales. We found no significant differences by professional discipline in this sample.

Finally, we examined the association of education level with the new EBPAS subscale scores. The reference category was having less than a college degree. Having some graduate school experience, a Master’s degree, or a Ph.D./M.D. (or equivalent), as opposed to not having a college degree, was associated with higher Organizational Support subscale scores. Although some of the effects were of medium to large magnitude, many were non-significant, likely due to small cell sizes for the categorical independent variables.

Discussion

The current study expanded our previous work on attitudes toward EBP by identifying eight additional domains of mental health and social service provider attitudes toward EBP. We used a sequential mixed-methods approach, first using qualitative methods to develop items representing new EBP attitude content domains and then using quantitative data reduction techniques to develop a brief measure that can be easily used for research and applied purposes. The identified factors correspond to several of the subdomains originally conceived by the research team and by community clinicians and program managers. The data reduction process, however, also resulted in the identification of additional domains that were not originally proposed. The results of this study support the presence of several EBP attitude domains. The newly identified domains did not duplicate those identified in our previous measure of provider attitudes toward adopting EBP (Aarons 2004), as demonstrated by the small to moderate correlations of the new factors with the previously identified EBP attitude factors. Thus we propose that combining the EBPAS’ previously validated 15 items (Aarons 2004; Aarons et al. 2007) with the current items constitutes the expanded 50-item EBPAS, or “EBPAS-50.”

This study raises several directions for future research. Most immediate is further examination of the construct validity of the EBPAS-50 through confirmatory factor analysis using a new sample, specifying the factor structure identified in this study along with that of the original EBPAS. Given our efforts to create a measure that is relatively brief, a large sample size would not be necessary to conduct such analyses. Future research should also examine whether the EBPAS-50 offers added utility relative to the shorter 15-item EBPAS.

Next, research should examine the convergent, divergent, and criterion-related validity (including both concurrent and predictive validity) of the EBPAS-50. While there is growing evidence for the validity of the EBPAS (Aarons 2004), there is a need to establish whether the EBPAS-50 is associated with organizational and individual provider characteristics, as suggested by studies of mental health clinic structure and policies (Aarons 2004), culture and climate (Aarons and Sawitzky 2006), and leadership (Aarons 2006). Such analyses should be conducted with the factors identified in the EBPAS-50. Furthermore, it will be important to examine the degree to which these attitudes are associated with provider education and training in EBPs and with their adoption and use of EBPs. Additional factors such as organizational context, provider professional affiliations, and professional networks may also mediate or moderate the impact of attitudes on EBP fidelity or use.

A new line of research is examining potential links between factors that might influence service providers and their attitudes toward adopting EBP. For example, organization type and level of bureaucracy are associated with EBPAS scale scores (Aarons 2004). A recent study demonstrated that higher levels of organizational support for EBP were associated with more positive mental health staff attitudes toward adopting EBPs, and with a trend for more positive attitudes to be associated with provider adoption of EBP (Aarons et al. 2009c).

Our regression analyses indicated that the new EBPAS scales were associated with a number of clinician demographic characteristics. For example, females reported greater perceived fit of EBP with the characteristics and needs of clients, lower perceived burden of EBP, and greater acceptance and appreciation of the feedback that is characteristic of EBP fidelity monitoring and/or coaching. Indeed, prior studies found that EBP implementation with ongoing coaching and monitoring was associated with lower staff turnover (Aarons et al. 2009b) and that EBP implementation, compared to services as usual, was associated with lower staff emotional exhaustion (Aarons et al. 2009a).

Clinicians with higher caseloads perceived a higher burden related to adoption of EBP. This suggests that careful planning and framing of EBP must be undertaken to determine how EBP fits into current caseloads, productivity requirements, and workflow. Organizational process realignment or job redesign might be needed in order to facilitate the fit of EBP with complex organizational and work requirements and processes (Glisson and Schoenwald 2005).

Clinicians in publicly funded programs, compared to those working in private non-profit programs, perceived poorer fit of EBPs with the characteristics and needs of clients. Further study is needed to determine the degree to which this is a function of organizational structure and process, of potential differences in case mix, or of some combination of these or other factors.

Greater clinician experience providing mental health services was associated with a greater perception that therapy is both an art and science. It is important to remember that the Balance subscale represents endorsement that therapy entails more than just scientific findings and manualized approaches but includes a balance between art and science as well as a sense of competence and satisfaction with one’s own clinical skills.

Racial/ethnic differences found in our analyses warrant further comment. African-American clinicians indicated a lower perceived sense of therapy as both art and science, and also endorsed fewer perceived limitations of EBP than Caucasian clinicians. This is consistent with findings in a national sample that African-American clinicians, compared to Caucasians, endorsed lower perceived divergence between EBP and usual care (Aarons et al. 2010). In the current study, Hispanic clinicians indicated poorer perceived fit of client characteristics with EBP and a higher level of perceived burden. Future research should explore how these results relate to education and training experiences and to characteristics of the communities in which services are provided, and how these factors relate to clinician and client characteristics.

In addition, the ways in which organizations provide support for EBP may impact provider attitudes toward adopting EBP (Aarons et al. 2009c). This builds on previous research examining the association between service provider characteristics and attitudes toward EBP. For example, studies have shown that, compared to those trained in psychology, those trained in social work endorse more positive attitudes toward EBP (Aarons et al. 2010).

As alluded to above, attitudes toward implementation of EBPs can also be considered an outcome to be studied. For instance, research suggests that provider characteristics (e.g., education) and organizational context (e.g., level of organizational bureaucratic structure, organizational climate and culture, leadership) play a role in the implementation of EBPs in real-world settings (Aarons 2005; Aarons et al. 2007, 2009a; Aarons and Sawitzky 2006; Glisson 2002). In addition, private organizations (compared to public) tend to garner more positive clinician attitudes toward adopting EBP, partly through furnishing more support and incentives for EBP. Thus, organizational and leadership interventions could be tailored to improving provider attitudes and, subsequently, uptake of EBPs.

Some limitations of this study should be noted. First, this is an exploratory scale development study and thus represents the first (qualitative item generation) and second (exploratory factor analysis) phases of this line of research. As such, caution should be exercised regarding inferences about the meaning or potential impacts of the EBPAS-50. However, the item and scale development was based on the extant literature and on investigator and practitioner knowledge of, and experience with, EBPs and community-based mental health service settings. Second, no confirmatory analyses were conducted; therefore the factor structure of the EBPAS-50 requires further validation. In addition, the factor analyses were conducted without the original EBPAS items, and further studies should examine the factor structure with all 50 items included. However, our examination of divergent validity in this study suggests that the newly identified factors are relatively distinct from the original EBPAS domains. Third, significance tests examining the associations among provider characteristics and attitudes toward EBP were not corrected for multiple comparisons; therefore the Type I error rate may exceed the nominal alpha value. Finally, the sample of providers was composed largely of marriage and family therapists; while this is characteristic of California, some of our significance tests may have been affected by small cell sizes for particular provider groups. Other areas of the country will likely have different proportions of providers, such as social workers and psychologists, and thus future studies should strive for more representative samples.

The current study builds on previous research by identifying eight new domains of mental health and social service provider attitudes toward EBPs. To the extent that these newly identified attitudes are influenced by individual and organizational factors, strategies for increasing positive attitudes could be devised and examined prior to and during EBP implementation. For example, leaders and supervisors could be trained in promoting positive attitudes and improving implementation climate. In addition, the extent to which these attitudes influence fidelity and adoption of EBPs should be examined to increase our understanding of the complex ways in which attitudes may influence behavior (Ajzen and Fishbein 1975; Jaccard and Blanton 2005). The link between attitudes and behaviors is complex, and a number of intervening variables come into play, including individual differences, contextual factors, and organizational factors. Individual difference variables such as self-efficacy (Chau 2001; Hsu et al. 2009; Le Blanc et al. 2010), interactional style (Alexander et al. 1976; Franks et al. 2006; Smylie 1988), and motivation (Abril 2007; Grol and Wensing 2004; Kwan and Bryan 2010) may interact with attitudes to affect behavioral intention, interest in new behaviors or treatment strategies, and their ultimate adoption.

Contextual influences such as reimbursement policies and productivity requirements may facilitate or limit clinician discretion, so that even where attitudes toward EBP are positive, actual implementation may be attenuated. Conversely, formalized policies may actually increase openness toward utilizing evidence-based approaches (Aarons 2004) and might override personal preference. Initiatives such as pay for performance might facilitate use of EBP when it is tied to compensation. Finally, organizational influences can impact uptake and implementation of new innovations. For example, leadership can impact whether new, more effective treatment technologies are used and sustained (Edmondson 2003). In addition, an improved implementation climate can improve uptake of new innovations (Klein et al. 2001), and organizational supports for EBP can bolster attitudes toward innovation adoption but may also increase adoption of EBP independent of attitudes (Aarons et al. 2009c).

The work presented here is important because identifying additional attitude domains expands our ability to quantitatively examine a wider range of attitudes toward EBP and assess the degree to which they are related to contextual, organizational and individual characteristics, and fit with theories of behavior change. The steps in this process are to first identify links between attitudes toward EBP, other variables, and outcomes such as EBP uptake. Next, mediators and moderators of context-attitude-behavior links should be proposed and tested. Such a research agenda will help to expand our knowledge base regarding the complex ways in which attitudes are influenced by, and influence the mental health service context and quality of care for those in need of mental health services. In addition, future studies should examine the degree to which consumers influence clinician perceptions and attitudes toward EBP.

Finally, we suggest that the newly identified items and scales be examined for both research and applied purposes and that the new scales be used together with the original EBPAS scales. Thus the EBPAS-50 includes the original 15 items and four subscales of the EBPAS (Aarons 2004), resulting in a measure with 50 items and 12 subscales. The EBPAS-50 remains a relatively brief measure that can be included in studies of organizational and provider readiness to implement EBPs, as well as in work aimed at understanding provider response to EBPs in general and factors associated with adoption, implementation, and continued use of EBPs.