Open Access 11-10-2022 | Original Article

Distinguishing the Dimensions of the Original Dysfunctional Attitude Scale in an Archival Clinical Sample

Authors: Gary P. Brown, Jaime Delgadillo, Hudson Golino

Published in: Cognitive Therapy and Research | Issue 1/2023

Abstract

The Dysfunctional Attitude Scale (DAS) measures enduring depression-related beliefs and is one of the central measures in cognitive behavioral therapy (CBT) research and theory. It has been the central marker of etiological claims of CBT, and so any change to the understanding of the composition of the DAS would have potentially far-reaching implications for a large body of literature. We sought to capitalize on advances in psychometric techniques since the original 100-item DAS was last analyzed in a sufficiently large clinical sample to provide a definitive measurement model of this important instrument. Beyond the two dimensions usually found on the shorter forms of the scale, we identified the following subscales: imperatives, cognitive flexibility, and negative expectancy. This richer and more precise DAS structure renews its potential to meet the challenge of predicting who is prone to develop depression or experience a recurrence.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s10608-022-10333-w.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The Dysfunctional Attitude Scale (DAS) has been a mainstay of outcome and process research into the cognitive model of depression. It was developed by Weissman and Beck (Weissman, 1979; Weissman & Beck, 1978) to measure enduring depression-related beliefs encountered in the course of psychotherapy. In contrast to transitory negative cognitions (automatic thoughts), these beliefs are more persistent, and the same beliefs may be encountered across successive symptomatic episodes. Beck et al. (1987, p. 20) reasoned that because it was implausible that the same maladaptive cognitive patterns were recreated anew every time an individual experienced an episode of depression, these beliefs likely reflected psychological mechanisms that persisted in some manner between episodes, representing a vulnerability for the depression to recur. There is a significant body of literature supporting the DAS as a measure of vulnerability to depression within the context of research on Beck's cognitive theory (e.g., Brown et al., 1995; Miranda et al., 1998; Otto et al., 2007), although it has not performed as predicted in some critical contexts, for example, appearing to covary over time with depression symptom levels (e.g., Barnett & Gotlib, 1998; Cristea et al., 2015). The DAS has been the focus of important critiques of the cognitive therapy model (Coyne, 1982), responses to those critiques (Segal & Shaw, 1986), and, generally, has been the central marker of etiological claims of CBT (Segal, 1988). Any change to the understanding of the composition of the DAS would have potentially far-reaching implications for a large body of literature.
DAS items were written to capture negative reasoning patterns that Beck had identified as being at the core of depression (e.g., the item “If a person is indifferent to me, it means they do not like me” reflecting an arbitrary inference). Endorsement of the belief is taken to indicate a disposition to apply such logic when the respondent encounters comparable situations to the ones described in the item. Weissman’s (1979) stated aim was to compile a set of items that “cover most of the essential dimensions of depressogenic cognitions, even if these were confounded, overlapping, or otherwise not as clear-cut as later research might help to make them” (pp. 63–64). It is evident from this that Weissman recognized that clarifying the structure of the DAS would require further study. However, in the interim, she proceeded on the assumption of a central dimension of depressogenic beliefs underlying the DAS, consistent with her finding of a dominant first factor. Accordingly, she created two parallel 40-item forms, DAS-A and DAS-B, by plotting each item’s mean in a sample of 275 undergraduates against its loading on the unrotated first factor and randomly assigning retained items with similar plot co-ordinates to one of the two forms while eliminating 21 items with relatively low means and loadings (Weissman, 1979).
It is unlikely that Weissman intended to permanently freeze the DAS at this point in its development. However, in the intervening years, the DAS-A as originally constituted by her has come to be the default version of the scale used in both research and practice and the version usually reported on in psychometric studies. A re-examination of the original full DAS would be necessary to establish if the assumptions implied by Weissman’s analysis are tenable—that the DAS was either essentially unidimensional or that any multidimensionality was uniform across forms A and B. Such an analysis would also preferably be conducted in a large clinical sample to ensure that clinically important items had not been eliminated by Weissman because they were found to be less salient in her relatively small undergraduate sample. Beck et al. (1991) undertook such an analysis, and, among other findings, identified a nine-item “Imperatives” factor within the 100-item version never previously found in psychometric studies of the DAS-A items. This factor consisted of moralistic beliefs, typically including the words “should” or “must”. Seven-item versions of the Imperatives factor were replicated in the only two other analyses (in undergraduate samples) of the full original 100-item data pools (Calhoon, 1996; Dyck, 1992). This finding by itself contradicts the assumption of essential unidimensionality of the DAS as well as uniformity across forms: only two Imperatives items appear on the DAS-A, with five appearing on DAS-B. The remaining two items—as it happens, the two items found by Beck et al. to load highest on the Imperatives factor—were among the 21 items that did not make it onto either DAS-A or B, thereby confirming the possibility that clinically important items had been eliminated. Finally, it is important to note that the content of the Imperatives factor is substantive. As Brown and Beck (1989) pointed out, the role of self-coercive moralistic beliefs in amplifying emotional problems has long been recognized in psychotherapy across diverse theoretical positions.
Within research on the dimensionality of the DAS-A itself there are broad regularities that are discernible but nothing approaching a definitive consensus concerning its dimensional composition (for selective reviews, please see de Graaf et al., 2009 and Moore et al., 2014). From one to four underlying dimensions have been reported, but two factors are most commonly found, with one of these relating to achievement/perfectionism and the second concerned with interpersonal dependence and desire for approval. Notably, the specific item composition of the factors has varied substantially across studies such that there is no stable core set of items associated with each factor. Where more than two factors are reported, these usually result from splitting one or both of the main two (achievement and approval) factors, suggesting that these findings likely result from misspecification of the number of factors. Likewise, misspecification in the opposite, “lumping” direction is likely to be the case where a single factor has been reported. In the Moore et al. (2014) study, a series of analyses combining data-driven (e.g., SEM modification indices) and subjective criteria (e.g., judging that the general factor of a bifactor model represented the presence of a single underlying dimension) resulted in the DAS being reduced to a single, 19-item perfectionism scale, with the counterintuitive result of a putative depression vulnerability scale that does not measure concern with social acceptance.
The complexity of the conditional syntax of many of the DAS items, and the commensurate demands this makes on respondents, is a potentially important contributor to the difficulty encountered in identifying a stable structure. There has been similar difficulty identifying a core measurement structure for the Anxiety Sensitivity Index (ASI), an anxiety disorder counterpart to the DAS also comprised of beliefs in the form of "if–then" conditional propositions (e.g., Taylor et al., 2007). The compound sentence form common to the two scales may be susceptible to picking up complex sources of unstable construct-irrelevant variance that are liable to obscure measurement analyses. For example, Lilienfeld et al. (1993) pointed out that ASI items such as 'It scares me when I feel faint' or 'Other people notice when I feel shaky' may be incipiently "double-barreled," as they require responses from people who rarely if ever feel faint or shaky as well as those who do (p. 167). A response of "not at all" can either mean that the respondent is unconcerned about the body sensation in question (they do not believe it at all when it occurs, as the ASI intends) or that the item is not applicable at all because they never experience the sensation. Such an item may therefore to some degree transmit selective applicability and so be a marker of the presence or absence of the condition in which the symptom occurs (for example, panic). Given that the ASI is purported to mainly be a predictor of panic, this produces subtle criterion-predictor confounding that will inflate its apparent predictive validity.
Other response anomalies may, in contrast, lead to underestimates of validity. In this regard, scales like the DAS and ASI that require complex judgments are known to be particularly susceptible to eliciting response sets (Cronbach, 1950). DeRubeis and colleagues (Forand & DeRubeis, 2014; Forand et al., 2016) have described a positive extreme response set encountered with the DAS according to which respondents systematically choose the highest rating in the “adaptive” direction of responding (“completely agree” or “completely disagree”) whether or not, on objective examination of item content, these extreme responses would be justified as being adaptive on rational grounds. This positive extreme response style has been shown to predict depression relapse (Brouwer et al., 2019), which means that, paradoxically, putatively more adaptive scores on these items ultimately predict worse future functioning, a clear threat to the validity of the DAS as a straightforward measure of its target construct if this response set is not eliminated or compensated for in some way.
The foregoing has focused on some of the salient measurement issues involving the DAS that have yet to be resolved despite the scale's long history and central position within the research literature on the cognitive therapy approach. Fortunately, the passage of time has also brought with it potential solutions to perennial measurement issues. These matters may not have previously been resolved because the necessary means for resolving such subtle and complex measurement issues had simply not yet been developed. Newer techniques such as non-parametric factor analysis and exploratory structural equation modeling, which strike a practical balance between fully exploratory and fully confirmatory approaches, have come into common use and may offer a resolution to some of these issues. However, the potentially most significant advances are becoming available from the developing network approach to psychometrics, which provides ways to bypass some of the central conundrums of the traditional latent variable model, such as the need to pre-specify the number of dimensions. The current study sought to capitalize on these developments in a re-analysis of the clinical sample in which Beck et al. (1991) analyzed the original 100-item DAS, with the aim of providing a definitive measurement model of this important instrument.

Method

Sample

The original 100-item DAS was analyzed in the clinical sample that was the basis for the Beck et al. (1991) psychometric study. The total sample of 2041 outpatients seeking treatment at the University of Pennsylvania Psychiatry Department in Philadelphia was randomly split into an index sample and a cross-validation sample. Most of the subjects in this sample were diagnosed with a common mental health problem, such as an affective (54.8%) or an anxiety disorder (28.0%). The service setting, structured diagnostic interview method, and demographic make-up of the sample are detailed in the original paper. Confirmation was obtained from the institutional review board of the University of Pennsylvania that the planned use of the dataset conformed with ethical standards.

Analytic Strategy

Scale of Measurement

Though comprised of intrinsically ordinal Likert-type items, the DAS has mainly been analyzed with techniques suited for metric (interval or ratio level) scales. It has been shown that using metric analyses for nonmetric scales can create a range of anomalies, increase Type I and Type II error rates, and mischaracterize or even reverse effect size estimates (Liddell & Kruschke, 2018). On the whole, it is reasonable to expect that employing the appropriate nonparametric techniques that characterize rather than approximate the measurement scale employed should provide a fuller and more precise representation of the measurement structure of the DAS in terms of its underlying dimensionality and be more capable of isolating construct-irrelevant sources of variance that likely contribute to cross-sample instability.
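As an illustration of the nonmetric treatment adopted here, the sketch below contrasts polychoric correlations (which model the Likert items as ordered categories) with the Pearson correlations a metric analysis would implicitly assume; `das` is a hypothetical data frame holding the item responses, and the calls use the psych package.

```r
library(psych)

# Polychoric correlations: treat the Likert responses as ordered categories
poly <- polychoric(das)
round(poly$rho[1:5, 1:5], 2)

# Pearson correlations that a metric analysis would use instead
pear <- cor(das, use = "pairwise.complete.obs")

# Discrepancies attributable to treating ordinal items as metric
round((poly$rho - pear)[1:5, 1:5], 2)
```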

Dimensionality

Determining the number of dimensions underlying a covariance structure has long been a conundrum in factor analysis, one highly reliant on the subjective judgment of the researcher. Misspecification of the number of dimensions usually leads, in turn, to uncertain factor composition. Beck et al. (1991) sought to ameliorate this indeterminacy problem in achieving a simple structure by using the VARCLUS procedure (Pasta & Suhr, 2004), in which items are assigned by a cluster-splitting algorithm rather than by human judgment. However, this procedure still requires a researcher-specified stopping criterion for the number of factors. Beck et al. deferred to Weissman's finding of ten factors in her original study, which was based on the classic Kaiser eigenvalue > 1.0 criterion and which likely meant that the nine factors ultimately found by Beck et al. reflected overfactoring. It is only recently that an entirely data-driven procedure has become available for determining both the number of factors and their item composition. Exploratory graph analysis (EGA; Golino & Epskamp, 2016; Golino et al., 2020) accomplishes this by using network analysis, which is not subject to the same restrictive assumptions as the traditional latent variable model. Golino et al. (2020) showed that when simulation data are generated from an underlying model with a few correlated factors, each with a small number of indicators, and relatively small sample sizes (scenarios that are common for measures of clinically relevant constructs such as the DAS), EGA identifies the true number of factors better than classical approaches such as the Kaiser (eigenvalue > 1) criterion and scree plots, and at least as well as parallel analysis, often surpassing it. Although network analysis is not premised on latent variable model assumptions, when the underlying data generation model is a factor model, network analysis will be mathematically equivalent to latent variable solutions (Christensen & Golino, 2020). In this context, latent factors show up as densely connected nodes in a network, forming communities that can be estimated using several community detection algorithms for weighted networks (Christensen et al., 2020a). Each community in a network is akin to a latent factor in latent variable models (Golino & Epskamp, 2016; Golino et al., 2020), and recent evidence shows that other psychometrically relevant metrics from factor analysis, such as factor loadings, can be similarly estimated under the EGA framework, where they are called network loadings (Christensen & Golino, 2020). As pointed out by Golino et al. (2020), EGA also has a more straightforward interpretation than factor analysis: it does not rely on interpreting a matrix of factor patterns and loadings, since the network can be plotted in a two-dimensional space with nodes (i.e., items) arranged according to their connections to neighboring nodes, making communities easy to identify visually (Golino et al., 2020, 2021a, b).
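A minimal sketch of an EGA run of the kind described above is given below, assuming the item responses are in a hypothetical data frame `das`; argument and output component names follow recent releases of the EGAnet package and may differ across versions.

```r
library(EGAnet)

# Estimate a regularized partial-correlation network and detect communities (dimensions)
ega_fit <- EGA(
  data      = das,        # hypothetical data frame of item responses
  model     = "glasso",   # graphical LASSO network estimation
  algorithm = "louvain",  # community detection algorithm
  plot.EGA  = TRUE        # plot items as nodes colored by community
)

ega_fit$n.dim          # number of communities (dimensions) detected
ega_fit$dim.variables  # item-to-dimension assignments
```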

Dimension Composition

Achieving simple structure has long been viewed as the central challenge of factor analysis, but it is less widely recognized that item redundancy may lie at the root of many simple structure difficulties (e.g., Oltmanns & Widiger, 2016). Inclusion of homogeneous content is conventionally pursued as a core strategy of test construction geared to promoting internal consistency. However, if unchecked, redundancy unavoidably leads to the emergence of "splitting" artifacts, such as "bloated specific" factors (Cattell & Tsujioka, 1964) that form purely on the basis of redundant content. Their nature as bloated specifics is confirmed when, for example, they show no substantive validity associations with criteria of interest. A related, potentially even more insidious problem can emerge when a particular content area becomes over-represented on an instrument simply by virtue of the ease with which relevant item content can be generated, even if this falls short of including items close enough in meaning to be duplicative. Such is the case in the original 100-item DAS item pool with regard to items concerning perfectionism and achievement, which are not necessarily redundant but simply lend themselves to being restated in varied ways. This can lead to "lumping" rather than splitting problems because of the temptation to interpret variance explained by the potentially redundant content as an index of its importance. Inclusion of such a scale may artifactually promote better fit for a bifactor or hierarchical structure and outcomes such as Moore et al.'s one-factor DAS scale containing only success/perfectionism items and nothing about social acceptance. To address item redundancy, unique variable analysis (UVA; Christensen et al., 2020a, b) was carried out as a first step in item analysis.

Structural Consistency and Replicability

The number, composition, and stability of the underlying dimensions of the remaining variables were determined using the bootstrapped version of EGA (bootEGA), which provides an estimate of the reproducibility of the dimensions and their item composition. Structural consistency is the bootstrapped EGA counterpart to the classical test theory concept of reliability and is defined as the extent to which a dimension is interrelated and homogeneous in the presence of other related dimensions (Christensen et al., 2020c). It is operationalized as the proportion of times that each dimension estimated via EGA has the same item composition across a set of replicate bootstrap samples (Christensen & Golino, 2019). Item replicability (or item stability) indicates how often items replicate in their empirically derived dimension and in other dimensions. Instruments with low item replicability tend to have a very unstable dimensional structure that does not replicate across bootstrapped samples.
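The sketch below illustrates how these bootstrap indices might be obtained with EGAnet, again assuming a hypothetical data frame `das_items` of the retained items; function and argument names follow recent EGAnet releases and may vary across versions.

```r
library(EGAnet)

# Bootstrap the EGA solution to gauge its reproducibility
boot_fit <- bootEGA(
  data      = das_items,     # hypothetical data frame of retained items
  iter      = 500,           # number of bootstrap replicates
  type      = "parametric",  # simulate replicates from the estimated parameters
  model     = "glasso",
  algorithm = "louvain"
)

# Structural consistency: how often each empirical dimension is recovered intact
dimensionStability(boot_fit)

# Item stability: how often each item lands in its empirical dimension (and in others)
itemStability(boot_fit)
```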

Hierarchical Structure and Fit to Data

For a fuller picture of the current findings and to contextualize the network results, given the relative novelty of the network analytic approach and EGA in particular, we carried out a further set of analyses concerning the potential higher-order dimensionality of the DAS from a latent variable perspective. The extent to which the DAS can be considered unidimensional versus multidimensional, as well as whether a network or latent variable measurement model is more justified, has critical theoretical implications which are taken up in the discussion.

Results

Dimensional Composition

A baseline exploratory graph analysis was carried out for the full set of 100 DAS items using the EGAnet R package 0.9.7 (Golino et al., 2021a, b), applying the graphical least absolute shrinkage and selection operator (GLASSO) and the Louvain community detection algorithm. The resulting network, consisting of seven communities, is shown at the top of Fig. 1. Next, we used the UVA function to perform unique variable analysis (Christensen et al., 2020a). This function presents the user with target variables and candidate redundant variables identified on the basis of weighted topological overlap (wTO) and implements the user's decision concerning which steps to take to address the redundancy. Successive target variables (and their corresponding redundant variables) continue to be presented until a stopping criterion is reached. In other areas in which wTO has been applied, thresholds of wTO = 0.20 or 0.25 have been used. These thresholds identified only three redundancies among the DAS items in the current sample. Christensen et al. recommend using an adaptive alpha rather than a set threshold, which adjusts the conventional alpha threshold as a function of sample size and empirical distribution to avoid false positives (i.e., overidentifying redundancies due to surplus statistical power). With the current sample size (N = 1021 in the index sample), there was still an excess of redundancies identified, that is, items that were associated but did not appear to be redundant in meaning. The ranked list of wTO values and proposed redundancies was examined to identify a point that appeared to strike a balance between the two extremes of over- and under-identification of redundancy. Setting a fixed p-value of 0.005 (corresponding to a wTO threshold of 0.08 in the present sample) appeared suitable. Christensen et al. recommend two alternative strategies for dealing with redundancies: either forming a latent variable facet from all the redundant items to replace the item scores, which the authors favor as an approach that retains information, or retaining only one of the items in a redundant set, for example, the item with the highest corrected item-total correlation. The latter approach appeared more appropriate in the present context, and so the 34 redundant items that were identified were removed. The removed items and the items they were found to be redundant with are shown in Table SM1 within the supplemental materials.
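The following is an indicative sketch of the UVA step rather than a reproduction of the interactive procedure described above; `das` is a hypothetical data frame of the 100 items, and the arguments of UVA() have changed across EGAnet versions.

```r
library(EGAnet)

# Flag candidate redundancies via weighted topological overlap (wTO);
# UVA() arguments differ across EGAnet versions, so this call is indicative only.
uva_fit <- UVA(data = das)

# In the reported analysis, the ranked wTO values were inspected and a fixed
# p-value of .005 (wTO of roughly .08 in this sample) was adopted; one item per
# redundant set was retained (highest corrected item-total correlation) and the
# remaining redundant items were dropped before re-running EGA on the reduced pool.
```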
To determine the stability of the initial 66-item, five-dimension solution, we carried out an item stability analysis as described by Christensen and Golino (2019). The 66 nonredundant items were entered into a bootstrapped EGA analysis (bootEGA) with the Louvain community detection algorithm and parametric bootstrapping, whereby simulated samples with the same statistical parameters as the original sample are generated and analyzed (rather than randomly resampling from the original sample). This combination of analytic options appeared to be the best fit to the DAS based on considerations identified in the simulation studies of Golino and Epskamp (2017) and Christensen et al. (2020a). Over the 500 iterations, four dimensions were chosen 18% of the time, five dimensions 60% of the time, six dimensions 22% of the time, and seven dimensions only three times. The median network thus consisted of five dimensions, matching the number of factors from a parallel analysis conducted on the same sample. However, the fact that other solutions were found over a substantial percentage of bootstraps suggested the five-dimension solution was not stable. This was confirmed by very low dimensional stability (the percentage of times a dimension was exactly replicated), which ranged from 1 to 22.6% across the dimensions. Sources of instability were identified by analyzing the proportion of times items were reliably assigned to the same dimension across bootstrap replications for each dimension. Christensen et al. (2020a) suggest 80% as a cutoff for acceptable stability. In conjunction with information from the network diagrams regarding the item's graphical placement, the item with the lowest stability that appeared to most confound the overall dimensional structure was removed. The analysis was repeated without that item, and this process continued until the remaining items had at least 80% stability. The final structural consistency of the five dimensions at that point was 0.99, 0.97, 0.93, 0.86, and 0.91, and average item stability was 0.98, 0.99, 0.97, 0.98, and 0.96, respectively, suggesting a high level of reliability comparable to attaining Cronbach's alpha coefficients of the same magnitude. The analysis was repeated in the cross-validation sample with nearly identical dimension composition and item and dimension stability, with the exception that Item 60 was allocated to Cognitive Flexibility rather than the Acceptability to Others factor. The analysis was repeated in the full sample, with this item falling into the Acceptability to Others factor.
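The iterative pruning just described could be sketched as the loop below. It is a simplification (the actual procedure also weighed each item's graphical placement), and the object and component names are assumptions based on recent EGAnet output that may need adjusting by package version.

```r
library(EGAnet)

items <- das_nonredundant  # hypothetical: the 66 non-redundant items

repeat {
  boot_fit <- bootEGA(items, iter = 500, type = "parametric",
                      model = "glasso", algorithm = "louvain")

  # Proportion of bootstraps in which each item lands in its empirical dimension
  # (component name is an assumption; check the itemStability() output structure)
  stab    <- itemStability(boot_fit)
  own_dim <- stab$item.stability$empirical.dimensions

  if (min(own_dim) >= 0.80) break   # stop once every item reaches the 80% criterion

  worst <- names(which.min(own_dim))                         # least stable item this round
  items <- items[, setdiff(colnames(items), worst), drop = FALSE]
}
```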
The composition of the dimensions in the full sample is shown in Table 1, along with their network loadings (which are computed from semi-partial correlations and are therefore lower than the regular factor loadings readers may be accustomed to, which are on a simple correlation scale; as Christensen and Golino (2020) point out, network loadings of 0.15 or less represent low loadings, between 0.15 and 0.25 moderate loadings, and 0.25 or more high loadings). The scales included those with content typically found on the DAS-A, here named High Standards and Acceptability to Others, and an Imperatives scale. The scale called Negative Expectancy overlaps in content with the scale Beck et al. (1991) labeled "Vulnerability." However, in analyses of the DAS-A, content from this scale typically merges with the high standards and approval factors. Finally, the Cognitive Flexibility scale has not been reported before, consisting of items Weissman eliminated for having low item means. Also indicated are items that would be considered extreme positive responding style items according to Forand et al.'s classification. Only 12 of 42 (28.6%) of the items were style items, compared to 23 of 40 (57.5%) in the DAS-A. It is notable that five of the twelve style items appear in the Acceptability to Others subscale. Nearly all of the Cognitive Flexibility and Imperatives items were drawn from Weissman's Form B or had been omitted by Weissman. Figure 1 shows the final graph of the retained items in their communities, with the initial graph of all 100 items provided for comparison.
Table 1
Final network loadings

| Item | ATO | HS | CF | IMP | NE | Weissman Form |
|---|---|---|---|---|---|---|
| 46. If people whom I care about do not care for me, it is awful | 0.27 | | | | | B |
| 94. A person doesn't need to be well liked in order to be happy | 0.26 | | | | | B* |
| 67. I don't need the approval of other people in order to be happy | 0.25 | | | | | A* |
| 59. I cannot be happy unless most people I know admire me | 0.20 | 0.10 | | | | A* |
| 1. I can find happiness without being loved by another person | 0.19 | | | | | A* |
| 88. I am nothing if a person I love doesn't love me | 0.16 | | | | 0.10 | A |
| 12. If people consider me unattractive it need not upset me | 0.16 | | | | | X |
| 60. My own opinions of myself are more important than other's opinions of me | 0.14 | | 0.12 | | | A* |
| 74. A person cannot survive without the help of other people | 0.11 | | | | | X* |
| 45. My life is wasted unless I am a success | | 0.32 | | | | B |
| 49. If I don't set the highest standards for myself, I am likely to end up a second-rate person | | 0.26 | | | | A |
| 47. If I fail at my work, then I am a failure as a person | | 0.24 | | | | A |
| 98. If I am to be a worthwhile person, I must be truly outstanding in at least one major respect | | 0.22 | | | | A |
| 7. I must be a useful, productive, creative person or life has no purpose | | 0.16 | | | | B |
| 13. If you cannot do something well, there is little point in doing it at all | | 0.14 | | | | A* |
| 33. People who have good ideas are more worthy than those who do not | | 0.13 | | | | A |
| 22. People should have a reasonable likelihood of success before undertaking anything | | 0.10 | | | | A* |
| 25. Even though a person may not be able to control what happens to him, he can control how he thinks | | | 0.29 | | | X |
| 84. No one can hurt me with words. I hurt myself by the way I choose to react to their words | | | 0.22 | | | X |
| 53. One should look for a practical solution to problems rather than a perfect solution | | | 0.21 | | | B |
| 32. I can take responsibility only for what I do, not what other people do | | | 0.20 | | | X |
| 17. An unpleasant event does not make me sad. I make myself sad by what I tell myself | | | 0.19 | | | X* |
| 40. I may be able to influence other people's behaviour but I cannot control it | | | 0.17 | | | B |
| 43. A person cannot change his emotional reactions even if he knows they are harmful to him | | | 0.15 | | | B |
| 24. If I demand perfection in myself, I will make myself very unhappy | | | 0.14 | | | X |
| 8. I can find greater enjoyment if I do things because I want to, rather than in order to please other people | | | 0.12 | | | X |
| 99. I ought to be able to solve my problems quickly and without a great deal of effort | | | | 0.24 | | B |
| 44. I should always have complete control over my feelings | | | | 0.24 | | B* |
| 100. To be a good, moral, worthwhile person, I must help everyone who needs it | | | | 0.21 | | A |
| 23. I should be able to please everybody | | | 0.10 | 0.20 | | B* |
| 10. I should be happy all the time | | | | 0.20 | | B |
| 64. If I try hard enough I should be able to excel at anything I attempt | | | | 0.19 | | X |
| 56. A person should do well at everything he undertakes | | 0.15 | | 0.18 | | B |
| 90. A person should be able to control what happens to him | | | | 0.17 | | B |
| 57. If someone disagrees with me, it probably indicates he does not like me | | | | | 0.24 | A |
| 89. People will reject you if they know your weaknesses | | | | | 0.22 | B |
| 66. I cannot trust other people because they might be cruel to me | | | | | 0.21 | A* |
| 79. Whenever I take a chance or risk I am only looking for trouble | | | | | 0.20 | B |
| 42. If I make a foolish statement, it means I am a foolish person | | 0.12 | | | 0.19 | B |
| 55. If I do well, it probably is due to chance; if I do badly, it is probably my own fault | | | | | 0.18 | B |
| 18. If I ask a question, it makes me look inferior | | | | | 0.18 | A |
| 28. It is shameful for a person to display his weaknesses | | | | | 0.16 | B |
Network loadings are partial correlations and were calculated in the overall sample. Only loadings ≥ 0.10 are shown
Network loadings interpretation: Low < 0.15; Moderate 0.15 to 0.25; High > 0.25
ATO acceptability to others, HS high standards, NE negative expectancy, CF cognitive flexibility, IMP imperatives
*Style (vs. content) item, using the classification of DeRubeis and colleagues. Where X is listed for Weissman form, the item was dropped by her

Hierarchical Structure and Fit to Data

Using the CFA function in EGAnet, the network parameters were passed to the lavaan R package (Rosseel et al., 2022) for confirmatory factor analyses. The network structure fit the data well according to conventional thresholds, χ2 (809) = 2131.6, p < 0.001, CFI = 0.97, RMSEA = 0.040, GFI = 0.98, NFI = 0.96, and this was nearly the same in the cross-validation sample: χ2 (809) = 2727.5, p < 0.001, CFI = 0.95, RMSEA = 0.048, GFI = 0.97, NFI = 0.93. To gain information about the potential hierarchical structure of the DAS in light of the intercorrelation of the dimensions and to estimate model-based reliability, an exploratory bifactor structure was fit in the full sample using the OMEGA function of the psych R package (Revelle, 2021) with Schmid-Leiman rotation and maximum likelihood estimation (due to the lack of a WLSMV option in the psych package). As Reise et al. (2018) note, bifactor models will typically achieve a comparable fit to the equivalent correlated factors model, and this was the case in the current study, with the RMSEA = 0.045 comparable to what was found with the CFA. The fully unidimensional model (RMSEA = 0.066) fit less well than the bifactor model, but this still indicated a relatively good fit.
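A sketch of these model-fitting steps is given below, with `ega_fit` and `das42` as hypothetical names for the final EGA object and the 42 retained items; the component names of the returned objects are assumptions that may differ across package versions.

```r
library(EGAnet)
library(psych)

# Confirmatory test of the network-derived structure, passed to lavaan internally
cfa_fit <- CFA(ega.obj = ega_fit, data = das42, estimator = "WLSMV")

# Exploratory bifactor (Schmid-Leiman) solution with maximum likelihood,
# five group factors plus a general factor
bf <- omega(das42, nfactors = 5, fm = "ml", sl = TRUE)
print(bf)  # reports total omega, hierarchical omega, ECV, and group-factor omegas
```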
As shown in Table 2, the omega coefficient for the general factor was 0.94. This reduced to 0.68 for hierarchical omega, suggesting a substantial proportion of the reliable variance was due to the group (subscale) factors. The explained common variance (ECV) of the general factor was 0.48, which is also inconsistent with unidimensionality (which would be indicated by a higher ECV). However, the balance of remaining evidence appeared to favor unidimensionality. ECV is typically interpreted jointly with percent uncontaminated correlations, which was 75% in the present analysis and considered relatively high and favoring unidimensionality (Stucky & Edelen, 2015), though qualified by the lower ECV. The model-based omegas for the factors within the bifactor model were 0.88, 0.82, 0.89, 0.78, and 0.81 for the Negative Expectancy, Imperatives, High Standards, Cognitive Flexibility, and Acceptability to Others factors, respectively, but these reduced to 0.23, 0.43, 0.30, 0.52, and 0.47, respectively, for the hierarchical subscale omegas, suggesting that much of the subscale reliability was derived from the overall general factor. General score saturation of group factors has implications for the justifiable use of subscale scores. This was reflected in factor score determinacy, which was 0.91 for the general factor, with values for the remaining factors all below the 0.90 threshold suggested for unit weighted factor scores to be considered suitable approximations of the weighted factor score (Gorsuch, 1983). Similarly, Hancock and Mueller's (2001) H provides an estimate of the suitability of factor scores to be used as estimates of the latent variable in further analyses (e.g., structural modeling). These were all below the suggested threshold of 0.80. In contrast, the equivalent values for a correlated factor as opposed to a bifactor structure met or were relatively close to conventional thresholds.
Table 2
42-item EGA dimensions, bifactor model scale properties

| Factor | Omega | OmegaH | H (bifactor) | H (correlated factors) | Factor determinacy (bifactor) | Factor determinacy (correlated factors) |
|---|---|---|---|---|---|---|
| General factor | 0.93 | 0.68 | 0.92 | | 0.91 | |
| Negative expectancy | 0.88 | 0.23 | 0.56 | 0.77 | 0.70 | 0.92 |
| Imperatives | 0.82 | 0.43 | 0.66 | 0.72 | 0.80 | 0.89 |
| High standards | 0.89 | 0.30 | 0.67 | 0.78 | 0.79 | 0.92 |
| Cognitive flexibility | 0.78 | 0.52 | 0.69 | 0.68 | 0.82 | 0.87 |
| Acceptability to others | 0.81 | 0.47 | 0.68 | 0.71 | 0.82 | 0.88 |
Percent uncontaminated correlations = 0.75. Overall explained common variance = 0.48. Average relative parameter bias = 0.12
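For reference, the two bifactor indices reported in Table 2 can be written in terms of the standardized general factor loadings $\lambda_{ig}$, group factor loadings $\lambda_{ik}$, and residual variances $\theta_i$ (standard definitions from the bifactor literature):

$$
\omega_H = \frac{\left(\sum_i \lambda_{ig}\right)^{2}}{\left(\sum_i \lambda_{ig}\right)^{2} + \sum_{k}\left(\sum_{i \in k} \lambda_{ik}\right)^{2} + \sum_i \theta_i},
\qquad
\mathrm{ECV} = \frac{\sum_i \lambda_{ig}^{2}}{\sum_i \lambda_{ig}^{2} + \sum_{k}\sum_{i \in k} \lambda_{ik}^{2}}
$$

$\omega_H$ indexes the proportion of total score variance attributable to the general factor alone, and ECV the proportion of common variance it explains; the values of 0.68 and 0.48 reported above are why the evidence regarding unidimensionality was judged to be mixed.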
The results concerning the hierarchical structure of the DAS were not clear-cut. Whether the underlying data model is assumed to be a latent variable model or a network model (taken up in more detail in the discussion) has a bearing on how the evidence is weighted: the former gives greater weight to the causal role of an overarching unobserved latent factor underlying the covariance of the factors, whereas the network approach regards this covariance as an emergent property of the interaction of distinct but interrelated variables, a perspective that yields much improved subscale applicability indices (see H and factor determinacy in Table 2). However, it could be argued that constituting the subscales using EGA, a network-based approach, maximizes the separation of the dimensions and therefore strengthens the case for correlated factors.
To gain a fuller picture and to complement the EGA approach of maximizing separation between dimensions, a series of analyses was carried out adopting an item selection strategy aimed at maximizing unidimensionality. Starting with the set of 66 items remaining after elimination of redundant items, the item with the lowest item explained common variance (I-ECV) from a bifactor EFA with Schmid-Leiman rotation was removed. A bootstrapped EGA was then performed and the bifactor EFA repeated, stipulating the number of factors corresponding to the median number of communities found by the EGA. The intention was for the process to stop once either the EGA called for one community or there were no more items with I-ECVs below 0.80. The first criterion was reached first: a single EGA community was indicated once 27 items remained and 39 had been removed. To provide a bridge with the previous set of analyses using EGA that resulted in 42 retained items in five dimensions, an exploratory factor analysis was carried out on the set of 27 items on the unidimensional scale, also stipulating five factors, using the psych R package with the MINRES factor method applied to a matrix of polychoric correlations and oblimin rotation. The results are shown in Table SM2. Of the 27 items, 16 also appeared on the 42-item scale, with 26 from the 42-item scale being among the 39 eliminated to reach the unidimensionality criterion. The first factor of the 27-item oblique solution approximates the High Standards factor, with one Imperatives item consistent in content with High Standards now included. The second factor contains one Cognitive Flexibility item and three other reverse-keyed items. It is fairly evident this factor is not a subset of Cognitive Flexibility per se but rather a set of further High Standards-themed items that are reverse-keyed. The remaining three factors appear to be vestigial factors with low maximum loadings resulting from overfactoring an essentially unidimensional set of items; these are made up mainly of Negative Expectancy items and other items that could not be stably assigned to one of the EGA dimensions. A final bifactor EFA was carried out with two factors. Overall omega was 0.95, which reduced to 0.86 with group factors taken into account, which, in conjunction with a percent uncontaminated correlations value of 78%, confirms the essentially unidimensional picture of this reduced item set. All item explained common variances except two were greater than 0.80.
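The I-ECV criterion used in this pruning sequence is, for item $i$, defined in terms of the standardized loadings of the bifactor solution (a standard definition; $\lambda_{ig}$ is the loading on the general factor and $\lambda_{ik}$ the loading on group factor $k$):

$$
\text{I-ECV}_i = \frac{\lambda_{ig}^{2}}{\lambda_{ig}^{2} + \sum_{k}\lambda_{ik}^{2}}
$$

Items with I-ECV near 1 are explained almost entirely by the general factor, so removing items with low I-ECV progressively purifies the pool toward unidimensionality.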

Validity Analyses

Predictor and criterion variables for testing the validity of the newly constituted subscales were available for a portion of the sample (N = 1780). As a preliminary step, EGA was carried out with the BDI, BAI, and BHS. For the BHS, the well-established finding of a unidimensional structure was confirmed. However, items 4, 8, 13, 15, 16, and 20 were found to be redundant with other items and so were not included in the calculation of total BHS scores. For both the BDI and BAI, three-factor solutions were found that echo previous analyses of the underlying structures of these scales. McElroy et al. (2018), in a review, identified eight variations of the structure of the BDI, most of which contained a cognitive factor and an affective factor also found in the present study (please see Table 3). Many of these models also feature a somatic factor that includes sleep, weight, and appetite items, which was also identified in the present sample. Item 20 (which was subsequently dropped from the BDI-II) was not found to be stable in the present sample and so was not scored. Finally, by convention, Item 2 (pessimism) is not scored in analyses involving the Beck Hopelessness Scale. Similarly, two of the BAI factors (Subjective and Somatic) largely match those described by Steer (2009), whose sample overlapped with the present one. The third dimension identified by the EGA combined sympathetic "body heat" items (sweating, feeling hot, flushed) in a grouping that appears not to have been previously reported; this scale was labeled "Somatic 2".
Table 3
Correlation of predictor and criterion variables

|       | ATO  | HS   | NE   | CF   | IMP  | BDIF1 | BDIF2 | BDIF3 | BHS  | BAIF1 | BAIF2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HS    | 0.54 |      |      |      |      |       |       |       |      |       |       |
| NE    | 0.52 | 0.66 |      |      |      |       |       |       |      |       |       |
| CF    | 0.32 | 0.25 | 0.37 |      |      |       |       |       |      |       |       |
| IMP   | 0.38 | 0.58 | 0.55 | 0.27 |      |       |       |       |      |       |       |
| BDIF1 | 0.33 | 0.38 | 0.40 | 0.11 | 0.20 |       |       |       |      |       |       |
| BDIF2 | 0.19 | 0.26 | 0.27 | 0.10 | 0.17 | 0.66  |       |       |      |       |       |
| BDIF3 | 0.09 | 0.07 | 0.07 | 0.05 | 0.07 | 0.30  | 0.41  |       |      |       |       |
| BHS   | 0.31 | 0.35 | 0.39 | 0.15 | 0.12 | 0.60  | 0.52  | 0.21  |      |       |       |
| BAIF1 | 0.11 | 0.08 | 0.12 | 0.01 | 0.08 | 0.33  | 0.34  | 0.29  | 0.19 |       |       |
| BAIF2 | 0.09 | 0.04 | 0.13 | 0.05 | 0.15 | 0.29  | 0.33  | 0.29  | 0.15 | 0.52  |       |
| BAIF3 | 0.16 | 0.09 | 0.15 | 0.05 | 0.16 | 0.41  | 0.39  | 0.32  | 0.26 | 0.35  | 0.64  |
N = 1718
HS high standards, ATO acceptability to others, NE negative expectancy, IMP imperatives, CF cognitive flexibility, BHS Beck Hopelessness Scale, BDI factors: BDIF1 (Cognitive) items 1, 2, 3, 5, 6, 7, 8, 9, and 14, BDIF2 (Affective) items 4, 10, 11, 12, 13, 15, 17, 21, BDIF3 (Somatic) items 16, 18, 19, BAI factors: BAIF1 (Somatic 2) items 1, 2, 13, 20, 21, BAIF2 (Somatic) items 3, 6, 7, 8, 11, 12, 15, 16, 19, BAIF3 (Subjective) items 4, 5, 9, 10, 14, and 17
The correlations of these variables are shown in Table 3. In the same way that network analysis can help to efficiently identify patterns of substantive relationships between items in scales, it has increasingly been applied to representing relationships among constructs in the context of external scale validation (Christensen et al., 2020a, b, c; Truhan et al., 2021). We first carried out a network analysis of the DAS, BDI, and BAI subscales to confirm the specificity of the DAS dimensions to depression. The EBICglasso function of the qgraph R package was used with the gamma tuning parameter set at 0.5 and the network plotted using the "spring" layout (Fig. 2). The specificity of the DAS dimensions is emphasized in the network because its edges are based on LASSO-regularized partial correlations. More novel findings include the fact that the DAS factors are specifically related to the cognitive factor of the BDI and that there are no direct associations between the Cognitive Flexibility and Imperatives factors and depressive symptoms.
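A sketch of the between-scale network estimation reported here is given below, assuming a hypothetical data frame `scales` whose columns are the DAS, BDI, and BAI (and, for Fig. 3, BHS) subscale scores.

```r
library(qgraph)

# Correlations among subscale scores (cor_auto chooses polychoric/Pearson as appropriate)
cors <- cor_auto(scales)

# LASSO-regularized partial-correlation network with the gamma tuning parameter at 0.5
net <- EBICglasso(cors, n = nrow(scales), gamma = 0.5)

# Plot with a force-directed ("spring") layout, as in Fig. 2
qgraph(net,
       layout = "spring",
       labels = colnames(scales),
       theme  = "colorblind")
```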
The clarification of the composition of the DAS subscales invites more precise consideration of what they represent. A detailed analysis is offered in the discussion, where a key claim is made that the Negative Expectancy subscale is more reflective of the depressive state than of an ongoing vulnerability to depression that is present between episodes and that, in this regard, it is more similar, within Beck's cognitive theory of depression, to such constructs as hopelessness. This was examined in a further network analysis that included the BHS along with the BDI and DAS subscales, as shown in Fig. 3. Consistent with this supposition, Negative Expectancy had the strongest edge with the BHS. Among the remaining subscales, only Acceptability to Others also had a positive edge. There was an unexpected slight negative relationship between Imperatives and the BHS. This amounted to a partial correlation of just over −0.04 which, though modest, reflects a relationship that was robust to partialling of all other variables and LASSO regularization. To the extent that Imperatives entail being called to action whereas hopelessness implies viewing further action as fruitless, this inverse relationship is not implausible.
This view of the Negative Expectancy subscale would also suggest it should be more closely tied to symptom state than subscales of more enduring beliefs, such as High Standards and Acceptability to Others, and scales that cut across content areas, such as Imperatives and Cognitive Flexibility. To test this view of the differential relationship of the scales to depression, we tested the model depicted in Fig. 4 and, specifically, the necessity of the dashed direct paths from Acceptability to Others and High Standards to BDI score compared to their indirect effects through Negative Expectancy. The model, with each variable specified as a latent variable, was estimated using maximum likelihood estimation with bootstrapped standard error estimation. Model fit was good according to the RMSEA (0.040) and SRMR (0.051) but below conventional thresholds for the CFI and TLI (both 0.84). Modification indices indicated that fit could be improved by allowing correlated residuals between factor indicators; however, we decided this was not justifiable and that fit was adequate for the present purposes. Table 4 presents the relevant model parameters. Negative Expectancy, High Standards, and Acceptability to Others had comparable direct effects on BDI. Half of the effect of High Standards was mediated by Negative Expectancy, whereas a fifth of the effect of Acceptability to Others was mediated by Negative Expectancy. This provides limited support for the idea that Negative Expectancy would, in effect, serve as a final common pathway to depression, a prediction that would need to be tested more definitively using longitudinal data.
Table 4
Path model coefficients for DAS subscale direct effects on BDI score and indirect effects mediated by Negative Expectancy

| Effect on BDI | Beta | SE | χ2 |
|---|---|---|---|
| Negative expectancy | 0.21 | 0.058 | 3.64* |
| High standards: direct | 0.182 | 0.055 | 3.31* |
| High standards: indirect | 0.083 | 0.027 | 3.06* |
| Acceptability to others: direct | 0.088 | 0.054 | 1.63 |
| Acceptability to others: indirect | 0.043 | 0.014 | 2.94* |
*Significant at p < 0.05
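The mediation model of Fig. 4 and Table 4 could be specified in lavaan roughly as follows. This is a simplified sketch using observed subscale scores and hypothetical variable names, whereas the reported model treated each construct as a latent variable with item indicators.

```r
library(lavaan)

model <- '
  # mediator regressed on the two value dimensions
  NE  ~ a1 * HS + a2 * ATO

  # outcome regressed on the mediator plus the direct paths under test
  BDI ~ b * NE + c1 * HS + c2 * ATO

  # indirect effects through Negative Expectancy
  ind_HS  := a1 * b
  ind_ATO := a2 * b
'

fit <- sem(model, data = d, se = "bootstrap", bootstrap = 1000)
summary(fit, standardized = TRUE, fit.measures = TRUE)
parameterEstimates(fit, boot.ci.type = "perc")
```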

Discussion

In the present study, we were able to capitalize on recent innovations in psychometric analysis to advance understanding of the DAS, arguably the most important instrument in CBT research and practice, beyond what was previously attainable. The traditional latent variable approach entails considerable subjective judgment regarding dimensionality and subscale composition. Where there is a strong signal in the data regarding the underlying structure of an instrument, this subjectivity is less liable to introduce distortion that impedes identification of the true underlying measurement model. However, items on scales like the DAS are inherently complex, which creates subtle cross-cutting sources of variance, and these are nearly impossible to identify solely on the basis of visual inspection of output, identification of areas of local dependence, and rational analysis. In the updated approach based on network mathematics followed here, dimensionality is determined by community detection algorithms, items are assigned to the dimension they associate with most stably in the long-run, and repetitive, semantically similar items are regarded as sources of distortion rather than building blocks of reliability.
As noted in the introduction, psychometric analyses of the DAS have mostly focused on Weissman’s Form A and have typically identified two or three dimensions, with an achievement/perfectionism factor and a social approval/acceptance factor almost always in the mix. While there has been one notable study (Moore et al., 2014) whose one-factor solution likely represents lumping, splitting through overfactoring is much more common, as epitomized by the current study’s predecessor, Beck et al. (1991). Table SM1 in the supplementary materials offers a compelling blueprint of what overfactoring looks like relative to the likely more precise solution of the present analysis. The factor named Negative Expectancy in the current analysis was called Vulnerability by Beck et al. (1991). Negative Expectancy is essentially a reduced version of Vulnerability with redundant items removed, which is also true of Beck et al.’s Success-Perfectionism relative to the current High Standards subscale. Beck et al.’s Need for Approval is the first “bloated specific” we encounter in the table, built on redundant items, which is also true of Need to Please Others, Need to Impress, and Avoidance of Appearing Weak. The fact that the names that were chosen for these are based on inferred motivation (all described as reflecting putative needs) is potentially a consequence of their synthetic nature, requiring a “need” to be read into what distinguishes a given group from other items in the absence of a more immediately salient basis. For the most part, these appear to result from overfactoring of the current Acceptability to Others subscale, the core of which appears on Beck et al.’s Disapproval-Dependence factor.
In contrast, the Imperatives factor is almost identical to Beck et al.’s factor with the same name, with the exception of the omission of one redundant item. The Cognitive Flexibility factor only appears in a truncated, three-item form in the Beck et al. solution as Control Over Emotions. The remaining Cognitive Flexibility items were eliminated by Beck et al.’s subsequent procedures and so appear here for the first time since Weissman’s original analyses. Notably, this scale does not appear to be merely a method variance factor due to positive keying (Rosellini & Brown, 2021) as five of nine of the ATO items were also positively keyed. A number of instruments have been developed that seek to quantify skill acquisition as a result of cognitive therapy (Barber & DeRubeis, 2001; Jarrett et al., 2011; Strunk et al., 2014) mainly tied to self report or rater assessment of actual or hypothetical behavior in response to challenging situations. The Cognitive Flexibility factor potentially adds to and complements these scales by tapping into corresponding beliefs.
Strikingly, with reference to Table 1, the content of Weissman's Form A and Form B is remarkably non-overlapping, such that it might be said they resemble each other more like long-lost cousins than the fraternal twins they were intended to be. The Acceptability to Others and High Standards factors are mostly made up of items from DAS-A, which means it should be feasible to repeat important archival analyses that used DAS-A by applying the scoring from the measurement structure derived in the present study. Cognitive Flexibility, Imperatives, and Negative Expectancy are made up mostly of items from DAS-B. Notably, only 12 of 42 (28.6%) items on the new scale versus 23 of 40 (57.5%) of items on DAS-A were identified by DeRubeis and colleagues as "style" items judged to be prone to extreme positive responding. It may be that the emphasis on long-run stability in item analysis eliminated items that were unstable due to multiple sources of variance that included response styles. However, in a further twist, most of the style items that made it into the present version of the DAS load on the Acceptability to Others dimension. It might not be too farfetched to suppose that concern with acceptability could be confounded with a tendency to "protest too much" that one is not dependent, which would be consistent with this observed pattern of findings.
As for the Negative Expectancy subscale, its content appears, on the surface, to be heterogeneous, and a straightforward theme does not immediately emerge. However, compared to the other subscales, it appears to denote actual, rather than hypothetical, depression. The following are suggested understandings of the dimension within the context of CBT research and theory, any or all of which may prove to be supportable pending further research:
1.
The factor is a sampling of the propositional content of the thinking of individuals with ongoing depressive episodes. In line with the distinct nature of thought during depression compared to the same person's thinking outside of an episode, Teasdale (1997) drew on Ornstein's notion of multiple minds. Each "mind" is a comprehensive mental model that can be instantiated, when appropriate, to deal with the situations to which that "mind" applies. With regard to the depressive "mind-in-place" Teasdale wrote,
…normal mood is characterized by functional mental models, in which personal worth is relatively independent of whether or not one is liked by others or whether one succeeds or fails at tasks…Interpreted through [depressive] models, failure or disapproval will be interpreted more catastrophically…because such events imply global personal worthlessness. (p. 74)
 
2.
Negative expectancy operationalizes one of Beck’s central concepts with regard to depression, the negative view of the world aspect of the negative cognitive triad. In contrast to the negative view of the future and negative view of the self for which Beck and colleagues developed measurement instruments (the Beck Hopelessness Scale and Beck Self Concept Test, respectively), a corresponding scale was never developed for the third leg of the triad. The content of Negative Expectancy is consistent with Weissman’s descriptions of this aspect of the triad:
The depressed person tends to see his world as making exorbitant demands on him and as presenting obstacles that cannot be surmounted. He interprets his interactions with his environment in terms of defeat and failure, deprivation, or disparagement. (Weissman, 1979, p. 21).
 
3.
Negative Expectancy represents a disposition to depression that is more immediate than the other four subscales, which are more distal and more conditional. Using the distinction Ryle (1949) draws between different dispositions, Negative Expectancy represents ongoing proneness to experience negativity characteristic of an imminent depressive episode or one already in progress that is less dependent on congruent environmental triggers. In contrast, the other four scales represent hypothetical liabilities to become depressed given appropriate life experiences.
 
This view informed the validity analyses carried out with the Negative Expectancy factor, and the results were consistent with this view in that NE was most closely associated with hopelessness and the depressive symptom factors. This has potentially significant implications for important lines of research that have employed the DAS. A considerable body of research has largely shown that DAS scores covary with depressive symptoms (summarized by Barnett & Gotlib, 1998), which contradicts the concept of the DAS measuring a vulnerability that persists between episodes and instead indicates it is a concomitant of depression. As suggested above, it could be that NE measures an immediate proneness to depression that emerges along with symptoms. In contrast, the other scales are more in line with the picture of enduring vulnerability that requires a matching trigger to activate, so that DAS subscales comprise both precursors and concomitants of depression. A related line of research aimed at resolving the apparent state dependence of the DAS has come to be referred to as the cognitive reactivity paradigm. Miranda et al. (1998) first showed that elevated DAS scores differentiated remitted from never-depressed participants only following a negative mood induction. It may be that here, too, the effect is mainly due to negative expectancy and largely not found in subscales that have more to do with ongoing values (e.g., HS and ATO) that should not change appreciably as a function of mood. A finer-grained analysis in terms of subscales may help explain the unreliability of the effect, which, for example, has been replicated among CBT responders (Segal et al., 2006) but not among those with incomplete symptom remission following therapy (Jarrett et al., 2012). The discrepancy may not be substantive but rather due to measurement artifacts these authors were not in a position to evaluate. For example, in the latter study, DAS-A was used at baseline and DAS-B at follow-up under the assumption that they were suitable to be used as parallel forms, an assumption the present study conclusively contradicts.
The initial sequence of analyses that identified a five-dimension measurement structure was undertaken from the standpoint of presumed multidimensionality. As they frequently do, bifactor analyses offered support for both a unidimensional and multidimensional structure. We aimed to gain further clarity by “rewinding” the process back to the point that redundant items had been eliminated and carried out a series of analyses geared to identifying a unidimensional solution. The fact that 39 items needed to be eliminated to be left with a 27-item unidimensional scale can be taken as further support for the multidimensionality of the DAS. This single dimension bore some resemblance to Moore et al.’s (2014) single dimension solution but also retained elements of Acceptability to Others and Negative Expectancy. Presuming unidimensionality purely based on the high association between the factors would, in line with the traditional latent variable approach, presuppose a reflective overall latent variable that causes its indicators, in this case, the single factors. Van den Hout (2014) argues convincingly that the latent variable approach as applied to psychopathology is tautological. It entails having a phenomenon be, at the same time, defined by and explained by its constituents. Alternatively, from the network standpoint, the constituents mutually influence each other, and their association emerges from this mutual influence rather than reflecting an underlying deeper-level construct.
Moreover, we would expect the covariance among dimensions to be maximal in clinical samples, which represent the culmination of the development of the presenting problems through patterns of mutual influence among these factors over time. Whether a hierarchical or a correlated-factors structure is better justified has implications for the uses to which the factors identified in the present study can be put, with reference to the analyses summarized in Table 2. Future research in nonsymptomatic samples will be needed to understand the relationship between these constructs at an earlier developmental point and to confirm these suppositions about the correct measurement model. Research in nonclinical samples can also help establish whether there are distinct profiles of subscales (e.g., Blatt’s anaclitic vs. self-critical subtypes and Beck’s sociotropic vs. autonomous subtypes), profiles that will be relatively difficult to detect in clinical samples, which represent the endpoint of the interplay of these factors and in which scores are therefore likely near their maximum.
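Future studies in nonsymptomatic samples could compare the two candidate representations directly. The lavaan sketch below (Rosseel et al., 2022, cited in the reference list) shows one way such a comparison could be set up; the item names (i1 to i15) and their assignment to factors are placeholders only and do not correspond to the item sets identified in the present study.

```r
# Illustrative comparison of a correlated five-factor model with a
# higher-order model in which a general factor accounts for the factor
# covariances. Item-factor assignments below are placeholders only.
library(lavaan)

correlated_model <- '
  IMP =~ i1 + i2 + i3
  CF  =~ i4 + i5 + i6
  HS  =~ i7 + i8 + i9
  ATO =~ i10 + i11 + i12
  NE  =~ i13 + i14 + i15
'

higher_order_model <- paste(correlated_model, '
  DAS =~ IMP + CF + HS + ATO + NE
')

fit_corr <- cfa(correlated_model,   data = das_items,
                estimator = "WLSMV", ordered = TRUE)
fit_hier <- cfa(higher_order_model, data = das_items,
                estimator = "WLSMV", ordered = TRUE)

# With ordinal indicators, a scaled difference test compares the two models.
lavTestLRT(fit_corr, fit_hier, method = "satorra.2000")
```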
With regard to exploratory factor analysis, Haig (in press) has observed that “In a real sense, EFA narrows down the space of a potentially large number of candidate theories to a manageable subset by facilitating judgments of initial plausibility” (p. 10). The present analysis suggests a plausible shape for the exhaustive pool of beliefs reflecting a disposition to depression that Weissman had compiled. Figure 4 represents a potential configuration of the DAS dimensions; it is not meant to be definitive but serves as a plausible starting point for further efforts. Imperatives and Cognitive Flexibility are conceived of as broad indicators of belief “style.” These influence the manner in which, and the degree to which, beliefs within the two value dimensions (HS and ATO) are adhered to, and together these elements represent the risk, for a particular individual, of the depressive mind becoming lodged in place, as reflected in elevated activity in the legs of the cognitive triad, most particularly Negative Expectancy.
Notably, the foregoing account does not refer to the schema, the fundamental concept Beck used to explain both susceptibility to depression and the distinctive processing of experience during episodes. The DAS was regarded as the main means of demonstrating the action of schemas. However, as Segal (1988) argued, self-report scales like the DAS can only represent content, whereas structural concepts such as schemas require a means of capturing functional relations that is not possible solely with reference to content. Still, more recent understandings of beliefs as dispositions that do not require an underlying representational architecture (e.g., Schwitzgebel, 2013) and of the propositional nature of learning (e.g., De Houwer, 2009) support the presence of beliefs alone as sufficient grounds for inferring an underlying disposition. The validity of this aspect of Beck’s theory can therefore be upheld if scales of relevant beliefs like the DAS are markers of schematic processing even if they do not themselves constitute schemas, a question for future research.
A clear limitation of the present study is that the data were collected a generation ago. The scale used gendered language that needed to be changed for the present paper, and further data are currently being collected with a version of the scale that uses gender-neutral pronouns. Analyses such as differential item functioning (DIF) as a function of gender would typically be included in a study of this sort, but their results here would undoubtedly be outdated; such analyses should be a priority for further studies with contemporary data collection. In the same vein, the DAS largely reflects the values of the dominant culture of its time, and efforts to broaden scales like the DAS to capture the more diverse contemporary culture are an ethical imperative as well as good science. Indeed, the title of the scale itself implies a value judgment that is at odds with contemporary sensibilities; an alternative name for the scale that retains the same acronym would be advisable.
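One way such DIF analyses could be carried out with contemporary data is sketched below using ordinal logistic regression DIF as implemented in the lordif package; this package was not used in the present study, and `das_items` and `gender` are hypothetical placeholder objects.

```r
# Minimal DIF sketch, assuming `das_items` is a respondents x items matrix of
# ordinal DAS responses and `gender` is a grouping vector of the same length.
# Items are flagged when their response functions differ across groups after
# conditioning on the latent trait.
library(lordif)

dif_fit <- lordif(resp.data = das_items, group = gender,
                  criterion = "Chisqr", alpha = 0.01)

summary(dif_fit)   # flagged items and accompanying effect sizes
```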
The current study can be seen as establishing a new 53-item baseline pool of DAS items, made up of the 42-item five-dimensional scale plus the 11 items that were not duplicates but contributed to the general dimension in the unidimensional analyses. The latter set of items may confound the delineation of distinct dimensions; still, they might include the precise belief that is the central issue for a given person when DAS items are used clinically, and they may also contribute insights about the DAS dimensions that can underpin future work on fleshing out (and potentially modernizing) the underlying constructs. More technically, further work will be needed to determine whether the seven-point Likert scale with a neutral midpoint is the best format for capturing this type of belief. Beevers et al. (2007) found that a four-point scale without a neutral middle anchor was optimal; however, more recent techniques (e.g., IRTrees; Park & Wu, 2019) can potentially shed light on whether responses are anchored in response options other than the final response given. These techniques can also provide further insight into response sets (e.g., Leventhal, 2018) such as those identified by DeRubeis and colleagues (e.g., Forand & DeRubeis, 2014), which were ameliorated in the present analysis, though only fortuitously. The ultimate test of the DAS will be, as it has always been, whether it can successfully predict who is prone to develop depression or experience a recurrence. The additional scales and the greater precision of the measurement structure renew its potential to be equal to this purpose.
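To make the IRTree idea concrete, the sketch below shows one common way of decomposing a seven-point response with a neutral midpoint into binary pseudo-items capturing midpoint use, direction, and extremity. It is a generic illustration of the approach rather than the specific model of Park and Wu (2019), and `das_items` is again a placeholder.

```r
# Map a 7-point Likert response (1-7, neutral midpoint = 4) onto three binary
# pseudo-items reflecting distinct response processes:
#   mid - was the neutral midpoint chosen?
#   dir - for non-neutral responses, agreement (5-7) vs. disagreement (1-3)
#   ext - for non-neutral responses, extreme (1 or 7) vs. moderate category
decompose_item <- function(x) {
  cbind(
    mid = as.integer(x == 4),
    dir = ifelse(x == 4, NA, as.integer(x >= 5)),
    ext = ifelse(x == 4, NA, as.integer(x %in% c(1, 7)))
  )
}

# Illustration for a single hypothetical item:
decompose_item(c(1, 4, 7, 5, 3))

# Applying the mapping to every column of an item matrix (e.g., `das_items`)
# yields pseudo-items that can be analyzed with standard IRT software to
# separate belief content from response style, such as the extreme responding
# discussed above.
```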

Acknowledgements

We are deeply grateful to Dr. Aaron T. Beck for actively supporting this study until shortly before his death in November of 2021.

Declarations

Conflict of Interest

Gary P. Brown, Jaime Delgadillo and Hudson Golino declare that they have no conflict of interest.
All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all patients for being included in the study.

Animal rights

No animal studies were carried out by the authors for this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendices

Supplementary Information

Below is the link to the electronic supplementary material.
References
Barber, J. P., & DeRubeis, R. J. (2001). Change in compensatory skills in cognitive therapy for depression. The Journal of Psychotherapy Practice and Research, 10(1), 8–13.
Beck, A. T., Rush, A. J., Shaw, B. F., & Emery, G. (1987). Cognitive therapy of depression (new ed.). Guilford Press.
Beck, A. T., Brown, G., Steer, R. A., & Weissman, A. N. (1991). Factor analysis of the Dysfunctional Attitude Scale in a clinical population. Psychological Assessment, 3(3), 478–483.
Beevers, C. G., Strong, D. R., Meyer, B., Pilkonis, P. A., & Miller, I. W. (2007). Efficiently assessing negative cognition in depression: An item response theory analysis of the Dysfunctional Attitude Scale. Psychological Assessment, 19(2), 199.
Brouwer, M. E., Williams, A. D., Forand, N. R., DeRubeis, R. J., & Bockting, C. L. H. (2019). Dysfunctional attitudes or extreme response style as predictors of depressive relapse and recurrence after mobile cognitive therapy for recurrent depression. Journal of Affective Disorders, 243, 48–54. https://doi.org/10.1016/j.jad.2018.09.002
Brown, G., & Beck, A. T. (1989). The role of imperatives in psychopathology: A reply to Ellis. Cognitive Therapy and Research, 13(4), 315–321.
Brown, G. P., Hammen, C. L., Craske, M. G., & Wickens, T. D. (1995). Dimensions of dysfunctional attitudes as vulnerabilities to depressive symptoms. Journal of Abnormal Psychology, 104(3), 431–435.
Golino, H., Shi, D., Christensen, A. P., Garrido, L. E., Nieto, M. D., Sadana, R., Thiyagarajan, J. A., & Martinez-Molina, A. (2020). Investigating the performance of exploratory graph analysis and traditional techniques to identify the number of latent factors: A simulation and tutorial. Psychological Methods, 25(3), 292–320. https://doi.org/10.1037/met0000255
Golino, H., Christensen, A., Moulder, R., & Garrido, L. E. (2021a). EGAnet: Exploratory Graph Analysis—a framework for estimating the number of dimensions in multivariate data using network psychometrics (1.0.0) [Computer software]. https://CRAN.R-project.org/package=EGAnet
Haig, B. D. (In press). Abductive research methods in psychological science. In L. Magnani (Ed.), Handbook of abductive cognition. Springer International Publishing.
Hancock, G. R., & Mueller, R. O. (2001). Rethinking construct reliability within latent variable systems. In R. Cudeck, S. D. Toit, & D. Soerbom (Eds.), Structural equation modeling: Present and future (pp. 195–216). Scientific Software International. https://ci.nii.ac.jp/naid/10025991283/
Jarrett, R. B., Minhajuddin, A., Borman, P. D., Dunlap, L., Segal, Z. V., Kidner, C. L., Friedman, E. S., & Thase, M. E. (2012). Cognitive reactivity, dysfunctional attitudes, and depressive relapse and recurrence in cognitive therapy responders. Behaviour Research and Therapy, 50(5), 280–286. https://doi.org/10.1016/j.brat.2012.01.008
McElroy, E., Casey, P., Adamson, G., Filippopoulos, P., & Shevlin, M. (2018). A comprehensive analysis of the factor structure of the Beck Depression Inventory-II in a sample of outpatients with adjustment disorder and depressive episode. Irish Journal of Psychological Medicine, 35(1), 53–61. https://doi.org/10.1017/ipm.2017.52
Otto, M. W., Teachman, B. A., Cohen, L. S., Soares, C. N., Vitonis, A. F., & Harlow, B. L. (2007). Dysfunctional attitudes and episodes of major depression: Predictive validity and temporal stability in never-depressed, depressed, and recovered women. Journal of Abnormal Psychology, 116(3), 475–483. https://doi.org/10.1037/0021-843X.116.3.475
Pasta, D. J., & Suhr, D. (2004). Creating scales from questionnaires: PROC VARCLUS vs. factor analysis. In Proceedings of the Twenty-Ninth Annual SAS Users Group International Conference, 18.
Rosseel, Y., Jorgensen, T. D., Rockwood, N., Oberski, D., Byrnes, J., Vanbrabant, L., Savalei, V., Merkle, E., Hallquist, M., Rhemtulla, M., Katsikatsou, M., Barendse, M., Scharf, F., & Du, H. (2022). lavaan: Latent variable analysis (0.6–10) [Computer software]. https://CRAN.R-project.org/package=lavaan
Segal, Z. V., Kennedy, S., Gemar, M., Hood, K., Pedersen, R., & Buis, T. (2006). Cognitive reactivity to sad mood provocation and the prediction of depressive relapse. Archives of General Psychiatry, 63(7), 749.
Stucky, B. D., & Edelen, M. O. (2015). Using hierarchical IRT models to create unidimensional measures from multidimensional data. In Handbook of item response theory modeling: Applications to typical performance assessment (pp. 183–206). Routledge/Taylor & Francis Group.
Taylor, S., Zvolensky, M. J., Cox, B. J., Deacon, B., Heimberg, R. G., Ledley, D. R., Abramowitz, J. S., Holaway, R. M., Sandin, B., Stewart, S. H., Coles, M., Eng, W., Daly, E. S., Arrindell, W. A., Bouvard, M., & Cardenas, S. J. (2007). Robust dimensions of anxiety sensitivity: Development and initial validation of the Anxiety Sensitivity Index-3. Psychological Assessment, 19(2), 176–188. https://doi.org/10.1037/1040-3590.19.2.176
van den Hout, M. (2014). Psychiatric symptoms as pathogens. Clinical Neuropsychiatry, 11(6), 153–159.
Weissman, A. N. (1979). The Dysfunctional Attitude Scale: A validation study. University of Pennsylvania.