This simulation study was designed to provide data on the performance of Oort’s procedure (OP) for response shift (RS) detection at item level, in terms of type I error, power, and overall performance, according to sample characteristics. A specific objective was to assess the impact of using different information criteria (IC), as alternatives to the likelihood-ratio test (LRT), for the global assessment of RS occurrence.
Responses to five binary items at two times of measurement were simulated. Thirty-six combinations of sample characteristics [sample size (n), “true change,” correlation between the two latent variables, and presence/absence of uniform recalibration RS (ur)] were considered. One thousand datasets were generated for each combination. RS detection was performed on each dataset following OP. Type I error and power of the global assessment of RS occurrence, as well as the overall performance of the OP, were assessed.
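The data-generating design described above can be sketched as follows. This is an illustrative reconstruction only: the probit response model, the loadings of 0.8, the evenly spaced thresholds, and the size of the recalibration shift on item 1 are assumed values for the sketch, not parameters taken from the study (which was implemented in R).

```python
import numpy as np
from math import erf, sqrt

def simulate_dataset(n=200, rho=0.4, true_change=0.2, ur_shift=0.3, seed=None):
    """Simulate binary responses to five items at two measurement times.

    Hypothetical sketch: correlated latent variables at times 1 and 2
    (correlation rho), a latent mean difference playing the role of
    "true change", and an optional uniform recalibration shift (ur_shift)
    applied to the threshold of item 1 at time 2.
    """
    rng = np.random.default_rng(seed)
    # Bivariate normal latent variables; mean shift = "true change"
    eta = rng.multivariate_normal([0.0, true_change],
                                  [[1.0, rho], [rho, 1.0]], size=n)
    loadings = np.full(5, 0.8)               # assumed common loadings
    thresholds = np.linspace(-0.5, 0.5, 5)   # assumed item thresholds
    # Standard normal CDF via the error function
    phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

    def draw_items(latent, shift):
        # Probit model: P(X_j = 1) = Phi(lambda_j * eta - tau_j)
        tau = thresholds.copy()
        tau[0] += shift  # uniform recalibration RS on item 1
        p = phi(loadings * latent[:, None] - tau)
        return (rng.uniform(size=(n, 5)) < p).astype(int)

    return draw_items(eta[:, 0], 0.0), draw_items(eta[:, 1], ur_shift)
```

Repeating such a call 1,000 times per combination of (n, true change, rho, ur) would reproduce the structure of the simulation design, with ur_shift = 0 for the no-RS conditions.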
The estimated type I error was close to 5 % for the LRT and lower than 5 % for the IC. The estimated power was higher for the LRT than for the AIC, which itself showed the highest power among the IC. For the LRT, the estimated power was below 80 % for n = 100 and for the combination of n = 200 and ur = 1 item; for all other combinations of sample characteristics, it was above 90 %.
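The type I error and power figures above are empirical rejection (or detection) proportions over the simulated replications. A minimal sketch of how such proportions are computed, with hypothetical helper names not taken from the study:

```python
import numpy as np

def rejection_rate(p_values, alpha=0.05):
    """Proportion of replications in which the global LRT rejects.

    Applied to datasets simulated without RS this estimates the type I
    error; applied to datasets simulated with RS it estimates power.
    """
    return float(np.mean(np.asarray(p_values) < alpha))

def ic_detection_rate(ic_restricted, ic_free):
    """Proportion of replications in which an information criterion
    (e.g. AIC or BIC) prefers the less restricted model, i.e. flags RS."""
    return float(np.mean(np.asarray(ic_free) < np.asarray(ic_restricted)))
```

For example, feeding `rejection_rate` the 1,000 LRT p-values from a no-RS condition should return a value near 0.05 if the test keeps its nominal level.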
The LRT yielded higher estimated power than the IC while maintaining an appropriate type I error. These results support Oort’s proposal to use the LRT as the criterion for the global assessment of RS occurrence.
The SAMSI Psychometric Program Longitudinal Assessment of Patient-Reported Outcomes Working Group, Swartz, R. J., Schwartz, C., Basch, E., Cai, L., Fairclough, D. L., & Rapkin, B. (2011). The king’s foot of patient-reported outcomes: Current practices and new developments for the measurement of change. Quality of Life Research, 20(8), 1159–1167.
Raykov, T. (2006). A first course in structural equation modeling (2nd ed.). Mahwah: Lawrence Erlbaum Associates, Publishers.
King-Kallimanis, B. L., Oort, F. J., Nolte, S., Schwartz, C. E., & Sprangers, M. A. G. (2011). Using structural equation modeling to detect response shift in performance and health-related quality of life scores of multiple sclerosis patients. Quality of Life Research, 20(10), 1527–1540.
Barendse, M. T., Oort, F. J., Werner, C. S., Ligtvoet, R., & Schermelleh-Engel, K. (2012). Measurement bias detection through factor analysis. Structural Equation Modeling: A Multidisciplinary Journal, 19(4), 561–579.
Barendse, M. T., Oort, F. J., & Garst, G. J. A. (2010). Using restricted factor analysis with latent moderated structures to detect uniform and nonuniform measurement bias: A simulation study. AStA Advances in Statistical Analysis, 94(2), 117–127.
Woods, C. M., & Grimm, K. J. (2011). Testing for nonuniform differential item functioning with multiple indicator multiple cause models. Applied Psychological Measurement, 35(5), 339–361.
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464.
Sclove, S. L. (1987). Application of model-selection criteria to some problems in multivariate analysis. Psychometrika, 52, 333–343.
Fischer, G., & Molenaar, I. (1995). Rasch models: Foundations, recent developments, and applications. New York: Springer.
Sébille, V., Hardouin, J.-B., Le Neel, T., Kubis, G., Boyer, F., Guillemin, F., & Falissard, B. (2010). Methodological issues regarding power of classical test theory and IRT-based approaches for the comparison of patient-reported outcome measures: A simulation study. BMC Medical Research Methodology, 10, 24.
Satorra, A., & Bentler, P. (1994). Corrections to test statistics and standard errors in covariance structure analysis. In A. von Eye & C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks: Sage.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.
R Development Core Team. (2013). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness of fit measures. Methods of Psychological Research Online, 8(2), 23–74.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press.
Bryant, F. B., & Satorra, A. (2012). Principles and practice of scaled difference chi-square testing. Structural Equation Modeling: A Multidisciplinary Journal, 19(3), 372–398.
Lehmann, E. L. (2008). Testing statistical hypotheses (3rd ed.). New York: Springer.
Hu, L., & Bentler, P. (1995). Evaluating model fit. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 76–99). London: Sage.
Finney, S. J., & DiStefano, C. (2013). Non-normal and categorical data in structural equation modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (pp. 439–492). Charlotte: IAP, Information Age Publishing.
Beaujean, A. (2014). Models with dichotomous indicator variables. In Latent variable modeling using R: A step-by-step guide (pp. 93–113). New York: Taylor and Francis.
Overall performance of Oort’s procedure for response shift detection at item level: a pilot simulation study
Springer International Publishing