Beyond the typical design factors that impact a study’s power (e.g., participant sample size), planning longitudinal research involves additional considerations such as assessment frequency and participant retention. Because this type of research relies so strongly on individual commitment, investigators must be judicious in determining how much information is necessary to study the phenomena in question; collecting too little information will render the data less useful, but requiring excessive participant investment will likely lower participation rates. We conducted a simulation study to empirically examine statistical power and the trade-off between assessment quality (as a function of instrument length) and assessment frequency across a number of sample sizes with intermittently missing data or attrition. Results indicated that reductions in power resulting from shorter, less reliable measurements can be at least somewhat offset by increasing assessment frequency. Because study planning involves a number of factors competing for finite resources, equations were derived to find the balance points between pairs of design characteristics affecting statistical power. These equations allow researchers to calculate the amount that a particular design factor (e.g., assessment frequency) would need to increase to result in the same improvement in power as increasing an alternative factor (e.g., measurement reliability). Applications for the equations are discussed.
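The reliability-versus-instrument-length relationship underlying this trade-off is conventionally described by the Spearman–Brown prophecy formula. As an illustrative sketch only (not the authors' simulation code), the formula can be computed as:

```python
def spearman_brown(rho: float, k: float) -> float:
    """Predicted reliability of an instrument whose length is
    multiplied by a factor k, given current reliability rho
    (Spearman-Brown prophecy formula)."""
    return (k * rho) / (1 + (k - 1) * rho)

# Halving a scale with reliability .80 (k = 0.5) lowers predicted
# reliability; doubling it (k = 2) raises it.
print(round(spearman_brown(0.80, 0.5), 3))  # 0.667
print(round(spearman_brown(0.80, 2.0), 3))  # 0.889
```

This makes concrete why shortening an assessment reduces reliability, and hence power, which the study's results suggest can be partly recovered by assessing more frequently.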
Quality Vs. Quantity: Assessing Behavior Change over Time
Andrew L. Moskowitz, Jennifer L. Krull, K. Alex Trickey, Bruce F. Chorpita
Springer US
Journal of Psychopathology and Behavioral Assessment
Print ISSN: 0882-2689
Electronic ISSN: 1573-3505