Abstract
This paper presents a historical and conceptual analysis of a group of research strategies known as the Single-Case Methods (SCMs). First, we present an overview of the SCMs, their history, and their major proponents. We argue that the philosophical roots of SCMs can be found in the ideas of authors who recognized the importance of understanding both the generality and the individuality of psychological functioning. Second, we discuss the influence that the natural sciences’ attitude toward measurement and experimentation has had on SCMs. Although this influence can be traced back to the early days of experimental psychology, during which incipient forms of SCMs appeared, SCMs reached full development during the subsequent advent of Behavior Analysis (BA). Third, we show that despite the success of SCMs in BA and other (mainly applied) disciplines, these designs are currently not prominent in psychology. More importantly, they have been neglected as a possible alternative to one of the mainstream approaches in psychology, Null Hypothesis Significance Testing (NHST), despite serious controversies about the limitations of this prevailing method. Our thesis throughout this section is that SCMs should be considered an alternative to NHST because many of the recommendations for improving the use of significance testing (Wilkinson & the TFSI, 1999) are core characteristics of SCMs. The paper finishes with a discussion of several possible reasons why SCMs have been neglected.
Notes
This number should be interpreted carefully because not all disciplines have the common practice of reporting the details of experimental designs in the sections of the article that databases use for indexing purposes (e.g., keywords or abstract); similarly, the search terms that we utilized may be considered too narrow. However, a recent and more in-depth search by Smith (2012), conducted only in PsycINFO for the same period (2000–2010), provided a very similar number of records, 571 articles, despite the fact that the author used a wider range of terms and phrases (e.g., alternating treatment design, multiple baseline design, time-series design). Although a more thorough analysis is clearly required to provide stronger evidence for or against the prominence of SCMs, one that unfortunately goes beyond the scope of this paper, it seems reasonable to assume that a more precise estimate would still be considered small given the massive number of journals indexed in the databases that were searched (PsycINFO and SCOPUS).
References
Abelson, R. (1997). A retrospective on the significance test ban of 1999 (if there were no significance tests, they would be invented). In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 117–141). Mahwah, NJ: Erlbaum.
Allison, D. B., & Gorman, B. S. (1993). Calculating effect sizes for meta-analysis: The case of the single case. Behaviour Research and Therapy, 31(6), 621–631.
American Psychological Association Board of Scientific Affairs, Task Force on Statistical Inference. (2000). Narrow and shallow. American Psychologist, 55, 965–966. doi:10.1037/0003-066X.55.8.965.
Ator, N. A. (1999). Statistical inference in behavior analysis: Environmental determinants? Behavior Analyst, 22(2), 93–97.
Barlow, D. H., & Nock, M. K. (2009). Why can’t we be more idiographic in our research? Perspectives on Psychological Science, 4(1), 19–21. doi:10.1111/j.1745-6924.2009.01088.x.
Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single case experimental designs: strategies for studying behavioral change (3rd ed.). Boston: Pearson.
Baron, A. (1999). Statistical inference in behavior analysis: friend or foe? The Behavior Analyst, 22(2), 83–85.
Behi, R., & Nolan, M. (1996). Single-case experimental designs 1: Using idiographic research. British Journal of Nursing, 5(21), 1334–1337.
Bernard, C. (1927). An introduction to the study of experimental medicine. New York: Macmillan.
Blampied, N. M. (1999). A legacy neglected: Restating the case for single-case research in cognitive-behaviour therapy. Behaviour Change, 16(2), 89–104. doi:10.1375/bech.16.2.89.
Blampied, N. M. (2000). Single-case research designs: A neglected alternative. American Psychologist, 55(8), 960–960. doi:10.1037/0003-066X.55.8.960.
Blampied, N. M. (2001). The third way: Single-case research, training, and practice in clinical psychology. Australian Psychologist, 36(2), 157–163. doi:10.1080/00050060108259648.
Blampied, N. M. (2013). Single-case research designs and the scientist-practitioner ideal in applied psychology. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. methods and principles. Washington: American Psychological Association. doi:10.1037/13937-008.
Bourret, J., & Pietras, C. (2013). Visual analysis in single-case research. In G. J. Madden (Ed.), APA Handbook of Behavior Analysis (Methods and Principles, Vol. 1, pp. 199–217). Washington, DC: American Psychological Association.
Branch, M. N. (1999). Statistical inference in behavior analysis: Some things significance testing does and does not do. Behavior Analyst, 22(2), 87–92.
Branch, M. N., & Pennypacker, H. S. (2013). Generality and generalization of research findings. In G. J. Madden (Ed.), APA Handbook of Behavior Analysis (Methods and Principles, Vol. 1, pp. 151–175). Washington, DC: American Psychological Association.
Breakwell, G. M., Hammond, S., Fife-Schaw, C., & Smith, J. A. (2006). Research methods in psychology (3rd ed.). London: Sage.
Borckardt, J., Nash, M., Balliet, W., Galloway, S., & Madan, A. (2013). Time-series statistical analysis of single-case data. In G. Madden (Ed.), APA Handbook of Behavior Analysis (Methods and Principles, Vol. 1, pp. 251–266). Washington, D.C.: American Psychological Association.
Borckardt, J. J., Nash, M. R., Murphy, M. D., Moore, M., Shaw, D., & O’Neil, P. (2008). Clinical practice as natural laboratory for psychotherapy research: A guide to case-based time-series analysis. American Psychologist, 63(2), 77–95. doi:10.1037/0003-066X.63.2.77.
Brossart, D. F., Parker, R. I., & Castillo, L. G. (2011). Robust regression for single-case data analysis: how can it help? Behavior Research Methods, 43(3), 710–719. doi:10.3758/s13428-011-0079-7.
Campbell, J. M. (2004). Statistical comparison of four effect sizes for single-subject designs. Behavior Modification, 28(2), 234–246. doi:10.1177/0145445503259264.
Catania, A. C. (2008). The journal of the experimental analysis of behavior at zero, fifty, and one hundred. Journal of the Experimental Analysis of Behavior, 89(1), 111–118. doi:10.1901/jeab.2008.89-111.
Catania, C. (2013). A natural science of behavior. Review of General Psychology, 17(2), 133–139. doi:10.1037/a0033026.
Catania, A. C., & Laties, V. G. (1999). Pavlov and Skinner: two lives in science (an introduction to B. F. Skinner’s “some responses to the stimulus ‘Pavlov’”). Journal of the Experimental Analysis of Behavior, 72(3), 455–461. doi:10.1901/jeab.1999.72-455.
Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66(1), 7–18. doi:10.1037//0022-006X.66.1.7.
Chelune, G. J., Naugle, R. I., & Lüders, H. (1993). Individual change after epilepsy surgery: practice effects and base-rate information. Neuropsychology, 7(1), 41–52.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. doi:10.1037//0003-066X.49.12.997.
Cohen, L., Feinstein, A., Masuda, A., & Vowles, K. (2014). Single-case research design in pediatric psychology: considerations regarding data analysis. Journal of Pediatric Psychology, 39(2), 124–137. doi:10.1093/jpepsy/jst065.
Crosbie, J. (1999). Statistical inference in behavior analysis: Useful friend. Behavior Analyst, 22(2), 105–108.
Cumming, G., & Fidler, F. (2009). Confidence intervals better answers to better questions. Journal of Psychology, 217(1), 15–26. doi:10.1027/0044-3409.217.1.15.
Davison, M. (1999). Statistical inference in behavior analysis: Having my cake and eating it? Behavior Analyst, 22(2), 99–103.
De Mey, H. R. A. (2003). Two psychologies: cognitive versus contingency-oriented. Theory & Psychology, 13(5), 695–709.
Dermer, M. L., & Hoch, T. A. (1999). Improving descriptions of single-subject experiments in research texts written for undergraduates. The Psychological Record, 49, 49–66.
Duryea, E., Graner, S. P., & Becker, J. (2009). Methodological issues related to the use of p < 0.05 in health behavior research. American Journal of Health Education, 40(2), 120–125.
Ferguson, C. J. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40(5), 532–538. doi:10.1037/a0015808.
Fisher, R. A. (1925/1950). Statistical methods for research workers (11th ed.). Edinburgh: Oliver & Boyd.
Frerichs, R. J., & Tuokko, H. (2005). A comparison of methods for measuring cognitive change in older adults. Archives of Clinical Neuropsychology, 20(3), 321–333. doi:10.1016/j.acn.2004.08.002.
Fritz, A., Scherndl, T., & Kühberger, A. (2013). A comprehensive review of reporting practices in psychological journals: Are effect sizes really enough? Theory and Psychology, 23(1), 98–122. doi:10.1177/0959354312436870.
Goddard, M. J. (2012). On certain similarities between mainstream psychology and the writings of B. F. Skinner. The Psychological Record, 62, 563–576.
Gravetter, F. J., & Forzano, L. B. (2009). Research methods for the behavioral sciences (3rd ed.). Belmont, CA: Wadsworth, Cengage Learning.
Graziano, A., Raulin, M., & Cramer, K. (2009). Research methods: a process of inquiry. New Jersey: Pearson.
Greenland, S., & Poole, C. (2013). Living with p values: Resurrecting a Bayesian perspective on frequentist statistics. Epidemiology, 24(1), 62–68. doi:10.1097/EDE.0b013e3182785741.
Hammond, G. (1996). The objections to null hypothesis testing as a means of analyzing psychological data. Australian Journal of Psychology, 48, 104–106. doi:10.1080/00049539608259513.
Harris, R. J. (1997). Reforming significance testing via three-valued logic. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 145–174). Mahwah, NJ: Erlbaum.
Harlow, L. (1997). Significance testing introduction and overview. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 1–21). Mahwah, NJ: Erlbaum.
Hayes, S. C., & Brownstein, A. J. (1986). Mentalism, behavior-behavior relations, and a behavior-analytic view of the purposes of science. The Behavior Analyst, 9(2), 175–190.
Hayes, S. C., Blackledge, J. T., & Barnes-Holmes, D. (2001). Language and cognition: constructing an alternative approach within the behavioral tradition. In S. C. Hayes, D. Barnes-Holmes, & B. Roche (Eds.), Relational frame theory: A post-Skinnerian account of human language and cognition (pp. 3–20). New York: Kluwer Academic Publishers.
Heron, W. T., & Skinner, B. F. (1939). An apparatus for the study of animal behavior. Psychological Record, 3, 166–176.
Hineline, P. N., & Laties, V. G. (1987). Anniversaries in behavior analysis. Journal of the Experimental Analysis of Behavior, 48, 439–514. doi:10.1901/jeab.1987.48-439.
Hunter, J. E. (1997). Needed: A ban on the significance test. Psychological Science, 8(1), 3–7.
Hurtado-Parrado, C. (2006). El conductismo y algunas implicaciones de lo que significa ser conductista hoy. Diversitas, 2(2), 321–328.
Hurtado-Parrado, H. C. (2009). A Non-Cognitive Alternative for the Study of Cognition: An Interbehavioral Proposal. In T. Teo, P. Stenner, & A. Rutherford (Eds.), Varieties of Theoretical Psychology - ISTP 2007 - International Philosophical and Practical Concerns (pp. 340–348). Toronto: Captus University Publication.
Iversen, I. (2013). Single-case research methods: an overview. In G. J. Madden (Ed.), APA Handbook of Behavior Analysis (Methods and Principles, Vol. 1, pp. 3–32). Washington, DC: American Psychological Association.
Johnston, J. M., & Pennypacker, H. S. (1993a). Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum.
Johnston, J. M., & Pennypacker, H. S. (1993b). Readings for the strategies and tactics of behavioral research. Hillsdale, NJ: Erlbaum.
Johnston, J. M., & Pennypacker, H. S. (2009). Strategies and tactics of behavioral research (3rd ed.). New York: Routledge.
Kratochwill, T. R., & Levin, J. R. (Eds.). (1992). Single-case research design and analysis: New directions for psychology and education. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lambdin, C. (2012). Significance tests as sorcery: Science is empirical-significance tests are not. Theory and Psychology, 22(1), 67–90. doi:10.1177/0959354311429854.
Lattal, K. (2013). The five pillars of the experimental analysis of behavior. In G. J. Madden (Ed.), APA Handbook of Behavior Analysis (Methods and Principles, Vol. 1, pp. 33–64). Washington, DC: American Psychological Association.
Machado, A., Lourenço, O., & Silva, F. J. (2000). Facts, concepts, and theories: The shape of psychology’s epistemic triangle. Behavior and Philosophy, 40, 1–40.
Machado, A., & Silva, F. J. (2007). Toward a richer view of the scientific method. The role of conceptual analysis. American Psychologist, 62(7), 671–681. doi:10.1037/0003-066X.62.7.671.
Maggin, D. M., & Chafouleas, S. M. (2012). Introduction to the special series: issues and advances of synthesizing single-case research. Remedial and Special Education, 34(1), 3–8. doi:10.1177/0741932512466269.
Malone, J. C. J., & Cruchon, N. M. (2001). Radical behaviorism and the rest of psychology: a review/précis of Skinner’s “About Behaviorism.”. Behavior and Philosophy, 29, 31–57.
Manolov, R., Solanas, A., & Leiva, D. (2010). Comparing “visual” effect size indices for single-case designs. Methodology, 6(2), 49–58. doi:10.1027/1614-2241/a000006.
Martella, R. C., Nelson, R., & Marchand-Martella, N. E. (1999). Research methods: learning to become a critical research consumer. Boston: Allyn & Bacon.
Martin, G. L., Thompson, K., & Regehr, K. (2004). Studies using single-subject designs in sport psychology: 30 years of research. The Behavior Analyst, 27(2), 263–280.
McGrane, J. (2010). Are psychological quantities and measurement relevant in the 21st century? Frontiers in Psychology, 1, 22. doi:10.3389/fpsyg.2010.00022.
McSweeny, A. J., & Naugle, R. I. (1993). “T scores for change”: An illustration of a regression approach to depicting change in clinical neuropsychology. The Clinical Neuropsychologist, 7(3), 300–312.
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. doi:10.1037//0022-006X.46.4.806.
Meehl, P. E. (1997). The problem is epistemology, not statistics: Replace significance tests by confidence intervals and quantify accuracy of risky numerical predictions. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if There Were no Significance Tests? (pp. 393–425). Mahwah, NJ: Erlbaum.
Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88, 355–383.
Michell, J. (2000). Normal science, pathological science and psychometrics. Theory & Psychology, 10(5), 639–667. doi:10.1177/0959354300105004.
Moore, J. (1981). On mentalism, methodological behaviorism, and radical behaviorism. Behaviorism, 9(1), 55–77.
Moore, J. (1990). A special section commemorating the 30th anniversary of tactics of scientific research: evaluating experimental data in psychology by Murray Sidman. The Behavior Analyst, 13(2), 159–161.
Moore, J. (2001). On distinguishing methodological from radical behaviorism. European Journal of Behavior Analysis, 2(2), 221–244.
Moore, J. (2007). Conceptual foundations of radical behaviorism. New York: Sloan publishing.
Moore, J. (2009). Why the radical behaviorist conception of private events is interesting, relevant, and important. Behavior and Philosophy, 37, 21–37.
Moore, J. (2011). Behaviorism. The Psychological Record, 61, 449–464.
Moore, J. (2013). Methodological behaviorism from the standpoint of a radical behaviorist. The Behavior Analyst, 36(2), 197–208.
Morgan, D. L., & Morgan, R. K. (2009). Single-case research methods for the behavioral and health sciences. Los Angeles: Sage.
Morris, E. K. (1992). The aim, progress, and evolution of behavior analysis. The Behavior Analyst, 15(1), 3–29.
Morris, E. K. (1998). Tendencias actuales en el análisis conceptual del comportamiento. In R. Ardila, W. López-López, A. Pérez, R. Quiñones & F. Reyes (Eds.), Manual de análisis experimental del comportamiento (pp. 19–56). Madrid: Biblioteca Nueva.
Morris, E. K., Todd, J. T., Midgley, B. D., Schneider, S. M., & Johnson, L. M. (1990). The history of behavior analysis: some historiography and a bibliography. The Behavior Analyst, 13(2), 131–158.
Mulaik, S. A., Raju, N. S., & Harshman, R. A. (1997). There is a time and a place for significance testing. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if There Were no Significance Tests? (pp. 66–115). Mahwah, NJ: Erlbaum.
Nestor, P., & Schutt, R. K. (2012). Research methods in psychology: investigating human behavior. Los Angeles: SAGE Publications.
Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5(2), 241–301. doi:10.1037//1082-989X.5.2.241.
Nourbakhsh, M. R., & Ottenbacher, K. J. (1994). The statistical analysis of single-subject data: a comparative examination. Physical Therapy, 74(8), 768–776.
O’Donohue, W. T., Callaghan, G. M., & Ruckstuhl, L. E. (1998). Epistemological barriers to radical behaviorism. The Behavior Analyst, 21(2), 307–320.
O’Donohue, W. T., & Houts, A. C. (1985). The two disciplines of behavior therapy: Research methods and mediating variables. The Psychological Record, 35, 155–163.
O’Donohue, W. T., & Kitchener, R. (1999). Handbook of behaviorism. New York: Academic.
Ottenbacher, K. J. (1990). Clinically relevant designs for rehabilitation research: the idiographic model. American Journal of Physical Medicine Rehabilitation, 69(6), 286–292. doi:10.1097/00002060-199012000-00002.
Osborne, J. W. (2010). Challenges for quantitative psychology and measurement in the 21st century. Frontiers in Psychology, 1(1). doi:10.3389/fpsyg.2010.00001
Parker, R. I., & Brossart, D. E. (2003). Evaluating single-case research data: a comparison of seven statistical methods. Behavior Therapy, 34, 189–211. doi:10.1016/S0005-7894(03)80013-8.
Parker, R. I., & Hagan-Burke, S. (2007). Useful effect size interpretations for single case research. Behavior Therapy, 38(1), 95–105. doi:10.1016/j.beth.2006.05.002.
Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35(4), 303–322. doi:10.1177/0145445511399147.
Perner, P. (2008). Case-based reasoning and the statistical challenges. Quality and Reliability Engineering International, 24, 705–720. doi:10.1002/qre.
Perone, M. (1999). Statistical inference in behavior analysis: Experimental control is better. The Behavior Analyst, 22(2), 109–116.
Perone, M., & Hursh, D. E. (2013). Single-case experimental designs. In G. J. Madden (Ed.), APA Handbook of Behavior Analysis: Vol. 1. Methods and Principles (pp. 107–126). Washington, DC: American Psychological Association.
Photos, V., Michel, B. D., & Nock, M. K. (2008). Single-case research. In M. Hersen & A. M. Gross (Eds.), Handbook of Clinical Psychology (Vol. 1, pp. 224–245). NJ: John Wiley & Sons.
Plazas, E. (2006). B. F. Skinner: la búsqueda de orden en la conducta voluntaria. Universitas Psychologica, 5(2), 371–383.
Pruzek, R. M. (1997). An introduction to Bayesian inference and its applications. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if There Were no Significance Tests? (pp. 287–318). Mahwah, NJ: Erlbaum.
Rachlin, H. (1992). Teleological behaviorism. The American Psychologist, 47(11), 1371–1382.
Rachlin, H. (1994). Behavior and mind: The roots of modern psychology. New York: Oxford University Press.
Rachlin, H. (2013). About teleological behaviorism. The Behavior Analyst, 36(2), 209–222.
Ribes-Iñesta, E. (1997). Causality and contingency: some conceptual considerations. The Psychological Record, 47, 619–635.
Ribes-Iñesta, E. (2001). Instructions, rules, and abstraction: a misconstrued relation. Behavior and Philosophy, 28(1), 41–55.
Ribes-Iñesta, E. (2003). What is defined in operational definitions? The case of operant psychology. Behavior and Philosophy, 31, 111–126.
Reichardt, C. S., & Golub, H. F. (1997). When confidence intervals should be used instead of statistical significance tests, and vice versa. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if There Were no Significance Tests? (pp. 259–286). Mahwah, NJ: Erlbaum.
Robins, R. W., Gosling, S. D., & Craik, K. (1999). An empirical analysis of trends in psychology. American Psychologist, 54(2), 117–128. doi:10.1037/0003-066X.54.2.117.
Rozeboom, W. W. (1997). Good science is abductive, not hypothetico-deductive. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 335–391). Mahwah, NJ: Erlbaum.
Rozin, P. (2007). Exploring the landscape of modern academic psychology: finding and filling the holes. The American Psychologist, 62(8), 754–766. doi:10.1037/0003-066X.62.8.754.
Shadish, W. R. (2014). Statistical analyses of single-case designs: the shape of things to come. Current Directions in Psychological Science, 23(2), 139–146. doi:10.1177/0963721414524773.
Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90–100. doi:10.1037/a0015108.
Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if There Were no Significance Tests? (pp. 37–64). Mahwah, NJ: Erlbaum.
Schweigert, W. A. (2012). Research methods in psychology: a handbook. Long Grove: Waveland Press.
Sharpe, D. (2013). Why the resistance to statistical innovations? Bridging the communication gap. Psychological Methods, 18(4), 572–582. doi:10.1037/a0034177.
Shull, R. L. (1999). Statistical inference in behavior analysis: Discussant’s remarks. The Behavior Analyst, 22(2), 117–121.
Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books.
Silverstein, A. (1988). An Aristotelian resolution of the idiographic versus nomothetic tension. American Psychologist, 43(6), 425–430. doi:10.1037//0003-066X.43.6.425.
Skinner, B. F. (1938). The behavior of organisms. New York: Appleton.
Skinner, B. F. (1945). The operational analysis of psychological terms. Psychological Review, 52(5), 270–277. doi:10.1037/h0062535.
Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11(5), 221–233. doi:10.1037/h0047662.
Skinner, B. F. (1967). B. F. Skinner. In E. G. Boring & G. Lindzey (Eds.), A history of psychology in autobiography (Vol. 5, pp. 385–413). New York: Appleton.
Skinner, B. F. (1979). The shaping of a behaviorist. New York: Knopf.
Skinner, B. F. (1981). Selection by consequences. Science, 213(4507), 501–504.
Skinner, B. F. (1984). An operant analysis of problem solving. The Behavioral and Brain Sciences, 7, 583–613. doi:10.1017/S0140525X00027412.
Smith, J. (2012). Single-case experimental designs: a systematic review of published research and current standards. Psychological Methods, 17(4), 510–550. doi:10.1037/a0029312.
Smith, L. D., Best, L. A., Cylke, V. A., & Stubbs, D. A. (2000). Psychology without p values: Data analysis at the turn of the 19th century. American Psychologist, 55(2), 260–263. doi:10.1037/0003-066X.55.2.260
Solanas, A., Manolov, R., & Onghena, P. (2010). Estimating slope and level change in N = 1 designs. Behavior Modification, 34(3), 195–218. doi:10.1177/0145445510363306.
Stang, A., Poole, C., & Kuss, O. (2010). The ongoing tyranny of statistical significance testing in biomedical research. European Journal of Epidemiology, 25(4), 225–230. doi:10.1007/s10654-010-9440-x.
Stangor, C. (2011). Research methods for the behavioral sciences. Belmont: Wadsworth Cengage Learning.
Thompson, B. (2002). What future quantitative social science research could look like: Confidence intervals for effect sizes. Educational Researcher, 31(3), 24–31. doi:10.1002/pits.20234.
Thompson, T. (1984). The examining magistrate for nature: A retrospective review of Claude Bernard’s an introduction to the study of experimental medicine. Journal of the Experimental Analysis of Behavior, 41(2), 211–216. doi:10.1901/jeab.1984.41-211.
Toomela, A. (2007a). Culture of science: Strange history of the methodological thinking in psychology. Integrative Psychological and Behavioral Science, 41(1), 6–20. doi:10.1007/s12124-007-9004-0.
Toomela, A. (2007b). History of methodology in psychology: Starting point, not the goal. Integrative Psychological and Behavioral Science, 41(1), 75–82. doi:10.1007/s12124-007-9005-z.
Toomela, A. (2008). Variables in psychology: a critique of quantitative psychology. Integrative Psychological & Behavioral Science, 42(3), 245–265. doi:10.1007/s12124-008-9059-6.
Toomela, A. (2009a). The methodology of idiographic science: the limits of single-case studies and the role of typology. In S. Salvatore, J. Valsiner, J. Travers, & A. Gennaro (Eds.), Yearbook of idiographic science (Vol. 2, pp. 13–33). Roma: Firera & Liuzzo.
Toomela, A. (2009b). What is the psyche? The answer depends on the particular epistemology adopted by the scholar. In S. Salvatore, J. Valsiner, J. Travers, & A. Gennaro (Eds.), Yearbook of idiographic science (Vol. 2, pp. 81–104). Roma: Firera & Liuzzo.
Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Frontiers in Psychology, 1, 1–14. doi:10.3389/fpsyg.2010.00029.
Toomela, A. (2011). Travel into a fairy land: a critique of modern qualitative and mixed methods psychologies. Integrative Psychological & Behavioral Science, 45(1), 21–47. doi:10.1007/s12124-010-9152-5.
Toomela, A. (2012). Guesses on the future of cultural psychology: past, present, and past. In J. Valsiner (Ed.), Oxford handbook of culture and psychology (pp. 998–1033). Oxford University Press. doi:10.1093/oxfordhb/9780195396430.013.0049
Toomela, A. (2014). Mainstream psychology. In T. Teo (Ed.), Encyclopedia of critical psychology. New York, NY: Springer. doi:10.1007/978-1-4614-5583-7.
Tryon, W. W. (2001). Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests. Psychological Methods, 6(3), 371–386. doi:10.1037/1082-989X.6.4.371.
Valsiner, J. (1986). Where is the individual subject in scientific psychology? In J. Valsiner (Ed.), The individual subject and scientific psychology (pp. 1–16). New York, NY: Plenum.
Wachtel, P. L. (1980). Investigation and its discontents: Some constraints on progress in psychological research. American Psychologist, 35(5), 399–408. doi:10.1037//0003-066X.35.5.399.
Wagenmakers, E., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. doi:10.1177/1745691612463078.
Wakefield, J. C. (2007). Why psychology needs conceptual analysts: Wachtel’s “discontents” revisited. Applied and Preventive Psychology, 12(1), 39–43. doi:10.1016/j.appsy.2007.07.014.
Wilkinson, L., & The Task Force on Statistical Inference (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594–604. doi:10.1037/0003-066X.54.8.594.
Windelband, W. (1894/1998). History and natural science. Theory & Psychology, 8(1), 5–22.
Ximenes, V. M., Manolov, R., Solanas, A., & Quera, V. (2009). Factors affecting visual inference in single-case designs. The Spanish Journal of Psychology, 12(2), 823–832.
Zuriff, G. E. (1985). Behaviorism: A conceptual reconstruction. New York: Columbia University Press.
Acknowledgments
The authors thank Aaro Toomela, João Antonio Monteiro, and the anonymous reviewers for their valuable suggestions and thoughtful comments.
Additional information
An earlier version of this paper was prepared while the first author was sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the postgraduate scholarships program.
Part of this paper was presented during the Sixth International Conference of the Association for Behavior Analysis (November, 2011, Granada, Spain).
Hurtado-Parrado, C., López-López, W. Single-Case Research Methods: History and Suitability for a Psychological Science in Need of Alternatives. Integr. psych. behav. 49, 323–349 (2015). https://doi.org/10.1007/s12124-014-9290-2