Perspectives on Behavior Science, 45(1), March 2022

Advancing the Application and Use of Single-Case Research Designs: Reflections on Articles from the Special Issue

Robert H. Horner

1 University of Oregon, Eugene, OR 97401 USA

John Ferron

2 University of South Florida, Tampa, FL USA

This special issue of Perspectives on Behavior Science is a productive contribution to current advances in the use and documentation of single-case research designs. We focus in this article on major themes emphasized by the articles in this issue and suggest directions for improving professional standards focused on the design, analysis, and dissemination of single-case research.

The application of single-case research methods is entering a new phase of scientific relevance. Researchers in an increasing array of disciplines are finding single-case methods useful for the questions they are asking and the clinical needs in their fields (Kratochwill et al., 2010; Maggin et al., 2017; Maggin & Odom, 2014; Riley-Tillman, Burns, & Kilgus, 2020). With this special issue the editors have challenged authors to articulate the advances in research design and data analysis that will be needed if single-case methods are to meet these emerging expectations. Each recruited article delves into a specific avenue of concern for advancing the use of single-case methods. The purpose of this discussion is to integrate themes identified by the authors and offer perspective for advancing the application and use of single-case methods. We provide initial context and then focus on the unifying messages the authors provide for both interpreting single-case research results and designing studies that will be of greatest benefit.

A special issue of Perspectives on Behavior Science focused on methodological advances needed for single-case research is a timely contribution to the field. There are growing efforts both to articulate professional standards for single-case methods (Kratochwill et al., 2010; Tate et al., 2016) and to advance new procedures for analysis and interpretation of single-case studies (Manolov & Moeyaert, 2017; Pustejovsky et al., 2014; Riley-Tillman et al., 2020). Foremost among these trends is the goal of including single-case methods in the identification of empirically validated clinical practices (Slocum et al., 2014). The emerging message is that federal, state, and local agencies will join with professional associations in advancing investment in practices, often labeled “evidence-based practices,” that have empirically documented effectiveness, efficiency, and safety. This movement depends on each discipline defining credible protocols for identifying empirically validated procedures and, in the present context, on the use of single-case methods to achieve this goal.

This special issue comes to the field following the recent publication of the What Works Clearinghouse 4.1 standards for single-case design (Institute of Education Sciences, 2020). At this time, the repeated demonstrations that single-case methods are useful, valid, and increasingly well-defined hold great promise. For single-case methods to achieve the impact they promise, however, there remains a need for (1) professional acceptance of research design standards, (2) agreement on data analysis standards (both for interpreting individual studies and for larger meta-analyses), and (3) incorporation of these standards in journal review protocols, grant review protocols, and university training programs targeting research design. This special issue offers a useful foundation for advancing the field in each of these areas.

A Role for Experimental Single-Case Designs

One important theme across the articles is recognition that the scientific community is unifying in its acceptance that the core features of experimental single-case designs allow credible documentation of functional relations (experimental control). This is a large message, and one that needs to be more overtly noted across disciplines where single-case methods are less often used. Of special value is the distinction between rigorous single-case experimental designs and clinical case studies or formal descriptive time-series analyses. The iterative collection of data across time with periodic experimenter manipulation of treatments is useful both as a clinical tool and, when this approach is linked with designs that control for threats to internal validity, as a contribution to the advancement of science.

Combining Visual Analysis and Statistical Analysis

Another major message from the recruited articles is that interpretation of single-case research designs will benefit from (even require) incorporation of statistical tools. Single-case researchers have used visual analysis as the initial step in examining evidence (Parsonson & Baer, 1978; Ledford & Gast, 2018; Kazdin, 2021; Riley-Tillman et al., 2020). Rigorous use of visual analysis involves (1) examining the data from each phase of the study to define within-phase patterns, (2) comparing data patterns of adjacent phases, (3) comparing data patterns of similar phases, (4) examining the full set of data within a design to assess whether the design has been effective at controlling threats to internal validity and whether there are at least three demonstrations of effect (each at a different point in time), and (5) determining whether there are instances of noneffect or contraindicated effect.

When assessing a single phase (or similar phases) of a study, the researcher considers (1) number of data points, (2) level (mean) score, (3) variability of scores, and (4) within phase trend(s). When comparing adjacent phases, the researcher examines if there is a change in the pattern of data following manipulation of the independent variable. Phase comparisons are done by simultaneously assessing (1) change in level, (2) change in variability, (3) change in trend, (4) immediacy of any change in pattern, (5) degree of overlap in data between the two phases, and (6) similarity in the patterns of data from similar phases (e.g., two baseline phases).
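The within-phase summaries and adjacent-phase comparisons described above are straightforward to compute. The following Python sketch is illustrative only (it is not drawn from any article in this issue, and the data are invented); it summarizes a phase's number of points, level, variability, and trend, and compares two adjacent phases on change in level and degree of overlap:

```python
import statistics

def phase_summary(scores):
    """Summarize one phase: number of points, level (mean),
    variability (SD), and an OLS slope as a simple trend estimate."""
    n = len(scores)
    level = statistics.mean(scores)
    sd = statistics.stdev(scores) if n > 1 else 0.0
    xs = list(range(n))  # session numbers 0, 1, 2, ...
    x_mean = statistics.mean(xs)
    sxx = sum((x - x_mean) ** 2 for x in xs)
    sxy = sum((x - x_mean) * (y - level) for x, y in zip(xs, scores))
    trend = sxy / sxx if n > 1 else 0.0
    return {"n": n, "level": level, "sd": sd, "trend": trend}

def compare_phases(phase_a, phase_b):
    """Compare adjacent phases: change in level, and the proportion of
    second-phase points that overlap the first phase's range (here,
    points at or below the first phase's maximum, assuming the
    intervention is intended to increase the behavior)."""
    change = phase_summary(phase_b)["level"] - phase_summary(phase_a)["level"]
    overlap = sum(y <= max(phase_a) for y in phase_b) / len(phase_b)
    return {"level_change": change, "overlap": overlap}

baseline = [3, 4, 3, 5, 4]     # illustrative data
treatment = [7, 8, 9, 8, 10]
print(compare_phases(baseline, treatment))
```

These quantities mirror the features a visual analyst inspects; they do not replace the simultaneous, weighted judgment that visual analysis involves.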

When assessing the overall design, the researcher looks at all the data to determine if an effect (e.g., change in the pattern of the dependent variable following manipulation of the independent variable) is observed at least three different times, each at a different point in time. The researcher also examines if there are manipulations of the independent variable where change in the dependent variable did not occur or occurred in the opposite direction expected by the hypothesis under consideration.

At present there is active discussion about the need for visual analysis as a component in the analysis protocol for single-case studies (Institute of Education Sciences, 2020). It is clear that the number of data points per phase, the mean of these points, variability, and within-phase trend are all easily calculable. As the authors of articles in this issue note, there also are creative approaches to examining whether there is change in the data patterns across adjacent phases. We view these approaches as major advances and positive assets to the task of interpreting single-case evidence. We also recognize, however, that none of the proposed statistical options simultaneously examine the full set of variables traditionally used to guide visual analysis (level, trend, variability, immediacy, overlap, similarity of pattern across similar phases), nor do they include protocols for adjusting the weight given to each variable when assessing an effect (e.g., level is weighted differently in phases with stable data patterns than in phases with strong trends). Most important, visual analysis offers a more nuanced interpretation of data patterns. The role of outliers, within-phase shifts in data patterns, and shifts in data patterns at similar times (within a multiple baseline design) are more apparent via visual analysis, and these are useful sources of information for assessing the stability and clinical relevance of effects. At this point we continue to see visual analysis as the appropriate first step in the assessment of single-case studies, but we strongly support the addition of statistical tools that yield valuable quantitative summaries of specific aspects of the analysis.
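As one concrete example of the kind of quantitative summary that can supplement visual analysis, a nonoverlap index such as the nonoverlap of all pairs (NAP) can be computed in a few lines. This Python sketch is a generic illustration rather than any specific procedure proposed in this issue, and it assumes that clinical improvement corresponds to an increase in the measured behavior:

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the proportion of all baseline-treatment
    pairs in which the treatment value exceeds the baseline value,
    counting ties as one half (direction of improvement assumed upward)."""
    pairs = [(a, b) for a in baseline for b in treatment]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

# Complete separation between phases yields the maximum value of 1.0.
print(nap([3, 4, 3, 5, 4], [7, 8, 9, 8, 10]))  # 1.0
```

Values near 0.5 indicate chance-level separation between phases, which is one reason such an index summarizes overlap but says nothing about trend, immediacy, or consistency across similar phases.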

Align Data Analysis with Research Purpose

A theme that emerges in this special issue is the importance of aligning the aspects of the analysis that are quantified with the purposes of the study. We are fortunate to see in this special issue a variety of quantitative summaries that are tailored to meet a variety of purposes. There are methods helpful in estimating the size of the average treatment effect, and these vary depending on whether the focus is on quantifying in a standardized way the change in level (Cox et al., this issue) or a change in slope or variability (Manolov et al., this issue-a). In addition, there are methods to quantify the consistency of effects across replications (Manolov et al., this issue-b) and other methods to summarize the degree to which the size of effects relates to characteristics of the participants (Moeyaert et al., this issue). There are also estimates of the probability of the observed difference occurring in the absence of a treatment effect (Friedel et al., this issue; Manolov et al., this issue-b), methods that rely on a series of probability estimates to aid in the interpretation of functional analyses (Kranak & Hall, this issue), and summaries used to identify overselectivity (Mason et al., this issue). In each case a strong rationale is available to support specific conditions where the proposed analysis would be useful. The important message is that no one analysis is applicable to all conditions, and clarifying the purpose and structure of a specific study is critical when deciding which analysis to implement.

In addition to the need for aligning the statistical analysis with the study purpose is the need to align the logic and assumptions underlying the quantitative summary with the design and data from the single-case study. For example, the interpretation of a change in level as a measure of the size of the effect is more meaningful when the experimental design controls for threats to internal validity and a visual analysis reveals an absence of trends, an absence of level shifts that are not coincident with intervention, and a problematic level of baseline responding. Probabilities based on randomization tests (Manolov et al., this issue-b) are more meaningfully interpreted when the design incorporates randomization and the data permutations are restricted to the possible random assignments, whereas probabilities based on Monte Carlo resampling methods (Friedel et al., this issue) rest on an assumption of exchangeability and are thus more meaningful when the time series are stable.
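To make the randomization logic concrete, the sketch below implements a simple randomization test for an AB design in which the intervention start point was randomly selected from a set of eligible sessions. This is a generic illustration with invented data, not the specific procedure of Manolov et al.; the key point is that the permutations are restricted to the start points that could actually have been assigned:

```python
def ab_randomization_test(scores, actual_start, eligible_starts):
    """One-sided randomization test for an AB design with a randomly
    selected intervention start point. Test statistic: mean(B) - mean(A).
    The p-value is the proportion of eligible start points whose
    statistic is at least as large as the observed one."""
    def stat(start):
        a, b = scores[:start], scores[start:]
        return sum(b) / len(b) - sum(a) / len(a)
    observed = stat(actual_start)
    as_extreme = sum(stat(s) >= observed for s in eligible_starts)
    return as_extreme / len(eligible_starts)

scores = [3, 4, 3, 5, 4, 7, 8, 9, 8, 10]   # illustrative series
p = ab_randomization_test(scores, actual_start=5, eligible_starts=range(3, 9))
print(round(p, 3))  # 0.167: the actual split is the most extreme of six
```

Note that the smallest attainable p-value is 1 divided by the number of eligible start points, which is one reason designs intended for randomization tests must build in an adequate number of possible assignments.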

Single-case researchers will increasingly be expected to integrate statistical analyses in their reporting of results. The number of statistical options will continue to expand, and the analyses will become increasingly easy to implement through software applications. For the field to capitalize on these advancements, it will be important for single-case researchers to be flexible, selecting quantifications that are well matched to their purposes, study design, data, and visual analyses. Because single-case researchers cannot routinely rely on one specific quantification, efforts have begun to provide guidance in selecting among quantitative options (Fingerhut et al., 2020 ; Manolov & Moeyaert, 2017 ; Manolov et al., this issue-a ). These efforts will need to be extended so they include the techniques developed and illustrated by authors of this special issue, as well as methods that will be developed in the future to meet the varied needs of single-case researchers.

Computer Applications Supporting Analysis of Single-Case Designs

We also acknowledge the value of computer applications that can assist in the analysis of data from single-case designs and make the use of statistical tools more accessible. The ExPRT application developed by Joel Levin and colleagues is one such program that provides rapid interpretation of single-case designs that have incorporated randomization criteria (Gafurov & Levin, 2021). The logic used by ExPRT is consistent with the approach to analysis of alternating treatment designs (ATDs) proposed by Manolov et al. (this issue-b) and is likely to prompt single-case researchers to consider incorporating randomization options in the design of future experiments. The value of computer applications also is apparent in the Automated Nonparametric Statistical Analysis (ANSA) app offered by Kranak and Hall (this issue) as a tool for facilitating the interpretation of functional analysis data using alternating treatment designs. In addition, many of the effect sizes discussed by Manolov et al. (this issue-a) can be readily computed using computer applications, such as the Single-Case Effect Size Calculator (Pustejovsky & Swan, 2018) and SCDHLM (Pustejovsky et al., 2021). We anticipate that an increasing number of computer applications for interpreting single-case data will become available as statistical strategies gain acceptance.

Falligant et al. ( this issue ) extend this theme by reviewing emerging statistical strategies for improving analysis of time series data. They summarize data analytic methods that will both benefit experimental studies and be especially useful in interpretation of clinical data (e.g., with designs that may not meet experimental requirements for control of threats to internal validity). Their message is joined by Cox et al. ( this issue ) in emphasizing the value of collecting rigorous time series data in clinical context even when experimental designs are contraindicated. The consistent message is that combining visual analysis with supplemental statistical assessment has value both for clinical decision making and advancing the science within a discipline. The improved array of statistical options, and the increasing ease with which they can be applied to time series data, make integration of visual and statistical analysis a likely standard for the future.

Implications for Designing Single-Case Research

The articles in this special issue emphasize innovative approaches to the analysis of single-case research data. But the authors also offer important considerations for research designs. Two articles report procedures for identifying the role of intervention components (Cox et al., this issue; Mason et al., this issue). Too little emphasis has been given to the role of single-case designs in examining moderator variables, interaction effects, intervention components, and sustained impact. Few interventions are effective across all population groups, all contexts, and all challenges. Effective research designs need to allow identification not only of the impact of an intervention in a specific context but also of the conditions where the intervention is not effective. Likewise, a growing number of behavioral interventions include multiple procedures. Identifying the respective value of each procedural component and the most efficient combination of components is a worthy challenge for researchers and for the creative application of single-case designs.

Single-case studies designed to examine component interactions or setting specificity may benefit from use of complex single-case designs that combine multiple baseline, alternating treatment and/or reversal elements (Kazdin, 2021 ). In other cases, analysis approaches may be helpful to both document effects and guide future studies. Cox et al. ( this issue ) offer examples for separating the independent and combined effects of behavioral interventions and medication on reduction of problem behavior for individuals with intellectual disabilities. Mason et al. ( this issue ) likewise document how statistical modeling can be used to isolate elements of stimulus control and document with greater precision the presence of stimulus overselectivity.

Research Protocols

Three articles in this issue focus on research protocols that will facilitate the inclusion of single-case research in larger meta-analyses documenting evidence-based practices. Aydin and Yassikaya (this issue) focus on the need to transform graphed data into spreadsheets that can be used for statistical analysis. They report on the value of the PlotDigitizer application for extracting graphed data and provide documentation of the validity and reliability of this tool for delivering data in the format needed for supplemental statistical analysis. Manolov et al. (this issue-a, b) propose procedures for both selecting and reporting the measures employed in any study to avoid measurement bias and misinterpretation, and Dowdy et al. (this issue) likewise encourage procedures to identify possible publication bias. These authors promote preregistration of research plans as a growing option that is both practical and valuable for maximizing rigorous and ethically implemented research protocols.

The major message from articles in this special issue is that single-case research designs are available and functional for advancing both our basic science and clinical technology. The efforts over the past 15 years to define professional design and analysis standards for single-case methods have been successful. But as the articles in this special issue show, single-case research methods are continuing to evolve. Innovative statistical procedures are improving the precision and credibility of single-case research analysis and posing important considerations for novel research design options. These innovations will continue to challenge prior assumptions, and open new opportunities. Each innovation will receive its own critical review, but collectively, the field is benefiting from the creative recommendations exemplified by the authors of this special issue.

Declarations

We have no known conflict of interest to disclose.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Aydin, O., & Yassikaya, M. Y. (this issue). Validity and reliability analysis of the Plot Digitizer Software Program for data extraction from single-case graphs. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00284-0
  • Cox, A., Pritchard, D., Penney, H., Eiri, L., & Dyer, T. (this issue). Demonstrating an analysis of clinical data evaluating psychotropic medication reductions and the ACHIEVE! Program in adolescents with severe problem behavior. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-020-00279-3
  • Dowdy, A., Hantula, D., Travers, J. C., & Tincani, M. (this issue). Meta-analytic methods to detect publication bias in behavior science research. Perspectives on Behavior Science . Advance online publication. 10.1007/s40614-021-00303-0
  • Falligant, J., Kranak, M., & Hagopian, L. (this issue). Further analysis of advanced quantitative methods and supplemental interpretative aids with single-case experimental designs. Perspectives on Behavior Science.
  • Fingerhut, J., Marbou, K., & Moeyaert, M. (2020). Single-case metric ranking tool (Version 1.2) [Microsoft Excel tool]. 10.17605/OSF.IO/7USBJ
  • Friedel, J., Cox, A., Galizio, A., Swisher, M., Small, M., & Perez, S. (this issue). Monte Carlo analyses for single-case experimental designs: An untapped resource for applied behavioral researchers and practitioners. Perspectives on Behavior Science.
  • Gafurov, B. S., & Levin, J. R. (2021, June). ExPRT ( Excel Package of Randomization Tests): Statistical analyses of single-case intervention data (Version 4.2.1) [Computer software]. https://ex-prt.weebly.com
  • Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, & What Works Clearinghouse. (2020). What Works Clearinghouse Procedures Handbook (Vers. 4.1). https://ies.ed.gov/ncee/wwc/Docs/referenceresources/WWC-Procedures-Handbook-v4-1508.pdf
  • Kazdin, A. E. (2021). Single-case research designs: Methods for clinical and applied settings (3rd ed.). Oxford University Press.
  • Kranak, M., & Hall, S. (this issue). Implementing automated nonparametric statistical analysis on functional analysis data: A guide for practitioners and researchers. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00290-2
  • Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation . What Works Clearinghouse . https://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
  • Ledford, J., & Gast, D. (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). Taylor & Francis.
  • Maggin, D., & Odom, S. (2014). Evaluating single-case research data for systematic review: A commentary for the special issue. Journal of School Psychology, 52(2), 237–241. 10.1016/j.jsp.2014.01.002
  • Maggin, D. M., Pustejovsky, J. E., & Johnson, A. H. (2017). A meta-analysis of school-based group contingency interventions for students with challenging behavior: An update. Remedial & Special Education, 38(6), 353–370. 10.1177/0741932517716900
  • Manolov, R., & Moeyaert, M. (2017). Recommendations for choosing single-case data analytical techniques. Behavior Therapy, 48(1), 97–114. 10.1016/j.beth.2016.04.008
  • Manolov, R., Moeyaert, M., & Fingerhut, J. (this issue-a). A priori justification for effect measures in single-case experimental designs. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00282-2
  • Manolov, R., Tanious, R., & Onghena, P. (this issue-b). Quantitative techniques and graphical representations for interpreting results from alternating treatment design. Perspectives on Behavior Science . Advance online publication. 10.1007/s40614-021-00289-9
  • Mason, L., Otero, M., & Andrews, A. (this issue). Cochran’s Q test of stimulus overselectivity within the verbal repertoire of children with autism. Perspectives on Behavior Science . Advance online publication. 10.1007/s40614-021-00315-2
  • Moeyaert, M., Yang, P., & Xu, X. (this issue). The power to explain variability in intervention effectiveness in single-case research using hierarchical line modeling. Perspectives on Behavior Science . Advance online publication. 10.1007/s40614-021-00304-z
  • Parsonson, B., & Baer, D. (1978). The analysis and presentation of graphic data. In T. R. Kratochwill (Ed.), Single subject research (pp. 101–166). Elsevier.
  • Pustejovsky, J. E., & Swan, D. M. (2018). Single-case effect size calculator (Version 0.5.1) Web application. https://jepusto.shinyapps.io/SCD-effect-sizes/
  • Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational & Behavioral Statistics, 39 (5), 368–393. 10.3102/1076998614547577
  • Pustejovsky, J. E., Chen, M., & Hamilton, B. (2021). scdhlm: A web-based calculator for between-case standardized mean differences (Version 0.5.2) Web application. https://jepusto.shinyapps.io/scdhlm
  • Riley-Tillman, T. C., Burns, M. K., & Kilgus, S. (2020). Evaluating educational interventions: Single-case design for measuring response to intervention. Guilford Press.
  • Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37, 41–56. 10.1007/s40614-014-0005-2
  • Tate, R. L., Perdices, M., Rosenkoetter, U., Shadish, W., Vohra, S., Barlow, D. H., Horner, R., Kazdin, A., Kratochwill, T., McDonald, S., Sampson, M., Shamseer, L., Togher, L., Albin, R., Backman, C., Douglas, J., Evans, J. J., Gast, D., Manolov, R., et al. (2016). The single-case reporting guideline in Behavioural Interventions (SCRIBE) 2016 statement. Physical Therapy, 96(7), e1–e10. 10.2522/ptj.2016.96.7.e1

Perspective | Published: 22 November 2022

Single case studies are a powerful tool for developing, testing and extending theories

Lyndsey Nickels (ORCID: 0000-0002-0311-3524), Simon Fischer-Baum (ORCID: 0000-0002-6067-0538) & Wendy Best (ORCID: 0000-0001-8375-5916)

Nature Reviews Psychology, volume 1, pages 733–747 (2022)


Psychology embraces a diverse range of methodologies. However, most rely on averaging group data to draw conclusions. In this Perspective, we argue that single case methodology is a valuable tool for developing and extending psychological theories. We stress the importance of single case and case series research, drawing on classic and contemporary cases in which cognitive and perceptual deficits provide insights into typical cognitive processes in domains such as memory, delusions, reading and face perception. We unpack the key features of single case methodology, describe its strengths, its value in adjudicating between theories, and outline its benefits for a better understanding of deficits and hence more appropriate interventions. The unique insights that single case studies have provided illustrate the value of in-depth investigation within an individual. Single case methodology has an important place in the psychologist’s toolkit and it should be valued as a primary research tool.



Kittredge, A. K., Dell, G. S., Verkuilen, J. & Schwartz, M. F. Where is the effect of frequency in word production? Insights from aphasic picture-naming errors. Cogn. Neuropsychol. 25 , 463–492 (2008).

Domdei, N. et al. Ultra-high contrast retinal display system for single photoreceptor psychophysics. Biomed. Opt. Express 9 , 157 (2018).

Poldrack, R. A. et al. Long-term neural and physiological phenotyping of a single human. Nat. Commun. 6 , 8885 (2015).

Coltheart, M. The assumptions of cognitive neuropsychology: reflections on Caramazza (1984, 1986). Cogn. Neuropsychol. 34 , 397–402 (2017).

Badecker, W. & Caramazza, A. A final brief in the case against agrammatism: the role of theory in the selection of data. Cognition 24 , 277–282 (1986).

Fischer-Baum, S. Making sense of deviance: Identifying dissociating cases within the case series approach. Cogn. Neuropsychol. 30 , 597–617 (2013).

Nickels, L., Howard, D. & Best, W. On the use of different methodologies in cognitive neuropsychology: drink deep and from several sources. Cogn. Neuropsychol. 28 , 475–485 (2011).

Dell, G. S. & Schwartz, M. F. Who’s in and who’s out? Inclusion criteria, model evaluation, and the treatment of exceptions in case series. Cogn. Neuropsychol. 28 , 515–520 (2011).

Schwartz, M. F. & Dell, G. S. Case series investigations in cognitive neuropsychology. Cogn. Neuropsychol. 27 , 477–494 (2010).

Cohen, J. A power primer. Psychol. Bull. 112 , 155–159 (1992).

Martin, R. C. & Allen, C. Case studies in neuropsychology. In APA Handbook Of Research Methods In Psychology Vol. 2 Research Designs: Quantitative, Qualitative, Neuropsychological, And Biological (eds Cooper, H. et al.) 633–646 (American Psychological Association, 2012).

Leivada, E., Westergaard, M., Duñabeitia, J. A. & Rothman, J. On the phantom-like appearance of bilingualism effects on neurocognition: (how) should we proceed? Bilingualism 24 , 197–210 (2021).

Arnett, J. J. The neglected 95%: why American psychology needs to become less American. Am. Psychol. 63 , 602–614 (2008).

Stolz, J. A., Besner, D. & Carr, T. H. Implications of measures of reliability for theories of priming: activity in semantic memory is inherently noisy and uncoordinated. Vis. Cogn. 12 , 284–336 (2005).

Cipora, K. et al. A minority pulls the sample mean: on the individual prevalence of robust group-level cognitive phenomena — the instance of the SNARC effect. Preprint at psyArXiv https://doi.org/10.31234/osf.io/bwyr3 (2019).

Andrews, S., Lo, S. & Xia, V. Individual differences in automatic semantic priming. J. Exp. Psychol. Hum. Percept. Perform. 43 , 1025–1039 (2017).

Tan, L. C. & Yap, M. J. Are individual differences in masked repetition and semantic priming reliable? Vis. Cogn. 24 , 182–200 (2016).

Olsson-Collentine, A., Wicherts, J. M. & van Assen, M. A. L. M. Heterogeneity in direct replications in psychology and its association with effect size. Psychol. Bull. 146 , 922–940 (2020).

Gratton, C. & Braga, R. M. Editorial overview: deep imaging of the individual brain: past, practice, and promise. Curr. Opin. Behav. Sci. 40 , iii–vi (2021).

Fedorenko, E. The early origins and the growing popularity of the individual-subject analytic approach in human neuroscience. Curr. Opin. Behav. Sci. 40 , 105–112 (2021).

Xue, A. et al. The detailed organization of the human cerebellum estimated by intrinsic functional connectivity within the individual. J. Neurophysiol. 125 , 358–384 (2021).

Petit, S. et al. Toward an individualized neural assessment of receptive language in children. J. Speech Lang. Hear. Res. 63 , 2361–2385 (2020).

Jung, K.-H. et al. Heterogeneity of cerebral white matter lesions and clinical correlates in older adults. Stroke 52 , 620–630 (2021).

Falcon, M. I., Jirsa, V. & Solodkin, A. A new neuroinformatics approach to personalized medicine in neurology: the virtual brain. Curr. Opin. Neurol. 29 , 429–436 (2016).

Duncan, G. J., Engel, M., Claessens, A. & Dowsett, C. J. Replication and robustness in developmental research. Dev. Psychol. 50 , 2417–2425 (2014).

Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349 , aac4716 (2015).

Tackett, J. L., Brandes, C. M., King, K. M. & Markon, K. E. Psychology’s replication crisis and clinical psychological science. Annu. Rev. Clin. Psychol. 15 , 579–604 (2019).

Munafò, M. R. et al. A manifesto for reproducible science. Nat. Hum. Behav. 1 , 0021 (2017).

Oldfield, R. C. & Wingfield, A. The time it takes to name an object. Nature 202 , 1031–1032 (1964).

Oldfield, R. C. & Wingfield, A. Response latencies in naming objects. Q. J. Exp. Psychol. 17 , 273–281 (1965).

Brysbaert, M. How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables. J. Cogn. 2 , 16 (2019).

Brysbaert, M. Power considerations in bilingualism research: time to step up our game. Bilingualism https://doi.org/10.1017/S1366728920000437 (2020).

Machery, E. What is a replication? Phil. Sci. 87 , 545–567 (2020).

Nosek, B. A. & Errington, T. M. What is replication? PLoS Biol. 18 , e3000691 (2020).

Li, X., Huang, L., Yao, P. & Hyönä, J. Universal and specific reading mechanisms across different writing systems. Nat. Rev. Psychol. 1 , 133–144 (2022).

Rapp, B. (Ed.) The Handbook Of Cognitive Neuropsychology: What Deficits Reveal About The Human Mind (Psychology Press, 2001).

Code, C. et al. Classic Cases In Neuropsychology (Psychology Press, 1996).

Patterson, K., Marshall, J. C. & Coltheart, M. Surface Dyslexia: Neuropsychological And Cognitive Studies Of Phonological Reading (Routledge, 2017).

Marshall, J. C. & Newcombe, F. Patterns of paralexia: a psycholinguistic approach. J. Psycholinguist. Res. 2 , 175–199 (1973).

Castles, A. & Coltheart, M. Varieties of developmental dyslexia. Cognition 47 , 149–180 (1993).

Khentov-Kraus, L. & Friedmann, N. Vowel letter dyslexia. Cogn. Neuropsychol. 35 , 223–270 (2018).

Winskel, H. Orthographic and phonological parafoveal processing of consonants, vowels, and tones when reading Thai. Appl. Psycholinguist. 32 , 739–759 (2011).

Hepner, C., McCloskey, M. & Rapp, B. Do reading and spelling share orthographic representations? Evidence from developmental dysgraphia. Cogn. Neuropsychol. 34 , 119–143 (2017).

Hanley, J. R. & Sotiropoulos, A. Developmental surface dysgraphia without surface dyslexia. Cogn. Neuropsychol. 35 , 333–341 (2018).

Zihl, J. & Heywood, C. A. The contribution of single case studies to the neuroscience of vision: single case studies in vision neuroscience. Psych. J. 5 , 5–17 (2016).

Bouvier, S. E. & Engel, S. A. Behavioral deficits and cortical damage loci in cerebral achromatopsia. Cereb. Cortex 16 , 183–191 (2006).

Zihl, J. & Heywood, C. A. The contribution of LM to the neuroscience of movement vision. Front. Integr. Neurosci. 9 , 6 (2015).

Dotan, D. & Friedmann, N. Separate mechanisms for number reading and word reading: evidence from selective impairments. Cortex 114 , 176–192 (2019).

McCloskey, M. & Schubert, T. Shared versus separate processes for letter and digit identification. Cogn. Neuropsychol. 31 , 437–460 (2014).

Fayol, M. & Seron, X. On numerical representations. Insights from experimental, neuropsychological, and developmental research. In Handbook of Mathematical Cognition (ed. Campbell, J.) 3–23 (Psychological Press, 2005).

Bornstein, B. & Kidron, D. P. Prosopagnosia. J. Neurol. Neurosurg. Psychiat. 22 , 124–131 (1959).

Kühn, C. D., Gerlach, C., Andersen, K. B., Poulsen, M. & Starrfelt, R. Face recognition in developmental dyslexia: evidence for dissociation between faces and words. Cogn. Neuropsychol. 38 , 107–115 (2021).

Barton, J. J. S., Albonico, A., Susilo, T., Duchaine, B. & Corrow, S. L. Object recognition in acquired and developmental prosopagnosia. Cogn. Neuropsychol. 36 , 54–84 (2019).

Renault, B., Signoret, J.-L., Debruille, B., Breton, F. & Bolgert, F. Brain potentials reveal covert facial recognition in prosopagnosia. Neuropsychologia 27 , 905–912 (1989).

Bauer, R. M. Autonomic recognition of names and faces in prosopagnosia: a neuropsychological application of the guilty knowledge test. Neuropsychologia 22 , 457–469 (1984).

Haan, E. H. F., de, Young, A. & Newcombe, F. Face recognition without awareness. Cogn. Neuropsychol. 4 , 385–415 (1987).

Ellis, H. D. & Lewis, M. B. Capgras delusion: a window on face recognition. Trends Cogn. Sci. 5 , 149–156 (2001).

Ellis, H. D., Young, A. W., Quayle, A. H. & De Pauw, K. W. Reduced autonomic responses to faces in Capgras delusion. Proc. R. Soc. Lond. B 264 , 1085–1092 (1997).

Collins, M. N., Hawthorne, M. E., Gribbin, N. & Jacobson, R. Capgras’ syndrome with organic disorders. Postgrad. Med. J. 66 , 1064–1067 (1990).

Enoch, D., Puri, B. K. & Ball, H. Uncommon Psychiatric Syndromes 5th edn (Routledge, 2020).

Tranel, D., Damasio, H. & Damasio, A. R. Double dissociation between overt and covert face recognition. J. Cogn. Neurosci. 7 , 425–432 (1995).

Brighetti, G., Bonifacci, P., Borlimi, R. & Ottaviani, C. “Far from the heart far from the eye”: evidence from the Capgras delusion. Cogn. Neuropsychiat. 12 , 189–197 (2007).

Coltheart, M., Langdon, R. & McKay, R. Delusional belief. Annu. Rev. Psychol. 62 , 271–298 (2011).

Coltheart, M. Cognitive neuropsychiatry and delusional belief. Q. J. Exp. Psychol. 60 , 1041–1062 (2007).

Coltheart, M. & Davies, M. How unexpected observations lead to new beliefs: a Peircean pathway. Conscious. Cogn. 87 , 103037 (2021).

Coltheart, M. & Davies, M. Failure of hypothesis evaluation as a factor in delusional belief. Cogn. Neuropsychiat. 26 , 213–230 (2021).

McCloskey, M. et al. A developmental deficit in localizing objects from vision. Psychol. Sci. 6 , 112–117 (1995).

McCloskey, M., Valtonen, J. & Cohen Sherman, J. Representing orientation: a coordinate-system hypothesis and evidence from developmental deficits. Cogn. Neuropsychol. 23 , 680–713 (2006).

McCloskey, M. Spatial representations and multiple-visual-systems hypotheses: evidence from a developmental deficit in visual location and orientation processing. Cortex 40 , 677–694 (2004).

Gregory, E. & McCloskey, M. Mirror-image confusions: implications for representation and processing of object orientation. Cognition 116 , 110–129 (2010).

Gregory, E., Landau, B. & McCloskey, M. Representation of object orientation in children: evidence from mirror-image confusions. Vis. Cogn. 19 , 1035–1062 (2011).

Laine, M. & Martin, N. Cognitive neuropsychology has been, is, and will be significant to aphasiology. Aphasiology 26 , 1362–1376 (2012).

Howard, D. & Patterson, K. The Pyramids And Palm Trees Test: A Test Of Semantic Access From Words And Pictures (Thames Valley Test Co., 1992).

Kay, J., Lesser, R. & Coltheart, M. PALPA: Psycholinguistic Assessments Of Language Processing In Aphasia. 2: Picture & Word Semantics, Sentence Comprehension (Erlbaum, 2001).

Franklin, S. Dissociations in auditory word comprehension; evidence from nine fluent aphasic patients. Aphasiology 3 , 189–207 (1989).

Howard, D., Swinburn, K. & Porter, G. Putting the CAT out: what the comprehensive aphasia test has to offer. Aphasiology 24 , 56–74 (2010).

Conti-Ramsden, G., Crutchley, A. & Botting, N. The extent to which psychometric tests differentiate subgroups of children with SLI. J. Speech Lang. Hear. Res. 40 , 765–777 (1997).

Bishop, D. V. M. & McArthur, G. M. Individual differences in auditory processing in specific language impairment: a follow-up study using event-related potentials and behavioural thresholds. Cortex 41 , 327–341 (2005).

Bishop, D. V. M., Snowling, M. J., Thompson, P. A. & Greenhalgh, T., and the CATALISE-2 consortium. Phase 2 of CATALISE: a multinational and multidisciplinary Delphi consensus study of problems with language development: terminology. J. Child. Psychol. Psychiat. 58 , 1068–1080 (2017).

Wilson, A. J. et al. Principles underlying the design of ‘the number race’, an adaptive computer game for remediation of dyscalculia. Behav. Brain Funct. 2 , 19 (2006).

Basso, A. & Marangolo, P. Cognitive neuropsychological rehabilitation: the emperor’s new clothes? Neuropsychol. Rehabil. 10 , 219–229 (2000).

Murad, M. H., Asi, N., Alsawas, M. & Alahdab, F. New evidence pyramid. Evidence-based Med. 21 , 125–127 (2016).

Greenhalgh, T., Howick, J. & Maskrey, N., for the Evidence Based Medicine Renaissance Group. Evidence based medicine: a movement in crisis? Br. Med. J. 348 , g3725–g3725 (2014).

Best, W., Ping Sze, W., Edmundson, A. & Nickels, L. What counts as evidence? Swimming against the tide: valuing both clinically informed experimentally controlled case series and randomized controlled trials in intervention research. Evidence-based Commun. Assess. Interv. 13 , 107–135 (2019).

Best, W. et al. Understanding differing outcomes from semantic and phonological interventions with children with word-finding difficulties: a group and case series study. Cortex 134 , 145–161 (2021).

OCEBM Levels of Evidence Working Group. The Oxford Levels of Evidence 2. CEBM https://www.cebm.ox.ac.uk/resources/levels-of-evidence/ocebm-levels-of-evidence (2011).

Holler, D. E., Behrmann, M. & Snow, J. C. Real-world size coding of solid objects, but not 2-D or 3-D images, in visual agnosia patients with bilateral ventral lesions. Cortex 119 , 555–568 (2019).

Duchaine, B. C., Yovel, G., Butterworth, E. J. & Nakayama, K. Prosopagnosia as an impairment to face-specific mechanisms: elimination of the alternative hypotheses in a developmental case. Cogn. Neuropsychol. 23 , 714–747 (2006).

Hartley, T. et al. The hippocampus is required for short-term topographical memory in humans. Hippocampus 17 , 34–48 (2007).

Pishnamazi, M. et al. Attentional bias towards and away from fearful faces is modulated by developmental amygdala damage. Cortex 81 , 24–34 (2016).

Rapp, B., Fischer-Baum, S. & Miozzo, M. Modality and morphology: what we write may not be what we say. Psychol. Sci. 26 , 892–902 (2015).

Yong, K. X. X., Warren, J. D., Warrington, E. K. & Crutch, S. J. Intact reading in patients with profound early visual dysfunction. Cortex 49 , 2294–2306 (2013).

Rockland, K. S. & Van Hoesen, G. W. Direct temporal–occipital feedback connections to striate cortex (V1) in the macaque monkey. Cereb. Cortex 4 , 300–313 (1994).

Haynes, J.-D., Driver, J. & Rees, G. Visibility reflects dynamic changes of effective connectivity between V1 and fusiform cortex. Neuron 46 , 811–821 (2005).

Tanaka, K. Mechanisms of visual object recognition: monkey and human studies. Curr. Opin. Neurobiol. 7 , 523–529 (1997).

Fischer-Baum, S., McCloskey, M. & Rapp, B. Representation of letter position in spelling: evidence from acquired dysgraphia. Cognition 115 , 466–490 (2010).

Houghton, G. The problem of serial order: a neural network model of sequence learning and recall. In Current Research In Natural Language Generation (eds Dale, R., Mellish, C. & Zock, M.) 287–319 (Academic Press, 1990).

Fieder, N., Nickels, L., Biedermann, B. & Best, W. From “some butter” to “a butter”: an investigation of mass and count representation and processing. Cogn. Neuropsychol. 31 , 313–349 (2014).

Fieder, N., Nickels, L., Biedermann, B. & Best, W. How ‘some garlic’ becomes ‘a garlic’ or ‘some onion’: mass and count processing in aphasia. Neuropsychologia 75 , 626–645 (2015).

Schröder, A., Burchert, F. & Stadie, N. Training-induced improvement of noncanonical sentence production does not generalize to comprehension: evidence for modality-specific processes. Cogn. Neuropsychol. 32 , 195–220 (2015).

Stadie, N. et al. Unambiguous generalization effects after treatment of non-canonical sentence production in German agrammatism. Brain Lang. 104 , 211–229 (2008).

Schapiro, A. C., Gregory, E., Landau, B., McCloskey, M. & Turk-Browne, N. B. The necessity of the medial temporal lobe for statistical learning. J. Cogn. Neurosci. 26 , 1736–1747 (2014).

Schapiro, A. C., Kustner, L. V. & Turk-Browne, N. B. Shaping of object representations in the human medial temporal lobe based on temporal regularities. Curr. Biol. 22 , 1622–1627 (2012).

Baddeley, A., Vargha-Khadem, F. & Mishkin, M. Preserved recognition in a case of developmental amnesia: implications for the acaquisition of semantic memory? J. Cogn. Neurosci. 13 , 357–369 (2001).

Snyder, J. J. & Chatterjee, A. Spatial-temporal anisometries following right parietal damage. Neuropsychologia 42 , 1703–1708 (2004).

Ashkenazi, S., Henik, A., Ifergane, G. & Shelef, I. Basic numerical processing in left intraparietal sulcus (IPS) acalculia. Cortex 44 , 439–448 (2008).

Lebrun, M.-A., Moreau, P., McNally-Gagnon, A., Mignault Goulet, G. & Peretz, I. Congenital amusia in childhood: a case study. Cortex 48 , 683–688 (2012).

Vannuscorps, G., Andres, M. & Pillon, A. When does action comprehension need motor involvement? Evidence from upper limb aplasia. Cogn. Neuropsychol. 30 , 253–283 (2013).

Jeannerod, M. Neural simulation of action: a unifying mechanism for motor cognition. NeuroImage 14 , S103–S109 (2001).

Blakemore, S.-J. & Decety, J. From the perception of action to the understanding of intention. Nat. Rev. Neurosci. 2 , 561–567 (2001).

Rizzolatti, G. & Craighero, L. The mirror-neuron system. Annu. Rev. Neurosci. 27 , 169–192 (2004).

Forde, E. M. E., Humphreys, G. W. & Remoundou, M. Disordered knowledge of action order in action disorganisation syndrome. Neurocase 10 , 19–28 (2004).

Mazzi, C. & Savazzi, S. The glamor of old-style single-case studies in the neuroimaging era: insights from a patient with hemianopia. Front. Psychol. 10 , 965 (2019).

Coltheart, M. What has functional neuroimaging told us about the mind (so far)? (Position Paper Presented to the European Cognitive Neuropsychology Workshop, Bressanone, 2005). Cortex 42 , 323–331 (2006).

Page, M. P. A. What can’t functional neuroimaging tell the cognitive psychologist? Cortex 42 , 428–443 (2006).

Blank, I. A., Kiran, S. & Fedorenko, E. Can neuroimaging help aphasia researchers? Addressing generalizability, variability, and interpretability. Cogn. Neuropsychol. 34 , 377–393 (2017).

Niv, Y. The primacy of behavioral research for understanding the brain. Behav. Neurosci. 135 , 601–609 (2021).

Crawford, J. R. & Howell, D. C. Comparing an individual’s test score against norms derived from small samples. Clin. Neuropsychol. 12 , 482–486 (1998).

Crawford, J. R., Garthwaite, P. H. & Ryan, K. Comparing a single case to a control sample: testing for neuropsychological deficits and dissociations in the presence of covariates. Cortex 47 , 1166–1178 (2011).

McIntosh, R. D. & Rittmo, J. Ö. Power calculations in single-case neuropsychology: a practical primer. Cortex 135 , 146–158 (2021).

Patterson, K. & Plaut, D. C. “Shallow draughts intoxicate the brain”: lessons from cognitive science for cognitive neuropsychology. Top. Cogn. Sci. 1 , 39–58 (2009).

Lambon Ralph, M. A., Patterson, K. & Plaut, D. C. Finite case series or infinite single-case studies? Comments on “Case series investigations in cognitive neuropsychology” by Schwartz and Dell (2010). Cogn. Neuropsychol. 28 , 466–474 (2011).

Horien, C., Shen, X., Scheinost, D. & Constable, R. T. The individual functional connectome is unique and stable over months to years. NeuroImage 189 , 676–687 (2019).

Epelbaum, S. et al. Pure alexia as a disconnection syndrome: new diffusion imaging evidence for an old concept. Cortex 44 , 962–974 (2008).

Fischer-Baum, S. & Campana, G. Neuroplasticity and the logic of cognitive neuropsychology. Cogn. Neuropsychol. 34 , 403–411 (2017).

Paul, S., Baca, E. & Fischer-Baum, S. Cerebellar contributions to orthographic working memory: a single case cognitive neuropsychological investigation. Neuropsychologia 171 , 108242 (2022).

Feinstein, J. S., Adolphs, R., Damasio, A. & Tranel, D. The human amygdala and the induction and experience of fear. Curr. Biol. 21 , 34–38 (2011).

Crawford, J., Garthwaite, P. & Gray, C. Wanted: fully operational definitions of dissociations in single-case studies. Cortex 39 , 357–370 (2003).

McIntosh, R. D. Simple dissociations for a higher-powered neuropsychology. Cortex 103 , 256–265 (2018).

McIntosh, R. D. & Brooks, J. L. Current tests and trends in single-case neuropsychology. Cortex 47 , 1151–1159 (2011).

Best, W., Schröder, A. & Herbert, R. An investigation of a relative impairment in naming non-living items: theoretical and methodological implications. J. Neurolinguistics 19 , 96–123 (2006).

Franklin, S., Howard, D. & Patterson, K. Abstract word anomia. Cogn. Neuropsychol. 12 , 549–566 (1995).

Coltheart, M., Patterson, K. E. & Marshall, J. C. Deep Dyslexia (Routledge, 1980).

Nickels, L., Kohnen, S. & Biedermann, B. An untapped resource: treatment as a tool for revealing the nature of cognitive processes. Cogn. Neuropsychol. 27 , 539–562 (2010).


Acknowledgements

The authors thank all of those pioneers of and advocates for single case study research who have mentored, inspired and encouraged us over the years, and the many other colleagues with whom we have discussed these issues.

Author information

Authors and affiliations

School of Psychological Sciences & Macquarie University Centre for Reading, Macquarie University, Sydney, New South Wales, Australia

Lyndsey Nickels

NHMRC Centre of Research Excellence in Aphasia Recovery and Rehabilitation, Australia

Psychological Sciences, Rice University, Houston, TX, USA

Simon Fischer-Baum

Psychology and Language Sciences, University College London, London, UK

Wendy Best

Contributions

L.N. led and was primarily responsible for the structuring and writing of the manuscript. All authors contributed to all aspects of the article.

Corresponding author

Correspondence to Lyndsey Nickels .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Psychology thanks Yanchao Bi, Rob McIntosh, and the other, anonymous, reviewer for their contribution to the peer review of this work.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Nickels, L., Fischer-Baum, S. & Best, W. Single case studies are a powerful tool for developing, testing and extending theories. Nat Rev Psychol 1 , 733–747 (2022). https://doi.org/10.1038/s44159-022-00127-y


Accepted : 13 October 2022

Published : 22 November 2022

Issue Date : December 2022

DOI : https://doi.org/10.1038/s44159-022-00127-y



Single-Case Design, Analysis, and Quality Assessment for Intervention Research

Lobo, Michele A. PT, PhD; Moeyaert, Mariola PhD; Baraldi Cunha, Andrea PT, PhD; Babik, Iryna PhD

Biomechanics & Movement Science Program, Department of Physical Therapy, University of Delaware, Newark, Delaware (M.A.L., A.B.C., I.B.); and Division of Educational Psychology & Methodology, State University of New York at Albany, Albany, New York (M.M.).

Correspondence: Michele A. Lobo, PT, PhD, Biomechanics & Movement Science Program, Department of Physical Therapy, University of Delaware, Newark, DE 19713 ( [email protected] ).

This research was supported by the National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health & Human Development (1R21HD076092-01A1, Lobo PI), and the Delaware Economic Development Office (Grant #109). Some of the information in this article was presented at the IV Step Meeting in Columbus, Ohio, June 2016. The authors declare no conflict of interest.

Background and Purpose: 

The purpose of this article is to describe single-case studies and contrast them with case studies and randomized clinical trials. We highlight current research designs, analysis techniques, and quality appraisal tools relevant for single-case rehabilitation research.

Summary of Key Points: 

Single-case studies can provide a viable alternative to large group studies such as randomized clinical trials. Single-case studies involve repeated measures and manipulation of an independent variable. They can be designed to have strong internal validity for assessing causal relationships between interventions and outcomes, as well as external validity for generalizability of results, particularly when the study designs incorporate replication, randomization, and multiple participants. Single-case studies should not be confused with case studies/series (ie, case reports), which are reports of clinical management of a patient or a small series of patients.

Recommendations for Clinical Practice: 

When rigorously designed, single-case studies can be particularly useful experimental designs in a variety of situations, such as when research resources are limited, studied conditions have low incidences, or when examining effects of novel or expensive interventions. Readers will be directed to examples from the published literature in which these techniques have been discussed, evaluated for quality, and implemented.

INTRODUCTION

In this special interest article we present current tools and techniques relevant for single-case rehabilitation research. Single-case (SC) studies have been identified by a variety of names, including “n of 1 studies” and “single-subject” studies. The term “single-case study” is preferred over these alternatives because they suggest such studies include only 1 participant. In fact, as discussed later, for purposes of replication and improved generalizability, the strongest SC studies commonly include more than 1 participant.

An SC study should not be confused with a “case study/series” (also called a “case report”). In a typical case study/series, a single patient or small series of patients is involved, but there is no purposeful manipulation of an independent variable, nor are there necessarily repeated measures. Most case studies/series are reported in a narrative way, whereas results of SC studies are presented numerically or graphically.1,2 This article defines SC studies, contrasts them with randomized clinical trials, discusses how they can be used to scientifically test hypotheses, and highlights current research designs, analysis techniques, and quality appraisal tools that may be useful for rehabilitation researchers.

In SC studies, measurements of outcome (dependent variables) are recorded repeatedly for individual participants across time and varying levels of an intervention (independent variables).1–5 These varying levels of intervention are referred to as “phases,” with 1 phase serving as a baseline or comparison, so each participant serves as his/her own control.2 In contrast to case studies and case series in which participants are observed across time without experimental manipulation of the independent variable, SC studies employ systematic manipulation of the independent variable to allow for hypothesis testing.1,6 As a result, SC studies allow for rigorous experimental evaluation of intervention effects and provide a strong basis for establishing causal inferences. Advances in design and analysis techniques for SC studies observed in recent decades have made SC studies increasingly popular in educational and psychological research. Yet, the authors believe SC studies have been undervalued in rehabilitation research, where randomized clinical trials (RCTs) are typically recommended as the optimal research design to answer questions related to interventions.7 In reality, there are advantages and disadvantages to both SC studies and RCTs that should be carefully considered to select the best design to answer individual research questions. Although there are a variety of other research designs that could be utilized in rehabilitation research, only SC studies and RCTs are discussed here because SC studies are the focus of this article and RCTs are the most highly recommended design for intervention studies.7
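To make the repeated-measures structure concrete, the sketch below (in Python, with entirely hypothetical scores and phase labels) stores one participant's session-by-session outcome data by phase and computes the change in mean level from baseline to intervention — the within-participant comparison that lets each case serve as its own control.

```python
from statistics import mean

# One participant's session-by-session outcome scores, keyed by phase.
# Values are invented for illustration only.
participant = {
    "A": [2, 3, 2, 3, 2],   # baseline (control/comparison) phase
    "B": [5, 6, 6, 7, 7],   # intervention phase
}

def phase_summary(data):
    """Return the mean outcome level within each phase."""
    return {phase: mean(scores) for phase, scores in data.items()}

def level_change(data, baseline="A", treatment="B"):
    """Difference in mean level between intervention and baseline phases."""
    summary = phase_summary(data)
    return summary[treatment] - summary[baseline]

print(phase_summary(participant))            # {'A': 2.4, 'B': 6.2}
print(round(level_change(participant), 1))   # 3.8
```

Because the comparison is within the participant, stable person-level characteristics drop out of the contrast rather than needing to be balanced across groups.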

When designed and conducted properly, RCTs offer strong evidence that changes in outcomes may be related to provision of an intervention. However, RCTs require monetary, time, and personnel resources that many researchers, especially those in clinical settings, may not have available.8 RCTs also require access to large numbers of consenting participants who meet strict inclusion and exclusion criteria, which can limit the variability of the sample and the generalizability of results.9 The requirement for large participant numbers may make RCTs difficult to perform in many settings, such as rural and suburban settings, and for many populations, such as those with diagnoses marked by lower prevalence.8 Relying exclusively on RCTs has the potential to produce bodies of research that are skewed to address the needs of some individuals while neglecting the needs of others. RCTs aim to include a large number of participants and to use random group assignment to create study groups that are similar to one another in terms of all potential confounding variables, but it is challenging to identify all confounding variables. Finally, the results of RCTs are typically presented in terms of group means and standard deviations that may not represent the true performance of any one participant.10 This can present a challenge for clinicians aiming to translate and implement these group findings at the level of the individual.
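The concern about group means is easy to see with a toy example. The sketch below (Python, with change scores invented purely for illustration) shows a trial whose positive mean change does not describe any single participant: half of the sample worsened.

```python
from statistics import mean

# Hypothetical pre-to-post change scores for six trial participants:
# three clear responders and three participants who worsened.
changes = [10, 12, 11, -9, -10, -8]

group_mean = mean(changes)                 # small positive "average benefit"
responders = sum(1 for c in changes if c > 0)

print(round(group_mean, 1))                        # 1.0
print(f"{responders} of {len(changes)} improved")  # 3 of 6 improved
```

A clinician reading only the mean would see a modest benefit; individual-level reporting, as in SC studies, makes the split response pattern visible.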

SC studies can provide a scientifically rigorous alternative to RCTs for experimentally determining the effectiveness of interventions. 1 , 2 SC studies can assess a variety of research questions, settings, cases, independent variables, and outcomes. 11 There are many benefits to SC studies that make them appealing for intervention research. SC studies may require fewer resources than RCTs and can be performed in settings and with populations that do not allow for large numbers of participants. 1 , 2 In SC studies, each participant serves as his/her own comparison, thus controlling for many confounding variables that can impact outcome in rehabilitation research, such as gender, age, socioeconomic level, cognition, home environment, and concurrent interventions. 2 , 11 Results can be analyzed and presented to determine whether interventions resulted in changes at the level of the individual, the level at which rehabilitation professionals intervene. 2 , 12 When properly designed and executed, SC studies can demonstrate strong internal validity to determine the likelihood of a causal relationship between the intervention and outcomes and external validity to generalize the findings to broader settings and populations. 2 , 12 , 13

SINGLE-CASE RESEARCH DESIGNS FOR INTERVENTION RESEARCH

There are a variety of SC designs that can be used to study the effectiveness of interventions. Here we discuss (1) AB designs, (2) reversal designs, (3) multiple baseline designs, and (4) alternating treatment designs, as well as ways replication and randomization techniques can be used to improve internal validity of all of these designs. 1–3 , 12–14

The simplest of these designs is the AB design 15 ( Figure 1 ). This design involves repeated measurement of outcome variables throughout a baseline control/comparison phase (A) and then throughout an intervention phase (B). When possible, it is recommended that a stable level and/or rate of change in performance be observed within the baseline phase before transitioning into the intervention phase. 2 As with all SC designs, it is also recommended that there be a minimum of 5 data points in each phase. 1 , 2 There is no randomization or replication of the baseline or intervention phases in the basic AB design. 2 Therefore, AB designs have problems with internal validity and generalizability of results. 12 They are weak in establishing causality because changes in outcome variables could be related to a variety of other factors, including maturation, experience, learning, and practice effects. 2 , 12 Sample data from a single-case AB study performed to assess the impact of Floor Play intervention on social interaction and communication skills for a child with autism 15 are shown in Figure 1 .

[Figure 1]

If an intervention does not have carryover effects, it is recommended to use a reversal design . 2 For example, a reversal A 1 BA 2 design 16 ( Figure 2 ) includes alternation of the baseline and intervention phases, whereas a reversal A 1 B 1 A 2 B 2 design 17 ( Figure 3 ) consists of alternation of 2 baseline (A 1 , A 2 ) and 2 intervention (B 1 , B 2 ) phases. Incorporating at least 4 phases in the reversal design (ie, A 1 B 1 A 2 B 2 or A 1 B 1 A 2 B 2 A 3 B 3 ...) allows for a stronger determination of a causal relationship between the intervention and outcome variables because the relationship can be demonstrated across at least 3 different points in time: change in outcome from A 1 to B 1 , from B 1 to A 2 , and from A 2 to B 2 . 18 Before using this design, however, researchers must determine that it is safe and ethical to withdraw the intervention, especially in cases where the intervention is effective and necessary. 12

[Figure 2]

A recent study used an ABA reversal SC study to determine the effectiveness of core stability training in 8 participants with multiple sclerosis. 16 During the first 4 weekly data collections, the researchers ensured a stable baseline, which was followed by 8 weekly intervention data points, and concluded with 4 weekly withdrawal data points. Intervention significantly improved participants' walking and reaching performance ( Figure 2 ). 16 This A 1 BA 2 design could have been strengthened by the addition of a second intervention phase for replication (A 1 B 1 A 2 B 2 ). For instance, a single-case A 1 B 1 A 2 B 2 withdrawal design aimed to assess the efficacy of rehabilitation using visuo-spatio-motor cueing for 2 participants with severe unilateral neglect after a severe right hemisphere stroke. 17 Each phase included 8 data points. Statistically significant intervention-related improvement was observed, suggesting that visuo-spatio-motor cueing might be promising for treating individuals with very severe neglect ( Figure 3 ). 17

The reversal design can also incorporate a cross-over design where each participant experiences more than 1 type of intervention. For instance, a B 1 C 1 B 2 C 2 design could be used to study the effects of 2 different interventions (B and C) on outcome measures. Challenges with including more than 1 intervention involve potential carryover effects from earlier interventions and order effects that may impact the measured effectiveness of the interventions. 2 , 12 Including multiple participants and randomizing the order of intervention phase presentations are tools to help control for these types of effects. 19

When an intervention permanently changes an individual's ability, a return-to-baseline performance is not feasible and reversal designs are not appropriate. Multiple baseline designs ( MBDs ) are useful in these situations ( Figure 4 ). 20 Multiple baseline designs feature staggered introduction of the intervention across time: each participant is randomly assigned to 1 of at least 3 experimental conditions characterized by the length of the baseline phase. 21 These studies involve more than 1 participant, thus functioning as SC studies with replication across participants. Staggered introduction of the intervention allows for separation of intervention effects from those of maturation, experience, learning, and practice. For example, a multiple baseline SC study was used to investigate the effect of an antispasticity baclofen medication on stiffness in 5 adult males with spinal cord injury. 20 The subjects were randomly assigned to receive 5 to 9 baseline data points with a placebo treatment before the initiation of the intervention phase with the medication. Both participants and assessors were blind to the experimental condition. The results suggested that baclofen might not be a universal treatment choice for all individuals with spasticity resulting from a traumatic spinal cord injury ( Figure 4 ). 20

[Figure 4]

The impact of 2 or more interventions can also be assessed via alternating treatment designs ( ATDs ). In ATDs, after establishing the baseline, the experimenter exposes subjects to different intervention conditions administered in close proximity for equal intervals ( Figure 5 ). 22 ATDs are prone to “carryover effects” when the effects of 1 intervention influence the observed outcomes of another intervention. 1 As a result, such designs introduce unique challenges when attempting to determine the effects of any 1 intervention and have been less commonly utilized in rehabilitation. An ATD was used to monitor disruptive behaviors in the school setting throughout a baseline followed by an alternating treatment phase with randomized presentation of a control condition or an exercise condition. 23 Results showed that 30 minutes of moderate to intense physical activity decreased behavioral disruptions through 90 minutes after the intervention. 23 An ATD was also used to compare the effects of commercially available and custom-made video prompts on the performance of multistep cooking tasks in 4 participants with autism. 22 Results showed that participants independently performed more steps with the custom-made video prompts ( Figure 5 ). 22

[Figure 5]

Regardless of the SC study design, replication and randomization should be incorporated when possible to improve internal and external validity. 11 The reversal design is an example of replication across study phases. The minimum number of phase replications needed to meet quality standards is 3 (A 1 B 1 A 2 B 2 ), but having 4 or more replications is highly recommended (A 1 B 1 A 2 B 2 A 3 ...). 11 , 14 In cases when interventions aim to produce lasting changes in participants' abilities, replication of findings may be demonstrated by replicating intervention effects across multiple participants (as in multiple-participant AB designs), or across multiple settings, tasks, or service providers. When the results of an intervention are replicated across multiple reversals, participants, and/or contexts, there is an increased likelihood that a causal relationship exists between the intervention and the outcome. 2 , 12

Randomization should be incorporated in SC studies to improve internal validity and the ability to assess for causal relationships among interventions and outcomes. 11 In contrast to traditional group designs, SC studies often do not have multiple participants or units that can be randomly assigned to different intervention conditions. Instead, in randomized phase-order designs , the sequence of phases is randomized. Simple or block randomization is possible. For example, with simple randomization for an A 1 B 1 A 2 B 2 design, the A and B conditions are treated as separate units and are randomly assigned to be administered for each of the predefined data collection points. As a result, any combination of A-B sequences is possible without restrictions on the number of times each condition is administered or regard for repetitions of conditions (eg, A 1 B 1 B 2 A 2 B 3 B 4 B 5 A 3 B 6 A 4 A 5 A 6 ). With block randomization for an A 1 B 1 A 2 B 2 design, 2 conditions (eg, A and B) would be blocked into a single unit (AB or BA), randomization of which to different periods would ensure that each condition appears in the resulting sequence more than 2 times (eg, A 1 B 1 B 2 A 2 A 3 B 3 A 4 B 4 ). Note that AB and reversal designs require that the baseline (A) always precedes the first intervention (B), which should be accounted for in the randomization scheme. 2 , 11
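The two phase-order schemes can be sketched in Python; the function names, and the choice to force the first session to baseline in the simple scheme, are ours rather than part of any published protocol:

```python
import random

def simple_phase_randomization(n_points, seed=None):
    """Simple randomization: each measurement occasion is independently
    assigned to condition A or B; only the first occasion is forced to be
    baseline (A), since AB-type designs require the baseline to come first."""
    rng = random.Random(seed)
    return ["A"] + [rng.choice(["A", "B"]) for _ in range(n_points - 1)]

def block_phase_randomization(n_blocks, seed=None):
    """Block randomization: conditions are paired into AB blocks and the
    order within each block is shuffled, so A and B each occur exactly
    n_blocks times in the final sequence."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["A", "B"]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence
```

Note the trade-off illustrated by the two functions: simple randomization permits unbalanced sequences, whereas block randomization guarantees each condition is administered equally often.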

In randomized phase start-point designs , the lengths of the A and B phases can be randomized. 2 , 11 , 24–26 For example, for an AB design, researchers could specify the number of time points at which outcome data will be collected (eg, 20), define the minimum number of data points desired in each phase (eg, 4 for A, 3 for B), and then randomize the initiation of the intervention so that it occurs anywhere between the remaining time points (points 5 and 17 in the current example). 27 , 28 For multiple baseline designs, a dual-randomization or “regulated randomization” procedure has been recommended. 29 If multiple baseline randomization depends solely on chance, it could be the case that all units are assigned to begin intervention at points not really separated in time. 30 Such randomly selected initiation of the intervention would result in the drastic reduction of the discriminant and internal validity of the study. 29 To eliminate this issue, investigators should first specify appropriate intervals between the start points for different units, then randomly select from those intervals, and finally randomly assign each unit to a start point. 29
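A minimal sketch of selecting a randomized start point for an AB design under such constraints (the function name and 1-indexed return convention are ours):

```python
import random

def randomize_start_point(n_points, min_baseline, min_intervention, seed=None):
    """Randomly select the first intervention (B) session for an AB design,
    honoring the minimum number of points required in each phase.
    Returns the 1-indexed session at which the intervention begins."""
    rng = random.Random(seed)
    earliest = min_baseline + 1               # baseline keeps its minimum length
    latest = n_points - min_intervention + 1  # intervention keeps its minimum length
    return rng.randint(earliest, latest)

# With 20 sessions and minimums of 4 baseline and 3 intervention points,
# the intervention may begin anywhere from session 5 onward.
start = randomize_start_point(20, 4, 3, seed=42)
```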

SINGLE-CASE ANALYSIS TECHNIQUES FOR INTERVENTION RESEARCH

The What Works Clearinghouse (WWC) single-case design technical documentation provides an excellent overview of appropriate SC study analysis techniques to evaluate the effectiveness of intervention effects. 1 , 18 First, visual analyses are recommended to determine whether there is a functional relationship between the intervention and the outcome. Second, if evidence for a functional effect is present, the visual analysis is supplemented with quantitative analysis methods evaluating the magnitude of the intervention effect. Third, effect sizes are combined across cases to estimate overall average intervention effects, which contribute to evidence-based practice, theory, and future applications. 2 , 18

Visual Analysis

Traditionally, SC study data are presented graphically. When more than 1 participant engages in a study, a spaghetti plot showing all of their data in the same figure can be helpful for visualization. Visual analysis of graphed data has been the traditional method for evaluating treatment effects in SC research. 1 , 12 , 31 , 32 The visual analysis involves evaluating level, trend, and stability of the data within each phase (ie, within-phase data examination) followed by examination of the immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases (ie, between-phase comparisons). When the changes (and/or variability) in level are in the desired direction, are immediate, readily discernible, and maintained over time, it is concluded that the changes in behavior across phases result from the implemented treatment and are indicative of improvement. 33 Three demonstrations of an intervention effect are necessary for establishing a functional relationship. 1

Within-Phase Examination

Level, trend, and stability of the data within each phase are evaluated. Mean and/or median can be used to report the level, and trend can be evaluated by determining whether the data points are monotonically increasing or decreasing. Within-phase stability can be evaluated by calculating the percentage of data points within 15% of the phase median (or mean). The stability criterion is satisfied if about 85% (80%–90%) of the data in a phase fall within a 15% range of the median (or average) of all data points for that phase. 34
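The within-phase stability and trend checks described above can be sketched as follows; the 15% band and 85% criterion come from the text, while the function names and the assumption of positive-valued measurements are ours:

```python
from statistics import median

def phase_stability(values, band=0.15, criterion=0.85):
    """Fraction of data points falling within +/- `band` of the phase
    median, and whether that fraction meets the stability criterion.
    Assumes positive-valued outcome measurements."""
    med = median(values)
    lo, hi = med * (1 - band), med * (1 + band)
    fraction = sum(lo <= v <= hi for v in values) / len(values)
    return fraction, fraction >= criterion

def monotonic_trend(values):
    """Classify the within-phase trend as 'increasing', 'decreasing',
    or 'none' based on strictly monotonic successive points."""
    if all(b > a for a, b in zip(values, values[1:])):
        return "increasing"
    if all(b < a for a, b in zip(values, values[1:])):
        return "decreasing"
    return "none"
```

For example, the phase [10, 11, 10, 9, 12] has 4 of 5 points (80%) within 15% of its median of 10, so it would not meet an 85% stability criterion.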

Between-Phase Examination

Immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases are evaluated next. For this, several nonoverlap indices have been proposed that all quantify the proportion of measurements in the intervention phase not overlapping with the baseline measurements. 35 Nonoverlap statistics are typically scaled as percent from 0 to 100, or as a proportion from 0 to 1. Here, we briefly discuss the nonoverlap of all pairs ( NAP ), 36 the extended celeration line ( ECL ), the improvement rate difference ( IRD ), 37 and the TauU and its adjusted version, TauU adj , 35 as these are the most recent and complete techniques. We also examine the percentage of nonoverlapping data ( PND ) 38 and the two-standard deviation band method, as these are frequently used techniques. In addition, we include the percentage of nonoverlapping corrected data ( PNCD ), an index applying to the PND after controlling for baseline trend. 39

Nonoverlap of All Pairs

The NAP index compares every baseline data point with every intervention data point and reports the proportion of these pairs in which the intervention measurement shows improvement over the baseline measurement, with ties counted as half an overlap. 36 Values range from 0 to 1 (or 0% to 100%), with higher values indicating less overlap between phases and, therefore, a larger intervention effect.
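A minimal sketch of the NAP computation in Python (the function name is ours; each baseline point is compared with each intervention point, and ties count as half a pair, per the usual definition):

```python
def nap(baseline, intervention, increase_is_improvement=True):
    """Nonoverlap of all pairs: proportion of (baseline, intervention)
    pairs in which the intervention point improves on the baseline point,
    counting ties as half a pair."""
    better = ties = 0
    for b in baseline:
        for t in intervention:
            if t == b:
                ties += 1
            elif (t > b) == increase_is_improvement:
                better += 1
    return (better + 0.5 * ties) / (len(baseline) * len(intervention))
```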

Extended Celeration Line

In the ECL (split-middle) approach, a celeration (trend) line is fit to the baseline data and extended into the intervention phase; the intervention effect is then evaluated by examining the proportion of intervention data points that fall above (or below, depending on the desired direction of change) the projected baseline trend line.

As a consequence, this method depends on a straight line and makes an assumption of linearity in the baseline. 2 , 12
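A rough sketch of the split-middle construction of the celeration line (our naming; odd-length baselines drop the middle point here, which is one of several conventions):

```python
from statistics import median

def split_middle_line(baseline):
    """Fit the split-middle celeration line to baseline data: the line
    through the median point of each half of the baseline phase.
    Returns (slope, intercept) with sessions indexed from 0."""
    n = len(baseline)
    first, second = baseline[: n // 2], baseline[(n + 1) // 2 :]
    x1 = (len(first) - 1) / 2                  # median session of first half
    x2 = (n + 1) // 2 + (len(second) - 1) / 2  # median session of second half
    y1, y2 = median(first), median(second)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def points_above_projection(baseline, intervention):
    """Count intervention points lying above the extended celeration line."""
    slope, intercept = split_middle_line(baseline)
    start = len(baseline)
    return sum(y > slope * (start + i) + intercept
               for i, y in enumerate(intervention))
```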

Improvement Rate Difference

This analysis is conceptualized as the difference in improvement rates (IR) between the intervention ( IR T ) and baseline ( IR B ) phases. 38 The IR for each phase is defined as the number of “improved data points” divided by the total data points in that phase. Improvement rate difference, commonly employed in medical group research under the name of “risk reduction” or “risk difference,” attempts to provide an intuitive interpretation for nonoverlap and to make use of an established, respected effect size, IR T − IR B , or the difference between 2 proportions. 37
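An illustrative, simplified computation (the published IRD procedure identifies “improved” points through a minimal-removal algorithm; here improvement is approximated by comparison with the opposite phase's extreme, so treat this as a sketch rather than the canonical method):

```python
def improvement_rate_difference(baseline, intervention):
    """Simplified IRD sketch: improvement rate of the intervention phase
    minus that of the baseline phase. An intervention point is treated as
    improved if it exceeds every baseline point; a baseline point is
    treated as improved if it reaches the intervention range."""
    ir_t = sum(y > max(baseline) for y in intervention) / len(intervention)
    ir_b = sum(y >= min(intervention) for y in baseline) / len(baseline)
    return ir_t - ir_b
```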

TauU and TauU adj

TauU quantifies the nonoverlap between baseline and intervention phases within the framework of Kendall's rank correlation, and the adjusted version, TauU adj , additionally corrects the estimate for an undesirable (improving) trend in the baseline data. 35 As with the other nonoverlap indices, higher values indicate greater separation between phases.
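The nonoverlap (Tau) component can be sketched as a Kendall-style pairwise comparison (the function name is ours; the full Tau-U additionally subtracts pairs attributable to baseline trend, which this sketch omits):

```python
def tau_nonoverlap(baseline, intervention):
    """Tau, the nonoverlap component of Tau-U: concordant minus discordant
    baseline-intervention pairs, divided by the total number of pairs."""
    pos = sum(t > b for b in baseline for t in intervention)
    neg = sum(t < b for b in baseline for t in intervention)
    return (pos - neg) / (len(baseline) * len(intervention))
```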

Online calculators might assist researchers in obtaining the TauU and TauU adjusted coefficients ( http://www.singlecaseresearch.org/calculators/tau-u ).

Percentage of Nonoverlapping Data

The PND is calculated as the percentage of intervention-phase data points exceeding the single most extreme (highest or lowest, depending on the desired direction of change) baseline data point. 38 Commonly used benchmarks interpret a PND above 90% as a very effective intervention, 70% to 90% as effective, 50% to 70% as questionable, and below 50% as ineffective. Because the PND depends on a single, possibly unrepresentative, baseline value, it can be distorted by outliers and does not control for baseline trend.
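A minimal PND sketch (function name ours):

```python
def pnd(baseline, intervention, increase_is_improvement=True):
    """Percentage of intervention-phase points exceeding the most extreme
    baseline point (the highest one if improvement means an increase)."""
    if increase_is_improvement:
        extreme = max(baseline)
        exceed = sum(y > extreme for y in intervention)
    else:
        extreme = min(baseline)
        exceed = sum(y < extreme for y in intervention)
    return 100.0 * exceed / len(intervention)
```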

Two Standard Deviation Band Method

When the stability criterion described earlier is met within phases, it is possible to apply the 2-standard deviation band method. 12 , 41 First, the mean of the data for a specific condition is calculated and represented with a solid line. In the next step, the standard deviation of the same data is computed, and 2 dashed lines are represented: one located 2 standard deviations above the mean and the other 2 standard deviations below. For normally distributed data, few points (<5%) are expected to be outside the 2-standard deviation bands if there is no change in the outcome score because of the intervention. However, this method is not considered a formal statistical procedure, as the data cannot typically be assumed to be normal, continuous, or independent. 41
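The band construction and the outside-band count can be sketched as follows (our naming; the sketch uses the sample standard deviation of the first phase):

```python
from statistics import mean, stdev

def two_sd_band(phase_a, phase_b):
    """Two-standard-deviation band method: build the band from the mean
    and SD of phase A, then count phase-B points falling outside it."""
    m, sd = mean(phase_a), stdev(phase_a)
    lo, hi = m - 2 * sd, m + 2 * sd
    outside = sum(not (lo <= y <= hi) for y in phase_b)
    return lo, hi, outside
```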

Statistical Analysis

If the visual analysis indicates a functional relationship (ie, 3 demonstrations of the effectiveness of the intervention effect), it is recommended to proceed with the quantitative analyses, reflecting the magnitude of the intervention effect. First, effect sizes are calculated for each participant (individual-level analysis). Moreover, if the research interest lies in the generalizability of the effect size across participants, effect sizes can be combined across cases to achieve an overall average effect size estimate (across-case effect size).

Note that quantitative analysis methods are still being developed in the domain of SC research 1 and statistical challenges of producing an acceptable measure of treatment effect remain. 14 , 42 , 43 Therefore, the WWC standards strongly recommend conducting sensitivity analysis and reporting multiple effect size estimators. If consistency across different effect size estimators is identified, there is stronger evidence for the effectiveness of the treatment. 1 , 18

Individual-Level Effect Size Analysis

At the individual level, the magnitude of the intervention effect can be quantified using the nonoverlap indices described above, standardized mean difference statistics comparing baseline and intervention phase means, or regression-based approaches (eg, piecewise regression) that estimate changes in both level and trend between phases. Consistent with the WWC recommendation to report multiple effect size estimators, computing several of these indices for each participant allows the consistency of the estimated effect to be assessed.

Across-Case Effect Sizes

Two-level modeling to estimate the intervention effects across cases can be used to evaluate across-case effect sizes. 44 , 45 , 50 Multilevel modeling is recommended by the WWC standards because it takes the hierarchical nature of SC studies into account: measurements are nested within cases and cases, in turn, are nested within studies. By conducting a multilevel analysis, important research questions can be addressed (which cannot be answered by single-level analysis of SC study data), such as (1) What is the magnitude of the average treatment effect across cases? (2) What is the magnitude and direction of the case-specific intervention effect? (3) How much does the treatment effect vary within cases and across cases? (4) Does a case and/or study-level predictor influence the treatment's effect? The 2-level model has been validated in previous research using extensive simulation studies. 45 , 46 , 51 The 2-level model appears to have sufficient power (>0.80) to detect large treatment effects in at least 6 participants with 6 measurements. 21
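In its simplest form, the two-level model described above can be written as follows (a minimal sketch with Phase coded 0 for baseline and 1 for intervention):

```latex
\text{Level 1 (measurement } i \text{ within case } j\text{):}\quad
y_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{Phase}_{ij} + e_{ij}

\text{Level 2 (between cases):}\quad
\beta_{0j} = \gamma_{00} + u_{0j}, \qquad
\beta_{1j} = \gamma_{10} + u_{1j}
```

Here \(\gamma_{10}\) is the average treatment effect across cases (question 1), \(\beta_{1j} = \gamma_{10} + u_{1j}\) gives each case-specific effect (question 2), the variances of \(e_{ij}\) and \(u_{1j}\) quantify within-case and between-case variability in the effect (question 3), and adding case- or study-level predictors of \(\beta_{1j}\) addresses question 4.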

Furthermore, to estimate the across-case effect sizes, the HPS (Hedges, Pustejovsky, and Shadish), or single-case educational design (SCEdD)-specific mean difference index, can be calculated. 52 This is a standardized mean difference index specifically designed for SCEdD data, with the aim of making it comparable to Cohen's d of group-comparison designs. The standard deviation takes into account both within-participant and between-participant variability, and is typically used to get an across-case estimator for a standardized change in level. The advantage of using the HPS across-case effect size estimator is that it is directly comparable with Cohen's d for group-comparison research, thus enabling the use of Cohen's (1988) benchmarks. 53

Valuable recommendations on SC data analyses have recently been provided. 54 , 55 They suggest that a specific SC study data analytic technique can be chosen on the basis of (1) the study aims and the desired quantification (eg, overall quantification, between-phase quantifications, and randomization), (2) the data characteristics as assessed by visual inspection and the assumptions one is willing to make about the data, and (3) the knowledge and computational resources. 54 , 55 Table 1 lists recommended readings and some commonly used resources related to the design and analysis of single-case studies.

[Table 1. Recommended readings and resources on the design and analysis of single-case studies, pairing each topic with example readings (full citation details are not recoverable from this copy): general textbooks; reversal designs; multiple baseline designs; alternating treatment designs; randomization; visual analysis; percentage of nonoverlapping data; nonoverlap of all pairs; improvement rate difference; Tau-U/piecewise regression; hierarchical linear modeling.]

QUALITY APPRAISAL TOOLS FOR SINGLE-CASE DESIGN RESEARCH

Quality appraisal tools are important to guide researchers in designing strong experiments and conducting high-quality systematic reviews of the literature. Unfortunately, quality assessment tools for SC studies are relatively novel, ratings across tools demonstrate variability, and there is currently no “gold standard” tool. 56 Table 2 lists important SC study quality appraisal criteria compiled from the most common scales; when planning studies or reviewing the literature, we recommend readers to consider these criteria. Table 3 lists some commonly used SC quality assessment and reporting tools and references to resources where the tools can be located.

Table 2. Quality appraisal criteria for single-case studies
1. Design: The design is appropriate for evaluating the intervention.
2. Method details: Participants' characteristics, the selection method, and the testing setting are detailed well enough to allow future replication.
3. Independent variable: The independent variable (ie, the intervention) is described thoroughly enough to allow replication; fidelity of the intervention is thoroughly documented; the independent variable is systematically manipulated under the control of the experimenter.
4. Dependent variable: Each dependent/outcome variable is quantifiable and is measured systematically and repeatedly across time, with an acceptable interassessor percent agreement of 0.80-0.90 (or Cohen's kappa ≥0.60) on at least 20% of sessions.
5. Internal validity: The study includes at least 3 attempts to demonstrate an intervention effect at 3 different points in time or with 3 different phase replications. Design-specific recommendations to meet the What Works Clearinghouse standards without reservations: (1) reversal designs should have ≥4 phases with ≥5 points each; (2) alternating intervention designs should have ≥5 points per condition with ≤2 points per phase; (3) multiple baseline designs should have ≥6 phases with ≥5 points each. Assessors are independent and blind to experimental conditions.
6. External validity: Experimental effects are replicated across participants, settings, tasks, and/or service providers.
7. Face validity: The outcome measure is clearly operationally defined, has a direct unambiguous interpretation, and measures the construct it is designed to measure.
8. Social validity: Both the outcome variable and the magnitude of change in outcome due to the intervention are socially important; the intervention is practical and cost-effective.
9. Sample attrition: Sample attrition is low and unsystematic, because loss of data in SC designs due to overall or differential attrition can produce biased estimates of the intervention's effectiveness if that loss is systematically related to the experimental conditions.
10. Randomization: If randomization is used, the experimenter ensures that (1) equivalence is established at baseline and (2) group membership is determined through a random process.
Table 3. Commonly used single-case quality assessment and reporting tools
- What Works Clearinghouse (WWC) Standards: Kratochwill TR, Hitchcock J, Horner RH, et al. Institute of Education Sciences: What Works Clearinghouse: procedures and standards handbook. Published 2010.
- Quality indicators from Horner et al: Horner RH, Carr EG, Halle J, McGee G, Odom S, Wolery M. The use of single-subject research to identify evidence-based practice in special education. 2005;71(2):165-179.
- Evaluative method: Reichow B, Volkmar F, Cicchetti D. Development of the evaluative method for evaluating and determining evidence-based practices in autism. 2008;38(7):1311-1319.
- Certainty framework: Simeonsson R, Bailey D. Evaluating programme impact: levels of certainty. In: Mitchell D, Brown R, eds. London, England: Chapman & Hall; 1991:280-296.
- Evidence in Augmentative and Alternative Communication Scales (EVIDAAC): Schlosser RW, Sigafoos J, Belfiore P. EVIDAAC comparative single-subject experimental design scale (CSSEDARS). Published 2009.
- Single-Case Experimental Design (SCED) Scale: Tate RL, McDonald S, Perdices M, Togher L, Schulz R, Savage S. Rating the methodological quality of single-subject designs and n-of-1 trials: introducing the Single-Case Experimental Design (SCED) Scale. 2008;18(4):385-401.
- Logan et al scales: Logan LR, Hickman RR, Harris SR, Heriza CB. Single-subject research design: recommendations for levels of evidence and quality rating. 2008;50:99-103.
- Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE): Tate RL, Perdices M, Rosenkoetter U, et al. The Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 statement. 2016;56:133-142.
- Theory, examples, and tools for multilevel data analysis: Van den Noortgate W, Ferron J, Beretvas SN, Moeyaert M. Multilevel synthesis of single-case experimental data. Katholieke Universiteit Leuven web site.
- Tool for computing between-cases standardized mean differences: Pustejovsky JE. scdhlm: a web-based calculator for between-case standardized mean differences (Version 0.2) [Web application].
- Tools for computing NAP, IRD, Tau, and other statistics: Vannest KJ, Parker RI, Gonen O. Single case research: web-based calculators for SCR analysis (Version 1.0) [Web-based application]. College Station, TX: Texas A&M University. Published 2011.
- Tools for graphical representations, means, trend lines, and PND: Wright J. Intervention Central.
- Free Simulation Modeling Analysis (SMA) software: Borckardt JJ. SMA simulation modeling analysis: time series analysis program for short time series data streams. Published 2006.

When an established tool is required for systematic review, we recommend use of the WWC tool because it has well-defined criteria and is developed and supported by leading experts in the SC research field in association with the Institute of Education Sciences. 18 The WWC documentation provides clear standards and procedures to evaluate the quality of SC research; it assesses the internal validity of SC studies, classifying them as “meeting standards,” “meeting standards with reservations,” or “not meeting standards.” 1 , 18 Only studies classified in the first 2 categories are recommended for further visual analysis. Also, WWC evaluates the evidence of effect, classifying studies into “strong evidence of a causal relation,” “moderate evidence of a causal relation,” or “no evidence of a causal relation.” Effect size should only be calculated for studies providing strong or moderate evidence of a causal relation.

The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 is another useful SC research tool developed recently to improve the quality of single-case designs. 57 SCRIBE consists of a 26-item checklist that researchers need to address while reporting the results of SC studies. This practical checklist allows for critical evaluation of SC studies during study planning, manuscript preparation, and review.

Single-case studies can be designed and analyzed in a rigorous manner that gives researchers a strong basis for assessing causal relationships between interventions and outcomes and for generalizing their results. 2 , 12 These studies can be strengthened by incorporating replication of findings across multiple study phases, participants, settings, or contexts, and by using randomization of conditions or phase lengths. 11 There are a variety of tools that allow researchers to objectively analyze findings from SC studies. 56 Although a variety of quality assessment tools exist for SC studies, they can be difficult to locate and utilize without experience, and different tools can provide variable results. The WWC quality assessment tool is recommended for those aiming to systematically review SC studies. 1 , 18

SC studies, like all types of study designs, have a variety of limitations. First, it can be challenging to collect at least 5 data points in a given study phase. This may be especially true when traveling for data collection is difficult for participants, or during the baseline phase when delaying intervention may not be safe or ethical. Power in SC studies is related to the number of data points gathered for each participant, so it is important to avoid having a limited number of data points. 12 , 58 Second, SC studies are not always designed in a rigorous manner and, thus, may have poor internal validity. This limitation can be overcome by addressing key characteristics that strengthen SC designs ( Table 2 ). 1 , 14 , 18 Third, SC studies may have poor generalizability. This limitation can be overcome by including a greater number of participants, or units. Fourth, SC studies may require consultation from expert methodologists and statisticians to ensure proper study design and data analysis, especially to manage issues like autocorrelation and variability of data. 2 Fifth, although it is recommended to achieve a stable level and rate of performance throughout the baseline, human performance is quite variable and can make this requirement challenging. Finally, the most important validity threat to SC studies is maturation. This challenge must be considered during the design process to strengthen SC studies. 1 , 2 , 12 , 58

SC studies can be particularly useful for rehabilitation research. They allow researchers to closely track and report change at the level of the individual. They may require fewer resources and, thus, can allow for high-quality experimental research, even in clinical settings. Furthermore, they provide a tool for assessing causal relationships in populations and settings where large numbers of participants are not accessible. For all of these reasons, SC studies can serve as an effective method for assessing the impact of interventions.


Keywords: n-of-1 studies; quality assessment; research design; single-subject research


Single Case Research Design

  • First Online: 04 January 2024


  • Stefan Hunziker 3 &
  • Michael Blankenagel 3  


This chapter addresses the peculiarities, characteristics, and significant fallacies of single-case research designs. A single-case research design is a collective term for an in-depth analysis of a small, non-random sample. This focus on depth distinguishes case study research from designs that treat the individual case as a relatively insignificant and interchangeable member of a population or sample. Readers will also find guidance on writing a single-case research design paper and learn about the methods typically used with this design. The chapter closes by referring to overlapping and adjacent research designs.



Author information

Authors and Affiliations

Wirtschaft/IFZ, Campus Zug-Rotkreuz, Hochschule Luzern, Zug-Rotkreuz, Zug, Switzerland

Stefan Hunziker & Michael Blankenagel

Corresponding author

Correspondence to Stefan Hunziker .


Copyright information

© 2024 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Hunziker, S., Blankenagel, M. (2024). Single Case Research Design. In: Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-42739-9_8


Print ISBN : 978-3-658-42738-2

Online ISBN : 978-3-658-42739-9
