
The Use of Pictorial or Graphic Representation in Reading Comprehension Interventions for Students with Autism Spectrum Disorders: A Meta-Analysis

  • Open Access
  • 11-09-2025
  • Original Article

Abstract

This meta-analysis examines the effectiveness of pictorial and graphic representations (PGR) in enhancing reading comprehension among K-12 students with autism spectrum disorder (ASD). Synthesizing findings from five single-case experimental design studies, the analysis explores how different modalities, age groups, instructional contexts, and task types influence comprehension outcomes. Results indicate that interventions utilizing PGR show moderate-to-strong positive effects overall (Tau-U = 0.85), significantly improving reading comprehension in students with ASD. However, variability was observed across modalities: technology-based interventions demonstrated strong but varied effectiveness, whereas paper-based interventions exhibited more consistent outcomes. The findings highlight the importance of carefully selecting appropriate visual supports and comprehension measures tailored to students’ cognitive profiles and instructional needs. Future research should expand sample sizes, explore group instructional settings, and further investigate the relative effectiveness of various visual modalities to optimize educational strategies for enhancing reading comprehension in students with ASD.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

Reading is a complex process that involves decoding text, recognizing words, understanding their meanings, parsing sentences, making inferences, and actively monitoring comprehension (Perfetti & Stafura, 2014). While reading, readers form situation models through dynamic interactions between linguistic representations and background knowledge (Kintsch, 1988). This process allows readers to identify coherence in texts, draw inferences, and integrate information and relevant prior knowledge to construct a comprehensive understanding of what they read (Kintsch, 1988; McNamara & Magliano, 2009). Without this deeper integration, readers may focus only on surface-level details without inferencing, which may result in fragmented comprehension and difficulty connecting ideas across the text (Cain & Oakhill, 1999; Cain et al., 2001; Kintsch & Rawson, 2005; Miller & Keenan, 2009).

Reading Comprehension in Students with ASD

Many students with ASD have a detail-oriented cognitive processing disposition, which often affects proficiency in recognizing patterns and specific details (e.g., Frith & Happé, 1994; Happé & Booth, 2008; Nuske & Bavin, 2011; Rumpf et al., 2012; Schlooz & Hulstijn, 2014; van der Hallen et al., 2015). Although this detail-oriented processing style is a cognitive strength in some contexts, it can create barriers to the integration of global meaning when reading texts. Studies suggest that between 30 and 69% of individuals with ASD experience difficulties with comprehension (Henderson et al., 2014; McIntyre et al., 2017; Solari et al., 2019), particularly in higher-order skills such as making inferences, understanding figurative language, and recognizing implicit text structures (Brown et al., 2013; Tárraga-Mínguez et al., 2021). Despite the variability in these estimates, by any measure, reading comprehension challenges are not rare among readers with ASD.
The unique challenges that individuals with ASD face in reading comprehension have been explored through a social cognitive lens, most notably the central coherence framework (Blum, 2019; Frith, 1989; Happé & Booth, 2008; Happé & Frith, 2006). Frith (1989) proposed that these challenges arise from an imbalance in how information is integrated at different levels of processing. This framework contrasts the typical inclination to synthesize diverse information to form a comprehensive understanding, known as central coherence, with the ASD tendency to focus on local information (Frith, 1989). As noted by Frith and Happé (1994), individuals with ASD excel at retaining local details while struggling to form the global coherence required to understand overarching themes and broader inferences.

Pictorial or Graphic Representations

Given these challenges, incorporating PGR may be a powerful strategy to enhance reading comprehension for students, including those with ASD. In this study, a PGR encompasses visual supports (e.g., graphic organizers) and visual representations of narratives (e.g., comic strips, animated illustrations). Visual supports encompass a range of tools, including graphic organizers (e.g., semantic maps, flowcharts), pictorial representations, and other visual aids designed to scaffold understanding of new skills, behavioral expectations, or activities (Hume et al., 2014). Among these, graphic organizers stand out as effective tools for helping students visually display information, as demonstrated in formats like mind maps, charts, tables, and Venn diagrams (Urton et al., 2024). Research has demonstrated their effectiveness in improving comprehension across various student populations, including those with ASD (e.g., Bethune & Wood, 2013) and learning disabilities (e.g., Dexter et al., 2011; Kim et al., 2004). However, not all graphic organizers are equally effective for all tasks. For example, simplistic structures like the ‘beginning, middle, end’ framework may not provide enough cognitive challenge for students working on inferencing skills. Colliot and Jamet (2018) emphasized that the design of visual supports must balance cognitive demands to avoid hindering learning.
Representing information in a visual format can assist in creating mental representations, or situation models, by helping students organize and retain information from texts (Duke & Pearson, 2009). Previous studies have shown that mental models serve as the foundation for comprehension across various forms of media (e.g., Cohn, 2018; Kendeou et al., 2020; Magliano et al., 2016; McNamara & Magliano, 2009). In addition, visual representations not only aid in understanding literal content but also provide scaffolding for navigating non-literal constructs essential for critical analysis and knowledge transfer (Altun, 2018; Kendeou et al., 2020; Magliano et al., 2013). By fostering connections between discrete pieces of information, visual representations can encourage deeper comprehension of texts (Dexter et al., 2011).
Blum (2019) challenges the deficit assumption in narrative comprehension among students with ASD, proposing that comprehension difficulties may be more attributable to modality. Blum (2019) compared the impact of comic-plus-text narratives with traditional text-only formats on inferential reasoning and found that although students with ASD exhibited difficulties with inferential reasoning in text-only conditions, their performance improved significantly when engaging with comic-plus-text, especially for those with prior experience with comics. The study suggests that visual narratives can serve as an alternative literacy that can scaffold comprehension by making implicit narrative elements more explicit (Blum, 2019; McVicker, 2007; Rozema, 2015). Multimedia scaffolds such as comics are not merely supplementary tools but essential components for encouraging mental representations and building coherence between narrative elements (McVicker, 2007).
Several studies have suggested frameworks that support inferential thinking across different media, including written, graphically or auditorily represented information (e.g., Blum et al., 2020; Loschky et al., 2015; Kendeou et al., 2020; Ness-Maddox, 2022). Inference plays a significant role in bridging implicit gaps by retrieving and integrating relevant information (Kintsch, 1998; McNamara & Magliano, 2009; Oakhill, 1984), and studies suggest that inferential comprehension relies on shared cognitive processes across multiple formats, including both textual and visual narratives (e.g., Kendeou et al., 2020; Kim, 2016; Magliano et al., 2013).
The Inferential Language Comprehension (ILC) framework (Kendeou et al., 2020) highlights that leveraging visual narratives—both static and dynamic—facilitates inference-making because they provide contextual scaffolding that aids comprehension. According to Kendeou et al. (2020), the ability to integrate information from visual cues helps students to bridge narrative gaps and form coherent mental models of the information presented. Furthermore, studies indicate that questioning techniques alongside visual supports further enhance inferencing skills, as they actively prompt learners to engage in the activation and integration process necessary for comprehension (e.g., Kendeou et al., 2020; Magliano et al., 2013).
Ness-Maddox (2022) investigated the extent to which readers generate different types of inferences depending on the modality of the material, comparing text-based and non-linguistic graphic narratives. In this study, college students were given either text or non-linguistic graphics and asked to engage in think-aloud activities while answering recall questions. The findings revealed strong correlations between text and non-linguistic graphic narratives for certain inference types, particularly anaphoric, bridging, elaborative, and internal state inferences. However, the frequency of specific inferences varied by modality. Participants engaging with text narratives generated more goal statements, predictions, and affective responses, whereas non-linguistic graphics prompted more bridging inferences, anaphoric inferences, and emotion inferences. Notably, those in the graphic narrative condition generated more emotion inferences, whereas those in the text condition provided more detailed story recall (Ness-Maddox, 2022).
Collectively, these studies highlight a key insight: inferential thinking extends beyond any single modality and is influenced by the way information is presented (Blum et al., 2020; Kendeou, 2015; Kendeou et al., 2020; Ness-Maddox, 2022). Given these findings, an important question emerges—how frequently have reading comprehension interventions effectively applied PGR to support students in K-12 settings in forming situation models? Addressing this question is critical for developing instructional strategies that leverage the full potential of visual scaffolds to enhance reading comprehension.

Use of PGR in Reading Comprehension Interventions

Guo et al. (2020) conducted a meta-analysis on the effectiveness of graphic displays in supporting students’ reading comprehension, ultimately including 36 articles. Their broader inclusion criteria encompassed various types of visual aids, such as pictures and pictorial diagrams (see p. 9 in Guo et al., 2020). Of the identified studies, only sixteen exclusively included flow diagrams, the type most relevant to the focus of the present study. In addition, 58% of the studies included adult participants. Guo et al. (2020) indicated that less than half of the studies reported participants’ reading skills (n = 18), which may or may not include students with reading difficulties or disabilities. Despite the limited number of reading comprehension interventions with PGR, especially in K-12 settings, a few studies have included PGR to improve students’ reading comprehension.
Danaei et al. (2020) reported mixed results on the impact of PGR in reading comprehension interventions. They compared two types of interventions, augmented reality (AR) storybooks and traditional print books, to enhance students’ reading comprehension, with a particular focus on how graphic representations influence learning outcomes. The study involved 34 neurotypical children aged 7 to 9 in Iran. The AR storybooks were designed to enhance children’s reading comprehension by integrating dynamic elements such as animations, sound effects, and narration. After reading, children were asked to retell the story and answer comprehension questions. Results revealed a significant improvement in overall reading comprehension for children who engaged with the AR-enhanced storybook. These children performed better in retelling and answering comprehension questions, particularly in implicit understanding. However, no notable difference was found between the groups in retelling the theme and setting.
Another example of using PGR in comprehension interventions is the Technology-based Early Language Comprehension Intervention (TeLCI; McMaster et al., 2024), which supports the development of reading comprehension in young children by emphasizing inferencing skills through dynamic visual narratives (i.e., videos) rather than traditional text-based reading and decoding-based approaches. McMaster et al. (2024) found that Grade 1 and 2 neurotypical students who used TeLCI demonstrated growth in their ability to make inferences. However, when comparing the experimental group with the control group, the improvements in inferencing were not significantly different, which indicates that TeLCI’s effectiveness may be comparable to other standard reading comprehension interventions. Although the aforementioned studies (i.e., Danaei et al., 2020; McMaster et al., 2024) did not specifically target students with ASD, these findings highlight the potential for inferencing skills to transfer across media.
Tárraga-Mínguez et al. (2021) conducted a systematic review analyzing the effectiveness of reading comprehension interventions specifically designed for children with ASD. The review covered 25 studies published between 2000 and 2019, including 196 participants aged 5 to 18 years. Tárraga-Mínguez et al. (2021) evaluated various interventions that targeted specific reading comprehension sub-processes, such as understanding inferences, identifying main ideas, and recognizing text structures. Their findings underscored the effectiveness of structured interventions, particularly those incorporating direct instruction, collaborative learning, and the use of visual supports such as concept maps and graphic organizers. These tools significantly improved comprehension by providing a structured framework for organizing information, which is particularly beneficial for children with ASD. However, out of the 25 studies reviewed, only one used digital concept maps of texts relevant to PGR (i.e., Browder et al., 2017).

Present Study

This meta-analysis aims to better understand how PGR impacts reading comprehension specifically among students with ASD, distinguishing their effects across different modalities, age groups, intervention settings, and task types. It deliberately focuses on PGRs rather than the broader term visual support, to ensure specificity in the types of interventions analyzed. This study specifically examines interventions that use structured visual formats to facilitate comprehension by promoting situation models. The following section will discuss a more detailed definition of PGR. This study addresses three questions:
1) Does the use of pictorial or graphic representations enhance reading comprehension achievement in students with ASD?
2) Do the effects of pictorial or graphic representations vary across modalities?
3) To what extent do interventions using pictorial or graphic representations affect reading comprehension achievement differently across age groups, intervention settings, and task types?

Methods

Search Terms, Inclusion Criteria, and Coding

Educational Resources Information Centre (ERIC), PsycINFO, and ProQuest Dissertation & Theses searches related to the population and variables of interest were used to identify relevant articles. These databases were chosen for the following reasons: ERIC is the database for education literature and provides extensive coverage of journal articles, reports, and conference papers across all areas of education (Institute of Education Sciences, n.d.). PsycINFO was chosen because it covers the psychological aspects of education, cognition, and related disciplines (American Psychological Association, n.d.). ProQuest Dissertation & Theses was included for the inclusion of graduate-level studies that may not be available elsewhere (ProQuest, n.d.).
Search terms included were ‘autism OR autism spectrum disorders OR ASD,’ ‘reading OR reading comprehension OR comprehension,’ ‘intervention OR instruction OR support,’ ‘technology OR computer OR program OR online OR screen,’ and ‘visual support OR illustration OR animation OR graphic organizer.’ Excluded search terms were ‘mathematics OR Math OR science OR social science OR music’ and ‘spell* OR soci* OR behavior* OR social stor*.’ The search was limited to scholarly journals, dissertations, theses, and conference papers, excluding sources like trade journals, wire feeds, and magazines. Studies were included in the analysis through the coding process of abstract screening, text coding, and data extraction by three trained graduate students.
Included articles had to meet three selection criteria. First, only intervention studies with K-12 students with ASD were included. Systematic reviews, meta-analyses, brief reports, and scoping reviews were excluded, as were studies that included children under the age of 5 or college students. However, studies in which only some of the participants were diagnosed with ASD were included.
Second, the intervention needed to target reading comprehension and must include graphic or pictorial representations. For the analysis, we considered studies focusing on reading comprehension interventions using visual representations in any format. This included paper-based visual supports, video modeling, and technology-based interventions. However, the representations needed to deliver information to facilitate students’ comprehension to be included. For example, graphic organizers such as a Know-Wonder-Learned (KWL) chart or a table to organize what they read were not included. Highlighted sentences, bigger fonts, or pictures or illustrations that did not depict the story’s plot (e.g., an illustration of a bike next to a story about going to a picnic by bike on weekends) were excluded. In contrast, those including Venn Diagrams and timelines were included because they showed the relationships between information. Comic strips and animated illustrations that represent causes and effects were also included.
Third, the outcome measures had to assess the reading comprehension performance of students with ASD. Analyses were disaggregated by outcome task types to examine to what extent the effects of using pictorial or graphic representations vary across task types. For example, students may be asked to find main ideas, find text evidence, answer literal questions, retell, or make inferences after reading texts.
Out of 2,195 abstracts identified, 1,876 were excluded because they were implemented not in K-12 settings but in preschools, clinics, or colleges, leaving 319 abstracts for further screening. Additional exclusions included 130 abstracts due to non-experimental designs and 163 that did not include interventions targeting reading comprehension. Four abstracts were removed because the outcome measure did not assess reading comprehension, and seven for not involving pictorial or graphic representations in the intervention. After this abstract screening process, fifteen studies remained for full-text review. Three coders conducted the full-text screening of these fifteen studies, resulting in the inclusion of five studies for the meta-analysis. Among the articles reviewed, one published study (i.e., Drill & Bellini, 2022) and one dissertation (i.e., Schatz, 2017) included identical descriptions of participants, designs, methods, and results. Rather than deferring to the published study, data included in the dissertation but not in the published study were retained to reduce reporting bias.
Despite the large number of articles excluded from the initial pool of 9,724 abstracts, the small number of studies included in Guo et al. (2020) and Tárraga-Mínguez et al. (2021) corroborates the scarcity of reading comprehension interventions using PGR that specifically target students with ASD. Figure 1 shows the PRISMA flow chart (Page et al., 2021), and Table 1 provides a breakdown of study characteristics. Interrater agreement among the three coders was measured using Fleiss’ Kappa (κ = 0.554, z = 3.71, p < 0.001), which indicates moderate agreement, with statistical significance suggesting that the observed agreement was unlikely due to chance (Fleiss, 1971).
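Fleiss’ Kappa compares the observed per-item agreement among the three coders with the agreement expected by chance given the marginal category proportions. A minimal sketch of the computation in Python (the include/exclude votes below are hypothetical, not the study’s actual coding data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories matrix of rating counts,
    where counts[i][j] = number of raters assigning subject i to category j."""
    n = len(counts)          # number of rated items
    r = sum(counts[0])       # raters per item (assumed constant)
    # mean observed per-item agreement
    p_i = [(sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts]
    p_bar = sum(p_i) / n
    # chance agreement from marginal category proportions
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (n * r) for j in range(k)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# hypothetical include/exclude votes by three coders on six abstracts
votes = [[3, 0], [0, 3], [2, 1], [3, 0], [1, 2], [0, 3]]
print(round(fleiss_kappa(votes), 3))  # 0.556
```

By the commonly used benchmarks of Landis and Koch (1977), values between 0.41 and 0.60, such as the 0.554 reported above, indicate moderate agreement.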
Fig. 1
PRISMA Flow Diagram for Stages of the Review (Page et al., 2021)
Table 1
Summary of studies
Study | Sample | Intervention/Program | Modality | Length of Intervention | Design | Setting | Task Type
Browder et al., 2017 | n = 3, Grades 2–4 | Story-map labeling using SMART Notebook app | Tech-based | 15 sessions on average (20–30 min/session) | MPD | One to one | Answer literal questions
Drill & Bellini, 2022 | n = 3, Grades 6–8 | Readers Theater, story mapping, & video self-modeling | Paper-based & tech-based | 27 sessions (30–40 min/session) | MBD | One to one, home | Answer literal & inferential questions
Kouo & Visco, 2021 | n = 2, Grade 7 | TinyTap app, video, graphic organizer | Tech-based | 75 sessions (20 min/session) | ATD | One to one | Answer inferential questions
Sartini, 2016 | n = 4, Grades 1–5 | My Pictures Talk app, graphic organizer | Paper-based & tech-based | 48.5 sessions on average (9.5 min/session on average) | MPD | One to one | Answer literal questions
Schatz, 2017 | n = 3, Grades 6–8 | Readers Theater, story mapping, & video self-modeling | Paper-based & tech-based | 27 sessions (30–40 min/session) | MBD | One to one, home | Complete Maze
MPD = multiple probe design; MBD = multiple baseline design; ATD = alternating treatment design

Statistical Analysis

This meta-analysis synthesized findings from five single-case experimental design (SCED) studies: four multiple baseline or multiple probe across participants designs and one adapted alternating treatment design. These studies examined intervention effects in various applied settings using SCED methodologies, which provide rigorous experimental control with small sample sizes. Tau-U was calculated using the Tau-U Calculator from Single Case Research (Vannest et al., 2016), while the overall standardized effect size was computed in R (R Core Team, 2023).
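In its simplest form (a baseline-versus-intervention comparison without baseline-trend correction), Tau-U reduces to the proportion of all baseline-intervention data-point pairs showing improvement minus the proportion showing deterioration. A minimal sketch with hypothetical session scores (the Tau-U Calculator additionally supports trend correction and combining phases across participants):

```python
def tau_u(baseline, treatment):
    """Basic Tau-U (A-vs-B nonoverlap, no baseline-trend correction):
    share of improving pairs minus share of deteriorating pairs."""
    pos = sum(1 for a in baseline for b in treatment if b > a)
    neg = sum(1 for a in baseline for b in treatment if b < a)
    return (pos - neg) / (len(baseline) * len(treatment))

# hypothetical comprehension scores across sessions
baseline = [2, 3, 2, 3]
treatment = [5, 6, 7, 6, 8]
print(tau_u(baseline, treatment))  # 1.0: every treatment point exceeds every baseline point
```

Values near 1 indicate that nearly all intervention-phase data points exceed baseline levels.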

Data Extraction

To extract numerical data from graphical representations in the original studies, the juicr package in R was employed (Lajeunesse, 2021). This package provides a graphical user interface that facilitates extraction and conversion of image-based data into numerical values. The second author played a key role in setting up juicr and establishing guidelines for its use. The first author conducted the initial data extraction for all five studies, with the third author reviewing and verifying the extracted data for accuracy. Discrepancies between the two—39 disagreements out of 445 data points—were resolved through consensus discussions. Interrater agreement yielded a reliability rate of 91.24%.
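The reported reliability rate follows directly from the disagreement count, as a quick arithmetic check shows:

```python
# interrater reliability for extracted data points: agreements / total
total_points, disagreements = 445, 39
agreement_rate = (total_points - disagreements) / total_points
print(f"{agreement_rate:.2%}")  # 91.24%
```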

Model Selection and Effect Size Estimation

To determine the appropriate meta-analytic model, both fixed-effects and random-effects models were estimated using the metafor package (Viechtbauer, 2010; see Table 2). Because alternating treatment designs (ATDs) require an effect size metric that accounts for immediate treatment effects across multiple sessions, Tau-U was selected as the primary metric due to its ability to synthesize data across different SCED designs. Tau-U accounts for changes in both level and trend, adjusts for positive baseline trends when necessary, and does not suffer from the ceiling effects observed in other nonoverlap techniques (Parker et al., 2011). Tau-U effect sizes were weighted within each study, using the Tau-U Calculator from Single Case Research (Vannest et al., 2016).
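The fixed-effects pooling reported in Table 2 can be approximated by standard inverse-variance weighting of the per-study Tau-U values. The sketch below back-calculates each standard error from the 95% confidence intervals reported in the Results (SE ≈ CI width / (2 × 1.96)); it is an illustrative reconstruction rather than the authors’ actual metafor code, but it recovers the Table 2 estimate (0.852) and standard error (0.189) to within rounding:

```python
import math

# per-study Tau-U and 95% CIs as reported in the Results
# (Browder 2017; Kouo & Visco 2021; Drill & Bellini 2022; Schatz 2017; Sartini 2016)
tau = [1.00, 0.95, 0.84, 0.61, 0.82]
ci = [(0.13, 1.87), (0.02, 1.89), (0.07, 1.61), (-0.42, 1.64), (0.15, 1.49)]

# back-calculate standard errors, then inverse-variance weights
se = [(hi - lo) / (2 * 1.96) for lo, hi in ci]
w = [1 / s ** 2 for s in se]

# fixed-effects pooled estimate and its standard error
pooled = sum(wi * ti for wi, ti in zip(w, tau)) / sum(w)
pooled_se = math.sqrt(1 / sum(w))
print(round(pooled, 2), round(pooled_se, 2))  # 0.85 0.19
```

Because the heterogeneity estimate is zero (I² = 0.00%), a random-effects model collapses to the same weights, which is consistent with Table 2 showing identical estimates for both models.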
Table 2
Comparison of Fixed-Effects and Random-Effects Meta-Analysis models
Model | k | Q (df) | p (Q) | Estimate | SE | 95% CI (Lower, Upper) | z | p (z) | AIC | BIC | I² | H²
Fixed-Effects | 5 | 0.38 (4) | 0.984 | 0.852 | 0.189 | 0.483, 1.22 | 4.52 | < 0.0001 | 3.15 | 2.76 | 0.00% | 0.09
Random-Effects | 5 | 0.38 (4) | 0.984 | 0.852 | 0.189 | 0.483, 1.22 | 4.52 | < 0.0001 | 5.04 | 3.82 | 0.00% | 1.00
Q (df) refers to Cochran’s Q test for heterogeneity. Estimate = effect size (Tau-U); SE = standard error; CI = confidence interval; AIC = Akaike Information Criterion; BIC = Bayesian Information Criterion

Publication Bias

To assess potential publication bias, Kendall’s Rank Correlation test (Kendall & Gibbons, 1999) was used to examine asymmetry in effect size distribution. This test evaluates the proportion of data pairs that show improvement over time and offers an interpretable measure of trend consistency (Parker et al., 2011). In addition, a forest plot (Fig. 2) was generated to visualize the effect sizes and confidence intervals for each study included in the meta-analysis. Each study’s point estimate (i.e., Tau-U) and 95% confidence interval are displayed, along with study labels for reference. The overall effect size is represented by a diamond shape, which indicates the aggregated effect across studies. A funnel plot (Fig. 3) was also generated to examine the potential presence of publication bias by plotting effect sizes against their standard errors.
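The logic of a rank-correlation test for funnel-plot asymmetry can be sketched as follows: Kendall’s tau is computed between the effect sizes and their standard errors, and a strong positive correlation would indicate that smaller (less precise) studies report systematically larger effects, a signature of publication bias. In the sketch below, the standard errors are approximate values back-calculated from each study’s reported 95% CI, and the implementation is the simple tau-a (no tie correction or significance test, which a full analysis would include):

```python
def kendall_tau(x, y):
    """Kendall's rank correlation (tau-a): concordant minus discordant
    pairs, divided by the total number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# effect sizes and approximate standard errors for the five studies
effects = [1.00, 0.95, 0.84, 0.61, 0.82]
std_errs = [0.44, 0.48, 0.39, 0.53, 0.34]
print(round(kendall_tau(effects, std_errs), 2))  # 0.0
```

A value near zero gives no indication that effect size scales with imprecision in these five studies.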
Fig. 2
Forest Plot
Fig. 3
Funnel plot

Data Synthesis and Subgroup Analysis

Results from the included studies were synthesized quantitatively, with statistical analyses conducted on all data regardless of heterogeneity. However, given the comparably small number of included studies (k = 5), research questions 2 and 3 were addressed only through narrative synthesis. Subgroup analyses were described based on intervention modality, setting (i.e., general education classroom, special education classroom, home environment), age group (i.e., elementary, middle, high school), and task type.

Results

Descriptions of Studies

The five studies included fifteen total students in elementary and middle school. No studies were found that involved high school students. All participants met the inclusion criteria of being identified with ASD. Race was reported for most participants, with 54% identified as White, 20% as Black or African American, 13% as Hispanic or Latinx, and the remaining 13% not reported. This demographic breakdown, especially the representation of White students (54%), deviates from the Fall 2022 U.S. public school student population (National Center for Education Statistics, 2024), which was 42% White, 15% Black or African American, and 29% Hispanic or Latinx. Most participants were male (87%). Although the search was not limited by publication year, all included studies were published between 2016 and 2022.
All interventions used in these studies included a tech-based component, with three also incorporating a paper-based component. Two studies examined Readers Theater activities including story mapping and video self-modeling. The other three studies evaluated specific technologies: My Pictures Talk, an app that allows users to create customized social narratives with personal pictures; TinyTap, an educational platform with a variety of learning activities; and a SMART Notebook application for creating story maps and labels.
All five studies used single-case research designs, including multiple probe across participants (k = 2), multiple baseline across participants (k = 2), and adapted alternating treatment (k = 1) designs. All interventions were conducted in one-on-one arrangements with the interventionist and the student, with two studies taking place in the home setting. In all studies, reading comprehension was a primary dependent variable. The tasks used to assess the reading comprehension skills of the student included answering story element comprehension questions, literal and inferential questions, inferential questions only, general comprehension questions, and Maze reading activities.
The intervention described in Browder et al. (2017) aligns closely with the concept of PGR by utilizing an electronic story-mapping procedure to enhance reading comprehension in students with ASD, aged 8–10. The intervention incorporated graphic organizers, specifically a digital story map, to visually structure key narrative elements such as characters, setting, problem, solution, and outcome. By focusing on the story map, the intervention highlighted the shared structural patterns found in texts of the same genre and emphasized the key relationships between essential aspects of the texts (Gardill & Jitendra, 1999). Additionally, the intervention employed iPad-based technology, which allowed students to interact with the story map through touch-based responses, audio prompts, and written inputs. The study found that students demonstrated significant improvements in answering comprehension questions after using the story map, which suggests that structured visual supports can bridge comprehension gaps for learners with ASD.
Kouo and Visco (2021) explored the effectiveness of technology-aided instruction and intervention in improving inferential reading skills for middle school students with ASD. The interventions tested included TinyTap, an interactive educational app; instructional videos; and traditional graphic organizers. TinyTap provided an interactive visual support system in which students sorted information into categories, such as background knowledge and text clues. The video intervention, on the other hand, presented a dynamic visual representation of inference-making, illustrating how background knowledge and text clues interact. The graphic organizer condition, a more traditional PGR approach, required students to manually categorize textual information into labeled boxes that represent a structured but static visual format. Among these interventions, TinyTap proved to be the most effective in enhancing inferential reading skills by outperforming both videos and graphic organizers.
Drill and Bellini (2022) and Schatz (2017) examined the impact of a multi-component intervention—Readers Theater, story mapping, and video self-modeling—on narrative reading comprehension in middle school students (Grades 6–8) with ASD. Both studies shared the same sample population and implemented identical interventions, each designed to align with the criteria of PGR by providing structured, visual, and performative supports. Specifically, Readers Theater engaged students in expressive reading and role-playing to foster inferential thinking through perspective-taking. Story mapping utilized graphic organizers to visually structure narrative elements, whereas video self-modeling allowed students to observe their own performances.
Despite the identical intervention designs, the two studies differ in their outcome measures. Drill and Bellini (2022) reported on the Comprehension Quiz Protocol (CQP), a direct measure of narrative comprehension assessed through recall and inferential questions. In contrast, Schatz (2017) analyzed performance on the Maze task, a different measure of reading comprehension that emphasizes lexical fluency and syntactic awareness. The Maze task requires students to choose contextually appropriate words from multiple-choice options inserted into a passage, drawing on vocabulary knowledge and sentence-level understanding (Fuchs & Fuchs, 1992; Kendeou et al., 2012). To avoid data duplication in the meta-analysis, only the unique dependent variable from Schatz (2017)—Maze—was included, as the CQP data were already reported in Drill and Bellini (2022).
Sartini (2016) demonstrated how explicit instruction combined with self-directed video prompting aligns with PGR to support reading comprehension in students with ASD. The intervention utilized graphic organizers to visually structure story elements based on ‘wh-’ questions and helped students create mental representations and enhance inferential reasoning. The use of an iPad application, My Pictures Talk, provided video prompts that model the process of filling out the graphic organizer to reinforce comprehension through multimodal learning. In addition, the adapted texts used in the study included multiple photos that support students in forming mental representations of sentences.

Model Fit

Across the five reviewed studies, the fixed- and random-effects models yielded nearly identical estimates: an effect size of 0.85, a standard error of 0.19, and a highly significant z-value of 4.521 (p < 0.0001), as shown in Table 2. Their confidence intervals [0.48, 1.22] also overlap completely. Notably, the heterogeneity tests revealed an I² of 0.00% and a non-significant Q statistic (p = 0.98); the random-effects model confirms this with a tau² estimate of 0. The fixed-effects model also showed lower information criteria (AIC = 3.15, BIC = 2.76) than the random-effects model (AIC = 5.04, BIC = 3.82). The fixed-effects model was therefore selected.
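These pooled estimates can be reproduced from the per-study Tau-U values and 95% confidence intervals reported in the Intervention Effects section. The sketch below is for illustration only (it is not the authors' original R analysis); standard errors are back-derived from the CIs assuming normality (SE = CI width / 3.92):

```python
import math

# Per-study Tau-U effect sizes and 95% CIs as reported in this meta-analysis
# (Browder et al., 2017; Kouo & Visco, 2021; Drill & Bellini, 2022;
#  Sartini, 2016; Schatz, 2017).
effects = [1.00, 0.95, 0.84, 0.82, 0.61]
cis = [(0.13, 1.87), (0.02, 1.89), (0.07, 1.61), (0.15, 1.49), (-0.42, 1.64)]

# Back-derive standard errors from the CIs (SE = CI width / (2 * 1.96)).
ses = [(hi - lo) / (2 * 1.96) for lo, hi in cis]

# Fixed-effects (inverse-variance) pooling.
weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se

# Heterogeneity: Cochran's Q and I-squared.
q = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100
# The chi-square survival function has a closed form for even df (df = 4 here).
p_q = math.exp(-q / 2) * (1 + q / 2)

print(f"pooled = {pooled:.2f}, SE = {pooled_se:.2f}, z = {z:.2f}")
print(f"Q = {q:.2f} (p = {p_q:.2f}), I2 = {i_squared:.1f}%")
```

Under these assumptions the sketch recovers the reported estimate of 0.85 (SE = 0.19), a z-value near 4.5, a non-significant Q (p ≈ 0.98), and I² = 0%, consistent with Table 2.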

Intervention Effects

The fixed-effects mean of 0.85 [0.48, 1.22] supports the overall moderate-to-strong effectiveness of PGR. All five analyzed studies show positive Tau-U results, with four of the five showing statistically significant positive effects (range = 0.82–1.00). Browder et al. (2017) reported the highest intervention effect, suggesting a large, positive impact. The lowest was reported by Schatz (2017), with a moderate effect size of 0.61 [−0.42, 1.64].
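Tau-U is a nonoverlap index for single-case data (Parker et al., 2011). The core nonoverlap component can be sketched as below; note that full Tau-U additionally corrects for baseline trend, which this simplified illustration omits, and the scores shown are hypothetical, not data from the reviewed studies:

```python
def tau_nonoverlap(baseline, intervention):
    """Tau: proportion of improving minus deteriorating baseline/intervention
    pairs (the nonoverlap core of Tau-U; the baseline-trend correction of
    full Tau-U is omitted in this simplified sketch)."""
    pos = sum(1 for a in baseline for b in intervention if b > a)
    neg = sum(1 for a in baseline for b in intervention if b < a)
    return (pos - neg) / (len(baseline) * len(intervention))

# Hypothetical comprehension-probe scores (correct answers per session):
baseline = [2, 3, 2, 3]
intervention = [5, 6, 6, 7]
print(tau_nonoverlap(baseline, intervention))  # complete nonoverlap -> 1.0
```

A Tau of 1.0 means every intervention-phase data point exceeds every baseline point; values near 0 indicate heavy overlap between phases.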

Subgroup Analysis

The five reviewed studies included different types of PGR, and some variability in PGR effects was observed across modalities. Browder et al. (2017) and Kouo and Visco (2021) used only technology-based PGR; the former reported an effect size of 1.00 [0.13, 1.87], and the latter 0.95 [0.02, 1.89], with moderate variability in effects. The other three studies combined tech-based and paper-based modalities. Drill and Bellini (2022) reported a high effect size of 0.84 [0.07, 1.61], whereas Schatz (2017), which used the same intervention, reported 0.61 [−0.42, 1.64]. Sartini (2016), which also used both modalities, reported an effect size of 0.82 [0.15, 1.49].
Variability in intervention effects was also observed when considering individual and contextual factors. Regarding age groups, three of the five studies (i.e., Browder et al., 2017; Kouo & Visco, 2021; Sartini, 2016) focused on elementary school students, whereas the remaining two (i.e., Drill & Bellini, 2022; Schatz, 2017) included middle school participants in residential settings. There were four task types: literal comprehension questions, inferential comprehension questions, combined literal and inferential questions, and Maze.
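The paper does not report pooled subgroup estimates, but as an illustration only, the same inverse-variance weighting can be applied within the two modality subgroups, with standard errors back-derived from the reported 95% CIs. With k = 2 and k = 3 studies per subgroup, these numbers are purely descriptive:

```python
import math

def pool(effects, cis):
    """Fixed-effects (inverse-variance) pooled mean from effects and 95% CIs
    (SEs back-derived as CI width / (2 * 1.96), assuming normality)."""
    ses = [(hi - lo) / (2 * 1.96) for lo, hi in cis]
    weights = [1 / se**2 for se in ses]
    mean = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    return mean, math.sqrt(1 / sum(weights))

# Technology-only PGR: Browder et al. (2017); Kouo & Visco (2021).
tech, tech_se = pool([1.00, 0.95], [(0.13, 1.87), (0.02, 1.89)])
# Mixed tech- and paper-based PGR: Drill & Bellini (2022); Sartini (2016);
# Schatz (2017).
mixed, mixed_se = pool([0.84, 0.82, 0.61],
                       [(0.07, 1.61), (0.15, 1.49), (-0.42, 1.64)])
print(f"tech-only: {tech:.2f} (SE {tech_se:.2f}); "
      f"mixed: {mixed:.2f} (SE {mixed_se:.2f})")
```

With so few studies per subgroup the confidence intervals of the two descriptive means overlap heavily, which is consistent with the paper's decision not to run formal subgroup tests.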

Publication Bias Assessment

Publication bias was assessed by examining funnel plot asymmetry using the rank correlation test with Kendall’s tau. The analysis yielded Kendall’s tau of 0.00 and a p-value of 1.00, which indicates no significant correlation between study effect sizes and their precision. This lack of correlation suggests that there is no systematic bias in the selection of studies; specifically, smaller studies are not disproportionately reporting extreme effects, which often signals publication bias. Although these results are reassuring, it is important to acknowledge that tests for funnel plot asymmetry may have limited power, particularly when the number of studies in the meta-analysis is small, as is the case in this study.
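The rank-correlation idea can be sketched as below: Kendall's tau between the reported effect sizes and their standard errors (back-derived from the CIs). Note this is a simplified form of the funnel-plot asymmetry test; the published Begg-type test correlates *standardized* effect deviates with variances, which this illustration does not do:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs / total pairs;
    tied pairs count toward neither."""
    conc = disc = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)

# Reported Tau-U effects; SEs back-derived from the 95% CIs.
effects = [1.00, 0.95, 0.84, 0.82, 0.61]
cis = [(0.13, 1.87), (0.02, 1.89), (0.07, 1.61), (0.15, 1.49), (-0.42, 1.64)]
ses = [(hi - lo) / (2 * 1.96) for lo, hi in cis]
print(kendall_tau(effects, ses))  # 0.0: no effect-size/precision association
```

A tau of 0.0, matching the reported value, indicates that concordant and discordant effect-size/precision pairs balance out exactly, i.e., no funnel-plot asymmetry is detectable in this small sample.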

Discussion

This meta-analysis examined the effectiveness of PGR in reading comprehension interventions for students with ASD. Findings indicate that although all studies demonstrated moderate to strong positive effects, there was notable variability in intervention outcomes. The fixed effects mean effect size of 0.85 [0.48, 1.22] suggests that, on average, PGR provides significant benefits for students with ASD. However, variations in effect sizes within and across studies emphasize the importance of factors such as intervention design and target skills.

RQ 1: Intervention Effect

Based on the five studies included in this meta-analysis, there is evidence that PGR interventions have a positive impact on reading comprehension for students with ASD. Four of the five studies demonstrated statistically significant positive results, indicating a consistent pattern of benefit across different types of PGR approaches. However, Schatz (2017) showed a moderate effect size of 0.61 [−0.42, 1.64], indicative of variability. Despite this, the overall trend suggests that PGR is beneficial for supporting reading comprehension development in students with ASD.
The study by Browder et al. (2017) had the largest effect size, 1.00 [0.13, 1.87], reflecting a strong impact of the iPad-based story-mapping intervention. Despite the strong average effect, the relatively large standard error of 0.442 and wide confidence interval indicate variability in individual responses to the intervention. This variability may be attributed to differences in participants' skill acquisition rates and baseline variability. Browder et al. (2017) reported that two of the three participants showed variable baselines in answering comprehension questions despite immediate increases in the intervention phases. Nonetheless, the p-value of 0.024 indicates that the intervention effect is statistically significant and supports its efficacy in improving reading comprehension for students with ASD.
A notable comparison can be drawn between Drill and Bellini (2022) and Schatz (2017), two studies that differed only in the dependent variable. Drill and Bellini (2022) reported a moderate effect size of 0.84 [0.07, 1.61] with a p-value of 0.032, indicating the intervention had a statistically significant impact on reading comprehension. In contrast, Schatz (2017) had the lowest effect size of 0.61 [−0.42, 1.64], with a p-value of 0.243, suggesting the results were not statistically significant. This difference in findings shows that the choice of dependent variable (i.e., CQP or Maze) may have influenced how the intervention's effectiveness was measured. This is discussed in more detail in the following section.

RQ 2: Comparisons Across Modalities

While all five studies reported positive outcomes, some variability in intervention effects was observed across different modalities, age groups, settings, and task types. In terms of modality, both technology-based and paper-based PGR interventions were effective. This supports the broader understanding that various forms of visual support can be beneficial (Urton et al., 2024) and that effectiveness is not solely tied to technological sophistication. It also aligns with the broader meta-analysis by Guo et al. (2020), which found no significant difference in effect among different graphic types in general reading comprehension. Although the technology-based-only studies showed stronger effects (1.00 and 0.95), studies that combined both modalities showed more varied effectiveness (0.84, 0.82, and 0.61). This might suggest that paper-based PGR is more effective in this specific context; however, given the small sample size, the evidence remains equivocal.

RQ 3: Comparisons Across Age Groups, Settings, and Task Types

Differences in participant age (elementary vs. middle school) and comprehension task types (literal, inferential, or Maze) were noted, but again, subgroup analysis was not feasible due to the small sample size. The present study reported strong positive effects, in contrast to Guo et al.'s (2020) moderate effect of graphics on students' reading comprehension, which may be attributed to their inclusion of multiple graphic types (e.g., pictures, pictorial diagrams, flow diagrams) and a wider range of age groups. It is possible that the age differences are related to children's earlier and more frequent exposure to visuals in settings like elementary classrooms or early childhood education, compared to the less frequent use of varied visuals in secondary education. However, it is important to interpret these findings with caution, given the small sample size and the diverse range of prior experiences among the children included in the analysis.
In terms of comprehension task types, the largest effect was reported in Browder et al. (2017), which required students to answer literal comprehension questions aligned with one of the story elements (e.g., "How did Mr. Wolf warm up?", Browder et al., 2017). The second- and third-largest effects were found in Kouo and Visco (2021) and Drill and Bellini (2022), in which participants answered either inferential questions only (Kouo & Visco, 2021) or both literal and inferential questions (Drill & Bellini, 2022). Sartini (2016) assessed literal questions (i.e., who, what, when, and where questions) and did not include any inferential questions. The effectiveness of PGR across both literal and inferential comprehension tasks suggests that PGR can support both surface-level understanding and the deeper cognitive processes required for inferencing, which is a common challenge for students with ASD (Brown et al., 2013). This further supports the applicability of the iLC framework (Kendeou et al., 2020) in designing interventions for this population.
Whereas Drill and Bellini (2022) incorporated a broader range of comprehension measures, including both literal and inferential questions, Schatz (2017), with the same participants and intervention, used a Maze task that requires students to identify missing words in a passage, a skill that likely was not a focus of the intervention. The smaller effect size and lack of statistical significance in Schatz (2017) may indicate that PGR was less effective in improving the skills assessed by the Maze task. Answering literal and inferential questions appeared more consistent in measuring PGR impact than the Maze task, which showed greater variability and may be less sensitive to the specific benefits offered by PGR. This pattern also aligns with Danaei et al. (2020), where students using AR performed better in answering comprehension questions but not on other types of measures (i.e., retelling the story's characteristics). The findings highlight the importance of carefully selecting outcome measures when using PGR, as different tasks may yield varying degrees of sensitivity in detecting comprehension gains.

Limitations

Despite the promising findings, a few limitations must be acknowledged. First, the small sample size (k = 5) limits the ability to conduct robust subgroup analyses and restricts generalizability to broader ASD populations. Although the reviewed studies varied in intervention design, target populations, and instructional settings, the limited sample prevented statistical comparisons of task types and intervention modalities. Future research should aim to expand sample sizes and examine moderating variables more systematically to determine which intervention characteristics contribute most to reading comprehension improvement.
Second, the participants' racial distribution is not entirely representative of the broader U.S. public school student population. While race was not reported for 13% of participants, the available data suggest an overrepresentation of White students and an underrepresentation of other racial and ethnic groups compared to national averages. Future research would benefit from intentionally including a more diverse student population to enhance the generalizability of findings regarding interventions for students with ASD.
Another key limitation of this meta-analysis is its intentional focus on interventions using structured visual formats (i.e., PGR), which potentially limits the generalizability of the findings to other types of visual supports. As indicated by Guo et al. (2020), including broader categories of visual aids such as pictures, pictorial diagrams, flow diagrams, and other mixed types of graphic displays can provide more comprehensive insights into the effectiveness of visual supports for reading comprehension. Future studies should therefore explore a wider range of visual formats to better understand the overall impact of PGR on reading comprehension for students with ASD.
Finally, although this meta-analysis intended to examine differences in effect size based on instructional settings, comparisons across settings (e.g., small group vs. individualized instruction) were not possible due to the lack of variability in study settings. Future research should investigate whether group-based PGR interventions produce similar or different outcomes compared to one-on-one interventions, particularly in classroom environments where peer interaction and collaborative learning may play a role in comprehension development. Studies should also explore whether integrating multiple modalities (e.g., paper- and tech-based approaches) yields additive benefits. Understanding these factors can help practitioners optimize reading comprehension interventions and enhance educational outcomes for students with ASD.

Implications

The findings of this meta-analysis provide important insights for practitioners, instructional designers, and researchers working with students with ASD. The overall effectiveness of PGR suggests that visual support should be systematically integrated into reading comprehension interventions to enhance student engagement and understanding. Structured visual formats such as story maps can serve as effective scaffolds for improving reading comprehension in students with ASD, as shown in Schatz (2017) and Drill and Bellini (2022). These tools may help reduce cognitive load and provide clear structure, especially for students who benefit from concrete visual representations of abstract concepts (Schatz, 2017; Stringfield et al., 2011).
The variability in intervention effects underscores the need for individualized instructional approaches. Educators should consider student-specific factors such as cognitive processing styles, prior knowledge, and learning environment when designing reading comprehension interventions. For example, Blum (2019) found that students with prior experiences with comic strips produced higher levels of inferences when given graphic narratives. Implementing differentiated instruction and adapting visual supports based on students’ unique needs may result in more effective learning outcomes. Additionally, explicit instruction in how to use these supports effectively may enhance comprehension gains, particularly for students who struggle with independent application of visual strategies (Roux et al., 2014).
The subgroup analysis further highlights several important considerations for educational practice and research. Although tech-based interventions show promise, their effectiveness varies considerably, suggesting that implementation fidelity and instructional design play critical roles in determining success. In contrast, paper-based interventions appeared to yield more stable and consistent benefits, although further research is needed to confirm this pattern across larger samples. Elementary-aged students demonstrated stronger intervention effects than middle school students, which indicates that earlier exposure to PGR strategies may be more beneficial. These findings suggest that early intervention efforts should prioritize structured visual strategies to build foundational comprehension skills before students face more complex reading tasks.

Conclusion

In conclusion, this meta-analysis provides compelling evidence that PGR can significantly enhance reading comprehension for students with ASD. The overall moderate-to-strong effect size observed (Tau-U = 0.85) indicates that PGR interventions are beneficial, though the extent of effectiveness varies depending on modality, task type, and individual factors. Technology-based interventions showed strong results, but there was also notable variability, with paper-based methods demonstrating more consistent outcomes. The findings suggest that careful selection of appropriate visual supports tailored to the cognitive profiles and needs of students with ASD is critical. Ultimately, the integration of PGR into reading comprehension strategies holds promise for improving educational outcomes for students with ASD, particularly when designed to align with students’ unique learning profiles.

Declarations

Conflict of interest

The authors declare no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Title: The Use of Pictorial or Graphic Representation in Reading Comprehension Interventions for Students with Autism Spectrum Disorders: A Meta-Analysis
Authors: Seulbi Lee, Sarah Quinn, Yitong Jiang
Publication date: 11-09-2025
Publisher: Springer US
Published in: Journal of Autism and Developmental Disorders (Print ISSN: 0162-3257; Electronic ISSN: 1573-3432)
DOI: https://doi.org/10.1007/s10803-025-07014-4
go back to reference Altun, D. (2018). The efficacy of multimedia stories in preschoolers’ explicit and implicit story comprehension. Early Childhood Education Journal, 46(6), 629–642. https://doi.org/10.1007/s10643-018-0916-8CrossRef
go back to reference American Psychological Association PsycINFO. https://www.apa.org/pubs/databases/psycinfo
go back to reference Bethune, K. S., & Wood, C. L. (2013). Effects of wh-question graphic organizers on reading comprehension skills of students with autism spectrum disorders. Education and Training in Autism and Developmental Disabilities, 48(2), 236–244.
go back to reference Blum, A. (2019). Deficit or Difference? Assessing Narrative Comprehension in Autistic and Typically Developing Individuals: Comic vs. Text. https://uoregon.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/deficit-difference-assessing-narrative/docview/2299503028/se-2
go back to reference Blum, A. M., Mason, J. M., Kim, J., & Pearson, P. D. (2020). Modeling question-answer relations: The development of the integrative inferential reasoning comic assessment. Reading & Writing,33(8), 1971–2000. https://doi.org/10.1007/s11145-020-10026-4CrossRef
go back to reference Browder, D. M., Root, J. R., Wood, L., & Allison, C. (2017). Effects of a story-mapping procedure using the iPad on the comprehension of narrative texts by students with autism spectrum disorder. Focus on Autism and Other Developmental Disabilities, 32(4), 243–255. https://doi.org/10.1177/1088357615611387CrossRef
go back to reference Brown, H. M., Oram-Cardy, J., & Johnson, A. (2013). A meta-analysis of the reading comprehension skills of individuals on the autism spectrum. Journal of Autism and Developmental Disorders, 43(4), 932–955. https://doi.org/10.1007/s10803-012-1638-1CrossRefPubMed
go back to reference Cain, K., & Oakhill, J. V. (1999). Inference making ability and its relation to comprehension failure in young children. Reading and Writing, 11, 489–503. https://doi.org/10.1023/A:1008084120205CrossRef
go back to reference Cain, K., Oakhill, J. V., Barnes, M. A., & Bryant, P. E. (2001). Comprehension skill, inference-making ability, and the relation to knowledge. Memory & Cognition,29, 850–859. https://doi.org/10.3758/BF03196414CrossRef
go back to reference Cohn, N. (2018). Visual Language theory and the scientific study of comics. In J. Wildfeuer, A. Dunst, & J. Laubrock (Eds.), Empirical comics research: Digital, multimodal, and cognitive methods (pp. 305–328). Routledge.
go back to reference Colliot, T., & Jamet, É. (2018). Does self-generating a graphic organizer while reading improve students’ learning? Computers and Education,126, 13–22. https://doi.org/10.1016/j.compedu.2018.06.028CrossRef
go back to reference Danaei, D., Jamali, H. R., Mansourian, Y., & Rastegarpour, H. (2020). Comparing reading comprehension between children reading augmented reality and print storybooks. Computers and Education, 153, 103900. https://doi.org/10.1016/j.compedu.2020.103900CrossRef
go back to reference Dexter, D. D., Park, Y. J., & Hughes, C. A. (2011). A meta-analytic review of graphic organizers and science instruction for adolescents with learning disabilities: Implications for the intermediate and secondary science classroom. Learning Disabilities Research & Practice,26(4), 204–213. https://doi.org/10.1111/j.1540-5826.2011.00341.xCrossRef
go back to reference Drill, R. B., & Bellini, S. (2022). Combining readers theater, story mapping and video self-modeling interventions to improve narrative reading comprehension in children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 52(1), 1–15. https://doi.org/10.1007/s10803-021-04908-xCrossRefPubMed
go back to reference Duke, N. K., & Pearson, P. D. (2009). Effective practices for developing reading comprehension. Journal of Education, 189(1–2), 107–122. https://doi.org/10.1177/0022057409189001-208CrossRef
go back to reference Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378–382. https://doi.org/10.1037/h0031619CrossRef
go back to reference Frith, U. (1989). A new look at language and communication in autism. British Journal of Disorders of Communication,24(2), 123–150. https://doi.org/10.3109/13682828909011952CrossRefPubMed
go back to reference Frith, U., & Happé, F. (1994). Autism: Beyond theory of mind. Cognition,50(1–3), 115–132. https://doi.org/10.1016/0010-0277(94)90024-8CrossRefPubMed
go back to reference Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21(1), 45–58.CrossRef
go back to reference Gardill, M. C., & Jitendra, A. K. (1999). Advanced story map instruction: Effects on the reading comprehension of students with learning disabilities. The Journal of Special Education, 33(1), 2–17, 28. https://doi.org/10.1177/002246699903300101
go back to reference Guo, D., Zhang, S., Wright, K. L., & McTigue, E. M. (2020). Do you get the picture?? A meta-analysis of the effect of graphics on reading comprehension. AERA Open. https://doi.org/10.1177/2332858420901696CrossRef
go back to reference Happé, F., & Booth, R. D. L. (2008). The power of the positive: Revisiting weak coherence in autism spectrum disorders. Quarterly Journal of Experimental Psychology, 61(1), 50–63. https://doi.org/10.1080/17470210701508731CrossRef
go back to reference Happé, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5–25. https://doi.org/10.1007/s10803-005-0039-0CrossRefPubMed
go back to reference Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2013). A standardized mean difference effect size for multiple baseline designs across individuals. Research Synthesis Methods, 4, 324–341. https://doi.org/10.1002/jrsm.1086CrossRefPubMed
go back to reference Henderson, L. M., Clarke, P. J., & Snowling, M. J. (2014). Reading comprehension impairments in autism spectrum disorders. L’Année Psychologique,114(4), 779–797.
go back to reference Hume, K., Wong, C., Plavnick, J., & Schultz, T. (2014). Use of visual supports with young children with autism spectrum disorders. In J. Tarbox, D. R. Dixon, P. Sturmey, & J. L. Matson (Eds.), Handbook of early intervention for autism spectrum disorders (pp. 375–402). Springer. https://doi.org/10.1007/978-1-4939-0401-3_15
go back to reference Institute of Education Sciences (n.d.). Education Research Database ERIC. Retrieved from https://ies.ed.gov/use-work/education-research-database-eric
go back to reference Kendall, M. G., & Gibbons, J. D. (1999). Rank correlation methods (5th ed.). Arnold.
go back to reference Kendeou, McMaster, K. L., Butterfuss, R., Kim, J., Bresina, B., & Wagner, K. (2020). The inferential language comprehension (iLC) framework: Supporting children’s comprehension of visual narratives. Topics in Cognitive Science,12(1), 256–273. https://doi.org/10.1111/tops.12457
go back to reference Kendeou, P. (2015). A general inference skill. In E. J. O’Brien, A. E. Cook, & R. F. Lorch, Jr. (Eds.), Inferences during reading (pp. 160–181). Cambridge University Press.
go back to reference Kendeou, P., Papadopoulos, T. C., & Spanoudis, G. (2012). Processing demands of reading comprehension tests in young readers. Learning and Instruction, 22(5), 354–367. https://doi.org/10.1016/j.learninstruc.2012.02.001CrossRef
go back to reference Kim, A. H., Vaughn, S., Wanzek, J., & Shangjin Wei. (2004). Graphic organizers and their effects on the reading comprehension of students with LD: A synthesis of research. Journal of Learning Disabilities, 37(2), 105–118. https://doi.org/10.1177/00222194040370020201CrossRefPubMed
go back to reference Kim, Y. S. G. (2016). Direct and mediated effects of language and cognitive skills on comprehension of oral narrative texts (listening comprehension) for children. Journal of Experimental Child Psychology, 141, 101–120. https://doi.org/10.1016/j.jecp.2015.08.003
go back to reference Kintsch, W. (1988). The role of knowledge in discourse comprehension: A constructive integration model. Psychological Review,95, 163–182.CrossRefPubMed
go back to reference Kintsch, W., & Rawson, K. A. (2005). Comprehension. In M. J. Snowling, & C. Hulme (Eds.), The science of reading. A handbook (pp. 209–226). Blackwell Publishing.
go back to reference Kouo, J., & Visco, C. (2021). Technology-aided instruction and intervention in teaching students with autism to make inferences. Focus on Autism and Other Developmental Disabilities, 36(3), 148–155. https://doi.org/10.1177/10883576211012597CrossRef
go back to reference Lajeunesse, M.J. (2021). Automated, semi-automated, and manual extraction of numerical data from scientific images, plot, charts, and figures. R package version 0.1. https://CRAN.R-project.org/package=juicr
go back to reference Loschky, L. C., Larson, A. M., Magliano, J. P., & Smith, T. J. (2015). What would jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension. PLoS One,10(11), e0142474. https://doi.org/10.1371/journal.pone.0142474CrossRefPubMedPubMedCentral
go back to reference Magliano, J. P., Larson, A. M., Higgs, K., & Loschky, L. C. (2016). The relative roles of visuospatial and linguistic working memory systems in generating inferences during visual narrative comprehension. Memory & Cognition,44(2), 207–219. https://doi.org/10.3758/s13421-015-0558-7CrossRef
go back to reference Magliano, J. P., Loschky, L. C., Clinton, J. A., & Larson, A. M. (2013). Is reading the same as viewing? An exploration of the similarities and differences between processing text- and visually based narratives. In B. Miller, L. Cutting, & P. McCardle (Eds.), Unraveling the behavioral, neurobiological, & genetic components of reading comprehension (pp. 78–90). Brookes Publishing Co.
go back to reference McIntyre, N. S., Solari, E. J., Gonzales, J. E., Solomon, M., Lerro, L. E., Novotny, S., Oswald, T. M., & Mundy, P. C. (2017). The scope and nature of reading comprehension impairments in school-aged children with higher-functioning autism spectrum disorder. Journal of Autism and Developmental Disorders, 47(9), 2838–2860. https://doi.org/10.1007/s10803-017-3209-yCrossRefPubMed
go back to reference McMaster, K. L., Kendeou, P., Kim, J., & Butterfuss, R. (2024). Efficacy of a technology-based early language comprehension intervention: A randomized control trial. Journal of Learning Disabilities,57(3), 139–152. https://doi.org/10.1177/00222194231182974CrossRefPubMed
go back to reference McNamara, D. S., & Magliano, J. (2009). Chapter 9 toward a comprehensive model of comprehension. In Psychology of learning and motivation. Elsevier Science & Technology,51, 297–384. https://doi.org/10.1016/S0079-7421(09)51009-2
go back to reference McVicker, C. J. (2007). Comic strips as a text structure for learning to read. Reading Teacher, 61(1), 85–88. https://doi.org/10.1598/RT.61.1.9CrossRef
go back to reference Miller, A. C., & Keenan, J. M. (2009). How word decoding skill impacts text memory: The centrality deficit and how domain knowledge can compensate. Annals of Dyslexia, 59(2), 99–113. https://doi.org/10.1007/s11881-009-0025-xCrossRefPubMedPubMedCentral
go back to reference National Center for Education Statistics (2024). Racial/Ethnic Enrollment in Public Schools. Condition of Education. U.S. Department of Education, Institute of Education Sciences. Retrieved July 20, 2025, from https://nces.ed.gov/programs/coe/indicator/cge
go back to reference Ness-Maddox, H. (2022). Nuff Said: Understanding comprehension processes and products for reading text and non-linguistic graphic narratives. [Unpublished doctoral dissertation]. College of Education & Human Development, Georgia State University.
go back to reference Nuske, H. J., & Bavin, E. L. (2010). Narrative comprehension in 4–7-year-old children with autism: Testing the weak central coherence account. International Journal of Language & Communication Disorders, 100824014249025. https://doi.org/10.3109/13682822.2010.484847
go back to reference Oakhill, J. (1984). Inferential and memory skills in children’s comprehension of stories. British Journal of Educational Psychology,54(1), 31–39. https://doi.org/10.1111/j.2044-8279.1984.tb00842.xCrossRef
go back to reference Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., & McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ (Online), 372, n160–n160. https://doi.org/10.1136/bmj.n160CrossRefPubMed
go back to reference Parker, R. I., Vannest, K. J., Davis, J. L., & Sauber, S. B. (2011). Combining nonoverlap and trend for single-case research. Tau-U Behavior Therapy, 42(2), 284–299. https://doi.org/10.1016/j.beth.2010.08.006CrossRefPubMed
go back to reference Perfetti, C., & Stafura, J. (2014). Word knowledge in a theory of reading comprehension. Scientific Studies of Reading, 18(1), 22–37. https://doi.org/10.1080/10888438.2013.827687CrossRef
go back to reference ProQuest (n.d.). ProQuest Dissertations & Theses Global. Retrieved from https://about.proquest.com/en/products-services/pqdtglobal/
go back to reference Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39(5), 368–393. https://doi.org/10.3102/1076998614547577CrossRef
go back to reference R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
go back to reference Roux, C., Dion, E., Barrette, A., Dupéré, V., & Fuchs, D. (2014). Efficacy of an intervention to enhance reading comprehension of students with high-functioning autism spectrum disorder. Remedial and Special Education, 36(3), 131–142. https://doi.org/10.1177/0741932514533998CrossRef
go back to reference Rozema, R. (2015). Manga and the autistic mind. English Journal,105(1), 60–68.CrossRef
Rumpf, A. L., Kamp-Becker, I., Becker, K., & Kauschke, C. (2012). Narrative competence and internal state language of children with Asperger syndrome and ADHD. Research in Developmental Disabilities, 33(5), 1395–1407. https://doi.org/10.1016/j.ridd.2012.03.007
Sartini, E. C. (2016). Effects of explicit instruction and self-directed video prompting on text comprehension of students with autism spectrum disorder [University of Kentucky]. https://doi.org/10.13023/ETD.2016.061
Schatz, R. B. (2017). Combining readers theater, story mapping and video self-modeling interventions to improve narrative reading comprehension in children with autism spectrum disorder [Indiana University].
Schlooz, W. A. J. M., & Hulstijn, W. (2014). Boys with autism spectrum disorders show superior performance on the adult embedded figures test. Research in Autism Spectrum Disorders, 8(1), 1–7. https://doi.org/10.1016/j.rasd.2013.10.004
Solari, E. J., Grimm, R. P., McIntyre, N. S., Zajic, M., & Mundy, P. C. (2019). Longitudinal stability of reading profiles in individuals with higher functioning autism. Autism, 23(8), 1911–1926. https://doi.org/10.1177/1362361318812423
Stringfield, S. G., Luscre, D., & Gast, D. L. (2011). Effects of a story map on accelerated reader post reading test scores in students with high-functioning autism. Focus on Autism and Other Developmental Disabilities, 26, 218–229. https://doi.org/10.1177/1088357611423543
Tárraga-Mínguez, R., Gómez-Marí, I., & Sanz-Cervera, P. (2021). Interventions for improving reading comprehension in children with ASD: A systematic review. Behavioral Sciences, 11(1), Article 3. https://doi.org/10.3390/bs11010003
Urton, K., Moeyaert, M., Nobel, K., Barwasser, A., Boon, R. T., & Grünke, M. (2024). Effects of graphic organizers on outcomes for students with disabilities: Three-level meta-analysis of single-case studies. Exceptionality, 33(1), 17–39. https://doi.org/10.1080/09362835.2024.2389080
Vannest, K. J., Parker, R. I., Gonen, O., & Adiguzel, T. (2016). Single case research: Web-based calculators for SCR analysis. Texas A&M University. Available from singlecaseresearch.org.
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03