The incorporation of gaming into American culture started at least in part with the introduction of Pong (arcade version, 1972; home version, 1975), arguably the first modern video game accessed through a computer. Four decades later, it is staggering to look at how far gaming has progressed from a moving green dot and two lines (Kent, 2001)! Besides innovations to the appearances and functionalities of games, accessibility has also dramatically increased. Today, video games are accessed through home computers (including massively multiplayer online role-playing games [MMORPGs] such as World of Warcraft), game consoles (e.g., Nintendo Wii, Sony PlayStation, or Microsoft Xbox), and handheld devices (e.g., Nintendo DS, cellular phones, or tablets), which provide users with access to video games at virtually any time and location. Among American households with children between the ages of 8 and 18, 93 % have access to a computer, 87 % have a game console, and 59 % own a handheld device (Rideout, Foehr, & Roberts, 2010). On average, American children spend at least an hour each day playing games on a console or handheld device (Rideout et al., 2010).

Given the ubiquity of gaming in contemporary society, it is of great importance to take stock of its impact on human development (S. Anderson, 2012; Rideout et al., 2010). Development occurs in the context of purposeful activity that is engaging, interesting, and motivating (Rogoff, Radziszewska, & Masiello, 1995; Sugarman & Sokol, 2012; Tobach, Falmagne, Parlee, Martin, & Kapelman, 1997; Vygotsky, 1978). It is well established that cognitive processing is driven by the task at hand, with the type of information processing required by an activity being strengthened as a consequence of time spent engaged in the activity (Heathcote, Brown, & Mewhort, 2000; Newell & Rosenbloom, 1981). Thus, if people today spend a considerable amount of time engaged with video games, one would expect to see concomitant changes in their information-processing strategies (Greenfield, 1984). In the present study, we sought to quantify the impact of video-game exposure on cognitive processing by examining specific information-processing skills (auditory processing, executive functions, motor skills, spatial imagery, and visual processing) that are crucial to engagement with different sorts of video games (e.g., action games, puzzle games, and sports games), and consequently to inform some of the recent claims made to the public that video games can make people smarter (Bavelier, 2012; Hurley, 2012; Zichermann, 2011). Documenting the cognitive effects of video-game play is important, given reports of negative correlations between the amount of video-game play and academic achievement in children and adolescents (e.g., C. A. Anderson & Dill, 2000; Jackson, von Eye, Fitzgerald, Witt, & Zhao, 2011; Sharif & Sargent, 2006).

Our digitized cognitive evolution

Given the dramatic increase in the everyday use of technology over recent decades, interdisciplinary scholars have considered its impact on human cognition and development (e.g., Donald, 1991; Greenfield, 1984; Hunt, 2012). Much of modern society is organized around the navigation and use of cognitive artifacts to retrieve and synthesize information efficiently (Donald, 1991; Hunt, 2012). Video games are the most recent and obvious examples of such cognitive artifacts; video-game play places considerable demands on how people attend to, process, and appraise information, demands that make such games appear almost purpose-built for effective learning (Gee, 2007). Greenfield, Brannon, and Lohr (1994, p. 89) outlined these demands in commenting, “the player must not only interpret, but also mentally transform, manipulate, and relate dynamic and changing images.” Developing expertise with video games has been hypothesized to sharpen a variety of cognitive skills in ways that may be beneficial for education, rehabilitation, or professional development, in fields as diverse as surgery and flight instruction (Durlach, Kring, & Bowens, 2009; Greenfield, 2009; Newcombe, 2010; Papastergiou, 2009; Rosser et al., 2007).

Beyond these cognitive demands, the effects of video-game play might also result in part from the socio-cognitive components of such play. Many popular games (e.g., Guitar Hero, Rock Band, or Nintendo Wii Sports) are mimetic simulations of real-life activities through virtual reality. Mimesis lends itself to the rehearsal and refinement of skills, as is often seen in the “make-believe” games that children universally seem to enjoy (Donald, 1991; K. Nelson, 1996). Video games are an integral part of contemporary youth culture, with a powerful capacity to draw in players’ attention and elicit intense emotional responses. Peer interactions involving video-game play often have strongly collaborative as well as competitive qualities, which can motivate and reinforce learning (Gee, 2007; Shaffer, Squire, Halverson, & Gee, 2005; Steinkuehler, 2004).

Video games are complex task environments requiring a number of distinct cognitive skills for successful play (Bavelier, Green, Pouget, & Schrater, 2012). Consider the game of Tetris, arguably one of the least complicated puzzle games on the market, as an illustration of the complex information processing required for playing even a relatively simple video game. During Tetris play, polygons of different shapes (visual processing) fall one at a time from the top of the screen to the bottom, while the player quickly uses keypresses (motor skills) to change the orientations of the shapes (spatial imagery) to fit them together without gaps at the bottom of the screen; successfully completed rows of polygons then disappear. While the current polygon is falling, the next in line to fall is displayed at the bottom right (peripheral visual processing), so that the player can make an informed decision (executive functioning) to plan the orientation and placement of the current polygon in preparation for the next. If the stack of polygons reaches the top of the screen, the game is lost. This example illustrates the multiple cognitive operations that a player must undertake to stack polygons efficiently in a two-dimensional space. Players maneuvering, communicating, planning, and executing within three-dimensional spaces arguably face even greater demands in processing task-relevant information, and some game variants put additional time pressure on players to complete patterns. Repeated exposure to such task demands through practice would be expected to improve the multiple aspects of information processing that are directly related to game performance, but it is less clear whether such improvements would transfer to nongaming contexts (Owen et al., 2010; Shipstead, Redick, & Engle, 2012).

Whether video-game training transfers broadly in order to enhance cognitive skills is a question that is fiercely debated, with some researchers making broad claims that “what video games teach is the capacity to quickly learn to perform new tasks—a capability that has been dubbed ‘learning to learn’ ” (Bavelier et al., 2012, p. 392). In contrast, other researchers have found limited evidence of transfer to untrained tasks (Lee et al., 2012), which suggests that the improvements in information processing associated with video-game play may be due to targeted training involving specific skills.

Distinguishing games from other digital artifacts

Given the variety of platforms and types of games available today, it is important to define what we consider to be a video game. Arguably, the definition can be as narrow as including only games played on a gaming console, or as broad as including all digital games—from Solitaire to Grand Theft Auto. Traditionally, video games have included only those games played on a console (Juul, 2010); however, with the growth of the Internet and with advances in microtechnology supporting game play on smartphones, tablets, and other portable devices, the view that video games must be played on a console has been challenged. In our effort to explore the effects of video-game play on information processing through meta-analysis, we addressed this issue by considering game type as a potentially moderating variable, with the range of games limited only by the extant literature. However, a variety of digital activities (e.g., Internet surfing, social networking, and texting) are outside the scope of our research, despite proposals that such activities may alter cognition by encouraging multitasking and divided attention (Foehr, 2006; Greenfield, 2009; Rosen, 2010).

The present investigation

Interest in the effects of video-game play on cognitive processing has led to a proliferation of empirical studies in recent decades (Green & Bavelier, 2006a; Spence & Feng, 2010). An early meta-analysis of the effects of action (violent) video-game play found enhancements in visuospatial cognition (Ferguson, 2007); this meta-analysis, however, included only seven published studies. Despite a burgeoning literature, with over 100 studies to date, no quantitative meta-analysis has measured the effect sizes of the alleged benefits of various types of video games in enhancing information-processing skills. In our study, information-processing skills were operationally defined as auditory processing, executive functions, motor skills, spatial imagery, and visual processing. To evaluate the impact of video-game play on information processing, two separate meta-analyses were conducted. This was necessary in order to distinguish studies that were quasi-experimental (correlational) in design and that sampled already practiced game players from those studies that were truly experimental, in which participants were randomly assigned to game training or control conditions.

For quasi-experimental studies that compared habitual gamers to nongamers or that compared groups defined by skill level in playing a particular game, causal ambiguities remain because of issues of self-selection. That is, one cannot determine whether individuals who have better information-processing skills are more likely to find video games enjoyable and/or may be more skillful in playing such games. Furthermore, the overwhelming majority of quasi-experimental studies have failed to use covert recruitment—that is, the participants know that they are being recruited for the study because of their experience playing video games, rather than being asked about their video-game habits only after participation. Without covert recruitment, video-game players might expect that they have a skill set that the researchers value and that will lead them to perform well on the experimental tasks. This expectation may enhance participants’ motivation, leading them to try harder, and possibly to perform better, than participants who do not identify as video-game players and do not share this expectation—a so-called Hawthorne effect (cf. Boot, Blakely, & Simons, 2011, for further critique of issues of study quality and the causality of transfer in the video-game literature). Establishing that video-game experience causally affects information-processing skills depends crucially on experimental designs in which randomly assigned participants are trained on a specific game and the outcomes are monitored relative to appropriate controls.

As the majority of the existing studies have been quasi-experimental, we first report findings from a meta-analysis of such studies, and then determine, through a second meta-analysis of true experiments, a more accurate estimate of the effect of gaming, free of such self-selection biases. In each meta-analysis, we examined several potential moderators of the effects of video-game exposure. Our first moderator was the domain of the information processing being measured. If video games can be utilized for training, in which domains might there be relevant applications? Wherever a specific outcome or skill has been sought within the literature, researchers have selected games that mirror that skill, such as using Tetris (a puzzle game) to enhance skills within the domain of spatial imagery (e.g., Quaiser-Pohl, Geiser, & Lehmann, 2006; Terlecki & Newcombe, 2005). To the extent that an appropriately structured game exists, potentially any cognitive skill might be trainable. We categorized the existing studies under five broad information-processing domains: auditory processing, executive functions, motor skills, spatial imagery, and visual processing. Given the considerable variety of methodologies and measures used across studies, we also considered moderators that addressed generalizability: the type of video game (genre), the comparison (control) condition, and the length of training (for true experiments).

In addition to game-related conditions, we also considered participant characteristics as possible moderators. First, we considered the ages of the participants, due to developmental differences and cohort effects that could affect outcomes. With respect to the quasi-experimental studies, it is important to keep in mind how the digital environment might vary for individuals of different ages. Children now grow up in a world saturated with digital technologies, whereas many adults still have to make a concerted effort to learn to utilize digital tools (Prensky, 2001). This difference in the degree to which children’s lives today are structured through interactions with technologies, relative to their parents’ early lives, might result in smaller differences between game-playing and nonplaying children relative to adults. It is also well established that younger individuals have greater neural plasticity than do older individuals (e.g., S. L. Andersen, 2003; C. A. Nelson, 2000); hence, one might expect effect sizes following assigned video-game training to be larger in children than in adults. Moreover, whether adults (including the elderly) show benefits of video-game training is of practical importance to the field of cognitive rehabilitation (e.g., Basak, Boot, Voss, & Kramer, 2008; Drew & Waters, 1986; Goldstein et al., 1997; Levi, 2012).

Gender was also considered as a possible moderator. Some researchers, educators, and parents have voiced concerns that boys spend too much time in video-game play, especially with regard to action games that often have violent content. This might not be as much of a concern for girls, because some evidence has indicated that girls may have less positive attitudes toward technology overall; girls may thus be less likely to enjoy video games or may prefer to play different types of video games than do boys (Hartmann & Klimmt, 2006; Homer, Hayward, Frye, & Plass, 2012; Lucas & Sherry, 2004; Walkerdine, 2007). Given concerns that some video games appeal more to males than to females due to their violent and sexualized contents (Ivory, 2006), we examined the extent to which the benefits associated with video-game play generalized across gender groups.

Due to concerns regarding the publication bias against null findings (e.g., Greenwald, 1975; Hubbard & Armstrong, 1997), we also compared effect sizes across published and unpublished studies, using journal impact factors to compare high- and low-impact publications. This allowed us to evaluate the extent to which high-impact journal articles report larger effects than do other publication types (cf. Ledgerwood & Sherman, 2012, for concerns that the high-impact brief-article formats may exacerbate problems with publication bias). We also included publication year as a moderator to evaluate whether effect sizes have changed as video games have increased in sophistication and popularity over the years and as studies of their effects have proliferated. Finally, research group was examined as a potential moderator, due to a small number of research groups contributing the majority of studies to each meta-analysis. In particular, we were concerned that publicity surrounding research findings might increase participants’ awareness of the research hypotheses. Thus, to evaluate the extent to which effect sizes might be influenced by lab reputation and publicity, we compared the effect sizes for each research group contributing three or more studies, with all other research groups being designated as “other.”

Method

Literature search

Studies examining the effects of video-game play on aspects of information processing were identified by means of a variety of sources. The majority of studies were identified through an EBSCO multidatabase search and PsycInfo, with computerized searches using the search term video game being conducted periodically from November 2009 to August 2012; the final search, in August 2012, yielded 2,107 articles about video games in PsycInfo. In addition to the computerized literature search, studies were identified from citations in reviews and other relevant articles. To locate unpublished studies, we searched the Dissertation Abstracts International database. Where e-mail addresses were available, the first authors of the studies included in each meta-analysis were contacted with requests for any additional, unpublished data.

On the basis of the available abstract, studies were preliminarily selected for their use of commercially available video games and information-processing outcome measures. After a close reading, a study was included in one of the two meta-analyses if it met one of the following criteria: (1) it was a quasi-experimental study utilizing self- or parental-report data to determine the effects of high versus low frequencies of game play on variables of interest; (2) it was a quasi-experimental study in which the participants had experience with a target video game or video-game type and in which skill level was correlated with variables of interest; (3) it was a true experiment comparing participants who received video-game training (whether on action, nonaction, or puzzle games) to a control group with no video-game training; (4) it was a true experiment comparing participants who received action-game training to a group who received nonaction- or puzzle-game training as a control; or (5) it was a true experiment utilizing a within-subjects design, with pre- and posttests assessing the effects of video-game training. Exclusion criteria precluded the use of several potentially relevant studies, such as articles with insufficient or unclear statistical results (e.g., not providing standard deviations, or presenting results split by gender rather than by video-game experience); in such cases, the authors were contacted with requests for data when possible. Studies based on qualitative data alone were not included.

Units of analysis and data sets

Many of the articles considered included multiple experiments with numerous comparisons. If each such article were treated as a single study, it would carry greater weight in the overall computation of effect size than would articles with fewer experiments and/or comparisons. For this reason, both experiments and comparisons were examined as separate units of analysis. With experiments as the unit of analysis, each experiment reported in an article was treated as an independent contribution; with comparisons as the unit of analysis, each statistical comparison was counted as an independent contribution. Although including multiple comparisons reported for a single sample violates assumptions of independence, analysis at this level was required to test for the effects of most of the moderating variables.

Studies were designated as being either “quasi-experimental studies” or “true experiments”: As noted, the quasi-experimental studies compared habitual gamers with nongamers, or compared individuals with varying levels of gaming skill on variables of interest. For such studies, researchers gathered data through self- or parental report of amounts and types of video-game play, or compared players on the basis of levels of game performance. Participants were asked via surveys or questionnaires about their video-game playing experiences and were grouped on the basis of these answers. In the case of young children, parental reports were used (e.g., Li & Atkins, 2004). True experiments were designed to examine the effects of assigned training with a specific video game on variables of interest. Any study that included a video-game training condition in the design was coded as a true experiment; these included both between-subjects and within-subjects designs. Table 1 lists all of the quasi-experimental studies that were included, and Table 2 lists all of the true experiments. The overall values of d provided in the tables are experiment-level effect sizes (i.e., weighted averages using the pooled standard deviations). Supplemental tables with all of the comparisons are available online.

Table 1 Samples included in the video-gaming meta-analysis of studies with a quasi-experimental design
Table 2 Samples included in the video-gaming meta-analysis of studies with true experiments

On the basis of this classification, two separate meta-analyses were run. This was necessary not only because of concerns when interpreting causality, but also due to the fact that the control/comparison conditions and other moderators (i.e., length of video-game training) were not comparable across quasi-experimental studies and true experiments.

Variables coded from studies as possible moderators for the meta-analyses

An advantage of quantitative meta-analytic techniques is the ability to examine potential moderators of effects with ample statistical power. In the present meta-analyses, the following potential moderators were investigated: (1) information-processing domain, (2) target game type, (3) type of control group, (4) length of training (true experiments only), (5) age, (6) gender, (7) publication type, (8) publication year, and (9) research group. Whenever significant heterogeneity was indicated (Johnson, 1989), levels of the moderators were examined further. Post-hoc p values were used to determine which levels of the moderator in question led to statistically different effects. The moderators length of training, publication type, publication year, and research group were examined using studies as the unit of analysis; all of the other moderators were examined using comparisons as the unit of analysis, due to the range of conditions (e.g., different age groups) tested within a single study.

Information-processing domain

Although narrower classifications were possible, we coded five main domains of information processing in order to ensure sufficient sample sizes in each. Comparisons were categorized as one of the following: Auditory processing comprised auditory discrimination tasks (e.g., phoneme identification, tone location). Executive functions comprised executive function batteries, dual/multitasking, inhibition tasks (e.g., Stroop, Simon, flanker), intelligence tests (e.g., WAIS, TONI), task switching, and working/short-term memory measures. Motor skills comprised measures of gross and fine motor skills and hand–eye coordination (e.g., flight control, surgery, golf putting, rotary pursuit). Spatial imagery comprised measures of the visual–spatial manipulation of images (e.g., mental rotation, card folding, map tasks). Visual processing comprised measures of visual perception (e.g., target identification, change detection, multiple-object tracking, peripheral vision).

Target game type

For the quasi-experimental studies, we coded the game type used for distinguishing habitual players from nonplayers, or for evaluating the skill levels of players. For true experiments, we coded the game type used for the video-game training condition. Game type was coded by genre as action/violent, mimetic, nonaction, puzzle, or nonspecific (quasi-experimental only). Action/violent games comprised shooter games such as Medal of Honor and Unreal Tournament. Mimetic games, in which the player mimics the action on the screen, comprised Wii games such as Wii Sports and Wii Fit. Nonaction games comprised educational games, sports games, and simulation games such as Word Whomp, Mario Kart, and The Sims 2. Puzzle games comprised Tetris and its variations. Some studies provided the name of the game used in the study, but very often, especially in quasi-experimental studies involving self-report, only the genre was provided by the authors. In a number of the quasi-experimental studies, even game genre was unspecified, and participants were categorized as video-game players and nonplayers. These were coded as Nonspecific.

Type of control group

Consideration of the control group was coded differently for quasi-experimental studies and true experiments. For quasi-experimental studies, the type of control group was coded as nonplayer, other game, or low skill. Nonplayer applied to studies that compared habitual video-game players with nonplayers. Other game applied to studies that compared action video-game players with video-game players who reported playing mostly other types of video games. Low skill applied to studies in which participants played the same video game and were assigned to groups on the basis of their game performance. We included six studies that reported game playing as a continuous variable (based on skill level or time spent playing the game); these studies did not have a separate control group.

For true experiments with a between-subjects design, the control condition was either No game training or Other game training, which involved training with a different genre of games than the one on which the experimental group was trained (coded under the target game type). For experiments with a within-subjects design in which game training occurred between a pre- and a posttest, the control condition was Pretest.

Length of training

This moderator was used only for true experiments and was based on a median split of the length of game training across studies. As such, length of training was coded as 10 or more hours of game training or less than 10 hours of game training. The total hours of training were completed over periods ranging from a single session to several weeks, and the training length itself ranged from 15 min to 50 h. Three studies (Dorval & Pepin, 1986; Sanchez, 2012; G. Smith, Morey, & Tjoe, 2007) were excluded from this comparison for not having reported a specific amount of game training (the participants were given the game to play at home during the training period).

Age

Age was coded on the basis of each study’s reported sample characteristics. Due to the small number of relevant studies including children and adolescents, studies sampling participants with a mean age spanning 3–17 years were coded as Youth. To capture the large number of studies employing university subject pools, studies with a mean age spanning 18–22 years were coded as Young adult. Studies with a mean age spanning 23–54 years were coded as Adult, and those with a mean age of 55 years or greater as Older adult. (Note that all of the studies with Older adult groups were true experiments.) All adult samples, including students, for which a mean age was not reported were coded as Adult.

Gender

The gender codes were Female only, Male only, or Mixed. Coding for this category was dependent on whether samples were single-sex or mixed.

Publication type

Publication type utilized the impact factor of the journal for all published studies and a separate code for unpublished studies (conference proceedings and dissertations). The impact factor of each journal was located on the publication website; in all cases, we used the most recent impact factor (from 2009 or 2010). Top-tier studies were those published in a journal with an impact factor greater than or equal to 2.0; all other journal articles (impact factors below 2.0) and book chapters were coded as second-tier studies. Six studies were published in new journals that did not yet have an impact factor; these were also categorized as second-tier. Dissertations and conference proceedings were coded as Unpublished.

Publication year

Publication year was coded as early or late, split by the year 2000. The Early studies within our sample had publication dates from 1983 to 1997, and Late studies had publication dates from 2001 to 2012.

Research group

Because a small number of research groups contributed a large proportion of the studies, we included research group as a moderator. We coded the studies on the basis of authors working together within and across institutions. We then determined research group for the quasi-experimental studies and true experiments separately, according to whether research groups had three or more studies included in the analysis. The quasi-experimental studies coded as being from the Rochester group (26 studies) comprised studies by Bavelier, Dye, Green, Hubert-Wallander, R. Li, Mishra, and their collaborators; the Toronto group (five studies) comprised studies by Castel, Feng, Pratt, Spence, and their collaborators; the Duke group (three studies) comprised studies by Clark, Donahue, Mitroff, and their collaborators; and Other (37 studies) comprised all of the other research groups. True experiments coded as being from the Rochester group (nine studies) comprised studies by Bavelier, Cohen, Green, R. Li, and their collaborators; the Illinois group (four studies) comprised studies by Boot, Basak, Simons, and their collaborators; and Other (33 studies) comprised all of the other research groups.

Reliability on moderators

Two coders were responsible for coding the moderators, with 100 comparisons (17.6 %) coded by both coders independently. Any disagreements between the two coders were resolved through discussion. Intercoder reliability on all moderators was consistently high (κ = .82–1.0).
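For reference, κ corrects the raw proportion of coder agreement for the agreement expected by chance; in its standard form,

$$\kappa = \frac{p_o - p_e}{1 - p_e},$$

where $p_o$ is the observed proportion of agreement between the two coders and $p_e$ is the chance agreement implied by the coders’ marginal category frequencies. With hypothetical values $p_o = .95$ and $p_e = .40$, for example, $\kappa \approx .92$.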

Computation and analysis of effect sizes

Because of the range of methodologies used across studies, we used a random-effects model and conducted statistical analyses using the Comprehensive Meta-Analysis, Version 2 (CMA) program (Borenstein, Hedges, Higgins, & Rothstein, 2005). A random-effects model is appropriate when the participant samples and experimental factors across studies cannot be assumed to be functionally equivalent; under these conditions, it cannot be assumed that all studies estimate a single common true effect.
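In standard meta-analytic notation, the random-effects model treats each observed effect size $d_i$ as

$$d_i = \delta + u_i + \varepsilon_i, \qquad u_i \sim N(0, \tau^2), \qquad \varepsilon_i \sim N(0, v_i),$$

where $\delta$ is the mean true effect, $\tau^2$ is the between-study variance, and $v_i$ is the within-study sampling variance. Each study is then weighted by $w_i^{*} = 1/(v_i + \tau^2)$ in the pooled estimate,

$$\bar{d} = \frac{\sum_i w_i^{*} d_i}{\sum_i w_i^{*}},$$

so that, unlike under the fixed-effect weighting $w_i = 1/v_i$, between-study variability widens the confidence interval around the mean effect.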

Effect sizes

The computation formulae included within the CMA program allowed for direct entry of group statistics in order to calculate effect sizes for each test-by-test comparison. When the only statistics available were F values and group means, DSTAT software (Johnson, 1993) allowed us to convert those statistics to a correlation coefficient, r. We then entered r, together with the corresponding sample information, into CMA to calculate effect sizes.
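For a two-group comparison tested with a one-degree-of-freedom F, these conversions follow standard identities (the exact routines are implemented within DSTAT and CMA):

$$r = \sqrt{\frac{F(1, df_e)}{F(1, df_e) + df_e}}, \qquad d = \frac{2r}{\sqrt{1 - r^2}},$$

where $df_e$ is the error degrees of freedom and the second identity assumes approximately equal group sizes.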

Cohen’s d values, as calculated by the CMA program, are reported here as the measure of effect size. Cohen’s ds between 0.20 and 0.50 indicate small effects, ds between 0.50 and 0.80 indicate moderate effects, and ds greater than 0.80 indicate large effects (Cohen, 1988). Because effect size alone does not determine statistical significance, we determined the significance of the effect sizes on the basis of the p values of the resultant Z scores.
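For a two-group comparison, d is the standardized difference between the group means, scaled by the pooled standard deviation (consistent with the experiment-level averaging described above):

$$d = \frac{\bar{X}_1 - \bar{X}_2}{SD_{pooled}}, \qquad SD_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}.$$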

The Q statistic for the overall analyses was used in combination with the I² statistic to determine whether each sample (quasi-experimental studies and true experiments) showed a significant level of heterogeneity that would justify the subsequent violations of statistical assumptions of independence when examining moderators at the level of comparisons. The I² statistic further informs this decision by indicating the percentage of the variability within the sample that is due to true heterogeneity between studies, rather than to mere sampling error (Huedo-Medina, Sánchez-Meca, Marín-Martínez, & Botella, 2006). Q statistics were again employed, along with adjusted alphas, at the level of comparisons to determine whether levels of the moderators were statistically different from one another.
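Both statistics take standard forms (following Huedo-Medina et al., 2006): with k effect sizes $d_i$, study weights $w_i$, and weighted mean effect $\bar{d}$,

$$Q = \sum_{i=1}^{k} w_i \left(d_i - \bar{d}\right)^2, \qquad I^2 = \max\!\left(0, \frac{Q - df}{Q}\right) \times 100\,\%,$$

where Q is distributed approximately as $\chi^2$ with $df = k - 1$ under the null hypothesis of homogeneity, so that $I^2$ estimates the percentage of total variability attributable to true between-study heterogeneity rather than sampling error.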

Results

Effect sizes comparing video-game players to non-video-game players are reported both at the level of experiments (each study that included a different sample of participants) and at the level of comparisons (each statistical test). The potential moderators length of training (for true experiments), publication type, and publication year are reported at the level of experiments, and all of the other moderators are reported at the level of comparisons. Table 3 provides the overall effect sizes for each meta-analysis at the level of studies and comparisons and for both fixed- and random-effects models.

Table 3 Summary of effect sizes for overall effects at the level of studies and comparisons for studies with a quasi-experimental design and for true experiments (numbers of studies and comparisons are in parentheses)

Quasi-experimental designs

Overall effects

The first meta-analysis, that of quasi-experimental studies, comprised a total of 318 comparisons from 72 studies; see Table 1 for the list of studies and their corresponding effect sizes, and the top of Table 3 for the overall effects. Under the random-effects model, the 72 studies had a moderate to large mean effect size, d = 0.61, 95 % confidence interval (CI) [0.50, 0.72], indicating that video-game play was consistently and significantly associated with enhanced information-processing skills. However, the effects were highly heterogeneous across studies, Q(71) = 147.56, p < .001, I² = 51.88, and comparisons, Q(317) = 1,011.41, p = .001, I² = 68.66. The I² statistic indicates that about half of the variability within the sample was due to true heterogeneity between studies. Such heterogeneity is to be expected, given the diversity of the research methods, participant samples, and target skills. To address the issue of publication bias, fail-safe Ns were calculated at the level of studies, with αs set to .05, two-tailed. At the level of studies, 4,045 unpublished studies would be needed to reduce the effects to nonsignificance. As a recommended secondary index of publication bias (Ferguson & Brannick, 2012), Duval and Tweedie’s (2000) trim-and-fill method determined, on the basis of the fixed-effect model, that potentially 27 studies were missing from the left of the mean effect. Imputation of these studies would reduce the point estimate under the random-effects model to a small effect, d = 0.35, 95 % CI [0.22, 0.48].
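In its classic formulation (the variant implemented in CMA), the fail-safe N is the number of unretrieved null-result studies (averaging Z = 0) that would be needed to pull the combined Stouffer Z below the two-tailed critical value $z_c = 1.96$:

$$N_{fs} = \frac{\left(\sum_{i=1}^{k} Z_i\right)^2}{z_c^2} - k,$$

where the $Z_i$ are the Z scores of the k included studies.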

Moderators

Information-processing domain was analyzed at the level of comparisons, because many studies used multiple outcome measures across domains. The information-processing domain was found to moderate the effect sizes, Q(4) = 30.47, p = .001; see Table 4. Although enhanced processing was found across domains, the auditory and visual processing domains showed medium to large effects, whereas the other domains showed small effects. Post-hoc tests (adjusted α = .005 through Bonferroni correction) showed significantly larger effects for the visual-processing domain than for spatial imagery, Q(1) = 22.49, p = .001, motor skills, Q(1) = 15.65, p = .001, and executive functions, Q(1) = 9.84, p = .002. Effect sizes were marginally larger for executive functions than for spatial imagery, Q(1) = 4.13, p = .042. No other comparisons of domains approached significance.

Table 4 Summary of effect sizes moderated by information-processing domain at the level of comparisons

Target game type was analyzed at the level of comparisons and yielded a significant effect, Q(4) = 27.87, p = .001. See Table 5. The comparisons involving players of mimetic games yielded large effects, but this was based on only two comparisons from one study (Badurdeen et al., 2010). Comparisons involving players of action/violent games yielded medium effects, and comparisons involving players of the other game types yielded small effects. Post-hoc tests (adjusted α = .005) indicated larger effect sizes for players of action/violent games than for players of nonspecified game types, Q(1) = 17.72, p = .001, and puzzle games, Q(1) = 11.61, p = .001, and marginally larger effects in comparison to nonaction games, Q(1) = 5.91, p = .015. Post-hoc tests indicated marginally larger effect sizes for players of mimetic games than for players of nonaction games, Q(1) = 5.67, p = .017, puzzle games, Q(1) = 5.45, p = .020, and nonspecified game types, Q(1) = 5.54, p = .019.

Table 5 Summary of effect sizes moderated by target game type at the level of comparisons

Type of control group was also analyzed at the level of comparisons and yielded a significant effect, Q(2) = 18.42, p = .001. See Table 6. Post-hoc tests (adjusted α = .017) confirmed significantly larger effects for the nonplayer control conditions than for other-game control conditions, Q(1) = 16.84, p = .001, and marginally larger effects than for skill-level control conditions, Q(1) = 4.67, p = .031.

Table 6 Summary of effect sizes moderated by type of control group at the level of comparisons

Age was examined at the level of comparisons and was also found to moderate the effects, Q(2) = 45.97, p = .001; see Table 7. Although the effect sizes indicated that all age groups of gamers showed enhanced information processing, post-hoc tests (adjusted α = .017) indicated that the effect sizes for comparisons involving youths (3–17 years) were significantly smaller than those for comparisons involving young adults (18–22 years), Q(1) = 22.22, p = .001, or adults (23–54 years), Q(1) = 38.55, p = .001. The effect sizes for comparisons involving young adults and adults did not differ.

Table 7 Summary of effect sizes moderated by age at the level of comparisons

Gender was analyzed at the level of comparisons, as some studies included both males and females but subsequently reported the results split by gender. Gender was found to moderate effect sizes, Q(2) = 35.00, p = .001; see Table 8. Whereas the overall effects for male-only and mixed groups were significant, the effect found for studies sampling only females was not. However, the effect for female-only groups was based on only four comparisons from two studies (Feng, Spence, & Pratt, 2007; Quaiser-Pohl et al., 2006). Post-hoc analyses (adjusted α = .017) showed significantly larger effect sizes for comparisons involving male-only groups than for comparisons involving female-only groups, Q(1) = 15.02, p = .001, and mixed groups, Q(1) = 28.62, p = .001.

Table 8 Summary of effect sizes moderated by gender at the level of comparisons

Publication type was analyzed at the level of studies and yielded a significant effect, Q(2) = 10.65, p = .005, with large effects being reported in top-tier journals, and small effects being reported in second-tier journals/book chapters and unpublished dissertations; see Table 9. Post-hoc tests (adjusted α = .017) confirmed larger effects for studies in top-tier journals than for those in second-tier journals/book chapters, Q(1) = 8.88, p = .003, or unpublished dissertations, Q(1) = 7.09, p = .008. The effect sizes for studies in second-tier journals/book chapters and for unpublished dissertations did not differ.

Table 9 Summary of effect sizes moderated by publication type at the level of studies

An analysis examining publication year at the level of studies showed no difference in effect sizes for studies published prior to or after 2000, Q(1) = 0.010, p = .922.

As a further analysis at the level of studies, we examined effect sizes as a function of research group. For the quasi-experimental studies, three research groups contributed three or more studies (Rochester, Toronto, and Duke); see Table 10. The analysis yielded a significant effect of research group, Q(3) = 12.53, p = .006, with large effects for studies conducted by the Rochester and Toronto groups, medium effects for studies conducted by the Duke group, and small effects for studies conducted by other groups. Post-hoc tests (adjusted α = .008) indicated significantly larger effects for the Rochester studies than for other studies, Q(1) = 7.08, p = .008, and significantly larger effects for the Toronto studies than for other studies, Q(1) = 7.59, p = .006. The effect sizes for studies conducted by the Rochester, Toronto, and Duke research groups were not significantly different from each other.

Table 10 Summary of effect sizes moderated by research group at the level of studies

True experimental designs

Overall effects

The second meta-analysis, of true experiments, comprised a total of 251 comparisons from 46 studies. See Table 2 for the list of studies and their corresponding effect sizes; see the bottom of Table 3 for the overall effects. In the random-effects analysis, the 46 studies had a small-to-medium mean effect size, d = 0.48, 95 % CI [0.35, 0.60], indicating that video-game play enhanced information-processing skills. Marginal heterogeneity was found at the level of studies, Q(45) = 59.04, p = .078, I² = 23.78, but heterogeneity was significant at the level of comparisons, Q(250) = 670.77, p = .001, I² = 62.73. The former I² statistic indicates that about 24 % of the variability at the level of studies was due to true heterogeneity between the studies. To address the issue of publication bias, fail-safe Ns were calculated at the level of studies, with αs set to .05, two-tailed. At the level of studies, 946 unpublished studies would be needed to reduce these effects to nonsignificance. Our secondary index (trim-and-fill; Duval & Tweedie, 2000) estimated that ten studies were potentially missing from the left of the mean effect. Imputation reduced our point estimate to a small effect, d = 0.37, 95 % CI [0.23, 0.51].

Moderators

Information-processing domain, analyzed at the level of comparisons, was found to moderate effect sizes, Q(4) = 26.65, p = .001; see Table 4. Effect sizes were large for motor skills, small for auditory processing, spatial imagery, and visual processing, and negligible for executive functions. The effect sizes were significantly greater than 0 for all domains except for auditory processing, which comprised only two comparisons from one study (Green, 2008). Post-hoc tests (adjusted α = .005) revealed significantly smaller effects for executive functions than for motor skills, Q(1) = 21.99, p = .001, and spatial imagery, Q(1) = 12.91, p = .001. Effects were marginally greater for motor skills in comparison to visual processing, Q(1) = 7.38, p = .007, and spatial imagery, Q(1) = 6.82, p = .009. No other comparisons approached significance.

Target game type moderated effects in the true experiments, Q(3) = 28.06, p = .001; see Table 5. Post-hoc tests (adjusted α = .008) indicated significantly larger effects for mimetic game training than for action/violent game training, Q(1) = 23.56, p = .001, or for puzzle game training, Q(1) = 15.63, p = .001, and marginally larger effects in comparison to nonaction game training, Q(1) = 5.67, p = .017. Effects were significantly larger for nonaction game training than for action/violent game training, Q(1) = 7.00, p = .008.

Type of control group failed to moderate findings for the true experiments, Q(2) = 2.08, p = .354, with comparable small effects being observed across comparisons involving no game training, other game training, and pretest (within-subjects design) conditions.

Length of training, at the level of studies, also failed to moderate the effects of true experiments, Q(1) = 0.21, p = .647, with comparable results for studies with less than 10 versus 10 or more hours of training.

Age significantly moderated effect sizes in the true experiments, Q(2) = 14.41, p = .002. See Table 7. Whereas youth, young-adult, and adult groups showed small effects, older-adult groups showed medium effects. Post-hoc tests (adjusted α = .008) indicated that effect sizes were significantly larger for older-adult than for young-adult, Q(1) = 13.67, p = .001, or adult, Q(1) = 8.57, p = .003, groups. No other age groups differed.

Effect sizes for the true experiments did not vary by gender, Q(2) = 0.67, p = .714. See Table 8. Thus, female-only groups yielded effects similar to those of male-only groups, suggesting that video-game training benefited both genders equivalently.

Publication type (at the level of studies) yielded a significant effect, Q(2) = 5.80, p = .049; see Table 9. Whereas published articles yielded medium effects, effect sizes were not significant for the unpublished dissertations. Post-hoc tests (adjusted α = .017) showed smaller effects for unpublished dissertations relative to top-tier journal articles, Q(1) = 5.77, p = .016, and marginally smaller effects relative to second-tier journal articles/book chapters, Q(1) = 4.30, p = .038. The effect sizes did not differ for top-tier journals and second-tier journals/book chapters.

As with the quasi-experimental studies, publication year failed to moderate the effects of the true experiments, Q(1) = 0.22, p = .642.

As a further analysis, we examined the results as a function of research group. For true experiments, two research groups contributed three or more studies (Rochester and Illinois); see Table 10. Medium effect sizes were observed for studies conducted by the Rochester group and for studies conducted by other research groups. In contrast, effect sizes for the studies conducted by the Illinois group were negligible and failed to reach statistical significance. However, despite these trends, the effect of research group was not statistically significant, Q(2) = 5.51, p = .064.

Discussion

In two meta-analyses, we examined the impact of video-game play on information-processing skills, with one meta-analysis conducted on quasi-experimental studies of habitual gamers, and the other meta-analysis on true experiments involving video-game training. Both meta-analyses under random-effects models indicated significant effects of video-game experience on information processing, with moderate effects at the level of studies, and small effects at the level of comparisons. For quasi-experimental studies, effect sizes were moderated by information-processing domain, target game type, type of control group, age, gender, publication type, and research group. For true experiments, the effect sizes were moderated by information-processing domain, target game type, age, and publication type. The moderators and nonmoderators across analyses are discussed below.

Implications of moderators of video-game effects

A closer look at effects within the domain of executive functions

Information-processing domain moderated the findings of both quasi-experimental studies and true experiments. Among the quasi-experimental studies, moderate to large effects were observed in the auditory- and visual-processing domains, whereas small effects were observed in the other domains (executive functions, motor skills, and spatial imagery). However, a different pattern emerged for true experiments, with larger effects for motor skills than for other domains, especially executive functions, which showed negligible effects of video-game training. These results confirm the robustness of video-game training effects across domains, with the exception of the executive functions.

The larger effects in quasi-experimental studies than in true experiments for executive functions may have been due to habitual video-game players being drawn to games that reinforce their existing abilities. This interpretation is consistent with the pattern of results obtained by Jaeggi, Buschkuehl, Jonides, and Shah (2011) in a widely cited study on increased general intelligence associated with video-game play. Because that experimental study utilized a custom-made video game, as opposed to a commercial game, to train nonverbal intelligence, it was not included in the present meta-analysis. Notably, whereas the comparison of the video-game training and control groups failed to yield a significant difference in nonverbal-intelligence improvements, skill at playing the video game correlated with increased nonverbal intelligence among the participants assigned to the video-game training condition.

Given the recent null findings of a large-scale study involving training of general intelligence (Owen et al., 2010), and two recent reviews documenting the ineffectiveness of working memory training (Melby-Lervåg & Hulme, 2013; Shipstead et al., 2012), we ran follow-up analyses of studies in the executive-function domain using subskill as a moderator; see Table 11. The executive-function subskills were coded as Executive-function battery, Dual/multitasking, Inhibition, Intelligence, Task switching, and Working/short-term memory. Overall, executive-function subskills did not moderate the results of true experiments, Q(5) = 4.69, p = .455, with only inhibition tasks showing a significant (small) effect of training (d = 0.39, 95 % CI [0.15, 0.63], p = .001). That is, all other aspects of executive functions failed to show enhancements through video-game training.

Table 11 Summary of effect sizes for executive functions moderated by sub-skill at the level of comparisons

In contrast, executive-function subskill was found to moderate the results of the quasi-experimental studies, Q(4) = 18.51, p = .001, with moderate effects observed for dual/multitasking and task switching, and small effects for the other subskills. Post-hoc tests (adjusted α = .005) indicated significantly larger effect sizes for dual/multitasking than for inhibition, Q(1) = 12.09, p = .001, intelligence tasks, Q(1) = 10.59, p = .001, and working/short-term memory tasks, Q(1) = 8.95, p = .003. This pattern of discrepancies in the results for the quasi-experimental studies, in comparison to the true experiments, especially for the subskills of dual/multitasking and task switching, suggests the possibility of Hawthorne effects in the quasi-experimental studies—wherein habitual game players might expect to perform better in computer-based laboratory tasks, due to their selection for study on the basis of their prior gaming experience. Importantly, the lack of measurable benefits in true experiments clearly contradicts the claims of commercial, game-based brain-training programs (e.g., www.brainmetrix.com, www.lumosity.com, or www.happy-neuron.com) that training can improve reasoning, intelligence, and working memory. Interestingly, these training programs tend to utilize “gamified” versions of the tasks used to measure cognitive outcomes, such that task-specific practice effects would be expected.

Types of games and controls

Target game type moderated the findings for both quasi-experimental studies and true experiments. In quasi-experimental studies, action/violent game play showed moderate effects, and mimetic game play showed very large effects (two comparisons from one study), whereas nonaction, puzzle, and nonspecified game play showed small effects. While these results suggest a pattern consistent with claims that action game play may be more beneficial for inducing changes in information processing than are other types of video-game play (Bavelier et al., 2012), one cannot claim causality—that game play enhances skills—because people who play action/violent games may be drawn to them on account of preexisting abilities. Given that most of the quasi-experimental studies focused on players of action/violent games (i.e., shooter games), which are the most popular commercial game type (Robinson, 2010), further research will be needed to evaluate the cognitive abilities of game players as a function of their game preferences. Relatedly, it would be of interest to investigate whether an exclusive focus on a specific type of video game has any beneficial or adverse consequences for information processing relative to varied game play.

In true experiments, action/violent game training was no more effective than game training utilizing nonaction or puzzle games, but mimetic games showed large effects. Note, however, that the majority of the mimetic game comparisons (18/21) came from a single study (Maillot, Perrot, & Hartley, 2012), which yielded especially large effects (overall d = 1.11). In the true experiments, researchers typically select specific games for training on the basis of features deemed relevant to information processing in the domain under study. For instance, experiments utilizing first-person shooter games have tended to focus on the visual-processing domain, whereas those utilizing puzzle games have tended to focus on the spatial-imagery domain. This suggests that the benefits of video-game training are closely tied to the specific cognitive demands of the games used in training, which are likely to be supported by task-specific patterns of neural activation (Levi, 2012; M. E. Smith, McEvoy, & Gevins, 1999). Unfortunately, to date, not enough studies have compared the effects of different game types within a single information-processing domain. It will be imperative for future studies to use crossover designs in which learners are trained on two different games and their abilities are assessed across multiple information-processing domains (e.g., Sanchez, 2012); ideally, such studies should utilize an additional control condition to allow for comparisons of each game type to a uniform baseline.

Type of control group was found to moderate the effects of quasi-experimental studies, but not of true experiments. In the quasi-experimental studies, comparisons with nonplayer control groups yielded larger effects than did comparisons based on other game play or skill level. The small effect size seen when comparing players of different types of games to one another supports the idea that any type of game play may alter information-processing skills. However, in true experiments, there was no moderating effect of control condition (i.e., studies utilizing “other-game” control groups did not yield smaller effects than did no-game controls), which appears to contradict the results from the quasi-experiments. This difference may be due to game players showing only a preference for, but not exclusive play of, a specific game type. Additionally, as noted above, in true experiments video games often are selected on the basis of features deemed relevant to performing the test tasks. For example, Cherney (2008) selected two video games, 3-D Antz Extreme Racing and Tetrus, that required spatial manipulation for successful game play (i.e., navigating through 3-D space in Antz and rotating blocks in Tetrus), and found that both video games improved mental-rotation skills relative to a control condition. Given obvious similarities between the games used in training and the test tasks, concerns remain that “no game training studies have taken the necessary precautions to avoid differential placebo effects across training conditions and outcome measures” (Boot et al., 2011, p. 3). That is, participants might have differential expectations of skill improvement following video-game training, depending on whether they perceive overlap in the skills used in training and testing. Thus, to advance the literature, future studies will need to take precautions to ensure that training and control groups have equivalent expectations regarding treatment effects.

Length of training and player characteristics

In true experiments, effect sizes were comparable across studies utilizing varying amounts of training (less than 10 h vs. 10 h or more), which suggests that learners quickly adapt their cognitive processes to the design features of specific games, and may not need extensive practice to accrue training benefits. Unfortunately, in our meta-analysis of quasi-experimental studies, we were unable to assess the impact of the number of hours of game play among habitual gamers. Understandably, it may be difficult for players to report accurately how often and for how long they have played different video games, and most research reports fail to provide such data.

To examine the effect of age, we divided studies into four categories: youth (3–17 years), young adult (18–22 years), adult (23–54 years), and older adult (55 years and older). In quasi-experimental studies, effect sizes comparing habitual players to nonplayers were significantly smaller for youths than for young adults and adults (there were no studies of older adults). In WEIRD (“Western, educated, industrialized, rich, and democratic”) societies like the United States (Henrich, Heine, & Norenzayan, 2010), where the overwhelming majority of children have had prolonged exposure to gaming and other digital technologies, one might expect smaller differences among youths in effects based on playing a specific game. Arguably, the cohort of “digital natives” (Prensky, 2001, 2010) lacks a true control group for comparison, as any sample of younger participants is likely to have exposure to what are by now ubiquitous digital displays, interfaces, and mobile technologies. The present findings seem to support such an argument; nevertheless, a limitation to the extant literature is the small number of studies that have compared effects of habitual game playing in youths of different ages. Our meta-analysis included only 11 quasi-experimental studies of youth samples, with 56 comparisons (two of these studies failed to provide any further division of the results as a function of child age). On the basis of the eight studies that organized results by child age, 47 comparisons involving 3- to 14-year-olds showed consistently negligible effects, and the four comparisons involving 15- to 17-year-olds showed small effects. It would be preferable to have a further division of gaming effects among 3- to 14-year-olds due to major changes in cognitive development during these years. Optimally, longitudinal designs could be used to examine the developmental trajectories of children who varied with respect to the number of hours per day devoted to gaming.

Such research would help to resolve controversies regarding the positive versus negative effects of video games, as well as of other online activities, on academic outcomes. On the one hand, in a study of preschool children of ages 3–5 years (Li & Atkins, 2004), higher rates of computer-based play were associated with better school readiness and higher estimated intelligence on standardized assessments. On the other hand, studies involving older children and adolescents have documented worse academic outcomes (e.g., lower GPA) as a function of the amount of time children spend online (C. A. Anderson & Dill, 2000; Jackson et al., 2011; Sharif & Sargent, 2006). With “serious” educational games emerging as a core component of the gaming industry (Charsky, 2010; Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012), it will be essential for research to evaluate whether the alleged benefits of serious game use are more than just hype (cf. DeLoache et al., 2010, on the null effects of the popular Baby Einstein DVDs). Moreover, further research will be needed to document how immersion in a technologically rich environment is altering the culture of childhood, and to address how specific cognitive abilities are shaped through daily exposure to video games.

Age also moderated the results of the true experiments, with larger effects for older adults than for young-adult or adult groups. Thus, unlike the quasi-experimental studies, in which negligible effects were found for youths, the true experiments demonstrated benefits of video-game training at all ages. The finding that larger effects were observed with older adults was unexpected, and must be interpreted with caution, given the small number of studies involved (i.e., six). Nevertheless, the robust training effects in older adults underscore the potential for video games to serve as effective tools for sensorimotor rehabilitation (Basak et al., 2008; Drew & Waters, 1986; Goldstein et al., 1997; Levi, 2012).

The quasi-experimental studies also indicated a potential influence of participant gender on video-game effects. Whereas small to moderate effects were observed for comparisons involving males only or mixed groups, the effects for female-only groups were negligible. However, the null effect of gaming for females was based on only four comparisons (from two studies), which leads us to be cautious in our interpretation. Instead, this finding points to the need for further studies to evaluate whether women choose to play video games less intensely and with different intentions than men do, which might lead to different information-processing outcomes (Greenfield, 2009; Ivory, 2006). Of course, researchers tend not to analyze their results by participant gender when a gender effect is not expected. Additionally, difficulty recruiting sufficient numbers of female game players has led some researchers to include only males in their studies. The observed gender effect, although preliminary due to the very small number of comparisons, should encourage the examination of gender effects in future studies of habitual game players. In contrast to the quasi-experimental studies, no gender effect emerged in the true experiments, indicating equivalent training benefits for men and women.

Publication bias

To evaluate the impact of publication bias, we examined publication type, which was found to moderate the results. Among the quasi-experimental studies, those published in top-tier journals showed larger effects than did those published in second-tier journals/book chapters or in unpublished dissertations. Among the true experiments, top-tier journals and second-tier journals/book chapters showed moderate effects that did not differ significantly from one another. However, the three unpublished dissertations showed negligible effects, which is consistent with reports of bias against the publication of nonsignificant findings (Ferguson & Heene, 2012; Scherer, Dickersin, & Langenberg, 1994). Although we included several unpublished studies in each meta-analysis, all of these were dissertations accessed through Dissertation Abstracts International. Correspondence with the first authors of all of the included studies yielded a number of unpublished statistical comparisons, but failed to yield any unpublished studies. We suggest that a better solution for addressing publication bias would be to establish methods for indexing the null results of unpublished studies online.
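To illustrate how such small-study effects are commonly quantified (a standard diagnostic offered here for exposition, not necessarily one applied in the present analyses), Egger’s regression test assesses funnel-plot asymmetry by regressing each study’s standardized effect on its precision,

\[
\frac{g_i}{SE_i} \;=\; \beta_0 + \beta_1\,\frac{1}{SE_i} + \varepsilon_i,
\]

where \(g_i\) and \(SE_i\) are the effect size and standard error of study i; an intercept \(\beta_0\) departing significantly from zero signals asymmetry of the kind expected when small, nonsignificant studies go unpublished (Egger, Davey Smith, Schneider, & Minder, 1997).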

Due to concerns about bias that might have resulted from the publicity given to the findings of prolific researchers, we included research group as an additional moderator. For the quasi-experimental studies, research group moderated effect sizes, with the Rochester and Toronto groups showing larger effects than did the other research groups. One possibility is that publicity increases the likelihood that video-game players will become aware of university research demonstrating benefits of video games; such awareness would be expected to heighten players’ expectations that they will perform well (i.e., increased Hawthorne effects). Research group, however, did not moderate the findings of the true experiments, with nearly identical effect sizes for the most active research group (Rochester) and the other groups. (There was a nonsignificant trend, however, for the Illinois group to find smaller effects than the other groups.)

Beyond information processing: unresolved issues concerning video-game use

Video games are attracting increasing attention as potential remedies for a variety of social issues. From the perspective of educational reform, video games have been viewed as effective, engaging media with the potential to increase motivation and enhance academic instruction (Blumberg & Altschuler, 2011; Boyan & Sherry, 2011; Kafai, 2006; Prensky, 2010; Shaffer et al., 2005; Shute, Ventura, Bauer, & Zapata-Rivera, 2009; Squire & Jenkins, 2003). It has even been argued that they might be effective in increasing youth awareness of global issues such as climate change (McGonigal, 2011). Such strong conclusions are premature, however, without an understanding of how video games affect information processing. In particular, if games are to be used as educational tools, they must first be shown not to undermine basic aspects of cognitive functioning. The meta-analysis of true experiments provided evidence that game training can enhance specific perceptual and motor skills, including visual and spatial processing and hand–eye coordination. However, the results failed to support the stronger claim that video games make people smarter (Hurley, 2012; Zichermann, 2011), as the true experiments showed no positive gains for multiple aspects of executive functioning, such as multitasking, nonverbal intelligence, task switching, and working memory.

As video-game play becomes increasingly recognized as a cognitively, and sometimes physically, demanding activity, the dramatic increase in the amount of time devoted to gaming in today’s culture draws attention to a variety of other potentially related issues. For example, with increasing rates of obesity, especially in the United States, along with concerns that exercise in the form of physical play has been replaced by sedentary gaming, there is great interest in determining whether mimetic “exergames” such as Wii Fit might be beneficial to health and fitness (e.g., Graf, Pratt, Hester, & Short, 2009; Graves, Stratton, Ridgers, & Cable, 2007; Nitz, Kuys, Isles, & Fu, 2010; Staiano & Calvert, 2011). In a recent review of 38 randomized controlled studies of video-game training on health-related outcomes, Primack et al. (2012) reported benefits of video-game play for psychological therapy, physical therapy, and physical-activity outcomes, but concluded that poor study quality was a concern.

These positive findings do not offset concerns that some children might be prone to video-game addiction (Desai, Krishnan-Sarin, Cavallo, & Potenza, 2010; van Rooij, Schoenmakers, Vermulst, van den Eijnden, & Van de Mheen, 2011), that violent video games might encourage people to behave aggressively (C. A. Anderson et al., 2004; C. A. Anderson et al., 2010; Dill & Dill, 1998; but see Adachi & Willoughby, 2011, and Ferguson, San Miguel, Garza, & Jerabeck, 2012, for evidence disputing this claim), and that video-game play might be linked to attention problems associated with attention deficit hyperactivity disorder (Bioulac, Arfi, & Bouvard, 2008; Chan & Rabinowitz, 2006; Swing, Gentile, Anderson, & Walsh, 2010; but see Ferguson, 2011). Furthermore, as children today spend more of their leisure time engaged with video games and less time reading books, there may be negative consequences for higher-order cognitive processes such as critical thinking, reflection, and mindfulness (Greenfield, 2009). Undoubtedly, future research will need to investigate the varied implications of video-game play, especially the possible trade-offs between its positive and negative effects throughout the lifespan.