
Neuropsychologia

Volume 90, September 2016, Pages 180-189

The role of working memory in rapid instructed task learning and intention-based reflexivity: An individual differences examination

https://doi.org/10.1016/j.neuropsychologia.2016.06.037

Highlights

  • Following instructions rapidly and efficiently is essential for collaboration.

  • This ability is reflected in efficiency (speed and accuracy) and in automaticity.

  • Across individuals, efficiency correlated positively with speed and intelligence.

  • Surprisingly, efficiency/automaticity did not correlate with working memory.

Abstract

The ability to efficiently follow novel task instructions (Rapid Instructed Task Learning, RITL) appears late in evolution, is required for successful collaborative teamwork, and appears to involve maintaining instructions in working memory (WM). RITL is indexed by the efficiency with which the instructions are performed (RITL success) and by whether the instructions operate automatically (intention-based reflexivity). Based on prior normative work employing WM-load manipulations, we predicted that individual differences in WM would positively correlate with these RITL indices. Participants (N=175) performed the NEXT paradigm, which is used to assess RITL, and tests of choice reaction time, intelligence, and WM. Confirmatory factor analyses showed that, contrary to our predictions, successful performance in WM tasks did not predict RITL performance. Tests tapping general-fluid intelligence and reaction time positively correlated with RITL success. However, also contrary to our predictions, greater RITL success was accompanied by little intention-based reflexivity. We suggest that for a RITL paradigm to produce intention-based reflexivity, its WM demand must be low, and, thus, performance does not reflect individual differences in WM.

Introduction

The ability to immediately and efficiently follow instructions has been labeled Rapid Instructed Task Learning (RITL) (Cole et al., 2013). According to these authors, “RITL is the process of rapidly (typically, on the first trial) learning a novel rule or task from instructions” (p. 2). While RITL enabled our ancestors to hunt in teams, it enables modern humans to succeed in teamwork during medical surgery, sports, etc. What characterizes these instances is that team members instruct one another on the fly and are expected to follow instructions immediately and proficiently, not having the luxury of practicing the instructions before carrying them out.

RITL might also be among the most recent evolutionary innovations that made us who we are. Arguably, Homo sapiens have gained an evolutionary advantage from their ability to establish complex forms of collaboration (e.g., Herrmann et al., 2007, Tomasello et al., 2012).

Recent research efforts (reviewed below) have advanced our knowledge on the normative aspects of RITL. Yet, very little is known regarding individual differences in this ability. Accordingly, our goals were to study individual differences in the ability to immediately and efficiently follow simple instructions, and to examine how these differences correlate with relevant individual differences constructs, especially working memory (WM). Below, we provide a brief review of the relevant literature.

In typical RITL tasks, the instructions are simple and consist of novel combinations of familiar elements (Cole et al., 2011). Although RITL reflects learning, it differs markedly from skill acquisition, a widely studied form of learning. First, a critical feature of RITL is that the instructions must be immediately and proficiently followed, without having the benefit of prior practice. In contrast, in typical skill acquisition studies, the focus is on how (usually extensive) practice leads to improved performance. Furthermore, whereas RITL studies often involve simple tasks, typical tasks in skill acquisition studies are complex (e.g., mirror drawing, reading). Finally, studies on skill acquisition usually involve a single task that remains relevant for at least the entire duration of the experiment. In contrast, since the focus in RITL is on the first trial or the first few trials following the instructions, studying it characteristically involves many different novel tasks, each relevant for a very short duration and typically executed only a few times (Cohen-Kdoshay and Meiran, 2007, Cole et al., 2013).

Recent computational modeling studies suggest that RITL is implemented by the rapid formation of new synaptic connections (Bugmann, 2012) or by dual-route architectures in which one route is slow but flexible and the other is fast yet rigid (Huang et al., 2013, Ramamoorthy and Verguts, 2012). Neuroimaging studies highlight the importance of anterior regions of the prefrontal cortex, with brain activation dynamics shifting to more posterior and sub-cortical brain regions over the course of a few trials of training (for a review, see Cole et al., 2013).

Skill is reflected in fluency but also in automaticity, namely, the difficulty of avoiding skill application. For example, the difficulty of suppressing the urge to read words leads to the Stroop effect, in which participants are asked to ignore the words and name the ink color in which they are written. Moreover, the Stroop effect increases with reading proficiency, at least at the early stages of learning to read (Schiller, 1966), a fact that suggests an increase in the urge to read the words.

Like skill, RITL is also reflected in performance fluency (Cohen-Kdoshay and Meiran, 2007, Meiran et al., 2015, Ruge and Wolfensteller, 2009), behavioral automaticity (Cohen-Kdoshay and Meiran, 2007, De Houwer et al., 2005, Liefooghe et al., 2012, Meiran et al., 2015, Wenke et al., 2007), and also in brain-recorded automatic motor plan activation (Everaert et al., 2014, Meiran et al., 2014). We chose to describe RITL-related automaticity as “intention-based reflexivity” (Meiran et al., 2012), mainly in order to denote the fact that in RITL, only one characteristic of automaticity is seen. This characteristic indicates that the newly instructed plan “gained a life of its own” and became autonomous. In other words, this plan is (perhaps partly) executed even when another task is required (Bargh, 1992, Tzelgov, 1997).

Standard conflict effects, such as the Stroop or the flanker effect (Eriksen, 1995), are taken to reflect both automaticity and its control: behavioral inhibition (Friedman and Miyake, 2004). Both viewpoints are valid (see Meiran (2010)) because behavior is the outcome of two opposing forces: (a) those creating the urge to execute a given response/process (e.g., the habit of reading words, in the case of the Stroop task; MacLeod, 1991); and (b) the forces that permit one to overcome this inappropriate urge and execute the required task instead. Although when we describe the latter we refer to “behavioral inhibition”, we do not commit ourselves to some inhibitory mechanism (see below). Instead, we refer to “inhibition” more generically, as all the processes permitting one to overcome inappropriate urges. Intention-based reflexivity is similar to automaticity-related effects in this respect, except that the urge to execute the task results from WM representations (Meiran et al., 2012, Oberauer et al., 2013) rather than from habits that are stored in long-term memory (Squire, 2004).

To achieve highly efficient performance without prior practice, instructions in RITL tasks are typically simple (Kaplan and White, 1980), must be immediately stored, and must be highly accessible. Importantly, WM has been described as “a system devoted to providing access to representations for goal-directed processing” (Oberauer, 2009, p. 47), with an emphasis on novel bindings between familiar elements, which are formed in a limited-capacity sub-system of WM (especially Oberauer et al., 2013). These considerations, together with the involvement of the prefrontal cortex (Cole et al., 2013), a region known to be related to WM, suggest that RITL relies on WM (see Engle et al. (1991); Gathercole et al. (2008); Yang et al. (2014)).

This hypothesized link between RITL and WM has been addressed in normative studies employing a WM load manipulation. These studies show that RITL performance drops (Yang et al., 2014) and intention-based reflexivity is eliminated (Cohen-Kdoshay and Meiran, 2007, Meiran and Cohen-Kdoshay, 2012) under WM-load. In a series of studies (as yet unpublished; Pereg and Meiran, submitted), we showed similar effects of WM load in the NEXT paradigm that we used here.

To summarize, despite the fact that intention-based reflexivity is theoretically linked to two opposing forces (i.e., the urge to execute the instructions and the inhibition of this urge), the normative studies seem to suggest that WM plays an important role in the effect. We thus asked whether this also holds true for individual differences.

Although individual differences in modern RITL tasks have barely been studied, following-directions tasks that resemble RITL in some respects have been examined since the beginning of the 20th century. Many of these studies related following directions to intelligence, and we review them partly because of the close link between intelligence and WM (Kane et al., 2005).

“Following Directions” was one of the tests included in the Army Alpha intelligence examination (see Ottis (1918)). A somewhat similar test with the same name is one of the two tests indexing the “Integrative Processes” factor in the ETS Kit of Factor-Referenced Cognitive Tests (Ekstrom et al., 1976), which comprises tests shown to tap individual differences factors (see also Hattrup et al. (1992)). In the present study, we employed a following-directions test called “Comprehension”. Among the eleven intelligence tests studied by Meiran and Fischman (1989), Comprehension showed the highest correlation with the general intelligence factor (r=.77). Engle et al. (1991) found positive correlations between following directions and WM, ranging from .30 to .47. Yet, in that study, the developmental trajectories differed for WM measures and “Following Directions”, suggesting that these two constructs are not identical.

Finally, despite their similarity, classic following-directions tests differ from RITL tasks in their requirement to comprehend complex task descriptions and in their focus on response correctness. In contrast, RITL tasks emphasize the ability to efficiently execute simple instructions, and the primary measure is reaction time (RT). These differences seem crucial since what arguably matters most for general intelligence is the ability to “structure” the task in its initial stages (Ackerman, 1988, Bhandari and Duncan, 2014).

The goal of the present work was to examine the role of WM in RITL efficiency (as assessed in what we called “a modern RITL task”) and intention-based reflexivity by focusing on the individual differences correlations between these constructs, as assessed using Structural Equations Modeling (see more below).

We were able to identify only one study on individual differences in RITL (Stocco and Prat, 2014), which showed that bilinguals, known to have especially efficient executive functions (Bialystok et al., 2012), outperformed monolinguals. Additionally, only among bilinguals was there an increase in the level of oxygenated blood supply to the basal ganglia when novel rules, but not familiar rules, were executed. This finding is especially important in the current context, given the role of the basal ganglia in WM updating (e.g., Hazy et al., 2007).

In order to assess (a) RITL success and (b) intention-based reflexivity, we used the NEXT paradigm (Meiran et al., 2015), described in Fig. 1, partly because this paradigm provides independent estimates for both. The NEXT paradigm consists of miniblocks, each involving a novel 2-choice RT task with two new stimuli that are arbitrarily mapped to the right/left keys. This means that the “X” and “Y” stimuli, presented in Fig. 1, are just an example, and the stimuli in other miniblocks could be any two letters, pictures, or digits.

In each miniblock of the NEXT paradigm, the first screen presents the mapping between two new stimuli and the responses. As soon as participants indicate that they have encoded the new instructions, they press a key, and the NEXT phase of the miniblock begins. In this phase, the stimuli are presented in red at the center of the screen. The red color indicates that the instructed task must be withheld and instead of this task, participants must press a predetermined and fixed key (the right/left key, counterbalanced across participants) in order to advance to the next screen. This screen advancement key remains unchanged throughout the experiment.

The NEXT phase can terminate at any point, and actually, in a small proportion of the trials, it is entirely skipped. Immediately following the NEXT phase, the GO phase begins. In this phase, the stimuli are presented in green at the center of the screen, indicating that the newly instructed choice task must now be executed. This GO phase consists of only two trials, a fact that forces participants to be highly efficient from the outset. In other words, participants are given just two trials to demonstrate their readiness to execute the task. As soon as the GO phase ends, a new miniblock begins with a new task. For example, after the miniblock presented in Fig. 1, the next miniblock might involve a picture of a cherry that is mapped to the left key and a picture of a piano that is mapped to the right key. Since the GO phase begins at an unexpected point in time, the readiness to execute the instructed GO task is assumed to be maintained throughout the NEXT phase that precedes it. Below, we provide a description of the two core performance indices that this paradigm yields.
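Schematically, the miniblock structure just described can be sketched as follows. This is our own illustration, not the authors' experimental software; the function name, the phase labels, and the randomized NEXT-phase length are illustrative placeholders.

```python
import random

def run_miniblock(stimuli, mapping, advance_key="right", n_next=None):
    """Simulate the event sequence of one NEXT-paradigm miniblock.

    stimuli: two novel items; mapping: dict from stimulus to 'left'/'right'
    (the newly instructed GO task); advance_key: the fixed screen-advance key.
    """
    events = [("instructions", dict(mapping))]      # self-paced encoding screen
    # NEXT phase (red stimuli): length varies unpredictably; occasionally
    # skipped entirely (n_next == 0).
    n_next = random.randint(0, 6) if n_next is None else n_next
    for _ in range(n_next):
        stim = random.choice(stimuli)
        events.append(("NEXT", stim, advance_key))  # always the same key
    # GO phase (green stimuli): exactly two trials of the instructed task.
    for _ in range(2):
        stim = random.choice(stimuli)
        events.append(("GO", stim, mapping[stim]))  # apply the new mapping
    return events

block = run_miniblock(["X", "Y"], {"X": "right", "Y": "left"})
```

The key structural features are visible in the sketch: the NEXT response is fixed regardless of the stimulus, whereas the GO response depends on the just-instructed mapping, and the transition to the GO phase is unpredictable.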

In the NEXT paradigm, intention-based reflexivity is indexed by the NEXT Compatibility Effect (‘NEXT effect’, for short). Take for example a participant who uses the right key for NEXT responses. The NEXT effect in this example would be the slower NEXT responses to stimuli that are mapped to the left key (“Y” in Fig. 1), as compared to stimuli that are mapped to the right key (“X” in Fig. 1). The finding indicates reflexivity, since during the NEXT phase, the GO instructions are deferred and should not be executed, and the fact that they are executed to some degree shows reflexive (or autonomous) processing.

RITL success is indexed by the GO trial effect (GO effect, for short): slower/less accurate performance on the 1st GO trial than on the 2nd (or a more advanced) GO trial. Small GO effects indicate successful preparation because, in this case, performance on the first trial after the instructions is already almost as efficient as that on subsequent trials. Given this consideration, we ideally should have compared the first GO trial to baseline performance as seen after practice. However, we had only two GO trials in each miniblock/task and, thus, did not have this baseline. Fortunately, our prior results indicate that the second GO trial can serve as a reasonable baseline, since it produced a level of performance similar to that in Trials 3–10 (seen in experiments involving long GO phases). Thus, the GO effect was the difference in performance between the first and the second GO trials.

To summarize, the NEXT paradigm yields two measures: a measure of RITL success (the GO effect) and a measure of intention-based reflexivity (the NEXT effect).
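Computationally, both indices amount to simple mean differences over trial-level RTs. The sketch below is our own illustration with hypothetical RT values; the function names are not from the authors' analysis code.

```python
def next_effect(incompatible_rts, compatible_rts):
    """NEXT (compatibility) effect: mean RT on NEXT trials whose stimulus is
    mapped (by the GO instructions) to the *other* key, minus mean RT on NEXT
    trials whose stimulus is mapped to the NEXT key itself.
    Larger values indicate more intention-based reflexivity."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incompatible_rts) - mean(compatible_rts)

def go_effect(first_go_rts, second_go_rts):
    """GO (trial) effect: mean RT on the 1st GO trial of each miniblock minus
    mean RT on the 2nd GO trial, which serves as the baseline.
    Smaller values indicate more successful RITL (better preparation)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(first_go_rts) - mean(second_go_rts)

# Hypothetical RTs (ms) pooled over a few miniblocks:
print(next_effect([520, 540, 530], [500, 510, 505]))  # 25.0
print(go_effect([650, 700, 690], [600, 610, 605]))    # 75.0
```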

A recent suggestion is that, paralleling the distinction between declarative and procedural long-term memory, there is a similar distinction between two aspects of WM: declarative WM and procedural WM (Oberauer, 2009, Oberauer et al., 2013). Because of this suggestion, we did not represent WM as a single construct but as two separate constructs. Specifically, declarative WM was assessed with complex-span tests (Unsworth et al., 2005), in which the memoranda were facts (e.g., letter identity in the Operation-Span) that were not used for task control. Procedural WM was assessed with tasks in which the memoranda were used for task control. Specifically, it was assessed with choice tasks in which the memoranda were the newly instructed (and arbitrary) stimulus-response rules.

We were concerned with the fact that the choice tasks also involved Mental Speed and related factors such as choice complexity (e.g., number of choice alternatives). We thus included choice tasks with non-arbitrary stimulus-response mapping with minimal WM demand. This distinction between arbitrary and non-arbitrary mapping was supported in an individual-differences study (Wilhelm and Oberauer, 2006), and in a manipulation-based normative study (Shahar et al., 2014). Although we used the non-arbitrary tasks as control, we wish to emphasize the fact that even these tasks require some WM since participants must keep in mind the intention to react to the stimuli, monitor their performance to determine an optimal speed-accuracy tradeoff, etc. (see Cepeda et al. (2013)).

We employed tasks tapping reasoning and intelligence for two reasons. The first was that reasoning tasks serve as markers for general fluid intelligence that, according to dominant theorizing (Carroll, 1993, Spearman, 1904), influences performance in a very wide variety of cognitive tasks. The other reason is the relatively high correlation between general fluid intelligence and WM (e.g., Kyllonen and Christal, 1990, Unsworth et al., 2014a). Our intelligence tests included ETS-Locations, indexing Induction according to the ETS manual (Ekstrom et al., 1976). The other test used to assess general-fluid intelligence was the aforementioned Comprehension test used by Meiran and Fischman (1989). We included an Induction test because induction-reasoning tasks are typically among the best indices of general fluid intelligence (Marshalek et al., 1983), and because Induction is how Spearman originally defined general intelligence. The reasons to use Comprehension, aside from balancing the contents of the indices of general-fluid intelligence (see below), were that this test has been shown to load exceptionally high on this factor, and that we wished to examine the relationship between classic following-directions tests (Comprehension being one such test) and modern RITL measures. Nonetheless, we acknowledge the crude nature of our operationalization of the concept of general-fluid intelligence.

To improve the representation of the Intelligence factor in our study, we took advantage of the fact that almost all of our participants were previously tested on the Israeli analogue of the SAT, the Psychometric Entrance Test (PET) (Nevo and Oren, 1986). PET has three components: Quantitative, Verbal, and English (a foreign language in Israel) and we included just the Quantitative and Verbal scores in the analyses.

Our study design was guided by our data analytic approach involving Structural Equations Modeling and, specifically, confirmatory factor analysis. Confirmatory factor analysis resembles the more familiar exploratory factor analysis, but the two approaches differ in important respects. Exploratory factor analysis asks which factors account for the variance in a given test battery. Confirmatory factor analysis, in contrast, is predominantly used to assess the correlations between constructs (factors) that were defined a-priori. As such, it involves testing whether this a-priori factor structure fits the data, and a satisfactory fit must be found in order for the inter-factor correlations to be meaningful. This also means that a key feature of studies using confirmatory factor analysis is their choice of variables, as described in the next section.

A known limitation of exploratory factor analysis is that it relies on post-hoc interpretation of the factors. Such a post-hoc account is required because the factors are determined in such a way that they would meet a certain statistical criterion (variance explained) rather than a theoretically meaningful criterion. Since the factor structure is predicted (rather than found) in confirmatory factor analysis, this analysis avoids the pitfalls associated with post-hoc factor descriptions.

Another huge advantage of confirmatory factor analysis is that, to the degree to which the variables chosen to “index” the factors correctly represent the construct, the inter-factor correlations (“second order correlations”) represent the true correlations between the underlying constructs. This contrasts with usual correlations between observed variables (“first order correlations”), which are biased because a specific variable often represents a relatively narrow operationalization of the construct. When properly designed, studies using confirmatory factor analysis include several (typically 2–3) different indices of a given factor, and as such, the factor provides a relatively unbiased representation of the underlying construct. Additionally, while first order correlations are attenuated by unreliability, second order correlations are not. The reason is that the factors represent reliable variance only, since they explain correlations between observed measures, and, according to classical test theory, unreliable variance is uncorrelated with anything. To summarize, in a properly designed study, confirmatory factor analysis can provide excellent estimates of the correlations between hypothetical constructs that are neither biased by a narrow operationalization of the construct nor attenuated by unreliability.
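The attenuation point follows from Spearman's classical-test-theory correction formula, which latent-variable models accomplish implicitly. A minimal sketch (our own illustration, with hypothetical reliabilities and correlations):

```python
import math

def disattenuated_r(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: the estimated correlation
    between true scores, given the observed correlation between two measures
    and each measure's reliability. Factor (second-order) correlations
    approximate this disattenuated value."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# A hypothetical observed r of .30 between two tests with reliabilities
# of .70 and .60 corresponds to a true-score correlation of about .46:
print(round(disattenuated_r(0.30, 0.70, 0.60), 2))  # 0.46
```

With perfectly reliable measures (reliabilities of 1.0), the observed and disattenuated correlations coincide, which is why unreliability can only attenuate, never inflate, first-order correlations.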

Lastly, in Structural Equations Modeling, a predominant strategy to answer questions is by means of model comparison. In other words, the models do not necessarily represent the researcher's a-priori beliefs. Instead, models are often formulated in such a way that comparing between them would provide an answer to a question.
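For instance, a common comparison is between a model that freely estimates an inter-factor correlation and a nested model that fixes it to zero, evaluated with a chi-square difference test. The sketch below uses purely hypothetical fit values, not results from this study:

```python
def restricted_model_rejected(chisq_restricted, df_restricted,
                              chisq_full, df_full, critical=3.84):
    """Nested-model chi-square difference test. The restricted model (e.g.,
    one fixing an inter-factor correlation to zero) is rejected if it fits
    significantly worse than the full model; 3.84 is the .05 critical value
    for a 1-df difference."""
    d_chi2 = chisq_restricted - chisq_full
    d_df = df_restricted - df_full
    assert d_df == 1, "the default critical value applies to a 1-df difference"
    return d_chi2 > critical

# Hypothetical example: fixing a correlation to zero worsens fit by only
# a trivial amount, so the zero-correlation model is retained:
print(restricted_model_rejected(101.3, 41, 100.5, 40))  # False
```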

Given the considerations listed so far, a critical feature of the study design involves a balanced choice of variables that are used to index a given construct/factor. Our variable-sampling plan was based on Guttman's RADAX theory of intelligence (Schlesinger and Guttman, 1969). According to this theory, individual differences in cognitive abilities are determined by the type of process (here, the processes are declarative WM, procedural WM, intelligence, RT, etc.), but also by the “language” of the test (i.e., whether the test involves spatial/figural, verbal/letter, or numerical stimuli). Thus, we made sure that the indices of any given (process-based) latent-variable were more-or-less balanced in terms of “language”. This pertains to the indices from the NEXT paradigm involving letters, digits, and pictures. Our WM-span measures were two complex-span tests: Operation-Span, involving letters and numbers, and Symmetry-Span, involving figural information. Our intelligence tests involved Comprehension, comprising verbal and numerical information, and ETS-Locations, involving figural information. The PET scores that served as additional intelligence measures covered the verbal and quantitative “languages”, but not the spatial/figural domain. The RT tasks were also more-or-less balanced in terms of “language” because, while the stimuli were digits and letters, responding was manual and, thus, involved spatial processing (e.g., Lu and Proctor, 1995).

To summarize, we designed an individual differences study including tests tapping RITL success, intention-based reflexivity, procedural WM, declarative WM, intelligence, and RT. We also made sure that the indices of these abilities were more-or-less balanced in terms of “language”; namely, that they use verbal, numerical, and spatial/figural information.

H1: We, of course, predicted that the factor structure, which guided us in designing the study, would fit the data. Specifically, we predicted a good fit for a model assuming separate factors for RT with arbitrary stimulus-response mapping (RTarbitrary) and RT with non-arbitrary stimulus-response mapping (RTnon-arbitrary). Additionally, we predicted Complex-Span and Intelligence factors. With regard to the NEXT and GO effects, there is no background literature on which we could rely in making our predictions. We, nonetheless, hoped that the indices of these two constructs would be explained by two factors. Finally, given that the PET test includes reasoning items and given the fact that reasoning is strongly associated with general-fluid intelligence, we tentatively predicted that PET scores would load on the general-fluid intelligence factor.

The remaining predictions refer mostly to the inter-factor correlations.

H2: Under the assumption that RITL success requires holding task-control information in WM, we predicted that better WM would correlate with better RITL performance (small GO effects). Under the assumption that the distinction between procedural WM and declarative WM is valid, stronger correlations were predicted between RITL measures and RTarbitrary, tapping both RT and WM abilities (Wilhelm and Oberauer, 2006), than with RTnon-arbitrary, tapping predominantly general processing speed abilities. If, however, WM is a single system, RITL is expected to correlate more-or-less equally strongly with complex-span abilities and with RTarbitrary.

H3: Under the assumption that intention-based reflexivity reflects a byproduct of holding task information in limited-capacity WM, we predicted that poor WM capacity would correlate with small NEXT effects (little intention-based reflexivity). Similar considerations regarding separate vs. single WM system hold here as they hold in H2.

We also had several additional, less focal, predictions:

H4: If procedural WM and declarative WM are distinct abilities, as suggested by Oberauer et al. (2013), RTarbitrary (indexing procedural WM) and complex-span (indexing declarative WM) should show a relatively low correlation and/or a differential pattern of correlations with other constructs. Specifically, we predicted that RTarbitrary would correlate more strongly with WM and intelligence, as compared with RTnon-arbitrary, replicating Wilhelm and Oberauer (2006).

H5: Complex-span performance would positively and quite highly correlate with fluid intelligence, replicating previous works (e.g., Kane et al., 2005).


Participants

Participants were recruited for a WM training study, and this paper reports the pre-testing results from that study. Participants were 175 students in a pre-academic preparatory course for engineering at Ben-Gurion University of the Negev, aimed at improving university entrance grades. The study was executed in two consecutive years (2014, N=76; 2015, N=99). Participants took part in the experiment in return for 500 or 450 NIS (~$125–145 USD, for 2014 and 2015, respectively).

Materials, tasks, and testing facility


Choice of variables and simple statistics

We took advantage of the fact that we tested the participants twice, and used the post-test results to assess the test-retest reliabilities of the measures.2

Summary

In the present work, we examined how individual differences in RITL success and intention-based reflexivity, as indexed in the NEXT paradigm (the GO effect and the NEXT effect, respectively), are related to WM, RT, and intelligence. Based on the normative studies employing WM-load manipulations, we predicted that RITL success (small GO effects) and intention-based reflexivity (large NEXT effects) would correlate positively with WM. The present results do not support this prediction.

References (61)

  • A. Ramamoorthy et al. (2012). Word and deed: a computational model of instruction following. Brain Res.
  • L.R. Squire (2004). Memory systems of the brain: a brief history and current perspective. Neurobiol. Learn. Mem.
  • A. Stocco et al. (2014). Bilingualism trains specific brain circuits involved in flexible rule selection and application. Brain Lang.
  • J. Tzelgov (1997). Specifying the relations between automaticity and consciousness: a theoretical note. Conscious. Cogn. Int. J.
  • N. Unsworth et al. (2014). Working memory and fluid intelligence: capacity, attention control, and secondary memory retrieval. Cognit. Psychol.
  • P.L. Ackerman (1988). Determinants of individual differences during skill acquisition: cognitive abilities and information processing. J. Exp. Psychol. Gen.
  • J.A. Bargh (1992). The ecology of automaticity: toward establishing the conditions needed to produce automatic processing effects. Am. J. Psychol.
  • J.B. Carroll (1993). Human Cognitive Abilities: A Survey of Factor-Analytic Studies.
  • N.J. Cepeda et al. (2013). Speed isn’t everything: complex processing speed measures mask individual differences and developmental changes in executive control. Dev. Sci.
  • O. Cohen-Kdoshay et al. (2007). The representation of instructions in working memory leads to autonomous response activation: evidence from the first trials in the flanker paradigm. Q. J. Exp. Psychol.
  • M.W. Cole et al. (2013). Rapid instructed task learning: a new window into the human brain's unique capacity for flexible cognitive control. Cognit. Affect. Behav. Neurosci.
  • M.W. Cole et al. (2010). Prefrontal dynamics underlying rapid instructed task learning reverse with practice. J. Neurosci.
  • M.W. Cole et al. (2011). Rapid transfer of abstract rules to novel contexts in human lateral prefrontal cortex. Front. Hum. Neurosci.
  • J. De Houwer et al. (2005). Further evidence for the role of mode-independent short-term associations in spatial Simon effects. Percept. Psychophys.
  • J. Duncan et al. (2012). Task rules, working memory, and fluid intelligence. Psychon. Bull. Rev.
  • R.B. Ekstrom et al. (1976). Kit of Factor-Referenced Cognitive Tests.
  • R.W. Engle et al. (1991). Individual differences in working memory for comprehension and following directions. J. Educ. Res.
  • C.W. Eriksen (1995). The flankers task and response competition: a useful tool for investigating a variety of cognitive problems. Vis. Cogn.
  • T. Everaert et al. (2014). Automatic motor activation by mere instruction. Cognit. Affect. Behav. Neurosci.
  • E. Fischman (1982). Intellectual differential aptitude test battery. Holon, Israel: Center for Technological...
This work was supported by research grants from the US-Israel Binational Science Foundation Grant #2011246 to Nachshon Meiran and Todd S. Braver and a research grant from the Chief Scientist of the Israeli Ministry of Education to Nachshon Meiran. We wish to thank two anonymous reviewers, Todd S. Braver and Michael W. Cole, for helpful comments; Daniel Aranovich, Gal Berger, Dan Halunga, Ayelet Itzhak, Nadav Kozlovsky, Inbal Michel, Elad Naor, Liad Olansky, Hovav Paller, Elisha Puderbeutel, and Adva Weinstein for their invaluable help in data collection; and Stephanie Knipprath for English proofreading.
