The field of cognitive aging is dominated by cross-sectional work that contrasts young undergraduate students participating for course credit or money with older adults participating for the furtherance of science and, often, to gain a sense of their own cognitive health. Course credit or payment is not withheld if young adults do poorly on a task (nor should it be), but if older adults underperform on a task, they may suffer embarrassment or begin to question their cognitive health. As such, in-lab motivational factors differ greatly between these two groups. Lifespan theories of motivation suggest that older adults are chronically motivated to maintain a positive mood (Carstensen & Mikels, 2005), and thus, conserve their limited cognitive resources for tasks that they value (Hess, 2014). Indeed, when older participants enter the lab, they seem to be motivated to do well on cognitive tasks (Frank et al., 2015; Jackson & Balota, 2012; Seli et al., 2017). However, this motivation to do well may increase older adults’ risk of stereotype threat, whereby exposure to a negative stereotype about one’s group within a domain that one cares about (e.g., memory tasks for most older adults) hinders performance on that task (e.g., Hess et al., 2003; Spencer et al., 1999; Steele & Aronson, 1995; for a recent review, see Barber, 2017). We argue that most cognitive experiments, even those not designed to look at stereotype threat, likely trigger some form of threat in older adults. This has implications for our understanding of how age affects cognitive functioning and the neural underpinnings of that functioning. Indeed, it is currently unknown whether motivational differences between younger and older adults confound the study of neurocognitive aging. In this opinion piece, we argue that older adults’ greater motivation to do well in the lab often interacts with situational factors to foster a greater sense of “threat” or performance-related anxiety, and this has implications for both cognitive and neural functioning.

Before we begin, we would like to acknowledge that throughout this opinion piece, we make several generalized statements about older and younger adults’ attitudes and motivations. While general statements are quite common in aging research (we study groups, not individuals), it is worth noting that “older adults” and “younger adults” are not homogeneous groups and these points do not apply to everyone. Moreover, the findings we discuss are generally based on the Western, Educated, Industrialized, Rich, and Democratic (WEIRD) samples common in psychology research (Henrich et al., 2010), including most aging research (e.g., Dixon et al., 2004; Hultsch et al., 2002). As such, the observations made here about group differences in motivation and susceptibility to age-based stereotype threat may only apply to a select group of relatively healthy, highly educated individuals who willingly go out of their way to participate in cognitive research.

Older adults are motivated to do well in the lab

In this section, we briefly review major lifespan theories of motivation and relate these to evidence which suggests that older adults have a higher baseline motivation to “do well” in cognitive experiments than do younger adults. In addition to this empirical evidence, we also report the perceptions of cognitive aging researchers, who generally agree that older adults seem more motivated in the lab and participate for different reasons than younger adults. We discuss the possibility that older adults value doing well over money, potentially because doing well offers the unique reward of being able to feel good about one’s cognitive health.

In general, older adults come to the lab with chronic goals that are different from those of younger adults. According to the socioemotional selectivity theory (SST; Carstensen et al., 1999), older adults’ foreshortened sense of time leads them to prioritize emotional goals (i.e., maintaining good feelings) over information-seeking goals. As such, older adults show attention and memory biases towards positive information when tested on faces (Isaacowitz et al., 2008; Mather & Carstensen, 2003) and when asked to recall emotional pictures (Charles et al., 2003). Accordingly, older adults are also motivated to maintain a positive image of themselves, in that they recall a rosier view of past health behaviours than true records of past behaviour would suggest (Kennedy et al., 2004). Extending this positivity bias to metacognitive perceptions of performance in the lab, older adults may be more motivated to respond correctly during cognitive testing in order to maintain a positive view of their cognitive health. They may also be motivated to see their past performance as better than it was (e.g., Dodson et al., 2007; cf. Hertzog & Dunlosky, 2011).

Another influential lifespan theory of motivation is selective engagement theory (Freund & Baltes, 1998; Hess, 2014). According to this theory, cognitive engagement becomes more costly with age, leading to resource depletion and fatigue (Ennis et al., 2013). As a result, older adults become more selective in how they choose to expend their limited cognitive resources, saving them for activities that they value. Research supporting this view has shown that older adults demonstrate a larger cardiovascular response than younger adults at lower levels of task demands (Ennis et al., 2013; Hess & Ennis, 2012). However, as task demands increase beyond a moderate level, older adults show a drop in systolic blood pressure (which typically rises with increased effort; Brehm & Self, 1989), suggesting that they disengage from the task when the cost of engagement becomes too great (Ennis et al., 2013). Relatedly, when participants are offered more money to try a harder version of the same task, younger adults will do so for less money, while older adults require more money to switch from the easier to the harder version (Westbrook et al., 2013). This suggests that older adults are highly motivated to avoid excessive cognitive effort because it is particularly costly for them. However, motivation to expend cognitive effort is not only determined by expected costs, but also by expected benefits (Westbrook & Braver, 2015), and the benefits of lab participation are often assumed to be the same across age groups (e.g., in the study by Westbrook et al., 2013, older and younger adults were assumed to value money to the same extent). We argue, however, that the benefits of lab participation are not matched between age groups: as discussed below, older adults may actually benefit more (or at least differently) than younger adults from doing well on cognitive tasks, and this may contribute to group differences in baseline motivation.

When older and younger adults enter the lab, they seem to do so for different reasons. Anecdotally speaking, older adults in our lab often express an interest in knowing “how they did” on the task and show less concern about remuneration than younger adults. Other cognitive aging researchers have the same impression. We surveyed a large group (N = 88) of well-known leaders in the field of cognitive aging, as well as more junior trainees (postdocs and senior PhD students) who have recent experience testing younger and older adults in the lab (see Footnote 1). We first asked, “Based on your experience, which group [younger or older adults] seems more motivated to take part in research in your lab?” A greater proportion of respondents thought that older adults were more motivated than younger adults (73 vs. 12), χ²(1, N = 85) = 43.78, p < .001; three people selected neither. This was also reflected in their ratings of how motivated they thought each age group was to take part in research on a scale from 1 to 10, with older adults (M = 7.91, SD = 1.44) rated as more motivated than younger adults (M = 5.16, SD = 1.55), t(87) = 11.41, p < .001. Finally, we also asked why they thought younger and older adults participated in research (i.e., what motivates them?). In line with our own perceptions, cognitive aging researchers thought that younger adults primarily take part to obtain course credit and for monetary compensation (see Fig. 1), while they thought that older adults primarily take part to get a sense of their cognitive health, to further science, and out of curiosity. Chi-squared tests performed on each of these categories indicated that the numbers of responses for younger and older adults differed significantly in each case, course credit: χ²(1, N = 82) = 82.00, p < .001; money: χ²(1, N = 106) = 27.51, p < .001; cognitive health: χ²(1, N = 87) = 64.66, p < .001; further science: χ²(1, N = 87) = 45.62, p < .001; curiosity: χ²(1, N = 108) = 14.82, p < .001; social reasons: χ²(1, N = 10) = 6.40, p = .011.
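For readers who wish to verify these values, the goodness-of-fit tests above can be reproduced from the reported counts alone. The sketch below, which assumes equal expected counts under the null hypothesis, uses scipy to recompute the first test (73 vs. 12 respondents); the same call applies to each of the per-category comparisons.

```python
from scipy.stats import chisquare

# Responses to "which group seems more motivated?": 73 chose older adults,
# 12 chose younger adults (the 3 "neither" responses are excluded, so N = 85).
observed = [73, 12]

# Goodness-of-fit test against equal expected counts (42.5 vs. 42.5) under the null.
chi2, p = chisquare(observed)

print(f"chi2(1, N = {sum(observed)}) = {chi2:.2f}, p = {p:.2e}")
# chi2(1, N = 85) = 43.78, p ≈ 4e-11 (reported in the text as p < .001)
```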

Fig. 1 Researcher perceptions of younger and older adults’ reasons for participating in psychology research. Note. Count represents the number of survey respondents who endorsed the reason younger and/or older adults participate in research. Respondents were asked to “Select all that apply.”

Thus, older and younger adults appear to be motivated to participate in research by different things, though future work should pose these questions to older and younger adults themselves once in-lab testing resumes. Of most interest to us here, one of the primary perceived benefits of participation for older adults seems to be learning about their cognitive health. Most researchers did not think this was a concern for younger adults. Cognitive experiments rarely represent a unique opportunity for younger participants to learn about their cognitive status, because most of them regularly receive such feedback as students, and this may be why they seem less concerned about their performance in the lab. While explicit feedback is rarely provided to participants, people are surprisingly perceptive about their performance (in that they can detect errors and make metacognitive decisions) and these abilities seem to be relatively spared with age (e.g., Hertzog & Dunlosky, 2011; Larson et al., 2016). Thus, we argue that one of older adults’ primary benefits from participation is gaining a sense of their own cognitive health, and this benefit is performance dependent. That is, older adults need to respond correctly in order to feel good about their performance and benefit from participation in this way. If the task becomes too difficult, some older adults may be more likely to disengage not only because of the increased costs of cognitive effort but also because of the decreased benefits (i.e., they can no longer respond correctly and maintain a positive view of their cognitive health). This is similar to the notion of self-handicapping in academic settings (e.g., Jones & Berglas, 1978; Schwinger et al., 2014), whereby students sometimes sabotage their own performance (by withdrawing effort or creating other obstacles) in order to protect their self-esteem. Some older adults may withdraw effort at higher levels of demand (Ennis et al., 2013) for this same reason (i.e., “If I don’t try, I can’t fail”). In contrast, younger adults’ primary benefit from participation is thought to be payment or course credit, and this is typically not performance dependent (see Footnote 2). As a result, the stakes are not as high for younger adults, and their baseline motivation to respond correctly seems to be lower than that of older adults.

In line with this view, older adults often report a higher baseline motivation to respond correctly. In studies of mind-wandering, older adults report higher levels of task interest and motivation, which appears to explain their lower rates of mind-wandering or off-task thoughts (Frank et al., 2015; Jackson & Balota, 2012; Seli et al., 2017; Seli et al., 2020). Importantly, the cognitive tasks used to study mind-wandering are usually monotonous but not very demanding, and thus, older adults are capable of responding correctly and therefore less likely to disengage (Ennis et al., 2013). Older adults’ greater motivation to respond correctly is also illustrated by well-established age differences in the speed–accuracy trade-off. For example, older adults often sacrifice speed in order to respond correctly on cognitive tasks (Salthouse, 1979), even when speed is emphasized in the instructions (Starns & Ratcliff, 2010). Taken together, these findings suggest that most older adults are motivated to respond correctly, not quickly, and not to the highest level to which we (the experimenters) can push them (e.g., Hess, 2014; Reed et al., 2014; Westbrook et al., 2013). Minimizing errors allows older adults to maintain a positive view of their cognitive health (Carstensen et al., 1999), and thus they are highly motivated to respond correctly. However, this motivation may have the ironic effect of decreasing performance, in that it makes older adults vulnerable to stereotype threat, an issue we turn to next.

Increased motivation leaves older adults vulnerable to stereotype threat

Older adults’ greater motivation to maintain a positive view of their own cognitive health leaves them susceptible to stereotype threat in the lab. First, we review the concept of stereotype threat and the conditions under which this effect is typically seen. We then discuss potential mechanisms of stereotype threat, including reduced executive control, task-related interference, and stress, and how these may affect older adults’ cognitive performance in typical laboratory experiments (i.e., not just those specifically designed to look at stereotype threat).

Stereotype threat is underperformance on a task following activation of a negative stereotype about one’s group (Steele & Aronson, 1995). In a classic example, women were shown to underperform on a math test after being reminded of the stereotype that women are worse at math than men (Spencer et al., 1999). Though evidence for stereotype threat is sometimes mixed (e.g., Armstrong et al., 2017; Barber, 2017; Zigerell, 2017), the effect seems to be most pronounced when individuals strongly identify with the stereotyped group and value the domain being tested (Hess et al., 2003; Steele, 1997). For example, research has shown that older adults who place a higher value on their memory abilities are more susceptible to performance impairments following exposure to negative aging and memory stereotypes (Hess et al., 2003). Most older adults tend to endorse negative views about aging (e.g., Axt et al., 2014; Levy & Banaji, 2002), leaving them particularly vulnerable to self-concept threat or concern that they will confirm, in their own minds, that such negative stereotypes apply to them (for a more thorough discussion of this topic, see Barber, 2017). As previously mentioned, we believe that older adults value their in-lab performance, as it provides them with a unique opportunity to assess their cognitive standing, and this makes them particularly vulnerable to self-concept threat.

Many of the studies on stereotype threat and aging aim to either increase or decrease the salience of negative age-related stereotypes. Specifically, explicit threat inductions often suggest that memory declines with age and that older adults need to rely on assistance and memory strategies to function in their day-to-day lives. On the other hand, threat-easing conditions tend to suggest that perceived age-related declines in memory are overblown and that individual differences play an important role in the memory abilities of older adults. Such manipulations have demonstrated that older adults who are exposed to negative age-related stereotypes exhibit poorer memory than those in threat-eased conditions (Hess et al., 2003). Indeed, this effect has been replicated a number of times using variations of this induction technique and across different memory tasks (Brubaker & Naveh-Benjamin, 2018; Hess et al., 2009; Krendl et al., 2015; Wong & Gallo, 2016; for recent reviews, see Armstrong et al., 2017; Barber & Mather, 2014).

Critically, stereotypes do not need to be explicitly activated; even the language commonly used in memory experiments is enough to induce threat in older adults (Rahhal et al., 2001). To explore the effect of memory task instructions on older adults’ performance, Rahhal et al. (2001) gave one group of older adults a set of instructions similar to those typically found in memory experiments. These instructions explicitly stated that participants’ memory for trivia items would be tested. Another group of older adults was given a separate set of instructions that framed the task as a test of participants’ learning. Results showed that when the experiment was framed as a memory test, older adults underperformed compared with younger adults—a finding in line with much of the memory and aging literature. However, age differences were attenuated when the instructions were “memory neutral” (Rahhal et al., 2001). Further, lab experiments that are described as simply comparing the performance of older and younger adults also negatively influence older adults’ memory performance (Brubaker & Naveh-Benjamin, 2018), suggesting that some aspects of the testing environment are sufficient to induce threat.

While the precise mechanisms underlying stereotype threat remain unclear, it has been suggested that stereotype threat interferes with executive control (Schmader et al., 2008; Schmader & Johns, 2003), hindering performance on the task at hand. More specifically, the integrated process model of stereotype threat suggests that efforts to mitigate the negative arousal, emotions, and intrusive thoughts brought on by stereotype threat tax older adults’ already limited attentional resources (Jordano & Touron, 2017; Popham & Hess, 2015; Schmader et al., 2008). In line with this position, stereotype threat has been shown to affect effortful control processes and explicit memory, while more automatic and implicit memory processes remain relatively intact (Eich et al., 2014; Mazerolle et al., 2012). Moreover, research suggests that, under threat, executive functions may become redirected towards thoughts and worries about one’s performance, or “task-related interference” (Jordano & Touron, 2017; McVay et al., 2013). Indeed, older adults in a stereotype-threat condition experienced more task-related interference than those in a threat-eased condition and exhibited significantly worse performance (Jordano & Touron, 2017).

Further, stereotype threat seems to induce a prevention focus (aimed at preventing errors), rather than a promotion focus (aimed at maximizing gains) in some older adults (Barber et al., 2015; Barber & Mather, 2014; Popham & Hess, 2015). This fits with older adults’ preference for responding correctly rather than quickly or to the best of their ability and may contribute to performance decrements under threat. Exposure to age-related stereotypes may also reduce perceived control by highlighting factors outside of older adults’ control (i.e., age-related memory decline), which can exacerbate task-related interference (Lachman & Agrigoroaei, 2012). Taken together, such task-related worries are likely common for older adults in a wide range of cognitive experiments (i.e., not just those specifically designed to look at stereotype threat).

Finally, stereotype threat may affect performance by increasing stress levels. Older adults show a greater increase in salivary cortisol (a biomarker associated with stress) than younger adults when tested in a typical psychology lab setup (Sindi et al., 2013). Specifically, younger and older adults were tested either in a university setting or in a setting that favoured older adults (i.e., a place they were familiar with, being tested by someone their own age). Importantly, stereotypes were not explicitly activated in this study; aspects of the environment were manipulated to implicitly induce stress (similar to Rahhal et al., 2001). Older adults tested in the youth-favouring environment had higher salivary cortisol levels and worse memory performance than those tested in the senior-favouring environment. Further, cortisol levels in the youth-favouring environment were higher than those taken at home, suggesting that memory testing within a typical university setting is stressful to older adults (akin to a lab-based “white coat syndrome”).

While numerous studies are designed with the intent to induce stereotype threat, standard psychology experiments exploring age differences in memory are rife with opportunities to inadvertently activate stereotypes (Barber & Mather, 2014). As discussed, studies that are described as testing memory (Rahhal et al., 2001) and those in which participants know that age-related comparisons will be made have been shown to influence older adults’ performance (e.g., Brubaker & Naveh-Benjamin, 2018; Hess et al., 2009). This may explain why age differences are minimized when memory tasks are given in participants’ natural environment (e.g., as with prospective memory tasks given outside the lab, Rendell & Thomson, 1999; or home diary studies of involuntary memory, Berntsen et al., 2017; Schlagman et al., 2009), possibly because threat-inducing cues are minimized outside the lab. From the moment of recruitment, there are many opportunities for participants to realize that they are participating in an aging study. Even with strict controls, environmental cues may be enough to create a stressful situation or “threat” for older adults. As such, we suggest that testing environments may be more impactful than researchers intend, confounding results in aging research.

Implications for our understanding of age differences in cognition

Older adults’ greater motivation to succeed, coupled with threat-inducing cues in the environment, may contribute to several commonly observed effects in the cognitive aging literature. Undoubtedly, true age differences in cognition exist, but our current understanding of those differences may be influenced by testing-related stress and interference. In this section, we discuss some of the aging effects that may be (at least partly) explained by older adults’ experience of stereotype threat in the lab. Given the purported mechanisms of, and necessary conditions for, stereotype threat, we would expect testing-related threat to have the largest effect on tasks that (1) place high demands on executive control, (2) make explicit mention of memory testing, and (3) highlight the fact that age comparisons will be made.

For instance, commonly observed age differences in explicit memory performance may partly reflect older adults’ experience of testing-related threat. Age differences in memory tend to be most pronounced when encoding and/or retrieval tasks place greater demands on controlled processing (Craik & McDowd, 1987; Hasher & Zacks, 1979; West, 1996). Further, age differences are greater when encoding is intentional rather than incidental (Old & Naveh-Benjamin, 2008; Perlmutter, 1979) and when retrieval is explicit rather than implicit (e.g., Howard, 1991; Light & Singh, 1987). While these performance differences may largely reflect true age differences in top-down control, we suggest that they may also reflect the different levels of threat (or the attempt to control one’s reaction to the threat) posed by each scenario. For instance, incidental encoding tasks, by definition, avoid informing participants that their memory will be tested, which likely leads to less anxiety and task-related interference for older adults than intentional encoding. Giving older adults certain encoding strategies has also been shown to improve memory (e.g., forming a sentence with the to-be-remembered words, or completing a word fragment and remembering the generated word; Luo et al., 2007; Naveh-Benjamin et al., 2007). In addition to providing “environmental support” (Craik, 1983), these strategies may be effective because they minimize threat/task-related interference by providing older adults with a nonmemory task on which to focus. At retrieval, when memory is tested implicitly, participants are unaware that their memory is being tested, and thus implicit tests should be less likely to invoke stereotype threat in older adults. However, when implicit tests are made explicit, by alerting participants to a connection between tasks or suggesting that explicit memory can be used to solve them (Gopie et al., 2011; Jennings & Jacoby, 1993), older adults again do worse than younger adults. As such, we argue that when encoding and retrieval are intentional/explicit, older adults begin to monitor and worry about their performance, which then suffers due to increased task-related interference.

Another common observation in the cognitive aging literature is that not all cognitive domains are negatively affected by age. For instance, language comprehension (Shafto & Tyler, 2014), general knowledge (Umanath & Marsh, 2014), number skills (Cappelletti et al., 2014), emotion regulation (Carstensen & Mikels, 2005), and some types of decision-making (Grossmann et al., 2010; Samanez-Larkin & Knutson, 2015) are preserved or even improve with age. Compared with memory, there are very few negative stereotypes about the effects of age on these domains. Indeed, older adults are often entrusted with leadership positions in everyday life (e.g., judges, CEOs, world leaders), and this likely reflects the common belief that knowledge and decision making (at least in one’s area of expertise) continue to develop with age (Grossmann, 2017). Thus, when older adults perform these types of tasks in the lab, they may experience less stereotype threat than they do with memory tasks and this may contribute to their preserved performance. Of course, the lack of aging stereotypes about language, knowledge, and decision making may reflect the fact that these domains genuinely do not decline with age, with domain differences primarily reflecting true age differences in the neural systems underlying these functions. Going forward, it would be interesting to test whether these preserved domains are also susceptible to threat, for instance, if the tasks were framed as measuring memory.

Implications for our understanding of age differences in neural functioning

A number of findings in the neuroimaging literature may also be influenced by older adults’ greater motivation to succeed and the stress induced by the testing environment. In this section, we discuss two commonly observed effects in the neurocognitive aging literature: (1) reduced suppression of the default mode network (DMN) during overt task performance, and (2) increased activation of frontal regions during cognitive testing. We propose that task-related interference, as an introspective process, may be reflected by older adults’ failure to suppress the DMN. Further, as a process that consumes attentional resources, task-related interference may also contribute to older adults’ overactivation of frontal control regions.

A common finding in the neurocognitive aging literature is that older adults show less suppression of the DMN when performing attention-demanding tasks (e.g., Andrews-Hanna et al., 2007; Grady et al., 2006; Persson et al., 2007). The DMN tends to be more active during internally directed tasks, such as remembering the past or imagining the future (Buckner et al., 2008) and self-reflective thought (Andrews-Hanna et al., 2014; Grigg & Grady, 2010). Older adults’ failure to suppress the DMN has been attributed to their inability to disengage from internally directed thoughts (e.g., Persson et al., 2007), but this interpretation conflicts with the mind-wandering literature which suggests that older adults actually experience fewer off-task thoughts than younger adults (e.g., Jackson & Balota, 2012). Notably, mind-wandering studies rarely differentiate between off-task thought and task-related interference, and those that do show an age-related increase in task-related interference (Jordano & Touron, 2017; McVay et al., 2013). Thoughts of this nature are also introspective and may help explain reduced DMN suppression in older adults (cf. Spreng & Turner, 2019). To test this hypothesis, we suggest that more specific mind-wandering prompts be given during a task that requires DMN suppression (e.g., working memory) to determine whether task-related worry is associated with failed DMN suppression.

Task-related interference may also contribute to older adults’ increased frontal activity observed across a range of cognitive tasks, including working memory (e.g., Reuter-Lorenz et al., 2000; Reuter-Lorenz & Cappell, 2008), memory retrieval (e.g., Cabeza et al., 2002), and language comprehension (Peelle et al., 2010; Tyler et al., 2010). This increased frontal activation is often characterized as “compensatory” if it relates to better performance on the task and “dedifferentiation” if it does not relate to better performance (Cabeza et al., 2018; Grady, 2012). Recent longitudinal work suggests that increased left frontal activity at retrieval (i.e., a decrease in the Hemispheric Encoding/Retrieval Asymmetry [HERA] pattern) with age is not compensatory, in that it declines within an individual over time and relates to lower associative memory performance (Johansson et al., 2020; see also Morcom & Henson, 2018). However, it remains unclear what is causing decreased HERA or, in other cognitive domains, increased frontal activation with age. One possibility suggested by our recent work is that this increased frontal recruitment reflects differential responding with age to the demands of the task itself (Campbell et al., 2016; Campbell & Tyler, 2018; Davis et al., 2014; cf. Peelle & Wingfield, 2016). For instance, when language processing is measured in a naturalistic way (i.e., participants simply listen to sentences without an overt task), both older and younger adults only recruit language-processing networks (Campbell et al., 2016). When a simple task is introduced (i.e., deciding if the sentences are grammatical), additional cognitive control networks are also activated (including bilateral frontal regions), and these likely reflect both increased attentional demands and task-related interference in older adults. As such, it is possible that when extraneous task demands are minimized (e.g., during involuntary memory retrieval; Hall et al., 2014; Kompus et al., 2011), age differences in frontal activation may not be observed.

Finally, few studies have examined the neural correlates of stereotype threat in older adults (cf. Colton et al., 2013), but related work with younger adults has implicated frontal control regions. For instance, women primed with gender stereotypes exhibit less neural activity in task-related regions (i.e., those associated with math performance), and increased activity in regions associated with emotional processing and regulation—namely, the ventral anterior cingulate cortex (Krendl et al., 2008; Wraga et al., 2007) and right orbital gyrus (Wraga et al., 2007). Thus, older adults’ overactivation of frontal control regions during many cognitive tasks may reflect their greater experience of threat, even when stereotypes are not directly primed. Future work should aim to directly compare neural activity when aging stereotypes are explicitly activated (e.g., Colton et al., 2013), implicitly primed, and intentionally eased (e.g., Jordano & Touron, 2017) to determine whether increased frontal recruitment and failed DMN suppression (or, indeed, increased frontal–DMN connectivity; Samu et al., 2017; Spreng & Schacter, 2012; Spreng & Turner, 2019) are most apparent when aging stereotypes are activated—either explicitly or implicitly—but not when those stereotypes are eased.

Conclusion and ways forward

In this paper, we have argued that older adults who volunteer to take part in research are usually more motivated to do well than undergraduate students who are participating for course credit or money. For many older individuals, their performance in the lab is a matter of pride: They are in a position to gauge their cognitive health and disprove negative aging stereotypes and, whether in response to this or simply due to higher conscientiousness in general, they seem to try their best. Younger adults, on the other hand, usually have less to prove and seem to take a more relaxed approach to cognitive testing. This difference in baseline motivation between the two age groups may have far-reaching implications for neurocognitive aging research. Older adults’ greater motivation leaves them vulnerable to self-concept threat (Barber, 2017), and they likely experience some version of this threat whenever their memory is explicitly tested, not just when stereotype threat is manipulated directly. This threat response may contribute to older adults’ worse performance on explicit memory tasks, their preserved performance within some cognitive domains, and their increased activation of frontal control regions and decreased suppression of the DMN during cognitive tasks.

Disentangling the influence of testing-related threat and true age differences in cognitive control will be difficult, in that threat seems to affect performance by redirecting control processes towards task-related interference. Developing ways to minimize threat during experimental testing will not only improve older adults’ performance but will also allow researchers to gain a better sense of the true effects of age. First, efforts should be made to recruit more diverse and comparable samples of younger and older adults (e.g., Hultsch et al., 2002), as this will not only increase the generalizability of conclusions but may also help to equate motivation between groups. Further, advertisements for cognitive aging studies should be carefully worded so as not to induce stereotype threat in the advertisement itself. While outright deception should be avoided, researchers may want to use terms like “learning” and “language processing” instead of “memory” (e.g., Rahhal et al., 2001) and avoid explicit mention of age-related cognitive changes/decline. A similar link has been made in the cognitive training literature between participants’ expectations and the wording of recruitment materials (Green et al., 2019). In one study (Foroughi et al., 2016), recruitment flyers either mentioned the benefits of “brain training” or made no mention of training. Participants who self-selected into the training group showed a marked placebo effect (i.e., improved fluid intelligence after only one hour of supposed “cognitive training”), while those in the control group did not, illustrating the powerful effect that recruitment materials can have on demand characteristics.

Other potential ways to reduce threat in cognitive aging studies include testing older adults at home or in nonuniversity settings, with older peers acting as research assistants (as in Sindi et al., 2013), self-administered testing (such as experience sampling or diary studies; e.g., Schlagman et al., 2009), and online platforms (e.g., MTurk, Prolific). While older adults have traditionally reported feeling less comfortable with computers than younger adults (Lee et al., 2019), it is important to note that attitudes are changing and becoming more positive in later birth cohorts (i.e., young–older adults). Internet use has been increasing among older adults over the past decade, more than doubling from 32% in 2007 to 68% in 2016 among Canadians aged 65 and older (Davidson & Schimmele, 2019; see Anderson & Perrin, 2017, for a similar trend in the United States), suggesting that many older adults have some level of familiarity with computers; presumably, this number will only continue to increase. These alternative testing methods would help avoid some of the pressures associated with in-lab testing (though self-administered or online testing may also forego some of the socioaffective benefits of coming into the lab; Kensinger & Gutchess, 2017).

Giving older adults more time to complete cognitive tasks has also been shown to reduce the effects of testing-related threat (e.g., Hess et al., 2009; Popham & Hess, 2015). Extra time may allow older adults to engage in regulatory behaviours, controlling their reactions to anxiety and task-related worry brought on by stereotype threat. Further, longitudinal studies may also help to avoid the confounding effects of stereotype threat, in that participants are compared to their own baseline performance, so presumably any threat effects would be consistent over time. However, it is possible that participants, particularly those with negative views about aging (Levy et al., 2016), may look for a change in their own performance at follow-up sessions and experience more threat under longitudinal conditions. One way to control for this may be to assess participants’ attitudes about aging (Levy & Banaji, 2002) and to include thought probes designed to measure task-related interference at each session. These could then be included as covariates in the analysis.
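To make that last suggestion concrete, the sketch below shows one way the covariate approach might be implemented: a longitudinal mixed-effects model in which each session’s memory score is predicted by session number, an aging-attitudes measure, and probe-based task-related interference, with random intercepts by participant. All variable names and the data file are hypothetical, and this is only an illustration of the analysis strategy, not a prescribed pipeline.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per session, with columns
# participant, session, memory_score, aging_attitude (e.g., a questionnaire or
# implicit-attitude score collected at baseline), and tri (proportion of thought
# probes in that session reporting task-related interference).
df = pd.read_csv("longitudinal_memory.csv")  # hypothetical file

# Mixed-effects model: does memory change across sessions once aging attitudes and
# task-related interference are accounted for? Random intercepts group by participant.
model = smf.mixedlm(
    "memory_score ~ session + aging_attitude + tri",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```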

Finally, as mentioned above, there may be some benefit to using more naturalistic stimuli (such as task-free language or free-viewing of movies; Campbell et al., 2016; Campbell et al., 2015; Geerligs et al., 2018; Hasson et al., 2010), as these can be used to study age differences in neurocognitive functioning in the absence of task demands. Naturalistic stimuli are rich with meaning, incredibly engaging, and tend to induce the same neurocognitive processes across participants (unlike the resting state; Campbell & Schacter, 2016). However, like the resting state, naturalistic stimuli are task-free, in that participants can be instructed to simply watch or listen without any additional demands. Similarly, paradigms looking at memory for naturally occurring events that take place outside the lab (e.g., a museum tour or training protocol in the workplace; Armson et al., 2017; Diamond et al., 2020; St. Jacques & Schacter, 2013) hold great promise for examining age differences in memory under more naturalistic conditions.

In conclusion, there is no doubt that aging is associated with real neurocognitive changes, but these changes may be exaggerated or confounded by the way we typically study aging. If we want to get at true aging effects, and isolate specific cognitive functions from task-related demands and anxieties, then steps must be taken to minimize threat and measure cognition in a more naturalistic way.