The construct of motivation has been a central part of psychology since the earliest days of James and Wundt. It is a construct that spans many levels of analysis, complexity, and scope, from cellular and systems neuroscience, to individual differences and social psychology (plus applied domains such as educational and industrial/organizational psychology, and clinical psychology and psychiatry). Recently, interest in scientific studies of motivation has been rejuvenated, arising from three distinct scientific perspectives and research traditions: (a) cognitive, systems, and computational neuroscience; (b) social, affective, and personality psychology; and (c) aging, developmental, and lifespan research. This special issue of Cognitive, Affective, & Behavioral Neuroscience is the direct result of a recent effort to integrate and cross-fertilize these three research streams through a small-group conference sponsored by the National Institute on Aging (with additional support from the Scientific Research Network on Decision Neuroscience and Aging): Mechanisms of Motivation, Cognition, and Aging Interactions (MOMCAI). The issue provides a sampling of the latest research originating from these different traditions, with a number of the contributions coming from the conference participants.

In this introductory article, our goal is primarily conceptual: to define the space of the domain being covered in the Special Issue, as we currently see it. Specifically, we highlight some key unresolved theoretical questions and challenges that need to be addressed by the field, while also highlighting what we believe are some of the most profitable research strategies. Our hope is that this introductory article will serve as something like a roadmap for investigators interested in getting involved with this research area. More importantly, we hope to stimulate cross-talk and the cross-fertilization of ideas among investigators working in disparate research traditions.

The article is organized into five different sections. The first section briefly covers some of the recent developments that have rejuvenated the study of motivation–cognition interactions from different research perspectives. In the second section, we discuss how motivation is defined and studied, with different emphases and foci, in each of these different traditions. Third, we describe some of the relevant dimensions and distinctions within the domain of motivation, which help to further define and taxonomize this domain. The fourth section focuses on the candidate neural mechanisms arising from cognitive neuroscience research that are thought to contribute to motivation–cognition interactions. In the final section, we highlight what in our view are some of the most pressing research questions and “low-hanging fruit” that we hope will be targeted in future investigations within this domain.

Recent developments

Recent research in cognitive, computational, and systems neuroscience has begun to uncover some of the underlying core mechanisms by which reward signals and motivational state changes modulate ongoing neurocognitive processing. In particular, this work suggests that performing tasks in a context with available reward incentives leads to enhancements in specific cognitive processes, such as active maintenance in working memory, preparatory attention, episodic encoding, and decision making (Locke & Braver, 2010; Maddox & Markman, 2010; Pessoa, 2009; Pessoa & Engelmann, 2010; Shohamy & Adcock, 2010). These cognitive effects appear to occur via modulation of specific neural circuits involving the prefrontal cortex (PFC), midbrain dopamine system, and related subcortical structures such as the basal ganglia and hippocampus. The experimental work has been paralleled by theoretical developments involving the reinforcement learning computational framework. This framework postulates that inputs coding the current and predicted motivational values of events are utilized by the brain as learning signals to adjust decision-making biases (K. C. Berridge, 2007; Daw & Shohamy, 2008; McClure, Daw, & Montague, 2003; Niv, Daw, Joel, & Dayan, 2007).

A second stream of research development has come from the social, affective, and personality perspective. In this domain, investigations have focused on the types of goals that individuals select to pursue, and the internal and external influences on goal pursuit. In recent years, two surprising findings have emerged: (1) the explicit motivational value of behavioral goals is often not a strong determinant of whether those goals will be implemented and realized (Gollwitzer, 1999), because nonconscious influences can alter goal pursuit, primarily by modulating the perceived motivational value associated with goal outcomes (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Custers & Aarts, 2010); and (2) goal pursuit follows specific stages (e.g., planning vs. implementing) and time courses, such that goal-directed behavior can increase, decrease, or fluctuate over time, depending on the nature of the goal and the feedback received (Gollwitzer, 2012). This work has spawned a host of experimental paradigms and research strategies for specifying and elucidating the nature of nonconscious effects on goal pursuit (Bargh & Morsella, 2010), effective strategies for emotional regulation and self-control (Kross & Ayduk, 2011), the causes of self-regulatory persistence (Job, Walton, Bernecker, & Dweck, 2013) or depletion and failure (Baumeister & Vohs, 2007), and major sources of personality differences (Sorrentino, 2013).

The role of motivation–cognition interactions has also been emphasized in recent aging and developmental research. On the aging side, a primary focus has been on motivational reprioritization among older adults (Charles, 2010; Heckhausen, Wrosch, & Schulz, 2010). In the socioemotional domain, accumulating studies have suggested that older adults can exhibit better emotion regulation than younger adults in some contexts, as well as a stable or increased focus on positive affect (Carstensen et al., 2011; Mather, 2012; Urry & Gross, 2010). Such findings are somewhat puzzling, given that emotion regulation is generally hypothesized to depend on executive control processes and supporting brain systems (e.g., prefrontal cortex) that are well established as showing age-related decline (Ochsner & Gross, 2005). One prominent theoretical account postulates that these effects reflect increased motivation toward emotionally meaningful goals and those associated with positive affect among older adults, as they get closer to the end of their life (Carstensen, 2006; Carstensen, Isaacowitz, & Charles, 1999). Contrasting accounts have also focused on motivational reprioritization, but instead as a specific response to age-related cognitive decline. According to such accounts, older adults will restrict cognitive engagement to (a) activities associated with maintenance or loss prevention, as opposed to growth (Baltes, 1997), or (b) tasks with the greatest implications for the self (Hess, in press).

A different emphasis has arisen from the developmental perspective. Here, the focus has been on potentially diverging trajectories in the maturation of cognitive versus affective neural circuits. Specifically, adolescence has been highlighted as a period in which cognitive control processes are especially sensitive to incentive-related motivational influences (Geier, Terwilliger, Teslovich, Velanova, & Luna, 2010; Prencipe et al., 2011; Somerville & Casey, 2010; Steinberg, 2010a; van den Bos, Cohen, Kahnt, & Crone, 2012; Van Leijenhorst et al., 2010). These trajectories diverge once again in older age, with cognitive prefrontal circuits being more affected than emotional prefrontal circuits (Mather, 2012).

Although the body of work examining motivational influences on basic cognition and higher-level goal pursuit is rapidly growing, often there is little cross-talk between neurocognitively focused researchers and those taking social/personality and lifespan perspectives. This is problematic, because all of these perspectives are likely to be required in order to achieve a comprehensive understanding of how motivation impacts psychological and behavioral function. A number of challenges must be overcome to enable such integration. In the next two sections, we outline the key challenges of (a) defining motivation and (b) specifying its relevant dimensions.

Motivational definitions and operationalization

A key challenge for cross-disciplinary integration is to establish a unified definition of motivation and a shared understanding of how motivational consequences are operationalized in experimental investigations. Indeed, different research traditions have emphasized distinct aspects of motivation. Here we briefly discuss how motivation has been defined and operationalized within each of these traditions.

Animal learning/systems neuroscience

Historically, studies of motivation in the animal-learning tradition have strongly focused on homeostatic drive accounts, in which physiological deviations from an internal set-point lead to shifts in motivational state (e.g., thirst, hunger) that trigger corrective behaviors (Bindra, 1974; Hull, 1943; Toates, 1986). However, contemporary research has been strongly influenced by the discovery that variations in the magnitude and quality of a reinforcer or the outcome of an instrumental action have behavioral effects that parallel those induced by physiological shifts in motivational state. This finding suggests that such states, rather than inducing drives, motivate behavior by modulating expectancies regarding the outcome (i.e., its incentive value). Because the incentive value of an action outcome must be learned, much of the current research focuses on the learning processes that mediate motivational control over behavior (K. C. Berridge, 2004). Incentive learning is investigated using standard Pavlovian and instrumental conditioning paradigms and assessed in terms of the behavioral, physiological, and neural responses that develop to conditioned stimuli (CSs) previously associated with rewarding or aversive outcomes.

In the domain of systems neuroscience, motivation is construed as having both activational and directional functions (Salamone & Correa, 2012), with the former being related to the nonspecific energization or invigoration of responding (typically assessed in terms of response rate or intensity), and the latter referring to specific response biases (typically assessed in terms of choice or place preferences). Behavior is further considered to be under goal-directed motivational control if it meets two additional criteria: (1) It is sensitive to the current incentive value of the outcome, and (2) it is sensitive to action–outcome contingencies (Dickinson & Balleine, 1995). A canonical paradigm for investigating goal-directed motivational effects is the outcome revaluation procedure (Dickinson, 1985), which is used to demonstrate how a change in the motivational state of the animal (selective satiation, physiological deprivation, aversive conditioning, etc.) can immediately impact Pavlovian responses (e.g., licking) and can also bias instrumental behaviors (e.g., rate of lever pressing), even in the absence of further contact with the reinforcer. Such studies typically use primary reinforcers (food, liquid, or sexual stimuli) as incentives.

Social, affective, and personality psychology

Social and personality psychologists use motivational constructs to describe why a person in a given situation selects one response over another, or makes a given response with stronger intensity or frequency (Bargh, Gollwitzer, & Oettingen, 2010). This conceptualization follows that of animal learning and systems neuroscience studies, in focusing on both the activational and directional functions of motivation. However, in the social, affective, and personality tradition, the primary interest is in how the direction and intensity of motivation arise from the expectations and needs of the individual (Weiner, 1992). A key theoretical framework is the conceptualization of motivation in terms of goals. Here, goals are considered to be mental representations of desired states, which serve as an intermediate construct that actually generates the activational and directional components of motivation (Austin & Vancouver, 1996; R. Custers & Aarts, 2005; Elliot & Fryer, 2008). Additionally, social and personality psychologists use the term motive to refer to higher-order classes of incentives, such as achievement, power, affiliation, and intimacy, that may be intrinsically attractive to an individual (McClelland, 1985b). Motives can exhibit state-like properties, such that they reflect different situational construals, but they also exist as dispositions, which are relatively stable and trait-like (Gollwitzer, Barry, & Oettingen, 2011; Schultheiss & Brunstein, 2010).

Gollwitzer (1990) coined the summary terms feasibility and desirability to describe the directional and activational determinants of motivation, respectively. Feasibility reflects expectations of the probability of attaining the desired future outcome, on the basis of experiences in the past (Bandura, 1977; Mischel & Moore, 1973). These expectations can specify whether or not (a) one is capable of performing a certain behavior that is necessary to achieve a desired outcome (i.e., self-efficacy expectations), (b) the performed behavior will lead to the desired outcome (i.e., outcome expectations), or (c) one will reach the desired outcome (general expectations; Oettingen & Mayer, 2002). In contrast, desirability is defined as the estimated value of a specific future outcome (i.e., the perceived attractiveness of the expected short- and long-term consequences, within and outside the person, of having reached the desired future).

The dimension of desirability is often further subdivided in terms of motive strength and incentive value. Motive strength is defined primarily in terms of the individual, and relates to the class of incentives that the individual usually finds attractive. Thus, motive strength typically refers to the long-term likelihood that an individual will engage in actions of any type that would tend to satisfy the motive. In contrast, incentive value is defined in terms of the properties of the stimulus, and specifies the behavioral choices made within a particular domain of action. As an example, high achievement motive strength will cause an individual to see challenging tasks as attractive and seek out opportunities to engage in them. Tasks that provide the opportunity for achievement pride will have high incentive value and will be associated with specific behavioral choices that indicate high effort expenditure and task persistence.

A typical experimental paradigm within social, affective, and personality psychology examines the intensity or frequency of motivated behavioral responses in terms of these three factors: feasibility, motive, and incentive value (McClelland, 1985b). Response measures can be collected via laboratory performance tasks, but they are also commonly acquired through self-report or experience-sampling approaches. Likewise, measures of motive, incentive value, and feasibility (expectancy) can be taken from personality questionnaires, implicit rating tasks (e.g., projective methods, such as the Thematic Apperception Test; Murray, 1943), or experimental manipulations of success likelihood. A canonical finding is the presence of a three-way multiplicative interaction among these factors that predicts response strength (i.e., the frequency or intensity of a given behavior; McClelland, 1985a).
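
As a schematic illustration of this multiplicative relation (a simplified rendering of the expectancy–value logic, not a formal model taken from the cited studies), response strength can be written as

\[
\text{response strength} \;\propto\; \text{motive strength} \times \text{expectancy (feasibility)} \times \text{incentive value},
\]

with the multiplicative form implying that motivated responding should collapse whenever any single factor approaches zero, an empirical signature that distinguishes it from an additive combination of the same factors.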

Cognitive neuroscience

In cognitive neuroscience, motivation is often formulated in terms of neural representations of expected outcomes that predict decisions regarding effort investment. Experimental investigations commonly operationalize motivation in terms of the transient neural responses evoked by extrinsic incentive cues. These cues are used to signal parametrically manipulated rewards (typically monetary) available for instrumental actions, on the assumption that motive strength will covary quantitatively with reward amount. The monetary incentive delay (MID) task is a canonical paradigm for investigating such effects (Knutson, Fong, Adams, Varner, & Hommer, 2001): Pretrial cues indicate the amount of monetary reward to be earned (or penalty avoided) by making a sufficiently fast button-press response to a brief visual target, with the allowable response window typically manipulated to ensure a specific reward rate. This paradigm is used to identify cue-related activation in candidate motivation-linked brain regions (e.g., midbrain dopamine system, nucleus accumbens) that tracks the expected incentive value (i.e., Amount × Success Probability) of the target action. A limitation of these types of paradigms is that they do not directly indicate a motivational effect, because typical behavioral indices of effort investment—accuracy and reinforcement rate—are experimentally controlled (and even reaction time, which is not typically controlled, is almost never considered a dependent measure). Instead, the expected value of an action is often treated as an assumed proxy for motivation in many cognitive neuroscience studies.

Another approach that has been utilized to decouple effort investment from simple motor behaviors (e.g., response speed/vigor) is to examine how fluctuations in incentive value modulate engagement in effortful cognitive processing. In this case, the motivation triggered by an incentive cue is related not only to the expected value of the action outcome, but also to the efficacy in obtaining it via a targeted neurocognitive process. A canonical example of this approach is the incentivized-encoding paradigm, in which pretrial incentive cues indicate the incentive value associated with successful memorization of an upcoming visual stimulus, with payoffs delivered at a later memory test session (Adcock, Thangavel, Whitfield-Gabrieli, Knutson, & Gabrieli, 2006; Wittmann et al., 2005). This paradigm has been used to demonstrate incentive-related mediation of successful memorization, in terms of the enhanced activation of motivation-linked neural circuits (e.g., dopaminergic pathways) and/or functional connectivity with memory systems (i.e., medial temporal lobe). Similar approaches have been used to target different cognitive processes, such as working memory, task switching, attentional selection, response inhibition, and decision making (Braem, Verguts, Roggeman, & Notebaert, 2012; Krebs, Boehler, Roberts, Song, & Woldorff, 2012; Krebs, Boehler, & Woldorff, 2010; Padmala & Pessoa, 2011; van Steenbergen, Band, & Hommel, 2012; Taylor et al., 2004).

Cognitive aging and development

In cognitive-aging research, motivational constructs have been invoked to explain changes in the selection of cognitive activities, level of engagement, and biases in attention and perceptual processing. A common approach in this tradition is to assess cognitive task selection and engagement as a function of the motivational value associated with that task (see, e.g., Freund, 2006; Germain & Hess, 2007).

A finding of particular interest within this domain is the positivity effect—in which memory and attention in older adults appear to be asymmetrically biased toward affectively positive items or events (i.e., Age × Valence interactions). An influential hypothesis is that positivity biases are the result of chronically active emotion regulation goals—that is, a heightened motivation to focus on the positive and avoid the negative (Mather & Carstensen, 2005; Reed & Carstensen, 2012). A standard experimental approach for testing this hypothesis is to put emotion regulation goals in competition with other goals and compare their expression with that under unconstrained conditions. The assumption is that age differences in active emotion regulation goals will be less strongly expressed when those goals are competing with experimentally imposed task goals (e.g., remembering items for a subsequent memory test). This approach has been used to demonstrate that (a) larger positivity effects (Age × Valence interactions) are observed during unconstrained conditions, relative to those that provide task-related goals (Reed, Chan, & Mikels, 2014); and conversely, (b) positivity effects can emerge in younger adults instructed to focus on their emotions (Kennedy, Mather, & Carstensen, 2004; Mather & Johnson, 2000). Another experimental approach to the positivity effect is to focus on the role of cognitive control, under the assumption that control is required to maintain emotion regulation goals in an active and accessible state. The key finding is that positivity effects are reduced in older adults with low cognitive control abilities, or under task conditions with high cognitive control demands (Knight et al., 2007; Mather & Knight, 2005; Petrican, Moscovitch, & Schimmack, 2008). However, it is important to note that, to date, the influence of motivational variables (e.g., motive strength, incentive value) on positivity effects has not been assessed directly.

Motivational constructs have also been invoked in the developmental literature as a primary means of explaining the apparent surge in risky decision making that occurs during adolescence (Somerville & Casey, 2010; Spear, 2000). Both rodent (Douglas, Varlinskaya, & Spear, 2003) and human models (Cauffman et al., 2010; Steinberg et al., 2008; Luciana & Collins, 2012) suggest that reward seeking, novelty seeking, and exploratory behavior peak in adolescence. These behaviors are interpreted in terms of the unique trajectories of brain development that occur during this age period, in which the key mechanisms that modulate dopamine circuitry function are maximally activated, leading to biased dynamic interactions within subcortical–cortical neural circuits. Specifically, these neurodevelopmental changes are thought to up-regulate the signaling strength of motivationally salient information, such that this information exerts a disproportionately strong influence over adolescents’ choices, actions, and regulatory capacity (Somerville & Casey, 2010; Spear, 2000; Steinberg, 2004; see the Pressing Research Questions section for further discussion).

The standard experimental approach to this issue is to elicit motivational context-specificity effects, using the same types of incentive manipulations employed in the cognitive neuroscience literature, in order to demonstrate that adolescents show adult-like decision making under some circumstances, but selective disruptions under conditions in which salient affective–motivational cues or contexts are present (e.g., Figner, Mackinlay, Wilkening, & Weber, 2009). Current work aims to define the necessary and sufficient features of environmental cues and contexts that lead to heightened approach motivational behavior in adolescents.

Summary

The preceding sections highlighted the differences in how motivation is defined and investigated in various subfields. In animal behavioral neuroscience, the emphasis is on learning and conditioning processes, using primary incentives (food, liquid, and sexual stimuli) and measuring simple behaviors (physiological reflexes, response rates, and stimulus preferences). In social and personality psychology, the emphasis is on the pursuit of temporally extended goals involving high-level incentives (power, achievement, and affiliation) and assessing self-reported beliefs and goal striving behaviors. In cognitive neuroscience and adolescent developmental research, the emphasis is on neural representations of incentive value, typically using monetary rewards, and assessing how these modulate effortful cognitive processing. Finally, in cognitive-aging research, there is an emphasis on emotion–cognition interactions, using affectively valenced stimuli and measuring attentional and memory biases.

This comparison across research domains reveals shortcomings within each subfield. For example, systems and computational neuroscience studies typically focus on very simple goal-directed behaviors, and thus have only rarely addressed why or how motivational factors can influence high-level cognitive processing. In contrast, human cognitive neuroscience studies have tended to use rather narrow experimental manipulations of motivational state (i.e., monetary reward incentives), and thus often fail to exploit the higher degree of experimental control that comes from using biologically relevant incentives, such as food and liquids, that are more easily linked to motivational factors (e.g., physiological shifts, satiation, subjective preferences, etc.; Galvan & McGlennen, 2013; Krug & Braver, in press). Conversely, although social and personality psychologists more commonly explore the types of complex factors that are known to moderate human motivation (e.g., personality traits, affective context, or situational construals), this work does not typically take advantage of the experimental precision and additional leverage afforded by the paradigms and methods employed in cognitive and neuroscience research (e.g., neuroimaging, pharmacological interventions, etc.). Finally, in cognitive-aging and developmental studies, motivational mechanisms of age-related differences are often postulated without being explicitly tested with the types of experimental manipulations employed in either the neuroscience or social/personality literatures. Greater cross-fertilization would be highly fruitful in helping each subfield address its own limitations, by bridging between constructs and paradigms, such that motivation–cognition interactions could be understood at various levels of analysis.

Motivational dimensions and distinctions

A second key challenge to cross-disciplinary integration is to identify the relevant dimensions by which to taxonomize motivational influences on behavior. As will become clear below, the motivational dimensions and distinctions that have been investigated and emphasized vary significantly across disciplinary subfields. As a consequence, researchers working in one subfield may not be aware of the distinctions prominent in another, and as such, may not be sufficiently informed and constrained by them in their own research investigations. The goals of this section are to highlight these distinctions and to show how they challenge theory development and experimentation on the mechanisms of motivation–cognition interactions.

Goal-directed control versus other forms of incentive-based learning

Motivation is most often conceptualized as being goal-directed, in that effort is invested toward instrumental actions that bring about desirable outcomes, in relation to the incentive value of those outcomes. However, through incentive-based learning mechanisms, stimulus–response associations may also form that are independent of the current incentive value of a goal, as in the case of habits. Habits are important for behavioral control in that they enable efficient and automatized responding that does not require representation of action–outcome associations (Balleine & Killcross, 2006; Dickinson & Balleine, 2000).

Within the animal and systems neuroscience literature, considerable work has been devoted to distinguishing motivational effects on goal-directed versus habitual behavioral control. As we described above, one classic approach is to identify goal-directed behaviors via outcome revaluation procedures, since habitual behaviors have been found to be insensitive to such manipulations (Dickinson & Balleine, 1994). A second test is Pavlovian–instrumental transfer (PIT; Dickinson & Balleine, 1994; Estes, 1943), in which presentation of a Pavlovian cue (i.e., one predictive of reward not contingent on instrumental behavior) can enhance instrumental responding, even though the cue had not previously been paired with such instrumental responses. One form of PIT, termed general PIT, enhances instrumental responses even when they are not linked to the Pavlovian outcome (e.g., for a thirsty animal, a water-predicting cue can increase instrumental responding for a food reward; Dickinson & Dawson, 1987). General PIT is thus activational rather than directional, and appears to have a greater influence when behavior is under habitual control (Holland, 2004).

The phenomenon of PIT highlights the motivational effects of Pavlovian stimuli. Pavlovian motivational control has been referred to as incentive salience, which may be reflected in the subjective experience of “wanting” (K. C. Berridge & Robinson, 1998). Incentive salience indexes the motivational power of learned Pavlovian CSs (i.e., those previously associated with appetitive or aversive outcomes) to invigorate behaviors. Incentive salience is wholly motivational, in that it is a function not only of the learned outcome value transferred to the CS, but also of the current physiological state (e.g., hunger, satiety, etc.). Nevertheless, incentive salience is not thought to be goal-directed, in the sense described above. Indeed, Pavlovian responses appear to be hard-wired and reflexive, such that activated behaviors are somewhat inflexible, and may actually be maladaptive (Dayan, Niv, Seymour, & Daw, 2006; Hershberger, 1986). A core feature of incentive salience is that the Pavlovian CSs can sometimes become “motivational magnets,” triggering approach (or avoidance) behaviors directed toward the cue itself (rather than the outcome they signify; K. C. Berridge & Robinson, 1998; K. C. Berridge, Robinson, & Aldridge, 2009).

In more recent years, there has been increasing mutual influence between systems neuroscience studies of animal learning and the computational framework of reinforcement learning. This framework formalizes learning algorithms by which agents maximize expected long-term reward (Sutton & Barto, 1998). Thus, reinforcement learning refers to learning the value of events, actions, and stimuli. An important distinction in this literature has been between model-free versus model-based reinforcement learning, a computational distinction that parallels the habitual versus goal-directed control distinction (Daw, Niv, & Dayan, 2005). In model-free learning, action control is based on the learned (stored or “cached”) incentive values and behavioral responses that are associated with specific stimulus cues (eventually leading to habit formation). In contrast, model-based learning involves a forward simulation in which the incentive value of an action is directly computed using a sequential transition model of its associated outcomes.
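
To make the model-free versus model-based distinction concrete, the following minimal Python sketch (a toy illustration under our own simplifying assumptions; the one-step task, values, and variable names are hypothetical and not drawn from any of the cited studies) contrasts a cached value updated by prediction errors with a value computed by forward simulation over a transition model:

```python
# Toy one-step task: 'press' leads to 'food' with probability 1 (illustrative only).
TRANSITIONS = {"press": {"food": 1.0}, "withhold": {"nothing": 1.0}}
outcome_value = {"food": 1.0, "nothing": 0.0}   # current incentive values of outcomes

# --- Model-free control: a cached action value, adjusted by prediction errors ---
q_cached = {"press": 0.0, "withhold": 0.0}
alpha = 0.1  # learning rate

def model_free_update(action, reward):
    """Prediction-error-driven update of the stored ("cached") action value."""
    rpe = reward - q_cached[action]          # reward prediction error
    q_cached[action] += alpha * rpe

# --- Model-based control: value computed on the fly from the transition model ---
def model_based_value(action):
    """Forward simulation: weight each outcome's current value by its probability."""
    return sum(p * outcome_value[o] for o, p in TRANSITIONS[action].items())

# Training: repeated rewarded presses build up the cached value.
for _ in range(100):
    model_free_update("press", outcome_value["food"])

# Outcome devaluation (e.g., selective satiation): food loses its value now.
outcome_value["food"] = 0.0

print(round(q_cached["press"], 3))   # still ~1.0: the cached value is insensitive to devaluation
print(model_based_value("press"))    # 0.0: forward simulation reflects the new value immediately
```

Under this caricature, the cached ("habitual") value remains high immediately after devaluation, whereas the simulated ("goal-directed") value drops at once, mirroring the behavioral dissociation between habitual and goal-directed control described above.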

Until recently, most reinforcement learning investigations have targeted the computational and neurobiological mechanisms that contribute to model-free processes (Doll, Simon, & Daw, 2012). The best-studied of these mechanisms is the reward prediction error (RPE), the primary signal that drives CS–UCS learning from reward outcomes. The RPE is now well established as being encoded in the phasic activity of midbrain dopamine neurons and their mesocorticolimbic targets (i.e., ventral striatum; Schultz & Dickinson, 2000). However, the RPE may also reflect other forms of surprise signal triggered by salient but non-reward-predicting sensory cues (Bromberg-Martin, Matsumoto, & Hikosaka, 2010; D’Ardenne, Lohrenz, Bartley, & Montague, 2013; Dommett et al., 2005; Lammel, Lim, & Malenka, 2014; Redgrave, Gurney, & Reynolds, 2008). The relationship between the motivational and reinforcement learning functions of dopamine is still a matter of controversy, however (K. C. Berridge, 2012). Most reinforcement learning accounts have neglected motivational variables (Dayan & Balleine, 2002); thus, the proposed RPE-type mechanisms that govern learning of CS+ reward values do not typically incorporate instantaneous effects of changes in motivational state, or whether instrumental responding is goal-directed.
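
In its standard temporal-difference formulation (a textbook rendering rather than the specific model of any study cited here), the RPE at time t is

\[
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t),
\]

where \(r_t\) is the obtained reward, \(V(s)\) is the learned value of state \(s\), and \(\gamma\) is a temporal discount factor; stored values are then adjusted as \(V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t\), with learning rate \(\alpha\). Because the values in this scheme are cached quantities that change only through such updates, an abrupt shift in motivational state (e.g., satiation) does not, by itself, alter them, which is one way of seeing the limitation noted above.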

Approach versus avoidance motivation

A fundamental distinction within the domain of motivation is whether the motive is to seek out and approach some object or activity, or instead to avoid it—that is, to escape from the object or activity. The affective responses associated with these orientations differ, and the actions to which they relate also differ (Guitart-Masip et al., 2012). The distinction between approach and avoidance motivation is one that must be dealt with cautiously, however. It tends to be assumed that positive affect is associated with approach and negative affect with avoidance, but that is not always the case. A good deal of evidence indicates that anger and irritability are related to thwarted approach rather than to threat motivation (Carver & Harmon-Jones, 2009; Harmon-Jones, 2003).

The distinction between approach and avoidance motivation has been operationalized in diverse ways across the various subfields engaged in motivational research (Elliot, 2008). In animal and human neuroscience, the distinction is often made in terms of the brain systems involved. For example, a classic distinction is between a mesocorticolimbic dopaminergic behavioral activation system (BAS) associated with approach motivation, and a behavioral inhibition system (BIS), originally localized to the septo-hippocampal system, associated with avoidance motivation (Gray, 1987). In contrast, for personality psychologists, approach and avoidance motivations are typically discussed in terms of stable individual differences in habitual orientations to the world, and assessed in terms of self-report scales (Carver, in press). These individual differences are typically discussed in the framework of reward sensitivity and threat sensitivity (e.g., BIS/BAS scale; Carver & White, 1994), or in related self-regulatory dimensions, such as promotion (focus on advancement and accomplishment) versus prevention (focus on safety and security; Higgins, 1997).

Activation of these systems is commonly elicited with different types of incentives, such as rewards versus punishments, or in humans, monetary gains versus losses. However, this work also indicates more complexity than the intuitive valence-based dimensions. For example, in the cognitive literature, support has been found for a regulatory fit account, in which a promotion focus (either trait-related or an experimentally induced state) will produce better performance when task incentives are framed in terms of monetary gain, rather than avoidance of monetary loss, whereas a prevention focus will show the opposite pattern (Maddox & Markman, 2010).

In the human cognitive neuroscience literature, ongoing debate has focused on whether specific brain regions within motivational networks are valence- or affect-specific. For example, some human neuroimaging studies have found that nucleus accumbens activation is greater on trials incentivized by contingent gains relative to losses in the monetary incentive delay task (Cooper & Knutson, 2008). In other studies, however, both the accumbens and the ventral tegmental area (VTA) respond during anticipation of both monetary losses and gains (Carter, Macinnes, Huettel, & Adcock, 2009; Choi, Padmala, Spechler, & Pessoa, 2013; Cooper & Knutson, 2008), and some studies report even greater responses under aversive than under approach motivation (Niznikiewicz & Delgado, 2011). This result is paralleled by animal studies in which the nucleus accumbens and VTA have been found to reflect both appetitive (desire) and aversive (dread) motivation, although potentially in anatomically segregated subregions (Bromberg-Martin et al., 2010; Lammel et al., 2012; S. M. Reynolds & Berridge, 2008; Roitman, Wheeler, & Carelli, 2005). Similar complexities arise in regions often associated with aversive reinforcement learning, such as the amygdala and anterior cingulate cortex (Hommer et al., 2003; Shackman et al., 2011), which also show responses to positive valence and involvement in appetitive learning.

The lack of valence specificity in human studies using monetary incentives could reflect the fact that in such studies gains and losses do not present a true valence asymmetry. More specifically, unless participants are endowed on a prior visit and asked to pay back the experimenter, even if they lose on a given trial, they still leave the experimental session with a net gain. Likewise, the loss of a positive incentive is not necessarily equivalent to an outcome involving punishment. The use of primary incentives alleviates this problem, but introduces others. One potentially promising approach has been to utilize selective patterns in the physiological activation of motivational systems as a reliable index of the meaning evoked by the objective incentives. Such distributed patterns have been differentially elicited by task incentive structures; for example, in the incentivized-encoding paradigm, shock threats (aversive motivation) were associated with distinct patterns of activation and connectivity (amygdala/parahippocampal cortex) as compared to those found for monetary rewards (approach motivation; VTA/hippocampus). These findings imply that engagement of distinct neural circuits impacts the types of memory traces formed under distinct motivational conditions (Murty, Labar, & Adcock, 2012), whether or not these differences are best accounted for by valence.

Transient versus sustained motivation

Animal and human neuroscience studies have typically investigated transient motivational effects associated with specific external cues. Motivational influences are not just transitory, however, but can also persist in a tonic fashion across behavioral contexts. Recent findings have suggested the presence of sustained motivational effects, using incentive context paradigms (Jimura, Locke, & Braver, 2010). Here, the incentive value of cognitive task performance is manipulated in a block-wise manner, but also more transiently via orthogonally manipulated trial-specific reward cues. Incentive context has been found to be associated with enhanced task performance and sustained neural activity, but these effects were independent of trial-specific incentive value (Chiew & Braver, 2013; Jimura et al., 2010).

Similarly, physiological investigations, chiefly of the dopamine system, have focused overwhelmingly on transient responses to discrete motivational cues, despite a wealth of pharmacological research in animals, healthy humans, and patient populations demonstrating a role for dopamine not just in processing and learning about discrete rewards, but also in motivation and sustained motivated behavior (K. C. Berridge, 2007; Salamone & Correa, 2012). Moreover, whereas the anatomy of dopaminergic synapses in the striatum suggests high temporal precision, dopaminergic effects on learning can potentially bridge multiple synapses and phasic events (Lisman, Grace, & Duzel, 2011). Dopaminergic synaptic anatomy outside the striatum, in cortex and in the hippocampus, includes significant distances between terminals and receptors, consistent with modulation over slower, sustained time scales (Shohamy & Adcock, 2010).

One theoretical account explains these sustained motivational effects in terms of incentive context-related changes in tonic dopamine (Niv et al., 2007). According to this account, tonic dopamine signals the long-term average reward rate of the current environment. This signal is thought to lead to a generalized increase in the vigor or intensity of action, by indicating an increased “opportunity cost” of response latency. In other words, when the current environmental context has high incentive value, increasing the speed of all actions (even those not directly rewarded) will typically enable more rewards to be harvested per unit time. As such, sustained motivation may have connections with general PIT effects, which are also thought to produce a more nonspecific invigoration of behavioral responding (Niv et al., 2007). Interestingly, recent evidence from microdialysis has demonstrated tonic dopamine efflux correlated with long-term average reward rates selectively in PFC terminal regions, but not the nucleus accumbens (St Onge, Ahn, Phillips, & Floresco, 2012). Other work has shown tonic dopamine release, as well as sustained firing of dopamine neurons, under conditions related to anticipatory, sustained motivated behaviors (Fiorillo, Tobler, & Schultz, 2003; Howe, Tierney, Sandberg, Phillips, & Graybiel, 2013; Totah, Kim, & Moghaddam, 2013). These dopamine-mediated effects of sustained motivation on response vigor are just beginning to be examined in humans (Beierholm et al., 2013).
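
A simplified way to express this opportunity-cost logic (following the general form of the Niv et al., 2007, account rather than reproducing its full derivation) is that the total cost of emitting a response with latency \(\tau\) is approximately

\[
\text{cost}(\tau) \;\approx\; C(\tau) + \bar{R}\,\tau,
\]

where \(C(\tau)\) is an effort cost that decreases with slower responding and \(\bar{R}\) is the average reward rate of the current environment, proposed to be signaled by tonic dopamine. When \(\bar{R}\) is high, each additional unit of time forgoes more reward, so the cost-minimizing latency shortens and responding is invigorated across the board, even for actions that are not themselves rewarded.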

Conscious versus nonconscious motivation

Motivated behavior is often assumed to start with conscious awareness and the formation of explicit intentions. However, as noted above, provocative findings over the last two decades, primarily from within the social and personality literature, have highlighted a distinction between conscious and nonconscious motivation, and the presence of implicit (or nonconscious) goal pursuit, in which motivated behavior is instigated by environmental cues that may not reach conscious awareness (Custers, Eitam, & Bargh, 2012). This idea has led to a research focus that contrasts goal pursuit under conditions in which goals are implicitly versus explicitly activated. The typical methodological approach to implicit goal priming is the presentation of words, pictures, or other stimuli, either in seemingly unrelated tasks preceding the experimental task or by subliminal priming, both of which render conscious awareness of this influence less likely. These priming manipulations increase both the tendency to engage in goal-relevant action patterns and the vigor with which goal pursuit is executed (Custers & Aarts, 2010).

Recent studies have extended this approach to focus on implicit priming of reward cues to motivate cognitive performance. In these studies, the reward that can be earned on a particular trial is cued at its beginning, with the cue presented either clearly visibly or subliminally. Subliminally presented high-reward cues have been found to induce more cognitive effort expenditure than low-reward cues (Bijleveld, Custers, & Aarts, 2009; Capa, Bustin, Cleeremans, & Hansenne, 2011). A few cognitive neuroscience studies using subliminally presented reward cues have demonstrated that these engage subcortical motivation-linked brain regions, such as the ventral pallidum, in proportion to incentive value (Pessiglione et al., 2007; Schmidt et al., 2008). The cognitive performance effects of subliminal reward cues have been found to diverge in some instances from those of clearly visible reward cues, specifically under conditions in which visible rewards lead to a strategic change in behavior. For example, in some cases subliminal reward cues only boost the expenditure of effort, whereas visible rewards lead to a speed–accuracy trade-off (Bijleveld, Custers, & Aarts, 2010). Likewise, subliminal reward cues modulate cognitive performance even on trials in which rewards are known to be unattainable, whereas such effects are not present for clearly visible reward cues (Zedelius, Veling, & Aarts, 2012). If the effects of subliminal reward cues had been mediated by conscious processes (e.g., perceiving that the trial has high or low incentive value), such a divergence should be absent. Hence, it appears that reward cues can motivate behavior, in the sense that the expenditure of effort is increased, even without people being aware of it (for further discussion, see Bargh & Morsella, 2008).

Extrinsic versus intrinsic motivation

Animal and human neuroscience studies have almost uniformly focused on extrinsic motivation, the neural and behavioral responses to extrinsically provided incentives (e.g., food, money, etc.). However, in social and personality psychology, extrinsic motivation is strongly distinguished from various forms of intrinsic motivation. Intrinsic motivation is defined as engagement in a task for the inherent pleasure and satisfaction derived from the task itself (Deci & Ryan, 1985). Intrinsic motivation appears to drive behavior in a way that is different from, and potentially even in competition with, extrinsic motivation. The most provocative example of this competition is the undermining effect (Deci, 1971; Deci, Koestner, & Ryan, 1999; Ryan, Mims, & Koestner, 1983; also called the “motivation crowding-out effect”: Camerer & Hogarth, 1999; Frey & Jegen, 2001; or “overjustification effect”: Lepper, Greene, & Nisbett, 1973), a phenomenon in which people’s intrinsic motivation is decreased by receiving performance-contingent extrinsic rewards.

The standard approach for demonstrating undermining effects on intrinsic motivation is through free-choice paradigms. Here, willingness to voluntarily engage in a target task is assessed after a preceding phase in which the targeted task is performed either under conditions in which performance-contingent extrinsic rewards are provided or not (manipulated across groups). A large number of studies have shown that the extrinsic reward group spends significantly less time than the control group engaging in the target task during the free-choice period, providing evidence that the extrinsic rewards undermine intrinsic motivation for the task (Deci et al., 1999; Tang & Hall, 1995; Wiersma, 1992). Although intrinsic motivation has been mostly neglected in cognitive and neuroscience studies, one study has shown neural evidence of the undermining effect, in that removing performance-contingent extrinsic rewards led to reduced activity in reward motivation regions (anterior striatum, dopaminergic midbrain) during a subsequent unrewarded performance phase, when compared to a never-rewarded control group (Murayama, Matsumoto, Izuma, & Matsumoto, 2010). Other studies using different paradigms, such as those involving interesting trivia questions (Kang et al., 2009), inherently pleasurable music (Salimpoor et al., 2013), and self-determined choice (Leotti & Delgado, 2011; Murayama et al., 2013), have also indicated that intrinsic motivation may be related to the modulation of reward circuitry (e.g., striatum). In the reinforcement learning literature, some researchers have attempted to expand the basic framework to incorporate computational mechanisms of intrinsic motivation (Oudeyer & Kaplan, 2007; Singh, Lewis, Barto, & Sorg, 2010).

Goal setting versus goal striving

In social psychological treatments, the motivated pursuit of goals is often separated into goal-setting and goal-striving phases (Gollwitzer & Moskowitz, 1996; Oettingen & Gollwitzer, 2001). Goal setting refers to the processes and determinants of how a particular goal gets selected for pursuit, whereas goal striving indicates the processes by which a particular goal, once implemented, is used to modulate ongoing behavior. Goal-setting research is aimed at demonstrating that goal selection can be influenced by various factors, such as how the goal is assigned (by self or other), framed (the goal content), and internally represented (the goal structure). Here the approach/avoidance (or, relatedly, promotion versus prevention) motivational distinction becomes especially relevant, in terms of both trait-related individual differences (what goals the individual finds desirable) and situational context manipulations (to minimize failure or maximize success).

Gollwitzer (1990) suggested that whereas goal setting can be characterized in terms of motivational principles, goal striving is best characterized in terms of volitional factors. These include action initiation, persistence, goal-shielding, feedback integration, and disengagement. Accordingly, goal-striving research has primarily focused on the kinds and effectiveness of self-regulatory strategies that are implemented to attain the goal. Surprisingly, increasing the strength of goal activation (intention) may sometimes produce only limited impacts on successful goal attainment (Webb & Sheeran, 2006). Instead, volitional self-regulatory strategies are needed to prepare for potential obstacles standing in the way of attaining the desired future, and to stay on track and pursue the desired future even in the face of difficulties and temptations.

Two key self-regulatory strategies that are a focus of current investigation are mental contrasting and implementation intentions. Mental contrasting allows people to explicitly consider possible resistances and conflicts when trying to reach a desired future (Oettingen, 2012). This means that people mentally juxtapose the desired future (e.g., completing a writing project) with obstacles in present reality (e.g., following an invitation to socialize). Such contrasts are used to project success expectations, so that these can determine the intensity of goal pursuit. Implementation intentions (Gollwitzer, 1999; Gollwitzer & Oettingen, 2011) are a strategy that involves generating “if . . . , then . . .” plans to link a critical situation with an action that is instrumental to reaching a desired future (e.g., “if it is Saturday afternoon and my friends invite me to watch a movie, then I will tell them that I will first finish my writing project”). These plans offer a shortcut to automated responding (i.e., creating ad-hoc habits). In other words, if–then plans allow people to perform automatized responses in the specified critical situation in a fast and effortless way, and without any further conscious intent. It is worth pointing out that the automated nature of implementation intentions suggests a potential similarity to habitual control, as studied in the animal learning literature. However, in implementation intentions, the resilience to shifting motivational states is created not by overlearned associations, but rather by the prospective decision to avoid outcome revaluation.

The goal-setting and goal-striving phases can also be distinguished in terms of their differential “mindsets,” in that goal setting is associated with a deliberative mindset, whereas goal striving is associated with an implemental mindset (Gollwitzer, 2012). The deliberative mindset is characterized by general attentional broadening and a cognitive focus on desirability and feasibility information, whereas the implemental mindset is characterized by strengthened goal representations, upwardly biased assessments of feasibility, and more general attentional narrowing. One methodological approach used to investigate these mindsets and phases is to interrupt participants and have them engage in cognitive tasks while they are in the midst of deciding upon a goal to pursue (deliberative mindset), or immediately after they have chosen one (implemental mindset; Heckhausen & Gollwitzer, 1987).

Positive versus negative feedback

Feedback is thought to play a fundamental role in goal pursuit, by providing individuals with information on how to evaluate their commitment to goal striving, in terms of whether, what, and how much to invest in their goals (Fishbach, Koo, & Finkelstein, in press). An important distinction has been postulated between the motivational consequences of positive (completed actions, strengths, correct responses) and negative (remaining actions, weaknesses, and incorrect responses) feedback (Fishbach & Dhar, 2005; Fishbach, Dhar, & Zhang, 2006; Kluger & DeNisi, 1996). A key finding is that positive feedback increases motivation (and, thus, goal pursuit) when it is used to evaluate commitment, by signaling that the goal is of high value and attainable. In contrast, negative feedback increases motivation when it is used to evaluate progress, by signaling that more effort is needed to accomplish the goal (e.g., cybernetic models; Carver & Scheier, 1998; Higgins, 1987). Indeed, whereas positive feedback for successes can signal sufficient accomplishment and “license” the individual to disengage from the goal (Monin & Miller, 2001), negative feedback is motivating when people think of their goals in cybernetic terms (e.g., “closing a gap”).

In general, positive feedback should be more effective than negative feedback when goal commitment is lower, because positive feedback increases commitment. Negative feedback, in contrast, will be more effective than positive feedback when goal commitment is already high, because it signals greater discrepancy (i.e., a larger gap to be closed). A promising approach to investigate feedback effects has been to explore how they interact with goal commitment level to influence motivation. For example, Koo and Fishbach (2008) manipulated feedback by emphasizing either completed or missing goal actions (e.g., positive feedback [“you have completed 50% of the work to date”] vs. negative feedback [“you have 50% of the work left to do”]). When the goal commitment level was low, positive feedback on completed actions increased motivation more than negative feedback did. Conversely, when goal commitment was high, the reverse pattern was obtained (greater increase in motivation with negative feedback). It is interesting to note that this perspective on negative feedback as sometimes increasing motivation contrasts with the one typically adopted in the cognitive and neuroscience literatures, in which it is assumed that negative feedback will have an immediate impact in reducing reward value estimates.

Summary

As the above sections have detailed, the distinctions and dimensions investigated in studies of motivation vary greatly in terms of disciplinary focus. Some, including distinctions between phases of high-level goal pursuit (e.g., goal setting vs. goal striving), are studied almost exclusively from within one domain. Others, such as the approach/avoidance distinction, have been studied from multiple perspectives. Yet, even in such cases, important differences in emphases are present. For example, approach versus avoidance motivation is typically studied as a stable trait variable in the personality literature, but as a state manipulation in systems and cognitive neuroscience.

Many important challenges remain for cross-disciplinary integration in the study of motivation–cognition interactions. Challenges arise even at the level of defining our terms: Some concepts and phenomena do not currently extend across fields, and those that do sometimes have different usage or implications. Table 1 presents the differential representations and usage of key concepts across fields, including some examples of potential conflicts in usage. Our hope is that, as researchers become more aware of the motivational dimensions and distinctions that are emphasized in other subfields, they will be inspired to initiate further cognitive neuroscience and cross-disciplinary investigations and to bring these concepts into even closer alignment. The explorations into conscious versus unconscious (Pessiglione et al., 2007; Schmidt et al., 2008) and intrinsic versus extrinsic (Murayama et al., 2010) motivation that are beginning to occur from a cognitive neuroscience perspective offer promising examples of these efforts.

Table 1 Do we speak the same language? Disciplines of research on motivation have had substantially different foci and operationalizations, but frank conflicts in terminology and usage are relatively few

Mechanisms of motivation–cognition interactions

One of the challenges for cognitive, affective, and behavioral neuroscience research is to provide an account of motivation–cognition interaction in terms of the neural mechanisms that enable such interactions to occur. The key challenge is that although “motivation” and “cognition” are usefully specified as distinct psychological entities, it is not clear that they have separable implementations in the brain. Indeed, the neural systems implicated in the internal representation of cognitive goals, and the active maintenance and manipulation of information in working memory (e.g., frontoparietal and frontostriatal circuits), bear a striking similarity to those implicated in the generation of motivated behaviors. Thus, mechanistic accounts of motivation–cognition interactions run the risk of drawing a false dichotomy, if they are couched in terms of a discrete point of interface between two distinct neural systems (Pessoa, 2013).

Despite this caveat, several candidate neural mechanisms have been described that enable shifts in motivational state to be transmitted into a form that can modulate cognitive processing (see Fig. 1). These candidates fall into several broad classes: (1) broadcast neuromodulation, influencing cellular-level physiologic response properties; (2) communication between large-scale brain networks, via either direct pathways or shifts in network topology; and (3) the engagement of specific brain computational hubs that serve as integrative convergence zones. All of these mechanisms implicate some form of neuromodulatory transmission. Of the brain neuromodulatory systems, the one most closely linked to motivation is dopamine. We will therefore first consider the regulation of dopamine release and its effects on its targets as a useful model mechanism for the transmission of motivational signals. We will then move on to discuss network and circuit interactions. Finally, we will highlight specific computational hubs in the striatum, anterior cingulate cortex, and lateral PFC that are thought to play increasingly well-understood roles in motivated cognition.

Fig. 1

Diagram showing candidate neural mechanisms of motivation–cognition interaction. Left figures show broadcast neuromodulation of the dopamine system and anterior cingulate cortex, in medial view (upper), and of lateral prefrontal cortex (PFC) and striatum, in lateral view (lower). At right is the network mode of communication between a frontoparietal network and cortical and subcortical valuation networks. The right panel is from “Embedding Reward Signals Into Perception and Cognition,” by L. Pessoa and J. B. Engelmann, 2010, Frontiers in Neuroscience, 4, article 17, Fig. 3. Copyright 2010 by Pessoa and Engelmann. Adapted with permission

Broadcast neuromodulation: dopamine (and other systems)

Widespread projections enable neuromodulatory systems to reach large portions of the cortical surface and subcortical areas, from which they can rapidly influence neuronal activity. The broadcast release of global neuromodulators, such as dopamine and norepinephrine, is thus likely to have complex rather than monotonic effects, which nevertheless may have synergistic actions at multiple levels of functioning. Dopamine, in particular, is known to have a range of effects on cellular-level physiology, including modulating synaptic learning signals (Calabresi, Picconi, Tozzi, & Di Filippo, 2007; Lisman et al., 2011; J. N. Reynolds & Wickens, 2002), altering neuronal excitability (Henze, Gonzalez-Burgos, Urban, Lewis, & Barrionuevo, 2000; Nicola, Surmeier, & Malenka, 2000), enhancing the signal-to-noise ratio (Durstewitz & Seamans, 2008; Thurley, Senn, & Luscher, 2008), and impacting the temporal patterning of neural activity (Walters, Ruskin, Allers, & Bergstrom, 2000). Such effects in subcortical and cortical targets (e.g., frontal cortex) could alter processing efficiency in a number of ways, such as by sharpening cortical tuning (Gamo & Arnsten, 2011), heightening perceptual sensitivity and discrimination (Pleger et al., 2009), enhancing attentional or cognitive control and working memory function (Pessoa & Engelmann, 2010), and enhancing targeted long-term memory encoding (Shohamy & Adcock, 2010).
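To make the notion of signal-to-noise modulation concrete, the following toy sketch (our own illustration with arbitrary parameter values, not a model drawn from the studies cited above) shows how increasing the gain of a logistic unit that rests on the lower, convex portion of its activation function suppresses a weak background input relative to a strong task-relevant input, sharpening the output contrast in the way that gain-modulation accounts envision.

```python
import numpy as np

def unit(x, gain, bias=-1.0):
    """Logistic unit with a negative bias; gain scales the steepness of the
    nonlinearity (a crude stand-in for neuromodulatory gain changes)."""
    return 1.0 / (1.0 + np.exp(-gain * (x + bias)))

strong_input = 1.0   # task-relevant ("signal") input
weak_input = 0.3     # background ("noise") input

for gain in (1.0, 3.0):   # low vs. high (dopamine-like) gain
    contrast = unit(strong_input, gain) / unit(weak_input, gain)
    print(f"gain={gain}: output ratio (signal/noise) = {contrast:.1f}")
# Higher gain increases the output ratio (roughly 1.5 -> 4.6 here),
# illustrating one way gain modulation can sharpen signal relative to noise.
```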

The dynamic changes in neurophysiology that result from release of the neuromodulators implicated in motivation are evident not only cellularly, but also at the circuit level. As one example, functional MRI evidence has shown that reward- versus punishment-motivated learning reconfigures neural circuits, with marked consequences for the sensitivity of memory encoding systems (Adcock et al., 2006; Murty & Adcock, 2013; Murty et al., 2012). These reconfigurations are evident both in systems thought to primarily implement motivation, and in the broader networks devoted to the memory encoding task. For example, during intentional encoding, learning under reward incentives increases connectivity and activation in the ventral tegmental area (VTA) and hippocampus, whereas learning under threat engages the amygdala and parahippocampal cortex. These differences in the neural implementation of memory encoding translate into qualitatively different memory traces, because hippocampal encoding embeds items in context to support more flexible representations, whereas parahippocampal encoding selectively emphasizes features of the scene. These findings imply that motivated states can influence the content and form of long-term memory formation, potentially tailoring the memory trace to support future behaviors consistent with that same motivational state.

Network interactions: direct communication and topological reconfiguration

Interactions between motivation and cognition appear to rely on the communication between “task networks” (e.g., the dorsal frontoparietal network engaged during attention tasks) and “valuation networks,” which involve both subcortical regions, such as those in the striatum, and cortical ones, such as orbitofrontal cortex. These interactions are suggested to take place via multiple modes of communication. The first mode involves direct pathways between task and valuation networks. One example is the pathway between orbitofrontal and lateral PFC (Barbas & Pandya, 1989). Another example involves the pathways between the extensively interconnected lateral surface of frontal cortex (including dorsolateral PFC) and cingulate regions (Morecraft & Tanji, 2009). Finally, the caudate is connected with several regions of frontal cortex (including lateral sectors) and parietal cortex, in part via the thalamus (Alexander, DeLong, & Strick, 1986). Thus, direct pathways provide a substrate for cognitive–motivational interactions.

A second mode of communication that might enable motivational modulation of cognitive processing is through a reconfiguration of network topology and structure. Network analysis provides useful tools with which to quantitatively characterize topological relationships within and between brain networks. For example, in one recent study, Kinnison, Padmala, Choi, and Pessoa (2012) compared network properties and relationships in attentional and valuation networks during trials with low versus high reward value. It was found that on control trials the two networks were relatively segregated (modular) and locally efficient (high within-network functional connectivity), but on high-reward trials between-network connectivity increased, decreasing the decomposability of the two networks. This finding suggests that a primary consequence of changes in reward motivational value is to increase the coupling and integration between motivational and cognitive brain networks. Such reconfigurations of network topology could potentially arise from neuromodulatory influences, since similar changes have been identified as a consequence of noradrenergic response to stressors (Hermans et al., 2011) and dopamine precursor depletion (Carbonell et al., 2014).
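The type of metric at issue can be illustrated with a minimal sketch (ours, using arbitrary toy data rather than the actual analysis pipeline of Kinnison et al., 2012) that computes mean functional connectivity within and between two labeled networks; under this kind of summary, a reward-related increase in the between-network value relative to the within-network value indicates reduced modularity, that is, greater integration of the two networks.

```python
import numpy as np

def network_coupling(corr, labels):
    """Mean functional connectivity within and between two labeled networks.

    corr   : (n, n) symmetric matrix of pairwise correlations between regions
    labels : length-n sequence of network assignments (e.g., 'task', 'value')
    """
    labels = np.asarray(labels)
    iu = np.triu_indices_from(corr, k=1)           # unique region pairs
    same = labels[iu[0]] == labels[iu[1]]
    return corr[iu][same].mean(), corr[iu][~same].mean()

# Toy data: 4 "task" regions and 4 "valuation" regions with strong
# within-network and weak between-network coupling.
rng = np.random.default_rng(1)
labels = ['task'] * 4 + ['value'] * 4
base = np.full((8, 8), 0.1)
base[:4, :4] = base[4:, 4:] = 0.6
noise = rng.normal(0, 0.05, (8, 8))
corr = base + noise + noise.T                      # keep the matrix symmetric
np.fill_diagonal(corr, 1.0)

within, between = network_coupling(corr, labels)
print(f"within-network r = {within:.2f}, between-network r = {between:.2f}")
```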

Striatum: linking motivation to cognition and action

Work with behaving experimental animals has long highlighted the importance of the striatum as a nexus mediating between motivation, cognition, and action (Baldo & Kelley, 2007; Belin, Jonkman, Dickinson, Robbins, & Everitt, 2009; Mogenson, Jones, & Yim, 1980). The nucleus accumbens in particular has been suggested as a key node, which may translate dopaminergic incentive value signals into a source of behavioral energization, drive, and the psychological experience of wanting (K. C. Berridge, 2003). This is consistent with animal data suggesting that the nucleus accumbens processes both the hedonic and motivational components of reward, within distinct subregions (S. M. Reynolds & Berridge, 2008). Likewise, neuroanatomical data from nonhuman primates have revealed an arrangement of spiraling connections between the midbrain and the striatum that seems perfectly suited to subserve a dopamine-mediated mechanism directing information flow from ventromedial to dorsomedial to dorsolateral regions of the striatum (Haber, Fudge, & McFarland, 2000). In turn, a growing consensus holds that the reciprocal circuits between the striatum and frontal cortex function as a gating mechanism that prevents actions (and thoughts) from being released until the contextually and sequentially appropriate points in time (Mink, 1996; O’Reilly & Frank, 2006). Taken together, these accounts suggest that dopaminergic input to the striatum serves to mediate the interaction between motivation, cognition, and action.

Accumulating evidence from genetic and neuroimaging (fMRI and dopamine PET) work with human volunteers and patients supports this hypothesis. For example, a recent dopamine PET study revealed that individual differences in baseline dopamine synthesis capacity in the dorsomedial striatum of healthy young volunteers predicted the effects of reward motivation on Stroop-like task performance (E. Aarts et al., 2014). Moreover, genetic differences in a dopamine transporter polymorphism were found to modulate the effects of reward on fMRI activation of the dorsomedial striatum (caudate nucleus) during conditions of high cognitive control demand (task-switching; E. Aarts, van Holstein, & Cools, 2011). Likewise, it has been found that the ventral striatum exhibits common activation in tracking the effects of incentive value on both physical and mental effort exertion (Schmidt, Lebreton, Cléry-Melin, Daunizeau, & Pessiglione, 2012), and that its response to rewards is discounted as a function of the degree of effort exerted to obtain it (Botvinick, Huffstetler, & McGuire, 2009). These results further suggest that striatal dopamine might be a key mechanism in energizing both cognitive and motor behaviors on the basis of their current motivational value.

Anterior cingulate cortex (ACC): computing the expected value of control

Another key hub is a region of dorsomedial PFC that spans the presupplementary motor area and dorsal ACC. The single-cell electrophysiology literature has suggested that neurons in this region encode multiple aspects of reward, such as proximity to the reward within a behavioral sequence (Shidara & Richmond, 2002), the value of the ongoing task (Amiez, Joseph, & Procyk, 2006; Sallet et al., 2007), the temporal integration of reward history (Kennerley, Walton, Behrens, Buckley, & Rushworth, 2006), and the need to change response strategy (Shima & Tanji, 1998). A general consensus view is that the ACC and adjacent dorsomedial PFC serve an evaluative role in monitoring and adjusting levels of control (Botvinick, 2007; Holroyd & Coles, 2002; Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004; Rushworth & Behrens, 2008; Shackman et al., 2011), potentially in response to motivational variables (Kouneiher, Charron, & Koechlin, 2009).

These ideas were recently formalized in an integrative account that suggests that the ACC might serve as a critical interface between motivation and executive function, by computing the “expected value of control” (Shenhav, Botvinick, & Cohen, 2013). Here, the imposition of top-down control in cognitive information processing is understood as not only yielding potential rewards (e.g., through enablement of context-appropriate responses) but also as carrying intrinsic subjective costs (Inzlicht, Schmeichel, & Macrae, 2014; Kool, McGuire, Rosen, & Botvinick, 2010; Kool, McGuire, Wang, & Botvinick, 2013; Kurzban, Duckworth, Kable, & Myers, 2013; Westbrook, Kester, & Braver, 2013). The decision as to whether executive resources should be invoked, favoring controlled over automatic processing, is based on a cost–benefit analysis, weighing potential payoffs against their attendant costs (e.g., Kool & Botvinick, 2014). On the basis of a wide range of evidence, Shenhav, Botvinick, and Cohen (2013) proposed that the ACC might serve as a critical hub in the relevant cost–benefit calculations, serving to link cognitive control with incentives and other motivational variables.
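In simplified form (our paraphrase of the formalism in Shenhav et al., 2013), the expected value of a candidate control signal c in state s weighs the probability-weighted value of the outcomes that the signal makes attainable against the intrinsic cost of exerting it:

\[
\mathrm{EVC}(c, s) \;=\; \Big[\sum_{i} \Pr(\mathrm{outcome}_i \mid c, s)\,\mathrm{Value}(\mathrm{outcome}_i)\Big] \;-\; \mathrm{Cost}(c),
\]

with the ACC proposed to specify the identity and intensity of the control signal that maximizes this quantity.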

Lateral PFC: integrating motivation with cognitive goal representations

A wealth of findings in both animal and human neuroscience studies suggests that the lateral PFC might serve as a convergence zone in which motivational and cognitive variables are integrated. The integration of these signals reflects more than just additive contributions of cognitive demands and reward value, but actually enhances functional coding within PFC, such as by maximizing signal-to-noise ratio, enhancing discriminability of visuospatial signals, and increasing the amount of information transmitted by PFC neurons (Kobayashi, Lauwereyns, Koizumi, Sakagami, & Hikosaka, 2002; Leon & Shadlen, 1999; Pessoa, 2013; Watanabe, 1996; Watanabe, Hikosaka, Sakagami, & Shirakawa, 2002). The dual mechanisms of control (DMC) framework suggests a specific mechanism by which these motivational influences on lateral PFC activity might modulate cognitive processing (Braver, 2012; Braver & Burgess, 2007; Braver, Paxton, Locke, & Barch, 2009).

According to the DMC framework, cognitive control can be accomplished either via a transient, stimulus-triggered, reactive mode or a tonic and anticipatory (i.e., contextually triggered) proactive mode. Proactive control is the more effective mode, because it enables preconfiguration of the cognitive system for expected task demands. However, it is thought to be metabolically or computationally costly, because it depends upon the active representation and sustained maintenance of task goals in lateral PFC. Thus, it should be preferred under conditions involving reward maximization and/or contexts with high motivational value. Computationally, proactive control is thought to be achieved via dopaminergic inputs to lateral PFC, which enable both appropriate goal updating (via phasic dopamine signals) and stable maintenance (via tonic dopamine release) in accordance with current reward estimates (Braver & Cohen, 2000; O’Reilly, 2006). In contrast, the reactive mode, because it is transient and stimulus-triggered, may be less dopamine-dependent and may also involve a wider network of brain regions.
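The qualitative contrast can be conveyed with a toy sketch (our own caricature with arbitrary parameters, not the published DMC or gating simulations of Braver & Cohen, 2000, or O’Reilly, 2006): a phasic, cue-triggered gating event loads a goal representation, and a higher assumed tonic dopamine level stabilizes its maintenance, yielding a sustained (proactive) rather than decaying (reactive) goal trace at the time of the probe.

```python
import numpy as np

def goal_trace(tonic_da, n_steps=10, leak=0.5):
    """Toy goal-maintenance trace: the cue gates a goal into working memory
    (phasic update at t = 0); tonic dopamine is treated here as reducing leak
    (more stable maintenance). Purely illustrative."""
    goal = 0.0
    trace = []
    for t in range(n_steps):
        if t == 0:                                # phasic, cue-triggered updating
            goal = 1.0
        goal *= 1.0 - leak * (1.0 - tonic_da)     # tonic DA stabilizes the goal
        trace.append(goal)
    return np.array(trace)

proactive = goal_trace(tonic_da=0.9)   # rewarded context: sustained goal
reactive = goal_trace(tonic_da=0.2)    # unrewarded context: decayed goal
print("goal activity at probe (proactive):", round(proactive[-1], 2))
print("goal activity at probe (reactive): ", round(reactive[-1], 2))
```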

Several studies have shown that pairing task contexts or trials with high reward value shifts performance toward proactive control, as indicated both by behavioral performance indicators and PFC activity dynamics (Braver, 2012; Chiew & Braver, 2013; Jimura et al., 2010; H. S. Locke & Braver, 2008). Conversely, in nonrewarded contexts, lateral PFC activity has been found to reflect the subjective cost associated with exerting cognitive control (as estimated via both self-report and the tendency to avoid high control conditions; McGuire & Botvinick, 2010). Indeed, the robust findings of motivational influences on PFC activity and performance in tasks with high control demands suggest the possibility that proactive control shifts might be a primary mechanism by which the cognitive effects of motivation are mediated.

Summary

As the above sections indicate, a number of candidate neural mechanisms have been proposed to mediate motivation–cognition interactions (Fig. 1). These range from more global and system-wide mechanisms, such as broadcast neuromodulation and network-level interactions, to the more focal computational hubs. The neuromodulatory effects of dopamine may serve as a unifying mechanism underlying motivational influences on neurocognitive processing across a range of levels. Specifically, as we previously described, dopamine has effects at the cellular level that are consistent with a range of motivation–cognition interactions (changing cortical excitability, signal-to-noise ratio, synaptic plasticity, etc.). Likewise, dopamine serves as a major input and neuromodulator of activation in each of the regions that have been identified as likely convergence hubs for the integration of motivational and cognitive signals: striatum, anterior cingulate cortex, and lateral PFC. Finally, more recent work has suggested that changes in dopamine tone can produce substantial effects on network-level dynamics and topology (e.g., Carbonell et al., 2014). Thus, one important direction for future research will be to determine more rigorously whether these different levels of motivational neural mechanisms can indeed be unified in terms of dopamine neuromodulation.

Nevertheless, it is critical to acknowledge that though most of the neuromodulatory-focused motivational research has targeted dopamine effects, the dopamine system has well-known and strong interactions with other neuromodulatory systems, such as acetylcholine, norepinephrine, serotonin, and adenosine. Thus, these other neuromodulators will need to be properly considered in order to form a complete picture of motivated cognition (Daw, Kakade, & Dayan, 2002; McClure, Gilzenrat, & Cohen, 2006; Salamone et al., 2009; Sarter, Gehring, & Kozak, 2006). Likewise, although we have focused on the set of candidate hubs that have received the most attention in recent research, this set is clearly not exhaustive. Indeed, other potential motivation–cognition hubs have also been noted in the literature, such as the posterior cingulate cortex (Mohanty, Gitelman, Small, & Mesulam, 2008; Small et al., 2005) and anterior insula (Mizuhiki, Richmond, & Shidara, 2012).

Finally, it is clear that our understanding of the neural mechanisms of motivation–cognition interaction will require not only better integration between levels of analysis (neuromodulation, regionally localized effects, network-level interactions), but also the development of neurocomputational frameworks that can accommodate these effects and better link them to cognitive and behavioral functioning. Work in this area is just beginning, but one of the most promising directions may be to expand the reinforcement learning framework to incorporate motivational variables. For example, initial attempts have been put forward to demonstrate the computational mechanisms by which motivation might modulate reward prediction error signals (Zhang, Berridge, Tindell, Smith, & Aldridge, 2009), simple model-free Pavlovian learning (Dayan & Balleine, 2002), and generalized response vigor (Niv et al., 2007). At a higher level, some accounts have utilized hierarchical extensions of reinforcement learning to begin to explain how reward and motivational signals might also be used to prioritize, select, and maintain temporally extended goals and more abstract action plans (Botvinick, 2012; Holroyd & Yeung, 2012). Excitingly, these accounts have put forward initial sketches of the respective roles for the dopamine system, along with striatum, ACC, and lateral PFC in these processes. Thus, more work in this area is clearly needed. Indeed, one of the primary challenges will be to demonstrate whether such mechanisms and computational frameworks can be used to account for the various dimensions and distinctive components of motivational influence that were detailed in earlier sections.
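As one concrete point of contact, the scalar teaching signal at the heart of this framework is the temporal-difference prediction error; its average-reward variant, which underlies the vigor account of Niv et al. (2007), shows one natural place where motivational state can enter. The notation below is the generic textbook form rather than that of any particular model:

\[
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t) \quad \text{(discounted form)},
\qquad
\delta_t = r_t - \bar{R} + V(s_{t+1}) - V(s_t) \quad \text{(average-reward form)},
\]

where \(\bar{R}\) denotes the long-run average reward rate. Because \(\bar{R}\) acts as an opportunity cost of time, motivational states that raise it (e.g., hunger in a context rich with food rewards) favor faster, more vigorous responding, whereas state-dependent scaling of \(r_t\) captures changes in the value of specific outcomes.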

Pressing research questions

The previous sections highlighted some of the conceptual obstacles that challenge an integrative and cross-disciplinary investigation of motivation–cognition interactions, as well as some of the promising candidate neural mechanisms that are the focus of current research. In this section, we discuss what we see as some of the current experimental and methodological challenges. Specifically, we lay out a number of unresolved and puzzling issues that seem central to this domain, but which may represent potential “low-hanging fruit” that are ripe for investigation. Indeed, one of the goals of this section is to direct investigators toward these open questions, in the hopes of inspiring new research efforts targeted at them.

Can motivation be dissociated from related constructs?

A concern that is commonly raised in studies of motivation–cognition interactions is whether effects attributed to motivational factors may actually reflect another related, but potentially distinct construct. The most frequent candidates in this regard are affect, attention, arousal, and high-level decision-making strategies. This important and longstanding issue has seen increased experimental focus in recent years, but targeted efforts are still needed. Below, we describe work focused on each of these constructs in turn.

The potential distinction between affect and motivation has been most directly addressed in the animal neuroscience literature, in terms of the distinction between the hedonic impact versus incentive value of rewards and punishments. The work of Berridge represents a major theoretical influence in this regard, employing pharmacological and lesion manipulations to demonstrate that “wanting” can be dissociated from “liking” (K. C. Berridge et al., 2009). The key methodological approach here is to assess the hedonic impact of food rewards during consumption via orofacial response patterns, while using Pavlovian and instrumental appetitive behaviors to assess incentive effects. This has suggested that liking and wanting can be dissociated neurally in terms of anatomical substrates (e.g., distinct “hotspots” within the nucleus accumbens and ventral pallidum) and neurotransmitter modulation (GABA and dopamine) (Kringelbach & Berridge, 2009; S. M. Reynolds & Berridge, 2008).

Similarly, a more recent stream of research within human cognitive neuroscience has addressed the dissociability of positive affect and reward motivation (Chiew & Braver, 2011). Positive affect has been shown to have numerous influences on cognitive processing including enhanced creativity, broadened attentional focus, and greater cognitive flexibility (Carver, 2003; Easterbrook, 1959; Fredrickson & Branigan, 2005; Isen, Daubman, & Nowicki, 1987). Here, the critical question is whether these influences can be dissociated neurally and behaviorally from the potentially overlapping effects of reward motivation. Such overlap could occur because the receipt of motivating rewards has positive affective consequences, or because positive affect induces approach motivated behaviors. These types of overlap present considerable methodological challenges. One approach has been to operationalize reward motivation in terms of performance-contingent rewards, whereas positive affect is operationalized in terms of either randomly delivered rewards or incidental, positively valenced stimuli (Braem et al., 2013; Chiew & Braver, 2011; Dreisbach & Fischer, 2012). Another approach has been to induce affect that varies in motivational intensity, on the basis of the theoretical assumption that high motivational intensity, whether for positive or negative affect, produces attentional narrowing, whereas low motivational intensity induces attentional broadening (Harmon-Jones, Gable, & Price, 2013). Supportive evidence has been found with different kinds of stimuli used to induce high- versus low-intensity positive affect (e.g., desire: delicious desserts; amusement: humorous cats; Gable & Harmon-Jones, 2008).

The conceptual similarity between attention and motivation has also been frequently noted (Maunsell, 2004; Pessoa & Engelmann, 2010). The term attention is often used similarly to motivation, in describing how processing resources are allocated, how they can be captured by salient stimulus cues, and how they are influenced by behavioral goals and expectations. However, there are points of conceptual dissociation: Motivation is primarily related to the representation of incentive value and the energization of instrumental behaviors, whereas attention is primarily concerned with mechanisms of perceptual and response selection. A common methodological approach has been to orthogonally manipulate attentional and motivational factors within the same experimental design (Geier et al., 2010; Krebs et al., 2012). In terms of the neural mechanisms of attention and motivation, Pessoa and Engelmann detailed a number of possible different scenarios: (a) full independence, via distinct neural pathways; (b) mediation, in which at least part of motivational influence is mediated by changes in attentional processes and neural systems; and (c) integration, in which there is tight coupling between motivational and attentional brain systems, either in terms of convergence zones (hubs) or via network-wide interactions.

The relationship of motivation to arousal has been less well studied, particularly since arousal is a construct that is often underspecified experimentally. Nevertheless, arousal may imply the energization or invigoration of cognitive processing and behavior; this is also a central component of motivation. Traditionally, arousal has been identified with the locus coeruleus–norepinephrine (LC-NE) system (C. W. Berridge & Waterhouse, 2003), whereas motivational signaling has been conceptualized in terms of dopamine activity (Wise & Rompre, 1989). The relationships between these two neuromodulatory systems, and arousal and motivation more generally, have not been systematically investigated in cognitive neuroscience research. Methodologically, it seems possible to manipulate arousal independently of motivation (e.g., via pharmacological challenge, physical exertion, sleep–wake cycle, stress, etc.), which should enable a targeted examination of the relationship between the two constructs.

A final issue concerns the role of motivation versus high-level decision-making strategies in modulating task performance. This concern relates to the fact that manipulations of performance incentives have been a staple of cognitive research for decades, and have been traditionally used to modulate high-level cognitive strategies (e.g., response bias in signal detection experiments; Green & Swets, 1966). Yet such work is usually not construed in terms of motivation, but rather in decision theoretic terms related to strategic performance optimization. Thus, it has been questioned whether it is necessary or even relevant to appeal to volitional and motivational factors when describing such effects. A variety of methodological approaches can be used to address this issue, such as using symbolic versus real incentives (Hübner & Schlösser, 2010; Krug & Braver, in press), identifying idiosyncratic effects of subjective reward preference (O’Doherty, Buchanan, Seymour, & Dolan, 2006), exploiting stable individual differences related to reward and punishment sensitivity (Engelmann, Damaraju, Padmala, & Pessoa, 2009; Jimura et al., 2010), leveraging differential developmental trajectories of deliberative versus affective–motivational processes (Somerville, Hare, & Casey, 2011), and examining implicit or subliminal rather than explicit incentive cues (Bijleveld et al., 2009; Pessiglione et al., 2007). All of these approaches tend to support the attribution of incentive effects on behavior and brain activity to motivational, rather than strategic factors.
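For reference, the decision-theoretic treatment that this concern invokes is fully explicit in classical signal detection theory: an observer maximizing expected payoff should set a likelihood-ratio criterion determined solely by prior probabilities and the payoff matrix, with no appeal to motivational state. In one standard textbook form,

\[
\beta_{\mathrm{opt}} \;=\; \frac{P(\mathrm{noise})}{P(\mathrm{signal})}\cdot
\frac{V_{\mathrm{correct\ rejection}} + C_{\mathrm{false\ alarm}}}{V_{\mathrm{hit}} + C_{\mathrm{miss}}},
\]

where the V terms are the values of correct responses and the C terms are the (positive) costs of errors. The open question raised above is whether observed incentive effects on performance and brain activity reflect anything beyond such strategic criterion shifts.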

Why does motivation sometimes impair cognitive performance?

In folk psychological terms, being motivated implies being goal-driven. Accordingly, motivation is commonly assumed to have only beneficial and monotonic influences on goal pursuit. In line with this intuition, reward motivation often produces a general enhancing effect on cognition. However, motivation does not always improve and may in fact impair task performance in a variety of conditions (Bonner, Hastie, Sprinkle, & Young, 2000; Bonner & Sprinkle, 2002; Camerer & Hogarth, 1999). For example, the term “choking under pressure” has been coined to describe instances in which cognitive performance falters when motivational salience is high (Baumeister & Showers, 1986; Beilock, 2010; Callan & Schweighofer, 2008; Mobbs et al., 2009). The affective, motivational, and cognitive factors that elicit this phenomenon are still not well understood.

One account of choking phenomena is that they stem from increased and distracting anxiety (Callan & Schweighofer, 2008), occurring especially in high-stakes situations (e.g., evaluative tests). Both state and trait anxiety effects have been implicated in processes of overarousal (i.e., inverted-U-shaped curve effects; Yerkes & Dodson, 1908) or diversion of attention and working memory toward the source of anxiety (e.g., threat monitoring; Eysenck, Derakshan, Santos, & Calvo, 2007). It is still not clear how to predict the motivational or cognitive factors that will elicit these anxiety-type effects. However, recent work using skin conductance as a marker of physiological arousal has found evidence of processes consistent with a noradrenergic contribution to such paradoxical incentive effects (Murty, LaBar, Hamilton, & Adcock, 2011).

A second account suggests that motivation can produce impairing effects directly, even without elicitation of anxiety or overarousal, simply by heightened activation in motivational brain circuits (Mobbs et al., 2009; Padmala & Pessoa, 2010), and possibly supraoptimal levels of dopamine (E. Aarts et al., 2014). One version holds that high motivation shifts the balance of influence toward an impulsive limbic reward system myopically focused on immediate rewards, and away from a more prospectively oriented prefrontal cortical system oriented toward maximizing long-run gains (Loewenstein, Rick, & Cohen, 2008; S. M. McClure, Laibson, Loewenstein, & Cohen, 2004). Another version focuses more directly on interactions between striatal and cortical dopaminergic systems, and argues, in particular, that dopamine has contrasting effects on cognitive control depending on the current task demands, associated neural systems, and baseline levels of dopamine in these neural systems (Cools & D’Esposito, 2011; Cools & Robbins, 2004). Accordingly, incentive motivation should enhance processes associated with cognitive flexibility (e.g., task switching) via striatal dopamine effects, but can also, as a consequence, produce impairments associated with increased distractibility and reduced cognitive focus (E. Aarts et al., 2011). However, the fit of this account to experimental findings is somewhat mixed, indicating that further theoretical and experimental work will be needed to provide a more comprehensive understanding of motivational impairment effects.

For example, a related, but distinct account is that of regulatory fit (Maddox & Markman, 2010), which suggests that motivational effects on performance depend upon the interaction of three factors: (a) whether approach or avoidance motivation is activated (promotion or prevention focus); (b) the incentive structure of the task (gains or loss related); and (c) the cognitive processes that are required to optimize task performance. Specifically, under conditions in which the current regulatory focus matches the task incentive structure (i.e., promotion focus with gain incentives, or prevention focus with loss incentives), processes associated with cognitive flexibility should be enhanced. In contrast, if there is a regulatory mismatch, task performance can be impaired, particularly when successful performance demands high cognitive flexibility. In one supportive study testing this account, choking effects were observed when participants were put under high performance pressure (prevention focus) with a gain incentive structure (i.e., a regulatory mismatch), but only when the classification learning task relied upon the flexible application of categorization rules (Worthy, Markman, & Maddox, 2009).

How does motivation modulate cognitive effort?

As we described previously, a primary account of motivation–cognition interactions is that motivation influences not only performance in cognitively effortful activities, but also the willingness to engage in them in the first place. Indeed, some accounts suggest that enhanced cognitive performance may actually result from the selection of more effortful strategies, assuming that more effortful cognitive strategies are more effective (e.g., proactive control; Braver, 2012). A role for motivation in the selection of effortful strategies is often neglected, since strategy selection is typically considered in strict decision-theoretic terms of performance optimization. And yet, recent work has confirmed that participants will avoid cognitively effortful tasks, all else being equal (Botvinick, 2007; Kool et al., 2010). Thus, selection of effortful cognitive strategies should depend on cost–benefit considerations, weighing the incentive benefits of increased performance against the apparent cost of effort.

Several state and trait factors may influence the subjective cost of cognitive effort. In the personality literature, it is well established that individuals show stable, trait-like differences in their “need for cognition,” which refers to preferences for effortful cognitive activities (Cacioppo, Petty, Feinstein, & Jarvis, 1996). More recently, experimental paradigms have been developed that enable direct assessment of avoidance rates for cognitive tasks (Botvinick, 2007; Kool et al., 2010). Related paradigms directly estimate the subjective value of cognitive effort in terms of an economic decision (Westbrook et al., 2013): what additional amount of monetary reward will an individual trade away to avoid a high working memory load task in favor of a matched task with lower load? Individuals high in need for cognition were found to trade away less reward than those low in need for cognition. State factors also moderate these effects, which became stronger when working memory loads increased but were proportionally smaller when incentive magnitude increased. Such results are consistent with the idea that motivational incentives can influence willingness to expend cognitive effort, but there is as yet no direct evidence that these effects mediate strategy selection within a particular cognitive task.
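The logic of such effort-discounting procedures can be sketched as follows (a hypothetical, simplified titration with made-up offer amounts, not the exact procedure of Westbrook et al., 2013): the offer attached to an easier task is adjusted until the participant is indifferent between it and a fixed offer attached to a harder task, and the gap between the two offers indexes the subjective cost of the added load.

```python
def indifference_point(accepts_high_load, base_offer=2.0, low=0.0, high=2.0,
                       n_trials=6):
    """Titrate the offer for an easy task against a fixed offer for a hard
    task to estimate the subjective cost of the added cognitive effort.

    accepts_high_load : callable taking (easy_offer, hard_offer) and returning
                        True if the participant chooses the harder task.
    """
    for _ in range(n_trials):
        easy_offer = (low + high) / 2.0
        if accepts_high_load(easy_offer, base_offer):
            low = easy_offer      # hard task still chosen: raise the easy offer
        else:
            high = easy_offer     # easy task chosen: lower the easy offer
    return (low + high) / 2.0

# Hypothetical participant who demands a $0.50 premium to take the hard task.
simulated_choice = lambda easy, hard: (hard - easy) > 0.50
print(f"indifference offer = ${indifference_point(simulated_choice):.2f}")
# An indifference offer near $1.50 (vs. the fixed $2.00 hard-task offer)
# implies the added load carries a subjective cost of roughly $0.50.
```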

Motivational value could interact with cognitive effort by means of a number of possible mechanisms. First, motivation might modulate the computation and estimation of effort costs. For example, motivational incentives have been shown to affect the rate of accumulation of a physical effort cost signal, arising in the anterior insula, which predicts decisions about when to rest (Meyniel, Sergent, Rigoux, Daunizeau, & Pessiglione, 2013). Another proposal postulates that effort cost computations and effort-reward functions are directly mediated by dopaminergic mechanisms (Phillips, Walton, & Jhou, 2007). There is support for this idea from the animal literature, but only for physical effort (Breton, Mullett, Conover, & Shizgal, 2013; Salamone & Correa, 2012).
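The accumulation idea can be caricatured as in the toy sketch below (our own illustration with arbitrary rates; Meyniel et al., 2013, fit a far more constrained model to behavioral and neural data): an effort cost signal builds while working and dissipates while resting, with upper and lower bounds triggering the switches between the two modes, and we simply assume, for illustration, that incentives slow the build-up rate.

```python
import numpy as np

def work_rest_cycles(build_rate, decay_rate=0.15, upper=1.0, lower=0.2,
                     total_time=60.0, dt=0.1):
    """Toy accumulate-to-bound dynamics for an effort cost signal: cost rises
    at build_rate while working and decays while resting; the upper bound
    triggers rest and the lower bound triggers a return to work."""
    cost, working, work_time = 0.0, True, 0.0
    for _ in np.arange(0.0, total_time, dt):
        cost += (build_rate if working else -decay_rate) * dt
        if working and cost >= upper:
            working = False
        elif not working and cost <= lower:
            working = True
        work_time += dt if working else 0.0
    return work_time / total_time

# Illustration: a (hypothetical) incentive-related slowing of cost build-up
# yields a larger proportion of time spent working before resting.
print("low incentive :", round(work_rest_cycles(build_rate=0.10), 2))
print("high incentive:", round(work_rest_cycles(build_rate=0.05), 2))
```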

A related possibility is that reward motivation might decrease effort costs. This decreased effort cost could occur directly, via dopaminergic broadcast effects. As we described above, these could increase the fluency of cognitive processing via a variety of mechanisms (e.g., enhanced signal-to-noise ratio, sharpened cortical tuning, altered neuronal excitability, heightened perceptual sensitivity). Motivation could also decrease effort costs indirectly, by increasing cognitive control, and thus the ability to successfully meet increased effort demands. Such an account would be consistent with proposed mechanisms of motivation–cognition interactions that postulate effects on how and when cognitive control is allocated (e.g., proactive control, expected value of control). This type of account also aligns with the influential ego depletion literature in social psychology (Baumeister, Vohs, & Tice, 2007), which assumes that exertion of control depletes a limited resource (but see Inzlicht et al., 2014; Kurzban et al., 2013), and that motivation compensates for depletion by decreasing people’s tendency to conserve willpower (Muraven & Slessareva, 2003). Likewise, as we discuss further below, it is also consistent with the finding that people’s beliefs about the reward value of cognitive effort have a strong influence on their willingness to engage in it (Blackwell, Trzesniewski, & Dweck, 2007; Dweck, 2012).

A final possibility is that the motivational effects on effortful cognitive engagement occur through a primarily affective route. For example, it is intuitive to think that increasing incentive motivation changes the affective valence of cognitive effort from primarily aversive to primarily rewarding. Indeed, accounts of this flavor have been put forward in the animal learning literature to explain the effects of reinforcing high-effort behaviors in the development of “work ethic” (Clement, Feltus, Kaiser, & Zentall, 2000) or “learned industriousness” (Eisenberger, 1992). A similar type of interpretation is present in accounts from the social-personality literature that assume bidirectional affective–motivational interactions, such that making a cognitive goal a desired outcome increases the positive affect associated with it, and vice versa (H. Aarts, Custers, & Veltkamp, 2008). Such effects would be particularly relevant for studies investigating how to enhance cognitive engagement in relatively hypodopaminergic populations (e.g., healthy aging) and clinical syndromes (e.g., anergia, anhedonia).

How do motivation–cognition interactions change across the lifespan?

Development

A primary goal of neurodevelopmental research is to specify the biological mechanisms that dynamically influence behavior from childhood to adulthood. The adolescent period is especially interesting, in that some aspects of the brain have reached adult-level structure and connectivity, whereas others, including the prefrontal cortex, show developmentally lagged trajectories, not reaching adult volume and connectivity until the late twenties. Lagged development of the prefrontal cortex has been implicated in the still-maturing capacity for adolescents to instantiate impulse control and other forms of self-regulation (Casey, Galvan, & Hare, 2005; Rubia et al., 2006). In contrast, critical components of dopaminergic neurocircuitry, including the ventral striatum and orbitofrontal cortex, are functionally sensitized during adolescence (Andersen, Dumont, & Teicher, 1997; Brenhouse, Sonntag, & Andersen, 2008). Likewise, fMRI studies have demonstrated that the adolescent striatum shows a greater magnitude of response to reward cues relative to both children and adults (Galvan et al., 2006; Somerville et al., 2011) and shows exaggerated prediction error learning signals (Cohen et al., 2010).

As such, the adolescent brain is thought to be in a unique state of heightened incentive salience signaling, paired with an underdeveloped capacity for impulse control (Somerville & Casey, 2010; Steinberg, 2010b). This combination is thought to represent a developmentally normative “imbalance” that could lead to a heightened influence of motivational cues on adolescents’ behavior and decisions. Indeed, studies probing dynamic interactions within striatocortical circuitry have demonstrated adolescent-specific patterns of neural reactivity and heightened functional connectivity that parallel a reduced capacity to withhold behavioral responses to appetitive cues (Somerville et al., 2011). Evolutionarily inspired accounts argue that the adolescent brain might exist in such a state of bias in order to facilitate exploratory behavior, such as leaving safety in search of mates and resources (Spear, 2000).

Despite initial support for this framework, numerous fundamental questions remain. Although a growing number of studies have measured incentive salience responding or cognitive control across development, only a few have manipulated both processes within the same experimental design. Thus, our understanding of how dynamic striatocortical interactions and connectivity might shape selective shifts in adolescent cognitive behavior is still poor. In addition, it is unclear how particular contextual factors that influence adolescent motivated and risky behavior in the real world (such as the presence of peers or affectively arousing contexts) dynamically modulate striatocortical interactions and ultimately, motivated behavior during this complex phase of the lifespan.

Aging

Much cognitive-aging research has focused on identifying the nature of age-related change in specific cognitive processes, as well as understanding the underlying neural mechanisms. Although cognitive and neurobiological factors such as processing speed, working memory, or gray matter volume may be predictive, they clearly do not explain all of the age-related variance in performance (for a review, see Allaire, 2012). Motivation-based accounts are also being increasingly emphasized as relevant for determining age differences in cognitive performance.

Age-related motivational influences may be evident in response to changes in the costs of engaging in cognitive activity. Hess and colleagues (Hess, in press; Hess & Emery, 2012) have argued that such costs increase in later life, and may negatively impact the motivation to engage cognitive resources in support of performance. The resultant shifts in the costs relative to the benefits of engaging in particular activities are hypothesized to result in both reduced overall levels of participation in cognitively demanding activities, and in increased salience of the self-relevance of the task in determining engagement. Some support for this selective-engagement account has been observed experimentally in terms of self-report, physiological, and behavioral indicators regarding the costs of cognitive activity (e.g., Ennis, Hess, & Smith, 2013; Westbrook et al., 2013), as well as with self-reported shifts from a more extrinsic to a more intrinsic motivational focus in later life (Hess, Emery, & Neupert, 2012). These findings have led to the interesting suggestion that some of the age-related variance observed on such tasks may reflect motivational influences, and that the observed age effects may overestimate age differences in underlying ability. However, it will be important for further research to be able to disentangle and quantitatively estimate the distinct contributions of motivational and cognitive performance effects on age differences in behavioral performance.

As we described above, motivational accounts have also been put forward to describe the positivity effect in cognitive aging. One tool for exploring whether such effects reflect a shift in motivational goals is to use eyetracking, so as to provide real-time measures of visual attention. These have resulted in fairly clear evidence that older adults look less at negative and more at positive stimuli than do their younger counterparts (see, e.g., Isaacowitz, Wadlinger, Goren, & Wilson, 2006). These age differences are magnified when participants come to the task in a bad mood (Isaacowitz, Toner, Goren, & Wilson, 2008). But do these effects reflect motivation? Positive looking behaviors could conceivably arise for motivational reasons (i.e., due to age-related prioritization of emotional goals). However, direct evidence for a motivational explanation of these findings at this point is lacking. It may be that these effects result from age-related changes in goals, but that remains to be tested empirically. Thus, it remains an open question whether age differences in looking and looking–feeling links really arise from age differences in motivation, and if they do, what specific configurations of goals lead to these patterns. To determine this, studies will be needed that directly assess goals and track individual differences in goal states through looking patterns and mood changes, as well as studies that manipulate goals and put them in competition to determine effects on looking and mood across different age groups.

In general, theories of cognitive aging are strongly based in descriptions of neurobiological change, whereas none of the current motivational theories of aging integrate neurobiology. One account interprets the age-related positivity effect described above in terms of a potential retuning of amygdala sensitivity from a negative emotional bias in young adulthood toward a relatively more positive emotional bias in older age (Mather et al., 2004) as argued by the “aging-brain” hypothesis (Cacioppo, Berntson, Bechara, Tranel, & Hawkley, 2011; for an opposing view, see Nashiro, Sakaki, & Mather, 2012). Similarly, there is evidence for intact reward motivation and enhancement of positive anticipation relative to negative anticipation in older adults’ self-reported emotional ratings and neural activation in the striatum and anterior insula (Samanez-Larkin et al., 2007). In contrast, a large literature suggests that many of the brain systems implicated in motivational enhancement of cognition decline structurally and functionally with age. For example, studies have shown relatively linear decline in D1-like and D2-like dopamine receptors and dopamine transporters across adulthood (and mixed evidence for age differences in synthesis capacity; Backman, Nyberg, Lindenberger, Li, & Farde, 2006). Some have argued that differential age-related decline of specific neural systems may account for the divergent trajectories of motivational and cognitive functions (e.g., MacPherson, Phillips, & Della Sala, 2002), but there is much debate about these theories, and they are not well supported by larger, cross-sectional and longitudinal studies of brain aging (Driscoll et al., 2009; Raz, Ghisletta, Rodrigue, Kennedy, & Lindenberger, 2010; Walhovd et al., 2011). All of this is further complicated by a wave of seemingly contradictory findings on age differences in sensitivity to positive and negative information in reward-based tasks (e.g., Eppinger, Schuck, Nystrom, & Cohen, 2013; Frank & Kong, 2008; Samanez-Larkin et al., 2007). Thus, testable neurobiologically based models of age differences in motivation and cognition urgently need to be developed.

How do motivational incentives get translated into goals?

Traditional theories (e.g., Ajzen, 1991) assume that high perceived feasibility and desirability of an imagined future outcome will always result in a strong intention (i.e., a goal) to reach this outcome. Under such conditions, the desired outcome (or incentive) is likely to transform into a goal. Extensive research has revealed, however, that even when the perceived feasibility of an attractive future outcome (i.e., a positive incentive) is high, people do not always commit to striving for it (e.g., imagine the highly attractive and feasible future outcome of becoming a skilled piano player). Thus, a key question remains regarding what factors are critical to ensuring that a highly motivating outcome translates into a change in cognitive goals.

Social psychological research has suggested important roles for both mental contrasting and mindset theory in the translation of an incentivized outcome into a goal commitment, even given high feasibility. Mental contrasting is a process of simulating both the desired future outcome as well as potential obstacles. This process is thought to activate expectations of overcoming the obstacles: If expectations are high, people will actively pursue (commit to and strive for) reaching the desired future, but if they are low, then people will refrain from goal pursuit, either reducing efforts or curbing them altogether (Oettingen, Pak, & Schnetter, 2001).

According to mindset theory (Gollwitzer, 1990, 2012; also known as the Rubicon model), goal setting is the process of transition from a predecisional deliberative phase into the postdecisional implementation phase. In the predecisional phase, the desirability and feasibility of a wish need to be fully deliberated before the person can move from indecisiveness to decisiveness. Accordingly, when people feel that they have deliberated enough, they feel justified in moving (i.e., “crossing the Rubicon”) into implementation. Indeed, Gollwitzer, Heckhausen, and Ratajczak (1990) observed that as-yet-undecided people were more likely to make a decision after they had been asked to list likely positive and negative, short-term and long-term consequences of goal attainment.

Although these accounts of goal setting may apply well to the types of abstract, higher-order, and temporally extended outcomes that are typically studied in social and personality psychology, it is not at all clear that they apply equally well to the types of goals, motivational incentives, and behaviors that are the focus of standard cognitive neuroscience studies. Thus, more work will be needed to understand whether the concepts of mental contrasting and mindsets can be “translated” into more basic experimental domains. It likewise remains unknown what cognitive and neural mechanisms underlie the component processes of mental contrasting and goal setting.

How do beliefs impact motivations?

An important, but often overlooked, area of motivation involves the study of beliefs and their impact. Recent research has shown that people’s beliefs (e.g., about the fixedness or malleability of personal attributes) predict their school achievement, the success of their relationships, the hardiness of their willpower, and their willingness to compromise for peace in the face of conflict (see Dweck, 2012). These beliefs do so by changing the goals that people are motivated to pursue and the ways that they pursue them. Moreover, the same lines of research show that changing people’s beliefs can change these goals and outcomes. Beliefs can change the meaning of the seemingly same experience, determining whether an individual will view challenges as threats or opportunities (e.g., Tomaka, Blascovich, Kibler, & Ernst, 1997), or setbacks as indicating a lack of ability or signaling that a change in effort or strategy is called for (e.g., Blackwell et al., 2007; Walton & Cohen, 2007). Beliefs can change the meaning of effort, from something unpleasant that makes people feel less competent, to something positive that signals learning (Blackwell et al., 2007). These different meanings have profoundly different motivational consequences.

Yet research is only just beginning to uncover the potential cognitive and neural mechanisms by which beliefs impact motivation. For example, in one study, individuals who differed in their beliefs about intelligence showed distinct patterns of behavioral and neural responses to errors in a demanding cognitive task. Specifically, individuals possessing a “growth mindset” (i.e., that intelligence is malleable and can be developed) showed higher accuracy after making an error, and this effect was mediated by a posterror event-related potential component (termed the Pe) thought to reflect error awareness and attentional allocation (Steinhauser & Yeung, 2010). Thus, it is possible to interpret these results as suggesting that beliefs about intelligence alter (a) how task errors are interpreted by the brain and (b) their motivational impact on subsequent performance. Given that such research is only in its infancy, much additional work will be needed to understand the neural mechanisms of belief development and change, and how such processes alter the landscape of motivation–cognition interactions.

Should other motivational constructs receive neuroscience investigation?

In addition to the topics discussed in this article, many motivational constructs and phenomena have been proposed and/or examined in psychology yet have still received little attention in neuroscience (Reeve & Lee, 2012). In fact, psychologists have proposed a number of motivational constructs to explain human behavior (some of which are already discussed in this article), such as intrinsic motivation (Deci & Ryan, 1985), need for achievement (McClelland, Atkinson, Clark, & Lowell, 1976), need to belong (Baumeister & Leary, 1995), self-efficacy (Bandura, 1977), achievement goals (Dweck, 1986), self-enhancement motives (Sedikides & Strube, 1997), and self-consistency motive (Aronson, 1968), just to name a few. These topics may be an important avenue for future research in neuroscience. Yet, they also present an important challenge: Can an integrative account be developed that incorporates the myriad of motivational constructs proposed in psychology into the theoretical frameworks used in neuroscience and/or computational models?

This is a critical question for understanding the complicated nature of motivation. For example, there is a long-standing tradition in psychology of distinguishing intrinsic from extrinsic motivation (Deci & Ryan, 1985), and most research in psychology has rested on the assumption that these motivations are distinct, qualitatively different entities. Viewed from the reinforcement learning framework, however, extrinsic motivation and intrinsic motivation may arise from a common reward-processing mechanism that produces motivated behavior, with extrinsic motivation being focused on immediate, tangible reward, and intrinsic motivation being focused on invisible, future reward (Singh et al., 2010; see also Daw, O’Doherty, Dayan, Seymour, & Dolan, 2006). Likewise, as we described above, intrinsic and extrinsic motivations seem to activate common striatal reward areas (Murayama et al., 2010), suggesting a common neural basis. As another example, a large literature in social psychology has posited that a host of human social behaviors can be interpreted in terms of a fundamental “cognitive consistency motive” (or a “dissonance reduction motive”): a drive to reduce psychologically dissonant cognitions by modifying them to be consistent (Abelson, 1968; Aronson, 1968; Festinger, 1957). However, many cognitive dissonance phenomena have been successfully simulated in a computational model in which dissonance reduction occurs as an emergent product of much simpler cognitive phenomena (i.e., low-level constraint satisfaction mechanisms; Shultz & Lepper, 1998). Together, these examples suggest that neuroscience and computationally based theories may be able to provide accounts of complex motivational phenomena in terms of simpler and potentially more unifying mechanisms.
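The flavor of such constraint-satisfaction accounts can be conveyed with a minimal Hopfield-style sketch (ours, not the actual model of Shultz & Lepper, 1998): cognitions are units, consistency relations are symmetric weights, and iteratively updating each unit toward agreement with its weighted neighbors yields a more internally consistent belief state without any explicit dissonance-reduction motive.

```python
import numpy as np

# Three cognitions: "I chose the dull task", "the task is enjoyable",
# "I was well paid". Positive weights encode consistency between cognitions,
# negative weights encode inconsistency (all values are arbitrary).
W = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0]])
external = np.array([1.0, 0.0, -1.0])   # evidence: the choice was made,
                                        # and the payment was small
a = np.array([1.0, -0.5, -1.0])         # initial (dissonant) belief state

def energy(a, W):
    """Hopfield-style inconsistency measure (lower = more consistent)."""
    return -0.5 * a @ W @ a

for _ in range(20):                     # settle toward a consistent state
    a = np.tanh(W @ a + external)

print("final beliefs:", np.round(a, 2), "energy:", round(energy(a, W), 2))
# The "task is enjoyable" unit settles at a positive value: an attitude shift
# that emerges from low-level constraint satisfaction rather than a motive.
```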

Motivation is invisible. Yet people are extremely adept at invoking motivational concepts to interpret behavioral patterns. When we see a person acting in an unusual way, we cannot help thinking “why does he or she do that?” Even infants have a basic inclination to infer others’ intentions or motives (Woodward, 1998). Many studies have shown that people are good at giving post-hoc motivational explanations for behavior that was actually induced unconsciously by extraneous factors (Nisbett & Wilson, 1977). This inborn tendency to attribute motivation to action may have contributed to the current myriad of definitions, hypotheses, and constructs used to describe motivation (as we have discussed in this article). With advances in neuroscientific and computational approaches to motivation, the time may now be ripe to integrate these divergent views on motivation in a coherent, parsimonious way, instead of using motivation as a convenient “catch-all” to explain (or explain away) complex aspects of human behavior.

General conclusions

As we suggested at the outset of this article, it is indeed an exciting time for the study of motivation–cognition interactions. Although studies of motivation have been an active focus within psychology and neuroscience for decades, there has clearly also been a recent rejuvenation of interest. This rejuvenation is due, at least in part, to the growing body of exciting new findings occurring across a range of areas, including dissociations between goal-directed versus habitual motivational control, subliminal priming of goal pursuit, ego depletion, and related influences on the engagement of cognitive effort, age-related positivity biases, and adolescent oversensitivity to incentive motivation. Likewise, emerging insights into the mechanisms of motivation have been prompted by new evidence that motivation influences cognition in areas where it had previously been thought irrelevant—for example, in long-term memory formation. As we have reviewed above, and as is detailed in this special issue, some of these findings are having a strong impact on, and are being impacted by, current cognitive neuroscience research.

Yet for all the rejuvenation, excitement, and new findings, many challenges remain. We argue that the most critical and formidable challenge is that, with few exceptions, research on motivation–cognition interactions has been somewhat balkanized. Each of the different subfields tends to work largely in isolation, with the questions being pursued and methods being utilized showing little influence from, and awareness of, the parallel work going on in other areas. This balkanization has an impact even at the conceptual level, in terms of the definitions and dimensions that are used to taxonomize the domain and specify the relevant theoretical issues to be investigated.

Nevertheless, we believe that the time is ripe to move toward greater cross-disciplinary interaction and integration. A large number of pressing research questions are only just beginning to be addressed by current studies. We believe that the field is now poised to make rapid progress on these and related questions, but that such progress will critically depend on the adoption of an integrative, collaborative approach. Indeed, an explicit goal of this article, and of the special issue, is to encourage researchers toward such an approach, by highlighting not only the challenges, but also the opportunities, that come about from greater awareness of the breadth of motivation–cognition work occurring throughout psychology and neuroscience. Our hope is that the forging of new cross-disciplinary approaches and collaborations, hopefully inspired by this special issue, will lead us toward a more unified and comprehensive account of the mechanisms of motivation–cognition interaction.