Dynamics of response-conflict monitoring and individual differences in response control and behavioral control: An electrophysiological investigation using a stop-signal task

https://doi.org/10.1016/j.clinph.2006.10.023

Abstract

Objectives

The aim of the present study was to investigate the functional significance of the error(-related) negativity (Ne/ERN) and individual differences in human action monitoring. A response-conflict model of the Ne/ERN was tested using a stop-signal paradigm. After a few modifications of the Ne/ERN response-conflict theory (Yeung N, Botvinick MM, Cohen JD. The neural basis of error detection: conflict monitoring and the error-related negativity. Psychological Review 2004;111(4):931–959), the strength and time course of response conflict could be modeled as a function of stop-signal delay.

Method

In Experiment 1, 35 participants performed a visual two-choice response-time task but tried to withhold the response if an auditory stop signal was presented. Probability of stopping errors was held at 50% using variable delays between visual and auditory stimuli. Experiment 2 (n = 10) employed both auditory go and stop signals and confirmed that Ne/ERN effects are due to conflict induced by the auditory stop signal, and not the mere presence or absence of an additional stimulus.

Results

As predicted, amplitudes of both the stimulus-locked and the response-locked Ne/ERN were largest for non-stopped responses, followed by successfully stopped and go responses. However, independently of response type, the Ne/ERN also increased with increasing stop-signal delay. Since a longer delay invokes stronger response conflict, the results specifically support the notion that the Ne/ERN reflects response-conflict monitoring. Furthermore, individual differences related to measures of response control and behavioral control were observed. Both low response control, estimated from stop-task performance, and high psychometric impulsivity were accompanied by smaller Ne/ERN amplitudes on stop trials, suggesting reduced response-conflict monitoring.

Conclusions

The present study supported the response-conflict view of Ne/ERN. Furthermore, the observed relationship between impulsivity and Ne/ERN amplitude suggested that individuals with low behavioral control were characterized by lower activity in anterior cingulate cortex, the neural generator of Ne/ERN, in situations of strong response conflict.

Significance

The present study, for the first time, employed a stop-signal paradigm to verify predictions regarding the temporal dynamics of response-conflict processing as derived from response-conflict theory of ERN.

Introduction

In everyday life, action monitoring is an important process. Inappropriate movements while driving fast, for example, would have serious consequences. Numerous articles on action monitoring and, especially, error processing have been published during the last decade. Much of this research focused on a specific component of the event-related potential (ERP). Falkenstein et al. (1991) denoted the negative potential peaking between 50 and 100 ms after an incorrect motor response the error negativity (Ne), whereas Gehring et al. (1993) referred to the same component as the error-related negativity (ERN). The Ne/ERN has been examined in different types of reaction-time (RT) tasks, such as the Eriksen flanker task (Gehring et al., 1993), the Stroop task (Gehring et al., 2000), and the go/no-go task (Falkenstein et al., 1999). Remarkably, across all these experimental paradigms a sharp negative ERP component with a fronto-central maximum was observed after a wrong response, irrespective of error type (e.g., wrong-hand response or no-go error). The medial frontal anterior cingulate cortex (ACC) has been suggested as the neural generator of the Ne/ERN (e.g., Dehaene et al., 1994, Luu and Tucker, 2001).

The first theory of the functional significance of the Ne/ERN was mismatch theory, which assumes that the Ne/ERN indicates the activity of an error-detection system comparing the representations of the actual response and the correct response (Falkenstein et al., 1991). In case of a mismatch, an Ne/ERN is elicited. However, this view was challenged by the existence of an Ne/ERN-like component after correct responses (CRN; e.g., Hajcak et al., 2003, Vidal et al., 2003), because on these trials there should be no mismatch between the actual and the correct response representations. Both ERN and CRN have been linked to the same processing mechanism, which is less active during correct than during error trials. A more general processing mechanism, not restricted to response processing, was introduced in reinforcement-learning theory (Holroyd and Coles, 2002; see General Discussion) to explain findings of an Ne/ERN-like component following error feedback (Miltner et al., 1997).

The present study investigated a further hypothesis about the functional significance of Ne/ERN. Cohen and colleagues put forward the idea of Ne/ERN being associated with a system that monitors ongoing response conflicts (Botvinick et al., 2001, Carter et al., 1998, Yeung et al., 2004). Based on functional magnetic resonance imaging (fMRI) results, Carter et al. (1998) proposed a detection mechanism sensitive to the processing of competing responses. Accordingly, ACC negativity indicates simultaneous activity of two or more response processing systems. Which system ‘wins the race’ and whether or not the overt response is correct would then be of minor relevance to Ne/ERN (Gehring and Fencsik, 2001).

Botvinick et al. (2001) developed a connectionist model based on simulations of response conflict to explain ACC activity found with different experimental tasks. Within this simplified model, an input layer responsible for sensory processing of stimulus features, a response layer that contains the involved response units, and a conflict-monitoring system located within the ACC are assumed. At its core, the model deals with processes taking place within the response layer. The strength of response conflict is computed as follows: first, for each pair of simultaneously active response units $i$ and $j$, their activations ($a_i$, $a_j$) are multiplied, and the product is weighted by a coefficient ($w_{ij}$) reflecting the degree of interference between these two specific response units. Then, pairwise conflicts are accumulated across all simultaneously active units to yield the total response-conflict energy (the negative sign corresponds to the negative polarity of the Ne/ERN; see Yeung et al., 2004, p. 935):

\[
\text{response conflict} = -\sum_{i=1}^{N}\sum_{j=1}^{N} a_i a_j w_{ij}.
\]

It follows from the equation that, in the case of only one active response unit, the conflict equals zero. If two response units are simultaneously active, conflict emerges, and increasing one or both activations results in stronger response conflict. The strength of conflict within the response layer is assumed to be directly related to the amount of activity in the superordinate ACC and, hence, to Ne/ERN amplitude (Yeung et al., 2004). Note that activity in the involved response units continues beyond the overt response. Consequently, conflict should also be evident after the onset of a response if more than one response unit is active. This would result in both larger Ne/ERN amplitude and increased error likelihood, compared with situations where only one response unit is active. Importantly, from the perspective of response-conflict theory, it is not the error per se that causes larger Ne/ERN amplitudes for wrong than for correct responses.
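As a minimal numerical illustration (not the authors' implementation), the following Python sketch evaluates this double-sum conflict energy for two response units; the activation values and the interference-weight matrix are purely illustrative.

```python
import numpy as np

def conflict_energy(activations, weights):
    """Response-conflict energy as in the equation above: the negative
    double sum of pairwise products of unit activations, each weighted by
    the interference coefficient w_ij between the two units."""
    a = np.asarray(activations, dtype=float)
    w = np.asarray(weights, dtype=float)
    return -float(a @ w @ a)     # -sum_i sum_j a_i * a_j * w_ij

# Two mutually incompatible response units; self-interference set to zero.
w = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(conflict_energy([0.8, 0.0], w))   # -0.0: only one unit active, no conflict
print(conflict_energy([0.8, 0.5], w))   # -0.8: both units active, conflict present
```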

In the present study, we applied a stop-signal task (e.g., Logan and Cowan, 1984) to further investigate the ERP correlates of response-conflict monitoring. In this task, two types of trials can be distinguished: go trials and stop trials. On go trials, participants usually respond to two or more imperative stimuli (go signals) assigned to different hands. On stop trials, an additional stop signal is presented after the go signal, at a certain temporal delay. Participants are instructed to respond to the go signal as quickly as possible but to try to withhold the response if a stop signal is presented. Osman et al. (1986) developed an adaptive tracking algorithm with a varying delay between go and stop signals: the delay increased when the participant was able to stop but decreased when inhibition failed.
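The tracking logic can be sketched as follows; this is a generic one-up/one-down staircase consistent with the verbal description, and the starting delay, step size, and bounds are illustrative assumptions rather than the exact parameters of Osman et al. (1986) or of the present experiments.

```python
def update_delay(delay_ms, stopped, step_ms=50, min_ms=50, max_ms=500):
    """One-up/one-down tracking of the stop-signal delay: successful
    stopping makes the next stop trial harder (longer delay), a
    non-stopped response makes it easier (shorter delay), so the
    procedure converges on roughly 50% stopping errors."""
    delay_ms = delay_ms + step_ms if stopped else delay_ms - step_ms
    return max(min_ms, min(max_ms, delay_ms))

# Illustrative sequence of stop-trial outcomes (True = successfully stopped)
delay = 250
for stopped in (True, True, False, True, False):
    delay = update_delay(delay, stopped)
    print(delay)    # 300, 350, 300, 350, 300
```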

Common to all tasks previously applied to investigate the response-conflict model (flanker task, Yeung et al., 2004; Stroop and stem-completion tasks, Botvinick et al., 2001; probability change-signal task, Brown and Braver, 2005) was that response conflict emerged between response units involving different hands, and that all conflict-inducing stimuli were presented simultaneously. Therefore, one major aim of the present study was to test several implications of the response-conflict theory for a stop-signal task, because of its different characteristics. In the present stop task, response conflict should occur between go-response and stop-response processing controlling the same overt response. The specific advantage of the stop task is that the delay between go and stop signals varies, affecting the strength and time course of response conflict (see below).

We developed a simplified model based on considerations by Botvinick et al. (2001) and Yeung et al. (2004), yet modified with respect to the specific situation of a stop-signal task. Three levels of processing are assumed: the sensory input level representing go and stop stimuli, the response level containing the go-response and stop-response units, and finally the ACC as a conflict-monitoring system. Since our theoretical considerations are restricted to two different response units and their respective activations, go and stop responses ($a_{\text{go}}$ and $a_{\text{stop}}$), the weight $w$ equals 1. The present model is novel insofar as response-conflict energy is assumed to change over time within a trial, depending on the progress of go and stop processing. Based on the idea of response conflict being the product of the activations of simultaneously active response units, in the present task response-conflict energy is computed as a function of time by multiplying the activations of the go- and stop-response units for each point in time $t$:

\[
\text{conflict}(t) = -a_{\text{go}}(t)\, a_{\text{stop}}(t).
\]

Fig. 1A–E presents an exemplary computation of response-conflict energy as a function of time, separately for two types of responses (successfully stopped and non-stopped). Go processing starts with the onset of the go stimulus (Fig. 1A and B). After a certain delay, presentation of the stop signal triggers stop processing (Fig. 1C and D). On successfully stopped trials, go activation decreases before a response threshold has been reached, whereas on non-stopped trials it increases beyond this threshold. Obviously, the shorter the delay, the greater the likelihood of successful stopping. A typical successfully stopped trial therefore involves a rather short delay, whereas a typical non-stopped trial involves a rather long delay. Contrasting the two types of trials, it is critical for the behavioral outcome whether the sum of the delay and the internal stop-signal RT (SSRT; Logan et al., 1997) is greater or smaller than the time required for go-response processing to reach the response threshold. The former case is typical for long-delay trials (where the response cannot be stopped) but not for short-delay trials (where the response can be stopped).

Regarding response conflict, at the beginning of each trial only one process is active and the conflict equals zero. Soon after presentation of the stop signal, however, conflict emerges and increases over time (Fig. 1E). The crucial fact is that, at any point in time after stop-signal presentation, go-response activation is greater for long- relative to short-delay trials. Moreover, go activation reaches larger values and peaks later for long- relative to short-delay trials (Fig. 1A and B). Therefore, response conflict, computed as the product of go and stop activations (Fig. 1E), reaches a larger peak value on non-stopped trials than on stopped trials, and the peak occurs later for the former trial type.
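To make these temporal predictions concrete, the following sketch simulates go and stop activations as simple ramps with mutual suppression and computes conflict(t) = −a_go(t)·a_stop(t) for one short-delay and one long-delay trial. The ramp rates, threshold, and suppression rule are illustrative assumptions, not the parameters underlying Fig. 1; the sketch merely reproduces the qualitative pattern of a larger and later conflict peak on the long-delay (non-stopped) trial.

```python
import numpy as np

def simulate_trial(delay_ms, t_max=600, go_rate=0.004, stop_rate=0.01,
                   decay=0.01, threshold=1.0, suppress_at=0.5):
    """Toy time course of go/stop activation and the resulting conflict.
    Go activation ramps up from go-signal onset; stop activation ramps up
    from stop-signal onset (delay_ms after the go signal). Once stop
    activation exceeds `suppress_at`, the go unit is suppressed and decays.
    Conflict at each millisecond is -a_go(t) * a_stop(t)."""
    a_go = a_stop = 0.0
    responded = False
    conflict = np.zeros(t_max)
    for t in range(t_max):                       # t in ms after go onset
        if t >= delay_ms:
            a_stop = min(threshold, a_stop + stop_rate)
        if a_stop > suppress_at:
            a_go = max(0.0, a_go - decay)        # stop process suppresses go
        else:
            a_go = min(threshold, a_go + go_rate)
        responded = responded or a_go >= threshold   # go unit crossed threshold
        conflict[t] = -a_go * a_stop
    return conflict, responded

# Short vs. long stop-signal delay (ms); values are illustrative only.
for delay in (100, 250):
    conflict, responded = simulate_trial(delay)
    peak, latency = conflict.min(), int(conflict.argmin())
    print(f"delay={delay}: non-stopped={responded}, "
          f"peak conflict={peak:.2f} at {latency} ms")
# Qualitative pattern: the long-delay trial is not stopped and shows a
# larger (more negative) and later conflict peak than the short-delay trial.
```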

Based on this exemplary demonstration, the following predictions can be made regarding the ERP signs of response-conflict monitoring in the present task. Since a response-locked Ne/ERN_R cannot be obtained on successfully stopped trials, the following hypotheses were framed for the stimulus-locked equivalent component, Ne/ERN_S. For stop trials, we expect (i) an Ne/ERN_S to occur in ERPs time-locked to the go signal, peaking about 100 ms after the response (i.e., after mean RT). Since maximum response conflict should be larger on non-stopped than on successfully stopped trials (see above), we predict (ii) a larger Ne/ERN_S amplitude for the former type of trials. According to the model (Fig. 1E), (iii) the peak of the Ne/ERN_S should be delayed for non-stopped as compared to successfully stopped trials. Note that predictions (ii) and (iii) rest essentially on the longer mean delay for non-stopped relative to successfully stopped trials. We therefore also predict (iv) increasing peak amplitude and latency of the Ne/ERN_S with increasing delay, irrespective of response type (successfully stopped or non-stopped). On go trials, only go processing should be active; thus, low response conflict and (v) no pronounced Ne/ERN_S is expected. Finally, similar predictions are made for the Ne/ERN_R: (vi) a clear Ne/ERN_R is expected for non-stopped trials, which (vii) should be larger for long- compared to short-delay trials. Note that when Ne/ERN is used without an index in the following, we refer to both the stimulus-locked and the response-locked Ne/ERN.

Coles et al. (2001, p. 174) suggested that “fast guessing or other forms of impulsive responding” are responsible for errors in speeded RT tasks. Impulsivity is a trait reflecting the degree of behavioral control. Highly impulsive individuals are typically characterized by acting without much deliberation, which often results in fast and error-prone responding (Dickman, 1990). Several studies have examined the relationship between the Ne/ERN and psychometric dimensions related to behavioral control. Larger Ne/ERN amplitudes were found in highly relative to less conscientious individuals (Pailing and Segalowitz, 2004), in highly relative to less socialized subjects (Dikman and Allen, 2000), and in patients suffering from obsessive-compulsive disorder (Gehring et al., 2000). Therefore, the second aim of the present study was to investigate ERP signs of response-conflict monitoring as a function of individual differences in impulsivity/impulsive responding.

In addition to the more general concept of behavioral control as measured by psychometric impulsivity, an index of response control in the present stop task was considered. Recall that the delay between go and stop signals was decreased after a non-stopped response but increased after successful stopping. Based on the single-trial delays, a mean stop-signal delay can be computed for each participant. Logan et al. (1997) defined an individual's mean SSRT as the difference between the mean RT on go trials and the mean delay. A long SSRT is thought to reflect poor inhibitory control or a tendency towards impulsive responding. This assumption received support from a positive correlation between SSRT and impulsivity scores. Therefore, individual SSRT values obtained from the present stop task were used as an index of (poor) response control.
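For concreteness, a minimal sketch of this mean-method SSRT estimate is given below; the trial-level RTs and delays are illustrative placeholder values, not data from the present experiments.

```python
import numpy as np

def estimate_ssrt(go_rts_ms, stop_delays_ms):
    """Mean-method SSRT estimate (Logan et al., 1997): mean go RT minus
    the mean stop-signal delay produced by the tracking procedure, which
    holds stopping accuracy near 50%. Longer SSRT = poorer response control."""
    return float(np.mean(go_rts_ms) - np.mean(stop_delays_ms))

# Illustrative values in ms (not data from the present experiments)
go_rts = [430, 455, 470, 445, 460]     # correct go-trial reaction times
delays = [180, 230, 210, 190, 220]     # tracked stop-signal delays
print(estimate_ssrt(go_rts, delays))   # 452 - 206 = 246.0 ms
```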

The above-mentioned individual differences in Ne/ERN amplitude may suggest that individuals with weak response control/behavioral control are characterized by a less active monitoring system or by reduced sensitivity to response conflict. We therefore predict (viii) a negative relationship between SSRT and Ne/ERN amplitude on stop trials (i.e., low Ne/ERN amplitude with long SSRT and vice versa). Analogously, (ix) relative to participants with strong behavioral control, participants with low behavioral control (i.e., high scorers on impulsivity) are expected to show reduced Ne/ERN amplitudes on stop trials.

Experiment 1

The first experiment was designed to test the predictions derived from the response-conflict model of Ne/ERN, and to investigate individual differences in Ne/ERN as a function of response control/behavioral control.

Experiment 2

To examine whether the mere presence of the stop signal was responsible for Ne/ERN differences between stop trials and go trials, Experiment 2 was conducted. It was identical to Experiment 1 with the exception that an auditory go-okay signal was presented on go trials.

General discussion

The present study aimed at investigating (a) the functional significance of Ne/ERN within the framework of the response-conflict theory (Botvinick et al., 2001, Yeung et al., 2004), using a stop-signal task, and (b) individual differences in behavioral and response control related to this negativity. Several predictions regarding strength and time course of response conflict were derived from our adaptation of the response-conflict model. The present data shall also be discussed within the

Conclusion

In the present study, we tested several general assumptions of response-conflict theory regarding Ne/ERN in a stop-signal paradigm. Predictions concerning strength and time course of response conflict were derived from the model. Patterns of amplitude and latency of ERP negativity were consistent with the view of Ne/ERN reflecting the activity of a response-conflict monitoring system. The stronger the predicted response conflict, the larger was Ne/ERN. Our data further suggested that the

References (46)

  • J.R. Ramautar et al. Effects of stop-signal probability in the stop-signal paradigm: the N2/P3 complex further validated. Brain Cogn (2004).
  • F. Vidal et al. Error negativity on correct trials: a reexamination of available data. Biol Psychol (2003).
  • M.M. Botvinick et al. Conflict monitoring and cognitive control. Psychol Rev (2001).
  • J.W. Brown et al. Learned predictions of error likelihood in the anterior cingulate cortex. Science (2005).
  • C.S. Carter et al. Anterior cingulate cortex, error detection, and the online monitoring of performance. Science (1998).
  • P.T. Costa et al. Revised NEO personality inventory (NEO PI-R) and NEO five factor inventory. Professional manual (1992).
  • S. Dehaene et al. Localization of a neural system for error detection and compensation. Psychol Sci (1994).
  • S.J. Dickman. Functional and dysfunctional impulsivity: personality and cognitive correlates. J Pers Soc Psychol (1990).
  • Z.V. Dikman et al. Error monitoring during reward and avoidance learning in high- and low-socialized individuals. Psychophysiology (2000).
  • A.J. Fridlund et al. Guidelines for human electromyographic research. Psychophysiology (1986).
  • W.J. Gehring et al. Functions of the medial frontal cortex in the processing of conflict and errors. J Neurosci (2001).
  • W.J. Gehring et al. When the going gets tough, the cingulate gets going. Nat Neurosci (2004).
  • W.J. Gehring et al. A neural system for error detection and compensation. Psychol Sci (1993).