
Regular rhythmic and audio-visual stimulations enhance procedural learning of a perceptual-motor sequence in healthy adults: A pilot study

Abstract

Procedural learning is essential for the effortless execution of many everyday life activities. However, little is known about the conditions influencing the acquisition of procedural skills. The literature suggests that the sensory environment may influence the acquisition of perceptual-motor sequences, as tested with a Serial Reaction Time Task. In the current study, we investigated the effects of auditory stimulations on procedural learning of a visuo-motor sequence. Given that the literature shows that regular auditory rhythms and multisensory stimulations improve motor speed, we expected repeated practice with auditory stimulations presented either simultaneously with visual stimulations or with a regular tempo to improve procedural learning (reaction times and errors), compared to control conditions (e.g., with an irregular tempo). Our results suggest that both congruent audio-visual stimulations and regular rhythmic auditory stimulations promote procedural perceptual-motor learning. On the contrary, auditory stimulations with an irregular or very quick tempo impair learning. We discuss how regular rhythmic multisensory stimulations may improve procedural learning with respect to a multisensory rhythmic integration process.

Introduction

Procedural learning refers to the acquisition and retention of motor and cognitive skills with repeated practice [1–5]. It is essential for many everyday life activities such as driving a car or playing a musical instrument, but also for reading or writing [6,7]. Many studies have shown that repeated practice of a structured perceptual-motor sequence specified by a stimulation-response association improves the speed of the motor responses and leads to the acquisition of the perceptual-motor sequence [8–12].

The Serial Reaction Time Task (SRTT), developed by Nissen and Bullemer (1987) [13], has been widely used to study the implicit procedural learning of a visuo-motor sequence in clinical and nonclinical populations [14–17]. As described by [18], in the classic form of the task, participants have to respond as fast and as accurately as possible by pressing one of four keys corresponding to one of four locations of a visual cue. Unbeknownst to the participant, the first blocks of practice are composed of a repeated structured sequence (e.g., 10 items) that is implicitly learned. Then, on the sixth block of practice, the sequence is unexpectedly removed and replaced with random trials. Participants who have learned the perceptual-motor sequence respond less quickly and/or less accurately in this random block. Thus, as explained by Robertson (page 10073), “the difference between sequential and random response times provides a specific and sensitive measure of skill acquisition in the SRTT”. Indeed, as opposed to general learning, which includes both familiarization and sequence learning, this difference in RT (or errors) reflects the learning of the specific sequence and is called specific learning.

As stated by [19], multisensory-training protocols could enhance learning and better approximate natural multisensory settings. Several studies have tested how the manipulation of the perceptual stimulations could improve motor procedural learning. Globally, it appears that the stimulations’ features play an essential role in the learning of a motor sequence [for review see 20,21]. Manipulations concern the stimulation modality [22–24], the stimulation type [17,25,26], the stimulation-response mapping [27,28] or the response-effect mapping [29]. Results generally support the role of visuo-spatial stimulations in memorizing the sequence. The effects of auditory stimulations on the learning of a visuo-motor sequence have been more rarely studied.

Yet, several studies suggest that the temporal regularity of the sequence is learned concomitantly with the visuo-spatial sequence itself [30–32]. More precisely, [32] showed that temporal patterns can be learned when the intervals are associated with concrete events, such as specific visual stimuli or finger movements, and that temporal and spatial parameters are learned in an integrated fashion, allowing acquisition of the order of a repeated sequence. Given that temporal regularity detection is best accomplished by the auditory system [33,34], we hypothesized that providing auditory stimulations could facilitate the detection of the temporal regularity of the perceptual-motor sequence and thereby facilitate its learning.

Two types of mechanisms have been proposed to explain the benefits of auditory stimulations on motor control and learning: multisensory integration and audio-motor entrainment. Firstly, auditory stimulations provided simultaneously with visual stimulations can lead to multisensory integration. [35] showed that some neural cells in animals have higher neuronal activity in response to multisensory stimulations than to unisensory stimulations. To be integrated as a coherent whole, stimulations have to be presented within a spatial and temporal “binding window” [36–39]. Multisensory integration leads to behavioral improvements, i.e., a reduction of simple reaction times [40] and choice reaction times [41] and a quickening of the detection of visual targets [e.g., 42]. Multisensory stimulations also seem to benefit perceptual learning [19]. In particular, [43] found faster improvements on a motion-detection learning task with multisensory stimulations compared with unisensory stimulations. On this basis, it is possible that multisensory stimulations could also enhance perceptual-motor learning.

Secondly, another way to promote procedural perceptual-motor learning with auditory stimulations comes from studies on Regular Auditory Stimulations (RegAud). RegAud have been shown to benefit motor control [e.g., 44]. In many situations, movements are spontaneously attracted to external regular rhythms even though participants are not instructed to synchronize with them [see reviews 45,46]. RegAud induce a priming effect which can facilitate the production of voluntary movements, called audio-motor entrainment [44,47]. This facilitation consists of an improvement of the stability of movements in both time and space [48–50], and these effects are still observable even when attention is focused on the visual modality [51]. The spontaneous sensorimotor synchronization with an auditory rhythm can be explained by the involvement of motor cerebral areas, particularly the supplementary motor area and the primary motor cortex, in rhythm perception and production [52–55]. Moreover, listening to auditory stimulations with a regular tempo, such as a metronome, modulates corticospinal excitability, as measured via motor-evoked potentials elicited by transcranial magnetic stimulation (TMS) [56], and creates a stable time scale with a predictable pace to which the motor system adjusts for motor programming [54,57]. [59] showed that audio-motor synchronization is more accurate with simple metrics (regular intervals) than with irregular metrics (irregular intervals). The benefits of RegAud on motor control could be explained by combined activations of both auditory and motor areas [57–60]. The possible benefits of RegAud on perceptual-motor learning remain to be explored. Previous studies suggested that the temporal organization of the stimulations is an essential part of perceptual-motor sequence learning [30,32,61]. Hence, the predictable tempo of regular auditory stimulations may modulate the possible improvement of procedural learning. Thus, we hypothesize that RegAud improve the procedural learning of a perceptual-motor sequence.

On these bases, the aim of our study is to investigate the possible effects of auditory stimulations on procedural learning evaluated with a SRTT. Auditory stimulations are provided either in congruency with visual stimulations (Congruent Audio-Visual, CongrAV) or regularly (RegAud). The possible effects of these additional auditory stimulations are controlled with four conditions: visual-only stimulations (Visual Only, VisOnly), incongruent audio-visual stimulations (Non-Congruent Audio-Visual, NonCongrAV), non-regular auditory stimulations (Irregular Auditory Stimulations, IrregAud), and Quick Regular Auditory Stimulations (FastRhyth). Our main hypothesis is that auditory stimulations presented congruently with visual stimulations (CongrAV condition) and regular rhythmic auditory stimulations (RegAud condition) will enhance procedural learning compared to the control conditions. We also hypothesized that the effects of the regular auditory stimulations (RegAud) could be linked to their tempo: if the tempo is not suited to the task, no benefit should be observed. Tempo effects are therefore controlled with the FastRhyth condition.

Method

Participants

Sixty right-handed adults (laterality quotient = 77.59 ± 21.45; 32 females) participated in the study. Participants were undergraduate students pursuing sports science courses at Toulouse University. They were 18 to 30 years old (mean age = 21.80 ± 2.53), reported normal or corrected-to-normal vision and hearing, and were naïve as to the purpose of the study. They had not practiced music more than two hours per week for more than two years. They were randomly and equally assigned to the six different conditions. The study was conducted in accordance with the Declaration of Helsinki and approved by the Inserm (Institut National de la Santé et de la Recherche Médicale) ethical committee (Institutional Review Board IRB00003888—agreement n°14–156). Before the experiment began, all volunteers provided verbal informed consent and completed a written form specifying their motivation to participate in the study.

Materials

Stimulation presentation and data collection were achieved using the experimental software Presentation, version 17.2 (Neurobehavioral Systems Inc., Albany, CA), which provides sub-millisecond precision for motor response measures [62]. The laptop was connected to an external display (40 cm, 60 Hz refresh rate) and to an adapted keyboard. On this keyboard, all keys were removed except those corresponding to the four letters D, F, G and H, which were marked with yellow stickers.

Preparatory attention and divided attention were tested with the standardized phasic alertness and divided attention subtests of the Test of Attentional Performance (TAP, version 2.3; Zimmermann and Fimm, 2002).

Procedure

The participant was seated in a standardized sitting posture in a quiet room without visual or auditory distractors. The viewing distance was approximately 80 cm and the keyboard was 50 cm from the screen. The experimenter's space was separated from the participant's space by a curtain.

Before the experiment, relevant data about the participants (date of birth, gender, handedness assessed with the Edinburgh Handedness Inventory [63]) were collected.

Test of Attentional Performance (TAP).

Attentional performance was assessed to explore the link between visuo-motor learning and attentional skills. This measure also ensured that the groups did not differ in terms of attentional functions. Each of the two neuropsychological tests was composed of two parts: a phase to familiarize the participant with the instructions and a test phase in which the results were recorded. We assessed two attentional functions: preparatory attention (phasic alertness test) and divided attention. Note that one participant (RegAud condition) did not perform the attentional tests due to time constraints.

The Serial Reaction Time Task (SRTT).

We used a version of the serial reaction time task (SRTT) in which participants were instructed to respond with four fingers of their right hand (index, middle, ring, and little finger; thumb excluded) by pressing the D, F, G, and H keys of the keyboard. Each of the four keys corresponded to one of the four stimulation locations. The four possible stimulation positions were specified by four equally spaced gray boxes, each a 2 cm square, presented on a computer screen so that the stimulation-response mapping was compatible with the keyboard. On each trial, one of the four boxes on the monitor was colored in yellow, and the participant’s task was to press the corresponding key on the keyboard (Fig 1) as fast as possible (“Try to go as fast as possible and make as few mistakes as possible”). Once a key was pressed, the computer recorded the participant’s reaction time and then moved to a different box after an interval of 250 ms before the next target. If the participant did not press any key, the stimulation remained on the screen for 3000 ms before the next stimulus was presented.

Fig 1. Serial reaction time task.

Each finger is associated with a response key and each key is associated with a visual cue. When a box lit up, the participant had to press the corresponding key as quickly as possible.

https://doi.org/10.1371/journal.pone.0259081.g001
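The trial logic described above can be sketched as follows. This is a minimal illustration only; the actual experiment ran in Presentation, and `respond` is a hypothetical stand-in for the participant's keypress:

```python
# Sketch of one SRTT block. Box positions are coded 1-4, mapping to the
# D, F, G, H keys. `respond(target)` returns (pressed_position, rt_ms),
# or None if no key was pressed within the 3000 ms timeout.

TIMEOUT_MS = 3000  # stimulus remains at most 3000 ms without a response
ISI_MS = 250       # interval before the next target appears

def run_block(sequence, respond):
    """Run one block of trials; return correct-trial RTs and error count."""
    rts, errors = [], 0
    for target in sequence:
        response = respond(target)   # present stimulus, wait for a keypress
        if response is None:
            errors += 1              # no response within TIMEOUT_MS
            continue
        pressed, rt_ms = response
        if pressed == target:
            rts.append(rt_ms)        # only correct responses enter RT analyses
        else:
            errors += 1
        # ISI_MS elapses here before the next target is shown
    return rts, errors
```

For example, a simulated participant who always presses the correct key after 400 ms yields 100 recorded RTs and no errors on a 100-item block.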

All participants went through 7 blocks of 100 items each:

  • 1 block of familiarization (B0) displaying a sequence of 10 positions repeated 10 times (100 items). It was performed in the same condition as the following blocks and aimed to make sure that there were no significant differences in performance between the groups at the beginning of the experiment (B0 can be considered a baseline),
  • Then, 5 blocks of practice of the same sequence, displaying a repeated pattern of 10 positions presented ten times (100 items per block, i.e., 500 items in total) (B1 to B5), in order to test general learning.
  • Finally, a last Block (B6) presented the visual stimulations in a pseudo-random fashion (100 items). The sequence with the repeating pattern of positions was no longer played out. This Block aimed to test specific learning of the sequence.

We used several different sequences rather than a single one because learning can depend on the sequence used [64], and a single fixed sequence would leave open the possibility that the obtained results were specific to it. Thus, each group was given different controlled sequences sharing the same rules:

  • The same position could not appear on successive trials
  • Each position appeared an equal number of times
  • The sequence could not contain runs (e.g., 1234)
  • The sequence could not contain trills of four units (e.g., 1313).

On this basis, four sequences of ten positions were generated (sequence A: 1 3 4 2 3 1 4 2 1 4, sequence B: 2 4 1 3 4 2 1 4 3 1, sequence C: 3 1 4 2 1 3 4 1 2 4, sequence D: 4 2 3 1 2 4 1 3 4 1). The sequences were attributed to participants in a counterbalanced manner within each condition. The sequence selected for B0 was different from that selected for B1 to B5 in order to avoid a possible transfer of learning between two specific sequences. For Block 6 (B6), all participants performed the same pseudo-random stimulations following the previous rules applied to 100 items (3,2,1,3,4,1,3,4,2,3,1,2,4,3,1,2,3,4,2,3,1,4,3,2,4,1,2,4,1,3,2,4,1,2,4,1,3,4,2,1,4,2,3,1,4,3,1,2,4,3,1,2,4,3,1,2,4,3,1,2,4,2,3,1,4,1,3,2,1,3,4,2,1,3,2,1,3,2,1,4,2,3,1,4,3,2,4,1,2,4,1,3,4,2,3,4,1,4,2,3).
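The four construction rules can be checked programmatically. The sketch below is our own reconstruction, not the authors' code; it reads "equal number of times" as "as equal as possible" (since 10 items cannot be divided evenly over four positions) and checks runs and trills over four-item windows, as in the examples given:

```python
from collections import Counter

def follows_rules(seq, n_positions=4):
    """Check the four sequence-construction rules on a list of positions 1-4."""
    # Rule 1: the same position never appears on successive trials.
    if any(a == b for a, b in zip(seq, seq[1:])):
        return False
    # Rule 2: each position appears a (near-)equal number of times.
    counts = Counter(seq)
    if len(counts) != n_positions or max(counts.values()) - min(counts.values()) > 1:
        return False
    for a, b, c, d in zip(seq, seq[1:], seq[2:], seq[3:]):
        # Rule 3: no runs, i.e. four positions stepping by +1 or -1 (e.g., 1234).
        if (b - a == c - b == d - c) and abs(b - a) == 1:
            return False
        # Rule 4: no trills of four units, i.e. an ABAB pattern (e.g., 1313).
        if a == c and b == d and a != b:
            return False
    return True

SEQUENCES = {
    "A": [1, 3, 4, 2, 3, 1, 4, 2, 1, 4],
    "B": [2, 4, 1, 3, 4, 2, 1, 4, 3, 1],
    "C": [3, 1, 4, 2, 1, 3, 4, 1, 2, 4],
    "D": [4, 2, 3, 1, 2, 4, 1, 3, 4, 1],
}
```

All four sequences above satisfy the checker, whereas a sequence containing the run 1234 or the trill 1313 fails.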

All participants were randomly and equally assigned to one of the six different conditions.

  • In the Visual Only (VisOnly) condition, visual stimulations were presented without auditory stimulations.
  • In the Congruent Audio-Visual (CongrAV) condition, an auditory stimulation was presented at exactly the same time as each visual cue.
  • In the Non-Congruent Audio-Visual (NonCongrAV) condition, an auditory stimulation was presented 200ms after each visual cue. If the participants pressed a key before this delay, the auditory stimulation was not presented.
  • In the Regular Rhythmic Auditory Stimulations (RegAud) condition, auditory stimulations were presented every 500ms independently of visual stimulations and participants’ responses.
  • In the Irregular Rhythmic Auditory Stimulations (IrregAud) condition, auditory stimulations were presented irregularly and independently of visual stimulations and participants’ responses. This soundtrack contained the same number of auditory stimulations as in the RegAud condition, but the intervals were pseudo-randomly generated in a range of 0.022 s to 2.891 s with a mean of 0.494 s. This pattern was programmed with the free software Audacity. The same soundtrack was presented in all blocks.
  • In the Quick Regular Rhythmic Auditory Stimulations (FastRhyth) condition, auditory stimulations were presented every 300ms independently of visual stimulations and participants’ responses.

In the CongrAV, NonCongrAV, RegAud, IrregAud and FastRhyth conditions, the auditory stimuli were presented via two external speakers placed on either side of the screen and consisted of a 500 Hz, 100 ms sine wave presented at 80 dB. Participants were told that there would be auditory stimuli, but nothing was said about their purpose. In these conditions, the auditory stimuli were presented from Block 0 to Block 6. Note that B0 was performed in the same condition as the following blocks because changing the way stimuli were introduced between B0 and B1 could have induced confounding effects on participants’ performance, hence confusing the interpretation of the results of the general learning phase. Moreover, removing the auditory stimuli in the random block (B6) would have induced a double change (sequence and auditory stimuli) for participants, and it would have been difficult to determine whether a change in performance was due to the change in the sequence or to the removal of the auditory stimuli.

The order of the SRTT and TAP tests was counter-balanced for each participant to prevent training or fatigue effects. At the end of the random Block (B6), the participants’ explicit knowledge of the sequence was measured by asking them whether or not they noticed a repeated sequence.

The entire experiment took approximately 1h.

Data analyses

For all analyses, incorrect responses were not included in the RT analyses.

Pre-tests.

The laterality quotient was assessed with the Edinburgh Handedness Inventory [63]. For the two attentional tests, mean reaction times and errors were computed by the TAP software. For the B0, the averaged reaction times (RTmean), variability of reaction times (RTsd) and errors were computed and compared between Conditions.

SRTT.

Average reaction times (RTmean), variability of reaction times (RTsd) and errors for all participants in each Condition were computed across the trials of each block. The difference in performance between Blocks 1 and 5 was considered a measure of general learning (difference B1-B5 = RTmeanB1-B5, RTsdB1-B5, ErrorB1-B5), whereas the difference in performance between Blocks 5 and 6 (in which the order of visual stimulations was pseudo-randomized) was considered a measure of specific learning of the sequence (difference B6-B5 = RTmeanB6-B5, RTsdB6-B5, ErrorB6-B5). A decrease in RTmean, RTsd and errors was expected between B1 and B5 as evidence of general learning, and an increase in RTmean, RTsd and errors was expected between B5 and B6 as evidence of specific sequence learning.
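As a minimal sketch of these difference scores (block-level RT means only; the RTsd and error scores follow the same pattern):

```python
from statistics import mean

def learning_scores(block_rts):
    """Difference scores from per-block lists of correct-trial RTs (ms).
    block_rts maps block labels ("B1", "B5", "B6", ...) to RT lists."""
    rt = {block: mean(values) for block, values in block_rts.items()}
    return {
        "RTmeanB1-B5": rt["B1"] - rt["B5"],  # general learning: expected > 0
        "RTmeanB6-B5": rt["B6"] - rt["B5"],  # specific learning: expected > 0
    }
```

For instance, block means of 510 ms (B1), 410 ms (B5) and 490 ms (B6) give a general learning score of 100 ms and a specific learning score of 80 ms.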

Explicit knowledge.

Three kinds of responses were recorded to answer the question “Did you notice that the presentation of the boxes followed a repeated sequence?”: yes, no and maybe. We then computed the percentage of yes in each Condition.

Statistical analyses

The equality of variances was assessed with Levene’s test and the normality of the distribution was tested with the Kolmogorov-Smirnov test. If these assumptions were verified (at least 50% of the data showing equality of variance and normal distribution), analyses of variance (ANOVAs) were used. If the data did not satisfy the criteria of equality of variance or normality, nonparametric tests (Friedman, Mann-Whitney and Wilcoxon signed-rank tests) were used. When appropriate, the data were further analyzed with post-hoc analyses (Fisher’s LSD test).

A significance level of 0.05 was adopted for all analyses. Only significant results (p < .05) are reported in the Results section. All results are plotted using means and standard errors.

Pre-tests.

To make sure that the laterality of participants did not differ between Conditions, a one-way ANOVA with Conditions as a Factor was conducted on the Laterality Quotient.

To make sure that the attentional characteristics of participants did not differ between Conditions prior to the SRTT, one-way ANOVAs with Conditions as a Factor were conducted on the mean reaction times and errors for the two attentional tests. Pearson’s correlations were used to explore the link between attentional performance (phasic alertness and divided attention) and the SRTT general learning score (difference B1-B5 = RTmeanB1-B5, RTsdB1-B5, ErrorB1-B5) and specific learning score (difference B6-B5 = RTmeanB6-B5, RTsdB6-B5, ErrorB6-B5).

B0 analyses.

To make sure that participants did not differ between Conditions prior to the SRTT, one-way ANOVAs with Conditions as a Factor were conducted on RTmean, RTsd and errors. Pearson’s correlations were used to explore the link between B0 performance (RTmean, RTsd and errors) and the SRTT general learning score (difference B1-B5 = RTmeanB1-B5, RTsdB1-B5, ErrorB1-B5) and specific learning score (difference B6-B5 = RTmeanB6-B5, RTsdB6-B5, ErrorB6-B5).

SRTT.

To determine general learning (B1-B5) for all conditions, Conditions x Blocks ANOVAs with Conditions (VisOnly, CongrAV, NonCongrAV, RegAud, FastRhyth and IrregAud) as a between-subject factor and Blocks (B1 to B5) as a repeated measure were conducted on RTmean. A one-way ANOVA was conducted on the RTmean difference B1-B5 (RTmeanB1-B5) to compare the evolution of general learning between Conditions. Friedman tests were used to assess the evolution of RTsd and errors in each Condition.

To determine sequence-specific learning (B5-B6), Conditions x Blocks ANOVAs with Conditions (VisOnly, CongrAV, NonCongrAV, RegAud, FastRhyth and IrregAud) as between-subject factor and Blocks (B5 to B6) as a repeated measure were conducted on RTmean and errors from B5 to B6. One-way ANOVAs were conducted on the RTmean and errors differences B6-B5 (RTmeanB6-B5 and ErrorB6-B5) to compare specific learning evolution between Conditions. Wilcoxon signed rank tests were used to compare RTsd evolution between Conditions from B5 to B6 (RTsdB6-B5).

We estimated the Bayes factor for these data using JASP [65]. The Bayes factor is used to compare two hypotheses (H0 and H1) based on the collected data: it tells how much more likely one hypothesis is than the other given the data (e.g., [66–69]). The Bayes factor BF10 quantifies the likelihood of the data under H1 relative to H0. Although Bayes factors are defined on a continuous scale, several researchers have proposed subdividing the scale into discrete evidential categories [70]. We used the standard non-informative Cauchy prior in JASP with the default width of 0.707.
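The discrete evidential categories referred to here can be sketched with Jeffreys' conventional cut points. The labels below are our assumption about the scheme in [70]; some versions call the 3–10 band "moderate" rather than "substantial":

```python
def bf10_category(bf10):
    """Conventional evidential label for a Bayes factor BF10 (Jeffreys-style)."""
    if bf10 < 1:
        return "evidence for H0"
    if bf10 < 3:
        return "anecdotal"
    if bf10 < 10:
        return "substantial"
    if bf10 < 30:
        return "strong"
    if bf10 < 100:
        return "very strong"
    return "decisive"
```

Under this scheme, for example, BF10 = 1.98 falls in the anecdotal band, BF10 = 9.42 in the substantial band, and BF10 > 100 in the decisive band.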

Explicit knowledge.

We computed the percentage of “yes” responses reported by participants in each Condition. Kruskal-Wallis ANOVAs by ranks were used to compare this percentage between Conditions. We also compared the specific learning scores of participants who noticed a repeating pattern of positions and those who did not with t-tests on the RTmean and error differences B6-B5 (RTmeanB6-B5 and ErrorB6-B5).

Results

Pre-tests

No difference was found between Conditions on the participants’ mean reaction time (RTmean) for the divided attention task and the phasic alertness index. No difference was found on the Laterality Quotient or, at B0, on the RTmean, the variability of reaction times (RTsd) or errors (Table 1a). Moreover, we found no correlation between performance at the familiarization block (B0) and the SRTT learning scores, or between the two attentional task scores and the SRTT learning scores (see Table 1b).

Table 1.

a: Participants’ results for the Block 0 (B0) and the Tests of Attentional Performance (TAP). b: Pearson correlations’ results between Block 0 (B0), the Tests of Attentional Performance (TAP) and SRTT learning scores.

https://doi.org/10.1371/journal.pone.0259081.t001

General learning (B1-B5)

The ANOVA on RTmean revealed a significant effect of Block (F(4, 216) = 27.11, p < .001, η2P = .334, BF10 > 100), a BF10 greater than 100 corresponding to decisive evidence for H1. As illustrated in Fig 2a, RTmean decreased from Blocks 1 to 5. The ANOVA also revealed a significant interaction between Blocks and Conditions (F(20, 216) = 1.76, p = .027, η2P = .140, BF10 = 1.98), suggesting that the evolution of RTmean differed between Conditions; however, a BF10 of 1.98 corresponds to only anecdotal evidence in favor of H1.

Fig 2.

a. Mean Reaction Times for general learning (from B1 to B5) and for specific learning (from B5 to B6) of all Conditions. Regular Rhythmic Auditory Stimulations (RegAud in purple squares), Congruent Audio-Visual (CongrAV in blue squares), Visual Only (VisOnly in green triangles), Non-Congruent Audio-Visual (NonCongrAV in red triangles), Irregular Auditory Stimulations (IrregAud in grey triangles) and Quick Rhythmic Auditory Stimulations (FastRhyth in yellow triangles). Vertical bars represent the standard errors. b. Mean Reaction Times (z-scores) for general learning (from B1 to B5) and for specific learning (from B5 to B6) of all Conditions. Regular Rhythmic Auditory Stimulations (RegAud in purple squares), Congruent Audio-Visual (CongrAV in blue squares), Visual Only (VisOnly in green triangles), Non-Congruent Audio-Visual (NonCongrAV in red triangles), Irregular Auditory Stimulations (IrregAud in grey triangles) and Quick Rhythmic Auditory Stimulations (FastRhyth in yellow triangles). Vertical bars represent the standard errors.

https://doi.org/10.1371/journal.pone.0259081.g002

To explore this result more precisely, we conducted an ANOVA with Conditions as a Factor on RTmeanB1-B5, which revealed a significant Condition effect (F(5, 54) = 2.84, p = .024, η2P = .208, BF10 = 2.50), a BF10 of 2.50 corresponding to anecdotal evidence in favor of H1. Post-hoc Fisher tests revealed that RTmeanB1-B5 was lower for the FastRhyth Condition than for all the other Conditions (VisOnly: p = .007; CongrAV: p = .014; NonCongrAV: p = .003; RegAud: p = .005; IrregAud: p = .005).

In order to remove inter-group differences in absolute response speed and allow for a more sensitive test of differences in general learning, each participant’s observation on each measure was converted to a z-score standardized on the participant’s mean and variance (pooled across blocks). As previously, the ANOVA revealed a significant Block effect during the general learning phase (F(4, 236) = 23.67, p < .001, η2P = .286, BF10 > 1000) (Fig 2b).
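This standardization can be sketched as follows; we apply it here to per-block means for simplicity, and the pooling detail is our reading of the description:

```python
from statistics import mean, pstdev

def standardize(block_means):
    """z-score one participant's block means on that participant's own
    mean and (population) standard deviation pooled across blocks."""
    m = mean(block_means)
    sd = pstdev(block_means)
    return [(x - m) / sd for x in block_means]
```

A participant speeding up from 500 ms to 420 ms across blocks thus gets z-scores centered on zero, with early blocks positive and late blocks negative, making groups comparable regardless of their absolute speed.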

For general learning, Friedman tests revealed a significant Block effect only in the IrregAud Condition for both errors (X2R (10, 4) = 11.55, p = .021) and RTsd (X2R (10, 4) = 10.96, p = .027), suggesting that errors and RTsd increased in this Condition from B1 to B5 (Fig 3a and 3b respectively).

Fig 3.

a. Mean variations of Reaction Times (RTsd) for the Irregular Auditory Stimulations Condition (IrregAud) over general learning (B1 to B5). Vertical bars represent the standard errors. b. Number of errors for the Irregular Auditory Stimulations Condition (IrregAud) over general learning (B1 to B5). Vertical bars represent the standard errors.

https://doi.org/10.1371/journal.pone.0259081.g003

Specific learning (B5-B6)

The ANOVA on RTmean from B5 to B6 revealed a significant Block effect (F(1, 54) = 161.14, p < .001, η2P = .749, BF10 > 100), a BF10 greater than 100 corresponding to decisive evidence for H1. As illustrated in Fig 2, RTmean increased for all Conditions.

The Block effect from B5 to B6 was also significant on errors (F(1, 54) = 39.75, p < .001, η2P = .424, BF10 > 100), suggesting that errors increased between B5 and B6 for all Conditions (Fig 4a); again, a BF10 greater than 100 corresponds to decisive evidence for H1. Moreover, the ANOVA conducted on ErrorB6-B5 confirmed a significant Condition effect (F(5, 54) = 3.83, p = .005, η2P = .262, BF10 = 9.42), a BF10 of 9.42 corresponding to substantial-to-strong evidence for H1. Post-hoc Fisher tests revealed that ErrorB6-B5 was higher in the CongrAV Condition than in the VisOnly (p = .006) and FastRhyth (p = .020) Conditions (Fig 4b). Moreover, ErrorB6-B5 was higher in the RegAud Condition than in the VisOnly (p < .001), NonCongrAV (p = .016), IrregAud (p = .049) and FastRhyth (p = .003) Conditions (Fig 4b).

Fig 4.

a. Errors for general learning (from B1 to B5) and for specific learning (from B5 to B6) of all Conditions. Regular Rhythmic Auditory Stimulations (RegAud in purple), Congruent Audio-Visual stimulations (CongrAV in blue), Irregular Auditory Stimulations (IrregAud in grey), Non-Congruent Audio-Visual stimulations (NonCongrAV in red), Quick tempo Rhythmic Auditory Stimulations (FastRhyth in yellow) and Visual Only (VisOnly in green). Vertical bars represent the standard errors. b. Error differences during specific learning from B5 to B6 (ErrorB6-B5) of all Conditions. Regular Rhythmic Auditory Stimulations (RegAud in purple), Congruent Audio-Visual stimulations (CongrAV in blue), Irregular Auditory Stimulations (IrregAud in grey), Non-Congruent Audio-Visual stimulations (NonCongrAV in red), Quick tempo Rhythmic Auditory Stimulations (FastRhyth in yellow) and Visual Only (VisOnly in green). Vertical bars represent the standard errors.

https://doi.org/10.1371/journal.pone.0259081.g004

In order to remove inter-group differences in absolute response speed and allow for a more sensitive test of differences in specific learning, each participant’s observation on each measure was converted to a z-score standardized on the participant’s mean and variance (pooled across blocks). As previously, the ANOVA revealed a significant Block effect during the specific learning phase (F(1, 59) = 393.21, p < .001, η2P = .870, BF10 > 1000) (Fig 2b).

Explicit knowledge

Of the 60 participants, 39 (65%) reported that they had perceived a repeated pattern: 6 participants in the VisOnly condition, 9 in the CongrAV condition, 6 in the NonCongrAV condition, 6 in the RegAud condition, 5 in the IrregAud condition and 7 in the FastRhyth condition. There were no significant differences between Conditions in explicit knowledge (H(5, 60) = 4.11, p = .534).

The t-tests on the RTmean and error differences B6-B5 (RTmeanB6-B5 and ErrorB6-B5) between participants who noticed a repeating pattern of positions and those who did not were not significant.

Discussion

The present study aimed to investigate the effects of auditory stimulations on procedural learning of a visuo-motor sequence. To this aim, auditory stimulations were introduced during a SRTT, either with a regular rhythm (RegAud) or in temporal congruency with visual stimulations (CongrAV). These conditions were compared to four control conditions: without auditory stimulations (VisOnly), with incongruent audio-visual stimulations (NonCongrAV), with irregular auditory stimulations (IrregAud), or with a quick-tempo regular rhythm (FastRhyth). Globally, our results are in accordance with our hypotheses and indicate that both regular rhythmic auditory stimulations (RegAud) and congruent audio-visual stimulations (CongrAV) enhance procedural learning. This improvement concerns specific learning, as attested by a larger increase in errors when the randomized order of visual stimulations is introduced.

Firstly, it is important to note that these results are not related to laterality, attentional scores or performance at B0, that is, at the beginning of the SRTT. Moreover, all conditions led to the same level of explicit detection of the sequence. There is an ongoing debate in the literature about the link between awareness and what is learned [71,72]. In our study, although some subjects became aware of a repeating pattern during the learning phase (B1 to B5), there were no learning differences between aware and unaware subjects, as also shown by [28]. Hence, the detection of the repeating pattern cannot explain the differences in learning between the conditions. However, further investigation of the explicit knowledge associated with learning with and without auditory stimuli is needed, for example with a subjective scale with finer gradings, which might be more sensitive to differences. Secondly, given that the RegAud and CongrAV conditions led to better improvement compared to control conditions that also combined auditory and visual stimulations, the differences in learning cannot be explained by the mere addition of a second stimulation to a single cue. Thirdly, attentional profile was not linked to the learning process. This is in line with several studies showing no links between different executive function tasks and implicit learning [73,74]. Thus, participants’ attentional profiles are not likely to be responsible for the benefits. We discuss the learning improvement in the RegAud and CongrAV conditions with respect to the involvement of a multisensory rhythmic integration process.

Specific learning can occur without general learning

In two conditions (IrregAud and FastRhyth) we found specific learning without general learning. This is in accordance with previous findings showing that it is possible to enhance general learning but not sequence-specific learning [see 75]. Thus, our results support the idea that general and specific learning are two distinct processes subserved by distinct neural correlates [76]. The former corresponds to stimulus-based mappings and the latter to an internalized sequence representation, or response-based mappings [e.g., 76,77].

General learning is lower with irregular stimulations and quick tempo auditory stimulations

Errors are discarded in a large number of studies using the SRTT, although they are important indicators of learning. Indeed, the SRTT involves a permanent speed–accuracy trade-off, and learning can be attested by a concomitant decrease in RTmean and/or errors. If an increase in errors is associated with a decrease in RTmean, or vice versa, it is not clear whether the change reflects a learning effect or only a change of strategy. However, if one of these two variables decreases while the other remains stable, performance has improved. Interestingly, [78] showed that accuracy and speed performance depend on the instructions (more directed toward speed or accuracy) but that the learning effect occurs in both cases. Given that our results show differences between conditions on errors only, it is possible that our instructions emphasized accuracy more than speed.

Taking both variables into account, our results reveal that the IrregAud condition did not lead to general learning because it induced a decrease in RTmean concomitant with an increase in errors between B1 and B5. Due to the irregularity (non-isochrony) of the auditory stimulations in the IrregAud condition, it is likely that extracting a temporal pattern from irregular auditory stimulations is more difficult than from regular rhythmic auditory stimulations [34,52,79–81]. During a motor task, the introduction of irrelevant auditory stimulation negatively influences motor control [82]. Indeed, [37] showed that irrelevant sounds can cause a disengagement of attention from the task. In this case, participants attempt to suppress the distractors in order to complete the motor task successfully. Other studies showed that auditory distraction alters both visual attention and motor control [83–85]. Hence, introducing irregular auditory stimulations could have generated a distractor effect which limited the attentional focus on the SRTT and could have limited general learning. Our results also highlight a higher RT variability with the irregular auditory stimulations compared to the other conditions. Again, this is consistent with studies showing that providing rhythmic auditory stimulations automatically attracts the tempo of tapping and leads to better stability of movement tempo compared to control conditions without auditory stimulations. In particular, [86] suggested that auditory rhythms can modify parameters related to motor production, especially by reducing the variability of muscle activity during the preparatory period.

The decrease in RTmean was lower in the quick-tempo rhythmic auditory stimulations (FastRhyth) condition, suggesting that general learning was lower in this condition compared to the others. Given that the only difference between FastRhyth and RegAud is the speed of the auditory stimulations’ tempo, this suggests that tempo speed affects motor learning. The delay between a participant’s response and the next stimulation was 200 ms. Thus, with a tempo of 300 ms, when a motor response occurred at the same time as an auditory stimulation, the next auditory stimulation occurred 100 ms after the next visual stimulation, which is too short an interval for the participant to respond. Indeed, the RTmean achieved at the last block (B5) in the FastRhyth condition was 381.93 ms (± 61.13 ms). The literature shows that audio-motor entrainment is strongest when the tempo of the external rhythm is close to the spontaneous movement tempo (about 600 ms) but vanishes when the difference between the tempo of the external rhythm and the individual’s movement tempo is too large [45,87,88]. Furthermore, our results also suggest that a tempo quicker than the spontaneous tempo is detrimental for general learning. As in the IrregAud condition, the deleterious effect of the auditory metronome in the FastRhyth condition could be explained by a distractor effect.
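The timing arithmetic above can be written out explicitly (a sketch using only the values reported in the text: a 300 ms auditory tempo and a 200 ms response-stimulus delay; variable names are illustrative):

```python
# Timeline sketch for the FastRhyth condition (values from the text).
AUDITORY_TEMPO_MS = 300       # interval between successive auditory stimulations
RESPONSE_STIM_DELAY_MS = 200  # delay between a response and the next visual stimulation

# Suppose a motor response coincides with an auditory stimulation at t = 0 ms.
next_visual_ms = RESPONSE_STIM_DELAY_MS  # next visual stimulus appears at t = 200 ms
next_auditory_ms = AUDITORY_TEMPO_MS     # next auditory stimulus occurs at t = 300 ms

# The next auditory stimulation lags the next visual one by 100 ms,
# far shorter than the ~382 ms RTmean observed at B5 in this condition.
lag_ms = next_auditory_ms - next_visual_ms
print(lag_ms)  # prints 100
```

This makes explicit why, at this tempo, the auditory stream decouples from the visuo-motor cycle: the sound arrives well before the participant can produce the next response.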

Specific learning is enhanced with congruent audio-visual stimulations and regular rhythmic auditory stimulations

Specific learning of the sequence is attested by an increase in RTmean and errors at B6 (random visual stimulations) compared to B5 (sequenced visual stimulations). This means that a larger increase in RTmean and errors between B5 and B6 indicates a larger specific learning of the visuo-motor sequence. Our results indicate that RTmean increased for all groups, suggesting that each condition led to specific learning of the visuo-motor sequence. However, errors increased more in the conditions with congruent audio-visual stimulations (CongrAV) and regular rhythmic auditory stimulations (RegAud) than in the control conditions. Even though the number of participants is small, the relatively high BF10 suggests that the observed effect is reliable. Even if we expected to find the effects on RTmean rather than on errors, this result is in accordance with our hypotheses. Hence, both the congruent audio-visual stimulations (CongrAV) and the regular rhythmic auditory stimulations (RegAud) enhance procedural learning of the sequence. Benefits in the CongrAV and RegAud conditions are not due to the introduction of two sources of stimulation rather than one, given that the IrregAud and FastRhyth conditions, which also provide two sources of stimulation, did not enhance learning.
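The specific-learning measure used here can be expressed as a simple difference score (a sketch with hypothetical values; the function name is an assumption, not from the study):

```python
def specific_learning_score(value_b5, value_b6):
    """Specific learning is indexed by the performance cost (in RTmean or
    errors) when the random block B6 replaces the learned sequence of B5:
    a larger positive difference means stronger sequence-specific learning."""
    return value_b6 - value_b5

# Hypothetical RTmean values (ms): 430 ms at B5, 475 ms at B6
print(specific_learning_score(430, 475))  # prints 45
```

The same difference score applies to error counts, which is where the CongrAV and RegAud conditions showed the larger cost.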

Our result in the CongrAV condition is in line with the findings of [43,89,90] showing that practice with audio-visual stimulations improves the acquisition of visual motion-detection skills faster than practice with visual stimulations only. The advantage of audio-visual stimulations over visual stimulations could be attributed to several processes related to multisensory integration, such as (1) a faster detection of audio-visual stimulations than visual stimulations [35,39,42], (2) an improvement of spatial attention [91] and (3) a faster visual learning with multisensory stimulations compared to visual stimulations only [43]. Our results also contribute to the debate in the literature distinguishing (1) a modality-specific mechanism proposing that learning occurs in each modality separately and (2) a modality-general mechanism in which learning operates independently of modality [90,92]. In line with previous results [90], our findings tend to be in favor of the latter proposal. Indeed, the authors of [90] showed that learners are able to extract statistical regularities from audiovisual input and to integrate them into audio and visual streams separately. In our case, even though the auditory stimulations alone do not provide any cue regarding the sequence, it seems that they still helped the learning of the visual sequence when they were presented simultaneously with the visual stimulations. Therefore, our results are in line with the proposal that procedural learning is sensitive to multimodal input.

As regards the benefit of regular rhythmic auditory stimulations (RegAud), our results on errors are surprising given that previous studies in the literature indicate that RegAud quickens RTmean compared to visual stimulations [57]. Our results suggest that RegAud improves learning of the motor sequence by modulating errors (i.e., responses on a wrong spatial location), suggesting that RegAud enhances spatial encoding of the motor sequence. This facilitation is in line with previous results showing an improvement of movement stability in both time and space with RegAud [48–50]. Interestingly, we found facilitation in the RegAud condition even though we did not directly manipulate the temporal pattern of the to-be-learned material, as was done in most previous studies [see for example 32,93,94]. Indeed, the sequence of positions was presented through the visual modality whereas we implemented a temporal structure through auditory stimuli. One hypothesis is that the temporal regularity of these stimuli could have prepared the attentional system to deal with specific stimuli arriving in the same temporal pattern [e.g., 95,96]. Indeed, such effects have already been shown using other tasks of implicit learning of pitch structures [97,98], working memory [99], and statistical learning of artificial languages [100].

Moreover, this tempo seems to be well suited given that the optimum tempo for motor synchronization is between 400 and 800 ms [101]. Overall, the regularity of the RegAud may have facilitated the learning of the sequence of visual stimulations, and the increase in the number of errors at the random block (B6) suggests that participants continued to play the sequence out inappropriately [18].

Note that we used an ambitious design with six different conditions. However, all of them were required to understand the overall effect. For example, we showed that it is not only the regularity of the tempo that is decisive, but also that this tempo must be suited to the motor task in order to facilitate learning. Moreover, the small sample might have led us to underestimate the effects. However, a within-participants design would not have been possible because of (1) the possible transfer or interference effects between conditions and (2) the length of the experiment (6 different conditions × 6 blocks of 100 stimuli + attentional tests). Despite this ambitious design, we found some promising results that need to be explored more deeply and replicated.

Conclusion

For the first time, our results provide a strong argument in favor of the benefits of audio-visual and regular rhythmic auditory stimulations on procedural learning. This benefit was absent in the control conditions. Given that the addition of auditory information does not automatically enhance procedural learning (control conditions), the benefits cannot be attributed to the mere addition of auditory information but rather to the rhythmic structure of the auditory stimulations and to the temporal congruency of the auditory and visual stimulations. This suggests that regular rhythmic audio-visual stimulation is a relevant condition for improving procedural learning of perceptual-motor sequences. Even if these preliminary results need replication and extension with a retention test (with reintroduction of the repeated sequence in another block, B7), future research is required to find out how sequence learning and temporal information are precisely related, possibly with investigations of the temporal structure of the sequence and of the cerebral correlates of procedural learning with rhythmic multisensory stimulations.

Supporting information

S1 Table. Bayes factors levels.

Table for the interpretation of each Bayes factors level. Adapted from Jeffreys (1998).

https://doi.org/10.1371/journal.pone.0259081.s001

(DOCX)

S2 File. Auditory sounds—Regular auditory sequence.

https://doi.org/10.1371/journal.pone.0259081.s003

(WAV)

S3 File. Auditory sounds—Irregular auditory sequence.

https://doi.org/10.1371/journal.pone.0259081.s004

(WAV)

Acknowledgments

We would like to thank Catalina Onofrei for her helpful revision of the English and Robert French for his advice on Bayesian statistics. We would also like to thank Manuel Mercier for his help in programming the experiment.

References

  1. Cohen NJ, Squire LR. Preserved Learning and Retention of Pattern-Analyzing Skill in Amnesia: Dissociation of Knowing How and Knowing that. Science, New Series. 1980;210(4466):207–10. pmid:7414331
  2. Squire LR. Mechanisms of memory. Science. 1986;232(4758):1612–9. pmid:3086978
  3. Squire LR. Memory systems of the brain: A brief history and current perspective. Neurobiology of Learning and Memory. 2004 Nov 1;82(3):171–7. pmid:15464402
  4. Censor N, Sagi D, Cohen LG. Common mechanisms of human perceptual and motor learning. Nature Reviews Neuroscience. 2012 Sep;13(9):658–64. pmid:22903222
  5. Censor N. Generalization of perceptual and motor learning: A causal link with memory encoding and consolidation? Neuroscience. 2013 Oct;250:201–7. pmid:23850685
  6. Nicolson RI, Fawcett AJ, Brookes RL, Needle J. Procedural learning and dyslexia. Dyslexia. 2010 Aug 1;16(3):194–212. pmid:20680991
  7. Nicolson RI, Fawcett AJ. Procedural learning difficulties: reuniting the developmental disorders? Trends Neurosci. 2007 Apr;30(4):135–41. pmid:17328970
  8. Aizenstein HJ. Regional Brain Activation during Concurrent Implicit and Explicit Sequence Learning. Cerebral Cortex. 2004 Feb 1;14(2):199–208. pmid:14704217
  9. Miyawaki K. The influence of the response–stimulus interval on implicit and explicit learning of stimulus sequence. Psychological Research Psychologische Forschung. 2006 Jul;70(4):262–72. pmid:16044316
  10. Rüsseler J, Rösler F. Implicit and explicit learning of event sequences: evidence for distinct coding of perceptual and motor representations. Acta Psychologica. 2000 Mar;104(1):45–67. pmid:10769939
  11. Saywell N, Taylor D. The role of the cerebellum in procedural learning—Are there implications for physiotherapists’ clinical practice? Physiotherapy Theory and Practice. 2008 Jan 1;24(5):321–8. pmid:18821439
  12. Willingham DB, Salidis J, Gabrieli JDE. Direct Comparison of Neural Systems Mediating Conscious and Unconscious Skill Learning. Journal of Neurophysiology. 2002 Sep 1;88(3):1451–60. pmid:12205165
  13. Nissen MJ, Bullemer P. Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology. 1987 Jan;19(1):1–32.
  14. Coomans D, Vandenbossche J, Deroost N. The effect of attentional load on implicit sequence learning in children and young adults. Frontiers in Psychology. 2014;5:465. pmid:24904481
  15. Helmuth LL, Mayr U, Daum I. Sequence learning in Parkinson’s disease: a comparison of spatial-attention and number-response sequences. Neuropsychologia. 2000 Oct;38(11):1443–51. pmid:10906370
  16. Lum JAG, Ullman MT, Conti-Ramsden G. Procedural learning is impaired in dyslexia: Evidence from a meta-analysis of serial reaction time studies. Research in Developmental Disabilities. 2013 Oct;34(10):3460–76. pmid:23920029
  17. Robertson EM, Pascual-Leone A. Aspects of sensory guidance in sequence learning. Experimental Brain Research. 2001 Apr 2;137(3–4):336–45. pmid:11355380
  18. Robertson EM. The Serial Reaction Time Task: Implicit Motor Skill Learning? Journal of Neuroscience. 2007 Sep 19;27(38):10073–5. pmid:17881512
  19. Shams L, Seitz AR. Benefits of multisensory learning. Trends in Cognitive Sciences. 2008 Nov;12(11):411–7. pmid:18805039
  20. Abrahamse EL, Jiménez L, Verwey WB, Clegg BA. Representing serial action and perception. Psychonomic Bulletin & Review. 2010 Oct;17(5):603–23. pmid:21037157
  21. Schwarb H, Schumacher E. Generalized lessons about sequence learning from the study of the serial reaction time task. Advances in Cognitive Psychology. 2012 Jun 28;8(2):165–78. pmid:22723815
  22. Abrahamse EL, Verwey WB. Context dependent learning in the serial RT task. Psychological Research. 2008 Jul 1;72(4):397–404. pmid:17674034
  23. Kim D, Johnson BJ, Gillespie RB, Seidler RD. The effect of haptic cues on motor and perceptual based implicit sequence learning. Frontiers in Human Neuroscience. 2014;8. pmid:24734013
  24. Willingham DB. Implicit motor sequence learning is not purely perceptual. Memory & Cognition. 1999 May;27(3):561–72.
  25. Clegg BA. Stimulus-specific sequence representation in serial reaction time tasks. Q J Exp Psychol A. 2005 Aug;58(6):1087–101. pmid:16194949
  26. Mayr U. Spatial attention and implicit sequence learning: Evidence for independent learning of spatial and nonspatial sequences. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1996;22(2):350–64. pmid:8901340
  27. Deroost N, Soetens E. The role of response selection in sequence learning. Quarterly Journal of Experimental Psychology. 2006 Mar;59(3):449–56. pmid:16627348
  28. Willingham DB, Nissen MJ, Bullemer P. On the development of procedural knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1989;15(6):1047–60. pmid:2530305
  29. Hoffmann J, Sebald A, Stöcker C. Irrelevant response effects improve serial learning in serial reaction time tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2001;27(2):470–82. pmid:11294444
  30. Dominey PF. A shared system for learning serial and temporal structure of sensori-motor sequences? Evidence from simulation and human experiments. Cognitive Brain Research. 1998 Jan;6(3):163–72. pmid:9479067
  31. Gobel EW, Sánchez DJ, Reber PJ. Integration of temporal and ordinal information during serial interception sequence learning. Journal of experimental psychology Learning, memory, and cognition. 2011; pmid:21417511
  32. Shin JC, Ivry RB. Concurrent learning of temporal and spatial sequences. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2002;28(3):445–57. pmid:12018497
  33. Barakat B, Seitz AR, Shams L. Visual rhythm perception improves through auditory but not visual training. Current Biology. 2015 Jan;25(2):R60–1. pmid:25602302
  34. Patel AD, Iversen JR, Chen Y, Repp BH. The influence of metricality and modality on synchronization with a beat. Exp Brain Res. 2005 May 1;163(2):226–38. pmid:15654589
  35. Meredith M, Stein B. Interactions among converging sensory inputs in the superior colliculus. Science. 1983 Jul 22;221(4608):389–91. pmid:6867718
  36. Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nature Reviews Neuroscience. 2014 Aug;15(8):520–35. pmid:25158358
  37. Hughes HC, Reuter-Lorenz PA, Nozawa G, Fendrich R. Visual-auditory interactions in sensorimotor processing: Saccades versus manual responses. Journal of Experimental Psychology: Human Perception and Performance. 1994;20(1):131–53. pmid:8133219
  38. Meredith MA, Nemitz JW, Stein BE. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. J Neurosci. 1987 Oct 1;7(10):3215–29. pmid:3668625
  39. Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nature Reviews Neuroscience. 2008 Apr;9(4):255–66. pmid:18354398
  40. Diederich A, Colonius H. Bimodal and trimodal multisensory enhancement: Effects of stimulus onset and intensity on reaction time. Perception & Psychophysics. 2004 Nov;66(8):1388–404. pmid:15813202
  41. Hecht D, Reiner M, Karni A. Multisensory enhancement: gains in choice and in simple response times. Exp Brain Res. 2008 May 14;189(2):133. pmid:18478210
  42. Stein BE, London N, Wilkinson LK, Price DD. Enhancement of Perceived Visual Intensity by Auditory Stimuli: A Psychophysical Analysis. Journal of Cognitive Neuroscience. 1996 Nov;8(6):497–506. pmid:23961981
  43. Seitz AR, Kim R, Shams L. Sound Facilitates Visual Learning. Current Biology. 2006 Jul;16(14):1422–7. pmid:16860741
  44. Thaut MH. Neural Basis of Rhythmic Timing Networks in the Human Brain. Annals of the New York Academy of Sciences. 2003;999(1):364–73. pmid:14681157
  45. Repp BH, Su Y-H. Sensorimotor synchronization: A review of recent research (2006–2012). Psychon Bull Rev. 2013 Jun 1;20(3):403–52. pmid:23397235
  46. Varlet M, Williams R, Keller PE. Effects of pitch and tempo of auditory rhythms on spontaneous movement entrainment and stabilisation. Psychological Research. 2020;84(3):568–84. pmid:30116886
  47. Thaut MH, Kenyon GP, Schauer ML, McIntosh GC. The connection between rhythmicity and brain function. IEEE Engineering in Medicine and Biology Magazine. 1999 Apr;18(2):101–8. pmid:10101675
  48. Kudo K, Park H, Kay BA, Turvey MT. Environmental coupling modulates the attractors of rhythmic coordination. Journal of Experimental Psychology: Human Perception and Performance. 2006;32(3):599–609. pmid:16822126
  49. Roerdink M, Ridderikhoff A, Peper CE, Beek PJ. Informational and Neuromuscular Contributions to Anchoring in Rhythmic Wrist Cycling. Ann Biomed Eng. 2013 Aug;41(8):1726–39. pmid:23099793
  50. Thaut M, Schleiffers S, Davis W. Analysis of EMG Activity in Biceps and Triceps Muscle in an Upper Extremity Gross Motor Task under the Influence of Auditory Rhythm. Journal of Music Therapy. 1991 Jun 1;28(2):64–88.
  51. Repp BH, Penel A. Rhythmic movement is attracted more strongly to auditory than to visual rhythms. Psychological Research. 2004 Aug 1;68(4):252–70. pmid:12955504
  52. Chen JL, Penhune VB, Zatorre RJ. Listening to Musical Rhythms Recruits Motor Regions of the Brain. Cereb Cortex. 2008 Dec 1;18(12):2844–54. pmid:18388350
  53. Ivry RB, Hazeltine RE. Perception and production of temporal intervals across a range of durations: Evidence for a common timing mechanism. Journal of Experimental Psychology: Human Perception and Performance. 1995;21(1):3–18. pmid:7707031
  54. Molinari M, Leggio M, Thaut M. The cerebellum and neural networks for rhythmic sensorimotor synchronization in the human brain. The Cerebellum. 2007;6(1):18–23. pmid:17366263
  55. Thaut MH, McIntosh GC, Hoemberg V. Neurobiological foundations of neurologic music therapy: Rhythmic entrainment and the motor system. Frontiers in Psychology. 2015;5. pmid:25774137
  56. Michaelis K, Wiener M, Thompson JC. Passive listening to preferred motor tempo modulates corticospinal excitability. Frontiers in Human Neuroscience. 2014;8:252. pmid:24795607
  57. Thaut MH. The discovery of human auditory–motor entrainment and its role in the development of neurologic music therapy. In: Progress in Brain Research [Internet]. Elsevier; 2015 [cited 2018 Dec 12]. p. 253–66. https://linkinghub.elsevier.com/retrieve/pii/S0079612314000314.
  58. Bengtsson SL, Ullén F, Henrik Ehrsson H, Hashimoto T, Kito T, Naito E, et al. Listening to rhythms activates motor and premotor cortices. Cortex. 2009 Jan;45(1):62–71. pmid:19041965
  59. Schubotz RI, Friederici AD, Yves von Cramon D. Time Perception and Motor Timing: A Common Cortical and Subcortical Basis Revealed by fMRI. NeuroImage. 2000 Jan;11(1):1–12. pmid:10686112
  60. Grahn JA, Brett M. Rhythm and Beat Perception in Motor Areas of the Brain. Journal of Cognitive Neuroscience. 2007 May;19(5):893–906. pmid:17488212
  61. O’Reilly JX, McCarthy KJ, Capizzi M, Nobre AC. Acquisition of the Temporal and Ordinal Structure of Movement Sequences in Incidental Learning. Journal of Neurophysiology. 2008 May;99(5):2731–5. pmid:18322005
  62. Bridges D, Pitiot A, MacAskill MR, Peirce JW. The timing mega-study: comparing a range of experiment generators, both lab-based and online. PeerJ. 2020 Jul 20;8:e9414. pmid:33005482
  63. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971 Mar;9(1):97–113. pmid:5146491
  64. DeCoster J, O’Mally J. Specific Sequence Effects in the Serial Reaction Time Task. Journal of Motor Behavior. 2011 May;43(3):263–73. pmid:21598158
  65. Wagenmakers E-J, Love J, Marsman M, Jamil T, Ly A, Verhagen J, et al. Bayesian inference for psychology. Part II: Example applications with JASP. Psychon Bull Rev. 2018 Feb 1;25(1):58–76. pmid:28685272
  66. Dienes Z. Bayesian Versus Orthodox Statistics: Which Side Are You On? Perspect Psychol Sci. 2011 May 1;6(3):274–90. pmid:26168518
  67. Jeffreys H. Theory of probability. 3rd ed. Oxford [Oxfordshire]: New York: Clarendon Press; Oxford University Press; 1998. 459 p. (Oxford classic texts in the physical sciences).
  68. Kruschke JK. Bayesian Assessment of Null Values Via Parameter Estimation and Model Comparison. Perspect Psychol Sci. 2011 May 1;6(3):299–312. pmid:26168520
  69. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review. 2009 Apr 1;16(2):225–37.
  70. Kass RE, Raftery AE. Bayes Factors. Journal of the American Statistical Association. 1995 Jun 1;90(430):773–95.
  71. Rünger D, Frensch PA. How incidental sequence learning creates reportable knowledge: The role of unexpected events. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2008;34(5):1011–26. pmid:18763888
  72. Shanks DR. Learning: From Association to Cognition. Annual Review of Psychology. 2010;61(1):273–301.
  73. Feldman J. Correlational analyses of procedural and declarative learning performance. Intelligence. 1995 Feb;20(1):87–114.
  74. Reber AS, Walkenfeld FF, Hernstadt R. Implicit and Explicit Learning: Individual Differences and IQ.
  75. Chan RW, Alday PM, Zou-Williams L, Lushington K, Schlesewsky M, Bornkessel-Schlesewsky I, et al. Focused-attention meditation increases cognitive control during motor sequence performance: Evidence from the N2 cortical evoked potential. Behavioural Brain Research. 2020 Apr 20;384:112536. pmid:32032740
  76. Tubau E, Hommel B, López-Moliner J. Modes of executive control in sequence learning: From stimulus-based to plan-based control. Journal of Experimental Psychology: General. 2007 Feb;136(1):43–63. pmid:17324084
  77. Abrahamse EL, Ruitenberg MFL, de Kleine E, Verwey WB. Control of automated behavior: insights from the discrete sequence production task. Front Hum Neurosci. 2013 Mar 19;7:82. pmid:23515430
  78. Vékony T, Marossy H, Must A, Vécsei L, Janacsek K, Nemeth D. Speed or Accuracy Instructions During Skill Learning do not Affect the Acquired Knowledge. Cerebral Cortex Communications. 2020 Aug 5;1(1):tgaa041. pmid:34296110
  79. Bood RJ, Nijssen M, van der Kamp J, Roerdink M. The Power of Auditory-Motor Synchronization in Sports: Enhancing Running Performance by Coupling Cadence with the Right Beats. PLoS ONE. 2013 Aug 7;8(8):e70758. pmid:23951000
  80. Maslovat D, Chua R, Lee TD, Franks IM. Anchoring Strategies for Learning a Bimanual Coordination Pattern. Journal of Motor Behavior. 2006 Mar;38(2):101–17. pmid:16531393
  81. Serrien DJ, Spapé MM. Coupling between perception and action timing during sensorimotor synchronization. Neuroscience Letters. 2010 Dec;486(3):215–9. pmid:20884327
  82. Bigliassi M, Karageorghis CI, Bishop DT, Nowicky AV, Wright MJ. Cerebral effects of music during isometric exercise: An fMRI study. International Journal of Psychophysiology. 2018 Nov 1;133:131–9. pmid:30059701
  83. Piitulainen H, Bourguignon M, Smeds E, Tiège XD, Jousmäki V, Hari R. Phasic stabilization of motor output after auditory and visual distractors. Human Brain Mapping. 2015;36(12):5168–82. pmid:26415889
  84. Röer JP, Bell R, Körner U, Buchner A. Equivalent auditory distraction in children and adults. Journal of Experimental Child Psychology. 2018 Aug 1;172:41–58. pmid:29574236
  85. Smucny J, Rojas DC, Eichman LC, Tregellas JR. Neuronal effects of auditory distraction on visual attention. Brain Cogn. 2013 Mar;81(2):263–70. pmid:23291265
  86. Yoles-Frenkel M, Avron M, Prut Y. Impact of Auditory Context on Executed Motor Actions. Frontiers in Integrative Neuroscience. 2016;10:1. pmid:26834584
  87. Bardy BG, Hoffmann CP, Moens B, Leman M, Dalla Bella S. Sound-induced stabilization of breathing and moving. Ann N Y Acad Sci. 2015 Mar;1337:94–100. pmid:25773622
  88. Lopresti-Goodman SM, Richardson MJ, Silva PL, Schmidt RC. Period Basin of Entrainment for Unintentional Visual Coordination. Journal of Motor Behavior. 2008 Jan;40(1):3–10. pmid:18316292
  89. Frassinetti F, Bolognini N, Làdavas E. Enhancement of visual perception by crossmodal visuo-auditory interaction. Experimental Brain Research. 2002 Dec 1;147(3):332–43. pmid:12428141
  90. Mitchel AD, Weiss DJ. Learning across senses: Cross-modal effects in multisensory statistical learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2011;37(5):1081–91. pmid:21574745
  91. Spence C, Santangelo V. Capturing spatial attention with multisensory cues: A review. Hearing Research. 2009 Dec;258(1–2):134–42. pmid:19409472
  92. Seitz AR, Kim R, van Wassenhove V, Shams L. Simultaneous and Independent Acquisition of Multisensory and Unisensory Associations. Perception. 2007 Oct 1;36(10):1445–53. pmid:18265827
  93. Willingham DB, Greenberg AR, Thomas RC. Response-to-stimulus interval does not affect implicit motor sequence learning, but does affect performance. Memory & Cognition. 1997 Jul;25(4):534–42.
  94. Rünger D. How sequence learning creates explicit knowledge: the role of response–stimulus interval. Psychological Research. 2012 Sep 1;76(5):579–90. pmid:21786123
  95. Jones MR, Boltz M. Dynamic attending and responses to time. Psychological Review. 1989 Jul;96(3):459–91. pmid:2756068
  96. Large EW, Jones MR. The dynamics of attending: How people track time-varying events. Psychological Review. 1999 Jan;106(1):119–59.
  97. Selchenkova T, Jones MR, Tillmann B. The influence of temporal regularities on the implicit learning of pitch structures. Quarterly Journal of Experimental Psychology. 2014 Dec 1;67(12):2360–80. pmid:25318962
  98. Selchenkova T, François C, Schön D, Corneyllie A, Perrin F, Tillmann B. Metrical Presentation Boosts Implicit Learning of Artificial Grammar. PLOS ONE. 2014 Nov 5;9(11):e112233. pmid:25372147
  99. Plancher G, Lévêque Y, Fanuel L, Piquandet G, Tillmann B. Boosting maintenance in working memory with temporal regularities. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2018 May;44(5):812–8. pmid:29094985
  100. Hoch L, Tyler MD, Tillmann B. Regularity of unit length boosts statistical learning in verbal and nonverbal artificial languages. Psychonomic Bulletin & Review. 2013 Feb;20(1):142–7. pmid:22890871
  101. Fraisse P. II.—Rhythmic and arrhythmic movements. L’année psychologique. 1946;47(1):11–27.