On the continuing problem of inappropriate learning measures: Comment on Wulf et al. (2014) and Wulf et al. (2015)
Introduction
An important goal of motor learning research is to identify key variables that affect how rapidly people learn motor skills, how well they learn them, and how well those skills are retained over time and possibly transferred to performance contexts different from those under which they were initially practiced. In the search for these variables, studies differ in the extent to which they emphasize theoretical versus more applied, practical issues (see Christina, 1987, 1989, for a discussion of different levels of research in motor learning, and Christina & Bjork, 1991, for a discussion of variables affecting retention and transfer). While it is vital to select important independent variables to manipulate when studying motor skill learning, it is equally important to measure the relevant dependent variables validly if we are to draw sound conclusions. The purpose of this commentary is to highlight a motor skill assessment issue that was first raised over 20 years ago by Reeve, Fischman, Christina, and Cauraugh (1994) but that has persisted and, unfortunately, continues to plague the field.
Section snippets
The problem
My focus is on two recent articles in this journal (Wulf et al., 2014, Wulf et al., 2015), although, as I will show, the issues are not limited to those studies. Wulf et al. (2014) studied the individual and combined influences of autonomy support and enhanced expectancies in novice participants learning to throw overhand with their non-dominant arm. Autonomy support was manipulated by giving participants a choice of ball color during practice, and enhanced expectancies involved
A solution
After the issues were first raised by Reeve et al. (1994), a solution was proposed by Hancock, Butler, and Fischman (1995). They introduced a set of formulae for calculating and statistically analyzing the accuracy, bias, and consistency of performance on two-dimensional tasks, such as those using concentric-circle targets, for both single individuals and groups. They also explained how specific information regarding the learning process may be missed if one uses only an accuracy measure with
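The two-dimensional measures discussed here are, in their commonly cited form, mean radial error (accuracy), the distance of the performance centroid from the target (bias), and bivariate variable error (consistency). A minimal sketch, assuming landing coordinates are recorded relative to the target center, illustrates how the three quantities separate:

```python
import math

def two_d_error_measures(points):
    """Accuracy, bias, and consistency measures for one individual's
    throws on a two-dimensional target, in the general form of the
    measures described by Hancock, Butler, and Fischman (1995).

    points: list of (x, y) landing coordinates relative to the target
            center at (0, 0).
    """
    k = len(points)
    # Accuracy: mean radial error (MRE), the average distance of the
    # throws from the target center.
    mre = sum(math.hypot(x, y) for x, y in points) / k
    # Bias: distance of the performance centroid from the target center.
    cx = sum(x for x, _ in points) / k
    cy = sum(y for _, y in points) / k
    centroid_bias = math.hypot(cx, cy)
    # Consistency: bivariate variable error (BVE), the dispersion of the
    # throws around their own centroid (not around the target).
    bve = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                        for x, y in points) / k)
    return mre, centroid_bias, bve
```

A single accuracy score collapses these three quantities: a performer who is unbiased but highly variable and one who is tightly consistent but systematically off-center can produce the same mean error, while the measures above keep them distinct.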
Learning the overhand throw
My final comment addresses the absence of precise measures of the overhand throw in Wulf et al. (2014) and Wulf et al. (2015). Participants were charged with learning to throw overhand with their non-dominant arm so as to achieve a high point total. Thus, throwing accuracy, a performance outcome measure, was the goal. Practice, retention, and transfer phases were included, which are appropriate components in motor learning research. Participants received only minimal basic instructions for the
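Because point totals on a concentric-circle target collapse two-dimensional error into a single number, a brief hypothetical sketch (the scoring bands and throw data below are assumed for illustration, not taken from the Wulf et al. studies) shows how two quite different performers can earn identical scores:

```python
import math

def circle_score(x, y):
    """Points for a throw landing at (x, y); target center at (0, 0).
    Scoring bands are assumed for illustration only."""
    r = math.hypot(x, y)
    if r <= 10:
        return 100
    if r <= 20:
        return 50
    if r <= 30:
        return 10
    return 0

def mean_radial_error(points):
    """Average distance of the throws from the target center."""
    return sum(math.hypot(x, y) for x, y in points) / len(points)

# Thrower A clusters tightly at the center; thrower B scatters widely
# but stays inside the innermost scoring ring.
thrower_a = [(1, 0), (-1, 0), (0, 1), (0, -1)]
thrower_b = [(9, 0), (-9, 0), (0, 9), (0, -9)]

score_a = sum(circle_score(x, y) for x, y in thrower_a)
score_b = sum(circle_score(x, y) for x, y in thrower_b)
# Identical point totals, yet mean radial error is 1 for A versus 9 for B.
```

The point total treats the two performers as equivalent even though their error magnitudes differ ninefold, which is precisely the kind of information an outcome score alone cannot reveal.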
Conclusion
Thirty years ago, in a critique of statistical analyses in science, Cooke and Brown (1985) stated “…the application of statistics must always be subordinate to the application of principled scientific thinking. Good statistics can never rescue bad science.” (p. 492). However, the converse is also true; that is, faulty statistics can often mask the true meaning of good science. I will take the liberty here of paraphrasing Cooke and Brown’s admonition by replacing “statistics” with “measurement”
Acknowledgements
I thank Keith Lohse, Matt Miller, and members of Auburn University’s Performance and Exercise Psychophysiology Lab for helpful discussions of the issues raised in this commentary, and Robert Christina and two anonymous reviewers for comments on a previous draft of the manuscript.
References (44)
- et al. Positive social-comparative feedback enhances motor learning in children. Psychology of Sport and Exercise (2012).
- et al. Motor learning benefits of self-controlled practice in persons with Parkinson's disease. Gait & Posture (2012).
- et al. Impacts of autonomy-supportive versus controlling instructional language on motor learning. Human Movement Science (2014).
- et al. The influence of attentional focus on the development of skill representation in a complex action. Psychology of Sport and Exercise (2014).
- et al. Knowledge of results after relatively good trials enhances self-efficacy and motor learning. Psychology of Sport and Exercise (2012).
- Self-controlled practice enhances motor learning: Implications for physiotherapy. Physiotherapy (2007).
- et al. Additive benefits of autonomy support and enhanced expectancies for motor learning. Human Movement Science (2014).
- et al. External focus and autonomy support: Two important factors in motor learning have additive benefits. Human Movement Science (2015).
- et al. Increased movement accuracy and reduced EMG activity as the result of adopting an external focus of attention. Brain Research Bulletin (2005).
- et al. Feedback after good versus poor trials affects intrinsic motivation. Research Quarterly for Exercise and Sport (2011).
- Feedback about more accurate versus less accurate trials: Differential effects on self-confidence and activation. Research Quarterly for Exercise and Sport.
- Enhancing self-controlled learning environments: The use of self-regulated feedback information. Journal of Human Movement Studies.
- Self-controlled feedback: Does it enhance learning because performers get feedback when they need it? Research Quarterly for Exercise and Sport.
- Feedback after good trials enhances learning. Research Quarterly for Exercise and Sport.
- Learning benefits of self-controlled knowledge of results in 10-year-old children. Research Quarterly for Exercise and Sport.
- Knowledge of results after good trials enhances learning in older adults. Research Quarterly for Exercise and Sport.
- Motor learning: Future lines of research.
- Whatever happened to applied research in motor learning?
- Optimizing long-term retention and transfer.
- Science and statistics in motor physiology. Journal of Motor Behavior.
- On the problem of two-dimensional error scores: Measures and analyses of accuracy, bias, and consistency. Journal of Motor Behavior.
- Self-controlled use of a perceived physical assistance device during a balancing task. Perceptual and Motor Skills.
Cited by (14)
Reliable measurement in sport psychology: The case of performance outcome measures
2020, Psychology of Sport and Exercise. Citation excerpt: “To test theories that relate theoretical constructs to each other (e.g., construct A influences construct B for individuals drawn from population P under conditions C), it is necessary to not only have reliable measures, but also valid measures that actually measure construct A and B and control for P and C. Validity typically refers to whether a given measure in fact measures what it claims to measure. Unfortunately, frequently used measures within psychology (e.g., Schimmack, 2019) and sport science (Fischman, 2015) might not measure what they claim to measure. Although, the present paper focused on reliability and not validity, high quality measurement in any scientific field needs to focus on both.”
Does limiting pre-movement time during practice eliminate the benefit of practicing while expecting to teach?
2019, Human Movement Science
Examining the impact of error estimation on the effects of self-controlled feedback
2019, Human Movement Science. Citation excerpt: “In Experiment 1, the average performance between SC and YK groups, despite not being significantly different, appeared to match previous literature. Perhaps the dependent variable used was not sensitive enough to accurately reflect these differences (for discussions see Fischman, 2015; Hancock, Butler, & Fischman, 1995; Reeve, Fischman, Christina, & Cauraugh, 1994). Therefore, we selected a laboratory task that has shown self-controlled feedback learning benefits (e.g., Carter & Ste-Marie, 2017a, 2017b).”
Simultaneous and alternate action observation and motor imagery combinations improve aiming performance
2018, Psychology of Sport and Exercise. Citation excerpt: “Another limitation of our study relates to the nature of the performance measurement used. Criticism of this method suggests that it lacks sensitivity and is inappropriate for the capture of the true characteristics of performance such as direction and variability around the target (see Fischman, 2015). Finally, the decision to ask participants to complete the intervention at home may be a further limitation of the study design, as we cannot ensure subjects integrity to engage in the intervention period.”
Good-vs. poor-trial feedback in motor learning: The role of self-efficacy and intrinsic motivation across levels of task difficulty
2018, Learning and Instruction. Citation excerpt: “It has been suggested that explicitly grouping KR trials as a function of the participant's performance (regardless of whether it relates to KR-good or KR-poor) may increase the informational value of KR (Patterson & Azizieh, 2012); suggesting that whilst there may be some role of motivational feedback on learning, this is not always the key contributor to effective performance. In addition, it is worth noting that consistency has been argued to be a better indicator of learning than performance accuracy (e.g., Fischman, 2015; Schmidt & Lee, 2011), and this is an important consideration for future motor learning research as our radial error findings limit the conclusions that can be drawn here. However, given that learning is said to reflect a relatively long-term change in performance (Schmidt, 1991), the inclusion of a one-week retention test in the present study seems to at least be a more sensitive measure of long-term learning, and may account for the lack of effects for the KR-poor group under extended retention periods.”
Does practicing a skill with the expectation of teaching alter motor preparatory cortical dynamics?
2018, International Journal of Psychophysiology