01-11-2011 | Editorial
Crossmodal action: modality matters
Authors:
Lynn Huestegge, Eliot Hazeltine
Published in:
Psychological Research
Issue 6/2011
Excerpt
Research on multitasking harkens back to the beginnings of cognitive psychology. The central question has always been how we manage to perform multiple actions at the same time. Here, we highlight the role of specific input and output modalities involved in coordinating multiple action demands (i.e., crossmodal action). For a long time, modality- and content-blind models of multitasking have dominated theory, but a variety of recent findings indicate that modalities and content substantially determine performance. Typically, the term “input modality” refers to sensory channels (e.g., visual input is treated differently from auditory input), and the term “output modality” is closely associated with effector systems (e.g., hand vs. foot movements). However, this definition may be too narrow. The term “input modality” sometimes refers to a dimension within a sensory channel (e.g., shape/color in vision). Furthermore, the linkage between output modalities and effector systems may not be specific enough to illuminate some notorious twilight zones (e.g., to distinguish between hand and wrist movements). As a consequence, we will use “modality” as an umbrella term here to capture both the sources of stimulus variability used to differentiate task-relevant information and the sources of motor variability used to differentiate responses. …