
Neuropsychologia

Volume 48, Issue 2, January 2010, Pages 456-466

Cross-modal influences of affect across social and non-social domains in individuals with Williams syndrome

https://doi.org/10.1016/j.neuropsychologia.2009.10.003

Abstract

The Williams syndrome (WS) cognitive profile is characterized by relative strengths in face processing, an attentional bias towards social stimuli, and an increased affinity and emotional reactivity to music. An audio-visual integration study examined the effects of auditory emotion on visual (social/non-social) affect identification in individuals with WS and in typically developing (TD) and developmentally delayed (DD) controls. The social bias in WS was hypothesized to manifest as an increased ability to process social relative to non-social affect, and as a reduced auditory influence in social contexts. The control groups were hypothesized to perform similarly across conditions. The results showed that while participants with WS performed indistinguishably from TD controls in identifying facial affect, DD controls performed significantly more poorly. The TD group outperformed the WS and DD groups in identifying non-social affect. The results suggest that emotionally evocative music facilitated the ability of participants with WS to process emotional facial expressions. These surprisingly strong face-processing skills in individuals with WS may reflect the effects of combining social and music stimuli, and a reduction in anxiety due to the music in particular. Several directions for future research are suggested.

Introduction

Williams syndrome (WS) is a neurogenetic disorder resulting from a hemizygous deletion of 25–30 genes on chromosome 7q11.23 (Ewart et al., 1993, Korenberg et al., 2000). It is associated with a unique combination of distinct facial characteristics, widespread clinical symptoms, and an asymmetrical, complex profile of cognitive and behavioral features (see Järvinen-Pasley et al., 2008, Meyer-Lindenberg et al., 2006, Morris and Mervis, 2000, for reviews). The neuropsychological profile is characterized by a mean IQ estimate between 40 and 90 (Searcy et al., 2004), with a typically higher verbal IQ (VIQ) than performance IQ (PIQ) (Howlin et al., 1998, Udwin and Yule, 1990). In addition, the neurocognitive phenotype is characterized by a unique pattern of dissociations: while relative strengths are evident in socially relevant information processing (e.g., in face and language), significant impairments are apparent in non-verbal intellectual functioning (e.g., planning, problem solving, spatial and numerical cognition) (Bellugi et al., 2000, Bellugi et al., 1994). However, rather than being “intact”, evidence indicates that near-typical performance in some socially relevant tasks, such as face processing, is associated with atypical neural processing (e.g., Haas et al., 2009, Mills et al., 2000, Mobbs et al., 2004), which may be related to significantly increased attention to faces (Riby and Hancock, 2008, Riby and Hancock, 2009), as well as to a relative enlargement in some major brain structures involved in social information processing (Reiss et al., 2004). Emerging data suggest that at least some of the characteristic “excessive” social functions, specifically an increased tendency to approach unfamiliar people, can be linked to the genetic features of the WS full deletion (Dai et al., 2009). It remains to be investigated, however, whether areas of deficit may be common to general intellectual impairment. Dai et al. (2009) report evidence from a rare individual with a deletion of a subset of the WS genes, who displays a subset of the WS features. These data suggest that GTF2I, the gene telomeric to GTF2IRD1, may contribute disproportionately to specific aspects of social behavior, such as indiscriminate approach to strangers, in WS. However, the pathways of the “dissociation” characterizing the WS social phenotype, that is, the increased sociability and emotionality on the one hand, and the clear limitations in complex social cognition on the other, are currently poorly understood.

While great progress has been made in characterizing aspects of the social phenotype of WS, and in mapping out some of its major behavioral components, a somewhat asymmetrical profile has emerged, with major enigmas remaining with respect to the “hypersocial” phenotype. Perhaps the most robust behavioral characteristic is an increased drive for social interaction, including the initiation of social contacts with unknown people, and increased social engagement (e.g., eye contact, use of language, staring at the faces of others)—a feature readily observable even in infancy (Doyle et al., 2004, Jones et al., 2000). Other characteristics that appear unique to this syndrome include a relative strength in identifying (e.g., Rossen, Jones, Wang, & Klima, 1996) and remembering (Udwin & Yule, 1991) faces, an empathetic, friendly, and emotional personality (Klein-Tasman and Mervis, 2003, Tager-Flusberg and Sullivan, 2000), as well as socially engaging language in narratives (Gothelf et al., 2008, Järvinen-Pasley et al., 2008). Remarkably, the overly social behavior and language of individuals with WS, relative to typical individuals, extend across different cultures (Järvinen-Pasley et al., 2008, Zitzer-Comfort et al., 2007). At the same time, the social profile of WS is poorly understood and appears paradoxical, in that, for example, the emotional and empathic personality is accompanied by significant deficits in social–perceptual abilities (Gagliardi et al., 2003, Plesa-Skwerer et al., 2006, Plesa-Skwerer et al., 2005, Porter et al., 2007). This pattern of strengths and deficits suggests that social functioning may have several dissociable dimensions, including affiliative drive and certain aspects of face and social–perceptual processing.

Within the WS phenotype, increased sociability is accompanied by an intriguing profile of auditory processing. Reports suggest that individuals with WS demonstrate a high affinity for music, including high engagement in musical activities (Don et al., 1999, Levitin et al., 2005a), which may be linked to increased activation of the amygdala, reduced planum temporale asymmetries, and augmented size of the superior temporal gyrus (STG) (Galaburda and Bellugi, 2000, Levitin et al., 2003, Reiss et al., 2004). However, this is not to say that individuals with WS demonstrate enhanced music processing abilities (e.g., Deruelle, Schön, Rondan, & Mancini, 2005). In addition, in as many as 95% of cases, WS is accompanied by hyperacusis, including certain sound aversions and attractions (Levitin et al., 2005b, Gothelf et al., 2006).

Of specific interest to the current study is the report that, in individuals with WS, heightened emotionality extends from their social interactions with others (e.g., Reilly et al., 2004, Tager-Flusberg and Sullivan, 2000) to the experience of music (Don et al., 1999, Levitin et al., 2005a). In one study, Levitin et al. (2005a) utilized a comprehensive parental questionnaire designed to characterize the musical phenotype in WS. Participants included 130 children and adults with WS (M = 18.6 years), as well as controls with autism, Down syndrome, and typical development (TD) (30 in each group), matched for chronological age (CA). Findings suggested that people with WS exhibited a higher degree of emotionality than the Down syndrome and TD groups when listening to music. Individuals with WS were also reported to show greater and earlier interest in music than the comparison groups. Similarly, a study by Don et al. (1999) reported that, in addition to inducing feelings of happiness, music had a significantly greater propensity to induce sadness in individuals with WS than in the comparison groups (TD, autism, Down syndrome). These findings are interesting in light of the fact that a genetic link between musicality and sociability has been postulated (Huron, 2001). More specifically, according to this view, music is assumed to have played a role in social communication and social bonding during the history of human evolution, and thus shared genes may be implicated in both social and musical behaviors. However, because reports of increased emotionality in response to music remain largely anecdotal in the WS literature, a question of significant interest concerns the ways in which musical information may influence the processing of emotion in other modalities and domains in individuals with WS.

Social behavior is arguably tightly coupled to emotion, and understanding the emotions of others is critical for successful social interactions. Previous evidence from affect identification studies utilizing standardized face and voice stimuli has robustly established that individuals with WS are significantly impaired when compared to TD CA-matched controls, but perform at the level expected for their mental age (MA). For example, a study by Plesa-Skwerer et al. (2005) included dynamic face stimuli with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions. The findings showed that TD participants were significantly better at labeling disgusted, neutral, and fearful faces than their counterparts with WS. Similarly, a study by Gagliardi et al. (2003) included animated face stimuli exhibiting neutral, angry, disgusted, afraid, happy, and sad expressions. The results showed that participants with WS performed noticeably more poorly than CA-matched controls, particularly with disgusted, fearful, and sad face stimuli. Another study by Plesa-Skwerer et al. (2006) utilized the Diagnostic Analysis of Nonverbal Accuracy test (DANVA2; Nowicki & Duke, 1994), which includes happy, sad, angry, and fearful expressions, across both voice and still face stimuli. The results showed that, across both visual and auditory domains, individuals with WS exhibited significantly poorer performance than CA-matched controls with all but the happy expressions. In all of the above-mentioned studies, the performance of participants with WS was indistinguishable from that of MA-matched controls. However, these studies fail to elucidate the potential interactions between emotion processing across different domains (e.g., visual and auditory, social and non-social) and reports of increased emotionality in WS.

Affective expressions are often multimodal, that is, simultaneous and often complementary information is provided by, for example, a face and a voice. Thus, the integration of information from visual and auditory sources is an important prerequisite for successful social interaction, particularly during face-to-face conversation. Recent studies with typical individuals utilizing multi-modal affective face/voice stimuli have shown that a congruence in emotion between the two facilitates the processing of emotion (Dolan, Morris, & de Gelder, 2001); that multimodal presentation results in faster and more accurate emotion processing than unimodal presentation (Collignon et al., 2008); that information obtained via one sense affects the information-processing of another sensory modality, even when individuals are instructed to attend to only one modality (De Gelder and Vroomen, 2000, Ethofer et al., 2006); and that visually presented affect tends to be more salient than aurally presented emotion (Collignon et al., 2008). In the context of music, research has shown that musicians’ facial expressions have a significant impact on the experience of emotion in the musical sound (Thompson et al., 2005, Thompson et al., 2008, Vines et al., 2006). These results suggest that the processes underlying the integration of facial and vocal information are automatic. Only one known study has examined audiovisual integration abilities in WS (Böhning, Campbell, & Karmiloff-Smith, 2002). In this study, which focused upon natural speech perception, individuals with WS were found to be impaired in visual but not auditory speech identification, with decreased effects of visual information upon auditory processing in the audiovisual speech condition. Nevertheless, individuals with WS demonstrated audiovisual integration of speech, albeit to a lesser degree than typical controls.

A central question that arises from the literature reviewed above concerns the role of a face, or a social context, in multimodal emotion processing in individuals with WS. Thus, the aim of the present experiment was to compare the multi-sensory processing of affect in individuals with WS and in both TD and DD controls, and to test the possibility that a “face capture” in WS (e.g., Riby and Hancock, 2008, Riby and Hancock, 2009) may extend to audio-visual contexts. That is, the presence of a face stimulus may attract the attention of individuals with WS at the cost of attending to other stimuli. Given the strong attraction to music in individuals with WS, and their supposedly increased emotionality in response to such stimuli, novel music segments conveying happy, sad, and fearful emotion were used as auditory stimuli. These three emotions were selected because they represent relatively basic affective states, and there is a sizeable literature documenting the relevant abilities of individuals with WS within the visual domain (e.g., Gagliardi et al., 2003, Plesa-Skwerer et al., 2005, Plesa-Skwerer et al., 2006, Porter et al., 2007). The auditory segments were paired with either standardized images of facial expressions in the social condition, or with standardized images of objects, scenes, and animals conveying the same affective states as the faces in the non-social condition, in both audio-visually congruent and incongruent conditions. The experimental tasks were, first, to identify the affect conveyed by the visual image and, second, to rate its intensity. To directly compare the influences of auditory emotion upon visual affect processing across social and non-social domains, that is, to examine whether the face as a stimulus may have a special status for those with WS, participants were required to respond to the visually presented emotion while ignoring the auditory affect.
Although previous evidence has indicated higher auditory than visual performance in audiovisual integration contexts for individuals with WS (Böhning et al., 2002), that study did not examine emotion processing. The current design, focusing on visual processing, allowed for a direct examination of the potential presence of the “face capture”. The judgment of emotional intensity in the visual domain was included as a measure of experienced emotionality.

In light of the unusual social profile in WS, specifically the atypically intense interest in people and faces, we predicted that the effects of auditory emotion would be relatively weaker in social, as compared to non-social, contexts, across both congruent and incongruent conditions. More specifically, we hypothesized that because of their atypical social profile, individuals with WS would exhibit a “face capture”, resulting in reduced interference of auditory emotion with stimuli comprising human faces. This pattern would be manifested as higher visual emotion processing ability in social, as compared to non-social, contexts in individuals with WS. Crucially, in addition, we hypothesized that the reduced auditory interference within the social domain in WS would specifically manifest as relatively high levels of visual performance with the audio-visually incongruent social stimuli (i.e., similar to that with congruent stimuli), reflecting the prediction that facial emotion processing would not be affected by a conflict in emotional content between the visual and auditory stimuli. By contrast, we hypothesized that stronger effects of auditory emotion would be apparent in the non-social condition, manifested as lower visual processing performance overall, mirroring the TD controls' pattern of an advantage for emotionally congruent relative to emotionally incongruent audiovisual stimuli. We hypothesized that both control groups would show similar levels of affect identification performance across the social and non-social stimuli, with higher levels of performance for the audio-visually congruent as compared to the incongruent stimuli across domains; we also expected that the TD group would outperform the DD group overall.
Based upon previous studies, we hypothesized that the TD group would also outperform the WS group in facial expression processing, while the WS and DD groups would exhibit similar levels of performance (cf. e.g., Gagliardi et al., 2003, Plesa-Skwerer et al., 2005). It was further predicted that individuals with WS would experience greater emotional intensity in the social, as compared to the non-social contexts, reflecting their increased interest in human faces over non-social stimuli. By contrast, we predicted that both TD and DD controls would exhibit similar patterns of performance across the social and non-social conditions, with both control groups experiencing the intensity of emotion as similar in the two domains, reflecting equivalent levels of interest in both types of stimuli.

Section snippets

Participants

Twenty-one individuals with WS (11 males) were recruited through a multicenter program based at the Salk Institute. For all participants, genetic diagnosis of WS was established using fluorescence in situ hybridization (FISH) probes for elastin (ELN), a gene invariably associated with the WS microdeletion (Ewart et al., 1993, Korenberg et al., 2000). In addition, all participants exhibited the medical and clinical features of the WS phenotype, including cognitive, behavioral, and physical

Accuracy of visual emotion identification across the social and non-social stimuli and the three emotions

Fig. 1 displays the percentage of correct judgments for each emotion crossed with the type of visual stimulus (social and non-social) for participants with WS, TD, and DD. A judgment was deemed correct if it corresponded with the emotion conveyed in the visual stimulus alone. The scores were collapsed across the emotion content in the music. As no neutral music was included in the audio-visual pairs, the inclusion of the results in response to the stimuli including neutral visual stimuli, as a

Discussion

The aim of the current study was to examine the effects of aurally presented emotion on visual processing of affect by contrasting individuals with WS with TD and DD control groups. We also obtained perceptual ratings of emotion intensity. The main hypothesis was that, due to the disproportionate attention towards face stimuli characterizing individuals with WS, these participants would exhibit an increased ability to identify visual affect in the social relative to non-social stimuli, due to a

Acknowledgements

This study was supported by grant P01 HD033113-12 from the National Institute of Child Health and Human Development (NICHD), and by grants from the National Institute of Neurological Disorders and Stroke (NS053326), the Michael Smith Foundation for Health Research, and the Grammy Foundation to B.W.V.

References (65)

  • D.M. Riby et al.

    Viewing it differently: Social scene perception in Williams syndrome and Autism

    Neuropsychologia

    (2008)
  • N. Tottenham et al.

    The NimStim set of facial expressions: Judgments from untrained research participants

    Psychiatry Research

    (2009)
  • U. Bellugi et al.

    The neurocognitive profile of Williams syndrome: A complex pattern of strengths and weaknesses

    Journal of Cognitive Neuroscience

    (2000)
  • U. Bellugi et al.

    Williams syndrome: An unusual neuropsychological profile

  • A.L. Benton et al.

    Contributions to neuropsychological assessment

    (1983)
  • J.D. Cohen et al.

    PsyScope: A new graphic interactive environment for designing psychology experiments

    Behavioral Research Methods, Instruments, and Computers

    (1993)
  • L. Dai et al.

Is it Williams syndrome? GTF2I implicated in sociability and GTF2IRD1 in visual–spatial construction revealed by high-resolution arrays

    American Journal of Medical Genetics

    (2009)
  • K.M. Dalton et al.

    Gaze fixation and the neural circuitry of face processing in autism

    Nature Neuroscience

    (2005)
  • B. De Gelder et al.

    The perception of emotions by ear and by eye

    Cognition and Emotion

    (2000)
  • C. Deruelle et al.

    Global and local music perception in children with Williams syndrome

    Neuroreport

    (2005)
  • R.J. Dolan et al.

    Crossmodal binding of fear in voice and face

    Proceedings of the National Academy of Sciences of the United States of America

    (2001)
  • A. Don et al.

    Music and language skills of children with Williams syndrome

    Child Neuropsychology

    (1999)
  • T.F. Doyle et al.

    Everybody in the world is my friend. Hypersociability in young children with Williams syndrome

    American Journal of Medical Genetics

    (2004)
  • E.M. Dykens

    Anxiety, fears, and phobias in persons with Williams syndrome

    Developmental Neuropsychology

    (2003)
  • T. Ethofer et al.

    Impact of voice on emotional judgment of faces: An event-related fMRI study

    Human Brain Mapping

    (2006)
  • A.K. Ewart et al.

    Hemizygosity at the elastin locus in a developmental disorder, Williams syndrome

    Nature Genetics

    (1993)
  • A.M. Galaburda et al.

    V. Multi-level analysis of cortical neuroanatomy in Williams syndrome

    Journal of Cognitive Neuroscience

    (2000)
  • D. Gothelf et al.

    Hyperacusis in Williams syndrome: Characteristics and associated neuroaudiologic abnormalities

    Neurology

    (2006)
  • D. Gothelf et al.

    Association between cerebral shape and social use of language in Williams syndrome

    American Journal of Medical Genetics A

    (2008)
  • B.W. Haas et al.

    Genetic influences on sociability: Heightened amygdala reactivity and event-related responses to positive social stimuli in Williams syndrome

    Journal of Neuroscience

    (2009)
  • P. Howlin et al.

    Cognitive functioning in adults with Williams syndrome

    Journal of Child Psychology and Psychiatry

    (1998)
  • Huron, D. (2001). Is music an evolutionary adaptation? In Biological foundations of music (Vol. 930, pp. 43–61). New...
1 A.J.-P. and B.W.V. contributed equally to this work.
