Published in: Psychological Research 2/2024

05-08-2023 | Research

Sex related differences in the perception and production of emotional prosody in adults

Authors: Ayşe Ertürk, Emre Gürses, Maviş Emel Kulak Kayıkcı


Abstract

This study investigated sex-related patterns in the perception and production of emotional prosody in adult speakers. The study involved 42 native Turkish speakers (27 female, 15 male). Sex-related perception and production of the emotions “anger,” “joy,” “sadness,” and a “neutral” state were examined. Participants were first asked to identify the actor's emotional state by selecting one of the emotion alternatives provided; they were then instructed to produce the same stimuli with varying emotions. Changes in voice characteristics across emotions were analyzed in terms of F0 (Hz), speaking rate (seconds), and intensity (dB), using pairwise emotion comparisons. The findings showed no sex difference in the perception of emotional prosody (p = 0.725). However, a sex difference in the production of emotional prosody was documented in the pitch variation of speech. Within-group analyses revealed that women tended to use a higher pitch when expressing joy than when expressing sadness or a neutral state. In the loudness analysis, both men and women varied their loudness across emotional states. When expressing sadness, both men and women spoke more slowly than when expressing anger, joy, or a neutral state. Although Turkish speakers’ ability to perceive emotional prosody is similar to that reported for other languages, they favor variation in speech loudness when producing emotional prosody.
Metadata
Title
Sex related differences in the perception and production of emotional prosody in adults
Authors
Ayşe Ertürk
Emre Gürses
Maviş Emel Kulak Kayıkcı
Publication date
05-08-2023
Publisher
Springer Berlin Heidelberg
Published in
Psychological Research / Issue 2/2024
Print ISSN: 0340-0727
Electronic ISSN: 1430-2772
DOI
https://doi.org/10.1007/s00426-023-01865-1
