
FACSGen: A Tool to Synthesize Emotional Facial Expressions Through Systematic Manipulation of Facial Action Units

  • Original Paper
  • Published: 2011, Journal of Nonverbal Behavior

Abstract

To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated from actor portrayals. The drawback of such standardized material is its lack of flexibility and controllability: it permits systematic parametric manipulation neither of specific features of a facial expression nor of more general properties of the facial identity (age, ethnicity, gender). To remedy this problem, we developed FACSGen, a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System (FACS). FACSGen gives researchers full control over facial action units and the corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
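FACSGen's programming interface is not published in this preview, and the tool is distributed only on a per-collaboration basis (see Additional information below). Purely as an illustration of the general idea described in the abstract, the following Python sketch shows what parametric, time-resolved control over FACS action units (AUs) could look like; all names in it (AUKeyframe, interpolate) are hypothetical and are not part of FACSGen.

```python
# Illustrative only: FACSGen's real interface is not public. This sketch
# shows the general idea of parametric control over facial action units
# (AUs); all names and the 0.0-1.0 intensity scale are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AUKeyframe:
    """A target AU configuration at a point in time.

    Intensities use a 0.0-1.0 scale standing in for the FACS
    A-E intensity levels (A = trace, E = maximum).
    """
    time_s: float
    intensities: dict[int, float] = field(default_factory=dict)  # AU number -> intensity


def interpolate(start: AUKeyframe, end: AUKeyframe, t_s: float) -> dict[int, float]:
    """Linearly interpolate AU intensities between two keyframes,
    yielding one frame of a dynamic (unfolding) expression."""
    span = end.time_s - start.time_s
    alpha = 0.0 if span <= 0 else min(max((t_s - start.time_s) / span, 0.0), 1.0)
    aus = set(start.intensities) | set(end.intensities)
    return {
        au: (1 - alpha) * start.intensities.get(au, 0.0)
        + alpha * end.intensities.get(au, 0.0)
        for au in aus
    }


# Example: a fear-like pattern unfolding over 500 ms, combining
# AU1 (inner brow raiser), AU2 (outer brow raiser),
# AU5 (upper lid raiser), and AU20 (lip stretcher).
neutral = AUKeyframe(time_s=0.0)
apex = AUKeyframe(time_s=0.5, intensities={1: 0.8, 2: 0.6, 5: 0.9, 20: 0.5})
frame_at_250ms = interpolate(neutral, apex, 0.25)
```

A dynamic stimulus would then be rendered by sampling such interpolated AU frames at the display's frame rate and applying them to the 3D face model; a static stimulus corresponds to a single keyframe.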



Acknowledgments

This work was partially supported by the following sources: HUMAINE, 6th Framework Programme IST Multimodal Interfaces (http://emotion-research.net); the National Centre of Competence in Research (NCCR) in Affective Sciences, financed by the Swiss National Science Foundation (grant no. 51NF40-104897); a grant from the Swiss National Science Foundation (105311-108187/1, to David Sander and Patrik Vuilleumier); and the "Programme d'actions intégrées Franco-Suisse Germaine de Staël," in collaboration with the Swiss Academy for Technical Sciences (to Lionel Reveret and David Sander).

The authors would like to thank Prof. Susanne Kaiser, Dr. Marc Méhu, Katia Schenkel, Birgit Michel, and Stéphane With (University of Geneva) for their expertise and guidance on the FACS, and Dr. Mina Vasalou (University of Bath) for comments on drafts of this paper.

Author information

Corresponding author

Correspondence to Etienne B. Roesch.

Additional information

FACSGen is software developed at the Swiss Centre for Affective Sciences for research purposes. It is available only on a per-collaboration basis. More information can be found at http://www.affective-sciences.ch/facsgen. FaceGen Modeller can be purchased from Singular Inversions Inc.; prices and a demonstration version of the software are available at http://www.facegen.com.

About this article

Cite this article

Roesch, E.B., Tamarit, L., Reveret, L. et al. FACSGen: A Tool to Synthesize Emotional Facial Expressions Through Systematic Manipulation of Facial Action Units. J Nonverbal Behav 35, 1–16 (2011). https://doi.org/10.1007/s10919-010-0095-9
