Basic Emotion Recognition using Automatic Facial Expression Analysis Software

  Author(s)
Vivi Triyanti    (Universitas Katolik Atmajaya - Indonesia)
Yassierli Yassierli (Institut Teknologi Bandung - Indonesia)
Hardianto Iridiastadi (Institut Teknologi Bandung - Indonesia)

Copyright (c) 2019 Vivi Triyanti, Yassierli, Hardianto Iridiastadi
  Abstract

Facial expressions have been shown to convey a person's emotions, including the six basic human emotions: happiness, sadness, surprise, disgust, anger, and fear. Recognition of these basic emotions is now commonly performed with automatic facial expression analysis software. In practice, however, not all emotions are expressed in the same way. This study examines whether the six basic human emotions can be recognized by such software. Ten subjects were asked to spontaneously display the expressions of the six basic emotions in sequence, without being instructed on what the standard expression of each emotion should look like. The results show that only happy expressions were consistently and clearly identified by the software, while sad expressions were almost unrecognizable. Surprise expressions, in contrast, tended to be recognized as a mixture of surprise and happiness. Two emotions, fear and anger, proved difficult for the subjects to express; their interpretations of these emotions varied widely and were largely unrecognized by the software. This study concludes that the way a person displays emotion varies across individuals. Although some expressions share similarities, it cannot be established that all expressions of the basic human emotions can be generalized. The further implications of this research require additional discussion.

  Keywords
Automatic facial expression analysis; basic emotion; spontaneous expression

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.