A multi-modal emotion recognition system based on CNN-transformer deep learning technique
Citation
Karatay, B., Beştepe, D., Sailunaz, K., Özyer, T., & Alhajj, R. (2022). A multi-modal emotion recognition system based on CNN-transformer deep learning technique. In 7th International Conference on Data Science and Machine Learning Applications (CDMA) (pp. 145-150). Riyadh, 1-3 March 2022. https://doi.org/10.1109/CDMA54072.2022.00029

Abstract
Emotion analysis is a subject that researchers from various fields have been working on for a long time. Different emotion detection methods have been developed for the text, audio, image, and video domains. Automated emotion detection from videos and pictures using machine learning and deep learning models has been an interesting topic for researchers. In this paper, a deep learning framework that combines CNN and Transformer models is proposed to classify emotions using facial and body features extracted from videos. Facial and body features were extracted using OpenPose, and in the data preprocessing stage two operations, new video creation and frame selection, were tried. The experiments were conducted on two datasets, FABO and CK+. Our framework outperformed similar deep learning models with 99% classification accuracy on the FABO dataset, and most versions of the framework showed remarkable performance, with over 90% accuracy, on both the FABO and CK+ datasets.
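The abstract does not specify how the frame-selection preprocessing step works; a minimal sketch of one common approach, uniform temporal sampling of a fixed number of frames per video, is shown below. The function name and signature are hypothetical illustrations, not the paper's actual implementation.

```python
def select_frames(num_frames, k):
    """Hypothetical uniform frame selection: return k frame indices
    spread evenly across a video containing num_frames frames.

    This is a sketch of one plausible frame-selection strategy, not
    the method used in the paper."""
    if k >= num_frames:
        # Fewer frames than requested: keep them all.
        return list(range(num_frames))
    step = num_frames / k
    # Pick the frame at the start of each of the k equal segments.
    return [int(i * step) for i in range(k)]
```

For example, `select_frames(100, 5)` returns `[0, 20, 40, 60, 80]`, giving a fixed-length input sequence regardless of the original video length, which is convenient for batching into a CNN-Transformer model.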