Multimodal Sarcasm Detection via Hybrid Classifier with Optimistic Logic
DOI: https://doi.org/10.26636/jtit.2022.161622
Keywords: Bi-GRU, improved CCA, LSTM, multimodal sarcasm detection
Abstract
This work proposes a novel multimodal sarcasm detection model comprising four stages: pre-processing, feature extraction, feature-level fusion, and classification. The pre-processing stage operates on multimodal data consisting of text, video, and audio. Text is pre-processed using tokenization and stemming, video is pre-processed via face detection, and audio is pre-processed using filtering. During the feature extraction stage, text features such as TF-IDF, improved bag of visual words, n-grams, and emoji-based features are extracted, while video features are extracted using the improved SLBT and the constrained local model (CLM). Similarly, audio features such as MFCC, chroma, spectral features, and jitter are extracted. The extracted features are then passed to the feature-level fusion stage, where an improved multilevel canonical correlation analysis (CCA) fusion technique is applied. Classification is performed using a hybrid classifier (HC) combining a bidirectional gated recurrent unit (Bi-GRU) and an LSTM, whose outputs are averaged to obtain the final decision. To make the detection results more accurate, the LSTM weights are optimally tuned by the proposed opposition learning-based Aquila optimization (OLAO) model. The MUStARD dataset, a multimodal video corpus used for automated sarcasm discovery studies, is employed for evaluation. Finally, the effectiveness of the proposed approach is demonstrated based on various metrics.
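To make the hybrid-classifier idea concrete, the following is a minimal sketch (not the authors' implementation) of a Bi-GRU branch and an LSTM branch whose softmax outputs are averaged, as described in the abstract. Feature dimensions, sequence length, hidden sizes, and class count are illustrative assumptions, and the OLAO-based weight tuning step is omitted.

```python
import torch
import torch.nn as nn

class HybridSarcasmClassifier(nn.Module):
    """Sketch of a hybrid classifier: a Bi-GRU branch and an LSTM branch
    whose class probabilities are averaged. All dimensions are assumed."""
    def __init__(self, feat_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.bigru = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.gru_head = nn.Linear(2 * hidden_dim, num_classes)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.lstm_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, fused_feats):
        # fused_feats: (batch, seq_len, feat_dim) fused multimodal features
        gru_out, _ = self.bigru(fused_feats)
        gru_logits = self.gru_head(gru_out[:, -1, :])   # last time step
        lstm_out, _ = self.lstm(fused_feats)
        lstm_logits = self.lstm_head(lstm_out[:, -1, :])
        # Average the two branch predictions to obtain the final output
        probs = (torch.softmax(gru_logits, dim=-1) +
                 torch.softmax(lstm_logits, dim=-1)) / 2
        return probs

# Usage: a batch of 4 utterances, 20 time steps, 128-dim fused features
model = HybridSarcasmClassifier()
x = torch.randn(4, 20, 128)
print(model(x).shape)  # torch.Size([4, 2])
```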
License
Copyright (c) 2022 Journal of Telecommunications and Information Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.