
Context-Based Adaptive Multimodal Fusion Network for Continuous Frame-Level Sentiment Prediction

Huang, Maochun; Qing, Chunmei*; Tan, Junpeng; Xu, Xiangmin

Abstract

Recently, video sentiment computing has become a research focus because of its benefits in many applications such as digital marketing, education, and healthcare. The difficulty of video sentiment prediction lies mainly in the regression accuracy of long-term sequences and in how to integrate different modalities; in particular, different modalities may express different emotions. To maintain the continuity of long time-series sentiments and mitigate multimodal conflicts, this article proposes a novel Context-Based Adaptive Multimodal Fusion Network (CAMFNet) for consecutive frame-level sentiment prediction. A Context-Based Transformer (CBT) module is specifically designed to embed clip features into continuous frame features, leveraging its capability to enhance the consistency of prediction results. Moreover, to resolve conflicts between modalities, this article proposes an Adaptive Multimodal Fusion (AMF) method based on the self-attention mechanism. It dynamically determines the degree of shared semantics across modalities, enabling the model to flexibly adapt its fusion strategy. Through adaptive fusion of multimodal features, the AMF method effectively resolves potential conflicts arising from diverse modalities, ultimately enhancing the overall performance of the model. The proposed CAMFNet for consecutive frame-level sentiment prediction thus ensures the continuity of long time-series sentiments. Extensive experiments illustrate the superiority of the proposed method, especially on videos with multimodal conflicts.
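The abstract gives no implementation details, but the AMF idea it describes (self-attention across modality features plus a dynamically determined degree of shared semantics) can be sketched. Below is a minimal, hypothetical PyTorch sketch; the class name AdaptiveMultimodalFusion, the sigmoid gate, the layer sizes, and the final mean pooling are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class AdaptiveMultimodalFusion(nn.Module):
    """Hypothetical sketch of the AMF idea: self-attention over modality
    tokens, then a learned gate that decides how much shared semantics
    each modality keeps. Architecture details are assumptions."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, modality_feats: torch.Tensor) -> torch.Tensor:
        # modality_feats: (batch, num_modalities, dim), one token per modality.
        shared, _ = self.attn(modality_feats, modality_feats, modality_feats)
        # Gate weighs shared vs. modality-private semantics per modality,
        # softening the fusion when modalities express conflicting emotions.
        g = self.gate(torch.cat([modality_feats, shared], dim=-1))
        fused = g * shared + (1.0 - g) * modality_feats
        return fused.mean(dim=1)  # single fused representation

# Usage: fuse audio, visual, and text clip features of equal width.
feats = torch.randn(8, 3, 256)  # batch of 8, 3 modalities, 256-d each
fused = AdaptiveMultimodalFusion(256)(feats)
print(fused.shape)  # torch.Size([8, 256])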

Keywords

Video sentiment computing; multimodal fusion; context-based transformer; long-term features