Multimodal Sentiment Analysis Papers and Code from the Past Three Years

Leaderboards:

CMU-MOSEI dataset leaderboard:

CMU-MOSEI Benchmark (Multimodal Sentiment Analysis) | Papers With Code

MOSI dataset leaderboard:

MOSI Benchmark (Multimodal Sentiment Analysis) | Papers With Code

2022

M-SENA: An Integrated Platform for Multimodal Sentiment Analysis

ACL; stars: 317; 2022

UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition

stars: 44; 2022; ranked 1st on the MOSI leaderboard

MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis

stars: 14; 2022; ranked 3rd on the CMU-MOSEI leaderboard

The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress

stars: 19; 2022;

Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors

stars: 15; 2022;

2021

Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis

stars: 444; 2021;

Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis

EMNLP; stars: 444; 2021; ranked 4th on the CMU-MOSEI leaderboard;

Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis

ACL; stars: 317; 2021

CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks

EMNLP; stars: 190; contrastive learning; 2021;

Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training

EMNLP; stars: 70; contrastive learning (seemingly applied mostly to ABSA); 2021;

The MuSe 2021 Multimodal Sentiment Analysis Challenge: Sentiment, Emotion, Physiological-Emotion, and Stress

stars: 34; 2021;

2020

CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality

ACL; stars: 317; 2020

MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis

stars: 105; 2020

Deep-learning-based multimodal sentiment analysis is an approach that combines multiple perceptual modalities (e.g., text, images, audio) for sentiment analysis. Below is a simple sketch of such a model in PyTorch. Note that the layer sizes (`text_input_size`, `hidden_size`, etc.), the `MyMultiModalDataset` class, and `get_test_data()` are placeholders you must supply yourself:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader

# Define the model
class MultiModalSentimentAnalysis(nn.Module):
    def __init__(self):
        super(MultiModalSentimentAnalysis, self).__init__()
        # Text-modality subnetwork
        self.text_model = nn.Sequential(
            nn.Linear(text_input_size, hidden_size),
            nn.ReLU(),
            # add further layers as needed
        )
        # Image-modality subnetwork
        self.image_model = nn.Sequential(
            nn.Conv2d(image_input_channels, hidden_channels, kernel_size),
            nn.ReLU(),
            # add pooling/flatten layers as needed so the output is a vector
        )
        # Audio-modality subnetwork
        self.audio_model = nn.Sequential(
            nn.Conv1d(audio_input_channels, hidden_channels, kernel_size),
            nn.ReLU(),
            # add pooling/flatten layers as needed so the output is a vector
        )
        # Fusion subnetwork
        self.fusion_model = nn.Sequential(
            nn.Linear(hidden_size + hidden_channels + hidden_channels, fusion_hidden_size),
            nn.ReLU(),
            # add further layers as needed
        )
        # Sentiment classification head
        self.sentiment_classifier = nn.Linear(fusion_hidden_size, num_classes)

    def forward(self, text_input, image_input, audio_input):
        text_output = self.text_model(text_input)
        image_output = self.image_model(image_input)
        audio_output = self.audio_model(audio_input)
        # Late fusion: concatenate the per-modality feature vectors
        fusion_input = torch.cat((text_output, image_output, audio_output), dim=1)
        fusion_output = self.fusion_model(fusion_input)
        return self.sentiment_classifier(fusion_output)

# Dataset and data loader (MyMultiModalDataset is user-defined)
dataset = MyMultiModalDataset(...)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

# Initialize model, loss function, and optimizer
model = MultiModalSentimentAnalysis()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for text_input, image_input, audio_input, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(text_input, image_input, audio_input)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

# Predict with the trained model
text_input, image_input, audio_input = get_test_data()
with torch.no_grad():
    outputs = model(text_input, image_input, audio_input)
predicted_labels = torch.argmax(outputs, dim=1)
```

This is a simple multimodal sentiment analysis example covering model definition, dataset and data loader setup, training, and prediction with the trained model. You can modify and extend it to fit your own needs.
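The core of the sketch above is concatenation-based late fusion. Here is a minimal, runnable demonstration of just that step, using arbitrary toy dimensions (all sizes here are hypothetical, chosen only to make the shapes concrete):

```python
import torch
import torch.nn as nn

# Toy feature sizes (hypothetical values for illustration)
text_dim, image_dim, audio_dim = 64, 32, 32
hidden, num_classes, batch = 16, 3, 4

# One small projection per modality, mapping each input to a shared hidden size
text_net = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
image_net = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
audio_net = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
# The classifier sees the concatenation of all three hidden vectors
classifier = nn.Linear(3 * hidden, num_classes)

# Random stand-ins for real pre-extracted features
t = torch.randn(batch, text_dim)
i = torch.randn(batch, image_dim)
a = torch.randn(batch, audio_dim)

# Late fusion: concatenate per-modality features along the feature axis
fused = torch.cat((text_net(t), image_net(i), audio_net(a)), dim=1)
logits = classifier(fused)

print(fused.shape)   # torch.Size([4, 48])
print(logits.shape)  # torch.Size([4, 3])
```

The fused feature width is simply the sum of the per-modality hidden sizes (16 + 16 + 16 = 48), which is why the fusion layer's input dimension in the full model is `hidden_size + hidden_channels + hidden_channels`.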