Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act Recognition and Sentiment Classification
Motivation
- Dialog act and sentiment indicate the explicit and the implicit intentions, respectively. Sentiment classification (SC) can detect the sentiments in utterances, which helps capture speakers' implicit intentions.
- Argues that two kinds of information matter: contextual information and mutual interaction information. Prior methods either consider only one of them, or model them separately in a pipeline fashion.
Related Work
- 1. Figure (a) above, COLING 2018: Multi-task dialog act and sentiment recognition on Mastodon
- We manually annotate both dialogues and sentiments on this corpus, and train a multi-task hierarchical recurrent network – joint learning
- can implicitly extract the shared mutual interaction information, but fail to effectively capture the contextual information
- 2. Figure (b) above, Pattern Recognition (PR) journal; considers only contextual information: Integrated neural network model for identifying speech acts, predicators, and sentiments of dialogue utterances
- explicitly leverage the previous act information to guide the current DA prediction
- the model ignores the mutual interaction information
An example from the data:
- 3. DCR-Net, AAAI 2020
    - captures the contextual information first, then applies a relation layer to consider the mutual interaction information.
    - Pipeline: the two kinds of information are modeled independently, in separate stages.
Contributions
- first attempt to simultaneously incorporate contextual information and mutual interaction information
- proposes a co-interactive graph attention network with a cross-task connection and a cross-utterance connection
Method
- Speaker-Level Encoder: uses a GNN to incorporate information from utterances by the same speaker. Edge weight is 1 if two utterances share a speaker, 0 otherwise.
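The speaker-level graph can be sketched as follows. This is a minimal numpy sketch, not the paper's implementation: `speaker_adjacency` and `graph_attention` are hypothetical names, and masked dot-product attention stands in for the paper's exact attention formulation.

```python
import numpy as np

def speaker_adjacency(speakers):
    """A[i, j] = 1 if utterances i and j come from the same speaker, else 0."""
    n = len(speakers)
    return np.array([[1.0 if speakers[i] == speakers[j] else 0.0
                      for j in range(n)] for i in range(n)])

def graph_attention(H, A):
    """One attention-weighted aggregation step restricted to A's edges."""
    scores = H @ H.T                         # dot-product affinities
    scores = np.where(A > 0, scores, -1e9)   # mask out non-edges
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)     # row-wise softmax over neighbors
    return w @ H                             # aggregate neighbor states

speakers = ["A", "B", "A", "B"]      # toy dialog: 4 utterances, 2 speakers
H = np.random.randn(4, 8)            # utterance representations from the encoder
A = speaker_adjacency(speakers)
H_spk = graph_attention(H, A)        # speaker-aware utterance representations
```

Each utterance only attends to utterances from the same speaker, so speaker-specific context is aggregated without mixing in the other speaker's turns.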
- Stacked Co-Interactive Graph Layer: 2N nodes (a DA node and an SC node per utterance), 2N×2N possible edges.
    - Cross-utterance information
    - Cross-task information
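The 2N-node co-interactive graph can be sketched as below. This is a simplified sketch under assumed connectivity: full cross-utterance edges within each task, plus a cross-task edge linking the DA and SC node of the same utterance; the paper's exact adjacency and attention may differ, and all names here are hypothetical.

```python
import numpy as np

def co_interactive_adjacency(n):
    """2N x 2N adjacency: nodes 0..n-1 are DA nodes, n..2n-1 are SC nodes."""
    A = np.zeros((2 * n, 2 * n))
    A[:n, :n] = 1.0          # cross-utterance edges among DA nodes
    A[n:, n:] = 1.0          # cross-utterance edges among SC nodes
    for i in range(n):       # assumed cross-task edge per utterance
        A[i, n + i] = A[n + i, i] = 1.0
    return A

def graph_attention(H, A):
    """Masked dot-product attention over A's edges (same form as above)."""
    scores = np.where(A > 0, H @ H.T, -1e9)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    return w @ H

n, d = 3, 8
H = np.vstack([np.random.randn(n, d),   # DA node representations
               np.random.randn(n, d)])  # SC node representations
A = co_interactive_adjacency(n)
H_out = graph_attention(H, A)           # one layer; stack several for depth
```

Stacking this layer lets contextual information (within-task edges) and mutual interaction information (cross-task edges) propagate jointly rather than in a pipeline.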
Experiments
Open Questions
- Fundamentally, the approach still relies on certain regularities present in the data.
- The cross-task graph is still computed in an essentially fully connected manner, with binary edges: 1 if an edge exists, 0 otherwise. How can first-order neighbors be modeled better?