Effective Approaches to Attention-based Neural Machine Translation: Study Notes

This post walks through attention-based neural machine translation models, covering the global and local attention mechanisms as well as the input-feeding approach. Global attention considers all source words at each target word, while local attention focuses on only a subset of source words, reducing computational cost. Input-feeding lets the model make use of past alignment information, improving translation quality. The paper's experiments demonstrate the advantages of attention in handling long sentences and in translating specific content such as names.


These are my study notes after reading Effective Approaches to Attention-based Neural Machine Translation. If anything here is inaccurate, corrections are welcome.

0. Overview

This paper focuses on attention-based neural machine translation models and tests two simple, effective attention mechanisms:
1. Global approach: always attends to all source words. Architecturally simpler than earlier approaches.
2. Local approach: attends to only a subset of source words at each step. It is cheaper than the global approach (or soft attention), and unlike hard attention it is easier to implement and train. The authors also experiment with different alignment functions within these attention-based models.
Beyond testing English-German translation in both directions on the WMT task, the paper evaluates the models on learning ability, the ability to handle long sentences, the choice of attention mechanism, alignment quality, and translation output.
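The difference between the two mechanisms can be sketched numerically. The following is a minimal NumPy sketch, not the paper's implementation: all tensor shapes and names (`h_s`, `h_t`, `p_t`, `D`) are toy assumptions, the alignment function is the dot score, and the local variant shown is the simplified monotonic one (it omits the Gaussian weighting of the predictive variant).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical toy dimensions (not from the paper)
S, d = 6, 4                          # source length, hidden size
rng = np.random.default_rng(0)
h_s = rng.standard_normal((S, d))    # source hidden states
h_t = rng.standard_normal(d)         # current target hidden state

# Global attention: score every source position (dot alignment)
scores = h_s @ h_t
a_global = softmax(scores)           # weights over all S positions

# Local attention (monotonic variant): only a window [p_t - D, p_t + D]
p_t, D = 3, 1
window = slice(max(0, p_t - D), min(S, p_t + D + 1))
a_local = np.zeros(S)
a_local[window] = softmax(scores[window])

# Context vector = attention-weighted average of source states
c_global = a_global @ h_s
c_local = a_local @ h_s
```

The global weights cover all six source positions, while the local weights are nonzero only inside the window, which is where the computational saving comes from on long source sentences.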

1. Neural Machine Translation (NMT)

A neural machine translation system is a neural network that directly models the conditional probability p(y|x) of translating a source sentence x_1, ..., x_n into a target sentence y_1, ..., y_m. The basic form of NMT consists of two components:
1. Encoder: computes a representation s of the source sentence.
2. Decoder: generates one target word at a time.
The conditional probability therefore decomposes as:

log p(y|x) = sum_{j=1}^{m} log p(y_j | y_{<j}, s)

A common choice for modeling this decomposition in the decoder is an RNN. The probability of decoding each word y_j can then be parameterized as:

p(y_j | y_{<j}, s) = softmax(g(h_j))

where g is a transformation function that outputs a vocabulary-sized vector. Here, h_j is the RNN hidden state, computed as:

h_j = f(h_{j-1}, s)
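The decoder equations above can be sketched as a loop. This is a toy NumPy illustration, not the paper's model: the dimensions, weight matrices (`W_f`, `W_g`), and the vanilla-RNN stand-in for the abstract recurrence f are all assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical toy sizes: hidden dim d, vocabulary size V
d, V = 4, 10
rng = np.random.default_rng(1)
W_f = rng.standard_normal((d, 2 * d)) * 0.1   # recurrence f (vanilla-RNN stand-in)
W_g = rng.standard_normal((V, d)) * 0.1       # output transformation g

def f(h_prev, s):
    # h_j = f(h_{j-1}, s): one abstract recurrent step
    return np.tanh(W_f @ np.concatenate([h_prev, s]))

s = rng.standard_normal(d)    # source-sentence representation from the encoder
h = np.zeros(d)               # initial decoder hidden state
for j in range(3):            # decode three steps
    h = f(h, s)
    p = softmax(W_g @ h)      # p(y_j | y_{<j}, s) = softmax(g(h_j))
    y_j = int(p.argmax())     # greedy choice of the next target word
```

Each step feeds the previous hidden state and the source representation through f, then g maps the new hidden state to a distribution over the target vocabulary.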
