Attention is all you need 2020-05-15


Abstract

Transformer: no recurrence and no convolutions; the architecture is based entirely on attention mechanisms.

Introduction

Recurrent models are sequence-to-sequence models that compute h_t = f(h_{t-1}, position t); this sequential dependence prevents parallelization within a sequence.
RNNs tend to forget information over long distances; the Transformer instead relates any two positions directly by averaging attention-weighted positions.
Self-attention: relating different positions of a single sequence in order to compute a representation of that sequence.

Model Architecture

  1. The encoder is composed of a stack of N = 6 identical layers; each layer has two sub-layers: multi-head self-attention and a position-wise fully connected feed-forward network.
  2. The decoder is also composed of a stack of N = 6 identical layers, each with 3 sub-layers: masked multi-head self-attention, multi-head attention over the encoder output, and a fully connected feed-forward network.
  3. Position-wise feed-forward network: FFN(x) = max(0, xW1 + b1)W2 + b2
  4. A learned linear transformation followed by a softmax converts the decoder output to predicted next-token probabilities.
  5. To make use of the order of the sequence, positional information is added to the input embeddings (see the sketch after this list).
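
The position-wise feed-forward network and the sinusoidal positional encoding follow directly from the formulas above. A minimal NumPy sketch (the function names, argument names, and the assumption that d_model is even are mine, not from the paper's code):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding (d_model assumed even):
    PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)."""
    pos = np.arange(max_len)[:, None]                 # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]              # (1, d_model/2)
    angles = pos / np.power(10000, 2 * i / d_model)   # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                      # even dimensions
    pe[:, 1::2] = np.cos(angles)                      # odd dimensions
    return pe

def position_wise_ffn(x, W1, b1, W2, b2):
    """FFN(x) = max(0, xW1 + b1)W2 + b2, applied identically at every position."""
    return np.maximum(0, x @ W1 + b1) @ W2 + b2
```

In the paper d_model = 512 and the FFN inner dimension is 2048; the positional encodings are added to the input embeddings before the first layer.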

Attention

An attention function maps a query and a set of key-value pairs to an output; the compatibility between the query and each key can be computed with a dot product or with other functions (e.g. additive attention).
Scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, where scaling by 1/sqrt(d_k) keeps the dot products from growing too large.
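
A minimal sketch of scaled dot-product attention in NumPy (the function and variable names are mine, not the paper's): it forms QK^T / sqrt(d_k), applies a softmax over the keys, and uses the weights to average the values.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v); returns (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key compatibility
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # attention-weighted average of the values
```
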
Multi-head: the queries, keys and values are linearly projected h times with different learned linear projections to d_k, d_k and d_v dimensions respectively; attention is applied in parallel to each projected version, and the h outputs are concatenated and projected once more: MultiHead(Q, K, V) = Concat(head_1, ..., head_h)W^O with head_i = Attention(QW_i^Q, KW_i^K, VW_i^V).
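
Continuing the sketch above, multi-head attention applies per-head projections, runs scaled dot-product attention on each in parallel, then concatenates and projects the results; the weight matrices here are placeholders and only their shapes matter.

```python
def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    """Wq, Wk: lists of h matrices of shape (d_model, d_k); Wv: list of h (d_model, d_v);
    Wo: (h * d_v, d_model). Each head attends in its own projected subspace."""
    heads = [
        scaled_dot_product_attention(Q @ wq, K @ wk, V @ wv)
        for wq, wk, wv in zip(Wq, Wk, Wv)
    ]
    return np.concatenate(heads, axis=-1) @ Wo      # Concat(head_1, ..., head_h) W^O
```

With d_model = 512 and h = 8 heads, the paper uses d_k = d_v = 64, so the concatenation restores the model dimension.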
## Schematic: self-attention, multi-head attention, and adding positional information (figure omitted)
## Parallel computation and an overall view of the Transformer (figure omitted)
