Transformer: 45 Curated Papers (Models, Architecture, Training Methods)

Today, let's talk about the Transformer.

Thanks to the explosive popularity of ChatGPT, large models have been the hottest research direction in AI this year. The Transformer, the foundational work behind these models, is back in the spotlight, with new results being published one after another.

For newcomers to AI, the Transformer is essential material; for those working in other areas of artificial intelligence, it is a foundation that must be mastered.

So this time I have compiled Transformer-related papers for you: 23 on models, 10 on architecture, 8 on post-pretraining processing, and 4 on training methods, so that beginners can get up to speed quickly and others can organize their own knowledge.

The paper list is as follows:

1. Models (23)

GPT

Improving Language Understanding by Generative Pre-Training

GPT-2

Language Models are Unsupervised Multitask Learners

GPT-3

Language Models are Few-Shot Learners

GPT-3.5

Models referred to as"GPT 3.5"

GPT-4

GPT-4 Technical Report

GPT-NeoX

GPT-NeoX-20B: An Open-Source Autoregressive Language Model

GPT-J

Pretrained Models

Gopher

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

AlphaCode

Competition-Level Code Generation with AlphaCode

RETRO

Improving language models by retrieving from trillions of tokens

Chinchilla

Training Compute-Optimal Large Language Models

Flamingo

Flamingo: a Visual Language Model for Few-Shot Learning

Gato

A Generalist Agent

Anthropic LM

A General Language Assistant as a Laboratory for Alignment

PaLM

PaLM: Scaling Language Modeling with Pathways

GLaM

GLaM: Efficient Scaling of Language Models with Mixture-of-Experts

LaMDA

LaMDA: Language Models for Dialog Applications

LLaMA

LLaMA: Open and Efficient Foundation Language Models

Switch

Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

BLOOM

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Galactica

Galactica: A Large Language Model for Science

OPT

OPT: Open Pre-trained Transformer Language Models

GLM-130B

GLM-130B: AN OPEN BILINGUAL PRE-TRAINED MODEL

2. Architecture (10)

Multi-Query Attention

Fast Transformer Decoding: One Write-Head is All You Need
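
To make the entry concrete, here is a minimal PyTorch sketch of the multi-query idea: every query head keeps its own projection while a single key/value head is shared across all heads, which shrinks the KV cache during decoding. Function and variable names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def multi_query_attention(x, w_q, w_k, w_v, n_heads):
    """Multi-query attention: n_heads query heads share one key/value head.

    x:   (batch, seq, d_model)
    w_q: (d_model, n_heads * d_head)  -- one projection per query head
    w_k: (d_model, d_head)            -- single shared key projection
    w_v: (d_model, d_head)            -- single shared value projection
    """
    b, s, _ = x.shape
    d_head = w_k.shape[1]

    q = (x @ w_q).view(b, s, n_heads, d_head).transpose(1, 2)  # (b, heads, s, d_head)
    k = (x @ w_k).unsqueeze(1)                                 # (b, 1, s, d_head), broadcast over heads
    v = (x @ w_v).unsqueeze(1)                                 # (b, 1, s, d_head)

    scores = q @ k.transpose(-2, -1) / d_head ** 0.5           # (b, heads, s, s)
    out = F.softmax(scores, dim=-1) @ v                        # (b, heads, s, d_head)
    return out.transpose(1, 2).reshape(b, s, n_heads * d_head)

# toy usage: 8 query heads of size 16 sharing one 16-dim K/V head
x = torch.randn(2, 16, 64)
print(multi_query_attention(x, torch.randn(64, 8 * 16),
                            torch.randn(64, 16), torch.randn(64, 16),
                            n_heads=8).shape)  # torch.Size([2, 16, 128])
```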

Sparse Attention

Generating Long Sequences with Sparse Transformers

Mixture of Experts

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

UNIFIED SCALING LAWS FOR ROUTED LANGUAGE MODELS

Efficient Large Scale Language Modeling with Mixtures of Experts
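
The sketch below shows the core routing idea shared by these papers in its simplest top-1 (Switch-style) form: a learned gate picks one expert per token and scales that expert's output by the gate probability. It omits load-balancing losses and capacity limits, and all names are illustrative rather than from any of the papers' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def top1_moe_layer(x, gate_w, experts):
    """Minimal top-1 mixture-of-experts layer.

    x:       (tokens, d_model) -- tokens already flattened across batch and sequence
    gate_w:  (d_model, n_experts) router weights
    experts: list of callables, each mapping (n, d_model) -> (n, d_model)
    """
    logits = x @ gate_w                      # (tokens, n_experts)
    probs = F.softmax(logits, dim=-1)
    gate, idx = probs.max(dim=-1)            # chosen expert and its gate value per token

    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = idx == e                      # tokens routed to expert e
        if mask.any():
            out[mask] = gate[mask].unsqueeze(1) * expert(x[mask])
    return out

# toy usage: 4 experts, each a small feed-forward block
d = 32
experts = [nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
           for _ in range(4)]
x = torch.randn(10, d)
gate_w = torch.randn(d, 4)
print(top1_moe_layer(x, gate_w, experts).shape)  # torch.Size([10, 32])
```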

FlashAttention

FLASHATTENTION: Fast and Memory-Efficient Exact Attention with IO-Awareness

Encoder + Decoder

Attention Is All You Need

Parallel Attention

PaLM: Scaling Language Modeling with Pathways
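
PaLM computes the attention and MLP branches from the same layer-normalized input and adds both to the residual in parallel, rather than serially. The rough PyTorch sketch below shows that block structure; module names are illustrative, and a stock nn.MultiheadAttention stands in for PaLM's actual attention.

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Transformer block in the "parallel" form described in the PaLM paper:
    y = x + MLP(LayerNorm(x)) + Attention(LayerNorm(x)),
    instead of the standard serial y = x + MLP(LayerNorm(x + Attention(LayerNorm(x)))).
    """
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        h = self.norm(x)                   # one shared LayerNorm feeds both branches
        attn_out, _ = self.attn(h, h, h)   # self-attention branch
        return x + attn_out + self.mlp(h)  # both branches added to the residual in parallel

# toy usage
x = torch.randn(2, 16, 64)
print(ParallelBlock()(x).shape)  # torch.Size([2, 16, 64])
```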

RoPE

ROFORMER: ENHANCED TRANSFORMER WITH ROTARY POSITION EMBEDDING
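
RoPE encodes position by rotating pairs of query/key channels through an angle proportional to the token position, so that attention scores depend only on relative offsets. Below is a minimal sketch using the split-halves channel layout (a common convention; the original paper pairs adjacent channels); names are illustrative.

```python
import torch

def apply_rope(x, base=10000.0):
    """Minimal rotary position embedding (RoPE).

    x: (batch, seq, dim) with even dim; applied to queries and keys before attention.
    """
    b, s, d = x.shape
    half = d // 2
    # one rotation frequency per channel pair, as in the RoFormer paper
    freqs = base ** (-torch.arange(0, half, dtype=torch.float32) / half)      # (half,)
    angles = torch.arange(s, dtype=torch.float32)[:, None] * freqs[None, :]   # (seq, half)
    cos, sin = angles.cos(), angles.sin()

    x1, x2 = x[..., :half], x[..., half:]   # split channels into two halves
    return torch.cat([x1 * cos - x2 * sin,
                      x1 * sin + x2 * cos], dim=-1)

# toy usage: rotate a batch of query vectors
q = torch.randn(2, 16, 64)
print(apply_rope(q).shape)  # torch.Size([2, 16, 64])
```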

ALiBi

TRAIN SHORT, TEST LONG: ATTENTION WITH LINEAR BIASES ENABLES INPUT LENGTH EXTRAPOLATION
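
ALiBi drops position embeddings entirely and instead adds a per-head linear penalty, proportional to the query-key distance, to the attention logits before the softmax. A small sketch of the bias matrix follows, using the paper's geometric slope schedule for power-of-two head counts; names are illustrative.

```python
import torch

def alibi_bias(n_heads, seq_len):
    """ALiBi bias to be added to attention logits (intended for causal attention,
    where only key positions j <= query position i are used)."""
    # head h (1-indexed) gets slope 2^(-8h / n_heads), the paper's schedule for power-of-two head counts
    slopes = torch.tensor([2 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]                 # entry (i, j) = j - i, non-positive when j <= i
    return slopes[:, None, None] * distance[None, :, :]    # (heads, seq, seq)

# toy usage: applied as `scores + alibi_bias(8, 16)` before the softmax
print(alibi_bias(8, 16).shape)  # torch.Size([8, 16, 16])
```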

3. Post-Pretraining Processing (8)

RLHF with the PPO Algorithm

Deep Reinforcement Learning from Human Preferences

Learning to summarize from human feedback

Constitutional AI

Constitutional AI: Harmlessness from AI Feedback

Minerva

Solving Quantitative Reasoning Problems with Language Models

Codex

Evaluating Large Language Models Trained on Code

FeedME (SFT)

Training language models to follow instructions with human feedback

Fine-Tuning Language Models from Human Preferences

FLAN

FINETUNED LANGUAGE MODELS ARE ZERO-SHOT LEARNERS

4. Training Methods (4)

Setting Hyperparameters

Training Compute-Optimal Large Language Models

Scaling Laws for Neural Language Models
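
As a rough illustration of how these two papers are used in practice, the sketch below sizes a model for a fixed compute budget using two widely cited approximations: training compute C ≈ 6·N·D (N parameters, D tokens) and the Chinchilla finding that compute-optimal training uses on the order of 20 tokens per parameter. Treat the constants as ballpark values, not the papers' exact fitted coefficients.

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Solve C = 6 * N * D with D = tokens_per_param * N for a rough compute-optimal size."""
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# toy usage: a 1e23-FLOP budget suggests roughly 3e10 parameters trained on ~6e11 tokens
n, d = chinchilla_optimal(1e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```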

Pretraining with Human Feedback

Pretraining Language Models with Human Preferences

MuP

Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer

Follow 《学姐带你玩AI》 below 🚀🚀🚀

Reply "精选45" to get the full collection of papers and code.

Writing all this up isn't easy; likes, comments, and bookmarks are much appreciated!
