Llama 3 Technical Report

Model architecture

Llama 3 uses a standard decoder-only transformer architecture.
Compared to Llama 2, we made several key improvements:

  1. Llama 3 uses a tokenizer with a vocabulary of 128K tokens that encodes language much more efficiently.
  2. To improve the inference efficiency of Llama 3 models, we’ve adopted grouped query attention (GQA) across both the 8B and 70B sizes (a minimal sketch follows this list).
  3. We trained the models on sequences of 8,192 tokens, using a mask to ensure self-attention does not cross document boundaries (see the mask sketch after this list).
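Item 2 refers to grouped query attention, in which several query heads share a single key/value head, shrinking the KV cache at inference time. Below is a minimal sketch in PyTorch; the head counts and the `grouped_query_attention` helper are illustrative assumptions, not Llama 3's actual configuration.

```python
# Minimal GQA sketch (assumed helper, not Meta's implementation).
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, n_kv_heads):
    """q: (batch, n_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)."""
    n_heads = q.shape[1]
    group_size = n_heads // n_kv_heads
    # Repeat each K/V head so every group of query heads shares one K/V head.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Usage: 8 query heads sharing 2 key/value heads (illustrative sizes).
b, s, d = 2, 16, 64
q = torch.randn(b, 8, s, d)
k = torch.randn(b, 2, s, d)
v = torch.randn(b, 2, s, d)
out = grouped_query_attention(q, k, v, n_kv_heads=2)  # (2, 8, 16, 64)
```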
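Item 3's mask can be built by combining a causal mask with a same-document mask over packed sequences. A minimal sketch, assuming each token carries a document id (the `doc_ids` layout is a hypothetical, not how Meta's data pipeline represents it):

```python
# Causal mask that also blocks attention across document boundaries
# in a packed sequence (assumed representation).
import torch

def document_causal_mask(doc_ids):
    """doc_ids: (seq,) tensor of document ids for a packed sequence.
    Returns a (seq, seq) boolean mask; True = attention allowed."""
    seq = doc_ids.shape[0]
    causal = torch.tril(torch.ones(seq, seq, dtype=torch.bool))
    same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)
    return causal & same_doc

# Two packed documents of lengths 3 and 2: the first token of the
# second document attends only to itself, never back into document 0.
mask = document_causal_mask(torch.tensor([0, 0, 0, 1, 1]))
```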

Instruction fine-tuning

  1. supervised fine-tuning (SFT)
  2. rejection sampling
  3. proximal policy optimization (PPO)
  4. direct preference optimization (DPO): the preference rankings used in PPO and DPO have an outsized influence on the performance of aligned models (a DPO loss sketch follows this list).
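For item 4, here is a minimal sketch of the DPO loss (Rafailov et al., 2023), assuming the caller supplies summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model. The `beta` value and variable names are illustrative assumptions, not Llama 3's actual recipe.

```python
# Minimal DPO loss sketch (standard formulation, not Meta's code).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Log-ratio of policy to reference for each response.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Maximize the margin between chosen and rejected log-ratios:
    # loss = -log sigmoid(beta * (chosen_ratio - rejected_ratio)).
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()
```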