FLOPS Calculation

Training FLOPS:

Llama2 (DGXC Benchmarking) | NVIDIA NGC

The calculations for the 7B parameter model:

model flops = (sequence length) * ((attention flops) + (mlp flops) + (embedding flops))

model flops breakdown:
    attention flops = 12 * (number of layers) * (hidden size)^2 * (1 + (number of query groups)/(number of attention heads) + (sequence length)/(hidden size))
    mlp flops = 18 * (number of layers) * (FFN size) * (hidden size)
    embedding flops = 6 * (vocab size) * (hidden size)

Llama 2 7B calculation:
    sequence length = 4096
    attention flops = 12 * 32 * 4096^2 * (1 + 32/32 + 4096/4096) = 19,327,352,832
    mlp flops = 18 * 32 * 11008 * 4096 = 25,971,130,368
    embedding flops = 6 * 32000 * 4096 = 786,432,000
    model flops = 4096 * (19,327,352,832 + 25,971,130,368 + 786,432,000) ≈ 1.89E+14
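
As a sanity check, here is a minimal Python sketch of these formulas. The function name llama2_model_flops and its parameter names are hypothetical; the Llama 2 7B values (including the 32,000-token vocabulary) follow the worked example above.

    def llama2_model_flops(
        seq_len: int,
        num_layers: int,
        hidden_size: int,
        ffn_size: int,
        vocab_size: int,
        num_query_groups: int,
        num_attention_heads: int,
    ) -> int:
        """Model FLOPS per training sequence, per the formulas above."""
        attention_flops = (
            12 * num_layers * hidden_size**2
            * (1 + num_query_groups / num_attention_heads + seq_len / hidden_size)
        )
        mlp_flops = 18 * num_layers * ffn_size * hidden_size
        embedding_flops = 6 * vocab_size * hidden_size
        return int(seq_len * (attention_flops + mlp_flops + embedding_flops))

    # Llama 2 7B: no grouped-query attention, so query groups == attention heads.
    flops = llama2_model_flops(
        seq_len=4096,
        num_layers=32,
        hidden_size=4096,
        ffn_size=11008,
        vocab_size=32000,
        num_query_groups=32,
        num_attention_heads=32,
    )
    print(f"{flops:.2e}")  # ~1.89e+14 model FLOPS per 4096-token sequence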