Tiny-universe Study Notes 1: Qwen-blog
This is the first note from my participation in the Datawhale Tiny-universe study group. All code in these notes follows the Qwen2 implementation in transformers-4.39.3, which is built on torch; the code lives under transformers-4.39.3/src/transformers/models/qwen2.
1. Qwen2Config
1.1 Introduction to PretrainedConfig
Qwen2Config inherits from PretrainedConfig, the base class for all configuration classes in the transformers framework. It handles the parameters shared by every model configuration as well as the methods for loading/downloading/saving configurations. One thing to be clear about: in the transformers framework, a configuration file can be loaded from and saved to disk, and loading a configuration file and initializing a model with it does not load the model weights; it only affects the model's configuration.
PretrainedConfig has several class attributes that subclasses may override:
- model_type (str): the model type. When a configuration file is loaded with transformers.AutoConfig, this field determines which configuration class gets instantiated.
- attribute_map (Dict[str, str]): a mapping from model-specific attribute names to the standardized attribute names used by transformers.
Some parameters are common to all subclasses (they are defined in the subclasses themselves rather than in PretrainedConfig), for example:
- vocab_size (int): the number of tokens in the vocabulary, which is also the first dimension of the embedding matrix (models without a text modality, such as ViT, do not have this attribute).
- hidden_size (int): the dimension of the model's hidden layers.
Common parameters for initializing this class include name_or_path (str, optional, defaults to "") and output_hidden_states (bool, optional, defaults to False).
1.2 Introduction to Qwen2Config
Besides the base parameters defined in PretrainedConfig, the parameters of Qwen2Config that deserve special attention are:
- num_hidden_layers (int, optional, defaults to 32): the number of hidden layers. Since Qwen2 is essentially the decoder of the transformer architecture, this can also be described as the number of hidden layers in the transformer decoder.
- rms_norm_eps (float, optional, defaults to 1e-06): Qwen2 uses RMS normalization layers, and this parameter prevents division by zero inside RMSNorm. RMS normalization is a variant of LayerNorm that drops the mean subtraction from both the numerator and the denominator of LN, cutting computation time by roughly 7% to 64%.
Note that instantiating this configuration class with all of the code's default values yields a configuration similar to that of the Qwen2-7B-beta model.
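As a quick illustration (a minimal sketch assuming transformers 4.39.3 and torch are installed; the tiny sizes below are arbitrary and not Qwen2 defaults), the config can be instantiated directly, and building a model from it only creates randomly initialized weights:
from transformers import Qwen2Config, Qwen2Model

# Default values give a Qwen2-7B-beta-like configuration
default_config = Qwen2Config()
print(default_config.num_hidden_layers, default_config.rms_norm_eps)  # 32 1e-06

# A deliberately tiny, hypothetical config for cheap experiments
tiny_config = Qwen2Config(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2,
)

# Initializing from a config builds the architecture with random weights;
# no pretrained checkpoint is loaded (that is what from_pretrained is for)
model = Qwen2Model(tiny_config)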
2. Qwen2Model
2.1 The Qwen2PreTrainedModel implementation
Qwen2Model inherits from Qwen2PreTrainedModel, which in turn inherits from PreTrainedModel. Qwen2PreTrainedModel mainly overrides the _init_weights method, which initializes the model parameters:
class Qwen2PreTrainedModel(PreTrainedModel):
    config_class = Qwen2Config
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["Qwen2DecoderLayer"]
    _skip_keys_device_placement = "past_key_values"
    _supports_flash_attn_2 = True
    _supports_sdpa = True
    _supports_cache_class = True

    def _init_weights(self, module):
        std = self.config.initializer_range
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
The Transformers framework requires every model class that inherits from PreTrainedModel to implement _init_weights, and when a checkpoint is loaded with from_pretrained, _init_weights is the only model initialization method that gets called.
2.2 Qwen2Model initialization
Qwen2Model initialization consists of two parts. The first part is the __init__ method, which initializes the parent class's attributes and the model's own attributes:
class Qwen2Model(Qwen2PreTrainedModel):
    """
    Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Qwen2DecoderLayer`]

    Args:
        config: Qwen2Config
    """

    def __init__(self, config: Qwen2Config):
        super().__init__(config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
        self.layers = nn.ModuleList(
            [Qwen2DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
        )
        self._attn_implementation = config._attn_implementation
        self.norm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()
This method takes a Qwen2Config object and:
- sets two model attributes: padding_idx (the index of the padding token in the vocabulary) and vocab_size (the vocabulary size);
- initializes the model's embedding layer (self.embed_tokens), hidden layers (self.layers) and normalization layer (self.norm);
- the hidden layers are the decoder of the transformer architecture (Qwen2DecoderLayer);
- normalization uses RMS normalization (Qwen2RMSNorm).
The second part is post_init, a method defined in PreTrainedModel; neither Qwen2PreTrainedModel nor Qwen2Model overrides it. It mainly initializes the model parameters, calling the _init_weights implemented in Qwen2PreTrainedModel shown above:
def post_init(self):
    """
    A method executed at the end of each Transformer model initialization, to execute code that needs the model's
    modules properly initialized (such as weight initialization).
    """
    self.init_weights()
    self._backward_compatibility_gradient_checkpointing()
2.3 The forward method of Qwen2Model
The forward method can be read in three parts: the input part, the computation part and the output part.
The input part is as follows:
def forward(
    self,
    input_ids: torch.LongTensor = None,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_values: Optional[List[torch.FloatTensor]] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPast]:
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    use_cache = use_cache if use_cache is not None else self.config.use_cache
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    # retrieve input_ids and inputs_embeds
    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
    elif input_ids is not None:
        batch_size, seq_length = input_ids.shape
    elif inputs_embeds is not None:
        batch_size, seq_length, _ = inputs_embeds.shape
    else:
        raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")

    if self.gradient_checkpointing and self.training:
        if use_cache:
            logger.warning_once(
                "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
            )
            use_cache = False

    past_key_values_length = 0

    if use_cache:
        use_legacy_cache = not isinstance(past_key_values, Cache)
        if use_legacy_cache:
            past_key_values = DynamicCache.from_legacy_cache(past_key_values)
        past_key_values_length = past_key_values.get_usable_length(seq_length)

    if position_ids is None:
        device = input_ids.device if input_ids is not None else inputs_embeds.device
        position_ids = torch.arange(
            past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
        )
        position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
    else:
        position_ids = position_ids.view(-1, seq_length).long()

    if inputs_embeds is None:
        inputs_embeds = self.embed_tokens(input_ids)

    if attention_mask is not None and self._attn_implementation == "flash_attention_2" and use_cache:
        is_padding_right = attention_mask[:, -1].sum().item() != batch_size
        if is_padding_right:
            raise ValueError(
                "You are attempting to perform batched generation with padding_side='right'"
                " this may lead to unexpected behaviour for Flash Attention version of Qwen2. Make sure to "
                " call `tokenizer.padding_side = 'left'` before tokenizing the input. "
            )

    if self._attn_implementation == "flash_attention_2":
        # 2d mask is passed through the layers
        attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
    elif self._attn_implementation == "sdpa" and not output_attentions:
        # output_attentions=True can not be supported when using SDPA, and we fall back on
        # the manual implementation that requires a 4D causal mask in all cases.
        attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
            attention_mask,
            (batch_size, seq_length),
            inputs_embeds,
            past_key_values_length,
        )
    else:
        # 4d mask is passed through the layers
        attention_mask = _prepare_4d_causal_attention_mask(
            attention_mask,
            (batch_size, seq_length),
            inputs_embeds,
            past_key_values_length,
            sliding_window=self.config.sliding_window,
        )
A few points about forward's inputs are worth noting:
- past_key_values holds precomputed hidden states (the keys and values of the self-attention blocks and cross-attention blocks) that can be used to speed up sequential decoding. When use_cache=True or config.use_cache=True, the value passed here usually comes from the past_key_values returned by the model at the previous decoding step.
- At least one of input_ids and inputs_embeds must be provided, and they cannot both be passed at the same time.
- If inputs_embeds is None, input_ids is fed through the embedding layer to obtain inputs_embeds.
- The input part also prepares attention_mask and position_ids (see the small sketch after this list for how position_ids are derived).
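As a toy illustration of the position_ids logic (the numbers are made up, not taken from the source): when the KV cache already holds past_key_values_length tokens, the positions of the newly fed tokens simply continue from that offset.
import torch

past_key_values_length, seq_length = 5, 3  # hypothetical cache length and new-token count
position_ids = torch.arange(
    past_key_values_length, seq_length + past_key_values_length, dtype=torch.long
)
position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
print(position_ids)  # tensor([[5, 6, 7]])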
The computation part is as follows:
hidden_states = inputs_embeds

# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
next_decoder_cache = None

for decoder_layer in self.layers:
    if output_hidden_states:
        all_hidden_states += (hidden_states,)

    if self.gradient_checkpointing and self.training:
        layer_outputs = self._gradient_checkpointing_func(
            decoder_layer.__call__,
            hidden_states,
            attention_mask,
            position_ids,
            past_key_values,
            output_attentions,
            use_cache,
        )
    else:
        layer_outputs = decoder_layer(
            hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_values,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )

    hidden_states = layer_outputs[0]

    if use_cache:
        next_decoder_cache = layer_outputs[2 if output_attentions else 1]

    if output_attentions:
        all_self_attns += (layer_outputs[1],)

hidden_states = self.norm(hidden_states)

# add hidden states from the last decoder layer
if output_hidden_states:
    all_hidden_states += (hidden_states,)

next_cache = None
if use_cache:
    next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
In this part:
- all_hidden_states collects all of the hidden states; the first entry is inputs_embeds and the last is the output of the normalization layer.
- The overall flow: the first hidden_states (i.e. inputs_embeds), together with the processed attention_mask and position_ids, is passed through each decoder_layer in turn; the output of the last decoder_layer is then passed through the normalization layer to produce the final output.
The output part is as follows:
if not return_dict:
    return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
return BaseModelOutputWithPast(
    last_hidden_state=hidden_states,
    past_key_values=next_cache,
    hidden_states=all_hidden_states,
    attentions=all_self_attns,
)
The model output comes in two forms: if the dict form is requested, a BaseModelOutputWithPast is constructed; otherwise [hidden_states, next_cache, all_hidden_states, all_self_attns] (with None entries dropped) is returned as a tuple. BaseModelOutputWithPast is one of the base model output classes defined by the Transformers framework, intended for model outputs that may contain a past_key_values field (see the explanation of the inputs above).
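To make the whole forward path concrete, here is a small end-to-end sketch on a deliberately tiny, hypothetical configuration (the sizes are arbitrary); it checks the number of collected hidden states and both output forms:
import torch
from transformers import Qwen2Config, Qwen2Model

config = Qwen2Config(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2,
)
model = Qwen2Model(config).eval()
input_ids = torch.randint(0, config.vocab_size, (1, 8))

with torch.no_grad():
    # Dict-style output (default): a BaseModelOutputWithPast
    out = model(input_ids, output_hidden_states=True)
    print(out.last_hidden_state.shape)  # torch.Size([1, 8, 64])
    print(len(out.hidden_states))       # num_hidden_layers + 1: inputs_embeds plus each layer (post-norm last)

    # Tuple-style output: the non-None entries of
    # (last_hidden_state, past_key_values, hidden_states, attentions)
    out_tuple = model(input_ids, return_dict=False)
    print(out_tuple[0].shape)            # torch.Size([1, 8, 64])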
3. Qwen2DecoderLayer
3.1 Qwen2DecoderLayer initialization
Qwen2DecoderLayer is the core building block of Qwen2Model; it is the decoder of the transformer architecture. Its three modules are the familiar attention (Qwen2Attention, Qwen2FlashAttention2 or Qwen2SdpaAttention, usually Qwen2Attention), MLP (Qwen2MLP) and norm (Qwen2RMSNorm).
QWEN2_ATTENTION_CLASSES = {
    "eager": Qwen2Attention,
    "flash_attention_2": Qwen2FlashAttention2,
    "sdpa": Qwen2SdpaAttention,
}


class Qwen2DecoderLayer(nn.Module):
    def __init__(self, config: Qwen2Config, layer_idx: int):
        super().__init__()
        self.hidden_size = config.hidden_size

        if config.use_sliding_window and config._attn_implementation != "flash_attention_2":
            logger.warning_once(
                f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
                "unexpected results may be encountered."
            )
        self.self_attn = QWEN2_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx)

        self.mlp = Qwen2MLP(config)
        self.input_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
The input_layernorm and post_attention_layernorm here, as well as the normalization layer of Qwen2Model mentioned earlier, all use Qwen2RMSNorm.
3.2 The forward method of Qwen2DecoderLayer
The forward pass of Qwen2DecoderLayer follows the flow: normalize, self-attention, residual add, normalize, MLP, residual add. The core code is as follows:
residual = hidden_states
# Normalize before the attention block
hidden_states = self.input_layernorm(hidden_states)  # RMSNorm

# Self Attention
hidden_states, self_attn_weights, present_key_value = self.self_attn(
    hidden_states=hidden_states,
    attention_mask=attention_mask,
    position_ids=position_ids,
    past_key_value=past_key_value,
    output_attentions=output_attentions,
    use_cache=use_cache,
    **kwargs,
)
# Add the residual back to the new hidden_states
hidden_states = residual + hidden_states

# Fully Connected
residual = hidden_states
# The same RMSNorm normalization again
hidden_states = self.post_attention_layernorm(hidden_states)
hidden_states = self.mlp(hidden_states)
hidden_states = residual + hidden_states

outputs = (hidden_states,)
return outputs
Qwen2DecoderLayer applies the residual-network logic twice. First, hidden_states is saved as residual, then hidden_states goes through the norm and into the attention module, and the attention output is added back to the residual. The result is then saved as residual again, passed through the norm and the MLP, and added to the residual once more to produce the final output.
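Stripped of the details, this is the standard pre-norm residual block. The sketch below is only an illustration; decoder_block, norm1 and norm2 are hypothetical stand-ins for the layer's self_attn, mlp, input_layernorm and post_attention_layernorm:
import torch
import torch.nn as nn

def decoder_block(hidden_states, attn, mlp, norm1, norm2):
    residual = hidden_states
    hidden_states = attn(norm1(hidden_states))   # normalize, then self-attention
    hidden_states = residual + hidden_states     # first residual connection

    residual = hidden_states
    hidden_states = mlp(norm2(hidden_states))    # normalize, then MLP
    hidden_states = residual + hidden_states     # second residual connection
    return hidden_states

# Toy usage, just to show the data flow and shapes
dim = 8
x = torch.randn(2, 4, dim)
out = decoder_block(x, nn.Linear(dim, dim), nn.Linear(dim, dim), nn.LayerNorm(dim), nn.LayerNorm(dim))
print(out.shape)  # torch.Size([2, 4, 8])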
4. Qwen2Attention
Compared with traditional MHA (multi-headed attention), Qwen2Attention uses GQA (grouped-query attention) and adds the SWA (sliding window attention) optimization on top, in order to reduce GPU memory usage and speed up inference. Below we compare traditional MHA and GQA from the code's point of view.
4.1 The MHA implementation
Let's look at MHA through the Bert source code in transformers-4.39.3. The initialization part:
self.num_attention_heads = config.num_attention_heads
self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
self.all_head_size = self.num_attention_heads * self.attention_head_size
self.query = nn.Linear(config.hidden_size, self.all_head_size)
self.key = nn.Linear(config.hidden_size, self.all_head_size)
self.value = nn.Linear(config.hidden_size, self.all_head_size)
In the forward part there is a key method, transpose_for_scores, which reshapes hidden_size into the multi-head layout and transposes the two middle dimensions so the matrix multiplications can be done per head, i.e. it swaps the num_attention_heads and sequence_length dimensions. (As an aside, the heads are not computed one by one because doing them all at once is more efficient and keeps the code simpler: it avoids an extra loop.)
def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor:
    new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
    x = x.view(new_x_shape)
    return x.permute(0, 2, 1, 3)
After transpose_for_scores, q, k and v have the shape batch_size * num_attention_heads * sequence_length * attention_head_size.
The remaining part is implemented as follows:
mixed_query_layer = self.query(hidden_states)
# split into heads: (batch_size, num_attention_heads, sequence_length, attention_head_size)
query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(self.key(hidden_states))
value_layer = self.transpose_for_scores(self.value(hidden_states))

attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
attention_probs = nn.functional.softmax(attention_scores, dim=-1)

context_layer = torch.matmul(attention_probs, value_layer)
# merge heads back: (batch_size, sequence_length, hidden_size)
context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.view(new_context_layer_shape)
The resulting attention_scores has the shape batch_size * num_attention_heads * sequence_length * sequence_length. context_layer is the product of the attention matrix and the value matrix, with shape batch_size * num_attention_heads * sequence_length * attention_head_size; after the permute and view operations on context_layer, the final shape is back to batch_size * sequence_length * hidden_size. (Reference: BERT源码详解(一)——HuggingFace Transformers最新版本源码解读)
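A quick, standalone shape check of this head-splitting trick (the sizes below are arbitrary toy numbers, not from any real model):
import torch

batch_size, seq_len, hidden_size, num_heads = 2, 5, 16, 4
head_size = hidden_size // num_heads

x = torch.randn(batch_size, seq_len, hidden_size)
x = x.view(batch_size, seq_len, num_heads, head_size).permute(0, 2, 1, 3)
print(x.shape)  # torch.Size([2, 4, 5, 4]) == (batch, num_heads, seq_len, head_size)

scores = torch.matmul(x, x.transpose(-1, -2))
print(scores.shape)  # torch.Size([2, 4, 5, 5]) == (batch, num_heads, seq_len, seq_len)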
4.2 The GQA implementation
Before introducing GQA, a quick word about MQA (Multi Query Attention). As mentioned above, MHA implements multiple heads by splitting Q, K and V into several heads ("splitting" here describes what the code above actually does). MQA only splits Q, while all of the query heads in a layer share a single K and V. GQA then adds grouping on top of MQA: instead of all query heads sharing K and V, only the query heads within each group share K and V. When the number of groups G is 1, GQA reduces to MQA; when the number of groups equals the number of query heads, GQA is equivalent to MHA.
Tip: GQA and MQA do not reduce the computational complexity; they reduce the KV-cache memory footprint, which allows a larger batch size and therefore higher throughput (see the back-of-the-envelope calculation below).
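Here is a rough back-of-the-envelope calculation of the KV cache per generated token; the numbers are hypothetical, not Qwen2's actual configuration:
# KV cache per token = num_layers * 2 (K and V) * num_kv_heads * head_dim * bytes_per_element
num_layers, num_heads, num_kv_heads, head_dim, bytes_per_elem = 32, 32, 8, 128, 2  # fp16

mha_kv_per_token = num_layers * 2 * num_heads * head_dim * bytes_per_elem
gqa_kv_per_token = num_layers * 2 * num_kv_heads * head_dim * bytes_per_elem

print(mha_kv_per_token / 1024)  # 512.0 KiB per token with MHA
print(gqa_kv_per_token / 1024)  # 128.0 KiB per token with GQA, i.e. a 4x smaller cache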
Now let's look at the GQA implementation through the Llama source code in transformers-4.39.3, starting with the initialization part:
self.hidden_size = config.hidden_size
self.num_heads = config.num_attention_heads
self.head_dim = self.hidden_size // self.num_heads
self.num_key_value_heads = config.num_key_value_heads
self.num_key_value_groups = self.num_heads // self.num_key_value_heads

self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.o_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=config.attention_bias)
Compared with the MHA implementation in Bert, GQA introduces two extra attributes: self.num_key_value_heads (the number of K/V heads, i.e. the number of groups) and self.num_key_value_groups (the number of query heads that share each K/V head; despite its name, this is the per-group repeat count n_rep passed to repeat_kv below, not the number of groups). Broadcasting the grouped K/V back to all query heads is done with the repeat_kv method:
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """
    This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
    num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
    """
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:  # in this case the layer behaves like MHA
        return hidden_states
    # otherwise: if n_rep == num_heads this is MQA, else GQA
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
A brief aside on the difference between torch.expand() and torch.repeat() (demonstrated in the small snippet after this list):
- torch.expand() expands singleton dimensions (dimensions whose size is 1) to the requested size. It does not allocate new memory; the result is merely a view of the original tensor in which a single data element is reused across multiple positions, so expand is memory-efficient because it never copies the data.
- torch.repeat() can expand non-singleton dimensions and is not limited by the sizes of the original dimensions; it actually copies the data, allocating new memory and creating a new tensor that contains the repeated data.
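A minimal demonstration of that difference (purely illustrative, not from the Qwen2 source):
import torch

x = torch.arange(3).reshape(1, 3)   # shape (1, 3)

expanded = x.expand(4, 3)            # a view: the size-1 dim is broadcast to 4, no copy
repeated = x.repeat(4, 1)            # a new tensor: the data is physically copied

print(expanded.data_ptr() == x.data_ptr())  # True  -> same underlying memory
print(repeated.data_ptr() == x.data_ptr())  # False -> newly allocated memory
Note that repeat_kv above uses expand followed by reshape; since the expanded view is not contiguous, it is the final reshape that actually performs the copy.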
4.3 The Qwen2Attention implementation
With MHA and GQA covered, let's come back to the core of Qwen2Attention's forward method:
# Get shape information; hidden_states comes in as (batch_size, seq_len, hidden_size)
bsz, q_len, _ = hidden_states.size()

# Project hidden_states with the Linear layers to produce query, key and value
query_states = self.q_proj(hidden_states)
key_states = self.k_proj(hidden_states)
value_states = self.v_proj(hidden_states)

# Reshape into heads: (batch_size, num_heads, seq_len, head_dim)
query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

# Apply rotary position embedding to the query and key tensors: multiply by the cosine and
# sine parts of the rotary embedding and add the two results together
cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

# Repeat key_states and value_states num_key_value_groups times (GQA)
key_states = repeat_kv(key_states, self.num_key_value_groups)
value_states = repeat_kv(value_states, self.num_key_value_groups)

# Dot-product attention: q @ k^T / sqrt(head_dim)
attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

# Add attention_mask to attn_weights so that masked (e.g. future) positions are suppressed
attn_weights = attn_weights + attention_mask

# softmax + dropout, then multiply with value_states
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
attn_output = torch.matmul(attn_weights, value_states)

# Transpose and reshape back to (batch_size, seq_len, hidden_size)
attn_output = attn_output.transpose(1, 2).contiguous()
attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

# Final output projection o_proj
attn_output = self.o_proj(attn_output)

# Return the results
return attn_output, attn_weights, past_key_value
Besides the GQA logic discussed above, the other important piece here is the apply_rotary_pos_emb method, which applies **Rotary Position Embedding (RoPE)** to the q and k tensors. Its implementation is as follows:
def rotate_half(x):
    """Rotates half the hidden dims of the input."""
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
    """Applies Rotary Position Embedding to the query and key tensors.

    Args:
        q (`torch.Tensor`): The query tensor.
        k (`torch.Tensor`): The key tensor.
        cos (`torch.Tensor`): The cosine part of the rotary embedding.
        sin (`torch.Tensor`): The sine part of the rotary embedding.
        position_ids (`torch.Tensor`):
            The position indices of the tokens corresponding to the query and key tensors. For example, this can be
            used to pass offsetted position ids when working with a KV-cache.
        unsqueeze_dim (`int`, *optional*, defaults to 1):
            The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
            sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
            that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
            k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
            cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
            the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
    Returns:
        `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
    """
    cos = cos[position_ids].unsqueeze(unsqueeze_dim)
    sin = sin[position_ids].unsqueeze(unsqueeze_dim)
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
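To see what the rotation actually does, here is a tiny standalone check (rotate_half is copied from above so the snippet runs on its own; the 4-dimensional vector and the angle are arbitrary). The expression (x * cos) + (rotate_half(x) * sin) applies an ordinary 2-D rotation to the dimension pairs (x[0], x[2]) and (x[1], x[3]):
import torch

def rotate_half(x):
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(rotate_half(x))  # tensor([-3., -4.,  1.,  2.])

theta = torch.tensor(0.5)
rotated = x * torch.cos(theta) + rotate_half(x) * torch.sin(theta)
# rotated[0] = x[0]*cos - x[2]*sin and rotated[2] = x[0]*sin + x[2]*cos: a rotation of the pair (x[0], x[2])
print(rotated)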
For a deeper explanation of RoPE, see the article 《RoPE旋转位置编码:Meta与Hugging Face两种代码实现详解》.
5. Qwen2RMSNorm
Qwen's normalization uses RMS normalization, a variant of LayerNorm that drops the mean subtraction from both the numerator and the denominator of LN, cutting computation time by roughly 7% to 64%. The implementation is as follows:
class Qwen2RMSNorm(nn.Module):  # the normalization layer
    def __init__(self, hidden_size, eps=1e-6):
        """
        Qwen2RMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states.to(input_dtype)
The self.variance_epsilon here corresponds to the rms_norm_eps parameter of Qwen2Config mentioned earlier.
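As a small sanity check of the "LayerNorm without mean subtraction" description (not part of the Qwen2 source): if the input already has zero mean over the last dimension, RMSNorm and LayerNorm (with the affine parameters disabled) coincide.
import torch
import torch.nn as nn

hidden_size, eps = 8, 1e-6
x = torch.randn(2, 4, hidden_size)
x = x - x.mean(-1, keepdim=True)  # force zero mean so the two normalizations should agree

rms_out = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)  # Qwen2RMSNorm with weight = 1
ln_out = nn.LayerNorm(hidden_size, eps=eps, elementwise_affine=False)(x)

print(torch.allclose(rms_out, ln_out, atol=1e-5))  # True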