1. BERT details:
ⅱ. Is the strategy of summing embeddings a problem? Why can BERT's three embeddings (token, segment, position) simply be added together?
1. Qiu Xipeng (邱锡鹏): even when information at different frequencies is superimposed, later layers can still decouple ("recover") it; it may in fact already be decoupled into different dimensions (different frequencies correspond to different dimensions).
2. Su Jianlin (苏神): an embedding lookup is just a one-hot vector times a weight matrix, so summing the three embeddings is equivalent to concatenating the three one-hot inputs and passing them through a single larger fully connected layer (see the sketch below).
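A minimal numerical check of that equivalence; all sizes here are toy values for illustration, not BERT's real dimensions:

import torch
import torch.nn as nn

# Toy sizes, illustrative only.
vocab_size, seg_size, max_pos, d_model = 10, 2, 8, 4
tok = nn.Embedding(vocab_size, d_model)
seg = nn.Embedding(seg_size, d_model)
pos = nn.Embedding(max_pos, d_model)

t, s, p = 3, 1, 5  # one (token, segment, position) index triple
summed = tok.weight[t] + seg.weight[s] + pos.weight[p]

# Same result as one big embedding matrix applied to the concatenated one-hots.
big = torch.cat([tok.weight, seg.weight, pos.weight], dim=0)
one_hot = torch.zeros(vocab_size + seg_size + max_pos)
one_hot[t] = 1.0
one_hot[vocab_size + s] = 1.0
one_hot[vocab_size + seg_size + p] = 1.0
assert torch.allclose(one_hot @ big, summed)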
ⅲ. BERT's pre-training mask strategy
1. BERT's MASK mechanism works as follows: it randomly selects 15% of the tokens in a sentence, replaces 80% of those with the [MASK] symbol, replaces another 10% with a random other token, and leaves the remaining 10% unchanged (a runnable sketch of this 80/10/10 split follows below).
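A minimal sketch of the 80/10/10 split described above; the function name bert_style_mask and the -100 ignore-index convention are assumptions, not tied to any particular library:

import torch

def bert_style_mask(input_ids, mask_token_id, vocab_size):
    # Hypothetical helper mirroring BERT's masking strategy.
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    # 1) Select 15% of positions as prediction targets.
    selected = torch.bernoulli(torch.full(input_ids.shape, 0.15)).bool()
    labels[~selected] = -100  # ignore non-selected positions in the loss
    # 2) 80% of selected positions -> [MASK]
    masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & selected
    input_ids[masked] = mask_token_id
    # 3) Half of the remaining 20% (10% overall) -> a random token
    randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & selected & ~masked
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]
    # 4) The last 10% keep their original token.
    return input_ids, labels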
2. Difference between a set and a hash table (dictionary)
ⅰ. A hash table stores key-value pairs; a set stores only values. In Python, for instance, a set is essentially a hash table that keeps just the keys, so both give average O(1) membership tests (see the example below).
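A quick Python illustration of the difference:

d = {"apple": 3, "pear": 5}   # dict: each key maps to a value
s = {"apple", "pear"}         # set: elements only, no associated values

print(d["apple"])    # 3  -> lookup returns the stored value
print("pear" in s)   # True -> average O(1) membership test, nothing else stored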
3. Multi-head attention in PyTorch:
import math

import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_model = d_model
        self.head_dim = d_model // num_heads
        self.q_linear = nn.Linear(d_model, d_model)
        self.k_linear = nn.Linear(d_model, d_model)
        self.v_linear = nn.Linear(d_model, d_model)
        self.output_linear = nn.Linear(d_model, d_model)

    def forward(self, query, key, value, mask=None):
        batch_size = query.size(0)
        # Apply linear projections to query, key, and value
        q = self.q_linear(query)
        k = self.k_linear(key)
        v = self.v_linear(value)
        # Split into multiple heads: (batch_size, num_heads, seq_len, head_dim)
        q = q.view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        # Scaled dot-product scores: (batch_size, num_heads, seq_len, seq_len)
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))
        # Softmax over the key dimension gives the attention weights
        attention_weights = torch.softmax(scores, dim=-1)
        # Weighted sum of the values
        attended_values = torch.matmul(attention_weights, v)
        # Concatenate heads back to (batch_size, seq_len, d_model)
        attended_values = attended_values.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)
        # Final output projection
        output = self.output_linear(attended_values)
        return output, attention_weights
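A quick smoke test of the module above; the shapes are illustrative:

mha = MultiHeadAttention(d_model=64, num_heads=8)
x = torch.randn(2, 10, 64)   # (batch_size, seq_len, d_model)
out, attn = mha(x, x, x)     # self-attention: query = key = value
print(out.shape)             # torch.Size([2, 10, 64])
print(attn.shape)            # torch.Size([2, 8, 10, 10])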