- torch.renorm(input, p, dim, maxnorm, *, out=None) → Tensor
$$
value =
\begin{cases}
\dfrac{x\_subvector}{p\_norm(x\_subvector)} \cdot max\_norm & \text{if } p\_norm(x\_subvector) > max\_norm \\
x\_subvector & \text{otherwise}
\end{cases}
$$

In other words: compute the p-norm of each sub-tensor of x along dim. If the norm exceeds max_norm, the sub-tensor is clipped, i.e. rescaled by max_norm / p_norm so that its new norm is exactly max_norm; if the norm is at or below max_norm, the sub-tensor is left unchanged.
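A minimal sketch of both branches (the tensor values are made up for illustration): with max_norm = 2, a row whose L2 norm is already at most 2 passes through unchanged, while a row with a larger norm is rescaled so its norm becomes exactly 2.

import torch

x = torch.tensor([[1., 0., 0.],   # L2 norm 1 <= max_norm: unchanged
                  [3., 4., 0.]])  # L2 norm 5 >  max_norm: scaled by 2/5
print(torch.renorm(x, p=2, dim=0, maxnorm=2.0))
# tensor([[1.0000, 0.0000, 0.0000],
#         [1.2000, 1.6000, 0.0000]])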
import torch
import numpy as np

feat = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
# Clip each row's L2 norm to 1e-5, then scale back up by 1e5: the net
# effect is row-wise L2 normalization (every row ends up with unit norm).
feat_r = feat.renorm(2, 0, 1e-5).mul(1e5)
print(f"renorm feat_r: {feat_r}")
print(f"----")
feat_np = feat.numpy()
b_l2 = np.linalg.norm(feat_np, axis=1)
#b_l2 = b_l2.reshape(3, 1)
# For the NumPy division to broadcast row-wise, the norm vector needs an
# extra dimension (reshape or expand_dims), or the matrix can be transposed
# instead (the commented-out lines).
b_l2 = np.expand_dims(b_l2, 1)
print(b_l2, '\n')
max_num_val = 1e-5
# Rows whose norm does not exceed max_norm are left untouched by renorm,
# so divide those rows by 1 instead of by their (tiny) norm.
b_l2[b_l2 <= max_num_val] = 1
#res = (feat_np.T / b_l2.ravel()).T   # the transpose alternative
res = feat_np / b_l2
print(res)
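# Cross-check: the NumPy reproduction should match renorm's output.
print(np.allclose(res, feat_r.numpy()))  # True (up to float tolerance)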
print(f"----")
# The same reproduction in PyTorch:
# l2_t = torch.linalg.norm(feat, ord=2, dim=1)   # equivalent modern API
l2_t = feat.norm(2, 1)   # L2 norm of each row
print(f"l2_t: {l2_t}")
# Reshape the norms into a column vector (shape [3, 1]) so the division
# broadcasts row-wise.
l2_tran = l2_t.unsqueeze(dim=0).t()
l2_tran[l2_tran <= max_num_val] = 1   # again, leave small-norm rows unscaled
res = feat / l2_tran
print(res)
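As a side note, the same row-wise L2 normalization is also available directly as torch.nn.functional.normalize, which divides each row by max(p_norm, eps) and so avoids the renorm-and-rescale trick:

import torch.nn.functional as F

# F.normalize(feat, p=2, dim=1) normalizes each row to unit L2 norm and
# should match the manual computation above.
print(F.normalize(feat, p=2, dim=1))
print(torch.allclose(res, F.normalize(feat, p=2, dim=1)))  # True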