PyTorch and TensorFlow 2.0 method equivalents

Embedding initialization

pytorch: Embedding()
tf2.0: random.normal()

import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf

# Verify the mean and variance of the sampled weights
def confirm(weight):
    mean = np.sum(weight) / dim
    print("mean: {}".format(mean))
    square_sum = np.sum((mean - weight) ** 2)
    print("variance: {}".format(square_sum / dim))

dim = 1000000  # the larger dim is, the closer the sample mean and variance get to 0 and 1

embd = nn.Embedding(5, dim)  # by default, weights are sampled from the standard normal N(0, 1)
weight = embd.weight.data[0].numpy()
confirm(weight)

embd2 = tf.Variable(tf.random.normal([5, dim]))  # tf.random.normal also defaults to N(0, 1)
weight2 = embd2.numpy()[0]
confirm(weight2)
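
Initialization is only half of the replacement; the lookup itself also differs between the two APIs. A minimal sketch of the equivalent lookup on both sides, reusing embd and embd2 from above (the index list ids is ours for illustration):

# Look up rows 1 and 3 of the embedding table on both sides
ids = [1, 3]
out_p = embd(torch.tensor(ids))             # PyTorch: call the module with an index tensor
out_t = tf.nn.embedding_lookup(embd2, ids)  # TF2: gather rows from the variable
print(out_p.shape, out_t.shape)             # both are (2, dim)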

Tensor initialization

pytorch: xavier_uniform_()
tf2.0: GlorotUniform()

# Same helper as above: verify mean and variance
def confirm(weight):
    mean = np.sum(weight) / dim
    print("mean: {}".format(mean))
    square_sum = np.sum((mean - weight) ** 2)
    print("variance: {}".format(square_sum / dim))

dim = 1000000

w = nn.Parameter(torch.zeros(size=(3, dim)))
nn.init.xavier_uniform_(w.data)  # fill in place with Xavier/Glorot uniform values
weight = w.data[0].numpy()
confirm(weight)

initializer = tf.initializers.GlorotUniform()
w2 = tf.Variable(initializer(shape=[3, dim]))
weight2 = w2[0].numpy()
confirm(weight2)
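
Unlike the previous section, the variance printed here will not be close to 1: Xavier/Glorot uniform scales with the tensor's fan-in and fan-out. Both initializers sample from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)). PyTorch takes fan_in from dim 1 and fan_out from dim 0; for a 2-D shape Keras uses the reverse convention, but the sum is the same, so the two match. A quick check of the theoretical value:

# Expected variance of U(-a, a) is a**2 / 3 = 2 / (fan_in + fan_out)
fan_in, fan_out = dim, 3
expected_var = 2.0 / (fan_in + fan_out)
print("expected variance: {}".format(expected_var))  # ~2e-06 for dim = 1000000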

Multi-class cross-entropy loss

pytorch: CrossEntropyLoss()
tf2.0: categorical_crossentropy()

logits = np.random.random((3, 3))  # renamed from input to avoid shadowing the built-in
input_p = torch.tensor(logits)
input_t = tf.convert_to_tensor(logits)

target_p = torch.tensor([1, 2, 2])                    # class indices
target_t1 = tf.keras.utils.to_categorical([1, 2, 2])  # one-hot, as floats
target_t2 = tf.constant([1, 2, 2])                    # class indices
target_t3 = tf.one_hot([1, 2, 2], depth=3)            # one-hot

# CrossEntropyLoss applies log_softmax internally, so it takes raw logits
p_f = torch.nn.CrossEntropyLoss()
loss1 = p_f(input_p, target_p)
print(loss1)

# Method 1: one-hot labels + probabilities (hence the explicit softmax)
loss2 = tf.losses.categorical_crossentropy(y_true=target_t1, y_pred=tf.nn.softmax(input_t, axis=1))
print(tf.reduce_mean(loss2))

# Method 2: integer labels + probabilities
loss3 = tf.keras.losses.sparse_categorical_crossentropy(y_true=target_t2, y_pred=tf.nn.softmax(input_t, axis=1))
print(tf.reduce_mean(loss3))

# Method 3: same as method 1, via the tf.keras namespace
loss4 = tf.keras.losses.categorical_crossentropy(y_true=target_t3, y_pred=tf.nn.softmax(input_t, axis=1))
print(tf.reduce_mean(loss4))
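
All three TF variants above first turn the logits into probabilities by hand. A minimal alternative sketch that skips the explicit softmax by letting the loss consume raw logits, via the from_logits argument these Keras losses accept:

# Equivalent to method 2, but passing raw logits directly
loss5 = tf.keras.losses.sparse_categorical_crossentropy(
    y_true=target_t2, y_pred=input_t, from_logits=True)
print(tf.reduce_mean(loss5))  # should match loss1 from CrossEntropyLoss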

Binary cross-entropy loss

pytorch: BCEWithLogitsLoss()
tf2.0: sigmoid_cross_entropy_with_logits()

logits = np.random.random((3, 3))
input_p = torch.tensor(logits)
input_t = tf.convert_to_tensor(logits)

target = np.array([[0., 1., 1.], [0., 0., 1.], [1., 0., 1.]])
target_p = torch.tensor(target)
target_t = tf.convert_to_tensor(target)

# BCEWithLogitsLoss fuses the sigmoid into the loss and averages over all elements
p_f = torch.nn.BCEWithLogitsLoss()
loss1 = p_f(input_p, target_p)
print(loss1)

# Method 1: element-wise losses, reduced to the mean by hand
loss2 = tf.nn.sigmoid_cross_entropy_with_logits(logits=input_t, labels=target_t)
print(tf.reduce_mean(loss2))

# Method 2: the Keras loss object with from_logits=True (already returns a scalar,
# so the reduce_mean below is a no-op)
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss3 = loss_fn(y_true=target_t, y_pred=input_t)
print(tf.reduce_mean(loss3))
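
For reference, both frameworks evaluate the same numerically stable closed form under the hood: max(x, 0) - x*z + log(1 + exp(-|x|)) for logits x and labels z. A minimal NumPy sketch to cross-check the values printed above:

# Numerically stable BCE-with-logits, identical to the framework results above
x, z = logits, target
manual = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))
print(manual.mean())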