PyTorch ↔ TensorFlow 2 Conversion: A Function Reference (continuously updated...)


A record of function conversions between PyTorch and TensorFlow 2 (continuously updated...)

TensorFlow 2 official documentation

PyTorch official documentation

  • Clamp all element values into the range [min, max].
# TensorFlow syntax:
out = tf.clip_by_value(input, clip_value_min, clip_value_max)
# PyTorch syntax:
out = torch.clamp(input, min, max, out=None)
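A minimal runnable sketch of the equivalence (the input values here are illustrative):

import tensorflow as tf
import torch

x_tf = tf.constant([-2.0, 0.5, 3.0])
print(tf.clip_by_value(x_tf, 0.0, 1.0))     # [0.  0.5 1. ]

x_pt = torch.tensor([-2.0, 0.5, 3.0])
print(torch.clamp(x_pt, min=0.0, max=1.0))  # tensor([0.0000, 0.5000, 1.0000])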
  • Compute the sum of gradients of the outputs with respect to the inputs
# TensorFlow syntax (tf.gradients only works in graph mode in TF2; see the eager sketch below):
grad = tf.gradients(ys, xs)[0]
# PyTorch syntax:
grad = torch.autograd.grad(ys, xs)[0]
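In eager TF2, tf.gradients raises an error; the usual eager equivalent is tf.GradientTape. A minimal sketch:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
grad = tape.gradient(y, x)
print(grad)  # tf.Tensor(6.0, ...), since dy/dx = 2x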
  • Context manager that disables gradient computation
# TensorFlow syntax: temporarily pauses recording operations on this tape. Operations
# executed while this context manager is active are not recorded on the tape. This is
# useful to reduce the memory used by tracing all computations.
x = tf.constant(4.0)
with tf.GradientTape() as tape:
  with tape.stop_recording():
    y = x ** 2
dy_dx = tape.gradient(y, x)
print(dy_dx)  # None

# PyTorch syntax:
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
...   y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
  • Return the maximum value of all elements in the input tensor
# TensorFlow syntax:
max_ = tf.reduce_max(input, axis=1)
# PyTorch syntax (with dim given, torch.max returns a (values, indices) namedtuple):
max_ = torch.max(input, dim=1).values
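A minimal sketch of the return-type difference (illustrative values):

import torch

x = torch.tensor([[1., 5.], [7., 3.]])
vals, idxs = torch.max(x, dim=1)  # namedtuple of (values, indices)
print(vals)  # tensor([5., 7.])
print(idxs)  # tensor([1, 0])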
  • Return the matrix norm or vector norm of a given tensor
# TensorFlow syntax:
out = tf.norm(input, ord=1, axis=1)
# PyTorch syntax:
out = torch.norm(input, p=1, dim=1)
  • Concatenate the given tensors along a given dimension
# TensorFlow syntax:
out = tf.concat(values, axis=0)
# PyTorch syntax:
out = torch.cat(values, dim=0)
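A minimal sketch (shapes are illustrative):

import torch

a = torch.ones(2, 3)
b = torch.zeros(2, 3)
print(torch.cat([a, b], dim=0).shape)  # torch.Size([4, 3])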
  • Model saving
# TensorFlow syntax:
tf.saved_model.save(obj, export_dir)
# PyTorch syntax:
torch.save(obj, f)
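In PyTorch the common pattern is to save the model's state_dict rather than the whole module; a minimal runnable sketch with a hypothetical one-layer model:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                         # hypothetical stand-in model
torch.save(model.state_dict(), 'model.pt')      # save weights only
model.load_state_dict(torch.load('model.pt'))   # restore into a matching module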
  • Create a tensor with all elements set to zero
# TensorFlow syntax:
tf.zeros_like(input, dtype=None)
# PyTorch syntax:
torch.zeros_like(input, dtype=None)
  • Output random values from a uniform distribution
# TensorFlow syntax:
tf.random.uniform(shape=[])
# PyTorch syntax (the keyword is size, not shape):
torch.rand(size=[])
  • Compute the mean squared error
# TensorFlow syntax:
tf.losses.mean_squared_error(x, y)
# PyTorch syntax:
MSELoss = torch.nn.MSELoss()
MSELoss(x, y)
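A minimal runnable sketch (illustrative values):

import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.0, 2.0, 5.0])
print(torch.nn.MSELoss()(x, y))  # tensor(1.3333), the mean of [0, 0, 4]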

  • Flatten the input without affecting the batch size.
# TensorFlow syntax:
Flatten = tf.keras.layers.Flatten()
Flatten(input)

# PyTorch syntax:
Flatten = nn.Flatten()
Flatten(input)

>>> input = torch.randn(32, 1, 5, 5)
>>> # With default parameters
>>> m = nn.Flatten()
>>> output = m(input)
>>> output.size()
torch.Size([32, 25])
>>> # With non-default parameters
>>> m = nn.Flatten(0, 2)
>>> output = m(input)
>>> output.size()
torch.Size([160, 5])

  • Compute the hyperbolic tangent of x element-wise (tanh)
# TensorFlow syntax:
tf.math.tanh(x)
# PyTorch syntax:
torch.tanh(x)
  • Compute the natural logarithm of x element-wise.
# TensorFlow syntax:
tf.math.log(x)
# PyTorch syntax:
torch.log(x)
  • Construct an identity matrix, or a batch of matrices.
# TensorFlow syntax:
tf.eye(num_rows)

# Construct one identity matrix.
tf.eye(2)
==> [[1., 0.],
     [0., 1.]]

# Construct a batch of 3 identity matrices, each 2 x 2.
# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
batch_identity = tf.eye(2, batch_shape=[3])

# Construct one 2 x 3 "identity" matrix
tf.eye(2, num_columns=3)
==> [[ 1.,  0.,  0.],
     [ 0.,  1.,  0.]]
     
# PyTorch syntax:
torch.eye(num_rows)

>>> torch.eye(3)
tensor([[ 1.,  0.,  0.],
        [ 0.,  1.,  0.],
        [ 0.,  0.,  1.]])
  • Apply a boolean mask to a tensor
# TensorFlow syntax:
tf.boolean_mask(tensor, mask, axis=None, name='boolean_mask')

tensor = [0, 1, 2, 3]  # 1-D example
mask = np.array([True, False, True, False])
out = tf.boolean_mask(tensor, mask)
# out : tf.Tensor([0 2], shape=(2,), dtype=int32)


tensor = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]  # 2-D example
mask = np.array([[True, False, False],
                 [False, True, False],
                 [False, False, True]])
out = tf.boolean_mask(tensor, mask)
# out : tf.Tensor([0 4 8], shape=(3,), dtype=int32)

# PyTorch syntax:
torch.masked_select(input, mask)

>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.3552, -2.3825, -0.8297,  0.3477],
        [-1.2035,  1.2252,  0.5002,  0.6248],
        [ 0.1307, -2.0608,  0.1244,  2.0139]])
>>> mask = x.ge(0.5)
>>> mask
tensor([[False, False, False, False],
        [False, True, True, True],
        [False, False, False, True]])
>>> torch.masked_select(x, mask)
tensor([ 1.2252,  0.5002,  0.6248,  2.0139])
  • Compute the absolute value of each element in the input.
# TensorFlow syntax:
tf.math.abs(x)
# PyTorch syntax:
torch.abs(x)
  • Stack a list of rank-R tensors into one rank-(R+1) tensor.
# TensorFlow syntax:
tf.stack(values, axis=0, name='stack')
# PyTorch syntax:
torch.stack(tensors, dim=0)
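A minimal sketch of how stack differs from cat (stack adds a new dimension):

import torch

a = torch.zeros(2, 3)
b = torch.ones(2, 3)
print(torch.stack([a, b], dim=0).shape)  # torch.Size([2, 2, 3])
print(torch.cat([a, b], dim=0).shape)    # torch.Size([4, 3]), for comparison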

  • Compute the mean of elements across dimensions of a tensor
# TensorFlow syntax:
tf.math.reduce_mean(x, axis=None)
# PyTorch syntax:
torch.mean(x, dim=None)
  • Pad an image with zeros to the specified height and width.
# TensorFlow syntax:
tf.raw_ops.PadV2(input, paddings=[[pad_top, pad_bottom], [pad_left, pad_right]], constant_values=0)
# PyTorch syntax:
torch.nn.functional.pad(input, [pad_left, pad_right, pad_top, pad_bottom], mode='constant', value=0)
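Note the ordering convention differs: TF lists (before, after) pairs per dimension starting from the first dimension, while torch.nn.functional.pad lists pairs starting from the last dimension. A minimal sketch (shapes illustrative):

import torch
import torch.nn.functional as F

x = torch.ones(2, 3)
y = F.pad(x, [1, 1, 2, 0], mode='constant', value=0)  # (left, right, top, bottom)
print(y.shape)  # torch.Size([4, 5])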
  • Compute the cross-entropy loss between input logits and targets
# TensorFlow syntax:
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(label, logit)
loss_ = tf.reduce_mean(loss)

# logit: tf.Tensor([[-1.6456  0.0097 -1.0953  1.1578  0.917 ]
#                   [ 1.8985  0.3284  0.7734  0.551   0.5097]
#                   [ 0.7643  2.2743  1.376   1.3695  0.931 ]], shape=(3, 5), dtype=float32)
# label: tf.Tensor([0 1 2], shape=(3,), dtype=int32)
# loss: tf.Tensor([3.6227016 2.2839847 1.7284999], shape=(3,), dtype=float32)
# loss_: tf.Tensor(2.545062, shape=(), dtype=float32)

# PyTorch syntax:
torch.nn.CrossEntropyLoss()

>>> # Example of target with class indices
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()

# input:  tensor([[-1.6456,  0.0097, -1.0953,  1.1578,  0.9170],
#                 [ 1.8985,  0.3284,  0.7734,  0.5510,  0.5097],
#                 [ 0.7643,  2.2743,  1.3760,  1.3695,  0.9310]], requires_grad=True)
# target: tensor([0, 1, 2])
# output: tensor(2.5451, grad_fn=<NllLossBackward0>)
  • Rewriting a network structure: dimension permutation
    Take converting [b, h, w, c] to [b, c, h, w] as an example, i.e. [5, 28, 28, 3] -> [5, 3, 28, 28].
    In TensorFlow this is done with tf.transpose(data, perm); the PyTorch equivalent is sketched below.
# TensorFlow syntax:
"""
Permute dimensions
tf.transpose(data, perm)
data: the input tensor
perm: the new dimension order, as a list
"""

# Convert [b, h, w, c] to [b, c, h, w], i.e. [5, 28, 28, 3] -> [5, 3, 28, 28]
matrix3 = tf.transpose(matrix, perm=[0, 3, 1, 2])
matrix3_shape = matrix3.shape
print("matrix3_shape:", matrix3_shape)
  • Rewriting conv2d
# TensorFlow syntax:
def get_shape(self, tensor):
    return tensor.get_shape().as_list()

def conv2d(self, input, kernel_size, strides=1, name=None, use_bias=False, padding='SAME',
           initializer=tf.random_normal_initializer(mean=0, stddev=0.02)):
    input_shape = self.get_shape(input)
    # input_channels = input_shape[-1]
    # filter_shape = [kernel_size, kernel_size, input_channels, out_channels]
    filter_shape = kernel_size
    strides_shape = [1, strides, strides, 1]

    with tf.variable_scope(name):  # note: variable_scope/get_variable are tf.compat.v1 APIs in TF2
        weight = tf.get_variable('weight', shape=filter_shape, initializer=initializer)
        _conv2d = tf.nn.conv2d(input, filter=weight, strides=strides_shape, padding=padding)
        if (use_bias):
            bias = tf.get_variable('bias', shape=filter_shape[-1], initializer=tf.constant_initializer(0.))
            _conv2d = tf.nn.bias_add(_conv2d, bias)
        return _conv2d
# Convert the kernel shape from the PyTorch layout to the TensorFlow layout
# tensorflow: (filter_height, filter_width, in_channels, out_channels)
# stack_kernel [3, 1, 15, 15] -> stack_kernel_tf [15, 15, 1, 3]
stack_kernel_tf = tf.transpose(stack_kernel, [2, 3, 1, 0])
output = self.conv2d(input, stack_kernel_tf, strides=1, padding='SAME')

# PyTorch syntax:
# stack_kernel [3, 1, 15, 15]  (out_channels, in_channels/groups, kH, kW)
output = torch.nn.functional.conv2d(grad, stack_kernel, stride=1, padding='same', groups=3)
  • Modify certain elements of a tensor
# TensorFlow syntax:
# TF tensors cannot be assigned to directly. E.g., to change the 3 in a tensor to 100:
A = 3    # the element value being replaced
B = 100  # the new element value
tensor_val = tf.constant([1, 2, 3, 4])  # original tensor
# The index of 3 is 2; the one-hot vector for that position, [0, 0, 1, 0], is denoted D
_tensor_val = tf.one_hot(2, len(tensor_val), dtype=tf.int32)
# new tensor = original tensor - A*D + B*D
new_tensor_val = tensor_val - A*_tensor_val + B*_tensor_val
print(new_tensor_val)
# tf.Tensor([1 2 100 4], shape=(4,), dtype=int32)

# Method 2: tf.Variable with assign.
# First convert the tensor to a Variable, modify it via assign, then convert back to a tensor.
tensor_input = tf.constant([i for i in range(20)], tf.float32)
tensor_input = tf.reshape(tensor_input, [4, 5])
new_tensor_v = tf.Variable(tensor_input)
new_tensor_v[2, 3].assign(100)
new_tensor = tf.convert_to_tensor(new_tensor_v)
print(type(new_tensor))  # <class 'tensorflow.python.framework.ops.EagerTensor'>
# print(new_tensor.numpy())
'''
[[  0.   1.   2.   3.   4.]
 [  5.   6.   7.   8.   9.]
 [ 10.  11.  12. 100.  14.]
 [ 15.  16.  17.  18.  19.]]
'''


# PyTorch syntax:
# Direct assignment works
a = torch.tensor([1, 2, 3, 4.])
a[2] = 100
print(a)  # tensor([  1.,   2., 100.,   4.])
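TF2 also provides tf.tensor_scatter_nd_update, which avoids both the one-hot and the Variable detours; a minimal sketch:

import tensorflow as tf

t = tf.constant([1, 2, 3, 4])
new_t = tf.tensor_scatter_nd_update(t, indices=[[2]], updates=[100])
print(new_t)  # tf.Tensor([  1   2 100   4], shape=(4,), dtype=int32)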
  • Gradient updates
# TensorFlow syntax:

# Transform the image values
w = self.inverse_tanh_space(images)
# Wrap w in a tf.Variable so it can be updated
w = tf.Variable(w)
optimizer = tf.keras.optimizers.Adam(lr)
# To compute gradients only with respect to w
with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(w)
    ...
    loss = loss_fn(labels, outputs)  # loss_fn: placeholder for whatever criterion applies
    ...
grad = tape.gradient(loss, w)
optimizer.apply_gradients([(grad, w)])

# PyTorch syntax:

# Transform the image values
w = self.inverse_tanh_space(images).detach()
w.requires_grad = True
optimizer = optim.Adam([w], lr)
...
loss = loss_fn(outputs, labels)  # loss_fn: placeholder for whatever criterion applies
...
optimizer.zero_grad()
loss.backward()
optimizer.step()

  • Floor (round down)
# TensorFlow syntax:
a = tf.constant([[0.7498, 0.2052, 0.9352],
                 [0.1171, 0.2046, 0.1682],
                 [0.3003, 0.7483, 0.0089]])
b = tf.math.floor(a).numpy()
# array([[0., 0., 0.],
#        [0., 0., 0.],
#        [0., 0., 0.]], dtype=float32)
# PyTorch syntax (torch.floor is the direct equivalent; Tensor.long() gives the same result here, but it truncates toward zero and changes the dtype):
>>> b1 = torch.tensor([[0.7498, 0.2052, 0.9352],
...                    [0.1171, 0.2046, 0.1682],
...                    [0.3003, 0.7483, 0.0089]])
>>> torch.floor(b1)
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])


