PyTorch for Deep Learning, Part 1

What is PyTorch?

PyTorch is a deep learning library developed by Facebook. It can be used for various purposes such as natural language processing, computer vision, etc.

Prerequisites

Python, NumPy, Pandas, and Matplotlib

Tensor Basics

What is a tensor? A tensor is an n-dimensional array of elements. In PyTorch, everything is defined as a tensor.
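As a quick illustration (a minimal sketch, not from the original article), tensors of different ranks can be created and inspected like this:

import torch

scalar = torch.tensor(7)                   # 0-dimensional tensor
vector = torch.tensor([1, 2, 3])           # 1-dimensional tensor
matrix = torch.tensor([[1, 2], [3, 4]])    # 2-dimensional tensor

print(scalar.ndim, vector.ndim, matrix.ndim)   # 0 1 2
print(matrix.shape)                            # torch.Size([2, 2])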

Some Tensor Operations with PyTorch

Before we do anything with PyTorch, we need to import the torch library. Let's also create a NumPy array with NumPy.

# importing the required libraries
import torch
import numpy as np

# creating a numpy array
data = np.array([1, 2, 3, 4, 5])

Now we can convert this NumPy array into a torch tensor in four different ways.

# different ways of creating tensors
output1 = torch.Tensor(data)
output2 = torch.tensor(data)
output3 = torch.from_numpy(data)
output4 = torch.as_tensor(data)
print("{}\n{}\n{}\n{}".format(output1, output2, output3, output4))

Output:
tensor([1., 2., 3., 4., 5.])
tensor([1, 2, 3, 4, 5])
tensor([1, 2, 3, 4, 5])
tensor([1, 2, 3, 4, 5])

Let's see the difference between these methods:

  1. torch.Tensor(data): the data type of the elements in the tensor is float by default.
  2. torch.tensor(data): the data type of the elements in the tensor defaults to the data type of the original elements.
  3. torch.from_numpy(data): shares memory between the NumPy array and the tensor; a change in the NumPy array is reflected in the tensor object and vice versa.
  4. torch.as_tensor(data): same as 3 in that it shares memory, and like 2 its default data type is that of the original data.

Any one of these methods can be used; which one depends on the use case and is the programmer's choice. The sketch below illustrates the memory-sharing difference.
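A minimal sketch (not part of the original article) illustrating the copy-versus-share behaviour described above:

import numpy as np
import torch

data = np.array([1, 2, 3, 4, 5])

copied = torch.tensor(data)       # copies the data
shared = torch.from_numpy(data)   # shares memory with the numpy array

data[0] = 99                      # modify the original array

print(copied)   # tensor([1, 2, 3, 4, 5])  -> the copy is unchanged
print(shared)   # tensor([99, 2, 3, 4, 5]) -> the change is reflected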

Some Useful Tensor Functions in PyTorch

1) Eye function: creates an identity matrix of the given dimension.

# eye function
torch.eye(5)

Output:
tensor([[1., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 1.]])

2) Zeros function: creates a tensor of the given dimensions with all elements equal to 0.

# creating a tensor full of zeros
torch.zeros(2, 2)

Output:
tensor([[0., 0.],
        [0., 0.]])

3) Ones function: creates a tensor of the given dimensions with all elements equal to 1.

# creating a tensor full of ones
torch.ones(4, 4)

Output:
tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]])

4) Rand: creates a tensor of the specified dimensions filled with random values.

# creating a tensor with random values
torch.rand(5, 3)

Output:
tensor([[0.4212, 0.1785, 0.6450],
        [0.0284, 0.9095, 0.9114],
        [0.0811, 0.5611, 0.2545],
        [0.1132, 0.3313, 0.9921],
        [0.2310, 0.4237, 0.6206]])

5) Reshape and flatten (shaping functions): change the shape of a tensor. Note: the shape can only be changed if the resulting tensor has the same number of elements as the original one.

# reshaping tensors
output5 = torch.rand(6, 4)
print(output5, "\n")
print(output5.reshape(12, 2), "\n")

# flattening the tensor
print(output5.flatten())

Output:
tensor([[0.2267, 0.3610, 0.3366, 0.1552],
        [0.3319, 0.3566, 0.3814, 0.9854],
        [0.4764, 0.1241, 0.8957, 0.2228],
        [0.6024, 0.0024, 0.4686, 0.9024],
        [0.0275, 0.3080, 0.7966, 0.8439],
        [0.3685, 0.7664, 0.6974, 0.0545]])

tensor([[0.2267, 0.3610],
        [0.3366, 0.1552],
        [0.3319, 0.3566],
        [0.3814, 0.9854],
        [0.4764, 0.1241],
        [0.8957, 0.2228],
        [0.6024, 0.0024],
        [0.4686, 0.9024],
        [0.0275, 0.3080],
        [0.7966, 0.8439],
        [0.3685, 0.7664],
        [0.6974, 0.0545]])

tensor([0.2267, 0.3610, 0.3366, 0.1552, 0.3319, 0.3566, 0.3814, 0.9854, 0.4764,
        0.1241, 0.8957, 0.2228, 0.6024, 0.0024, 0.4686, 0.9024, 0.0275, 0.3080,
        0.7966, 0.8439, 0.3685, 0.7664, 0.6974, 0.0545])

6) Squeeze and unsqueeze operations: squeeze removes redundant dimensions of size 1, whereas unsqueeze adds an extra dimension to the tensor.

output6 = torch.as_tensor([[1, 2, 3, 4, 5, 6, 6, 7]])
print("{}\n{}\n{}".format(output6, output6.squeeze(), output6.squeeze().unsqueeze(dim=0)))

Output:
tensor([[1, 2, 3, 4, 5, 6, 6, 7]])
tensor([1, 2, 3, 4, 5, 6, 6, 7])
tensor([[1, 2, 3, 4, 5, 6, 6, 7]])

7) Tensor concatenation: concatenates two tensors. Notice the difference when the axis parameter is changed.

# tensor concatenation
tensor1 = torch.tensor([[1, 2, 3, 4, 5]])
tensor2 = torch.tensor([[6, 7, 8, 9, 10]])
print("{}\n{}".format(torch.cat((tensor1, tensor2), axis=1), torch.cat((tensor1, tensor2), axis=0)))

Output:
tensor([[ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10]])
tensor([[ 1,  2,  3,  4,  5],
        [ 6,  7,  8,  9, 10]])

Conclusion

PyTorch is a very powerful and highly capable library, and I personally love it for many reasons. Some of them are:

1. It's abstract, but not too abstract.

2. It utilises object-oriented programming.

3. We can define everything on our own and know what's happening, instead of just calling some functions.

4. PyTorch's autograd (see the sketch after this list).

5. It's highly Pythonic; using PyTorch feels like coding in regular Python.
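As a quick illustration of the autograd point above, here is a minimal sketch (not from the original article) of how PyTorch computes gradients automatically:

import torch

# a tensor that tracks operations for automatic differentiation
x = torch.tensor(3.0, requires_grad=True)

y = x ** 2 + 2 * x   # y = x^2 + 2x
y.backward()         # compute dy/dx

print(x.grad)        # tensor(8.) since dy/dx = 2x + 2 = 8 at x = 3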

So, these are the basic tensor operations and functions in PyTorch, and that's it for this part.

Thank You

Translated from: https://medium.com/analytics-vidhya/pytorch-for-deep-learning-part-1-1324a45b0af3
