# Powerful PyTorch: Get to Know the Newly Popular Deep Learning Framework in 10 Minutes

PyTorch consists of 4 main packages:

- torch: a general-purpose array library similar to NumPy; tensors can be converted to a CUDA type (`torch.cuda.FloatTensor`) so computation runs on the GPU.
- torch.autograd: a package for building computational graphs and automatically obtaining gradients.
- torch.nn: a neural network library with common layers and cost functions.
- torch.optim: an optimization package with standard optimization algorithms such as SGD.
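To illustrate the GPU point, here is a minimal sketch of moving a tensor between devices; the fallback logic and variable names are my own additions, not from the original:

```python
import torch

# create a tensor on the CPU
a = torch.ones(2, 3)

# move it to the GPU if one is available, otherwise stay on the CPU
if torch.cuda.is_available():
    a = a.cuda()      # now a torch.cuda.FloatTensor
b = a * 2             # the computation runs on whichever device holds `a`
result = b.cpu()      # bring the result back to the CPU
print(result)
```

The same code runs unchanged on CPU-only machines, which is why the availability check is the usual idiom.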
1. Importing the tools

```python
import torch                     # arrays on GPU
import torch.nn as nn            # neural net library
import torch.nn.functional as F  # most non-linearities are here
import torch.optim as optim      # optimization package
```
2. torch arrays replace numpy ndarrays -> linear algebra with GPU support

```python
# 2 matrices of size 2x3 stacked into a 3d tensor of size 2x2x3
d = [[[1., 2., 3.], [4., 5., 6.]], [[7., 8., 9.], [11., 12., 13.]]]
d = torch.Tensor(d)  # tensor from a python list
print("shape of the tensor:", d.size())

# the first index is the depth
z = d[0] + d[1]
print("adding up the two matrices of the 3d tensor:", z)
```

```
shape of the tensor: torch.Size([2, 2, 3])
adding up the two matrices of the 3d tensor:
  8  10  12
 15  17  19
[torch.FloatTensor of size 2x3]
```

```python
# a heavily used operation is reshaping of tensors using .view()
print(d.view(2, -1))  # -1 makes torch infer the second dim
```

```
  1   2   3   4   5   6
  7   8   9  11  12  13
[torch.FloatTensor of size 2x6]
```

3. torch.autograd makes a graph node out of a tensor -> computes gradients automatically

```python
# d is a tensor, not a graph node; to create a node based on it:
x = torch.autograd.Variable(d, requires_grad=True)
print("the node's data is the tensor:", x.data.size())
print("the node's gradient is empty at creation:", x.grad)
```

```
the node's data is the tensor: torch.Size([2, 2, 3])
the node's gradient is empty at creation: None
```

```python
# do operations on the node to build a computational graph
y = x + 1
z = x + y
s = z.sum()
print(s.grad_fn)  # called s.creator in very old PyTorch versions

# backpropagation fills in the gradients
s.backward()
print("the variable now has gradients:", x.grad)
```

```
the variable now has gradients: Variable containing:
(0 ,.,.) =
  2  2  2
  2  2  2

(1 ,.,.) =
  2  2  2
  2  2  2
[torch.FloatTensor of size 2x2x3]
```
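The gradient of 2 everywhere is exactly what the math predicts: s = Σ(x + (x + 1)) = Σ(2x + 1), so ∂s/∂x = 2 for every element. A quick check, written with the modern `requires_grad=True` API (variable names are my own):

```python
import torch

x = torch.ones(2, 2, 3, requires_grad=True)
s = (x + (x + 1)).sum()  # s = sum(2x + 1), so ds/dx = 2 elementwise
s.backward()
grad = x.grad
print(grad)
```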
4. torch.nn contains various NN layers (linear mappings on the rows of a tensor) + (nonlinearities) ->

```python
# linear transformation of a 2x5 matrix into a 2x3 matrix
linear_map = nn.Linear(5, 3)
print("using randomly initialized params:", list(linear_map.parameters()))

# data has 2 examples with 5 features and 3 targets
data = torch.randn(2, 5)                       # training data
y = torch.autograd.Variable(torch.randn(2, 3)) # target
x = torch.autograd.Variable(data)              # wrap the data in a graph node

# applying the transformation to a node creates a computational graph
a = linear_map(x)
z = F.relu(a)
o = F.softmax(z, dim=1)
print("output of softmax as a probability distribution:", o.data.view(1, -1))

# loss function
loss_func = nn.MSELoss()  # instantiate loss function
L = loss_func(z, y)       # calculate MSE loss between output and target
print("Loss:", L)
```

```
output of softmax as a probability distribution:
 0.2092  0.1979  0.5929  0.4343  0.3038  0.2619
[torch.FloatTensor of size 1x6]

Loss: Variable containing:
 2.9838
[torch.FloatTensor of size 1]
```

When subclassing `nn.Module`, the `__init__` function must always call the parent's, and all layers with parameters must be defined inside it as class attributes (`self.x`):

```python
class Log_reg_classifier(nn.Module):
    def __init__(self, in_size, out_size):
        super(Log_reg_classifier, self).__init__()  # always call parent's init
        self.linear = nn.Linear(in_size, out_size)  # layer parameters

    def forward(self, vect):
        return F.log_softmax(self.linear(vect), dim=1)
```
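A quick usage sketch of the class above; the input size, class count, and batch size are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Log_reg_classifier(nn.Module):
    def __init__(self, in_size, out_size):
        super(Log_reg_classifier, self).__init__()
        self.linear = nn.Linear(in_size, out_size)

    def forward(self, vect):
        return F.log_softmax(self.linear(vect), dim=1)

model = Log_reg_classifier(10, 2)  # 10 input features, 2 classes
batch = torch.randn(4, 10)         # a minibatch of 4 examples
log_probs = model(batch)           # forward pass: log-probabilities per class
print(log_probs.size())            # torch.Size([4, 2])
```

Because `forward` ends in `log_softmax`, each row of `log_probs.exp()` sums to 1, i.e. the output is a log of a probability distribution over the 2 classes.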
5. torch.optim can also do the optimization ->

```python
optimizer = optim.SGD(linear_map.parameters(), lr=1e-2)  # instantiate optimizer with model params + learning rate

# epoch loop: we run the following until convergence
optimizer.zero_grad()          # clear gradients from any previous step
L.backward(retain_graph=True)  # retain_variables=True in very old PyTorch versions
optimizer.step()
print(L)
```

```
Variable containing:
 2.9838
[torch.FloatTensor of size 1]
```

```python
# define model
model = Log_reg_classifier(10, 2)

# define loss function
loss_func = nn.MSELoss()

# define optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-1)

# send data through the model in minibatches for 10 epochs
for epoch in range(10):
    for minibatch, target in data:   # data yields (minibatch, target) pairs
        out = model(minibatch)       # forward pass
        L = loss_func(out, target)   # calculate loss
        optimizer.zero_grad()        # clear gradients from the last step
        L.backward()                 # backward pass: compute gradients
        optimizer.step()             # make an update step
```
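To make the training skeleton above actually runnable, here is a self-contained version with synthetic data. The dataset, batching, and the switch to `nn.NLLLoss` (which pairs naturally with the model's `log_softmax` output) are my own illustrative choices, not from the original:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Log_reg_classifier(nn.Module):
    def __init__(self, in_size, out_size):
        super(Log_reg_classifier, self).__init__()
        self.linear = nn.Linear(in_size, out_size)

    def forward(self, vect):
        return F.log_softmax(self.linear(vect), dim=1)

torch.manual_seed(0)
inputs = torch.randn(100, 10)             # 100 synthetic examples, 10 features
targets = (inputs.sum(dim=1) > 0).long()  # class 1 when the features sum to > 0

model = Log_reg_classifier(10, 2)
loss_func = nn.NLLLoss()                  # negative log-likelihood, matches log_softmax
optimizer = optim.SGD(model.parameters(), lr=1e-1)

losses = []
for epoch in range(10):
    for i in range(0, 100, 20):           # minibatches of 20
        minibatch, target = inputs[i:i + 20], targets[i:i + 20]
        out = model(minibatch)            # forward pass
        L = loss_func(out, target)        # calculate loss
        optimizer.zero_grad()             # clear gradients from the last step
        L.backward()                      # backward pass
        optimizer.step()                  # make an update step
    losses.append(L.item())               # track the last minibatch loss per epoch

print("first epoch loss: %.4f, last epoch loss: %.4f" % (losses[0], losses[-1]))
```

On this easily separable toy task the loss drops steadily across the 10 epochs.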
