Deep Learning - From Getting Started to Giving Up (1): PyTorch Basics


Tensor

Similar to numpy's array and pandas' DataFrame, the basic data structure in PyTorch is the tensor.

Basic tensor operations
1. Flatten and reshape
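The code that produced the output below was not included; here is a minimal sketch that reproduces it, assuming z is built with torch.arange:

import torch

z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")

# Flatten into a 1-D tensor
print(f"Flattened z: \n {z.flatten()}")

# Reshape into 3 rows x 4 columns
print(f"Reshaped (3x4) z: \n {z.reshape(3, 4)}")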
###
Original z: 
 tensor([[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        [ 8,  9],
        [10, 11]])
Flattened z: 
 tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
Reshaped (3x4) z: 
 tensor([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
###
2. Squeezing tensors

When working with tensors that have singleton dimensions, such as x.shape = [1, 10] or [256, 1, 3], simply indexing with x[0] may not return the element you expect, so we use torch.squeeze() to remove a specific size-1 dimension.

x = torch.randn(1, 10)
x = x.squeeze(0)  # remove the size-1 dimension at position 0, leaving a 1-D tensor of 10 values
print(x.shape)
print(f"x[0]: {x[0]}")
###
torch.Size([10])
x[0]: -0.7390837073326111
###
3. Permute

torch.permute() rearranges the order of a tensor's dimensions.

x = torch.rand(3, 48, 64)  # e.g. an image stored as (channels, height, width)
x = x.permute(1, 2, 0)     # reorder the dimensions to (height, width, channels)
print(x.shape)
###
torch.Size([48, 64, 3])
###
4. Concatenation

torch.cat() joins tensors along a given dimension.

x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
# concatenate along rows (dim=0)
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns (dim=1)
cat_cols = torch.cat((x, y), dim=1)
print(f"Row-wise concatenation: shape{list(cat_rows.shape)} \n {cat_rows}")
print(f"Column-wise concatenation: shape{list(cat_cols.shape)} \n {cat_cols}")
###
Row-wise concatenation: shape[6, 4] 
 tensor([[ 0.,  1.,  2.,  3.],
        [ 4.,  5.,  6.,  7.],
        [ 8.,  9., 10., 11.],
        [ 2.,  1.,  4.,  3.],
        [ 1.,  2.,  3.,  4.],
        [ 4.,  3.,  2.,  1.]])
Column-wise concatenation: shape[3, 8] 
 tensor([[ 0.,  1.,  2.,  3.,  2.,  1.,  4.,  3.],
        [ 4.,  5.,  6.,  7.,  1.,  2.,  3.,  4.],
        [ 8.,  9., 10., 11.,  4.,  3.,  2.,  1.]])
###

GPU vs CPU

When processing data at large scale and high speed, the CPU often cannot keep up, and deep learning routinely involves exactly that kind of large-scale workload, so we need to be able to choose flexibly between the CPU and the GPU.

def set_device():
  device = "cuda" if torch.cuda.is_available() else "cpu"
  if device != "cuda":
    print("GPU is not enabled in this notebook. \n"
          "To enable it, go to `Runtime` -> `Change runtime type` \n"
          "and select `GPU` under `Hardware accelerator`.")
  else:
    print("GPU is enabled in this notebook. \n"
          "To disable it, go to `Runtime` -> `Change runtime type` \n"
          "and select `None` under `Hardware accelerator`.")

  return device

DEVICE = set_device()
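Once DEVICE is set, tensors (and later, models) can be moved onto the chosen device with .to(); a small usage sketch:

x = torch.randn(2, 2)
x = x.to(DEVICE)   # lives on the GPU when one is available, otherwise stays on the CPU
print(x.device)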

A simple neural network

PyTorch provides the nn.Module class specifically for building deep learning networks. We inherit from nn.Module and implement a few important methods:

  1. __init__
    In the __init__ method we define the structure of the network: which layers it is composed of, which activation functions are used, and so on.
  2. forward
    Every neural network module must implement the forward method. It specifies the computation the network performs as data passes through it.
  3. predict
    This is not a mandatory method for a neural network module, but it can be used to quickly obtain the most likely label from the network.
  4. train
    This is also not a mandatory method, but it can be used to train the parameters of the network.
import torch.nn as nn

# Inherit from nn.Module - the base class for neural network modules provided by PyTorch
class NaiveNet(nn.Module):

  # Define the structure of your network
  def __init__(self):
    super(NaiveNet, self).__init__()

    # The network is defined as a sequence of operations
    self.layers = nn.Sequential(
        nn.Linear(2, 16),  # Transformation from the input to the hidden layer
        nn.ReLU(),         # Activation function (ReLU) is a widely used non-linearity that is cheap to compute:
                           # it returns 0 for any negative input and returns any positive value x unchanged.
        nn.Linear(16, 2),  # Transformation from the hidden to the output layer
    )

  # Specify the computations performed on the data
  def forward(self, x):
    # Pass the data through the layers
    return self.layers(x)

  # Choose the most likely label predicted by the network
  def predict(self, x):
    # Pass the data through the networks
    output = self.forward(x)

    # Choose the label with the highest score
    return torch.argmax(output, 1)

import matplotlib.pyplot as plt

# Implement the train function, given a training dataset X and corresponding labels y
def train(model, X, y):
  # The Cross Entropy Loss is suitable for classification problems
  loss_function = nn.CrossEntropyLoss()

  # Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
  learning_rate = 1e-2
  optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

  # Number of epochs
  epochs = 15000

  # List of losses for visualization
  losses = []

  for i in range(epochs):
    # Pass the data through the network and compute the loss
    # We'll use the whole dataset during training instead of using batches
    # in order to keep the code simple for now.
    y_logits = model.forward(X)
    loss = loss_function(y_logits, y)

    # Clear the previous gradients and compute the new ones
    optimizer.zero_grad()
    loss.backward()

    # Adapt the weights of the network
    optimizer.step()

    # Store the loss
    losses.append(loss.item())

    # Print the results at every 1000th epoch
    if i % 1000 == 0:
      print(f"Epoch {i} loss is {loss.item()}")

      plot_decision_boundary(model, X, y, DEVICE)
      plt.savefig('frames/{:05d}.png'.format(i))

  return losses


# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
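The snippet above assumes that X (an N x 2 float tensor of 2-D points), y (a length-N tensor of class labels 0/1), and the plotting helper plot_decision_boundary are defined elsewhere in the notebook. A minimal sketch of a toy dataset that would work here, using sklearn's make_moons (an assumption, not the original data):

from sklearn.datasets import make_moons

# 256 two-dimensional points split into two interleaving half-moon classes
X_np, y_np = make_moons(n_samples=256, noise=0.1)
X = torch.from_numpy(X_np).float().to(DEVICE)
y = torch.from_numpy(y_np).long().to(DEVICE)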

The above is a simple neural network applied to a classification task. The overall structure is: an input layer of size 2, a hidden layer of size 16 (with ReLU as the activation function), and an output layer of size 2. Printing the model with print(model) shows this structure:

NaiveNet(
  (layers): Sequential(
    (0): Linear(in_features=2, out_features=16, bias=True)
    (1): ReLU()
    (2): Linear(in_features=16, out_features=2, bias=True)
  )
)
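After training, the predict method defined above gives the most likely class for new points; a minimal usage sketch (the input values here are made up):

new_points = torch.tensor([[0.5, -1.0], [2.0, 1.0]], device=DEVICE)
print(model.predict(new_points))  # prints a tensor with one predicted label (0 or 1) per point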

For today, you only need a general understanding of the basic structure of a neural network; tomorrow we will look systematically at the detailed structure of simple linear neural networks. You are also welcome to follow the WeChat public account 奇趣多多 to discuss!
Next: Deep Learning - From Getting Started to Giving Up (2): Simple Linear Neural Networks
