Approach: build a simple tensor by hand, then wrap the designed deep-learning layers in nn.Sequential.
Code:
import torch
from torch import nn

x = torch.Tensor([1, 2, 3, 4, 5]).reshape(1, 1, 5)  # shape (N=1, C=1, L=5)

# Reflection-pad one element on the left, then a pointwise convolution
net1 = nn.Sequential(
    nn.ReflectionPad1d((1, 0)),
    nn.Conv1d(in_channels=1, out_channels=1, kernel_size=1, stride=1),
)

net2 = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=4, kernel_size=2, padding=1, stride=1),  # -> (1, 4, 6)
    nn.BatchNorm1d(4),
    nn.Dropout(0.5),
    nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, padding=1, stride=2),  # -> (1, 8, 3)
    nn.BatchNorm1d(8),
    nn.Dropout(0.5),
    nn.Linear(3, 1),   # applied to the last (length) dimension -> (1, 8, 1)
    nn.Sigmoid(),
)

out1 = net1(x)  # shape (1, 1, 6)
out2 = net2(x)  # shape (1, 8, 1)
Run results (the exact values differ between runs: the weights are randomly initialized, and Dropout/BatchNorm are in training mode):
tensor([[[0.5254, 0.0157, 0.5254, 1.0351, 1.5448, 2.0545]]],
       grad_fn=<ConvolutionBackward0>)
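out1 has six entries because ReflectionPad1d((1, 0)) prepends one mirrored element before the kernel-size-1 convolution, and the first and third entries coincide because the mirror copies x[1] = 2 to the front. A pure-Python sketch of the padding rule (the helper name reflection_pad_1d is made up for illustration):

```python
def reflection_pad_1d(seq, left, right):
    # Mirror around the edges WITHOUT repeating the edge element,
    # which is what nn.ReflectionPad1d does (vs. ReplicationPad1d).
    left_part = [seq[i] for i in range(left, 0, -1)]
    right_part = [seq[-2 - i] for i in range(right)]
    return left_part + list(seq) + right_part

print(reflection_pad_1d([1, 2, 3, 4, 5], 1, 0))  # [2, 1, 2, 3, 4, 5]
```

Both padded-in 2s then feed the same kernel-size-1 convolution weight, which is why two identical values appear in out1.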
tensor([[[0.6319],
         [0.6324],
         [0.3160],
         [0.4043],
         [0.4561],
         [0.3068],
         [0.2934],
         [0.4924]]], grad_fn=<SigmoidBackward0>)
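The (1, 8, 1) shape of out2 can be checked with the standard Conv1d output-length formula, L_out = floor((L_in + 2*padding - kernel_size) / stride) + 1. A minimal sketch (the helper conv1d_out_len is made up for illustration):

```python
def conv1d_out_len(l_in, kernel_size, padding=0, stride=1):
    # Conv1d output length: floor((L_in + 2p - k) / s) + 1
    return (l_in + 2 * padding - kernel_size) // stride + 1

# Trace net2 on the length-5 input:
l1 = conv1d_out_len(5, kernel_size=2, padding=1, stride=1)   # first conv  -> 6
l2 = conv1d_out_len(l1, kernel_size=3, padding=1, stride=2)  # second conv -> 3
print(l1, l2)  # 6 3
```

nn.Linear(3, 1) then maps the length-3 last dimension to 1, and the 8 conv channels survive unchanged, giving the (1, 8, 1) tensor printed above.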