This article walks through common operations on a PyTorch network and its parameters.
To print a network, a plain print(net) does the job.
Official code. One detail when building the network: x = torch.flatten(x, 1) flattens everything from dimension 1 onward; dimension 0 is the batch dimension and is left untouched.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 5*5 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square, you can specify with a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)  # flatten all dimensions except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
print(net)
Output:
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
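A quick sanity check of the shapes: the official tutorial feeds this net 1-channel 32x32 images; two rounds of 5x5 convolution and 2x2 pooling shrink that to 16 feature maps of 5x5, which is exactly the 16 * 5 * 5 that fc1 expects. A minimal check (the 32x32 input size is assumed from that tutorial):

input = torch.randn(1, 1, 32, 32)  # a batch of one 1-channel 32x32 image
out = net(input)
print(out.size())  # torch.Size([1, 10])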
The network's learnable parameters can be inspected with net.parameters(); they include each layer's weight and bias.
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
Output:
10
torch.Size([6, 1, 5, 5])
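The 10 entries are a weight plus a bias for each of the five layers (conv1, conv2, fc1, fc2, fc3). To see which tensor belongs to which layer, net.named_parameters() pairs every parameter with its name; a minimal sketch:

for name, p in net.named_parameters():
    print(name, p.size())
# conv1.weight torch.Size([6, 1, 5, 5])
# conv1.bias torch.Size([6])
# ... and so on for conv2, fc1, fc2, fc3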
First point: set tensor.requires_grad = False to freeze parameters in the network, which is how fine-tuning is done.
Official code:

import torchvision
from torch import nn, optim

model = torchvision.models.resnet18(pretrained=True)
# Freeze all the parameters in the network
for param in model.parameters():
    param.requires_grad = False
Then replace the last layer. A newly constructed layer has requires_grad=True by default, so after this swap the earlier layers stay frozen and only the new final layer is trainable:

model.fc = nn.Linear(512, 10)
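During fine-tuning, only the unfrozen parameters need to go into the optimizer. A minimal sketch (the lr and momentum values here are placeholders, not prescriptions):

# model.fc is the only part with requires_grad=True, so optimize only it
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)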
Another way is torch.no_grad(). It is similar in effect to requires_grad=False, but the granularity differs: requires_grad=False freezes individual tensors, while torch.no_grad() disables gradient computation for every operation inside its scope, so the tensors it produces have requires_grad=False even when the inputs have requires_grad=True.
Official code:
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
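One caveat: torch.no_grad() does not modify existing tensors, it only stops the graph from being recorded for operations inside its scope. A small self-contained sketch to confirm:

import torch

x = torch.tensor([1.0], requires_grad=True)
with torch.no_grad():
    y = x * 2           # no graph is recorded for this multiplication
print(x.requires_grad)  # True  -- no_grad() leaves existing tensors alone
print(y.requires_grad)  # False
print(y.grad_fn)        # None  -- calling y.backward() would raise a RuntimeError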