【PyTorch Learning Notes_03】--- PyTorch (Building Deep Learning Neural Networks)

36. The Problems Deep Learning Solves

  • Machine learning pipeline: data acquisition, feature engineering, model building, evaluation and deployment
  • Role of feature engineering: the quality of the data features sets the upper bound of the model
  • Preprocessing and feature extraction are the most critical steps
  • The choice of algorithm and parameters determines how closely we approach that upper bound
  • What deep learning solves is how to extract features

37. Deep Learning Applications

  • Autonomous driving, face recognition
  • Mobile support is still limited: heavy computation, slow inference

38. Computer Vision Tasks

39. Problems Encountered in Vision Tasks

  • K-nearest-neighbor (KNN) algorithm

  • Algorithm flow:

  • Compute the distance between every point in the labeled dataset and the current point

  • Sort the points by distance in ascending order

  • Select the K points closest to the current point

  • Determine the frequency of each class among those K points

  • Return the most frequent class among the K points as the predicted class of the current point

  • KNN itself is simple and effective; it is a lazy-learning algorithm

  • The classifier needs no training on the training set, so the training time cost is zero

  • The choice of K, the distance metric, and the classification decision rule are its three basic elements

  • The CIFAR-10 dataset

  • Image distance formula, e.g. pixel-wise L1 distance: d1(I1, I2) = Σ_p |I1_p - I2_p|

  • KNN cannot really be used for image classification: the background dominates the distance, so the subject cannot be distinguished (see the sketch below)
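
  • A minimal NumPy sketch of the KNN flow above (toy made-up data, not the CIFAR-10 pipeline):

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # 1. Compute the distance from the query point to every known point (L2 here)
    dists = np.linalg.norm(X_train - x, axis=1)
    # 2-3. Sort by distance and take the K closest points
    nearest = np.argsort(dists)[:k]
    # 4-5. Return the most frequent class among the K neighbors
    return np.bincount(y_train[nearest]).argmax()

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.2, 7.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.1, 0.9])))  # -> 0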

40. The Score Function

  • Linear function f(x, W), where W holds the weight parameters
  • f(x, W) = Wx + b, where b is the bias parameter (a fine-tuning term); a tiny example follows
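
  • A tiny sketch of the score function (made-up sizes: 3 classes, 4-dimensional input):

import torch

W = torch.randn(3, 4)   # weight parameters: one row per class
b = torch.randn(3)      # bias parameters: per-class fine-tuning
x = torch.randn(4)      # a flattened input image
scores = W @ x + b      # one score per class; the highest score wins
print(scores)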

41. The Role of the Loss Function

  • Multiple sets of weight parameters define the decision boundary

42. Forward Propagation: the Overall Flow

  • Equal loss values do not mean the two models are the same
  • Regularization penalty term
  • We always want the model not to be too complex; an overfit model is useless
  • Softmax classifier
  • Sigmoid: g(z) = 1 / (1 + e^(-z))
  • Normalization (normalize)
  • e^x amplifies the differences between scores
  • Forward propagation produces the loss value (sketch below)
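
  • A minimal sketch of the Softmax step: exponentiate the scores (e^x amplifies differences), normalize to probabilities, then take the negative log of the correct class as the loss:

import torch

scores = torch.tensor([3.2, 5.1, -1.7])              # class scores from the forward pass
probs = torch.exp(scores) / torch.exp(scores).sum()  # normalize to probabilities
loss = -torch.log(probs[0])                          # cross-entropy loss assuming class 0 is correct
print(probs, loss)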

43. How Backpropagation Is Computed

  • Gradient descent
  • Partial derivatives and the chain rule
  • The gradient is passed back step by step (worked example below)
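
  • A tiny worked example of the chain rule, assuming z = (wx + b)^2, so dz/dw = 2(wx + b) * x; autograd reproduces the hand computation:

import torch

w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)
x = torch.tensor(3.0)

z = (w * x + b) ** 2   # composite function
z.backward()           # the gradient is passed back step by step
print(w.grad)          # 2 * (w*x + b) * x = 2 * 7 * 3 = 42
print(b.grad)          # 2 * (w*x + b) * 1 = 14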

44. Overall Neural Network Architecture

  • Computation can proceed one large block at a time
  • Add gate unit (distributes the gradient equally to its inputs)
  • MAX gate unit: routes the whole gradient to the largest input
  • Multiply gate unit (swaps the input values in the gradient)
  • Overall architecture:
  • Layered structure, neurons, full connectivity, non-linearity

45. Neural Network Architecture Details

  • Basic structure: f = W2 max(0, W1x) (sketch below)
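
  • A direct sketch of this basic structure (hypothetical sizes: 4-dim input, 8 hidden units, 3 outputs):

import torch

x = torch.randn(4)
W1 = torch.randn(8, 4)
W2 = torch.randn(3, 8)

hidden = torch.clamp(W1 @ x, min=0)   # max(0, W1 x): the non-linearity
f = W2 @ hidden                       # second linear layer
print(f)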

46. Effect of Neuron Count on the Result

  • More is generally better, but with a growing risk of overfitting

47. Regularization and Activation Functions

  • Effect of the penalty strength on the result
  • Common activation functions: Sigmoid, ReLU, Tanh, etc.
  • Sigmoid's weakness: for very large or very small inputs, the gradient vanishes
  • ReLU: max(0, x) (comparison sketch below)
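
  • A quick sketch of why Sigmoid saturates while ReLU does not: at large |z| the sigmoid gradient is nearly zero, while ReLU's gradient stays 1 for positive inputs:

import torch

z = torch.tensor([-10.0, 0.0, 10.0], requires_grad=True)
torch.sigmoid(z).sum().backward()
print(z.grad)    # ~[0.00005, 0.25, 0.00005]: vanishing at both ends

z2 = torch.tensor([-10.0, 0.0, 10.0], requires_grad=True)
torch.relu(z2).sum().backward()
print(z2.grad)   # [0., 0., 1.]: gradient is 1 wherever z > 0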

48. Fixing Overfitting in Neural Networks

  • Data preprocessing: different preprocessing can change the model's performance dramatically!
  • Parameter initialization: usually done with a random strategy
  • DROP-OUT: overfitting is one of the biggest headaches in neural networks
  • DROP-OUT randomly "kills" some neurons during training (sketch below)
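
  • A minimal DROP-OUT sketch: nn.Dropout randomly zeroes neurons in training mode and is a no-op in eval mode:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)   # each neuron is "killed" with probability 0.5
x = torch.ones(8)

drop.train()
print(drop(x))   # roughly half the entries zeroed; survivors scaled by 1/(1-p)

drop.eval()
print(drop(x))   # identity at test time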

49. PyTorch in Practice

50. A Brief Look at PyTorch's Development

  • torch: the name literally means "torch" (a flame)
  • caffe: an older framework
  • tensorflow: somewhat hard to debug
  • keras: a wrapper around TensorFlow
  • Installing PyTorch

51. Basic PyTorch Operations

import torch

# Create an uninitialized matrix
x = torch.empty(5, 3)

# A random one
x = torch.rand(5, 3)

# Initialize an all-zeros matrix
x = torch.zeros(5, 3, dtype=torch.long)

# Construct directly from data
x = torch.tensor([5.5, 3])
x = x.new_ones((5, 3), dtype=torch.double)
x = torch.randn_like(x, dtype=torch.float)

# Show the matrix size
x.size()

# Addition
y = torch.rand(5, 3)
x + y
torch.add(x, y)

# Indexing
x[:, 1]

# view reshapes the tensor
x = torch.rand(4, 4)
y = x.view(16)
z = x.view(-1, 8)   # -1 means: infer this dimension automatically
print(x.size(), y.size(), z.size())

# Interoperating with NumPy (tensor and array share memory)
a = torch.ones(5)
b = a.numpy()

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)

52. The Autograd Mechanism

  • The most impressive thing the framework does is compute the whole backward pass for us
import torch

# Tensors that require gradients can be flagged manually:
# Method 1:
x = torch.randn(3, 4, requires_grad=True)

# Method 2:
x = torch.randn(3, 4)
x.requires_grad = True

b = torch.randn(3, 4, requires_grad=True)
t = x + b
y = t.sum()
y.backward()  # backpropagate from y
print(b.grad)  # gradient with respect to b

# Computation flow
x = torch.rand(1)
b = torch.rand(1, requires_grad=True)
w = torch.rand(1, requires_grad=True)
y = w * x
z = y + b
# Backward pass
z.backward(retain_graph=True)  # gradients accumulate unless cleared
print(w.grad)
print(b.grad)

53. Linear Regression Demo: Data and Parameter Setup

54. Training the Regression Model

import numpy as np
import torch

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
print(x_train.shape)

y_values = [2 * i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
print(y_train.shape)

import torch.nn as nn

# Linear regression model: a fully-connected layer with no activation function
class LinearRegressionModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)
    
    def forward(self, x):
        out = self.linear(x)
        return out

input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)

# Specify hyperparameters and the loss function
epochs = 1000
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)   # optimizer
criterion = nn.MSELoss()

# Train the model
for epoch in range(epochs):
    epoch += 1
    # Remember to convert to tensors
    inputs = torch.from_numpy(x_train)
    labels = torch.from_numpy(y_train)

    # Gradients must be cleared at every iteration
    optimizer.zero_grad()

    # Forward pass
    outputs = model(inputs)
    
    # Compute the loss
    loss = criterion(outputs, labels)

    # Backward pass
    loss.backward()

    # Update the weight parameters
    optimizer.step()
    if epoch % 50 == 0:
        print(f'epoch {epoch}, loss {loss.item()}')

# Test the model's predictions
predicted = model(torch.from_numpy(x_train)).data.numpy()
print(predicted)

# Saving and loading the model
torch.save(model.state_dict(), 'model.pkl')
model.load_state_dict(torch.load('model.pkl'))

  • Training on the GPU
import numpy as np
import torch

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
print(x_train.shape)

y_values = [2 * i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
print(y_train.shape)

import torch.nn as nn

# Linear regression model: a fully-connected layer with no activation function
class LinearRegressionModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)
    
    def forward(self, x):
        out = self.linear(x)
        return out

input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)


device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)


# Specify hyperparameters and the loss function
epochs = 1000
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)   # optimizer
criterion = nn.MSELoss()

# Train the model
for epoch in range(epochs):
    epoch += 1
    # Remember to convert to tensors (and move them to the device)
    inputs = torch.from_numpy(x_train).to(device)
    labels = torch.from_numpy(y_train).to(device)

    # Gradients must be cleared at every iteration
    optimizer.zero_grad()

    # Forward pass
    outputs = model(inputs)
    
    # Compute the loss
    loss = criterion(outputs, labels)

    # Backward pass
    loss.backward()

    # Update the weight parameters
    optimizer.step()
    if epoch % 50 == 0:
        print(f'epoch {epoch}, loss {loss.item()}')

# Test the model's predictions (inputs go to the device, results come back to the CPU)
predicted = model(torch.from_numpy(x_train).to(device)).data.cpu().numpy()
print(predicted)

# Saving and loading the model
torch.save(model.state_dict(), 'model.pkl')
model.load_state_dict(torch.load('model.pkl'))

55. Common Tensor Formats

  • scalar: a single value
  • vector: a 1-D array
  • matrix: a 2-D array
  • n-dimensional tensor: a higher-dimensional array
import torch
from torch import tensor

# Scalar
x = tensor(42.)
x.dim()
x.size()

# Vector
v = tensor([1.5, -0.5, 3.0])
v.dim()
v.size()

# Matrix
M = tensor([[1, 2], [3, 4]])
M.matmul(M)

56. Introduction to the Hub Module

  • Call models that others have already trained
  • GitHub: https://github.com/pytorch/hub
  • Models: https://pytorch.org/hub/research-models
  • torch.hub.list('') lists the entry points a repo publishes
  • One line of code and you are done (example below)
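
  • A sketch of the one-liner (needs network access; resnet18 is one of the models published under pytorch/vision):

import torch

print(torch.hub.list('pytorch/vision'))   # list the entry points the repo publishes
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()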

57. Temperature Dataset and Task Introduction

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.optim as optim
import warnings

warnings.filterwarnings("ignore")
# %matplotlib inline 

features = pd.read_csv('temps.csv')

features.head()  # look at the first five rows

print("Data dimensions", features.shape)

# Process the date columns
import datetime

# Extract year, month, and day separately
years = features['year']
months = features['month']
days = features['day']

# datetime format
dates = [str(int(year)) + "-" + str(int(month)) + "-" + str(int(day)) for year, month, day in zip(years, months, days)]
dates = [datetime.datetime.strptime(date, "%Y-%m-%d") for date in dates]

print(dates[:5])

# Prepare to plot
# Set the default style
plt.style.use('fivethirtyeight')

# Lay out the subplots
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
fig.autofmt_xdate(rotation=45)

# Labels (actual values)
ax1.plot(dates, features['actual'])
ax1.set_xlabel(''); ax1.set_ylabel("Temperature"); ax1.set_title("Max Temp")

# Yesterday
ax2.plot(dates, features['temp_1'])
ax2.set_xlabel(''); ax2.set_ylabel("Temperature"); ax2.set_title("Previous Max Temp")

# Two days ago
ax3.plot(dates, features['temp_2'])
ax3.set_xlabel('Date'); ax3.set_ylabel("Temperature"); ax3.set_title("Two Days Prior Max Temp")

# My goofball friend
ax4.plot(dates, features['friend'])
ax4.set_xlabel('Date'); ax4.set_ylabel("Temperature"); ax4.set_title("Friend Estimate")

plt.tight_layout(pad=2)

# One-hot encoding
features = pd.get_dummies(features)
features.head()

# Labels
labels = np.array(features['actual'])

# Remove the label column from the features
features = features.drop('actual', axis=1)

# Save the column names separately for later use
feature_list = list(features.columns)

# Convert to a suitable format
features = np.array(features)

print(features.shape)

from sklearn import preprocessing
# Preprocessing: standardization
input_features = preprocessing.StandardScaler().fit_transform(features)


# Build the network model
x = torch.tensor(input_features, dtype=float)

y = torch.tensor(labels, dtype=float)

# Initialize the weight parameters
weights = torch.randn((14, 128), dtype=float, requires_grad=True)
biases = torch.randn(128, dtype=float, requires_grad=True)
weights2 = torch.randn((128, 1), dtype=float, requires_grad=True)
biases2 = torch.randn(1, dtype=float, requires_grad=True)

learning_rate = 0.001
losses = []

for i in range(1000):
    # Hidden layer
    hidden = x.mm(weights) + biases
    # Apply the activation function
    hidden = torch.relu(hidden)
    # Predictions
    predictions = hidden.mm(weights2) + biases2
    # Compute the loss
    loss = torch.mean((predictions - y) ** 2)
    losses.append(loss.data.numpy())

    # Print the loss value
    if i % 100 == 0:
        print('loss:', loss)
    
    # Backward pass
    loss.backward()

    # Update the parameters
    weights.data.add_(-learning_rate * weights.grad.data)
    biases.data.add_(-learning_rate *  biases.grad.data)
    weights2.data.add_(-learning_rate * weights2.grad.data)
    biases2.data.add_(-learning_rate *  biases2.grad.data)

    # Remember to clear the gradients every iteration
    weights.grad.data.zero_()
    biases.grad.data.zero_() 
    weights2.grad.data.zero_()
    biases2.grad.data.zero_()
    

58. Building the Network Architecture Step by Step

59. Training the Network with Simpler Code

import matplotlib.pyplot as plt
import pandas as pd
import datetime
import numpy as np
import torch

input_size = input_features.shape[1]
hidden_size = 128
output_size = 1
batch_size = 16

my_nn = torch.nn.Sequential(
  torch.nn.Linear(input_size, hidden_size),
  torch.nn.Sigmoid(),
  torch.nn.Linear(hidden_size, output_size),
)
cost = torch.nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(my_nn.parameters(), lr=0.001)  # Adam optimizer

# Train the network
losses = []
for i in range(1000):
  batch_loss = []
  # Mini-batch training
  for start in range(0, len(input_features), batch_size):
    end = start + batch_size if start + batch_size < len(input_features) else len(input_features)
    xx = torch.tensor(input_features[start:end], dtype=torch.float)
    yy = torch.tensor(labels[start:end], dtype=torch.float).reshape(-1, 1)  # match the prediction shape to avoid broadcasting
    prediction = my_nn(xx)
    loss = cost(prediction, yy)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    batch_loss.append(loss.data.numpy())

  # Print the loss
  if i % 100 == 0:
    losses.append(np.mean(batch_loss))
    print(i, np.mean(batch_loss))

x = torch.tensor(input_features, dtype=torch.float)
predict = my_nn(x).data.numpy()

# Convert the date format
dates = [str(int(year)) + '-' + str(int(month)) + '-' + str(int(day)) for year, month, day in zip(years, months, days)]
dates = [datetime.datetime.strptime(date, "%Y-%m-%d") for date in dates]

# Build a table holding the dates and their true labels
true_data = pd.DataFrame(data={'date': dates, 'actual': labels})

# Likewise, another table for the dates and the model's predictions
months = features[:, feature_list.index('month')]
days = features[:, feature_list.index('day')]
years = features[:, feature_list.index('year')]

test_dates = [str(int(year)) + '-' + str(int(month)) + "-" + str(int(day)) for year, month, day in
              zip(years, months, days)]

test_dates = [datetime.datetime.strptime(date, "%Y-%m-%d") for date in test_dates]

predictions_data = pd.DataFrame(data={'date': test_dates, 'prediction': predict.reshape(-1)})

# True values
plt.plot(true_data['date'], true_data['actual'], 'b-', label='actual')

# Predicted values
plt.plot(predictions_data['date'], predictions_data['prediction'], 'ro', label='prediction')
plt.xticks(rotation='60')
plt.legend()


# Titles
plt.xlabel('Date'); plt.ylabel('Maximum Temperature(F)'); plt.title('Actual and Predicted Values')

60. Classification Task Overview

  • MNIST classification task:
  • Basic network construction and training methods, common functions explained
  • The torch.nn.functional module
  • The nn.Module module

61. Building the Classification Network

  • torch.nn.functional
  • nn.functional vs. nn.Module
  • When should you use nn.functional? If the model has learnable parameters, prefer nn.Module; in other cases nn.functional is usually simpler
import torch
import torch.nn.functional as F

loss_func = F.cross_entropy


def model(xb):
  return xb.mm(weights) + bias


bs = 64
xb = x_train[0:bs]
yb = y_train[0:bs]
weights = torch.randn([784, 10], dtype=torch.float, requires_grad=True)
bias = torch.zeros(10, requires_grad=True)

print(loss_func(model(xb), yb))
  • Create a Module to simplify the code further
  • A Module's learnable parameters can be retrieved via named_parameters() or parameters(), which return iterators
from torch import nn
import torch.nn.functional as F

class Mnist_NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 128)
        self.hidden2 = nn.Linear(128, 256)
        self.out = nn.Linear(256, 10)
    
    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.relu(self.hidden2(x))
        x = self.out(x)
        return x

net = Mnist_NN()
print(net)

62. The Dataset Module and How to Use It

  • Use TensorDataset and DataLoader to simplify data handling
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader

train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)

valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)

def get_data(train_ds, valid_ds, bs):
    return (
      DataLoader(train_ds, batch_size=bs, shuffle=True),
      DataLoader(valid_ds, batch_size=bs * 2)
    )

  • During training, call model.train() so Batch Normalization and Dropout behave normally
  • During testing, call model.eval() so Batch Normalization and Dropout are disabled (see the fit sketch below)
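
  • A minimal fit-loop sketch wiring these pieces together (assumes model, loss_func, and an optimizer opt already exist):

import torch

def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
    for epoch in range(epochs):
        model.train()    # enable Batch Normalization and Dropout
        for xb, yb in train_dl:
            loss = loss_func(model(xb), yb)
            loss.backward()
            opt.step()
            opt.zero_grad()

        model.eval()     # disable Batch Normalization and Dropout
        with torch.no_grad():
            val_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl) / len(valid_dl)
        print(epoch, val_loss.item())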

63. Application Areas of Convolutional Networks

  • Detection tasks
  • Classification and retrieval
  • Super-resolution reconstruction
  • Medical tasks
  • OCR
  • Autonomous driving
  • Face recognition

64. What Convolution Does

  • CNN: convolutional neural network
  • NN: traditional (fully-connected) network
  • Overall architecture:
  • Input layer, convolutional layers, pooling layers, fully-connected layers
  • Feature map: the image after convolution

65. Computing Convolutional Feature Values

  • Image color channels
  • Convolution kernels

66. Obtaining the Feature Map Representation

67. Effect of Stride and Kernel Size on the Result

  • Parameters of a convolutional layer:
  • Sliding-window stride, kernel size, edge padding, number of kernels

68. Edge Padding Methods

  • Zero padding
  • Number of kernels

69. Feature Map Size Calculation and Parameter Sharing

  • Convolutional parameter sharing (size formula below)
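
  • The standard feature-map size formula, with W = input size, F = kernel size, P = padding, S = stride: output = (W - F + 2P) / S + 1. A quick check in code:

def conv_out_size(W, F, P, S):
    return (W - F + 2 * P) // S + 1

print(conv_out_size(32, 5, 2, 1))   # 32: padding=2 keeps a 32x32 input the same size
print(conv_out_size(28, 5, 2, 1))   # 28: matches the MNIST network later in these notes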

70. The Role of the Pooling Layer

  • Compress the data (downsampling)
  • MAX POOLING

71. Overall Network Architecture

  • Conv, ReLU, pooling: feature extraction
  • (Flatten into a feature vector) FC (fully-connected) layers
  • Only layers with trainable parameters count as layers (hence "7/8/9-layer" networks)

72. The VGG Architecture

  • Classic network: AlexNet (2012, now rarely used), an 8-layer network with 5 convolutional and 3 fully-connected layers
  • Classic network: VGG (2014), 16 or 19 layers (around 3 days to train)

73. Residual Networks: ResNet (2015)

  • Classic network: ResNet
  • Core idea: at least no worse than before (sketch below)
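
  • A minimal residual-block sketch of that idea (illustrative sizes): the skip connection lets the block fall back to the identity if the convolutions learn nothing useful, so stacking it is at least no worse than before:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)   # skip connection: output = F(x) + x

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 8, 8)).shape)   # torch.Size([1, 16, 8, 8])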

74. The Role of the Receptive Field

  • Receptive field

75. Defining the Convolutional Network Parameters

# Build a convolutional neural network
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import matplotlib.pylab as plt
import numpy as np
# % matplotlib inline

# First, read the data
# Build the train and test sets separately; DataLoader iterates over the batches

# Hyperparameters
input_size = 28    # image size: 28*28
num_classes = 10   # number of label classes
num_epochs = 3     # total number of training epochs
batch_size = 64    # size of one batch: 64 images

# Training set
train_dataset = datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
# Test set
test_dataset = datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor())

# Build batch data
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=True)

# Building the convolutional network module
# A convolution, ReLU, and pooling layer are usually written as one bundle
# Note that the convolution output is still a feature map; it must be flattened into a vector before classification or regression
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(    # input size (1, 28, 28)
          nn.Conv2d(
            in_channels=1,    # grayscale image
            out_channels=16,  # how many feature maps to produce
            kernel_size=5,    # kernel size
            stride=1,         # stride
            padding=2,        # to keep the output the same size, set padding=(kernel_size-1)/2 when stride=1
          ),                  # output feature map: (16, 28, 28)
          nn.ReLU(),          # ReLU layer
          nn.MaxPool2d(kernel_size=2),   # pooling over 2*2 regions; output: (16, 14, 14)
        )
        self.conv2 = nn.Sequential(  # input of the next bundle: (16, 14, 14)
          nn.Conv2d(16, 32, 5, 1, 2),  # output: (32, 14, 14)
          nn.ReLU(),                 # ReLU layer
          nn.MaxPool2d(2),           # output: (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)   # fully-connected layer produces the result
        
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)    # flatten; result: (batch_size, 32*7*7)
        output = self.out(x)
        return output


# Accuracy as the evaluation metric
def accuracy(predictions, labels):
    pred = torch.max(predictions.data, 1)[1]
    rights = pred.eq(labels.data.view_as(pred)).sum()
    return rights, len(labels)

# Train the network model
net = CNN()    # instantiate
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)   # Adam optimizer

# Training loop
for epoch in range(num_epochs):
    # save the results of the current epoch
    train_rights = []

    for batch_idx, (data, target) in enumerate(train_loader):  # loop over every batch in the loader
        net.train()
        output = net(data)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        right = accuracy(output, target)
        train_rights.append(right)
        
        if batch_idx % 100 == 0:
          
          net.eval()
          val_rights = []
          
          for data, target in test_loader:
            output = net(data)
            right = accuracy(output, target)
            val_rights.append(right)
          
          # Accuracy computation
          train_r = (sum([tup[0] for tup in train_rights]), sum([tup[1] for tup in train_rights]))
          val_r = (sum([tup[0] for tup in val_rights]), sum([tup[1] for tup in val_rights]))
          
          print('Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\tTrain acc: {:.2f}%\tTest acc: {:.2f}%'.format(
            epoch, batch_idx * batch_size, len(train_loader.dataset),
            100 * batch_idx / len(train_loader),
            loss.data,
            100 * train_r[0].numpy() / train_r[1],
            100 * val_r[0].numpy() / val_r[1]
          ))

76. The Vision Module Explained

  • torchvision.datasets
  • torchvision.models: VGG, ResNet, AlexNet, SqueezeNet… prebuilt architectures
  • torchvision.transforms: image preprocessing
  • Transfer learning

77. Defining and Configuring the Classification Dataset

  • Data preprocessing:

  • Data augmentation: built into torchvision's transforms module; very practical

  • Data preprocessing: torchvision's transforms implements this for us too

  • The DataLoader module reads batch data directly

  • Network setup:

  • Load a pretrained model (transfer learning)

  • Other people's tasks differ from ours, so the final head layer (usually the last fully-connected layer) must be replaced

  • You can retrain everything from scratch, or train only the layers specific to our task, since the early layers do feature extraction and the underlying goal is the same

  • Saving and testing the model:

  • Save selectively: keep the model only when it improves

  • Load the model for actual testing

78. The Role of Image Augmentation

import os
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import torch
from torch import nn
import torch.optim as optim
import torchvision
from torchvision import transforms, models, datasets

import imageio
import time
import warnings
import random
import sys
import copy
import json
from PIL import Image

# Data loading and preprocessing
data_dir = './flower_data/'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'

# Prepare the data source:
# data_transforms specifies all the image preprocessing operations
# ImageFolder assumes images are saved one folder per class, with the folder name as the class name
data_transforms = {
  'train': transforms.Compose(
    [transforms.RandomRotation(45),  # random rotation, chosen between -45 and 45 degrees
     transforms.CenterCrop(224),  # crop from the center
     transforms.RandomHorizontalFlip(p=0.5),  # random horizontal flip with the given probability
     transforms.RandomVerticalFlip(p=0.5),  # random vertical flip
     transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1),  # brightness, contrast, saturation, hue
     transforms.RandomGrayscale(p=0.025),  # convert to grayscale with some probability (output keeps 3 channels)
     transforms.ToTensor(),
     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # means and standard deviations
     ]
  ),
  'valid': transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
  ])
}

batch_size = 8

image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'valid']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True) for x in
               ['train', 'valid']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'valid']}
class_names = image_datasets['train'].classes

# Map labels to their actual names
with open('cat_to_name.json', 'r') as f:
  cat_to_name = json.load(f)


# Show some of the data
def im_convert(tensor):
  image = tensor.to("cpu").clone().detach()
  image = image.numpy().squeeze()
  image = image.transpose(1, 2, 0)
  image = image * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
  image = image.clip(0, 1)

  return image


fig = plt.figure(figsize=(20, 12))
columns = 4
rows = 2

dataiter = iter(dataloaders['valid'])
inputs, classes = next(dataiter)

for idx in range(columns * rows):
  ax = fig.add_subplot(rows, columns, idx + 1, xticks=[], yticks=[])
  ax.set_title(cat_to_name[str(int(class_names[classes[idx]]))])
  plt.imshow(im_convert(inputs[idx]))
plt.show()

# Load a model from torchvision.models, using the pretrained weights as initialization
model_name = 'resnet'  # many options: ['resnet', 'alexnet', 'vgg', 'squeezenet', 'densenet', 'inception']
# Whether to reuse the pretrained features
feature_extract = True

# Whether to train on GPU
train_on_gpu = torch.cuda.is_available()

if not train_on_gpu:
  print('CUDA is not available. Training on CPU...')
else:
  print('CUDA is available! Training on GPU...')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


def set_parameter_requires_grad(model, feature_extracting):
  if feature_extracting:
    for param in model.parameters():
      param.requires_grad = False


model_ft = models.resnet152()  # 152 layers
print(model_ft)

# Following the PyTorch official example
def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True):
    # Pick the right model; initialization differs slightly between models
    model_ft = None
    input_size = 0
    
    if model_name == "resnet":
        """
        resnet152
        """
        model_ft = models.resnet152(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.fc.in_features   # replace the final layer
        model_ft.fc = nn.Sequential(nn.Linear(num_ftrs, 102),
                                    nn.LogSoftmax(dim=1))
        input_size = 224
        
    elif model_name == "alexnet":
        model_ft = models.alexnet(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs, num_classes)
        input_size = 224
        
    elif model_name == "vgg":
        model_ft = models.vgg16(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs, num_classes)
        input_size = 224
    
    elif model_name = "squeezenet":
        model_ft = models.squeezenet1_0(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        model_ft.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1,1), stride=(1,1))
        model_ft.num_classes = num_classes
        input_size = 224
        
    elif model_name == "densenet":
        model_ft = models.densenet121(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier.in_features
        model_ft.classifier = nn.Linear(num_ftrs, num_classes)
        input_size = 224
        
    elif model_name == "inception":
        model_ft = models.inception_v3(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.AuxLogits.fc.in_features
        model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Linear(num_ftrs, num_classes)
        input_size = 299
    
    else:
        print("Invalid model name, exiting...")
        exit()
        
    return model_ft, input_size

# Decide which layers should be trained
model_ft, input_size = initialize_model(model_name, 102, feature_extract, use_pretrained=True)

# GPU computation
model_ft = model_ft.to(device)

# Model checkpoint file
filename = 'checkpoint.pth'

# Whether to train all layers
params_to_update = model_ft.parameters()
print("Params to learn:")
if feature_extract:
    params_to_update = []
    for name, param in model_ft.named_parameters():
        if param.requires_grad == True:
            params_to_update.append(param)
            print("\t", name)
else:
    for name, param in model_ft.named_parameters():
        if param.requires_grad == True:
            print("\t", name)

# Optimizer setup
optimizer_ft = optim.Adam(params_to_update, lr=1e-2)
scheduler = optim.lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)  # decay the learning rate to 1/10 every 7 epochs
# The final layer already applies LogSoftmax(), so nn.CrossEntropyLoss() cannot be used here: CrossEntropyLoss is LogSoftmax plus nn.NLLLoss()
criterion = nn.NLLLoss()


# Training module
def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False, filename=filename):
    since = time.time()
    best_acc = 0
    
    model.to(device)

    val_acc_history = []
    train_acc_history = []
    train_losses = []
    valid_losses = []
    LRs = [optimizer.param_groups[0]['lr']]
    
    best_model_wts = copy.deepcopy(model.state_dict())

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        
        # Training and validation phases
        for phase in ['train', 'valid']:
            if phase == 'train':
                model.train()    # training mode
            else:
                model.eval()     # evaluation mode

            running_loss = 0.0
            running_corrects = 0

            # iterate over all the data
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the gradients
                optimizer.zero_grad()
                # compute and update the gradients only during training
                with torch.set_grad_enabled(phase == 'train'):
                    if is_inception and phase == 'train':
                        outputs, aux_outputs = model(inputs)
                        loss1 = criterion(outputs, labels)
                        loss2 = criterion(aux_outputs, labels)
                        loss = loss1 + 0.4 * loss2
                    else:   # resnet takes this branch
                        outputs = model(inputs)
                        loss = criterion(outputs, labels)

                    _, preds = torch.max(outputs, 1)

                    # update the weights only in the training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # accumulate loss and correct counts for this batch
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
        
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
            
            time_elapsed = time.time() - since
            print('Time elapsed {:.0f}m {:.0f}s'.format(time_elapsed//60, time_elapsed % 60))
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            
            # Keep the best model seen so far
            if phase == "valid" and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
                state = {
                  'state_dict': model.state_dict(),
                  'best_acc': best_acc,
                  'optimizer': optimizer.state_dict(),
                }
                torch.save(state, filename)
            if phase == "valid":
                val_acc_history.append(epoch_acc)
                valid_losses.append(epoch_loss)
                scheduler.step()  # StepLR steps once per epoch; it takes no metric argument
            if phase == "train":
                train_acc_history.append(epoch_acc)
                train_losses.append(epoch_loss)
    
            
        print('Optimizer learning rate: {:.7f}'.format(optimizer.param_groups[0]['lr']))
        LRs.append(optimizer.param_groups[0]['lr'])
        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))

    # After training, use the best weights as the final model
    model.load_state_dict(best_model_wts)
    return model, val_acc_history, train_acc_history, valid_losses, train_losses, LRs
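
# A sketch of calling the training module defined above (num_epochs here is illustrative):
model_ft, val_acc_history, train_acc_history, valid_losses, train_losses, LRs = train_model(
    model_ft, dataloaders, criterion, optimizer_ft, num_epochs=20,
    is_inception=(model_name == "inception"))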

79. Building Batch Data

80. Transfer Learning

  • Reuse someone else's network

81. Transfer Learning Strategies

  • Residual networks
  • VGG, ResNet

82. Loading a Pretrained Network Model

83. Configuring the Optimizer Module

84. Implementing the Training Module

85. Training Results and Saving the Model

86. Loading the Model to Predict on Test Data

  • Load the trained model (sketch below)
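
  • A minimal sketch (using filename, model_ft, dataloaders, and device from section 78): load the saved checkpoint and predict on one validation batch:

checkpoint = torch.load(filename)
model_ft.load_state_dict(checkpoint['state_dict'])
model_ft.eval()

inputs, labels = next(iter(dataloaders['valid']))
with torch.no_grad():
    outputs = model_ft(inputs.to(device))
preds = torch.max(outputs, 1)[1]   # predicted class indices
print(preds[:8])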

87. ResNet Paper Walkthrough

88. ResNet Architecture Walkthrough
