1. Image Classification
1.1 How does a model classify an image?
Take the bee-vs-ant binary classification model as an example:
From a human's perspective, it is a process that takes an RGB image as input and outputs the name of an animal.
From the computer's perspective, it is a process that takes a 3-D tensor as input and outputs a string.
The class name is obtained by converting a label, here 0 or 1; the label 0 or 1 is obtained by taking the index of the maximum value in the vector the model outputs; and the output vector itself is produced by the model's (possibly very complex) computation.
The actual execution order:
A 3-D tensor is fed into the model; after a series of mathematical operations, the model outputs a vector. Taking the maximum over that vector gives a label, and converting the label to its class name yields the final string output.
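The vector → label → class-name conversion described above amounts to an argmax followed by a list lookup. A minimal sketch, with made-up logit values standing in for real model output:

```python
import torch

classes = ["ants", "bees"]            # label 0 -> "ants", label 1 -> "bees"
outputs = torch.tensor([[1.3, 3.7]])  # hypothetical model output, shape (1, 2)

# torch.max along dim=1 returns (max values, their indices);
# the index of the largest score is the predicted label
_, pred_int = torch.max(outputs, dim=1)

# the label is converted to a class name via a list lookup
pred_str = classes[int(pred_int)]
print(pred_str)  # prints: bees
```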
1.2 Inference for image classification
The overall steps of an image-classification task:
- Prepare the data and labels
- Choose a model, a loss function, and an optimizer
- Write the training code
- Write the inference code
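The training-code step can be sketched as the standard PyTorch loop below. The tiny `nn.Linear` stand-in model and the random batch are hypothetical, purely to keep the snippet self-contained; in the actual project the model is a resnet18 with a 2-class head and the batches come from a DataLoader:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                    # stand-in 2-class model
criterion = nn.CrossEntropyLoss()          # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # optimizer

# one hypothetical batch: 8 samples, 4 features, labels in {0, 1}
inputs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()              # clear gradients from the previous step
outputs = model(inputs)            # forward pass
loss = criterion(outputs, labels)  # compute the loss
loss.backward()                    # backward pass
optimizer.step()                   # update the weights
```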
Basic steps of the inference code:
- Load the data and the model
- Transform the data, e.g. RGB image → 4D tensor
- Forward pass
- Output and save the prediction results
Notes for the inference stage:
- Make sure the model is in eval state, not training
- Use torch.no_grad() to reduce memory consumption
- Keep data preprocessing consistent with training: RGB or BGR?
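The three precautions can be demonstrated on a tiny stand-in model (the `nn.Sequential` below is hypothetical; BatchNorm and Dropout are included because they are exactly the layers whose behavior differs between training and eval mode):

```python
import torch
import torch.nn as nn

# hypothetical stand-in model with train/eval-sensitive layers
model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Dropout(0.5))

model.eval()               # 1. eval state: fixes BatchNorm statistics, disables Dropout
assert not model.training  # model.training is now False

x = torch.randn(2, 4)
with torch.no_grad():      # 2. no autograd graph is built, saving memory
    y = model(x)
assert not y.requires_grad # the output carries no gradient history

# 3. preprocessing must match training, e.g. channel order:
img_bgr = torch.randn(1, 3, 8, 8)  # pretend this came from OpenCV (BGR)
img_rgb = img_bgr[:, [2, 1, 0]]    # reverse the channel order to RGB
```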
2. resnet18 inference code
# -*- coding: utf-8 -*-
import os
import time

import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.models as models
from PIL import Image
from matplotlib import pyplot as plt

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")

# config
vis = True
# vis = False
vis_row = 4

norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]

inference_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

classes = ["ants", "bees"]


def img_transform(img_rgb, transform=None):
    """
    Convert the data into the form the model reads.
    :param img_rgb: PIL Image
    :param transform: torchvision.transform
    :return: tensor
    """
    if transform is None:
        raise ValueError("transform not found! A transform is required to process img")
    img_t = transform(img_rgb)  # convert the RGB image to a tensor
    return img_t


def get_img_name(img_dir, format="jpg"):
    """
    Collect the names of files with the given format under a directory.
    :param img_dir: str
    :param format: str
    :return: list
    """
    file_names = os.listdir(img_dir)
    img_names = list(filter(lambda x: x.endswith(format), file_names))
    if len(img_names) < 1:
        raise ValueError("no {} files found under {}".format(format, img_dir))
    return img_names


def get_model(m_path, vis_model=False):
    resnet18 = models.resnet18()
    num_ftrs = resnet18.fc.in_features
    resnet18.fc = nn.Linear(num_ftrs, 2)  # replace the head with a 2-class output layer

    # map_location ensures a GPU-saved checkpoint also loads on CPU
    checkpoint = torch.load(m_path, map_location=device)
    resnet18.load_state_dict(checkpoint['model_state_dict'])

    if vis_model:
        from torchsummary import summary
        summary(resnet18, input_size=(3, 224, 224), device="cpu")

    return resnet18


if __name__ == "__main__":

    img_dir = os.path.join("..", "..", "data/hymenoptera_data/val/bees")
    model_path = "./checkpoint_14_epoch.pkl"
    time_total = 0
    img_list, img_pred = list(), list()

    # 1. data
    img_names = get_img_name(img_dir)
    num_img = len(img_names)

    # 2. model
    resnet18 = get_model(model_path, True)
    resnet18.to(device)  # move the model to the target device
    resnet18.eval()      # eval() tells the model it is not in training state

    with torch.no_grad():  # torch.no_grad() tells PyTorch not to compute gradients below
        for idx, img_name in enumerate(img_names):

            path_img = os.path.join(img_dir, img_name)

            # step 1/4 : path --> img
            img_rgb = Image.open(path_img).convert('RGB')

            # step 2/4 : img --> tensor
            img_tensor = img_transform(img_rgb, inference_transform)
            img_tensor.unsqueeze_(0)  # add a batch dimension: 3D -> 4D
            img_tensor = img_tensor.to(device)

            # step 3/4 : tensor --> vector
            time_tic = time.time()
            outputs = resnet18(img_tensor)
            time_toc = time.time()

            # step 4/4 : visualization
            _, pred_int = torch.max(outputs, 1)
            pred_str = classes[int(pred_int)]

            if vis:
                img_list.append(img_rgb)
                img_pred.append(pred_str)
                if (idx + 1) % (vis_row * vis_row) == 0 or num_img == idx + 1:
                    for i in range(len(img_list)):
                        plt.subplot(vis_row, vis_row, i + 1).imshow(img_list[i])
                        plt.title("predict:{}".format(img_pred[i]))
                    plt.show()
                    plt.close()
                    img_list, img_pred = list(), list()

            time_s = time_toc - time_tic
            time_total += time_s

            print('{:d}/{:d}: {} {:.3f}s '.format(idx + 1, num_img, img_name, time_s))

    print("\ndevice:{} total time:{:.1f}s mean:{:.3f}s".
          format(device, time_total, time_total / num_img))
    if torch.cuda.is_available():
        print("GPU name:{}".format(torch.cuda.get_device_name()))
Running output and prediction results: (figures omitted)
3. resnet18 structure analysis
Classic convolutional neural networks: AlexNet, VGG, GoogLeNet, ResNet, DenseNet
Lightweight convolutional neural networks: MobileNet, ShuffleNet, SqueezeNet
Automatically searched architectures: MnasNet