Table of Contents
1 Installing TensorBoard
For installing TensorBoard on a machine without internet access, see my other article: installing TensorBoard offline.
2 EfficientNet Training Process
For training EfficientNet on a custom classification dataset without TensorBoard visualization, see my other article: EfficientNet training on a custom classification dataset.
3 TensorBoard Visualization Commands
After training finishes, to inspect the run with TensorBoard, go to the directory that contains the log folder (e.g., the parent of runs) and run:
tensorboard --logdir runs
What if the runs folder contains two sets of logs? For example:
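An illustrative layout (folder names follow TensorBoard's default time_hostname pattern; the second folder is hypothetical):

runs/
├── Apr29_13-07-06_amax/
│   └── events.out.tfevents...
└── May01_09-12-33_amax/
    └── events.out.tfevents...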
Simply run the same command as above: TensorBoard discovers event files recursively, and in the browser you can select which subfolder's run to view.
When a log_dir value was passed explicitly, e.g. SummaryWriter(log_dir='scalar'), use the command:
tensorboard --logdir scalar
Then copy the URL http://localhost:6006/ into a browser to view the results.
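A minimal sketch of how such a log could be produced (the log_dir name 'scalar' and the tag are just examples):

from torch.utils.tensorboard import SummaryWriter

# writes event files into ./scalar; view with: tensorboard --logdir scalar
writer = SummaryWriter(log_dir='scalar')
for step in range(100):
    writer.add_scalar('demo/value', step * 0.1, step)  # tag, value, global step
writer.close()  # flush buffered events to disk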
4 Problems Encountered and Solutions
4.1 Viewing Network Parameter Names
In many situations we need to know the name of a particular layer, for example:
# add conv1 weights into tensorboard
tb_writer.add_histogram(tag="conv1",
                        values=model.features.stem_conv[0].weight,
                        global_step=epoch)
tb_writer.add_histogram(tag="stage1/block0/conv1",
                        values=model.features.stage1a.block.dwconv[0].weight,
                        global_step=epoch)
How are these values obtained?
Note: names should generally not begin with a digit, otherwise problems quickly follow.
The first values would naturally be written as model.features.stem_conv.0.weight, but the .0 causes a syntax error.
The second values would naturally be written as model.features.1a.block.dwconv.0.weight, but the .1a and .0 cause syntax errors.
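The workaround is to index the container with brackets (or use getattr) instead of dotted attribute access; a minimal sketch, assuming the model structure used in this article:

# model.features.stem_conv.0.weight   # SyntaxError: an attribute cannot start with a digit
w = model.features.stem_conv[0].weight             # index the nn.Sequential instead
# getattr also works for any name that is not a valid Python identifier:
w = getattr(model.features.stem_conv, '0').weight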
But this raises the question: how do we know what path values can be written as in the first place?
Here are two ways to view a network's parameter names.
4.1.1 Method 1
model.named_parameters(): lists the names of all parameters in the model.
If you don't need the full list, you can narrow the scope using the hierarchy defined in the network structure. For example, I group the feature-extraction layers under features and the classification layers under classifier, so the two levels can be inspected separately:
for key, value in model.features.named_parameters():
    print(key)
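With the renamed EfficientNet used in this article, the loop prints names along these lines (illustrative; the exact list depends on your model definition):

stem_conv.0.weight
stem_conv.1.weight
stage1a.block.dwconv.0.weight
stage1a.block.dwconv.1.weight
...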
4.1.2 Method 2
After training, open the saved weights with the netron tool. For installing and using netron, see the article: [netron] neural network visualization.
4.2 Example: Renaming Layers When Pretrained Weights and Network Names Disagree
Change a level of the network's naming from digit-leading to letter-leading.
Before:
# 1a, 2a, 2b, ... the original index looks like this, and so do the keys in the weights file
index = str(stage + 1) + chr(i + 97)
After:
# stage1a, stage2a, stage2b, ... after the change, index looks like this
index = 'stage' + str(stage + 1) + chr(i + 97)
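For reference, chr(i + 97) maps the block index i to a lowercase letter (97 is the ASCII code of 'a'), so stage=0, i=0 produces 'stage1a' and stage=1, i=1 produces 'stage2b'.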
So how should the weights be loaded now?
weights_dict = torch.load(args.weights, map_location=device)
# loading scheme for the original index names 1a, 2a, 2b, ...
# load_weights_dict = {k: v for k, v in weights_dict.items()
#                      if model.state_dict()[k].numel() == v.numel()}

# loading scheme after the index names were renamed
load_weights_dict = {}
for k, v in weights_dict.items():
    # remap only the stage keys, e.g. features.1a.xxx -> features.stage1a.xxx
    if "features" in k and "stem_conv" not in k and "top" not in k:
        k_list = k.split(".")
        new_stage_index_name = 'stage' + k_list[1]
        k_list[1] = new_stage_index_name
        new_k = ".".join(k_list)
        k = new_k
    if model.state_dict()[k].numel() == v.numel():
        load_weights_dict.update({k: v})  # add the entry to the dict
print(model.load_state_dict(load_weights_dict, strict=False))
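Printing the return value of load_state_dict(..., strict=False) is a handy sanity check: it reports the missing_keys and unexpected_keys lists, which should be empty, or contain only layers intentionally trained from scratch (e.g., the classifier), if the renaming worked.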
5 Using TensorBoard During Training
Usage is shown in the code below.
# import the package; starts the TensorBoard session
from torch.utils.tensorboard import SummaryWriter

def main(args):
    print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
    # -------------------------------------------------------------------------------------#
    # Instantiate a SummaryWriter object.
    # A log location can be given, e.g. tb_writer = SummaryWriter(log_dir="runs/flower_experiment"),
    # which creates the folder runs/flower_experiment and saves the logs inside it.
    # If no path is given, logs default to runs/<time>_<hostname> with a randomly
    # named events file; in this example they land in runs/Apr29_13-07-06_amax.
    # -------------------------------------------------------------------------------------#
    tb_writer = SummaryWriter()

    # write the network graph into tensorboard
    init_img = torch.zeros((1, 3, 224, 224), device=device)  # shape must match the network input
    # add_graph builds the graph by tracing a forward pass of the dummy input
    # through the model
    tb_writer.add_graph(model, init_img)

    for epoch in range(args.epochs):
        ...
        # add loss, acc and lr into tensorboard
        tags = ["loss", "accuracy", "learning_rate"]
        tb_writer.add_scalar(tags[0], mean_loss, epoch)
        tb_writer.add_scalar(tags[1], acc, epoch)
        # note how the current learning rate is read from the optimizer
        tb_writer.add_scalar(tags[2], optimizer.param_groups[0]["lr"], epoch)

        # add conv1 weights into tensorboard
        # see Section 4.1 for how the values paths are written
        tb_writer.add_histogram(tag="conv1",
                                values=model.features.stem_conv[0].weight,
                                global_step=epoch)
        tb_writer.add_histogram(tag="stage1/block0/conv1",
                                values=model.features.stage1a.block.dwconv[0].weight,
                                global_step=epoch)
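One detail the excerpt leaves out: SummaryWriter buffers events in memory, so it is good practice to close the writer once training finishes (my suggestion, not in the original code):

tb_writer.close()  # flush any remaining buffered events to disk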
6 Browser Viewing Example
After the network finishes training, the following files are produced (screenshot):
Run tensorboard --logdir runs and copy the link into a browser; the command and the resulting pages are shown in the screenshots below.
7 Project Code
The code is similar to that in "EfficientNet training on a custom classification dataset"; there are two main changes.
First, in class EfficientNet(nn.Module): in model.py, index = 'stage' + str(stage + 1) + chr(i + 97) now prepends 'stage', for the reason explained above.
Second, train.py now uses tensorboard and is renamed train_tensorboard.py; the full train_tensorboard.py is given below.
import os
import math
import argparse

import torch
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter  # the "start a TensorBoard session" import shown above; VSCode generated it automatically
from torchvision import transforms
import torch.optim.lr_scheduler as lr_scheduler

from model import efficientnet_b0 as create_model
from my_dataset import MyDataSet
from utils import read_data, train_one_epoch, evaluate


def main(args):
    device = torch.device('cuda' if torch.cuda.is_available() else "cpu")

    print(args)
    print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
    # ---------------------------------------------------------------------------------------#
    # Instantiate a SummaryWriter object.
    # A log location can be given, e.g. tb_writer = SummaryWriter(log_dir="runs/flower_experiment"),
    # which creates the folder runs/flower_experiment and saves the logs inside it.
    # If no path is given, logs default to runs/<time>_<hostname> with a randomly
    # named events file; in this example they land in runs/Apr29_13-07-06_amax.
    # ---------------------------------------------------------------------------------------#
    tb_writer = SummaryWriter()
    if os.path.exists("./output") is False:
        os.makedirs("./output")

    train_images_path, train_images_label, val_images_path, val_images_label = read_data(args.data_path)

    img_size = {"B0": 224,
                "B1": 240,
                "B2": 260,
                "B3": 300,
                "B4": 380,
                "B5": 456,
                "B6": 528,
                "B7": 600}
    num_model = "B0"

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(img_size[num_model]),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
        "val": transforms.Compose([transforms.Resize(img_size[num_model]),
                                   transforms.CenterCrop(img_size[num_model]),  # center crop
                                   transforms.ToTensor(),  # to tensor; rescales 0~255 to 0~1
                                   transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])}  # mean, std

    # instantiate the training dataset
    train_dataset = MyDataSet(images_path=train_images_path,
                              images_class=train_images_label,
                              transform=data_transform["train"])

    # instantiate the validation dataset
    val_dataset = MyDataSet(images_path=val_images_path,
                            images_class=val_images_label,
                            transform=data_transform["val"])

    batch_size = args.batch_size
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers every process'.format(nw))
    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size,
                                               shuffle=True,
                                               pin_memory=True,
                                               num_workers=nw,
                                               collate_fn=train_dataset.collate_fn)

    val_loader = torch.utils.data.DataLoader(val_dataset,
                                             batch_size=batch_size,
                                             shuffle=False,
                                             pin_memory=True,
                                             num_workers=nw,
                                             collate_fn=val_dataset.collate_fn)

    # instantiate the model
    model = create_model(num_classes=args.num_classes).to(device)

    # write the network graph into tensorboard
    init_img = torch.zeros((1, 3, 224, 224), device=device)  # shape must match the network input
    # add_graph builds the graph by tracing a forward pass of the dummy input through the model
    tb_writer.add_graph(model, init_img)

    # load pretrained weights if provided
    if args.weights != "":
        if os.path.exists(args.weights):
            weights_dict = torch.load(args.weights, map_location=device)
            # loading scheme for the original index names 1a, 2a, 2b, ...
            # load_weights_dict = {k: v for k, v in weights_dict.items()
            #                      if model.state_dict()[k].numel() == v.numel()}

            # loading scheme after the index names were renamed
            load_weights_dict = {}
            for k, v in weights_dict.items():
                # remap only the stage keys, e.g. features.1a.xxx -> features.stage1a.xxx
                if "features" in k and "stem_conv" not in k and "top" not in k:
                    k_list = k.split(".")
                    new_stage_index_name = 'stage' + k_list[1]
                    k_list[1] = new_stage_index_name
                    new_k = ".".join(k_list)
                    k = new_k
                if model.state_dict()[k].numel() == v.numel():
                    load_weights_dict.update({k: v})  # add the entry to the dict
            print(model.load_state_dict(load_weights_dict, strict=False))
        else:
            raise FileNotFoundError("not found weights file: {}".format(args.weights))

    # optionally freeze weights
    if args.freeze_layers:
        for name, para in model.named_parameters():
            # freeze everything except the last conv layer and the fully connected layer
            if ("features.top" not in name) and ("classifier" not in name):
                para.requires_grad_(False)
            else:
                print("training {}".format(name))

    pg = [p for p in model.parameters() if p.requires_grad]
    optimizer = optim.SGD(pg, lr=args.lr, momentum=0.9, weight_decay=1E-4)
    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    lf = lambda x: ((1 + math.cos(x * math.pi / args.epochs)) / 2) * (1 - args.lrf) + args.lrf  # cosine
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)

    for epoch in range(args.epochs):
        # train
        mean_loss = train_one_epoch(model=model,
                                    optimizer=optimizer,
                                    data_loader=train_loader,
                                    device=device,
                                    epoch=epoch)

        scheduler.step()

        # validate
        acc = evaluate(model=model,
                       data_loader=val_loader,
                       device=device)

        print("[epoch {}] accuracy: {}".format(epoch, round(acc, 3)))

        # add loss, acc and lr into tensorboard
        tags = ["loss", "accuracy", "learning_rate"]
        tb_writer.add_scalar(tags[0], mean_loss, epoch)
        tb_writer.add_scalar(tags[1], acc, epoch)
        tb_writer.add_scalar(tags[2], optimizer.param_groups[0]["lr"], epoch)  # read the current lr from the optimizer

        # add conv1 weights into tensorboard
        tb_writer.add_histogram(tag="conv1",
                                values=model.features.stem_conv[0].weight,
                                global_step=epoch)
        tb_writer.add_histogram(tag="stage1/block0/conv1",
                                values=model.features.stage1a.block.dwconv[0].weight,
                                global_step=epoch)

        torch.save(model.state_dict(), "./output/model-{}.pth".format(epoch))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--num_classes', type=int, default=5)
    parser.add_argument('--epochs', type=int, default=30)
    parser.add_argument('--batch-size', type=int, default=16)
    parser.add_argument('--lr', type=float, default=0.01)
    parser.add_argument('--lrf', type=float, default=0.01)

    # root directory of the dataset
    # https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz
    parser.add_argument('--data-path', type=str,
                        default="./data")

    # download model weights
    # link: https://pan.baidu.com/s/1ouX0UmjCsmSx3ZrqXbowjw  password: 090i
    parser.add_argument('--weights', type=str, default='./pretrained/efficientnetb0.pth',
                        help='initial weights path')
    # caution: argparse's type=bool converts any non-empty string (including 'False') to True
    parser.add_argument('--freeze-layers', type=bool, default=False)

    opt = parser.parse_args()

    main(opt)
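To reproduce the workflow with the defaults above: run python train_tensorboard.py (adjusting --data-path and --weights to your local paths), then start tensorboard --logdir runs from the project directory and open http://localhost:6006/.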
8 Acknowledgment Links
https://www.bilibili.com/video/BV1Qf4y1C7kz?spm_id_from=333.999.0.0
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing