4.4 Visualization Tools
TensorFlow ships with TensorBoard, a visualization tool that can record training data, evaluation data, the network structure, images, and more, and display them in a web page, which is very helpful for observing the training process of a neural network. PyTorch has a similar tool: tensorboardX.
4.4.1 Introduction to tensorboardX
tensorboardX is quite powerful: it supports scalar, image, figure, histogram, audio, text, graph, onnx_graph, embedding, pr_curve, video and other summary types.
Installation is straightforward:
pip install tensorboardX
Note that the TensorBoard service itself comes with TensorFlow (for example, pip install tensorflow==1.15). Before installing, it helps to switch pip to a faster mirror such as the Tsinghua or Aliyun source.
The general workflow for tensorboardX is as follows:
1. Import tensorboardX, instantiate a SummaryWriter, and specify the log directory and other settings:
from tensorboardX import SummaryWriter
# Instantiate SummaryWriter and specify the log directory.
# If logs does not exist under the current directory, it is created automatically
writer = SummaryWriter(log_dir="logs")
# Call the instance's methods
writer.add_xxx() # e.g. add_figure, add_graph
# Close the writer
writer.close()
Notes:
1) On Windows, beware of backslash escaping in log_dir; use a raw string, e.g.:
writer = SummaryWriter(log_dir=r"D:\myboard\test\logs")
2) The signature of SummaryWriter is:
SummaryWriter(log_dir=None, comment="", **kwargs)
# comment is appended as a suffix to the log file name
3) If log_dir is not given, a runs directory is created under the current directory.
2. Call the appropriate API. The general form is:
# i.e. add_xxx(tag, object to record, iteration number)
add_xxx(tag_name, object, iteration_number)
3. Start the TensorBoard service:
tensorboard --logdir=logs --port 6006
# On Windows, again mind the path; quote it on the command line, e.g.
# tensorboard --logdir="D:\myboard\test\logs" --port 6006
4. View in a browser
Open a browser at
http://<server IP or hostname>:6006
# On the local machine you can use localhost as the hostname
and you will see the various charts saved under the logs directory.
4.4.2 Visualizing a Neural Network with tensorboardX
The previous section covered the general workflow of tensorboardX; now let us look at a few concrete examples: visualizing a network model, visualizing loss values, and visualizing feature maps.
1. Import the required modules
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torch.utils.tensorboard import SummaryWriter
2. Build the neural network:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
        self.bn = nn.BatchNorm2d(20)

    def forward(self, x):
        x = F.max_pool2d(self.conv1(x), 2)
        x = F.relu(x) + F.relu(-x)
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = self.bn(x)
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        x = F.softmax(x, dim=1)
        return x
3. Save the model as a graph:
# Save the model as a graph
# Define the input
input = torch.rand(32, 1, 28, 28)
# Instantiate the network
model = Net()
# Save the model as a graph
with SummaryWriter(log_dir=r"D:\logs", comment="Net") as w:
    w.add_graph(model, (input, ))
Then press Win+R, type cmd to open a command prompt, and in the environment of your Python interpreter run:
tensorboard --logdir="D:\logs" --port 6006
This prints a URL; open it in a browser to see the graph.
4.4.3 Visualizing Loss Values with tensorboardX
To visualize loss values we use the add_scalar function. Here a single-layer fully connected network is trained to fit the parameters of a quadratic function.
import torch
import numpy as np
from torch.utils.tensorboard import SummaryWriter
import torch.nn as nn
learning_rate = 0.01
num_epoches = 2000
dtype = torch.FloatTensor
writer = SummaryWriter(log_dir=r"D:\logs", comment="Linear")
np.random.seed(100)
x_train = np.linspace(-1, 1, 100).reshape(100, 1)
y_train = 3 * np.power(x_train, 2) + 2 + 0.2 * np.random.rand(x_train.size).reshape(100, 1)
model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(num_epoches):
    inputs = torch.from_numpy(x_train).type(dtype)
    targets = torch.from_numpy(y_train).type(dtype)
    output = model(inputs)
    loss = criterion(output, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Record the loss value against the epoch number
    writer.add_scalar("training loss", loss.item(), epoch)
writer.close()
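To compare several related curves on one chart (for example, losses under different learning rates), the writer also provides add_scalars, which groups scalars under a main tag. A minimal sketch with made-up values:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="logs")
for step in range(100):
    # hypothetical curves, just to show the grouping
    writer.add_scalars("loss_compare",
                       {"lr_0.01": 1.0 / (step + 1),
                        "lr_0.1": 0.5 / (step + 1)},
                       step)
writer.close()
```

TensorBoard then draws both curves on the same axes under the loss_compare tag, which makes side-by-side comparison much easier than switching between separate charts.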