Compared with tensorboardX, visdom refreshes faster and offers a pleasant interface. Installation works like any ordinary Python package: pip install visdom.
After a successful install, start the server from a terminal with python -m visdom.server.
Then copy http://localhost:8097 into a browser to open the visdom visualization window.
Using visdom mainly requires the following statements:
from visdom import Visdom
# your code
vis = Visdom()
# vis = Visdom(env='my window')  # the optional env argument names the environment the windows belong to
# your data to be drawn
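For live curves the call pattern is: one initial vis.line(...) call creates a window, and later calls with update='append' add points to the same trace, keyed by the win name. The accumulation semantics can be illustrated without a running server by a small stand-in (FakeLinePlot below is a hypothetical helper for illustration only, not part of the visdom API):

```python
# Plain-Python stand-in illustrating visdom's win / update='append' semantics.
# FakeLinePlot is hypothetical; real calls go through visdom.Visdom and need
# a running visdom.server.
class FakeLinePlot:
    def __init__(self):
        self.windows = {}  # win name -> list of (x, y) points

    def line(self, y, x, win, update=None, opts=None):
        if update == 'append' and win in self.windows:
            self.windows[win].extend(zip(x, y))  # add points to the existing trace
        else:
            self.windows[win] = list(zip(x, y))  # (re)create the window

vis = FakeLinePlot()
vis.line([0.0], [0], win='train_loss', opts=dict(title='train loss'))
for step in range(1, 4):
    vis.line([1.0 / step], [step], win='train_loss', update='append')

print(vis.windows['train_loss'])  # four accumulated (x, y) points
```

The same pattern appears in the training loop below: one creating call before the loop, one appending call per iteration.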
Example code (predicting sequential data, here a sin function):
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from visdom import Visdom

num_step = 50        # length of each sampled sine segment
input_size = 1
output_size = 1
hidden_size = 16
lr = 0.01
device = 'cpu'

class RNNnet(nn.Module):
    def __init__(self):
        super(RNNnet, self).__init__()
        self.rnn = nn.RNN(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=1,
            batch_first=True
        )
        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden):
        out, hidden = self.rnn(x, hidden)
        out = out.view(-1, hidden_size)   # flatten time steps for the linear layer
        out = self.linear(out)
        out = out.unsqueeze(dim=0)        # restore the batch dimension
        return out, hidden

if __name__ == '__main__':
    rnnnet = RNNnet().to(device)
    loss_func = nn.MSELoss()
    optimizer = optim.Adam(rnnnet.parameters(), lr=lr)
    hidden = torch.zeros(1, 1, hidden_size)
    global_step = 0
    vis = Visdom()
    # win is the window handle within the env: one win is one window,
    # and the window title is set by title.
    vis.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))
    for epoch in range(10000):
        start = np.random.randint(3, size=1)[0]
        time_steps = np.linspace(start, start + 10, num_step)
        data = np.sin(time_steps)
        data = data.reshape(num_step, 1)
        x = torch.tensor(data[:-1]).float().view(1, num_step - 1, 1).to(device)
        y = torch.tensor(data[1:]).float().view(1, num_step - 1, 1).to(device)
        output, hidden = rnnnet(x, hidden)
        hidden = hidden.detach()          # truncate backprop through time across iterations
        loss = loss_func(output, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        global_step += 1
        vis.line([loss.item()], [global_step], win='train_loss', update='append')
        if epoch % 100 == 0:
            print("Iteration: {} loss: {}".format(epoch, loss.item()))
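The loop above trains a one-step-ahead predictor: each epoch samples a random sine segment, takes data[:-1] as the input x and data[1:] as the target y, so the target at step t equals the input at step t+1. A numpy-only sketch of that windowing (same num_step = 50 as above; no model or visdom server needed):

```python
import numpy as np

num_step = 50
start = np.random.randint(3)                 # random segment start, as in the loop
time_steps = np.linspace(start, start + 10, num_step)
data = np.sin(time_steps).reshape(num_step, 1)

# one-step-ahead pairs: x[t] is the current value, y[t] the next one
x = data[:-1].reshape(1, num_step - 1, 1)    # shape (batch, time, feature)
y = data[1:].reshape(1, num_step - 1, 1)

print(x.shape, y.shape)                      # (1, 49, 1) (1, 49, 1)
print(np.allclose(x[0, 1:, 0], y[0, :-1, 0]))  # True: y is x shifted by one step
```

Because batch_first=True is set on nn.RNN, these (batch, time, feature) tensors can be fed to the network directly after conversion with torch.tensor.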
You can watch a figure like the following appear in real time in the opened browser window:
[figure: the train loss curve in the visdom web UI]
The env selector at the top shows main, the default environment used above. If you pass a different env argument when creating Visdom, you can pick the target environment's windows from that drop-down.