A simple way to record data or images during training with TensorBoard, hooks, CAM, and LIME

This post shows how to use TensorBoard to record training data and images, including model monitoring and the visualization of convolution kernels and feature maps. It also covers hooks for capturing intermediate variables and feature maps, as well as CAM and LIME: CAM localizes the image regions relevant to a prediction, while LIME explains model decisions through superpixel analysis.

TensorBoard

Recording data

How to record simple values

1.add_scalar() records a single scalar
2.add_scalars() records several values under one main tag
Run the following code in Google Colaboratory:

import numpy as np
from torch.utils.tensorboard import SummaryWriter

np.random.seed(47)  # fix the random seed for reproducibility

# ----------------------------------- 0 SummaryWriter -----------------------------------
# flag = 0
flag = 1
if flag:

    max_epoch = 100
    writer = SummaryWriter(comment='test_comment', filename_suffix='test_suffix')

    for x in range(max_epoch):

        writer.add_scalar('y=2x', x * 2, x)
        writer.add_scalar('y=pow_2_x', 2 ** x, x)

        writer.add_scalars('data/scalar_group',
                           {"xsinx": x * np.sin(x), "xcosx": x * np.cos(x)}, x)

    writer.close()

Then run:

# Load the TensorBoard notebook extension
%load_ext tensorboard
%tensorboard --logdir='./runs'

You should then see something like this:
(screenshot: the TensorBoard Scalars dashboard showing the curves logged above)
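
Besides scalars, the same SummaryWriter can record images and matplotlib figures through add_image() and add_figure(). A minimal sketch (the figure and the random image below are only placeholders, not data from this post):

import numpy as np
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(comment='image_demo')

# add_figure() logs a matplotlib figure as an image
fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
ax.plot(x, np.sin(x))
writer.add_figure('figures/sin_curve', fig, global_step=0)

# add_image() logs a raw image; dataformats describes the axis order
img = np.random.rand(3, 64, 64).astype(np.float32)  # CHW, values in [0, 1]
writer.add_image('images/random_noise', img, global_step=0, dataformats='CHW')

writer.close()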

Monitoring the model during training
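
The training loop below assumes that the model, data loaders, loss, optimizer, scheduler, writer and a few counters already exist. A self-contained setup sketch with placeholder data and a placeholder model (everything here except the variable names is an assumption, chosen only so that the loop runs as written):

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.tensorboard import SummaryWriter

MAX_EPOCH = 10
log_interval = 10
val_interval = 1
iter_count = 0
train_curve = list()

# placeholder data: random 3x32x32 "images" with 2 classes
train_set = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 2, (256,)))
valid_set = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 2, (64,)))
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
valid_loader = DataLoader(valid_set, batch_size=16)

# placeholder model: any classifier with the right number of outputs works
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
writer = SummaryWriter(comment='monitor')

With this in place, the loop records per-iteration loss and accuracy scalars and, once per epoch, histograms of every parameter and its gradient, all under the same run in TensorBoard.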

for epoch in range(MAX_EPOCH):

    loss_mean = 0.
    correct = 0.
    total = 0.

    net.train()
    for i, data in enumerate(train_loader):

        iter_count += 1

        # forward
        inputs, labels = data
        outputs = net(inputs)

        # backward
        optimizer.zero_grad()
        loss = criterion(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()

        # compute classification accuracy
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).squeeze().sum().numpy()

        # print training progress
        loss_mean += loss.item()
        train_curve.append(loss.item())
        if (i+1) % log_interval == 0:
            loss_mean = loss_mean / log_interval
            print("Training:Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, i+1, len(train_loader), loss_mean, correct / total))
            loss_mean = 0.

        # record the data to the event file
        writer.add_scalars("Loss", {"Train": loss.item()}, iter_count)
        writer.add_scalars("Accuracy", {"Train": correct / total}, iter_count)

    # at the end of each epoch, record gradients and weights
    for name, param in net.named_parameters():
        writer.add_histogram(name + '_grad', param.grad, epoch)
        writer.add_histogram(name + '_data', param, epoch)

    scheduler.step()  # update the learning rate

    # validate the model
    if (epoch+1) % val_interval == 0:

        correct_val = 0.
        total_val = 0.
        loss_val = 0.
        net.eval()
        with torch.no_grad():
            for j, data in enumerate(valid_loader):
                inputs, labels = data
                outputs = net(inputs)
                loss = criterion(outputs, labels)

                _, predicted = torch.max(outputs.data, 1)
                total_val += labels.size(0)
                correct_val += (predicted == labels).squeeze().sum().numpy()
                loss_val += loss.item()

            # record validation loss and accuracy in the same plots as the training curves
            writer.add_scalars("Loss", {"Valid": loss_val / len(valid_loader)}, iter_count)
            writer.add_scalars("Accuracy", {"Valid": correct_val / total_val}, iter_count)
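
The abstract also promises kernel and feature-map visualization. As a hedged sketch of the kernel half (it assumes the net trained above contains at least one Conv2d layer; the tag name and grid layout are arbitrary choices), the learned filters of the first convolution can be tiled into a grid with torchvision's make_grid and logged with add_image():

import torch
import torchvision.utils as vutils
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(comment='kernels')

# assumption: `net` is the model trained above and contains a Conv2d layer
first_conv = next(m for m in net.modules() if isinstance(m, torch.nn.Conv2d))
kernels = first_conv.weight.detach().clone()  # shape: (out_ch, in_ch, k, k)

# show every (k, k) slice as a tiny grayscale tile, one row per output channel
out_ch, in_ch, k, _ = kernels.shape
grid = vutils.make_grid(kernels.view(-1, 1, k, k), nrow=in_ch,
                        normalize=True, scale_each=True)
writer.add_image('first_conv_kernels', grid, global_step=0)

writer.close()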