As we all know:
writing code without comments is XX,
writing code without indentation is XX,
writing code without logs is XX  <-------------- today let's talk about this one.
I'd been meaning to lift the logger out of PaddleClas for my own use, but that logger.py is so massive I couldn't get a handle on it.
Today I got lucky: while reading this baseline I spotted a simple, handy logger, so I grabbed it right away!
The original source is here:
https://github.com/PerceptionComputingLab/PARSE2022/blob/main/baseline/train.py
import logging

root_nowexp = "test"

def get_logger(filename, verbosity=1, name=None):
    # map the verbosity int to a logging level
    level_dict = {0: logging.DEBUG, 1: logging.INFO, 2: logging.WARNING}
    formatter = logging.Formatter(
        "[%(asctime)s][%(filename)s][line:%(lineno)d][%(levelname)s] %(message)s"
    )
    logger = logging.getLogger(name)
    logger.setLevel(level_dict[verbosity])

    # file handler: mode "w" truncates the log file on every run
    fh = logging.FileHandler(filename, "w")
    fh.setFormatter(formatter)
    logger.addHandler(fh)

    # stream handler: also echo every record to the console
    sh = logging.StreamHandler()
    sh.setFormatter(formatter)
    logger.addHandler(sh)

    return logger
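One pitfall I noticed myself (it's not mentioned in the baseline): `logging.getLogger(name)` caches loggers by name, so if you call `get_logger` twice with the same `name`, the handlers stack up and every message gets printed twice. A minimal sketch of a guard — `get_logger_once` is my own hypothetical name, not from the repo:

```python
import logging

def get_logger_once(filename, verbosity=1, name=None):
    """Hypothetical variant of get_logger that avoids duplicate handlers."""
    level_dict = {0: logging.DEBUG, 1: logging.INFO, 2: logging.WARNING}
    logger = logging.getLogger(name)
    logger.setLevel(level_dict[verbosity])

    if not logger.handlers:  # only attach handlers on the first call
        formatter = logging.Formatter(
            "[%(asctime)s][%(filename)s][line:%(lineno)d][%(levelname)s] %(message)s"
        )
        fh = logging.FileHandler(filename, "w")
        fh.setFormatter(formatter)
        logger.addHandler(fh)

        sh = logging.StreamHandler()
        sh.setFormatter(formatter)
        logger.addHandler(sh)

    return logger
```

Calling it twice now returns the same logger object with the same two handlers instead of four.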
logger = get_logger(f'{root_nowexp}exp.log')
logger.info('start training!')
logger.warning('hahahaha')
logger.error('have you ever been rejected?')
It gives you info/warning/error, the lot.
Run it and see:
[2022-04-14 14:31:54,112][2908748658.py][line:24][INFO] start training!
[2022-04-14 14:31:54,113][2908748658.py][line:25][WARNING] hahahaha
[2022-04-14 14:31:54,114][2908748658.py][line:26][ERROR] have you ever been rejected?
(Note the levels match the calls: warning() logs at WARNING and error() at ERROR. The odd 2908748658.py filename is presumably just the Jupyter cell this was run from.)
A file named testexp.log also gets created locally. Open it and you'll find the same lines:
[2022-04-14 14:31:54,112][2908748658.py][line:24][INFO] start training!
[2022-04-14 14:31:54,113][2908748658.py][line:25][WARNING] hahahaha
[2022-04-14 14:31:54,114][2908748658.py][line:26][ERROR] have you ever been rejected?
Nice!! This thing is handy as hell!
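One more note from me (not in the baseline): the `verbosity` argument is just the logger's threshold. With `verbosity=2` the level becomes WARNING, so `info()` messages are silently dropped. A minimal sketch, with a made-up logger name:

```python
import logging

level_dict = {0: logging.DEBUG, 1: logging.INFO, 2: logging.WARNING}

# verbosity=2 -> threshold WARNING: anything below it is filtered out
logger = logging.getLogger("verbosity_demo")  # hypothetical name
logger.setLevel(level_dict[2])

print(logger.isEnabledFor(logging.INFO))     # False: INFO < WARNING
print(logger.isEnabledFor(logging.WARNING))  # True
```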
Here's a fuller training-loop example:
logger = get_logger(f'{root_nowexp}exp.log')
logger.info('start training!')

for epoch in range(opt.max_epoch):
    model.train()
    for batch_id, (dcm_image, label_image) in tqdm(enumerate(train_dataLoader),
                                                   total=int(len(train_dataset) / opt.batch_size)):
        XXXXXXXXXXXXXXXX
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    model.eval()
    logger.info('Epoch:[{}/{}]\t loss={:.5f}\t train dice:{:.1f} \t evalu dice:{:.1f}'.format(
        epoch, opt.max_epoch, epoch_loss.avg, train_dice, evalu_dice))
    scheduler.step()

logger.info('finish training!')
The first line marks the start of training:
logger.info('start training!')
Then come the per-epoch metrics:
logger.info('Epoch:[{}/{}]\t loss={:.5f}\t train dice:{:.1f} \t evalu dice:{:.1f}'.format(
    epoch, opt.max_epoch, epoch_loss.avg, train_dice, evalu_dice))
And finally the end of training:
logger.info('finish training!')
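If you're curious what that epoch line actually renders to, here it is with made-up numbers (the values are mine, purely for illustration):

```python
# same format string as above, filled with dummy values
msg = 'Epoch:[{}/{}]\t loss={:.5f}\t train dice:{:.1f} \t evalu dice:{:.1f}'.format(
    3, 100, 0.123456, 0.8, 0.7)
print(msg)
# Epoch:[3/100]	 loss=0.12346	 train dice:0.8 	 evalu dice:0.7
```

Note how `{:.5f}` rounds the loss to five decimals and `{:.1f}` trims the dice scores to one.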
Next post:
https://blog.csdn.net/HaoZiHuang/article/details/127127752
where I walk through log levels with an example.