Cat 12-Class Classification

Problem Statement

Task

Use a trained model to predict the class of each test image.

Data

The dataset contains images of 12 breeds of cats.

The data is split into a training set and a test set. The training data consists of color images, as shown in the figure.

Training set: high-resolution color images together with their class labels.

Test set: color images only, without labels.

Submission

For the exam submission, submit the model code, the project version, and the result file. The result file must be a CSV named result.csv, with its fields written in the specified format.

Line format: WMgOhwZzacY023lCusqnBxIdibpkT5GP.jpg,0
The first field is the image path and the second is the class number; the columns are separated by a comma, and every line ends with a carriage return.

The submitted predictions must match the provided label image names and format exactly, otherwise the upload will fail the format check.
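As a sketch of producing such a file (the file names and predicted classes here are hypothetical), Python's csv module writes the two-column format, with carriage-return line endings, directly:

```python
import csv

def write_result(rows, path="result.csv"):
    """Write (image_name, class_id) rows in the required two-column format."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)  # default line terminator is "\r\n"
        for name, cls in rows:
            writer.writerow([name, cls])

# Hypothetical predictions: (image file name, predicted class 0-11).
preds = [
    ("WMgOhwZzacY023lCusqnBxIdibpkT5GP.jpg", 0),
    ("06yUTYIeaRsE782Ou5dh1NPzm9XpkBoL.jpg", 7),
]
write_result(preds)
```

In a real run, `preds` would come from the trained model's predictions over the test folder.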

Project Log

Through 11-15:
Tried a Swin Transformer model, with initial data augmentation and hyperparameter tuning (PyTorch).
Approximate accuracy range: [90%, 92.5%]

11-16 to 11-17:
Studied the PaddleClas toolkit and tried several models (SwinTransformer_large_patch4_window12_384, ResNet101, ResNet101_vd_ssld, etc.). By editing the configuration files, narrowed down the parameter ranges and compared the models' performance (see the main text).
Accuracy reached 93.33%.

11-18 to 11-19:
Studied ResNet in depth, tried several augmentation combinations, increased the number of training epochs, and tuned the parameters further.
Accuracy exceeded 97%.

11-20 to 11-26:
Wrote up the report, summarizing the models' strengths and weaknesses and the problems encountered during training together with their solutions.

11-27 to 12-03:
Re-ran the code and filled in the remaining details.

Introduction

Background

In recent years, the rapid development of artificial intelligence (AI) and its striking results across many fields have made it a decisive force pushing humanity into the age of intelligent machines. Image classification is one of AI's key research goals. Sitting at the intersection of computer vision, pattern recognition, and machine learning, it aims to extract discriminative features from an image or image sequence and assign it to a category, giving machine vision a degree of recognition ability. Originally, image classification relied on large amounts of manual labeling, which was both time-consuming and unreliable. With the rapid progress of computing, automatic image classification has become one of the hot research topics in computer vision. By imitating how the human visual system recognizes objects in the world, image classification can help people organize massive image collections. Today it is widely applied in medical image processing, intelligent transportation, e-commerce, face recognition, and other domains.
Against the backdrop of big data and rapidly growing computing power, machine learning, and deep learning in particular, has become an important engine of AI progress; deep learning algorithms have brought a qualitative leap in image classification.
Image classification is widely used in retail product categorization, crop quality grading, medical image recognition, traffic sign classification, and more. As the foundation of computer vision, it also supports higher-level tasks such as object detection, image segmentation, object tracking, and behavior analysis, both architecturally and in accuracy. For example, the backbone of the classic object detector Faster R-CNN is a network from classification, such as ResNet, and pretrained classification backbones help other vision tasks converge faster during training.

Problem Restatement

Analyze and process the images in data/data10954/cat_12_train.zip, select a suitable network and train it,
then predict the breed of each of the 240 cat images in data/data10954/cat_12_test.zip.
The score is the accuracy on the test set.

Tool Selection and Prerequisites

Tool selection

PaddleClas

To make it easier to train and use image classification models, PaddlePaddle open-sourced the PaddleClas image classification toolkit, which covers the full pipeline of model development, training, compression, and deployment, helping developers build and apply classification models more effectively. Its highlights:

1. A rich model zoo of 29 model families, with training configurations and pretrained weights for 134 models on the ImageNet-1k dataset.
2. Eight data augmentation methods, making it easy to expand training data and improve model robustness.
3. An open-sourced, in-house SSLD (Simple Semi-supervised Label Distillation) knowledge distillation scheme that generally improves accuracy by more than 3%; on ImageNet-1k, the ResNet50_vd model reaches 84.0% accuracy.
4. An open-sourced, in-house pretrained model covering 100,000 classes, raising recognition accuracy by up to 30%.
5. Industrial deployment and inference options including Paddle Lite, HubServing, and TensorRT, so models can be conveniently deployed on servers, mobile devices, and embedded hardware.

PaddleX

PaddleX is PaddlePaddle's scenario-oriented development toolkit. It bundles image classification, object detection, semantic segmentation, and instance segmentation, connects the whole deep learning workflow end to end, from data preparation through training and optimization to multi-platform deployment, and offers a unified task API plus a graphical demo interface. Developers do not need to install separate toolkits and can complete the full PaddlePaddle workflow in a low-code fashion.

PaddleX has been validated in more than ten industrial scenarios, including quality inspection, security, patrol inspection, remote sensing, retail, and healthcare, distilling practical industry experience and providing rich tutorial case studies to help developers bring projects to production.

PaddleX offers two modes of use, a visual GUI and a Python API. The GUI lets even users unfamiliar with coding get started quickly, while the clear Python API can be called directly or extended to meet production needs, serving as a best-practice reference for the full PaddlePaddle workflow. PaddleX also exposes a RESTful API so that users can build domain-specific AI tools through simple integration.

To learn both toolkits, this report uses PaddleClas and PaddleX together to perform the 12-class cat classification.

Install PaddleX

!pip install paddlex
# must be re-run every time the environment restarts

Import packages

import warnings
warnings.filterwarnings('ignore') # suppress warnings

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0' # select GPU 0

from paddlex import transforms as T # preprocessing and augmentation ops for training, validation, and prediction

import paddlex as pdx
import paddle
from paddle.regularizer import L2Decay # L2 weight-decay regularization

import numpy as np
import pandas as pd
import shutil # file operations
import cv2
import imghdr # detect image file type
from PIL import Image
from matplotlib import pyplot as plt

Data Analysis and Processing

Data import

!unzip -q /home/aistudio/data/data10954/cat_12_train.zip -d data/data10954/
!unzip -q /home/aistudio/data/data10954/cat_12_test.zip -d data/data10954/
## Create the per-class folders (delete first so re-runs start clean)
!rm -rf data/data10954/ImageNetDataset

for i in range(12):
    cls_path = os.path.join('data/data10954/ImageNetDataset/', '%02d' % int(i)) # build the class folder path
    if not os.path.exists(cls_path):
        os.makedirs(cls_path) # create the folder

!ls data/data10954/ImageNetDataset # list the class folders

## Build the filename-to-class mapping; images will later be moved by class into data/data10954/ImageNetDataset/*/*.jpg.
train_df = pd.read_csv('data/data10954/train_list.txt', header=None, sep='\t') # read the training labels
train_df.columns = ['name', 'cls'] # set the column names
train_df['name'] = train_df['name'].apply(lambda x: str(x).strip().split('/')[-1]) # keep only the file name, dropping cat_12_train/
train_df['cls'] = train_df['cls'].apply(lambda x: '%02d' % int(str(x).strip())) # zero-pad class labels to two digits
train_df.head() # inspect the first five rows
00  01  02  03  04  05  06  07  08  09  10  11

                                   name cls
0  8GOkTtqw7E6IHZx4olYnhzvXLCiRsUfM.jpg  00
1  hwQDH3VBabeFXISfjlWEmYicoyr6qK1p.jpg  00
2  RDgZKvM6sp3Tx9dlqiLNEVJjmcfQ0zI4.jpg  00
3  ArBRzHyphTxFS2be9XLaU58m34PudlEf.jpg  00
4  kmW7GTX6uyM2A53NBZxibYRpQnIVatCH.jpg  00

Image mode check & repair

The main image modes are:
1. RGB: true color, with 256 x 256 x 256 possible combinations; for printing it must be converted to CMYK, where value overflow needs attention.
2. HSB (not used here): models color the way humans perceive it, splitting it into hue, saturation, and brightness.
3. CMYK: used in printing; the four letters stand for cyan, magenta, yellow, and black (black is needed because ink purity cannot be guaranteed).
4. Bitmap: colors are represented only by black and white (True, False).
5. Grayscale: all colors are converted to gray levels; see the PIL modes L, I, F.
6. Duotone (not used here): two inks can be used to save printing costs.
7. Lab (not used here; built into Photoshop): three channels (lightness, a, b), serving as an intermediate between RGB and CMYK.
8. Multichannel: deleting one channel from an RGB, CMYK, or Lab image yields multichannel mode, used for special printing; each channel is a 256-level grayscale channel.
9. Indexed color: used in multimedia and on the web; colors are looked up in a color table (nearest match if absent); single channel only, 8 bits per pixel.

Checking the modes of the images in this dataset, we found three different modes: 'P', 'RGBA', and 'RGB'.
P (palette) mode: each pixel of a 24-bit (or 32-bit) RGB(A) true-color image is remapped to an 8-bit value in the range 0-255; the mapping table is the image's so-called "palette".
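As a toy pure-Python illustration of this remapping (not Pillow's internals; the palette here is made up), P mode stores one 8-bit index per pixel plus a table mapping each index to an RGB triple, so "converting to RGB" is just a table lookup:

```python
# Toy palette: index -> (R, G, B). Real palettes hold up to 256 entries.
palette = {0: (0, 0, 0), 1: (255, 255, 255), 2: (200, 120, 40)}

# A 2x3 "P-mode" image: one palette index per pixel.
p_pixels = [[0, 1, 2],
            [2, 1, 0]]

# "Convert" to RGB by replacing every index with its palette entry.
rgb_pixels = [[palette[i] for i in row] for row in p_pixels]
```

In Pillow, `Image.convert('RGB')` performs this lookup (plus alpha handling for RGBA) for us, as the code below does.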

## Images should be 3-channel RGB; one RGBA image is shown below
img = Image.open('data/data10954/cat_12_train/ulFBEZNRQrxn57voHAJ4UG6Mct2sw1Cj.jpg')
print(img.mode)
plt.imshow(img)
plt.show()
RGBA


## Convert P, RGBA, and L mode images to RGB
for i in range(len(train_df)):
    img_path = os.path.join('data/data10954/cat_12_train', train_df.at[i, 'name']) # i: row position, 'name': column
    if os.path.exists(img_path) and imghdr.what(img_path): # the file exists and is a recognized image type
        img = Image.open(img_path) # open the image
        if img.mode != 'RGB':
            print(img_path)
            print(img.mode)
            img = img.convert('RGB') # convert to RGB
            img.save(img_path) # save in place
          
for img_path in os.listdir('data/data10954/cat_12_test'):
    src = os.path.join('data/data10954/cat_12_test',img_path)
    img = Image.open(src)
    if img.mode != 'RGB':
        print(img_path)
        img = img.convert('RGB') 
        img.save(src)
data/data10954/cat_12_train/tO6cKGH8uPEayzmeZJ51Fdr2Tx3fBYSn.jpg
P
data/data10954/cat_12_train/ulFBEZNRQrxn57voHAJ4UG6Mct2sw1Cj.jpg
RGBA
data/data10954/cat_12_train/F3VnNwb2K9tgMWLodrXl1f6PIEjYqhy8.jpg
L
data/data10954/cat_12_train/YfsxcFB9D3LvkdQyiXlqnNZ4STwope2r.jpg
P
data/data10954/cat_12_train/6yYs4rvFLkQJlRxdhNfMOW52EAbgHejC.jpg
RGBA
data/data10954/cat_12_train/5nKsehtjrXCZqbAcSW13gxB8E6z2Luy7.jpg
P
data/data10954/cat_12_train/yGcJHV8Uuft6grFs7QWnK5CTAZvYzdDO.jpg
P
data/data10954/cat_12_train/YGyx4qCdOb7j8tzBuNfoFHLi6gU0SE3T.jpg
RGBA
data/data10954/cat_12_train/3yMZzWekKmuoGOF60ICQxldhBEc9Ra15.jpg
P
Qt29gPjYZwv3B6RJh5yiTWXrVImue1FH.jpg

Data Visualization

## Randomly inspect four cats of the same class
plt.figure(1)
same_class = [
    'spNU7J8uk6BXiAyQErHegYMzjOaFR2qV.jpg',
    '7QZTYlspK2fqdJUwjC0HDmOFrM5W4PX9.jpg',
    'oZin4PuwTet39xWCYhUBfvlzGyISb5DV.jpg',
    'qbKjsR05lrFVYfLChtMGD7im36cUgAnE.jpg',
]
for i, fname in enumerate(same_class):
    plt.subplot(2, 2, i + 1) # figure 1 holds a 2x2 grid of subplots
    plt.imshow(Image.open(os.path.join('data/data10954/cat_12_train', fname)))
## Randomly pick one cat from each of the 12 classes
plt.figure(2)
per_class = [
    '8GOkTtqw7E6IHZx4olYnhzvXLCiRsUfM.jpg',
    'spNU7J8uk6BXiAyQErHegYMzjOaFR2qV.jpg',
    'jbIdxGyNpoql3XQZrfREMiAzh7B46WOa.jpg',
    'cCeBo4EJ9H1hbXsIS5G6Kxdzg27nwqfy.jpg',
    'yxNcRSz4TI7FpwCVJBuea6MmGitZYUkK.jpg',
    'NZw3P0Wfz4JDsSECG8y7HXihl2Oon6rA.jpg',
    'K5wdv0zEnx3cti4OagyPphCVJUIXYuSZ.jpg',
    'BOmo5yiKzMGV8qvleRIdLQC4bZcPxwWD.jpg',
    'COJUByb07wYXqcTMovWFnAgpNZk1SxrI.jpg',
    'dHJn0vb8XoSTM4DPG965fQ1swczARBel.jpg',
    'sv3RcZgEInHWtBoVKr9Q46PMUmA8Jy2h.jpg',
    'mrgAsyPJdDvwp1EYnUG3Hj92ehMTKNxt.jpg',
]
for i, fname in enumerate(per_class):
    plt.subplot(2, 6, i + 1) # a 2x6 grid, one subplot per class
    plt.imshow(Image.open(os.path.join('data/data10954/cat_12_train', fname)))
! pip install pyecharts

Checking for class imbalance

To check whether this project suffers from class imbalance, we counted the images of each cat class in data/cat_12_train and plotted a bar chart, shown below. The classes are evenly represented, so there is no imbalance problem and we can proceed directly.

Link: counts of cats per class (classes 0-11)

## Count the images per class in the training set to guard against class imbalance.
from pyecharts import options as opts
from pyecharts.charts import Bar

with open("data/data10954/train_list.txt", "r") as f:
    labels = f.readlines()
    labels = [int(i.split()[-1]) for i in labels]

counts = pd.Series(labels).value_counts().sort_index().to_list()
values = np.random.rand(12) * 100 # random "score" column, only used to demo the chart's visual map
names = [str(i) for i in list(range(12))]
data = list(zip(values, counts, names))
source = [list(i) for i in data]
source.insert(0, ["score", "amount", "product"])


c = (
    Bar()
    .add_dataset(
        source=source
    )
    .add_yaxis(
        series_name="",
        y_axis=[],
        encode={"x": "amount", "y": "product"},
        label_opts=opts.LabelOpts(is_show=False),
    )
    .set_global_opts(
        title_opts=opts.TitleOpts(title="Dataset normal bar example"),
        xaxis_opts=opts.AxisOpts(name="amount"),
        yaxis_opts=opts.AxisOpts(type_="category"),
        visualmap_opts=opts.VisualMapOpts(
            orient="horizontal",
            pos_left="center",
            min_=10,
            max_=100,
            range_text=["High Score", "Low Score"],
            dimension=0,
            range_color=["#D7DA8B", "#E15457"],
        ),
    )
    .render("./work/labels.html")
)
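The balance check itself needs no plotting; a minimal stdlib sketch over label lines in the same `path<TAB>class` layout as train_list.txt (the file names here are made up):

```python
from collections import Counter

# Hypothetical label lines mimicking train_list.txt's "path<TAB>class" layout.
lines = [
    "cat_12_train/a.jpg\t0", "cat_12_train/b.jpg\t0",
    "cat_12_train/c.jpg\t1", "cat_12_train/d.jpg\t1", "cat_12_train/e.jpg\t1",
]

counts = Counter(int(l.split()[-1]) for l in lines)
# Ratio of the largest class to the smallest; close to 1 means balanced.
imbalance = max(counts.values()) / min(counts.values())
```

On the real train_list.txt this ratio comes out close to 1, confirming the bar chart's conclusion.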

## Move each image from its source path src_path to its class folder dst_path.
for i in range(len(train_df)):
    # source path
    src_path = os.path.join('data/data10954/cat_12_train', train_df.at[i, 'name']) # i: row position, 'name': column
    # destination path: ImageNetDataset/<class>/<name>
    dst_path = os.path.join('data/data10954/ImageNetDataset/', train_df.at[i, 'cls'], train_df.at[i, 'name'])
    try:
        shutil.move(src_path, dst_path) # move the image into its class folder
    except Exception as e:
        print(e) # report the error

Data Augmentation

In image classification, data augmentation is a common regularization technique, used especially when data is scarce or the model has many parameters. This section briefly introduces and compares eight augmentation methods beyond the standard ImageNet pipeline; they can also be applied to other tasks to improve model accuracy. PaddleClas reports ImageNet accuracy figures for each of these methods.

The standard ImageNet training-time augmentation pipeline consists of the following steps:

  1. Image decoding: ImageDecode
  2. Random crop to a 224 x 224 image: RandCrop
  3. Random horizontal flip: RandFlip
  4. Normalization of the pixel values: Normalize
  5. Transposing the data layout from [224, 224, 3] to [3, 224, 224]: Transpose
  6. Grouping images into a batch, e.g. batch-size [3, 224, 224] images become [batch-size, 3, 224, 224]: Batch

The figure below shows the effects of the three groups of augmentation methods:

Image transformation: operations applied between random cropping and flipping, i.e. effectively on the original image. The main methods are AutoAugment and RandAugment, which follow policies combining sharpening, brightness changes, histogram equalization, and so on. Having seen such variations during training, the network copes much better at prediction time with difficult conditions such as lighting changes and rotations.
Image cropping: after the channel transpose, a mask is placed on the image to randomly occlude regions, forcing the network to also learn non-salient features; otherwise it fixates on the most salient regions and generalizes poorly to occluded images. The main methods are CutOut, RandErasing, HideAndSeek, and GridMask. Note that cropping before or after the channel transpose makes no difference, since the transpose does not change pixel values.
Image mixing: once the batch is assembled, images and labels are blended with each other to form a new batch that is fed to the network. The two main methods are Mixup and Cutmix.

Since this project's dataset contains 2160 training images, we apply data augmentation and tune the methods and their parameters to maximize model accuracy (see the configuration files for the settings used).

We take the training image 7QZTYlspK2fqdJUwjC0HDmOFrM5W4PX9.jpg as an example to show the augmentation effects.
(Note: since the model was first built with PyTorch, the corresponding code is placed below the figure.)

The code is as follows:

from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
from torchvision import transforms
from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm import data

# augmentation policy
trans = transforms.Compose([
    transforms.RandomCrop((384, 384), pad_if_needed=True),
    transforms.RandomHorizontalFlip(),
    data.AutoAugment(data.auto_augment_policy('originalr')),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD),
    transforms.RandomErasing()
])

# load a single image
image = Image.open("./0yTr3fswKBv4M8Fo2NcUnzibx6ClIm5e.jpg.jpg")
image = image.convert("RGB")

# normalization parameters, needed below to undo Normalize for display
mean = np.array(IMAGENET_DEFAULT_MEAN)
std = np.array(IMAGENET_DEFAULT_STD)

# plot a 5x5 grid of augmented samples
plt.figure(figsize=(12, 12))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    # augment the image
    trans_image = trans(image)
    # convert the augmented tensor to a numpy.ndarray in HWC layout
    trans_image = trans_image.numpy().transpose([1, 2, 0])
    # de-normalize for display
    trans_image = std * trans_image + mean
    trans_image = np.clip(trans_image, 0, 1)
    # show the augmented sample
    plt.imshow(trans_image)
    plt.axis('off')

plt.savefig('./src/0yTr3fswKBv4M8Fo2NcUnzibx6ClIm5e.jpg_2.jpg', dpi=100)
plt.show()

Model Construction

Model overview (ResNet)

The birth of ResNet
The degradation problem: as network depth increases, accuracy saturates and may even drop, which at least shows that deep networks are hard to train. But consider this thought experiment: take a shallow network and build a deeper one by stacking new layers on top. In the extreme case, the added layers learn nothing and merely copy the shallow network's features, i.e. they are identity mappings. In that case the deep network should perform at least as well as the shallow one, and degradation should not occur. To resolve this, ResNet's author Kaiming He proposed residual learning.
For a stack of a few layers with input x, denote the learned feature H(x). We instead let the stack learn the residual F(x) = H(x) - x, so the original feature becomes F(x) + x. The point is that the residual is easier to learn than the original mapping. If the residual is 0, the stack performs a pure identity mapping, so performance at least does not degrade; in practice the residual is not 0, so the stack learns new features on top of its input and performs better. The residual learning structure is shown in the figure below. It resembles a short circuit in an electrical circuit, hence the name shortcut connection.

ResNet takes the VGG-19 network as a reference, modifies it, and inserts residual units via shortcut connections, as shown in Figure 5. The main changes are that ResNet downsamples directly with stride-2 convolutions and replaces the fully connected layers with a global average pooling layer. An important design principle of ResNet is that when the feature map size is halved, the number of feature maps is doubled, preserving the per-layer complexity. As Figure 5 shows, compared with a plain network ResNet adds a shortcut every two layers, forming residual learning; the dashed lines indicate that the number of feature maps changes. Figure 5 shows the 34-layer ResNet; deeper networks can be built as in Table 1. For the 18- and 34-layer ResNets, residual learning spans two layers; in the deeper networks it spans three layers with 1x1, 3x3, and 1x1 convolutions, and notably the hidden layer's feature map count is small, 1/4 of the output's.
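The identity argument above can be sketched numerically in plain Python (a toy, not a real network):

```python
def residual_block(x, F):
    """y = F(x) + x: the shortcut adds the input back onto the branch output."""
    return F(x) + x

x = 3.0
# If the branch learns nothing (F = 0), the block is exactly the identity,
# so stacking such blocks cannot degrade a well-trained shallower network.
assert residual_block(x, lambda v: 0.0) == x
# A nonzero residual refines the input instead of relearning it from scratch.
assert residual_block(x, lambda v: 0.5 * v) == 4.5
```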

Training with PaddleClas

PaddleClas configures model training flexibly by editing YAML configuration files (.yaml); the files used here are in the corresponding folder. This report focuses on the global (Global) and optimizer (Optimizer) parameters.

Global parameters

| Parameter | Meaning | Default | Options |
|---|---|---|---|
| checkpoints | checkpoint path for resuming training | null | str |
| pretrained_model | pretrained model path | null | str |
| output_dir | directory for saved models | "./output/" | str |
| save_interval | save the model every N epochs | 1 | int |
| eval_during_train | whether to evaluate during training | True | bool |
| eval_interval | evaluate the model every N epochs | 1 | int |
| epochs | total number of training epochs | | int |
| print_batch_step | print logs every N batches | 10 | int |
| use_visualdl | whether to visualize training with VisualDL | False | bool |
| image_shape | image size | [3, 224, 224] | list, shape: (3,) |
| save_inference_dir | directory for the exported inference model | "./inference" | str |
| eval_mode | evaluation mode | "classification" | "retrieval" |
| to_static | whether to switch to static-graph mode | False | True |
| ues_dali | whether to use the DALI library for image preprocessing | False | True |

Optimizer parameters

| Parameter | Meaning | Default | Options |
|---|---|---|---|
| name | optimizer method name | "Momentum" | "Momentum" |
| momentum | momentum value | 0.9 | float |
| lr.name | learning-rate decay schedule | "Cosine" | "Linear", "Piecewise", and other schedules |
| lr.learning_rate | initial learning rate | 0.1 | float |
| lr.warmup_epoch | number of warmup epochs | 0 | int, e.g. 5 |
| regularizer.name | regularization method name | "L2" | ["L1", "L2"] |
| regularizer.coeff | regularization coefficient | 0.00007 | float |
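For lr.name = "Cosine" with a warmup, the schedule commonly works as follows (a sketch of the usual formula using this table's defaults, not PaddleClas's exact implementation): a linear ramp over warmup_epoch epochs, then 0.5 * lr0 * (1 + cos(pi * t / T)) over the remaining T epochs.

```python
import math

def cosine_lr(epoch, lr0=0.1, warmup_epoch=5, total_epochs=300):
    """Linear warmup followed by cosine decay (common schedule sketch)."""
    if epoch < warmup_epoch:
        # Ramp linearly from lr0/warmup_epoch up to lr0.
        return lr0 * (epoch + 1) / warmup_epoch
    t = epoch - warmup_epoch
    T = total_epochs - warmup_epoch
    return 0.5 * lr0 * (1 + math.cos(math.pi * t / T))
```

The rate peaks at lr0 right after warmup and decays smoothly toward 0 by the final epoch, which is why Cosine tends to pair well with long training runs.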

Generating the ImageNet-format dataset

[3] Dataset split

!paddlex --split_dataset --format ImageNet\
    --dataset_dir data/data10954/ImageNetDataset\
    --val_value 0.085\
    --test_value 0
2021-12-03 17:36:53 [INFO]	Dataset split starts...
2021-12-03 17:36:53 [INFO]	Dataset split done.
2021-12-03 17:36:53 [INFO]	Train samples: 1980
2021-12-03 17:36:53 [INFO]	Eval samples: 180
2021-12-03 17:36:53 [INFO]	Test samples: 0
2021-12-03 17:36:53 [INFO]	Split files saved in data/data10954/ImageNetDataset

[4] Define data augmentation and load the dataset

Before augmenting the dataset, we first need to compute its channel statistics for T.Normalize(). The default parameters,
T.Normalize(mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]),
are the per-channel mean and standard deviation over the millions of ImageNet images. We therefore compute the corresponding values for this dataset and use them for normalization instead.

import torch
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from dataset.CatDataset import CatDataset

trans = transforms.Compose([
    transforms.Resize((410, 410)),
    transforms.ToTensor()
])

train_dataset = CatDataset("E:/project/data10954", "train_list.txt", trans)
train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True)


def get_mean_std(loader):
    """Per-channel mean and std of the images in `loader`.

    Uses Var[X] = E[X**2] - E[X]**2 on per-batch channel means.
    (Averaging per-batch means is exact only when every batch has
    the same size; otherwise it is a close approximation.)
    """
    channels_sum, channels_squared_sum, num_batches = 0, 0, 0
    for data, _ in loader:  # data: (N, C, H, W)
        channels_sum += torch.mean(data, dim=[0, 2, 3])
        channels_squared_sum += torch.mean(data ** 2, dim=[0, 2, 3])
        num_batches += 1

    mean = channels_sum / num_batches
    std = (channels_squared_sum / num_batches - mean ** 2) ** 0.5

    return mean, std


mean, std = get_mean_std(train_loader)

print(mean)
print(std)

Running the code above yields the statistics for this dataset:
mean = [0.4848, 0.4435, 0.4023],
std = [0.2744, 0.2688, 0.2757]
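Note that averaging per-batch means, as in get_mean_std above, is exactly right only when every batch has the same size; when the dataset size is not a multiple of the batch size, the smaller final batch is over-weighted. A minimal sketch of a count-weighted variant (pure Python over per-batch summaries; the function name is ours) removes that bias:

```python
import math

def weighted_mean_std(batch_stats):
    """batch_stats: list of (n, mean, mean_of_squares) per batch.
    Weighting by the sample count n makes the estimate exact even
    when the final batch is smaller than the others."""
    total = sum(n for n, _, _ in batch_stats)
    mean = sum(n * m for n, m, _ in batch_stats) / total
    mean_sq = sum(n * s for n, _, s in batch_stats) / total
    return mean, math.sqrt(mean_sq - mean ** 2)

# Pixel values [1, 2, 3, 4] split into unequal batches [1, 2, 3] and [4]:
# the naive average of batch means gives (2.0 + 4.0) / 2 = 3.0,
# while the weighted version recovers the true mean 2.5.
stats = [(3, 2.0, 14 / 3), (1, 4.0, 16.0)]
mean, std = weighted_mean_std(stats)
print(mean, std)  # 2.5 1.118...
```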

# Assumed imports (PaddleX 1.x API, inferred from the pdx.* / T.* calls in this notebook)
import paddlex as pdx
from paddlex.cls import transforms as T

# Training-set augmentation
train_transforms = T.Compose([
    # Mix pairs of training images with Beta(1.5, 1.5) weights,
    # applied only during the first mixup_epoch epochs
    T.MixupImage(
        alpha=1.5,
        beta=1.5,
        mixup_epoch=int(300 * 25. / 27)),
    T.Resize(
        target_size=438,
        interp='CUBIC'),
    # Randomly crop a 360x360 square from the resized image
    T.RandomCrop(360),
    # Random horizontal flip with probability 0.5
    T.RandomHorizontalFlip(0.5),
    # Random pixel-content distortion: brightness, contrast, saturation,
    # and hue, each applied independently with probability 0.5
    T.RandomDistort(
        brightness_range=0.25,
        brightness_prob=0.5,
        contrast_range=0.25,
        contrast_prob=0.5,
        saturation_range=0.25,
        saturation_prob=0.5,
        hue_range=18.0,
        hue_prob=0.5),
    # Gaussian blur with probability 0.1
    T.RandomBlur(0.1),
    # Normalize with the dataset statistics computed above
    T.Normalize([0.4848, 0.4435, 0.4023], [0.2744, 0.2688, 0.2757])
])
# Validation-set preprocessing
eval_transforms = T.Compose([
    T.Resize(
        target_size=410,
        interp='AREA'),
    T.CenterCrop(360),
    T.Normalize([0.4848, 0.4435, 0.4023], [0.2744, 0.2688, 0.2757])
])
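As a quick sanity check on the Mixup schedule: mixup_epoch is passed as the expression int(300 * 25. / 27), which evaluates to 277, so (per the documented meaning of mixup_epoch, which enables Mixup only for the first mixup_epoch epochs) image mixing is active early in training and the remaining epochs train on un-mixed images.

```python
# mixup_epoch as written in the training transforms above
mixup_epoch = int(300 * 25. / 27)
print(mixup_epoch)  # 277
```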

Load the datasets

train_dataset = pdx.datasets.ImageNet(
    data_dir='data/data10954/ImageNetDataset',
    file_list='data/data10954/ImageNetDataset/train_list.txt',
    label_list='data/data10954/ImageNetDataset/labels.txt',
    transforms=train_transforms,
    shuffle=True)  # shuffle the order of the training samples

eval_dataset = pdx.datasets.ImageNet(
    data_dir='data/data10954/ImageNetDataset',
    file_list='data/data10954/ImageNetDataset/val_list.txt',
    label_list='data/data10954/ImageNetDataset/labels.txt',
    transforms=eval_transforms)
2021-12-03 17:36:54 [INFO]	Starting to read file list from dataset...
2021-12-03 17:36:54 [INFO]	1980 samples in file data/data10954/ImageNetDataset/train_list.txt
2021-12-03 17:36:54 [INFO]	Starting to read file list from dataset...
2021-12-03 17:36:54 [INFO]	180 samples in file data/data10954/ImageNetDataset/val_list.txt

Configure and train the ResNet model

# Initialize the model
model = pdx.cls.ResNet101_vd_ssld(
    num_classes=len(train_dataset.labels)
)
W1203 17:36:54.270475  2385 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W1203 17:36:54.275947  2385 device_context.cc:422] device: 0, cuDNN Version: 7.6.
model.train(
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    num_epochs=420,  # number of training epochs
    train_batch_size=80,  # samples per training step
    warmup_steps=(len(train_dataset.file_list) // 80) * 6,  # learning rate warms up linearly from 0 to the target value over this many steps
    learning_rate=0.025,  # base learning rate
    lr_decay_epochs=[40, 65, 115, 160, 205],  # epochs at which the learning rate decays
    lr_decay_gamma=0.1,  # learning-rate decay factor

    save_interval_epochs=2,  # save a checkpoint every 2 epochs
    log_interval_steps=(len(train_dataset.file_list) // 80) * 7,  # interval between training-log step lines

    pretrain_weights='IMAGENET',
    # pretrain_weights (str or None): a '.pdparams' path loads weights from that file;
    # the string 'IMAGENET' downloads weights pretrained on the ImageNet dataset;
    # None trains from scratch. Default: 'IMAGENET'.
    save_dir='output/ResNet101_vd_ssld',
    use_vdl=False)
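The warm-up and logging intervals passed to model.train are derived from the number of steps per epoch. With the 1980 training samples reported in the split log and train_batch_size=80, they work out as follows (plain arithmetic, consistent with the "Step=24/24" entries and the step-level [TRAIN] lines appearing roughly every 7 epochs in the log below):

```python
num_train_samples = 1980  # "Train samples: 1980" from the dataset split log
batch_size = 80

steps_per_epoch = num_train_samples // batch_size  # 24 steps per epoch
warmup_steps = steps_per_epoch * 6                 # 144 steps = 6 epochs of warm-up
log_interval_steps = steps_per_epoch * 7           # 168 steps between step-level log lines

print(steps_per_epoch, warmup_steps, log_interval_steps)  # 24 144 168
```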
2021-12-03 17:36:57 [INFO]	Loading pretrained model from output/ResNet101_vd_ssld/pretrain/ResNet101_vd_ssld_pretrained.pdparams
2021-12-03 17:36:59 [WARNING]	[SKIP] Shape of pretrained params out.weight doesn't match.(Pretrained: [2048, 1000], Actual: [2048, 12])
2021-12-03 17:36:59 [WARNING]	[SKIP] Shape of pretrained params out.bias doesn't match.(Pretrained: [1000], Actual: [12])
2021-12-03 17:36:59 [INFO]	There are 530/532 variables loaded into ResNet101_vd_ssld.
2021-12-03 17:37:28 [INFO]	[TRAIN] Epoch 1 finished, loss=2.4290602, acc1=0.15781249, acc5=0.5880208 .
2021-12-03 17:37:56 [INFO]	[TRAIN] Epoch 2 finished, loss=1.437159, acc1=0.61041665, acc5=0.9546874 .
2021-12-03 17:37:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:37:58 [INFO]	[EVAL] Finished, Epoch=2, acc1=0.870833, acc5=1.000000 .
2021-12-03 17:37:59 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:37:59 [INFO]	Current evaluated best model on eval_dataset is epoch_2, acc1=0.8708333373069763
2021-12-03 17:38:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_2.
2021-12-03 17:38:28 [INFO]	[TRAIN] Epoch 3 finished, loss=0.59905374, acc1=0.7947917, acc5=0.98802084 .
2021-12-03 17:38:56 [INFO]	[TRAIN] Epoch 4 finished, loss=0.4139061, acc1=0.8567708, acc5=0.99062496 .
2021-12-03 17:38:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:38:59 [INFO]	[EVAL] Finished, Epoch=4, acc1=0.920833, acc5=1.000000 .
2021-12-03 17:39:00 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:39:00 [INFO]	Current evaluated best model on eval_dataset is epoch_4, acc1=0.9208333492279053
2021-12-03 17:39:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_4.
2021-12-03 17:39:29 [INFO]	[TRAIN] Epoch 5 finished, loss=0.47607446, acc1=0.8328125, acc5=0.9906251 .
2021-12-03 17:39:57 [INFO]	[TRAIN] Epoch 6 finished, loss=0.45627102, acc1=0.8411458, acc5=0.98281246 .
2021-12-03 17:39:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:40:00 [INFO]	[EVAL] Finished, Epoch=6, acc1=0.954167, acc5=1.000000 .
2021-12-03 17:40:01 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:40:01 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:40:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_6.
2021-12-03 17:40:29 [INFO]	[TRAIN] Epoch=7/420, Step=24/24, loss=0.493965, acc1=0.825000, acc5=0.937500, lr=0.025000, time_each_step=1.17s, eta=3:25:41
2021-12-03 17:40:29 [INFO]	[TRAIN] Epoch 7 finished, loss=0.4299587, acc1=0.8505208, acc5=0.9869792 .
2021-12-03 17:40:58 [INFO]	[TRAIN] Epoch 8 finished, loss=0.45664763, acc1=0.8338542, acc5=0.98854166 .
2021-12-03 17:40:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:41:00 [INFO]	[EVAL] Finished, Epoch=8, acc1=0.920833, acc5=1.000000 .
2021-12-03 17:41:00 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:41:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_8.
2021-12-03 17:41:29 [INFO]	[TRAIN] Epoch 9 finished, loss=0.4346855, acc1=0.85312504, acc5=0.99010414 .
2021-12-03 17:41:57 [INFO]	[TRAIN] Epoch 10 finished, loss=0.44030318, acc1=0.8473959, acc5=0.9916666 .
2021-12-03 17:41:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:42:00 [INFO]	[EVAL] Finished, Epoch=10, acc1=0.879167, acc5=1.000000 .
2021-12-03 17:42:00 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:42:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_10.
2021-12-03 17:42:29 [INFO]	[TRAIN] Epoch 11 finished, loss=0.3777156, acc1=0.87343746, acc5=0.9911458 .
2021-12-03 17:42:57 [INFO]	[TRAIN] Epoch 12 finished, loss=0.3166206, acc1=0.8854167, acc5=0.9916666 .
2021-12-03 17:42:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:43:00 [INFO]	[EVAL] Finished, Epoch=12, acc1=0.941667, acc5=1.000000 .
2021-12-03 17:43:00 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:43:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_12.
2021-12-03 17:43:29 [INFO]	[TRAIN] Epoch 13 finished, loss=0.2943517, acc1=0.89947915, acc5=0.99062496 .
2021-12-03 17:43:57 [INFO]	[TRAIN] Epoch=14/420, Step=24/24, loss=0.201405, acc1=0.950000, acc5=1.000000, lr=0.025000, time_each_step=1.17s, eta=3:18:27
2021-12-03 17:43:57 [INFO]	[TRAIN] Epoch 14 finished, loss=0.3012723, acc1=0.8953125, acc5=0.9921875 .
2021-12-03 17:43:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:44:00 [INFO]	[EVAL] Finished, Epoch=14, acc1=0.900000, acc5=1.000000 .
2021-12-03 17:44:00 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:44:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_14.
2021-12-03 17:44:29 [INFO]	[TRAIN] Epoch 15 finished, loss=0.25963572, acc1=0.9083333, acc5=0.9953125 .
2021-12-03 17:44:57 [INFO]	[TRAIN] Epoch 16 finished, loss=0.27723664, acc1=0.90416664, acc5=0.99062496 .
2021-12-03 17:44:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:44:59 [INFO]	[EVAL] Finished, Epoch=16, acc1=0.937500, acc5=1.000000 .
2021-12-03 17:44:59 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:45:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_16.
2021-12-03 17:45:28 [INFO]	[TRAIN] Epoch 17 finished, loss=0.2988337, acc1=0.8984375, acc5=0.9927084 .
2021-12-03 17:45:57 [INFO]	[TRAIN] Epoch 18 finished, loss=0.27131745, acc1=0.9114583, acc5=0.9916666 .
2021-12-03 17:45:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:45:59 [INFO]	[EVAL] Finished, Epoch=18, acc1=0.941667, acc5=1.000000 .
2021-12-03 17:45:59 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:46:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_18.
2021-12-03 17:46:28 [INFO]	[TRAIN] Epoch 19 finished, loss=0.22694236, acc1=0.9260418, acc5=0.99322915 .
2021-12-03 17:46:56 [INFO]	[TRAIN] Epoch 20 finished, loss=0.24531086, acc1=0.91718745, acc5=0.9942708 .
2021-12-03 17:46:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:46:59 [INFO]	[EVAL] Finished, Epoch=20, acc1=0.916667, acc5=1.000000 .
2021-12-03 17:46:59 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:47:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_20.
2021-12-03 17:47:28 [INFO]	[TRAIN] Epoch=21/420, Step=24/24, loss=0.111501, acc1=0.962500, acc5=1.000000, lr=0.025000, time_each_step=1.17s, eta=3:15:22
2021-12-03 17:47:28 [INFO]	[TRAIN] Epoch 21 finished, loss=0.2682513, acc1=0.9067709, acc5=0.9942708 .
2021-12-03 17:47:56 [INFO]	[TRAIN] Epoch 22 finished, loss=0.26808843, acc1=0.9083333, acc5=0.9937499 .
2021-12-03 17:47:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:47:59 [INFO]	[EVAL] Finished, Epoch=22, acc1=0.941667, acc5=1.000000 .
2021-12-03 17:47:59 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:47:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_22.
2021-12-03 17:48:28 [INFO]	[TRAIN] Epoch 23 finished, loss=0.21594548, acc1=0.9244792, acc5=0.9947917 .
2021-12-03 17:48:56 [INFO]	[TRAIN] Epoch 24 finished, loss=0.21632612, acc1=0.9244792, acc5=0.9927084 .
2021-12-03 17:48:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:48:58 [INFO]	[EVAL] Finished, Epoch=24, acc1=0.950000, acc5=1.000000 .
2021-12-03 17:48:58 [INFO]	Current evaluated best model on eval_dataset is epoch_6, acc1=0.9541666507720947
2021-12-03 17:48:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_24.
2021-12-03 17:49:27 [INFO]	[TRAIN] Epoch 25 finished, loss=0.23070288, acc1=0.92031246, acc5=0.9942708 .
2021-12-03 17:49:56 [INFO]	[TRAIN] Epoch 26 finished, loss=0.24922025, acc1=0.92083335, acc5=0.99010414 .
2021-12-03 17:49:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:49:58 [INFO]	[EVAL] Finished, Epoch=26, acc1=0.958333, acc5=1.000000 .
2021-12-03 17:49:59 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:49:59 [INFO]	Current evaluated best model on eval_dataset is epoch_26, acc1=0.9583333134651184
2021-12-03 17:49:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_26.
2021-12-03 17:50:28 [INFO]	[TRAIN] Epoch 27 finished, loss=0.22161298, acc1=0.9234374, acc5=0.99531245 .
2021-12-03 17:50:56 [INFO]	[TRAIN] Epoch=28/420, Step=24/24, loss=0.125783, acc1=0.950000, acc5=1.000000, lr=0.025000, time_each_step=1.17s, eta=3:14:20
2021-12-03 17:50:56 [INFO]	[TRAIN] Epoch 28 finished, loss=0.19467662, acc1=0.9317708, acc5=0.9921875 .
2021-12-03 17:50:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:50:59 [INFO]	[EVAL] Finished, Epoch=28, acc1=0.950000, acc5=1.000000 .
2021-12-03 17:50:59 [INFO]	Current evaluated best model on eval_dataset is epoch_26, acc1=0.9583333134651184
2021-12-03 17:50:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_28.
2021-12-03 17:51:28 [INFO]	[TRAIN] Epoch 29 finished, loss=0.21905367, acc1=0.92031246, acc5=0.9947917 .
2021-12-03 17:51:56 [INFO]	[TRAIN] Epoch 30 finished, loss=0.20450185, acc1=0.9322917, acc5=0.9927084 .
2021-12-03 17:51:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:51:58 [INFO]	[EVAL] Finished, Epoch=30, acc1=0.929167, acc5=1.000000 .
2021-12-03 17:51:58 [INFO]	Current evaluated best model on eval_dataset is epoch_26, acc1=0.9583333134651184
2021-12-03 17:51:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_30.
2021-12-03 17:52:27 [INFO]	[TRAIN] Epoch 31 finished, loss=0.21385586, acc1=0.9265625, acc5=0.99062496 .
2021-12-03 17:52:55 [INFO]	[TRAIN] Epoch 32 finished, loss=0.19574146, acc1=0.93072915, acc5=0.9958334 .
2021-12-03 17:52:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:52:58 [INFO]	[EVAL] Finished, Epoch=32, acc1=0.962500, acc5=1.000000 .
2021-12-03 17:52:59 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:52:59 [INFO]	Current evaluated best model on eval_dataset is epoch_32, acc1=0.9625000357627869
2021-12-03 17:52:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_32.
2021-12-03 17:53:28 [INFO]	[TRAIN] Epoch 33 finished, loss=0.21569063, acc1=0.92760414, acc5=0.9927084 .
2021-12-03 17:53:56 [INFO]	[TRAIN] Epoch 34 finished, loss=0.1684433, acc1=0.9411459, acc5=0.9953125 .
2021-12-03 17:53:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:53:59 [INFO]	[EVAL] Finished, Epoch=34, acc1=0.966667, acc5=1.000000 .
2021-12-03 17:53:59 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:53:59 [INFO]	Current evaluated best model on eval_dataset is epoch_34, acc1=0.9666666984558105
2021-12-03 17:54:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_34.
2021-12-03 17:54:28 [INFO]	[TRAIN] Epoch=35/420, Step=24/24, loss=0.213228, acc1=0.937500, acc5=1.000000, lr=0.025000, time_each_step=1.17s, eta=3:11:8
2021-12-03 17:54:28 [INFO]	[TRAIN] Epoch 35 finished, loss=0.19812012, acc1=0.9385417, acc5=0.9916666 .
2021-12-03 17:54:56 [INFO]	[TRAIN] Epoch 36 finished, loss=0.1941156, acc1=0.9333334, acc5=0.9947917 .
2021-12-03 17:54:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:54:59 [INFO]	[EVAL] Finished, Epoch=36, acc1=0.962500, acc5=1.000000 .
2021-12-03 17:54:59 [INFO]	Current evaluated best model on eval_dataset is epoch_34, acc1=0.9666666984558105
2021-12-03 17:55:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_36.
2021-12-03 17:55:28 [INFO]	[TRAIN] Epoch 37 finished, loss=0.22260733, acc1=0.9208333, acc5=0.9921875 .
2021-12-03 17:55:56 [INFO]	[TRAIN] Epoch 38 finished, loss=0.18473105, acc1=0.9385417, acc5=0.996875 .
2021-12-03 17:55:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:55:59 [INFO]	[EVAL] Finished, Epoch=38, acc1=0.970833, acc5=1.000000 .
2021-12-03 17:56:00 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:56:00 [INFO]	Current evaluated best model on eval_dataset is epoch_38, acc1=0.9708333015441895
2021-12-03 17:56:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_38.
2021-12-03 17:56:29 [INFO]	[TRAIN] Epoch 39 finished, loss=0.17182076, acc1=0.94374996, acc5=0.9942708 .
2021-12-03 17:56:58 [INFO]	[TRAIN] Epoch 40 finished, loss=0.18835612, acc1=0.93645835, acc5=0.9953125 .
2021-12-03 17:56:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:57:00 [INFO]	[EVAL] Finished, Epoch=40, acc1=0.975000, acc5=1.000000 .
2021-12-03 17:57:01 [INFO]	Model saved in output/ResNet101_vd_ssld/best_model.
2021-12-03 17:57:01 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 17:57:02 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_40.
2021-12-03 17:57:30 [INFO]	[TRAIN] Epoch 41 finished, loss=0.17025845, acc1=0.9421875, acc5=0.9942708 .
2021-12-03 17:57:58 [INFO]	[TRAIN] Epoch=42/420, Step=24/24, loss=0.217800, acc1=0.912500, acc5=1.000000, lr=0.025000, time_each_step=1.17s, eta=3:8:37
2021-12-03 17:57:59 [INFO]	[TRAIN] Epoch 42 finished, loss=0.1786012, acc1=0.9338541, acc5=0.99635416 .
2021-12-03 17:57:59 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:58:01 [INFO]	[EVAL] Finished, Epoch=42, acc1=0.962500, acc5=1.000000 .
2021-12-03 17:58:01 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 17:58:02 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_42.
2021-12-03 17:58:30 [INFO]	[TRAIN] Epoch 43 finished, loss=0.17234616, acc1=0.94427085, acc5=0.996875 .
2021-12-03 17:58:59 [INFO]	[TRAIN] Epoch 44 finished, loss=0.16636248, acc1=0.94843745, acc5=0.99583334 .
2021-12-03 17:58:59 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 17:59:01 [INFO]	[EVAL] Finished, Epoch=44, acc1=0.950000, acc5=1.000000 .
2021-12-03 17:59:01 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 17:59:02 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_44.
2021-12-03 17:59:30 [INFO]	[TRAIN] Epoch 45 finished, loss=0.17296259, acc1=0.9390624, acc5=0.99166673 .
2021-12-03 17:59:58 [INFO]	[TRAIN] Epoch 46 finished, loss=0.19328034, acc1=0.9369791, acc5=0.9937499 .
2021-12-03 17:59:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:00:01 [INFO]	[EVAL] Finished, Epoch=46, acc1=0.945833, acc5=1.000000 .
2021-12-03 18:00:01 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:00:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_46.
2021-12-03 18:00:30 [INFO]	[TRAIN] Epoch 47 finished, loss=0.15977891, acc1=0.95, acc5=0.9953125 .
2021-12-03 18:00:58 [INFO]	[TRAIN] Epoch 48 finished, loss=0.11736957, acc1=0.9609375, acc5=0.9963541 .
2021-12-03 18:00:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:01:00 [INFO]	[EVAL] Finished, Epoch=48, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:01:00 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:01:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_48.
2021-12-03 18:01:29 [INFO]	[TRAIN] Epoch=49/420, Step=24/24, loss=0.105271, acc1=0.975000, acc5=1.000000, lr=0.002500, time_each_step=1.17s, eta=3:1:16
2021-12-03 18:01:29 [INFO]	[TRAIN] Epoch 49 finished, loss=0.13760276, acc1=0.95520836, acc5=0.9942708 .
2021-12-03 18:01:57 [INFO]	[TRAIN] Epoch 50 finished, loss=0.11758989, acc1=0.9598958, acc5=0.9953125 .
2021-12-03 18:01:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:02:00 [INFO]	[EVAL] Finished, Epoch=50, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:02:00 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:02:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_50.
2021-12-03 18:02:28 [INFO]	[TRAIN] Epoch 51 finished, loss=0.095484234, acc1=0.96875, acc5=0.9973958 .
2021-12-03 18:02:57 [INFO]	[TRAIN] Epoch 52 finished, loss=0.10604599, acc1=0.96406245, acc5=0.9973958 .
2021-12-03 18:02:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:03:00 [INFO]	[EVAL] Finished, Epoch=52, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:03:00 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:03:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_52.
2021-12-03 18:03:29 [INFO]	[TRAIN] Epoch 53 finished, loss=0.10948511, acc1=0.96197915, acc5=0.9942708 .
2021-12-03 18:03:58 [INFO]	[TRAIN] Epoch 54 finished, loss=0.09607029, acc1=0.97343755, acc5=0.996875 .
2021-12-03 18:03:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:04:00 [INFO]	[EVAL] Finished, Epoch=54, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:04:00 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:04:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_54.
2021-12-03 18:04:30 [INFO]	[TRAIN] Epoch 55 finished, loss=0.0969007, acc1=0.9682291, acc5=0.9932292 .
2021-12-03 18:04:58 [INFO]	[TRAIN] Epoch=56/420, Step=24/24, loss=0.062564, acc1=0.987500, acc5=1.000000, lr=0.002500, time_each_step=1.17s, eta=2:59:13
2021-12-03 18:04:58 [INFO]	[TRAIN] Epoch 56 finished, loss=0.08416006, acc1=0.9729166, acc5=0.99895835 .
2021-12-03 18:04:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:05:01 [INFO]	[EVAL] Finished, Epoch=56, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:05:01 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:05:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_56.
2021-12-03 18:05:30 [INFO]	[TRAIN] Epoch 57 finished, loss=0.10191559, acc1=0.96458334, acc5=0.99635416 .
2021-12-03 18:05:58 [INFO]	[TRAIN] Epoch 58 finished, loss=0.10892256, acc1=0.95989585, acc5=0.9979167 .
2021-12-03 18:05:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:06:01 [INFO]	[EVAL] Finished, Epoch=58, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:06:01 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:06:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_58.
2021-12-03 18:06:30 [INFO]	[TRAIN] Epoch 59 finished, loss=0.082898095, acc1=0.97291666, acc5=0.9963541 .
2021-12-03 18:06:58 [INFO]	[TRAIN] Epoch 60 finished, loss=0.09491751, acc1=0.96718746, acc5=0.99583334 .
2021-12-03 18:06:58 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:07:00 [INFO]	[EVAL] Finished, Epoch=60, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:07:00 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:07:01 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_60.
2021-12-03 18:07:29 [INFO]	[TRAIN] Epoch 61 finished, loss=0.084152125, acc1=0.96875, acc5=0.996875 .
2021-12-03 18:07:57 [INFO]	[TRAIN] Epoch 62 finished, loss=0.08742928, acc1=0.96770835, acc5=0.996875 .
2021-12-03 18:07:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:08:00 [INFO]	[EVAL] Finished, Epoch=62, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:08:00 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:08:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_62.
2021-12-03 18:08:28 [INFO]	[TRAIN] Epoch=63/420, Step=24/24, loss=0.042869, acc1=0.975000, acc5=1.000000, lr=0.002500, time_each_step=1.17s, eta=2:54:35
2021-12-03 18:08:29 [INFO]	[TRAIN] Epoch 63 finished, loss=0.08022346, acc1=0.9739583, acc5=0.996875 .
2021-12-03 18:08:57 [INFO]	[TRAIN] Epoch 64 finished, loss=0.08210222, acc1=0.9739583, acc5=0.9953125 .
2021-12-03 18:08:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:08:59 [INFO]	[EVAL] Finished, Epoch=64, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:08:59 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:09:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_64.
2021-12-03 18:09:28 [INFO]	[TRAIN] Epoch 65 finished, loss=0.08569533, acc1=0.9692709, acc5=0.9973958 .
2021-12-03 18:09:56 [INFO]	[TRAIN] Epoch 66 finished, loss=0.08419883, acc1=0.9697917, acc5=0.99895835 .
2021-12-03 18:09:57 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:09:59 [INFO]	[EVAL] Finished, Epoch=66, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:09:59 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:10:00 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_66.
2021-12-03 18:10:28 [INFO]	[TRAIN] Epoch 67 finished, loss=0.09635818, acc1=0.9666667, acc5=0.9979167 .
2021-12-03 18:10:56 [INFO]	[TRAIN] Epoch 68 finished, loss=0.06674386, acc1=0.9786458, acc5=0.9979167 .
2021-12-03 18:10:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:10:58 [INFO]	[EVAL] Finished, Epoch=68, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:10:58 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:10:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_68.
2021-12-03 18:11:27 [INFO]	[TRAIN] Epoch 69 finished, loss=0.08619857, acc1=0.96875, acc5=0.9979167 .
2021-12-03 18:11:55 [INFO]	[TRAIN] Epoch=70/420, Step=24/24, loss=0.056595, acc1=0.987500, acc5=1.000000, lr=0.002500, time_each_step=1.17s, eta=2:50:38
2021-12-03 18:11:56 [INFO]	[TRAIN] Epoch 70 finished, loss=0.08336272, acc1=0.97239584, acc5=0.996875 .
2021-12-03 18:11:56 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:11:58 [INFO]	[EVAL] Finished, Epoch=70, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:11:58 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:11:59 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_70.
2021-12-03 18:12:27 [INFO]	[TRAIN] Epoch 71 finished, loss=0.091573544, acc1=0.9692709, acc5=0.996875 .
2021-12-03 18:12:55 [INFO]	[TRAIN] Epoch 72 finished, loss=0.09470719, acc1=0.9692709, acc5=0.9958334 .
2021-12-03 18:12:55 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:12:58 [INFO]	[EVAL] Finished, Epoch=72, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:12:58 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:12:58 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_72.
2021-12-03 18:13:27 [INFO]	[TRAIN] Epoch 73 finished, loss=0.0855863, acc1=0.971875, acc5=0.996875 .
2021-12-03 18:13:55 [INFO]	[TRAIN] Epoch 74 finished, loss=0.08923915, acc1=0.9703124, acc5=0.9973958 .
2021-12-03 18:13:55 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:13:57 [INFO]	[EVAL] Finished, Epoch=74, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:13:57 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:13:58 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_74.
2021-12-03 18:14:26 [INFO]	[TRAIN] Epoch 75 finished, loss=0.08202963, acc1=0.9723959, acc5=0.99635416 .
2021-12-03 18:14:54 [INFO]	[TRAIN] Epoch 76 finished, loss=0.06454754, acc1=0.9776042, acc5=0.9979167 .
2021-12-03 18:14:55 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:14:57 [INFO]	[EVAL] Finished, Epoch=76, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:14:57 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:14:58 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_76.
2021-12-03 18:15:26 [INFO]	[TRAIN] Epoch=77/420, Step=24/24, loss=0.095304, acc1=0.962500, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:47:24
2021-12-03 18:15:26 [INFO]	[TRAIN] Epoch 77 finished, loss=0.07232836, acc1=0.9750001, acc5=0.9979167 .
2021-12-03 18:15:54 [INFO]	[TRAIN] Epoch 78 finished, loss=0.0744181, acc1=0.97499996, acc5=0.9984376 .
2021-12-03 18:15:54 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:15:57 [INFO]	[EVAL] Finished, Epoch=78, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:15:57 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:15:57 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_78.
2021-12-03 18:16:26 [INFO]	[TRAIN] Epoch 79 finished, loss=0.082669795, acc1=0.9713542, acc5=0.996875 .
2021-12-03 18:16:54 [INFO]	[TRAIN] Epoch 80 finished, loss=0.07568556, acc1=0.9744792, acc5=0.996875 .
2021-12-03 18:16:54 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:16:56 [INFO]	[EVAL] Finished, Epoch=80, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:16:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:16:57 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_80.
2021-12-03 18:17:26 [INFO]	[TRAIN] Epoch 81 finished, loss=0.07554179, acc1=0.97239584, acc5=0.996875 .
2021-12-03 18:17:54 [INFO]	[TRAIN] Epoch 82 finished, loss=0.07668756, acc1=0.9713542, acc5=0.9994791 .
2021-12-03 18:17:54 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:17:56 [INFO]	[EVAL] Finished, Epoch=82, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:17:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:17:57 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_82.
2021-12-03 18:18:26 [INFO]	[TRAIN] Epoch 83 finished, loss=0.08131597, acc1=0.97291666, acc5=0.99895835 .
2021-12-03 18:18:54 [INFO]	[TRAIN] Epoch=84/420, Step=24/24, loss=0.105134, acc1=0.962500, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:44:43
2021-12-03 18:18:54 [INFO]	[TRAIN] Epoch 84 finished, loss=0.082646936, acc1=0.9713542, acc5=0.99895835 .
2021-12-03 18:18:54 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:18:56 [INFO]	[EVAL] Finished, Epoch=84, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:18:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:18:57 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_84.
2021-12-03 18:19:25 [INFO]	[TRAIN] Epoch 85 finished, loss=0.0857651, acc1=0.97187495, acc5=0.9979167 .
2021-12-03 18:19:54 [INFO]	[TRAIN] Epoch 86 finished, loss=0.083941184, acc1=0.9708333, acc5=0.996875 .
2021-12-03 18:19:54 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:19:56 [INFO]	[EVAL] Finished, Epoch=86, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:19:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:19:57 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_86.
2021-12-03 18:20:25 [INFO]	[TRAIN] Epoch 87 finished, loss=0.08497129, acc1=0.9734375, acc5=0.9973958 .
2021-12-03 18:20:53 [INFO]	[TRAIN] Epoch 88 finished, loss=0.06482745, acc1=0.97812504, acc5=0.9984376 .
2021-12-03 18:20:54 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:20:56 [INFO]	[EVAL] Finished, Epoch=88, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:20:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:20:57 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_88.
2021-12-03 18:21:25 [INFO]	[TRAIN] Epoch 89 finished, loss=0.08298701, acc1=0.9729166, acc5=0.9984376 .
2021-12-03 18:21:53 [INFO]	[TRAIN] Epoch 90 finished, loss=0.07249967, acc1=0.97812504, acc5=0.9979167 .
2021-12-03 18:21:53 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:21:56 [INFO]	[EVAL] Finished, Epoch=90, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:21:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:21:56 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_90.
2021-12-03 18:22:24 [INFO]	[TRAIN] Epoch=91/420, Step=24/24, loss=0.084362, acc1=0.962500, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:41:3
2021-12-03 18:22:25 [INFO]	[TRAIN] Epoch 91 finished, loss=0.06896204, acc1=0.97760415, acc5=0.9973958 .
2021-12-03 18:22:53 [INFO]	[TRAIN] Epoch 92 finished, loss=0.0794282, acc1=0.97135425, acc5=0.9984376 .
2021-12-03 18:22:53 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:22:56 [INFO]	[EVAL] Finished, Epoch=92, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:22:56 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:22:56 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_92.
2021-12-03 18:23:25 [INFO]	[TRAIN] Epoch 93 finished, loss=0.09760138, acc1=0.96875, acc5=0.9953125 .
2021-12-03 18:23:53 [INFO]	[TRAIN] Epoch 94 finished, loss=0.0805056, acc1=0.97499996, acc5=0.9984376 .
2021-12-03 18:23:53 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:23:55 [INFO]	[EVAL] Finished, Epoch=94, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:23:55 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:23:56 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_94.
2021-12-03 18:24:24 [INFO]	[TRAIN] Epoch 95 finished, loss=0.07358048, acc1=0.97604173, acc5=0.9979167 .
2021-12-03 18:24:52 [INFO]	[TRAIN] Epoch 96 finished, loss=0.08592871, acc1=0.9713542, acc5=0.9984376 .
2021-12-03 18:24:53 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:24:55 [INFO]	[EVAL] Finished, Epoch=96, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:24:55 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:24:56 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_96.
2021-12-03 18:25:24 [INFO]	[TRAIN] Epoch 97 finished, loss=0.06888026, acc1=0.9776042, acc5=0.99635416 .
2021-12-03 18:25:52 [INFO]	[TRAIN] Epoch=98/420, Step=24/24, loss=0.074342, acc1=0.975000, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:37:56
2021-12-03 18:25:52 [INFO]	[TRAIN] Epoch 98 finished, loss=0.08625216, acc1=0.9770834, acc5=0.996875 .
2021-12-03 18:25:52 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:25:55 [INFO]	[EVAL] Finished, Epoch=98, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:25:55 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:25:55 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_98.
2021-12-03 18:26:24 [INFO]	[TRAIN] Epoch 99 finished, loss=0.07701122, acc1=0.971875, acc5=0.9984376 .
2021-12-03 18:26:52 [INFO]	[TRAIN] Epoch 100 finished, loss=0.0855381, acc1=0.9713542, acc5=0.99531245 .
2021-12-03 18:26:52 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:26:54 [INFO]	[EVAL] Finished, Epoch=100, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:26:54 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:26:55 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_100.
2021-12-03 18:27:23 [INFO]	[TRAIN] Epoch 101 finished, loss=0.07626576, acc1=0.97343755, acc5=0.9973958 .
2021-12-03 18:27:51 [INFO]	[TRAIN] Epoch 102 finished, loss=0.08012588, acc1=0.9708333, acc5=0.99635416 .
2021-12-03 18:27:52 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:27:54 [INFO]	[EVAL] Finished, Epoch=102, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:27:54 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:27:55 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_102.
2021-12-03 18:28:23 [INFO]	[TRAIN] Epoch 103 finished, loss=0.063586414, acc1=0.9776042, acc5=0.99895835 .
2021-12-03 18:28:51 [INFO]	[TRAIN] Epoch 104 finished, loss=0.05953465, acc1=0.9781249, acc5=0.99895835 .
2021-12-03 18:28:51 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:28:54 [INFO]	[EVAL] Finished, Epoch=104, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:28:54 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:28:54 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_104.
2021-12-03 18:29:22 [INFO]	[TRAIN] Epoch=105/420, Step=24/24, loss=0.142754, acc1=0.950000, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:33:48
2021-12-03 18:29:23 [INFO]	[TRAIN] Epoch 105 finished, loss=0.08107657, acc1=0.97499996, acc5=0.996875 .
2021-12-03 18:29:51 [INFO]	[TRAIN] Epoch 106 finished, loss=0.086397566, acc1=0.9723959, acc5=0.996875 .
2021-12-03 18:29:51 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:29:53 [INFO]	[EVAL] Finished, Epoch=106, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:29:53 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:29:54 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_106.
2021-12-03 18:30:22 [INFO]	[TRAIN] Epoch 107 finished, loss=0.077353776, acc1=0.9765625, acc5=0.9963541 .
2021-12-03 18:30:50 [INFO]	[TRAIN] Epoch 108 finished, loss=0.061453193, acc1=0.9786458, acc5=0.99895835 .
2021-12-03 18:30:51 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:30:53 [INFO]	[EVAL] Finished, Epoch=108, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:30:53 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:30:54 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_108.
2021-12-03 18:31:22 [INFO]	[TRAIN] Epoch 109 finished, loss=0.0749416, acc1=0.9770834, acc5=0.996875 .
2021-12-03 18:31:50 [INFO]	[TRAIN] Epoch 110 finished, loss=0.074059926, acc1=0.97291666, acc5=0.996875 .
2021-12-03 18:31:50 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:31:53 [INFO]	[EVAL] Finished, Epoch=110, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:31:53 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:31:53 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_110.
2021-12-03 18:32:22 [INFO]	[TRAIN] Epoch 111 finished, loss=0.090344794, acc1=0.9713542, acc5=0.99583334 .
2021-12-03 18:32:50 [INFO]	[TRAIN] Epoch=112/420, Step=24/24, loss=0.046595, acc1=0.987500, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:30:20
2021-12-03 18:32:50 [INFO]	[TRAIN] Epoch 112 finished, loss=0.08876576, acc1=0.9697917, acc5=0.9979167 .
2021-12-03 18:32:50 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:32:52 [INFO]	[EVAL] Finished, Epoch=112, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:32:52 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:32:53 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_112.
2021-12-03 18:33:21 [INFO]	[TRAIN] Epoch 113 finished, loss=0.06420311, acc1=0.97708327, acc5=0.9973958 .
2021-12-03 18:33:49 [INFO]	[TRAIN] Epoch 114 finished, loss=0.070479505, acc1=0.9765625, acc5=0.99895835 .
2021-12-03 18:33:49 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:33:52 [INFO]	[EVAL] Finished, Epoch=114, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:33:52 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:33:52 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_114.
2021-12-03 18:34:21 [INFO]	[TRAIN] Epoch 115 finished, loss=0.07192231, acc1=0.97499996, acc5=0.9979167 .
2021-12-03 18:34:49 [INFO]	[TRAIN] Epoch 116 finished, loss=0.07107488, acc1=0.9739583, acc5=0.9984376 .
2021-12-03 18:34:49 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:34:52 [INFO]	[EVAL] Finished, Epoch=116, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:34:52 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:34:52 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_116.
2021-12-03 18:35:21 [INFO]	[TRAIN] Epoch 117 finished, loss=0.07995339, acc1=0.9734375, acc5=0.9973958 .
2021-12-03 18:35:49 [INFO]	[TRAIN] Epoch 118 finished, loss=0.09627076, acc1=0.9671876, acc5=0.9963541 .
2021-12-03 18:35:49 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:35:51 [INFO]	[EVAL] Finished, Epoch=118, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:35:51 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:35:52 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_118.
2021-12-03 18:36:20 [INFO]	[TRAIN] Epoch=119/420, Step=24/24, loss=0.071300, acc1=0.975000, acc5=1.000000, lr=0.000250, time_each_step=1.17s, eta=2:27:12
2021-12-03 18:36:20 [INFO]	[TRAIN] Epoch 119 finished, loss=0.07593992, acc1=0.97395843, acc5=0.996875 .
2021-12-03 18:36:48 [INFO]	[TRAIN] Epoch 120 finished, loss=0.07943519, acc1=0.9729166, acc5=0.99895835 .
2021-12-03 18:36:48 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:36:51 [INFO]	[EVAL] Finished, Epoch=120, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:36:51 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:36:51 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_120.
2021-12-03 18:37:19 [INFO]	[TRAIN] Epoch 121 finished, loss=0.082146846, acc1=0.97239584, acc5=0.996875 .
2021-12-03 18:37:48 [INFO]	[TRAIN] Epoch 122 finished, loss=0.06290797, acc1=0.9796875, acc5=0.9973958 .
2021-12-03 18:37:48 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:37:50 [INFO]	[EVAL] Finished, Epoch=122, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:37:50 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:37:51 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_122.
2021-12-03 18:38:19 [INFO]	[TRAIN] Epoch 123 finished, loss=0.07908613, acc1=0.9734375, acc5=0.9973958 .
2021-12-03 18:38:47 [INFO]	[TRAIN] Epoch 124 finished, loss=0.055846896, acc1=0.9802084, acc5=0.99895835 .
2021-12-03 18:38:47 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:38:50 [INFO]	[EVAL] Finished, Epoch=124, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:38:50 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:38:50 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_124.
2021-12-03 18:39:19 [INFO]	[TRAIN] Epoch 125 finished, loss=0.077188335, acc1=0.9713542, acc5=0.99895835 .
2021-12-03 18:39:47 [INFO]	[TRAIN] Epoch=126/420, Step=24/24, loss=0.125211, acc1=0.975000, acc5=0.987500, lr=0.000025, time_each_step=1.17s, eta=2:23:12
2021-12-03 18:39:47 [INFO]	[TRAIN] Epoch 126 finished, loss=0.08339673, acc1=0.9750001, acc5=0.9953125 .
2021-12-03 18:39:47 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:39:49 [INFO]	[EVAL] Finished, Epoch=126, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:39:49 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:39:50 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_126.
2021-12-03 18:40:18 [INFO]	[TRAIN] Epoch 127 finished, loss=0.07330363, acc1=0.97552085, acc5=0.9984376 .
2021-12-03 18:40:47 [INFO]	[TRAIN] Epoch 128 finished, loss=0.08519297, acc1=0.97291666, acc5=0.9963541 .
2021-12-03 18:40:47 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:40:49 [INFO]	[EVAL] Finished, Epoch=128, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:40:49 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:40:50 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_128.
2021-12-03 18:41:18 [INFO]	[TRAIN] Epoch 129 finished, loss=0.067579895, acc1=0.9786458, acc5=0.9973958 .
2021-12-03 18:41:46 [INFO]	[TRAIN] Epoch 130 finished, loss=0.069853514, acc1=0.9786458, acc5=0.9973958 .
2021-12-03 18:41:46 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:41:48 [INFO]	[EVAL] Finished, Epoch=130, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:41:48 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:41:49 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_130.
2021-12-03 18:42:17 [INFO]	[TRAIN] Epoch 131 finished, loss=0.07824853, acc1=0.97499996, acc5=0.9984376 .
2021-12-03 18:42:45 [INFO]	[TRAIN] Epoch 132 finished, loss=0.06796416, acc1=0.9776042, acc5=0.996875 .
2021-12-03 18:42:46 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:42:48 [INFO]	[EVAL] Finished, Epoch=132, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:42:48 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:42:49 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_132.
2021-12-03 18:43:17 [INFO]	[TRAIN] Epoch=133/420, Step=24/24, loss=0.003387, acc1=1.000000, acc5=1.000000, lr=0.000025, time_each_step=1.17s, eta=2:19:49
2021-12-03 18:43:17 [INFO]	[TRAIN] Epoch 133 finished, loss=0.0832173, acc1=0.9723959, acc5=0.996875 .
2021-12-03 18:43:45 [INFO]	[TRAIN] Epoch 134 finished, loss=0.0771009, acc1=0.97499996, acc5=1.0 .
2021-12-03 18:43:45 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:43:47 [INFO]	[EVAL] Finished, Epoch=134, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:43:47 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:43:48 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_134.
2021-12-03 18:44:16 [INFO]	[TRAIN] Epoch 135 finished, loss=0.06486031, acc1=0.97812504, acc5=0.99843746 .
2021-12-03 18:44:44 [INFO]	[TRAIN] Epoch 136 finished, loss=0.0722585, acc1=0.97760415, acc5=0.9984376 .
2021-12-03 18:44:44 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:44:47 [INFO]	[EVAL] Finished, Epoch=136, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:44:47 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:44:47 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_136.
2021-12-03 18:45:16 [INFO]	[TRAIN] Epoch 137 finished, loss=0.06451393, acc1=0.9781249, acc5=0.9984376 .
2021-12-03 18:45:44 [INFO]	[TRAIN] Epoch 138 finished, loss=0.06223954, acc1=0.98020834, acc5=0.9994791 .
2021-12-03 18:45:44 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:45:46 [INFO]	[EVAL] Finished, Epoch=138, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:45:46 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:45:47 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_138.
2021-12-03 18:46:15 [INFO]	[TRAIN] Epoch 139 finished, loss=0.08269531, acc1=0.9723959, acc5=0.99635416 .
2021-12-03 18:46:43 [INFO]	[TRAIN] Epoch=140/420, Step=24/24, loss=0.021888, acc1=1.000000, acc5=1.000000, lr=0.000025, time_each_step=1.16s, eta=2:16:23
2021-12-03 18:46:43 [INFO]	[TRAIN] Epoch 140 finished, loss=0.07022292, acc1=0.9750001, acc5=0.99895835 .
2021-12-03 18:46:43 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:46:46 [INFO]	[EVAL] Finished, Epoch=140, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:46:46 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:46:46 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_140.
2021-12-03 18:47:14 [INFO]	[TRAIN] Epoch 141 finished, loss=0.06855821, acc1=0.97499996, acc5=0.9984376 .
2021-12-03 18:47:42 [INFO]	[TRAIN] Epoch 142 finished, loss=0.0685357, acc1=0.9744792, acc5=0.9979167 .
2021-12-03 18:47:43 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:47:45 [INFO]	[EVAL] Finished, Epoch=142, acc1=0.966667, acc5=1.000000 .
2021-12-03 18:47:45 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:47:46 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_142.
2021-12-03 18:48:14 [INFO]	[TRAIN] Epoch 143 finished, loss=0.074797966, acc1=0.9770834, acc5=0.996875 .
2021-12-03 18:48:42 [INFO]	[TRAIN] Epoch 144 finished, loss=0.076525494, acc1=0.9750001, acc5=0.9963541 .
2021-12-03 18:48:42 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:48:45 [INFO]	[EVAL] Finished, Epoch=144, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:48:45 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:48:45 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_144.
2021-12-03 18:49:14 [INFO]	[TRAIN] Epoch 145 finished, loss=0.06800219, acc1=0.97760415, acc5=0.9984376 .
2021-12-03 18:49:42 [INFO]	[TRAIN] Epoch 146 finished, loss=0.07622688, acc1=0.9765625, acc5=0.9979167 .
2021-12-03 18:49:42 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:49:44 [INFO]	[EVAL] Finished, Epoch=146, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:49:44 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:49:45 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_146.
2021-12-03 18:50:13 [INFO]	[TRAIN] Epoch=147/420, Step=24/24, loss=0.038031, acc1=1.000000, acc5=1.000000, lr=0.000025, time_each_step=1.17s, eta=2:13:5
2021-12-03 18:50:13 [INFO]	[TRAIN] Epoch 147 finished, loss=0.06629787, acc1=0.98020834, acc5=0.9979167 .
2021-12-03 18:50:41 [INFO]	[TRAIN] Epoch 148 finished, loss=0.06958797, acc1=0.9770834, acc5=0.9973958 .
2021-12-03 18:50:42 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:50:44 [INFO]	[EVAL] Finished, Epoch=148, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:50:44 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:50:44 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_148.
2021-12-03 18:51:13 [INFO]	[TRAIN] Epoch 149 finished, loss=0.06888991, acc1=0.9765625, acc5=0.99895835 .
2021-12-03 18:51:41 [INFO]	[TRAIN] Epoch 150 finished, loss=0.0645605, acc1=0.9802084, acc5=0.9979167 .
2021-12-03 18:51:41 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:51:43 [INFO]	[EVAL] Finished, Epoch=150, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:51:43 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:51:44 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_150.
2021-12-03 18:52:12 [INFO]	[TRAIN] Epoch 151 finished, loss=0.068414964, acc1=0.9770834, acc5=0.99895835 .
2021-12-03 18:52:40 [INFO]	[TRAIN] Epoch 152 finished, loss=0.058896035, acc1=0.9822917, acc5=0.9973958 .
2021-12-03 18:52:41 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:52:43 [INFO]	[EVAL] Finished, Epoch=152, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:52:43 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:52:44 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_152.
2021-12-03 18:53:12 [INFO]	[TRAIN] Epoch 153 finished, loss=0.06336367, acc1=0.9817708, acc5=0.99895835 .
2021-12-03 18:53:40 [INFO]	[TRAIN] Epoch=154/420, Step=24/24, loss=0.132620, acc1=0.962500, acc5=0.962500, lr=0.000025, time_each_step=1.17s, eta=2:9:34
2021-12-03 18:53:40 [INFO]	[TRAIN] Epoch 154 finished, loss=0.083626874, acc1=0.9734375, acc5=0.99583334 .
2021-12-03 18:53:40 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:53:42 [INFO]	[EVAL] Finished, Epoch=154, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:53:42 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:53:43 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_154.
2021-12-03 18:54:11 [INFO]	[TRAIN] Epoch 155 finished, loss=0.07942876, acc1=0.9692709, acc5=0.9953125 .
2021-12-03 18:54:39 [INFO]	[TRAIN] Epoch 156 finished, loss=0.07959085, acc1=0.9739583, acc5=0.99635416 .
2021-12-03 18:54:40 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:54:42 [INFO]	[EVAL] Finished, Epoch=156, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:54:42 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:54:42 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_156.
2021-12-03 18:55:11 [INFO]	[TRAIN] Epoch 157 finished, loss=0.06775691, acc1=0.98072916, acc5=0.996875 .
2021-12-03 18:55:39 [INFO]	[TRAIN] Epoch 158 finished, loss=0.070488565, acc1=0.9765625, acc5=0.9973958 .
2021-12-03 18:55:39 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:55:41 [INFO]	[EVAL] Finished, Epoch=158, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:55:41 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:55:42 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_158.
2021-12-03 18:56:10 [INFO]	[TRAIN] Epoch 159 finished, loss=0.057524443, acc1=0.9765625, acc5=0.9994791 .
2021-12-03 18:56:38 [INFO]	[TRAIN] Epoch 160 finished, loss=0.058315247, acc1=0.9786458, acc5=0.9994791 .
2021-12-03 18:56:39 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:56:41 [INFO]	[EVAL] Finished, Epoch=160, acc1=0.975000, acc5=1.000000 .
2021-12-03 18:56:41 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:56:41 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_160.
2021-12-03 18:57:09 [INFO]	[TRAIN] Epoch=161/420, Step=24/24, loss=0.149564, acc1=0.937500, acc5=0.987500, lr=0.000025, time_each_step=1.17s, eta=2:6:7
2021-12-03 18:57:10 [INFO]	[TRAIN] Epoch 161 finished, loss=0.06980514, acc1=0.9750001, acc5=0.99843746 .
2021-12-03 18:57:38 [INFO]	[TRAIN] Epoch 162 finished, loss=0.082333535, acc1=0.971875, acc5=0.996875 .
2021-12-03 18:57:38 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:57:40 [INFO]	[EVAL] Finished, Epoch=162, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:57:40 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:57:41 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_162.
2021-12-03 18:58:09 [INFO]	[TRAIN] Epoch 163 finished, loss=0.069570266, acc1=0.9791667, acc5=0.99895835 .
2021-12-03 18:58:37 [INFO]	[TRAIN] Epoch 164 finished, loss=0.070096664, acc1=0.97760415, acc5=0.9979167 .
2021-12-03 18:58:37 [INFO]	Start to evaluate(total_samples=180, total_steps=3)...
2021-12-03 18:58:40 [INFO]	[EVAL] Finished, Epoch=164, acc1=0.970833, acc5=1.000000 .
2021-12-03 18:58:40 [INFO]	Current evaluated best model on eval_dataset is epoch_40, acc1=0.9749999642372131
2021-12-03 18:58:40 [INFO]	Model saved in output/ResNet101_vd_ssld/epoch_164.



---------------------------------------------------------------------------

KeyboardInterrupt                         Traceback (most recent call last)

/tmp/ipykernel_2385/1199053666.py in <module>
     17     #若为None,则不使用预训练模型。默认为'IMAGENET'
     18     save_dir='output/ResNet101_vd_ssld',
---> 19     use_vdl=False)


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/classifier.py in train(self, num_epochs, train_dataset, train_batch_size, eval_dataset, optimizer, save_interval_epochs, log_interval_steps, save_dir, pretrain_weights, learning_rate, warmup_steps, warmup_start_lr, lr_decay_epochs, lr_decay_gamma, early_stop, early_stop_patience, use_vdl, resume_checkpoint)
    286             early_stop=early_stop,
    287             early_stop_patience=early_stop_patience,
--> 288             use_vdl=use_vdl)
    289 
    290     def quant_aware_train(self,


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/base.py in train_loop(self, num_epochs, train_dataset, train_batch_size, eval_dataset, save_interval_epochs, log_interval_steps, save_dir, ema, early_stop, early_stop_patience, use_vdl)
    331                     outputs = self.run(ddp_net, data, mode='train')
    332                 else:
--> 333                     outputs = self.run(self.net, data, mode='train')
    334                 loss = outputs['loss']
    335                 loss.backward()


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/classifier.py in run(self, net, inputs, mode)
    130         else:
    131             # mode == 'train'
--> 132             labels = inputs[1].reshape([-1, 1])
    133             loss = CELoss(class_dim=self.num_classes)
    134             loss = loss(net_out, inputs[1])


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/manipulation.py in reshape(x, shape, name)
   1580 
   1581     """
-> 1582     return paddle.fluid.layers.reshape(x=x, shape=shape, name=name)
   1583 
   1584 


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/nn.py in reshape(x, shape, actual_shape, act, inplace, name)
   6140                 for item in shape
   6141             ]
-> 6142             out, _ = core.ops.reshape2(x, None, 'shape', shape)
   6143         elif isinstance(shape, Variable):
   6144             shape.stop_gradient = True


KeyboardInterrupt: 

Training Techniques and Parameter Selection

Tuning Strategy

While training a network, the training-set and validation-set accuracies are typically printed every epoch; together they describe how the model performs on the two datasets. In general, a healthy state is one where training accuracy is slightly higher than, or roughly equal to, validation accuracy. If training accuracy is much higher than validation accuracy, the model has overfit this task, and more regularization should be added during training, e.g. increasing l2_decay, adding more data augmentation strategies, or enabling label_smoothing. If training accuracy is somewhat lower than validation accuracy, the model may be underfitting, and regularization should be weakened, e.g. decreasing l2_decay, using fewer augmentations, enlarging the image crop area, weakening image stretching transforms, or removing label_smoothing.
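In PaddleClas configs these knobs live in the Optimizer and Loss sections; a hedged sketch of where to turn them (field names follow the PaddleClas 2.x config layout, and the values are illustrative rather than the ones used in this project):

```yaml
Optimizer:
  name: Momentum
  momentum: 0.9
  regularizer:
    name: 'L2'
    coeff: 0.00010      # l2_decay: raise it to fight overfitting, lower it when underfitting

Loss:
  Train:
    - CELoss:
        weight: 1.0
        epsilon: 0.1    # label smoothing; delete this line to disable it
```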

Optimizer & Learning-Rate Selection

Learning-rate decay strategy:
A single fixed learning rate cannot be used to update the weights for the whole run, or the optimum will never be reached, so the learning rate must be adjusted during training. Early in training, the weights are still at their random initialization and the loss descends easily, so a relatively large learning rate can be set. Late in training, the weight parameters are already close to their optimal values, and a large learning rate can no longer home in on the optimum, so a smaller learning rate is needed.
The learning-rate curves of cosine_decay and piecewise_decay are shown in the figure below. It is easy to see that cosine_decay maintains a comparatively large learning rate throughout training, so it converges more slowly, but its final converged result is somewhat better than piecewise_decay's.

Warmup strategy:
Let the learning rate warm up first: early in training, instead of starting directly at the maximum learning rate, this work trains the network with a gradually increasing learning rate; once it reaches its peak, the decay schedules described above take over and decay it.

This work mostly optimizes with AdamW plus a cosine schedule; the relevant parameters are as follows:

Optimizer:
  name: AdamW
  beta1: 0.9
  beta2: 0.999
  epsilon: 1e-8
  weight_decay: 0.05
  no_weight_decay_name: absolute_pos_embed relative_position_bias_table .bias norm 
  one_dim_param_no_weight_decay: True
  lr:
    name: Cosine
    learning_rate: 3e-6
    eta_min: 1e-6
    warmup_epoch: 20
    warmup_start_lr: 1e-6
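As a sanity check on these values, the schedule they describe — linear warmup for 20 epochs, then cosine decay toward eta_min over the 420-epoch run — can be sketched in a few lines (the exact interpolation PaddleClas uses internally may differ slightly):

```python
import math

def lr_at_epoch(epoch, total_epochs=420, base_lr=3e-6, eta_min=1e-6,
                warmup_epoch=20, warmup_start_lr=1e-6):
    """Linear warmup followed by cosine decay, at per-epoch granularity."""
    if epoch < warmup_epoch:
        # Ramp linearly from warmup_start_lr up to base_lr
        return warmup_start_lr + (base_lr - warmup_start_lr) * epoch / warmup_epoch
    # Cosine decay from base_lr down to eta_min over the remaining epochs
    progress = (epoch - warmup_epoch) / (total_epochs - warmup_epoch)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))
```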

batch_size

batch_size determines how many samples are fed into the network per training step. When batch_size and the learning rate are scaled together linearly, convergence accuracy is almost unaffected. Since this work runs on AI Studio's premium-tier GPU environment, batch_size is made as large as conditions allow
(starting from 64; if an out-of-memory error occurs during training, batch_size is reduced).
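The batch-size/learning-rate relationship above is the usual linear scaling rule; a minimal sketch (the base values here are illustrative, not this project's settings):

```python
def scaled_lr(batch_size, base_lr=3e-6, base_batch=64):
    # Linear scaling rule: grow the learning rate in proportion to the batch size
    return base_lr * batch_size / base_batch
```

For example, doubling the batch from 64 to 128 doubles the learning rate.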

Model Prediction and Saving the Submission File

import paddlex as pdx

model = pdx.load_model('output/ResNet101_vd_ssld/epoch_40')  # load the best checkpoint
model.get_model_info()  # print model information
2021-12-03 18:59:58 [INFO]	Model[ResNet101_vd_ssld] loaded.





{'version': '2.0.0',
 'Model': 'ResNet101_vd_ssld',
 '_Attributes': {'model_type': 'classifier',
  'num_classes': 12,
  'labels': ['00',
   '01',
   '02',
   '03',
   '04',
   '05',
   '06',
   '07',
   '08',
   '09',
   '10',
   '11'],
  'fixed_input_shape': None,
  'eval_metrics': {'acc1': 0.9749999642372131}},
 '_init_params': {'num_classes': 12},
 'Transforms': [{'Resize': {'target_size': (410, 410),
    'interp': 'AREA',
    'keep_ratio': False}},
  {'CenterCrop': {'crop_size': 360}},
  {'Normalize': {'mean': [0.4848, 0.4435, 0.4023],
    'std': [0.2744, 0.2688, 0.2757],
    'min_val': [0, 0, 0],
    'max_val': [255.0, 255.0, 255.0],
    'is_scale': True}}],
 'completed_epochs': 0}
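For reference, the CenterCrop and Normalize steps recorded above can be reproduced outside PaddleX; a pure-Python sketch (the AREA resize to 410×410 is assumed to have been done already, and real code would use numpy arrays rather than nested lists):

```python
def center_crop(img, crop_size=360):
    # img: H x W grid of (r, g, b) pixels, assumed already resized to 410 x 410
    h, w = len(img), len(img[0])
    top, left = (h - crop_size) // 2, (w - crop_size) // 2
    return [row[left:left + crop_size] for row in img[top:top + crop_size]]

MEAN = (0.4848, 0.4435, 0.4023)  # per-channel statistics from the model info above
STD = (0.2744, 0.2688, 0.2757)

def normalize_pixel(px):
    # px: (r, g, b) in 0..255; scale to [0, 1] first (is_scale: True), then standardize
    return tuple((v / 255.0 - m) / s for v, m, s in zip(px, MEAN, STD))
```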

Generate the work/result.csv submission file.

import glob

import numpy as np
import pandas as pd
from PIL import Image

test_list = glob.glob('data/data10954/cat_12_test/*.jpg')
test_df = pd.DataFrame()  # empty frame to collect predictions

for i in range(len(test_list)):
    img = Image.open(test_list[i]).convert('RGB')
    img = np.asarray(img, dtype='float32')  # HWC float32 array

    result = model.predict(img[:, :, [2, 1, 0]])  # swap RGB -> BGR, then predict
    test_df.at[i, 'name'] = str(test_list[i]).split('/')[-1]  # keep the file name only
    test_df.at[i, 'cls'] = int(result[0]['category_id'])  # predicted class id

test_df[['name']] = test_df[['name']].astype(str)
test_df[['cls']] = test_df[['cls']].astype(int)
test_df.to_csv('work/result.csv', index=False, header=False)  # write the submission csv
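Because the grader rejects submissions that fail the format check, a small self-check of result.csv before uploading is cheap insurance (a sketch; the field rules follow the format given in the task statement):

```python
import csv

def check_result_csv(path='work/result.csv', num_classes=12):
    """Sanity-check a submission: each row is '<image name>,<class id>'
    with a .jpg name and an integer class id in [0, num_classes)."""
    with open(path, newline='') as f:
        rows = [r for r in csv.reader(f) if r]
    assert all(len(r) == 2 for r in rows), 'every row needs exactly two fields'
    assert all(r[0].endswith('.jpg') for r in rows), 'first field should be the image name'
    assert all(0 <= int(r[1]) < num_classes for r in rows), 'class id out of range'
    return len(rows)
```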


test_df.head()
                                   name  cls
0  veKcksx0IRrmqYWB3nAQhuGJHptTVf4U.jpg   10
1  vpfbsh14YXUBIWotjxqcD3GF6Oe52zM9.jpg    7
2  eC9s4uwdgtzZhX5V63xJQnFvN8GPSo1i.jpg   11
3  sNyJrfZAFvo5imu3HISBUG9W2l1e40Kk.jpg    8
4  vO7GNaLMqw92R4hjtVxfoegbd6ImAWEl.jpg    2

Model Evaluation and Improvement

Strengths of this work:

1. Tried and adopted a fairly wide range of data augmentations and presented them, with well-founded parameter choices (for the T.normalize() step, the statistics of this dataset were computed instead of using the default parameters).
2. During data processing, the class balance of the training set was checked: every cat class has exactly 180 images, so no extra rebalancing is needed.
3. Experimented with both the PaddleX and PaddleClas toolkits and became reasonably proficient in the basics of each.
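The class-balance check mentioned above takes only a few lines of standard-library Python (the annotation-file layout assumed here — one 'path label' pair per line — is hypothetical; adapt it to the actual split file):

```python
from collections import Counter

def class_counts(label_file):
    """Count images per class from an annotation list whose lines look
    like 'cat_12_train/xxx.jpg 3' (hypothetical layout)."""
    with open(label_file) as f:
        labels = [line.split()[-1] for line in f if line.strip()]
    return Counter(labels)
```

A balanced training set, as observed here, shows the same count (180) for every one of the 12 keys.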

Possible improvements:

1. Time permitting, fine-tune several models and apply a model-ensembling strategy to try to push accuracy further.
2. Try more combinations of data augmentations to find the best combination and parameters.
3. Of the models tried, ResNet performed best; its parameters and network structure could be tuned further for higher accuracy.
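The model-ensembling idea mentioned above can start as simply as averaging per-class probabilities from several models and taking the argmax; a minimal sketch:

```python
def ensemble_predict(prob_lists):
    """prob_lists: one per-class probability vector per model, all for the
    same image; returns the class id with the highest averaged probability."""
    n_models, n_classes = len(prob_lists), len(prob_lists[0])
    avg = [sum(p[k] for p in prob_lists) / n_models for k in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```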

While surveying related papers, a classmate majoring in big data recommended one (partial screenshots are shown below). Its authors also focus on the classification task and run their experiments on the ImageNet dataset.
Classic resizing usually aims at better perceptual quality, but the paper's authors argue that a resizer does not have to deliver better visual quality; it can instead improve task performance.
Its central idea is to try different approaches and establish criteria for building image quality assessment (IQA) models.

References

[1] PaddlePaddle Pilot Group freshman contest: cat-12 classification, NEUQ special-edition baseline [EB/OL]. (2021-09-21) [2021-11-17]. https://aistudio.baidu.com/aistudio/projectdetail/2777394?forkThirdPart=1

[2] PaddleClas documentation [EB/OL]. (2021-09-21) [2021-11-17]. https://paddleclas.readthedocs.io/zh_CN/latest

[3] GitHub: PaddlePaddle/PaddleClas ImageNet configs [EB/OL]. (2021-10-15) [2021-11-19]. https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/ppcls/configs/ImageNet

[4] The image mode parameter in PIL [EB/OL]. (2021-10-15) [2021-11-19]. https://blog.csdn.net/u013066730/article/details/102832597?spm=1001.2101.3001.6650.1&utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7ECTRLIST%7Edefault-1.no_search_link&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7ECTRLIST%7Edefault-1.no_search_link

[5] Improving accuracy with data augmentation [EB/OL]. (2020-06-15) [2021-11-29]. https://gitee.com/paddlepaddle/PaddleClas/blob/release/2.3/docs/zh_CN/models_training/train_strategy.md#7%E4%BD%BF%E7%94%A8%E6%95%B0%E6%8D%AE%E5%A2%9E%E5%B9%BF%E6%96%B9%E5%BC%8F%E6%8F%90%E5%8D%87%E7%B2%BE%E5%BA%A6
