Free GPU compute + a high-scoring open-source baseline for the final sprint
On January 15, 2020, the "2020 Haihua AI Challenge · Waste Sorting" officially launched. The competition is co-hosted by the Zhongguancun Haihua Institute for Frontier Information Technology and the Institute for Interdisciplinary Information Sciences at Tsinghua University, guided by the Administrative Committee of Haidian Park, Zhongguancun Science Park and the Urban Management Committee of Haidian District, Beijing, organized on the biendata competition platform, with Huawei NAIE cloud services providing the AI development environment. The competition aims to identify outstanding AI talent and incubate promising projects through an algorithm contest. It features two tracks, a middle-school track and a technical track, with a total prize pool of 300,000 RMB.
With the competition entering its final three weeks, we hope this baseline gives participants some inspiration for breaking through their score plateau. If you have not entered yet, feel free to fork this baseline and get hands-on.
Registration: https://www.biendata.com/competition/haihua_wastesorting_task2/
Free GPU compute
To take care of data downloads and model runtime environments, Huawei's Network AI Engine (NAIE) cloud service provides a one-stop AI development environment for this competition, including an SDK for downloading the data, an online IDE for developing and debugging algorithms, model training, and generation of prediction result files. The organizer, the Haihua Institute, has also published a guide to competing on the Huawei cloud platform; see https://www.biendata.com/competition/haihua_wastesorting_task2/data/
To start competing right away, register a free account on the official Huawei NAIE cloud website, download the data, and begin training your model.
High-scoring baseline
biendata, the competition organizer, has compiled this open-source baseline shared by participant Richard. It uses the object-detection models built into torchvision, which saves a great deal of model-building time and code and lets you focus on data analysis and image augmentation. Using the 3,000 complex samples provided by the organizer, split 8:2 into training and validation sets, with simple image augmentation and fine-tuning of a torchvision pre-trained model, the baseline reaches a leaderboard (LB) score of about 76. To push further, you can add the organizer's simple samples to the training set and apply more augmentation methods, which should raise the LB to about 85.
The full baseline code is attached at the end of this article; you can also scan the QR code below or follow the link to biendata.com to view the original baseline.
Competition page
Original baseline link
Baseline walkthrough
1 | Requirements |
python==3.6.0
torch==1.1.0
torchvision==0.3.0
2 | Pre-trained weights |
The pre-trained weights come from the official torchvision release. If your network connection is unreliable, you can also download them from Baidu Netdisk:
Link: https://pan.baidu.com/s/1mcVvzmRVZ4ey-Tp679YVhg
Access code: hvwp
# Import the required libraries
import json
import pandas as pd
import time
import numpy as np
import matplotlib.pyplot as plt
import os, sys, glob
import math
import cv2  # used below for drawing bounding boxes
from data_aug.data_aug import *
from PIL import Image
from tqdm import tqdm
import torch
import torchvision
import torchvision.transforms as T
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torch import autograd
%matplotlib inline
# Define global constants
TRAIN_JSON_PATH = 'train.json'
TRAIN_IMAGE_PATH = 'work/train/images'  # path to the training images
VAL_IMAGE_PATH = "work/val/images"  # path to the validation images
CKP_PATH = 'data/data22244/fasterrcnn.pth'  # pre-trained weights
SAVE_PATH = "result.json"  # path for the submission file
MODEL_PATH = "model.pth"  # path for saving the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
with open(TRAIN_JSON_PATH) as f:
train_json = json.load(f)
NUM_CLASS = len(train_json["categories"]) + 1  # +1 for the background class
id_to_label = dict([(item["id"], item["name"]) for item in train_json["categories"]])  # map category IDs to names
image_paths = glob.glob(os.path.join(TRAIN_IMAGE_PATH, "*.png"))[:100]  # take only the first 100 images for demonstration
imageid_of_annotation = list(set([i["image_id"] for i in train_json["annotations"]]))
image_paths = [
    i for i in image_paths if os.path.basename(i).split(".")[0] in imageid_of_annotation
]  # drop images whose image_id has no annotations
len_train = int(0.8 * len(image_paths))
train_image_paths = image_paths[:len_train]
eval_image_paths = image_paths[len_train:]
val_image_paths = glob.glob(os.path.join(VAL_IMAGE_PATH, "*.png"))[:10]  # take only the first 10 for demonstration
# Gather all annotations into a DataFrame for easier analysis
data = []
for idx, annotations in enumerate(train_json['annotations']):
data.append([annotations['image_id'],annotations['category_id'], idx])
data = pd.DataFrame(data, columns=['image_id','category_id', 'idx'])
data['category_name'] = data['category_id'].map(id_to_label)
print('Number of images in the training set: ', len(train_json['images']))
print('Number of annotated bounding boxes: ', len(train_json['annotations']))
print('Number of categories: ', len(train_json['categories']))
data.head()
Number of images in the training set:  2999
Number of annotated bounding boxes:  28160
Number of categories:  204
3 | Distribution of bounding boxes per image |
Each image contains at least 1 and at most 20 bounding boxes.
Images with 5 bounding boxes are the most common, and each image carries about 9 bounding boxes on average.
image_group = data.groupby('image_id').agg({'idx':'count'})['idx']
image_group.value_counts().sort_index().plot.bar()
image_group.describe()
count 2998.000000
mean 9.392929
std 4.376932
min 1.000000
25% 5.000000
50% 9.000000
75% 13.000000
max 20.000000
Name: idx, dtype: float64
4 | Occurrences per category |
Each category appears at least once and at most 897 times, about 197 times on average. The most frequent categories include metal tools, plastic packaging, and medicine bottles.
category_group = data.groupby('category_name').agg({'idx':'count'})['idx']
category_group.sort_index().plot.bar()
print(category_group.describe())
category_group.sort_values(ascending=False).reset_index().head(5)
count 143.000000
mean 196.923077
std 180.903331
min 1.000000
25% 60.000000
50% 147.000000
75% 287.000000
max 897.000000
Name: idx, dtype: float64
palette = (2 ** 11 - 1, 2 ** 15 - 1, 2 ** 20 - 1)
# Helper functions for drawing images with their bounding boxes
def compute_color_for_labels(label):
"""
Simple function that adds fixed color depending on the class
"""
color = [int((p * (label ** 2 - label + 1)) % 255) for p in palette]
return tuple(color)
def draw_boxes(img, bbox, identities=None, offset=(0,0)):
for i,box in enumerate(bbox):
x1,y1,x2,y2 = [int(i) for i in box]
x1 += offset[0]
x2 += offset[0]
y1 += offset[1]
y2 += offset[1]
# box text and bar
id = int(identities[i]) if identities is not None else 0
color = compute_color_for_labels(id)
label = '{}{:d}'.format("", id)
t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_PLAIN, 2 , 2)[0]
cv2.rectangle(img,(x1, y1),(x2,y2),color,3)
cv2.rectangle(img,(x1, y1),(x1+t_size[0]+3,y1+t_size[1]+4), color,-1)
cv2.putText(img,label,(x1,y1+t_size[1]+4), cv2.FONT_HERSHEY_PLAIN, 2, [255,255,255], 2)
return img
# Convert COCO's bounding-box format from [x, y, width, height] to [x1, y1, x2, y2]
def xywh_to_xyxy(points):
return [points[0], points[1], points[0] + points[2], points[1]+ points[3]]
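As a quick sanity check, the conversion and its inverse round-trip cleanly. The `xyxy_to_xywh` helper below is a hypothetical addition, mirroring the inline conversion the baseline performs later when writing the submission file:

```python
def xywh_to_xyxy(points):
    # COCO stores boxes as [x, y, width, height]; torchvision expects [x1, y1, x2, y2]
    return [points[0], points[1], points[0] + points[2], points[1] + points[3]]

def xyxy_to_xywh(points):
    # inverse conversion, needed when producing COCO-style predictions
    return [points[0], points[1], points[2] - points[0], points[3] - points[1]]

box = [10, 20, 30, 40]  # x=10, y=20, w=30, h=40
assert xywh_to_xyxy(box) == [10, 20, 40, 60]
assert xyxy_to_xywh(xywh_to_xyxy(box)) == box  # round-trip preserves the box
```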
5 | Image augmentation |
To make the most of the data, augmenting the training set is an effective technique. The code below shows an original image, then versions after rotation and translation. The key point of detection augmentation is that whenever the image is transformed, the bounding-box coordinates must be transformed along with it. Beyond the two methods shown here, many other augmentation methods exist.
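As a minimal illustration of that principle, consider a horizontal flip: mirroring the pixels means each box's x-coordinates must be mirrored as well. This sketch uses plain NumPy and is independent of the data_aug library used below:

```python
import numpy as np

def hflip_with_boxes(img, bboxes):
    """Horizontally flip an image and its [x1, y1, x2, y2] boxes together."""
    h, w = img.shape[:2]
    flipped = img[:, ::-1]                  # mirror the pixel columns
    out = bboxes.astype(np.float32).copy()
    out[:, [0, 2]] = w - bboxes[:, [2, 0]]  # new x1 = w - old x2, new x2 = w - old x1
    return flipped, out

img = np.zeros((100, 200, 3), dtype=np.uint8)   # toy image, 200 px wide
boxes = np.array([[10.0, 20.0, 50.0, 60.0]])
_, new_boxes = hflip_with_boxes(img, boxes)
# a box that started 10 px from the left edge now ends 10 px from the right edge
```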
show_image_path = image_paths[0]
show_image_id = os.path.basename(show_image_path).split('.')[0]
show_annotation = [i for i in train_json['annotations'] if i['image_id'] == show_image_id]
def show_original_image(show_image_path, show_annotation):
    show_bboxes = [xywh_to_xyxy(a['bbox']) for a in show_annotation]
    show_bboxes = np.array(show_bboxes).astype(np.float32)
    show_labels = [a['category_id'] for a in show_annotation]
    show_image = cv2.imread(show_image_path)
    show_image = cv2.cvtColor(show_image, cv2.COLOR_BGR2RGB)
    show_image = draw_boxes(show_image, show_bboxes, show_labels)
    print('Original image')
    plt.imshow(show_image)
show_original_image(show_image_path, show_annotation)
Original image
def show_translate_image(show_image_path, show_annotation):
    show_bboxes = [xywh_to_xyxy(a['bbox']) for a in show_annotation]
    show_bboxes = np.array(show_bboxes).astype(np.float32)
    show_labels = [a['category_id'] for a in show_annotation]
    show_image = cv2.imread(show_image_path)
    show_image = cv2.cvtColor(show_image, cv2.COLOR_BGR2RGB)
    show_image, show_bboxes = RandomTranslate(0.3, diff=True)(show_image.copy(), show_bboxes.copy())
    show_image = draw_boxes(show_image, show_bboxes, show_labels)
    print('Randomly translated image')
    plt.imshow(show_image)
show_translate_image(show_image_path, show_annotation)
Randomly translated image
def show_rotate_image(show_image_path, show_annotation):
    show_bboxes = [xywh_to_xyxy(a['bbox']) for a in show_annotation]
    show_bboxes = np.array(show_bboxes).astype(np.float32)
    show_labels = [a['category_id'] for a in show_annotation]
    show_image = cv2.imread(show_image_path)
    show_image = cv2.cvtColor(show_image, cv2.COLOR_BGR2RGB)
    show_image, show_bboxes = RandomRotate(20)(show_image.copy(), show_bboxes.copy())
    show_image = draw_boxes(show_image, show_bboxes, show_labels)
    print('Randomly rotated image')
    plt.imshow(show_image)
show_rotate_image(show_image_path, show_annotation)
Randomly rotated image
# Define the Faster R-CNN model, replacing the classification head for our number of classes
def get_model(num_classes):
    # load a detection model pre-trained on COCO (weights loaded manually below)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
pretrained=False, pretrained_backbone=False
)
model.load_state_dict(torch.load(CKP_PATH))
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
return model
# Define a PyTorch-compatible data-loading pipeline
def get_transform():
custom_transforms = []
custom_transforms.append(torchvision.transforms.ToTensor())
return torchvision.transforms.Compose(custom_transforms)
def cal_area(bbox):
return (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
class CreateDataset(torch.utils.data.Dataset):
def __init__(self, image_paths, coco_json, for_train=False):
self.image_paths = image_paths
self.coco_json = coco_json
        self.for_train = for_train  # whether to apply image augmentation
        # augmentation pipeline
self.aug_seq = Sequence(
[
RandomScale(0.1),
RandomTranslate(0.1),
]
)
def __getitem__(self, index):
image_id = os.path.basename(self.image_paths[index]).split(".")[0]
image_path = self.image_paths[index]
annotations = [
i for i in self.coco_json["annotations"] if i["image_id"] == image_id
]
img = Image.open(image_path)
        transform = T.Compose([T.ToTensor()])  # define the PyTorch transform
num_objs = len(annotations)
bboxes = [xywh_to_xyxy(a["bbox"]) for a in annotations]
areas = [cal_area(bbox) for bbox in bboxes]
bboxes = np.array(bboxes).astype(np.float32)
labels = [a["category_id"] for a in annotations]
if self.for_train:
img, bboxes = self.aug_seq(np.array(img).copy(), bboxes.copy())
img = transform(img) # Apply the transform to the image
boxes = torch.as_tensor(bboxes, dtype=torch.float32)
labels = torch.tensor(labels)
img_id = torch.tensor([index])
areas = torch.tensor(areas, dtype=torch.float32)
# Iscrowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
# Annotation is in dictionary format
my_annotation = {}
my_annotation["boxes"] = boxes
my_annotation["labels"] = labels
my_annotation["image_id"] = img_id
my_annotation["area"] = areas
my_annotation["iscrowd"] = iscrowd
return img, my_annotation
def __len__(self):
return len(self.image_paths)
def collate_fn(batch):
return tuple(zip(*batch))
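A custom `collate_fn` is needed because detection targets cannot be stacked into a single tensor: each image has a different number of boxes. The function simply regroups the batch into a tuple of images and a tuple of annotation dicts. A tiny illustration with dummy data:

```python
def collate_fn(batch):
    # regroup [(img1, ann1), (img2, ann2), ...] into ((img1, img2, ...), (ann1, ann2, ...))
    return tuple(zip(*batch))

batch = [("img_a", {"boxes": [1]}), ("img_b", {"boxes": [2, 3]})]
imgs, annotations = collate_fn(batch)
# imgs == ("img_a", "img_b"); annotations holds the two dicts, boxes left ragged
```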
def init_dataloader(image_paths, train_json, batch_size=2, for_train=False):
dataset = CreateDataset(image_paths, train_json, for_train)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=batch_size,
shuffle=True,
num_workers=0,
collate_fn=collate_fn,
)
return dataloader
train_loader = init_dataloader(train_image_paths, train_json, for_train=True)  # training set
eval_loader = init_dataloader(eval_image_paths, train_json)  # validation set
# Train the model
def train_epoch(model, train_loader, optimizer):
model.to(device)
pbar = tqdm(train_loader)
with autograd.detect_anomaly():
loss_train, loss_eval = [], []
bg_time = time.time()
model.train()
for batch_imgs, batch_annotations in pbar:
imgs = [img.to(device) for img in batch_imgs]
annotations = [
{k: v.to(device) for k, v in t.items()} for t in batch_annotations
]
loss_dict = model(imgs, annotations)
losses = sum(loss for loss in loss_dict.values())
optimizer.zero_grad()
losses.backward()
optimizer.step()
loss_train.append(losses.cpu().detach().numpy())
desc = "%.3f" % (sum(loss_train) / len(loss_train))
pbar.set_description(desc)
print('training loss = ', sum(loss_train) / len(loss_train))
return model
def train_process(train_loader, eval_loader):
model = get_model(num_classes=NUM_CLASS)
if os.path.exists(MODEL_PATH):
model.load_state_dict(torch.load(MODEL_PATH))
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
    for idx_epoch in range(1):
        print("training epoch %d" % idx_epoch)
        model = train_epoch(model, train_loader, optimizer)
        lr_scheduler.step()
        torch.save(model.state_dict(), MODEL_PATH + 'TEMP')
    return model
model = train_process(train_loader, eval_loader)
  0%|          | 0/42 [00:00<?, ?it/s]
training epoch 0
1.715: 100%|██████████| 42/42 [00:23<00:00, 1.82it/s]
training loss =  1.7150054659162248
# Generate predictions for the validation set and inspect their quality
def boxes_to_lines(preds, image_id):
r = []
for bbox, label, score in zip(
preds[0]["boxes"].cpu().detach().numpy(),
preds[0]["labels"].cpu().detach().numpy(),
preds[0]["scores"].cpu().detach().numpy(),
):
        # torchvision outputs boxes in xyxy format; convert back to COCO's xywh
xyxy = list(bbox)
xywh = [xyxy[0], xyxy[1], xyxy[2] - xyxy[0], xyxy[3]- xyxy[1]]
r.append(
{
"image_id": image_id,
"bbox": xywh,
"category_id": label,
"score": score,
}
)
return r
def output_result(image_paths):
result = []
model = get_model(num_classes=NUM_CLASS)
if os.path.exists(MODEL_PATH):
model.load_state_dict(torch.load(MODEL_PATH))
model.to(device)
model.eval()
for image_path in tqdm(image_paths):
img = Image.open(image_path)
image_id = os.path.basename(image_path).split(".")[0]
        transform = T.Compose([T.ToTensor()])  # define the PyTorch transform
img_tensor = transform(img) # Apply the transform to the image
preds = model([img_tensor.to(device)])
result += boxes_to_lines(preds, image_id)
bboxes, labels = list(zip(*[
(bbox, label)
for bbox, label in zip(
preds[0]["boxes"].cpu().detach().numpy(),
preds[0]["labels"].cpu().detach().numpy()) ]))
img = draw_boxes(np.array(img), bboxes, labels)
plt.imshow(img)
return result
result = output_result(val_image_paths)
100%|██████████| 10/10 [00:01<00:00, 7.11it/s]
# Save the results to a local JSON file; numpy types require a custom JSONEncoder
class MyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
else:
return super(MyEncoder, self).default(obj)
with open(SAVE_PATH, "w") as f:
json.dump(result, f, cls=MyEncoder)
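Why the custom encoder is needed: numpy scalars such as `np.float32` and `np.int64` are not JSON-serializable by default, so a plain `json.dumps` would raise `TypeError` on the prediction records. A standalone sketch:

```python
import json
import numpy as np

class MyEncoder(json.JSONEncoder):
    """Fall back to native Python types for numpy scalars and arrays."""
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

record = {"score": np.float32(0.5), "category_id": np.int64(3)}
json.dumps(record, cls=MyEncoder)  # works: '{"score": 0.5, "category_id": 3}'
# json.dumps(record) without cls=MyEncoder would raise TypeError
```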
---
Scan the QR code below to go straight to the competition page.