Retraining the DeepSORT Feature-Extraction Module with a Lightweight Model

Preface

I recently used DeepSORT for multi-object tracking. DeepSORT includes a ReID feature-extraction network, and without any compression this part runs in real time on a GPU just fine; on an edge board, however, compute is limited, so some model-compression work is needed. I simply replaced the original network with ShuffleNetV2-0.5 and retrained: with no significant loss of accuracy, the model shrank from 45 MB to 2.5 MB, which makes deploying to a hardware endpoint far easier. Having never touched DeepSORT before, the training looked complicated at first, but after most of a day of poking around it turned out to be nothing more than ordinary classification-network training.

The PyTorch implementation of DeepSORT used in this post: deep_sort

All of the reorganized code: ShuffleNet-deepsort

1. Data preparation

This uses the Market-1501 person re-identification dataset (Baidu Cloud link, extraction code: ku12). Because the training here is plain classification, the downloaded dataset first needs to be reorganized. The preparation script is below; just point it at your own dataset path before running:

import os
from shutil import copyfile

# You only need to change this line to your dataset download path
download_path = r'F:\BaiduNetdiskDownload\Market-1501-v15.09.15'          # change to your dataset path (raw string avoids backslash-escape surprises)

if not os.path.isdir(download_path):
    print('please change the download_path')

save_path = download_path + '/pytorch'
if not os.path.isdir(save_path):
    os.mkdir(save_path)
#-----------------------------------------
#query
query_path = download_path + '/query'
query_save_path = download_path + '/pytorch/query'
if not os.path.isdir(query_save_path):
    os.mkdir(query_save_path)

for root, dirs, files in os.walk(query_path, topdown=True):
    for name in files:
        if not name[-3:]=='jpg':
            continue
        ID  = name.split('_')
        src_path = query_path + '/' + name
        dst_path = query_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)

#-----------------------------------------
#multi-query
query_path = download_path + '/gt_bbox'
# for dukemtmc-reid, we do not need multi-query
if os.path.isdir(query_path):
    query_save_path = download_path + '/pytorch/multi-query'
    if not os.path.isdir(query_save_path):
        os.mkdir(query_save_path)

    for root, dirs, files in os.walk(query_path, topdown=True):
        for name in files:
            if not name[-3:]=='jpg':
                continue
            ID  = name.split('_')
            src_path = query_path + '/' + name
            dst_path = query_save_path + '/' + ID[0]
            if not os.path.isdir(dst_path):
                os.mkdir(dst_path)
            copyfile(src_path, dst_path + '/' + name)

#-----------------------------------------
#gallery
gallery_path = download_path + '/bounding_box_test'
gallery_save_path = download_path + '/pytorch/gallery'
if not os.path.isdir(gallery_save_path):
    os.mkdir(gallery_save_path)

for root, dirs, files in os.walk(gallery_path, topdown=True):
    for name in files:
        if not name[-3:]=='jpg':
            continue
        ID  = name.split('_')
        src_path = gallery_path + '/' + name
        dst_path = gallery_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)

#---------------------------------------
#train_all
train_path = download_path + '/bounding_box_train'
train_save_path = download_path + '/pytorch/train_all'
if not os.path.isdir(train_save_path):
    os.mkdir(train_save_path)

for root, dirs, files in os.walk(train_path, topdown=True):
    for name in files:
        if not name[-3:]=='jpg':
            continue
        ID  = name.split('_')
        src_path = train_path + '/' + name
        dst_path = train_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)


#---------------------------------------
#train_val
train_path = download_path + '/bounding_box_train'
train_save_path = download_path + '/pytorch/train'
val_save_path = download_path + '/pytorch/val'
if not os.path.isdir(train_save_path):
    os.mkdir(train_save_path)
    os.mkdir(val_save_path)

for root, dirs, files in os.walk(train_path, topdown=True):
    for name in files:
        if not name[-3:]=='jpg':
            continue
        ID  = name.split('_')
        src_path = train_path + '/' + name
        dst_path = train_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
            dst_path = val_save_path + '/' + ID[0]  #first image is used as val image
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)

When the script finishes you get the folder layout below; only train and val matter for training. At this point things should be fairly clear: this is nothing more than training an ordinary classification network as a person feature extractor.
(figure: resulting dataset folder structure)
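As a quick sanity check, the split folders can be summarized with a few lines of Python. This helper is illustrative and not part of the original repo; `data_root` is assumed to be the `pytorch/` folder produced by the preparation script above.

```python
import os

def summarize_split(split_dir):
    """Count identity folders and images inside one split directory."""
    ids = [d for d in os.listdir(split_dir)
           if os.path.isdir(os.path.join(split_dir, d))]
    n_imgs = sum(len(os.listdir(os.path.join(split_dir, d))) for d in ids)
    return len(ids), n_imgs

if __name__ == '__main__':
    # Assumed location: the pytorch/ folder created by the preparation script.
    data_root = r'F:\BaiduNetdiskDownload\Market-1501-v15.09.15\pytorch'
    for split in ('train', 'val', 'train_all', 'query', 'gallery'):
        split_dir = os.path.join(data_root, split)
        if os.path.isdir(split_dir):
            print(split, summarize_split(split_dir))
```

For Market-1501 the training split should come out at 751 identities, which matches the `num_classes=751` used later when building the network.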

2. Training

This step is much simpler. First prepare your model.py. I used ShuffleNetV2, so I created a shufflenetv2.py file under the deep folder containing the ShuffleNetV2 network definition; part of the code follows:

import torch
import torch.nn as nn

# InvertedResidual (the channel-split + shuffle block, as in torchvision's
# implementation) is defined earlier in the same file and omitted here.

class ShuffleNetV2(nn.Module):
    def __init__(self, stages_repeats, stages_out_channels, num_classes=1000, reid=False):
        super(ShuffleNetV2, self).__init__()

        if len(stages_repeats) != 3:
            raise ValueError('expected stages_repeats as list of 3 positive ints')
        if len(stages_out_channels) != 5:
            raise ValueError('expected stages_out_channels as list of 5 positive ints')
        self._stage_out_channels = stages_out_channels
        self.reid = reid

        input_channels = 3
        output_channels = self._stage_out_channels[0]
        self.conv1 = nn.Sequential(
            nn.Conv2d(input_channels, output_channels, 3, 2, 1, bias=False),
            nn.BatchNorm2d(output_channels),
            nn.ReLU(inplace=True),
        )
        input_channels = output_channels

        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=(1,2), padding=1)

        stage_names = ['stage{}'.format(i) for i in [2, 3, 4]]
        for name, repeats, output_channels in zip(
                stage_names, stages_repeats, self._stage_out_channels[1:]):
            seq = [InvertedResidual(input_channels, output_channels, 2)]
            for i in range(repeats - 1):
                seq.append(InvertedResidual(output_channels, output_channels, 1))
            setattr(self, name, nn.Sequential(*seq))
            input_channels = output_channels

        output_channels = self._stage_out_channels[-1]
        self.conv5 = nn.Sequential(
            nn.Conv2d(input_channels, output_channels, 1, 1, 0, bias=False),
            nn.BatchNorm2d(output_channels),
            nn.ReLU(inplace=True),
        )

        self.class_out = nn.Linear(output_channels, num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = self.conv5(x)
        x = x.mean([2, 3])  # globalpool
        if self.reid:
            x = x.div(x.norm(p=2,dim=1,keepdim=True))
            return x
        x = self.class_out(x)

        return x

Then simply import it in train.py and use it in place of the original Net; the part that needs changing is here:
(figure: the model-construction lines to modify in train.py)
Finally, run from the terminal:

python deep/train.py --data-dir F:\BaiduNetdiskDownload\Market-1501-v15.09.15\pytorch    # replace with your own dataset path

If all goes well you will see output like the following. I loaded no pretrained weights (everything was randomly initialized) and ran only 40 epochs, yet the accuracy is quite good, which is entirely acceptable given how small the model is.
(figure: training console output)
The resulting weight file is only 2.5 MB (I slightly modified the checkpoint-saving code). With quantization on top, the final model could come in under 1 MB, which is plenty small for an embedded device:
(figure: checkpoint file size)

3. Other experiments

Since this is just an ordinary classification task, and the cloned repo ships no dataset module of its own, why not write my own training script plus a dataset.py and train that way? So, following the standard classification training flow, I rewrote trainer.py and dataset.py:

import argparse
import os
import torch
import torch.nn as nn
import numpy as np
import torch.optim as optim

from deep.dataset import Datasets
from torch.utils.data import DataLoader
from tensorboardX import SummaryWriter
import global_settings as settings
from torch.optim.lr_scheduler import _LRScheduler
from deep.mobilenet import MobileNetv2
from deep.model import Net
from deep.ghost_net import ghost_net
from deep.ShuffleNetV2 import shufflenet_v2_x0_5

class WarmUpLR(_LRScheduler):
    """Warmup learning-rate scheduler.
    Args:
        optimizer: optimizer (e.g. SGD)
        total_iters: total iterations of the warmup phase
    """

    def __init__(self, optimizer, total_iters, last_epoch=-1):
        self.total_iters = total_iters
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        """we will use the first m batches, and set the learning
        rate to base_lr * m / total_iters
        """
        return [base_lr * self.last_epoch / (self.total_iters + 1e-8) for base_lr in self.base_lrs]



def train(epoch):       # run one training epoch
    net.train()

    loss_sum = 0.0
    correct = 0.0

    for batch_index,(images,labels) in enumerate(train_set):
        if epoch <= args.warm:           # warmup LR schedule for the first few epochs
            warmup_scheduler.step()
        images = images.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()
        outputs = net(images)        # forward pass

        outputs_pred = torch.softmax(outputs,1)
        _, preds = outputs_pred.max(1)
        correct += preds.eq(labels).sum()

        loss = loss_function(outputs,labels)  # compute loss
        loss.backward()
        optimizer.step()
        loss_sum += loss.item()

        n_iter = (epoch - 1) * len(train_set) + batch_index + 1

        last_layer = list(net.children())[-1]
        for name, para in last_layer.named_parameters():
            if 'weight' in name:
                writer.add_scalar('LastLayerGradients/grad_norm2_weights', para.grad.norm(), n_iter)
            if 'bias' in name:
                writer.add_scalar('LastLayerGradients/grad_norm2_bias', para.grad.norm(), n_iter)

        print('Training Epoch: {epoch} [{trained_samples}/{total_samples}]\tLoss: {:0.4f}\tLR: {:0.6f}'.format(
            loss.item(),
            optimizer.param_groups[0]['lr'],
            epoch=epoch,
            trained_samples=batch_index * args.b + len(images),
            total_samples=len(train_set.dataset)
        ))

        # update training loss for each iteration
        writer.add_scalar('Train/loss', loss.item(), n_iter)

    for name, param in net.named_parameters():
        layer, attr = os.path.splitext(name)
        attr = attr[1:]
        writer.add_histogram("{}/{}".format(layer, attr), param, epoch)

    return loss_sum/(len(train_set.dataset)/args.b),correct.float() / len(train_set.dataset)

def eval_training(epoch):         # evaluate on the test set
    net.eval()

    test_loss = 0.0  # cost function error
    correct = 0.0

    for (images, labels) in test_set:
        images = images.to(device)
        labels = labels.to(device)
        with torch.no_grad():
            outputs = net(images)
        loss = loss_function(outputs, labels)
        outputs = torch.softmax(outputs,1)
        test_loss += loss.item()
        _, preds = outputs.max(1)
        correct += preds.eq(labels).sum()

    print('Test set: Average loss: {:.4f}, Accuracy: {:.4f}'.format(
        test_loss / (len(test_set.dataset)/32),
        correct.float() / len(test_set.dataset)
    ))

    # add informations to tensorboard
    writer.add_scalar('Test/Average loss', test_loss / len(test_set.dataset), epoch)
    writer.add_scalar('Test/Accuracy', correct.float() / len(test_set.dataset), epoch)

    return test_loss / (len(test_set.dataset)/32),correct.float() / len(test_set.dataset)


def load_model(model_path,net,gpu_id=None):          # load pretrained weights
    if gpu_id is not None and isinstance(gpu_id, int) and torch.cuda.is_available():
        device = torch.device("cuda:{}".format(gpu_id))
    else:
        device = torch.device("cpu")
    if model_path is not None:

        pretrained_params = torch.load(model_path,map_location=device)

        pretrained_params= \
            {k: v for k, v in pretrained_params.items() if
             k in net.state_dict().keys() and net.state_dict()[k].numel() == v.numel()}
        net.load_state_dict(pretrained_params, strict=False)

    print('Device:', device)

if __name__=="__main__":

    parser = argparse.ArgumentParser()
    parser.add_argument('-net', type=str, default="shufflenet", help='net type')
    parser.add_argument('-gpu', type=bool, default=True, help='use gpu or not')
    parser.add_argument('-w', type=int, default=2, help='number of workers for dataloader')
    parser.add_argument('-b', type=int, default=32, help='batch size for dataloader')
    parser.add_argument('-s', type=bool, default=True, help='whether shuffle the dataset')
    parser.add_argument('-warm', type=int, default=3, help='warm up training phase')
    parser.add_argument('-lr', type=float, default=0.01, help='initial learning rate')
    args = parser.parse_args()

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # net = ghost_net(num_classes=751).to(device)
    net = shufflenet_v2_x0_5(num_classes=751).to(device)
    # net = MobileNetv2(num_classes=751).to(device)
    # net = Net(num_classes=751).to(device)

    # model_path = r"deep/checkpoint/mobilenet_v2-b0353104.pth"

    # load_model(model_path,net,0)

    train_path = r"F:\BaiduNetdiskDownload\Market-1501-v15.09.15\pytorch\train_all"            # change to your own dataset path
    test_path = r"F:\BaiduNetdiskDownload\Market-1501-v15.09.15\pytorch\val"

    train_set = Datasets(train_path,True)    # load datasets
    test_set = Datasets(test_path)

    train_set = DataLoader(train_set, shuffle=True, num_workers=min([os.cpu_count(), 32, 4]), batch_size=32)
    test_set = DataLoader(test_set, shuffle=True, num_workers=min([os.cpu_count(), 32, 4]), batch_size=32)

    loss_function = nn.CrossEntropyLoss()   # cross-entropy loss

    optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)
    train_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=settings.MILESTONES,
                                                     gamma=0.1)  # learning rate decay

    iter_per_epoch = len(train_set)
    warmup_scheduler = WarmUpLR(optimizer, iter_per_epoch * args.warm)
    checkpoint_path = os.path.join(settings.CHECKPOINT_PATH, args.net, 'bank')
    results_file = os.path.join(checkpoint_path ,'results.txt')

    # use tensorboard
    if not os.path.exists(settings.LOG_DIR):
        os.mkdir(settings.LOG_DIR)
    log_dir = os.path.join(
        settings.LOG_DIR, args.net, 'bank')
    print(log_dir)
    writer = SummaryWriter(log_dir)
    print("done")
    # writer.add_graph(net, Variable(input_tensor, requires_grad=True))

    # create the checkpoint folder
    if not os.path.exists(checkpoint_path):
        os.makedirs(checkpoint_path)

    best_path = os.path.join(checkpoint_path, '{type}.pth')
    checkpoint_path = os.path.join(checkpoint_path, '{net}-{epoch}-{type}.pth')

    file = open(results_file,"w")
    file.write("\t\tepoch\t\t\ttrain_loss\t\ttest_loss\t\ttrain_acc\t\ttest_acc\t\tval_acc\t\tbest_acc")
    file.write("\n")
    file.close()

    best_acc = 0.0
    for epoch in range(1, settings.EPOCH + 1):   # +1 so the final epoch actually runs
        if epoch > args.warm:
            train_scheduler.step(epoch)

        train_loss,train_acc = train(epoch)     # mean training loss and accuracy
        test_loss,acc = eval_training(epoch)    # test loss and accuracy
        print("test_set in epoch:{} acc is :{}".format(epoch,acc))
        print()
        val_acc = 0        # placeholder: no separate validation split is evaluated here

        # start to save best performance model after learning rate decay to 0.01
        if best_acc <= acc:        # save the best model by test accuracy

            # checkpoint = {
            #     'net_dict': net.state_dict(),
            #     'acc': acc,
            #     'epoch': epoch,
            # }
            # if not os.path.isdir('checkpoint'):
            #     os.mkdir('checkpoint')
            # torch.save(checkpoint, './checkpoint/ckpt.t8')

            torch.save(net.state_dict(), best_path.format(type='best'))
            best_acc = acc

        if not epoch % settings.SAVE_EPOCH:    # save a checkpoint every SAVE_EPOCH epochs
            torch.save(net.state_dict(), checkpoint_path.format(net=args.net, epoch=epoch, type='regular'))

        f = open(results_file,"a")
        f.write("\t\t{}/{}\t\t\t{:.4f}\t\t\t{:.4f}\t\t\t{:.4f}\t\t\t{:.4f}\t\t\t{:.4f}\t\t{:.4f}".format(epoch,settings.EPOCH,train_loss,test_loss,train_acc,acc,val_acc,best_acc))
        # f.write("\t\t"+str(epoch)+"\t\t"+str(train_loss)+"\t\t"+str(test_loss)+"\t\t"+str(train_acc)+"\t\t"+str(acc)+"\t\t"+str(val_acc)+"\t\t"+str(best_acc))
        f.write("\n")
        f.close()
    writer.close()
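The WarmUpLR behavior is easy to sanity-check in isolation. The snippet below is a standalone copy of the class from trainer.py, driven by a dummy parameter; it shows the learning rate climbing linearly from near zero up to the base LR over the warmup iterations:

```python
import torch
from torch.optim.lr_scheduler import _LRScheduler

class WarmUpLR(_LRScheduler):
    """Linear warmup: lr = base_lr * step / total_iters (same as trainer.py)."""
    def __init__(self, optimizer, total_iters, last_epoch=-1):
        self.total_iters = total_iters
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        return [base_lr * self.last_epoch / (self.total_iters + 1e-8)
                for base_lr in self.base_lrs]

# Dummy single-parameter "model" so the demo is self-contained.
opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.01)
sched = WarmUpLR(opt, total_iters=5)

lrs = []
for _ in range(5):          # one scheduler step per training batch
    sched.step()
    lrs.append(opt.param_groups[0]['lr'])
print([round(lr, 4) for lr in lrs])   # [0.002, 0.004, 0.006, 0.008, 0.01]
```

After `total_iters` steps the warmup hands over to the MultiStepLR schedule, which is why trainer.py only calls `train_scheduler.step` once `epoch > args.warm`.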

The dataset.py script:

import os
from torch.utils import data
from PIL import Image
import torchvision

class Datasets(data.Dataset):
    def __init__(self,path,train = False):
        self.img_path = []
        self.label_data = []
        self.train = train
        self.transforms_train = torchvision.transforms.Compose([
            torchvision.transforms.RandomCrop((128,64),padding=4),
            torchvision.transforms.RandomHorizontalFlip(),
            torchvision.transforms.ToTensor(),
            torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        self.transforms_test = torchvision.transforms.Compose([
            torchvision.transforms.Resize((128,64)),
            torchvision.transforms.ToTensor(),
            torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        count = 0
        for dir in sorted(os.listdir(path)):          # sorted so ID-to-label mapping is stable across runs/platforms
            for img_path in sorted(os.listdir(os.path.join(path,dir))):
                self.img_path.append(os.path.join(path,dir,img_path))
                self.label_data.append(count)
            count +=1

    def __len__(self):
        return len(self.img_path)

    def __getitem__(self, index):
        img_path = self.img_path[index]
        img_data = Image.open(img_path)
        label = self.label_data[index]
        if self.train:
            img_data = self.transforms_train(img_data)
        else:
            img_data = self.transforms_test(img_data)

        return img_data,label


if __name__=="__main__":
    train_path = r"F:\BaiduNetdiskDownload\Market-1501-v15.09.15\pytorch\val"
    train_Data = Datasets(train_path,True)
    train = data.DataLoader(train_Data,batch_size=64,shuffle=False)
    for i,(x,y) in enumerate(train):
        print(x.size())
        print(y)
        break

Logging the training run, the accuracy is indeed somewhat higher:

		epoch			train_loss		test_loss		train_acc		test_acc		val_acc		best_acc
		1/50			6.5989			6.8324			0.0032			0.0027			0.0000		0.0027
		2/50			6.4183			6.7420			0.0080			0.0013			0.0000		0.0027
		3/50			6.1647			6.5104			0.0108			0.0053			0.0000		0.0053
		4/50			5.7020			6.0016			0.0256			0.0200			0.0000		0.0200
		5/50			5.0634			5.3360			0.0568			0.0320			0.0000		0.0320
		6/50			4.4494			4.7930			0.0994			0.0692			0.0000		0.0692
		7/50			3.9030			4.4081			0.1640			0.0866			0.0000		0.0866
		8/50			3.4578			4.1317			0.2171			0.1185			0.0000		0.1185
		9/50			3.0159			3.7158			0.2873			0.1798			0.0000		0.1798
		10/50			2.7154			3.4622			0.3338			0.1824			0.0000		0.1824
		11/50			2.4266			2.8242			0.3929			0.2796			0.0000		0.2796
		12/50			2.1565			2.6152			0.4443			0.3289			0.0000		0.3289
		13/50			1.9233			2.3308			0.5050			0.3995			0.0000		0.3995
		14/50			1.7566			2.1909			0.5394			0.4248			0.0000		0.4248
		15/50			1.5880			2.0974			0.5809			0.4541			0.0000		0.4541
		16/50			1.4413			1.7000			0.6156			0.5313			0.0000		0.5313
		17/50			1.3156			1.5236			0.6415			0.5925			0.0000		0.5925
		18/50			1.2189			1.5391			0.6737			0.5859			0.0000		0.5925
		19/50			1.1305			1.3557			0.6940			0.6485			0.0000		0.6485
		20/50			0.7889			0.8562			0.8083			0.7816			0.0000		0.7816
		21/50			0.7060			0.8064			0.8360			0.8029			0.0000		0.8029
		22/50			0.6716			0.7688			0.8500			0.8162			0.0000		0.8162
		23/50			0.6512			0.7240			0.8576			0.8256			0.0000		0.8256
		24/50			0.6243			0.7038			0.8622			0.8282			0.0000		0.8282
		25/50			0.6070			0.6673			0.8656			0.8415			0.0000		0.8415
		26/50			0.5979			0.6560			0.8711			0.8469			0.0000		0.8469
		27/50			0.5804			0.6575			0.8790			0.8522			0.0000		0.8522
		28/50			0.5721			0.6256			0.8754			0.8642			0.0000		0.8642
		29/50			0.5496			0.6052			0.8823			0.8655			0.0000		0.8655
		30/50			0.5502			0.5987			0.8860			0.8802			0.0000		0.8802
		31/50			0.5456			0.5723			0.8842			0.8708			0.0000		0.8802
		32/50			0.5328			0.5742			0.8868			0.8695			0.0000		0.8802
		33/50			0.5232			0.5409			0.8919			0.8788			0.0000		0.8802
		34/50			0.5152			0.5407			0.8951			0.8815			0.0000		0.8815
		35/50			0.4980			0.5268			0.8946			0.8908			0.0000		0.8908
		36/50			0.4958			0.5326			0.9000			0.8842			0.0000		0.8908
		37/50			0.4884			0.5088			0.9027			0.8961			0.0000		0.8961
		38/50			0.4826			0.5063			0.9015			0.8988			0.0000		0.8988
		39/50			0.4746			0.4798			0.9028			0.9121			0.0000		0.9121
		40/50			0.4463			0.4630			0.9155			0.9081			0.0000		0.9121
		41/50			0.4455			0.4543			0.9155			0.9081			0.0000		0.9121
		42/50			0.4374			0.4609			0.9163			0.9121			0.0000		0.9121
		43/50			0.4347			0.4614			0.9188			0.9121			0.0000		0.9121
		44/50			0.4326			0.4588			0.9163			0.9095			0.0000		0.9121
		45/50			0.4394			0.4515			0.9176			0.9121			0.0000		0.9121
		46/50			0.4353			0.4469			0.9189			0.9095			0.0000		0.9121
		47/50			0.4291			0.4476			0.9202			0.9108			0.0000		0.9121
		48/50			0.4324			0.4430			0.9198			0.9134			0.0000		0.9134
		49/50			0.4286			0.4440			0.9207			0.9134			0.0000		0.9134

4. Results

For detection I used yolo-fastest weights, themselves only 1.3 MB, so the two models together come to under 4 MB, and the tracking results look very good. The demo below tracks dancers in a girl-group performance:
(figure: tracking demo screenshots)
