The Overall Workflow of Building a Video Action Classification Model

Overview

Recently my company needed a video action classification project on its own dataset. I had never built a deep learning model from scratch before; this is my first attempt, and this article records the whole process.

Overall Workflow

1. Dataset Processing

The dataset was filmed by our company. After filming, the videos need to be trimmed and annotated with the start time, end time, and label of each action. We use a visualization tool written by a team member; it generates a corresponding json label file for every video. If you need it, you can get it from this link: https://pan.baidu.com/s/1JU8AC4fw53CkGFJfUfB6mg?pwd=vjlu (extraction code: vjlu)

(Figure: the annotation tool interface)
Since the video_path inside every json file needs to be changed to a uniform location, an example json file and the corresponding code are shown below.
Example json file:

{
    "video_path": "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_video/一道_123.mp4",
    "split_result": [
        {
            "beginTime": 901,
            "endTime": 2719,
            "label": "一道"
        },
        {
            "beginTime": 4376,
            "endTime": 6565,
            "label": "一道"
        },
        {
            "beginTime": 7502,
            "endTime": 9205,
            "label": "一道"
        },
        {
            "beginTime": 9782,
            "endTime": 12042,
            "label": "一道"
        }
    ]
}

To change the video_path in all json files to the directory where you actually store the videos, use the following code:

import json
import os

# Old path prefix to be replaced
old_prefix = 'E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_video/'
# New path prefix
new_prefix = 'E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_video/'

# Directory containing the JSON files
json_dir = '/Datasets/Night/Night_json/'

# Recursively replace the video_path prefix anywhere in the JSON structure
def replace_video_path_prefix(obj):
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == 'video_path' and isinstance(value, str) and value.startswith(old_prefix):
                obj[key] = new_prefix + value[len(old_prefix):]
            replace_video_path_prefix(value)
    elif isinstance(obj, list):
        for item in obj:
            replace_video_path_prefix(item)

# Iterate over all files in the directory
for filename in os.listdir(json_dir):
    if filename.endswith('.json'):  # only process JSON files
        file_path = os.path.join(json_dir, filename)
        print(file_path)
        # Read the JSON file
        with open(file_path, 'r', encoding='utf-8') as file:
            data = json.load(file)

        # Replace the path prefix
        replace_video_path_prefix(data)

        # Write the modified data back to the file
        with open(file_path, 'w', encoding='utf-8') as file:
            json.dump(data, file, ensure_ascii=False, indent=4)

print("The path prefix in all JSON files has been updated.")

Because this project is a keypoint-based video action classification model, we need the keypoint positions and confidences of every frame. The project that runs keypoint detection on the videos and converts them into per-frame keypoint files is quite large, so I did not set it up myself; the files were produced on a teammate's machine, and the result is one txt file per video. In a txt file, every run of 26 consecutive non-empty lines holds the 26 keypoints of one frame, an empty line separates frames, and a line reading no_people means that frame's 26 keypoints are empty (our dataset was filmed at night, and when a light shines into the camera the keypoints of that frame cannot be detected). Part of a txt file looks like this:
(Figure: excerpt of a keypoint txt file)
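To make the layout concrete, here is a tiny hypothetical excerpt; the numbers are invented purely for illustration (the marker line is spelled no_people in the script below and no_person in the Dataset code of section 2, so check which spelling your files actually use):

652.3 118.7 0.91      <- keypoint 1 of a frame: x, y, confidence
648.1 150.2 0.88      <- keypoint 2 of the same frame
...                   <- 26 keypoint lines in total for this frame
                      <- empty line marking the end of the frame
no_people             <- a frame in which no person was detected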
Because the start and end times in the json files may not line up exactly with the frames in the txt files, I wrote a script to check this. The check assumes 30 frames per second, so a timestamp in milliseconds maps to frame index round(ms / 1000 * 30); for example, a beginTime of 901 ms corresponds to frame 27. The script processes files in batches. My dataset's directory structure and the code are shown below:
Dataset directory structure:
(Figure: dataset directory structure)

Code:

import json
import os

import cv2

# Read video_path from a json file
def read_json_file(json_path):
    video_path = None
    if json_path.endswith('.json'):
        try:
            with open(json_path, 'r', encoding='utf-8') as file:
                data = json.load(file)
            video_path = data['video_path']
        except FileNotFoundError:
            print('json_path does not exist')
    else:
        print("json_path should be a json file")
    return video_path

# Read the list of (beginTime, endTime, label) segments from a json file
def read_json_file_start_end_time(json_path):
    start_end_time_label_list = None
    if json_path.endswith('.json'):
        try:
            with open(json_path, 'r', encoding='utf-8') as file:
                data = json.load(file)
            start_end_time_label_list = data['split_result']
        except FileNotFoundError:
            print('json_path does not exist')
    else:
        print("json_path should be a json file")
    return start_end_time_label_list

# Given Joint_file_path, return a dict in which each key-value pair holds one frame's keypoint coordinates
def read_Joint_file(Joint_file_path):
    # flat list of keypoints, 26 per frame
    joints = []
    with open(Joint_file_path, 'r', encoding='utf-8') as file:
        for line in file:
            parts = line.strip().split()
            if len(parts) == 3:
                x = float(parts[0])
                y = float(parts[1])
                confidence = float(parts[2])
                joints.append((x, y, confidence))
            elif len(parts) == 1 and parts[0] == "no_people":
                # frame with no detected person: 26 empty tuples as placeholders
                joints.extend([()] * 26)
            # empty lines only separate frames, nothing to store

    joints_length = len(joints)           # e.g. 11700
    frame_length = joints_length // 26    # e.g. 450
    # group the flat list into one entry of 26 keypoints per frame
    Joints_dict = {}
    for i in range(frame_length):
        Joints_dict[i] = joints[i * 26:(i + 1) * 26]
    return Joints_dict

# Read a video frame by frame
def read_video(video_path):
    cap = cv2.VideoCapture(video_path)
    # cap.isOpened() is True if the video file / camera was opened successfully
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # ret is True if the frame was read correctly;
            # frame can be processed here, e.g. displayed or saved
            cv2.imshow('frame', frame)
            # press 'q' to quit
            if cv2.waitKey(46) & 0xFF == ord('q'):
                break
        else:
            # ret is False when no more frames can be read
            break
    # release the VideoCapture object
    cap.release()
    cv2.destroyAllWindows()

def draw_frame_Joints(json_file_path, video_path, Joints_dict, output_joints_video_path):
    cap = cv2.VideoCapture(video_path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    # define the codec and create a VideoWriter object for the output video
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    # build the output file name from the input video's file name
    tail = os.path.splitext(os.path.basename(video_path))[0]
    output_path = os.path.join(output_joints_video_path, tail + ".mp4")
    out = cv2.VideoWriter(output_path, fourcc, 30, (width, height))

    # convert beginTime/endTime from milliseconds to frame indices (assuming 30 fps)
    segments = []
    for data in read_json_file_start_end_time(json_file_path):
        if data['beginTime'] is None or data['endTime'] is None:
            print(output_path + " ############################ segment with missing time, skipped!")
            continue
        data['beginTime'] = (data['beginTime'] / 1000) * 30
        data['endTime'] = (data['endTime'] / 1000) * 30
        segments.append(data)

    # iterate over the video frames
    frame_index = 0
    Joints_length = len(Joints_dict)
    print("Number of frames in this video:", Joints_length)
    while frame_index < Joints_length:
        ret, frame = cap.read()
        if not ret:
            break
        # check whether the current frame has keypoints in Joints_dict
        if frame_index in Joints_dict:
            # go through the action segments (start/end frame and label)
            for data in segments:
                beginTime = round(data['beginTime'])
                endTime = round(data['endTime'])
                label = data['label']
                begin = beginTime - 1
                end = endTime + 1
                if begin <= frame_index < end:
                    joints = Joints_dict[frame_index]
                    for joint in joints:
                        if len(joint) != 3:
                            # empty placeholder from a "no_people" frame
                            continue
                        x, y, confidence = joint
                        # draw each keypoint as a small filled circle
                        cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)

        # write the frame to the output video
        out.write(frame)
        frame_index += 1
    print(output_path + " done!")
    # release resources
    cap.release()
    out.release()
    cv2.destroyAllWindows()

def all_draw_Joints_video(all_Joints_file_path, all_json_file_path, all_video_path, output_joints_video_path):
    all_joints = []
    all_json = []
    all_video = []
    for root, dirs, files in os.walk(all_Joints_file_path):
        for file in files:
            if file.endswith(".txt"):
                all_joints.append(os.path.join(root, file))
    for root, dirs, files in os.walk(all_json_file_path):
        for file in files:
            if file.endswith(".json"):
                all_json.append(os.path.join(root, file))
    for root, dirs, files in os.walk(all_video_path):
        for file in files:
            if file.endswith(".mp4"):
                all_video.append(os.path.join(root, file))
    # sort so the three lists line up by file name
    all_joints.sort()
    all_json.sort()
    all_video.sort()
    length = len(all_json)

    # count the videos already written to the output directory,
    # so an interrupted batch run can resume where it left off
    temp = 0
    for root, dirs, files in os.walk(output_joints_video_path):
        for file in files:
            if file.endswith(".mp4"):
                temp = temp + 1

    i = temp
    while i < length:
        all_joints_dict = read_Joint_file(all_joints[i])
        try:
            draw_frame_Joints(all_json[i], all_video[i], all_joints_dict, output_joints_video_path)
        except FileNotFoundError:
            print("file does not exist:", all_json[i])
        i = i + 1

    return all_joints, all_json, all_video
if __name__ == '__main__':
    # json_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_json/一道_姚雅倩1.json"
    # Joint_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Joint_file/一道_姚雅倩1.txt"
    # video_path = read_json_file(json_path)
    # # read_video(video_path)
    # Joints_dict = read_Joint_file(Joint_file_path)
    # output_joints_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/output_joints_video_path"
    # # draw_frame_Joints(json_path, video_path,Joints_dict,output_joints_video_path)

    # all_test
    all_Joints_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Joint_file"
    all_json_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_json"
    all_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_video"
    output_joints_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/result_mp4"
    all_draw_Joints_video(all_Joints_file_path,all_json_file_path, all_video_path, output_joints_video_path)

2. Dataset and DataLoader

This part will likely need further changes later.

import os
import json
import pickle
import cv2
import numpy as np
from torch.utils.data.dataset import Dataset
# from Read_Joint_file import read_Joint_file

Json_paths = r"E:\recovery_source_code\Movement_Classification\Datasets\Night\json_output"
Joint_paths = r"E:\recovery_source_code\Movement_Classification\Datasets\Night\Joint_file"

class MyDataset(Dataset):
    def __init__(self, Json_paths, Joint_paths, train=True):
        self.Json_paths = Json_paths
        self.Joint_paths = Joint_paths
        self.crop_len = 60
        self.hop_rate = 0.5
        self.fps = 30
        # a pickle file can serialize an arbitrary Python object to disk and load it back later as the original object
        self.cache_file = "cache.pkl"
        self.label_str2int = {
            "一道": 0, "七道": 1, "三道": 2, "二道": 3, "五道": 4, "停车信号": 5, "八道": 6, "六道": 7, "减速信号": 8,
            "十一道": 9, "十三道": 10,
            "十二道": 11, "十五道": 12, "十四道": 13, "四道": 14, "指挥机车向显示人反方向去的信号": 15,
            "指挥机车向显示人反方向稍行移动的信号": 16, "指挥机车向显示人方向来的信号": 17,
            "指挥机车向显示人方向稍行移动的信号":18, "道岔开通信号": 19
        }
        if not os.path.exists(self.cache_file):
            self.load_json()
            with open(self.cache_file, 'wb') as f:
                pickle.dump(self.data, f)
            print(f"cache file saved at {self.cache_file}")
        else:
            with open(self.cache_file, 'rb') as f:
                print(f"cache file loaded from {self.cache_file}")
                self.data = pickle.load(f)
        if train:
            pass
        else:
            pass
        self.pointer = [0, 0]

    # load every json file, pair it with its txt keypoint file, and fill self.data with data, label and file entries
    def load_json(self):
        self.json_data = []
        self.data = {}
        self.data['data'] = []
        self.data['label'] = []
        self.data['file'] = []
        for json_name in os.listdir(self.Json_paths):
            if not json_name.endswith(".json"):
                continue
            with open(os.path.join(self.Json_paths, json_name), 'r', encoding='utf-8') as f:
                json_data = json.load(f)

            kpts_file_name = json_name.split('.')[0] + '.txt'
            kpts_file_path = os.path.join(self.Joint_paths, kpts_file_name)
            kpts_data = self.read_Joint_file(kpts_file_path)
            label_data = np.zeros(len(kpts_data))
            for s_e_pair in json_data['split_result']:
                if s_e_pair['label'] not in self.label_str2int.keys():
                    continue
                s = s_e_pair['beginTime']
                e = s_e_pair['endTime']
                label = s_e_pair['label']
                label_num = self.label_str2int[label] + 1
                s_ = round((s / 1000) * self.fps)
                e_ = round((e / 1000) * self.fps)
                label_data[s_:e_] = label_num
            self.data['data'].append(kpts_data)
            self.data['label'].append(label_data)
            self.data['file'].append(kpts_file_name)

    # read a keypoint txt file
    def read_Joint_file(self, Joint_file_path):
        joints = []
        kpts_data = []

        with open(Joint_file_path, 'r', encoding='utf-8') as file:
            # read the txt file into a list of lines
            lines = file.readlines()
            for line in lines:
                parts = line.strip().split()
                if len(parts) == 0:
                    assert len(joints) == 26
                    kpts_data.append(joints)
                    joints = []
                if len(parts) == 3:
                    joint_tuple = list(map(float, parts))
                    joints.append(joint_tuple)
                if "no_person" in line:
                    joints = np.ones((26, 3)) * -1
                    joints = joints.tolist()
        return kpts_data

    def __len__(self):
        # total number of crop_len-frame windows (with hop_rate overlap) across all videos;
        # this still feels error-prone and may need revisiting
        total_frame_num = 0
        for i in range(len(self.data['data'])):
            total_frame_num += (len(self.data['data'][i]) - self.crop_len) // (self.crop_len * self.hop_rate)
        return int(total_frame_num)

    def __getitem__(self, idx):
        # stateful sliding-window sampling: pointer = [video index, frame offset];
        # idx is ignored, so this only works with a sequential (non-shuffled) DataLoader
        if self.pointer[1] + self.crop_len >= len(self.data['data'][self.pointer[0]]):
            # current video is exhausted: move on to the next video
            self.pointer[1] = 0
            self.pointer[0] += 1
            x = self.data['data'][self.pointer[0]][self.pointer[1]:self.pointer[1] + self.crop_len]
            y = self.data['label'][self.pointer[0]][self.pointer[1]:self.pointer[1] + self.crop_len]
        else:
            x = self.data['data'][self.pointer[0]][self.pointer[1]:self.pointer[1] + self.crop_len]
            y = self.data['label'][self.pointer[0]][self.pointer[1]:self.pointer[1] + self.crop_len]
            self.pointer[1] += int(self.crop_len * self.hop_rate)
        # , self.data['file'][self.pointer[0]]
        return np.array(x), np.array(y)


if __name__ == '__main__':
    datasets = MyDataset(Json_paths, Joint_paths)
    datasets.load_json()
    i=0
    for x, y in datasets:
        i=i+1
        print(x)
        print(y)
    print(i)
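The heading of this section promises a DataLoader, but the post only shows the Dataset so far. Below is a minimal sketch, assuming the MyDataset class and the Json_paths / Joint_paths variables defined above; batch_size=1 and shuffle=False are deliberate assumptions, because the pointer-based __getitem__ ignores idx and walks the videos sequentially:

import torch
from torch.utils.data import DataLoader

# Minimal sketch of the DataLoader used by the training loop further below.
# batch_size=1 and shuffle=False match the stateful, sequential __getitem__
# of MyDataset; adjust once the dataset is reworked.
dataset = MyDataset(Json_paths, Joint_paths, train=True)
dataloader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=0)

for data, labels in dataloader:
    # data:   roughly (1, 60, 26, 3) - 60 frames, 26 keypoints, (x, y, confidence)
    # labels: roughly (1, 60)        - one class index per frame
    print(data.shape, labels.shape)
    break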

3. Code Snippets Needed for Training

3.1 Model

import torch
import torch.nn as nn

# choose CPU or GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# move the model onto the chosen device
model = PoseClassificationModel().to(device)
'''
Loss function: measures the gap between the model's predictions and the ground truth.
By minimizing this gap we adjust the model's parameters so that it predicts more accurately.
It is used to evaluate model performance and to guide optimization.
'''
criterion = nn.CrossEntropyLoss()
'''
Optimizer: torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
model.parameters(): the parameters to optimize
lr: learning rate, controls the step size of the weight updates
betas: optional, coefficients for the running averages of the gradient and its square;
       beta1 is the momentum coefficient, beta2 the squared-gradient coefficient, betas=(0.9, 0.999)
weight_decay: weight decay coefficient, default 0
amsgrad: optional bool, whether to use the AMSGrad variant, which improves Adam's convergence in some settings, default False.
How it works: 1. compute the gradients via backpropagation
              2. update the bias-corrected first moment estimate
              3. update the bias-corrected second moment estimate
              4. compute the parameter update
'''
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
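The post never shows the definition of PoseClassificationModel. Purely as a placeholder, here is a minimal hypothetical sketch that matches the shapes used elsewhere in this article: keypoint input of shape (batch, 60, 26, 3) from the dataset, and per-frame logits of shape (batch, 60, 20) so that outputs.view(-1, 20) in the training loop works. The layer choices (a two-layer LSTM, hidden size 128) are assumptions, not the project's actual model.

import torch
import torch.nn as nn

class PoseClassificationModel(nn.Module):
    """Hypothetical stand-in: an LSTM over per-frame keypoints with a per-frame classifier."""

    def __init__(self, num_joints=26, num_classes=20, hidden_size=128):
        super().__init__()
        # each frame is flattened from (26, 3) to a 78-dimensional vector
        self.lstm = nn.LSTM(input_size=num_joints * 3, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, frames, 26, 3) -> (batch, frames, 78)
        b, t = x.shape[0], x.shape[1]
        x = x.reshape(b, t, -1)
        out, _ = self.lstm(x)          # (batch, frames, hidden_size)
        logits = self.classifier(out)  # (batch, frames, num_classes)
        return logits

Note that load_json maps the 20 action labels to 1..20 and leaves 0 for background, which would actually require 21 output classes, while the training loop reshapes the logits to 20 classes; these two need to be reconciled in the real model.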

3.2 Training Loop

epochs = 999
for epoch in range(epochs):
    model.train(True)
    for batch_idx, (data, labels) in enumerate(dataloader):
        data = data.to(dtype=torch.float, device=device)
        # remove the batch dimension of size 1 (dimension 0)
        labels = labels.squeeze(0)
        labels = labels.to(dtype=torch.float, device=device)
        # forward pass
        outputs = model(data)
        # outputs.view(-1, 20) reshapes the tensor without changing its data:
        # outputs is originally (1, 60, 20) and becomes (1*60, 20) = (60, 20)
        loss = criterion(outputs.view(-1, 20), labels.long())
        # backward pass and optimization; gradients accumulate, so they must be
        # zeroed before each backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
		


Summary

This is just the beginning; I will keep updating this article.
