Comparing the Start and End Times of Drawn Joint Points


Preface


Overall approach: first, read the per-frame joint coordinates and confidence values from the Joint_file, where a blank line acts as a separator between joint entries and a no_people line means that frame has no joint coordinates. Next, read the video path video_path and a list from the JSON file (each element of the list is a dictionary holding one action's start time, end time, and label). Then read the video frame by frame, look up each action's start/end time and label from the list, draw the joint points only within that action's time range, and finally save the video with the joints drawn on it.
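For reference, the Joint_file layout assumed by the parser in section 3 looks roughly like the hypothetical excerpt below: each detected frame contributes 26 lines of "x y confidence" (one per joint), blank lines act as separators (the parser simply skips them), and a frame with no detection is written as a single no_people line. The numbers here are made up for illustration:

634.5 210.3 0.91
640.2 255.7 0.88
628.9 301.4 0.86
(... 23 more joint lines for this frame ...)

no_people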


1. Reading video_path from the JSON file

JSON file:

{
    "video_path": "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_video/一道_吴山山1.mp4",
    "split_result": [
        {
            "beginTime": 620,
            "endTime": 4306,
            "label": "一道"
        },
        {
            "beginTime": 4494,
            "endTime": 8858,
            "label": "一道"
        },
        {
            "beginTime": 9003,
            "endTime": 13604,
            "label": "一道"
        }
    ]
}
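
beginTime and endTime are in milliseconds (the code later divides them by 1000). The drawing step in section 4 converts them to frame indices; a minimal sketch of that conversion, assuming a 30 fps video as in the original code:

fps = 30                                 # assumed frame rate of the source video
begin_frame = round(620 / 1000 * fps)    # 620 ms  -> frame 19
end_frame = round(4306 / 1000 * fps)     # 4306 ms -> frame 129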

Read video_path from the JSON file and return it:

def read_json_file(json_path):
    if not json_path.endswith('.json'):
        print("json_path should be a json file")
        return None
    try:
        with open(json_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
    except FileNotFoundError:
        print('json_path does not exist')
        return None
    return data['video_path']

2. Reading start_end_time_label_list from the JSON file

Code:

def read_json_file_start_end_time(json_path):
    if not json_path.endswith('.json'):
        print("json_path should be a json file")
        return None
    try:
        with open(json_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
    except FileNotFoundError:
        print('json_path does not exist')
        return None
    return data['split_result']
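
A quick usage sketch of the two readers (the annotation path below is a hypothetical example):

json_path = "path/to/annotation.json"                 # hypothetical example path
video_path = read_json_file(json_path)                # e.g. the .mp4 path stored in the JSON
segments = read_json_file_start_end_time(json_path)
print(video_path, len(segments))                      # video path plus the number of labelled actions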

3. Reading the joint coordinates from the txt file

The input is the path to a txt file; the return value is a dictionary whose keys are the frame indices 0, 1, 2, … and whose values are lists of 26 (x, y, confidence) tuples, one tuple per joint, e.g. {0: [(x0, y0, c0), …], 1: […], …}.
Code:

def read_Joint_file(Joint_file_path):
    # Collect all joint tuples in file order
    joints = []
    with open(Joint_file_path, 'r', encoding='utf-8') as file:
        for line in file:
            parts = line.strip().split()
            if len(parts) == 0:
                # Blank lines are separators; skip them
                continue
            if len(parts) == 3:
                x = float(parts[0])
                y = float(parts[1])
                confidence = float(parts[2])
                joints.append((x, y, confidence))
            elif len(parts) == 1 and parts[0] == "no_people":
                # No detection for this frame: pad with 26 empty tuples
                for i in range(26):
                    joints.append(())
    Joints_length = len(joints)          # e.g. 11700
    frame_length = Joints_length // 26   # e.g. 450
    # Group every 26 consecutive joints into one frame
    Joints_dict = {}
    for i in range(frame_length):
        Joints_dict[i] = joints[i * 26:(i + 1) * 26]
    return Joints_dict
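
An optional sanity check (not part of the original code) can catch a truncated joint file before drawing, since every frame is expected to contribute exactly 26 joints:

Joints_dict = read_Joint_file("path/to/joints.txt")    # hypothetical example path
assert all(len(v) == 26 for v in Joints_dict.values()), "each frame should have 26 joints"
print("frames:", len(Joints_dict))                     # e.g. 450 for a 15 s clip at 30 fps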

4. Drawing the action joint points on the video

Code:

def draw_frame_Joints(json_file_path, video_path, Joints_dict, output_joints_video_path):
    cap = cv2.VideoCapture(video_path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if the property is unavailable
    # Define the codec and create the VideoWriter for the output video
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter(output_joints_video_path + "/output_test.avi", fourcc, fps, (width, height))
    action_list = read_json_file_start_end_time(json_file_path)
    print(len(action_list))
    # Convert each action's begin/end time from milliseconds to frame indices
    for data in action_list:
        data['beginTime'] = (data['beginTime'] / 1000) * fps
        data['endTime'] = (data['endTime'] / 1000) * fps
    # Iterate over the video frames
    frame_index = 0
    Joints_length = len(Joints_dict)
    while frame_index < Joints_length:
        ret, frame = cap.read()
        if not ret:
            break
        # Check whether this frame has joint data in Joints_dict
        if frame_index in Joints_dict:
            # Look up each action's start/end frame and label
            for data in action_list:
                begin = round(data['beginTime']) - 1
                end = round(data['endTime']) + 1
                label = data['label']  # read but not drawn; a putText overlay is sketched below
                if begin <= frame_index < end:
                    joints = Joints_dict[frame_index]
                    for joint in joints:
                        if not joint:  # empty tuple for a no_people frame
                            continue
                        x, y, confidence = joint
                        # Draw each joint as a small filled circle
                        cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)

        # Write the frame to the output video
        out.write(frame)
        frame_index += 1

    # Release resources
    cap.release()
    out.release()
    cv2.destroyAllWindows()
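
The label is read but never rendered. If the action name should also be visible during its segment, a text overlay could be added right after the circles are drawn; this is a sketch of an extension, not part of the original code, and note that cv2.putText only handles ASCII text, so Chinese labels such as "一道" would need Pillow or a transliterated fallback:

# inside the `if begin <= frame_index < end:` block, after the cv2.circle calls
cv2.putText(frame, str(label), (30, 40), cv2.FONT_HERSHEY_SIMPLEX,
            1.0, (0, 255, 0), 2, cv2.LINE_AA)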

5. Learning to read and display a video frame by frame

def read_video(video_path):
    cap = cv2.VideoCapture(video_path)
    # cap.isOpened() is True if the video file (or camera) was opened successfully
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # ret is True when a frame was read successfully;
            # the frame can be processed here, e.g. displayed or saved
            cv2.imshow('frame', frame)
            # Press 'q' to exit the loop
            if cv2.waitKey(46) & 0xFF == ord('q'):
                break
        else:
            # ret is False when no more frames can be read
            break
    # Release the VideoCapture object
    cap.release()
    cv2.destroyAllWindows()
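
The fixed cv2.waitKey(46) delay does not necessarily match the video's frame rate. A variant that derives the delay from the reported fps (a sketch; the function name is mine, not from the original code):

def read_video_fps_aware(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30    # fall back to 30 if the property is unavailable
    delay = max(1, int(1000 / fps))          # milliseconds to wait between frames
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('frame', frame)
        if cv2.waitKey(delay) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()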

6. Full test code

import json
import cv2

# Read video_path from the JSON file
def read_json_file(json_path):
    if not json_path.endswith('.json'):
        print("json_path should be a json file")
        return None
    try:
        with open(json_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
    except FileNotFoundError:
        print('json_path does not exist')
        return None
    return data['video_path']

def read_json_file_start_end_time(json_path):
    if not json_path.endswith('.json'):
        print("json_path should be a json file")
        return None
    try:
        with open(json_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
    except FileNotFoundError:
        print('json_path does not exist')
        return None
    return data['split_result']

# Input: Joint_file_path. Output: a dict whose key/value pairs hold each frame's joint coordinates
def read_Joint_file(Joint_file_path):
    # Collect all joint tuples in file order
    joints = []
    with open(Joint_file_path, 'r', encoding='utf-8') as file:
        for line in file:
            parts = line.strip().split()
            if len(parts) == 0:
                # Blank lines are separators; skip them
                continue
            if len(parts) == 3:
                x = float(parts[0])
                y = float(parts[1])
                confidence = float(parts[2])
                joints.append((x, y, confidence))
            elif len(parts) == 1 and parts[0] == "no_people":
                # No detection for this frame: pad with 26 empty tuples
                for i in range(26):
                    joints.append(())
    Joints_length = len(joints)          # e.g. 11700
    frame_length = Joints_length // 26   # e.g. 450
    # Group every 26 consecutive joints into one frame
    Joints_dict = {}
    for i in range(frame_length):
        Joints_dict[i] = joints[i * 26:(i + 1) * 26]
    return Joints_dict

# Read a video frame by frame
def read_video(video_path):
    cap = cv2.VideoCapture(video_path)
    # cap.isOpened() is True if the video file (or camera) was opened successfully
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # ret is True when a frame was read successfully;
            # the frame can be processed here, e.g. displayed or saved
            cv2.imshow('frame', frame)
            # Press 'q' to exit the loop
            if cv2.waitKey(46) & 0xFF == ord('q'):
                break
        else:
            # ret is False when no more frames can be read
            break
    # Release the VideoCapture object
    cap.release()
    cv2.destroyAllWindows()

def draw_frame_Joints(json_file_path, video_path, Joints_dict, output_joints_video_path):
    cap = cv2.VideoCapture(video_path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if the property is unavailable
    # Define the codec and create the VideoWriter for the output video
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter(output_joints_video_path + "/output_test.avi", fourcc, fps, (width, height))
    action_list = read_json_file_start_end_time(json_file_path)
    print(len(action_list))
    # Convert each action's begin/end time from milliseconds to frame indices
    for data in action_list:
        data['beginTime'] = (data['beginTime'] / 1000) * fps
        data['endTime'] = (data['endTime'] / 1000) * fps
    # Iterate over the video frames
    frame_index = 0
    Joints_length = len(Joints_dict)
    while frame_index < Joints_length:
        ret, frame = cap.read()
        if not ret:
            break
        # Check whether this frame has joint data in Joints_dict
        if frame_index in Joints_dict:
            # Look up each action's start/end frame and label
            for data in action_list:
                begin = round(data['beginTime']) - 1
                end = round(data['endTime']) + 1
                label = data['label']  # read but not drawn here
                if begin <= frame_index < end:
                    joints = Joints_dict[frame_index]
                    for joint in joints:
                        if not joint:  # empty tuple for a no_people frame
                            continue
                        x, y, confidence = joint
                        # Draw each joint as a small filled circle
                        cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)

        # Write the frame to the output video
        out.write(frame)
        frame_index += 1

    # Release resources
    cap.release()
    out.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    json_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_json/一道_吴山山1.json"
    Joint_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Joint_file/一道_吴山山1.txt"
    video_path = read_json_file(json_path)
    # read_video(video_path)
    Joints_dict = read_Joint_file(Joint_file_path)
    output_joints_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/output_joints_video_path"
    draw_frame_Joints(json_path, video_path,Joints_dict,output_joints_video_path)

7. Improvement

The code was changed to batch-process all of the json, txt, and video files under their respective folders. If some result videos have already been generated in the output folder, that part is skipped and those videos are not regenerated. (An alternative, per-file skip check is sketched after the code.)

import json
import os

import cv2

# Read video_path from the JSON file
def read_json_file(json_path):
    if not json_path.endswith('.json'):
        print("json_path should be a json file")
        return None
    try:
        with open(json_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
    except FileNotFoundError:
        print('json_path does not exist')
        return None
    return data['video_path']

def read_json_file_start_end_time(json_path):
    if not json_path.endswith('.json'):
        print("json_path should be a json file")
        return None
    try:
        with open(json_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
    except FileNotFoundError:
        print('json_path does not exist')
        return None
    return data['split_result']

# Input: Joint_file_path. Output: a dict whose key/value pairs hold each frame's joint coordinates
def read_Joint_file(Joint_file_path):
    # Collect all joint tuples in file order
    joints = []
    with open(Joint_file_path, 'r', encoding='utf-8') as file:
        for line in file:
            parts = line.strip().split()
            if len(parts) == 0:
                # Blank lines are separators; skip them
                continue
            if len(parts) == 3:
                x = float(parts[0])
                y = float(parts[1])
                confidence = float(parts[2])
                joints.append((x, y, confidence))
            elif len(parts) == 1 and parts[0] == "no_people":
                # No detection for this frame: pad with 26 empty tuples
                for i in range(26):
                    joints.append(())
    Joints_length = len(joints)          # e.g. 11700
    frame_length = Joints_length // 26   # e.g. 450
    # Group every 26 consecutive joints into one frame
    Joints_dict = {}
    for i in range(frame_length):
        Joints_dict[i] = joints[i * 26:(i + 1) * 26]
    return Joints_dict

# Read a video frame by frame
def read_video(video_path):
    cap = cv2.VideoCapture(video_path)
    # cap.isOpened() is True if the video file (or camera) was opened successfully
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # ret is True when a frame was read successfully;
            # the frame can be processed here, e.g. displayed or saved
            cv2.imshow('frame', frame)
            # Press 'q' to exit the loop
            if cv2.waitKey(46) & 0xFF == ord('q'):
                break
        else:
            # ret is False when no more frames can be read
            break
    # Release the VideoCapture object
    cap.release()
    cv2.destroyAllWindows()

def draw_frame_Joints(json_file_path, video_path, Joints_dict, output_joints_video_path):
    cap = cv2.VideoCapture(video_path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if the property is unavailable
    # Define the codec and create the VideoWriter for the output video
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    # Derive the output file name from the input video name
    tail = os.path.splitext(os.path.basename(video_path))[0]
    output_path = os.path.join(output_joints_video_path, tail + ".avi")
    out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
    action_list = read_json_file_start_end_time(json_file_path)
    # Convert each action's begin/end time from milliseconds to frame indices
    for data in action_list:
        if data['beginTime'] is None or data['endTime'] is None:
            print(output_path + " ##### missing beginTime/endTime, segment skipped!")
            continue
        data['beginTime'] = (data['beginTime'] / 1000) * fps
        data['endTime'] = (data['endTime'] / 1000) * fps
    # Iterate over the video frames
    frame_index = 0
    Joints_length = len(Joints_dict)

    print("Number of frames for this video:", Joints_length)
    while frame_index < Joints_length:
        ret, frame = cap.read()
        if not ret:
            break
        # Check whether this frame has joint data in Joints_dict
        if frame_index in Joints_dict:
            # Look up each action's start/end frame and label
            for data in action_list:
                if data['beginTime'] is None or data['endTime'] is None:
                    continue
                begin = round(data['beginTime']) - 1
                end = round(data['endTime']) + 1
                label = data['label']  # read but not drawn here
                if begin <= frame_index < end:
                    joints = Joints_dict[frame_index]
                    for joint in joints:
                        if not joint:  # empty tuple for a no_people frame
                            continue
                        x, y, confidence = joint
                        # Draw each joint as a small filled circle
                        cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)

        # Write the frame to the output video
        out.write(frame)
        frame_index += 1
    print(output_path + " done!")
    # Release resources
    cap.release()
    out.release()
    cv2.destroyAllWindows()

def all_draw_Joints_video(all_Joints_file_path, all_json_file_path, all_video_path, output_joints_video_path):
    all_joints = []
    all_json = []
    all_video = []
    for root, dirs, files in os.walk(all_Joints_file_path):
        for file in files:
            if file.endswith(".txt"):
                all_joints.append(os.path.join(root, file))
    for root, dirs, files in os.walk(all_json_file_path):
        for file in files:
            if file.endswith(".json"):
                all_json.append(os.path.join(root, file))
    for root, dirs, files in os.walk(all_video_path):
        for file in files:
            if file.endswith(".mp4"):
                all_video.append(os.path.join(root, file))
    # Sort so the txt, json and video lists line up by file name
    all_joints.sort()
    all_json.sort()
    all_video.sort()
    length = len(all_json)
    # Count the .avi files already generated and resume from that index
    temp = 0
    for root, dirs, files in os.walk(output_joints_video_path):
        for file in files:
            if file.endswith(".avi"):
                temp = temp + 1

    i = temp
    while i < length:
        all_joints_dict = read_Joint_file(all_joints[i])
        try:
            draw_frame_Joints(all_json[i], all_video[i], all_joints_dict, output_joints_video_path)
        except FileNotFoundError:
            print(all_video[i] + " does not exist")
        i = i + 1

    return all_joints, all_json, all_video

if __name__ == '__main__':
    # json_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_json/一道_姚雅倩1.json"
    # Joint_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Joint_file/一道_姚雅倩1.txt"
    # video_path = read_json_file(json_path)
    # # read_video(video_path)
    # Joints_dict = read_Joint_file(Joint_file_path)
    # output_joints_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/output_joints_video_path"
    # # draw_frame_Joints(json_path, video_path,Joints_dict,output_joints_video_path)

    # all_test
    all_Joints_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Joint_file"
    all_json_file_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_json"
    all_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/Night_video"
    output_joints_video_path = "E:/recovery_source_code/Movement_Classification/Datasets/Night/output_joints_video_path"
    all_draw_Joints_video(all_Joints_file_path,all_json_file_path, all_video_path, output_joints_video_path)
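
The resume logic above counts the .avi files already present and restarts from that index, which only works if the sorted file lists never change between runs. A per-file existence check is a more robust alternative; a sketch (the helper name is mine, not from the original code):

def output_exists(video_path, output_joints_video_path):
    # A video is skipped if its annotated .avi output already exists
    tail = os.path.splitext(os.path.basename(video_path))[0]
    return os.path.exists(os.path.join(output_joints_video_path, tail + ".avi"))

# inside all_draw_Joints_video the loop could then become:
# for joints_path, json_path, video_path in zip(all_joints, all_json, all_video):
#     if output_exists(video_path, output_joints_video_path):
#         continue
#     draw_frame_Joints(json_path, video_path, read_Joint_file(joints_path), output_joints_video_path)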

Summary

This post covered how to obtain video_path, the action start/end times and labels, and the per-frame joint coordinates and confidence values from the JSON and txt files respectively, then draw the joint points on the video during each action and save the result. The labelled start/end times were compared against the time ranges in which the joints are drawn, and overall the results look reasonable.
