Action Recognition in Practice, Day 2: Yolov5 + SlowFast + deepsort Action Detection (PytorchVideo)
1. Introduction
SlowFast is a deep learning framework for video action recognition proposed by Facebook AI Research (FAIR). It is designed to understand actions and behaviors in video and is widely used in video analytics, surveillance, and sports analysis.
SlowFast uses a distinctive two-pathway network design, consisting of a Slow pathway and a Fast pathway:
- Slow pathway: samples the video at a low frame rate to capture spatial semantics (appearance, scene context) and uses a heavier network to extract high-level spatial features.
- Fast pathway: samples the video at a high frame rate to capture motion (rapid movement, subtle changes) and uses a lightweight network so it can respond quickly to temporal dynamics.
This design lets SlowFast capture fine-grained motion and key spatial information at the same time, improving both the accuracy and the efficiency of recognizing complex actions.
Key techniques
- Two-pathway network: combines the slow and fast processing paths to capture complementary information.
- 3D convolutions: 3D ConvNets process the video volume directly, modeling the temporal dimension.
- Lateral fusion: features from the fast pathway are fused into the slow pathway at several stages to strengthen the model's discriminative power.
SlowFast is an influential design in video understanding; its two-pathway structure provides efficient and thorough analysis of dynamic visual content. A minimal loading-and-inference sketch follows at the end of this section.
- Higher accuracy: by analyzing spatial and temporal features jointly, SlowFast recognizes complex behaviors and actions more precisely.
- Better efficiency: the lightweight fast pathway keeps the computational cost low, while the slow pathway still performs deep feature extraction, balancing speed and accuracy.
- Broad applicability: it can be used in security surveillance, sports analysis, health monitoring, autonomous driving, and many other scenarios.
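To make the two-pathway idea concrete, here is a minimal inference sketch using the slowfast_r50 entry point from PyTorchVideo's torch.hub model zoo (a Kinetics-400 checkpoint, not the AVA detection model used later in this post). The frame counts and crop size follow the PyTorchVideo tutorial and are illustrative only:

import torch

# Load a pretrained SlowFast-R50 from the PyTorchVideo model zoo (weights download on first use).
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)
model.eval()

# Dummy clip: batch x channels x frames x height x width.
clip = torch.randn(1, 3, 32, 256, 256)

# Fast pathway sees all 32 frames; slow pathway sees every 4th frame (ALPHA = 4, as in the demo config below).
alpha = 4
slow_idx = torch.linspace(0, clip.shape[2] - 1, clip.shape[2] // alpha).long()
slow = torch.index_select(clip, dim=2, index=slow_idx)
fast = clip

with torch.no_grad():
    preds = model([slow, fast])  # the model takes [slow_pathway, fast_pathway]
print(preds.shape)  # class scores over the 400 Kinetics classes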
2. Environment Setup
https://github.com/facebookresearch/SlowFast
conda create -n action python=3.9
conda activate action
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install numpy
pip install fvcore
pip install simplejson
pip install PyAV
pip install psutil
pip install opencv-python
pip install tensorboard
pip install moviepy
pip install pytorchvideo
git clone https://github.com/facebookresearch/detectron2.git
pip install -e detectron2
git clone https://github.com/facebookresearch/SlowFast.git
cd SlowFast
python setup.py build develop
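Before moving on, a quick import check catches most version mismatches early. This is only a convenience snippet, not part of the official setup:

# check_env.py -- sanity check for the "action" environment
import torch
import torchvision
import detectron2
import pytorchvideo
import slowfast

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
print("detectron2:", detectron2.__version__)
print("pytorchvideo:", pytorchvideo.__version__)
print("slowfast imported from:", slowfast.__file__)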
3. File Preparation
1. Download SLOWFAST_32x2_R101_50_50.pkl
SlowFast/MODEL_ZOO.md at main · facebookresearch/SlowFast · GitHub
Download the third entry in the AVA table, which is the SLOWFAST_32x2_R101_50_50 checkpoint named above.
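The checkpoints in MODEL_ZOO.md are Caffe2-style pickles. If you want to confirm the download is intact, the snippet below unpickles it and prints the top level; it only assumes the file is a valid pickle, since the exact key layout can differ between checkpoints:

import pickle

# Path used in the config later in this post; adjust to wherever you saved the file.
ckpt_path = "/home/ps/ycc/SlowFast-main/demo/AVA/SLOWFAST_32x2_R101_50_50.pkl"
with open(ckpt_path, "rb") as f:
    ckpt = pickle.load(f, encoding="latin1")  # latin1 is needed for Caffe2-era pickles
print(type(ckpt))
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))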
2. Create ava.json with the following 80 AVA action labels (a quick validation snippet follows the JSON):
{
"bend/bow (at the waist)": 0,
"crawl": 1,
"crouch/kneel": 2,
"dance": 3,
"fall down": 4,
"get up": 5,
"jump/leap": 6,
"lie/sleep": 7,
"martial art": 8,
"run/jog": 9,
"sit": 10,
"stand": 11,
"swim": 12,
"walk": 13,
"answer phone": 14,
"brush teeth": 15,
"carry/hold (an object)": 16,
"catch (an object)": 17,
"chop": 18,
"climb (e.g., a mountain)": 19,
"clink glass": 20,
"close (e.g., a door, a box)": 21,
"cook": 22,
"cut": 23,
"dig": 24,
"dress/put on clothing": 25,
"drink": 26,
"drive (e.g., a car, a truck)": 27,
"eat": 28,
"enter": 29,
"exit": 30,
"extract": 31,
"fishing": 32,
"hit (an object)": 33,
"kick (an object)": 34,
"lift/pick up": 35,
"listen (e.g., to music)": 36,
"open (e.g., a window, a car door)": 37,
"paint": 38,
"play board game": 39,
"play musical instrument": 40,
"play with pets": 41,
"point to (an object)": 42,
"press": 43,
"pull (an object)": 44,
"push (an object)": 45,
"put down": 46,
"read": 47,
"ride (e.g., a bike, a car, a horse)": 48,
"row boat": 49,
"sail boat": 50,
"shoot": 51,
"shovel": 52,
"smoke": 53,
"stir": 54,
"take a photo": 55,
"text on/look at a cellphone": 56,
"throw": 57,
"touch (an object)": 58,
"turn (e.g., a screwdriver)": 59,
"watch (e.g., TV)": 60,
"work on a computer": 61,
"write": 62,
"fight/hit (a person)": 63,
"give/serve (an object) to (a person)": 64,
"grab (a person)": 65,
"hand clap": 66,
"hand shake": 67,
"hand wave": 68,
"hug (a person)": 69,
"kick (a person)": 70,
"kiss (a person)": 71,
"lift (a person)": 72,
"listen to (a person)": 73,
"play with kids": 74,
"push (another person)": 75,
"sing to (e.g., self, a person, a group)": 76,
"take (an object) from (a person)": 77,
"talk to (e.g., self, a person, a group)": 78,
"watch (a person)": 79
}
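NUM_CLASSES is 80 in the config below, so ava.json must contain exactly 80 entries with contiguous indices. A small check (a convenience snippet, assuming the file sits at demo/AVA/ava.json):

import json

with open("demo/AVA/ava.json") as f:
    labels = json.load(f)

assert len(labels) == 80, f"expected 80 classes, got {len(labels)}"
assert sorted(labels.values()) == list(range(80)), "indices must be 0..79 with no gaps"
print("ava.json OK:", len(labels), "classes")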
3. Create input and output folders under the SlowFast directory and put a test video 1.mp4 into input. Then modify SLOWFAST_32x2_R101_50_50.yaml as follows (a path pre-flight check is sketched after the config):
TRAIN:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 16
  EVAL_PERIOD: 1
  CHECKPOINT_PERIOD: 1
  AUTO_RESUME: True
  CHECKPOINT_FILE_PATH: '/home/ps/ycc/SlowFast-main/demo/AVA/SLOWFAST_32x2_R101_50_50.pkl'  # path to pretrained model
  CHECKPOINT_TYPE: pytorch
DATA:
  NUM_FRAMES: 32
  SAMPLING_RATE: 2
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 256
  INPUT_CHANNEL_NUM: [3, 3]
DETECTION:
  ENABLE: True
  ALIGNED: False
AVA:
  BGR: False
  DETECTION_SCORE_THRESH: 0.8
  TEST_PREDICT_BOX_LISTS: ["person_box_67091280_iou90/ava_detection_val_boxes_and_labels.csv"]
SLOWFAST:
  ALPHA: 4
  BETA_INV: 8
  FUSION_CONV_CHANNEL_RATIO: 2
  FUSION_KERNEL_SZ: 5
RESNET:
  ZERO_INIT_FINAL_BN: True
  WIDTH_PER_GROUP: 64
  NUM_GROUPS: 1
  DEPTH: 101
  TRANS_FUNC: bottleneck_transform
  STRIDE_1X1: False
  NUM_BLOCK_TEMP_KERNEL: [[3, 3], [4, 4], [6, 6], [3, 3]]
  SPATIAL_DILATIONS: [[1, 1], [1, 1], [1, 1], [2, 2]]
  SPATIAL_STRIDES: [[1, 1], [2, 2], [2, 2], [1, 1]]
NONLOCAL:
  LOCATION: [[[], []], [[], []], [[6, 13, 20], []], [[], []]]
  GROUP: [[1, 1], [1, 1], [1, 1], [1, 1]]
  INSTANTIATION: dot_product
  POOL: [[[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]]]
BN:
  USE_PRECISE_STATS: False
  NUM_BATCHES_PRECISE: 200
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 1e-7
  OPTIMIZING_METHOD: sgd
MODEL:
  NUM_CLASSES: 80
  ARCH: slowfast
  MODEL_NAME: SlowFast
  LOSS_FUNC: bce
  DROPOUT_RATE: 0.5
  HEAD_ACT: sigmoid
TEST:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 8
DATA_LOADER:
  NUM_WORKERS: 2
  PIN_MEMORY: True
NUM_GPUS: 1
NUM_SHARDS: 1
RNG_SEED: 0
OUTPUT_DIR: .
# TENSORBOARD:
#   MODEL_VIS:
#     TOPK: 2
DEMO:
  ENABLE: True
  LABEL_FILE_PATH: '/home/ps/ycc/SlowFast-main/demo/AVA/ava.json'  # Add local label file path here.
  INPUT_VIDEO: "/home/ps/ycc/SlowFast-main/input/1.mp4"
  OUTPUT_FILE: "/home/ps/ycc/SlowFast-main/output/1.mp4"
  # WEBCAM: 0
  DETECTRON2_CFG: "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
  DETECTRON2_WEIGHTS: detectron2://COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/model_final_280758.pkl
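Every absolute path in the config must exist on your machine; a typo here tends to surface as a confusing error deep inside the demo. A small pre-flight check, assuming the same paths as in the YAML above:

import os

paths = {
    "checkpoint": "/home/ps/ycc/SlowFast-main/demo/AVA/SLOWFAST_32x2_R101_50_50.pkl",
    "label file": "/home/ps/ycc/SlowFast-main/demo/AVA/ava.json",
    "input video": "/home/ps/ycc/SlowFast-main/input/1.mp4",
}
for name, p in paths.items():
    print(f"{name}: {p} ->", "OK" if os.path.isfile(p) else "MISSING")

# The demo writes the annotated video here; make sure the folder exists.
os.makedirs("/home/ps/ycc/SlowFast-main/output", exist_ok=True)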
4. Running the Demo and Error Handling
From the repository root, run:
python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
Error 1:
(action) ps@ps:~/ycc/SlowFast-main$ python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
Traceback (most recent call last):
File "/home/ps/ycc/SlowFast-main/tools/run_net.py", line 6, in <module>
from slowfast.utils.misc import launch_job
File "/home/ps/ycc/SlowFast-main/SlowFast/slowfast/utils/misc.py", line 13, in <module>
import slowfast.utils.logging as logging
File "/home/ps/ycc/SlowFast-main/SlowFast/slowfast/utils/logging.py", line 16, in <module>
import slowfast.utils.distributed as du
File "/home/ps/ycc/SlowFast-main/SlowFast/slowfast/utils/distributed.py", line 13, in <module>
from pytorchvideo.layers.distributed import ( # noqa
ImportError: cannot import name 'cat_all_gather' from 'pytorchvideo.layers.distributed' (/home/ps/anaconda3/envs/action/lib/python3.9/site-packages/pytorchvideo/layers/distributed.py)
Fix 1: download the pytorchvideo source and install it with pip:
GitHub - facebookresearch/pytorchvideo: A deep learning library for video understanding research.
cd pytorchvideo-main
pip install -e .
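After the editable install, verify that the symbol from the traceback can now be imported (if this runs without error, the source install has replaced the older pip wheel):

from pytorchvideo.layers.distributed import cat_all_gather
print(cat_all_gather)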
Error 2:
(action) ps@ps:~/ycc/SlowFast-main$ python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
/home/ps/anaconda3/envs/action/lib/python3.9/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/home/ps/anaconda3/envs/action/lib/python3.9/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
Traceback (most recent call last):
File "/home/ps/ycc/SlowFast-main/tools/run_net.py", line 8, in
from vision.fair.slowfast.tools.demo_net import demo
ModuleNotFoundError: No module named 'vision'
Fix 2: remove the vision.fair.slowfast prefix from the imports in tools/run_net.py so they read as follows; apply the same change wherever else the prefix appears (a search helper is sketched after the snippet):
from demo_net import demo
from test_net import test
from train_net import train
from visualization import visualize
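To find every other file that still uses the old prefix, here is a small search helper to run from the repo root (plain grep -rn "vision.fair.slowfast" works just as well):

from pathlib import Path

# List every line that still imports via the "vision.fair.slowfast" prefix so the same
# manual fix as above can be applied there too.
needle = "vision.fair.slowfast"
for py in sorted(Path(".").rglob("*.py")):
    for lineno, line in enumerate(py.read_text(encoding="utf-8", errors="ignore").splitlines(), start=1):
        if needle in line:
            print(f"{py}:{lineno}: {line.strip()}")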