SlowFast Action Recognition Deployment

1. Introduction

SlowFast is a deep learning framework for video action recognition, proposed by Facebook AI Research (FAIR). It is designed to understand actions and behaviors in video and is widely used in video analytics, surveillance, sports analysis, and similar domains.

The SlowFast model uses a distinctive two-pathway network structure, consisting of a Slow pathway and a Fast pathway:

  1. Slow pathway: captures spatial information (shapes, scene context, etc.) at a low frame rate, using a deeper network to extract high-level spatial features.
  2. Fast pathway: captures dynamic information (fast motion, subtle changes, etc.) at a high frame rate, using a lighter network to respond quickly to motion.

This structure lets SlowFast capture fine-grained motion and key spatial information at the same time, improving both the accuracy and the efficiency of recognizing complex actions.
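The two sampling rates can be pictured with plain index slicing. The 4x rate ratio below is illustrative (it happens to match the ALPHA value in the config used later in this guide); the exact strides are not the library's internals.

```python
# Frame indices of a 64-frame clip, sampled once per pathway.
frames = list(range(64))

fast_frames = frames[::2]    # high frame rate: 32 frames, fine motion detail
slow_frames = frames[::8]    # low frame rate: 8 frames, spatial semantics

print(len(fast_frames))                      # 32
print(len(slow_frames))                      # 8
print(len(fast_frames) // len(slow_frames))  # 4  (the rate ratio, ALPHA)
```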

Techniques used

  • Two-pathway network: combines slow and fast processing pathways, optimizing how information is captured and processed.
  • 3D convolutions: 3D ConvNets process the video volume directly, modeling information along the temporal dimension.
  • Fusion strategy: features from the Fast and Slow pathways are fused at certain stages of the network, strengthening the model's discriminative power.
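The pathways are also asymmetric in width, which is what keeps the Fast pathway cheap despite its higher frame rate. A rough illustration (the 64-channel stem and the 1/8 ratio correspond to WIDTH_PER_GROUP and BETA_INV in the config used later; treat the numbers as an example, not the full channel schedule):

```python
slow_channels = 64                 # Slow pathway stem width
beta_inv = 8                       # Fast pathway uses 1/BETA_INV of the channels
fast_channels = slow_channels // beta_inv

print(fast_channels)  # 8
```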

SlowFast is an influential technique in video understanding: through its network design it provides efficient, in-depth analysis of dynamic visual content. Its main strengths:

  1. Higher accuracy: by analyzing a video's spatial and temporal features jointly, SlowFast recognizes and understands complex behaviors and actions more precisely.
  2. Better efficiency: the lightweight Fast pathway responds at low computational cost, while the Slow pathway provides deep feature extraction, balancing speed and accuracy.
  3. Broad applicability: usable in security surveillance, sports analysis, health monitoring, autonomous driving, and many other scenarios.

2. Environment Setup

https://github.com/facebookresearch/SlowFast


conda create -n action python=3.9

conda activate action

conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=11.8 -c pytorch -c nvidia

pip install numpy

pip install fvcore

pip install simplejson

pip install PyAV

pip install psutil

pip install opencv-python

pip install tensorboard

pip install moviepy

pip install pytorchvideo

git clone https://github.com/facebookresearch/detectron2.git
pip install -e detectron2

git clone https://github.com/facebookresearch/SlowFast.git
cd SlowFast
python setup.py build develop
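Before moving on, it can save time to confirm that every dependency actually landed in the action environment. The check below is not SlowFast-specific; importlib.util.find_spec simply reports whether a module is importable without importing the heavy packages:

```python
import importlib.util

# Modules the steps above should have installed; find_spec returns
# None for anything that is missing.
required = ["torch", "torchvision", "av", "cv2", "fvcore",
            "simplejson", "psutil", "pytorchvideo", "detectron2", "slowfast"]
missing = [m for m in required if importlib.util.find_spec(m) is None]
print(missing)  # [] when the environment is complete
```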

3. File Preparation

1. Download SLOWFAST_32x2_R101_50_50.pkl

From SlowFast/MODEL_ZOO.md in the facebookresearch/SlowFast GitHub repository, download the third entry.

2. Create ava.json:

{
    "bend/bow (at the waist)": 0,
    "crawl": 1,
    "crouch/kneel": 2,
    "dance": 3,
    "fall down": 4,
    "get up": 5,
    "jump/leap": 6,
    "lie/sleep": 7,
    "martial art": 8,
    "run/jog": 9,
    "sit": 10,
    "stand": 11,
    "swim": 12,
    "walk": 13,
    "answer phone": 14,
    "brush teeth": 15,
    "carry/hold (an object)": 16,
    "catch (an object)": 17,
    "chop": 18,
    "climb (e.g., a mountain)": 19,
    "clink glass": 20,
    "close (e.g., a door, a box)": 21,
    "cook": 22,
    "cut": 23,
    "dig": 24,
    "dress/put on clothing": 25,
    "drink": 26,
    "drive (e.g., a car, a truck)": 27,
    "eat": 28,
    "enter": 29,
    "exit": 30,
    "extract": 31,
    "fishing": 32,
    "hit (an object)": 33,
    "kick (an object)": 34,
    "lift/pick up": 35,
    "listen (e.g., to music)": 36,
    "open (e.g., a window, a car door)": 37,
    "paint": 38,
    "play board game": 39,
    "play musical instrument": 40,
    "play with pets": 41,
    "point to (an object)": 42,
    "press": 43,
    "pull (an object)": 44,
    "push (an object)": 45,
    "put down": 46,
    "read": 47,
    "ride (e.g., a bike, a car, a horse)": 48,
    "row boat": 49,
    "sail boat": 50,
    "shoot": 51,
    "shovel": 52,
    "smoke": 53,
    "stir": 54,
    "take a photo": 55,
    "text on/look at a cellphone": 56,
    "throw": 57,
    "touch (an object)": 58,
    "turn (e.g., a screwdriver)": 59,
    "watch (e.g., TV)": 60,
    "work on a computer": 61,
    "write": 62,
    "fight/hit (a person)": 63,
    "give/serve (an object) to (a person)": 64,
    "grab (a person)": 65,
    "hand clap": 66,
    "hand shake": 67,
    "hand wave": 68,
    "hug (a person)": 69,
    "kick (a person)": 70,
    "kiss (a person)": 71,
    "lift (a person)": 72,
    "listen to (a person)": 73,
    "play with kids": 74,
    "push (another person)": 75,
    "sing to (e.g., self, a person, a group)": 76,
    "take (an object) from (a person)": 77,
    "talk to (e.g., self, a person, a group)": 78,
    "watch (a person)": 79
}
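At inference time the demo uses this file to map predicted class ids back to action names. A minimal sketch of that lookup, using a few entries from the file above:

```python
# Invert a few entries of ava.json so a predicted class id can be
# turned back into a human-readable AVA action name.
label_map = {"fall down": 4, "run/jog": 9, "sit": 10, "stand": 11, "walk": 13}

id_to_label = {v: k for k, v in label_map.items()}

print(id_to_label[4])   # fall down
print(id_to_label[11])  # stand
```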

3. Under SlowFast, create input and output folders and place the video 1.mp4 into input. Then edit SLOWFAST_32x2_R101_50_50.yaml:

TRAIN:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 16
  EVAL_PERIOD: 1
  CHECKPOINT_PERIOD: 1
  AUTO_RESUME: True
  CHECKPOINT_FILE_PATH: '/home/ps/ycc/SlowFast-main/demo/AVA/SLOWFAST_32x2_R101_50_50.pkl'  # path to the pretrained model
  CHECKPOINT_TYPE: pytorch
DATA:
  NUM_FRAMES: 32
  SAMPLING_RATE: 2
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 256
  INPUT_CHANNEL_NUM: [3, 3]
DETECTION:
  ENABLE: True
  ALIGNED: False
AVA:
  BGR: False
  DETECTION_SCORE_THRESH: 0.8
  TEST_PREDICT_BOX_LISTS: ["person_box_67091280_iou90/ava_detection_val_boxes_and_labels.csv"]
SLOWFAST:
  ALPHA: 4
  BETA_INV: 8
  FUSION_CONV_CHANNEL_RATIO: 2
  FUSION_KERNEL_SZ: 5
RESNET:
  ZERO_INIT_FINAL_BN: True
  WIDTH_PER_GROUP: 64
  NUM_GROUPS: 1
  DEPTH: 101
  TRANS_FUNC: bottleneck_transform
  STRIDE_1X1: False
  NUM_BLOCK_TEMP_KERNEL: [[3, 3], [4, 4], [6, 6], [3, 3]]
  SPATIAL_DILATIONS: [[1, 1], [1, 1], [1, 1], [2, 2]]
  SPATIAL_STRIDES: [[1, 1], [2, 2], [2, 2], [1, 1]]
NONLOCAL:
  LOCATION: [[[], []], [[], []], [[6, 13, 20], []], [[], []]]
  GROUP: [[1, 1], [1, 1], [1, 1], [1, 1]]
  INSTANTIATION: dot_product
  POOL: [[[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]]]
BN:
  USE_PRECISE_STATS: False
  NUM_BATCHES_PRECISE: 200
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 1e-7
  OPTIMIZING_METHOD: sgd
MODEL:
  NUM_CLASSES: 80
  ARCH: slowfast
  MODEL_NAME: SlowFast
  LOSS_FUNC: bce
  DROPOUT_RATE: 0.5
  HEAD_ACT: sigmoid
TEST:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 8
DATA_LOADER:
  NUM_WORKERS: 2
  PIN_MEMORY: True

NUM_GPUS: 1
NUM_SHARDS: 1
RNG_SEED: 0
OUTPUT_DIR: .
# TENSORBOARD:
#   MODEL_VIS:
#     TOPK: 2
DEMO:
  ENABLE: True
  LABEL_FILE_PATH: '/home/ps/ycc/SlowFast-main/demo/AVA/ava.json' # Add local label file path here.
  INPUT_VIDEO: "/home/ps/ycc/SlowFast-main/input/1.mp4"
  OUTPUT_FILE: "/home/ps/ycc/SlowFast-main/output/1.mp4"
  #WEBCAM: 0
  DETECTRON2_CFG: "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
  DETECTRON2_WEIGHTS: detectron2://COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/model_final_280758.pkl
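A quick sanity check on the DATA settings above: the model sees NUM_FRAMES frames taken every SAMPLING_RATE raw frames, so each prediction covers NUM_FRAMES × SAMPLING_RATE raw frames of video. The 30 fps below is an assumed source frame rate, not something the yaml sets:

```python
num_frames = 32     # DATA.NUM_FRAMES: frames fed to the network
sampling_rate = 2   # DATA.SAMPLING_RATE: keep every 2nd raw frame
fps = 30.0          # assumed frame rate of the input video

raw_frames_per_clip = num_frames * sampling_rate
seconds_per_clip = raw_frames_per_clip / fps

print(raw_frames_per_clip)         # 64
print(round(seconds_per_clip, 2))  # 2.13
```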

4. Running and Troubleshooting

Run from the project root directory:

python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml

Error 1:

(action) ps@ps:~/ycc/SlowFast-main$ python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
Traceback (most recent call last):
  File "/home/ps/ycc/SlowFast-main/tools/run_net.py", line 6, in <module>
    from slowfast.utils.misc import launch_job
  File "/home/ps/ycc/SlowFast-main/SlowFast/slowfast/utils/misc.py", line 13, in <module>
    import slowfast.utils.logging as logging
  File "/home/ps/ycc/SlowFast-main/SlowFast/slowfast/utils/logging.py", line 16, in <module>
    import slowfast.utils.distributed as du
  File "/home/ps/ycc/SlowFast-main/SlowFast/slowfast/utils/distributed.py", line 13, in <module>
    from pytorchvideo.layers.distributed import (  # noqa
ImportError: cannot import name 'cat_all_gather' from 'pytorchvideo.layers.distributed' (/home/ps/anaconda3/envs/action/lib/python3.9/site-packages/pytorchvideo/layers/distributed.py)

Fix 1: the pytorchvideo release on PyPI is too old (it lacks cat_all_gather); download pytorchvideo and install it from source with pip:

GitHub - facebookresearch/pytorchvideo: A deep learning library for video understanding research.

cd pytorchvideo-main
pip install -e .

Error 2:

(action) ps@ps:~/ycc/SlowFast-main$ python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
/home/ps/anaconda3/envs/action/lib/python3.9/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/home/ps/anaconda3/envs/action/lib/python3.9/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
Traceback (most recent call last):
  File "/home/ps/ycc/SlowFast-main/tools/run_net.py", line 8, in <module>
    from vision.fair.slowfast.tools.demo_net import demo
ModuleNotFoundError: No module named 'vision'

Fix 2: remove the vision.fair.slowfast prefix from the affected imports, e.g. in tools/run_net.py, as shown below; apply the same change wherever the prefix appears.

from demo_net import demo
from test_net import test
from train_net import train
from visualization import visualize
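The same prefix shows up in several files under tools/, so the edit can be scripted. fix_imports below is a hypothetical helper (the function name and the rewrite are mine, not part of SlowFast); back files up before applying anything like it in place:

```python
# Hypothetical helper: drop the "vision.fair.slowfast.tools." package
# prefix that breaks imports when running from the repo root.
def fix_imports(source: str) -> str:
    return source.replace("from vision.fair.slowfast.tools.", "from ")

print(fix_imports("from vision.fair.slowfast.tools.demo_net import demo"))
# from demo_net import demo
```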

 

The SlowFast architecture is widely used in video action recognition; it combines two different convolutional pathways, one slow and one fast. The code below sketches the core idea as a self-contained toy model, not the official implementation: the real SlowFast fuses the pathways with lateral connections at multiple stages, while this sketch pools both pathway outputs to a common shape and concatenates them once along the channel dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Bottleneck(nn.Module):
    expansion = 4  # each block expands its channels by 4x

    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm3d(planes)
        self.conv2 = nn.Conv3d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(planes)
        self.conv3 = nn.Conv3d(planes, planes * self.expansion,
                               kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm3d(planes * self.expansion)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes * self.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv3d(in_planes, planes * self.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm3d(planes * self.expansion),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        return F.relu(out)


class SlowFast(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super().__init__()
        # Fast pathway: many frames, few channels (ends with 64).
        self.fast = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=(1, 5, 5), stride=(1, 2, 2),
                      padding=(0, 2, 2), bias=False),
            nn.BatchNorm3d(8), nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
        )
        # Slow pathway: few frames, more channels (ends with 128).
        self.slow = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1),
                      padding=(0, 0, 0), bias=False),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
        )
        # After fusion the residual trunk sees 128 (slow) + 64 (fast) channels.
        self.in_planes = 192
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool3d((1, 1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for s in strides:
            layers.append(block(self.in_planes, planes, s))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        fast = self.fast(x[:, :, ::2])    # sample every 2nd frame
        slow = self.slow(x[:, :, ::16])   # sample every 16th frame
        # The pathways end with different temporal/spatial sizes, so
        # pool both to a common shape before fusing along channels.
        fast = F.adaptive_avg_pool3d(fast, (4, 7, 7))
        slow = F.adaptive_avg_pool3d(slow, (4, 7, 7))
        x = torch.cat([slow, fast], dim=1)
        x = self.layer4(self.layer3(self.layer2(self.layer1(x))))
        x = self.avgpool(x)
        return self.fc(torch.flatten(x, 1))
```

The code defines the Bottleneck block and the SlowFast class. Bottleneck is the basic residual unit used to build each stage; the SlowFast class is the body of the network, defining the two pathways, the residual stages, and the forward pass. The hyperparameters of both can be adjusted to suit different action recognition tasks, e.g. SlowFast(Bottleneck, [3, 4, 6, 3], num_classes=80).