Action Recognition with SlowFast

1. Windows + SlowFast for Action Recognition

1.1 Environment Setup

1. First install CUDA: https://developer.nvidia.com/cuda-downloads, following the installer steps. Afterwards, run nvcc -V in a terminal to see the installed CUDA version.
2. Install cuDNN (it must match your CUDA version): https://developer.nvidia.com/rdp/cudnn-download. I first downloaded the exe installer, but installation failed: the development and runtime components would not install (screenshot omitted).
So I downloaded the zip archive instead, unpacked it, and copied the three folders inside into the corresponding CUDA installation directories.
After copying, add the directories to the environment variables.
Then run the test programs; their output confirms that CUDA and cuDNN were installed successfully.
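As a quick sanity check of the toolkit install, a small Python helper (a sketch of my own, not part of the original steps) can locate nvcc on the PATH and report its release string:

```python
import shutil
import subprocess

def cuda_toolkit_version():
    """Return nvcc's 'release' line, or None if nvcc is not on PATH."""
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return None
    out = subprocess.run([nvcc, "-V"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "release" in line:
            return line.strip()
    return None

print(cuda_toolkit_version())
```

If this prints None, nvcc is not on the PATH and the environment-variable step above needs revisiting.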
3. Create the Python environment.
If you run into an error here (screenshot omitted), download this Baidu Netdisk resource and run the exe file:
Link: https://pan.baidu.com/s/1rZZohYya9ZBSHBV2nm9urA
Extraction code: vplk
If that does not help, connecting through a VPN should.
I kept running into problems here, so I switched environments and installed CUDA 10.0.
1. When switching between multiple CUDA/cuDNN versions: for a version you are not using, change its PATH entry in the environment variables to a non-existent path, e.g. change v9.0 to v9.0.111.
For the version you do need, point the CUDA_PATH and NVCUDASAMPLES_ROOT environment variables at its directory.
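The switching trick above can be sketched in Python for a single process (illustrative only; the blog edits the system-wide environment variables instead, and the install root below is an assumption based on the default CUDA location):

```python
import os

# Default Windows install root for CUDA toolkits (assumption).
CUDA_BASE = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA"

def select_cuda(version):
    """Point CUDA_PATH (and PATH) at one toolkit version, for this process only."""
    root = os.path.join(CUDA_BASE, "v" + version)
    os.environ["CUDA_PATH"] = root
    # Mirrors the env-var list above; illustrative, not a system-wide change.
    os.environ["NVCUDASAMPLES_ROOT"] = root
    os.environ["PATH"] = os.path.join(root, "bin") + os.pathsep + os.environ.get("PATH", "")
    return root

print(select_cuda("10.0"))
```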
2. Install git: https://npm.taobao.org/mirrors/git-for-windows, then add it to the environment variables.
3. Set up the virtual environment:

conda create -n slowfast python=3.6
conda activate slowfast
# install git
conda install git
pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
# install fvcore
pip install 'git+https://github.com/facebookresearch/fvcore'
# install simplejson
pip install simplejson
# install PyAV
conda install av -c conda-forge
# install iopath
pip install -U iopath
# install psutil
pip install psutil
# install opencv-python
pip install opencv-python
# install tensorboard
pip install tensorboard
# install cython
pip install cython
# install pytorch via conda instead (alternative to the pip install above)
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
# or pin pytorch 1.2.0 for CUDA 10.0
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
# install detectron2
git clone https://github.com/facebookresearch/detectron2 detectron2_repo
cd detectron2_repo
python setup.py build develop
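With everything installed, a quick import check (a throwaway sketch; the module list is mine) confirms that the pieces are importable and shows their versions:

```python
import importlib

def report(name):
    """Return 'name version' if the module imports, else 'name missing'."""
    try:
        mod = importlib.import_module(name)
        return "{} {}".format(name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        return "{} missing".format(name)

for name in ["torch", "torchvision", "cv2", "simplejson", "fvcore", "detectron2"]:
    print(report(name))
```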

2. Xavier + SlowFast for Action Recognition

2.1 Environment Setup


conda create -n slowfast python=3.7
conda activate slowfast
pip install --upgrade pip
pip install torchvision
pip install torch
pip install opencv-python
pip install simplejson
pip install -U fvcore
pip install torchaudio
pip install quote
git clone git@github.com:PyAV-Org/PyAV
cd PyAV
source scripts/activate.sh
pip install --upgrade -r tests/requirements.txt
./scripts/build-deps  # permission denied even as root, so:
chmod +x scripts/build-deps
./scripts/build-deps
make
python setup.py install
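Once PyAV builds, a minimal decode helper can verify the install against any sample clip (illustrative; the function name and the file path are mine, and it returns None when av is unavailable):

```python
def count_frames(path):
    """Decode a video with PyAV and count its frames; None if av is unavailable."""
    try:
        import av
    except ImportError:
        return None
    with av.open(path) as container:
        return sum(1 for _ in container.decode(video=0))
```

For example, count_frames("sample.mp4") should return the clip's frame count once av imports cleanly.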

I kept hitting problems at this point; see https://blog.csdn.net/qq_21368481/article/details/89448226 and use the last method described there, which simply removes some of the offending files.

python setup.py build
git clone https://gitee.com/qiang_sun/SlowFast.git
cd SlowFast
python setup.py build develop
pip install portalocker

conda uninstall pyyaml
pip install -U fvcore
# install PyAV
sudo apt-get install -y python-dev python-virtualenv pkg-config
sudo apt-get install -y libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev
git clone https://github.com/PyAV-Org/PyAV.git
cd PyAV
pip install Cython
python setup.py build  # if a gcc error appears, edit the files the highlighted hints point to; av then installs successfully

pip install matplotlib pandas psutil
pip install pytorchvideo
pip install --upgrade pip
pip install torch torchvision torchaudio

conda install simplejson
conda install ninja
pip install torch
cd /home/nvidia/detectron2-main/detectron2-main
python setup.py install
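After the Xavier setup, a tiny probe (a sketch; the function name is mine) reports whether PyTorch can actually reach the GPU:

```python
def gpu_status():
    """Report whether PyTorch is installed and whether it can reach a CUDA device."""
    try:
        import torch
    except ImportError:
        return "torch missing"
    if torch.cuda.is_available():
        return "cuda available: " + torch.cuda.get_device_name(0)
    return "cuda unavailable"

print(gpu_status())
```

On a correctly configured Xavier this should name the onboard GPU; "cuda unavailable" usually means the wrong torch wheel was installed.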


3. References

[1]https://blog.csdn.net/souyan1991/article/details/109668593
[2]https://blog.csdn.net/qq_37296487/article/details/83028394
