Action Recognition (Part 2): SlowFast

1 The SlowFast model has two pathways: a Slow pathway and a Fast pathway. The Slow pathway uses a low temporal sampling rate and a high channel count, and mainly captures spatial semantics; the Fast pathway uses a high temporal sampling rate and a low channel count (mainly to keep computation cheap) to capture fine-grained temporal (motion) features. Both pathways use a 3D ResNet as the backbone for feature extraction. The base network is shown below, followed by a short sampling sketch:
[Figure: SlowFast two-pathway base architecture]
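As a quick illustration of the two sampling rates, here is a minimal sketch (my own, assuming the paper's defaults: Slow temporal stride τ = 16 and speed ratio α = 8, so the Fast stride is τ/α = 2):

```python
import torch

# A raw clip of T = 64 frames: (batch, channels, time, height, width).
clip = torch.randn(1, 3, 64, 224, 224)

tau, alpha = 16, 8                     # Slow temporal stride and speed ratio
slow_in = clip[:, :, ::tau]            # 4 frames  -> sparse sampling, spatial semantics
fast_in = clip[:, :, ::tau // alpha]   # 32 frames -> dense sampling, motion cues

print(slow_in.shape, fast_in.shape)    # (1, 3, 4, 224, 224), (1, 3, 32, 224, 224)
```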
2 Neither pathway applies temporal downsampling. If the feature shape in the Slow pathway is $\{T, S^2, C\}$ (T frames, S×S spatial size, C channels), the corresponding Fast-pathway feature shape is $\{\alpha T, S^2, \beta C\}$. In the Slow pathway, non-degenerate temporal convolutions (temporal kernel size > 1, i.e. 3×1×1 kernels) are used only in res4 and res5, because applying them in the earlier res stages was found to reduce accuracy. The paper argues: "We argue that this is because when objects move fast and the temporal stride is large, there is little correlation within a temporal receptive field unless the spatial receptive field is large enough (i.e., in later layers)." The Fast pathway uses non-degenerate temporal convolutions throughout, because this "pathway holds fine temporal resolution for the temporal convolutions to capture detailed motion."
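To make "non-degenerate temporal convolution" concrete, here is a minimal sketch contrasting a purely spatial 1×3×3 kernel with a 3×1×1 temporal kernel (the channel count of 64 and the feature sizes are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 8, 14, 14)  # (N, C, T, H, W) feature map

# Degenerate temporal conv: kernel size 1 on the time axis, purely spatial (1x3x3).
spatial_only = nn.Conv3d(64, 64, kernel_size=(1, 3, 3), padding=(0, 1, 1))

# Non-degenerate temporal conv: temporal kernel size 3 (3x1x1) -- the form used
# in res4/res5 of the Slow pathway and throughout the Fast pathway.
temporal = nn.Conv3d(64, 64, kernel_size=(3, 1, 1), padding=(1, 0, 0))

y = temporal(spatial_only(x))
print(y.shape)  # torch.Size([1, 64, 8, 14, 14]) -- no temporal downsampling
```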
3 Lateral connections fuse the Fast-pathway features into the Slow-pathway features. The paper offers three transformations:
(i) Time-to-channel: directly reshape $\{\alpha T, S^2, \beta C\}$ into $\{T, S^2, \alpha\beta C\}$, packing all $\alpha$ frames into the channels of one frame.
(ii) Time-strided sampling: equivalent to temporal downsampling; keep one frame out of every $\alpha$, giving $\{T, S^2, \beta C\}$.
(iii) Time-strided convolution: a 3D convolution with a $5 \times 1^2$ kernel, $2\beta C$ output channels, stride $= \alpha$, and padding $= 2$.
The transformed Fast features are then fused by summation or concatenation. The paper's experiments show that time-strided convolution combined with concatenation works best, and this is taken as the default; all three transformations are sketched below.
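Here is a minimal sketch of the three transformations (my own illustration, using α = 8, β = 1/8 and an assumed C = 256; the grouping used in the time-to-channel reshape is one plausible reading of "packing all α frames into the channels of one frame"):

```python
import torch
import torch.nn as nn

alpha = 8
T, S, C = 4, 14, 256
beta_C = C // 8                                   # beta = 1/8

fast = torch.randn(1, beta_C, alpha * T, S, S)    # {alpha*T, S^2, beta*C}
n, c, t, h, w = fast.shape

# (i) Time-to-channel: pack each group of alpha consecutive frames into
#     the channel dimension -> {T, S^2, alpha*beta*C}
t2c = (fast.reshape(n, c, t // alpha, alpha, h, w)
           .permute(0, 1, 3, 2, 4, 5)
           .reshape(n, c * alpha, t // alpha, h, w))

# (ii) Time-strided sampling: keep one frame out of every alpha -> {T, S^2, beta*C}
sampled = fast[:, :, ::alpha]

# (iii) Time-strided convolution: 5x1^2 kernel, 2*beta*C output channels,
#       temporal stride alpha, padding 2 -> {T, S^2, 2*beta*C}
tconv = nn.Conv3d(beta_C, 2 * beta_C, kernel_size=(5, 1, 1),
                  stride=(alpha, 1, 1), padding=(2, 0, 0))
convd = tconv(fast)

print(t2c.shape, sampled.shape, convd.shape)
# (1, 256, 4, 14, 14)  (1, 32, 4, 14, 14)  (1, 64, 4, 14, 14)
```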
4 For the hyperparameters $\alpha$ and $\beta$, the paper sets $\alpha = 8$; for $\beta$, values from 1/32 to 1/4 were tried, and 1/8 was chosen in the end.
5 For the Fast pathway, the paper also tries weaker spatial inputs: half spatial resolution, gray-scale frames, "time difference" frames, and optical flow. Gray-scale input turns out to perform close to RGB; a sketch of several of these inputs follows the figure.
[Figure: comparison of the alternative Fast-pathway inputs]
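A minimal sketch of three of these weaker inputs (my own illustration; the gray-scale weights are the standard ITU-R 601 luma coefficients, which the paper does not specify):

```python
import torch
import torch.nn.functional as F

clip = torch.rand(1, 3, 32, 224, 224)   # RGB Fast-pathway input in [0, 1]

# Gray-scale: weighted sum of the RGB channels (ITU-R 601 luma weights).
w = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1, 1)
gray = (clip * w).sum(dim=1, keepdim=True)          # (1, 1, 32, 224, 224)

# "Time difference" frames: subtract each frame from its successor.
diff = clip[:, :, 1:] - clip[:, :, :-1]             # (1, 3, 31, 224, 224)

# Half spatial resolution: downsample H and W, keep the temporal length.
half = F.interpolate(clip, scale_factor=(1, 0.5, 0.5), mode="trilinear")
print(gray.shape, diff.shape, half.shape)
```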

The SlowFast architecture is widely used in video action recognition; it combines a slow and a fast convolutional pathway. Below is a simplified reference implementation (with a single fusion after the stems rather than the paper's per-stage lateral connections):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Bottleneck(nn.Module):
    expansion = 4  # output channels = planes * expansion (referenced by SlowFast below)

    def __init__(self, in_planes, planes, stride=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv3d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm3d(planes)
        self.conv2 = nn.Conv3d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(planes)
        self.conv3 = nn.Conv3d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm3d(planes * 4)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes * 4:
            # projection shortcut when the output shape changes
            self.shortcut = nn.Sequential(
                nn.Conv3d(in_planes, planes * 4, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm3d(planes * 4)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class SlowFast(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(SlowFast, self).__init__()
        # Fast stem: few channels (8 -> 64), overall spatial stride 16, no temporal stride.
        self.fast = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=(1, 5, 5), stride=(1, 2, 2), padding=(0, 2, 2), bias=False),
            nn.BatchNorm3d(8),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True)
        )
        # Slow stem: more channels (64 -> 128). The first conv also strides spatially
        # so both stems end at the same H x W (overall spatial stride 16).
        self.slow = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(1, 1, 1), stride=(1, 2, 2), padding=(0, 0, 0), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(128),
            nn.ReLU(inplace=True)
        )
        # After fusion the feature map has 128 (slow) + 64 (fast) channels.
        self.in_planes = 128 + 64
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool3d((1, 1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        # x: (N, C, T, H, W); Fast sees every 2nd frame, Slow every 16th (alpha = 8).
        fast = self.fast(x[:, :, ::2])
        slow = self.slow(x[:, :, ::16])
        # Time-strided sampling so the temporal lengths match, then fuse on channels.
        fast = fast[:, :, ::8]
        x = torch.cat([slow, fast], dim=1)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
```

The code defines the Bottleneck block and the SlowFast class used to build the network: Bottleneck is the basic unit of each residual stage, while SlowFast defines the stage layout and the forward pass. The hyperparameters of both can be adjusted to fit different video action-recognition tasks.
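A quick smoke test (my own usage sketch, not from the original post), assuming a ResNet-50-style block layout [3, 4, 6, 3] and a clip length divisible by 16 so the two sampling strides line up:

```python
model = SlowFast(Bottleneck, [3, 4, 6, 3], num_classes=400)
clip = torch.randn(2, 3, 64, 224, 224)   # (N, C, T, H, W), T divisible by 16
logits = model(clip)
print(logits.shape)                      # torch.Size([2, 400])
```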
