[Paper Notes] Combining EfficientNet and Vision Transformers for Video Deepfake Detection

* Combining EfficientNet and Vision Transformers for Video Deepfake Detection

Title: Combining EfficientNet and Vision Transformers for Video Deepfake Detection

Authors: Davide Coccomini, Nicola Messina, Claudio Gennaro, and Fabrizio Falchi

ISTI-CNR, via G. Moruzzi 1, 56124, Pisa, Italy (Italian National Research Council)

Venue: ICIAP (International Conference on Image Analysis and Processing)

1. Overview

Combines several types of Vision Transformers with a convolutional EfficientNet B0 to extract face features.

Uses neither distillation nor ensembling; instead, a simple voting scheme handles the multiple distinct faces that can appear in the same video shot.

Main novelty: each face is judged over both space and time within the video.

2. Overall Method

  • Network input: the extracted faces.

  • Network output: the probability that the face has been manipulated.

Faces are pre-extracted with the MTCNN face detector;

then two networks are trained on them: the Efficient ViT and the Convolutional Cross ViT.
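The two-stage flow above (detect faces, then score each crop) can be sketched as follows; `detect_faces` stands in for MTCNN and `score_face` for one of the two networks, and both are hypothetical callables used only to show the data flow:

```python
def video_pipeline(frames, detect_faces, score_face):
    """Two-stage pipeline sketch: pre-extract the faces in each frame,
    then score every crop with the detection network."""
    scores = []
    for frame in frames:
        for face in detect_faces(frame):
            scores.append(score_face(face))
    return scores

# Toy run with mock components, just to show the shapes of the flow.
mock_detect = lambda frame: [f"{frame}_face0"]   # one face per frame
mock_score = lambda face: 0.5                    # constant fake score
print(video_pipeline(["f0", "f1"], mock_detect, mock_score))  # [0.5, 0.5]
```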

3. Efficient ViT

  • Two modules: a convolutional module (EfficientNet B0 feature extraction) + a Transformer Encoder.

  • Steps:

    1. EfficientNet B0 generates one visual feature for each patch of the face (a patch is 7×7 pixels);

    2. each feature is further processed by the Vision Transformer (Linear Proj);

    3. a CLS token is used to produce the binary classification score;

    4. the Transformer encoder encodes the features into vectors the model can learn from more easily;

    5. an MLP head classifies the image as real/fake.


  • Drawback: it can only use small patches, while forgery artifacts may appear globally.
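The steps above can be traced with a shape-level sketch. The 7×7×1280 feature-map size matches EfficientNet B0's final feature map for a 224×224 input; the backbone itself is mocked with random values, and the 768-d token width is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

face = rng.standard_normal((224, 224, 3))      # one extracted face crop

def mock_backbone(img):
    # Stand-in for EfficientNet B0's convolutional feature extractor:
    # a 7x7 grid of 1280-d features, one per patch of the face.
    return rng.standard_normal((7, 7, 1280))

feats = mock_backbone(face)                    # (7, 7, 1280)
tokens = feats.reshape(-1, 1280)               # 49 patch tokens

# Linear projection ("Linear Proj") into the transformer width.
W_proj = rng.standard_normal((1280, 768)) * 0.02
tokens = tokens @ W_proj                       # (49, 768)

# Prepend the CLS token whose output is used for classification.
cls = np.zeros((1, 768))
seq = np.concatenate([cls, tokens], axis=0)    # (50, 768)

# ... the Transformer encoder layers would process `seq` here ...

# MLP head on the CLS position -> fake probability via a sigmoid.
w_head = rng.standard_normal(768) * 0.02
prob_fake = 1.0 / (1.0 + np.exp(-(seq[0] @ w_head)))
print(seq.shape, prob_fake)
```

The key point the sketch shows is that every token comes from a small 7×7-pixel patch, which is exactly the locality limitation noted above.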

4. Convolutional Cross ViT

  • Two branches: the Efficient ViT and a multi-scale Transformer architecture,

    i.e. an S branch processing smaller patches and an L branch processing larger patches to obtain a wider receptive field.

  • Two different CNN backbones serve as feature extractors

    (only one is used at a time):

    1. EfficientNet B0, which handles 7×7 image patches for the S branch and 54×54 patches for the L branch.

    2. The CNN of Wodajo et al., which handles 7×7 image patches for the S branch and 64×64 patches for the L branch.

  • Linear Proj: the Vision Transformer processes the features.

  • Transformer Encoder: encodes each branch's token sequence.

  • Cross-Attention: the two branches interact, producing separate S-CLS and L-CLS tokens.

  • MLP Head: classifies the image as real/fake.
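A minimal sketch of the cross-attention exchange between the two branches, assuming single-head attention without learned projections (the token width of 64 and the token counts are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64  # token width (an assumption for illustration)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(cls_token, other_tokens):
    """One cross-attention step: a branch's CLS token queries the patch
    tokens of the *other* branch (no Q/K/V projections, single head)."""
    scores = (cls_token @ other_tokens.T) / np.sqrt(d)   # (1, N)
    return softmax(scores) @ other_tokens                # (1, d)

s_tokens = rng.standard_normal((49, d))   # S branch: many small patches
l_tokens = rng.standard_normal((16, d))   # L branch: fewer large patches
s_cls = rng.standard_normal((1, d))
l_cls = rng.standard_normal((1, d))

s_cls_out = cross_attend(s_cls, l_tokens)  # S-CLS enriched with L context
l_cls_out = cross_attend(l_cls, s_tokens)  # L-CLS enriched with S context
print(s_cls_out.shape, l_cls_out.shape)
```

This is how the S branch gains access to the wide receptive field of the L branch and vice versa, before the MLP head reads the CLS outputs.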


5. Training and Inference

  • Optimizer: trained end to end with the SGD optimizer and a learning rate of 0.01.

  • Real/fake threshold: 0.55.

  • Voting mechanism: for videos that contain multiple distinct faces.

    Face crops are grouped by identity and their scores averaged to decide whether each face is fake.

    If even one face in a video is fake, the whole video is judged fake.
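The voting rule above is simple enough to state directly in code; `face_scores` mapping identity ids to per-crop scores is an assumed input format, not the paper's API:

```python
def classify_video(face_scores, threshold=0.55):
    """Voting scheme from the notes: the scores of crops belonging to the
    same face identity are averaged, and the video is fake if any
    identity's mean score exceeds the threshold.
    `face_scores`: dict mapping identity id -> list of per-crop scores."""
    per_face = {fid: sum(s) / len(s) for fid, s in face_scores.items()}
    return any(mean > threshold for mean in per_face.values()), per_face

is_fake, per_face = classify_video({
    "face_0": [0.2, 0.3, 0.1],   # consistently real
    "face_1": [0.7, 0.9, 0.8],   # consistently fake
})
print(is_fake)  # True: one fake face makes the whole video fake
```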


6. Conclusion

  • Metrics: AUC (area under the ROC curve) + F1-score (harmonic mean of precision and recall for the fake class).

  • Datasets: FaceForensics++, DFDC

 

 

