Paper:
https://paperswithcode.com/paper/depth-wise-convolutions-in-vision
Code:
https://github.com/ZTX-100/Efficient_ViT_with_DW
Preface
The Vision Transformer (ViT) leverages the Transformer’s encoder to capture global information by dividing images into patches and achieves superior performance across various computer vision tasks. However, the self-attention mechanism of ViT captures the global context from the outset, overlooking the inherent relationships between neighboring pixels in images or videos. Transformers mainly focus on global information while ignoring the fine-grained local details. Consequently, ViT lacks inductive bias during image or video dataset training. In contrast, convolutional neural networks (CNNs), with their reliance on local filters, possess an inherent inductive bias, making them more efficient and quicker to converge than ViT with less data. In this paper, we present a lightweight Depth-Wise Convolution module as a shortcut in ViT models, bypassing entire Transformer blocks to ensure the models capture both local and global information with minimal overhead. Additionally, we introduce two architecture variants, allowing the Depth-Wise Convolution modules to be applied to multiple Transformer blocks for parameter savings, and incorporating independent parallel Depth-Wise Convolution modules with different kernels to enhance the acquisition of local information. The proposed approach significantly boosts the performance of ViT models on image classification, object detection and instance segmentation by a large margin, especially on small datasets, as evaluated on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet for image classification, and COCO for object detection and instance segmentation. The source code can be accessed at https://github.com/ZTX-100/Efficient_ViT_with_DW.
1. Requirements
- nvcr>=21.05
- python=3.7
- CUDA>=10.2 with cudnn>=7
- PyTorch>=1.8.0 and torchvision>=0.9.0 with CUDA>=10.2
- timm==0.4.12
- pip install opencv-python==4.4.0.46 termcolor==1.1.0 yacs==0.1.8 pyyaml scipy
- Install the fused window process kernel for acceleration, activated by passing --fused_window_process in the run script:
  cd kernels/window_process
  python setup.py install #--user
2. Problem 1:
Description: RuntimeError: Dataset not found or corrupted. You can use download=True to download it
Fix: locate the line that raises the error and change download=False to download=True.
3. Problem 2:
Description: AttributeError: module 'torchvision.transforms.functional' has no attribute…
Fix: the attribute or function you are calling no longer exists in this version of the library. Ctrl+click the import to jump into the library source and check whether the attribute/function is still there, or whether it was renamed.
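One generic way to guard against such renames (a hypothetical helper, not part of this repo) is to try several candidate names, old and new, and take the first one that exists:

```python
def resolve_attr(module, *candidate_names):
    """Return the first attribute that exists on `module`.

    Useful when a library renames a function between versions:
    pass both the old and the new name as candidates.
    """
    for name in candidate_names:
        if hasattr(module, name):
            return getattr(module, name)
    raise AttributeError(
        f"{module.__name__} has none of: {', '.join(candidate_names)}"
    )
```

For example, if a torchvision helper changed its name between versions, you could call `resolve_attr(F, "new_name", "old_name")` instead of hard-coding one of them (the names here are placeholders, not actual torchvision attributes).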
4. Problem 3:
Description: RuntimeError: Distributed package doesn't have NCCL built in
Fix: Windows builds of PyTorch do not include NCCL; use gloo instead. Change nccl to gloo in the arguments of the function that raises the error.
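A small sketch of choosing the backend by platform (the `pick_backend` helper is hypothetical; `torch.distributed.init_process_group` is the standard PyTorch API where the change goes):

```python
import sys

def pick_backend():
    # NCCL ships only in Linux CUDA builds of PyTorch; Windows builds
    # must fall back to the gloo backend for distributed training.
    return "gloo" if sys.platform.startswith("win") else "nccl"

# At the error site (sketch, not executed here):
# torch.distributed.init_process_group(backend=pick_backend(), ...)
```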
Run
python -m torch.distributed.launch --nproc_per_node=[num of GPUs] --master_port 12345 main.py --cfg configs/vit/vit_tiny_16_224_cifar10.yaml --data-path [data path to CIFAR10] --batch-size [batch size]
Replace the bracketed parameters with your own values. For example:
python -m torch.distributed.launch --nproc_per_node=1 --master_port 12345 main.py --cfg configs/vit/vit_tiny_16_224_cifar10.yaml --data-path './data/cifar-10-batches-py' --batch-size 10
Results
I will not run through all 300 training epochs here; the final results depend on many factors. If the accuracy is unsatisfactory after running, check whether the dataset is intact and consider other possible causes.