Abstract
We present SegNeXt, a simple convolutional network architecture for semantic segmentation.
Recent transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information.
In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers.
By re-examining the characteristics of successful segmentation models, we identify several key components responsible for their performance improvements.
This motivates us to design a novel convolutional attention network that uses cheap convolutional operations.
Without bells and whistles, our SegNeXt significantly improves the performance of previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID.