https://www.paperswithcode.com/sota/image-to-image-translation-on-cityscapes
ADE20K ----> UperNet101
https://github.com/CSAILVision/semantic-segmentation-pytorch
https://github.com/CSAILVision/unifiedparsing
Cityscapes ----> DRN-D-105
https://liumin.blog.csdn.net/article/details/88879985
https://github.com/fyu/drn
COCO-Stuff & PASCAL VOC 2012 ----> DeepLab V2
https://github.com/kazuto1011/deeplab-pytorch
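The segmentation networks above are typically used to score translated images via mean IoU against ground-truth labels. A minimal sketch of the confusion-matrix-based mIoU computation is below; the function name, signature, and the `ignore_index=255` convention are illustrative assumptions, not taken from any of the linked repos.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Confusion-matrix mean IoU over flat label arrays (a sketch)."""
    pred = np.asarray(pred).ravel()
    gt = np.asarray(gt).ravel()
    # Drop pixels marked with the (assumed) ignore label.
    keep = gt != ignore_index
    pred, gt = pred[keep], gt[keep]
    # num_classes x num_classes confusion matrix: rows = gt, cols = pred.
    cm = np.bincount(num_classes * gt + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    valid = union > 0  # average only over classes present in gt or pred
    return (inter[valid] / union[valid]).mean()
```

In practice the confusion matrix would be accumulated over the whole validation set before taking the per-class ratio, rather than averaging per-image scores.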
Based on the PaddleClas ImageNet pretrained weights, we achieve 83.22% on Cityscapes val, 59.62% on PASCAL-Context val (new SOTA), 45.20% on COCO-Stuff val (new SOTA), 58.21% on LIP val, and 47.98% on ADE20K val. Please check out openseg.pytorch for more details.