[Semantic Segmentation] [Series] Part 2: U-Net Semantic Segmentation in Practice

1. Introduction

        The original paper is "U-Net: Convolutional Networks for Biomedical Image Segmentation" (https://arxiv.org/abs/1505.04597). The network is a U-shaped encoder-decoder, split into a downsampling path and an upsampling path. The downsampling path follows a typical convolutional architecture: each max-pooling stage halves the spatial resolution while the number of channels doubles (see the code, or the paper itself, for the per-layer convolutions). The upsampling path applies transposed convolutions (Conv-Transpose) to the downsampled features, progressively restoring the spatial resolution. The architecture is shown in Figure 1.1.
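The halve-resolution/double-channels pattern of the encoder can be sketched with a small helper (a hypothetical `encoder_shapes`, not part of the article's code) that tracks feature-map shapes through the pooling stages:

```python
def encoder_shapes(h, w, c_in=64, depth=4):
    """Track (channels, height, width) through each 2x2 max-pool stage
    of a U-Net-style encoder: pooling halves H and W, channels double."""
    shapes = [(c_in, h, w)]
    c = c_in
    for _ in range(depth):
        h, w, c = h // 2, w // 2, c * 2
        shapes.append((c, h, w))
    return shapes

print(encoder_shapes(256, 256))
```

For a 256x256 input with 64 starting channels, this yields the familiar ladder down to a 16x16 bottleneck with 1024 channels; the decoder then walks the same ladder in reverse.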

2. Code Implementation

        The advantage of this implementation is that the Encoder and Decoder are broken down into fine-grained components, each implemented from source. Experiments use the open Massachusetts Roads Dataset.

The printed network structure is shown below:

Unet(
  (encoder): ResNetEncoder(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer2): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer3): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer4): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
  )
  (decoder): UnetDecoder(
    (center): Identity()
    (blocks): ModuleList(
      (0): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
      (1): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
      (2): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
      (3): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
      (4): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
    )
  )
  (segmentation_head): SegmentationHead(
    (0): Conv2d(16, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): Identity()
    (2): Activation(
      (activation): Sigmoid()
    )
  )
)
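The conv1 input widths in the decoder printout (768, 384, 192, 128, 32) follow directly from the skip connections: each decoder block concatenates the upsampled features with the matching encoder stage, and the final block has no skip. A small sketch (a hypothetical helper, using the ResNet-18 stage widths shown above) reproduces them:

```python
# Feature widths of the encoder stages printed above
# (after conv1, layer1, layer2, layer3, layer4).
encoder_channels = [64, 64, 128, 256, 512]
# Output widths of the five decoder blocks in the printout.
decoder_channels = [256, 128, 64, 32, 16]

def decoder_in_channels(encoder_channels, decoder_channels):
    """Input width of each decoder block's first conv: upsampled features
    concatenated with the matching encoder skip (deepest skip first;
    the final block upsamples past the skips, so it concatenates nothing)."""
    skips = encoder_channels[:-1][::-1] + [0]
    head = encoder_channels[-1]  # deepest encoder output feeds the first block
    widths = []
    for skip, out in zip(skips, decoder_channels):
        widths.append(head + skip)
        head = out  # this block's output feeds the next block
    return widths

print(decoder_in_channels(encoder_channels, decoder_channels))
```

For example, the first decoder block sees the 512-channel bottleneck upsampled and concatenated with the 256-channel layer3 skip, hence the `Conv2d(768, 256, ...)` in the printout.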

3. Results

On the Massachusetts Roads Dataset, at epoch 40 the validation IoU reaches 0.59502421, and the curve has visibly not yet converged. Interested readers can train for longer and see.
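The logged precision, recall, F1, and IoU are standard binary-mask statistics. A minimal sketch (a hypothetical `seg_metrics` in pure Python, assuming flattened 0/1 masks with at least one positive in both prediction and target) shows how they relate:

```python
def seg_metrics(pred, target):
    """Precision, recall, F1, and IoU for flattened binary masks (0/1).
    Assumes both masks contain at least one positive pixel."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # equivalently f1 / (2 - f1)
    return precision, recall, f1, iou
```

Note that IoU is always at most F1 (they coincide only at 0 and 1), which is consistent with the log below, where each epoch's IoU sits under its F1.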

Epoch: 0001 train_loss: 0.7134 val_loss: 0.6442 train_precision: 0.23880491 train_recall: 0.88401257 train_f1: 0.36606809 train_iou: 0.23008029 val_precision: 0.28081220 val_recall: 0.90503695 val_f1: 0.42485987 val_iou: 0.27211128
Epoch: 0002 train_loss: 0.6440 val_loss: 0.5990 train_precision: 0.30973935 train_recall: 0.89006882 train_f1: 0.44130200 train_iou: 0.29497366 val_precision: 0.30003451 val_recall: 0.88704003 val_f1: 0.44382554 val_iou: 0.28774504
Epoch: 0003 train_loss: 0.5809 val_loss: 0.5667 train_precision: 0.37510515 train_recall: 0.88288407 train_f1: 0.49693106 train_iou: 0.34970738 val_precision: 0.30535113 val_recall: 0.87525155 val_f1: 0.44803793 val_iou: 0.29129299
Epoch: 0004 train_loss: 0.5305 val_loss: 0.5242 train_precision: 0.43006600 train_recall: 0.87128798 train_f1: 0.54068292 train_iou: 0.39348913 val_precision: 0.35612130 val_recall: 0.86600418 val_f1: 0.49277677 val_iou: 0.33373147
Epoch: 0005 train_loss: 0.4876 val_loss: 0.4798 train_precision: 0.47869049 train_recall: 0.86026718 train_f1: 0.57861810 train_iou: 0.43161710 val_precision: 0.42249471 val_recall: 0.85318884 val_f1: 0.54132836 val_iou: 0.38466874
Epoch: 0006 train_loss: 0.4528 val_loss: 0.4460 train_precision: 0.51792818 train_recall: 0.84974886 train_f1: 0.60690926 train_iou: 0.46050927 val_precision: 0.47053913 val_recall: 0.84276648 val_f1: 0.57504279 val_iou: 0.42025690
Epoch: 0007 train_loss: 0.4251 val_loss: 0.4193 train_precision: 0.54838712 train_recall: 0.84212795 train_f1: 0.62845700 train_iou: 0.48276858 val_precision: 0.50611501 val_recall: 0.83739026 val_f1: 0.60078048 val_iou: 0.44763370
Epoch: 0008 train_loss: 0.4024 val_loss: 0.3977 train_precision: 0.57348713 train_recall: 0.83669894 train_f1: 0.64602372 train_iou: 0.50127033 val_precision: 0.53540251 val_recall: 0.83293011 val_f1: 0.62132378 val_iou: 0.46976183
Epoch: 0009 train_loss: 0.3839 val_loss: 0.3817 train_precision: 0.59381976 train_recall: 0.83287914 train_f1: 0.66033810 train_iou: 0.51651671 val_precision: 0.55557149 val_recall: 0.83003393 val_f1: 0.63610154 val_iou: 0.48555701
Epoch: 0010 train_loss: 0.3686 val_loss: 0.3685 train_precision: 0.61047220 train_recall: 0.82980344 train_f1: 0.67197488 train_iou: 0.52895717 val_precision: 0.57365584 val_recall: 0.82629309 val_f1: 0.64834580 val_iou: 0.49872833
Epoch: 0011 train_loss: 0.3557 val_loss: 0.3572 train_precision: 0.62461117 train_recall: 0.82747916 train_f1: 0.68187777 train_iou: 0.53962922 val_precision: 0.58768186 val_recall: 0.82497273 val_f1: 0.65872966 val_iou: 0.50993902
Epoch: 0012 train_loss: 0.3444 val_loss: 0.3476 train_precision: 0.63678064 train_recall: 0.82615697 train_f1: 0.69063551 train_iou: 0.54918636 val_precision: 0.60145070 val_recall: 0.82175064 val_f1: 0.66753766 val_iou: 0.51951728
Epoch: 0013 train_loss: 0.3346 val_loss: 0.3394 train_precision: 0.64741247 train_recall: 0.82478282 train_f1: 0.69810628 train_iou: 0.55735411 val_precision: 0.61160709 val_recall: 0.82066813 val_f1: 0.67496778 val_iou: 0.52757325
Epoch: 0014 train_loss: 0.3260 val_loss: 0.3323 train_precision: 0.65677645 train_recall: 0.82388024 train_f1: 0.70478761 train_iou: 0.56471267 val_precision: 0.62035063 val_recall: 0.81972906 val_f1: 0.68133640 val_iou: 0.53445814
Epoch: 0015 train_loss: 0.3184 val_loss: 0.3265 train_precision: 0.66494723 train_recall: 0.82322069 train_f1: 0.71066977 train_iou: 0.57121214 val_precision: 0.62773513 val_recall: 0.81838897 val_f1: 0.68650602 val_iou: 0.54000238
Epoch: 0016 train_loss: 0.3117 val_loss: 0.3210 train_precision: 0.67209574 train_recall: 0.82267292 train_f1: 0.71582589 train_iou: 0.57690838 val_precision: 0.63411171 val_recall: 0.81839270 val_f1: 0.69153129 val_iou: 0.54551654
Epoch: 0017 train_loss: 0.3055 val_loss: 0.3158 train_precision: 0.67884721 train_recall: 0.82225910 train_f1: 0.72062163 train_iou: 0.58227112 val_precision: 0.64107010 val_recall: 0.81728966 val_f1: 0.69619697 val_iou: 0.55062496
Epoch: 0018 train_loss: 0.2999 val_loss: 0.3115 train_precision: 0.68493047 train_recall: 0.82202995 train_f1: 0.72502175 train_iou: 0.58722331 val_precision: 0.64679424 val_recall: 0.81606361 val_f1: 0.69996233 val_iou: 0.55470701
Epoch: 0019 train_loss: 0.2949 val_loss: 0.3075 train_precision: 0.69023971 train_recall: 0.82171978 train_f1: 0.72883702 train_iou: 0.59148439 val_precision: 0.65194888 val_recall: 0.81545952 val_f1: 0.70359413 val_iou: 0.55873150
Epoch: 0020 train_loss: 0.2901 val_loss: 0.3045 train_precision: 0.69545559 train_recall: 0.82167012 train_f1: 0.73261525 train_iou: 0.59578674 val_precision: 0.65580008 val_recall: 0.81449923 val_f1: 0.70623094 val_iou: 0.56149356
Epoch: 0021 train_loss: 0.2857 val_loss: 0.3014 train_precision: 0.70017869 train_recall: 0.82173365 train_f1: 0.73609438 train_iou: 0.59975674 val_precision: 0.65987980 val_recall: 0.81363695 val_f1: 0.70894482 val_iou: 0.56446275
Epoch: 0022 train_loss: 0.2815 val_loss: 0.2984 train_precision: 0.70471075 train_recall: 0.82184676 train_f1: 0.73942635 train_iou: 0.60359673 val_precision: 0.66346511 val_recall: 0.81332689 val_f1: 0.71157069 val_iou: 0.56733540
Epoch: 0023 train_loss: 0.2776 val_loss: 0.2955 train_precision: 0.70894907 train_recall: 0.82213283 train_f1: 0.74259947 train_iou: 0.60728505 val_precision: 0.66759657 val_recall: 0.81249543 val_f1: 0.71416430 val_iou: 0.57020498
Epoch: 0024 train_loss: 0.2741 val_loss: 0.2934 train_precision: 0.71275744 train_recall: 0.82225695 train_f1: 0.74540097 train_iou: 0.61051814 val_precision: 0.67051596 val_recall: 0.81162935 val_f1: 0.71602630 val_iou: 0.57220630
Epoch: 0025 train_loss: 0.2709 val_loss: 0.2914 train_precision: 0.71619587 train_recall: 0.82233776 train_f1: 0.74792177 train_iou: 0.61341680 val_precision: 0.67386077 val_recall: 0.81012324 val_f1: 0.71775998 val_iou: 0.57408361
Epoch: 0026 train_loss: 0.2676 val_loss: 0.2888 train_precision: 0.71967905 train_recall: 0.82263564 train_f1: 0.75053286 train_iou: 0.61648129 val_precision: 0.67743068 val_recall: 0.80957011 val_f1: 0.72002418 val_iou: 0.57662325
Epoch: 0027 train_loss: 0.2645 val_loss: 0.2867 train_precision: 0.72303048 train_recall: 0.82295222 train_f1: 0.75303998 train_iou: 0.61943899 val_precision: 0.68009341 val_recall: 0.80942968 val_f1: 0.72195664 val_iou: 0.57876611
Epoch: 0028 train_loss: 0.2617 val_loss: 0.2847 train_precision: 0.72609556 train_recall: 0.82329160 train_f1: 0.75536631 train_iou: 0.62218264 val_precision: 0.68258332 val_recall: 0.80915211 val_f1: 0.72369006 val_iou: 0.58069745
Epoch: 0029 train_loss: 0.2589 val_loss: 0.2828 train_precision: 0.72913983 train_recall: 0.82364982 train_f1: 0.75765468 train_iou: 0.62491217 val_precision: 0.68458354 val_recall: 0.80934475 val_f1: 0.72533284 val_iou: 0.58255166
Epoch: 0030 train_loss: 0.2562 val_loss: 0.2813 train_precision: 0.73199074 train_recall: 0.82406112 train_f1: 0.75982800 train_iou: 0.62751079 val_precision: 0.68661076 val_recall: 0.80885910 val_f1: 0.72664088 val_iou: 0.58399610
Epoch: 0031 train_loss: 0.2538 val_loss: 0.2798 train_precision: 0.73465173 train_recall: 0.82444588 train_f1: 0.76186141 train_iou: 0.62994060 val_precision: 0.68872633 val_recall: 0.80846444 val_f1: 0.72801374 val_iou: 0.58551535
Epoch: 0032 train_loss: 0.2514 val_loss: 0.2783 train_precision: 0.73717231 train_recall: 0.82480272 train_f1: 0.76378247 train_iou: 0.63223636 val_precision: 0.69029574 val_recall: 0.80856743 val_f1: 0.72929450 val_iou: 0.58693215
Epoch: 0033 train_loss: 0.2491 val_loss: 0.2768 train_precision: 0.73964243 train_recall: 0.82523251 train_f1: 0.76568482 train_iou: 0.63453289 val_precision: 0.69243032 val_recall: 0.80817813 val_f1: 0.73063752 val_iou: 0.58843603
Epoch: 0034 train_loss: 0.2469 val_loss: 0.2755 train_precision: 0.74202036 train_recall: 0.82563194 train_f1: 0.76749942 train_iou: 0.63672887 val_precision: 0.69405017 val_recall: 0.80804003 val_f1: 0.73179546 val_iou: 0.58971431
Epoch: 0035 train_loss: 0.2448 val_loss: 0.2742 train_precision: 0.74438993 train_recall: 0.82609582 train_f1: 0.76932035 train_iou: 0.63895482 val_precision: 0.69541326 val_recall: 0.80814263 val_f1: 0.73290393 val_iou: 0.59092317
Epoch: 0036 train_loss: 0.2428 val_loss: 0.2732 train_precision: 0.74652884 train_recall: 0.82648190 train_f1: 0.77096064 train_iou: 0.64094307 val_precision: 0.69661766 val_recall: 0.80788825 val_f1: 0.73373421 val_iou: 0.59183081
Epoch: 0037 train_loss: 0.2409 val_loss: 0.2723 train_precision: 0.74857545 train_recall: 0.82684625 train_f1: 0.77252740 train_iou: 0.64285049 val_precision: 0.69798621 val_recall: 0.80750145 val_f1: 0.73457814 val_iou: 0.59275958
Epoch: 0038 train_loss: 0.2392 val_loss: 0.2716 train_precision: 0.75039063 train_recall: 0.82717224 train_f1: 0.77393797 train_iou: 0.64454733 val_precision: 0.69882874 val_recall: 0.80713604 val_f1: 0.73512894 val_iou: 0.59331609
Epoch: 0039 train_loss: 0.2374 val_loss: 0.2706 train_precision: 0.75234708 train_recall: 0.82761406 train_f1: 0.77546911 train_iou: 0.64643138 val_precision: 0.70017163 val_recall: 0.80680093 val_f1: 0.73595370 val_iou: 0.59421872
Epoch: 0040 train_loss: 0.2355 val_loss: 0.2697 train_precision: 0.75433732 train_recall: 0.82810773 train_f1: 0.77702815 train_iou: 0.64837430 val_precision: 0.70105602 val_recall: 0.80684953 val_f1: 0.73669905 val_iou: 0.59502421
