Notes: UNet and SegNet models in PyTorch

This post shows how to build the semantic segmentation models UNet and SegNet in PyTorch, with a link to the full code.

Semantic segmentation models UNet and SegNet, built in PyTorch. Full code:

https://github.com/piglaker/SHcrack/tree/master/Desktop/pycharm/crack/net

UNet


import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np


class UNet(nn.Module):
    def __init__(self, in_channels, output_channels):
        super(UNet, self).__init__()

        # Encoder: four down blocks, each followed by 2x2 max pooling
        self.down1 = self.down(in_channels, 64, kernel_size = 3)
        self.mxp1 = nn.MaxPool2d(kernel_size = 2)

        self.down2 = self.down(64, 128, kernel_size = 3)
        self.mxp2 = nn.MaxPool2d(kernel_size = 2)

        self.down3 = self.down(128, 256, kernel_size = 3)
        self.mxp3 = nn.MaxPool2d(kernel_size = 2)

        self.down4 = self.down(256, 512, kernel_size = 3)
        self.mxp4 = nn.MaxPool2d(kernel_size = 2)

        # Bottleneck: two 3x3 convolutions, then a transposed convolution back down to 512 channels
        self.bottom = nn.Sequential(
            nn.Conv2d(in_channels = 512, out_channels = 1024, kernel_size = 3),
            nn.ReLU(),
            nn.BatchNorm2d(1024),
            nn.Conv2d(in_channels = 1024, out_channels = 1024, kernel_size = 3),
            nn.ReLU(),
            nn.BatchNorm2d(1024),
            nn.ConvTranspose2d(in_channels = 1024, out_channels = 512, kernel_size = 3, stride = 2, padding = 1, output_padding = 1)
        )

        # Decoder: the input channel counts include the concatenated skip connections (e.g. 1024 = 512 + 512)
        self.up1 = self.up(1024, 512, 256)
        self.up2 = self.up(512, 256, 128)
        self.up3 = self.up(256, 128, 64)

        self.final_layer = self.final(128, 64, out_channels = output_channels)




    def down(self, in_channels, out_channels, kernel_size = 3):
        # One encoder stage: two unpadded 3x3 convolutions, each followed by ReLU and BatchNorm
        stage = nn.Sequential(
            nn.Conv2d(in_channels = in_channels, out_channels = out_channels, kernel_size = kernel_size),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels),
            nn.Conv2d(in_channels = out_channels, out_channels = out_channels, kernel_size = kernel_size),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels),
        )
        return stage


    def up(self, in_channels, mid_channels, out_channels, kernel_size = 3):
        # One decoder stage: two 3x3 convolutions, then a transposed convolution that doubles the spatial size
        stage = nn.Sequential(
            nn.Conv2d(in_channels = in_channels, out_channels = mid_channels, kernel_size = kernel_size),
            nn.ReLU(),
            nn.BatchNorm2d(mid_channels),
            nn.Conv2d(in_channels = mid_channels, out_channels = mid_channels, kernel_size = kernel_size),
            nn.ReLU(),
            nn.BatchNorm2d(mid_channels),
            nn.ConvTranspose2d(in_channels = mid_channels, out_channels = out_channels, kernel_size = kernel_size, stride = 2, padding = 1, output_padding = 1),
        )
        return stage

    # The original listing breaks off after up(); the remaining methods are reconstructed to mirror
    # the blocks above and the standard U-Net wiring (see the linked repo for the author's version).
    def final(self, in_channels, mid_channels, out_channels, kernel_size = 3):
        # Output stage: two 3x3 convolutions, then a 1x1 convolution down to the number of classes
        stage = nn.Sequential(
            nn.Conv2d(in_channels = in_channels, out_channels = mid_channels, kernel_size = kernel_size),
            nn.ReLU(),
            nn.BatchNorm2d(mid_channels),
            nn.Conv2d(in_channels = mid_channels, out_channels = mid_channels, kernel_size = kernel_size),
            nn.ReLU(),
            nn.BatchNorm2d(mid_channels),
            nn.Conv2d(in_channels = mid_channels, out_channels = out_channels, kernel_size = 1),
        )
        return stage

    @staticmethod
    def crop_and_concat(upsampled, skip):
        # The unpadded convolutions shrink the feature maps, so the encoder skip connection
        # has to be center-cropped before it is concatenated with the upsampled features.
        dh = skip.size(2) - upsampled.size(2)
        dw = skip.size(3) - upsampled.size(3)
        skip = skip[:, :, dh // 2 : dh // 2 + upsampled.size(2), dw // 2 : dw // 2 + upsampled.size(3)]
        return torch.cat([skip, upsampled], dim = 1)

    def forward(self, x):
        # Encoder path
        d1 = self.down1(x)
        d2 = self.down2(self.mxp1(d1))
        d3 = self.down3(self.mxp2(d2))
        d4 = self.down4(self.mxp3(d3))
        # Bottleneck
        b = self.bottom(self.mxp4(d4))
        # Decoder path with skip connections
        u1 = self.up1(self.crop_and_concat(b, d4))
        u2 = self.up2(self.crop_and_concat(u1, d3))
        u3 = self.up3(self.crop_and_concat(u2, d2))
        return self.final_layer(self.crop_and_concat(u3, d1))
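
To sanity-check the model, here is a minimal usage sketch, reusing the imports at the top of the listing. It is not from the original post: the 3-channel input, 2 classes, 572x572 crop size, CrossEntropyLoss, and Adam are illustrative assumptions. Because the convolutions are unpadded, the predicted mask (388x388 here) is smaller than the input, so the target mask has to match the output size.

# Illustrative only: channel counts, image size, loss, and optimizer are assumptions, not the author's settings.
model = UNet(in_channels = 3, output_channels = 2)

images = torch.randn(1, 3, 572, 572)          # dummy RGB batch
logits = model(images)                        # shape (1, 2, 388, 388): unpadded convs shrink the output
print(logits.shape)

# One training step against a dummy integer mask that matches the output size
targets = torch.randint(0, 2, (1, 388, 388))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr = 1e-4)

optimizer.zero_grad()
loss = criterion(logits, targets)
loss.backward()
optimizer.step()
print(loss.item())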