ResNet and Residual Networks

The drawbacks of adding more layers

Once a network reaches a certain depth, adding more layers can cause vanishing or exploding gradients.

These problems can be mitigated with better weight initialization, adding BatchNorm layers, and similar techniques.

Modern architectures try to address these problems by introducing further techniques, such as residual connections.
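As an illustration of the remedies mentioned above (a minimal sketch, not from the original note): Kaiming initialization and a BatchNorm layer after the convolution both help keep gradient magnitudes stable in deep ReLU networks:

import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(64),      # normalizes activations, stabilizing gradients across depth
    nn.ReLU(inplace=True))
# He/Kaiming initialization is designed for ReLU activations
nn.init.kaiming_normal_(block[0].weight, nonlinearity='relu')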

ResNet

ResNet is a residual network. It can be viewed as a module that, when stacked repeatedly, builds a very deep network.

ResNet adds shortcut connections so that the layers of the network explicitly fit a residual mapping.

[Figure: a residual block, where the shortcut connection adds the input x to the block output F(x)]

Instead of trying to learn the underlying mapping from x to H(x), ResNet learns the difference between the two, i.e. the residual.

Then, to obtain H(x), the residual is simply added back to the input. If the residual is F(x) = H(x) - x, the network learns F(x) and outputs F(x) + x rather than learning H(x) directly.

Because the last step of a ResNet block is an element-wise addition, the input and output must have the same size; if they differ, padding is required.

ResNet uses no pooling layers inside its residual blocks; downsampling is done through the convolution stride (the distance the kernel moves at each step).

Characteristics of the ResNet architecture

The final features are obtained with average pooling rather than a fully connected layer.

Every convolutional layer is immediately followed by a BatchNorm layer.
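As a small sketch of the first point (the shapes here are assumptions): global average pooling collapses each channel of the last feature map into a single value, producing the final feature vector without a fully connected layer:

import torch
import torch.nn as nn

x = torch.randn(8, 512, 7, 7)        # final-stage feature maps for a batch of 8
feat = nn.AdaptiveAvgPool2d(1)(x)    # -> (8, 512, 1, 1): average over each 7x7 map
feat = feat.flatten(1)               # -> (8, 512): the final feature vector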

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResnetbasicBlock(nn.Module):
    # note: the element-wise addition in forward() assumes in_channels == out_channels
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels,
                               out_channels,
                               kernel_size=3,
                               padding=1,    # pad all four sides so the spatial size is preserved,
                                             # which the final addition requires
                               bias=False)   # bias is omitted because the following BatchNorm cancels it out
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels,
                               out_channels,
                               kernel_size=3,
                               padding=1,
                               bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = F.relu(self.bn1(out), inplace=True)
        out = self.conv2(out)
        out = self.bn2(out)

        out += residual  # the crucial element-wise addition of the shortcut
        return F.relu(out)
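The block above keeps the spatial size and channel count fixed. When they change, the standard ResNet design (an assumption here; this variant is not in the note's code) downsamples with the convolution stride and passes the shortcut through a 1×1 convolution so the element-wise addition still lines up:

class DownsampleBlock(nn.Module):
    """Sketch of a residual block that halves the spatial size and changes the channel count."""
    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)  # the stride does the downsampling
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # 1x1 convolution on the shortcut so its shape matches the main branch
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1,
                      stride=stride, bias=False),
            nn.BatchNorm2d(out_channels))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)), inplace=True)
        out = self.bn2(self.conv2(out))
        out = out + self.shortcut(x)  # shapes now match, so the addition works
        return F.relu(out)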

DenseNet

The DenseNet model introduces connections from every layer to all subsequent layers: each layer receives the feature maps of all preceding layers as its input.

This requires the feature maps to keep the same size, so DenseNet uses a DenseBlock + Transition structure (in effect, several DenseBlocks connected by pooling layers that shrink the feature maps in between).

DenseBlock

A module made up of many layers; every layer produces feature maps of the same size, and the layers are densely connected to one another.

Transition

A Transition module connects two adjacent DenseBlocks and uses pooling to reduce the feature-map size.
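A minimal sketch of the two modules (the growth rate and channel counts are assumptions, not from the note): a layer inside a DenseBlock concatenates its input with its own output, and a Transition compresses channels with a 1×1 convolution and halves the feature-map size with pooling:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                              padding=1, bias=False)

    def forward(self, x):
        out = self.conv(F.relu(self.bn(x)))
        return torch.cat([x, out], dim=1)  # dense connectivity: all earlier features are passed on

class Transition(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)  # compress channels
        self.pool = nn.AvgPool2d(2)  # halve the feature-map size between DenseBlocks

    def forward(self, x):
        return self.pool(self.conv(F.relu(self.bn(x))))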

Example: extracting features with DenseNet (birds dataset)
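The snippets in this example rely on the following imports, which the original note does not list (a sketch):

import glob
import numpy as np
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
from torch.utils import data
from PIL import Image
import matplotlib.pyplot as plt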

If the data is all organized under one folder, you can use

torchvision.datasets.ImageFolder
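A hedged one-liner for reference (the folder name is an assumption): ImageFolder expects one subfolder per class and derives the labels from the folder names:

ds = torchvision.datasets.ImageFolder('birds')  # labels are inferred from the subfolder names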

but when the data is scattered across different folders, glob is needed to collect the path information:

imgs_path = glob.glob(r'birds/*/*.jpg')

Slice the list of paths and look at the naming pattern to decide how to extract the labels:

imgs_path[:2]

Use a list comprehension with split to get all_labels_names:

all_labels_names = [img_p.split('\\')[1].split('.')[1] for img_p in imgs_path]

Use np.unique to get the distinct classes:

unique_labels = np.unique(all_labels_names)
len(unique_labels)

Use a dict to create an index for each class:

label_to_index = dict((v, k) for k, v in enumerate(unique_labels))

Create all_labels so that every image gets an index (the step above only created indices for the distinct classes):

all_labels = [label_to_index.get(name) for name in all_labels_names]

Use a random permutation to split the data into train and test sets:

np.random.seed(2021)  # fix the random seed so the split is reproducible
random_index = np.random.permutation(len(imgs_path))
imgs_path = np.array(imgs_path)[random_index]
all_labels = np.array(all_labels)[random_index]
i = int(len(imgs_path)*0.8)
train_path = imgs_path[:i]
train_label = all_labels[:i]
test_path = imgs_path[i:]
test_label = all_labels[i:]

Create the Dataset class:

transform = transforms.Compose([
    transforms.Resize((224,224)),
    transforms.ToTensor()
])
class BirdsDataset(data.Dataset):
    def __init__(self,imgs_path,labels):
        self.imgs = imgs_path
        self.labels = labels
    def __getitem__(self,index):
        img = self.imgs[index]
        label = self.labels[index]
        
        pil_img = Image.open(img)
        pil_img = pil_img.convert('RGB')
        img_tensor = transform(pil_img)
        return img_tensor,label
    def __len__(self):
        return len(self.imgs)
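The plotting and feature-extraction steps below use train_dl, test_dl and index_to_label, none of which are created in the note; an assumed sketch consistent with the classes above:

train_ds = BirdsDataset(train_path, train_label)
test_ds = BirdsDataset(test_path, test_label)
train_dl = data.DataLoader(train_ds, batch_size=32, shuffle=True)  # batch size is an assumption
test_dl = data.DataLoader(test_ds, batch_size=32)
index_to_label = dict((k, v) for k, v in enumerate(unique_labels))  # inverse of label_to_index, used for plot titles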

Plot a few samples:

img_batch,label_batch = next(iter(train_dl))
plt.figure(figsize=(30, 30))
for i, (img,label) in enumerate(zip(img_batch[:6],label_batch[:6])):
    img = img.permute(1,2,0).numpy()
    plt.subplot(3,2,i+1)
    plt.title(index_to_label.get(label.item()))
    plt.imshow(img)

Extract features with DenseNet

my_densenet = torchvision.models.densenet121(pretrained=True).features  # use only the .features part to extract features from the data
if torch.cuda.is_available():
    my_densenet = my_densenet.cuda()

Freeze the pretrained DenseNet parameters so they are not updated:

for p in my_densenet.parameters():
    p.requires_grad = False

Extract the features:

train_features = []
train_feat_labels = []
for im,la in train_dl:
    out = my_densenet(im.cuda())
    out = out.view(out.size(0),-1)
    train_features.extend(out.cpu().data)  # extend adds each sample tensor individually (append would add the whole batch)
    train_feat_labels.extend(la)
    
test_features = []
test_feat_labels = []
for im,la in test_dl:
    out = my_densenet(im.cuda())
    out = out.view(out.size(0),-1)
    test_features.extend(out.cpu().data)  # move to CPU before storing so the features can be used as ordinary tensors
    test_feat_labels.extend(la)

Use the extracted features and labels as a new dataset and fit a linear fully connected layer on them:

class FeatureDataset(data.Dataset):
    def __init__(self,feat_list,label_list):
        self.feat_list = feat_list
        self.label_list = label_list
    def __getitem__(self,index):
        return self.feat_list[index],self.label_list[index]
    def __len__(self):
        return len(self.feat_list)
        
train_feat_ds = FeatureDataset(train_features, train_feat_labels)  # use the labels collected alongside the features so the ordering matches
test_feat_ds = FeatureDataset(test_features, test_feat_labels)
train_feat_dl = data.DataLoader(train_feat_ds,batch_size=32,shuffle=True)
test_feat_dl = data.DataLoader(test_feat_ds,batch_size=32)

Check the dimensionality of train_features, since the linear layer needs it as its input size:

in_feat_size = train_features[0].shape[0]
in_feat_size

Creating the model with nn.Linear:

class FCModel(torch.nn.Module):
    def __init__(self,in_size,out_size):
        super().__init__()
        self.lin1 = nn.Linear(in_size,out_size)
    def forward(self,input):
        return self.lin1(input)
        

Instantiate the model:

net = FCModel(in_feat_size, 200)  # in_feat_size is 50176 for DenseNet121 features on 224x224 inputs; 200 bird classes
if torch.cuda.is_available():
    net.to('cuda')
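The fit function below uses loss_fn and optim, which the note never defines; a minimal sketch (cross-entropy is the natural choice for this classification setup, while the optimizer and learning rate are assumptions):

loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(net.parameters(), lr=0.0001)  # optimizer choice and lr are assumptions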

The fit function:

def fit(epoch, model, trainloader, testloader):
    correct = 0
    total = 0
    running_loss = 0
    model.train()
    for x, y in trainloader:
        y = y.long()  # make sure the labels are LongTensors for CrossEntropyLoss
        if torch.cuda.is_available():
            x, y = x.to('cuda'), y.to('cuda')
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        with torch.no_grad():
            y_pred = torch.argmax(y_pred, dim=1)
            correct += (y_pred == y).sum().item()
            total += y.size(0)
            running_loss += loss.item()

    epoch_loss = running_loss / len(trainloader)
    epoch_acc = correct / total

    test_correct = 0
    test_total = 0
    test_running_loss = 0

    model.eval()
    with torch.no_grad():
        for x, y in testloader:
            y = y.long()
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            y_pred = torch.argmax(y_pred, dim=1)
            test_correct += (y_pred == y).sum().item()
            test_total += y.size(0)
            test_running_loss += loss.item()

    epoch_test_loss = test_running_loss / len(testloader)
    epoch_test_acc = test_correct / test_total

    print('epoch: ', epoch,
          'loss: ', round(epoch_loss, 3),
          'accuracy:', round(epoch_acc, 3),
          'test_loss: ', round(epoch_test_loss, 3),
          'test_accuracy:', round(epoch_test_acc, 3))

    return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

epochs = 30
train_loss = []
train_acc = []
test_loss = []
test_acc = []

for epoch in range(epochs):
    epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, net, train_feat_dl, test_feat_dl)
    
    train_loss.append(epoch_loss)
    train_acc.append(epoch_acc)
    test_loss.append(epoch_test_loss)
    test_acc.append(epoch_test_acc)