[CV Image Segmentation] Landslide4Sense-2022

Environment

  • Code sources: https://github.com/iarai/Landslide4Sense-2022, https://www.iarai.ac.at/landslide4sense/challenge/
  • Local code paths: E:\Downloads\20220916-Landslide4Sense-2022-main\Landslide4Sense-2022-main (uploaded to Baidu Netdisk) or E:\Downloads\Landslide4Sense-2022-main (2)\Landslide4Sense-2022-main
  • Local runtime: a self-built Python 3.8 virtual environment at D:\software\anaconda3\envs\LandSlide_Detection_Faster-RCNN-main. The stock Anaconda3 Python 3.9 environment appeared to be misconfigured and raised "Torch not compiled with CUDA enabled" when running the code, so it was not used.
图 1 Python 3.8 virtual environment settings

Data

  • dataset contents: image_1.h5 (128×128×14); the training set carries mask annotations (mask_1.h5, 128×128×2). The test_list path in the original Train.py points at the training dataset.
  • Viewing .h5 files: h5_visualization.py reads an .h5 file and unpacks it into images (https://blog.csdn.net/qq_39909808/article/details/125047516); image_1.h5 was converted to image_1.nii.gz and opened, as shown below.
图 2 Overview of the .h5 data shipped with the model
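For a quick look at what an .h5 file contains without converting it, the dataset names and shapes can be listed directly with h5py. This is a minimal sketch: it fabricates a stand-in file in a temporary directory (the key name 'img' and the 128×128×14 shape follow the description above; the real file lives under dataset/).

```python
import os
import tempfile

import h5py
import numpy as np

# Fabricate a stand-in for image_1.h5 (key 'img', 128x128x14, as described
# above) so the inspection code below runs anywhere.
tmp_dir = tempfile.mkdtemp()
sample_path = os.path.join(tmp_dir, "image_1.h5")
with h5py.File(sample_path, "w") as hf:
    hf.create_dataset("img", data=np.zeros((128, 128, 14), dtype=np.float32))

# The actual inspection: list every dataset name with its shape and dtype.
with h5py.File(sample_path, "r") as hf:
    info = {name: (hf[name].shape, hf[name].dtype) for name in hf.keys()}
    for name, (shape, dtype) in info.items():
        print(name, shape, dtype)
```

Running the same loop against the real image_1.h5 path shows at a glance whether a file carries the 'img' or 'mask' key and which shape the pipeline will see.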

Batch-resizing images

from PIL import Image
import os

file_path = r"E:\Downloads\Landslide4Sense-2022-main(2)-(2)\Landslide4Sense-2022-main\dataset\TrainData\mask\20220915mask"    # source image directory

raw_files = os.walk(file_path)              # walk all images
width, height = 128, 128                    # target image size

save_path = r"E:\Downloads\Landslide4Sense-2022-main(2)-(2)\Landslide4Sense-2022-main\dataset\TrainData\mask\20220915mask\resize"  # output directory for resized images
if not os.path.exists(save_path):           # create the folder if it does not exist
    os.makedirs(save_path)

for root, dirs, files in raw_files:
    for file in files:                      # iterate over the files
        picture_path = os.path.join(root, file)          # absolute path of the image
        pic_org = Image.open(picture_path)               # open the image

        pic_new = pic_org.resize((width, height), Image.LANCZOS)   # resize; Image.ANTIALIAS was removed in Pillow 10, LANCZOS is the equivalent filter
        _, sub_folder = os.path.split(root)              # sub-folder name
        pic_new_path = os.path.join(save_path, sub_folder)
        if not os.path.exists(pic_new_path):
            os.makedirs(pic_new_path)                    # create the sub-folder
        pic_new_path = os.path.join(pic_new_path, file)  # absolute path for the resized image
        pic_new.save(pic_new_path)                       # save the file
        print("%s has been resized!" % pic_new_path)
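The resize call can be sanity-checked on a generated image, with no dataset files assumed (Image.LANCZOS is the current name for the old ANTIALIAS filter):

```python
from PIL import Image

# Generate a dummy 300x200 RGB image and resize it to 128x128,
# mirroring the loop above.
width, height = 128, 128
pic_org = Image.new("RGB", (300, 200), color=(120, 60, 30))
pic_new = pic_org.resize((width, height), Image.LANCZOS)
print(pic_new.size)   # PIL reports (width, height) -> (128, 128)
```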

Converting images to .h5

  • To build a model from your own images, convert them to .h5 files as follows (the previously tedious route: first produce a group-style .h5 file via code, then save its datasets individually with HDFView), and then reuse the Landslide4Sense code.
import numpy as np
import h5py
import imageio
from skimage.transform import resize as imresize

content_image = imageio.imread(r'E:\Downloads\Landslide4Sense-2022-main(2)-(2)\Landslide4Sense-2022-main\dataset\TrainData\img\20220915img\resize\20220915img/df021.png')
image = imresize(content_image, [128,128,3]) # produces a 128x128 mask, or a 128x128x3 image
# for 14-band images: image = imresize(content_image, [128,128,14])
archive = h5py.File(r'E:\Downloads\Landslide4Sense-2022-main(2)-(2)\Landslide4Sense-2022-main\dataset\TrainData\img\20220915img\resize\20220915img/20.h5', 'w')
archive.create_dataset('img', data=image)
archive.close()
图 3 Example of a converted .h5 file

Training

U-Net

  • Train.py (original code + original .h5 data)
import argparse
import numpy as np
import time
import os
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils import data
import torch.backends.cudnn as cudnn
from utils.tools import *
from dataset.landslide_dataset import LandslideDataSet
import importlib

name_classes = ['Non-Landslide','Landslide']
epsilon = 1e-14

def importName(modulename, name):
    """ Import a named object from a module in the context of this function.
    """
    try:
        module = __import__(modulename, globals(), locals(), [name])
    except ImportError:
        return None
    return vars(module)[name]

def get_arguments():

    parser = argparse.ArgumentParser(description="Baseline method for Land4Seen")
    
    parser.add_argument("--data_dir", type=str, default='E:\Downloads/20220916-Landslide4Sense-2022-main\Landslide4Sense-2022-main\dataset/',
                        help="dataset path.")
    parser.add_argument("--model_module", type =str, default='model.Networks',
                        help='model module to import')
    parser.add_argument("--model_name", type=str, default='unet',
                        help='model name in given module')
    parser.add_argument("--train_list", type=str, default='./dataset/train.txt',
                        help="training list file.")
    parser.add_argument("--test_list", type=str, default='./dataset/train.txt',
                        help="test list file.")
    parser.add_argument("--input_size", type=str, default='128,128',
                        help="width and height of input images.")                     
    parser.add_argument("--num_classes", type=int, default=2,
                        help="number of classes.")               
    parser.add_argument("--batch_size", type=int, default=32,
                        help="number of images in each batch.")
    parser.add_argument("--num_workers", type=int, default=4,
                        help="number of workers for multithread dataloading.")
    parser.add_argument("--learning_rate", type=float, default=1e-3,
                        help="learning rate.")
    parser.add_argument("--num_steps", type=int, default=10, # originally default=5000
                        help="number of training steps.")
    parser.add_argument("--num_steps_stop", type=int, default=10, # originally default=5000
                        help="number of training steps for early stopping.")
    parser.add_argument("--weight_decay", type=float, default=5e-4,
                        help="regularisation parameter for L2-loss.")
    parser.add_argument("--gpu_id", type=int, default=0,
                        help="gpu id in the training.")
    parser.add_argument("--snapshot_dir", type=str, default='./exp/',
                        help="where to save snapshots of the model.")

    return parser.parse_args()


def main():
    args = get_arguments()
    os.environ["CUDA_VISIBLE_DEVICES"] = str(args.gpu_id)
    snapshot_dir = args.snapshot_dir
    if os.path.exists(snapshot_dir)==False:
        os.makedirs(snapshot_dir)

    w, h = map(int, args.input_size.split(','))
    input_size = (w, h)

    cudnn.enabled = True
    cudnn.benchmark = True
    
    # Create network   
    model_import = importName(args.model_module, args.model_name)
    model = model_import(n_classes=args.num_classes)
    model.train()
    model = model.cuda()

    src_loader = data.DataLoader(
                    LandslideDataSet(args.data_dir, args.train_list, max_iters=args.num_steps_stop*args.batch_size,set='labeled'),
                    batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, pin_memory=True)


    test_loader = data.DataLoader(
                    LandslideDataSet(args.data_dir, args.train_list,set='labeled'),
                    batch_size=1, shuffle=False, num_workers=args.num_workers, pin_memory=True)


    optimizer = optim.Adam(model.parameters(),
                        lr=args.learning_rate, weight_decay=args.weight_decay)
    
    interp = nn.Upsample(size=(input_size[1], input_size[0]), mode='bilinear')
    
    hist = np.zeros((args.num_steps_stop,3))
    F1_best = 0.5    
    cross_entropy_loss = nn.CrossEntropyLoss(ignore_index=255)

    for batch_id, src_data in enumerate(src_loader):
        if batch_id==args.num_steps_stop:
            break
        tem_time = time.time()
        model.train()
        optimizer.zero_grad()
        
        images, labels, _, _ = src_data
        images = images.cuda()      
        pred = model(images)   
        
        pred_interp = interp(pred)
              
        # CE Loss
        labels = labels.cuda().long()
        cross_entropy_loss_value = cross_entropy_loss(pred_interp, labels)
        _, predict_labels = torch.max(pred_interp, 1)
        predict_labels = predict_labels.detach().cpu().numpy()
        labels = labels.cpu().numpy()
        batch_oa = np.sum(predict_labels==labels)*1./len(labels.reshape(-1))

            
        hist[batch_id,0] = cross_entropy_loss_value.item()
        hist[batch_id,1] = batch_oa
        
        cross_entropy_loss_value.backward()
        optimizer.step()

        hist[batch_id,-1] = time.time() - tem_time

        if (batch_id+1) % 1 == 0:
            print('Iter %d/%d Time: %.2f Batch_OA = %.1f cross_entropy_loss = %.3f'%(batch_id+1,args.num_steps,10*np.mean(hist[batch_id-9:batch_id+1,-1]),np.mean(hist[batch_id-9:batch_id+1,1])*100,np.mean(hist[batch_id-9:batch_id+1,0])))
           
        # evaluation per 500 iterations
        if (batch_id+1) % 1 == 0:  # originally: if (batch_id+1) % 500 == 0:
            print('Testing..........')
            model.eval()
            TP_all = np.zeros((args.num_classes, 1))
            FP_all = np.zeros((args.num_classes, 1))
            TN_all = np.zeros((args.num_classes, 1))
            FN_all = np.zeros((args.num_classes, 1))
            n_valid_sample_all = 0
            F1 = np.zeros((args.num_classes, 1))
        
            for _, batch in enumerate(test_loader):  
                image, label,_, name = batch
                label = label.squeeze().numpy()
                image = image.float().cuda()
                
                with torch.no_grad():
                    pred = model(image)

                _,pred = torch.max(interp(nn.functional.softmax(pred,dim=1)).detach(), 1)
                pred = pred.squeeze().data.cpu().numpy()                       
                               
                TP,FP,TN,FN,n_valid_sample = eval_image(pred.reshape(-1),label.reshape(-1),args.num_classes)
                TP_all += TP
                FP_all += FP
                TN_all += TN
                FN_all += FN
                n_valid_sample_all += n_valid_sample

            OA = np.sum(TP_all)*1.0 / n_valid_sample_all
            for i in range(args.num_classes):
                P = TP_all[i]*1.0 / (TP_all[i] + FP_all[i] + epsilon)
                R = TP_all[i]*1.0 / (TP_all[i] + FN_all[i] + epsilon)
                F1[i] = 2.0*P*R / (P + R + epsilon)
                if i==1:
                    print('===>' + name_classes[i] + ' Precision: %.2f'%(P * 100))
                    print('===>' + name_classes[i] + ' Recall: %.2f'%(R * 100))                
                    print('===>' + name_classes[i] + ' F1: %.2f'%(F1[i] * 100))

            mF1 = np.mean(F1)            
            print('===> mean F1: %.2f OA: %.2f'%(mF1*100,OA*100))

            if F1[1]>F1_best:
                F1_best = F1[1]
                # save the models        
                print('Save Model')                     
                model_name = 'batch'+repr(batch_id+1)+'_F1_'+repr(int(F1[1]*10000))+'.pth'
                torch.save(model.state_dict(), os.path.join(
                    snapshot_dir, model_name))
 
if __name__ == '__main__':
    main()
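The importName helper in Train.py wraps the low-level __import__; the same dynamic model lookup can be written with importlib. A minimal sketch, demonstrated on a standard-library module since model.Networks is not assumed to be importable here:

```python
import importlib

def import_name(modulename, name):
    """Return the named attribute of a module, or None if the module
    is missing -- the same contract as importName in Train.py."""
    try:
        module = importlib.import_module(modulename)
    except ImportError:
        return None
    return getattr(module, name, None)

# Same call shape as model_import = importName(args.model_module, args.model_name).
dumps = import_name("json", "dumps")
print(dumps({"model_name": "unet"}))   # -> {"model_name": "unet"}
missing = import_name("no_such_module", "anything")
print(missing)                          # -> None
```

This is why switching networks later only requires changing the --model_module and --model_name defaults rather than editing any import statements.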
  • landslide_dataset.py (the dataset-loader code shipped with the repo)
import numpy as np
import torch
from torch.utils import data
from torch.utils.data import DataLoader
import h5py

class LandslideDataSet(data.Dataset):
    def __init__(self, data_dir, list_path, max_iters=None,set='labeled'):  # default was 'label', which matches neither branch below
        self.list_path = list_path
        self.mean = [-0.4914, -0.3074, -0.1277, -0.0625, 0.0439, 0.0803, 0.0644, 0.0802, 0.3000, 0.4082, 0.0823, 0.0516, 0.3338, 0.7819]
        self.std = [0.9325, 0.8775, 0.8860, 0.8869, 0.8857, 0.8418, 0.8354, 0.8491, 0.9061, 1.6072, 0.8848, 0.9232, 0.9018, 1.2913]
        self.set = set
        self.img_ids = [i_id.strip() for i_id in open(list_path)]
        
        if not max_iters==None:
            n_repeat = int(np.ceil(max_iters / len(self.img_ids)))
            self.img_ids = self.img_ids * n_repeat + self.img_ids[:max_iters-n_repeat*len(self.img_ids)]

        self.files = []

        if set=='labeled':
            for name in self.img_ids:
                img_file = data_dir + name
                label_file = data_dir + name.replace('img','mask').replace('image','mask')
                self.files.append({
                    'img': img_file,
                    'label': label_file,
                    'name': name
                })
        elif set=='unlabeled':
            for name in self.img_ids:
                img_file = data_dir + name
                self.files.append({
                    'img': img_file,
                    'name': name
                })
            
    def __len__(self):
        return len(self.files)


    def __getitem__(self, index):
        datafiles = self.files[index]
        
        if self.set=='labeled':
            with h5py.File(datafiles['img'], 'r') as hf:
                image = hf['img'][:]
            with h5py.File(datafiles['label'], 'r') as hf:
                label = hf['mask'][:]
            name = datafiles['name']
                
            image = np.asarray(image, np.float32)
            label = np.asarray(label, np.float32)
            image = image.transpose((-1, 0, 1))
            size = image.shape

            for i in range(len(self.mean)):
                image[i,:,:] -= self.mean[i]
                image[i,:,:] /= self.std[i]

            return image.copy(), label.copy(), np.array(size), name

        else:
            with h5py.File(datafiles['img'], 'r') as hf:
                image = hf['img'][:]
            name = datafiles['name']
                
            image = np.asarray(image, np.float32)
            image = image.transpose((-1, 0, 1))
            size = image.shape

            for i in range(len(self.mean)):
                image[i,:,:] -= self.mean[i]
                image[i,:,:] /= self.std[i]

            return image.copy(), np.array(size), name

       
if __name__ == '__main__':
    
    train_dataset = LandslideDataSet(data_dir='/dataset/', list_path='./train.txt')
    train_loader = DataLoader(dataset=train_dataset,batch_size=1,shuffle=True,pin_memory=True)

    channels_sum,channel_squared_sum = 0,0
    num_batches = len(train_loader)
    for data,_,_,_ in train_loader:
        channels_sum += torch.mean(data,dim=[0,2,3])   
        channel_squared_sum += torch.mean(data**2,dim=[0,2,3])       

    mean = channels_sum/num_batches
    std = (channel_squared_sum/num_batches - mean**2)**0.5
    print(mean,std) 
    #[-0.4914, -0.3074, -0.1277, -0.0625, 0.0439, 0.0803, 0.0644, 0.0802, 0.3000, 0.4082, 0.0823, 0.0516, 0.3338, 0.7819]
    #[0.9325, 0.8775, 0.8860, 0.8869, 0.8857, 0.8418, 0.8354, 0.8491, 0.9061, 1.6072, 0.8848, 0.9232, 0.9018, 1.2913]
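The per-channel mean/std accumulation in the __main__ block above averages per-batch statistics; with equal-sized batches that equals the global statistics over all pixels. A numpy sketch on synthetic 14-band data (torch is not assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Eight synthetic batches shaped (batch, channels, H, W), standing in for
# the batch_size=1 DataLoader above.
batches = [rng.normal(2.0, 3.0, size=(1, 14, 32, 32)) for _ in range(8)]

channels_sum = np.zeros(14)
channel_squared_sum = np.zeros(14)
for batch in batches:
    channels_sum += batch.mean(axis=(0, 2, 3))             # per-channel mean of this batch
    channel_squared_sum += (batch ** 2).mean(axis=(0, 2, 3))

mean = channels_sum / len(batches)
std = (channel_squared_sum / len(batches) - mean ** 2) ** 0.5  # sqrt(E[x^2] - E[x]^2)

# With equal batch sizes this matches the statistics computed in one pass.
stacked = np.concatenate(batches, axis=0)
assert np.allclose(mean, stacked.mean(axis=(0, 2, 3)))
assert np.allclose(std, stacked.std(axis=(0, 2, 3)))
```

Note that the shortcut only holds when every batch has the same size, which is why the statistics script uses batch_size=1.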
  • After 5000 iterations: Precision 84.33, Recall 60.36, F1 70.36
图 4 Training results on the bundled data

  • Other networks: Semantic-segmentation-methods-for-landslide-detection-master (https://github.com/waterybye/Semantic-segmentation-methods-for-landslide-detection) contains code for several networks. Copy them into the model directory of the current Landslide4Sense-2022-main package to train the DeepLabV3+, FCN, and GCN networks; the corresponding Train.py modifications are:
parser.add_argument("--model_module", type =str, default='model.deeplab3plus',
                    help='model module to import')
parser.add_argument("--model_name", type=str, default='DeepLabv3_plus',
                    help='model name in given module')
  • Note: to train the DeepLabV3+ network, images must be fed in as 128×128×3 three-channel input (corresponding to "image = imresize(content_image, [128,128,3])" in the .h5-conversion code above); channels, batch_size, and other parameters also need adjusting (e.g. 100 iterations with batch_size 5), so experiment and compare.
  • Note: to match "model = model_import(n_classes=args.num_classes)" in Train.py, rename "num_classes" to "n_classes" in each network's code.

DeepLabv3+

  • Input: 128×128×3 .h5 images.
  • Train.py: key modifications as follows (as in the code fragment above).
  • Note: a "CUDA out of memory" error may occur → reduce batch_size.
    parser.add_argument("--data_dir", type=str, default='E:\Downloads/20220916-Landslide4Sense-2022-main\Landslide4Sense-2022-main\dataset/',
                        help="dataset path.")
    parser.add_argument("--model_module", type=str, default='model.deeplab3plus',
                        help='model module to import')
    parser.add_argument("--model_name", type=str, default='DeepLabv3_plus',
                        help='model name in given module')
    parser.add_argument("--train_list", type=str, default='./dataset/train_other.txt',
                        help="training list file.")
    parser.add_argument("--test_list", type=str, default='./dataset/train_other.txt',
                        help="test list file.")
  • landslide_dataset.py: modified as follows
        self.mean = [-0.4914, -0.3074, -0.1277] # the input channel count differs, so networks other than U-Net need this adjusted accordingly
        self.std = [0.9325, 0.8775, 0.8860] # the input channel count differs, so networks other than U-Net need this adjusted accordingly
if __name__ == '__main__':
    
    train_dataset = LandslideDataSet(data_dir='/dataset/', list_path='./train_other.txt')

FCN

  • FCN (8s/16s/32s) follows the same modifications as above; only the model settings in Train.py differ, as follows:
    parser.add_argument("--model_module", type=str, default='model.fcn',
                        help='model module to import')
    parser.add_argument("--model_name", type=str, default='fcn8s',
                        help='model name in given module')

GCN

  • GCN follows the same modifications as above; only the model settings in Train.py differ, as follows:
    parser.add_argument("--model_module", type=str, default='model.gcn',
                        help='model module to import')
    parser.add_argument("--model_name", type=str, default='GCN',
                        help='model name in given module')


Prediction

  • Predicting with Predict.py: when loading the trained weight file, Predict.py raised "RecursionError: maximum recursion depth exceeded while calling a Python object" or "Process finished with exit code -1073741571 (0xC00000FD)", so the relevant Predict.py code was appended to Train.py instead →→→ the corresponding masks are written to the folder …\exp
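The mask-extraction step taken from Predict.py boils down to a per-pixel softmax over the class dimension followed by argmax. A numpy sketch on random two-class logits (the shapes only illustrate the 128×128, num_classes=2 setup used above):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 2, 128, 128))   # (batch, num_classes, H, W)

# Per-pixel softmax over the class axis; softmax does not change the argmax,
# but this mirrors the torch code: torch.max(softmax(pred), 1).
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

pred_mask = probs.argmax(axis=1).squeeze()   # 0 = non-landslide, 1 = landslide
print(pred_mask.shape)    # -> (128, 128)
```

The resulting 0/1 array is what gets written out as the predicted mask under …\exp.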
