FastInst, a small modification: swapping the backbone network

Swapping the backbone (to Mask2Former's Swin)

FastInst

1. The .yaml config file


# Inherit settings from the base config file
_BASE_: Fast-COCO-InstanceSegmentation.yaml

MODEL:
  BACKBONE:
    NAME: "build_resnet_vd_backbone"  # name of the function that builds the backbone
  WEIGHTS: "checkpoints/resnet50d_ra2-464e36ba.pkl"  # path to the pretrained weights
  RESNETS:
    DEFORM_ON_PER_STAGE: [ False, False, True, True ] # use deformable convolutions in the last two ResNet stages
  FASTINST:
    DEC_LAYERS: 3 # number of decoder layers

# Optimizer and learning-rate schedule
SOLVER:
  IMS_PER_BATCH: 4   # 1 GPU: images per batch on a single GPU
  # IMS_PER_BATCH: 16  # 4 GPUs
  BASE_LR: 0.000025  # 1 GPU: initial learning rate
  # BASE_LR: 0.0001   # 4 GPUs
  STEPS: (1311112, 1420368) # 1 GPU
  # STEPS: (327778, 355092)   # 4 GPUs
  WARMUP_FACTOR: 1.0
  MAX_ITER: 1475000 # 1 GPU
  # MAX_ITER: 368750   # 4 GPUs
  WARMUP_ITERS: 10

  WEIGHT_DECAY: 0.05  # weight decay to curb overfitting
  OPTIMIZER: "ADAMW"
  BACKBONE_MULTIPLIER: 0.1  # lower the backbone learning rate
  #  CLIP_GRADIENTS:
  #    ENABLED: True
  #    CLIP_TYPE: "full_model"
  #    CLIP_VALUE: 0.01
  #    NORM_TYPE: 2.0
  AMP:
    ENABLED: True  # mixed-precision training to save GPU memory

# Output directory for training artifacts
OUTPUT_DIR: "output/fastinst_r50-vd-dcn_ppm-fpn_bs16_50ep_x3_640_1gpu"
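The commented 1-GPU vs 4-GPU pairs above follow the linear scaling rule: the batch shrinks 4x, so the learning rate shrinks 4x while the step and iteration counts grow 4x, keeping the total number of images seen constant. A minimal sketch in plain Python, using the baseline values from this config:

```python
def scale_solver(batch, base_batch=16, base_lr=0.0001,
                 base_steps=(327778, 355092), base_max_iter=368750):
    """Linear scaling rule: LR scales with batch size; iteration
    counts scale inversely, so total images seen stays fixed."""
    k = base_batch / batch               # e.g. 4 when going from 16 to 4 images/batch
    lr = base_lr / k                     # smaller batch -> smaller learning rate
    steps = tuple(round(s * k) for s in base_steps)
    max_iter = round(base_max_iter * k)  # more iterations to cover the same data
    return lr, steps, max_iter

# Reproduces the 1-GPU numbers above:
# scale_solver(4) -> (2.5e-05, (1311112, 1420368), 1475000)
```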
Base config file: Fast-COCO-InstanceSegmentation.yaml
MODEL:
  BACKBONE:
    FREEZE_AT: 0   # freeze 0 stages
    NAME: "build_resnet_backbone"
  WEIGHTS: "detectron2://ImageNetPretrained/torchvision/R-50.pkl"
  PIXEL_MEAN: [ 123.675, 116.280, 103.530 ]
  PIXEL_STD: [ 58.395, 57.120, 57.375 ]

  RESNETS:   # (same as Mask2Former)
    DEPTH: 50    # use the ResNet-50 architecture
    STEM_TYPE: "basic"
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: False
    OUT_FEATURES: [ "res3", "res4", "res5" ]  # which backbone stages to extract
    RES5_MULTI_GRID: [ 1, 1, 1 ]

  META_ARCHITECTURE: "FastInst"  # (same as Mask2Former)
  SEM_SEG_HEAD:
    NAME: "FastInstHead"
    IGNORE_VALUE: 255
    NUM_CLASSES: 80
    CONVS_DIM: 256
    MASK_DIM: 256
    NORM: "GN"
    # pixel decoder
    PIXEL_DECODER_NAME: "PyramidPoolingModuleFPN"
    IN_FEATURES: [ "res3", "res4", "res5" ]

  FASTINST:
    TRANSFORMER_DECODER_NAME: "FastInstDecoder"
    DEEP_SUPERVISION: True
    NO_OBJECT_WEIGHT: 0.1
    CLASS_WEIGHT: 2.0
    MASK_WEIGHT: 5.0
    DICE_WEIGHT: 5.0
    LOCATION_WEIGHT: 1000.0  # can be left as-is
    PROPOSAL_WEIGHT: 20.0    # can be left as-is
    HIDDEN_DIM: 256
    NUM_OBJECT_QUERIES: 100
    NUM_AUX_QUERIES: 8
    NHEADS: 8
    DROPOUT: 0.0
    DIM_FEEDFORWARD: 1024   # hidden dim of the transformer feed-forward network (Mask2Former uses 2048)
    PRE_NORM: False
    SIZE_DIVISIBILITY: 32
    DEC_LAYERS: 1  # number of decoder layers (Mask2Former uses 10)
    # Note: more decoder layers can improve the model's ability to capture output
    # detail, at the cost of extra compute and training time. If the goal is to
    # raise prediction quality and precision, raising DEC_LAYERS toward 10 is one
    # possible optimization, but it needs more resources and may require retuning
    # related hyperparameters (learning rate, regularization, etc.).
    TRAIN_NUM_POINTS: 12544
    OVERSAMPLE_RATIO: 3.0
    IMPORTANCE_SAMPLE_RATIO: 0.75

    TEST: # (same as Mask2Former)
      SEMANTIC_ON: False
      INSTANCE_ON: True
      PANOPTIC_ON: False
      OVERLAP_THRESHOLD: 0.8
      OBJECT_MASK_THRESHOLD: 0.8
        
DATASETS: # (same as Mask2Former)
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)

SOLVER:  # (same as Mask2Former)
  IMS_PER_BATCH: 16  # images per batch (needs changing for the non-4-GPU setups!)
  BASE_LR: 0.0001    # initial learning rate
  STEPS: (327778, 355092)  # iterations at which the learning rate drops
  MAX_ITER: 368750    # total number of iterations
  WARMUP_FACTOR: 1.0
  WARMUP_ITERS: 10
  WEIGHT_DECAY: 0.05    # weight decay
  OPTIMIZER: "ADAMW"    # optimizer
  BACKBONE_MULTIPLIER: 0.1
  #  CLIP_GRADIENTS:
  #    ENABLED: True
  #    CLIP_TYPE: "full_model"
  #    CLIP_VALUE: 0.01
  #    NORM_TYPE: 2.0

  AMP:
    ENABLED: True   # enable automatic mixed-precision training

INPUT:  # (this part differs from Mask2Former; pay attention!)
  MIN_SIZE_TRAIN: (416, 448, 480, 512, 544, 576, 608, 640)
  MAX_SIZE_TRAIN: 853
  MIN_SIZE_TEST: 640
  MAX_SIZE_TEST: 853

  CROP:
    ENABLED: True
    RESIZE:
      ENABLED: True
      MIN_SIZE: (400, 500, 600)
      MIN_SIZE_TRAIN_SAMPLING: "choice"
    TYPE: "absolute_range"
    SIZE: (384, 600)
  FORMAT: "RGB"

  DATASET_MAPPER_NAME: "fastinst_instance"  # dataset mapper name (Mask2Former uses coco_instance_lsj)

TEST: # (same as Mask2Former)
  EVAL_PERIOD: 5000   # run evaluation every 5000 iterations

DATALOADER: # (same as Mask2Former)
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4    # use 4 worker processes to load data
VERSION: 2
SEED: 42
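The `_BASE_` key at the top of the derived .yaml means the child file is recursively merged over this base file, with child keys winning and untouched base keys surviving. A rough stand-in for what Detectron2's config loader does, using plain dicts rather than the real `CfgNode` API:

```python
def merge_cfg(base, override):
    """Recursively merge `override` into `base`: override wins on conflicts,
    untouched base keys are kept (this is how _BASE_ inheritance behaves)."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_cfg(out[key], val)   # descend into nested sections
        else:
            out[key] = val                        # leaf values are replaced
    return out

base = {"MODEL": {"BACKBONE": {"FREEZE_AT": 0, "NAME": "build_resnet_backbone"}}}
child = {"MODEL": {"BACKBONE": {"NAME": "build_resnet_vd_backbone"}}}
cfg = merge_cfg(base, child)
# NAME is overridden by the child; FREEZE_AT survives from the base
```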

2. The Python config file

# -*- coding: utf-8 -*-
from detectron2.config import CfgNode as CN

# Add FastInst-specific options to the Detectron2 config
def add_fastinst_config(cfg):

    # (1) Data processing
    cfg.INPUT.DATASET_MAPPER_NAME = "fastinst_instance"  # dataset mapper name
    cfg.INPUT.COLOR_AUG_SSD = False    # no SSD-style color augmentation
    cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA = 1.0  # max fraction of the crop a single category may cover; beyond this the crop attempt is discarded
    # Resizing of the cropped image
    cfg.INPUT.CROP.RESIZE = CN()
    cfg.INPUT.CROP.RESIZE.ENABLED = False    # disabled by default
    cfg.INPUT.CROP.RESIZE.MIN_SIZE = (800,)  # minimum size when enabled
    cfg.INPUT.CROP.RESIZE.MIN_SIZE_TRAIN_SAMPLING = "choice"
    cfg.INPUT.SIZE_DIVISIBILITY = -1  # divisibility constraint on image size; -1 means no requirement

    # (2) Solver
    cfg.SOLVER.WEIGHT_DECAY_EMBED = 0.0  # no weight decay on embedding layers
    cfg.SOLVER.OPTIMIZER = "ADAMW"   # use the AdamW optimizer
    cfg.SOLVER.BACKBONE_MULTIPLIER = 1.0    # backbone uses the global learning rate, no extra scaling (Mask2Former uses 0.1)

    # (3) FastInst base config
    cfg.MODEL.FASTINST = CN()

    # Loss weights:
    # deep supervision plus class, mask, Dice, location, and proposal loss weights
    cfg.MODEL.FASTINST.DEEP_SUPERVISION = True
    cfg.MODEL.FASTINST.NO_OBJECT_WEIGHT = 0.1
    cfg.MODEL.FASTINST.CLASS_WEIGHT = 2.0  # higher favors classification; lower favors mask quality (Mask2Former uses 1.0)
    cfg.MODEL.FASTINST.DICE_WEIGHT = 5.0   # higher favors mask quality (Mask2Former uses 1.0)
    cfg.MODEL.FASTINST.MASK_WEIGHT = 5.0   # higher favors mask quality (Mask2Former uses 20.0)
    cfg.MODEL.FASTINST.LOCATION_WEIGHT = 1e3
    cfg.MODEL.FASTINST.PROPOSAL_WEIGHT = 20.0

    # Transformer config:
    # attention heads, dropout, feed-forward dim, decoder layers, etc.
    cfg.MODEL.FASTINST.NHEADS = 8
    cfg.MODEL.FASTINST.DROPOUT = 0.  # dropout for regularization (Mask2Former uses 0.1)
    cfg.MODEL.FASTINST.DIM_FEEDFORWARD = 1024  # Mask2Former uses 2048
    cfg.MODEL.FASTINST.DEC_LAYERS = 10   # Mask2Former uses 6
    cfg.MODEL.FASTINST.PRE_NORM = False

    cfg.MODEL.FASTINST.HIDDEN_DIM = 256
    cfg.MODEL.FASTINST.NUM_OBJECT_QUERIES = 100  # number of object queries in the transformer decoder
    cfg.MODEL.FASTINST.NUM_AUX_QUERIES = 8    # number of auxiliary queries

    # (4) Inference (these are also set in the Mask2Former .yaml files; keep them in sync when changing)
    cfg.MODEL.FASTINST.TEST = CN()
    # Toggle semantic, instance, and panoptic segmentation
    cfg.MODEL.FASTINST.TEST.SEMANTIC_ON = False
    cfg.MODEL.FASTINST.TEST.INSTANCE_ON = True
    cfg.MODEL.FASTINST.TEST.PANOPTIC_ON = False
    # Mask and overlap thresholds for instance segmentation (Mask2Former uses 0.0)
    cfg.MODEL.FASTINST.TEST.OBJECT_MASK_THRESHOLD = 0.8
    cfg.MODEL.FASTINST.TEST.OVERLAP_THRESHOLD = 0.8
    # Whether to run semantic-segmentation postprocessing before inference
    cfg.MODEL.FASTINST.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE = False
    # Image-size divisibility; override if the backbone has a requirement (Mask2Former uses 32)
    cfg.MODEL.FASTINST.SIZE_DIVISIBILITY = -1

    # (5) Pixel decoder
    cfg.MODEL.SEM_SEG_HEAD.MASK_DIM = 256  # output mask dimension
    cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS = 0 # encoder layers; unused by default
    cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME = "FastInstEncoderV1"

    # (6) Swin Transformer backbone
    cfg.MODEL.SWIN = CN()
    cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE = 224  # image size used during pretraining
    cfg.MODEL.SWIN.PATCH_SIZE = 4  # patch size
    cfg.MODEL.SWIN.EMBED_DIM = 96  # embedding dimension
    cfg.MODEL.SWIN.DEPTHS = [2, 2, 6, 2]  # number of blocks per Swin stage
    cfg.MODEL.SWIN.NUM_HEADS = [3, 6, 12, 24]    # attention heads per stage
    cfg.MODEL.SWIN.WINDOW_SIZE = 7   # window size for local self-attention
    cfg.MODEL.SWIN.MLP_RATIO = 4.0   # MLP expansion ratio
    cfg.MODEL.SWIN.QKV_BIAS = True   # whether the QKV projections have a bias
    cfg.MODEL.SWIN.QK_SCALE = None
    cfg.MODEL.SWIN.DROP_RATE = 0.0
    cfg.MODEL.SWIN.ATTN_DROP_RATE = 0.0
    cfg.MODEL.SWIN.DROP_PATH_RATE = 0.3
    cfg.MODEL.SWIN.APE = False  # no absolute position embedding
    cfg.MODEL.SWIN.PATCH_NORM = True  # apply normalization after patch embedding
    cfg.MODEL.SWIN.OUT_FEATURES = ["res2", "res3", "res4", "res5"]  # names of the output feature stages
    cfg.MODEL.SWIN.USE_CHECKPOINT = False  # gradient checkpointing to reduce memory use

    # (7) Transformer decoder
    cfg.MODEL.FASTINST.TRANSFORMER_DECODER_NAME = "FastInstDecoderV4"
    # Image size and scale range for LSJ augmentation
    cfg.INPUT.IMAGE_SIZE = 1024  # training image size
    cfg.INPUT.MIN_SCALE = 0.1    # minimum scale
    cfg.INPUT.MAX_SCALE = 2.0    # maximum scale

    # MSDeformAttn encoder configs
    cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES = ["res3", "res4", "res5"]

    # Point-loss sampling:
    # number of points sampled for the mask point head during training
    cfg.MODEL.FASTINST.TRAIN_NUM_POINTS = 112 * 112
    cfg.MODEL.FASTINST.OVERSAMPLE_RATIO = 3.0     # oversampling ratio for PointRend-style point sampling
    cfg.MODEL.FASTINST.IMPORTANCE_SAMPLE_RATIO = 0.75  # fraction of points chosen by importance sampling
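The three point-sampling values at the end interact PointRend-style: candidate points are oversampled first, then a fraction of the final budget is kept by importance (uncertainty) and the remainder is filled uniformly. A small sketch of the arithmetic, assuming the PointRend sampling scheme with the defaults above:

```python
def point_budget(num_points=112 * 112, oversample_ratio=3.0,
                 importance_sample_ratio=0.75):
    """PointRend-style budget: how many candidate points are drawn, and how
    the final num_points split between uncertainty-based and uniform samples."""
    candidates = int(num_points * oversample_ratio)        # drawn first
    important = int(num_points * importance_sample_ratio)  # kept by uncertainty
    uniform = num_points - important                       # filled uniformly
    return candidates, important, uniform

# With the defaults here (12544 points):
# point_budget() -> (37632, 9408, 3136)
```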

Mask2Former

1. Swin-L (best results in Mask2Former, but not a good fit for FastInst) - skipped


Weights file: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth

_BASE_: ../maskformer2_R50_bs16_50ep.yaml

MODEL:
  BACKBONE:  # set the backbone to "D2SwinTransformer"
    NAME: "D2SwinTransformer"

  # Swin settings
  SWIN:
    EMBED_DIM: 192  # embedding dimension 192; differs from the value in the Python config above, take care when editing
    DEPTHS: [2, 2, 18, 2]  # transformer blocks per Swin stage; also differs
    NUM_HEADS: [6, 12, 24, 48]  # attention heads per stage, increasing across stages; also differs
    WINDOW_SIZE: 12  # local window size for shifted-window self-attention; also differs
    APE: False  # no absolute position embedding
    DROP_PATH_RATE: 0.3  # stochastic-depth drop rate to curb overfitting
    PATCH_NORM: True     # apply normalization after patch embedding
    PRETRAIN_IMG_SIZE: 384   # input image size used during pretraining (384x384); also differs
  WEIGHTS: "swin_large_patch4_window12_384_22k.pkl"    # initialize from pretrained weights
  # Mean and std used for image normalization
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]

  # MaskFormer-specific settings
  MASK_FORMER:
    NUM_OBJECT_QUERIES: 200  # number of object queries

# Solver (optimizer) settings
SOLVER:
  STEPS: (655556, 710184)  # two learning-rate drop points during training
  MAX_ITER: 737500   # total iterations, which determines overall training length
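`EMBED_DIM` and `DEPTHS` determine the per-stage widths: Swin doubles the channel count at each of its four stages, so `res2..res5` come out at `embed_dim * 2**i`. A quick check of the variants discussed here:

```python
def swin_stage_channels(embed_dim, num_stages=4):
    """Channel width of each Swin output stage: the embedding dim doubles
    from one stage to the next."""
    return [embed_dim * 2 ** i for i in range(num_stages)]

# Swin-T / Swin-S (EMBED_DIM 96) vs Swin-L (EMBED_DIM 192):
# swin_stage_channels(96)  -> [96, 192, 384, 768]
# swin_stage_channels(192) -> [192, 384, 768, 1536]
```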

2. Swin-T


Weights file: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth

A good fit for FastInst.

_BASE_: ../maskformer2_R50_bs16_50ep.yaml
MODEL:
  BACKBONE:
    NAME: "D2SwinTransformer"
  SWIN:
    EMBED_DIM: 96
    DEPTHS: [2, 2, 6, 2]
    NUM_HEADS: [3, 6, 12, 24]
    WINDOW_SIZE: 7
    APE: False
    DROP_PATH_RATE: 0.3
    PATCH_NORM: True
  WEIGHTS: "swin_tiny_patch4_window7_224.pkl"  # the key line: the pretrained weights file!!
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]

3. Swin-S (not a good fit for FastInst) - skipped


Weights file: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth

_BASE_: ../maskformer2_R50_bs16_50ep.yaml
MODEL:
  BACKBONE:
    NAME: "D2SwinTransformer"
  SWIN:
    EMBED_DIM: 96
    DEPTHS: [2, 2, 18, 2]   # differs from Swin-T
    NUM_HEADS: [3, 6, 12, 24]
    WINDOW_SIZE: 7
    APE: False
    DROP_PATH_RATE: 0.3
    PATCH_NORM: True
  WEIGHTS: "swin_small_patch4_window7_224.pkl"
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]
Base config file: maskformer2_R50_bs16_50ep.yaml
_BASE_: Base-COCO-InstanceSegmentation.yaml

MODEL:
  META_ARCHITECTURE: "MaskFormer"
  SEM_SEG_HEAD:
    NAME: "MaskFormerHead"
    IGNORE_VALUE: 255
    NUM_CLASSES: 80
    LOSS_WEIGHT: 1.0  # loss weight
    CONVS_DIM: 256  # feature dimension
    MASK_DIM: 256   # mask dimension
    NORM: "GN"   # use group normalization

    # pixel decoder
    PIXEL_DECODER_NAME: "MSDeformAttnPixelDecoder"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
    DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES: ["res3", "res4", "res5"] # pixel-decoder input stages
    COMMON_STRIDE: 4
    TRANSFORMER_ENC_LAYERS: 6

  MASK_FORMER:
    TRANSFORMER_DECODER_NAME: "MultiScaleMaskedTransformerDecoder"
    TRANSFORMER_IN_FEATURE: "multi_scale_pixel_decoder"
    DEEP_SUPERVISION: True  # deep supervision
    # Weights for the no-object, classification, mask, and Dice losses
    NO_OBJECT_WEIGHT: 0.1
    CLASS_WEIGHT: 2.0
    MASK_WEIGHT: 5.0
    DICE_WEIGHT: 5.0
    HIDDEN_DIM: 256    # hidden dimension
    NUM_OBJECT_QUERIES: 100  # number of object queries
    NHEADS: 8  # number of attention heads
    DROPOUT: 0.0

    DIM_FEEDFORWARD: 2048
    ENC_LAYERS: 0
    PRE_NORM: False
    ENFORCE_INPUT_PROJ: False
    SIZE_DIVISIBILITY: 32
    DEC_LAYERS: 10
    TRAIN_NUM_POINTS: 12544
    OVERSAMPLE_RATIO: 3.0
    IMPORTANCE_SAMPLE_RATIO: 0.75

    TEST:
      # Toggle semantic, instance, and panoptic segmentation
      SEMANTIC_ON: False
      INSTANCE_ON: True
      PANOPTIC_ON: False
      # Overlap and object-mask thresholds at test time
      OVERLAP_THRESHOLD: 0.8
      OBJECT_MASK_THRESHOLD: 0.8
Base config file: Base-COCO-InstanceSegmentation.yaml
MODEL:
  BACKBONE:
    FREEZE_AT: 0  # freeze 0 stages
    NAME: "build_resnet_backbone"
  WEIGHTS: "detectron2://ImageNetPretrained/torchvision/R-50.pkl"
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]

  RESNETS:
    DEPTH: 50  # use the ResNet-50 architecture
    STEM_TYPE: "basic"
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: False
    OUT_FEATURES: ["res3", "res4", "res5"]  # which ResNet backbone stages to extract
    RES5_MULTI_GRID: [1, 1, 1]

DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)

SOLVER:
  IMS_PER_BATCH: 16   # images per batch (so this is the 4-GPU setting)
  BASE_LR: 0.0001     # initial learning rate
  STEPS: (327778, 355092)  # iterations at which the learning rate drops
  MAX_ITER: 368750    # total number of iterations
  WARMUP_FACTOR: 1.0
  WARMUP_ITERS: 10
  WEIGHT_DECAY: 0.05  # weight decay
  OPTIMIZER: "ADAMW"  # use the AdamW optimizer (differs from the Detectron2 default)
  BACKBONE_MULTIPLIER: 0.1
  # FastInst comments this block out
  CLIP_GRADIENTS:
    ENABLED: True
    CLIP_TYPE: "full_model"
    CLIP_VALUE: 0.01
    NORM_TYPE: 2.0

  AMP:
    ENABLED: True  # enable automatic mixed-precision training

INPUT:
  IMAGE_SIZE: 1024  # input images are resized to 1024x1024
  # Image scale range
  MIN_SCALE: 0.1
  MAX_SCALE: 2.0
  FORMAT: "RGB"
  DATASET_MAPPER_NAME: "coco_instance_lsj"  # dataset mapper name

TEST:
  EVAL_PERIOD: 5000     # run evaluation every 5000 iterations

DATALOADER:
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4  # use 4 worker processes to load data

VERSION: 2

The Python config file

# -*- coding: utf-8 -*-
# Copyright (c) Facebook, Inc. and its affiliates.
from detectron2.config import CfgNode as CN

# Add Mask2Former-specific options to the Detectron2 config
def add_maskformer2_config(cfg):

    # (1) Data processing
    cfg.INPUT.DATASET_MAPPER_NAME = "mask_former_semantic"  # default dataset mapper name
    cfg.INPUT.COLOR_AUG_SSD = False   # no SSD-style color augmentation
    cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA = 1.0  # max fraction of the crop a single category may cover; beyond this the crop attempt is discarded
    cfg.INPUT.SIZE_DIVISIBILITY = -1  # divisibility constraint on image size; -1 means no requirement

    # (2) Solver
    cfg.SOLVER.WEIGHT_DECAY_EMBED = 0.0  # no weight decay on embedding layers
    cfg.SOLVER.OPTIMIZER = "ADAMW"   # use the AdamW optimizer
    cfg.SOLVER.BACKBONE_MULTIPLIER = 0.1  # backbone learning-rate multiplier 0.1 (differs: FastInst uses 1.0)

    # (3) MaskFormer base config
    cfg.MODEL.MASK_FORMER = CN()

    # Loss weights:
    # deep supervision plus class, mask, and Dice loss weights
    cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION = True
    cfg.MODEL.MASK_FORMER.NO_OBJECT_WEIGHT = 0.1
    cfg.MODEL.MASK_FORMER.CLASS_WEIGHT = 1.0  # differs (FastInst uses 2.0)
    cfg.MODEL.MASK_FORMER.DICE_WEIGHT = 1.0   # differs (FastInst uses 5.0)
    cfg.MODEL.MASK_FORMER.MASK_WEIGHT = 20.0  # differs (FastInst uses 5.0)

    # Transformer config:
    # attention heads, dropout, feed-forward dim, decoder layers, etc.
    cfg.MODEL.MASK_FORMER.NHEADS = 8
    cfg.MODEL.MASK_FORMER.DROPOUT = 0.1   # differs (FastInst uses 0.0)
    cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD = 2048  # differs (FastInst uses 1024)
    cfg.MODEL.MASK_FORMER.ENC_LAYERS = 0
    cfg.MODEL.MASK_FORMER.DEC_LAYERS = 6  # differs (FastInst uses 10)
    cfg.MODEL.MASK_FORMER.PRE_NORM = False

    cfg.MODEL.MASK_FORMER.HIDDEN_DIM = 256
    cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES = 100

    cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE = "res5"
    cfg.MODEL.MASK_FORMER.ENFORCE_INPUT_PROJ = False

    # (4) Inference (these are also set in the Mask2Former .yaml files; keep them in sync when changing)
    cfg.MODEL.MASK_FORMER.TEST = CN()
    # Toggle semantic, instance, and panoptic segmentation
    cfg.MODEL.MASK_FORMER.TEST.SEMANTIC_ON = True
    cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON = False
    cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON = False
    # Mask and overlap thresholds for instance segmentation (FastInst uses 0.8 for both)
    cfg.MODEL.MASK_FORMER.TEST.OBJECT_MASK_THRESHOLD = 0.0
    cfg.MODEL.MASK_FORMER.TEST.OVERLAP_THRESHOLD = 0.0
    # Whether to run semantic-segmentation postprocessing before inference
    cfg.MODEL.MASK_FORMER.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE = False
    # Image-size divisibility; override if the backbone has a requirement
    cfg.MODEL.MASK_FORMER.SIZE_DIVISIBILITY = 32

    # (5) Pixel decoder
    cfg.MODEL.SEM_SEG_HEAD.MASK_DIM = 256  # output mask dimension
    # adding transformer in pixel decoder
    cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS = 0   # encoder layers; unused by default
    # pixel decoder
    cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME = "BasePixelDecoder"

    # (6) Swin Transformer backbone
    cfg.MODEL.SWIN = CN()
    cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE = 224
    cfg.MODEL.SWIN.PATCH_SIZE = 4
    cfg.MODEL.SWIN.EMBED_DIM = 96
    cfg.MODEL.SWIN.DEPTHS = [2, 2, 6, 2]
    cfg.MODEL.SWIN.NUM_HEADS = [3, 6, 12, 24]
    cfg.MODEL.SWIN.WINDOW_SIZE = 7
    cfg.MODEL.SWIN.MLP_RATIO = 4.0
    cfg.MODEL.SWIN.QKV_BIAS = True
    cfg.MODEL.SWIN.QK_SCALE = None
    cfg.MODEL.SWIN.DROP_RATE = 0.0
    cfg.MODEL.SWIN.ATTN_DROP_RATE = 0.0
    cfg.MODEL.SWIN.DROP_PATH_RATE = 0.3
    cfg.MODEL.SWIN.APE = False
    cfg.MODEL.SWIN.PATCH_NORM = True
    cfg.MODEL.SWIN.OUT_FEATURES = ["res2", "res3", "res4", "res5"]
    cfg.MODEL.SWIN.USE_CHECKPOINT = False

    # (7) Transformer decoder
    cfg.MODEL.MASK_FORMER.TRANSFORMER_DECODER_NAME = "MultiScaleMaskedTransformerDecoder"

    # Image size and scale range for LSJ augmentation
    cfg.INPUT.IMAGE_SIZE = 1024  # training image size
    cfg.INPUT.MIN_SCALE = 0.1    # minimum scale
    cfg.INPUT.MAX_SCALE = 2.0    # maximum scale

    # MSDeformAttn encoder configs
    cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES = ["res3", "res4", "res5"]
    cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_N_POINTS = 4
    cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_N_HEADS = 8

    # Point-loss sampling:
    # number of points sampled for the mask point head during training
    cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS = 112 * 112
    cfg.MODEL.MASK_FORMER.OVERSAMPLE_RATIO = 3.0     # oversampling ratio for PointRend-style point sampling
    cfg.MODEL.MASK_FORMER.IMPORTANCE_SAMPLE_RATIO = 0.75  # fraction of points chosen by importance sampling
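The `IMAGE_SIZE`/`MIN_SCALE`/`MAX_SCALE` trio drives large-scale jitter (LSJ): sample a scale in [0.1, 2.0], resize the image (preserving aspect ratio) to fit a `scale * 1024` box, then crop or pad to a fixed 1024x1024. A rough sketch of the resize step; the real transforms live in Detectron2's `ResizeScale`/`FixedSizeCrop`, and the function name below is illustrative:

```python
import random

def lsj_resize_shape(h, w, scale, image_size=1024):
    """Resized (h, w) under large-scale jitter at a given scale: fit the
    image inside an (image_size*scale) square while preserving aspect
    ratio; the result is then cropped/padded to image_size x image_size."""
    target = image_size * scale
    ratio = min(target / h, target / w)
    return round(h * ratio), round(w * ratio)

# A 480x640 image at scale 0.5 is fit inside a 512x512 box:
# lsj_resize_shape(480, 640, 0.5) -> (384, 512)

# During training, the scale is drawn uniformly per image:
scale = random.uniform(0.1, 2.0)
```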

Modified configuration:

.yaml file:


#_BASE_: ../maskformer2_R50_bs16_50ep.yaml
_BASE_: Fast-COCO-InstanceSegmentation.yaml
MODEL:
  BACKBONE:
    NAME: "D2SwinTransformer"
  SWIN:
    EMBED_DIM: 96
    DEPTHS: [2, 2, 6, 2]
    NUM_HEADS: [3, 6, 12, 24]
    WINDOW_SIZE: 7
    APE: False
    DROP_PATH_RATE: 0.3
    PATCH_NORM: True
  WEIGHTS: "checkpoints/swin_tiny_patch4_window7_224.pkl"  # the key line: the pretrained weights file!!
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]
  FASTINST:
    DEC_LAYERS: 3 # number of decoder layers

# Optimizer and learning-rate schedule
SOLVER:
  IMS_PER_BATCH: 4   # 1 GPU: images per batch on a single GPU
  # IMS_PER_BATCH: 16  # 4 GPUs
  BASE_LR: 0.000025  # 1 GPU: initial learning rate
  # BASE_LR: 0.0001   # 4 GPUs
  STEPS: (1311112, 1420368) # 1 GPU
  # STEPS: (327778, 355092)   # 4 GPUs
  WARMUP_FACTOR: 1.0
  MAX_ITER: 1475000 # 1 GPU
  # MAX_ITER: 368750   # 4 GPUs
  WARMUP_ITERS: 10

  WEIGHT_DECAY: 0.05  # weight decay to curb overfitting
  OPTIMIZER: "ADAMW"
  BACKBONE_MULTIPLIER: 0.1  # lower the backbone learning rate
  #  CLIP_GRADIENTS:
  #    ENABLED: True
  #    CLIP_TYPE: "full_model"
  #    CLIP_VALUE: 0.01
  #    NORM_TYPE: 2.0
  AMP:
    ENABLED: True  # mixed-precision training to save GPU memory

# Output directory for training artifacts
OUTPUT_DIR: "output/mask2former_swin_tiny_96_1gpu"
Base config file: Fast-COCO-InstanceSegmentation.yaml
Config file: FastInst's original one

Some values still differ in places and need to be finalized.

How the commands change:

File path: /ai/chenyujia/CYJ/FastInst

College server, 1 GPU

# Step 1:
python tools/convert-pretrained-swin-model-to-d2.py checkpoints/swin_tiny_patch4_window7_224.pth checkpoints/swin_tiny_patch4_window7_224.pkl

## Note: since the pretrained model changed from ResNet to Swin, the weight-conversion script changes too
# Step 2:
python train_net.py --num-gpus 1 --config-file configs/coco/instance-segmentation/maskformer2_swin_tiny_bs16_50ep_1gpu.yaml
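Step 1 repackages the torch checkpoint into the pickle format Detectron2's checkpointer reads. In spirit it does something like the sketch below, with a plain dict standing in for the real torch state dict (the actual script uses `torch.load` to extract it):

```python
import pickle

def convert_to_d2(state_dict, out_path):
    """Wrap a backbone state dict in the dictionary layout Detectron2's
    checkpointer expects; matching_heuristics lets it remap key names
    from the third-party checkpoint to the model's parameter names."""
    payload = {
        "model": state_dict,          # parameter-name -> weights mapping
        "__author__": "third_party",  # marks the weights as externally converted
        "matching_heuristics": True,  # enable fuzzy key matching on load
    }
    with open(out_path, "wb") as f:
        pickle.dump(payload, f)
    return payload

# Stand-in for the Swin .pth contents:
payload = convert_to_d2({"patch_embed.proj.weight": [0.0]}, "demo.pkl")
```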

Server 614, 2 GPUs

File path: /media/bigdata/Data/CYJ/FastInst-main

# Step 1:
python tools/convert-pretrained-swin-model-to-d2.py checkpoints/swin_tiny_patch4_window7_224.pth checkpoints/swin_tiny_patch4_window7_224.pkl

# Step 2: export CUDA_VISIBLE_DEVICES=4,5
## Specify the two GPUs first:
python train_net.py --num-gpus 2 --config-file configs/coco/instance-segmentation/maskformer2_swin_tiny_bs16_50ep_2gpu.yaml

Server 614, 4 GPUs

# Step 1:
python tools/convert-pretrained-swin-model-to-d2.py checkpoints/swin_tiny_patch4_window7_224.pth checkpoints/swin_tiny_patch4_window7_224.pkl

# Step 2: export CUDA_VISIBLE_DEVICES=2,3,4,5
## Specify the four GPUs first:
python train_net.py --num-gpus 4 --config-file configs/coco/instance-segmentation/maskformer2_swin_tiny_bs16_50ep.yaml