TypeError: __init__() got an unexpected keyword argument 'img_scale'

When running the OpenMMLab segmentation test script, the following error is raised: TypeError: __init__() got an unexpected keyword argument 'img_scale'

Source of the error: the `test_pipeline` in the config file contains an `img_scale` argument.
Analysis: the error is caused by an OpenMMLab version update. The config file was written for the old version of OpenMMLab (MMSegmentation 0.x), while the test script runs in an environment with the new version (MMSegmentation 1.x), whose transforms no longer accept `img_scale`.
Fix: remove `img_scale` and rewrite the pipelines in the new-style format, as shown below.
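For reference, the change boils down to replacing the old-style `MultiScaleFlipAug`/`img_scale` test pipeline with the new-style one. The following is a minimal sketch of just the affected block, taken from the two full configs shown further down:

```python
# Old-style (MMSegmentation 0.x) test pipeline: `img_scale` was a valid argument.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(512, 512),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize',
                 mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]

# New-style (MMSegmentation 1.x) test pipeline: `img_scale` is gone.
# `Resize` now takes `scale`, normalization moves into the model's
# data_preprocessor, and packing is done by `PackSegInputs`.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 512), keep_ratio=True),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
```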

Below is the config file as used with the old version of OpenMMLab:

```python
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    pretrained='open-mmlab://resnet50_v1c',
    backbone=dict(
        type='ResNetV1c',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        dilations=(1, 1, 2, 4),
        strides=(1, 2, 1, 1),
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=False,
        style='pytorch',
        contract_dilation=True),
    decode_head=dict(
        type='PSPHead',
        in_channels=2048,
        in_index=3,
        channels=512,
        pool_scales=(1, 2, 3, 6),
        dropout_ratio=0.1,
        num_classes=2,
        norm_cfg=dict(type='BN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=dict(
        type='FCNHead',
        in_channels=1024,
        in_index=2,
        channels=256,
        num_convs=1,
        concat_input=False,
        dropout_ratio=0.1,
        num_classes=2,
        norm_cfg=dict(type='BN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
dataset_type = 'PlasmabubbleDataset'
data_root = '/home/zhongjia/plasmabubble/data//img_2_geo/north/plasmabubble_voc/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(512, 512), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(512, 512),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=8,
    workers_per_gpu=1,
    train=dict(
        type='PlasmabubbleDataset',
        data_root=
        '/home/zhongjia/plasmabubble/data//img_2_geo/north/plasmabubble_voc/',
        img_dir='JPEGImages',
        ann_dir='SegmentationClassPNG',
        split='splits/train.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations'),
            dict(type='Resize', img_scale=(512, 512), ratio_range=(0.5, 2.0)),
            dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
            dict(type='RandomFlip', prob=0.5),
            dict(type='PhotoMetricDistortion'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_semantic_seg'])
        ]),
    val=dict(
        type='PlasmabubbleDataset',
        data_root=
        '/home/zhongjia/plasmabubble/data//img_2_geo/north/plasmabubble_voc/',
        img_dir='JPEGImages',
        ann_dir='SegmentationClassPNG',
        split='splits/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(512, 512),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(
        type='PlasmabubbleDataset',
        data_root=
        '/home/zhongjia/plasmabubble/data//img_2_geo/north/plasmabubble_voc/',
        img_dir='JPEGImages',
        ann_dir='SegmentationClassPNG',
        split='splits/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(512, 512),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]))
log_config = dict(
    interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_20k_voc12aug/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958-ed5dfbd9.pth'
resume_from = None
workflow = [('train', 1)]
cudnn_benchmark = True
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optimizer_config = dict()
lr_config = dict(policy='poly', power=0.9, min_lr=0.0001, by_epoch=False)
runner = dict(type='IterBasedRunner', max_iters=3000)
checkpoint_config = dict(by_epoch=False, interval=3000)
evaluation = dict(interval=100, metric='mIoU', pre_eval=True)
work_dir = '/home/zhongjia/plasmabubble/code/openmmlab/mmsegmentation/work_dirs/dataset1/pspnet_r50-d8_512x512_20k_job4'
seed = 0
gpu_ids = [1]
```

Below is the modified config file for the new version of OpenMMLab:

```python
norm_cfg = dict(type='SyncBN', requires_grad=True)
norm_cfg = dict(type='BN', requires_grad=True)
data_preprocessor = dict(
    type='SegDataPreProcessor',
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True,
    pad_val=0,
    seg_pad_val=255,
    size=(512, 512))
model = dict(
    type='EncoderDecoder',
    data_preprocessor=dict(
        type='SegDataPreProcessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_val=0,
        seg_pad_val=255,
        size=(512, 512)),
    pretrained='open-mmlab://resnet50_v1c',
    backbone=dict(
        type='ResNetV1c',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        dilations=(1, 1, 2, 4),
        strides=(1, 2, 1, 1),
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=False,
        style='pytorch',
        contract_dilation=True),
    decode_head=dict(
        type='PSPHead',
        in_channels=2048,
        in_index=3,
        channels=512,
        pool_scales=(1, 2, 3, 6),
        dropout_ratio=0.1,
        num_classes=2,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=dict(
        type='FCNHead',
        in_channels=1024,
        in_index=2,
        channels=256,
        num_convs=1,
        concat_input=False,
        dropout_ratio=0.1,
        num_classes=2,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
dataset_type = 'PascalVOCDataset'
data_root = '/code_data/img_2_geo/north/plasmabubble_voc/'
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(
        type='RandomResize',
        scale=(2048, 512),
        ratio_range=(0.5, 2.0),
        keep_ratio=True),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Pad', size=(512, 512)),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 512), keep_ratio=True),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
img_ratios = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
tta_pipeline = [
    dict(type='LoadImageFromFile', backend_args=None),
    dict(
        type='TestTimeAug',
        transforms=[
            [{'type': 'Resize', 'scale_factor': 0.5, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 0.75, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.0, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.25, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.5, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.75, 'keep_ratio': True}],
            [{'type': 'RandomFlip', 'prob': 0.0, 'direction': 'horizontal'},
             {'type': 'RandomFlip', 'prob': 1.0, 'direction': 'horizontal'}],
            [{'type': 'LoadAnnotations'}],
            [{'type': 'PackSegInputs'}]
        ])
]
dataset_train = dict(
    type=dataset_type,
    data_root=data_root,
    data_prefix=dict(img_path='JPEGImages', seg_map_path='SegmentationClass'),
    ann_file='ImageSets/Segmentation/train.txt',
    pipeline=[
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations'),
        dict(
            type='RandomResize',
            scale=(2048, 512),
            ratio_range=(0.5, 2.0),
            keep_ratio=True),
        dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
        dict(type='RandomFlip', prob=0.5),
        dict(type='PhotoMetricDistortion'),
        dict(type='Pad', size=(512, 512)),
        dict(type='PackSegInputs')
    ])
dataset_aug = dict(
    type=dataset_type,
    data_root=data_root,
    data_prefix=dict(
        img_path='JPEGImages', seg_map_path='SegmentationClassAug'),
    ann_file='ImageSets/Segmentation/aug.txt',
    pipeline=[
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations'),
        dict(
            type='RandomResize',
            scale=(2048, 512),
            ratio_range=(0.5, 2.0),
            keep_ratio=True),
        dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
        dict(type='RandomFlip', prob=0.5),
        dict(type='PhotoMetricDistortion'),
        dict(type='Pad', size=(512, 512)),
        dict(type='PackSegInputs')
    ])
img_dir = 'JPEGImages'
ann_dir = 'SegmentationClassPNG'
split_dir = 'splits'
train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type='ConcatDataset',
        datasets=[
            dict(
                type=dataset_type,
                data_root=data_root,
                data_prefix=dict(
                    img_path='JPEGImages',
                    seg_map_path='SegmentationClassPNG'),
                ann_file='splits/train.txt',
                pipeline=[
                    dict(type='LoadImageFromFile'),
                    dict(type='LoadAnnotations'),
                    dict(
                        type='RandomResize',
                        scale=(2048, 512),
                        ratio_range=(0.5, 2.0),
                        keep_ratio=True),
                    dict(
                        type='RandomCrop',
                        crop_size=(512, 512),
                        cat_max_ratio=0.75),
                    dict(type='RandomFlip', prob=0.5),
                    dict(type='PhotoMetricDistortion'),
                    dict(type='Pad', size=(512, 512)),
                    dict(type='PackSegInputs')
                ]),
            dict(
                type=dataset_type,
                data_root=data_root,
                data_prefix=dict(
                    img_path='JPEGImages',
                    seg_map_path='SegmentationClassPNG'),
                ann_file='splits/aug.txt',
                pipeline=[
                    dict(type='LoadImageFromFile'),
                    dict(type='LoadAnnotations'),
                    dict(
                        type='RandomResize',
                        scale=(2048, 512),
                        ratio_range=(0.5, 2.0),
                        keep_ratio=True),
                    dict(
                        type='RandomCrop',
                        crop_size=(512, 512),
                        cat_max_ratio=0.75),
                    dict(type='RandomFlip', prob=0.5),
                    dict(type='PhotoMetricDistortion'),
                    dict(type='Pad', size=(512, 512)),
                    dict(type='PackSegInputs')
                ])
        ]))
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='JPEGImages', seg_map_path='SegmentationClassPNG'),
        ann_file='splits/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 512), keep_ratio=True),
            dict(type='LoadAnnotations'),
            dict(type='PackSegInputs')
        ]))
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='JPEGImages', seg_map_path='SegmentationClassPNG'),
        ann_file='splits/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 512), keep_ratio=True),
            dict(type='LoadAnnotations'),
            dict(type='PackSegInputs')
        ]))
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
default_scope = 'mmseg'
env_cfg = dict(
    cudnn_benchmark=True,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='SegLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    name='visualizer')
log_processor = dict(by_epoch=False)
log_level = 'INFO'
load_from = 'https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_20k_voc12aug/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958-ed5dfbd9.pth'
resume = False
tta_model = dict(type='SegTTAModel')
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005),
    clip_grad=None)
param_scheduler = [
    dict(
        type='PolyLR',
        eta_min=0.0001,
        power=0.9,
        begin=0,
        end=20000,
        by_epoch=False)
]
train_cfg = dict(type='IterBasedTrainLoop', max_iters=20000, val_interval=2000)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50, log_metric_by_epoch=False),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', by_epoch=False, interval=100),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='SegVisualizationHook'))
work_dir = '/code_data/code/code/openmmlab/mmsegmentation/work_dirs/dataset1/pspnet_r50-d8_512x512_20k_job4'
gpu_ids = range(0, 4)
```
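As a quick sanity check before re-running the test script, the modified config can be loaded with mmengine to confirm that no pipeline step still carries an `img_scale` argument. This is a minimal sketch only; the config filename below is a placeholder for wherever the file above is saved.

```python
# Minimal sanity check (sketch): load the updated config with mmengine and
# verify that no transform in the test pipeline still uses `img_scale`.
# 'my_pspnet_config.py' is a placeholder for the actual config file path.
from mmengine.config import Config

cfg = Config.fromfile('my_pspnet_config.py')
for step in cfg.test_pipeline:
    # each step is a dict-like transform config; `in` checks its keys
    assert 'img_scale' not in step, f'img_scale still present in {step}'
print('test_pipeline is clean:', cfg.test_pipeline)
```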

