Errors when loading someone else's ckpt in PyTorch

First error

Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Cause

The ckpt was most likely saved on a GPU, while you are loading it on a CPU-only machine, so the storage devices do not match. Adding the argument map_location="cpu" fixes it:

model.load_state_dict(torch.load("net.ckpt", map_location="cpu"))
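A slightly fuller sketch of the same fix, assuming a hypothetical model class Net and a checkpoint file net.ckpt (neither name is from the original post): pick the device at runtime and let map_location remap GPU-saved tensors when CUDA is unavailable.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# map_location remaps tensors that were saved on a GPU onto whatever device is actually available
state_dict = torch.load("net.ckpt", map_location=device)
model = Net()                      # hypothetical model class, stands in for your own network
model.load_state_dict(state_dict)
model.to(device)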

Second error

Error(s) in loading state_dict for HITNet_KITTI:
	Missing key(s) in state_dict: "feature_extractor.down_0.0.weight", "feature_extractor.down_0.0.bias", "feature_extractor.down_1.0.weight", "feature_extractor.down_1.0.bias", "feature_extractor.down_1.2.weight", "feature_extractor.down_1.2.bias", "feature_extractor.down_2.0.weight", "feature_extractor.down_2.0.bias", "feature_extractor.down_2.2.weight", "feature_extractor.down_2.2.bias", "feature_extractor.down_3.0.weight", "feature_extractor.down_3.0.bias", "feature_extractor.down_3.2.weight", "feature_extractor.down_3.2.bias", "feature_extractor.down_4.0.weight", "feature_extractor.down_4.0.bias", "feature_extractor.down_4.2.weight", "feature_extractor.down_4.2.bias", "feature_extractor.down_4.4.weight", "feature_extractor.down_4.4.bias", "feature_extractor.down_4.6.weight", "feature_extractor.down_4.6.bias", "feature_extractor.up_3.up_conv.0.weight", "feature_extractor.up_3.up_conv.0.bias", "feature_extractor.up_3.merge_conv.0.weight", "feature_extractor.up_3.merge_conv.0.bias", "feature_extractor.up_3.merge_conv.2.weight", "feature_extractor.up_3.merge_conv.2.bias", "feature_extractor.up_3.merge_conv.4.weight", "feature_extractor.up_3.merge_conv.4.bias", "feature_extractor.up_2.up_conv.0.weight", "feature_extractor.up_2.up_conv.0.bias", "feature_extractor.up_2.merge_conv.0.weight", "feature_extractor.up_2.merge_conv.0.bias", "feature_extractor.up_2.merge_conv.2.weight", "feature_extractor.up_2.merge_conv.2.bias", "feature_extractor.up_2.merge_conv.4.weight", "feature_extractor.up_2.merge_conv.4.bias", "feature_extractor.up_1.up_conv.0.weight", "feature_extractor.up_1.up_conv.0.bias", "feature_extractor.up_1.merge_conv.0.weight", "feature_extractor.up_1.merge_conv.0.bias", "feature_extractor.up_1.merge_conv.2.weight", "feature_extractor.up_1.merge_conv.2.bias", "feature_extractor.up_1.merge_conv.4.weight", "feature_extractor.up_1.merge_conv.4.bias", "feature_extractor.up_0.up_conv.0.weight", "feature_extractor.up_0.up_conv.0.bias", "feature_extractor.up_0.merge_conv.0.weight", "feature_extractor.up_0.merge_conv.0.bias", "feature_extractor.up_0.merge_conv.2.weight", "feature_extractor.up_0.merge_conv.2.bias", "feature_extractor.up_0.merge_conv.4.weight", "feature_extractor.up_0.merge_conv.4.bias", "level.0.init.conv_reduce.weight", "level.0.init.conv_reduce.bias", "level.0.init.conv_em.1.weight", "level.0.init.conv_em.1.bias", "level.0.init.conv_hyp.0.weight", "level.0.init.conv_hyp.0.bias", "level.0.prop.conv_neighbors.0.weight", "level.0.prop.conv_neighbors.0.bias", "level.0.prop.conv1.0.weight", "level.0.prop.conv1.0.bias", "level.0.prop.res_block.0.conv.0.weight", "level.0.prop.res_block.0.conv.0.bias", "level.0.prop.res_block.0.conv.2.weight", "level.0.prop.res_block.0.conv.2.bias", "level.0.prop.res_block.1.conv.0.weight", "level.0.prop.res_block.1.conv.0.bias", "level.0.prop.res_block.1.conv.2.weight", "level.0.prop.res_block.1.conv.2.bias", "level.0.prop.convn.weight", "level.0.prop.convn.bias", "level.1.init.conv_reduce.weight", "level.1.init.conv_reduce.bias", "level.1.init.conv_em.1.weight", "level.1.init.conv_em.1.bias", "level.1.init.conv_hyp.0.weight", "level.1.init.conv_hyp.0.bias", "level.1.prop.conv_neighbors.0.weight", "level.1.prop.conv_neighbors.0.bias", "level.1.prop.conv1.0.weight", "level.1.prop.conv1.0.bias", "level.1.prop.res_block.0.conv.0.weight", "level.1.prop.res_block.0.conv.0.bias", "level.1.prop.res_block.0.conv.2.weight", "level.1.prop.res_block.0.conv.2.bias", "level.1.prop.res_block.1.conv.0.weight", "level.1.prop.res_block.1.conv.0.bias", 
"level.1.prop.res_block.1.conv.2.weight", "level.1.prop.res_block.1.conv.2.bias", "level.1.prop.convn.weight", "level.1.prop.convn.bias", "level.2.init.conv_reduce.weight", "level.2.init.conv_reduce.bias", "level.2.init.conv_em.1.weight", "level.2.init.conv_em.1.bias", "level.2.init.conv_hyp.0.weight", "level.2.init.conv_hyp.0.bias", "level.2.prop.conv_neighbors.0.weight", "level.2.prop.conv_neighbors.0.bias", "level.2.prop.conv1.0.weight", "level.2.prop.conv1.0.bias", "level.2.prop.res_block.0.conv.0.weight", "level.2.prop.res_block.0.conv.0.bias", "level.2.prop.res_block.0.conv.2.weight", "level.2.prop.res_block.0.conv.2.bias", "level.2.prop.res_block.1.conv.0.weight", "level.2.prop.res_block.1.conv.0.bias", "level.2.prop.res_block.1.conv.2.weight", "level.2.prop.res_block.1.conv.2.bias", "level.2.prop.convn.weight", "level.2.prop.convn.bias", "level.3.init.conv_reduce.weight", "level.3.init.conv_reduce.bias", "level.3.init.conv_em.1.weight", "level.3.init.conv_em.1.bias", "level.3.init.conv_hyp.0.weight", "level.3.init.conv_hyp.0.bias", "level.3.prop.conv_neighbors.0.weight", "level.3.prop.conv_neighbors.0.bias", "level.3.prop.conv1.0.weight", "level.3.prop.conv1.0.bias", "level.3.prop.res_block.0.conv.0.weight", "level.3.prop.res_block.0.conv.0.bias", "level.3.prop.res_block.0.conv.2.weight", "level.3.prop.res_block.0.conv.2.bias", "level.3.prop.res_block.1.conv.0.weight", "level.3.prop.res_block.1.conv.0.bias", "level.3.prop.res_block.1.conv.2.weight", "level.3.prop.res_block.1.conv.2.bias", "level.3.prop.convn.weight", "level.3.prop.convn.bias", "level.4.init.conv_reduce.weight", "level.4.init.conv_reduce.bias", "level.4.init.conv_em.1.weight", "level.4.init.conv_em.1.bias", "level.4.init.conv_hyp.0.weight", "level.4.init.conv_hyp.0.bias", "level.4.prop.conv_neighbors.0.weight", "level.4.prop.conv_neighbors.0.bias", "level.4.prop.conv1.0.weight", "level.4.prop.conv1.0.bias", "level.4.prop.res_block.0.conv.0.weight", "level.4.prop.res_block.0.conv.0.bias", "level.4.prop.res_block.0.conv.2.weight", "level.4.prop.res_block.0.conv.2.bias", "level.4.prop.res_block.1.conv.0.weight", "level.4.prop.res_block.1.conv.0.bias", "level.4.prop.res_block.1.conv.2.weight", "level.4.prop.res_block.1.conv.2.bias", "level.4.prop.convn.weight", "level.4.prop.convn.bias", "refine.0.conv1x1.0.weight", "refine.0.conv1x1.0.bias", "refine.0.conv1.0.weight", "refine.0.conv1.0.bias", "refine.0.res_block.0.conv.0.weight", "refine.0.res_block.0.conv.0.bias", "refine.0.res_block.0.conv.2.weight", "refine.0.res_block.0.conv.2.bias", "refine.0.res_block.1.conv.0.weight", "refine.0.res_block.1.conv.0.bias", "refine.0.res_block.1.conv.2.weight", "refine.0.res_block.1.conv.2.bias", "refine.0.res_block.2.conv.0.weight", "refine.0.res_block.2.conv.0.bias", "refine.0.res_block.2.conv.2.weight", "refine.0.res_block.2.conv.2.bias", "refine.0.res_block.3.conv.0.weight", "refine.0.res_block.3.conv.0.bias", "refine.0.res_block.3.conv.2.weight", "refine.0.res_block.3.conv.2.bias", "refine.0.convn.weight", "refine.0.convn.bias", "refine.1.conv1x1.0.weight", "refine.1.conv1x1.0.bias", "refine.1.conv1.0.weight", "refine.1.conv1.0.bias", "refine.1.res_block.0.conv.0.weight", "refine.1.res_block.0.conv.0.bias", "refine.1.res_block.0.conv.2.weight", "refine.1.res_block.0.conv.2.bias", "refine.1.res_block.1.conv.0.weight", "refine.1.res_block.1.conv.0.bias", "refine.1.res_block.1.conv.2.weight", "refine.1.res_block.1.conv.2.bias", "refine.1.res_block.2.conv.0.weight", "refine.1.res_block.2.conv.0.bias", 
"refine.1.res_block.2.conv.2.weight", "refine.1.res_block.2.conv.2.bias", "refine.1.res_block.3.conv.0.weight", "refine.1.res_block.3.conv.0.bias", "refine.1.res_block.3.conv.2.weight", "refine.1.res_block.3.conv.2.bias", "refine.1.convn.weight", "refine.1.convn.bias", "refine.2.conv1x1.0.weight", "refine.2.conv1x1.0.bias", "refine.2.conv1.0.weight", "refine.2.conv1.0.bias", "refine.2.res_block.0.conv.0.weight", "refine.2.res_block.0.conv.0.bias", "refine.2.res_block.0.conv.2.weight", "refine.2.res_block.0.conv.2.bias", "refine.2.res_block.1.conv.0.weight", "refine.2.res_block.1.conv.0.bias", "refine.2.res_block.1.conv.2.weight", "refine.2.res_block.1.conv.2.bias", "refine.2.convn.weight", "refine.2.convn.bias". 
	Unexpected key(s) in state_dict: "epoch", "global_step", "pytorch-lightning_version", "state_dict", "callbacks", "optimizer_states", "lr_schedulers". 
  File "/home//文档/TinyHiTNet/models/hit_net_kitti.py", line 431, in <module>
    model.load_state_dict(torch.load("/home//文档/TinyHiTNet/ckpt/stereo_net.ckpt",map_location="cpu"))

Cause

This is because the ckpt is a PyTorch Lightning checkpoint: besides the weights it also stores bookkeeping such as epoch, optimizer_states and lr_schedulers, and the actual weights sit under its "state_dict" key. Adding strict=False tells load_state_dict to ignore the keys that do not match:
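One way to confirm this (a small diagnostic sketch, not from the original post) is to print the checkpoint's top-level keys; they should match the Unexpected key(s) listed above.

import torch

ckpt = torch.load("net.ckpt", map_location="cpu")
print(list(ckpt.keys()))   # e.g. ['epoch', 'global_step', 'pytorch-lightning_version', 'state_dict', ...]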

model.load_state_dict(torch.load("net.ckpt", map_location="cpu"), strict=False)
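Note that strict=False only silences the key mismatch; if the weights still come back uninitialized, a common pattern for Lightning checkpoints is to pull the nested state_dict out first. A minimal sketch, assuming the weights live under the "state_dict" key as the error above suggests; stripping a "model." prefix is an assumption about how the LightningModule wraps the network and may not apply to this repo:

import torch

ckpt = torch.load("net.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)   # fall back to the raw dict if there is no wrapper
# drop a possible "model." prefix added by the LightningModule wrapper (assumption)
state_dict = {k[len("model."):] if k.startswith("model.") else k: v
              for k, v in state_dict.items()}
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(missing, unexpected)                  # both lists should be (close to) empty if the load worked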