TensorFlow model conversion

This article describes how to fine-tune a TensorFlow model from an existing checkpoint, covering how to start fine-tuning from a pre-trained model, evaluate the model's performance, and export the inference graph. It also covers using a specific flag to avoid loading mismatched weights, and running label_image in C++.

https://github.com/tensorflow/models/blob/master/research/slim/README.md



Fine-tuning a model from an existing checkpoint

Rather than training from scratch, we'll often want to start from a pre-trained model and fine-tune it. To indicate a checkpoint from which to fine-tune, we'll call training with the --checkpoint_path flag and assign it an absolute path to a checkpoint file.
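As a rough sketch of what warm-starting from --checkpoint_path amounts to in TF 1.x code (the API family the slim scripts are built on), the snippet below restores every variable in the graph from a checkpoint before fine-tuning continues. The checkpoint path and the stand-in variable are illustrative, not taken from the README.

```python
import tensorflow as tf  # TF 1.x style, as used by the slim training scripts

# Hypothetical path -- substitute your own pre-trained checkpoint.
CHECKPOINT_PATH = "/tmp/my_checkpoints/inception_v3.ckpt"

# Stand-in for the real model graph (normally built with TF-Slim).
weights = tf.get_variable("weights", shape=[2048, 1001])

# A Saver constructed with no arguments restores every variable in the graph.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Warm-start: overwrite the freshly initialized values with the
    # weights stored in the pre-trained checkpoint.
    saver.restore(sess, CHECKPOINT_PATH)
    # ... the fine-tuning training loop would run from here ...
```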

When fine-tuning a model, we need to be careful about restoring checkpoint weights. In particular, when we fine-tune a model on a new task with a different number of output labels, we won't be able to restore the final logits (classifier) layer. For this, we'll use the --checkpoint_exclude_scopes flag. This flag prevents certain variables from being loaded. When fine-tuning on a classification task using a different number of classes than the trained model, the new model will have a final 'logits' layer whose dimensions differ from the pre-trained model. For example, if fine-tuning an ImageNet-trained model on Flowers, the pre-trained logits layer will have dimensions [2048 x 1001] but our new logits layer will have dimensions [2048 x 5]. Consequently, this flag indicates to TF-Slim to avoid loading these weights from the checkpoint.
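A minimal sketch of the idea behind --checkpoint_exclude_scopes, assuming a TF 1.x environment with tf.contrib.slim and InceptionV3-style scope names: variables under the excluded scopes keep their fresh initialization, while everything else is restored from the checkpoint. The scope layout and checkpoint path here are stand-ins for illustration.

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim  # TF 1.x contrib

CHECKPOINT_PATH = "/tmp/my_checkpoints/inception_v3.ckpt"  # hypothetical path

# Stand-in variables mimicking an InceptionV3-style scope layout.
with tf.variable_scope("InceptionV3"):
    conv_weights = tf.get_variable("Conv2d_1a/weights", shape=[3, 3, 3, 32])
    with tf.variable_scope("Logits"):
        # New 5-class classifier: [2048 x 5] instead of the checkpoint's
        # [2048 x 1001], so it cannot be restored and must be excluded.
        logits_weights = tf.get_variable("weights", shape=[2048, 5])

# Everything except the excluded scopes will be restored.
variables_to_restore = slim.get_variables_to_restore(
    exclude=["InceptionV3/Logits", "InceptionV3/AuxLogits"])

# Build a function that assigns the selected variables from the checkpoint.
init_fn = slim.assign_from_checkpoint_fn(CHECKPOINT_PATH, variables_to_restore)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    init_fn(sess)  # pre-trained weights everywhere except the new classifier
    # ... fine-tune on the new 5-class task from here ...
```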

Keep in mind that warm-starting from a checkpoint affects the model's weights only during the initialization of the model; once training has begun, subsequent restarts restore from the new checkpoints written to the training directory rather than from --checkpoint_path.
