https://github.com/tensorflow/models/blob/master/research/slim/README.md
## Fine-tuning a model from an existing checkpoint
Rather than training from scratch, we'll often want to start from a pre-trained model and fine-tune it. To indicate a checkpoint from which to fine-tune, we'll call training with the `--checkpoint_path` flag and assign it an absolute path to a checkpoint file.
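For instance, resuming training on the same task from a pre-trained checkpoint might look like this (a sketch assuming the Slim `train_image_classifier.py` script; all paths here are placeholders to replace with your own):

```shell
# Fine-tune Inception V3 from a pre-trained checkpoint on the same
# (ImageNet) task, so all variables can be restored as-is.
$ python train_image_classifier.py \
    --train_dir=/tmp/train_logs \
    --dataset_name=imagenet \
    --dataset_split_name=train \
    --dataset_dir=/tmp/imagenet \
    --model_name=inception_v3 \
    --checkpoint_path=/tmp/my_checkpoints/inception_v3.ckpt
```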
When fine-tuning a model, we need to be careful about restoring checkpoint weights. In particular, when we fine-tune a model on a new task with a different number of output labels, we won't be able to restore the final logits (classifier) layer. For this, we'll use the `--checkpoint_exclude_scopes` flag. This flag prevents certain variables from being loaded. When fine-tuning on a classification task using a different number of classes than the trained model, the new model will have a final 'logits' layer whose dimensions differ from the pre-trained model. For example, if fine-tuning an ImageNet-trained model on Flowers, the pre-trained logits layer will have dimensions `[2048 x 1001]` but our new logits layer will have dimensions `[2048 x 5]`. Consequently, this flag indicates to TF-Slim to avoid loading these weights from the checkpoint.
Keep in mind that warm-starting from a checkpoint affects the model's weights only during the initialization of the model.