Face Recognition with Deep Learning

Preface

https://github.com/davidsandberg/facenet
May 26, 2020
The facenet project:

  1. Before entering the project, activate the virtual environment you created.
  2. Enter the project directory and set the Python path, specifically:
     export PYTHONPATH=~/facenet/src
  3. Align the images to 160×160 pixels, specifically:
     python src/align/align_dataset_mtcnn.py ~/data/lfw/ ~/data/lfw_160 --image_size 160 --margin 32

(facenet) curt@dzb:~/facenet$ python src/align/align_dataset_mtcnn.py ~/facenet/data/lfw/ ~/facenet/data/lfw_160 --image_size 160 --margin 32

When training is started, a subdirectory for the training session, named after the date/time training was started in the format yyyymmdd-hhmm, is created in the directories log_base_dir and models_base_dir. The parameter data_dir is used to point out the location of the training dataset. It should be noted that the union of several datasets can be used by separating the paths with a colon. Finally, the descriptor of the inference network is given by the model_def parameter. In the training command shown further below, models.inception_resnet_v1 points to the inception_resnet_v1 module in the package models. This module must define a function inference(images, …), where images is a placeholder for the input images (dimensions <?,160,160,3> in the case of Inception-ResNet-v1), and which returns a reference to the embeddings variable.

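To make the model_def contract concrete, here is a minimal, hypothetical sketch of such a module, written against the TensorFlow 1.x API that facenet targets. It is not the real inception_resnet_v1; the layer sizes and the parameters other than images are assumptions about how the training script calls it.

    # Hypothetical model_def module: exposes inference(images, ...) mapping
    # <?,160,160,3> images to a pre-embedding ("prelogits") tensor.
    import tensorflow as tf  # assumes TensorFlow 1.x, as used by facenet

    def inference(images, keep_probability=1.0, phase_train=True,
                  bottleneck_layer_size=512, reuse=None):
        with tf.variable_scope('ToyModel', reuse=reuse):
            net = tf.layers.conv2d(images, 32, 3, strides=2,
                                   activation=tf.nn.relu)
            net = tf.layers.flatten(net)
            net = tf.layers.dropout(net, rate=1.0 - keep_probability,
                                    training=phase_train)
            # Pre-logits layer that the training script normalizes into
            # the final embeddings.
            prelogits = tf.layers.dense(net, bottleneck_layer_size)
        # Returned as a pair on the assumption that callers also expect
        # end_points; they are omitted in this sketch.
        return prelogits, None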
If the parameter lfw_dir is set to point to the base directory of the LFW dataset, the model is evaluated on LFW once every 1000 batches. For information on how to evaluate an existing model on LFW, please refer to the Validate-on-LFW page. If no evaluation on LFW is desired during training it is fine to leave the lfw_dir parameter empty. However, please note that the LFW dataset used here should have been aligned in the same way as the training dataset.
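For reference, a trained model can later be checked on LFW with src/validate_on_lfw.py from the same repo (see the Validate-on-LFW page for the full set of options); the aligned LFW directory and the model directory below are placeholders, not paths from this post:

    python src/validate_on_lfw.py ~/facenet/data/lfw_160 ~/facenet/models/<model-dir>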
The training will continue until max_nrof_epochs is reached or training is terminated by the learning rate schedule file (see below). In this example training stops after 90 epochs. With an Nvidia Pascal Titan X GPU, Tensorflow r1.7, CUDA 8.0 and CuDNN 6.0 and the inception-resnet-v1 model this takes roughly 10 hours.
To improve the performance of the final model, the learning rate is decreased by a factor of 10 when the training starts to converge. This is done through a learning rate schedule defined in a text file pointed to by the parameter learning_rate_schedule_file, while also setting the parameter learning_rate to a negative value. For simplicity, the learning rate schedule used in this example, data/learning_rate_schedule_classifier_casia.txt, is also included in the repo. The schedule looks like this:
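The schedule file itself is not reproduced in this post. Based on the description that follows, data/learning_rate_schedule_classifier_casia.txt has roughly this shape; the 60: 0.005 and 91: -1 entries come from the text below, the other values are assumptions:

    # epoch: learning rate
    0:  0.05
    60: 0.005
    80: 0.0005
    91: -1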
Here, the first column is the epoch number and the second column is the learning rate, meaning that when the epoch number is in the range 60…80 the learning rate is set to 0.005. For epoch 91 the learning rate is set to -1 and this will cause training to stop.
The L2 weight decay is set to 5e-4 and the dropout keep probability is set to 0.8. In addition to this regularization, an L1 norm loss is applied to the prelogits activations (--prelogits_norm_loss_factor 5e-4). This makes the activations a bit more sparse and improves the model's ability to generalize a little.
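As a rough illustration, the extra term that --prelogits_norm_loss_factor contributes amounts to a scaled L1 penalty on the pre-logits activations. The sketch below shows the shape of such a term under that assumption; it is not the repo's literal code, and the eps stabilizer is an assumption:

    import tensorflow as tf  # TensorFlow 1.x style

    def prelogits_norm_loss(prelogits, factor=5e-4, eps=1e-4):
        # Mean L1 norm of the pre-logits activations, scaled by the
        # --prelogits_norm_loss_factor value and added to the total loss.
        norm = tf.reduce_mean(tf.norm(tf.abs(prelogits) + eps, ord=1.0, axis=1))
        return factor * norm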

The example training command referenced above (the actual command used in this post, with local paths, follows further below):

python src/train_softmax.py \
--logs_base_dir ~/logs/facenet/ \
--models_base_dir ~/models/facenet/ \
--data_dir ~/datasets/casia/casia_maxpy_mtcnnalign_182_160/ \
--image_size 160 \
--model_def models.inception_resnet_v1 \
--lfw_dir ~/datasets/lfw/lfw_mtcnnalign_160/ \
--optimizer ADAM \
--learning_rate -1 \
--max_nrof_epochs 150 \
--keep_probability 0.8 \
--random_crop \
--random_flip \
--use_fixed_image_standardization \
--learning_rate_schedule_file data/learning_rate_schedule_classifier_casia.txt \
--weight_decay 5e-4 \
--embedding_size 512 \
--lfw_distance_metric 1 \
--lfw_use_flipped_images \
--lfw_subtract_mean \
--validation_set_split_ratio 0.05 \
--validate_every_n_epochs 5 \
--prelogits_norm_loss_factor 5e-4
The local dataset directories used for the actual run:
/home/curt/facenet/data/CASIA_160
/home/curt/facenet/data/lfw_160

Training:
python src/train_softmax.py --logs_base_dir ~/facenet/logs/ --models_base_dir ~/facenet/models/ --data_dir ~/facenet/data/CASIA_160/ --image_size 160 --model_def models.inception_resnet_v1 --lfw_dir ~/facenet/data/lfw_160/ --optimizer ADAM --learning_rate -1 --max_nrof_epochs 150 --keep_probability 0.8 --random_crop --random_flip --use_fixed_image_standardization --learning_rate_schedule_file data/learning_rate_schedule_classifier_casia.txt --weight_decay 5e-4 --embedding_size 512 --lfw_distance_metric 1 --lfw_use_flipped_images --lfw_subtract_mean --validation_set_split_ratio 0.05 --validate_every_n_epochs 5 --prelogits_norm_loss_factor 5e-4
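Training progress (losses, accuracy and the periodic LFW results) can be followed with TensorBoard pointed at the logs directory, assuming TensorBoard is installed in the same environment:

    tensorboard --logdir ~/facenet/logs/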
