Source: https://blog.csdn.net/weixin_36474809/article/details/82752199
Reference: I followed the code from Xing Xiangrui's blog post "MTCNN (Part 1): Python code training and running" (link above) and ran into several problems. I could not reach the author or leave a comment on the post, so I am recording the issues here in the hope that someone can answer them.
GitHub repo for the post's code: https://github.com/AITTSMD/MTCNN-Tensorflow
Workflow:
The repo's README: https://github.com/AITTSMD/MTCNN-Tensorflow/blob/master/README.md
Prepare For Training Data
1. Download Wider Face Training part only from Official Website, unzip to replace WIDER_train and put it into the prepare_data folder.
2. Download landmark training data from here, unzip and put them into the prepare_data folder.
3. Run prepare_data/gen_12net_data.py to generate training data (Face Detection Part) for PNet.
4. Run gen_landmark_aug_12.py to generate training data (Face Landmark Detection Part) for PNet.
5. Run gen_imglist_pnet.py to merge the two parts of training data.
6. Run gen_PNet_tfrecords.py to generate the tfrecord for PNet.
7. After training PNet, run gen_hard_example to generate training data (Face Detection Part) for RNet.
8. Run gen_landmark_aug_24.py to generate training data (Face Landmark Detection Part) for RNet.
9. Run gen_imglist_rnet.py to merge the two parts of training data.
10. Run gen_RNet_tfrecords.py to generate tfrecords for RNet. (You should run this script four times to generate the tfrecords of neg, pos, part and landmark respectively.)
11. After training RNet, run gen_hard_example to generate training data (Face Detection Part) for ONet.
12. Run gen_landmark_aug_48.py to generate training data (Face Landmark Detection Part) for ONet.
13. Run gen_imglist_onet.py to merge the two parts of training data.
14. Run gen_ONet_tfrecords.py to generate tfrecords for ONet. (You should run this script four times to generate the tfrecords of neg, pos, part and landmark respectively.)
Problems:
1. When running the first six steps, a few places in the code need fixing.
First, the relative import paths in the various scripts: for example, change `from prepare_data.utils import IoU` to `from utils import IoU`, dropping the `prepare_data` prefix (the scripts are run from inside that directory).
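For reference, the `IoU` helper being imported computes intersection-over-union between one candidate box and an array of boxes. A minimal numpy sketch, with the signature assumed from common MTCNN implementations (not the repo's exact code):

```python
import numpy as np

def IoU(box, boxes):
    """Intersection-over-union between one box [x1, y1, x2, y2]
    and an (N, 4) array of boxes. Illustrative sketch only."""
    box_area = (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
    areas = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1)
    # Corners of the intersection rectangles
    xx1 = np.maximum(box[0], boxes[:, 0])
    yy1 = np.maximum(box[1], boxes[:, 1])
    xx2 = np.minimum(box[2], boxes[:, 2])
    yy2 = np.minimum(box[3], boxes[:, 3])
    # Width/height clamp to 0 when the boxes do not overlap
    w = np.maximum(0, xx2 - xx1 + 1)
    h = np.maximum(0, yy2 - yy1 + 1)
    inter = w * h
    return inter / (box_area + areas - inter)
```

The data-generation scripts use this to label crops as neg/part/pos against the WIDER ground-truth boxes.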
2. The change to train_models/train.py that the original author mentioned.
def image_color_distort(inputs):
    inputs = tf.image.random_contrast(inputs, lower=0.5, upper=1.5)
    inputs = tf.image.random_brightness(inputs, max_delta=0.2)
    inputs = tf.image.random_hue(inputs, max_delta=0.2)
    inputs = tf.image.random_saturation(inputs, lower=0.5, upper=1.5)
    return inputs
Either commenting out or deleting the last two augmentation lines works:
def image_color_distort(inputs):
    inputs = tf.image.random_contrast(inputs, lower=0.5, upper=1.5)
    inputs = tf.image.random_brightness(inputs, max_delta=0.2)
    # inputs = tf.image.random_hue(inputs, max_delta=0.2)
    # inputs = tf.image.random_saturation(inputs, lower=0.5, upper=1.5)
    return inputs
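For intuition, the two augmentations that remain can be described with a short numpy sketch of the underlying math (an illustration of what tf.image.random_contrast and tf.image.random_brightness do with fixed factors, not TensorFlow's implementation, which samples the factors randomly):

```python
import numpy as np

def color_distort(img, contrast=1.5, delta=0.2):
    """img: float array in [0, 1], shape (H, W, C)."""
    # contrast: scale each channel's deviation from its mean
    mean = img.mean(axis=(0, 1), keepdims=True)
    out = (img - mean) * contrast + mean
    # brightness: add a constant offset to every pixel
    out = out + delta
    return np.clip(out, 0.0, 1.0)
```

Hue and saturation shifts, by contrast, rotate/rescale colors in HSV space, which is presumably why they were the ones the author dropped when they caused trouble.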
3. The third problem is a TensorFlow one. With my setup (Python 3.6), the following error is raised:
AttributeError: module 'tensorboard.plugins.projector' has no attribute 'ProjectorConfig'
The failing code is hit when train_models/train_PNet.py imports train_models/train.py; the TensorFlow import in train.py needs to be changed.
The original import in train.py:
from tensorboard.plugins import projector
Modified (the old import deleted or commented out):
#from tensorboard.plugins import projector
from tensorflow.contrib.tensorboard.plugins import projector
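A more version-tolerant alternative (my own suggestion, not from the original post) is to try both import locations and keep whichever one resolves, e.g. with a small helper:

```python
import importlib

def import_first(*names):
    """Import and return the first module in `names` that resolves."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of {} could be imported".format(names))

# In train.py one could then write:
# projector = import_first(
#     "tensorflow.contrib.tensorboard.plugins.projector",  # TF 1.x contrib path
#     "tensorboard.plugins.projector",                     # standalone tensorboard
# )
```

This keeps the script working across TensorFlow versions where the projector module moved between packages.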
4. Step 7 ("After training PNet, run gen_hard_example to generate training data (Face Detection Part) for RNet") raised questions for me (perhaps just my shallow knowledge or poor English; in any case I could not get further).
First: after finishing step 6, am I supposed to go to the train_models folder and run train_PNet.py, then go to the prepare_data folder and run gen_hard_example.py (its `net` default as downloaded from GitHub is ONet; I changed it to RNet)? Or is gen_hard_example.py meant to be run directly? Run directly, it only creates a no_LM24 folder under DATA containing three empty subfolders (neg, pos, part). How is this step supposed to be carried out?
The second question builds on the first. If I run train_PNet.py first and then gen_hard_example.py, I get the error AssertionError: the params dictionary is not valid, pointing at …/data/MTCNN_model/PNet_no_Landmark/PNet-18, which does not exist.
The data folder in the GitHub download contains the three folders **PNet_landmark, RNet_landmark, ONet_landmark** (with their contents), but not *PNet_no_Landmark, RNet_no_Landmark, ONet_no_Landmark*.
If I change the --prefix defaults from "../data/MTCNN_model/PNet_no_Landmark/PNet", "../data/MTCNN_model/RNet_no_Landmark/PNet", "../data/MTCNN_model/ONet_no_Landmark/PNet" to "../data/MTCNN_model/PNet_landmark/PNet", "../data/MTCNN_model/RNet_landmark/PNet", "../data/MTCNN_model/ONet_landmark/PNet", this step runs, but the next few steps then fail in other ways (for example, gen_landmark_aug_24.py in step 8 will not run).
The key question is how to solve problem 4: is the original author's GitHub repo missing the RNet_no_Landmark (etc.) folders under data/MTCNN_model, or did I make a mistake somewhere in the earlier steps?