Building YOLOv3 with TensorFlow on Windows

Testing the code:

First, download the source code:

git clone https://github.com/YunYang1994/tensorflow-yolov3.git

Then run

cd tensorflow-yolov3
pip install -r ./docs/requirements.txt

to install the required packages. One caveat: installing everything directly from ./docs/requirements.txt often causes problems and can break the package versions you already have. The YOLOv3 environment is similar to the Faster R-CNN one, so it is recommended to follow the earlier Faster R-CNN environment-setup article and install the necessary packages first. Next, download the pretrained weights yolov3_coco.tar.gz from https://pan.baidu.com/s/11mwiUy8KotjUVQXqkGGPFQ&shfl=sharepset#list/path=%2F and extract the archive into the checkpoint folder.
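
The archive can also be unpacked directly from Python; the following is just a convenience sketch using the standard library, assuming yolov3_coco.tar.gz was downloaded into the repository root (tensorflow-yolov3/):

# Convenience sketch: unpack the downloaded weights into the checkpoint folder.
# Assumes yolov3_coco.tar.gz sits in the repository root (tensorflow-yolov3/).
import tarfile

with tarfile.open("yolov3_coco.tar.gz", "r:gz") as tar:
    tar.extractall(path="./checkpoint")

With the weights in place, go back to the tensorflow-yolov3/ folder and run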

python convert_weight.py
python freeze_graph.py

to convert the ckpt checkpoint into a frozen pb model. (The difference between the two persistence formats is that a ckpt saves the model structure and the weights separately, which is convenient during training, while a pb file is a serialized graph_def, which is convenient for deployment and offline inference. The freeze_graph.py script is provided to convert the ckpt files into a pb file.)
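
Conceptually, freezing restores the variables from the checkpoint and bakes them into the graph definition as constants. The following is a minimal sketch of that idea using the TF 1.x API; the checkpoint path and output node names below are illustrative placeholders, not necessarily the exact ones used by the repo's freeze_graph.py:

# Minimal sketch of ckpt -> frozen pb conversion (TF 1.x API).
# The checkpoint path and output node names are placeholders.
import tensorflow as tf

ckpt_file = "./checkpoint/yolov3_coco_demo.ckpt"
pb_file = "./yolov3_coco.pb"
output_node_names = ["pred_sbbox/concat_2", "pred_mbbox/concat_2", "pred_lbbox/concat_2"]

with tf.Session(graph=tf.Graph()) as sess:
    saver = tf.train.import_meta_graph(ckpt_file + ".meta")
    saver.restore(sess, ckpt_file)
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile(pb_file, "wb") as f:
        f.write(frozen_graph_def.SerializeToString())

Once the pb model has been produced, you can run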

python image_demo.py

to test the detector. As the source code shows, after loading the model the script runs detection on ./docs/images/road.jpeg and displays the result.

To detect other images, either replace the file at that path or point the script at a different file, as sketched below.
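
This is a hedged sketch of how image_demo.py selects and loads its test image (not an exact quote of the script); change image_path to run detection on your own file:

# Hedged sketch of the image selection in image_demo.py.
import cv2

image_path = "./docs/images/road.jpeg"      # replace with your own image file
original_image = cv2.imread(image_path)     # BGR array fed to the detector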

Similarly, running

python video_demo.py

demonstrates detection on a video file; the video is located at ./docs/images/road.mp4.
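
At its core the video demo is an OpenCV capture loop along the following lines (a hedged sketch with the detection call elided); switching to a webcam is usually just a matter of passing 0 as the capture source:

# Hedged sketch of the capture loop in video_demo.py; detection is elided.
import cv2

video_path = "./docs/images/road.mp4"   # or 0 to read from a webcam
vid = cv2.VideoCapture(video_path)
while True:
    return_value, frame = vid.read()
    if not return_value:
        break
    # ... run the YOLOv3 detector on `frame` and draw the predicted boxes ...
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
vid.release()
cv2.destroyAllWindows()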

Training the model:

Training uses the PASCAL VOC dataset; for downloading it, refer to the Faster R-CNN setup article. In my case the dataset folder is located at:

F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\

Modify the voc_test.txt and voc_train.txt files under \data\dataset\ so that each line contains an image path followed by its boxes in the form xmin,ymin,xmax,ymax,class_id, for example (a conversion sketch follows the list):

F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000005.jpg 263,211,324,339,8 165,264,253,372,8 241,194,295,299,8
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000007.jpg 141,50,500,330,6
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000009.jpg 69,172,270,330,12 150,141,229,284,14 285,201,327,331,14 258,198,297,329,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000012.jpg 156,97,351,270,6
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000016.jpg 92,72,305,473,1
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000017.jpg 185,62,279,199,14 90,78,403,336,12
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000019.jpg 231,88,483,256,7 11,113,266,259,7
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000020.jpg 33,148,371,416,6
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000021.jpg 1,235,182,388,11 210,36,336,482,14 46,82,170,365,14 11,181,142,419,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000023.jpg 9,230,245,500,1 230,220,334,500,1 2,1,117,369,14 3,2,243,462,14 225,1,334,486,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000024.jpg 196,165,489,247,18
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000026.jpg 90,125,337,212,6
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000030.jpg 36,205,180,289,1 51,160,150,292,14 295,138,450,290,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000032.jpg 104,78,375,183,0 133,88,197,123,0 195,180,213,229,14 26,189,44,238,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000033.jpg 9,107,499,263,0 421,200,482,226,0 325,188,411,223,0
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000034.jpg 116,167,360,400,18 141,153,333,229,18
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000035.jpg 1,96,191,361,14 218,98,465,318,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000036.jpg 27,79,319,344,11
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000039.jpg 156,89,344,279,19
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000041.jpg 363,47,432,107,19 216,92,307,302,14 164,148,227,244,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000001.jpg 48,240,195,371,11 8,12,352,498,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000002.jpg 139,200,207,301,18
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000003.jpg 123,155,215,195,17 239,156,307,205,8
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000004.jpg 13,311,84,362,6 362,330,500,389,6 235,328,334,375,6 175,327,252,364,6 139,320,189,359,6 108,325,150,353,6 84,323,121,350,6
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000006.jpg 187,135,282,242,15 154,209,369,375,10 255,207,366,375,8 138,211,249,375,8
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000008.jpg 192,16,364,249,8
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000010.jpg 87,97,258,427,12 133,72,245,284,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000011.jpg 126,51,330,308,7
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000013.jpg 299,160,446,252,9
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000014.jpg 72,163,302,228,5 185,194,500,316,6 416,180,500,222,6 314,8,344,65,14 331,4,361,61,14 357,8,401,61,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000015.jpg 77,136,360,358,1
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000018.jpg 31,30,358,279,11
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000022.jpg 68,103,368,283,12 186,44,255,230,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000025.jpg 2,84,59,248,9 68,115,233,279,9 64,173,377,373,9 320,2,496,375,14 221,4,341,374,14 135,14,220,148,14 69,43,156,177,9 58,54,104,139,14 279,1,331,86,14 320,22,344,96,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000027.jpg 174,101,349,351,14
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000028.jpg 63,18,374,500,7
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000029.jpg 56,63,284,290,11
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000031.jpg 41,77,430,255,18
F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007\JPEGImages\000037.jpg 61,96,464,339,11
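
Lines in this format can be generated from the VOC XML annotations rather than written by hand. The following is a hedged sketch of such a converter (not the repo's official script), assuming the standard VOC2007 directory layout and the usual alphabetical ordering of the 20 class names; make sure the class order matches whatever class file the repo is configured to use:

# Hedged sketch: build voc_train.txt / voc_test.txt from PASCAL VOC XML annotations.
import os
import xml.etree.ElementTree as ET

VOC_ROOT = r"F:\comp_v\Yolov3\tensorflow-yolov3\data\data\VOCDevkit2007\VOC2007"
CLASSES = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
           "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
           "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

def convert(image_set, out_path):
    # image_set is e.g. "train" or "test"; the id list lives in ImageSets/Main/.
    list_file = os.path.join(VOC_ROOT, "ImageSets", "Main", image_set + ".txt")
    with open(list_file) as ids, open(out_path, "w") as out:
        for image_id in ids.read().split():
            xml_path = os.path.join(VOC_ROOT, "Annotations", image_id + ".xml")
            root = ET.parse(xml_path).getroot()
            line = os.path.join(VOC_ROOT, "JPEGImages", image_id + ".jpg")
            for obj in root.iter("object"):
                name = obj.find("name").text
                if name not in CLASSES:
                    continue
                bbox = obj.find("bndbox")
                coords = [bbox.find(tag).text for tag in ("xmin", "ymin", "xmax", "ymax")]
                line += " " + ",".join(coords) + "," + str(CLASSES.index(name))
            out.write(line + "\n")

convert("train", r".\data\dataset\voc_train.txt")
convert("test", r".\data\dataset\voc_test.txt")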

Run:

python train.py

to start training.
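
One note: before train.py will pick up the VOC data, the dataset-related entries in core/config.py typically need to point at the VOC class file and the annotation files prepared above. A hedged sketch of the relevant fields (the exact attribute names may differ between versions of the repo):

# Hedged sketch of the config entries to check in core/config.py before training.
__C.YOLO.CLASSES     = "./data/classes/voc.names"       # VOC class names instead of COCO
__C.TRAIN.ANNOT_PATH = "./data/dataset/voc_train.txt"   # annotation file prepared above
__C.TEST.ANNOT_PATH  = "./data/dataset/voc_test.txt"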
