DeepLab_V3 Image Semantic Segmentation Network

Implementation of the DeepLab_V3 semantic segmentation CNN, as described in Rethinking Atrous Convolution for Semantic Image Segmentation.
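
The core building block the paper revisits is the atrous (dilated) convolution, where kernel taps are spaced `rate` samples apart, enlarging the receptive field without adding parameters. A minimal 1-D NumPy sketch for illustration only (the network itself uses TensorFlow's 2-D atrous convolutions):

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are `rate` samples
    apart, so a k-tap kernel covers (k - 1) * rate + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective receptive field
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
    return out
```

With rate=1 this reduces to an ordinary valid convolution.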

For a complete documentation of this implementation, check out the blog post.

Dependencies

  • Python 3.x
  • Numpy
  • Tensorflow 1.10.1

Downloads

Evaluation

Pre-trained model.

Place the checkpoints folder inside ./tboard_logs. If the folder does not exist, create it.

Retraining

Original datasets used for training.

Place the tfrecords files inside ./dataset/tfrecords. Create the folder if it does not exist.
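
Both download steps above expect a specific folder layout; it can be prepared with a few lines of Python (paths taken verbatim from this README):

```python
import os

# Folders this README expects to exist before evaluating or retraining.
for folder in ("./tboard_logs", "./dataset/tfrecords", "./resnet/checkpoints"):
    os.makedirs(folder, exist_ok=True)  # no-op if the folder is already there
```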

Training and Eval

Once you have the training and validation TFRecord files, just run the command below. Before running Deeplab_v3, the code will look for the proper ResNet checkpoints inside ./resnet/checkpoints; if the folder does not exist, they will be downloaded first.

python train.py --starting_learning_rate=0.00001 --batch_norm_decay=0.997 --crop_size=513 --gpu_id=0 --resnet_model=resnet_v2_50

Check out the train.py file for more input argument options. Each run produces a folder inside the tboard_logs directory (create it if not there).

To evaluate the model, run test.py, passing the model_id parameter (the name of the folder created inside tboard_logs during training).

Note: Make sure the test.tfrecords is downloaded and placed inside ./dataset/tfrecords.

python test.py --model_id=16645

Retraining

To use a different dataset, you just need to modify the CreateTfRecord.ipynb notebook inside the dataset/ folder to suit your needs.

Also, be aware that Deeplab_v3 originally performs random crops of size 513x513 on the input images. This can be configured by changing the crop_size hyper-parameter in train.py.
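
A simplified NumPy sketch of what that cropping does (the real version lives in the TensorFlow input pipeline and also crops the label map):

```python
import numpy as np

def random_crop(image, crop_size=513):
    """Randomly crop a crop_size x crop_size patch from an HxWxC image,
    zero-padding first if the image is smaller than the crop."""
    h, w = image.shape[:2]
    pad_h, pad_w = max(crop_size - h, 0), max(crop_size - w, 0)
    if pad_h or pad_w:
        image = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")
        h, w = image.shape[:2]
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size]
```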

Datasets

To create the dataset, first make sure you have the Pascal VOC 2012 and/or the Semantic Boundaries Dataset and Benchmark datasets downloaded.

Note: You do not need both datasets.

  • If you just want to test the code with one of the datasets (say the SBD), run the notebook normally, and it should work.

Afterwards, head to dataset/ and run the CreateTfRecord.ipynb notebook.

The custom_train.txt file contains the name of the images selected for training. This file is designed to use the Pascal VOC 2012 set as a TESTING set. Therefore, it doesn't contain any images from the VOC 2012 val dataset. For more info, see the Training section of Deeplab Image Semantic Segmentation Network.
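
The filtering that custom_train.txt encodes can be reproduced with a short script. The helper below is a hypothetical sketch (the file names are assumptions, not part of this repo); it keeps only the training IDs that do not appear in the VOC 2012 val split:

```python
def build_train_list(sbd_list_path, voc_val_list_path, out_path):
    """Write the image IDs present in the SBD training list but absent
    from the VOC 2012 val list, one ID per line, sorted."""
    with open(sbd_list_path) as f:
        sbd = {line.strip() for line in f if line.strip()}
    with open(voc_val_list_path) as f:
        val = {line.strip() for line in f if line.strip()}
    with open(out_path, "w") as f:
        for name in sorted(sbd - val):
            f.write(name + "\n")
```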

Note: You can skip that part and directly download the datasets used in this experiment (see the Downloads section).

Serving

For full documentation on serving this Semantic Segmentation CNN, refer to How to deploy TensorFlow models to production using TF Serving.

All the serving scripts are placed inside ./serving/.

To export the model and to perform client requests do the following:

  1. Create a python3 virtual environment and install the dependencies from the serving_requirements.txt file;

  2. Using the python3 env, run deeplab_saved_model.py. The exported model should reside inside ./serving/model/;

  3. Create a python2 virtual environment and install the dependencies from the client_requirements.txt file;

  4. From the python2 env, run the deeplab_client.ipynb notebook.

Results

  • Pixel accuracy: ~91%
  • Mean Accuracy: ~82%
  • Mean Intersection over Union (mIoU): ~74%
  • Frequency weighted Intersection over Union: ~86%
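
All four metrics derive from a per-class confusion matrix (rows = ground truth, columns = prediction); a NumPy sketch of the standard definitions:

```python
import numpy as np

def segmentation_metrics(conf):
    """Pixel accuracy, mean accuracy, mean IoU and frequency-weighted IoU
    from a KxK confusion matrix (rows = ground truth, cols = prediction)."""
    tp = np.diag(conf)
    gt = conf.sum(axis=1)      # pixels per ground-truth class
    pred = conf.sum(axis=0)    # pixels per predicted class
    iou = tp / (gt + pred - tp)
    pixel_acc = tp.sum() / conf.sum()
    mean_acc = np.mean(tp / gt)
    mean_iou = np.mean(iou)
    freq_weighted_iou = ((gt / conf.sum()) * iou).sum()
    return pixel_acc, mean_acc, mean_iou, freq_weighted_iou
```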
