Object Detection with TensorFlow on Raspberry Pi

This article shows how to perform object detection with the TensorFlow library on a Raspberry Pi. Using Python, it walks through the steps for running an AI model for real-time image analysis on the resource-constrained Raspberry Pi.


The following post shows how to train and test SSD-based TensorFlow and TensorFlow Lite models on a Raspberry Pi (to get familiar with SSD, follow the links in the «References» below).


Note: The described steps were tested on Linux Mint 19.3 but should also work on Ubuntu and Debian.


Data preparation

As in the post dedicated to YOLO, the data has to be prepared first. Follow the first 7 steps there and then do the following:


1. In order to get the data listed and to generate TFRecords, clone the repository «How To Train an Object Detection Classifier for Multiple Objects Using TensorFlow (GPU) on Windows 10»:


git clone https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10.git
cd TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

2. Put all labeled images into the «images/test» and «images/train» folders:


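The expected layout looks roughly like this (the filenames are illustrative; every image comes with the «.xml» annotation produced during labeling):

images/
├── train/
│   ├── img001.jpg
│   ├── img001.xml
│   └── ...
└── test/
    ├── img101.jpg
    ├── img101.xml
    └── ...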

3. Get data records:


python3 xml_to_csv.py

This command creates «train_labels.csv» and «test_labels.csv» in the «images» folder:


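Each CSV file contains one row per labeled bounding box. Assuming the standard «xml_to_csv.py» from the cloned repository, the columns are «filename, width, height, class, xmin, ymin, xmax, ymax»; the values below are purely illustrative:

filename,width,height,class,xmin,ymin,xmax,ymax
img001.jpg,800,600,nutria,123,214,356,400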

4. Open «generate_tfrecord.py».



Replace the label map starting at line 31 with your own label map, where each object is assigned an ID number, for example:


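In «generate_tfrecord.py» the label map is a plain Python function that maps each class name to its ID. A minimal sketch for the single class «nutria» used later in this post (add one branch per additional class; the IDs must match «labelmap.pbtxt»):

def class_text_to_int(row_label):
    # Map each class name to the ID stored in the TFRecords.
    if row_label == 'nutria':
        return 1
    else:
        return None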

5. Generate TFRecords for data:


python3 generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
python3 generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record

These commands generate the «train.record» and «test.record» files, which will be used to train the new object detection classifier.


6. Create a label map. The label map defines a mapping of class names to class ID numbers, for example:


item {
  id: 1
  name: 'nutria'
}

Save it as «labelmap.pbtxt».


7. Configure the object detection training pipeline. It defines which model and what parameters will be used for training. Download «ssd_mobilenet_v2_quantized_300x300_coco.config» from https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs:


wget https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/samples/configs/ssd_mobilenet_v2_quantized_300x300_coco.config

8. Change the configuration file (a sketch of the edited fields follows the list below):


  • Set the number of classes:

    num_classes: SET_YOUR_VALUE

  • Set the checkpoint:

    fine_tune_checkpoint: "/path/to/ssd_mobilenet_v2_quantized/model.ckpt"

  • Set «input_path» and «label_map_path» in «train_input_reader»:

    input_path: "/path/to/train.record"
    label_map_path: "/path/to/labelmap.pbtxt"

  • Set «batch_size» in «train_config»:

    batch_size: 6 (or SET_YOUR_VALUE)

  • Set «input_path» and «label_map_path» in «eval_input_reader»:

    input_path: "/path/to/test.record"
    label_map_path: "/path/to/labelmap.pbtxt"
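Put together, the edited parts of the config end up looking roughly like this (a sketch only; the nesting follows the sample config, and «...» stands for the fields left at their defaults):

model {
  ssd {
    num_classes: 1
    ...
  }
}
train_config {
  batch_size: 6
  fine_tune_checkpoint: "/path/to/ssd_mobilenet_v2_quantized/model.ckpt"
  ...
}
train_input_reader {
  label_map_path: "/path/to/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "/path/to/train.record"
  }
}
eval_input_reader {
  label_map_path: "/path/to/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "/path/to/test.record"
  }
}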

Setup environment

General settings for Raspberry Pi

1. Update and upgrade first:


sudo apt update
sudo apt dist-upgrade

2. Install some important dependencies:


sudo apt update
sudo apt install -y joe telnet nmap htop sysbench iperf bonnie++ iftop nload hdparm bc stress python-dev python-rpi.gpio wiringpi sysstat zip locate nuttcp attr imagemagick netpipe-tcp netpipe-openmpi git libatlas-base-dev libhdf5-dev libc-ares-dev libeigen3-dev build-essential libsdl-ttf2.0-0 python-pygame festival

3. Install dependencies for TensorFlow:


sudo apt update
sudo apt install libatlas-base-dev python-tk virtualenv
sudo pip3 install pillow lxml jupyter matplotlib cython numpy pygame

4. Install dependencies for OpenCV:


sudo apt update
sudo apt install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev qt4-dev-tools libatlas-base-dev

5. Install OpenCV itself:


sudo apt update
sudo pip3 install opencv-python

6. Install TensorFlow by downloading a «wheel» from https://github.com/lhelontra/tensorflow-on-arm/releases:


sudo apt update
sudo pip3 install tensorflow-2.2.0-cp37-none-linux_armv7l.whl

Note: Experience shows that it is better to install from a «wheel» rather than from the default pip repository, since the latter does not offer all the versions for Raspberry Pi.


Training

Note: Training should be done on the host machine to avoid additional problems that might occur on the Raspberry Pi, since the TensorFlow framework and its accompanying software were originally developed and optimized for full-size machines.


1. Install TensorFlow (for CPU or GPU):


sudo pip3 install tensorflow==1.13.1
or
sudo pip3 install tensorflow-gpu==1.13.1

Note: Use v1.13.1 since it is the most stable version for host machines and works with all the other software used here (from the author's experience).


2. Get TensorFlow models:


git clone https://github.com/tensorflow/models.git

3. Copy «train.py» from the «legacy» folder to «object_detection»:


cp /path/to/models/research/object_detection/legacy/train.py /path/to/models/research/object_detection/

4. Get a pretrained model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md:


wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz

5. Unpack the archive:


tar -xvzf ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz -C /destination/folder/

Note: Unpack the archive into the folder that «fine_tune_checkpoint» points to in «*.config».


6. Start training:


python3 train.py --logtostderr --train_dir=/path/to/training/
--pipeline_config_path=/path/to/ssd_mobilenet_v2_quantized.config

Note #1: «/path/to/training/» is any folder where all training results can be saved.
Note #2: If the training process is suddenly terminated, one can reduce the values of «num_steps» and «num_examples» to lower the memory load (see the sketch below).

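Both fields live in the same «*.config» file; a sketch of where they sit (the values shown are the sample config's defaults and purely illustrative):

train_config {
  ...
  num_steps: 200000
}
eval_config {
  num_examples: 8000
  ...
}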

7. After training has finished, the model can be exported for conversion to TensorFlow Lite using the «export_tflite_ssd_graph.py» script:


python3 export_tflite_ssd_graph.py
--pipeline_config_path=/path/to/ssd_mobilenet_v2_quantized.config
--trained_checkpoint_prefix=/path/to/training/model.ckpt-XXXX
--output_directory=/path/to/output/directory
--add_postprocessing_op=true

Note #1: For each «model.ckpt-XXXX» there must be corresponding «model.ckpt-XXXX.data-00000-of-00001», «model.ckpt-XXXX.index» and «model.ckpt-XXXX.meta» files in the «training» folder.
Note #2: «/path/to/output/directory» is any folder where all final results can be saved.


After the command has been executed, there must be two new files in the output folder specified for «output_directory»: «tflite_graph.pb» and «tflite_graph.pbtxt».


8. Install Bazel in order to optimize the trained model with the TensorFlow Lite Optimizing Converter (TOCO) before it can be used with the TensorFlow Lite interpreter:


  • Install dependencies:

sudo apt install g++ unzip zip
sudo apt install openjdk-11-jdk
wget https://github.com/bazelbuild/bazel/releases/download/0.21.0/bazel-0.21.0-installer-linux-x86_64.sh

Note: Experience shows that only Bazel v0.21.0 works well here; other versions cause multiple errors.


  • Change permission rights:

chmod +x bazel*.sh

  • Install Bazel:

./bazel*.sh --user

The installation is shown for Ubuntu (https://docs.bazel.build/versions/master/install-ubuntu.html). The same steps are applicable to Debian and Linux Mint. For other operating systems, follow the installation guide at https://docs.bazel.build/versions/master/install.html


9. Clone the TensorFlow repository and open it:


git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow

10. Use Bazel to run the model through the TOCO tool by issuing this command:


bazel run --config=opt tensorflow/lite/toco:toco --
--input_file=/path/to/tflite_graph.pb
--output_file=/path/to/detect.tflite
--input_shapes=1,300,300,3
--input_arrays=normalized_input_image_tensor
--output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3
--inference_type=QUANTIZED_UINT8
--mean_values=128
--std_values=128
--change_concat_input_ranges=false
--allow_custom_ops


After the command finishes running, there should be a file called «detect.tflite» in the directory specified for «output_file».


11. Create «labelmap.txt» and add all class (object) names for which the model was trained:


touch labelmap.txt

The contents (only one class in this case):

nutria

12. The model is ready for usage. Put «detect.tflite» and «labelmap.txt» into a separate folder and use it like a normal pretrained model (see the «Testing» section).


Testing

For TensorFlow Lite model

For custom model


1. Clone the repository for Raspberry Pi and open it:


git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
cd TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi

2. Put the earlier trained model (the custom «detect.tflite» and «labelmap.txt») into «/path/to/model» and run the command:


python3 /path/to/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/TFLite_detection_webcam.py --modeldir=/path/to/model

For pretrained model


The same is applicable to an already pretrained model.


1. Download a pretrained SSD MobileNet from https://www.tensorflow.org/lite/models/object_detection/overview:


wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip

2. Unzip the model:


unzip /path/to/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d /path/to/model

The archive must contain the «detect.tflite» and «labelmap.txt» files.


3. Open the cloned repository and run the same command:


python3 /path/to/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/TFLite_detection_webcam.py --modeldir=/path/to/model

For TensorFlow model

1. Install the package «argparse» (on Python 3 it ships with the standard library, so this step can usually be skipped):


sudo pip3 install argparse

2.1. Either copy the script «Object_detection_webcam.py» from the «TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10» repository to the «models» repository in «/path/to/models/research/object_detection» and add the following:


  • Import the package argparse:

import argparse

  • Add the following arguments:

ap = argparse.ArgumentParser(description='Testing tools')
ap.add_argument('-pb', '--path_to_pb')
ap.add_argument('-l', '--path_to_labels')
ap.add_argument('-nc', '--num_classes')
args = vars(ap.parse_args())
  • Comment out the lines with the variables «MODEL_NAME», «PATH_TO_CKPT», «PATH_TO_LABELS», «CWD_PATH» and «NUM_CLASSES» and add the following:

ap = argparse.ArgumentParser(description='Testing tools')
ap.add_argument('-pb', '--path_to_pb')
ap.add_argument('-l', '--path_to_labels')
ap.add_argument('-nc', '--num_classes')
args = vars(ap.parse_args())

# Name of the directory containing the object detection module we're using
#MODEL_NAME = 'inference_graph'

# Grab path to current working directory
#CWD_PATH = os.getcwd()

# Path to frozen detection graph .pb file, which contains the model that is used
# for object detection.
#PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb')
PATH_TO_CKPT = args['path_to_pb']

# Path to label map file
#PATH_TO_LABELS = os.path.join(CWD_PATH,'training','labelmap.pbtxt')
PATH_TO_LABELS = args['path_to_labels']

# Number of classes the object detector can identify
#NUM_CLASSES = 6
NUM_CLASSES = int(args['num_classes'])

2.2. Or download the already modified script:


cd /path/to/models/research/object_detection
wget https://bitbucket.org/ElencheZetetique/fixed_scripts/src/master/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10/Object_detection_webcam.py

3.1. Open the script «label_map_util.py» in «/path/to/models/research/object_detection/utils/» and either comment out the if-statement for «item.keypoints» or wrap it in an exception handler:


# if item.keypoints:
#   keypoints = {}
#   list_of_keypoint_ids = []
#   for kv in item.keypoints:
#     if kv.id in list_of_keypoint_ids:
#       raise ValueError('Duplicate keypoint ids are not allowed. Found {} more than once'.format(kv.id))
#     keypoints[kv.label] = kv.id
#     list_of_keypoint_ids.append(kv.id)
#   category['keypoints'] = keypoints
try:
    if item.keypoints:
        keypoints = {}
        list_of_keypoint_ids = []
        for kv in item.keypoints:
            if kv.id in list_of_keypoint_ids:
                raise ValueError('Duplicate keypoint ids are not allowed. Found {} more than once'.format(kv.id))
            keypoints[kv.label] = kv.id
            list_of_keypoint_ids.append(kv.id)
        category['keypoints'] = keypoints
except AttributeError:
    # Older label map protos have no «keypoints» field on «item».
    pass

3.2. Alternatively, one might download the modified script:


cd /path/to/models/research/object_detection/utils/
wget https://bitbucket.org/ElencheZetetique/fixed_scripts/src/master/models_TF/label_map_util.py

For custom model


1.1. Open the script «export_inference_graph.py» in «/path/to/models/research/object_detection» and comment out the last parameters:


exporter.export_inference_graph(
    FLAGS.input_type, pipeline_config, FLAGS.trained_checkpoint_prefix,
    FLAGS.output_directory, input_shape=input_shape,
    write_inference_graph=FLAGS.write_inference_graph,
    additional_output_tensor_names=additional_output_tensor_names,
    #use_side_inputs=FLAGS.use_side_inputs,
    #side_input_shapes=side_input_shapes,
    #side_input_names=side_input_names,
    #side_input_types=side_input_types)
)

1.2. Or download the fixed script, replacing the original one:


cd /path/to/models/research/object_detection
wget https://bitbucket.org/ElencheZetetique/fixed_scripts/src/master/models_TF/export_inference_graph.py

2. Export the inference graph using the script «export_inference_graph.py» from «/path/to/models/research/object_detection»:


python3 export_inference_graph.py
--input_type image_tensor
--pipeline_config_path /path/to/ssd_mobilenet_v2_quantized.config
--trained_checkpoint_prefix /path/to/training/model.ckpt-XXX
--output_directory /path/to/output/directory

3. In the output directory assigned to the --output_directory flag there must be the file «frozen_inference_graph.pb».


4. Run the modified script «Object_detection_webcam.py» for the custom model:


python3 Object_detection_webcam.py -nc 1
-pb /path/to/frozen_inference_graph.pb
-l /path/to/labelmap.pbtxt

Example of detection: (screenshot)

For pretrained model


1. Download the model you are interested in from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md, for example:


wget http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_2018_01_28.tar.gz

2. Extract the file «frozen_inference_graph.pb» from the archive, for example:

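A minimal sketch, assuming the archive unpacks into a top-level folder of the same name:

tar -xvzf faster_rcnn_resnet101_coco_2018_01_28.tar.gz faster_rcnn_resnet101_coco_2018_01_28/frozen_inference_graph.pb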

3. Run the modified script «Object_detection_webcam.py» for the pretrained model:


python3 Object_detection_webcam.py -nc 100 
-pb /path/to/frozen_inference_graph.pb
-l /path/to/mscoco_label_map.pbtxt

Examples of detection: (screenshots)

Assign the maximum number of classes to the -nc/--num_classes flag. Assign the path «/path/to/models/research/object_detection/data/mscoco_label_map.pbtxt» to the -l/--path_to_labels flag.


Translated from: https://medium.com/@Elenche.Zetetique/object-detection-with-tensorflow-42eda282d915
