OpenCV Spatial AI Competition Journey (Part 1: Initial Setup + Depth)

OpenCV Spatial AI Competition

Recently, the people at OpenCV launched the OpenCV Spatial AI competition sponsored by Intel as part of OpenCV’s 20th anniversary celebration. The main objective of the competition is to develop applications that benefit from the features of the new OpenCV AI Kit with Depth (OAK-D). The competition consists of two phases, the winners of the first phase were able to obtain an OAK-D for free to develop their application and the second phase winners will receive a cash prize of up to $3,000.

The OAK-D contains a 12 MP RGB camera for deep neural inference and a stereo camera for depth estimation in real time using Intel’s Myriad X Vision Processing Unit (VPU).

If you want to know more about the OAK-D, make sure to check Ritesh Kanjee's interview with Brandon Gilles, the Chief Architect of the OpenCV AI Kit. The kit has raised over $800,000 as part of its Kickstarter campaign, with more than 4,000 backers. If you are interested, you can also find out more about the kit in Luxonis's community Slack channel (https://luxonis-community.slack.com/).

Due to the interesting features of the OAK-D, I decided to apply for the OpenCV Spatial AI competition and was lucky enough to be selected as one of the winners of Phase 1. You can also check the projects of the other Phase 1 winners here.

This publication is part of a series of posts in which I will describe my journey developing with the new OAK-D as part of my competition project.

Proposed System

Mask detector for social distancing for the blind
Illustration of how the output of the proposed system could detect people wearing a mask and their distance to the user.

The title of my proposal is "Social distancing feedback for visually impaired people using a wearable camera". Due to the current worldwide outbreak of COVID-19, social distancing has become a new social norm as a measure to prevent the spread of the pandemic.

However, visually impaired people are struggling to stay independent in the new socially distanced normal¹,². For blind people, it is not possible to easily confirm whether they are keeping a social distance from the people around them. As an example, a video on the Royal National Institute of Blind People (RNIB) Twitter account showed the difficulties blind people face in their daily lives due to social distancing.

Moreover, common aids for the blind, such as a white cane or a guide dog, cannot help the blind keep a social distance. Even worse, blind people cannot know whether the people close to them are wearing masks, so they face a higher risk of infection.

For those reasons, the objective of my project is to develop a feedback system for the blind that informs the user about the distance to the people around them and whether someone is not wearing a mask.

For that type of project, where depth and Artificial Intelligence need to be combined in real time, the OAK-D is the ideal system. As shown in one of the DepthAI experiments, the OAK-D is able to detect in real time the position of faces in an image and whether they are wearing masks. By combining this information with the depth information obtained from the stereo cameras, it is possible to estimate the position of the people around the user and whether someone is not wearing a mask.

Then, the system will inform the user about the distance to the people around them using five haptic motors attached to the OAK-D board. The haptic motors correspond to five direction angles: -40, -20, 0, 20, and 40 degrees. For example, if the system detects a person nearby at an angle of -20 degrees (as in the image above), the second motor from the left will vibrate. Moreover, to convey how close the person is, the vibration intensity will increase as the detected person gets closer. Finally, if the system detects a person not wearing a mask, it will inform the user by changing the vibration pattern.

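As a rough sketch of this feedback logic, the snippet below maps a detected person to one of the five motors and a vibration command. The motor angles come from the design above; the 2 m distance threshold, the function name, and the exact intensity/pattern scheme are my own illustrative assumptions, not the actual implementation.

```python
# Motor direction angles from the proposal (degrees, left to right).
MOTOR_ANGLES = [-40, -20, 0, 20, 40]
MAX_DISTANCE_M = 2.0  # assumed feedback range; people farther away are ignored

def feedback_for_person(angle_deg, distance_m, wearing_mask):
    """Return (motor_index, intensity 0..1, pattern) for one detected person."""
    # Pick the motor whose direction angle is closest to the detected angle.
    motor = min(range(len(MOTOR_ANGLES)),
                key=lambda i: abs(MOTOR_ANGLES[i] - angle_deg))
    # Intensity grows linearly as the person gets closer.
    intensity = max(0.0, 1.0 - distance_m / MAX_DISTANCE_M)
    # A different vibration pattern warns that the person has no mask.
    pattern = "continuous" if wearing_mask else "pulsed"
    return motor, intensity, pattern
```

For a person detected at -20 degrees and 1 m away, this returns motor index 1 (the second motor from the left) at half intensity.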

Windows Setup and Initial Testing

This week I received the OpenCV AI Kit. As shown in the image below, the kit contains an OAK-D, a USB-C cable, a 5 V (3 A) wall charger, and a 3D-printed GoPro mount.

OpenCV AI Kit with Depth (OAK-D) unboxing
Note: The Raspberry Pi Zero was not included with the kit; it was added only to compare dimensions.

The OAK-D is small (46 x 100 mm) and T-shaped. In fact, the lower part of the board matches the width of the Raspberry Pi Zero, so a system combining both boards can be quite compact, as shown in the image below.

OpenCV AI Kit with Depth (OAK-D) with Raspberry Pi Zero

In order to connect to the OAK-D, the people at Luxonis have developed the DepthAI Python API. The DepthAI API is open source and runs on different operating systems, including Ubuntu, Raspbian, and macOS. In the case of Windows 10, as of today (August 8, 2020) it is still experimental. However, following the instructions described here, the process was quite easy. One important note: if you do not want to have to compile the API, make sure to use Python 3.7 (32 bit). I tried Python 3.8 (32 bit) but it did not work correctly, so make sure you are using the correct Python version.

Once I installed the DepthAI API and its dependencies, I was able to run the default demo with the following command (make sure you are in the depthai folder):

python depthai.py

By default, this demo runs the MobileNet SSD object detection model, which can detect 20 different types of objects (bicycle, car, cat…) in an image. Moreover, the demo combines the bounding box of each detected object with the depth information from the stereo cameras to provide the 3D position of each detected object. As an example, below I show the output of the demo detecting a water bottle, which is one of the classes the default demo model can detect.

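The demo computes the 3D position on the device, but the underlying idea can be sketched as a simple pinhole back-projection: take the bounding-box center and the depth at that pixel, and convert them to metric coordinates. The intrinsics below are hypothetical placeholders for illustration, not the OAK-D's actual calibration.

```python
import numpy as np

# Assumed camera intrinsics (hypothetical, not the real OAK-D calibration).
FX, FY = 860.0, 860.0   # focal lengths in pixels
CX, CY = 640.0, 360.0   # principal point for a 1280x720 frame

def bbox_to_xyz(bbox, depth_m):
    """Back-project the bounding-box center to a 3D point in meters."""
    x_min, y_min, x_max, y_max = bbox
    u = (x_min + x_max) / 2.0   # pixel coordinates of the box center
    v = (y_min + y_max) / 2.0
    # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])
```

With these placeholder intrinsics, a box centered on the principal point at 2 m depth maps to (0, 0, 2) m.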

OpenCV AI Kit with Depth (OAK-D) default demo output tracking bottle

The demo was able to track the object and calculate the depth at 30 fps without any problem. By looking at the Python code of the depthai.py script, I saw that the demo can be configured into other modes by adding arguments when running it. For example, running the following command makes it possible to obtain the colorized depth (note: this only works for the Refactory version for Windows 10; in the original repository the configuration has changed):

python depthai.py --streams depth_color_h
Depth output using the OAK-D.

Overall, the depth looks pretty good, apart from some black regions on the left of the background. However, that region contains glass panels, and the stereo camera system probably cannot extract many features from them, which is why no depth was estimated there.

Depth Estimation: OAK-D vs. Azure Kinect DK

Even though depth estimation is not the main feature of the OAK-D, I wanted to compare its depth estimation with that of the latest Azure Kinect DK. For that purpose, I wrote a small Python script (hello_depth.py) that reads the raw depth values and displays the depth as in the Azure Kinect.

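A minimal sketch of the kind of conversion such a script performs: normalizing the raw 16-bit depth map (in millimeters) to an 8-bit image that can then be displayed or colorized with OpenCV. The 4 m clipping range, the function name, and the invalid-pixel convention are assumptions for illustration, not the actual script.

```python
import numpy as np

def depth_to_gray(depth_mm, max_mm=4000):
    """Normalize a raw uint16 depth map (mm) to an 8-bit grayscale image."""
    # Clip to the displayable range and scale to 0..255.
    d = np.clip(depth_mm.astype(np.float32), 0, max_mm)
    img = (255.0 * d / max_mm).astype(np.uint8)
    # Convention: raw value 0 means "no depth"; keep those pixels black.
    img[depth_mm == 0] = 0
    return img
```

Invalid pixels (raw value 0) stay black, and everything beyond the clipping range saturates to white.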

As for the Azure Kinect, I used the depth estimation example program in my Python repository for the Azure Kinect SDK. In the image below the estimated depth for both devices is compared.

Comparison of Kinect Azure DK and openCV AI Kit with Depth (OAK-D) depth estimation
Comparison of the depth estimation for the Azure Kinect and the OAK-D.

As can be observed, even though the OAK-D uses a stereo camera, the results are very good. In particular, in the case of the table, the OAK-D was able to estimate the depth correctly, whereas the ToF sensor in the Azure Kinect failed.

That is all for this first part. In the next part, I will test the face mask detection example using the OAK-D. I will also upload all the scripts for this project to my repository https://github.com/ibaiGorordo/Social-Distance-Feedback.

Translated from: https://towardsdatascience.com/opencv-spatial-ai-competition-journey-part-1-e76593d456fe
