New in iOS 14: Vision Contour Detection


WWDC20

Apple’s WWDC 2020 (digital-only) event kickstarted with a bang. There were a lot of new surprises (read: Apple’s own silicon chips for Macs) from the world of SwiftUI, ARKit, PencilKit, Create ML, and Core ML. But the one that stood out for me was computer vision.

Apple’s Vision framework got bolstered with a bunch of exciting new APIs that perform some complex and critical computer vision algorithms in a fairly straightforward way.

Starting with iOS 14, the Vision framework now supports Hand and Body Pose Estimation, Optical Flow, Trajectory Detection, and Contour Detection.

While we’ll provide an in-depth look at each of these some other time, right now, let’s dive deeper into one particularly interesting addition—the contour detection Vision request.

Our Goal

  • Understanding Vision’s contour detection request.

  • Running it in an iOS 14 SwiftUI application to detect contours along coins.

  • Simplifying the contours by leveraging Core Image filters for pre-processing the images before passing them on to the Vision request. We’ll look to mask the images in order to reduce texture noise.

Vision Contour Detection

Contour detection detects outlines of the edges in an image. Essentially, it joins all the continuous points that have the same color or intensity.

This computer vision task is useful for shape analysis and edge detection, and it's helpful in scenarios where you need to find similar types of objects in an image.

Coin detection and segmentation is a fairly common use case in OpenCV, and now by using Vision’s new VNDetectContoursRequest, we can perform the same in our iOS applications easily (without the need for third-party libraries).

To process images or frames, the Vision framework requires a VNRequest, which is passed into an image request handler or a sequence request handler. What we get in return is a VNObservation class.

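In code, that flow looks roughly like this (a minimal sketch; the function and parameter names are placeholders, not from the original project):

import Vision

// A sketch of the basic flow: create the contour request, run it through an
// image request handler, and read back the resulting observation.
func detectContours(in cgImage: CGImage) -> VNContoursObservation? {
    let request = VNDetectContoursRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        return request.results?.first as? VNContoursObservation
    } catch {
        print("Contour detection failed: \(error)")
        return nil
    }
}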

You can use the respective VNObservation subclass based on the type of request you’re running. In our case, we’ll use VNContoursObservation, which provides all the detected contours from the image.

We can inspect the following properties from the VNContoursObservation:

  • normalizedPath — It returns the path of detected contours in normalized coordinates. We’d have to convert it into the UIKit coordinates, as we’ll see shortly.

  • contourCount — The number of detected contours returned by the Vision request.

  • topLevelContours — An array of VNContours that aren’t enclosed inside any contour.

  • contour(at:) — Using this function, we can access a child contour by passing its index or IndexPath.

  • confidence — The level of confidence in the overall VNContoursObservation.

Note: Using topLevelContours and accessing child contours is handy when you need to modify/remove them from the final observation.

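For instance, a small sketch (the function name and the logging are ours, purely for illustration) that walks the hierarchy could look like this:

import Vision

// Walk the contour hierarchy: top-level contours first, then their children.
func logContourTree(of observation: VNContoursObservation) {
    print("Total contours: \(observation.contourCount)")
    for topLevel in observation.topLevelContours {
        print("Top-level contour with \(topLevel.childContourCount) children")
        for child in topLevel.childContours {
            // Each child is itself a VNContour, with its own points and children.
            print("  Child contour with \(child.normalizedPoints.count) points")
        }
    }
}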

Now that we've got an idea of the Vision contour detection request, let's explore how it might work in an iOS 14 application.

There’s a lot to consider when starting a mobile machine learning project. Our new free ebook explores the ins and outs of the entire project development lifecycle.

Getting Started

To start off, you’ll need Xcode 12 beta as the bare minimum. That’s about it, as you can directly run Vision image requests in your SwiftUI Previews.

Create a new SwiftUI application in the Xcode wizard and notice the new SwiftUI App lifecycle:

[Image: Xcode's new-project setup showing the SwiftUI App life cycle option]

You’ll be greeted with the following code once you complete the project setup:

import SwiftUI

@main
struct iOS14VisionContourDetection: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

Note: Starting in iOS 14, SwiftUI-based applications can replace the SceneDelegate with the new SwiftUI App protocol. The @main annotation on top of the struct indicates that it's the starting point of the application.

Detect Coins Using Vision Contour Request

In order to perform our Vision request, let’s quickly set up a SwiftUI view, as shown below:

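The original post embeds this view as a gist. Here's a minimal sketch of what it might look like (the state names and layout are illustrative and won't match the linked repo exactly):

import SwiftUI
import UIKit

struct ContentView: View {
    @State var preprocessImage = false        // assumed to be a toggle for Core Image pre-processing
    @State var contouredImage: UIImage?       // the input image with contours drawn on top

    var body: some View {
        VStack(spacing: 16) {
            Toggle("Pre-process with Core Image", isOn: $preprocessImage)

            // New in iOS 14: `if let` can be used directly inside a ViewBuilder.
            if let image = contouredImage {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFit()
            } else {
                Text("Tap Detect to run the Vision request")
            }

            Button("Detect Contours") {
                detectVisionContours()        // sketched in the next snippet
            }
        }
        .padding()
    }
}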

In the above code, we've used the if let syntax introduced for SwiftUI in iOS 14. Ignore the preprocessImage state for now; let's jump directly to the detectVisionContours function, which updates the outputImage state upon completion of the Vision request:

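(The gist for this function isn't reproduced here either; the following is a rough sketch, assuming an image asset named "coins". For brevity, the drawn result goes straight into the contouredImage state, while the original project also keeps an outputImage state. Note that the released SDK spells the property detectsDarkOnLight; early betas called it detectDarkOnLight.)

import SwiftUI
import UIKit
import Vision

extension ContentView {
    // A sketch of the Vision request: detect contours on the bundled image,
    // then draw them and update the SwiftUI state.
    func detectVisionContours() {
        // "coins" is an assumed asset name; use whatever image is in your Assets folder.
        guard let sourceImage = UIImage(named: "coins"),
              let cgImage = sourceImage.cgImage else { return }

        let request = VNDetectContoursRequest()
        request.contrastAdjustment = 2.0       // contrast boost applied before detection
        request.detectsDarkOnLight = true      // our coins sit on a light background

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request])
            if let observation = request.results?.first as? VNContoursObservation {
                contouredImage = drawContours(contoursObservation: observation,
                                              sourceImage: cgImage)
            }
        } catch {
            print("Contour detection failed: \(error)")
        }
    }
}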

In the above code, we've set the contrastAdjustment (to enhance the image) and detectDarkOnLight (for better contour detection, as our image has a light background) properties on the VNDetectContoursRequest.

Upon running the VNImageRequestHandler with the input image (present in the Assets folder), we get back the VNContoursObservation.

Eventually, we’ll draw the normalizedPoints as an overlay on our input image.

Draw Contours on an Image

The code for the drawContours function is given below:

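(The gist isn't reproduced here; below is a sketch of one way to implement it, matching the call made in detectVisionContours above. Vision hands back the path in a normalized, lower-left-origin coordinate space, so we scale it up and flip it vertically before stroking.)

import UIKit
import Vision

// Draw the observation's contours on top of the source image and return the result.
func drawContours(contoursObservation: VNContoursObservation,
                  sourceImage: CGImage) -> UIImage {
    let size = CGSize(width: sourceImage.width, height: sourceImage.height)
    let renderer = UIGraphicsImageRenderer(size: size)

    return renderer.image { context in
        // Draw the photo first, in UIKit's top-left coordinate space.
        UIImage(cgImage: sourceImage).draw(in: CGRect(origin: .zero, size: size))

        // Map Vision's normalized (0...1, bottom-left origin) space onto the image.
        let cgContext = context.cgContext
        cgContext.scaleBy(x: size.width, y: -size.height)
        cgContext.translateBy(x: 0, y: -1)

        cgContext.setStrokeColor(UIColor.systemRed.cgColor)
        cgContext.setLineWidth(3.0 / size.width)   // roughly 3 px once the scale is applied
        cgContext.addPath(contoursObservation.normalizedPath)
        cgContext.strokePath()
    }
}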

The UIImage returned by the above function is set to the contouredImage SwiftUI state, and subsequently our view gets updated:

[Image: the input image with detected contours drawn over the coins]

The results are pretty decent considering we ran this on a simulator, but they would certainly be better if we ran this on a device with iOS 14, with access to the Neural Engine.

But still, there are far too many contours (mostly due to coin textures) for our liking. We can simplify (or rather reduce) them by pre-processing the image.

Feeling inspired? Fritz AI Studio has the tools to build, test, and improve mobile machine learning models. Start building and teach your devices to see, hear, sense, and think.

Use Core Image for Pre-Processing Vision Image Requests

Core Image is Apple’s image processing and analysis framework. Though it works fine for simple face and barcode detection tasks, it isn’t scalable for complex computer vision use cases.

The framework boasts over 200 image filters and is handy in photography apps, as well as for data augmentation when training machine learning models.

But more importantly, Core Image is a handy tool for pre-processing images that are then fed to the Vision framework for analysis.

Now, if you’ve watched the WWDC 2020 Computer Vision APIs video, you’ve seen that Apple has leveraged Core Image’s monochrome filter for pre-processing, while demonstrating their punchcard contour detection example.

In our case, for coin masking, the monochrome effect wouldn't give results that are as good. Since the coins share a similar color intensity that differs from the background, using a black-and-white color filter to mask them is a better bet.

[Image: contour results after monochrome vs. black-and-white pre-processing]

For each of the above pre-processing types, we’ve also set a Gaussian filter to smoothen the image. Take note of how the monochrome pre-processing filter actually gives us significantly more contours.

Hence, it’s important to pay heed to the kinds of images you’re dealing with when doing pre-processing.

The outputImage obtained after the pre-processing is fed to the Vision image request. The block of code for creating and applying Core Image filters is available in this GitHub Repository, along with the full source code.

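As a rough sketch (the filter choices here are one reasonable option, not necessarily the exact ones used in the repo), the two pre-processing variants can be put together like this:

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

// Two illustrative pre-processing pipelines: a monochrome (noir) pass and a
// high-contrast "black and white" pass, each followed by a Gaussian blur.
func preprocess(_ image: UIImage, blackAndWhite: Bool) -> CGImage? {
    guard let input = CIImage(image: image) else { return nil }
    let ciContext = CIContext()

    let filtered: CIImage?
    if blackAndWhite {
        // Drop the saturation and push the contrast so the coins separate from the background.
        let bw = CIFilter.colorControls()
        bw.inputImage = input
        bw.saturation = 0
        bw.contrast = 4
        filtered = bw.outputImage
    } else {
        let noir = CIFilter.photoEffectNoir()
        noir.inputImage = input
        filtered = noir.outputImage
    }

    // Smooth out fine coin texture before handing the image to Vision.
    let blur = CIFilter.gaussianBlur()
    blur.inputImage = filtered
    blur.radius = 2
    guard let output = blur.outputImage else { return nil }

    return ciContext.createCGImage(output, from: input.extent)
}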

Analyzing Contours

By using the VNGeometryUtils class, we can observe properties like the diameter, bounding circle, area, perimeter, and aspect ratio of a contour. Simply pass the contour, as shown below:

VNGeometryUtils.boundingCircle(for: VNContour)

This can open up new computer vision possibilities in determining the different kinds of shapes available in an image.

Additionally, by invoking the polygonApproximation(withEpsilon:) method on a VNContour, we can further simplify our contours by filtering out little noisy parts around an edge.

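A short sketch tying these together (the epsilon value is just an example, and in Swift the method surfaces as polygonApproximation(epsilon:)):

import Vision

// Inspect a contour's geometry and simplify its outline.
func analyze(_ contour: VNContour) {
    if let circle = try? VNGeometryUtils.boundingCircle(for: contour) {
        print("Bounding circle radius: \(circle.radius), diameter: \(circle.diameter)")
    }

    // Filter out small noisy wiggles along the edge.
    if let simplified = try? contour.polygonApproximation(epsilon: 0.01) {
        print("Simplified from \(contour.normalizedPoints.count) to \(simplified.normalizedPoints.count) points")
    }
}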

Conclusion

Computer vision plays a huge role in Apple’s mixed reality future. The introduction of hand and body Pose APIs, which were a part of the ARKit framework, will open up new kinds of opportunities for building intelligent computer vision applications.

There’s a lot of exciting stuff that came out of WWDC 2020. I’m excited about the new kinds of possibilities for machine learning on mobile. Stay tuned for more updates, and thanks for reading.

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to exploring the emerging intersection of mobile app development and machine learning. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Fritz AI, the machine learning platform that helps developers teach devices to see, hear, sense, and think. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletters (Deep Learning Weekly and the Fritz AI Newsletter), join us on Slack, and follow Fritz AI on Twitter for all the latest in mobile machine learning.

Originally published at: https://heartbeat.fritz.ai/new-in-ios-14-vision-contour-detection-68fd5849816e
