Build a SwiftUI + Core ML Emoji Hunt Game for iOS

The advent of machine learning on mobile has opened doors for a bunch of new opportunities. While it has allowed ML experts to tap into the mobile space, the other end of that equation is actually the show-stealer. Letting mobile application developers dabble in machine learning is what has made mobile application development so exciting.

The best thing is, you needn’t be a machine learning expert in order to train or run models. Core ML, Apple’s machine learning framework, provides an easy-to-use API that lets you run inference (model predictions), fine-tune models, or re-train on the device.

Create ML, on the other hand, lets you create and train custom machine learning models (currently supported for images, objects, text, recommender systems, and linear regression) with a drag-and-drop macOS tool or in Swift Playgrounds.

If this didn’t amaze you, consider SwiftUI, the new declarative UI framework that caused a storm when it was announced to the iOS community during WWDC 2019. It alone has led to an influx of developers learning Swift and iOS dev, given how easy it is to quickly build user interfaces.

Together, SwiftUI, Core ML, and Vision (Apple’s computer vision framework, which sits on top of Core ML) give rise to smart, AI-based applications. But that’s not all... you can leverage the power of machine learning to build fun games as well.

In the next few sections, we’ll build a camera-based iOS application that lets you hunt down the emojis in your house — something like a treasure hunt, which has to be among the popular indoor games we’re playing right now, as we find ourselves in quarantine.

Plan of Action

  • We’ll use a MobileNet Core ML model to classify objects from the camera frames. If you want to read more about the MobileNet architecture, hop on over to this article for a detailed overview.

  • For setting up the camera, we’ll use AVFoundation, Apple’s own audio-video framework. With the help of UIViewRepresentable, we’ll integrate it into our SwiftUI view.

  • We’ll drive our Core ML model with the Vision framework, matching the model’s inference with the correct emoji (because every emoticon has a meaning).

  • Our game will run against a timer: the user points the camera at different objects around a given area, trying to find the one that matches the emoji before time runs out.

Getting Started

Launch Xcode and select SwiftUI as the UI template for the iOS application. Next, go to the Info.plist file and add the camera privacy permission (NSCameraUsageDescription) with a usage description.

Feeling inspired? Fritz AI Studio has the tools to build, test, and improve mobile machine learning models. Start building and teach your devices to see, hear, sense, and think.

Create a Custom Camera View with AVFoundation

SwiftUI doesn’t provide native support for AVFoundation. Luckily, we can leverage SwiftUI interoperability with UIKit. Before we do that, let’s set up a custom camera view controller first. We’ll eventually wrap this in a SwiftUI struct.

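Here’s a minimal sketch of what CameraVC might look like. The internals (setupSession, the property names, and the preview-layer handling) are illustrative rather than the exact code from the project; EmojiFoundDelegate is the custom protocol introduced just below.

import UIKit
import AVFoundation

class CameraVC: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Emoji keyword handed over from the SwiftUI view.
    var emojiString = ""
    var delegate: EmojiFoundDelegate?

    // 1. The capture session.
    let captureSession = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()

    override func viewDidLoad() {
        super.viewDidLoad()
        setupSession()
    }

    private func setupSession() {
        captureSession.beginConfiguration()

        // 2. Obtain and configure the capture device (the back camera).
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }

        // 3. Set up the input using the capture device.
        if captureSession.canAddInput(input) { captureSession.addInput(input) }

        // 4. Configure the output object that delivers the camera frames.
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        if captureSession.canAddOutput(videoOutput) { captureSession.addOutput(videoOutput) }

        captureSession.commitConfiguration()

        // Preview layer so the user sees the live camera feed.
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        captureSession.startRunning()
    }
}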

At a high level, the above code does four things:

  • Creates a capture session.

  • Obtains and configures the necessary capture devices. We’ll use the back camera.

  • Sets up the inputs using the capture devices.

  • Configures the output object which displays the camera frames.

Also, we’ve added a custom protocol: EmojiFoundDelegate, which’ll eventually inform the SwiftUI view when the emoji equivalent image is found. Here’s the code for the protocol:

protocol EmojiFoundDelegate {
    func emojiWasFound(result: Bool)
}

You’ll also notice the protocol defined in the class declaration: AVCaptureVideoDataOutputSampleBufferDelegate. To conform to it, we need to implement the captureOutput(_:didOutput:from:) function, wherein we can access the extracted frame buffers and pass them on to the Vision-Core ML request.

Process Camera Frames with Vision and Core ML

Now that our camera is set up, let’s extract the frames and process them in realtime. We’ll pass on the frames to the Vision request that runs the Core ML model.

Add the following piece of code in the CameraVC class that we defined above:

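Here’s a rough sketch of what that code might look like. MobileNet is the class Xcode auto-generates from the .mlmodel file; the classificationRequest property name is illustrative, while updateClassification and processClassifications follow the breakdown below, so the details may differ slightly from the original project.

// Add these members inside the CameraVC class (import Vision at the top of the file).

// Wrap the Core ML model in a Vision request.
lazy var classificationRequest: VNCoreMLRequest = {
    let model = try! VNCoreMLModel(for: MobileNet().model)
    let request = VNCoreMLRequest(model: model) { [weak self] request, _ in
        self?.processClassifications(for: request)
    }
    request.imageCropAndScaleOption = .centerCrop
    return request
}()

// Called for every frame delivered by AVCaptureVideoDataOutput.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    updateClassification(in: pixelBuffer)
}

func updateClassification(in pixelBuffer: CVPixelBuffer) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([classificationRequest])
}

func processClassifications(for request: VNRequest) {
    guard let observations = request.results as? [VNClassificationObservation],
          let best = observations.first else { return }

    // Compare the top prediction with the emoji's keyword (emojiString).
    if best.identifier.lowercased().contains(emojiString.lowercased()) {
        DispatchQueue.main.async {
            self.delegate?.emojiWasFound(result: true)
        }
    }
}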

  • We wrap our Core ML model (download the MobileNet version from here, or find it in the GitHub repository at the end of the article) in a VNCoreMLRequest.

  • The captureOutput function converts the CMSampleBuffer retrieved from the real-time camera frame into a CVPixelBuffer, which eventually gets passed on to the updateClassification function.

  • The VNImageRequestHandler takes care of converting the input image to the constraints that the Core ML model requires, thereby freeing us from some boilerplate code.

  • Inside the processClassifications function, we compare the image identified by the Core ML model with the emojiString (this is passed from the SwiftUI body interface that we’ll see shortly). Once there’s a match, we call the delegate to update the SwiftUI view.

Now that the tough part is over, let’s hop over to SwiftUI.

There’s a lot to consider when starting a mobile machine learning project. Our new free ebook explores the ins and outs of the entire project development lifecycle.

Building Our SwiftUI Game

Our game consists of four states: emoji found, not found, searching, and game over. Since SwiftUI is a state-driven framework, we’ll create a @State property of an enum type that switches between the aforementioned states and updates the user interface accordingly. Here’s the code for the enum and the struct that holds the emoji data:

enum EmojiSearch {
    case found
    case notFound
    case searching
    case gameOver
}

struct EmojiModel {
    var emoji: String
    var emojiName: String
}

In the following code, we’ve set up a Timer for a given number of seconds (say 10 in our case), during which the user needs to hunt for an object that resembles the emoji. Depending on whether or not the user manages to do so, the UI is updated accordingly:

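A rough sketch of that countdown logic in the SwiftUI view is shown below. The names timeLeft, emojiStatus, and emoji are illustrative, CustomCameraRepresentable is the camera wrapper we’ll define in a moment, and cancelTimer() is one of the two helper functions listed right after this.

// Inside the main SwiftUI view:
@State var emoji = EmojiModel(emoji: "☕️", emojiName: "cup") // illustrative emoji/label pair
@State var emojiStatus = EmojiSearch.searching
@State var timeLeft = 10
@State var timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

var body: some View {
    VStack {
        Text("Find something that matches \(emoji.emoji)")
        Text("Time left: \(timeLeft)s")

        CustomCameraRepresentable(emojiString: emoji.emojiName, emojiFound: $emojiStatus)
    }
    .onReceive(timer) { _ in
        guard self.emojiStatus == .searching else { return }
        if self.timeLeft > 0 {
            self.timeLeft -= 1
        } else {
            // Time ran out before a matching object was found.
            self.emojiStatus = .notFound
            self.cancelTimer()
        }
    }
}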

The following two functions are invoked to reset the timer at each level:

func instantiateTimer() {
    self.timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()
}

func cancelTimer() {
    self.timer.upstream.connect().cancel()
}

Now, SwiftUI doesn’t really work well with switch statements in the body, unless you wrap them in the type-erased AnyView. Instead, we put the switch statement in a function, emojiResultText, as shown below:

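A sketch of what that helper might look like; the message strings are placeholders, and emojiStatus and emoji are the state properties sketched earlier:

// Inside the main SwiftUI view:
func emojiResultText() -> some View {
    switch emojiStatus {
    case .found:
        return Text("You found it! 🎉")
    case .notFound:
        return Text("Not found, try the next one.")
    case .searching:
        return Text("Find something that matches \(emoji.emoji)")
    case .gameOver:
        return Text("Game over!")
    }
}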

Lastly, we need to create a wrapper struct for the CameraVC we created initially. The following code does that and passes the emojiString, which is eventually matched with the ML model’s classification results:

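Here’s a rough sketch of that wrapper, assuming the EmojiSearch binding is called emojiFound; the exact property names may differ from the original project:

struct CustomCameraRepresentable: UIViewControllerRepresentable {

    var emojiString: String
    @Binding var emojiFound: EmojiSearch

    func makeCoordinator() -> Coordinator {
        Coordinator(emojiFound: $emojiFound)
    }

    func makeUIViewController(context: Context) -> CameraVC {
        let cameraVC = CameraVC()
        cameraVC.emojiString = emojiString
        cameraVC.delegate = context.coordinator
        return cameraVC
    }

    func updateUIViewController(_ uiViewController: CameraVC, context: Context) {
        uiViewController.emojiString = emojiString
    }

    // Bridges the UIKit delegate callback back into SwiftUI via the binding.
    class Coordinator: NSObject, EmojiFoundDelegate {
        @Binding var emojiFound: EmojiSearch

        init(emojiFound: Binding<EmojiSearch>) {
            _emojiFound = emojiFound
        }

        func emojiWasFound(result: Bool) {
            if result { emojiFound = .found }
        }
    }
}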

The @Binding property wrapper defined in the Coordinator class lets you update the SwiftUI State from the CustomCameraRepresentable struct. Basically, the Coordinator class acts as a bridge between UIKit and SwiftUI, letting you update one from the other by using delegates and Binding property wrappers.

Let’s look at some of the outputs from our SwiftUI game in action:

Here’s a screengrab of the application running on a bunch of different objects:

Conclusion

We were quickly able to build a small Emoji Hunt game using SwiftUI, Core ML, and Vision. You can further improve on this experience by adding audio when the emoji-equivalent image is found. Also, by using the amazing Smile library, you can quickly look up the keyword name of an emoji and vice versa.

With WWDC 2020 just around the corner, it’ll be interesting to see how Apple surprises Core ML and SwiftUI developers. A simpler integration of AVFoundation with SwiftUI and an expanded set of Core ML model layers would help train more kinds of ML models on-device.

For instance, RNN layers such as LSTMs would open up possibilities for stock market prediction-based applications (perhaps for entertainment purposes only right now; don’t use them when making investment decisions). This is something the iOS community will keenly look forward to.

You can download the full project from this GitHub Repository.

That’s it for this one. I hope you enjoyed 😎

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to exploring the emerging intersection of mobile app development and machine learning. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Fritz AI, the machine learning platform that helps developers teach devices to see, hear, sense, and think. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletters (Deep Learning Weekly and the Fritz AI Newsletter), join us on Slack, and follow Fritz AI on Twitter for all the latest in mobile machine learning.

Originally published at: https://heartbeat.fritz.ai/build-a-swiftui-core-ml-emoji-hunt-game-for-ios-eb4465ec4153
