Emotion Detection with Apple Technologies

Introduction

Before we get our hands dirty, let’s prepare ourselves for what’s coming next.

First things first

Artificial Intelligence can be defined as an area of computer science that has an emphasis on the creation of intelligent machines that can work and react like humans.

Machine Learning can be defined as a subset of AI, in which machines can learn on their own without being explicitly programmed: they can think and perform actions based on their past experiences. In this way, they can change their algorithm based on the data sets on which they are operating.

Machine Learning’s popularity is growing day after day and so are the possible use cases, also thanks to the huge amount of data produced by applications.

Machine Learning is used everywhere, from automating daily tasks to offering intelligent insights for basically every industry.

ML is used for prediction, image recognition, and speech recognition. It can be trained to recognize cancerous tissue, detect fraud, or optimize business processes.

Machine learning algorithms can be classified into three types:

  • Supervised Learning: we give labeled data to the AI system. This means that each data point is tagged with the correct label.

  • Unsupervised Learning: we give unlabeled, uncategorized data to the AI system and it acts on the data without any prior training, so the output is dependent upon the coded algorithms.

  • Reinforcement Learning: the system learns with no human intervention: given an environment, it will receive rewards for performing correct actions and penalties for the incorrect ones.

A machine learning model can be a mathematical representation of a real-world process.

To understand this, we must first know how we get to that point. For the scope of this article, we will talk more specifically about training a classification model.

Training

Training a model simply means learning good values for its weights.

At first, a neural network will try to guess the output value randomly; then it will gradually learn from its errors and adjust its values (weights) accordingly.
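
As a toy illustration of this idea (a minimal sketch, not from the article), here is how a single weight can be fitted in Swift by repeatedly nudging it against the prediction error; the learning rate and iteration count are arbitrary assumptions:

func trainSingleWeight(inputs: [Double], targets: [Double]) -> Double {
    var weight = Double.random(in: -1...1)      // start with a random guess
    let learningRate = 0.01
    for _ in 0..<1000 {
        for (x, y) in zip(inputs, targets) {
            let prediction = weight * x         // current guess for this example
            let error = prediction - y          // how wrong the guess is
            weight -= learningRate * error * x  // nudge the weight to reduce the squared error
        }
    }
    return weight
}

Called with inputs [1, 2, 3] and targets [2, 4, 6], the returned weight converges towards 2, the value that minimizes the error.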

There are many types of classification problems:

  • Binary Classification: predict a binary possibility (one of two possible classes).

  • Multiclass Classification: allows you to generate predictions for multiple classes (predict one of more than two outcomes).

Apple provides iOS developers with machine learning tools like Core ML, Vision, and NLP, and they have different choices for accessing trained models to provide inference:

  • Use Core ML to access a local on-device pre-trained model.

  • Host a Machine Learning Model in the cloud and send data from the device to the hosted endpoint to provide predictions (a minimal client-side sketch of this option follows this list).

  • Call third-party API-Driven Machine Learning cloud managed services where the service hosts and manages a pre-defined trained model. User data is passed through an API call from the device and the service returns the predicted values.

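As a hedged illustration of the hosted-model options above (the article itself only walks through the on-device path), here is a minimal Swift sketch of posting data to a hypothetical prediction endpoint; the URL, JSON shape, and field names are assumptions:

import Foundation

// Hypothetical endpoint and payload: adapt to whatever service actually hosts the model.
func requestPrediction(features: [Double], completion: @escaping (String?) -> Void) {
    guard let url = URL(string: "https://example.com/v1/predict") else {
        completion(nil)
        return
    }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["features": features])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let label = json["label"] as? String else {
            completion(nil)
            return
        }
        completion(label)   // the predicted class returned by the service
    }.resume()
}
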
Photo by h heyerlein on Unsplash

What is Create ML?

Focused at present on vision and natural language data, Create ML lets developers use Swift to create machine learning models, which are then trained to handle tasks such as understanding text, recognizing photos, or finding relationships between numbers.

It lets developers build machine learning models on their Macs that they can then deploy across Apple’s platforms using Swift.

Apple's decision to commoditize its machine learning tech means developers can build natural language and image classification models much faster than they could by building them from scratch.

It also makes it possible to create these models without the use of third-party AI training systems, such as IBM Watson or TensorFlow (though Create ML supports only very specific models).

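For completeness, the same kind of training can also be driven entirely from Swift with the CreateML framework (for example in a macOS Playground) instead of the Create ML app described later. This is a hedged sketch: the folder paths and output file name are assumptions, and it expects one subfolder per label inside the train and test directories.

import CreateML
import Foundation

// Assumed folder layout: one subfolder per label inside "train" and "test".
let trainDir = URL(fileURLWithPath: "/path/to/dataset/train")
let testDir = URL(fileURLWithPath: "/path/to/dataset/test")

// Train an image classifier from labeled directories (default parameters).
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainDir))

// Evaluate on the held-out test set and print the error rate.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print("Classification error: \(evaluation.classificationError)")

// Export a Core ML model ready to drop into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/EmotionClassificator.mlmodel"))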

What is Core ML?

Core ML is the machine learning framework used across Apple products (macOS, iOS, watchOS, and tvOS) for performing fast prediction or inference. It makes it easy to integrate pre-trained machine learning models on the edge, which allows you to perform real-time predictions on live images or video on the device.

Advantages of ML on the edge

Low Latency and Near Real-Time Results: You don’t need to make a network API call by sending the data and then waiting for a response. This can be critical for applications such as video processing of successive frames from the on-device camera.

Availability (Offline), Privacy, and Compelling Cost: the application runs without a network connection, makes no API calls, and the data never leaves the device. Imagine using your mobile device to identify historic tiles while in the subway, catalog private vacation photos while in airplane mode, or detect poisonous plants while in the wilderness.

Disadvantages of ML on the edge

  • Application Size: by adding the model to the device, you're increasing the size of the app, and some accurate models can be quite large.

  • System Utilization: Prediction and inference on the mobile device involves lots of computation, which increases battery drain. Older devices may struggle to provide real-time predictions.

  • Model Training: In most cases, the model on the device must be continually trained outside of the device with new user data. Once the model is retrained, the app will need to be updated with the new model, and depending on the size of the model, this could strain network transfer for the user. Refer back to the application size challenge listed above, and now we have a potential user experience problem.

Photo by Christopher Gower on Unsplash

Getting your hands dirty

As we dive deeper into the core of this article, we assume that you are quite familiar with the iOS development environment and have some basic knowledge of Python.

Converting a model with Python

Now let's say we found an interesting model on the web. Unfortunately, we notice it's not in Core ML format, but we absolutely want to use it in our iOS app and there's no other way to obtain it, so we can try to convert it.

Apple has a specific tool to accomplish this task: a Python module called coremltools, which can be found at this link.

The model we're interested in is built with Keras (TensorFlow as backend) and performs emotion detection; you can download it from here. Let's now convert it. First of all, we'll install the required packages. For compatibility reasons, please use Python 2.7 and the packages' specified versions, as coremltools relies on these.

One final note: since we are using a deprecated version of Python, we create a virtual environment to run our code.

One last note: your path to Python 2.7 might be different. If you're using macOS or Linux, check your /usr/bin/ directory; if you're using Windows, check the path where you installed Python.

pip3 install virtualenv
virtualenv -p /usr/bin/python2.7 venv

Now we activate the virtual environment we just created.

source venv/bin/activate

And finally, we install our dependencies.

pip install coremltools keras==2.2.4 tensorflow==1.14.0

After this, we can start writing our script. 🚀

Create a file named converter.py; the first step will be to import coremltools.

import coremltools

Last but not least, we convert our model into a .mlmodel one.

output_labels = ['Angry', 'Disgust', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']

ml_model = coremltools.converters.keras.convert(
    './model_v6.h5',
    input_names=['image'],
    output_names=['output'],
    class_labels=output_labels,
    image_input_names='image'
)

ml_model.save('./model_v6.mlmodel')

As you can see, the first line defines the output labels. This is the most important thing we need to know before converting a model; otherwise, the results will be useless to us, since we won't be able to tell what the output refers to.

The convert call is the main instruction of the script: it calls the Keras converter from coremltools and converts our model based on our specifications for input and output (in this case we specify that we need an image as input and output_labels as the output classes).

Finally, we save the converted model that is ready to use in our app.

Photo by Hitesh Choudhary on Unsplash

Machine Learning with Apple technologies

This is what we expect our final result to be.

A quick tour of the finished app

The first thing to do is to get a good model for our scope.

Creation of a Model via the CreateML app

The CreateML app was presented at WWDC 2019 for Xcode 11.0 and Swift 5. It allows everyone to create an ML model without needing deep knowledge about training one: it's only necessary to find the data we want to use to train the model, label it (because the basis of CreateML is supervised training), and import everything into the application.

Now we will use the CreateML app to train the model that will recognize our emotions.

First of all, you have to find the images. I suggest finding a very rich dataset, because precision is very important, but the images don't need to have a very high resolution. After that, you have to divide your images into categories based on the emotions you want to recognize, creating a folder for each emotion. Then you have to create two parent folders: train and test. The first folder holds most of the images and is where you put the images for training the model; the second folder is used to test the model you just trained.
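
For example, assuming the same seven emotion labels as the converted model above (your categories may differ), the folder layout could look like this:

train/
    Angry/
    Disgust/
    Fear/
    Happy/
    Neutral/
    Sad/
    Surprise/
test/
    (the same seven subfolders, with fewer images)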

If this operation sounds tedious, don't worry: lots of datasets come with a .csv file where the classification has already been done! You only have to write a simple script to solve the problem. Here is an example in Python 3.8:

import csv
import os

def main():
    input_path = "PATH_OF_YOUR_FOLDER"
    file_name = "FILE_NAME.csv"
    file = open(file_name, "r")
    data = csv.reader(file)
    next(data)  # this is used to skip the header line
    for info in data:
        label_path = os.path.join(input_path, info[-1])
        # here you have to consider the structure of the .csv file
        destination_path = "mv " + info[1] + " " + label_path
        os.system(destination_path)
    print("images moved")
    file.close()

if __name__ == '__main__':
    main()

After that, press and hold the "control" key on the keyboard (if you are using macOS Catalina and Xcode 11) and click on the Xcode dock icon. You should see a menu called "Open Developer Tool" and, inside it, CreateML; then click on "New Document". You should see an interface where you can choose the right template for our MLModel: we will classify images, so we choose "Image Classifier", which is the first template. Next, we give the name of the project, the license, a short description, and where to save the project file.

Now we have to select the images for training and testing: the only thing to do is drag the "train" folder into the "Train" section and the "test" folder into the "Test" section. Validation must be set to "Auto". Then we choose a maximum number of iterations (600 should be good) and press "Start" at the top of the interface. The other controls at the bottom are only for augmenting the images to make them more useful during training, but in this situation we don't need them.

After a while, we get the metrics of our model and, in the top right, the model itself. You only have to drag this model out of the CreateML window and drop it into an external folder or onto the Desktop.

Photo by Yancy Min on Unsplash

Using CoreML with Vision

To create our software we need two frameworks and an MLModel based on image classification (created earlier or converted from another model): these frameworks are Vision and CoreML.

We already talked about CoreML, but what’s Vision about?

Vision lets us manipulate images and videos using computer vision algorithms for lots of operations like face and face landmark detection, text detection, barcode recognition, image registration, and general feature tracking. We will use its CoreML integration for classifying images.
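
As a quick taste of Vision on its own (a hedged sketch, not part of the app we build below), here is how one of its built-in requests, face detection, can be run on a UIImage; the function name is illustrative:

import UIKit
import Vision

// Runs Vision's built-in face detector and reports how many faces were found.
func countFaces(in image: UIImage, completion: @escaping (Int) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(0)
        return
    }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        completion(faces.count)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion(0)
        }
    }
}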

To start with this tutorial, first of all clone this repository: it contains a simple application (written in Swift 5.2 and compatible with iOS 13.2) with a simple ViewController holding a UIImageView and a Label. The first is used to show the image we choose to analyze for emotions; the second is used to display the detected emotion and the confidence of our image classification. There is also a grayscale converter, because lots of datasets provide grayscale images and, for this reason, classification is more accurate on grayscale input.
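
The repository exposes this conversion as a mono property on UIImage (used later in the ViewController). Its exact implementation may differ, but a minimal sketch based on Core Image's built-in mono filter could look like this:

import UIKit
import CoreImage

extension UIImage {
    // Returns a grayscale copy of the image using the CIPhotoEffectMono filter.
    var mono: UIImage {
        guard let input = CIImage(image: self),
              let filter = CIFilter(name: "CIPhotoEffectMono") else { return self }
        filter.setValue(input, forKey: kCIInputImageKey)
        let context = CIContext()
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return self }
        return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
    }
}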

Now let’s start:

  1. Create a new file called “PredictionManager.swift” where we can implement our classification function.

  2. Save it in the folder of your app.

  3. Import UIKit, CoreML, and Vision in your project.

  4. Add your model to the project. To do this just drag and drop the .mlmodel file in the project folder opened in Xcode navigator, then select “Copy as Group”.

Now let’s start to write code! 🥳

First, we create a class, called PredictionManager, with two variables:

var emotionModel: MLModel
var visionModel: VNCoreMLModel

The first variable is the MLModel we use for our project, while the second is a Vision container that wraps our MLModel (trained on images) so we can run operations on it (via "VNCoreMLRequest").

After the declarations, let's create the constructor:

init() {
    self.emotionModel = EmotionClassificator().model
    do {
        self.visionModel = try VNCoreMLModel(for: self.emotionModel)
    } catch {
        fatalError("Unable to create Vision Model...")
    }
}

First we assign our model to the MLModel variable (in this case it is called "EmotionClassificator", but in general the name of this class matches the .mlmodel file name). Every .mlmodel file generates a class named after the model, and this class is usable for every operation with CoreML; to see its implementation, open the .mlmodel file and click on the arrow to the right of the model name.

Then we assign the MLModel to visionModel, provided the model is compatible with Vision.

Now we can start with our function:

func classification(for image: UIImage, complete: @escaping (String) -> Void)

To classify our image we take a UIImage as input and return a String through a completion closure. The closure is marked @escaping so it can outlive the function call and still deliver the String result after the function's local variables have been deallocated.

Now, the first thing we have to do is create the VNCoreMLRequest, the request to our MLModel:

func classification(for image: UIImage, complete: @escaping (String) -> Void) {
    let request = VNCoreMLRequest(model: self.visionModel) { (request, error) in
        guard error == nil else {
            complete("Error")
            return
        }
        guard let results = request.results as? [VNClassificationObservation],
              let firstResult = results.first else {
            complete("No Results")
            return
        }
        complete(String(format: "%@ %.1f%%", firstResult.identifier, firstResult.confidence * 100))
    }
    // the function continues below with the crop option, the image setup, and the handler

Our VNCoreMLRequest needs the VNCoreMLModel to perform the request, and then we handle three situations:

  • the model isn’t useful for our purpose;

  • the entire request (whose results are represented as VNClassificationObservation objects) doesn't give any result;

  • choosing the first result (the most confident one), we return our information (the classification and the confidence).

To gain more precision, we crop the image to the center:

request.imageCropAndScaleOption = .centerCrop

Now we need a handler to handle the request to the VNCoreMLModel, but first we have to give it an image prepared for our process: for this reason, we create a CIImage (Core Image) and give it a fixed orientation with CGImagePropertyOrientation:

guard let ciImage = CIImage(image: image) else {
    complete("error creating image")
    return
}
let orientation = CGImagePropertyOrientation(rawValue: UInt32(image.imageOrientation.rawValue))

And now it’s time to build the request:

DispatchQueue.global(qos: .userInitiated).async {
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation!)
    do {
        try handler.perform([request])
    } catch {
        complete("Failed to perform classification.")
    }
}

To keep the app responsive, we let the handler run on a global queue, activated only when the user requests it (when the user chooses an image). This operation is asynchronous, so it executes independently from the rest of the app.

In the end, we build our handler (using the CIImage created before and the orientation) and try to perform the request created earlier.

Now our classification function is complete. Let's call it from the ViewController.

In the extension of our ViewController, after the dismiss call, let's write this:

let monoImage = image.mono

Here we convert our image to grayscale; after that, we run the classification:

predictionManager.classification(for: monoImage) { (result) in
    DispatchQueue.main.async { [weak self] in
        self?.predictionLabel.text = result
    }
}

After the classification, the result is processed on the main thread (DispatchQueue.main.async), and, using a weak reference to self, we display the result of our classification.

Now you can classify emotions! 🤩 What are you waiting for? Try it on your iPhone!

For the complete project, check out our repository:

The team: NoSynapses

Giovanni Prisco

Giovanni Di Guida

Antonio Alfonso (also on Medium)

Simone Serra Cassano

Vincenzo Coppola

Simone Formisano

Translated from: https://medium.com/apple-developer-academy-federico-ii/emotion-detection-with-apple-technologies-b782beaa5c44
