Deploying a Keras Model in Heroku


Artists love to have their art viewed, writers love to have their work read, and data scientists love to have their models used. However, not everyone knows how to fire up a development environment to run the models you have built. To make them useful to non-technical users, we have to give our models a UI. Please, do not freak out! Web development is a friend, not a foe. In case you did not catch the reference, it’s from Finding Nemo. Anyway, let me show you how I built and deployed Jake (Just Another Kar classifiEr).


Jake

First, what in the world is Jake? Jake is a deep learning AI that tries to classify cars into one of the following categories: SUV, sedan, coupe, or pickup truck. Jake lives on Heroku’s servers, is powered by Keras, and uses Flask to function. Let’s go deeper. Heroku is a scalable cloud platform. It supports a variety of languages, but we will be using Python to keep things easy. However, vanilla Python does not provide a way to build web applications; a library called Flask does. Flask handles all the tedious parts of a web application for you. It’s a great way to go from model to web application quickly. None of this would be possible without a model, though. Keras is one of the most common libraries for building neural networks. For this example, we will be using a TensorFlow backend. You can find Jake here. Please keep in mind that Jake might take a few seconds to load.


Info About the Model

Although I will not be going into how I built the model, I want to go into a few steps I took that were out of the ordinary. Training a convolutional neural network is extremely resource intensive. A GPU is a necessity when training a network with lots of samples. Unfortunately, my computer lacks a dedicated GPU, and the CPU is not strong enough to train the model either. My solution was to export all my data and code to Google Colab. Google Colab is a free platform for data science. It provides access to GPUs and CPUs to train models faster. However, resources are not unlimited, so it is best to plan ahead before running your code on Google Colab. In order to export the training data and import the results, I used NumPy’s save method. This method saves arrays into a binary file. To load the arrays, I used NumPy’s load method. The following is an example of how to do this.


import numpy as np

# This will save the X_train variable as X_train.npy, which is a NumPy binary file.
np.save("X_train.npy", X_train)

# This will load the results.npy file into the results variable.
results = np.load("results.npy")

If your computer is strong enough to train your convolutional neural network, then you do not need to do any of this. However, it is good knowledge to have.

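As a quick sanity check, here is a self-contained round trip of that same save/load pattern (the array contents and file location are made up for illustration):

```python
import os
import tempfile

import numpy as np

# Save an array to a NumPy binary file, then load it back.
X_train = np.arange(6).reshape(2, 3)
path = os.path.join(tempfile.gettempdir(), "X_train.npy")
np.save(path, X_train)

loaded = np.load(path)
print(np.array_equal(X_train, loaded))  # True
```

The loaded array is bit-for-bit identical to the saved one, which is why this works well for shuttling training data and results between your machine and Google Colab.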

The Model

Great! We have a trained model, it is up to standards, and you want to share it with the world. You can upload it to GitHub, and you can share it with your network. However, you just talked to your mom and explained how cool your model is. Now she has gone to your grandma to tell her about your model. Grandma is so excited, and she wants to see how this model works. The only problem is that your grandma does not have a GitHub account. Fear not! We will turn this model into an app. First, we need to save the model. Since we already trained the model, it has already determined all the weights needed to perform calculations. If you trained your model in Google Colab, be sure to take note of the version of Keras. You need the same version of Keras in your development environment and app environment in order to be able to start up your model again. Different versions handle this process differently. This is how you save your model.


from tensorflow.keras.models import Sequential

model = Sequential()
# Your model code goes here.
model.save("myModel.h5")

That’s it! This will save your model as an .h5 (HDF5) file, which stores your model’s architecture, weights, and compile information. Now, to load the model, you do the following.


from tensorflow.keras.models import load_model

model = load_model("myModel.h5")

Great! Now you have loaded your whole model into the model variable. From there, you can use the model variable as you would if you had written the whole model out. For example, you can use model.predict() to pass in new data and get results.

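As a hedged sketch of what comes next for Jake’s four categories, here is how a row of class probabilities can be mapped back to a label (the probability values below are made up, standing in for one row of model.predict() output):

```python
# Map a row of class probabilities to the most likely label.
# The numbers below are a stand-in for model.predict(x)[0].
CATEGORIES = ["SUV", "sedan", "coupe", "pickup truck"]
probs = [0.05, 0.75, 0.15, 0.05]

best_index = max(range(len(probs)), key=lambda i: probs[i])
print(CATEGORIES[best_index])  # sedan
```

Each position in the output corresponds to one category, so picking the index with the highest probability gives the model’s best guess.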

The App

Unfortunately, I will not go too deep into how I built the Flask app. The reason? It is so much information that it would fill this whole blog post. I plan to go into detail about it in its own standalone post in the future. Even with its own blog post, it would still not be enough to explain everything. However, you can find a great tutorial on Flask here; there is so much information that its author wrote a whole book about it. For now, I will go over the logic of the app.


To go over the app: I used Flask blueprints in an app factory style. The way the app works is that a user uploads a picture. Unfortunately, the picture cannot be worked with right away, so it gets saved locally to a dedicated folder. Be aware that some cloud services do not allow data to be saved after you have uploaded your app; Google Cloud is one of those services. Once the photo is saved, it is loaded and prepared to be passed to the neural network: it is converted to grayscale, scaled, and resized. Then the photo is passed to the neural network. The neural network returns numbers between 0 and 1, representing the likelihood that the photo belongs to each category. I then display the results on the next screen along with the image. Since we do not want to store the image indefinitely, I built logic into the app to remove the photo when the user goes to the main page or when the program starts up. However, the logic does not stop there. We have to think of the different scenarios that can happen. For example, what happens if the user goes to the results page before uploading an image? What happens if the user tries to upload a different file format? In most instances, the app just defaults to reloading the main page. The app also has logic to allow photo formats only. When designing an app, it is very important to think about the different scenarios that could happen.

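For instance, the photo-format check can be sketched like this (the helper name and extension list are assumptions for illustration, not Jake’s actual code):

```python
# Accept only common photo extensions; anything else falls through to the
# app's default behavior of reloading the main page.
ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg", "gif"}

def allowed_file(filename):
    """Return True when the filename ends in an allowed photo extension."""
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

print(allowed_file("car.JPG"))  # True
print(allowed_file("car.pdf"))  # False
```

Lowercasing the extension means users can upload CAR.JPG or car.jpg and both pass the check.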

Heroku

Once you have your Flask app situated, it is time to upload it to Heroku. Remember, Heroku just provides server space; it does not provide configurations for the server. However, there are packages out there that help you with this, and it is very hands-off for our simple app. When you upload your app, Heroku needs three things.


  1. A requirements.txt file listing all the packages the app needs to run.
  2. Server configurations.
  3. An app that is less than 500 MB in size.

Sounds hard? Don’t stress too much. Let’s start with number one. First, keep in mind that Heroku is a blank server with no libraries. You use the requirements.txt file to tell Heroku which libraries to download. For Jake, I listed NumPy version 1.18.5, Flask, Gunicorn, tensorflow-cpu, and Pillow. Let’s go through a few of these that you might not be familiar with. Gunicorn is a WSGI HTTP server for Python; it is what works with Flask to run your app. tensorflow-cpu is a TensorFlow build that runs on your CPU instead of your GPU, and it is smaller than regular TensorFlow. The reason for using it is that Heroku does not provide GPUs to run your code. Last, Pillow is a library that helps with manipulating images. Easy so far!

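For reference, a requirements.txt along those lines might look like the following (only the NumPy pin comes from the text above; leaving the other entries unpinned is an assumption):

```
numpy==1.18.5
Flask
gunicorn
tensorflow-cpu
Pillow
```

Heroku installs exactly what this file lists on every deploy, so pinning versions here is also how you keep the Keras version in the app environment matching the one you trained with.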

Number two is actually the easiest of the three, believe it or not. Heroku needs a Procfile in order to configure the server. No extension or anything, just a file named Procfile. Inside your Procfile, you tell Heroku what kind of server it should configure and the path to start up your app.


web: gunicorn jake:app

Above is what is written in my Procfile. It tells Heroku that we are configuring a web app with Gunicorn, and that the entry point for my app is the app variable inside the jake module.

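To make that mapping concrete, here is a minimal sketch of what a jake module could contain (the placeholder route is an assumption, not Jake’s real code; it only shows where `gunicorn jake:app` finds the app object):

```python
# jake.py -- minimal sketch of the entry point the Procfile points at.
# "web: gunicorn jake:app" tells Gunicorn to import this module and serve
# the object named `app`.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder route standing in for the real upload page.
    return "Jake is running"
```

Gunicorn never runs this file as a script; it imports the module and hands requests to the `app` object, which is why the `jake:app` syntax is module name, colon, variable name.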

Number three: you either don’t have to worry about it at all, or it will take some time to figure out. This is where Google is your friend. To be completely transparent, this part took me the longest to figure out. To meet the size requirement, I used tensorflow-cpu because it is a smaller package, and I reduced the size of the neural network as well. This step is difficult because you do not want to compromise performance, but you also do not want a large app that does not run.


The Wrap Up

I hope this was able to give you some idea of what goes into building an AI-powered Flask app on Heroku. It is a great way to demo apps and put your models to use. Going back to the example above, your grandmother should be able to handle playing around with this app. If she can’t, then she has you to help.


Translated from: https://medium.com/@a.colocho/deploying-keras-model-in-heroku-543cbafee6ba

