How to Deploy a Pre-Trained Keras Model with OpenCV and Flask

In this post, I will share how to deploy a pre-trained model to a locally hosted computer with Flask, OpenCV and Keras. I initially deployed this model on PythonAnywhere using Flask, Keras and jquery. The application was designed for remote school classroom or workplace settings that require students or employees to shave their facial hair.

The application allowed users to upload a photo and click a button to send a POST request with the encoded image data to the backend of the website. The image transformation and classification were handled on the backend, and the result was returned to the front end in an HTML response, which updated the page dynamically with a label of Shaved or Unshaved.
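
For illustration, here is a minimal sketch of what such an upload-and-classify endpoint could look like. This is not the original PythonAnywhere code; the route name, the form field, and the preprocessing choices are assumptions based on the streaming version described later.

import pickle
import cv2
import numpy as np
from flask import Flask, request

app = Flask(__name__)
loaded_model = pickle.load(open('Combined_Model.p', 'rb'))  # pre-trained Keras model

@app.route('/predict', methods=['POST'])
def predict():
    # decode the uploaded bytes into an OpenCV BGR image
    data = np.frombuffer(request.files['image'].read(), dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    # resize and scale to [0, 1], then add a batch dimension
    batch = np.reshape(cv2.resize(img, (300, 300)) / 255.0, (1, 300, 300, 3))
    prob = loaded_model.predict(batch)
    return 'Unshaved' if prob > 0.5 else 'Shaved'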

The backend of the application was built in Flask, but I wanted to add a live video stream that would detect a user’s face and label the classification on screen. Since I already had a model and a basic Flask framework, I wanted to use OpenCV to do the rest on my local machine.

If you have a model, you can follow the same format, but if you do not, I would recommend reading my previous blog posts (Building a Convolutional Neural Network to Recognize Shaved vs UnShaved Faces & How to Split a Pickled Model File to Bypass Upload Limits on PythonAnywhere).

OpenCV

OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products.

https://opencv.org/about/

After reading documentation, watching YouTube videos, and reading blog posts, I found a very helpful article, COVID-19: Face Mask Detection using TensorFlow and OpenCV by Gurucharan M K. While Gurucharan used OpenCV to detect face masks for COVID-19, I wanted to use a similar format to detect a face first, then determine whether the face was shaved or not.

The most helpful piece of my puzzle was detecting the faces first, which was accomplished with a face detection program based on Haar feature-based cascade classifiers.

face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

Haar Cascade is a machine learning object detection algorithm used to identify objects in an image or video, based on the concept of features proposed by Paul Viola and Michael Jones in their 2001 paper “Rapid Object Detection using a Boosted Cascade of Simple Features”.

http://www.willberger.org/cascade-haar-explained/

Video Capture

video = cv2.VideoCapture(0)

Stop Video

video.release()

Display Video

(rval, im) = video.read()

Flip Video

im = cv2.flip(im, 1, 1)

Resize Video

mini = cv2.resize(im, (im.shape[1] // 4, im.shape[0] // 4))

Detect Faces

faces = face_classifier.detectMultiScale(mini)
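
Putting those fragments together, a minimal standalone test script (the window name, quit key, and scaling factor of 4 are my own choices) looks roughly like this:

import cv2

# assumes haarcascade_frontalface_default.xml sits next to this script
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
video = cv2.VideoCapture(0)

while True:
    rval, im = video.read()
    if not rval:
        break
    im = cv2.flip(im, 1, 1)  # mirror the frame
    mini = cv2.resize(im, (im.shape[1] // 4, im.shape[0] // 4))
    faces = face_classifier.detectMultiScale(mini)
    for (x, y, w, h) in faces:
        # scale the detection back up to the full-size frame
        x, y, w, h = x * 4, y * 4, w * 4, h * 4
        cv2.rectangle(im, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Face detection test', im)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

video.release()
cv2.destroyAllWindows()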

Flask

Flask is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries. It has no database abstraction layer, form validation, or any other components where pre-existing third-party libraries provide common functions.
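
As a quick illustration of how little boilerplate Flask needs, a minimal app (the route and message here are chosen arbitrarily) is only a few lines:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # a single route returning plain text
    return 'Hello, Flask!'

if __name__ == '__main__':
    app.run(debug=True)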

Anmol Behl has a well-written article entitled “Video Streaming Using Flask and OpenCV”. Following this article, I was able to build out an HTML webpage template, as well as the routes needed to stream each frame’s predictions to the browser using the computer’s built-in camera.

The HTML page was similar to the page I built before, without the jQuery.

Here is the simple index.html code I used to display the homepage:

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Shaved or Not_Shaved</title>
</head>
<body>
  <center><h1>Shaved or Not_Shaved Streaming App Demo</h1></center>
  <center><img id="bg" src="{{ url_for('video_feed') }}"></center>
  <video id="video" autoplay>Video Stream not available.</video>
</body>
</html>
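
Note that the live stream is rendered through the img tag: its src points at the /video_feed route, which (as shown below) returns a multipart/x-mixed-replace response, so the browser keeps replacing the image with each new JPEG frame. The video element is not wired up to anything in this demo.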

To build out the routes, I used the following code to handle the page load, as well as the video camera images pulled from our camera.py file, which is initialized when the website is loaded.

Here is the Flask code I used in the main.py file:

from flask import Flask, render_template, Response
from camera import VideoCamera
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

app = Flask(__name__)


@app.route('/')
def index():
    # rendering webpage
    return render_template('index.html')


def gen(camera):
    while True:
        # get camera frame
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')


@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')


if __name__ == '__main__':
    # defining server ip address and port
    app.run(host='0.0.0.0', port='5000', debug=True)
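
With index.html saved in a templates/ folder next to main.py and camera.py, running python main.py and opening http://localhost:5000 in a browser should display the live, labeled stream (this assumes the Haar cascade XML and the pickled model sit in the same folder, as in the camera.py code below).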

Keras

Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result as fast as possible is key to doing good research.

https://keras.io/about/

Since Keras runs on top of TensorFlow, we can utilize the Graph and Session classes from the TensorFlow module. While the details of graphs and sessions are outside the scope of this article, you can still use the following code when deploying your model so that it stays accessible from the session it was loaded into.

import pickle
from keras import backend as K
from tensorflow import Graph, Session

global loaded_model
graph1 = Graph()
with graph1.as_default():
    session1 = Session(graph=graph1)
    with session1.as_default():
        loaded_model = pickle.load(open('Combined_Model.p', 'rb'))
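
A note on assumptions here: the top-level Graph and Session imports only exist in TensorFlow 1.x (in TensorFlow 2.x, Session was moved under tf.compat.v1), so this code assumes a 1.x install. The reason for pinning an explicit graph and session is that Flask serves each request in its own thread; without the pinning, Keras can fail with “tensor is not an element of this graph” errors when predict is called outside the graph the model was loaded into.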

Bringing it all together

The final piece is our camera.py file. In this example, our file will contain a function to join our split model into one pickle file. We will use a VideoCamera class to initialize the camera when the app loads, and also combine the face detection program with our pre-trained Keras model.
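
The join helper itself is not shown in the listing below; a minimal sketch of one, assuming the model was split into sequentially numbered binary chunks (the part_ file naming is hypothetical), could look like this:

import os

def join_model(parts_dir='model_parts', output_path='Combined_Model.p'):
    # Hypothetical helper: stitch binary chunks (part_0, part_1, ...) back
    # into one pickle file. The chunk naming scheme is an assumption.
    part_names = sorted(
        (f for f in os.listdir(parts_dir) if f.startswith('part_')),
        key=lambda name: int(name.split('_')[1]),
    )
    with open(output_path, 'wb') as combined:
        for name in part_names:
            with open(os.path.join(parts_dir, name), 'rb') as part:
                combined.write(part.read())
    return output_path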

import cv2
import numpy as np
import pickle
from keras import backend as K
from tensorflow import Graph, Session

# defining face detector
classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
size = 4
labels_dict = {0: 'shaved', 1: 'not_shaved'}
color_dict = {0: (0, 255, 0), 1: (0, 0, 255)}

global loaded_model
graph1 = Graph()
with graph1.as_default():
    session1 = Session(graph=graph1)
    with session1.as_default():
        loaded_model = pickle.load(open('Combined_Model.p', 'rb'))


class VideoCamera(object):
    def __init__(self):
        # capturing video
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        # releasing camera
        self.video.release()

    def get_frame(self):
        # extracting frames
        (rval, im) = self.video.read()
        im = cv2.flip(im, 1, 1)
        mini = cv2.resize(im, (im.shape[1] // size, im.shape[0] // size))
        faces = classifier.detectMultiScale(mini)
        for f in faces:
            (x, y, w, h) = [v * size for v in f]  # scale the shape size back up
            # save just the rectangle faces in SubRecFaces
            face_img = im[y:y+h, x:x+w]
            resized = cv2.resize(face_img, (300, 300))
            normalized = resized / 255.0
            reshaped = np.reshape(normalized, (1, 300, 300, 3))
            reshaped = np.vstack([reshaped])
            K.set_session(session1)
            with graph1.as_default():
                results = loaded_model.predict(reshaped)
            if results > .5:
                result = np.array([[1]])
            else:
                result = np.array([[0]])
            label = np.argmax(result)
            cv2.rectangle(im, (x, y), (x+w, y+h), color_dict[result[label][0]], 2)
            cv2.rectangle(im, (x, y-40), (x+w, y), color_dict[result[label][0]], -1)
            cv2.putText(im, labels_dict[result[label][0]], (x, y-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
        # encode OpenCV raw frame to jpg and display it
        ret, jpeg = cv2.imencode('.jpg', im)
        return jpeg.tobytes()

Please note that this application is limited to your local computer. I hope this article is helpful. I would love to deploy this model publicly using Node.js, TensorFlow.js and OpenCV.js, but this is outside the scope of this article. If I get 100 comments, I will make that my next post!

Here’s a link to my GitHub repo:

https://github.com/cousinskeeta/shavedVnotShaved_opencv

Originally published at: https://towardsdatascience.com/how-to-deploy-a-pre-trained-keras-model-with-opencv-and-flask-86c9dab76a9c
