Writing an Interface for a Deep Learning Model and Deploying It to Production with Docker (A Universal Guide)

This article walks through writing an interface for a deep learning mask detection model that returns the detection result, bounding-box coordinates, and confidence score; building a Flask server that receives client requests and returns detection results, along with the client-side calling code; and two Docker production deployment options, with the relevant commands for creating, exporting, and importing containers for rapid deployment.

I. Goals

After reading this article, you will learn:

  1. How to write an interface for a deep learning model.
  2. How to write the server and the client.
  3. How to deploy to production with Docker.
  4. Docker knowledge relevant to the deployment.

II. Prerequisites

  1. Most fundamentally, you need code that runs end to end. This article uses mask detection as the example.
  2. Docker is already installed.

III. Writing the Interface

This project performs mask detection. The main entry point is detect_mask_image.py, which detects whether the faces in an image are wearing masks, draws the bounding boxes, and displays the label and confidence score.

1. Original detect_mask_image.py source

# USAGE
# python detect_mask_image.py --image images/pic1.jpeg

# import the necessary packages
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import cv2
import os

def mask_image():
	# construct the argument parser and parse the arguments
	ap = argparse.ArgumentParser()
	ap.add_argument("-i", "--image", required=True,
		help="path to input image")
	ap.add_argument("-f", "--face", type=str,
		default="face_detector",
		help="path to face detector model directory")
	ap.add_argument("-m", "--model", type=str,
		default="mask_detector.model",
		help="path to trained face mask detector model")
	ap.add_argument("-c", "--confidence", type=float, default=0.5,
		help="minimum probability to filter weak detections")
	args = vars(ap.parse_args())

	# load our serialized face detector model from disk
	print("[INFO] loading face detector model...")
	prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
	weightsPath = os.path.sep.join([args["face"],
		"res10_300x300_ssd_iter_140000.caffemodel"])
	net = cv2.dnn.readNet(prototxtPath, weightsPath)

	# load the face mask detector model from disk
	print("[INFO] loading face mask detector model...")
	model = load_model(args["model"])

	# load the input image from disk, clone it, and grab the image spatial
	# dimensions
	image = cv2.imread(args["image"])
	orig = image.copy()
	(h, w) = image.shape[:2]

	# construct a blob from the image
	blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
		(104.0, 177.0, 123.0))

	# pass the blob through the network and obtain the face detections
	print("[INFO] computing face detections...")
	net.setInput(blob)
	detections = net.forward()

	# loop over the detections
	for i in range(0, detections.shape[2]):
		# extract the confidence (i.e., probability) associated with
		# the detection
		confidence = detections[0, 0, i, 2]

		# filter out weak detections by ensuring the confidence is
		# greater than the minimum confidence
		if confidence > args["confidence"]:
			# compute the (x, y)-coordinates of the bounding box for
			# the object
			box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
			(startX, startY, endX, endY) = box.astype("int")

			# ensure the bounding boxes fall within the dimensions of
			# the frame
			(startX, startY) = (max(0, startX), max(0, startY))
			(endX, endY) = (min(w - 1, endX), min(h - 1, endY))

			# extract the face ROI, convert it from BGR to RGB channel
			# ordering, resize it to 224x224, and preprocess it
			face = image[startY:endY, startX:endX]
			face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
			face = cv2.resize(face, (224, 224))
			face = img_to_array(face)
			face = preprocess_input(face)
			face = np.expand_dims(face, axis=0)

			# pass the face through the model to determine if the face
			# has a mask or not
			(mask, withoutMask) = model.predict(face)[0]

			# determine the class label and color we'll use to draw
			# the bounding box and text
			label = "Mask" if mask > withoutMask else "No Mask"
			color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

			# include the probability in the label
			label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)

			# display the label and bounding box rectangle on the output
			# frame
			cv2.putText(image, label, (startX, startY - 10),
				cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
			cv2.rectangle(image, (startX, startY), (endX, endY), color, 2)

	# show the output image
	cv2.imshow("Output", image)
	cv2.waitKey(0)
	
if __name__ == "__main__":
	mask_image()

2. Adding the interface to detect_mask_image.py

Purpose: return three things through the interface: the detection result (mask / no mask), the bounding-box coordinates, and the confidence score.
Reading the code above, the variables we need are:

(1) Bounding-box coordinates: startX, startY, endX, endY

			# display the label and bounding box rectangle on the output frame
			cv2.putText(image, label, (startX, startY - 10),
				cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
			cv2.rectangle(image, (startX, startY), (endX, endY), color, 2)

(2) Label: label
(3) Confidence: max(mask, withoutMask)

			label = "Mask" if mask > withoutMask else "No Mask"
			color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

The remaining work is to extract these variables and return them.

1. Capture the label and confidence.

			mask_label = label
			mask_confidence = max(mask, withoutMask)

2. Append the coordinates, confidence, and label to a list.

			result = []
			# int() is safer than .item(): after clamping with max()/min(), a
			# coordinate may be a plain Python int, which has no .item() method.
			result.append([int(startX), int(startY), int(endX), int(endY)])
			result.append(mask_confidence)
			result.append(mask_label)

3. Since one image may contain multiple faces, define an accumulator at the outermost level,
e.g. results = [], and append the list from step 2 to it:

			results.append(result)

The complete code with the interface added:

# USAGE
# python detect_mask_image.py --image images/pic1.jpeg

# import the necessary packages
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import cv2
import os

def mask_image(image):
	# construct the argument parser and parse the arguments
	ap = argparse.ArgumentParser()
	#ap.add_argument("-i", "--image", required=False,
		#help="path to input image")
	ap.add_argument("-f", "--face", type=str,default="face_detector",help="path to face detector model directory")
	ap.add_argument("-m", "--model", type=str,default="mask_detector.model",help="path to trained face mask detector model")
	ap.add_argument("-c", "--confidence", type=float, default=0.5,help="minimum probability to filter weak detections")
	args = vars(ap.parse_args())

	# load our serialized face detector model from disk
	print("[INFO] loading face detector model...")
	prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
	weightsPath = os.path.sep.join([args["face"],
		"res10_300x300_ssd_iter_140000.caffemodel"])
	net = cv2.dnn.readNet(prototxtPath, weightsPath)

	# load the face mask detector model from disk
	print("[INFO] loading face mask detector model...")
	model = load_model(args["model"])
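	# NOTE: for a long-running service it would be better to parse the
	# arguments and load both models once at import time rather than on
	# every call; the original structure is kept here for clarity.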

	# load the input image from disk, clone it, and grab the image spatial
	# dimensions
	(h, w) = image.shape[:2]

	# construct a blob from the image
	blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
		(104.0, 177.0, 123.0))

	# pass the blob through the network and obtain the face detections
	print("[INFO] computing face detections...")
	net.setInput(blob)
	detections = net.forward()

	results = []

	# loop over the detections
	for i in range(0, detections.shape[2]):
		# extract the confidence (i.e., probability) associated with
		# the detection
		confidence = detections[0, 0, i, 2]

		# filter out weak detections by ensuring the confidence is
		# greater than the minimum confidence
		if confidence > args["confidence"]:
			# compute the (x, y)-coordinates of the bounding box for
			# the object
			box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
			(startX, startY, endX, endY) = box.astype("int")

			# ensure the bounding boxes fall within the dimensions of
			# the frame
			(startX, startY) = (max(0, startX), max(0, startY))
			(endX, endY) = (min(w - 1, endX), min(h - 1, endY))

			# extract the face ROI, convert it from BGR to RGB channel
			# ordering, resize it to 224x224, and preprocess it
			face = image[startY:endY, startX:endX]
			face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
			face = cv2.resize(face, (224, 224))
			face = img_to_array(face)
			face = preprocess_input(face)
			face = np.expand_dims(face, axis=0)

			# pass the face through the model to determine if the face
			# has a mask or not
			(mask, withoutMask) = model.predict(face)[0]

			# determine the class label and color we'll use to draw
			# the bounding box and text
			label = "Mask" if mask > withoutMask else "No Mask"
			color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
			# save label and confidence for output
			mask_label = label
			mask_confidence = max(mask, withoutMask)
			# include the probability in the label
			label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)

			# display the label and bounding box rectangle on the output
			# frame
			cv2.putText(image, label, (startX, startY - 10),
				cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
			cv2.rectangle(image, (startX, startY), (endX, endY), color, 2)

			result = []
			# int() is safer than .item(): after clamping with max()/min(), a
			# coordinate may be a plain Python int, which has no .item() method.
			result.append([int(startX), int(startY), int(endX), int(endY)])
			result.append(mask_confidence)
			result.append(mask_label)
			results.append(result)
	return results

if __name__ == "__main__":
	path = 'images/pic1.jpeg'
	im0s = cv2.imread(path)
	res = mask_image(im0s)
	print(res)
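Running the file directly prints one entry per detected face. A hypothetical output for a single masked face (the numbers are illustrative, not real results):

[[[72, 58, 201, 204], 0.9987, 'Mask']]

Each entry has the form [[startX, startY, endX, endY], confidence, label].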

IV. Server Code

Purpose: receive client requests and return the detection parameters. This server is reusable; to adapt it to a different model, only two places need changing:
1. The import and the function call, here mask_image() from detect_mask_image.py.

from detect_mask_image import *

... other code ...

result = mask_image(image)

2. The result-parsing section:

    for i, x in enumerate(result):
        dictinter = {'res': str(x[2]), 'confidence': str(x[1]),
                     'vertexes_location': (x[0])}
        data_dict['item' + str(i + 1)] = dictinter

    output_dict['data'] = data_dict
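With this parsing in place, a successful response for a single detection would have roughly the following shape (all values are illustrative):

{
  "data": {
    "item1": {
      "res": "Mask",
      "confidence": "0.9987",
      "vertexes_location": [72, 58, 201, 204]
    }
  },
  "time": 0.85,
  "code": "1",
  "message": "SUCCESS: 1 FACE MASK FOUND."
}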

3. Complete server code
(Install any modules you are missing, e.g. flask and flask_cors.)
server.py

import base64
import json
import io
import os
import urllib.request
import logging
import time
from PIL import Image
from flask import Flask, request
from flask_cors import CORS

from detect_mask_image import *

os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'

app = Flask(__name__)
CORS(app, supports_credentials=True, resources={r'/*'})

# Initialize and load the face detector.
gpuid = 0
app.logger.info('INITIALIZING MASK SERVER... USING {}'.format('GPU' if gpuid>=0 else 'CPU'))

# Logger Initialization.
#logging.basicConfig(filename="./Logs/access.log", level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
app.logger.info('------------------------START UP------------------------')

@app.route('/facemask', methods = ['GET', 'POST'])
def server():
    # Get client's ip address.
    ip = request.remote_addr

    output_dict = dict()
    data_dict = dict()
    start_time = time.time()

    # Parse the body as JSON; json.loads is safer than eval on untrusted input.
    try:
        req = json.loads(request.data.decode('utf-8'))
    except Exception:
        req = None
    # See whether variable 'req' is of type dict.
    if type(req) is not dict:
        app.logger.warning('From user ip: {}, DATA FORMAT FAILURE.'.format(ip))
        output_dict['code'] = '-1'
        output_dict['message'] = 'ERROR: DATA FORMAT FAILURE.'
        return output_dict

    # Tab up the potentially missing keys.
    if 'url' not in req.keys():
        req["url"] = ''
    if 'img' not in req.keys():
        req["img"] = ''

    # Get the corresponding variables.
    req_img = req["img"]
    url = req["url"]

    if len(req_img) == 0:
        if len(url) == 0:
            app.logger.warning('From user ip: {}, NO INPUT DATA DETECTED.'.format(ip))
            output_dict['code'] = '-2'
            output_dict['message'] = 'ERROR: NO INPUT DATA DETECTED'
            return output_dict
        # Return the error code when there's no response from the url taken in.
        try:
            response = urllib.request.urlopen(url)
        except:
            app.logger.warning('From user ip: {}, INVALID URL.'.format(ip))
            output_dict['code'] = '-3'
            output_dict['message'] = 'ERROR: INVALID URL.'
            return output_dict

        img_array = np.array(bytearray(response.read()), dtype=np.uint8)
        image = cv2.imdecode(img_array, -1)

        if image is None:
            app.logger.warning('From user ip: {}, None TYPE IMAGE FROM URL.'.format(ip))
            output_dict['code'] = '-4'
            output_dict['message'] = 'ERROR: None TYPE IMAGE FROM URL.'
            return output_dict

    else:
        # Return the error code whenever the base64 code is not decoded successfully
        try:
            image_bytes = base64.b64decode(req_img)
        except:
            app.logger.warning('From user ip: {}, IMAGE DECODING FAILURE.'.format(ip))
            output_dict['code'] = '-5'
            output_dict['message'] = 'ERROR: IMAGE DECODING FAILURE.'
            return output_dict
        image_array = np.frombuffer(image_bytes, dtype=np.uint8)
        image = cv2.imdecode(image_array, 1)

    print("image convert spend time: ", time.time() - start_time)
    result = mask_image(image)

    if len(result) == 0:
        app.logger.warning('From user ip: {}, MODEL RECOGNIZED 0 FACE MASKS.'.format(ip))
        output_dict['code'] = '-6'
        output_dict['message'] = 'ERROR: MODEL RECOGNIZED 0 FACE MASKS.'
        return output_dict

    for i, x in enumerate(result):
        dictinter = {'res': str(x[2]), 'confidence': str(x[1]),
                     'vertexes_location': (x[0])}
        data_dict['item' + str(i + 1)] = dictinter

    output_dict['data'] = data_dict

    output_dict['time'] = time.time() - start_time
    output_dict['code'] = '1'
    output_dict['message'] = 'SUCCESS: {} FACE MASK FOUND.'.format(len(result))

    # Calculate the duration for a single image
    duration = (time.time() - start_time)
    app.logger.info('From user ip: {}, duration: {}s, detection successful.'.format(ip, duration))
    app.logger.info('RECOGNIZING RESULT IS {}'.format(output_dict))

    print('RECOGNIZING RESULT IS {}'.format(output_dict))

    return output_dict

if __name__ == '__main__':
    # app.run(port=8864)#8865
    app.run(host='0.0.0.0', port=8864)#8865
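Note that app.run() starts Flask's built-in development server. For production, you would typically put a WSGI server in front of the app; a minimal sketch using gunicorn (gunicorn is my suggestion here, not part of the original project):

pip3 install gunicorn
gunicorn -w 1 -b 0.0.0.0:8864 server:app

A single worker (-w 1) keeps memory usage down, since each worker process would otherwise load its own copy of the models.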

V. Client Code

client.py
Adjust the image path and server IP for your environment.

import requests
import base64
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

#url = "http://<server IP after deployment>/facemask"
url = "http://127.0.0.1:8864/facemask"
image = 'images/5.jpg'
image_url = 'http://p7.qhmsg.com/dr/270_500_/t01439835ff6c40e12e.jpg'

local_image = True

s = requests.Session()
retry = Retry(connect=5, backoff_factor=1)
adapter = HTTPAdapter(max_retries=retry)
# Mount the adapter so the retry policy actually takes effect.
s.mount('http://', adapter)
s.mount('https://', adapter)

if local_image:
    with open(image, 'rb') as file:
        base64_data = base64.b64encode(file.read())
        img = base64_data.decode()
    post_info = {'img': img}
else:
    post_info = {'url': image_url}

r = s.post(url, json=post_info)
r.encoding = 'gb2312'
print(r.content.decode("unicode_escape"))
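To smoke-test the server without Python, an equivalent request can be sent with curl (this assumes the server from the previous section is running locally on port 8864):

curl -X POST http://127.0.0.1:8864/facemask \
     -H "Content-Type: application/json" \
     -d '{"url": "http://p7.qhmsg.com/dr/270_500_/t01439835ff6c40e12e.jpg"}'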

VI. Production Deployment with Docker

Option 1: write a Dockerfile and deploy with a single command (a build/run sketch appears after the Dockerfile in section 2 below). Drawback: it is easy to miss a package, and when the build fails, the problem can be hard to pin down.
Option 2: deploy step by step. Recommended for beginners.

1. Common Docker commands

See https://www.runoob.com/docker/ubuntu-docker-install.html for Docker installation and the commands for creating containers and images.

2. Deployment steps

First pull a base image A1; if you don't have one locally, pull it from Docker Hub. For example, since my model runs under PyTorch, nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 is a suitable choice. Instantiate a container B1 from image A1; then, inside B1, configure the environment, import the project, test the service, and so on. Finally, export container B1, import the exported archive C1 as a new image A2, create a new container B2 from A2, and run the service directly inside B2.

Why create, export, and create again? The first container's environment was configured step by step; deploying the project to another server would mean configuring the environment all over again. The second container is created from the exported archive, in which the environment is already baked in; for any later deployment, you only need to import that archive as a new image, with no environment setup, and the code runs out of the box.

Now let's deploy step by step. Note: this walkthrough uses CentOS.

  1. Install Docker
    Run on the local host:
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
  2. Check the Docker version
    Run on the local host:
docker version

If version information appears, the installation succeeded.

  3. Pull the base image
docker pull nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
  4. List all images
    Use docker images to list the images on the local host:
sudo docker images

[screenshot: output of sudo docker images]
Column meanings:
REPOSITORY: the image's source repository
TAG: the image's tag
IMAGE ID: the image ID
CREATED: when the image was created
SIZE: the image size
Here the image ID is a4bdb1190443.

  5. Create a container from the image, with port mapping (note: the examples below map port 8860, but server.py above listens on 8864; adjust the mapping to the port your service actually uses)
sudo docker run -it --runtime=nvidia -p 8860:8860 a4bdb1190443 /bin/bash
  6. List the containers
sudo docker ps -a 

[screenshot: output of sudo docker ps -a]
This screenshot shows a container I created earlier.
Suppose our container ID is 9345dde9dafb.
To delete a container, use:

docker rm -f <container ID>
  7. Copy the project into the container
    Run on the host:
    Format: docker cp <local project path> <container ID>:<container path>
docker cp /home/zhangjl396/code/Face-Mask-Detection-master 9345dde9dafb:/root

Note: the container path must not end with a trailing slash; use /root, not /root/.

  8. Enter the container

Run on the host:

sudo docker attach 9345dde9dafb
  9. Configure the project environment inside the container
    Run inside the container:
    If a Dockerfile has already been written, you can configure the environment by executing its RUN lines one at a time inside the container, dropping the leading RUN keyword. If installing requirements.txt fails, install the packages it lists one by one.
    The Dockerfile I used:
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04

COPY . /faceMask

WORKDIR /faceMask

RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g'  /etc/apt/sources.list && \
    sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list

RUN mv /etc/apt/sources.list.d/nvidia-ml.list /etc/apt/sources.list.d/nvidia-ml.list.bak && \
    mv /etc/apt/sources.list.d/cuda.list /etc/apt/sources.list.d/cuda.list.bak

RUN apt-get clean && apt-get update --fix-missing
RUN apt-get update -y && apt-get upgrade -y

RUN apt-get install -y libsm6 libxext6 libxrender-dev && \
    apt-get install -y python3 python3-pip

RUN pip3 install --upgrade pip

# opencv-contrib-python 依赖
RUN apt-get install -y libglib2.0-0 libgl1-mesa-glx
RUN pip3 install --user -i https://mirrors.aliyun.com/pypi/simple -r requirements.txt

CMD python3 server.py

EXPOSE 8860
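For reference, this is how option 1 (one-shot deployment from the Dockerfile above) would look; the image tag facemask:v1.0 is an arbitrary choice:

# Build an image from the Dockerfile in the project directory
sudo docker build -t facemask:v1.0 .
# Run it, mapping the port server.py actually listens on (8864 above)
sudo docker run -it --runtime=nvidia -p 8864:8864 facemask:v1.0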
  10. Export the container
    Once server.py runs correctly at this point, the container can be exported.
    Use the docker export command to export a local container; mind the output path.

Run on the host:

 docker export 9345dde9dafb > facemask.tar
  11. Import the container snapshot
    Use docker import to turn a container snapshot file back into an image. The following imports the snapshot facemask.tar as the image test/facemask:v1.0.

Format: cat <exported container>.tar | sudo docker import - <image name>:<version>
Run on the host:

cat facemask.tar | sudo docker import - test/facemask:v1.0
  12. Verify the import by listing all images
sudo docker images

[screenshot: output of sudo docker images]
The new image ID is 45be85df5b5e.

  13. Create a container from the new image, again with port mapping
sudo docker run -it --runtime=nvidia -p 8860:8860 45be85df5b5e /bin/bash
  14. List the containers
sudo docker ps -a 

[screenshot: output of sudo docker ps -a]
The new container ID is e200b990da9a.

  15. Enter the container
docker attach e200b990da9a 

Entering the new container is like logging into a fresh server: you can ls to inspect directories and cd into the project folder.

  16. Find the project's entry point and run it; a sketch follows below. That's it!
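For example, assuming the project was copied to /root as in step 7, starting the service might look like:

cd /root/Face-Mask-Detection-master
python3 server.py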

3. Related commands

  1. Copy files to a remote machine
scp local_file remote_username@remote_ip:remote_folder
scp local_file remote_username@remote_ip:remote_file
scp local_file remote_ip:remote_folder
scp local_file remote_ip:remote_file

When a username is specified, you only need to enter the password; without one, you must supply both the username and the password. A concrete example follows.
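For example, copying the exported archive to another server (the IP and path here are illustrative):

scp facemask.tar root@192.168.1.100:/root/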

  2. Log into a remote host
ssh username@IP
  3. The docker cp command
    docker cp copies data between a container and the host.
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
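For example, to copy a file from the container back to the current host directory (the file name is illustrative):

docker cp 9345dde9dafb:/root/Face-Mask-Detection-master/result.jpg .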