Building a YOLOv3 Object Detection Service with Flask + PyTorch

Background:

Using YOLOv3 as the example, we build a Flask-based microservice that exposes the algorithm's capability over HTTP, and we settle on interface conventions for the service so that future algorithm models can be deployed quickly by following the same pattern.

Concepts:

WSGI: the Python Web Server Gateway Interface, a simple, general-purpose interface defined between web servers and Python web applications or frameworks.

Flask: a lightweight "micro" web framework for Python.

gunicorn: a WSGI HTTP server that runs on Unix-like systems.

nginx: an HTTP server and reverse proxy (clients are unaware that a proxy sits in front of the application). A minimal sketch of how these pieces fit together follows.
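The sketch below is not part of the project code; it only illustrates the relationship between the pieces. A Flask app object is itself a WSGI application, so Flask's built-in development server can run it for testing, while gunicorn serves the same object in production and nginx forwards client requests to gunicorn. The file name hello.py is hypothetical.

# hello.py (hypothetical example, not part of the project)
from flask import Flask

app = Flask(__name__)  # `app` is a WSGI application object

@app.route('/ping')
def ping():
    return 'pong'

if __name__ == '__main__':
    # Flask's built-in development server, good enough for local testing
    app.run(host='0.0.0.0', port=5000)

# In production, a WSGI server serves the same object instead, e.g.:
#   gunicorn -w 2 -b 127.0.0.1:5000 hello:app
# and nginx reverse-proxies public traffic to 127.0.0.1:5000.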

Preparation:

Install PyTorch, Flask, gunicorn, and nginx.

Note: installation steps are not covered here; please look them up yourself.

Download YOLOv3 and its pretrained weights:

git clone https://github.com/eriklindernoren/PyTorch-YOLOv3
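Cloning only fetches the code; the pretrained yolov3.weights file still needs to end up in the weights/ directory. The repository version used here ships a download helper script for this (if your checkout does not include it, download yolov3.weights from the official Darknet/YOLO site and copy it into weights/ manually):

cd PyTorch-YOLOv3/weights
bash download_weights.sh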

Create folders:

In the project root, create the following folders:

static: for static files; the annotated detection result is saved under static/images so it can be displayed in the browser.

templates: for the HTML templates.

upload.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Upload Image</title>
</head>
<body>
    <h1>Welcome to the Web Object Detection Service</h1>
    <form action="" enctype='multipart/form-data' method='POST'>
        <input type="file" name="file" style="margin-top:20px;"/>
        <br>
        <input type="submit" value="上传并识别" class="button-new" style="margin-top:15px;"/>
    </form>
</body>
</html>

upload_ok.html (the same upload form plus an <img> tag that shows the result; the _t=val1 query parameter, together with SEND_FILE_MAX_AGE_DEFAULT set in the server code, keeps the browser from showing a stale cached copy of static/images/test.jpg):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Detection Result</title>
</head>
<body>
    <h1>Welcome to the Web Object Detection Service</h1>
    <form action="" enctype='multipart/form-data' method='POST'>
        <input type="file" name="file" style="margin-top:20px;"/>
        <br>
        <input type="submit" value="上传并识别" class="button-new" style="margin-top:15px;"/>
    </form>
    <img src="{{ url_for('static', filename= 'images/test.jpg',_t=val1) }}" width="400" height="400" alt=""/>
</body>
</html>

upload: for the images uploaded by clients that are waiting for prediction (a one-line command that creates all three folders is shown below).
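All three folders can be created in one go from the project root (note that the detection result is written to static/images, so that subdirectory is needed as well):

mkdir -p static/images templates upload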

Create the server-side code, predict_server.py:

from flask import Flask, request, redirect, render_template, flash
from werkzeug.utils import secure_filename
import os
import time
from datetime import timedelta

from predict import *  # the refactored inference module (predict.py, created below)

app = Flask(__name__)

UPLOAD_FOLDER = 'upload'  # directory where uploaded images are saved
ALLOWED_EXTENSIONS = {'txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'}  # allowed upload file types

app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = timedelta(seconds=5)  # expire cached static files quickly so a fresh result image is shown
app.secret_key = os.urandom(16)  # flash() stores messages in the session, which requires a secret key

model = predict_init()  # load the model once at startup so it is not reloaded on every request

def allow_filename(filename):
    # the file name must contain a dot and end with an allowed extension
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/upload', methods=['GET', 'POST'])
def upload():
    if request.method == 'POST':
        if 'file' not in request.files:
            flash('no file part in the request')
            return redirect(request.url)

        f = request.files['file']
        if f.filename == '':
            flash('no file selected')
            return redirect(request.url)

        if f and allow_filename(f.filename):
            # secure_filename strips unsafe characters; note that it cannot
            # preserve Chinese (non-ASCII) file names
            filename = secure_filename(f.filename)

            filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
            f.save(filepath)

            img = predict_yolo(model, filepath)
            # val1 feeds the _t cache-busting parameter used in upload_ok.html
            return render_template('upload_ok.html', val1=time.time())
    return render_template('upload.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=2222)
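Besides the browser form, the /upload endpoint defines the service's interface for other programs: POST a multipart/form-data request with a field named file. A minimal client sketch, assuming the requests library is installed and the service is running locally on port 2222:

import requests

# upload one of the repository's sample images for detection
with open('data/samples/dog.jpg', 'rb') as f:
    resp = requests.post('http://127.0.0.1:2222/upload', files={'file': f})

print(resp.status_code)  # 200 on success; the response body is the rendered upload_ok.html page
# the annotated image itself is saved on the server as static/images/test.jpg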

Create (by refactoring) predict.py

Note: the detect.py interface in the original project is awkward to call from the server side, so it is refactored as follows:

# -*- coding: utf-8 -*-
"""
Created on Mon May  4 10:55:33 2020

@author: rain
"""


from __future__ import division

from models import *
from utils.utils import *
from utils.datasets import *

import os
import sys
import time
import datetime
import argparse

from PIL import Image

import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable

import matplotlib
matplotlib.use("Agg")  # non-interactive backend so plotting also works on a server without a display
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.ticker import NullLocator

def predict_init():
    model_def = 'config/yolov3.cfg'
    weights_path = 'weights/yolov3.weights'
    img_size = 416
    
    start = time.time()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    os.makedirs("output", exist_ok=True)

    # Set up model
    model = Darknet(model_def, img_size=img_size).to(device)
    end = time.time()
    print('model init time:',end-start)
    
    start = time.time()
    if weights_path.endswith(".weights"):
        # Load darknet weights
        model.load_darknet_weights(weights_path)
    else:
        # Load checkpoint weights
        model.load_state_dict(torch.load(weights_path))
    
    end = time.time()
    print('model load time:',end-start)
    # (the conversion of the input image into YOLO's input format happens in predict_yolo)
    return model

def predict_yolo(model,image_path='data/samples/dog.jpg'):
    parser = argparse.ArgumentParser()
    #parser.add_argument("--image_path", type=str, default="data/samples/dog.jpg", help="path to dataset")
    parser.add_argument("--model_def", type=str, default="config/yolov3.cfg", help="path to model definition file")
    parser.add_argument("--weights_path", type=str, default="weights/yolov3.weights", help="path to weights file")
    parser.add_argument("--class_path", type=str, default="data/coco.names", help="path to class label file")
    parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold")
    parser.add_argument("--nms_thres", type=float, default=0.4, help="iou thresshold for non-maximum suppression")
    parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
    parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
    parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
    parser.add_argument("--checkpoint_model", type=str, help="path to checkpoint model")
    # parse only the defaults (ignore sys.argv), so this also works when the module
    # is imported by a WSGI server such as gunicorn rather than run from the command line
    opt = parser.parse_args([])
    print(opt)

    
    start = time.time()
    img = transforms.ToTensor()(Image.open(image_path))
    # Pad to square resolution
    img, _ = pad_to_square(img, 0)
    # Resize
    img = resize(img, opt.img_size)
    
    model.eval()  # Set in evaluation mode

    classes = load_classes(opt.class_path)  # Extracts class labels from file

    Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
    print(img.size())
    input_imgs = Variable(img.type(Tensor))
    print(input_imgs.size())
    input_imgs=input_imgs.unsqueeze(0)
    print(input_imgs.size())
    # Get detections
    with torch.no_grad():
        detections = model(input_imgs)
        detections = non_max_suppression(detections, opt.conf_thres, opt.nms_thres)
    
    end = time.time()
    print('model predict time:',end-start)
    
    start = time.time()
    # Bounding-box colors
    cmap = plt.get_cmap("tab20b")
    colors = [cmap(i) for i in np.linspace(0, 1, 20)]
    
    # Create plot
    img = np.array(Image.open(image_path))
    # one figure per request; avoid creating extra figures that would accumulate in a long-running server
    fig, ax = plt.subplots(1)
    ax.imshow(img)
        
    # Draw bounding boxes and labels of detections
    if detections is not None:
        # Rescale boxes to original image
        detections = detections[0]
        detections = rescale_boxes(detections, opt.img_size, img.shape[:2])
        #rescale_boxes(detections, opt.img_size, img.shape[:2])
        unique_labels = detections[:, -1].cpu().unique()
        n_cls_preds = len(unique_labels)
        bbox_colors = random.sample(colors, n_cls_preds)
        for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:

            print("\t+ Label: %s, Conf: %.5f" % (classes[int(cls_pred)], cls_conf.item()))

            box_w = x2 - x1
            box_h = y2 - y1

            color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
            # Create a Rectangle patch
            bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=2, edgecolor=color, facecolor="none")
            # Add the bbox to the plot
            ax.add_patch(bbox)
            # Add label
            plt.text(
                x1,
                y1,
                s=classes[int(cls_pred)],
                color="white",
                verticalalignment="top",
                bbox={"color": color, "pad": 0},
            )
    
    # Save generated image with detections
    plt.axis("off")
    plt.gca().xaxis.set_major_locator(NullLocator())
    plt.gca().yaxis.set_major_locator(NullLocator())
    # the result page always displays static/images/test.jpg, so save the annotated
    # image there (create the directory first in case it does not exist)
    os.makedirs("static/images", exist_ok=True)
    plt.savefig("static/images/test.jpg", bbox_inches="tight", pad_inches=0.0)
    plt.close()
    
    end = time.time()
    print('model draw time:',end-start)
    
    return img
    
if __name__ == "__main__":
    model = predict_init()
    predict_yolo(model=model)
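The refactored module can be sanity-checked on its own before wiring it into Flask; run it directly and it uses the repository's sample image, prints the detected labels, and writes the annotated result to static/images/test.jpg:

python predict.py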

Start the service:
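For a quick local run, the Flask development server is enough:

python predict_server.py

For a production-style deployment, a typical setup (a sketch, assuming the layout above, where the WSGI application object is called app inside predict_server.py) is to have gunicorn serve the app and nginx act as the reverse proxy in front of it:

gunicorn -w 1 -b 127.0.0.1:2222 predict_server:app

A single worker (-w 1) is used because every gunicorn worker loads its own copy of the YOLOv3 model; in the nginx site configuration, point proxy_pass at http://127.0.0.1:2222 so that clients only talk to nginx and never see the backend directly.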

Test from a browser: 127.0.0.1:2222/upload

 

References:

https://blog.csdn.net/qq_31112205/article/details/101076676  A Flask-based YOLO object detection web service and API

https://blog.csdn.net/watermelon1123/article/details/88576639  Building an algorithm microservice with Flask + PyTorch (using YOLOv3 as the example)
