[Flask + Deep Learning] Displaying deep-learning visual detection result images on Ubuntu, served over the web and configured so a phone browser can open them

Introduction

An earlier post documented Flask interacting with MongoDB:
https://blog.csdn.net/qq_41358574/article/details/117845077

Tools to install first: PyCharm, the usual set of PyTorch packages, and the Flask packages.
This machine has no CUDA, so the model is loaded onto the CPU:

device = torch.device('cpu')
# Set up model
model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
if opt.weights_path.endswith(".weights"):
    # Load darknet weights
    model.load_darknet_weights(opt.weights_path)
else:
    # Load checkpoint weights; map_location is needed on a CPU-only machine
    model.load_state_dict(torch.load(opt.weights_path, map_location=device))
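
For reference, the device can also be picked automatically when a GPU is present; this standard PyTorch pattern appears commented out in the full script below:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # fall back to CPU without a GPU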

The project loads the model under the PyTorch framework, runs detection on the input images, and saves the results to a folder; Flask then renders a front-end page (with an animated heading) to display them, and the page can be opened from a phone browser by entering ip:port.
Result:
[screenshots of the rendered results page]
(The heading text is animated, which static screenshots cannot show.)

The Flask file

Directory layout:
[screenshot of the project tree]
utils/ and models.py contain the deep-learning code. detect.py acts as the main script; the object-detection code also lives there, so it is fairly long.

from __future__ import division

from models import *
from utils.utils import *
from utils.datasets import *
from utils.augmentations import *
from utils.transforms import *

import os
import sys
import time
import datetime
import argparse

from PIL import Image

import torch
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable

import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.ticker import NullLocator
from flask import Flask, request, make_response, render_template
import socket
from time import sleep

myhost = socket.gethostbyname(socket.gethostname())  # LAN IP of this machine
app = Flask(__name__)
igpath = '/home/heziyi/pic/'

@app.route('/', methods=['GET', 'POST'])  # methods lists the HTTP verbs this route accepts
def home():
    return render_template('index.html')


#@app.route('/img/<string:filename>', methods=['GET'])
@app.route('/img/', methods=['GET'])
def display():
    if request.method == 'GET':
        # if filename is None:
        #     pass
        # else:
        image = open("static/he_21.png", "rb").read()  # path relative to the app root, not "/static"
        # image = open(igpath + filename, "rb").read()
        response = make_response(image)
        response.headers['Content-Type'] = 'image/png'  # the file is a PNG, not JPEG
        return response
if __name__ == "__main__":

    parser = argparse.ArgumentParser()
    parser.add_argument("--image_folder", type=str, default="data/custom/dd", help="path to dataset")
    parser.add_argument("--model_def", type=str, default="config/yolov3-custom.cfg", help="path to model definition file")
    parser.add_argument("--weights_path", type=str, default="checkpoints/ckpt_88.pth", help="path to weights file")
    parser.add_argument("--class_path", type=str, default="data/custom/classes.names", help="path to class label file")
    parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold")
    parser.add_argument("--nms_thres", type=float, default=0.4, help="iou thresshold for non-maximum suppression")
    parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
    parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
    parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
    parser.add_argument("--checkpoint_model", type=str,default="checkpoints/ckpt_88.pth",help="path to checkpoint model")
    opt = parser.parse_args()
    print(opt)

    #device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    device = torch.device('cpu')
    os.makedirs("../output", exist_ok=True)

    # Set up model
    model = Darknet(opt.model_def, img_size=opt.img_size).to(device)

    if opt.weights_path.endswith(".weights"):
        # Load darknet weights
        model.load_darknet_weights(opt.weights_path)
    else:
        # Load checkpoint weights
        model.load_state_dict(torch.load(opt.weights_path, map_location=device))
        # map_location=device lets a GPU-trained checkpoint load on this CPU-only machine
    model.eval()  # Set in evaluation mode

    dataloader = DataLoader(
        ImageFolder(opt.image_folder,
                    transform=transforms.Compose([DEFAULT_TRANSFORMS, Resize(opt.img_size)])),
        batch_size=opt.batch_size,
        shuffle=False,
        num_workers=opt.n_cpu,
    )

    classes = load_classes(opt.class_path)  # Extracts class labels from file

    Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor

    imgs = []  # Stores image paths
    img_detections = []  # Stores detections for each image index

    print("\nPerforming object detection:")
    prev_time = time.time()
    for batch_i, (img_paths, input_imgs) in enumerate(dataloader):
        # Configure input
        input_imgs = Variable(input_imgs.type(Tensor))

        # Get detections
        with torch.no_grad():
            detections = model(input_imgs)
            detections = non_max_suppression(detections, opt.conf_thres, opt.nms_thres)

        # Log progress
        current_time = time.time()
        inference_time = datetime.timedelta(seconds=current_time - prev_time)
        prev_time = current_time
        print("\t+ Batch %d, Inference Time: %s" % (batch_i, inference_time))

        # Save image and detections
        imgs.extend(img_paths)
        img_detections.extend(detections)

    # Bounding-box colors
    cmap = plt.get_cmap("tab20b")
    colors = [cmap(i) for i in np.linspace(0, 1, 20)]

    print("\nSaving images:")
    # Iterate through images and save plot of detections
    for img_i, (path, detections) in enumerate(zip(imgs, img_detections)):

        print("(%d) Image: '%s'" % (img_i, path))

        # Create plot
        img = np.array(Image.open(path))
        fig, ax = plt.subplots(1)  # plt.subplots creates its own figure; a separate plt.figure() would leak one per image
        ax.imshow(img)

        # Draw bounding boxes and labels of detections
        if detections is not None:
            # Rescale boxes to original image
            detections = rescale_boxes(detections, opt.img_size, img.shape[:2])
            unique_labels = detections[:, -1].cpu().unique()
            n_cls_preds = len(unique_labels)
            bbox_colors = random.sample(colors, n_cls_preds)
            for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:

                print("\t+ Label: %s, Conf: %.5f" % (classes[int(cls_pred)], cls_conf.item()))

                box_w = x2 - x1
                box_h = y2 - y1

                color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
                # Create a Rectangle patch
                bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=2, edgecolor=color, facecolor="none")
                print(int(box_w) * int(box_h))  # bounding-box area in pixels
                # if box_w * box_h > 10000:  # leftover experiment: pulse a serial line for large objects
                #     se.write("1".encode())
                #     time.sleep(3)
                #     se.write("0".encode())
                # Add the bbox to the plot
                ax.add_patch(bbox)
                # Add label
                plt.text(
                    x1,
                    y1,
                    s=classes[int(cls_pred)],
                    color="white",
                    verticalalignment="top",
                    bbox={"color": color, "pad": 0},
                )

        # Save generated image with detections
        plt.axis("off")
        plt.gca().xaxis.set_major_locator(NullLocator())
        plt.gca().yaxis.set_major_locator(NullLocator())
        filename = os.path.basename(path).split(".")[0]
        output_path = os.path.join("../output", f"{filename}.png")
        plt.savefig(output_path, bbox_inches="tight", pad_inches=0.0)
        plt.close()

    app.run()  # start the Flask server; note this belongs outside the image loop, after all results are saved

The display() function returns the image directly when its URL is opened in the browser:

@app.route('/img/', methods=['GET'])
def display():
    if request.method == 'GET':
        image = open("static/he_21.png", "rb").read()
        # image = open(igpath + filename, "rb").read()
        response = make_response(image)
        response.headers['Content-Type'] = 'image/png'
        return response
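
The commented-out lines hint at a route that takes the filename as a URL parameter. As a minimal sketch (not from the original post; the route name display_named is made up here), the same idea can be written with Flask's built-in send_from_directory, which also guards against path traversal and picks the Content-Type from the file extension:

from flask import send_from_directory

@app.route('/img/<string:filename>', methods=['GET'])
def display_named(filename):
    # Serve any saved result image under static/ by name
    return send_from_directory("static", filename)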

Apart from the final app.run(), everything above is deep-learning code; anyone who has studied it should find it easy to follow.

Front-end code

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
     <link rel="stylesheet" type="text/css" href="static/Login.css"/>
</head>
<body>

 <h1 >检测结果</h1>

<h1>hello!!!!!!!!!!!!```</h1>
<h2>this is the detection result</h2>
<img src="/static/he_21.png">
<img src="/static/he_14.png">
<img src="/static/he_4.png">
</body>
</html>

css:

html{
 width: 100%;
 height: 100%;
 overflow: hidden;
 font-family: sans-serif; /* was "font-style: sans-serif", which is not a valid font-style value */
}
body{
 width: 100%;
 height: 100%;
 font-family: 'Open Sans',sans-serif;
 margin: 0;
 background-color: #4A374A;
}
   img
        {
            width: 300px; /* was misspelled "widht" */
            height: 300px;
            border: 1px solid red;
        }
  h1
        {
            text-align:center;
            color:#fff;
            font-size:48px;
            text-shadow: 1px 1px 1px #ccc,
                       0 0 10px #fff,
                       0 0 20px #fff,
                       0 0 30px #fff,
                       0 0 40px #ff00de,
                       0 0 70px #ff00de,
                       0 0 80px #ff00de,
                       0 0 100px #ff00de,
                       0 0 150px #ff00de;
            transform-style: preserve-3d;
            -moz-transform-style: preserve-3d;
            -webkit-transform-style: preserve-3d;
            -ms-transform-style: preserve-3d;
            -o-transform-style: preserve-3d;
            animation: run  ease-in-out 9s infinite;
            -moz-animation: run  ease-in-out 9s infinite ;
            -webkit-animation: run  ease-in-out 9s infinite;
            -ms-animation: run  ease-in-out 9s infinite;
            -o-animation: run  ease-in-out 9s infinite;
          }
        @keyframes run
        {
            0% {transform:rotateX(-5deg) rotateY(0);}
            50%
            {
                transform:rotateX(0) rotateY(180deg);
                text-shadow: 1px  1px 1px #ccc,
                           0 0 10px #fff,
                           0 0 20px #fff,
                           0 0 30px #fff,
                           0 0 40px #3EFF3E,
                           0 0 70px #3EFFff,
                           0 0 80px #3EFFff,
                           0 0 100px #3EFFee,
                           0 0 150px #3EFFee;
            }
            100% {transform:rotateX(5deg) rotateY(360deg);}
        }
        @-webkit-keyframes run
        {
            0% {transform:rotateX(-5deg) rotateY(0);}
            50%
            {
                transform:rotateX(0) rotateY(180deg);
                text-shadow: 1px  1px 1px #ccc,
                           0 0 10px #fff,
                           0 0 20px #fff,
                           0 0 30px #fff,
                           0 0 40px #3EFF3E,
                           0 0 70px #3EFFff,
                           0 0 80px #3EFFff,
                           0 0 100px #3EFFee,
                           0 0 150px #3EFFee;
            }
            100% {transform:rotateX(5deg) rotateY(360deg);}
        }
        @-moz-keyframes run
        {
            0% {transform:rotateX(-5deg) rotateY(0);}
            50%
            {
                transform:rotateX(0) rotateY(180deg);
                text-shadow: 1px  1px 1px #ccc,
                           0 0 10px #fff,
                           0 0 20px #fff,
                           0 0 30px #fff,
                           0 0 40px #3EFF3E,
                           0 0 70px #3EFFff,
                           0 0 80px #3EFFff,
                           0 0 100px #3EFFee,
                           0 0 150px #3EFFee;
            }
            100% {transform:rotateX(5deg) rotateY(360deg);}
        }
        @-ms-keyframes run
        {
            0% {transform:rotateX(-5deg) rotateY(0);}
            50%
            {
                transform:rotateX(0) rotateY(180deg);
                text-shadow: 1px  1px 1px #ccc,
                           0 0 10px #fff,
                           0 0 20px #fff,
                           0 0 30px #fff,
                           0 0 40px #3EFF3E,
                           0 0 70px #3EFFff,
                           0 0 80px #3EFFff,
                           0 0 100px #3EFFee,
                           0 0 150px #3EFFee;
            }
            100% {transform:rotateX(5deg) rotateY(360deg);}
        }

Commands to run

export FLASK_APP=detect.py  # Linux, hence export rather than set
flask run --host=0.0.0.0 --port=8000
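
Equivalently (an alternative to flask run, assuming detect.py is launched directly), app.run() can bind to all interfaces itself, so a phone on the same LAN can reach http://<machine-ip>:8000/:

app.run(host="0.0.0.0", port=8000)  # listen on every interface, not just localhost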
Remember to open the port in the Ubuntu firewall:

sudo ufw allow 8000
Appendix:
Allow/deny rules

sudo ufw allow|deny [service]

sudo ufw allow smtp   # allow all external IPs to access port 25/tcp (smtp) on this machine

sudo ufw allow 22/tcp   # allow all external IPs to access port 22/tcp (ssh)

sudo ufw allow 53   # allow external access to port 53 (tcp/udp)

sudo ufw allow from 192.168.1.100   # allow this IP to access all local ports

sudo ufw allow proto udp from 192.168.0.1 port 53 to 192.168.0.2 port 53
Check the firewall status:

sudo ufw status
