Integrating a mask_rcnn model with Flask

Source code for this example: https://github.com/junlintianxiatjm/FlaskVideo-master-lite

Steps to run:

1. Run the database script pipeline_monitor.sql, then point the database config file at your local MySQL address and credentials.

2. Extract the .rar file under FlaskVideo-master-lite\app\mrcnn\h5\shapes20210726T1823 to produce the mask_rcnn .h5 model file.

3. Start the Flask app by running manage.py.

4. Open http://localhost:5000/admin/login/

5. Log in with username/password: admin/flaskadmin

Click "Detect" to invoke the model file.

1. Background:

  • Flask

Flask is a highly extensible web framework: you can mix in all kinds of web development libraries and tools to build applications flexibly. Experienced developers are free to plug in the libraries and databases they prefer; the framework rarely dictates what to use, so developers can stay within their favorite technology stack.

Flask is built on the Werkzeug WSGI toolkit and the Jinja2 template engine.

WSGI

The Web Server Gateway Interface (WSGI) has become the standard for Python web application development. WSGI is a specification of a common interface between web servers and web applications.

Werkzeug

Werkzeug is a WSGI toolkit that implements request and response objects plus various utility functions, which makes it possible to build web frameworks on top of it. Flask uses Werkzeug as one of its foundations.

Jinja2

Jinja2 is a popular template engine for Python. A web templating system combines a template with a data source to render dynamic web pages.
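
To make the background concrete, here is a minimal standalone sketch (not part of the FlaskVideo-master-lite project) of a Flask app that renders a Jinja2 template:

# minimal standalone sketch; not part of the example project
from flask import Flask, render_template_string

app = Flask(__name__)

# Jinja2 template: {{ name }} is filled in at render time
PAGE = "<h1>Hello, {{ name }}!</h1>"

@app.route("/hello/<name>")
def hello(name):
    # Werkzeug handles the WSGI request/response; Jinja2 renders the body
    return render_template_string(PAGE, name=name)

if __name__ == "__main__":
    app.run(debug=True)  # Werkzeug's built-in development server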

  • mask_rcnn

mask_rcnn improves on Faster R-CNN by adding a mask branch, and belongs to the family of two-stage object detection algorithms.

2. Integrating mask_rcnn with Flask

Flask plays the role of the web framework, much like Spring Boot does in Java: it provides the basic scaffolding for web development (templates, routing, blueprints, and so on), and you add the project-specific features you need by pulling in third-party libraries. Flask and mask_rcnn are both Python, so there is no language barrier when integrating them, which makes this far smoother than going through a Java web framework.

For how to train mask_rcnn on a custom dataset, see:

https://github.com/junlintianxiatjm/Mask_RCNN-master007

Related post: "Training mask_rcnn on a custom dataset (verified on a local Windows 10 CPU; every pitfall hit along the way is listed with a fix)". Version info from that post: Python 3.6.9, TensorFlow 1.15.0, Keras 2.2.5, Pillow 5.3.0 (required, otherwise labelme's json_to_dataset fails), cv2 (required when training the model), wrapt, opt_einsum, gast, scikit-image, IPython. https://blog.csdn.net/shanxiderenheni/article/details/118832615

The model that mask_rcnn finally produces after training is a .h5 file. In Flask, you write detection methods for images and videos, then trigger them from user events on the page by passing in the image or video path; that is the overall approach.

3. Notes:

3.1. Complete list of packages installed in the Flask environment

(py36_pipelinemonitor_env) C:\Users\DELL>pip list
Package              Version
-------------------- -------------------
absl-py              0.13.0
alembic              1.7.1
astor                0.8.1
astunparse           1.6.3
backcall             0.2.0
cached-property      1.5.2
certifi              2021.5.30
Click                7.0
colorama             0.4.4
cycler               0.10.0
decorator            4.4.2
Flask                1.0.2
Flask-Migrate        2.7.0
flask-redis          0.4.0
Flask-Script         2.0.6
Flask-SQLAlchemy     2.3.2
Flask-WTF            0.14.2
flatbuffers          1.12
gast                 0.2.2
google-pasta         0.2.0
greenlet             1.1.1
grpcio               1.40.0
h5py                 2.10.0
imageio              2.9.0
importlib-metadata   3.10.0
importlib-resources  5.2.2
ipython              7.16.1
ipython-genutils     0.2.0
itsdangerous         0.24
jedi                 0.18.0
Jinja2               2.11.3
Keras                2.2.5
Keras-Applications   1.0.8
keras-nightly        2.5.0.dev2021032900
Keras-Preprocessing  1.1.2
kiwisolver           1.3.1
Mako                 1.1.5
Markdown             3.3.4
MarkupSafe           1.0
matplotlib           3.3.4
mysql-connector      2.2.9
networkx             2.5.1
numpy                1.19.5
object-detection     0.1
opencv-python        4.5.3.56
opt-einsum           3.3.0
pandas               1.1.5
parso                0.8.2
pickleshare          0.7.5
Pillow               8.3.2
pip                  21.0.1
prompt-toolkit       3.0.20
protobuf             3.17.3
Pygments             2.10.0
PyMySQL              1.0.2
pyparsing            2.4.7
pypinyin             0.42.0
python-dateutil      2.8.2
pytz                 2021.3
PyWavelets           1.1.1
PyYAML               5.4.1
redis                3.5.3
reportlab            3.6.1
scikit-image         0.16.2
scipy                1.4.1
setuptools           52.0.0.post20210125
six                  1.16.0
SQLAlchemy           1.3.0
tensorboard          1.15.0
tensorflow           1.15.0
tensorflow-estimator 1.15.1
termcolor            1.1.0
tifffile             2020.9.3
traitlets            4.3.3
typing-extensions    3.7.4.3
wcwidth              0.2.5
Werkzeug             0.14.1
wheel                0.37.0
wincertstore         0.2
wrapt                1.12.1
WTForms              2.2.1
xlrd                 2.0.1
zipp                 3.5.0

3.2. Move the entire mrcnn package from mask_rcnn under the Flask app directory (app/)
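
Once mrcnn sits under app/, it is imported through the app package, exactly as detect.py in the next section does:

# after moving mrcnn under app/, import it via the app package (matches detect.py below)
from app.mrcnn.config import Config
from app.mrcnn import model as modellib
from app.mrcnn import visualize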

3.3. Write the object-detection class file detect.py (the core methods)

# -*- coding: utf-8 -*-

"""
@author: tjm
@software: PyCharm
@file: detect.py
@time: 2021/9/16 9:48
"""

import os
import sys
import skimage.io

from app import app
from app.mrcnn.config import Config
from datetime import datetime
from pathlib import Path
import cv2

# Root directory of the project
ROOT_DIR = os.getcwd()

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
# from mrcnn import utils
from app.mrcnn import model as modellib
from app.mrcnn import visualize

# Import COCO config
# sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # To find local version
# from samples.coco import coco


# Directory that holds the trained .h5 model files
MODEL_DIR = os.path.join(ROOT_DIR, "app", "mrcnn", "h5")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(MODEL_DIR, "shapes20210726T1823/mask_rcnn_shapes_0014.h5")
# Automatic weight download is disabled here; just warn if the local .h5 file is missing
if not os.path.exists(COCO_MODEL_PATH):
    # utils.download_trained_weights(COCO_MODEL_PATH)
    print("junlintianxia************ h5 model does not exist ***********")

# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")


class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 8 images per GPU. We can put multiple images on each
    # GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

    # Number of classes (including background)
    NUM_CLASSES = 1 + 12  # background + 12 shapes

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 80
    IMAGE_MAX_DIM = 512

    # Use smaller anchors because our image and objects are small
    # RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels
    RPN_ANCHOR_SCALES = (8 * 2, 16 * 2, 32 * 2, 64 * 2, 128 * 2)

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 10

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 10

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 5


# import train_tongue
# class InferenceConfig(coco.CocoConfig):
class InferenceConfig(ShapesConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1


# Image detection
class detect_image():

    def __init__(self, imgPath=None):
        self.imgPath = imgPath
        self.config = InferenceConfig()
        # Create model object in inference mode.
        self.model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=self.config)

        # Load the custom-trained weights (.h5 file)
        self.model.load_weights(COCO_MODEL_PATH, by_name=True)

        # Class names for the custom dataset ('BG' is background);
        # a class's index in this list is its ID
        self.class_names = ['BG', "AJ", "BX", "CJ", "CK", "CR", "FZ", "JG", "PL", "QF", "TJ", "ZC", "ZW"]
        # Load a random image from the images folder
        # file_names = next(os.walk(IMAGE_DIR))[2]
        # image = skimage.io.imread("./images/CJ6798.jpg")
        self.image = skimage.io.imread(imgPath)

    def call(self):
        a = datetime.now()
        # Run detection
        results = self.model.detect([self.image], verbose=1)
        b = datetime.now()
        # Visualize results
        print("time: ", (b - a).seconds)
        r = results[0]

        print("=======", r)

        visualize.display_instances(self.image, r['rois'], r['masks'], r['class_ids'],
                                    self.class_names, r['scores'])


# Video detection
class detect_video():

    def __init__(self, videoPath=None):
        self.videoPath = videoPath
        self.video_capture = cv2.VideoCapture(self.videoPath)

        self.config = InferenceConfig()
        # Create model object in inference mode.
        self.model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=self.config)

        # Load the custom-trained weights (.h5 file)
        self.model.load_weights(COCO_MODEL_PATH, by_name=True)

        # Class names for the custom dataset ('BG' is background);
        # a class's index in this list is its ID
        self.class_names = ['BG', "AJ", "BX", "CJ", "CK", "CR", "FZ", "JG", "PL", "QF", "TJ", "ZC", "ZW"]

    def do_detect(self):
        a = datetime.now()
        # Run detection
        # TODO: create the save path dynamically; after saving, update the defect-folder path
        # in the video table (the table needs two new columns: a "detected" flag and the result path)
        # TODO: dynamic path creation is done; what remains is the database change adding
        # the "detected" flag and result-path columns
        # Use the video file name as the name of the results folder
        video_name = os.path.splitext(os.path.basename(self.videoPath))[0]
        # Build the save path; results go into the "results" folder under the project root.
        # To save into the model's own result folder instead, configure that in visualize.display_instances.
        folder_path = os.path.join('results', str(video_name))
        # save_path = os.path.join(ROOT_DIR, folder_path)
        # If the folder does not exist yet, create it;
        # if it already exists, clear its contents (the user clicked "re-detect")
        if not Path(folder_path).exists():
            os.makedirs(folder_path)
            print('Results will be saved to', folder_path)
        else:
            files = os.listdir(folder_path)
            for file in files:
                c_path = os.path.join(folder_path, file)
                os.remove(c_path)
        while True:
            ret, frame = self.video_capture.read()
            if ret:
                # TODO: try converting frames to grayscale and compare performance
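                # Note: cv2 returns frames in BGR order, while the Mask R-CNN demos feed RGB
                # images, so a cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) conversion may be needed here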
                results = self.model.detect([frame], verbose=1)
                b = datetime.now()
                # Visualize results
                print("time: ", (b - a).seconds)
                r = results[0]

                print("=======", r)
                cur_class_id = len(r['class_ids'])  # number of instances detected in this frame
                if cur_class_id:
                    # TODO: visualize.display_instances needs one extra parameter: the folder where
                    # the defect results are saved (file_path), i.e. the folder_path built above
                    visualize.display_instances(frame, r['rois'], r['masks'], r['class_ids'],
                                                self.class_names, folder_path, r['scores'])
            else:
                break
        self.video_capture.release()  # release the capture handle once all frames have been read
        return folder_path


if __name__ == "__main__":
    # det = detect_image("./images/CJ6798.jpg")
    # det.call()

    root_path = app.config['UP_DIR']  # directory where uploaded files are saved
    det = detect_video(
        r"D:\python-workspace\FlaskVideo-master\app\static\video\21-09-13\2202109131608391c1babe4b2204fd0a9fca5e0c1db731d.mp4")
    det.do_detect()

 

3.4. In a Flask blueprint, pass in the video path and run detection.
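
A rough sketch of what such a blueprint view might look like (the blueprint name, route, form field, and import path below are illustrative, not taken from the repository):

# illustrative sketch only; blueprint name, route and import path are assumptions
from flask import Blueprint, request, jsonify
from app.detect import detect_video  # adjust to wherever detect.py lives in the project

video_bp = Blueprint("video_detect", __name__)

@video_bp.route("/detect_video", methods=["POST"])
def run_video_detection():
    video_path = request.form["video_path"]   # path of the uploaded video
    det = detect_video(video_path)             # loads the .h5 weights and opens the video
    folder_path = det.do_detect()              # runs frame-by-frame detection and saves results
    return jsonify({"status": "ok", "results": folder_path})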

Detection in progress:

3.5. Image detection
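
For images, detect_image from detect.py is used the same way; a minimal usage sketch (the import path and image path are only examples):

# minimal usage sketch; adjust the import to wherever detect.py lives
from app.detect import detect_image

det = detect_image("./images/CJ6798.jpg")  # loads the weights and reads the image
det.call()                                 # runs inference and displays the detected masks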
