Building a WebAPI Platform with PaddleOCR + Flask + Layui from Scratch (Part 1: Bank Card Recognition)

Preface

Our business needed bank card number recognition. To keep costs down, I surveyed various open-source frameworks and settled on PaddleOCR + Flask + Layui to build an OCR platform exposing a web API. This article walks through the whole setup from the ground up, aiming to be beginner-friendly; apologies in advance for any shortcomings. The source code, including the trained models, is linked at the end and can be run locally or deployed straight to Linux.

1. Environment Setup

  1. Download the source code

    Download the release/2.8 branch of PaddleOCR:
    https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.8

  2. Prepare the runtime environment

    Use Anaconda to create the Python environment (this article uses Python 3.8.5); see the official guide for details:
    https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/environment.md

  3. Install the requirements

    From the project root, install the dependencies in requirements.txt. If downloads are slow, try switching to a domestic (Chinese) mirror.

pip install -r requirements.txt

Accurate card-number recognition breaks down into two steps: detect where the text is, then recognize the characters at those locations. If images may be rotated, a direction classifier is needed as well; this article covers training and inference for text detection and text recognition only. See the official guide:
https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/training.md

2. Text Detection

  1. Training data
    Training data comes from one of two sources: annotate your own images with PPOCRLabel, or use an already-annotated dataset.
    PPOCRLabel installation and usage tutorial:
    https://www.jianshu.com/p/4133fbf91981
    A text-detection dataset of 3000+ annotated bank card images:
    https://download.csdn.net/download/YY007H/85374437
  2. Train the model
    Before training, read the official docs, which cover training and fine-tuning in detail:
    Text detection:
    https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/detection.md
    Model fine-tuning:
    https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/finetune.md
    This article fine-tunes the PP-OCRv3 model (config file: ch_PP-OCRv3_det_student.yml, pretrained model: ch_PP-OCRv3_det_distill_train.tar).
    ch_PP-OCRv3_det_distill_train.tar download:
    https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_distill_train.tar
    The ch_PP-OCRv3_det_student.yml config file:
Global:
  debug: false
  use_gpu: true
  epoch_num: 1200 # maximum number of training epochs
  log_smooth_window: 20 # log queue length; the median of the queue is printed each time
  print_batch_step: 2 # interval (in steps) between log prints
  save_model_dir: ./output/ch_PP-OCRv3_det_student/ # output model path
  save_epoch_step: 1200 # interval (in epochs) between model saves
  eval_batch_step: 1500 # interval (in steps) between evaluations
  cal_metric_during_train: false
  pretrained_model: ./pretrain_models/ch_PP-OCRv3_det_distill_train/student.pdparams # pretrained model path
  checkpoints: null
  save_inference_dir: null
  use_visualdl: true
  infer_img: doc/imgs_en/img_10.jpg
  save_res_path: ./output/ch_PP-OCRv3_det_student/predicts_db.txt
  distributed: true

Architecture:
  model_type: det
  algorithm: DB
  Transform:
  Backbone:
    name: MobileNetV3
    scale: 0.5
    model_name: large
    disable_se: True
  Neck:
    name: RSEFPN
    out_channels: 96
    shortcut: True
  Head:
    name: DBHead
    k: 50

Loss:
  name: DBLoss
  balance_loss: true
  main_loss_type: DiceLoss
  alpha: 5
  beta: 10
  ohem_ratio: 3
Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    name: Cosine
    learning_rate: 0.001
    warmup_epoch: 2
  regularizer:
    name: L2
    factor: 0
PostProcess:
  name: DBPostProcess
  thresh: 0.3
  box_thresh: 0.6
  max_candidates: 1000
  unclip_ratio: 1.5
Metric:
  name: DetMetric
  main_indicator: hmean
Train:
  dataset:
    name: SimpleDataSet
    data_dir: ./pretrain_models/train_data/ # path to the annotated dataset
    label_file_list:
      - "./pretrain_models/train_data/bank/bank1/real_det_train.txt"
      - "./pretrain_models/train_data/bank/bank2/real_det_train.txt"
      - "./pretrain_models/train_data/bank/bank3/real_det_train.txt"
    ratio_list: [ 1.0, 1.0 , 1.0 ]
    transforms:
    - DecodeImage:
        img_mode: BGR
        channel_first: false
    - DetLabelEncode: null
    - IaaAugment:
        augmenter_args:
        - type: Fliplr
          args:
            p: 0.5
        - type: Affine
          args:
            rotate:
            - -10
            - 10
        - type: Resize
          args:
            size:
            - 0.5
            - 3
    - EastRandomCropData:
        size:
        - 960
        - 960
        max_tries: 50
        keep_ratio: true
    - MakeBorderMap:
        shrink_ratio: 0.4
        thresh_min: 0.3
        thresh_max: 0.7
    - MakeShrinkMap:
        shrink_ratio: 0.4
        min_text_size: 8
    - NormalizeImage:
        scale: 1./255.
        mean:
        - 0.485
        - 0.456
        - 0.406
        std:
        - 0.229
        - 0.224
        - 0.225
        order: hwc
    - ToCHWImage: null
    - KeepKeys:
        keep_keys:
        - image
        - threshold_map
        - threshold_mask
        - shrink_map
        - shrink_mask
  loader:
    shuffle: true
    drop_last: false
    batch_size_per_card: 14 # per-card batch size
    num_workers: 14
Eval:
  dataset:
    name: SimpleDataSet
    data_dir: ./pretrain_models/train_data/
    label_file_list:
      - "./pretrain_models/train_data/bank/bank1/real_det_test.txt"
      - "./pretrain_models/train_data/bank/bank2/real_det_test.txt"
      - "./pretrain_models/train_data/bank/bank3/real_det_test.txt"
    ratio_list: [ 1.0, 1.0 ,1.0 ]
    transforms:
    - DecodeImage:
        img_mode: BGR
        channel_first: false
    - DetLabelEncode: null
    - DetResizeForTest: null
    - NormalizeImage:
        scale: 1./255.
        mean:
        - 0.485
        - 0.456
        - 0.406
        std:
        - 0.229
        - 0.224
        - 0.225
        order: hwc
    - ToCHWImage: null
    - KeepKeys:
        keep_keys:
        - image
        - shape
        - polys
        - ignore_tags
  loader:
    shuffle: false
    drop_last: false
    batch_size_per_card: 1
    num_workers: 8

Detailed descriptions of the config parameters:
https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/config.md
Adjust batch_size_per_card and num_workers under Train and Eval to fit your machine; batch_size_per_card under Eval must be 1.
Run training:

python -m paddle.distributed.launch --gpus 0 tools/train.py  -c pretrain_models/ch_PP-OCRv3_det_student.yml

This article's machine is an RTX 3060 (12 GB); training took roughly four days. Your time will depend on hardware and training configuration.

  3. Validate the model

    Run the following command:

python tools/infer_det.py -c pretrain_models/ch_PP-OCRv3_det_student.yml -o Global.pretrained_model="./output/ch_PP-OCRv3_det_student/best_accuracy" Global.infer_img="./output/ch_PP-OCRv3_det_student/det_input/03.png"
  4. Export the model

    Modify the source to fix two issues: the exported model behaving inconsistently with the training-time model, and detection boxes that come out too small:

File 1: tools/infer/predict_det.py
Change:

"DetResizeForTest": {
	# "limit_side_len": args.det_limit_side_len,
	# "limit_type": args.det_limit_type,
	"resize_long": args.det_resize_long,
}
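The intent of this change is that `resize_long` scales the image so its longer side equals `det_resize_long` (960 here), instead of capping the shorter side, which suits wide bank card photos. The sketch below illustrates the idea, with both sides rounded to a multiple of 32 as DB-style detectors expect; the exact rounding inside PaddleOCR's DetResizeForTest may differ, so treat it as an illustration rather than the real implementation:

```python
import math

def resize_long_shape(h, w, resize_long=960, stride=32):
    """Scale (h, w) so the longer side becomes resize_long, then
    round both sides up to a multiple of stride (network constraint)."""
    ratio = float(resize_long) / max(h, w)
    new_h = int(math.ceil(h * ratio / stride)) * stride
    new_w = int(math.ceil(w * ratio / stride)) * stride
    return new_h, new_w
```

A 720x1280 photo, for example, maps to 544x960, so the full card width is preserved in one detection pass.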

File 2: tools/infer/utility.py
Change:

parser.add_argument("--det_resize_long", type=float, default=960)
parser.add_argument("--det_db_unclip_ratio", type=float, default=3)

Export the model with:

python tools/export_model.py -c pretrain_models/ch_PP-OCRv3_det_student.yml -o Global.pretrained_model="./output/ch_PP-OCRv3_det_student/best_accuracy" Global.save_inference_dir="./inference/ch_PP-OCRv3_det_student/"
  5. Run inference

Pick an image and test the detection:

python tools/infer/predict_det.py --det_algorithm="DB" --det_model_dir="./inference/ch_PP-OCRv3_det_student/" --image_dir="./output/ch_PP-OCRv3_det_student/det_input/03.png" --use_gpu=True

3. Text Recognition

  1. Training data
    A dataset of cropped bank card number images, for card-number recognition training:
    https://download.csdn.net/download/YY007H/88571384

  2. Train the model
    Before training, read the official docs, which cover training and fine-tuning in detail:
    Text recognition:
    https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/recognition.md
    Model fine-tuning:
    https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/finetune.md
    This article fine-tunes the PP-OCRv3 model (config file: ch_PP-OCRv3_rec_distillation.yml, pretrained model: ch_PP-OCRv3_rec_train.tar).
    ch_PP-OCRv3_rec_train.tar download:
    https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_train.tar
    The ch_PP-OCRv3_rec_distillation.yml config file:

Global:
  debug: false
  use_gpu: true
  epoch_num: 500
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/ch_PP-OCRv3_rec_train/ # output model path
  save_epoch_step: 50
  eval_batch_step: 100
  cal_metric_during_train: true
  pretrained_model: ./pretrain_models/ch_PP-OCRv3_rec_train/best_accuracy.pdparams # pretrained model path
  checkpoints:
  save_inference_dir:
  use_visualdl: false
  infer_img: doc/imgs_words/ch/word_1.jpg
  character_dict_path: ppocr/utils/ppocr_keys_bank.txt
  max_text_length: &max_text_length 25
  infer_mode: false
  use_space_char: False
  distributed: true
  save_res_path: ./output/ch_PP-OCRv3_rec_train/rec/ch_PP-OCRv3_rec_train.txt
  d2s_train_image_shape: [3, 48, -1]


Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    name: Piecewise
    decay_epochs : [700]
    values : [0.0005, 0.00005]
    warmup_epoch: 5
  regularizer:
    name: L2
    factor: 3.0e-05


Architecture:
  model_type: &model_type "rec"
  name: DistillationModel
  algorithm: Distillation
  Models:
    Teacher:
      pretrained:
      freeze_params: false
      return_all_feats: true
      model_type: *model_type
      algorithm: SVTR_LCNet
      Transform:
      Backbone:
        name: MobileNetV1Enhance
        scale: 0.5
        last_conv_stride: [1, 2]
        last_pool_type: avg
        last_pool_kernel_size: [2, 2]
      Head:
        name: MultiHead
        head_list:
          - CTCHead:
              Neck:
                name: svtr
                dims: 64
                depth: 2
                hidden_dims: 120
                use_guide: False
              Head:
                name: CTCHead
                fc_decay: 0.00001
          - SARHead:
              enc_dim: 512
              max_text_length: *max_text_length
    Student:
      pretrained:
      freeze_params: false
      return_all_feats: true
      model_type: *model_type
      algorithm: SVTR_LCNet
      Transform:
      Backbone:
        name: MobileNetV1Enhance
        scale: 0.5
        last_conv_stride: [1, 2]
        last_pool_type: avg
        last_pool_kernel_size: [2, 2]
      Head:
        name: MultiHead
        head_list:
          - CTCHead:
              Neck:
                name: svtr
                dims: 64
                depth: 2
                hidden_dims: 120
                use_guide: True
              Head:
                fc_decay: 0.00001
          - SARHead:
              enc_dim: 512
              max_text_length: *max_text_length
Loss:
  name: CombinedLoss
  loss_config_list:
  - DistillationDMLLoss:
      weight: 1.0
      act: "softmax"
      use_log: true
      model_name_pairs:
      - ["Student", "Teacher"]
      key: head_out
      multi_head: True
      dis_head: ctc
      name: dml_ctc
  - DistillationDMLLoss:
      weight: 0.5
      act: "softmax"
      use_log: true
      model_name_pairs:
      - ["Student", "Teacher"]
      key: head_out
      multi_head: True
      dis_head: sar
      name: dml_sar
  - DistillationDistanceLoss:
      weight: 1.0
      mode: "l2"
      model_name_pairs:
      - ["Student", "Teacher"]
      key: backbone_out
  - DistillationCTCLoss:
      weight: 1.0
      model_name_list: ["Student", "Teacher"]
      key: head_out
      multi_head: True
  - DistillationSARLoss:
      weight: 1.0
      model_name_list: ["Student", "Teacher"]
      key: head_out
      multi_head: True

PostProcess:
  name: DistillationCTCLabelDecode
  model_name: ["Student", "Teacher"]
  key: head_out
  multi_head: True

Metric:
  name: DistillationMetric
  base_metric_name: RecMetric
  main_indicator: acc
  key: "Student"
  ignore_space: False

Train:
  dataset:
    name: SimpleDataSet
    data_dir: ./pretrain_models/rec_train_data/ # path to the annotated dataset
    ext_op_transform_idx: 1
    label_file_list:
      - "./pretrain_models/rec_train_data/bank/rec/bank1/real_rec_train.txt"
      - "./pretrain_models/rec_train_data/bank/rec/bank2/real_rec_train.txt"
      - "./pretrain_models/rec_train_data/bank/rec/bank3/real_rec_train.txt"
    ratio_list: [ 1.0, 1.0 , 1.0 ]
    transforms:
    - DecodeImage:
        img_mode: BGR
        channel_first: false
    - RecAug:
    - MultiLabelEncode:
    - RecResizeImg:
        image_shape: [3, 48, 320]
    - KeepKeys:
        keep_keys:
        - image
        - label_ctc
        - label_sar
        - length
        - valid_ratio
  loader:
    shuffle: true
    batch_size_per_card: 32
    drop_last: true
    num_workers: 8
Eval:
  dataset:
    name: SimpleDataSet
    data_dir: ./pretrain_models/rec_train_data/
    label_file_list:
      - "./pretrain_models/rec_train_data/bank/rec/bank1/real_rec_test.txt"
      - "./pretrain_models/rec_train_data/bank/rec/bank2/real_rec_test.txt"
      - "./pretrain_models/rec_train_data/bank/rec/bank3/real_rec_test.txt"
    ratio_list: [ 1.0, 1.0 , 1.0 ]
    transforms:
    - DecodeImage:
        img_mode: BGR
        channel_first: false
    - MultiLabelEncode:
    - RecResizeImg:
        image_shape: [3, 48, 320]
    - KeepKeys:
        keep_keys:
        - image
        - label_ctc
        - label_sar
        - length
        - valid_ratio
  loader:
    shuffle: false
    drop_last: false
    batch_size_per_card: 32
    num_workers: 8

Detailed descriptions of the config parameters:
https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/doc/doc_ch/config.md
Adjust batch_size_per_card and num_workers under Train and Eval to fit your machine.

Run training:

python -m paddle.distributed.launch --gpus 0 tools/train.py -c pretrain_models/ch_PP-OCRv3_rec_distillation.yml

Export the model:

python tools/export_model.py -c pretrain_models/ch_PP-OCRv3_rec_distillation.yml -o Global.pretrained_model="./output/ch_PP-OCRv3_rec_train/best_accuracy" Global.save_inference_dir="./inference/ch_PP-OCRv3_rec_train/"

The recognition training set is roughly 10,000 images; on the same RTX 3060 (12 GB), training took about two days, again depending on your hardware and training configuration.

4. Chaining Detection and Recognition with Flask + Layui

  1. Install Flask
    Flask is a Python web framework that is simpler and quicker to pick up than Django.
pip install Flask

In the project root, create a web folder for the Flask code. For a pleasant online experience, the front end uses the Layui framework.
Download the Layui static files into the web directory, from:
https://layui.dev/
The static folder holds css, font, js and other static assets; templates holds the HTML (Flask's default template directory); temp holds the images submitted for OCR and their results:
(screenshot: directory layout)
The front-end layout is simple: a menu bar on top, file upload on the left, results on the right:
(screenshot: page layout)

  2. Wire up the pipeline
    Create an app.py as the web API entry point, and add a view that returns the HTML page:
@app.route('/')
def index_view():
    return render_template('index.html')

Register a static blueprint for the temp directory so the front end can reach the recognition result files inside it:

app = Flask(__name__)
temp_bp = Blueprint('temp_bp', __name__, static_folder='temp')
app.register_blueprint(temp_bp, url_prefix='/')

Add an endpoint that recognizes an uploaded file:

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files.get("file")
    name, extension = os.path.splitext(os.path.basename(file.filename))
    filename = str(int(time.time())) + extension
    filepath = os.path.join("temp", filename)
    file.save(filepath)
    bank_card_num, bank_card_file = ocr(filepath, True)

    return jsonify({'success': True, 'bank_card_num': bank_card_num, 'bank_card_file': bank_card_file})
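One caveat in the snippet above (and in the /ocr endpoint below, which uses the same pattern): `str(int(time.time()))` yields identical filenames for two requests arriving in the same second. A collision-free alternative, sketched here as a suggestion rather than part of the original code, is to use `uuid`:

```python
import os
import uuid

def make_temp_filename(original_name, temp_dir="temp"):
    """Build a collision-free path under temp_dir while keeping
    the uploaded file's extension."""
    _, extension = os.path.splitext(os.path.basename(original_name))
    return os.path.join(temp_dir, uuid.uuid4().hex + extension)
```

With this, concurrent uploads can never overwrite each other's temp files.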

Add an endpoint that downloads a remote file and recognizes it:

@app.route("/ocr", methods=["POST"])
def ocr_api():  # named ocr_api so it does not shadow the ocr() helper defined below
    url = request.form['url']
    file_url = url.split('/')[-1]
    name, extension = os.path.splitext(os.path.basename(file_url))
    filename = str(int(time.time())) + extension
    filepath = os.path.join("temp", filename)

    r = requests.get(url)
    with open(filepath, 'wb') as temp_file:
        temp_file.write(r.content)

    bank_card_num, bank_card_file = ocr(filepath, False)
    return jsonify({'success': True, 'bank_card_num': bank_card_num, 'bank_card_file': bank_card_file})

Add the recognition helper:

def ocr(image_file, is_visualize):
    cfg = merge_configs()

    text_sys = TextSystem(cfg)
    predicted_data = read_image(image_file)
    dt_boxes, rec_res, time_dict = text_sys(predicted_data)

    result_file = None
    if is_visualize:
        result_file = draw_result(dt_boxes, rec_res, image_file)

    # guard against images where nothing was detected/recognized
    if not rec_res:
        return None, result_file
    return rec_res[0][0], result_file
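Since valid bank card numbers satisfy the Luhn checksum, an optional post-processing step (my addition, not part of the original code) can flag obvious misreads before the number is returned to the caller:

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn check that
    bank card numbers satisfy."""
    digits = [int(c) for c in card_number if c.isdigit()]
    if len(digits) < 12:  # card numbers are typically 12-19 digits
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A failed check could trigger a retry with different preprocessing, or simply be surfaced to the caller as low confidence.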

Add the config-loading helper:

def merge_configs():
    backup_argv = copy.deepcopy(sys.argv)
    sys.argv = sys.argv[:1]
    cfg = parse_args()

    update_cfg_map = vars(read_params())

    for key in update_cfg_map:
        cfg.__setattr__(key, update_cfg_map[key])

    sys.argv = copy.deepcopy(backup_argv)
    return cfg

Add the image-loading helper:

def read_image(img_path):
    assert os.path.isfile(
        img_path), "The {} isn't a valid file.".format(img_path)
    img = cv2.imread(img_path)
    if img is None:
        return None
    return img

Add the helper that renders the recognition result image:

def draw_result(dt_boxes, rec_res, image_file):
    img = read_image(image_file)
    image = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    boxes = dt_boxes
    txts = [rec_res[i][0] for i in range(len(rec_res))]
    scores = [rec_res[i][1] for i in range(len(rec_res))]

    draw_img = draw_ocr_box_txt(
        image,
        boxes,
        txts,
        scores,
        0.5,
        font_path="../doc/fonts/simfang.ttf"
    )

    name, extension = os.path.splitext(os.path.basename(image_file))
    directory = os.path.dirname(image_file)

    result_file = os.path.join(directory, name + "_result" + extension)

    cv2.imwrite(
        result_file,
        draw_img[:, :, ::-1],
    )
    return result_file

Add the entry point:

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8321)

Create a params.py file to hold the configuration:

def read_params():
    cfg = Config()

    # params for text detector
    cfg.det_algorithm = "DB"
    cfg.det_model_dir = "/data/paddle_ocr/models/ch_PP-OCRv3_det_student/"

    # params for text recognizer
    cfg.rec_model_dir = "/data/paddle_ocr/models/ch_PP-OCRv3_rec_train/Student"
    cfg.rec_char_dict_path = "/data/paddle_ocr/models/ch_PP-OCRv3_rec_train/ppocr_keys_bank.txt"
    
    cfg.use_gpu = False
    cfg.ir_optim = True

    return cfg
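`Config` above is not a Flask or PaddleOCR class; it is assumed here to be a plain attribute container, for example:

```python
class Config:
    """Plain attribute container; read_params() assigns the OCR
    settings onto an instance dynamically."""
    pass
```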

The trained detection model, recognition model, and recognition dictionary all go in the models directory at the project root; whether to enable the GPU at deployment depends on your server.

  3. Deploy to Linux
    Set up the Python environment on the Linux machine and install the dependencies from requirements.txt, then create a run.sh file in the web directory with the following content:
nohup /usr/bin/python3 /data/paddle_ocr/web/app.py > app.log 2>&1 &

Start it with:

cd /data/paddle_ocr/web 
chmod +x run.sh 
sh run.sh

Besides launching with the python command directly, you can also deploy the Flask app behind a WSGI server such as Gunicorn or uWSGI; that is beyond the scope of this article.

5. Results

Online upload demo:
(screenshots: upload page and recognition result)

Remote-download API call:
(screenshot: API response)

Source code:
WebAPI platform based on PaddleOCR + Flask + Layui (Part 1: Bank Card Recognition)
