YOLOv3 model deployment in practice: converting .weights to ONNX and running inference

Preface:

I've been rather busy (read: lazy) lately writing my thesis, so the blog hasn't been updated very efficiently.
This post aims to walk you, hands-on, step by step through deploying a model from the Darknet framework. (Training with Darknet is covered in my earlier posts; chain the two together and you have a complete pipeline!)
We will convert a yolov3/yolov3-tiny model from the .weights format to .onnx, and then from .onnx to .trt.
This post is also a record and small summary of my own learning; if anything is lacking, please point it out, and suggestions are welcome!
Along the way I consulted blogs and community resources from many experts.
Note: the theory here is deliberately light; if you want a deeper theoretical treatment, read the official documentation.
All code involved in this post will be uploaded to my GitHub once the write-up is complete…

This post consists of two parts:
Part One: .weights -> .onnx

1. After converting the model to .onnx, run inference with onnxruntime;

  • The test code provides a demo for detection on images as well as a demo for video-stream inference;
Part Two: .weights -> .onnx -> .trt

2. Convert the model from .weights to .onnx, then to .trt, and run inference;

  • This part provides an image-testing demo; for video streams you can adapt the onnxruntime video-inference code from Part One;

Part One

The main code of this part comes from the TensorRT samples.

As usual, here is the rough plan for the project:

  • First, train a Darknet model, producing yolov3.weights or yolov3-tiny.weights;
  • Then convert the trained model, together with its matching network cfg file, to the ONNX format;
  • Finally, run inference with onnxruntime. (This post uses the CPU build of onnxruntime; you can switch to onnxruntime-gpu yourself.)

The Darknet model tested in this post is the official yolov3.weights, which detects 80 classes (the COCO dataset).
Next, let's follow this plan and implement each step.

1. Setting up the project environment

  • Create a new env for the experiment (I use conda and recommend it, although the setup does not strictly depend on Anaconda…)
conda create -n yolo_inference python=3.5
  • Activate the env, then install the required packages
source activate yolo_inference   # or: conda activate yolo_inference
pip install pillow
pip install opencv-python
pip install onnx==1.4.1
pip install onnxruntime==1.1.0
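Before moving on, it can save time to confirm that the env actually resolves these packages. A small sketch of such a check (note the import names differ from the pip names: opencv-python imports as cv2, pillow as PIL):

```python
# Check which of the required packages are importable in the active env.
import importlib.util

required = ('cv2', 'numpy', 'PIL', 'onnx', 'onnxruntime')
status = {pkg: importlib.util.find_spec(pkg) is not None for pkg in required}
for pkg in required:
    print('%-12s %s' % (pkg, 'OK' if status[pkg] else 'MISSING'))
```

If anything prints MISSING, re-run the corresponding `pip install` inside the activated env.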

2. Converting the .weights model to an ONNX model file

Note: this part of the code must be run under Python 2.
Here is the code (taken from the official samples):
yolov3_to_onnx.py

#!/usr/bin/env python2
#
# Copyright 1993-2018 NVIDIA Corporation.  All rights reserved.
#
# NOTICE TO LICENSEE:
#
# This source code and/or documentation ("Licensed Deliverables") are
# subject to NVIDIA intellectual property rights under U.S. and
# international Copyright laws.
#
# These Licensed Deliverables contained herein is PROPRIETARY and
# CONFIDENTIAL to NVIDIA and is being provided under the terms and
# conditions of a form of NVIDIA software license agreement by and
# between NVIDIA and Licensee ("License Agreement") or electronically
# accepted by Licensee.  Notwithstanding any terms or conditions to
# the contrary in the License Agreement, reproduction or disclosure
# of the Licensed Deliverables to any third party without the express
# written consent of NVIDIA is prohibited.
#
# NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
# LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
# SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE.  IT IS
# PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
# NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
# DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
# NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
# NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
# LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
# SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THESE LICENSED DELIVERABLES.
#
# U.S. Government End Users.  These Licensed Deliverables are a
# "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
# 1995), consisting of "commercial computer software" and "commercial
# computer software documentation" as such terms are used in 48
# C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
# only as a commercial end item.  Consistent with 48 C.F.R.12.212 and
# 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
# U.S. Government End Users acquire the Licensed Deliverables with
# only those rights set forth herein.
#
# Any use of the Licensed Deliverables in individual and commercial
# software must include, in the user documentation and internal
# comments to the code, the above Disclaimer and U.S. Government End
# Users Notice.
#

from __future__ import print_function
from collections import OrderedDict
import hashlib
import os.path


import onnx
from onnx import helper
from onnx import TensorProto
import numpy as np
import wget  # used by download_file(); install with `pip install wget`

import sys


class DarkNetParser(object):
    """Definition of a parser for DarkNet-based YOLOv3-608 (only tested for this topology)."""

    def __init__(self, supported_layers):
        """Initializes a DarkNetParser object.

        Keyword argument:
        supported_layers -- a string list of supported layers in DarkNet naming convention,
        parameters are only added to the class dictionary if a parsed layer is included.
        """

        # A list of YOLOv3 layers containing dictionaries with all layer
        # parameters:
        self.layer_configs = OrderedDict()
        self.supported_layers = supported_layers
        self.layer_counter = 0

    def parse_cfg_file(self, cfg_file_path):
        """Takes the yolov3.cfg file and parses it layer by layer,
        appending each layer's parameters as a dictionary to layer_configs.

        Keyword argument:
        cfg_file_path -- path to the yolov3.cfg file as string
        """
        with open(cfg_file_path, 'rb') as cfg_file:
            remainder = cfg_file.read()
            while remainder is not None:
                layer_dict, layer_name, remainder = self._next_layer(remainder)
                if layer_dict is not None:
                    self.layer_configs[layer_name] = layer_dict
        return self.layer_configs

    def _next_layer(self, remainder):
        """Takes in a string and segments it by looking for DarkNet delimiters.
        Returns the layer parameters and the remaining string after the last delimiter.
        Example for the first Conv layer in yolo.cfg ...

        [convolutional]
        batch_normalize=1
        filters=32
        size=3
        stride=1
        pad=1
        activation=leaky

        ... becomes the following layer_dict return value:
        {'activation': 'leaky', 'stride': 1, 'pad': 1, 'filters': 32,
        'batch_normalize': 1, 'type': 'convolutional', 'size': 3}.

        '001_convolutional' is returned as layer_name, and all lines that follow in yolo.cfg
        are returned as the next remainder.

        Keyword argument:
        remainder -- a string with all raw text after the previously parsed layer
        """
        remainder = remainder.split('[', 1)
        if len(remainder) == 2:
            remainder = remainder[1]
        else:
            return None, None, None
        remainder = remainder.split(']', 1)
        if len(remainder) == 2:
            layer_type, remainder = remainder
        else:
            return None, None, None
        if remainder.replace(' ', '')[0] == '#':
            remainder = remainder.split('\n', 1)[1]

        layer_param_block, remainder = remainder.split('\n\n', 1)
        layer_param_lines = layer_param_block.split('\n')[1:]
        layer_name = str(self.layer_counter).zfill(3) + '_' + layer_type
        layer_dict = dict(type=layer_type)
        if layer_type in self.supported_layers:
            for param_line in layer_param_lines:
                if param_line[0] == '#':
                    continue
                param_type, param_value = self._parse_params(param_line)
                layer_dict[param_type] = param_value
        self.layer_counter += 1
        return layer_dict, layer_name, remainder

    def _parse_params(self, param_line):
        """Identifies the parameters contained in one line of the cfg file and returns
        them in the required format for each parameter type, e.g. as a list, an int or a float.

        Keyword argument:
        param_line -- one parsed line within a layer block
        """
        param_line = param_line.replace(' ', '')
        param_type, param_value_raw = param_line.split('=')
        param_value = None
        if param_type == 'layers':
            layer_indexes = list()
            for index in param_value_raw.split(','):
                layer_indexes.append(int(index))
            param_value = layer_indexes
        elif isinstance(param_value_raw, str) and not param_value_raw.isalpha():
            condition_param_value_positive = param_value_raw.isdigit()
            condition_param_value_negative = param_value_raw[0] == '-' and \
                                             param_value_raw[1:].isdigit()
            if condition_param_value_positive or condition_param_value_negative:
                param_value = int(param_value_raw)
            else:
                param_value = float(param_value_raw)
        else:
            param_value = str(param_value_raw)
        return param_type, param_value


class MajorNodeSpecs(object):
    """Helper class used to store the names of ONNX output names,
    corresponding to the output of a DarkNet layer and its output channels.
    Some DarkNet layers are not created and there is no corresponding ONNX node,
    but we still need to track them in order to set up skip connections.
    """

    def __init__(self, name, channels):
        """ Initialize a MajorNodeSpecs object.

        Keyword arguments:
        name -- name of the ONNX node
        channels -- number of output channels of this node
        """
        self.name = name
        self.channels = channels
        self.created_onnx_node = False
        if name is not None and isinstance(channels, int) and channels > 0:
            self.created_onnx_node = True


class ConvParams(object):
    """Helper class to store the hyper parameters of a Conv layer,
    including its prefix name in the ONNX graph and the expected dimensions
    of weights for convolution, bias, and batch normalization.

    Additionally acts as a wrapper for generating safe names for all
    weights, checking on feasible combinations.
    """

    def __init__(self, node_name, batch_normalize, conv_weight_dims):
        """Constructor based on the base node name (e.g. 101_convolutional), the batch
        normalization setting, and the convolutional weights shape.

        Keyword arguments:
        node_name -- base name of this YOLO convolutional layer
        batch_normalize -- bool value if batch normalization is used
        conv_weight_dims -- the dimensions of this layer's convolutional weights
        """
        self.node_name = node_name
        self.batch_normalize = batch_normalize
        assert len(conv_weight_dims) == 4
        self.conv_weight_dims = conv_weight_dims

    def generate_param_name(self, param_category, suffix):
        """Generates a name based on two string inputs,
        and checks if the combination is valid."""
        assert suffix
        assert param_category in ['bn', 'conv']
        assert (suffix in ['scale', 'mean', 'var', 'weights', 'bias'])
        if param_category == 'bn':
            assert self.batch_normalize
            assert suffix in ['scale', 'bias', 'mean', 'var']
        elif param_category == 'conv':
            assert suffix in ['weights', 'bias']
            if suffix == 'bias':
                assert not self.batch_normalize
        param_name = self.node_name + '_' + param_category + '_' + suffix
        return param_name


class WeightLoader(object):
    """Helper class used for loading the serialized weights of a binary file stream
    and returning the initializers and the input tensors required for populating
    the ONNX graph with weights.
    """

    def __init__(self, weights_file_path):
        """Initialized with a path to the YOLOv3 .weights file.

        Keyword argument:
        weights_file_path -- path to the weights file.
        """
        self.weights_file = self._open_weights_file(weights_file_path)

    def load_conv_weights(self, conv_params):
        """Returns the initializers with weights from the weights file and
        the input tensors of a convolutional layer for all corresponding ONNX nodes.

        Keyword argument:
        conv_params -- a ConvParams object
        """
        initializer = list()
        inputs = list()
        if conv_params.batch_normalize:
            bias_init, bias_input = self._create_param_tensors(
                conv_params, 'bn', 'bias')
            bn_scale_init, bn_scale_input = self._create_param_tensors(
                conv_params, 'bn', 'scale')
            bn_mean_init, bn_mean_input = self._create_param_tensors(
                conv_params, 'bn', 'mean')
            bn_var_init, bn_var_input = self._create_param_tensors(
                conv_params, 'bn', 'var')
            initializer.extend(
                [bn_scale_init, bias_init, bn_mean_init, bn_var_init])
            inputs.extend([bn_scale_input, bias_input,
                           bn_mean_input, bn_var_input])
        else:
            bias_init, bias_input = self._create_param_tensors(
                conv_params, 'conv', 'bias')
            initializer.append(bias_init)
            inputs.append(bias_input)
        conv_init, conv_input = self._create_param_tensors(
            conv_params, 'conv', 'weights')
        initializer.append(conv_init)
        inputs.append(conv_input)
        return initializer, inputs

    def _open_weights_file(self, weights_file_path):
        """Opens a YOLOv3 DarkNet file stream and skips the header.

        Keyword argument:
        weights_file_path -- path to the weights file.
        """
        weights_file = open(weights_file_path, 'rb')
        length_header = 5
        np.ndarray(
            shape=(length_header,), dtype='int32', buffer=weights_file.read(
                length_header * 4))
        return weights_file

    def _create_param_tensors(self, conv_params, param_category, suffix):
        """Creates the initializers with weights from the weights file together with
        the input tensors.

        Keyword arguments:
        conv_params -- a ConvParams object
        param_category -- the category of parameters to be created ('bn' or 'conv')
        suffix -- a string determining the sub-type of above param_category (e.g.,
        'weights' or 'bias')
        """
        param_name, param_data, param_data_shape = self._load_one_param_type(
            conv_params, param_category, suffix)

        initializer_tensor = helper.make_tensor(
            param_name, TensorProto.FLOAT, param_data_shape, param_data)
        input_tensor = helper.make_tensor_value_info(
            param_name, TensorProto.FLOAT, param_data_shape)
        return initializer_tensor, input_tensor

    def _load_one_param_type(self, conv_params, param_category, suffix):
        """Deserializes the weights from a file stream in the DarkNet order.

        Keyword arguments:
        conv_params -- a ConvParams object
        param_category -- the category of parameters to be created ('bn' or 'conv')
        suffix -- a string determining the sub-type of above param_category (e.g.,
        'weights' or 'bias')
        """
        param_name = conv_params.generate_param_name(param_category, suffix)
        channels_out, channels_in, filter_h, filter_w = conv_params.conv_weight_dims
        if param_category == 'bn':
            param_shape = [channels_out]
        elif param_category == 'conv':
            if suffix == 'weights':
                param_shape = [channels_out, channels_in, filter_h, filter_w]
            elif suffix == 'bias':
                param_shape = [channels_out]
        param_size = np.prod(np.array(param_shape))
        param_data = np.ndarray(
            shape=param_shape,
            dtype='float32',
            buffer=self.weights_file.read(param_size * 4))
        param_data = param_data.flatten().astype(float)
        return param_name, param_data, param_shape


class GraphBuilderONNX(object):
    """Class for creating an ONNX graph from a previously generated list of layer dictionaries."""

    def __init__(self, output_tensors):
        """Initialize with all DarkNet default parameters used creating YOLOv3,
        and specify the output tensors as an OrderedDict for their output dimensions
        with their names as keys.

        Keyword argument:
        output_tensors -- the output tensors as an OrderedDict containing the keys'
        output dimensions
        """
        self.output_tensors = output_tensors
        self._nodes = list()
        self.graph_def = None
        self.input_tensor = None
        self.epsilon_bn = 1e-5
        self.momentum_bn = 0.99
        self.alpha_lrelu = 0.1
        self.param_dict = OrderedDict()
        self.major_node_specs = list()
        self.batch_size = 1

    def build_onnx_graph(
            self,
            layer_configs,
            weights_file_path,
            verbose=True):
        """Iterate over all layer configs (parsed from the DarkNet representation
        of YOLOv3-608), create an ONNX graph, populate it with weights from the weights
        file and return the graph definition.

        Keyword arguments:
        layer_configs -- an OrderedDict object with all parsed layers' configurations
        weights_file_path -- location of the weights file
        verbose -- toggles if the graph is printed after creation (default: True)
        """
        for layer_name in layer_configs.keys():
            layer_dict = layer_configs[layer_name]
            major_node_specs = self._make_onnx_node(layer_name, layer_dict)
            if major_node_specs.name is not None:
                self.major_node_specs.append(major_node_specs)
        outputs = list()
        for tensor_name in self.output_tensors.keys():
            output_dims = [self.batch_size, ] + \
                          self.output_tensors[tensor_name]
            output_tensor = helper.make_tensor_value_info(
                tensor_name, TensorProto.FLOAT, output_dims)
            outputs.append(output_tensor)
        inputs = [self.input_tensor]
        weight_loader = WeightLoader(weights_file_path)
        initializer = list()
        for layer_name in self.param_dict.keys():
            _, layer_type = layer_name.split('_', 1)
            conv_params = self.param_dict[layer_name]
            assert layer_type == 'convolutional'
            initializer_layer, inputs_layer = weight_loader.load_conv_weights(
                conv_params)
            initializer.extend(initializer_layer)
            inputs.extend(inputs_layer)
        del weight_loader
        self.graph_def = helper.make_graph(
            nodes=self._nodes,
            name='YOLOv3-608',
            inputs=inputs,
            outputs=outputs,
            initializer=initializer
        )
        if verbose:
            print(helper.printable_graph(self.graph_def))
        model_def = helper.make_model(self.graph_def,
                                      producer_name='NVIDIA TensorRT sample')
        return model_def

    def _make_onnx_node(self, layer_name, layer_dict):
        """Take in a layer parameter dictionary, choose the correct function for
        creating an ONNX node and store the information important to graph creation
        as a MajorNodeSpec object.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        layer_type = layer_dict['type']
        if self.input_tensor is None:
            if layer_type == 'net':
                major_node_output_name, major_node_output_channels = self._make_input_tensor(
                    layer_name, layer_dict)
                major_node_specs = MajorNodeSpecs(major_node_output_name,
                                                  major_node_output_channels)
            else:
                raise ValueError('The first node has to be of type "net".')
        else:
            node_creators = dict()
            node_creators['convolutional'] = self._make_conv_node
            node_creators['shortcut'] = self._make_shortcut_node
            node_creators['route'] = self._make_route_node
            node_creators['upsample'] = self._make_upsample_node

            if layer_type in node_creators.keys():
                major_node_output_name, major_node_output_channels = \
                    node_creators[layer_type](layer_name, layer_dict)
                major_node_specs = MajorNodeSpecs(major_node_output_name,
                                                  major_node_output_channels)
            else:
                print(
                    'Layer of type %s not supported, skipping ONNX node generation.' %
                    layer_type)
                major_node_specs = MajorNodeSpecs(layer_name,
                                                  None)
        return major_node_specs

    def _make_input_tensor(self, layer_name, layer_dict):
        """Create an ONNX input tensor from a 'net' layer and store the batch size.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        batch_size = layer_dict['batch']
        channels = layer_dict['channels']
        height = layer_dict['height']
        width = layer_dict['width']
        self.batch_size = batch_size
        input_tensor = helper.make_tensor_value_info(
            str(layer_name), TensorProto.FLOAT, [
                batch_size, channels, height, width])
        self.input_tensor = input_tensor
        return layer_name, channels

    def _get_previous_node_specs(self, target_index=-1):
        """Get a previously generated ONNX node (skip those that were not generated).
        Target index can be passed for jumping to a specific index.

        Keyword arguments:
        target_index -- optional for jumping to a specific index (default: -1 for jumping
        to previous element)
        """
        previous_node = None
        for node in self.major_node_specs[target_index::-1]:
            if node.created_onnx_node:
                previous_node = node
                break
        assert previous_node is not None
        return previous_node

    def _make_conv_node(self, layer_name, layer_dict):
        """Create an ONNX Conv node with optional batch normalization and
        activation nodes.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        previous_node_specs = self._get_previous_node_specs()
        inputs = [previous_node_specs.name]
        previous_channels = previous_node_specs.channels
        kernel_size = layer_dict['size']
        stride = layer_dict['stride']
        filters = layer_dict['filters']
        batch_normalize = False
        if 'batch_normalize' in layer_dict.keys(
        ) and layer_dict['batch_normalize'] == 1:
            batch_normalize = True

        kernel_shape = [kernel_size, kernel_size]
        weights_shape = [filters, previous_channels] + kernel_shape
        conv_params = ConvParams(layer_name, batch_normalize, weights_shape)

        strides = [stride, stride]
        dilations = [1, 1]
        weights_name = conv_params.generate_param_name('conv', 'weights')
        inputs.append(weights_name)
        if not batch_normalize:
            bias_name = conv_params.generate_param_name('conv', 'bias')
            inputs.append(bias_name)

        conv_node = helper.make_node(
            'Conv',
            inputs=inputs,
            outputs=[layer_name],
            kernel_shape=kernel_shape,
            strides=strides,
            auto_pad='SAME_LOWER',
            dilations=dilations,
            name=layer_name
        )
        self._nodes.append(conv_node)
        inputs = [layer_name]
        layer_name_output = layer_name

        if batch_normalize:
            layer_name_bn = layer_name + '_bn'
            bn_param_suffixes = ['scale', 'bias', 'mean', 'var']
            for suffix in bn_param_suffixes:
                bn_param_name = conv_params.generate_param_name('bn', suffix)
                inputs.append(bn_param_name)
            batchnorm_node = helper.make_node(
                'BatchNormalization',
                inputs=inputs,
                outputs=[layer_name_bn],
                epsilon=self.epsilon_bn,
                momentum=self.momentum_bn,
                name=layer_name_bn
            )
            self._nodes.append(batchnorm_node)
            inputs = [layer_name_bn]
            layer_name_output = layer_name_bn

        if layer_dict['activation'] == 'leaky':
            layer_name_lrelu = layer_name + '_lrelu'

            lrelu_node = helper.make_node(
                'LeakyRelu',
                inputs=inputs,
                outputs=[layer_name_lrelu],
                name=layer_name_lrelu,
                alpha=self.alpha_lrelu
            )
            self._nodes.append(lrelu_node)
            inputs = [layer_name_lrelu]
            layer_name_output = layer_name_lrelu
        elif layer_dict['activation'] == 'linear':
            pass
        else:
            print('Activation not supported.')

        self.param_dict[layer_name] = conv_params
        return layer_name_output, filters

    def _make_shortcut_node(self, layer_name, layer_dict):
        """Create an ONNX Add node with the shortcut properties from
        the DarkNet-based graph.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        shortcut_index = layer_dict['from']
        activation = layer_dict['activation']
        assert activation == 'linear'

        first_node_specs = self._get_previous_node_specs()
        second_node_specs = self._get_previous_node_specs(
            target_index=shortcut_index)
        assert first_node_specs.channels == second_node_specs.channels
        channels = first_node_specs.channels
        inputs = [first_node_specs.name, second_node_specs.name]
        shortcut_node = helper.make_node(
            'Add',
            inputs=inputs,
            outputs=[layer_name],
            name=layer_name,
        )
        self._nodes.append(shortcut_node)
        return layer_name, channels

    def _make_route_node(self, layer_name, layer_dict):
        """If the 'layers' parameter from the DarkNet configuration is only one index, continue
        node creation at the indicated (negative) index. Otherwise, create an ONNX Concat node
        with the route properties from the DarkNet-based graph.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        route_node_indexes = layer_dict['layers']
        if len(route_node_indexes) == 1:
            split_index = route_node_indexes[0]
            assert split_index < 0
            # Increment by one because we skipped the YOLO layer:
            split_index += 1
            self.major_node_specs = self.major_node_specs[:split_index]
            layer_name = None
            channels = None
        else:
            inputs = list()
            channels = 0
            for index in route_node_indexes:
                if index > 0:
                    # Increment by one because we count the input as a node (DarkNet
                    # does not)
                    index += 1
                route_node_specs = self._get_previous_node_specs(
                    target_index=index)
                inputs.append(route_node_specs.name)
                channels += route_node_specs.channels
            assert inputs
            assert channels > 0

            route_node = helper.make_node(
                'Concat',
                axis=1,
                inputs=inputs,
                outputs=[layer_name],
                name=layer_name,
            )
            self._nodes.append(route_node)
        return layer_name, channels

    def _make_upsample_node(self, layer_name, layer_dict):
        """Create an ONNX Upsample node with the properties from
        the DarkNet-based graph.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        upsample_factor = float(layer_dict['stride'])
        previous_node_specs = self._get_previous_node_specs()
        inputs = [previous_node_specs.name]
        channels = previous_node_specs.channels
        assert channels > 0
        upsample_node = helper.make_node(
            'Upsample',
            mode='nearest',
            # For ONNX versions <0.7.0, Upsample nodes accept different parameters than 'scales':
            scales=[1.0, 1.0, upsample_factor, upsample_factor],
            inputs=inputs,
            outputs=[layer_name],
            name=layer_name,
        )
        self._nodes.append(upsample_node)
        return layer_name, channels


def generate_md5_checksum(local_path):
    """Returns the MD5 checksum of a local file.

    Keyword argument:
    local_path -- path of the file whose checksum shall be generated
    """
    with open(local_path, 'rb') as local_file:  # binary mode so the MD5 is computed byte-for-byte
        data = local_file.read()
        return hashlib.md5(data).hexdigest()


def download_file(local_path, link, checksum_reference=None):
    """Checks if a local file is present and downloads it from the specified path otherwise.
    If checksum_reference is specified, the file's md5 checksum is compared against the
    expected value.

    Keyword arguments:
    local_path -- path of the file whose checksum shall be generated
    link -- link where the file shall be downloaded from if it is not found locally
    checksum_reference -- expected MD5 checksum of the file
    """
    if not os.path.exists(local_path):
        print('Downloading from %s, this may take a while...' % link)
        wget.download(link, local_path)
        print()
    if checksum_reference is not None:
        checksum = generate_md5_checksum(local_path)
        if checksum != checksum_reference:
            raise ValueError(
                'The MD5 checksum of local file %s differs from %s, please manually remove \
                 the file and try again.' %
                (local_path, checksum_reference))
    return local_path


def main():
    """Run the DarkNet-to-ONNX conversion for YOLOv3-608."""
    # Have to use python 2 due to hashlib compatibility
    if sys.version_info[0] > 2:
        raise Exception("This script is only compatible with python2, please re-run this script \
    with python2. The rest of this sample can be run with either version of python")

    # Download the config for YOLOv3 if not present yet, and analyze the checksum:
    cfg_file_path = 'config/yolov3.cfg'

    # These are the only layers DarkNetParser will extract parameters from. The three layers of
    # type 'yolo' are not parsed in detail because they are included in the post-processing later:
    supported_layers = ['net', 'convolutional', 'shortcut',
                        'route', 'upsample']

    # Create a DarkNetParser object, and the use it to generate an OrderedDict with all
    # layer's configs from the cfg file:
    parser = DarkNetParser(supported_layers)
    layer_configs = parser.parse_cfg_file(cfg_file_path)
    # We do not need the parser anymore after we got layer_configs:
    del parser

    # In above layer_config, there are three outputs that we need to know the output
    # shape of (in CHW format):
    output_tensor_dims = OrderedDict()
    output_tensor_dims['082_convolutional'] = [255, 19, 19]
    output_tensor_dims['094_convolutional'] = [255, 38, 38]
    output_tensor_dims['106_convolutional'] = [255, 76, 76]

    # Create a GraphBuilderONNX object with the known output tensor dimensions:
    builder = GraphBuilderONNX(output_tensor_dims)

    # We want to populate our network with weights later, that's why we download those from
    # the official mirror (and verify the checksum):
    weights_file_path = 'yolov3.weights'

    # Now generate an ONNX graph with weights from the previously parsed layer configurations
    # and the weights file:
    yolov3_model_def = builder.build_onnx_graph(
        layer_configs=layer_configs,
        weights_file_path=weights_file_path,
        verbose=True)
    # Once we have the model definition, we do not need the builder anymore:
    del builder

    # Perform a sanity check on the ONNX model definition:
    onnx.checker.check_model(yolov3_model_def)

    # Serialize the generated ONNX graph to this file:
    output_file_path = 'yolov3_608.onnx'
    onnx.save(yolov3_model_def, output_file_path)


if __name__ == '__main__':
    main()

A few parameter-related parts of this code need a brief explanation, to keep you out of common pitfalls:

  • The .cfg config file
    First, set the batch and subdivisions parameters in the config file to 1;
    second, make sure the config file ends with a trailing blank line;
  • Pay attention to the lines below: if you later convert a model you trained yourself, replace 255 with the value that matches your model
output_tensor_dims['082_convolutional'] = [255, 19, 19] # 255 = 3 * (classes + 4 + 1)
output_tensor_dims['094_convolutional'] = [255, 38, 38] # 255 = 3 * (classes + 4 + 1)
output_tensor_dims['106_convolutional'] = [255, 76, 76] # 255 = 3 * (classes + 4 + 1)
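To make the relationship explicit, here is a small sketch (the function name is my own, not part of the sample) that computes these output shapes for an arbitrary class count, assuming the standard YOLOv3 output strides of 32, 16 and 8:

```python
# Sketch: expected YOLOv3 output tensor shapes (CHW) for a custom class count.
# Assumption: three detection heads at strides 32, 16, 8, three anchors per head.
def yolo_output_dims(num_classes, input_size=608):
    channels = 3 * (num_classes + 4 + 1)  # 3 anchors * (4 box terms + 1 objectness + classes)
    return [[channels, input_size // s, input_size // s] for s in (32, 16, 8)]

print(yolo_output_dims(80, 608))  # the official 80-class COCO model
```

For 80 classes at 608x608 this gives [255, 19, 19], [255, 38, 38] and [255, 76, 76], matching the three entries above; a single-class model would use 18 channels instead.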

After a successful run you will find a yolov3_608.onnx model in the directory you ran the script from; next we use onnxruntime to run inference on it.

3. Running model inference with onnxruntime

This section consists of three files:

  • darknet_api.py
  • onnx_img_inference.py
  • onnx_video_inference.py

1. First, a quick overview of darknet_api.py: it provides the bbox filtering logic plus the image/video preprocessing.
darknet_api.py

# coding: utf-8
# 2019-12-10
"""
YOLO-related pre/post-processing helpers;
"""
import cv2
import time
import numpy as np


# Load the label names;
def get_labels(names_file):
    names = list()
    with open(names_file, 'r') as f:
        lines = f.read()
        for name in lines.splitlines():
            names.append(name)
    return names


# Image preprocessing
def process_img(img_path, input_shape):
    ori_img = cv2.imread(img_path)
    img = cv2.resize(ori_img, input_shape)
    image = img[:, :, ::-1].transpose((2, 0, 1))
    image = image[np.newaxis, :, :, :] / 255
    image = np.array(image, dtype=np.float32)
    return ori_img, ori_img.shape, image


# Video-frame preprocessing
def frame_process(frame, input_shape):
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, input_shape)
    # image = cv2.resize(image, (640, 480))
    # Caution: this (x - 127) / 128 normalization differs from the / 255 used in
    # process_img above; for the same YOLOv3 ONNX model you will likely want them to match.
    image_mean = np.array([127, 127, 127])
    image = (image - image_mean) / 128
    image = np.transpose(image, [2, 0, 1])
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float32)
    return image


# Sigmoid function
def sigmoid(x):
    s = 1 / (1 + np.exp(-1 * x))
    return s


# Return the best class score and its index (an argmax over the class scores);
def get_result(class_scores):
    class_score = 0
    class_index = 0
    for i in range(len(class_scores)):
        if class_scores[i] > class_score:
            class_index = i
            class_score = class_scores[i]
    return class_score, class_index


# Filter bboxes by confidence threshold
def get_bbox(feat, anchors, image_shape, confidence_threshold=0.25):
    box = list()
    for i in range(len(anchors)):
        for cx in range(feat.shape[0]):
            for cy in range(feat.shape[1]):
                tx = feat[cx][cy][0 + 85 * i]
                ty = feat[cx][cy][1 + 85 * i]
                tw = feat[cx][cy][2 + 85 * i]
                th = feat[cx][cy][3 + 85 * i]
                cf = feat[cx][cy][4 + 85 * i]
                cp = feat[cx][cy][5 + 85 * i:85 + 85 * i]

                bx = (sigmoid(tx) + cx) / feat.shape[0]
                by = (sigmoid(ty) + cy) / feat.shape[1]
                bw = anchors[i][0] * np.exp(tw) / image_shape[0]
                bh = anchors[i][1] * np.exp(th) / image_shape[1]
                b_confidence = sigmoid(cf)
                b_class_prob = sigmoid(cp)
                b_scores = b_confidence * b_class_prob
                b_class_score, b_class_index = get_result(b_scores)

                if b_class_score >= confidence_threshold:
                    box.append([bx, by, bw, bh, b_class_score, b_class_index])
    return box


# Filter the collected bboxes with NMS
def nms(boxes, nms_threshold=0.6):
    l = len(boxes)
    if l == 0:
        return []
    else:
        b_x = boxes[:, 0]
        b_y = boxes[:, 1]
        b_w = boxes[:, 2]
        b_h = boxes[:, 3]
        scores = boxes[:, 4]
        areas = (b_w + 1) * (b_h + 1)
        order = scores.argsort()[::-1]
        keep = list()
        while order.size > 0:
            i = order[0]
            keep.append(i)
            xx1 = np.maximum(b_x[i], b_x[order[1:]])
            yy1 = np.maximum(b_y[i], b_y[order[1:]])
            xx2 = np.minimum(b_x[i] + b_w[i], b_x[order[1:]] + b_w[order[1:]])
            yy2 = np.minimum(b_y[i] + b_h[i], b_y[order[1:]] + b_h[order[1:]])

            # Intersection area (0 when the boxes do not overlap)
            w = np.maximum(0.0, xx2 - xx1 + 1)
            h = np.maximum(0.0, yy2 - yy1 + 1)
            inter = w * h
            # Union area: area1 + area2 - intersection
            union = areas[i] + areas[order[1:]] - inter
            # IoU = intersection / (area1 + area2 - intersection)
            IoU = inter / union
            # Keep only boxes whose IoU with the current box is below the threshold
            inds = np.where(IoU <= nms_threshold)[0]
            order = order[inds + 1]  # IoU is one shorter than order, so shift indices by one

        final_boxes = [boxes[i] for i in keep]
        return final_boxes


# Draw the predicted boxes
def draw_box(boxes, img, img_shape):
    label = ["person",  # COCO class 0 is "person"; the model outputs no background class
	        "bicycle", "car", "motorbike", "aeroplane",
	        "bus", "train", "truck", "boat", "traffic light",
	        "fire hydrant", "stop sign", "parking meter", "bench",
	        "bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
	        "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag",
	        "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball",
	        "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
	        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
	        "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
	        "pizza", "donut", "cake", "chair", "sofa", "potted plant", "bed", "dining table",
	        "toilet", "TV monitor", "laptop", "mouse", "remote", "keyboard", "cell phone",
	        "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
	        "scissors", "teddy bear", "hair drier", "toothbrush"]
    for box in boxes:
        x1 = int((box[0] - box[2] / 2) * img_shape[1])
        y1 = int((box[1] - box[3] / 2) * img_shape[0])
        x2 = int((box[0] + box[2] / 2) * img_shape[1])
        y2 = int((box[1] + box[3] / 2) * img_shape[0])
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(img, label[int(box[5])] + ":" + str(round(box[4], 3)), (x1 + 5, y1 + 10), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 0, 255), 1)
        print(label[int(box[5])] + ": score %.3f" % box[4])
    cv2.imshow('image', img)
    cv2.waitKey(10)
    cv2.destroyAllWindows()


# Collect predicted boxes from all output scales
def get_boxes(prediction, anchors, img_shape, confidence_threshold=0.25, nms_threshold=0.6):
    boxes = []
    for i in range(len(prediction)):
        feature_map = prediction[i][0].transpose((2, 1, 0))
        box = get_bbox(feature_map, anchors[i], img_shape, confidence_threshold)
        boxes.extend(box)
    Boxes = nms(np.array(boxes), nms_threshold)
    return Boxes
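To make the NMS above easier to reason about, the IoU at the core of its loop can be factored into a standalone helper (the function name is mine; it keeps the same +1 offsets and the same corner convention as nms()):

```python
# Sketch: IoU of two boxes given as (x, y, w, h), mirroring the arithmetic
# inside nms() above, including its +1 offsets.
def iou_xywh(a, b):
    xx1 = max(a[0], b[0])
    yy1 = max(a[1], b[1])
    xx2 = min(a[0] + a[2], b[0] + b[2])
    yy2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, xx2 - xx1 + 1) * max(0.0, yy2 - yy1 + 1)
    union = (a[2] + 1) * (a[3] + 1) + (b[2] + 1) * (b[3] + 1) - inter
    return inter / union
```

Two identical boxes give an IoU of 1.0 and fully disjoint boxes give 0.0. Note that because the box coordinates in this pipeline are normalized to [0, 1], the +1 offsets dominate the areas, which is worth keeping in mind when tuning nms_threshold.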

This part of the code is adapted from: https://blog.csdn.net/u013597931/article/details/89412272
Note the hard-coded 85 in the lines below, so you can adjust it later for your own model:

tx = feat[cx][cy][0 + 85 * i] # 85 = classes + 4 + 1
ty = feat[cx][cy][1 + 85 * i] # 85 = classes + 4 + 1
tw = feat[cx][cy][2 + 85 * i] # 85 = classes + 4 + 1
th = feat[cx][cy][3 + 85 * i] # 85 = classes + 4 + 1
cf = feat[cx][cy][4 + 85 * i] # 85 = classes + 4 + 1
cp = feat[cx][cy][5 + 85 * i:85 + 85 * i] # 85 = classes + 4 + 1
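One way to sanity-check those offsets is to reshape the feature map so the anchor index becomes an explicit axis. This sketch (my own, assuming the channel layout described above) shows that the flat index k + 85 * i and the reshaped view agree:

```python
import numpy as np

# Sketch: after reshaping (H, W, 3 * 85) into (H, W, 3, 85), indexing
# feat[..., k + 85 * i] equals feat4[..., i, k] for anchor i and field k.
stride = 80 + 4 + 1  # classes + 4 box terms + 1 objectness = 85
feat = np.random.rand(19, 19, 3 * stride).astype(np.float32)
feat4 = feat.reshape(19, 19, 3, stride)

for i in range(3):  # anchor index
    assert np.array_equal(feat[..., 0 + stride * i], feat4[..., i, 0])  # tx
    assert np.array_equal(feat[..., 4 + stride * i], feat4[..., i, 4])  # objectness
```

The same reshape is also the natural starting point if you want to vectorize get_bbox instead of looping over every grid cell in Python.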

2. With everything in place, we can now run inference on images

  • onnx_img_inference.py
# coding: utf-8
# author: hxy
# 2019-12-10

"""
Image inference;
runs on the CPU by default;
"""
import os
import time
import logging
import onnxruntime
from lib.darknet_api import process_img, get_boxes, draw_box


# Configure the log format
def log_set():
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')


# Load the ONNX model
def load_model(onnx_model):
    sess = onnxruntime.InferenceSession(onnx_model)
    in_name = [inp.name for inp in sess.get_inputs()][0]
    out_name = [out.name for out in sess.get_outputs()]
    logging.info("input name: {}, output names: {}".format(in_name, out_name))

    return sess, in_name, out_name


if __name__ == '__main__':
    log_set()
    input_shape = (608, 608)

    # anchors
    anchors_yolo = [[(116, 90), (156, 198), (373, 326)], [(30, 61), (62, 45), (59, 119)],
                    [(10, 13), (16, 30), (33, 23)]]
    anchors_yolo_tiny = [[(81, 82), (135, 169), (344, 319)], [(10, 14), (23, 27), (37, 58)]]
    session, inname, outname = load_model(onnx_model='yolov3_608.onnx')
    logging.info("Starting inference....")
    # Batch inference over a folder of images
    img_files_path = 'test_pic'
    imgs = os.listdir(img_files_path)

    logging.debug(imgs)
    for img_name in imgs:
        img_full_path = os.path.join(img_files_path, img_name)
        logging.debug(img_full_path)
        img, img_shape, testdata = process_img(img_path=img_full_path,
                                               input_shape=input_shape)
        s = time.time()
        prediction = session.run(outname, {inname: testdata})

        # logging.info("Inference on image %s took %.2f ms" % (img_name, ((time.time() - s) * 1000)))
        boxes = get_boxes(prediction=prediction,
                          anchors=anchors_yolo,
                          img_shape=input_shape)
        draw_box(boxes=boxes,
                 img=img,
                 img_shape=img_shape)
        logging.info("Inference on image %s took %.2f ms" % (img_name, ((time.time() - s) * 1000)))

In my tests on a modest MacBook Air, CPU inference on a single image takes several tens of milliseconds;

3. Video-stream inference

  • Frankly, the speed is marginal; real-time processing of a high-resolution RTSP stream would need further tuning and optimization of the code;
  • Also, the code I give here is single-process;
  • In my tests, handling the stream with multiple processes noticeably improves inference throughput on video;
    onnx_video_inference.py
# coding: utf-8
# author: hxy
# 2019-12-10
"""
Video-stream inference;
runs on the CPU by default;
"""

import cv2
import time
import logging
import numpy as np
import onnxruntime
from lib.darknet_api import get_boxes


# Configure the log format
def log_set():
    logging.basicConfig(level=logging.INFO, format='%(asctime)s -%(filename)s:%(lineno)d - %(levelname)s - %(message)s')


# Load the ONNX model
def load_model(onnx_model):
    sess = onnxruntime.InferenceSession(onnx_model)
    in_name = [inp.name for inp in sess.get_inputs()][0]
    out_name = [out.name for out in sess.get_outputs()]
    logging.info("input name: {}, output names: {}".format(in_name, out_name))

    return sess, in_name, out_name


def frame_process(frame, input_shape=(608, 608)):
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, input_shape)
    # image = cv2.resize(image, (640, 480))
    # Caution: this normalization differs from process_img's / 255; match them for the same model.
    image_mean = np.array([127, 127, 127])
    image = (image - image_mean) / 128
    image = np.transpose(image, [2, 0, 1])
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float32)
    return image


# Video-stream inference loop
def stream_inference():
    # 基本的参数设定
    label = ["person",  # COCO class 0 is "person"; the model outputs no background class
	        "bicycle", "car", "motorbike", "aeroplane",
	        "bus", "train", "truck", "boat", "traffic light",
	        "fire hydrant", "stop sign", "parking meter", "bench",
	        "bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
	        "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag",
	        "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball",
	        "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
	        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
	        "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
	        "pizza", "donut", "cake", "chair", "sofa", "potted plant", "bed", "dining table",
	        "toilet", "TV monitor", "laptop", "mouse", "remote", "keyboard", "cell phone",
	        "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
	        "scissors", "teddy bear", "hair drier", "toothbrush"]
    anchors_yolo_tiny = [[(81, 82), (135, 169), (344, 319)], [(10, 14), (23, 27), (37, 58)]]
    anchors_yolo = [[(116, 90), (156, 198), (373, 326)], [(30, 61), (62, 45), (59, 119)],
                    [(10, 13), (16, 30), (33, 23)]]
    session, in_name, out_name = load_model(onnx_model='yolov3_608.onnx')

    # rtsp = ''
    cap = cv2.VideoCapture(0)
    while True:
        _, frame = cap.read()
        input_shape = frame.shape
        s = time.time()
        test_data = frame_process(frame, input_shape=(608, 608))
        logging.info("process per pic spend time is:{}ms".format((time.time() - s)*1000))
        s1 = time.time()
        prediction = session.run(out_name, {in_name: test_data})
        s2 = time.time()
        print("prediction cost time: %.3f ms" % ((s2 - s1) * 1000))
        boxes = get_boxes(prediction=prediction,
                          anchors=anchors_yolo,
                          img_shape=(608, 608))
        print("get box cost time:{}ms".format((time.time()-s2)*1000))
        for box in boxes:
            x1 = int((box[0] - box[2] / 2) * input_shape[1])
            y1 = int((box[1] - box[3] / 2) * input_shape[0])
            x2 = int((box[0] + box[2] / 2) * input_shape[1])
            y2 = int((box[1] + box[3] / 2) * input_shape[0])
            logging.info(label[int(box[5])] + ":" + str(round(box[4], 3)))
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 1)
            cv2.putText(frame, label[int(box[5])] + ":" + str(round(box[4], 3)),
                        (x1 + 5, y1 + 10),
                        cv2.FONT_HERSHEY_SIMPLEX,
                        0.5,
                        (0, 0, 255),
                        1)

        frame = cv2.resize(frame, (0, 0), fx=0.7, fy=0.7)
        cv2.imshow("Results", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    log_set()
    stream_inference()
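On the multi-process suggestion above: the core idea is to decouple frame capture from inference so the model never waits on camera I/O. Below is a minimal thread-based sketch of that pipeline; all names are mine and the inference step is a stand-in (in real code the producer would call cap.read() and the consumer would call session.run):

```python
import queue
import threading

import numpy as np

# Sketch: a capture thread feeds a bounded queue; the main loop consumes frames.
# The bounded queue keeps memory flat when inference is slower than capture.
def capture(frames, n_frames=5):
    for _ in range(n_frames):
        frames.put(np.zeros((608, 608, 3), dtype=np.float32))  # stand-in for cap.read()
    frames.put(None)  # sentinel: end of stream

def run_pipeline():
    frames = queue.Queue(maxsize=4)
    threading.Thread(target=capture, args=(frames,), daemon=True).start()
    processed = 0
    while True:
        frame = frames.get()
        if frame is None:
            break
        # stand-in for: prediction = session.run(out_name, {in_name: frame})
        processed += 1
    return processed
```

Threads already help here because onnxruntime releases the GIL inside session.run; separate processes, as suggested above, sidestep the GIL entirely at the cost of passing frames between processes.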

That completes the whole process of running YOLOv3 inference with onnxruntime~

Still, a few points worth summarizing:

  1. I actually did the ONNX conversion and inference for both yolov3 and yolov3-tiny, but this post only covers yolov3; you can work through yolov3-tiny on your own. The test code works for both models: just change the anchors and the input size;
  2. The yolov3-tiny material will go up on my GitHub once the whole series is written, so we can learn from each other;
  3. yolov3-tiny inference is fast, and video streams can be processed in real time;
  4. I suggest trying this code in both CPU and GPU environments;
  5. Most of the code was written on a MacBook; if you copy it to Ubuntu and hit formatting errors, check for small whitespace issues in the code;
  6. My own knowledge is limited; corrections and suggestions are very welcome!
  7. This post also draws on other resources. Thanks to everyone who shares; let's keep learning and improving together!!
Thanks for reading! This material is for study and reference. If you found it helpful, please give it a like!
I have to say that, by comparison, TensorRT inference is faster and more stable; I will publish the second part of this series as soon as I can!!

PS: The yolov3-tiny model-conversion post is now written; it is linked here:
