TensorFlow model conversion

Contents

tflite2onnx

tflite2tensorflow

Installing dependencies:

4. Build flatc

5. Install cmake:

Error: The CXX compiler identification is unknown

`GLIBC_2.29' not found

Building glibc 2.29:

Test driver code:

Converting to ONNX with tf2onnx

Install from pypi

Install latest from github

ONNX conversion test:

Simplifying ONNX with onnx-simplifier

Converting to ncnn:

pbtxt to tflite (did not get this working)

Converting a .pb file to tflite:

2.2 Finding the pb file's input_arrays & output_arrays


tflite2onnx

GitHub - zhenhuaw-me/tflite2onnx: Convert TensorFlow Lite models (*.tflite) to ONNX.

pip install tflite2onnx

Usage:

import tflite2onnx
tflite_path = 'mediapipe/modules/face_landmark/face_landmark_with_attention.tflite'
onnx_path = './onnx_model'  # output path; typically points at a .onnx file
tflite2onnx.convert(tflite_path, onnx_path)

Custom ops are currently unsupported, including:

Node number 192 (Landmarks2TransformMatrix)
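
You can check which ops a .tflite model contains before attempting a conversion. A minimal sketch using the private _get_ops_details() helper (the same helper the tflite2tensorflow script below relies on; being private, its keys may change across TensorFlow versions, and loading may itself fail for some runtimes):

import tensorflow as tf

# List every op in the model; custom ops such as
# Landmarks2TransformMatrix show up in this list.
interpreter = tf.lite.Interpreter(model_path='face_landmark_with_attention.tflite')
op_names = sorted({op['op_name'] for op in interpreter._get_ops_details()})  # private API
print(op_names)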

ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found

tflite2tensorflow

https://github.com/PINTO0309/tflite2tensorflow

This tool also fails on the custom op Landmarks2TransformMatrix:

Encountered unresolved custom op: Landmarks2TransformMatrix.

Works on both Linux and Windows:

pip3 install --user --upgrade tflite2tensorflow

Installing dependencies:

1. pip install tensorflow_datasets --user

2. pip install onnxoptimizer

3. Install TensorFlow:

Release tflite2tensorflow v1.20.4 · PINTO0309/tflite2tensorflow · GitHub

Error reference:

https://github.com/PINTO0309/PINTO_model_zoo/issues/143

4. Build flatc

$ git clone -b v2.0.8 https://github.com/google/flatbuffers.git
$ cd flatbuffers && mkdir build && cd build
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release ..
$ make -j$(nproc)

5. Install cmake:

Three ways to install cmake on Ubuntu (CSDN blog post by Man_1man)

Error: The CXX compiler identification is unknown

sudo apt install g++

`GLIBC_2.29' not found

Check the glibc version: ldd --version

This showed my machine originally had glibc 2.27.
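
The same check can be done from Python (platform.libc_ver() reports the glibc the interpreter is linked against):

import platform

# e.g. ('glibc', '2.27') on stock Ubuntu 18.04
print(platform.libc_ver())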

Building glibc 2.29:

Reference: ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (pudn.com)

wget http://ftp.gnu.org/gnu/glibc/glibc-2.29.tar.gz

tar -zxvf glibc-2.29.tar.gz

cd glibc-2.29

mkdir build && cd build

sudo apt-get install gawk

../configure --prefix=/usr/local/glibc

make -j8

sudo make install

Error: no acceptable C compiler found in $PATH

sudo apt install gcc

Error: too old: gawk bison

Installing gawk and bison fixes this:

sudo apt-get install gawk
sudo apt-get install bison

sudo ln -s /usr/local/glibc/lib/libm-2.29.so /lib/x86_64-linux-gnu/libm.so.6
# This fails with: ln: failed to create symbolic link 'libm.so.6': File exists
# so force the link instead:
sudo ln -sf /usr/local/glibc/lib/libm-2.29.so /lib/x86_64-linux-gnu/libm.so.6
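
To confirm the relinked libm.so.6 now carries the missing symbol version, a quick check (a sketch; assumes the strings tool from binutils is installed):

import subprocess

# List the GLIBC_* version tags embedded in libm.so.6;
# GLIBC_2.29 should appear after the forced symlink.
out = subprocess.check_output(['strings', '/lib/x86_64-linux-gnu/libm.so.6']).decode()
print(sorted({line for line in out.splitlines() if line.startswith('GLIBC_2.')}))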

Source: https://blog.csdn.net/p942005405/article/details/123540761

Model conversion question · Issue #9 · FeiGeChuanShu/ncnn_Android_blazeface · GitHub

Check what the .so file links to: ls -l libc.so.6

Related: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found on Ubuntu 18.04 (CSDN blog post by 陈洪伟)

Test driver code:

import os
import sys
import numpy as np
np.random.seed(0)
import json
import warnings
import logging
os.environ['TF_CPP_MIN_LOG_LEVEL']='3'
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=Warning)
import shutil
import pprint
import argparse
from pathlib import Path
import re
import struct
import itertools
import pandas as pd
from tflite2tf.mediapipeCustomOp import Landmarks2TransformMatrix, TransformTensorBilinear, TransformLandmarks
from mediapipeCustomOp import Color


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_path', type=str, default=r'F:\project\face_mediapipe\mediapipe-0.8.10.2\mediapipe\modules\face_landmark\face_landmark_with_attention.tflite', help='input tflite model path (*.tflite)')
    parser.add_argument('--flatc_path', type=str, default='..flatc', help='flatc file path (flatc)')
    parser.add_argument('--schema_path', type=str, default='schema/schema.fbs', help='schema.fbs path (schema.fbs)')

    parser.add_argument('--model_output_path', type=str, default='saved_model', help='The output folder path of the converted model file')
    parser.add_argument('--output_pb', action='store_true', help='.pb output switch')
    parser.add_argument('--output_no_quant_float32_tflite', action='store_true', help='float32 tflite output switch')
    parser.add_argument('--output_dynamic_range_quant_tflite', action='store_true', help='dynamic quant tflite output switch')
    parser.add_argument('--output_weight_quant_tflite', action='store_true', help='weight quant tflite output switch')
    parser.add_argument('--output_float16_quant_tflite', action='store_true', help='float16 quant tflite output switch')
    parser.add_argument('--output_integer_quant_tflite', action='store_true', help='integer quant tflite output switch')
    parser.add_argument('--output_full_integer_quant_tflite', action='store_true', help='full integer quant tflite output switch')
    parser.add_argument('--output_integer_quant_type', type=str, default='int8', help='Input and output types when doing Integer Quantization (\'int8 (default)\' or \'uint8\')')
    parser.add_argument('--string_formulas_for_normalization', type=str, default='(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]', help='String formulas for normalization. It is evaluated by Python\'s eval() function. Default: \'(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]\'')
    parser.add_argument('--calib_ds_type', type=str, default='numpy', help='Types of data sets for calibration. tfds or numpy. Only one of them can be specified. Default: numpy [20, 513, 513, 3] -> [Number of images, h, w, c]')
    parser.add_argument('--ds_name_for_tfds_for_calibration', type=str, default='coco/2017', help='Dataset name for TensorFlow Datasets for calibration. https://www.tensorflow.org/datasets/catalog/overview')
    parser.add_argument('--split_name_for_tfds_for_calibration', type=str, default='validation', help='Split name for TensorFlow Datasets for calibration. https://www.tensorflow.org/datasets/catalog/overview')
    tfds_dl_default_path = f'{str(Path.home())}/TFDS'
    parser.add_argument('--download_dest_folder_path_for_the_calib_tfds', type=str, default=tfds_dl_default_path, help='Download destination folder path for the calibration dataset. Default: $HOME/TFDS')
    parser.add_argument('--tfds_download_flg', action='store_true', help='True to automatically download datasets from TensorFlow Datasets. True or False')
    npy_load_default_path = 'sample_npy/calibration_data_img_sample.npy'
    parser.add_argument('--load_dest_file_path_for_the_calib_npy', type=str, default=npy_load_default_path, help='The path from which to load the .npy file containing the numpy binary version of the calibration data. Default: sample_npy/calibration_data_img_sample.npy')
    parser.add_argument('--output_tfjs', action='store_true', help='tfjs model output switch')
    parser.add_argument('--output_tftrt_float32', action='store_true', help='tftrt float32 model output switch')
    parser.add_argument('--output_tftrt_float16', action='store_true', help='tftrt float16 model output switch')
    parser.add_argument('--output_coreml', action='store_true', help='coreml model output switch')
    parser.add_argument('--optimizing_for_coreml', action='store_true', help='Optimizing graph for coreml')
    parser.add_argument('--output_edgetpu', action='store_true', help='edgetpu model output switch')
    parser.add_argument('--edgetpu_compiler_timeout', type=int, default=3600, help='edgetpu_compiler timeout for one compilation process in seconds. Default: 3600')
    parser.add_argument('--edgetpu_num_segments', type=int, default=1, help='Partition the model into [num_segments] segments. Default: 1 (no partition)')
    parser.add_argument('--output_onnx', action='store_true',default=True, help='onnx model output switch')
    parser.add_argument('--onnx_opset', type=int, default=13, help='onnx opset version number')
    parser.add_argument('--onnx_extra_opset', type=str, default='', help='The name of the onnx extra_opset to enable. Default: \'\'. "com.microsoft:1" or "ai.onnx.contrib:1" or "ai.onnx.ml:1"')
    parser.add_argument('--disable_onnx_nchw_conversion', action='store_true', help='Disable onnx NCHW conversion.')
    parser.add_argument('--disable_onnx_optimization', action='store_true', help='Disable onnx optimization.')
    parser.add_argument('--output_openvino_and_myriad', action='store_true', help='openvino model and myriad inference engine blob output switch')
    parser.add_argument('--vpu_number_of_shaves', type=int, default=4, help='vpu number of shaves. Default: 4')
    parser.add_argument('--vpu_number_of_cmx_slices', type=int, default=4, help='vpu number of cmx slices. Default: 4')
    parser.add_argument('--optimizing_for_openvino_and_myriad', action='store_true', help='Optimizing graph for openvino/myriad')
    parser.add_argument('--rigorous_optimization_for_myriad', action='store_true', help='Replace operations that are not supported by myriad with operations that are as feasible as possible.')
    parser.add_argument('--replace_swish_and_hardswish', action='store_true', help='Replace swish and hard-swish with each other')
    parser.add_argument('--optimizing_for_edgetpu', action='store_true', help='Optimizing for edgetpu')
    parser.add_argument('--replace_prelu_and_minmax', action='store_true', help='Replace prelu and minimum/maximum with each other')
    parser.add_argument('--disable_experimental_new_quantizer', action='store_true', help='Disable MLIR\'s new quantization feature during INT8 quantization in TensorFlowLite.')
    parser.add_argument('--disable_per_channel', action='store_true', help='Disable per-channel quantization for tflite')
    parser.add_argument('--optimizing_barracuda', action='store_true', help='Generates ONNX by replacing Barracuda\'s unsupported layers with standard layers.')
    parser.add_argument('--locationids_of_the_terminating_output', type=str, default='', help='A comma-separated list of location IDs to be used as output layers. Default: \'\'')
    args = parser.parse_args()

    model, ext = os.path.splitext(args.model_path)
    model_path = args.model_path
    if ext != '.tflite':
        print('The specified model is not \'.tflite\' file.')
        sys.exit(-1)
    flatc_path = args.flatc_path
    schema_path = args.schema_path

    model_output_path = args.model_output_path.rstrip('/')
    output_pb = args.output_pb
    output_no_quant_float32_tflite =  args.output_no_quant_float32_tflite
    output_dynamic_range_quant_tflite = args.output_dynamic_range_quant_tflite
    output_weight_quant_tflite = args.output_weight_quant_tflite
    output_float16_quant_tflite = args.output_float16_quant_tflite
    output_integer_quant_tflite = args.output_integer_quant_tflite
    output_full_integer_quant_tflite = args.output_full_integer_quant_tflite
    output_integer_quant_type = args.output_integer_quant_type.lower()
    string_formulas_for_normalization = args.string_formulas_for_normalization.lower()
    calib_ds_type = args.calib_ds_type.lower()
    ds_name_for_tfds_for_calibration = args.ds_name_for_tfds_for_calibration
    split_name_for_tfds_for_calibration = args.split_name_for_tfds_for_calibration
    download_dest_folder_path_for_the_calib_tfds = args.download_dest_folder_path_for_the_calib_tfds
    tfds_download_flg = args.tfds_download_flg
    load_dest_file_path_for_the_calib_npy = args.load_dest_file_path_for_the_calib_npy
    output_tfjs = args.output_tfjs
    output_tftrt_float32 = args.output_tftrt_float32
    output_tftrt_float16 = args.output_tftrt_float16
    output_coreml = args.output_coreml
    optimizing_for_coreml = args.optimizing_for_coreml
    output_edgetpu = args.output_edgetpu
    edgetpu_compiler_timeout = args.edgetpu_compiler_timeout
    edgetpu_num_segments = args.edgetpu_num_segments
    output_onnx = args.output_onnx
    onnx_opset = args.onnx_opset
    onnx_extra_opset = args.onnx_extra_opset
    use_onnx_nchw_conversion = not args.disable_onnx_nchw_conversion
    use_onnx_optimization = not args.disable_onnx_optimization
    output_openvino_and_myriad = args.output_openvino_and_myriad
    vpu_number_of_shaves = args.vpu_number_of_shaves
    vpu_number_of_cmx_slices = args.vpu_number_of_cmx_slices
    optimizing_for_openvino_and_myriad = args.optimizing_for_openvino_and_myriad
    rigorous_optimization_for_myriad = args.rigorous_optimization_for_myriad
    replace_swish_and_hardswish = args.replace_swish_and_hardswish
    optimizing_for_edgetpu = args.optimizing_for_edgetpu
    replace_prelu_and_minmax = args.replace_prelu_and_minmax
    use_experimental_new_quantizer = not args.disable_experimental_new_quantizer
    use_per_channel = not args.disable_per_channel
    optimizing_barracuda = args.optimizing_barracuda
    locationids_of_the_terminating_output_tmp = args.locationids_of_the_terminating_output
    locationids_of_the_terminating_output = None
    if locationids_of_the_terminating_output_tmp:
        locationids_of_the_terminating_output = [int(ids.strip()) for ids in locationids_of_the_terminating_output_tmp.split(',')]

    if output_coreml:
        import coremltools as ct

    optimizing_for_edgetpu_flg = False

    if output_edgetpu:
        output_full_integer_quant_tflite = True
        optimizing_for_edgetpu_flg = True

    if optimizing_for_edgetpu:
        optimizing_for_edgetpu_flg = True

    from pkg_resources import working_set
    package_list = []
    for dist in working_set:
        package_list.append(dist.project_name)

    if output_tfjs:
        if not 'tensorflowjs' in package_list:
            print('\'tensorflowjs\' is not installed. Please run the following command to install \'tensorflowjs\'.')
            print('pip3 install --upgrade tensorflowjs')
            sys.exit(-1)
    if output_tftrt_float32 or output_tftrt_float16:
        if not 'tensorrt' in package_list:
            print('\'tensorrt\' is not installed. Please check the following website and install \'tensorrt\'.')
            print('https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html')
            sys.exit(-1)
    if output_coreml:
        if not 'coremltools' in package_list:
            print('\'coremltools\' is not installed. Please run the following command to install \'coremltools\'.')
            print('pip3 install --upgrade coremltools')
            sys.exit(-1)
    if output_onnx:
        if not 'tf2onnx' in package_list:
            print('\'tf2onnx\' is not installed. Please run the following command to install \'tf2onnx\'.')
            print('pip3 install --upgrade onnx')
            print('pip3 install --upgrade tf2onnx')
            sys.exit(-1)
    if output_openvino_and_myriad:
        try:
            from openvino.inference_engine import IECore
        except:
            print('\'OpenVINO\' is not installed. Please check the following website and install \'OpenVINO\'.')
            print('Linux: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html')
            print('Windows: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.html')
            sys.exit(-1)
    if output_integer_quant_tflite or output_full_integer_quant_tflite:
        if not 'tensorflow-datasets' in package_list:
            print('\'tensorflow-datasets\' is not installed. Please run the following command to install \'tensorflow-datasets\'.')
            print('pip3 install --upgrade tensorflow-datasets')
            sys.exit(-1)

    if output_integer_quant_type == 'int8' or output_integer_quant_type == 'uint8':
        pass
    else:
        print('Only \'int8\' or \'uint8\' can be specified for output_integer_quant_type.')
        sys.exit(-1)

    if calib_ds_type == 'tfds':
        pass
    elif calib_ds_type == 'numpy':
        pass
    else:
        print('Only \'tfds\' or \'numpy\' can be specified for calib_ds_type.')
        sys.exit(-1)
    del package_list

    # Check for concurrent execution of tfv1 and tfv2
    tfv1_flg = False
    tfv2_flg = False

    if output_pb:
        tfv1_flg = True
    if output_no_quant_float32_tflite or \
        output_dynamic_range_quant_tflite or \
            output_weight_quant_tflite or \
                output_float16_quant_tflite or \
                    output_integer_quant_tflite or \
                        output_full_integer_quant_tflite or \
                            output_tfjs or \
                                output_tftrt_float32 or \
                                    output_tftrt_float16 or \
                                        output_coreml or \
                                            output_edgetpu or \
                                                output_onnx or \
                                                    output_openvino_and_myriad:
        tfv2_flg = True

    if tfv1_flg and tfv2_flg:
        print(f'{Color.RED}ERROR:{Color.RESET} Group.1 and Group.2 cannot be set to True at the same time. Please specify either Group.1 or Group.2.')
        print('[Group.1] output_pb')
        print('[Group.2] output_no_quant_float32_tflite, output_weight_quant_tflite, output_float16_quant_tflite, output_integer_quant_tflite, output_full_integer_quant_tflite, output_tfjs, output_tftrt_float32, output_tftrt_float16, output_coreml, output_edgetpu, output_onnx, output_openvino_and_myriad')
        sys.exit(-1)

    if optimizing_for_openvino_and_myriad and optimizing_for_edgetpu:
        print(f'{Color.RED}ERROR:{Color.RESET} optimizing_for_openvino_and_myriad and optimizing_for_edgetpu cannot be True at the same time.')
        sys.exit(-1)

    if optimizing_for_openvino_and_myriad and optimizing_for_coreml:
        print(f'{Color.RED}ERROR:{Color.RESET} optimizing_for_openvino_and_myriad and optimizing_for_coreml cannot be True at the same time.')
        sys.exit(-1)

    if optimizing_for_edgetpu and optimizing_for_coreml:
        print(f'{Color.RED}ERROR:{Color.RESET} optimizing_for_edgetpu and optimizing_for_coreml cannot be True at the same time.')
        sys.exit(-1)

    if not optimizing_for_openvino_and_myriad and rigorous_optimization_for_myriad:
        optimizing_for_openvino_and_myriad = True

    if tfv1_flg:
        from tensorflow.lite.python.interpreter import Interpreter as tflite_interpreter

        shutil.rmtree(model_output_path, ignore_errors=True)

        jsonfile_path = f'./{model}.json'
        gen_model_json(flatc_path, model_output_path, jsonfile_path, schema_path, model_path)
        ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)

        interpreter = tflite_interpreter(model_path)
        interpreter.allocate_tensors()
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        ops_details = interpreter._get_ops_details()
        ops_details_pd = pd.json_normalize(ops_details, sep='_')

        print('inputs:')
        input_node_names = []
        tf_inputs = []
        for input in input_details:
            pprint.pprint(input)
            input_node_names.append(input['name']+':0')
            tf_inputs.append(input['shape'])
        output_node_names = []
        output_node_names_non_suffix = []
        print(f'{Color.REVERCE}TensorFlow/Keras model building process starts{Color.RESET}', '=' * 38)
        TFLite_Detection_PostProcess_flg = False
        TFLite_Detection_PostProcess_flg = make_graph(
            ops,
            json_tensor_details,
            full_json,
            op_types,
            ops_details,
            interpreter,
            replace_swish_and_hardswish,
            replace_prelu_and_minmax,
            optimizing_for_edgetpu_flg,
            optimizing_for_openvino_and_myriad,
            rigorous_optimization_for_myriad,
            optimizing_for_coreml,
            optimizing_barracuda
        )
        print('@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@')
        print('outputs:')
        dummy_outputs = []
        if locationids_of_the_terminating_output:
            print('locationids_of_the_terminating_output')
            outputs_filtered = ops_details_pd[ops_details_pd['index'].isin(locationids_of_the_terminating_output)]
            print(outputs_filtered)
            # ops_pd = pd.json_normalize(ops, sep='_')
            for outputs in outputs_filtered['outputs']:
                output_detail = interpreter._get_tensor_details(outputs[0])
                dummy_outputs.append(get_op_name(output_detail['name']))

        if not TFLite_Detection_PostProcess_flg:
            if len(dummy_outputs) == 0:
                for output in output_details:
                    pprint.pprint(output)
                    name_count = output_node_names_non_suffix.count(output['name'])
                    output_node_names.append(output['name']+f':{name_count}')
                    output_node_names_non_suffix.append(output['name'])
            else:
                for output_name in dummy_outputs:
                    print(f'name: {output_name}')
                    name_count = output_node_names_non_suffix.count(output_name)
                    output_node_names.append(output_name+f':{name_count}')
                    output_node_names_non_suffix.append(output_name)
        else:
            for output in output_details:
                pprint.pprint(output)
            output_node_names = [
                'TFLite_Detection_PostProcess0',
                'TFLite_Detection_PostProcess1',
                'TFLite_Detection_PostProcess2',
                'TFLite_Detection_PostProcess3'
            ]
        print(f'{Color.GREEN}TensorFlow/Keras model building process complete!{Color.RESET}')

        # saved_model / .pb output
        import tensorflow.compat.v1 as tf
        try:
            print(f'{Color.REVERCE}saved_model / .pb output started{Color.RESET}', '=' * 52)
            config = tf.ConfigProto()
            config.gpu_options.allow_growth = True
            graph = tf.get_default_graph()
            with tf.Session(config=config, graph=graph) as sess:
                sess.run(tf.global_variables_initializer())
                if not TFLite_Detection_PostProcess_flg:
                    try:
                        graph_def = tf.graph_util.convert_variables_to_constants(
                            sess=sess,
                            input_graph_def=graph.as_graph_def(),
                            output_node_names=[re.sub(':0*', '', name) for name in output_node_names]
                        )
                    except:
                        tmp_output_node_names = []
                        for oname in output_node_names:
                            try:
                                try:
                                    graph.get_tensor_by_name(oname)
                                    tmp_output_node_names.append(oname)
                                except:
                                    graph.get_tensor_by_name(re.sub(':0*', '', oname))
                                    tmp_output_node_names.append(re.sub(':0*', '', oname))
                            except:
                                for idx in range(1,10001):
                                    try:
                                        graph.get_tensor_by_name(f"{re.sub(':0*', '', oname)}_{idx}:0")
                                        tmp_output_node_names.append(f"{re.sub(':0*', '', oname)}_{idx}:0")
                                        break
                                    except:
                                        pass
                        output_node_names = tmp_output_node_names
                        graph_def = tf.graph_util.convert_variables_to_constants(
                            sess=sess,
                            input_graph_def=graph.as_graph_def(),
                            output_node_names=[re.sub(':0*', '', name) for name in output_node_names]
                        )

                    tf.saved_model.simple_save(
                        sess,
                        model_output_path,
                        inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
                        outputs={re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in output_node_names}
                    )
                else:
                    graph_def = tf.graph_util.convert_variables_to_constants(
                        sess=sess,
                        input_graph_def=graph.as_graph_def(),
                        output_node_names=output_node_names)

                    try:
                        tf.saved_model.simple_save(
                            sess,
                            model_output_path,
                            inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
                            outputs={re.sub(':0*', '', t): graph.get_tensor_by_name(f'{t}:0') for t in output_node_names}
                        )
                    except:
                        tf.saved_model.simple_save(
                            sess,
                            model_output_path,
                            inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(re.sub(':0$', '', t)) for t in input_node_names},
                            outputs={re.sub(':0*', '', t): graph.get_tensor_by_name(f'{t}:0') for t in output_node_names}
                        )

                if output_pb:
                    with tf.io.gfile.GFile(f'{model_output_path}/model_float32.pb', 'wb') as f:
                        f.write(graph_def.SerializeToString())

            print(f'{Color.GREEN}saved_model / .pb output complete!{Color.RESET}')
        except Exception as e:
            print(f'{Color.RED}ERROR:{Color.RESET}', e)
            import traceback
            traceback.print_exc()
            sys.exit(-1)


    elif tfv2_flg:
        # Tensorflow v2.x
        import tensorflow as tf
        import tensorflow_datasets as tfds
        try:
            # Custom TFLite Interpreter that implements MediaPipe's custom operations.
            # TensorFlow v2.4.1
            # https://zenn.dev/pinto0309/articles/a0e40c2817f2ee
            from tflite_runtime.interpreter import Interpreter as tflite_interpreter
        except:
            # The official TensorFlow TFLite Interpreter
            from tensorflow.lite.python.interpreter import Interpreter as tflite_interpreter

        interpreter = tflite_interpreter(model_path)
        interpreter.allocate_tensors()
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        print('inputs:')
        input_node_names = []
        tf_inputs = []
        for input in input_details:
            pprint.pprint(input)
            input_node_names.append(input['name']+':0')
            tf_inputs.append(input['shape'])
        print('outputs:')
        output_node_names = []
        for output in output_details:
            pprint.pprint(output)
            output_node_names.append(output['name']+':0')

        # No Quantization - Input/Output=float32
        if output_no_quant_float32_tflite:
            try:
                print(f'{Color.REVERCE}tflite Float32 convertion started{Color.RESET}', '=' * 51)
                converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
                tflite_model = converter.convert()
                with open(f'{model_output_path}/model_float32.tflite', 'wb') as w:
                    w.write(tflite_model)
                print(f'{Color.GREEN}tflite Float32 convertion complete!{Color.RESET} - {model_output_path}/model_float32.tflite')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # Dynamic range Quantization - Input/Output=float32
        if output_dynamic_range_quant_tflite:
            try:
                print(f'{Color.REVERCE}Dynamic Range Quantization started{Color.RESET}', '=' * 50)
                converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
                converter._experimental_disable_per_channel = not use_per_channel
                converter.optimizations = [tf.lite.Optimize.DEFAULT]
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
                tflite_model = converter.convert()
                with open(f'{model_output_path}/model_dynamic_range_quant.tflite', 'wb') as w:
                    w.write(tflite_model)
                print(f'{Color.GREEN}Dynamic Range Quantization complete!{Color.RESET} - {model_output_path}/model_dynamic_range_quant.tflite')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # Weight Quantization - Input/Output=float32
        if output_weight_quant_tflite:
            try:
                print(f'{Color.REVERCE}Weight Quantization started{Color.RESET}', '=' * 57)
                converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
                converter._experimental_disable_per_channel = not use_per_channel
                converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
                tflite_model = converter.convert()
                with open(f'{model_output_path}/model_weight_quant.tflite', 'wb') as w:
                    w.write(tflite_model)
                print(f'{Color.GREEN}Weight Quantization complete!{Color.RESET} - {model_output_path}/model_weight_quant.tflite')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # Float16 Quantization - Input/Output=float32
        if output_float16_quant_tflite:
            try:
                print(f'{Color.REVERCE}Float16 Quantization started{Color.RESET}', '=' * 56)
                converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
                converter.optimizations = [tf.lite.Optimize.DEFAULT]
                converter.target_spec.supported_types = [tf.float16]
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
                tflite_quant_model = converter.convert()
                with open(f'{model_output_path}/model_float16_quant.tflite', 'wb') as w:
                    w.write(tflite_quant_model)
                print(f'{Color.GREEN}Float16 Quantization complete!{Color.RESET} - {model_output_path}/model_float16_quant.tflite')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # Downloading datasets for calibration
        raw_test_data = None
        input_shapes = None
        if output_integer_quant_tflite or output_full_integer_quant_tflite:
            if calib_ds_type == 'tfds':
                print(f'{Color.REVERCE}TFDS download started{Color.RESET}', '=' * 63)
                raw_test_data = tfds.load(
                    name=ds_name_for_tfds_for_calibration,
                    with_info=False,
                    split=split_name_for_tfds_for_calibration,
                    data_dir=download_dest_folder_path_for_the_calib_tfds,
                    download=tfds_download_flg
                )
                print(f'{Color.GREEN}TFDS download complete!{Color.RESET}')
            elif calib_ds_type == 'numpy':
                print(f'{Color.REVERCE}numpy dataset load started{Color.RESET}', '=' * 58)
                try:
                    if load_dest_file_path_for_the_calib_npy == npy_load_default_path and not os.path.exists(npy_load_default_path):
                        os.makedirs(os.path.dirname(npy_load_default_path), exist_ok=True)
                        import gdown
                        import subprocess
                        try:
                            result = subprocess.check_output(
                                [
                                    'gdown',
                                    '--id', '1z-K0KZCK3JBH9hXFuBTmIM4jaMPOubGN',
                                    '-O', load_dest_file_path_for_the_calib_npy
                                ],
                                stderr=subprocess.PIPE
                            ).decode('utf-8')
                        except:
                            result = subprocess.check_output(
                                [
                                    'sudo', 'gdown',
                                    '--id', '1z-K0KZCK3JBH9hXFuBTmIM4jaMPOubGN',
                                    '-O', load_dest_file_path_for_the_calib_npy
                                ],
                                stderr=subprocess.PIPE
                            ).decode('utf-8')
                    raw_test_data = np.load(load_dest_file_path_for_the_calib_npy)
                    print(f'{Color.GREEN}numpy dataset load complete!{Color.RESET}')
                except subprocess.CalledProcessError as e:
                    print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                    import traceback
                    traceback.print_exc()
            else:
                pass
        input_shapes = tf_inputs
        input_shapes_permutations = list(itertools.permutations(input_shapes))

        def representative_dataset_gen():
            if calib_ds_type == 'tfds':
                for data in raw_test_data.take(10):
                    image = data['image'].numpy()
                    images = []
                    for shape in input_shapes:
                        data = tf.image.resize(image, (shape[1], shape[2]))
                        tmp_image = eval(string_formulas_for_normalization) # Default: (data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]
                        tmp_image = tmp_image[np.newaxis,:,:,:]
                        images.append(tmp_image)
                    yield images

            elif calib_ds_type == 'numpy':
                for idx in range(raw_test_data.shape[0]):
                    image = raw_test_data[idx]
                    images = []
                    data = None
                    tmp_image = None
                    for shape in input_shapes_permutations[input_shapes_permutations_idx]:
                        if len(shape) == 4 and shape[0] == 1 and shape[3] == 3:
                            data = tf.image.resize(image, (shape[1], shape[2]))
                            tmp_image = eval(string_formulas_for_normalization) # Default: (data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]
                            tmp_image = tmp_image[np.newaxis,:,:,:]
                        else:
                            # Since the input data of multiple inputs cannot be predicted, random numbers are generated and given for the time being.
                            shape_tuple = tuple(shape)
                            data = np.random.random_sample(shape_tuple).astype(np.float32)
                            tmp_image = data
                        images.append(tmp_image)
                    yield images

        # Integer Quantization
        if output_integer_quant_tflite:
            try:
                print(f'{Color.REVERCE}Integer Quantization started{Color.RESET}', '=' * 56)
                converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
                converter.experimental_new_quantizer = use_experimental_new_quantizer
                converter._experimental_disable_per_channel = not use_per_channel
                converter.optimizations = [tf.lite.Optimize.DEFAULT]
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
                tflite_model = None
                input_shapes_permutations_idx = 0
                for _ in input_shapes_permutations:
                    try:
                        converter.representative_dataset = representative_dataset_gen
                        tflite_model = converter.convert()
                        break
                    except Exception as e:
                        input_shapes_permutations_idx += 1
                        if input_shapes_permutations_idx > len(input_shapes_permutations):
                            print(f'{Color.RED}ERROR:{Color.RESET}', e)

                with open(f'{model_output_path}/model_integer_quant.tflite', 'wb') as w:
                    w.write(tflite_model)
                print(f'{Color.GREEN}Integer Quantization complete!{Color.RESET} - {model_output_path}/model_integer_quant.tflite')

            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # Full Integer Quantization
        if output_full_integer_quant_tflite:
            try:
                print(f'{Color.REVERCE}Full Integer Quantization started{Color.RESET}', '=' * 51)
                converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
                converter.experimental_new_quantizer = use_experimental_new_quantizer
                converter._experimental_disable_per_channel = not use_per_channel
                converter.optimizations = [tf.lite.Optimize.DEFAULT]
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
                inf_type = None
                if output_integer_quant_type == 'int8':
                    inf_type = tf.int8
                elif output_integer_quant_type == 'uint8':
                    inf_type = tf.uint8
                else:
                    inf_type = tf.int8
                converter.inference_input_type = inf_type
                converter.inference_output_type = inf_type
                tflite_model = None
                input_shapes_permutations_idx = 0
                for _ in input_shapes_permutations:
                    try:
                        converter.representative_dataset = representative_dataset_gen
                        tflite_model = converter.convert()
                        break
                    except Exception as e:
                        input_shapes_permutations_idx += 1
                        if input_shapes_permutations_idx > len(input_shapes_permutations):
                            print(f'{Color.RED}ERROR:{Color.RESET}', e)

                with open(f'{model_output_path}/model_full_integer_quant.tflite', 'wb') as w:
                    w.write(tflite_model)
                print(f'{Color.GREEN}Full Integer Quantization complete!{Color.RESET} - {model_output_path}/model_full_integer_quant.tflite')

            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # TensorFlow.js convert
        if output_tfjs:
            import subprocess
            try:
                print(f'{Color.REVERCE}TensorFlow.js Float32 convertion started{Color.RESET}', '=' * 44)
                result = subprocess.check_output(
                    [
                        'tensorflowjs_converter',
                        '--input_format', 'tf_saved_model',
                        '--output_format', 'tfjs_graph_model',
                        '--signature_name', 'serving_default',
                        '--saved_model_tags', 'serve',
                        model_output_path, f'{model_output_path}/tfjs_model_float32'
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')
                print(result)
                print(f'{Color.GREEN}TensorFlow.js convertion complete!{Color.RESET} - {model_output_path}/tfjs_model_float32')
            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()
            try:
                print(f'{Color.REVERCE}TensorFlow.js Float16 convertion started{Color.RESET}', '=' * 44)
                result = subprocess.check_output(
                    [
                        'tensorflowjs_converter',
                        '--quantize_float16',
                        '--input_format', 'tf_saved_model',
                        '--output_format', 'tfjs_graph_model',
                        '--signature_name', 'serving_default',
                        '--saved_model_tags', 'serve',
                        model_output_path, f'{model_output_path}/tfjs_model_float16'
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')
                print(result)
                print(f'{Color.GREEN}TensorFlow.js convertion complete!{Color.RESET} - {model_output_path}/tfjs_model_float16')
            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()

        # TF-TRT (TensorRT) convert
        if output_tftrt_float32:
            try:
                def input_fn():
                    input_shapes = []
                    for shape in input_shapes_permutations[input_shapes_permutations_idx]:
                        shape_tuple = tuple(shape)
                        input_shapes.append(np.zeros(shape_tuple).astype(np.float32))
                    yield input_shapes

                print(f'{Color.REVERCE}TF-TRT (TensorRT) Float32 convertion started{Color.RESET}', '=' * 40)
                params = tf.experimental.tensorrt.ConversionParams(precision_mode='FP32', maximum_cached_engines=10000)
                converter = tf.experimental.tensorrt.Converter(input_saved_model_dir=model_output_path, conversion_params=params)
                converter.convert()

                input_shapes_permutations_idx = 0
                for _ in input_shapes_permutations:
                    try:
                        converter.build(input_fn=input_fn)
                        break
                    except Exception as e:
                        input_shapes_permutations_idx += 1
                        if input_shapes_permutations_idx > len(input_shapes_permutations):
                            print(f'{Color.RED}ERROR:{Color.RESET}', e)

                converter.save(f'{model_output_path}/tensorrt_saved_model_float32')
                print(f'{Color.GREEN}TF-TRT (TensorRT) convertion complete!{Color.RESET} - {model_output_path}/tensorrt_saved_model_float32')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()
                print(f'{Color.RED}The binary versions of TensorFlow and TensorRT may not be compatible. Please check the version compatibility of each package.{Color.RESET}')

        # TF-TRT (TensorRT) convert
        if output_tftrt_float16:
            try:
                def input_fn():
                    input_shapes = []
                    for shape in input_shapes_permutations[input_shapes_permutations_idx]:
                        shape_tuple = tuple(shape)
                        input_shapes.append(np.zeros(shape_tuple).astype(np.float32))
                    yield input_shapes

                print(f'{Color.REVERCE}TF-TRT (TensorRT) Float16 convertion started{Color.RESET}', '=' * 40)
                params = tf.experimental.tensorrt.ConversionParams(precision_mode='FP16', maximum_cached_engines=10000)
                converter = tf.experimental.tensorrt.Converter(input_saved_model_dir=model_output_path, conversion_params=params)
                converter.convert()

                input_shapes_permutations_idx = 0
                for _ in input_shapes_permutations:
                    try:
                        converter.build(input_fn=input_fn)
                        break
                    except Exception as e:
                        input_shapes_permutations_idx += 1
                        if input_shapes_permutations_idx > len(input_shapes_permutations):
                            print(f'{Color.RED}ERROR:{Color.RESET}', e)

                converter.save(f'{model_output_path}/tensorrt_saved_model_float16')
                print(f'{Color.GREEN}TF-TRT (TensorRT) convertion complete!{Color.RESET} - {model_output_path}/tensorrt_saved_model_float16')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()
                print(f'{Color.RED}The binary versions of TensorFlow and TensorRT may not be compatible. Please check the version compatibility of each package.{Color.RESET}')

        # CoreML convert
        if output_coreml:
            try:
                print(f'{Color.REVERCE}CoreML convertion started{Color.RESET}', '=' * 59)
                mlmodel = ct.convert(model_output_path, source='tensorflow')
                mlmodel.save(f'{model_output_path}/model_coreml_float32.mlmodel')
                print(f'{Color.GREEN}CoreML convertion complete!{Color.RESET} - {model_output_path}/model_coreml_float32.mlmodel')
            except Exception as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e)
                import traceback
                traceback.print_exc()

        # EdgeTPU convert
        if output_edgetpu:
            import subprocess
            try:
                print(f'{Color.REVERCE}EdgeTPU convertion started{Color.RESET}', '=' * 58)
                result = subprocess.check_output(
                    [
                        'edgetpu_compiler',
                        '-o', model_output_path,
                        '-sad',
                        '-t', str(edgetpu_compiler_timeout),
                        '-n', str(edgetpu_num_segments),
                        f'{model_output_path}/model_full_integer_quant.tflite'
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')
                print(result)
                print(f'{Color.GREEN}EdgeTPU convert complete!{Color.RESET} - {model_output_path}/model_full_integer_quant_edgetpu.tflite')
            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()
                print("-" * 80)
                print('Please install edgetpu_compiler according to the following website.')
                print('https://coral.ai/docs/edgetpu/compiler/#system-requirements')

        # ONNX convert
        if output_onnx:
            import onnx
            import onnxoptimizer
            import subprocess
            try:
                print(f'{Color.REVERCE}ONNX convertion started{Color.RESET}', '=' * 61)
                loaded = tf.saved_model.load(model_output_path).signatures['serving_default']
                inputs = ",".join(map(str, [inp.name for inp in loaded.inputs if 'unknown' not in inp.name])).rstrip(',')
                try:
                    onnx_convert_command = None
                    if not onnx_extra_opset:
                        onnx_convert_command = \
                        [
                            'python3',
                            '-m', 'tf2onnx.convert',
                            '--saved-model', model_output_path,
                            '--opset', str(onnx_opset),
                            '--output', f'{model_output_path}/model_float32.onnx',
                        ]
                        if use_onnx_nchw_conversion:
                            onnx_convert_command.append(
                                '--inputs-as-nchw'
                            )
                            onnx_convert_command.append(
                                f'{inputs}'
                            )
                    else:
                        onnx_convert_command = \
                        [
                            'python3',
                            '-m', 'tf2onnx.convert',
                            '--saved-model', model_output_path,
                            '--opset', str(onnx_opset),
                            '--output', f'{model_output_path}/model_float32.onnx',
                            '--extra_opset', onnx_extra_opset,
                        ]
                        if use_onnx_nchw_conversion:
                            onnx_convert_command.append(
                                '--inputs-as-nchw'
                            )
                            onnx_convert_command.append(
                                f'{inputs}'
                            )
                    result = subprocess.check_output(
                        onnx_convert_command,
                        stderr=subprocess.PIPE
                    ).decode('utf-8')
                    try:
                        onnx_model = onnx.load(f'{model_output_path}/model_float32.onnx')
                        onnx_model = onnx.shape_inference.infer_shapes(onnx_model)
                        onnx.save(onnx_model, f'{model_output_path}/model_float32.onnx')
                    except Exception as e:
                        print(f'{Color.YELLOW}WARNING:{Color.RESET}', e)
                        import traceback
                        traceback.print_exc()
                    print(result)
                except:
                    onnx_convert_command = None
                    if not onnx_extra_opset:
                        onnx_convert_command = \
                        [
                            'python3',
                            '-m', 'tf2onnx.convert',
                            '--saved-model', model_output_path,
                            '--opset', str(onnx_opset),
                            '--output', f'{model_output_path}/model_float32.onnx'
                        ]
                    else:
                        onnx_convert_command = \
                        [
                            'python3',
                            '-m', 'tf2onnx.convert',
                            '--saved-model', model_output_path,
                            '--opset', str(onnx_opset),
                            '--output', f'{model_output_path}/model_float32.onnx',
                            '--extra_opset', onnx_extra_opset
                        ]
                    result = subprocess.check_output(
                        onnx_convert_command,
                        stderr=subprocess.PIPE
                    ).decode('utf-8')
                    try:
                        onnx_model = onnx.load(f'{model_output_path}/model_float32.onnx')
                        onnx_model = onnx.shape_inference.infer_shapes(onnx_model)
                        onnx.save(onnx_model, f'{model_output_path}/model_float32.onnx')
                    except Exception as e:
                        print(f'{Color.YELLOW}WARNING:{Color.RESET}', e)
                        import traceback
                        traceback.print_exc()
                    print(result)
                print(f'{Color.GREEN}ONNX convertion complete!{Color.RESET} - {model_output_path}/model_float32.onnx')
            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()

            if use_onnx_optimization:
                try:
                    print(f'{Color.REVERCE}ONNX optimization started{Color.RESET}', '=' * 59)

                    # onnxoptimizer
                    onnx_model = onnx.load(f'{model_output_path}/model_float32.onnx')
                    passes = [
                        "extract_constant_to_initializer",
                        "eliminate_unused_initializer"
                    ]
                    optimized_model = onnxoptimizer.optimize(onnx_model, passes)
                    onnx.save(optimized_model, f'{model_output_path}/model_float32.onnx')

                    # onnx-simplifier
                    result = subprocess.check_output(
                        [
                            'python3',
                            '-m', 'onnxsim',
                            f'{model_output_path}/model_float32.onnx',
                            f'{model_output_path}/model_float32.onnx'
                        ],
                        stderr=subprocess.PIPE
                    ).decode('utf-8')
                    print(result)

                    print(f'{Color.GREEN}ONNX optimization complete!{Color.RESET} - {model_output_path}/model_float32.onnx')
                except subprocess.CalledProcessError as e:
                    print(f'{Color.YELLOW}WARNING:{Color.RESET}', e.stderr.decode('utf-8'))
                    import traceback
                    traceback.print_exc()

        # OpenVINO IR and DepthAI blob convert
        if output_openvino_and_myriad:
            import subprocess
            # OpenVINO IR - FP32
            try:
                print(f'{Color.REVERCE}OpenVINO IR FP32 convertion started{Color.RESET}', '=' * 54)
                os.makedirs(f'{model_output_path}/openvino/FP32', exist_ok=True)
                INTEL_OPENVINO_DIR = os.environ['INTEL_OPENVINO_DIR']
                result = subprocess.check_output(
                    [
                        'python3',
                        f'{INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo_tf.py',
                        '--saved_model_dir', model_output_path,
                        '--data_type', 'FP32',
                        '--output_dir', f'{model_output_path}/openvino/FP32'
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')
                print(result)
                print(f'{Color.GREEN}OpenVINO IR FP32 convertion complete!{Color.RESET} - {model_output_path}/openvino/FP32')
            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()
            # OpenVINO IR - FP16
            try:
                print(f'{Color.REVERCE}OpenVINO IR FP16 convertion started{Color.RESET}', '=' * 54)
                os.makedirs(f'{model_output_path}/openvino/FP16', exist_ok=True)
                INTEL_OPENVINO_DIR = os.environ['INTEL_OPENVINO_DIR']
                result = subprocess.check_output(
                    [
                        'python3',
                        f'{INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo_tf.py',
                        '--saved_model_dir', model_output_path,
                        '--data_type', 'FP16',
                        '--output_dir', f'{model_output_path}/openvino/FP16'
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')
                print(result)
                print(f'{Color.GREEN}OpenVINO IR FP16 convertion complete!{Color.RESET} - {model_output_path}/openvino/FP16')
            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()
            # Myriad Inference Engine blob
            try:
                print(f'{Color.REVERCE}Myriad Inference Engine blob convertion started{Color.RESET}', '=' * 44)
                os.makedirs(f'{model_output_path}/openvino/myriad', exist_ok=True)
                INTEL_OPENVINO_DIR = os.environ['INTEL_OPENVINO_DIR']

                shutil.copy(f'{model_output_path}/openvino/FP16/saved_model.xml', f'{model_output_path}/openvino/FP16/saved_model_vino.xml')
                result = subprocess.check_output(
                    [
                        "sed", "-i", 's/sort_result_descending=\"true\"/sort_result_descending=\"false\"/g', f"{model_output_path}/openvino/FP16/saved_model.xml"
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')

                result = subprocess.check_output(
                    [
                        f'{INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/myriad_compile',
                        '-m', f'{model_output_path}/openvino/FP16/saved_model.xml',
                        '-VPU_NUMBER_OF_SHAVES', f'{vpu_number_of_shaves}',
                        '-VPU_NUMBER_OF_CMX_SLICES', f'{vpu_number_of_cmx_slices}',
                        '-o', f'{model_output_path}/openvino/myriad/saved_model.blob'
                    ],
                    stderr=subprocess.PIPE
                ).decode('utf-8')
                print(result)
                shutil.copy(f'{model_output_path}/openvino/FP16/saved_model.xml', f'{model_output_path}/openvino/FP16/saved_model_myriad.xml')
                shutil.copy(f'{model_output_path}/openvino/FP16/saved_model_vino.xml', f'{model_output_path}/openvino/FP16/saved_model.xml')

                print(f'{Color.GREEN}Myriad Inference Engine blob convertion complete!{Color.RESET} - {model_output_path}/openvino/myriad')

            except subprocess.CalledProcessError as e:
                print(f'{Color.RED}ERROR:{Color.RESET}', e.stderr.decode('utf-8'))
                import traceback
                traceback.print_exc()

if __name__ == '__main__':
    main()

Error:

Encountered unresolved custom op: Landmarks2TransformMatrix.

Issue notes:

pip install tflite_runtime

https://github.com/FeiGeChuanShu/ncnn_Android_blazeface/issues/9

https://github.com/PINTO0309/PINTO_model_zoo/issues/143

Converting to ONNX with tf2onnx

Install from pypi

pip install -U tf2onnx

Install latest from github

pip install onnxruntime
pip install git+https://github.com/onnx/tensorflow-onnx
1. Run the following command to convert the model.

python -m tf2onnx.convert --saved-model ./checkpoints/yolov4.tf --output model.onnx --opset 11 --verbose
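
tf2onnx also exposes a Python API. A minimal sketch of the tflite entry point (from_tflite is available in recent tf2onnx releases; the exact signature may differ in older versions):

import tf2onnx

# Convert a .tflite model directly and write the ONNX file to disk.
model_proto, _ = tf2onnx.convert.from_tflite(
    'face_landmark.tflite',
    opset=11,
    output_path='face_landmark.onnx',
)
print([n.name for n in model_proto.graph.output])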

ONNX conversion test:

This works for the plain face_landmark.tflite model:

python -m tf2onnx.convert --opset 11 --tflite face_landmark.tflite --output face_landmark.onnx

Error when converting face_landmark_with_attention.tflite the same way:

Encountered unresolved custom op: Landmarks2TransformMatrix.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom
Node number 192 (Landmarks2TransformMatrix) failed to prepare.
ERROR - Tensorflow op [landmarks_to_transform_matrix_v2_2: TFL_Landmarks2TransformMatrix] is not supported
ERROR - Tensorflow op [transform_tensor_bilinear_v2_2: TFL_TransformTensorBilinear] is not supported
ERROR - Tensorflow op [transform_landmarks_v2_4: TFL_TransformLandmarks] is not supported
ERROR - Tensorflow op [transform_landmarks_v2_3: TFL_TransformLandmarks] is not supported
ERROR - Tensorflow op [landmarks_to_transform_matrix_v2_1: TFL_Landmarks2TransformMatrix] is not supported
ERROR - Tensorflow op [transform_tensor_bilinear_v2_1: TFL_TransformTensorBilinear] is not supported
ERROR - Tensorflow op [transform_landmarks_v2_2: TFL_TransformLandmarks] is not supported
ERROR - Tensorflow op [transform_landmarks_v2_1: TFL_TransformLandmarks] is not supported
ERROR - Tensorflow op [landmarks_to_transform_matrix_v2: TFL_Landmarks2TransformMatrix] is not supported
ERROR - Tensorflow op [transform_tensor_bilinear_v2: TFL_TransformTensorBilinear] is not supported
ERROR - Tensorflow op [transform_landmarks_v2: TFL_TransformLandmarks] is not supported
ERROR - Unsupported ops: Counter({'TFL_TransformLandmarks': 5, 'TFL_Landmarks2TransformMatrix': 3, 'TFL_TransformTensorBilinear': 3})

Simplifying ONNX with onnx-simplifier

Code:

import onnx


onnx_model_name = r'F:\project\detect\yolov7\ncnn-20220721-windows-vs2017-shared\x64\bin\face_landmark_with_attention.onnx'
onnx_model = onnx.load(onnx_model_name)  # load the model first so it exists even if onnxsim is missing
try:
    import onnxsim

    print('\nStarting to simplify ONNX...')
    onnx_model, check = onnxsim.simplify(onnx_model)
    assert check, 'assert check failed'
except Exception as e:
    print(f'Simplifier failure: {e}')

# print(onnx.helper.printable_graph(onnx_model.graph))  # print a human readable model
onnx.save(onnx_model, onnx_model_name.replace(".onnx", "_sim.onnx"))
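
onnx-simplifier can also be invoked from the command line (this is what the tflite2tensorflow script above does internally); the _sim.onnx output is what the ncnn conversion below consumes:

python -m onnxsim model_float32.onnx model_float32_sim.onnx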

Simplification fails:

Starting to simplify ONNX...
Simplifier failure: No Op registered for TFL_Landmarks2TransformMatrix with domain_version of 11

Converting to ncnn:

onnx2ncnn model_float32_sim.onnx face_landmark_with_attention.param face_landmark_with_attention.bin

pbtxt to tflite (did not get this working)

Converting a .pb file to tflite:

Adapted from: https://blog.csdn.net/qq_42131061/article/details/106209894

2.1 Parameters of the tflite_convert command

A typical conversion command looks like this:

tflite_convert --output_file=[path of the generated tflite file] --graph_def_file=[path of the pb file] --input_arrays=[input arrays] --output_arrays=[output arrays]

2.2 Finding the pb file's input_arrays & output_arrays

Use the following Python code to inspect them (parts of this code come from the web):

import os
import tensorflow as tf  # uses TF 1.x graph APIs; under TF 2.x import tensorflow.compat.v1 as tf


def create_graph(model_path):
    # (helper kept from the original source; not used by print_io_arrays)
    with tf.gfile.FastGFile(os.path.join(model_path), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')


def print_io_arrays(pb):
    gf = tf.GraphDef()
    with open(pb, 'rb') as m_file:
        gf.ParseFromString(m_file.read())

    # Dump every node name; the first node is typically the input,
    # the last one the output. ('w' mode so reruns don't accumulate.)
    with open('gfnode.txt', 'w') as the_file:
        for n in gf.node:
            the_file.write(n.name + '\n')

    with open('gfnode.txt', 'r') as file:
        data = file.readlines()
    print("output name = ")
    print(data[len(data) - 1])
    print("Input name = ")
    print(data[0])


if __name__ == "__main__":
    pd_file_path = './model.pb'
    print_io_arrays(pd_file_path)


Change the pd_file_path variable to the path of your own .pb file.

Running the code prints the node names; for this example model, input_arrays is conv2d_1_input and output_arrays is dense_3/Softmax.
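
Plugging those names into the template from 2.1 gives (a sketch; the file paths are placeholders):

tflite_convert --output_file=./model.tflite --graph_def_file=./model.pb --input_arrays=conv2d_1_input --output_arrays=dense_3/Softmax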
 
