Converting PyTorch/ONNX models to TensorFlow/TFLite

```python
import os
import subprocess

def convert_dng_to_jpg(input_dir, output_dir):
    # Make sure the output directory exists.
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # Iterate over all DNG files in the input directory.
    for dng_file in os.listdir(input_dir):
        if dng_file.endswith('.dng'):
            # Extract the file name (without extension).
            base_name = os.path.splitext(dng_file)[0]

            # Build the full file paths.
            dng_path = os.path.join(input_dir, dng_file)
            tiff_path = os.path.join(output_dir, f"{base_name}.tiff")
            jpg_path = os.path.join(output_dir, f"{base_name}.jpg")

            # Use dcraw to convert the DNG file to a TIFF file
            # (dcraw -T writes base_name.tiff next to the DNG).
            subprocess.run(['dcraw', '-T', dng_path], check=True)

            # Move the generated TIFF file to the output directory.
            os.rename(os.path.join(input_dir, f"{base_name}.tiff"), tiff_path)

            # Use ImageMagick to convert the TIFF file to a JPG file.
            subprocess.run(['convert', tiff_path, jpg_path], check=True)

            # Optional: delete the intermediate TIFF file to save space.
            os.remove(tiff_path)
            print(dng_file)

if __name__ == "__main__":
    input_dir = "./input"
    output_dir = "./input_rgb"
    convert_dng_to_jpg(input_dir, output_dir)
```
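
If dcraw and ImageMagick are not available, the same conversion can be done in pure Python. The sketch below is an illustrative alternative, assuming the third-party rawpy and Pillow packages are installed (`pip install rawpy pillow`); it writes the JPEGs directly, without an intermediate TIFF.

```python
import os
import rawpy             # third-party raw decoder (assumed installed)
from PIL import Image    # Pillow (assumed installed)

def convert_dng_to_jpg_py(input_dir, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    for dng_file in os.listdir(input_dir):
        if not dng_file.lower().endswith('.dng'):
            continue
        base_name = os.path.splitext(dng_file)[0]
        # Demosaic the raw data into an 8-bit RGB array.
        with rawpy.imread(os.path.join(input_dir, dng_file)) as raw:
            rgb = raw.postprocess()
        # Save directly as JPEG; no intermediate TIFF is needed.
        Image.fromarray(rgb).save(os.path.join(output_dir, f"{base_name}.jpg"), quality=95)
        print(dng_file)
```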

# ONNX->Keras and ONNX->TFLite tools

## Welcome

If you have good ideas, you are welcome to open a discussion or submit a PR to the project.

## Install

```cmd
git clone https://github.com/MPolaris/onnx2tflite.git
cd onnx2tflite
python setup.py install
```

```python
from onnx2tflite import onnx_converter

res = onnx_converter(
    onnx_model_path = "./model.onnx",
    need_simplify = True,
    output_path = "./models/",
    target_formats = ['tflite'],
)
```

---

```cmd
# basic usage
python -m onnx2tflite --weights "./your_model.onnx"

# specify the save path
python -m onnx2tflite --weights "./your_model.onnx" --outpath "./save_path"

# save a tflite model
python -m onnx2tflite --weights "./your_model.onnx" --outpath "./save_path" --formats "tflite"

# save keras and tflite models
python -m onnx2tflite --weights "./your_model.onnx" --outpath "./save_path" --formats "tflite" "keras"

# cut the model: redefine inputs and outputs, middle layers are supported
python -m onnx2tflite --weights "./your_model.onnx" --outpath "./save_path" --formats "tflite" --input-node-names "layer_inputname" --output-node-names "layer_outname1" "layer_outname2"

# quantize the model weights only
python -m onnx2tflite --weights "./your_model.onnx" --formats "tflite" --weigthquant

# quantize the whole model, including inputs and outputs
## fp16
python -m onnx2tflite --weights "./your_model.onnx" --formats "tflite" --fp16

## int8 (recommended)
python -m onnx2tflite --weights "./your_model.onnx" --formats "tflite" --int8 --imgroot "./dataset_path" --int8mean 0 0 0 --int8std 255 255 255

## int8 with random calibration data instead of image files
python -m onnx2tflite --weights "./your_model.onnx" --formats "tflite" --int8
```
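
To convert a whole folder of ONNX files in one go, the CLI can also be driven from a small Python script. The sketch below is illustrative: it uses only the flags shown above, and the directory names are placeholders.

```python
import os
import subprocess
import sys

onnx_dir = "./onnx_models"      # placeholder: folder with .onnx files
out_dir = "./tflite_models"     # placeholder: where the converted models go

for name in os.listdir(onnx_dir):
    if name.endswith(".onnx"):
        # Invoke the onnx2tflite CLI once per model.
        subprocess.run(
            [sys.executable, "-m", "onnx2tflite",
             "--weights", os.path.join(onnx_dir, name),
             "--outpath", out_dir,
             "--formats", "tflite"],
            check=True,
        )
```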

---

## Features

- High consistency. Compared with the ONNX outputs, the average per-element error is below 1e-5; a quick way to check this yourself is sketched right after this list.

- Faster. Outputs a TensorFlow Lite model about 30% faster than [onnx_tf](https://github.com/onnx/onnx-tensorflow).

- Automatic channel alignment. Automatically converts the PyTorch layout (NCHW) to the TensorFlow layout (NHWC).

- Deployment support. Supports outputting quantized models, including fp16 and uint8 quantization.

- Code friendly. I've been trying to keep the code structure simple and clear.
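
A quick way to check the consistency claim on your own model is to feed the same input to ONNX Runtime and to the TFLite interpreter and compare the outputs. The sketch below is illustrative: it assumes onnxruntime and tensorflow are installed, a classification-style model (so the output layout matches on both sides), a single NCHW image input, and placeholder file names.

```python
import numpy as np
import onnxruntime as ort
import tensorflow as tf

onnx_path, tflite_path = "./model.onnx", "./model.tflite"  # placeholder paths

x_nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Reference output from ONNX Runtime.
sess = ort.InferenceSession(onnx_path)
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x_nchw})[0]

# Output from the converted TFLite model; it expects NHWC, so transpose the input.
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.transpose(x_nchw, (0, 2, 3, 1)))
interpreter.invoke()
tflite_out = interpreter.get_tensor(out["index"])

print("mean abs error:", np.abs(onnx_out - tflite_out).mean())
```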

---

## Pytorch -> ONNX -> Tensorflow-Keras -> Tensorflow-Lite

- ### From torchvision to tensorflow-lite

```python
import torch
import torchvision

_input = torch.randn(1, 3, 224, 224)
model = torchvision.models.mobilenet_v2(True)
# the default export settings are fine
torch.onnx.export(model, _input, './mobilenetV2.onnx', opset_version=11)  # or opset_version=13

from converter import onnx_converter
onnx_converter(
    onnx_model_path = "./mobilenetV2.onnx",
    need_simplify = True,
    output_path = "./",
    target_formats = ['tflite'],  # or ['keras'], ['keras', 'tflite']
    weight_quant = False,
    fp16_model = False,
    int8_model = False,
    int8_mean = None,
    int8_std = None,
    image_root = None
)
```

- ### From custom pytorch model to tensorflow-lite-int8

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

model = MyModel()
model.load_state_dict(torch.load("model_checkpoint.pth", map_location="cpu"))

_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, _input, './mymodel.onnx', opset_version=11)  # or opset_version=13

from converter import onnx_converter
onnx_converter(
    onnx_model_path = "./mymodel.onnx",
    need_simplify = True,
    output_path = "./",
    target_formats = ['tflite'],  # or ['keras'], ['keras', 'tflite']
    weight_quant = False,
    int8_model = True,  # enable int8 quantization
    int8_mean = [123.675, 116.28, 103.53],  # mean used in image preprocessing
    int8_std = [58.395, 57.12, 57.375],  # std used in image preprocessing
    image_root = "./dataset/train"  # folder of training images for calibration
)
```
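
A fully int8-quantized TFLite model usually expects quantized inputs and returns quantized outputs, so the scale and zero point have to be applied by hand at inference time. The sketch below is illustrative: it assumes tensorflow is installed, that the converter produced int8 input/output tensors, and a placeholder model path; the random input stands in for an image preprocessed with the mean/std used above.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./mymodel.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy NHWC input; in practice, preprocess the image with the mean/std above.
x = np.random.randn(1, 224, 224, 3).astype(np.float32)

# Quantize the input with its scale and zero point.
scale, zero_point = inp["quantization"]
x_q = np.round(x / scale + zero_point).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], x_q)
interpreter.invoke()
y_q = interpreter.get_tensor(out["index"])

# Dequantize the output back to float.
o_scale, o_zero = out["quantization"]
y = (y_q.astype(np.float32) - o_zero) * o_scale
print(y.shape)
```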

---

## Validated models

- [SSD](https://github.com/qfgaohao/pytorch-ssd)

- [HRNet](https://github.com/HRNet/HRNet-Facial-Landmark-Detection)

- [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX)

- [YOLOV3](https://github.com/ultralytics/yolov3)

- [YOLOV4](https://github.com/Tianxiaomo/pytorch-YOLOv4)

- [YOLOV5](https://github.com/ultralytics/yolov5)

- [YOLOV6](https://github.com/meituan/YOLOv6)

- [YOLOV7](https://github.com/WongKinYiu/yolov7)

- [YOLOV10](https://github.com/THU-MIG/yolov10)

- [MoveNet](https://github.com/fire717/movenet.pytorch)

- [UNet\FPN](https://github.com/bigmb/Unet-Segmentation-Pytorch-Nest-of-Unets)

- ViT(torchvision)

- [SwinTransformerV1](https://github.com/microsoft/Swin-Transformer)

- MLP(custom)

- DCGAN(custom)

- [AutoEncoder/VAE](https://github.com/AntixK/PyTorch-VAE)

- All torchvision classification models

- Some segmentation models in torchvision

- 1D or 2D CNNs without special operators (custom)

---

## Add an operator yourself

When you encounter an unsupported operator, you can add it yourself or open an issue.<br/>

Implementing a new operator parser is straightforward; just follow the steps below.<br/>

Step 0: Pick the corresponding layer code file in the [layers folder](./onnx2tflite/layers/), for example activations_layers.py for 'HardSigmoid'.<br/>

Step 1: Open it and edit it:

```python
# All operators are registered through the OPERATOR register.
# The registered name is the ONNX operator name.
@OPERATOR.register_operator("HardSigmoid")
class TFHardSigmoid():
    def __init__(self, tensor_grap, node_weights, node_inputs, node_attribute, node_outputs, layout_dict, *args, **kwargs) -> None:
        '''
        :param tensor_grap: dict, key is a node name, value is the tensorflow-keras output tensor of that node.
        :param node_weights: dict, key is a node name, value is static data such as weight/bias/constant; weights usually need to be transformed by dimension_utils.tensor_NCD_to_NDC_format.
        :param node_inputs: List[str], node input names, indicating which nodes the inputs come from; they may be found in tensor_grap or node_weights.
        :param node_attribute: dict, key is an attribute name such as 'axis' or 'perm'; the value type varies (List[int], int, float, ...). Note that an 'axis' value should be adjusted from NCHW to NHWC via dimension_utils.channel_to_last_dimension or dimension_utils.shape_NCD_to_NDC_format.
        :param node_outputs: List[str], node output names.
        :param layout_dict: List[Layout], layouts of all preceding nodes.
        '''
        super().__init__()
        self.alpha = node_attribute.get("alpha", 0.2)
        self.beta = node_attribute.get("beta", 0.5)

    def __call__(self, inputs):
        return tf.clip_by_value(self.alpha*inputs + self.beta, 0, 1)
```

Step 2: Make it work without errors.<br/>

Step 3: Convert a model to tflite without any quantization.<br/>
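
As a further illustration of the registration pattern (not taken from the repository, and the operator may already be supported there), a parser for the attribute-free ONNX 'Softsign' op could look like the following, placed in the same activations layer file so that OPERATOR and tf are already in scope.

```python
# Illustrative only: follows the same pattern as TFHardSigmoid above.
@OPERATOR.register_operator("Softsign")
class TFSoftsign():
    def __init__(self, tensor_grap, node_weights, node_inputs, node_attribute, node_outputs, layout_dict, *args, **kwargs) -> None:
        super().__init__()
        # Softsign has no attributes, so nothing needs to be read here.

    def __call__(self, inputs):
        # softsign(x) = x / (1 + |x|), matching the ONNX definition.
        return tf.nn.softsign(inputs)
```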

---

# License

This software is covered by the Apache-2.0 license.


MIT-Adobe FiveK dataset (DNG raw images): https://data.csail.mit.edu/graphics/fivek/
