YOLOv8 Export

Python:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom trained model

# Export the model
model.export(format="onnx")
```

CLI:

```shell
yolo export model=yolov8n.pt format=onnx  # export official model
yolo export model=path/to/best.pt format=onnx  # export custom trained model
```

Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `format` | `str` | `'torchscript'` | Target format for the exported model, such as `'onnx'`, `'torchscript'`, `'tensorflow'`, or others, defining compatibility with various deployment environments. |
| `imgsz` | `int` or `tuple` | `640` | Desired image size for the model input. Can be an integer for square images or a tuple `(height, width)` for specific dimensions. |
| `keras` | `bool` | `False` | Enables export to Keras format for TensorFlow SavedModel, providing compatibility with TensorFlow serving and APIs. |
| `optimize` | `bool` | `False` | Applies optimization for mobile devices when exporting to TorchScript, potentially reducing model size and improving performance. |
| `half` | `bool` | `False` | Enables FP16 (half-precision) quantization, reducing model size and potentially speeding up inference on supported hardware. |
| `int8` | `bool` | `False` | Activates INT8 quantization, further compressing the model and speeding up inference with minimal accuracy loss, primarily for edge devices. |
| `dynamic` | `bool` | `False` | Allows dynamic input sizes for ONNX and TensorRT exports, enhancing flexibility in handling varying image dimensions. |
| `simplify` | `bool` | `False` | Simplifies the model graph for ONNX exports with `onnxslim`, potentially improving performance and compatibility. |
| `opset` | `int` | `None` | Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. If not set, uses the latest supported version. |
| `workspace` | `float` | `4.0` | Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance. |
| `nms` | `bool` | `False` | Adds Non-Maximum Suppression (NMS) to the CoreML export, essential for accurate and efficient detection post-processing. |
| `batch` | `int` | `1` | Specifies the export model's batch inference size, i.e. the maximum number of images the exported model will process concurrently in `predict` mode. |
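The arguments above all behave as keyword overrides of documented defaults. As a minimal plain-Python sketch of that resolution logic (the `EXPORT_DEFAULTS` dict and `resolve_export_args` helper are illustrative, not part of the `ultralytics` API):

```python
# Documented export defaults, copied from the table above.
EXPORT_DEFAULTS = {
    "format": "torchscript",
    "imgsz": 640,
    "keras": False,
    "optimize": False,
    "half": False,
    "int8": False,
    "dynamic": False,
    "simplify": False,
    "opset": None,
    "workspace": 4.0,
    "nms": False,
    "batch": 1,
}

def resolve_export_args(**overrides):
    """Merge user-supplied keyword overrides into the documented defaults,
    rejecting any argument name the table does not list."""
    unknown = set(overrides) - set(EXPORT_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown export arguments: {sorted(unknown)}")
    return {**EXPORT_DEFAULTS, **overrides}

args = resolve_export_args(format="onnx", half=True, dynamic=True)
print(args["format"], args["half"], args["batch"])  # onnx True 1
```

So a call like `model.export(format="onnx", half=True, dynamic=True)` leaves every unmentioned argument (`batch`, `opset`, and so on) at its default.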

Export Formats

| Format | `format` Argument | Model | Metadata | Arguments |
| --- | --- | --- | --- | --- |
| PyTorch | - | `yolov8n.pt` | ✅ | - |
| TorchScript | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize`, `batch` |
| ONNX | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch` |
| OpenVINO | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| TensorRT | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML | `coreml` | `yolov8n.mlpackage` | ✅ | `imgsz`, `half`, `int8`, `nms`, `batch` |
| TF SavedModel | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras`, `int8`, `batch` |
| TF GraphDef | `pb` | `yolov8n.pb` | ❌ | `imgsz`, `batch` |
| TF Lite | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| TF Edge TPU | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| TF.js | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| PaddlePaddle | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz`, `batch` |
| NCNN | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
