Ultralytics YOLO Parameter Reference


Note: everything below is based on the YOLOv8 release provided by Ultralytics.

Train Settings

| Argument | Default | Description |
| --- | --- | --- |
| model | None | Specifies the model file for training. Accepts a path to either a .pt pretrained model or a .yaml configuration file. Essential for defining the model structure or initializing weights. |
| data | None | Path to the dataset configuration file (e.g., coco8.yaml). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
| epochs | 100 | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance. |
| time | None | Maximum training time in hours. If set, this overrides the epochs argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios. |
| patience | 100 | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus. |
| batch | 16 | Batch size for training, indicating how many images are processed before the model's internal parameters are updated. AutoBatch (batch=-1) dynamically adjusts the batch size based on GPU memory availability. |
| imgsz | 640 | Target image size for training. All images are resized to this dimension before being fed into the model. Affects model accuracy and computational complexity. |
| save | True | Enables saving of training checkpoints and final model weights. Useful for resuming training or model deployment. |
| save_period | -1 | Frequency of saving model checkpoints, specified in epochs. A value of -1 disables this feature. Useful for saving interim models during long training sessions. |
| cache | False | Enables caching of dataset images in memory (True/ram), on disk (disk), or disables it (False). Improves training speed by reducing disk I/O at the cost of increased memory usage. |
| device | None | Specifies the computational device(s) for training: a single GPU (device=0), multiple GPUs (device=0,1), CPU (device=cpu), or MPS for Apple silicon (device=mps). |
| workers | 8 | Number of worker threads for data loading (per RANK if multi-GPU training). Influences the speed of data preprocessing and feeding into the model, especially useful in multi-GPU setups. |
| project | None | Name of the project directory where training outputs are saved. Allows for organized storage of different experiments. |
| name | None | Name of the training run. Used for creating a subdirectory within the project folder, where training logs and outputs are stored. |
| exist_ok | False | If True, allows overwriting of an existing project/name directory. Useful for iterative experimentation without needing to manually clear previous outputs. |
| pretrained | True | Determines whether to start training from a pretrained model. Can be a boolean value or a string path to a specific model from which to load weights. Enhances training efficiency and model performance. |
| optimizer | 'auto' | Choice of optimizer for training. Options include SGD, Adam, AdamW, NAdam, RAdam, RMSProp, etc., or auto for automatic selection based on model configuration. Affects convergence speed and stability. |
| verbose | False | Enables verbose output during training, providing detailed logs and progress updates. Useful for debugging and closely monitoring the training process. |
| seed | 0 | Sets the random seed for training, ensuring reproducibility of results across runs with the same configurations. |
| deterministic | True | Forces deterministic algorithm use, ensuring reproducibility, but may affect performance and speed due to the restriction on non-deterministic algorithms. |
| single_cls | False | Treats all classes in multi-class datasets as a single class during training. Useful for binary classification tasks or when focusing on object presence rather than classification. |
| rect | False | Enables rectangular training, optimizing batch composition for minimal padding. Can improve efficiency and speed but may affect model accuracy. |
| cos_lr | False | Utilizes a cosine learning rate scheduler, adjusting the learning rate following a cosine curve over epochs. Helps in managing learning rate for better convergence. |
| close_mosaic | 10 | Disables mosaic data augmentation in the last N epochs to stabilize training before completion. Setting to 0 disables this feature. |
| resume | False | Resumes training from the last saved checkpoint. Automatically loads model weights, optimizer state, and epoch count, continuing training seamlessly. |
| amp | True | Enables Automatic Mixed Precision (AMP) training, reducing memory usage and possibly speeding up training with minimal impact on accuracy. (If training reports NaN losses and produces no results, setting this to False may help.) |
| fraction | 1 | Specifies the fraction of the dataset to use for training. Allows training on a subset of the full dataset, useful for experiments or when resources are limited. |
| profile | False | Enables profiling of ONNX and TensorRT speeds during training, useful for optimizing model deployment. |
| freeze | None | Freezes the first N layers of the model or specified layers by index, reducing the number of trainable parameters. Useful for fine-tuning or transfer learning. |
| lr0 | 0.01 | Initial learning rate (i.e. SGD=1E-2, Adam=1E-3). Adjusting this value is crucial for the optimization process, influencing how rapidly model weights are updated. |
| lrf | 0.01 | Final learning rate as a fraction of the initial rate (lr0 * lrf), used in conjunction with schedulers to adjust the learning rate over time. |
| momentum | 0.937 | Momentum factor for SGD, or beta1 for Adam optimizers, influencing the incorporation of past gradients in the current update. |
| weight_decay | 0.0005 | L2 regularization term, penalizing large weights to prevent overfitting. |
| warmup_epochs | 3 | Number of epochs for learning rate warmup, gradually increasing the learning rate from a low value to the initial learning rate to stabilize training early on. |
| warmup_momentum | 0.8 | Initial momentum for the warmup phase, gradually adjusted to the set momentum over the warmup period. |
| warmup_bias_lr | 0.1 | Learning rate for bias parameters during the warmup phase, helping stabilize model training in the initial epochs. |
| box | 7.5 | Weight of the box loss component in the loss function, influencing how much emphasis is placed on accurately predicting bounding box coordinates. |
| cls | 0.5 | Weight of the classification loss in the total loss function, affecting the importance of correct class prediction relative to other components. |
| dfl | 1.5 | Weight of the distribution focal loss, used in certain YOLO versions for fine-grained classification. |
| pose | 12 | Weight of the pose loss in models trained for pose estimation, influencing the emphasis on accurately predicting pose keypoints. |
| kobj | 2 | Weight of the keypoint objectness loss in pose estimation models, balancing detection confidence with pose accuracy. |
| label_smoothing | 0 | Applies label smoothing, softening hard labels toward a mix of the target label and a uniform distribution over labels; can improve generalization. |
| nbs | 64 | Nominal batch size used for loss normalization. |
| overlap_mask | True | Determines whether segmentation masks should overlap during training, applicable in instance segmentation tasks. |
| mask_ratio | 4 | Downsample ratio for segmentation masks, affecting the resolution of masks used during training. |
| dropout | 0 | Dropout rate for regularization in classification tasks, preventing overfitting by randomly omitting units during training. |
| val | True | Enables validation during training, allowing for periodic evaluation of model performance on a separate dataset. |
| plots | False | Generates and saves plots of training and validation metrics, as well as prediction examples, providing visual insights into model performance and learning progression. |
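
As a quick illustration of how these settings are passed through the Python API, here is a minimal training sketch; the coco8.yaml dataset ships with Ultralytics, while the project and name values are just placeholders:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 detection model (weights are downloaded on first use).
model = YOLO("yolov8n.pt")

# Training arguments map one-to-one to the table above.
model.train(
    data="coco8.yaml",      # dataset configuration file
    epochs=100,             # total training epochs
    imgsz=640,              # training image size
    batch=16,               # batch size; -1 enables AutoBatch
    device=0,               # single GPU; use "cpu" if no GPU is available
    patience=100,           # early-stopping patience
    project="runs/detect",  # placeholder output directory
    name="yolov8n_coco8",   # placeholder run name
)
```

The same arguments can be supplied on the command line, e.g. `yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640`.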

Augmentation Settings and Hyperparameters

| Argument | Type | Default | Range | Description |
| --- | --- | --- | --- | --- |
| hsv_h | float | 0.015 | 0.0 - 1.0 | Adjusts the hue of the image by a fraction of the color wheel, introducing color variability. Helps the model generalize across different lighting conditions. |
| hsv_s | float | 0.7 | 0.0 - 1.0 | Alters the saturation of the image by a fraction, affecting the intensity of colors. Useful for simulating different environmental conditions. |
| hsv_v | float | 0.4 | 0.0 - 1.0 | Modifies the value (brightness) of the image by a fraction, helping the model to perform well under various lighting conditions. |
| degrees | float | 0 | 0 - 360 | Rotates the image randomly within the specified degree range, improving the model's ability to recognize objects at various orientations. |
| translate | float | 0.1 | 0.0 - 1.0 | Translates the image horizontally and vertically by a fraction of the image size, aiding in learning to detect partially visible objects. |
| scale | float | 0.5 | >=0.0 | Scales the image by a gain factor, simulating objects at different distances from the camera. |
| shear | float | 0 | 0 - 360 | Shears the image by a specified degree, mimicking the effect of objects being viewed from different angles. |
| perspective | float | 0 | 0.0 - 0.001 | Applies a random perspective transformation to the image, enhancing the model's ability to understand objects in 3D space. |
| flipud | float | 0 | 0.0 - 1.0 | Flips the image upside down with the specified probability, increasing the data variability without affecting the object's characteristics. |
| fliplr | float | 0.5 | 0.0 - 1.0 | Flips the image left to right with the specified probability, useful for learning symmetrical objects and increasing dataset diversity. |
| bgr | float | 0 | 0.0 - 1.0 | Flips the image channels from RGB to BGR with the specified probability, useful for increasing robustness to incorrect channel ordering. |
| mosaic | float | 1 | 0.0 - 1.0 | Combines four training images into one, simulating different scene compositions and object interactions. Highly effective for complex scene understanding. |
| mixup | float | 0 | 0.0 - 1.0 | Blends two images and their labels, creating a composite image. Enhances the model's ability to generalize by introducing label noise and visual variability. |
| copy_paste | float | 0 | 0.0 - 1.0 | Copies objects from one image and pastes them onto another, useful for increasing object instances and learning object occlusion. |
| auto_augment | str | randaugment | - | Automatically applies a predefined augmentation policy (randaugment, autoaugment, augmix), optimizing for classification tasks by diversifying the visual features. |
| erasing | float | 0.4 | 0.0 - 0.9 | Randomly erases a portion of the image during classification training, encouraging the model to focus on less obvious features for recognition. |
| crop_fraction | float | 1 | 0.1 - 1.0 | Crops the classification image to a fraction of its size to emphasize central features and adapt to object scales, reducing background distractions. |
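
These augmentation hyperparameters are not a separate API; they are passed to `train()` (or set in a YAML config) alongside the regular training arguments. A minimal sketch, mostly restating the defaults from the table:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Augmentation hyperparameters travel with the other train() keyword arguments.
model.train(
    data="coco8.yaml",
    epochs=50,
    imgsz=640,
    hsv_h=0.015,      # hue jitter
    hsv_s=0.7,        # saturation jitter
    hsv_v=0.4,        # brightness jitter
    degrees=10.0,     # small random rotation (default is 0)
    translate=0.1,    # random translation
    scale=0.5,        # random scaling gain
    fliplr=0.5,       # horizontal flip probability
    mosaic=1.0,       # mosaic probability
    mixup=0.0,        # mixup disabled (default)
    close_mosaic=10,  # turn mosaic off for the final 10 epochs
)
```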

Arguments for YOLO Model Validation

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| data | str | None | Specifies the path to the dataset configuration file (e.g., coco8.yaml). This file includes paths to validation data, class names, and number of classes. |
| imgsz | int | 640 | Defines the size of input images. All images are resized to this dimension before processing. |
| batch | int | 16 | Sets the number of images per batch. Use -1 for AutoBatch, which automatically adjusts based on GPU memory availability. |
| save_json | bool | False | If True, saves the results to a JSON file for further analysis or integration with other tools. |
| save_hybrid | bool | False | If True, saves a hybrid version of labels that combines original annotations with additional model predictions. |
| conf | float | 0.001 | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
| iou | float | 0.6 | Sets the Intersection over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
| max_det | int | 300 | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
| half | bool | True | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
| device | str | None | Specifies the device for validation (cpu, cuda:0, etc.). Allows flexibility in utilizing CPU or GPU resources. |
| dnn | bool | False | If True, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
| plots | bool | False | When set to True, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
| rect | bool | False | If True, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
| split | str | val | Determines the dataset split to use for validation (val, test, or train). Allows flexibility in choosing the data segment for performance evaluation. |
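
For reference, a validation call might look like the sketch below; the weights path is a placeholder for whatever best.pt your training run produced:

```python
from ultralytics import YOLO

# Placeholder path to the weights produced by a previous training run.
model = YOLO("runs/detect/yolov8n_coco8/weights/best.pt")

metrics = model.val(
    data="coco8.yaml",
    imgsz=640,
    batch=16,
    conf=0.001,      # minimum confidence threshold
    iou=0.6,         # NMS IoU threshold
    split="val",     # evaluate on the validation split
    save_json=True,  # dump COCO-style results for further analysis
)

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```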

Inference Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| source | str | 'ultralytics/assets' | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across different types of input. |
| conf | float | 0.25 | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold will be disregarded. Adjusting this value can help reduce false positives. |
| iou | float | 0.7 | Intersection over Union (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
| imgsz | int or tuple | 640 | Defines the image size for inference. Can be a single integer 640 for square resizing or a (height, width) tuple. Proper sizing can improve detection accuracy and processing speed. |
| half | bool | False | Enables half-precision (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
| device | str | None | Specifies the device for inference (e.g., cpu, cuda:0 or 0). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
| max_det | int | 300 | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
| vid_stride | int | 1 | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame; higher values skip frames. |
| stream_buffer | bool | False | Determines whether all frames should be buffered when processing video streams (True), or whether the model should return the most recent frame (False). Useful for real-time applications. |
| visualize | bool | False | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
| augment | bool | False | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
| agnostic_nms | bool | False | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
| classes | list[int] | None | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
| retina_masks | bool | False | Uses high-resolution segmentation masks if available in the model. This can enhance mask quality for segmentation tasks, providing finer detail. |
| embed | list[int] | None | Specifies the layers from which to extract feature vectors or embeddings. Useful for downstream tasks like clustering or similarity search. |
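
A minimal prediction sketch using a few of these arguments; the source path is a placeholder, and class id 0 corresponds to "person" in COCO-pretrained models:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

results = model.predict(
    source="path/to/images",  # placeholder: image, folder, video, URL, or camera id
    conf=0.25,                # confidence threshold
    iou=0.7,                  # NMS IoU threshold
    imgsz=640,
    max_det=300,
    classes=[0],              # optional: keep only class id 0
)

for r in results:
    # Each Results object exposes boxes (xyxy), class ids, and confidences.
    print(r.boxes.xyxy, r.boxes.cls, r.boxes.conf)
```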

Visualization Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| show | bool | False | If True, displays the annotated images or videos in a window. Useful for immediate visual feedback during development or testing. |
| save | bool | False | Enables saving of the annotated images or videos to file. Useful for documentation, further analysis, or sharing results. |
| save_frames | bool | False | When processing videos, saves individual frames as images. Useful for extracting specific frames or for detailed frame-by-frame analysis. |
| save_txt | bool | False | Saves detection results in a text file, following the format [class] [x_center] [y_center] [width] [height] [confidence]. Useful for integration with other analysis tools. |
| save_conf | bool | False | Includes confidence scores in the saved text files. Enhances the detail available for post-processing and analysis. |
| save_crop | bool | False | Saves cropped images of detections. Useful for dataset augmentation, analysis, or creating focused datasets for specific objects. |
| show_labels | bool | True | Displays labels for each detection in the visual output. Provides immediate understanding of detected objects. |
| show_conf | bool | True | Displays the confidence score for each detection alongside the label. Gives insight into the model's certainty for each detection. |
| show_boxes | bool | True | Draws bounding boxes around detected objects. Essential for visual identification and location of objects in images or video frames. |
| line_width | None or int | None | Specifies the line width of bounding boxes. If None, the line width is automatically adjusted based on the image size. Provides visual customization for clarity. |
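
These flags are also keyword arguments to `predict()`. A sketch that saves annotated output plus per-detection text files (the video path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

model.predict(
    source="path/to/video.mp4",  # placeholder input
    save=True,         # write annotated output under runs/detect/predict*
    save_txt=True,     # one .txt file per image with normalized boxes
    save_conf=True,    # append the confidence score to each .txt line
    save_crop=True,    # save cropped detections
    show_labels=True,  # draw class labels
    show_conf=True,    # draw confidence scores
    line_width=2,      # bounding-box line width in pixels
)
```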
To run a YOLOv8 model you need OpenCV together with the Ultralytics YOLOv8 package. First install the toolkit; for example, Ultralytics release 8.0.5 can be installed with `!pip install "ultralytics==8.0.5"`. You can then combine OpenCV with the YOLOv8 model to obtain the location and class information of each detection and build further custom functionality on top of it. As for hardware, according to reference [2], combining OpenVINO™ quantization and acceleration with Intel® CPUs, integrated graphics, and discrete graphics can reach 1000 frames per second, so with the algorithm and hardware optimized together, high-performance YOLOv8 inference can be achieved on Intel® processors.

References:
1. YOLO V8 + OpenCV (Python) — https://download.csdn.net/download/qq_53457019/87658761
2. 优化+量化,让你的YOLOv8获得1000+ FPS性能 — https://blog.csdn.net/m0_59448707/article/details/129616678
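
To make the OpenCV workflow above concrete, here is a minimal sketch of a webcam loop, assuming `ultralytics` and `opencv-python` are installed and a camera is available at index 0:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)  # assumed webcam index

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)  # run inference on the current frame
    annotated = results[0].plot()          # draw boxes, labels, and confidences
    cv2.imshow("YOLOv8", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```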