YOLOv8 Parameters & Usage

Usage:

from ultralytics import YOLO

# Build a new model from YAML, or load a pretrained model (the second assignment overwrites the first)
model = YOLO('yolov8n.yaml')
model = YOLO('yolov8n.pt')

# Train on a custom dataset, overriding augmentation hyperparameters
results = model.train(
    data='custom.yaml', epochs=80, batch=8, patience=0, augment=True, val=False,
    degrees=15, translate=0.05, scale=0.05, shear=0.05, perspective=0.0,
    mosaic=0.0, hsv_h=0.010, hsv_s=0.5, hsv_v=0.2,
)

results = model.val()
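
The train call above reads its dataset description from data='custom.yaml'. A minimal sketch of that file, written out from Python and assuming a standard Ultralytics detection dataset layout (the root path and class names below are hypothetical placeholders, not from the original post):

# Write a minimal dataset YAML for data='custom.yaml'
# (paths and class names are hypothetical placeholders)
yaml_text = """\
path: datasets/custom   # dataset root directory
train: images/train     # train images, relative to 'path'
val: images/val         # val images, relative to 'path'
names:
  0: class_a
  1: class_b
"""
with open('custom.yaml', 'w', encoding='utf-8') as f:
    f.write(yaml_text)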

Parameters:

Key              Value    Description
model            None     path to model file, i.e. yolov8n.pt, yolov8n.yaml
data             None     path to data file, i.e. coco128.yaml
epochs           100      number of epochs to train for
patience         50       epochs to wait for no observable improvement for early stopping of training
batch            16       number of images per batch (-1 for AutoBatch)
imgsz            640      size of input images as integer
save             True     save train checkpoints and predict results
save_period      -1       save checkpoint every x epochs (disabled if < 1)
cache            False    True/ram, disk or False. Use cache for data loading
device           None     device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers          8        number of worker threads for data loading (per RANK if DDP)
project          None     project name
name             None     experiment name
exist_ok         False    whether to overwrite existing experiment
pretrained       False    whether to use a pretrained model
optimizer        'auto'   optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose          False    whether to print verbose output
seed             0        random seed for reproducibility
deterministic    True     whether to enable deterministic mode
single_cls       False    train multi-class data as single-class
rect             False    rectangular training with each batch collated for minimum padding
cos_lr           False    use cosine learning rate scheduler
close_mosaic     0        (int) disable mosaic augmentation for final epochs
resume           False    resume training from last checkpoint
amp              True     Automatic Mixed Precision (AMP) training, choices=[True, False]
fraction         1.0      dataset fraction to train on (default is 1.0, all images in train set)
profile          False    profile ONNX and TensorRT speeds during training for loggers
lr0              0.01     initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf              0.01     final learning rate (lr0 * lrf)
momentum         0.937    SGD momentum/Adam beta1
weight_decay     0.0005   optimizer weight decay 5e-4
warmup_epochs    3.0      warmup epochs (fractions ok)
warmup_momentum  0.8      warmup initial momentum
warmup_bias_lr   0.1      warmup initial bias lr
box              7.5      box loss gain
cls              0.5      cls loss gain (scale with pixels)
dfl              1.5      dfl loss gain
pose             12.0     pose loss gain (pose-only)
kobj             2.0      keypoint obj loss gain (pose-only)
label_smoothing  0.0      label smoothing (fraction)
nbs              64       nominal batch size
overlap_mask     True     masks should overlap during training (segment train only)
mask_ratio       4        mask downsample ratio (segment train only)
dropout          0.0      use dropout regularization (classify train only)
val              True     validate/test during training
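
Most of the keys above can be passed directly as keyword arguments to model.train(). A minimal sketch combining several of them, assuming a single-GPU machine (the project and experiment names are hypothetical):

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.train(
    data='custom.yaml',
    epochs=100,
    patience=50,            # stop early after 50 epochs without improvement
    batch=-1,               # -1 enables AutoBatch
    imgsz=640,
    device=0,               # single GPU; use device='cpu' if none is available
    workers=8,
    optimizer='auto',
    cos_lr=True,            # cosine learning rate scheduler
    seed=0,                 # fixed seed for reproducibility
    project='runs_custom',  # hypothetical project name
    name='exp1',            # hypothetical experiment name
)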
