YOLOv8 Environment Setup and Basic Training/Validation

Creating the conda environment

1 Create the virtual environment

conda create --name yoloEnv python==3.8

2 Activate the virtual environment

conda activate yoloEnv

3 Install PyTorch (GPU, CUDA 11.1)

conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

4 Test the installation (run the following in a Python interpreter inside the activated environment)

import torch
print(torch.__version__)          # should print 1.8.0
print(torch.cuda.is_available())  # should print True if the GPU and CUDA setup are working
exit()
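The same check can also be run as a one-liner from the shell:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"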

5 Install the ultralytics package

pip install ultralytics

6 Get the ultralytics source code

git clone https://github.com/ultralytics/ultralytics.git
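After installing the package (and optionally cloning the repository), a quick sanity check can confirm that ultralytics sees your PyTorch and CUDA setup. This uses the checks() helper shown in the official quickstart notebook:

import ultralytics
ultralytics.checks()  # prints the ultralytics, Python, torch and CUDA versions it detects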

Some other useful conda commands

conda create --prefix=D:\Software\pytorch python=3.8: create an environment in a custom location
conda info --envs: list all environments
conda env list: list all environments
conda activate your_env_name: activate an environment
conda deactivate: deactivate the current environment
conda list: list the packages and versions installed in the current environment
conda install numpy scikit-learn: install the numpy and scikit-learn packages
conda remove -n yourname --all: delete an environment
conda update package_name: update a package
conda clean -p: remove unused packages
conda clean -t: remove cached tarballs
conda clean -y --all: remove all unused packages and caches
conda env export > environment.yml: export the current environment's configuration to environment.yml (a minimal example follows below)
conda env create -f environment.yml: create a new environment from environment.yml
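For reference, an exported environment.yml for the environment above would look roughly like the following; this is a hypothetical, abbreviated sketch (a real export lists exact builds and many more dependencies):

# hypothetical, abbreviated environment.yml for the yoloEnv environment above
name: yoloEnv
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - pytorch=1.8.0
  - torchvision=0.9.0
  - torchaudio=0.8.0
  - cudatoolkit=11.1
  - pip
  - pip:
      - ultralytics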

Test code

Open the ultralytics code in PyCharm, select the yoloEnv environment as the interpreter, and create a new test file named yolo_test.py.
After running it, the prediction results appear in the runs\detect\predict folder.
Test code:

from ultralytics import YOLO
# Load a pretrained YOLOv8n model
model = YOLO('yolov8n.pt')
# Run inference on 'bus.jpg' with arguments
model.predict('ultralytics/assets/bus.jpg', save=True, imgsz=320, conf=0.5)
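The same prediction can also be run from the command line with the yolo CLI that the package installs; a sketch of the equivalent call (arguments mirror the Python call above):

yolo predict model=yolov8n.pt source=ultralytics/assets/bus.jpg imgsz=320 conf=0.5 save=True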

Training on your own dataset

Dataset

Dataset preparation is not covered in detail here; VOC, COCO and YOLO format datasets can be converted into one another.
Use coco128.yaml as a reference and modify it for your own dataset (a sample file is sketched below).
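For illustration, a minimal mydataset.yaml modelled on the coco128.yaml layout might look like this; the path and class names are placeholders for your own data:

# hypothetical ultralytics/cfg/datasets/mydataset.yaml, following the coco128.yaml layout
path: ../datasets/mydataset  # dataset root directory
train: images/train  # training images, relative to path
val: images/val  # validation images, relative to path
names:  # class id -> class name
  0: class0
  1: class1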

Training

from ultralytics import YOLO

if __name__ == '__main__':
    # Load a model (the second call overrides the first; keep whichever you need)
    model = YOLO('yolov8n.yaml')  # build a new model from YAML
    model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
    # Train the model
    results = model.train(data='ultralytics/cfg/datasets/mydataset.yaml',
                          epochs=100, batch=2, imgsz=512, device=0, workers=1)
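Training saves its weights (best.pt and last.pt) under runs/detect/train/weights, which is what the validation step below loads. A roughly equivalent run from the yolo command line would look like this (a sketch, mirroring the arguments above):

yolo detect train data=ultralytics/cfg/datasets/mydataset.yaml model=yolov8n.pt epochs=100 batch=2 imgsz=512 device=0 workers=1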

Evaluating the model (val)

from ultralytics import YOLO

if __name__ == '__main__':
    # Load a model
    model = YOLO('runs/detect/train/weights/best.pt')  # load a custom model

    # Customize validation settings
    validation_results = model.val(data='ultralytics/cfg/datasets/mydataset.yaml',
                                   imgsz=512,
                                   batch=2,
                                   conf=0.25,
                                   iou=0.6,
                                   workers=1,
                                   device='0')
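    # A follow-up sketch: print a few summary metrics from the returned object
    # (attribute names as exposed by the ultralytics metrics API, to the best of my knowledge)
    print(validation_results.box.map)    # mAP50-95
    print(validation_results.box.map50)  # mAP50
    print(validation_results.box.map75)  # mAP75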

YOLOv8 + ByteTrack test

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame with ByteTrack, persisting tracks between frames
        # (ultralytics defaults to BoT-SORT, so ByteTrack is selected explicitly here)
        results = model.track(frame, persist=True, tracker="bytetrack.yaml")

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
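For reference, the same ByteTrack run can be collapsed into a single call by passing the video path as the source; this is a sketch assuming the save and tracker arguments of model.track(), which let ultralytics handle the frame loop and write the annotated video under runs/:

from ultralytics import YOLO

# Alternative sketch: let model.track() iterate over the video itself
model = YOLO('yolov8n.pt')
model.track(source="path/to/video.mp4", tracker="bytetrack.yaml", save=True)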

Exporting the model to ONNX

from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt')  # load an official model
# Export the model
model.export(format='onnx', opset=13)
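The export should write yolov8n.onnx next to the original weights, and the exported file can be loaded back through the same YOLO API for a quick check (a sketch; the filename is the default that export() produces):

from ultralytics import YOLO

# Load the exported ONNX model and run a quick test prediction with it
onnx_model = YOLO('yolov8n.onnx')
onnx_model.predict('ultralytics/assets/bus.jpg', save=True, imgsz=640, conf=0.5)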

Error handling

If you hit a Pillow-related error (shown only as a screenshot in the original post), the fix is to uninstall pillow first and then reinstall it with pip.
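The corresponding commands:

pip uninstall pillow
pip install pillow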
