Real-Time Detection with YOLOv8 Using a Device Camera

The code is as follows:

```python
from ultralytics import YOLO
import cv2
from cv2 import getTickCount, getTickFrequency

# Load the pretrained YOLOv8n weights
yolo = YOLO('./yolov8n.pt')

# Real-time detection from the device camera
cap = cv2.VideoCapture(0)
while cap.isOpened():
    loop_start = getTickCount()   # start-of-loop timestamp, used to time this frame
    success, frame = cap.read()   # read one frame from the camera
    if not success:
        break

    results = yolo.predict(source=frame)  # run detection on the current frame
    annotated_frame = results[0].plot()   # draw the detected boxes and labels on the frame

    # Compute the per-frame processing time and convert it to FPS
    loop_time = getTickCount() - loop_start
    total_time = loop_time / getTickFrequency()  # elapsed time in seconds
    fps = 1 / total_time

    # Draw the FPS counter in the top-left corner of the frame
    fps_text = f"FPS: {fps:.2f}"
    font = cv2.FONT_HERSHEY_SIMPLEX
    font_scale = 1
    font_thickness = 2
    text_color = (0, 0, 255)   # red (BGR)
    text_position = (10, 30)   # top-left corner

    cv2.putText(annotated_frame, fps_text, text_position, font, font_scale, text_color, font_thickness)
    cv2.imshow('Real-time detection', annotated_frame)

    # Press 'q' to exit the loop
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()            # release the camera
cv2.destroyAllWindows()  # close the OpenCV windows
```
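
As an aside, the per-frame `predict()` loop above can be slimmed down: the ultralytics package also accepts a camera index as the source and can stream results frame by frame. The sketch below is a minimal variant under that assumption (same `./yolov8n.pt` weights); check the `predict()` arguments against the ultralytics version you have installed.

```python
# Minimal streaming variant (a sketch, assuming a reasonably recent ultralytics release).
from ultralytics import YOLO
import cv2

model = YOLO('./yolov8n.pt')

# source=0 opens the default camera; stream=True returns a generator that
# yields one Results object per frame instead of buffering them all.
for result in model.predict(source=0, stream=True):
    annotated = result.plot()  # draw boxes and labels on the frame
    cv2.imshow('Real-time detection', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break

cv2.destroyAllWindows()
```

Compared with the explicit `cv2.VideoCapture` loop, this keeps the capture logic inside ultralytics, at the cost of losing the custom FPS overlay.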

Test results:

To use YOLOv8 with a camera for real-time detection, the overall steps are:

1. Install OpenCV and YOLOv8
2. Download the YOLOv8 weights and configuration file
3. Open the camera with OpenCV
4. Run YOLOv8 detection on every frame
5. Draw the detection results on the frame and display it

Below is a simple example:

```python
import cv2
import numpy as np

# Load YOLOv8
net = cv2.dnn.readNet("yolov8.weights", "yolov8.cfg")

# Load classes
classes = []
with open("coco.names", "r") as f:
    classes = [line.strip() for line in f.readlines()]

# Set up camera
cap = cv2.VideoCapture(0)

while True:
    # Read frame from camera
    ret, frame = cap.read()

    # Create blob from frame
    blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), swapRB=True, crop=False)

    # Set input to YOLOv8 network
    net.setInput(blob)

    # Get output layer names (flatten() keeps this working across OpenCV versions
    # that return either an Nx1 array or a 1-D array of layer indices)
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

    # Run forward pass on YOLOv8 network
    outputs = net.forward(output_layers)

    # Process outputs
    boxes = []
    confidences = []
    class_ids = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                center_x = int(detection[0] * frame.shape[1])
                center_y = int(detection[1] * frame.shape[0])
                w = int(detection[2] * frame.shape[1])
                h = int(detection[3] * frame.shape[0])
                x = center_x - w // 2
                y = center_y - h // 2
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    # Apply non-max suppression
    indices = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)

    # Draw boxes and labels on frame
    if len(indices) > 0:
        for i in indices.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = confidences[i]
            color = (0, 255, 0)
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
            cv2.putText(frame, f"{label} {confidence:.2f}", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

    # Show frame
    cv2.imshow("YOLOv8", frame)

    # Exit on 'q' key press
    if cv2.waitKey(1) == ord('q'):
        break

# Release resources
cap.release()
cv2.destroyAllWindows()
```
