Raspberry Pi: real-time camera streaming with OpenCV

https://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/

Step 6: Accessing the video stream of your Raspberry Pi using Python and OpenCV.

Alright, so we’ve learned how to grab a single image from the Raspberry Pi camera. But what about a video stream?

You might guess that we are going to use the cv2.VideoCapture function here, but I actually recommend against this. Getting cv2.VideoCapture to play nicely with your Raspberry Pi is not a pleasant experience (you'll need to install extra drivers), and it is something you should generally avoid.

And besides, why would we use the cv2.VideoCapture function when we can easily access the raw video stream using the picamera module?

Let's go ahead and take a look at how we can access the video stream. Open up a new file, name it test_video.py, and insert the following code:
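The code listing itself did not survive extraction. Judging from the line-by-line walkthrough that follows, the script was close to the sketch below (the line numbers cited in the text correspond approximately to this layout; it requires the picamera module and a Pi camera, so it only runs on the Pi itself):

```python
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warm up
time.sleep(0.1)

# capture frames from the camera as a continuous stream
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array

    # show the frame on screen
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
```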

This example starts off similarly to the previous one: we begin by importing our necessary packages on Lines 2-5.

From there we construct our camera object on Line 8, which allows us to interface with the Raspberry Pi camera. We also take the time to set the resolution of our camera (640 x 480 pixels) on Line 9 and the frame rate (i.e. frames per second, or simply FPS) on Line 10. Finally, we initialize our PiRGBArray object on Line 11, taking care to specify the same resolution as on Line 9.

Accessing the actual video stream is handled on Line 17 by making a call to the capture_continuous method of our camera object.

This method returns a frame  from the video stream. The frame then has an array  property, which corresponds to the frame  in NumPy array format — all the hard work is done for us on Lines 17 and 20!

We then take the current frame of the video and display it on screen on Lines 23 and 24.

An important line to pay attention to is Line 27: You must clear the current frame before you move on to the next one!

If you fail to clear the frame, your Python script will throw an error — so be sure to pay close attention to this when implementing your own applications!

Finally, if the user presses the q key, we break from the loop and exit the program.

To execute our script, just open a terminal (making sure you are in the cv virtual environment, of course) and issue the following command:
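The command itself is missing from the extracted text; given the filename chosen above, it is presumably just:

```shell
python test_video.py
```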

Below follows an example of me executing the above command:

As you can see, the Raspberry Pi camera's video stream is being read by OpenCV and then displayed on screen! Furthermore, the Raspberry Pi camera shows no lag when accessing frames at 32 FPS. Granted, we are not doing any processing on the individual frames, but as I'll show in future blog posts, the Pi 2 can easily keep up with 24-32 FPS even when processing each frame.

### Answer 1:

Sorry, my training data only goes up to 2021, so I don't have detailed information about yolov5. However, if you want the Raspberry Pi to stream video from a USB camera to a PC, have the PC run recognition on the video, and return coordinates to the Pi, you can follow these steps:

1. Install the camera driver and required libraries on the Raspberry Pi.
2. Install yolov5 on the PC and train it on the targets you want to recognize.
3. On the Pi, read frames from the camera in code and send them over the network to the PC.
4. On the PC, run yolov5 on each frame and return the coordinates of the detected targets.
5. On the Pi, receive the coordinates returned by the PC and process them in code.

I hope this helps! Let me know if you need anything else.

### Answer 2:

On the Raspberry Pi, you can use the OpenCV library to stream video from a USB camera to a PC. First, install OpenCV and the USB camera driver on the Pi. Here is an example:

```python
import cv2
import requests
import json

# open the USB camera (index 0)
cap = cv2.VideoCapture(0)

while True:
    # read a frame from the video stream
    ret, frame = cap.read()

    # encode the frame as JPEG bytes
    _, img_encoded = cv2.imencode('.jpg', frame)
    img_bytes = img_encoded.tobytes()

    # send a POST request to the PC
    response = requests.post("http://your-pc-ip:5000/detect", data=img_bytes)

    # parse the JSON data returned by the PC
    result = json.loads(response.text)

    # get the target coordinates
    x = result['x']
    y = result['y']

    # draw the target location on the frame
    cv2.rectangle(frame, (x, y), (x + 100, y + 100), (0, 255, 0), 2)

    # show the frame
    cv2.imshow('frame', frame)

    # quit the loop when the q key is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release resources
cap.release()
cv2.destroyAllWindows()
```

On the PC, you need to set up a Yolov5-based object detection model and listen for the video frames sent by the Pi. Here is an example:

```python
import io
import json

from flask import Flask, request
from PIL import Image
import torch

# load a pretrained model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

app = Flask(__name__)

@app.route('/detect', methods=['POST'])
def detect():
    # get the raw JPEG bytes from the POST body
    img_bytes = request.data

    # decode the bytes into an image
    img = Image.open(io.BytesIO(img_bytes))

    # run Yolov5 object detection
    results = model(img)

    # extract the first detection's top-left corner
    boxes = results.pandas().xyxy[0]
    x = int(boxes['xmin'].iloc[0])
    y = int(boxes['ymin'].iloc[0])

    # build and return the JSON response
    return json.dumps({'x': x, 'y': y})

# start the Flask server
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

On the PC you need to install the Flask and Yolov5 libraries. Save the code above to a .py file and run it, and the PC will listen for the video frames sent by the Pi and return the target coordinates. Run the Pi-side code on the Raspberry Pi, and you have the full pipeline: the Pi streams USB camera frames to the PC, Yolov5 locates the target, and the coordinates are returned to the Pi.

### Answer 3:

The Raspberry Pi usually runs Linux, and you can support a USB camera by installing the appropriate driver and software. The following example streams video from a USB camera to the PC, uses yolov5 on the PC to locate targets, and returns the coordinates to the Pi.

On the Raspberry Pi:

```python
import socket

import cv2

# IP address and port of the PC
PC_IP = '192.168.1.100'
PC_PORT = 8888

# open the USB camera
cap = cv2.VideoCapture(0)

# set the camera resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# create the socket connection
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect((PC_IP, PC_PORT))

while True:
    # read a frame from the camera
    ret, frame = cap.read()

    # encode the frame as JPEG bytes
    _, img_encode = cv2.imencode('.jpg', frame)
    string_data = img_encode.tobytes()

    # send a 16-byte length header followed by the image data
    client_socket.sendall(str(len(string_data)).encode().ljust(16) + string_data)

    # receive the target coordinates
    response = client_socket.recv(1024).decode().strip()
    print("Target coordinates:", response)

cap.release()
client_socket.close()
```

On the PC:

```python
import socket

import cv2
import numpy as np
from yolov5.detect import Detect  # placeholder yolov5 wrapper; replace with your own

# IP address and port to listen on
PC_IP = '192.168.1.100'
PC_PORT = 8888

# load the yolov5 model
model = Detect()

# create the listening socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind((PC_IP, PC_PORT))
server_socket.listen(1)

# accept the connection from the Pi
client_socket, address = server_socket.accept()

while True:
    # read the 16-byte length header, then exactly that many image bytes;
    # capping each recv avoids swallowing the next frame's header
    length_str = client_socket.recv(16)
    total_size = int(length_str)
    img_data = b''
    while len(img_data) < total_size:
        img_data += client_socket.recv(min(4096, total_size - len(img_data)))

    # decode the JPEG bytes into an image
    nparr = np.frombuffer(img_data, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

    # detect targets with the yolov5 model
    result = model.detect_object(img)

    # extract the target bounding-box coordinates
    coordinates = []
    for obj in result:
        x, y, w, h = obj['rect']
        coordinates.append((x, y, x + w, y + h))

    # send the coordinates back to the Pi
    response = ' '.join(str(coord) for coord in coordinates)
    client_socket.send(response.encode())

client_socket.close()
server_socket.close()
```

Note that in the example above, the PC-side yolov5 code (the `Detect` class) is pseudocode; adapt and replace it to match your own setup. The Pi and the PC also need to be on the same LAN, and the Pi must be able to reach the PC's IP address and port.
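The socket code in Answer 3 relies on a simple framing protocol: a 16-byte ASCII length header, left-justified with spaces, followed by the JPEG payload. That exchange can be sketched and exercised locally with a socket pair, no camera or network required. Note that `send_msg` and `recv_msg` are illustrative names, not part of any library:

```python
import socket

def send_msg(sock, payload: bytes) -> None:
    # 16-byte ASCII length header (left-justified), then the payload itself
    sock.sendall(str(len(payload)).encode().ljust(16) + payload)

def recv_msg(sock) -> bytes:
    # read exactly 16 header bytes; recv may return fewer bytes than asked
    header = b''
    while len(header) < 16:
        chunk = sock.recv(16 - len(header))
        if not chunk:
            raise ConnectionError("socket closed mid-header")
        header += chunk
    total = int(header)  # int() tolerates the trailing padding spaces

    # read exactly `total` payload bytes; capping each recv at the bytes
    # remaining keeps us from swallowing the next frame's header
    payload = b''
    while len(payload) < total:
        chunk = sock.recv(min(4096, total - len(payload)))
        if not chunk:
            raise ConnectionError("socket closed mid-payload")
        payload += chunk
    return payload

# demo over a local socket pair (stands in for the Pi-to-PC connection)
pi_side, pc_side = socket.socketpair()
send_msg(pi_side, b"\xff\xd8 fake jpeg bytes")
print(recv_msg(pc_side))  # the exact bytes that were sent
```

Receiving in a bounded loop like this also fixes a subtle bug in the original receiver, whose unconditional `recv(4096)` could read past the current frame into the next frame's length header.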
