Using the ZED2 Camera with an NVIDIA NX and Object Detection (1)

Using the ZED Camera

The zed ros wrapper is split into three parts:

  1. zed-ros-wrapper: the main package, providing the ZED ROS Wrapper node
  2. zed-ros-interfaces: the package declaring the custom topics, services, and actions
  3. zed-ros-examples: a support package containing examples and tutorials on how to use the ZED ROS Wrapper

Starting the ZED node:

  1. ZED camera: roslaunch zed_wrapper zed.launch
  2. ZED Mini camera: roslaunch zed_wrapper zedm.launch
  3. ZED 2 camera: roslaunch zed_wrapper zed2.launch
  4. ZED 2i camera: roslaunch zed_wrapper zed2i.launch

Setting camera parameters: edit the files param/common.yaml (parameters shared by all camera models) and param/zed2.yaml (ZED 2-specific parameters).
Parameter tuning: https://www.stereolabs.com/docs/ros/zed-node/

Visualizing the ZED 2 in RVIZ: roslaunch zed_display_rviz display_zed2.launch
Configuring RVIZ: https://www.stereolabs.com/docs/ros/rviz/
Displaying other ZED data: https://www.stereolabs.com/docs/ros/
Notes:

1. rgb/image_rect_color: color rectified image (the left sensor by default)
2. zed/zed_node/point_cloud/cloud_registered: 3D colored point cloud
3. mapping/fused_cloud: fused color point cloud, published only when mapping is enabled
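
To sanity-check that these topics are arriving, a minimal rospy subscriber can log each incoming rectified color image (a sketch; the topic name assumes the default zed2 namespace used by zed2.launch):

```python
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # Log the resolution and encoding of the rectified color image
    rospy.loginfo("image %dx%d (%s)", msg.width, msg.height, msg.encoding)

rospy.init_node('zed_image_listener')
# Default topic when started via zed2.launch; adjust the namespace if needed
rospy.Subscriber('/zed2/zed_node/rgb/image_rect_color', Image, on_image)
rospy.spin()
```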

Object Detection

1. Object detection parameters: https://www.stereolabs.com/docs/ros/zed-node/
   Object detection classes and other API reference: https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1OBJECT__SUBCLASS.html

   Detectable classes: person, vehicle, bag, animal, electronics, fruit and vegetable.

   model: the object detection module to use
     '0': MULTI_CLASS_BOX
     '1': MULTI_CLASS_BOX_ACCURATE
     '2': HUMAN_BODY_FAST
     '3': HUMAN_BODY_ACCURATE
     '4': MULTI_CLASS_BOX_MEDIUM
     '5': HUMAN_BODY_MEDIUM

   object_tracking_enabled: enable/disable tracking of detected objects (true/false)
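
For reference, here is a minimal sketch of making the same choices through the ZED SDK Python API directly (this assumes ZED SDK 3.x, whose pyzed reference is linked above; in that API, model '0' corresponds to sl.DETECTION_MODEL.MULTI_CLASS_BOX):

```python
from pyzed import sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.PERFORMANCE
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open camera")

# Positional tracking must run before object tracking can be enabled
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

obj_params = sl.ObjectDetectionParameters()
obj_params.detection_model = sl.DETECTION_MODEL.MULTI_CLASS_BOX  # model '0' above
obj_params.enable_tracking = True                                # object_tracking_enabled
zed.enable_object_detection(obj_params)

# Retrieve one batch of detections
objects = sl.Objects()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_objects(objects, sl.ObjectDetectionRuntimeParameters())
    for obj in objects.object_list:
        print(obj.label, obj.position)
```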

2. Detected-object messages: https://www.stereolabs.com/docs/ros/object-detection/

The zed_interfaces/ObjectsStamped message is defined as:

#Standard Header
std_msgs/Header header

#Array of `object_stamped` topics
zed_interfaces/Object[] objects

zed_interfaces/Object is defined as:

# Object label
string label

# Object label ID
int16 label_id

# Object sublabel
string sublabel

# Object confidence level (1-99)
float32 confidence

# Object centroid position
float32[3] position

# Position covariance
float32[6] position_covariance

# Object velocity
float32[3] velocity

# Tracking available
bool tracking_available

# Tracking state
# 0 -> OFF (object not valid)
# 1 -> OK
# 2 -> SEARCHING (occlusion occurred, trajectory is estimated)
int8 tracking_state

# Action state
# 0 -> IDLE
# 2 -> MOVING
int8 action_state

# 2D Bounding box projected to Camera image
zed_interfaces/BoundingBox2Di bounding_box_2d

# 3D Bounding box in world frame
zed_interfaces/BoundingBox3D bounding_box_3d

# 3D dimensions (width, height, length)
float32[3] dimensions_3d

# Is skeleton available?
bool skeleton_available

# 2D Bounding box projected to Camera image of the person head
zed_interfaces/BoundingBox2Df head_bounding_box_2d

# 3D Bounding box in world frame of the person head
zed_interfaces/BoundingBox3D head_bounding_box_3d

# 3D position of the centroid of the person head
float32[3] head_position

# 2D Person skeleton projected to Camera image
zed_interfaces/Skeleton2D skeleton_2d

# 3D Person skeleton in world frame
zed_interfaces/Skeleton3D skeleton_3d

Sub-message definitions:

zed_interfaces/BoundingBox2Df:

#      0 ------- 1
#      |         |
#      |         |
#      |         |
#      3 ------- 2
zed_interfaces/Keypoint2Df[4] corners
zed_interfaces/BoundingBox2Di:

#      0 ------- 1
#      |         |
#      |         |
#      |         |
#      3 ------- 2
zed_interfaces/Keypoint2Di[4] corners
zed_interfaces/BoundingBox3D:

#      1 ------- 2
#     /.        /|
#    0 ------- 3 |
#    | .       | |
#    | 5.......| 6
#    |.        |/
#    4 ------- 7
zed_interfaces/Keypoint3D[8] corners
zed_interfaces/Keypoint2Df:

float32[2] kp
zed_interfaces/Keypoint2Di:

uint32[2] kp
zed_interfaces/Keypoint3D:

float32[3] kp
zed_interfaces/Skeleton2D:

# Skeleton joints indices
#        16-14   15-17
#             \ /
#              0
#              |
#       2------1------5
#       |    |   |    |
#       |    |   |    |
#       3    |   |    6
#       |    |   |    |
#       |    |   |    |
#       4    8   11   7
#            |   |
#            |   |
#            |   |
#            9   12
#            |   |
#            |   |
#            |   |
#           10   13
zed_interfaces/Keypoint2Df[18] keypoints
zed_interfaces/Skeleton3D:

# Skeleton joints indices
#        16-14   15-17
#             \ /
#              0
#              |
#       2------1------5
#       |    |   |    |
#       |    |   |    |
#       3    |   |    6
#       |    |   |    |
#       |    |   |    |
#       4    8   11   7
#            |   |
#            |   |
#            |   |
#            9   12
#            |   |
#            |   |
#            |   |
#           10   13
zed_interfaces/Keypoint3D[18] keypoints
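
To consume these messages from your own node, a minimal rospy listener might look like the following sketch (the topic name /zed2/zed_node/obj_det/objects assumes the default zed2 launch configuration; adjust the namespace to match yours):

```python
import rospy
from zed_interfaces.msg import ObjectsStamped

TRACKING_STATES = {0: 'OFF', 1: 'OK', 2: 'SEARCHING'}

def on_objects(msg):
    # Print label, confidence, centroid position, and tracking state per object
    for obj in msg.objects:
        x, y, z = obj.position
        rospy.loginfo("%s (%.0f%%) at [%.2f, %.2f, %.2f] m, tracking=%s",
                      obj.label, obj.confidence, x, y, z,
                      TRACKING_STATES.get(obj.tracking_state, 'UNKNOWN'))

rospy.init_node('zed_objects_listener')
rospy.Subscriber('/zed2/zed_node/obj_det/objects', ObjectsStamped, on_objects)
rospy.spin()
```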

3. Using nodelets: https://www.stereolabs.com/docs/ros/zed-nodelets/

4. Reading IMU data, device pose and path, and other sensor information: https://www.stereolabs.com/docs/ros/sensor-data/ and https://www.stereolabs.com/docs/ros/zed-node/

5. Detecting people and other objects

  1. Parameter changes: common.yaml and zed2i.yaml; in zed2i.yaml, set the detection target and related options via the model entry
  2. The ZED SDK detects all objects present in the image and computes their 3D position and velocity. An object's distance from the camera is expressed in metric units (e.g. meters) and is measured from the back of the camera's left eye to the scene object (https://www.stereolabs.com/docs/object-detection/); see the distance sketch after this list.
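
A trivial sketch of recovering that distance from the position field of a zed_interfaces/Object message:

```python
import math

def object_distance(position):
    # Straight-line distance (in meters) from the camera to the object centroid
    x, y, z = position
    return math.sqrt(x * x + y * y + z * z)
```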

6. Recording data with rosbag and displaying it with PlotJuggler

When installing PlotJuggler, use sudo apt install ros-melodic-plotjuggler-ros rather than sudo apt install ros-melodic-plotjuggler; only the former can subscribe to live topics and display them in real time, otherwise you can only read back bag files.
Additional resources

https://facontidavide.github.io/PlotJuggler/visualization_howto/index.html

rosbag

http://wiki.ros.org/rosbag/Commandline
https://guyuehome.com/35777
https://guyuehome.com/34588
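
For offline post-processing, a recorded bag can also be iterated from Python with the rosbag API (a sketch; the bag name and topic below are placeholders for your own recording):

```python
import rosbag

with rosbag.Bag('objects.bag') as bag:  # 'objects.bag' is a placeholder
    for topic, msg, t in bag.read_messages(topics=['/zed2/zed_node/obj_det/objects']):
        for obj in msg.objects:
            print(t.to_sec(), obj.label, list(obj.position))
```
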
### Using the ZED2i Depth Camera for YOLOv8 Object Detection

#### Environment setup

For the ZED2i and YOLOv8 to work together, the required packages must be installed and a development environment set up.

1. **Install the ZED SDK**
   Installing the latest ZED SDK for Linux or Windows is essential. Download the SDK for your operating system from the Stereolabs website.

2. **Install PyTorch**
   YOLOv8 is built on PyTorch, so make sure a suitable Python version and the matching PyTorch build are installed. Consult the official documentation for the configuration best suited to your hardware.

3. **Clone the YOLOv8 repository**
   Get the Ultralytics YOLOv8 project source from GitHub and deploy it locally following the instructions. Usually a simple git clone is enough to obtain the latest model files and training scripts.

```bash
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics/
pip install -r requirements.txt
```

#### Implementation

##### Loading the camera stream

Read image frames from the ZED2i through the pyzed interface and feed them to YOLOv8 for inference:

```python
from pyzed import sl

def load_zed_camera():
    zed = sl.Camera()
    init_params = sl.InitParameters()
    init_params.camera_resolution = sl.RESOLUTION.HD720
    init_params.depth_mode = sl.DEPTH_MODE.PERFORMANCE
    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError("Failed to open camera")
    return zed
```

##### Running detection

Analyze every video frame in real time with a preloaded YOLOv8 model:

```python
import cv2
from pyzed import sl
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for best performance)

zed = load_zed_camera()
image = sl.Mat()
runtime_params = sl.RuntimeParameters()

while True:
    if zed.grab(runtime_params) != sl.ERROR_CODE.SUCCESS:
        continue
    zed.retrieve_image(image, sl.VIEW.LEFT)                     # grab the left color image
    frame = cv2.cvtColor(image.get_data(), cv2.COLOR_BGRA2BGR)  # ZED frames are BGRA
    results = model(frame)                                      # run inference on the frame
    annotated_frame = results[0].plot()                         # draw detections on the image
    cv2.imshow("Detection", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```

The snippets above show the workflow for combining frames captured by the ZED2i with the YOLOv8 detection algorithm.