Object Detection
Errors you may encounter when running ZED object detection
Following detect.py, rename scale_coords in detector.py to scale_boxes (the function was renamed in newer YOLOv5 releases).
If the following error appears:
TypeError: attempt_load() got an unexpected keyword argument 'map_location'
Cause: possibly the Python version (I was using Python 3.6.9; the official site recommends Python 3.7).
Delete every "map_location=" argument that appears in detector.py.
Then run the following command:
python3 detector.py --weights yolov5m.pt
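If you want detector.py to work with both old and new attempt_load() signatures (newer YOLOv5 releases renamed the map_location keyword to device), a small compatibility wrapper is one option instead of deleting the argument. This is a hedged sketch; load_weights is a hypothetical helper, not part of YOLOv5:

```python
# Hypothetical helper: call attempt_load() with whichever keyword the
# installed YOLOv5 version accepts. `loader` is YOLOv5's attempt_load.
def load_weights(loader, weights, device):
    try:
        # Older YOLOv5 releases used the `map_location` keyword
        return loader(weights, map_location=device)
    except TypeError:
        # Newer releases renamed it to `device`
        return loader(weights, device=device)
```

In detector.py you would then call load_weights(attempt_load, weights, device) in place of the direct attempt_load(...) call.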
You can also modify detector.py so that it outputs the position, velocity, etc. of each detected object and writes them to a CSV file.
Main references: Using the Object Detection API with a Custom Detector | Stereolabs, and the positional_tracking.py sample from the official site.
Inside the detection loop, after the detected objects are retrieved, add the following code:
# ------- Print object information ------- #
import csv  # better placed at the top of detector.py

for obj in objects.object_list:
    print("Id:{}\n 3D position:{}\n Velocity:{}\n".format(
        obj.id, -obj.position, obj.velocity))
    if obj.tracking_state == sl.OBJECT_TRACKING_STATE.OK:
        print("Object {0} is tracked\n".format(obj.id))

    # -------------- Write to CSV -------------- #
    position = -obj.position
    velocity = obj.velocity
    data = [[str(obj.id),
             str(position[0]), str(position[1]), str(position[2]),
             str(velocity[0]), str(velocity[1]), str(velocity[2])]]
    with open('WeiZhi.csv', 'a', encoding='utf-8', newline='') as fp:
        writer = csv.writer(fp)  # append one row per detected object
        writer.writerows(data)
Then run the following command:
python detector.py --weights yolov5m.pt # [--img_size 512 --conf_thres 0.1 --svo path/to/file.svo]
After running, the terminal prints the Id, position, and velocity of each object, and a WeiZhi.csv file is generated.
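To sanity-check the generated file, you can read WeiZhi.csv back with the csv module. A minimal sketch, assuming the seven columns written above (id, x, y, z, vx, vy, vz); read_log is a hypothetical helper name:

```python
import csv

def read_log(path="WeiZhi.csv"):
    """Parse each logged row into an (id, position, velocity) tuple."""
    rows = []
    with open(path, newline="", encoding="utf-8") as fp:
        for row in csv.reader(fp):
            obj_id = row[0]
            position = tuple(float(v) for v in row[1:4])  # x, y, z in meters
            velocity = tuple(float(v) for v in row[4:7])  # vx, vy, vz
            rows.append((obj_id, position, velocity))
    return rows
```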
Object Detection (ROS version)
References: Adding Object Detection in ROS | Stereolabs, and 双目立体视觉(4)- ZED2双目视觉开发理论与实践 with examples 0.1 object detection.
First, install the required packages; for the installation, see: installing zed-ros-wrapper and resolving some errors.
First, configure the ZED2i YAML file. Modify it as follows:
# params/zed2i.yaml
# Parameters for Stereolabs ZED2 camera
---
general:
    camera_model: 'zed2i'

# This section adjusts the camera's depth range
depth:
    min_depth: 0.3  # Min: 0.2, Max: 3.0 - Default 0.7 - Note: reducing this value will require more computational power and GPU memory
    max_depth: 30   # Max: 40.0

pos_tracking:
    imu_fusion: true  # enable/disable IMU fusion. When set to false, only the optical odometry will be used.

sensors:
    sensors_timestamp_sync: false  # Synchronize Sensors messages timestamp with latest received frame
    max_pub_rate: 200.  # max frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
    publish_imu_tf: true  # publish `IMU -> <cam_name>_left_camera_frame` TF

object_detection:
    od_enabled: true  # True to enable Object Detection [not available for ZED]
    # The object detection module can be configured to use one of several detection models, listed below
    model: 0  # '0': MULTI_CLASS_BOX - '1': MULTI_CLASS_BOX_ACCURATE - '2': HUMAN_BODY_FAST - '3': HUMAN_BODY_ACCURATE - '4': MULTI_CLASS_BOX_MEDIUM - '5': HUMAN_BODY_MEDIUM - '6': PERSON_HEAD_BOX
    confidence_threshold: 50  # Minimum value of the detection confidence of an object [0,100]
    max_range: 15.  # Maximum detection range
    # Change object_tracking_enabled from false to true to track the detected objects
    object_tracking_enabled: true  # Enable/disable the tracking of the detected objects
    body_fitting: false  # Enable/disable body fitting for 'HUMAN_BODY_X' models
    mc_people: true  # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle: true  # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag: true  # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal: true  # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics: true  # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable: true  # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport: true  # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models
Run the following command; you can then adjust settings and view the result in rviz:
roslaunch zed_display_rviz display_zed2i.launch
Run zed2i.launch:
roslaunch zed_wrapper zed2i.launch
Run the subscriber node with the following command:
rosrun zed_obj_det_sub_tutorial zed_obj_det_sub objects:=/zed2i/zed_node/obj_det/objects
Recognition result: person count [1], position pos [x, y, z] in meters, confidence: [ ]
The "Tracking state" value can be decoded as:
- 0 -> OFF (the object is not valid)
- 1 -> OK (the object is tracked)
- 2 -> SEARCHING (the trajectory is being estimated)
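The same subscriber can also be written in Python. The sketch below assumes the zed_interfaces/ObjectsStamped message published by zed-ros-wrapper (with per-object fields label, label_id, position, confidence); format_object and run_node are hypothetical names, and run_node only works inside a sourced ROS workspace:

```python
def format_object(obj):
    # Render one detection roughly like the tutorial node's output line
    return "{} [{}] pos [{:.2f}, {:.2f}, {:.2f}] m, confidence: {:.1f}".format(
        obj.label, obj.label_id,
        obj.position[0], obj.position[1], obj.position[2],
        obj.confidence)

def objects_callback(msg):
    for obj in msg.objects:
        print(format_object(obj))

def run_node(topic="/zed2i/zed_node/obj_det/objects"):
    # Requires a running ROS master and the zed-ros-wrapper node
    import rospy
    from zed_interfaces.msg import ObjectsStamped
    rospy.init_node("obj_det_listener")
    rospy.Subscriber(topic, ObjectsStamped, objects_callback)
    rospy.spin()
```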
Note: if the above output does not appear, the node is probably not subscribed to the right topic. Open the zed_obj_det_sub_tutorial.cpp file under /catkin_ws/src/zed-ros-examples/tutorials/zed_obj_det_sub_tutorial/src and change the subscribed topic there.
Then go back to the catkin_ws directory and rebuild with catkin_make (changing the yaml file does not require rebuilding).
Tutorial for displaying only the depth:
rosrun zed_depth_sub_tutorial zed_depth_sub
You need to change the topic in the following file: /catkin_ws/src/zed-ros-examples/tutorials/zed_depth_sub_tutorial/src/zed_depth_sub_tutorial.cpp
ros::Subscriber subDepth = n.subscribe("/zed2i/zed_node/depth/depth_registered", 10, depthCallback);
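In the depth callback, the image on /zed2i/zed_node/depth/depth_registered arrives as a sensor_msgs/Image with 32FC1 encoding, so each pixel is a 4-byte float giving the depth in meters. A hedged sketch of reading the center-pixel depth straight from the raw data buffer (center_depth is a hypothetical helper):

```python
import struct

def center_depth(width, height, data, big_endian=False):
    """Depth in meters at the image center of a 32FC1 sensor_msgs/Image."""
    fmt = ">f" if big_endian else "<f"  # matches the message's is_bigendian flag
    offset = ((height // 2) * width + width // 2) * 4  # 4 bytes per float32
    return struct.unpack_from(fmt, data, offset)[0]
```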
You can also start the disparity viewer to visualize the disparity image published on its topic:
rosrun image_view disparity_view image:=/zed2i/zed_node/disparity/disparity_image