1 In the catkin_ws workspace, create a package, here named sensor_vehicle. The package needs no dependencies, because we will copy the official vehicle-spawning package into it directly.
~$ cd catkin_ws/src/ros-bridge && catkin_create_pkg sensor_vehicle
2 Now copy everything in the official carla_ego_vehicle package into the newly created sensor_vehicle package. The official package is structured as follows:
carla_ego_vehicle
├── config
│   └── sensors.json
├── launch
│   ├── carla_example_ego_vehicle.launch
│   └── carla_ego_vehicle.launch
├── src
│   └── carla_ego_vehicle
│       ├── carla_ego_vehicle.py
│       └── __init__.py
├── CMakeLists.txt
├── package.xml
├── README.md
└── setup.py
Here, sensors.json defines the sensors mounted on the vehicle; edit this file to attach the sensors you want.
Both .launch files start the same node, which spawns the vehicle and its sensors; carla_example_ego_vehicle.launch is all we need here.
carla_ego_vehicle.py implements the node that spawns the car and the sensors.
CMakeLists.txt and package.xml are the configuration files of the carla_ego_vehicle package; see the ROS tutorials for details.
3 Since these files were copied, rename things so they can be told apart from the official ones: change the car's name in the launch file, change the node name and car name in the node script, and rename the files themselves. Finally, update the package name in CMakeLists.txt and package.xml to match, otherwise ROS will not find the node later.
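The renames in this step are mechanical, so they can be scripted. A minimal sketch (the file names and the old/new package names are assumptions based on the copied package; review the result before building):

```python
import re
from pathlib import Path

OLD, NEW = "carla_ego_vehicle", "sensor_vehicle"

def rename_in_text(text, old=OLD, new=NEW):
    """Replace the old package name wherever it appears as a whole word."""
    return re.sub(r"\b%s\b" % re.escape(old), new, text)

def rename_package(pkg_dir):
    """Rewrite the metadata files of a copied package in place."""
    for name in ("package.xml", "CMakeLists.txt", "setup.py"):
        path = Path(pkg_dir) / name
        if path.exists():
            path.write_text(rename_in_text(path.read_text()))
```

Renaming the files and directories themselves (and the node name inside the launch file) still has to be done by hand or with a similar loop.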
4 Go back to the catkin_ws workspace and build the package:
~$ cd catkin_ws && catkin_make
5 With the package built, try running it:
~$ source /opt/carla-ros-bridge/melodic/setup.bash && source catkin_ws/devel/setup.bash
~$ roslaunch sensor_vehicle start.launch
This run fails with "Timeout while waiting for world info!". The reason is that we forgot to include the carla-ros-bridge launch file, which establishes the connection between ROS and the simulator. (Rewriting sensor_vehicle to connect to the world itself would also work, but since we will rely on carla-ros-bridge to collect sensor data later anyway, we simply reuse the official launch file: copy it over and make the parameters consistent.) Be sure to add the car's name to carla-ros-bridge/config/settings.yaml, otherwise the car will not be found. The result:
(Screenshot of the successful launch; the original image link is broken.)
If you don't want to drive the car manually, you can skip the manual-control launch file.
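For reference, the bridge registers ego vehicles by their role_name in settings.yaml. A fragment of what the edit looks like (the exact key layout varies between bridge versions, and vehicle0 is the name used in this walkthrough; check your own config/settings.yaml):

```yaml
# carla-ros-bridge/config/settings.yaml (fragment)
carla:
  host: localhost
  port: 2000
  ego_vehicle:
    # the bridge only treats vehicles whose role_name appears here as egos
    role_name: ["hero", "ego_vehicle", "vehicle0"]
```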
6 Now let's see which topics the car publishes and subscribes to:
~$ rostopic list -v
Published topics:
* /carla/vehicle0/gnss/gnss1/fix [sensor_msgs/NavSatFix] 1 publisher
* /carla/vehicle0/camera/rgb/view/camera_info [sensor_msgs/CameraInfo] 1 publisher
* /tf [tf2_msgs/TFMessage] 1 publisher
* /carla/objects [derived_object_msgs/ObjectArray] 1 publisher
* /carla/marker [visualization_msgs/Marker] 1 publisher
* /carla/actor_list [carla_msgs/CarlaActorList] 1 publisher
* /carla/vehicle0/camera/rgb/front/camera_info [sensor_msgs/CameraInfo] 1 publisher
* /carla/vehicle0/odometry [nav_msgs/Odometry] 1 publisher
* /carla/vehicle0/vehicle_status [carla_msgs/CarlaEgoVehicleStatus] 1 publisher
* /carla/traffic_lights_info [carla_msgs/CarlaTrafficLightInfoList] 1 publisher
* /carla/vehicle0/camera/rgb/view/image_color [sensor_msgs/Image] 1 publisher
* /carla/vehicle0/camera/rgb/front/image_color [sensor_msgs/Image] 1 publisher
* /rosout [rosgraph_msgs/Log] 4 publishers
* /carla/vehicle0/imu/imu1 [sensor_msgs/Imu] 1 publisher
* /carla/vehicle0/vehicle_info [carla_msgs/CarlaEgoVehicleInfo] 1 publisher
* /rosout_agg [rosgraph_msgs/Log] 1 publisher
* /carla/status [carla_msgs/CarlaStatus] 1 publisher
* /carla/vehicle0/objects [derived_object_msgs/ObjectArray] 1 publisher
* /carla/traffic_lights [carla_msgs/CarlaTrafficLightStatusList] 1 publisher
* /clock [rosgraph_msgs/Clock] 1 publisher
* /carla/world_info [carla_msgs/CarlaWorldInfo] 1 publisher
* /carla/vehicle0/radar/front/radar [carla_msgs/CarlaRadarMeasurement] 1 publisher
* /carla/vehicle0/lidar/lidar1/point_cloud [sensor_msgs/PointCloud2] 1 publisher
Subscribed topics:
* /carla/vehicle0/initialpose [geometry_msgs/PoseWithCovarianceStamped] 1 subscriber
* /carla/vehicle0/enable_autopilot [std_msgs/Bool] 1 subscriber
* /move_base_simple/goal [unknown type] 1 subscriber
* /carla/vehicle0/twist_cmd [geometry_msgs/Twist] 1 subscriber
* /carla/vehicle0/vehicle_control_cmd [carla_msgs/CarlaEgoVehicleControl] 1 subscriber
* /rosout [rosgraph_msgs/Log] 1 subscriber
* /carla/vehicle0/vehicle_control_cmd_manual [carla_msgs/CarlaEgoVehicleControl] 1 subscriber
* /clock [rosgraph_msgs/Clock] 5 subscribers
* /initialpose [unknown type] 1 subscriber
* /carla/debug_marker [visualization_msgs/MarkerArray] 1 subscriber
* /carla/weather_control [carla_msgs/CarlaWeatherParameters] 1 subscriber
* /carla/vehicle0/vehicle_control_manual_override [std_msgs/Bool] 1 subscriber
As you can see, this includes RGB camera, LIDAR, RADAR and other sensor data.
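The per-vehicle topics above all follow the same naming pattern, /carla/<role_name>/<sensor group>/<sensor id>/<data>, which makes them easy to group with plain string handling (no ROS needed; a small illustrative helper, not part of the bridge):

```python
def parse_carla_topic(topic):
    """Split a per-vehicle CARLA topic into (role_name, sensor path).

    Global topics such as /carla/world_info or /tf return None.
    """
    parts = topic.strip("/").split("/")
    if len(parts) < 3 or parts[0] != "carla":
        return None
    return parts[1], "/".join(parts[2:])

print(parse_carla_topic("/carla/vehicle0/lidar/lidar1/point_cloud"))
# -> ('vehicle0', 'lidar/lidar1/point_cloud')
```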
7 Now add the sensors we want. First, a semantic segmentation camera.
Add to sensors.json:
{
    "type": "sensor.camera.semantic_segmentation",
    "id": "semantic_camera",
    "x": 2.0, "y": 0.0, "z": 2.0, "roll": 0.0, "pitch": 0.0, "yaw": 0.0,
    "width": 800,
    "height": 600,
    "fov": 90.0,
    "sensor_tick": 0.05,
    "lens_circle_falloff": 5.0,
    "lens_circle_multiplier": 0.0,
    "lens_k": -1.0,
    "lens_kcube": 0.0,
    "lens_x_size": 0.08,
    "lens_y_size": 0.08
}
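A hand-edited sensors.json with a stray comma or a missing key fails only at spawn time, so a quick stdlib check is worth running first. A sketch, assuming the file keeps the top-level "sensors" list used by the original package (the REQUIRED_KEYS set is an illustrative minimum, not the bridge's official schema):

```python
import json

REQUIRED_KEYS = {"type", "id", "x", "y", "z"}

def check_sensor_specs(json_text):
    """Parse a sensors.json string and check each entry for basic keys."""
    specs = json.loads(json_text)["sensors"]
    for spec in specs:
        missing = REQUIRED_KEYS - spec.keys()
        if missing:
            raise ValueError("sensor %r is missing keys: %s"
                             % (spec.get("id"), sorted(missing)))
    return specs
```

Run it on the file contents after every edit; json.loads also catches syntax errors such as trailing commas.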
Add to sensor_vehicle.py:
if sensor_spec['type'].startswith('sensor.camera.semantic_segmentation'):
    # same lens attributes as the existing camera branch
    bp.set_attribute('lens_circle_falloff',
                     str(sensor_spec['lens_circle_falloff']))
    bp.set_attribute('lens_circle_multiplier',
                     str(sensor_spec['lens_circle_multiplier']))
    bp.set_attribute('lens_k', str(sensor_spec['lens_k']))
    bp.set_attribute('lens_kcube', str(sensor_spec['lens_kcube']))
    bp.set_attribute('lens_x_size', str(sensor_spec['lens_x_size']))
    bp.set_attribute('lens_y_size', str(sensor_spec['lens_y_size']))
Checking the published topics again, two new entries appear:
/carla/vehicle0/camera/semantic_segmentation/semantic_camera/image_segmentation
/carla/vehicle0/camera/semantic_segmentation/semantic_camera/camera_info
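The image on the image_segmentation topic encodes class tags rather than colors: CARLA stores the tag in the red channel of each pixel. A minimal decoder, assuming the bridge delivers the frame as raw BGRA8 bytes (verify against the encoding field of the received Image message):

```python
def tags_from_bgra(data):
    """Extract per-pixel semantic tags from raw BGRA8 image bytes.

    CARLA stores the class tag in the red channel (third byte of each
    B, G, R, A quadruple); the other channels only become meaningful
    once a color palette is applied.
    """
    return [data[i + 2] for i in range(0, len(data), 4)]

# two pixels: tag 7 (Road) and tag 10 (Vehicles) in CARLA 0.9.x
raw = bytes([0, 0, 7, 255, 0, 0, 10, 255])
print(tags_from_bgra(raw))  # -> [7, 10]
```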
Images extracted from this topic show the per-pixel segmentation result.
Following the same approach, add a semantic segmentation LIDAR.
Add to sensors.json:
{
    "type": "sensor.lidar.ray_cast_semantic",
    "id": "semantic_lidar",
    "x": 0.0, "y": 0.0, "z": 2.4, "roll": 0.0, "pitch": 0.0, "yaw": 0.0,
    "range": 50,
    "channels": 32,
    "points_per_second": 320000,
    "upper_fov": 2.0,
    "lower_fov": -26.8,
    "rotation_frequency": 20,
    "sensor_tick": 0.05
}
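A quick sanity check on those numbers: at 20 rotations per second, 320000 points/s gives 16000 points per rotation, i.e. 500 per channel, or one return roughly every 0.72° of azimuth:

```python
# derived figures for the semantic LIDAR spec above
points_per_second = 320000
rotation_frequency = 20  # Hz
channels = 32

points_per_rotation = points_per_second // rotation_frequency
points_per_channel = points_per_rotation // channels
horizontal_resolution = 360.0 / points_per_channel  # degrees between returns

print(points_per_rotation, points_per_channel, horizontal_resolution)
# -> 16000 500 0.72
```

Tuning points_per_second or rotation_frequency trades angular resolution against scan rate in exactly this ratio.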
sensor_vehicle.py needs no changes, because the semantic LIDAR takes the same attributes as the regular LIDAR.
The published topics now additionally include:
/carla/vehicle0/lidar/semantic_lidar/point_cloud
You can also visualize both the regular and the semantic point clouds here (e.g. in RViz).