CARLA Learning Notes (4)

Yesterday the lab needed a progress report on the summer project, so I spent the afternoon building a map of a small town, then ran a simple vehicle scenario and collected camera and lidar data. The rest of the day went into processing the .ply point-cloud output; having no background in point-cloud processing, it took me several hours, which left no time to study the example source code. I am catching up on that today.
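(For reference, a minimal sketch of how such a .ply lidar frame can be read back into NumPy, assuming the open3d package is installed; the filename is hypothetical.)

import numpy as np
import open3d as o3d

# Read one lidar frame exported by CARLA (hypothetical filename).
pcd = o3d.io.read_point_cloud('lidar_output/000123.ply')

# Convert to an (N, 3) array of x, y, z coordinates for further processing.
points = np.asarray(pcd.points)
print(points.shape)

# Optional visual sanity check in an interactive window.
o3d.visualization.draw_geometries([pcd])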

Today I mainly studied the source code of sensor_synchronization.py from the examples, which shows how to collect camera, lidar, and radar data in synchronous mode.

#!/usr/bin/env python

# Copyright (c) 2020 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

"""
Sensor synchronization example for CARLA

The communication model for the synchronous mode in CARLA sends the snapshot
of the world and the sensors streams in parallel.
We provide this script as an example of how to synchronize the sensor
data gathering in the client.
To do this, we create a queue that is filled by every sensor when the
client receives its data, and the main loop is blocked until all the sensors
have received their data.
This assumes that all the sensors gather information at every tick. If this
is not the case, the client needs to take into account how many sensors are
going to tick at each frame.
"""

import glob
import os
import sys
from queue import Queue
from queue import Empty

try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla


# Sensor callback.
# This is where you receive the sensor data and
# process it as you like; the important part is that,
# at the end, it puts an element into the sensor queue.
def sensor_callback(sensor_data, sensor_queue, sensor_name):
    # Do stuff with the sensor_data data like save it to disk
    # Then you just need to add to the queue
    sensor_queue.put((sensor_data.frame, sensor_name))


def main():
    # We start creating the client
    client = carla.Client('localhost', 2000)
    client.set_timeout(2.0)
    world = client.get_world()

    try:
        # We need to save the settings to be able to recover them at the end
        # of the script to leave the server in the same state that we found it.
        original_settings = world.get_settings()
        settings = world.get_settings()

        # We set CARLA synchronous mode
        settings.fixed_delta_seconds = 0.2
        settings.synchronous_mode = True
        world.apply_settings(settings)

        # We create the sensor queue in which we keep track of the information
        # already received. This structure is thread safe and can be
        # accessed by all the sensors callback concurrently without problem.
        sensor_queue = Queue()

        # Blueprints for the sensors
        blueprint_library = world.get_blueprint_library()
        cam_bp = blueprint_library.find('sensor.camera.rgb')
        lidar_bp = blueprint_library.find('sensor.lidar.ray_cast')
        radar_bp = blueprint_library.find('sensor.other.radar')

        # We create all the sensors and keep them in a list for convenience.
        sensor_list = []

        cam01 = world.spawn_actor(cam_bp, carla.Transform())
        cam01.listen(lambda data: sensor_callback(data, sensor_queue, "camera01"))
        # Each time cam01 produces data, the callback passed to listen() runs once.
        """
        # do_something() will be called each time a new image is generated by the camera.
            sensor.listen(lambda data: do_something(data))

        # This collision sensor would print every time a collision is detected.
            def callback(event):
                for actor_id in event:
                    vehicle = world_ref().get_actor(actor_id)
                    print('Vehicle too close: %s' % vehicle.type_id)

            sensor02.listen(callback)
        """
        sensor_list.append(cam01)

        lidar_bp.set_attribute('points_per_second', '100000')
        # Testing shows that carla.Transform() here is exactly:
        # Transform(Location(x=0.000000, y=0.000000, z=0.000000), Rotation(pitch=0.000000, yaw=0.000000, roll=0.000000))
        # So a sensor does not necessarily have to be attached to a vehicle; it can
        # apparently be spawned directly in the world and collect data on its own.
        lidar01 = world.spawn_actor(lidar_bp, carla.Transform())
        lidar01.listen(lambda data: sensor_callback(data, sensor_queue, "lidar01"))
        sensor_list.append(lidar01)

        lidar_bp.set_attribute('points_per_second', '1000000')
        lidar02 = world.spawn_actor(lidar_bp, carla.Transform())
        lidar02.listen(lambda data: sensor_callback(data, sensor_queue, "lidar02"))
        sensor_list.append(lidar02)

        radar01 = world.spawn_actor(radar_bp, carla.Transform())
        radar01.listen(lambda data: sensor_callback(data, sensor_queue, "radar01"))
        sensor_list.append(radar01)

        radar02 = world.spawn_actor(radar_bp, carla.Transform())
        radar02.listen(lambda data: sensor_callback(data, sensor_queue, "radar02"))
        sensor_list.append(radar02)

        # Main loop
        while True:
            # Tick the server
            world.tick()
            w_frame = world.get_snapshot().frame
            print("\nWorld's frame: %d" % w_frame)

            # Now we wait for the sensor data to be received.
            # As the queue is blocking, we wait in the queue.get() method until
            # all the information is processed and then continue with the next
            # frame. We use a timeout of 1.0 s in the get method; if some data
            # is not received in that time, we move on.
            try:
                for _ in range(len(sensor_list)):
                    s_frame = sensor_queue.get(True, 1.0)
                    print("    Frame: %d   Sensor: %s" % (s_frame[0], s_frame[1]))

            except Empty:
                print("    Some of the sensor information is missed")

    finally:
        world.apply_settings(original_settings)
        for sensor in sensor_list:
            sensor.destroy()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print(' - Exited by user.')

In this source code I noticed carla.Transform(). At first I guessed it generates a random transform; it turns out everything is default-initialized to zero:

Transform(Location(x=0.000000, y=0.000000, z=0.000000), Rotation(pitch=0.000000, yaw=0.000000, roll=0.000000))
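This is easy to check directly, since carla.Transform is a plain data class that can be constructed without connecting to the server:

import carla

# A default-constructed Transform has zeroed location and rotation.
print(carla.Transform())
# Transform(Location(x=0.000000, y=0.000000, z=0.000000), Rotation(pitch=0.000000, yaw=0.000000, roll=0.000000))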

I then wondered whether a sensor could be spawned on its own, without a vehicle, and still read data. Drawing on the different example sources I studied over the past few days, I extracted the relevant parts and ran a small experiment to verify the idea:

import glob
import os
import sys
from queue import Queue
from queue import Empty
import cv2

try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla
import numpy as np

def sensor_callback(sensor_data, sensor_queue):
    array = np.frombuffer(sensor_data.raw_data, dtype=np.dtype("uint8"))
    # CARLA images arrive as BGRA; after dropping the alpha channel we get
    # a BGR array, which is exactly what cv2 expects.
    array = np.reshape(array, (sensor_data.height, sensor_data.width, 4))
    array = array[:, :, :3]
    sensor_queue.put((sensor_data.frame, array))

def main():
    
    client = carla.Client('localhost', 2000)
    client.set_timeout(2.0)
    world = client.get_world()

    original_settings = world.get_settings()
    settings = world.get_settings()
   
    ego_vehicle_bp = world.get_blueprint_library().find('vehicle.mercedes-benz.coupe')
    ego_vehicle_bp.set_attribute('color','0,0,0')
    ego_vehicle_bp.set_attribute('role_name', 'mybenz')
    # ego_vehicle = world.spawn_actor(ego_vehicle_bp, carla.Transform())

    # Guess: carla.Transform() randomly assigns a transform, but the result
    # shows it is all zeros, so it is simply a default-initialized class.
    # for i in range(100):
    #     print(carla.Transform())
    # Visual proof with the debug drawing helper:
    # test_location = carla.Location(10,10,2)
    # test_rotation = carla.Rotation(0,0,0)
    # test_transform = carla.Transform(test_location, test_rotation)
    # world.debug.draw_string(carla.Transform().location, 'O', draw_shadow=False,
    #                     color=carla.Color(r=0, g=255, b=0), life_time=500,
    #                     persistent_lines=True)
    # The 'O' appears at the exact center of the map, confirming that everything
    # is default-initialized to zero.

    # Try setting up a camera in mid-air.
    # From the official sensor docs: attachment_to and attachment_type, are crucial.
    # Sensors should be attached to a parent actor, usually a vehicle,
    # to follow it around and gather the information.
    # The attachment type will determine how its position is updated regarding said vehicle.
    # That suggested a free-floating sensor might not work, so the fallback plan
    # was to start from an actor. In fact it does work: cv2 can display the
    # camera images, although for some reason they cannot be saved to disk;
    # this needs more thought.
    test_location = carla.Location(15,15,2)
    test_rotation = carla.Rotation(0,0,0)
    test_transform = carla.Transform(test_location, test_rotation)
    world.debug.draw_string(test_location, 'O', draw_shadow=False,
                        color=carla.Color(r=0, g=255, b=0), life_time=500,
                        persistent_lines=True)
    # my_vehicle = world.spawn_actor(ego_vehicle_bp, test_transform)
    blueprint_library = world.get_blueprint_library()
    cam_bp = blueprint_library.find('sensor.camera.rgb')
    sensor_queue = Queue(maxsize=10)

    try:
        cam_location = carla.Location(10,10,10)
        cam_rotation = carla.Rotation(0,-90,0)
        cam01 = world.spawn_actor(cam_bp, carla.Transform(cam_location, cam_rotation))
        cam01.listen(lambda image: sensor_callback(image, sensor_queue))
        s_frame = sensor_queue.get(True, 1.0)
        print("Camera Frame: %d" % (s_frame[0]))

        # Show the camera stream in a pop-up window; press 'q' to quit.
        while True:
            # Pull the newest frame if available so the window keeps updating.
            try:
                s_frame = sensor_queue.get(True, 1.0)
            except Empty:
                pass
            cv2.imshow('camera', s_frame[1])
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        # output_path = '/media/hhh/75c0c2a9-f565-4a05-b2a5-5599a918e2f0/hhh/carlaLearning/PythonAPI/learning_document/'
        # cam01.listen(lambda image: image.save_to_disk(os.path.join(output_path, '%06d.png' % image.frame)))
    finally:
        cam01.stop()
        cam01.destroy()
        cv2.destroyAllWindows()

if __name__ == '__main__':

    try:
        main()
    except KeyboardInterrupt:
        pass
    finally:
        print('\ndone.')

My initial idea was to save the captured images directly to the same folder from inside the listen callback, but that turned out not to work (my current guess is that it may be related to the snapshot function). Looking at the official sensor docs, I found this sentence: attachment_to and attachment_type, are crucial. Sensors should be attached to a parent actor, usually a vehicle, to follow it around and gather the information.
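For completeness, this is what attaching the camera to a parent vehicle looks like; a minimal sketch, where the spawn point and the 2.4 m mounting offset are arbitrary choices and carla.AttachmentType.Rigid is the default attachment type:

import carla

client = carla.Client('localhost', 2000)
client.set_timeout(2.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn a vehicle to act as the parent actor.
vehicle_bp = blueprint_library.filter('vehicle.*')[0]
my_vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])

# The camera transform is now relative to the parent, so the sensor
# follows the vehicle around as it moves.
cam_bp = blueprint_library.find('sensor.camera.rgb')
cam_transform = carla.Transform(carla.Location(x=0.8, z=2.4))
cam01 = world.spawn_actor(cam_bp, cam_transform,
                          attach_to=my_vehicle,
                          attachment_type=carla.AttachmentType.Rigid)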

Thinking the idea could not work, I switched to a different verification method: I imported the cv2 package and used cv2.imshow() to check whether the camera image could be displayed. It displayed successfully, which shows that a sensor can indeed be set up independently of any vehicle. This lays the experimental groundwork for collecting data at road intersections later on.
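As for why save_to_disk produced no files, one unverified guess is that the sensor was destroyed before the asynchronous callback had time to write anything. Under that assumption, a sketch that simply waits before cleaning up (cam01 as spawned in the script above; output_path is a stand-in directory):

import os
import time

output_path = './camera_output'  # stand-in for the real output directory
os.makedirs(output_path, exist_ok=True)

# save_to_disk runs inside the listen callback, once per frame.
cam01.listen(lambda image: image.save_to_disk(
    os.path.join(output_path, '%06d.png' % image.frame)))

# Give the callbacks time to fire and finish writing before cleanup.
time.sleep(5.0)
cam01.stop()
cam01.destroy()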

PS: if you spawn a sensor and never destroy it, the following warning is reported:

WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 172 (sensor.camera.rgb)

Consider it a small easter egg.
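The warning goes away with the usual cleanup pattern: stop the stream first, then destroy the actor. For several sensors at once, the batch command from the API also works:

# Single sensor: stop the data stream, then remove the actor.
cam01.stop()
cam01.destroy()

# Several sensors at once, e.g. the sensor_list from the synchronization script:
client.apply_batch([carla.command.DestroyActor(s) for s in sensor_list])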
