Copyright notice: This is an original article by the author, released under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please include a link to the original source and this notice when reposting.
Original post: AirSim Study Notes and Pitfalls (updated irregularly), Duge1024's blog, CSDN
1. Official AirSim Introduction
Github: https://github.com/microsoft/AirSim
Full documentation: Home - AirSim
Paper: https://arxiv.org/abs/1705.05065
AirSim is a simulator for drones, cars and more, built on Unreal Engine (we now also have an experimental Unity release). It is open-source, cross platform, and supports software-in-the-loop simulation with popular flight controllers such as PX4 & ArduPilot and hardware-in-loop with PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment. Similarly, we have an experimental release for a Unity plugin.
Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform independent way.


2. Installing and Using AirSim
There are already plenty of tutorials online; I recommend the Zhihu column airsim & unreal 仿真平台 - 知乎. As of 2021/7/8 its author has published 13 installments and is still updating the series.
3. Problems and Solutions
3.1 How do I use the City map that AirSim provides?
1) Screenshot of the City scene:
2) Download from https://github.com/microsoft/AirSim/releases and look for the Assets matching your OS and version, as shown below:
3) Download City.zip.001 and City.zip.002 and extract them, as shown below:
4) Double-click CityEnviron.exe to launch the simulation environment. The simulation mode and other options can be adjusted as needed in the settings.json file (path: C:\Users\<username>\Documents\AirSim\settings.json), as shown below:
5) Likewise, you can try the other packaged environments; the official releases also include indoor (Building_99.zip), coastline (Coastline.zip.001-002), mountain (LandscapeMountains.zip), and soccer-field (Soccer_Field.zip) scenes, among others. For now these scenes cannot be re-edited in UE4 (if anyone knows how, please leave a comment).
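Step 4 above edits settings.json; for reference, a minimal multirotor configuration might look like the sketch below. The field values here are illustrative defaults, not taken from this article; see the official AirSim settings documentation for the full schema.

```json
{
  "SettingsVersion": 1.2,
  "SimMode": "Multirotor",
  "ViewMode": "FlyWithMe",
  "ClockSpeed": 1.0
}
```

Setting "SimMode" to "Car" instead switches the same environment to car simulation.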
3.2 Running a demo from the official PythonClient raises TypeError: unsupported operand type(s) for *: 'AsyncIOLoop' and 'float'
The traceback shows the failure comes from the msgpackrpc call. Solution: reinstall the packages:
pip install msgpack-rpc-python
pip install airsim
3.3 Launching Unreal Engine reports Error: CDO Constructor (PIPCamera): Failed to find Material '/AirSim/HUDAssets/CameraDistortion.CameraDistortion'
I was on version 4.24.3; the solution suggested in the issue is to upgrade UE4 to the latest version (4.25).
3.4 Record cannot save uncompressed images
3.5 The Landscape Mountains environment has no power lines (needed for UAV reinforcement learning)
The official PythonClient includes a reinforcement learning demo in which the drone's task is to follow the power lines as far as possible, but the map shipped with v1.3.1 - Windows contains no power lines.
Solution: use the Landscape Mountains environment from the v1.2 release, keeping in mind that APIs added after v1.2 will not be callable there.
3.6 Getting depth images and generating a point cloud
Depth images come in three formats:
1) DepthVis interpolates each pixel value from black to white: a pure white pixel means a depth of 100 meters or more, a pure black pixel a depth of 0 meters;
2) DepthPerspective computes depth as the distance from the camera;
3) DepthPlanner gives all points lying on a plane parallel to the camera plane the same depth.
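Given the DepthVis description above (black is 0 m, pure white is 100 m or more), a pixel value can be converted back to an approximate depth in meters. The linear mapping here is an assumption based on that description, not taken from the AirSim source:

```python
import numpy as np

def depthvis_to_meters(pixels):
    # DepthVis: 0 -> 0 m (black), 255 -> 100 m or beyond (white);
    # assumes a linear mapping in between.
    return pixels.astype(np.float64) / 255.0 * 100.0

vis = np.array([0, 128, 255], dtype=np.uint8)
print(depthvis_to_meters(vis))  # [0.0, ~50.2, 100.0]
```

Note that values at 100 m are clipped, so anything beyond that range is indistinguishable in DepthVis.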
The function below retrieves depth images in all three formats (change the first argument of ImageRequest to match your own camera name):
import os

import airsim
import cv2
import numpy as np

def save_depth_image(client):
    # Original code requested airsim.ImageType.Perspective, which does not
    # exist; the correct enum member is DepthPerspective.
    filenames = ["./DepthPlanner.png", "./DepthVis.png", "./DepthPerspective.png"]
    responses = client.simGetImages([
        airsim.ImageRequest("bottom", airsim.ImageType.DepthPlanner, False, False),
        airsim.ImageRequest("bottom", airsim.ImageType.DepthVis, False, False),
        airsim.ImageRequest("bottom", airsim.ImageType.DepthPerspective, False, False)])
    for response, filename in zip(responses, filenames):
        img1d = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
        img_rgb = img1d.reshape(response.height, response.width, 3)
        cv2.imwrite(os.path.normpath(filename), img_rgb)


The code below converts a depth image into the point-cloud file cloud.asc:
Code adapted from issue 3316.
import math
import sys

import numpy as np
from PIL import Image

def save_point_cloud(image, fileName):
    color = (0, 255, 0)
    rgb = "%d %d %d" % color
    f = open(fileName, "w")
    for x in range(image.shape[0]):
        for y in range(image.shape[1]):
            pt = image[x, y]
            if math.isinf(pt[0]) or math.isnan(pt[0]) or pt[0] > 10000 or pt[1] > 10000 or pt[2] > 10000:
                continue  # skip invalid points
            f.write("%f %f %f %s\n" % (pt[0], pt[1], pt[2] - 1, rgb))
    f.close()

def depth_conversion(point_depth, f):
    # Convert perspective (ray) depth to planar depth using the focal length f.
    height = point_depth.shape[0]
    width = point_depth.shape[1]
    i_c = float(height) / 2 - 1
    j_c = float(width) / 2 - 1
    columns, rows = np.meshgrid(np.linspace(0, width - 1, num=width), np.linspace(0, height - 1, num=height))
    distance_from_center = ((rows - i_c) ** 2 + (columns - j_c) ** 2) ** 0.5
    point_depth = point_depth / (1 + (distance_from_center / f) ** 2) ** 0.5
    return point_depth

def generate_point_cloud(depth, Fx, Fy, Cx, Cy):
    rows, cols = depth.shape
    c, r = np.meshgrid(np.arange(cols), np.arange(rows), sparse=True)
    valid = (depth > 0) & (depth < 255)
    z = 1000 * np.where(valid, depth / 256.0, np.nan)
    x = np.where(valid, z * (c - Cx) / Fx, 0)
    y = np.where(valid, z * (r - Cy) / Fy, 0)
    return np.dstack((x, y, z))

def main():
    source = "./DepthPerspective.png"
    output_file = "cloud.asc"
    width = 256
    height = 144
    camera_fov = 90
    Fx = Fy = width / (2 * math.tan(camera_fov * math.pi / 360))
    # Principal point assumed at the image center (Cx/Cy were undefined in
    # the original code).
    Cx = width / 2
    Cy = height / 2
    depth_map = Image.open(source).convert('L')
    img1d = np.array(depth_map, dtype=float)  # np.float is removed in NumPy >= 1.24
    img1d[img1d > 255] = 255
    img2d = np.reshape(img1d, (depth_map.height, depth_map.width))
    # If depth_map is in DepthPerspective format, first convert it to
    # DepthPlanner with depth_conversion; if it is already DepthPlanner,
    # skip the conversion and use the commented-out line instead.
    img2d_converted = depth_conversion(img2d, Fx)
    pcl = generate_point_cloud(img2d_converted, Fx, Fy, Cx, Cy)
    # pcl = generate_point_cloud(img2d, Fx, Fy, Cx, Cy)
    save_point_cloud(pcl, output_file)
    sys.exit(0)

if __name__ == "__main__":
    main()
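As a quick sanity check of the perspective-to-planar conversion, the snippet below runs depth_conversion on a synthetic constant depth map (the function is restated so the snippet runs on its own; the input values are made up for illustration):

```python
import numpy as np

def depth_conversion(point_depth, f):
    # Same formula as above: divide the ray depth by sqrt(1 + (d/f)^2),
    # where d is the pixel's distance from the image center.
    height, width = point_depth.shape
    i_c = float(height) / 2 - 1
    j_c = float(width) / 2 - 1
    columns, rows = np.meshgrid(np.arange(width), np.arange(height))
    distance_from_center = np.hypot(rows - i_c, columns - j_c)
    return point_depth / np.sqrt(1 + (distance_from_center / f) ** 2)

ray_depth = np.full((4, 4), 10.0)   # synthetic: every ray reads 10 m
planar = depth_conversion(ray_depth, f=128.0)
print(planar[1, 1])                 # center pixel is unchanged: 10.0
assert np.all(planar <= ray_depth)  # planar depth never exceeds ray depth
```

A constant ray depth describes a sphere around the camera, so after conversion the planar depth is largest at the image center and falls off toward the edges, which is what the assertions confirm.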
The generated point cloud can be viewed with CloudCompare: