CompHub aggregates competitions across platforms in real time, covering data-science contests (Kaggle, 天池, …) and online-judge contests (LeetCode, 牛客, …). Follow this account for the latest competition news!
For more competitions, see the CompHub homepage.
The following content is excerpted from the competition homepage.
Part 1: Competition Overview
Title
Scene Understanding For Autonomous Drone
Platform
Organizer
Amazon Prime Air
Background
Unmanned Aircraft Systems (UAS) have many applications, such as environmental studies, emergency response, or package delivery. The safe operation of fully autonomous UAS requires robust perception systems.
For this challenge, we will focus on images from a single downward-facing camera to estimate the scene's depth and perform semantic segmentation. The results of these two tasks can support the development of safe and reliable autonomous control systems for aircraft.
This challenge includes the release of a new dataset of drone images for benchmarking semantic segmentation and mono-depth perception. The images in this dataset comprise realistic backyard scenarios of variable content and were taken at various Above Ground Level (AGL) ranges.
Two tracks:
- Semantic Segmentation: perform semantic segmentation on aerial images from a monocular downward-facing drone camera
- Mono Depth Perception: estimate depth in aerial images from a monocular downward-facing drone camera
Part 2: Timeline
- Challenge Launch: 22nd December 2022
- Challenge End: 28th April 2023
- Winner Announcement: 30th June 2023
Part 3: Prizes
SEMANTIC SEGMENTATION
- 🥇 The top-scoring submission will receive $15,000 USD
- 🥈 The second-best submission will receive $7,500 USD
- 🥉 The third-place submission will receive $1,250 USD
DEPTH PERCEPTION
- 🥇 The top-scoring submission will receive $15,000 USD
- 🥈 The second-best submission will receive $7,500 USD
- 🥉 The third-place submission will receive $1,250 USD
🏅 The Most “Creative” solution submitted to the whole competition, as determined by the Sponsor’s sole discretion, will receive $2,500 USD.
Part 4: Task Description
TASK 1: SEMANTIC SEGMENTATION
Semantic segmentation is the labelling of the pixels of an image according to the category of the object to which they belong. The output for this task is an image in which each pixel has the value of the class it represents.
For this task, we focus on labels that ensure a safe landing, such as the location of humans and animals, rough or flat surfaces, tall grass and water elements, vehicles, and so on. The labels chosen for this challenge include humans, animals, roads, concrete, roofs, trees, furniture, vehicles, wires, and snow. The complete list of labels is: [WATER, ASPHALT, GRASS, HUMAN, ANIMAL, HIGH_VEGETATION, GROUND_VEHICLE, FAÇADE, WIRE, GARDEN_FURNITURE, CONCRETE, ROOF, GRAVEL, SOIL, PRIMEAIR_PATTERN, SNOW].
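To make the output format concrete, here is a minimal sketch of working with such a label mask in Python. It assumes the mask is a single-channel array of class indices over the 16 classes above; the index assignment shown is hypothetical, as the official class-to-index mapping and submission format are defined on the competition page.

```python
import numpy as np

# The 16 classes from the challenge; the order (and hence the
# index of each class) is an assumption for illustration only.
CLASSES = [
    "WATER", "ASPHALT", "GRASS", "HUMAN", "ANIMAL",
    "HIGH_VEGETATION", "GROUND_VEHICLE", "FACADE", "WIRE",
    "GARDEN_FURNITURE", "CONCRETE", "ROOF", "GRAVEL", "SOIL",
    "PRIMEAIR_PATTERN", "SNOW",
]

def class_histogram(mask: np.ndarray) -> dict:
    """Count how many pixels of each class appear in a label mask."""
    counts = np.bincount(mask.ravel(), minlength=len(CLASSES))
    return {name: int(n) for name, n in zip(CLASSES, counts)}

# Toy 2x3 mask: index 0 = WATER, index 2 = GRASS under this mapping.
toy = np.array([[0, 2, 2], [2, 0, 2]], dtype=np.uint8)
print(class_histogram(toy)["GRASS"])  # 4
```

A per-class pixel histogram like this is a quick sanity check that a predicted mask only contains valid class indices and covers the expected categories.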
TASK 2: MONO-DEPTH ESTIMATION
Depth estimation measures the distance between the camera and the objects in the scene. It is an important perception task for an autonomous aerial drone. With two cameras, the task can be solved using stereo-vision methods; this challenge instead aims to create a model that uses the information from a single camera to predict the depth of every pixel.
The output of this task must be an image of equal size to the input image, in which every pixel contains a depth value.
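As a sketch of checking such an output, the snippet below compares a predicted depth map against a ground-truth map of the same size using root-mean-square error. The metric and the masking of zero-depth pixels are illustrative assumptions; the competition defines its own official evaluation metric.

```python
import numpy as np

def depth_rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE between two depth maps of equal size.

    Illustrative metric only; pixels with no ground-truth depth
    (encoded here as 0, an assumption) are ignored.
    """
    if pred.shape != gt.shape:
        raise ValueError("prediction must match the input image size")
    valid = gt > 0
    diff = pred[valid] - gt[valid]
    return float(np.sqrt(np.mean(diff ** 2)))

gt = np.full((4, 4), 10.0)    # toy ground truth: 10 m everywhere
pred = np.full((4, 4), 13.0)  # toy prediction: off by 3 m
print(depth_rmse(pred, gt))   # 3.0
```

The shape check mirrors the requirement that the output image be exactly the size of the input image, with a depth value at every pixel.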