In a BundleFusion dataset, before the .sens file is generated there are color images, depth images, and a pose file, and the poses in that pose file actually vary from frame to frame. So my guess is that ground-truth poses can be written into this pose file, and the reconstruction will then be computed from the supplied poses. To test this, I start from the TUM dataset, which provides color images, depth images, a timestamp-matched file association.txt, and a groundtruth file. However, groundtruth.txt contains far more entries than association.txt; it is sampled much more densely. I split the task into the following steps:
1. Match each timestamp in association.txt to one in groundtruth.txt. The timestamps are never exactly equal, so take the entry whose timestamp is closest.
2. For the matches, rename the color and depth images from their original directories to the image naming scheme BundleFusion expects, then copy them into a separate directory.
3. Save the matched pose data stored in groundtruth.txt, and convert each quaternion rotation into a rotation matrix.
4. Combine each translation vector and rotation matrix into an SE(3) pose matrix and write it to a file.
Since I call the read_file_list() and associate() functions from the associate.py script provided on the TUM dataset website, associate.py needs to be copied from there into the project.
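The nearest-timestamp matching in step 1 can be sketched without associate.py. This is a minimal stand-in, not TUM's implementation: `match_nearest` and its brute-force search are my own illustration, with the same 0.02 s tolerance used later in the script.

```python
def match_nearest(query_stamps, target_stamps, max_diff=0.02):
    """For each query timestamp, find the closest target timestamp.

    A pair is kept only if the difference is within max_diff seconds.
    """
    target_sorted = sorted(target_stamps)
    matches = []
    for q in query_stamps:
        # brute-force nearest neighbour; fine for a few thousand stamps
        best = min(target_sorted, key=lambda t: abs(t - q))
        if abs(best - q) <= max_diff:
            matches.append((q, best))
    return matches

# 1.00 pairs with 0.99 (diff 0.01 s); 2.00 is dropped (closest is 2.03, diff 0.03 s)
print(match_nearest([1.00, 2.00], [0.99, 1.50, 2.03]))
```

TUM's associate.py additionally removes a target timestamp once it is claimed, so two query stamps cannot match the same ground-truth entry; this sketch omits that detail.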
"""
step 1: read the two files, association.txt and groundtruth.txt.
step 2: read the first column (the timestamp) of each file and then
        associate the first one to the second one to find the matches.
step 3: for every match, take the corresponding rotation represented as a quaternion and
        transform it into a rotation matrix.
step 4: combine the rotation matrix generated above, the translation vector and the
        homogeneous row [0, 0, 0, 1] into a 4x4 pose matrix.
"""
import numpy as np
from scipy.spatial.transform import Rotation as R
import associate
import os
import shutil
def read_files(files_path):
    print(files_path)
    association = files_path + "association.txt"
    groundtruth = files_path + "groundtruth.txt"
    rgb_path = files_path + "rgb/"
    depth_path = files_path + "depth/"
    bf_data_path = files_path + "bf_data/"
    os.makedirs(bf_data_path, exist_ok=True)
    print(rgb_path)
    print(depth_path)
    # read_file_list() returns a dict: {timestamp: [remaining columns]}
    assoc_list = associate.read_file_list(association)
    ground_list = associate.read_file_list(groundtruth)
    print("length of assoc_list", len(assoc_list))
    print("length of ground_list", len(ground_list))
    # (assoc_timestamp, groundtruth_timestamp) pairs, max difference 0.02 s
    matches = associate.associate(assoc_list, ground_list, 0.0, 0.02)
    final_rgbs = [match[0] for match in matches]
    rgbs = []
    depths = []
    # an association.txt row is "rgb_ts rgb/xxx.png depth_ts depth/yyy.png",
    # so each dict value is [rgb_file, depth_ts, depth_file]
    for timestamp, data in assoc_list.items():
        if timestamp in final_rgbs:
            rgbs.append(data[0].split("/")[1])
            depths.append(data[2].split("/")[1])
    rgb_images = os.listdir(rgb_path)
    depth_images = os.listdir(depth_path)
    rgb_id = 0
    for rgb_name in rgbs:
        if rgb_name in rgb_images:
            shutil.copyfile(rgb_path + rgb_name,
                            bf_data_path + "frame-" + str(rgb_id).zfill(6) + ".color.png")
            rgb_id += 1
    depth_id = 0
    for depth_name in depths:
        if depth_name in depth_images:
            shutil.copyfile(depth_path + depth_name,
                            bf_data_path + "frame-" + str(depth_id).zfill(6) + ".depth.png")
            depth_id += 1
    print("length of matches", len(matches))
    # look up the matched ground-truth pose for every frame, in match order
    ground_dict = dict(ground_list)
    groundtruth_list = [ground_dict[match[1]] for match in matches]
    print("length of groundtruth", len(groundtruth_list))
    quaternion2rotation(groundtruth_list, bf_data_path)
def quaternion2rotation(groundtruth_list, bf_data_path):
    # homogeneous bottom row of an SE(3) matrix
    row_vec = np.array([0, 0, 0, 1], dtype=np.float32)[np.newaxis, :]
    frame_id = 0
    for pose in groundtruth_list:
        # groundtruth.txt columns after the timestamp: tx ty tz qx qy qz qw
        translation = np.array(pose[0:3], dtype=np.float32)[:, np.newaxis]
        # SciPy's from_quat expects scalar-last (x, y, z, w), matching the TUM order
        quaternion = np.array(pose[3:7], dtype=np.float64)
        rotation = R.from_quat(quaternion)
        m34 = np.concatenate((rotation.as_matrix(), translation), axis=1)
        m44 = np.concatenate((m34, row_vec), axis=0)
        # write the 4x4 pose into frame-xxxxxx.pose.txt, one matrix row per line
        with open(bf_data_path + "frame-" + str(frame_id).zfill(6) + ".pose.txt", 'w') as fp:
            for row in m44:
                fp.write(' '.join(str(i) for i in row) + '\n')
        frame_id += 1
if __name__ == '__main__':
    # runs the whole pipeline: associate, copy/rename images, write pose files
    read_files("/home/yunlei/Datasets/TUM/rgbd_dataset_freiburg1_teddy/")
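The quaternion-to-SE(3) step can be checked in isolation. This is a small worked example, not part of the script: the quaternion and translation are made-up values, and the identity quaternion in SciPy's scalar-last convention (the same qx qy qz qw order TUM uses) is (0, 0, 0, 1).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# identity rotation as a scalar-last quaternion (x, y, z, w)
quat = [0.0, 0.0, 0.0, 1.0]
t = np.array([1.0, 2.0, 3.0])  # arbitrary translation for illustration

# assemble the 4x4 SE(3) matrix: rotation block, translation column, [0 0 0 1] row
T = np.eye(4)
T[:3, :3] = R.from_quat(quat).as_matrix()
T[:3, 3] = t
print(T)
```

With the identity quaternion the rotation block stays the 3x3 identity, so the printed matrix is the identity with (1, 2, 3) in the last column; a non-identity quaternion would fill the upper-left 3x3 block with the corresponding rotation matrix.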