Reproducing RandLA-Net and Training It on a Custom Dataset

For the detailed reproduction steps, see my earlier blog posts.

Before training on your own data, the data preprocessing has to be modified first. In the official dataset, labels are stored separately in standalone .label files, whereas in my data the label is stored directly as a column of the point cloud file. In addition, the official data carries color while mine is uncolored, so I simply set the colors to 0.

Here is the modified data_prepare file:

from sklearn.neighbors import KDTree
from os.path import join, exists, dirname, abspath
import numpy as np
import os, glob, pickle
import sys

BASE_DIR = dirname(abspath(__file__))
ROOT_DIR = dirname(BASE_DIR)
sys.path.append(BASE_DIR)
sys.path.append(ROOT_DIR)
from helper_ply import write_ply
from helper_tool import DataProcessing as DP

grid_size = 0.06  # my point clouds are fairly dense, so use a larger subsampling interval
dataset_path = 'data/semantic_kitti/out_16'
original_pc_folder = join(dirname(dataset_path), 'original_ply')
sub_pc_folder = join(dirname(dataset_path), 'input_{:.3f}'.format(grid_size))
os.makedirs(original_pc_folder, exist_ok=True)
os.makedirs(sub_pc_folder, exist_ok=True)

for pc_path in glob.glob(join(dataset_path, '*.txt')):
    print(pc_path)
    file_name = os.path.basename(pc_path)[:-4]  # strip the directory and the '.txt' suffix

    # skip clouds that have already been processed
    if exists(join(sub_pc_folder, file_name + '_KDTree.pkl')):
        continue

    # load the txt-format point cloud directly with numpy
    pc = np.loadtxt(pc_path)

    # both my training and test clouds are stored as x, y, z, r, g, b, label
    labels = pc[:, -1].astype(np.uint8)

    full_ply_path = join(original_pc_folder, file_name + '.ply')

    # first subsample at 0.01 to keep the "original" ply at a manageable size
    sub_points, sub_colors, sub_labels = DP.grid_sub_sampling(pc[:, :3].astype(np.float32),
                                                              pc[:, 3:6].astype(np.uint8), labels, 0.01)
    sub_labels = np.squeeze(sub_labels)

    write_ply(full_ply_path, (sub_points, sub_colors, sub_labels), ['x', 'y', 'z', 'red', 'green', 'blue', 'class'])

    # save sub_cloud and KDTree file; keep the grid-level labels under a new
    # name so the 0.01-level sub_labels stay available for the proj file below
    sub_xyz, sub_colors, sub_grid_labels = DP.grid_sub_sampling(sub_points, sub_colors, sub_labels, grid_size)
    sub_colors = sub_colors / 255.0
    sub_grid_labels = np.squeeze(sub_grid_labels)
    sub_ply_file = join(sub_pc_folder, file_name + '.ply')
    write_ply(sub_ply_file, [sub_xyz, sub_colors, sub_grid_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class'])

    search_tree = KDTree(sub_xyz, leaf_size=50)
    kd_tree_file = join(sub_pc_folder, file_name + '_KDTree.pkl')
    with open(kd_tree_file, 'wb') as f:
        pickle.dump(search_tree, f)

    # project the 0.01-resolution points onto the grid-subsampled cloud
    proj_idx = np.squeeze(search_tree.query(sub_points, return_distance=False))
    proj_idx = proj_idx.astype(np.int32)
    proj_save = join(sub_pc_folder, file_name + '_proj.pkl')
    with open(proj_save, 'wb') as f:
        # store the 0.01-resolution labels (sub_labels), which match proj_idx
        # in length; the full-resolution `labels` array does not
        pickle.dump([proj_idx, sub_labels], f)
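
To sanity-check the generated files, you can reload one cloud and confirm that the projection indices and labels line up with the subsampled tree. This is just a quick verification sketch; the folder and file name below are placeholders:

```python
import pickle
import numpy as np
from os.path import join

sub_pc_folder = 'data/semantic_kitti/input_0.060'  # example path
file_name = '000000'                               # example cloud name

# the KDTree was built on the grid-subsampled points
with open(join(sub_pc_folder, file_name + '_KDTree.pkl'), 'rb') as f:
    search_tree = pickle.load(f)

# proj_idx maps every 0.01-resolution point to its nearest subsampled point
with open(join(sub_pc_folder, file_name + '_proj.pkl'), 'rb') as f:
    proj_idx, proj_labels = pickle.load(f)

assert proj_idx.shape[0] == proj_labels.shape[0]    # one label per projected point
assert proj_idx.max() < search_tree.data.shape[0]   # indices stay inside the tree
print(np.unique(proj_labels))                       # should list only the known classes
```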

In addition, the dataset class definition in the main file has to be modified: change the class labels to the ones your data uses, and adjust the code that splits the clouds into training, validation, and test sets.

    def __init__(self):
        self.name = 'power'
        self.path = 'utils/data/semantic_kitti'
        self.label_to_names = {0: 'background',
                               1: 'build',
                               }
        self.num_classes = len(self.label_to_names)
        self.label_values = np.sort([k for k, v in self.label_to_names.items()])
        self.label_to_idx = {l: i for i, l in enumerate(self.label_values)}
        self.ignored_labels = np.sort([])

        self.original_folder = join(self.path, 'out_16')
        self.full_pc_folder = join(self.path, 'original_ply')
        self.sub_pc_folder = join(self.path, 'input_{:.3f}'.format(0.06))

        # the training, validation and test clouds all sit in the same folder,
        # so they are split by file name below

        # Initial training-validation-testing files
        self.train_files = []
        self.val_files = []
        self.test_files = []
        cloud_names = [file_name[:-4] for file_name in os.listdir(self.original_folder) if file_name[-4:] == '.txt']
        self.val_split = cloud_names[:30]     # entries are point cloud file names
        self.test_split = cloud_names[30:45]
        # split into training / validation / test sets by file name
        for pc_name in cloud_names:
            pc_file = join(self.sub_pc_folder, pc_name + '.ply')
            if pc_name in self.val_split:
                self.val_files.append(pc_file)
            elif pc_name in self.test_split:
                self.test_files.append(pc_file)
            else:
                self.train_files.append(pc_file)

        # Initiate containers
        self.val_proj = []
        self.val_labels = []
        self.test_proj = []
        self.test_labels = []

        self.possibility = {}
        self.min_possibility = {}
        self.class_weight = {}
        self.input_trees = {'training': [], 'validation': [], 'test': []}
        self.input_colors = {'training': [], 'validation': [], 'test': []}
        self.input_labels = {'training': [], 'validation': []}

        self.load_sub_sampled_clouds(cfg.sub_grid_size)

In addition, the class weights of the dataset have to be modified as well. Since I am doing single-class segmentation, I simply set all the weights to 1, as sketched below.
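
In the official code, the per-class weights are computed in helper_tool.py from hard-coded per-class point counts (DataProcessing.get_class_weights). A minimal sketch of the change, assuming that interface from the official repo, looks like this:

```python
import numpy as np

class DataProcessing:
    @staticmethod
    def get_class_weights(dataset_name):
        # for the two-class 'power' dataset, skip the per-class point-count
        # statistics entirely and weight both classes equally
        if dataset_name == 'power':
            ce_label_weight = np.ones(2, dtype=np.float32)
            return np.expand_dims(ce_label_weight, axis=0)  # shape (1, num_classes)
        raise ValueError('unknown dataset: ' + dataset_name)
```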

Finally, train following the normal training procedure; the entry point is sketched below.
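
For reference, the entry point then mirrors the official main script. This is only a sketch: `Power` is a hypothetical name for the modified dataset class above, while `Network`, `ConfigS3DIS`, and `init_input_pipeline` are the names the official repo uses:

```python
from RandLANet import Network           # the official network definition
from helper_tool import ConfigS3DIS as cfg

if __name__ == '__main__':
    dataset = Power()                   # the modified dataset class shown above (hypothetical name)
    dataset.init_input_pipeline()       # build the TensorFlow input pipeline
    model = Network(dataset, cfg)       # RandLA-Net graph and training ops
    model.train(dataset)                # standard training loop
```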

