Detailed record of the PCT part_seg model (full printed module structure)

PointTransformerSeg(
  (backbone): Backbone(
    (fc1): Sequential(
      (0): Linear(in_features=19, out_features=32, bias=True)
      (1): ReLU()
      (2): Linear(in_features=32, out_features=32, bias=True)
    )
    (transformer1): TransformerBlock(
      (fc1): Linear(in_features=32, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=32, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (transition_downs): ModuleList(
      (0): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(35, 64, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (1): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(67, 128, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (2): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(131, 256, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (3): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(259, 512, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
    )
    (transformers): ModuleList(
      (0): TransformerBlock(
        (fc1): Linear(in_features=64, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=64, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
      (1): TransformerBlock(
        (fc1): Linear(in_features=128, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=128, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
      (2): TransformerBlock(
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
      (3): TransformerBlock(
        (fc1): Linear(in_features=512, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=512, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
    )
  )
  (fc2): Sequential(
    (0): Linear(in_features=512, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=512, bias=True)
  )
  (transformer2): TransformerBlock(
    (fc1): Linear(in_features=512, out_features=512, bias=True)
    (fc2): Linear(in_features=512, out_features=512, bias=True)
    (fc_delta): Sequential(
      (0): Linear(in_features=3, out_features=512, bias=True)
      (1): ReLU()
      (2): Linear(in_features=512, out_features=512, bias=True)
    )
    (fc_gamma): Sequential(
      (0): Linear(in_features=512, out_features=512, bias=True)
      (1): ReLU()
      (2): Linear(in_features=512, out_features=512, bias=True)
    )
    (w_qs): Linear(in_features=512, out_features=512, bias=False)
    (w_ks): Linear(in_features=512, out_features=512, bias=False)
    (w_vs): Linear(in_features=512, out_features=512, bias=False)
  )
  (transition_ups): ModuleList(
    (0): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=512, out_features=256, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=256, out_features=256, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
    (1): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=256, out_features=128, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=128, out_features=128, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
    (2): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=128, out_features=64, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=64, out_features=64, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
    (3): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=64, out_features=32, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=32, out_features=32, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
  )
  (transformers): ModuleList(
    (0): TransformerBlock(
      (fc1): Linear(in_features=256, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=256, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (1): TransformerBlock(
      (fc1): Linear(in_features=128, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=128, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (2): TransformerBlock(
      (fc1): Linear(in_features=64, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=64, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (3): TransformerBlock(
      (fc1): Linear(in_features=32, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=32, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
  )
  (fc3): Sequential(
    (0): Linear(in_features=32, out_features=64, bias=True)
    (1): ReLU()
    (2): Linear(in_features=64, out_features=64, bias=True)
    (3): ReLU()
    (4): Linear(in_features=64, out_features=50, bias=True)
  )
)
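Every TransformerBlock in the printout follows the same pattern: fc1/fc2 project features to and from a 512-d working width, fc_delta encodes relative point positions (3 → 512), fc_gamma maps the attention input to per-channel weights, and w_qs/w_ks/w_vs are bias-free query/key/value projections. A minimal NumPy sketch of that per-point vector attention follows; the random stand-in weights, the k-nearest-neighbor neighborhood, and all variable names are assumptions for illustration, not taken from the printed model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d_in, d = 8, 4, 32, 512   # points, neighbors, block I/O dim, model dim

def linear(dout, din):           # random weights standing in for trained ones
    return rng.standard_normal((din, dout)) * 0.02

def relu(x):
    return np.maximum(x, 0)

W_fc1, W_fc2 = linear(d, d_in), linear(d_in, d)   # fc1 / fc2
W_q, W_k, W_v = (linear(d, d) for _ in range(3))  # w_qs / w_ks / w_vs
Wd1, Wd2 = linear(d, 3), linear(d, d)             # fc_delta: 3 -> 512 -> 512
Wg1, Wg2 = linear(d, d), linear(d, d)             # fc_gamma: 512 -> 512 -> 512

def transformer_block(feats, pos, knn_idx):
    """feats: (n, d_in) features, pos: (n, 3) coords, knn_idx: (n, k)."""
    x = feats @ W_fc1                         # lift d_in -> d
    q, kk, v = x @ W_q, x @ W_k, x @ W_v
    kn, vn = kk[knn_idx], v[knn_idx]          # (n, k, d) neighbor keys/values
    rel = pos[:, None, :] - pos[knn_idx]      # (n, k, 3) relative positions
    delta = relu(rel @ Wd1) @ Wd2             # positional encoding (fc_delta)
    gamma = relu((q[:, None] - kn + delta) @ Wg1) @ Wg2   # fc_gamma
    attn = np.exp(gamma - gamma.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)         # softmax over k
    agg = (attn * (vn + delta)).sum(axis=1)   # per-channel weighted sum
    return agg @ W_fc2 + feats                # project back, residual

pos = rng.standard_normal((n, 3))
feats = rng.standard_normal((n, d_in))
dist = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
knn_idx = np.argsort(dist, axis=1)[:, :k]     # each point's k nearest points
out = transformer_block(feats, pos, knn_idx)
print(out.shape)  # (8, 32)
```

The key difference from standard scalar attention is visible in `gamma`: the attention weight is a full 512-d vector per neighbor, so each feature channel is reweighted independently before aggregation.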

PointNet2 (PointNet++) is a deep learning framework for point cloud classification and segmentation, and PointNet2_Part_Seg_SSG is its application to point cloud part segmentation.

PointNet2 uses a hierarchical network structure that handles unordered point cloud data effectively: it partitions the cloud into local regions, extracts features from each region, and then aggregates the local features into a global representation. This design captures both local and global structure of the point cloud, which is what enables classification and segmentation.

PointNet2_Part_Seg_SSG targets part segmentation specifically and relies on the SSG (Single-Scale Grouping) module. SSG first selects a center point for each local region and assigns the remaining points to their nearest center. It then extracts and aggregates features over each center's neighborhood to obtain that region's feature representation; further convolution and pooling operations yield the global feature of the cloud.

During training, PointNet2_Part_Seg_SSG uses a cross-entropy loss to measure the discrepancy between the predicted segmentation and the ground-truth labels; backpropagation then optimizes the network parameters so the model learns better feature representations for the segmentation task.

In short, PointNet2_Part_Seg_SSG is a PointNet2 variant dedicated to part segmentation: the SSG module gives it finer-grained grouping and feature extraction, which improves segmentation accuracy.
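The center-selection and grouping steps described above can be sketched in NumPy as greedy farthest point sampling followed by single-scale ball grouping. This is a hedged illustration only; the function names, radius, and group size are illustrative assumptions, not the reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def farthest_point_sampling(pts, m):
    """Greedily pick m well-spread center indices from pts (n, 3)."""
    n = pts.shape[0]
    idx = np.zeros(m, dtype=int)          # start from point 0
    dist = np.full(n, np.inf)             # distance to nearest chosen center
    for i in range(1, m):
        dist = np.minimum(dist, np.linalg.norm(pts - pts[idx[i - 1]], axis=1))
        idx[i] = dist.argmax()            # next center: farthest remaining point
    return idx

def ball_group(pts, centers_idx, radius, k):
    """For each center, gather k neighbor indices within radius.
    The center itself is always a hit (distance 0), and hits are
    repeated cyclically to pad each group to a fixed size k."""
    groups = []
    for c in centers_idx:
        d = np.linalg.norm(pts - pts[c], axis=1)
        hits = np.where(d <= radius)[0]
        groups.append(np.resize(hits, k))  # pad/truncate to exactly k
    return np.stack(groups)                # (m, k)

pts = rng.standard_normal((64, 3))
centers = farthest_point_sampling(pts, 16)
grp = ball_group(pts, centers, radius=0.8, k=8)
print(grp.shape)  # (16, 8)
```

Each row of `grp` indexes one local neighborhood; a shared MLP plus max-pooling over those k points would then produce the per-region feature that the set abstraction layers in the dump (the `mlp_convs`/`mlp_bns` lists) compute.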
