pointnet2InferenceOwnModel - Running inference with PointNet++ on your own model (an .obj model as the example)

0 Inference results

1 Run PointNet++

https://github.com/yanx27/Pointnet_Pointnet2_pytorch#part-segmentation-shapenet

Download the S3DIS dataset:
https://docs.google.com/forms/d/e/1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw/viewform?c=0&w=1

2 Prepare your own test data

Idea: wrap one of your own .obj models into the same format that S3DIS uses.

Proceed as follows:

# 1 Prepare the raw data:
The directory structure is shown below. Each line of smallRoom.txt is `x y z r g b`; floor_1.txt is a copy of smallRoom.txt.
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch/data/s3dis$ tree ./Stanford3dDataset_v1.2_Aligned_Version
./Stanford3dDataset_v1.2_Aligned_Version
└── Area_5
    └── office_1
        ├── Annotations
        │   └── floor_1.txt
        └── smallRoom.txt
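A minimal sketch of how smallRoom.txt could be produced from an .obj file. The function name, the fallback gray color, and the assumption that vertex colors (when present) are 0-1 floats are mine, not from the repo:

```python
def obj_to_s3dis_lines(obj_lines, r=128, g=128, b=128):
    """Keep only vertex lines ('v x y z') and append a color.

    Plain vertices get a constant fallback color; 'v x y z r g b'
    lines (assumed 0-1 float colors) are rescaled to 0-255 ints.
    """
    out = []
    for line in obj_lines:
        parts = line.split()
        if not parts or parts[0] != 'v':
            continue  # skip faces, normals, texture coords, comments
        x, y, z = parts[1:4]
        if len(parts) >= 7:  # per-vertex colors present
            cr, cg, cb = (int(float(c) * 255) for c in parts[4:7])
        else:
            cr, cg, cb = r, g, b
        out.append(f"{x} {y} {z} {cr} {cg} {cb}")
    return out


demo = ["# tiny example", "v 0 0 0", "v 1 0 0", "f 1 2 3"]
print(obj_to_s3dis_lines(demo))
# → ['0 0 0 128 128 128', '1 0 0 128 128 128']
```

Write the returned lines to smallRoom.txt, then copy that file to Annotations/floor_1.txt as in the tree above.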

# 2 Generate the test data with the S3DIS script (edit the corresponding meta file so that only one line remains):
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch/data/s3dis$ cat ../../data_utils/meta/anno_paths.txt 
Area_5/office_1/Annotations
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch/data/s3dis$ 

Then:
cd data_utils
python collect_indoor3d_data.py

This generates a stanford_indoor3d directory under data; move it into s3dis:
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch/data/s3dis$ ls ..
modelnet40_normal_resampled  s3dis      shapenetcore_partanno_segmentation_benchmark_v0_normal  stanford_indoor3d_ori
myData                       s3dis_ori  
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch/data/s3dis$ ls
Stanford3dDataset_v1.2_Aligned_Version  stanford_indoor3d
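To confirm the conversion worked, the generated room file can be sanity-checked before running the test script. This sketch assumes collect_indoor3d_data.py writes one N x 7 .npy per room with columns x y z r g b label; verify that layout against your repo version:

```python
import numpy as np

def check_room(npy_path):
    """Sanity-check one room file written by collect_indoor3d_data.py.

    Assumed layout (check your repo version): an N x 7 float array
    with columns x y z r g b label.
    """
    pts = np.load(npy_path)
    assert pts.ndim == 2 and pts.shape[1] == 7, f"unexpected shape {pts.shape}"
    labels = np.unique(pts[:, 6]).astype(int).tolist()
    return pts.shape[0], labels
```

For example, `check_room('data/stanford_indoor3d/Area_5_office_1.npy')` should report the point count and, for the setup above, only the floor label (since floor_1.txt is the single annotation).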

# 3 Run the test script (adjust batch_size to your GPU; 16 works on a 1080). The results end up a few directory levels deep under log:
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch$ python test_semseg.py --log_dir pointnet2_sem_seg --test_area 5 --visual --batch_size 16
ay(total_correct_class_tmp) / (np.array(total_iou_deno_class_tmp, dtype=np.float) + 1e-6)
[0.         0.03883974 0.         0.         0.         0.
 0.         0.         0.         0.         0.         0.
 0.        ]
Mean IoU of Area_5_office_1: 0.0388
----------------------------
test_semseg.py:185: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  IoU = np.array(total_correct_class) / (np.array(total_iou_deno_class, dtype=np.float) + 1e-6)
test_semseg.py:190: RuntimeWarning: invalid value encountered in true_divide
  total_correct_class[l] / float(total_iou_deno_class[l]))
------- IoU --------
class ceiling       , IoU: 0.000 
class floor         , IoU: 0.039 
class wall          , IoU: 0.000 
class beam          , IoU: 0.000 
class column        , IoU: 0.000 
class window        , IoU: 0.000 
class door          , IoU: 0.000 
class table         , IoU: nan 
class chair         , IoU: 0.000 
class sofa          , IoU: 0.000 
class bookcase      , IoU: 0.000 
class board         , IoU: 0.000 
class clutter       , IoU: 0.000 

eval point avg class IoU: 0.002988
test_semseg.py:194: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  np.mean(np.array(total_correct_class) / (np.array(total_seen_class, dtype=np.float) + 1e-6))))
eval whole scene point avg class acc: 0.002988
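The DeprecationWarnings above come from `np.float`, which NumPy deprecated in 1.20 and removed in 1.24. A minimal sketch of the drop-in fix for the IoU line in test_semseg.py, using toy counts rather than real results:

```python
import numpy as np

# np.float64 (or the builtin float) replaces the removed np.float alias.
# Toy per-class counts: class 0 has 10/20 correct, class 1 is empty.
total_correct_class = [10, 0]
total_iou_deno_class = [20, 0]

iou = np.array(total_correct_class) / (np.array(total_iou_deno_class,
                                                dtype=np.float64) + 1e-6)
print(iou)  # the 1e-6 keeps empty classes at 0.0 instead of nan
```

The same substitution applies at lines 185 and 194 flagged in the log; the `nan` for the `table` class comes from a division at line 190 that lacks this epsilon.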

# 4 View the results: open the *_pred.obj file in MeshLab
nash5@gas:~/prjs/Pointnet_Pointnet2_pytorch$ ls log/sem_seg/pointnet2_sem_seg/visual/Area_5_office_1
Area_5_office_1_gt.obj    Area_5_office_1_pred.obj  Area_5_office_1.txt  
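Besides MeshLab, the pred.obj can be parsed directly for further processing. This sketch assumes the `--visual` output writes one `v x y z r g b` line per point (check your file before relying on it):

```python
def load_point_obj(lines):
    """Parse the point-cloud .obj written by test_semseg.py --visual.

    Assumed format: one 'v x y z r g b' line per point, where the color
    encodes the predicted (or ground-truth) class.
    Returns (points, colors) as parallel lists of tuples.
    """
    pts, cols = [], []
    for line in lines:
        parts = line.split()
        if len(parts) == 7 and parts[0] == 'v':
            pts.append(tuple(float(v) for v in parts[1:4]))
            cols.append(tuple(int(float(v)) for v in parts[4:7]))
    return pts, cols


# Usage: compare the color sets of *_pred.obj and *_gt.obj to see
# which classes the model actually predicted.
with_demo = ["v 1.0 2.0 3.0 255 0 0"]
print(load_point_obj(with_demo))
# → ([(1.0, 2.0, 3.0)], [(255, 0, 0)])
```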