Ubuntu 16.04 PointMVSNet Environment Setup
1. CUDA Selection and Installation
Check the machine's GPU model and find the matching CUDA version.
amax@amax:~$ nvidia-smi
Thu May 21 08:47:23 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.34 Driver Version: 430.34 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:AF:00.0 On | N/A |
| 22% 34C P8 19W / 250W | 170MiB / 11019MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:D8:00.0 Off | N/A |
| 22% 32C P8 4W / 250W | 1MiB / 11019MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1584 G /usr/lib/xorg/Xorg 113MiB |
| 0 2866 G compiz 49MiB |
| 0 3446 G /usr/lib/firefox/firefox 6MiB |
+-----------------------------------------------------------------------------+
Here "CUDA Version: 10.1" is the CUDA version to use. Note that this field reports the highest CUDA version the installed driver supports, not necessarily the toolkit currently installed.
2. Code Download and Build Environment Setup
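The CUDA version reported by nvidia-smi can also be extracted programmatically. A minimal sketch, using a regex over the header line shown above (the `cuda_version` helper name is ours, not part of any tool):

```python
import re

# First banner line of the nvidia-smi output shown above.
SMI_HEADER = "| NVIDIA-SMI 430.34  Driver Version: 430.34  CUDA Version: 10.1 |"

def cuda_version(smi_output):
    """Extract the 'CUDA Version' field from nvidia-smi text, or None."""
    m = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
    return m.group(1) if m else None

print(cuda_version(SMI_HEADER))  # -> 10.1
```

In practice you would feed it the output of `subprocess.check_output(["nvidia-smi"])`.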
Anaconda
1. Install Anaconda: https://www.anaconda.com/download/#linux
2. Switch to a domestic (Tsinghua) mirror
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
# Show channel URLs when searching
conda config --set show_channel_urls yes
- Switch the pip index
Edit ~/.pip/pip.conf (create it if it does not exist) with the following content:
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
- Create a virtual environment
conda create -n PointMVSNet python=3.6
source activate PointMVSNet
conda install -c anaconda pillow
pip install -r requirements.txt
PyTorch 1.0.1 Installation
# CUDA 9.0
conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=9.0 -c pytorch
# CUDA 10.0
conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=10.0 -c pytorch
# CPU Only
conda install pytorch-cpu==1.0.1 torchvision-cpu==0.2.2 cpuonly -c pytorch
If the network is slow, drop the trailing "-c pytorch" so the domestic mirror configured above is used instead.
Code Download
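After installation it is worth a quick sanity check that torch imports and sees the GPU. A small sketch (the `torch_status` helper is ours; it degrades gracefully if torch is missing):

```python
def torch_status():
    """Report the installed torch version and CUDA visibility."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    return "torch {} (CUDA available: {})".format(
        torch.__version__, torch.cuda.is_available())

print(torch_status())
```

On a correct install this should print something like "torch 1.0.1 (CUDA available: True)".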
git clone https://github.com/callmeray/PointMVSNet.git
Compile
cd PointMVSNet
conda activate PointMVSNet
bash compile.sh
Training
Download the preprocessed DTU training data from MVSNet and unzip it to data/dtu,
as shown below:
(PointMVSNet) amax@amax:~/PointMVSNet/data/dtu$ ls
Cameras Depths Eval
Run the training script
python pointmvsnet/train.py --cfg configs/dtu_wde3.yaml
Wait for training to finish; model checkpoints ending in .pth will appear under outputs/dtu_wde3/.
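To pick the checkpoint for evaluation you typically want the newest .pth file in that folder. A minimal sketch (the `latest_checkpoint` helper is ours, not part of PointMVSNet):

```python
import glob
import os

def latest_checkpoint(output_dir):
    """Return the most recently modified .pth file in output_dir, or None."""
    paths = glob.glob(os.path.join(output_dir, "*.pth"))
    return max(paths, key=os.path.getmtime) if paths else None
```

Usage: `latest_checkpoint("outputs/dtu_wde3")`.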
Testing
- Download the rectified images from the DTU benchmark and unzip them to
data/dtu/Eval,
as shown below:
(PointMVSNet) amax@amax:~/PointMVSNet/data/dtu/Eval$ ls
dtu_wde3 Rectified
python pointmvsnet/test.py --cfg configs/dtu_wde3.yaml
On success, data/dtu/Eval/dtu_wde3 will contain a number of folders whose names start with "scan".
Depth Fusion
PointMVSNet generates per-view depth maps. We need to apply the depth fusion script tools/depthfusion.py
to get the complete point cloud. Please refer to MVSNet for more details.
Code Download
git clone https://github.com/YoYo000/fusibile
Build
cmake .
Edit the CUDA compute capability in CMakeLists.txt
-set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -gencode arch=compute_60,code=sm_60 -gencode arch=compute_60,code=sm_60)
+set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=sm_75)
Look up your GPU's compute capability here:
https://developer.nvidia.com/cuda-gpus
For example, the GTX 1050 has compute capability 6.1, i.e. compute_61,sm_61; the RTX 2080 Ti used here is 7.5, hence compute_75,sm_75 in the diff above.
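Turning a compute capability into the matching nvcc flag is a purely mechanical rewrite. A small sketch (the `gencode_flag` helper name is ours):

```python
def gencode_flag(capability):
    """Turn a compute capability like '7.5' into an nvcc -gencode flag."""
    cc = capability.replace(".", "")  # "7.5" -> "75"
    return "-gencode arch=compute_{0},code=sm_{0}".format(cc)

print(gencode_flag("7.5"))  # -> -gencode arch=compute_75,code=sm_75
```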
Run the build command:
make
Patch tools/depthfusion.py as follows
diff --git a/tools/depthfusion.py b/tools/depthfusion.py
old mode 100644
new mode 100755
index cf30efe..064d66c
--- a/tools/depthfusion.py
+++ b/tools/depthfusion.py
@@ -10,6 +10,8 @@ from __future__ import print_function
import argparse
import os.path as osp
from struct import *
+import sys
+sys.path.insert(0, osp.dirname(__file__) + '/..')
from pointmvsnet.utils.io import *
@@ -125,6 +127,8 @@ def mvsnet_to_gipuma(scene_folder, gipuma_point_folder, name, view_num):
for v in range(view_num):
# convert cameras
in_cam_file = os.path.join(scene_folder, 'cam_{:08d}_{}.txt'.format(v, name))
+ if not os.path.exists(in_cam_file):
+ continue
out_cam_file = os.path.join(gipuma_cam_folder, '{:08d}.jpg.P'.format(v))
mvsnet_to_gipuma_cam(in_cam_file, out_cam_file)
@@ -133,6 +137,7 @@ def mvsnet_to_gipuma(scene_folder, gipuma_point_folder, name, view_num):
sub_depth_folder = os.path.join(gipuma_point_folder, gipuma_prefix + "{:08d}".format(v))
mkdir(sub_depth_folder)
in_depth_pfm = os.path.join(scene_folder, "{:08d}_{}_prob_filtered.pfm".format(v, name))
+
out_depth_dmb = os.path.join(sub_depth_folder, 'disp.dmb')
fake_normal_dmb = os.path.join(sub_depth_folder, 'normals.dmb')
mvsnet_to_gipuma_dmb(in_depth_pfm, out_depth_dmb)
@@ -155,6 +160,9 @@ def probability_filter(scene_folder, init_prob_threshold, flow_prob_threshold, n
init_prob_map_path = os.path.join(scene_folder, "{:08d}_init_prob.pfm".format(v))
prob_map_path = os.path.join(scene_folder, "{:08d}_{}_prob.pfm".format(v, name))
init_depth_map_path = os.path.join(scene_folder, "{:08d}_{}.pfm".format(v, name))
+ #print(init_depth_map_path)
+ if not os.path.exists(init_depth_map_path):
+ continue
out_depth_map_path = os.path.join(scene_folder, "{:08d}_{}_prob_filtered.pfm".format(v, name))
depth_map = load_pfm(init_depth_map_path)[0]
Run the Python script
python tools/depthfusion.py --eval_folder /home/amax/PointMVSNet/data/dtu/Eval --fusibile_exe_path /home/amax/fusibile/fusibile --depth_folder dtu_wde3 -n flow1 -v 48
Adjust the arguments above to match your own paths and setup.
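When scripting this over many scans it helps to build the argument list once. A minimal sketch of assembling the command line above for subprocess (the `fusion_cmd` helper and default values are ours; the flags come from the invocation shown):

```python
def fusion_cmd(eval_folder, fusibile_exe,
               depth_folder="dtu_wde3", name="flow1", views=48):
    """Build the depthfusion.py command line as a subprocess-style list."""
    return ["python", "tools/depthfusion.py",
            "--eval_folder", eval_folder,
            "--fusibile_exe_path", fusibile_exe,
            "--depth_folder", depth_folder,
            "-n", name,
            "-v", str(views)]

cmd = fusion_cmd("/home/amax/PointMVSNet/data/dtu/Eval",
                 "/home/amax/fusibile/fusibile")
print(" ".join(cmd))
```

Pass the list to `subprocess.run(cmd)` to execute it.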