1. Env
The configuration process was smooth on Linux, but there were some problems with tiny-cuda-nn and COLMAP on Windows.
# According to the installation documentation
conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
python -m pip install --upgrade pip
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
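# Note (added): tiny-cuda-nn below is compiled against this CUDA toolkit, so keep it consistent with the +cu118 PyTorch wheels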
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install nerfstudio
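# Note (added): the editable install from the cloned repo below supersedes this release install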
git clone https://github.com/nerfstudio-project/nerfstudio.git
cd nerfstudio
pip install --upgrade pip setuptools
pip install -e .
# Optional
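# Installs shell tab completion for the ns- commands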
ns-install-cli
pip install -e .[dev]
pip install -e .[docs]
2. Train
The nerfacto model is the one recommended for training, but I use Gaussian Splatting (splatfacto) instead, which is only a small difference.
- Prepare the dataset:
ns-download-data nerfstudio --capture-name=poster
We can then inspect the downloaded index under data/nerfstudio/poster.
- Training
- Install gsplat:
pip install gsplat
- We only need to change the model name in the command:
ns-train splatfacto --data data/nerfstudio/poster
- Process: we can watch the rendering in the web viewer window and the training output in the terminal.
- Result: we obtain the output index structure for the trained model (see the sketch after this list).
I also do some rendering and evaluation in the next section on custom data.
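As a rough sketch of that output structure (the timestamped folder name is an example and will differ per run), the trained model lands under outputs/, and the saved config can be used to reopen the interactive viewer:
# Example output layout (timestamp is illustrative)
ls outputs/poster/splatfacto/2024-04-08_120000/
#   config.yml           full training configuration
#   nerfstudio_models/   saved checkpoints (.ckpt files)
# Reopen the web viewer on the trained model
ns-viewer --load-config outputs/poster/splatfacto/2024-04-08_120000/config.yml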
3. Custom data
3.1 Prepare
# Install COLMAP
conda install -c conda-forge colmap
colmap -h  # check that the installation works
# Preprocess the data
ns-process-data images --data /home/Github_project/nerfstudio/Custom_date/Bear --output-dir /home/Github_project/nerfstudio/Custom_date/Bear2
# Small bug:
#   Could not find ffmpeg. Please install ffmpeg.
#   See https://ffmpeg.org/download.html for installation instructions.
# Solution:
sudo apt install ffmpeg
If the process goes well, ns-process-data finishes without errors and writes the processed dataset to the output directory.
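As a rough sketch of what to expect in the output directory (an assumption based on a typical ns-process-data run; the exact layout may vary with the nerfstudio version):
ls /home/Github_project/nerfstudio/Custom_date/Bear2
#   images/ images_2/ images_4/ images_8/   original and downscaled images
#   colmap/                                 COLMAP database and sparse reconstruction
#   transforms.json                         camera poses in nerfstudio format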
Bug solution:
One issue I hit was "Error running command: colmap vocab_tree_matcher --database_path" (there is a related GitHub issue). I simply reran the command the next day and the error disappeared. If it persists, one thing to try is deleting "/home/ubuntu/.local/share/nerfstudio" and restarting the machine.
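In command form (use with care; this wipes nerfstudio's cached data, e.g. the downloaded vocabulary tree, which will be fetched again on the next run):
rm -rf /home/ubuntu/.local/share/nerfstudio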
The training process is the same:
ns-train splatfacto --data data/Custom_data
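If training gets interrupted, it can be resumed from the saved checkpoints; a sketch, assuming the --load-dir option and an example run directory:
# Resume from a previous run's checkpoints (the run directory is an example)
ns-train splatfacto --data data/Custom_data --load-dir outputs/Custom_data/splatfacto/2024-04-08_120000/nerfstudio_models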
3.2 Render and eval
We can set keyframes in the viewer and export the generated render command; running that command produces the video.
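The exported command looks roughly like this (a sketch; the exact flags depend on the nerfstudio version, and the camera-path and output paths are examples, not the actual run):
# Render the exported camera path into a video
ns-render camera-path --load-config outputs/Bear/splatfacto/2024-04-08_193703/config.yml --camera-path-filename data/Custom_data/camera_paths/keyframes.json --output-path renders/Bear.mp4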
We can then use the command below to evaluate:
ns-eval --load-config=outputs/Bear/splatfacto/2024-04-08_193703/config.yml --output-path=output.json
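The resulting JSON can be pretty-printed directly; it typically reports the averaged image metrics such as PSNR, SSIM, and LPIPS (field names assumed from a typical run):
# Inspect the evaluation metrics
python -m json.tool output.json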
3.3 Results
The first dataset is captured around two bears. The PSNR from the evaluation is okay, but the rendered video is not very good; there are a lot of floating artifacts (floaters).
The other dataset was captured by a drone:
Due to the shooting angle, only part of the overall scene is visible, but it offers better clarity than my previous large-scale NeRF-based renderings.
4. Summary
Linux reports far fewer errors than Windows, and nerfstudio is very concise: each method is essentially compressed into a single Python file.