I. Compilation issues
1.submodules/simple-knn/simple_knn.cu(90): error: identifier "FLT_MAX" is undefined
me.minn = { FLT_MAX, FLT_MAX, FLT_MAX };
While deploying the PhotoReg project, compiling simple_knn failed with the following error:
(photoreg) lee@lee-System-Product-Name:~/project/PhotoRegCodes$ pip install submodules/simple-knn
Processing ./submodules/simple-knn
Preparing metadata (setup.py) ... done
Building wheels for collected packages: simple_knn
Building wheel for simple_knn (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [113 lines of output]
running bdist_wheel
running build
running build_ext
/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py:414: UserWarning: The detected CUDA version (12.5) has a minor version mismatch with the version that was used to compile PyTorch (12.4). Most likely this shouldn't be a problem.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no g++ version bounds defined for CUDA version 12.5
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
building 'simple_knn._C' extension
/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py:1965: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Emitting ninja build file /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/simple_knn.o.d -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/TH -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lee/miniconda3/envs/photoreg/include/python3.8 -c -c /home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu -o /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/simple_knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17
FAILED: /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/simple_knn.o
/usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/simple_knn.o.d -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/TH -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lee/miniconda3/envs/photoreg/include/python3.8 -c -c /home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu -o /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/simple_knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17
/home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu:23: warning: "__CUDACC__" redefined
23 | #define __CUDACC__
|
<command-line>: note: this is the location of the previous definition
/home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu:23: warning: "__CUDACC__" redefined
23 | #define __CUDACC__
|
<command-line>: note: this is the location of the previous definition
/home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu(90): error: identifier "FLT_MAX" is undefined
me.minn = { FLT_MAX, FLT_MAX, FLT_MAX };
^
/home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu(154): error: identifier "FLT_MAX" is undefined
float best[3] = { FLT_MAX, FLT_MAX, FLT_MAX };
^
2 errors detected in the compilation of "/home/lee/project/PhotoRegCodes/submodules/simple-knn/simple_knn.cu".
[2/3] c++ -MMD -MF /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/ext.o.d -pthread -B /home/lee/miniconda3/envs/photoreg/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/lee/miniconda3/envs/photoreg/include -fPIC -O2 -isystem /home/lee/miniconda3/envs/photoreg/include -fPIC -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/TH -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lee/miniconda3/envs/photoreg/include/python3.8 -c -c /home/lee/project/PhotoRegCodes/submodules/simple-knn/ext.cpp -o /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/ext.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++17
[3/3] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/spatial.o.d -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/TH -I/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lee/miniconda3/envs/photoreg/include/python3.8 -c -c /home/lee/project/PhotoRegCodes/submodules/simple-knn/spatial.cu -o /home/lee/project/PhotoRegCodes/submodules/simple-knn/build/temp.linux-x86_64-cpython-38/spatial.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17
/home/lee/project/PhotoRegCodes/submodules/simple-knn/spatial.cu: In function ‘at::Tensor distCUDA2(const at::Tensor&)’:
/home/lee/project/PhotoRegCodes/submodules/simple-knn/spatial.cu:23:64: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
23 | SimpleKNN::knn(P, (float3*)points.contiguous().data<float>(), means.contiguous().data<float>());
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
247 | T * data() const {
| ^ ~~
/home/lee/project/PhotoRegCodes/submodules/simple-knn/spatial.cu:23:102: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
23 | SimpleKNN::knn(P, (float3*)points.contiguous().data<float>(), means.contiguous().data<float>());
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
247 | T * data() const {
| ^ ~~
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 2105, in _run_ninja_build
subprocess.run(
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/home/lee/project/PhotoRegCodes/submodules/simple-knn/setup.py", line 21, in <module>
setup(
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 183, in setup
return run_commands(dist)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 199, in run_commands
dist.run_commands()
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands
self.run_command(cmd)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/command/bdist_wheel.py", line 410, in run
self.run_command("build")
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 98, in run
_build_ext.run(self)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 866, in build_extensions
build_ext.build_extensions(self)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 476, in build_extensions
self._build_extensions_serial()
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 502, in _build_extensions_serial
self.build_extension(ext)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 263, in build_extension
_build_ext.build_extension(self, ext)
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 557, in build_extension
objects = self.compiler.compile(
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 679, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1785, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/home/lee/miniconda3/envs/photoreg/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 2121, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for simple_knn
Running setup.py clean for simple_knn
Failed to build simple_knn
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (simple_knn)
According to the error messages, the identifier FLT_MAX used at line 90 of simple_knn.cu (me.minn = { FLT_MAX, FLT_MAX, FLT_MAX }) is undefined. Open that file, add an include for the header that defines FLT_MAX (float.h) near the top, and then re-run pip install submodules/simple-knn to finish building and installing.
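For reference, the fix is a single added include near the top of submodules/simple-knn/simple_knn.cu (the comment is mine, not part of the file):

#include <float.h>   // defines FLT_MAX; including <cfloat> instead also works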
II. Python third-party libraries and conda
1.InvalidSpec: The package "nvidia/linux-64::cuda-compiler==12.6.2=0" is not available for the specified platform
Running the command to set up the PhotoReg environment failed with an error:
(photoreg) lee@lee-System-Product-Name:~/project/PhotoRegCodes$ conda install pytorch torchvision torchaudio cuda-toolkit=11.8 -c pytorch -c nvidia
Channels:
- pytorch
- nvidia
- conda-forge
- http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
- defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: failed
InvalidSpec: The package "nvidia/linux-64::cuda-compiler==12.6.2=0" is not available for the specified platform
The command above tries to install the CUDA toolkit together with a matching PyTorch. Since a global CUDA installation already exists on the system, there is no need to install some of those packages again, such as the CUDA compiler.
(1) Check the available PyTorch versions and how they map to CUDA versions
(photoreg) lee@lee-System-Product-Name:~/project/PhotoRegCodes$ conda search pytorch -c pytorch
Loading channels: done (excerpt)
# Name Version Build Channel
pytorch 2.3.0 cpu_generic_py310h0ab6cb9_0 conda-forge
pytorch 2.3.0 cpu_generic_py310ha4c588e_1 conda-forge
pytorch 2.3.0 cpu_generic_py311h255d53b_0 conda-forge
pytorch 2.3.0 cpu_generic_py311h8ca351a_1 conda-forge
pytorch 2.3.0 cpu_generic_py312h2f1fc2b_1 conda-forge
pytorch 2.3.0 cpu_generic_py312hab8db8b_0 conda-forge
pytorch 2.3.0 cpu_generic_py38h1fa1760_1 conda-forge
pytorch 2.3.0 cpu_generic_py38hbe06502_0 conda-forge
pytorch 2.3.0 cpu_generic_py39h87eea44_0 conda-forge
pytorch 2.3.0 cpu_generic_py39he75b87c_1 conda-forge
pytorch 2.3.0 cpu_mkl_py310h75865b9_101 conda-forge
pytorch 2.3.0 cpu_mkl_py310hcb3bde6_100 conda-forge
pytorch 2.3.0 cpu_mkl_py311h9835ca6_100 conda-forge
pytorch 2.3.0 cpu_mkl_py311hcb16b95_101 conda-forge
pytorch 2.3.0 cpu_mkl_py312h3b258cc_101 conda-forge
pytorch 2.3.0 cpu_mkl_py312he7b903e_100 conda-forge
pytorch 2.3.0 cpu_mkl_py38h1c8c993_100 conda-forge
pytorch 2.3.0 cpu_mkl_py38h51400c9_101 conda-forge
pytorch 2.3.0 cpu_mkl_py39h85c4de8_101 conda-forge
pytorch 2.3.0 cpu_mkl_py39hb6713ec_100 conda-forge
pytorch 2.3.0 cpu_py310h08bb5f6_1 pkgs/main
pytorch 2.3.0 cpu_py310h1ce4368_1 pkgs/main
pytorch 2.3.0 cpu_py310h2a1f63a_0 pkgs/main
pytorch 2.3.0 cpu_py310hcb105a3_0 pkgs/main
pytorch 2.3.0 cpu_py311h0178f48_1 pkgs/main
pytorch 2.3.0 cpu_py311h6fe12db_1 pkgs/main
pytorch 2.3.0 cpu_py311h991c31c_0 pkgs/main
pytorch 2.3.0 cpu_py311ha0631a7_0 pkgs/main
pytorch 2.3.0 cpu_py312h1f09096_0 pkgs/main
pytorch 2.3.0 cpu_py312h544eda6_0 pkgs/main
pytorch 2.3.0 cpu_py312h5a90aa3_1 pkgs/main
pytorch 2.3.0 cpu_py312hde650b8_1 pkgs/main
pytorch 2.3.0 cpu_py38h08bb5f6_1 pkgs/main
pytorch 2.3.0 cpu_py38h1ce4368_1 pkgs/main
pytorch 2.3.0 cpu_py38h2a1f63a_0 pkgs/main
pytorch 2.3.0 cpu_py38hcb105a3_0 pkgs/main
pytorch 2.3.0 cpu_py39h08bb5f6_1 pkgs/main
pytorch 2.3.0 cpu_py39h1ce4368_1 pkgs/main
pytorch 2.3.0 cpu_py39h2a1f63a_0 pkgs/main
pytorch 2.3.0 cpu_py39hcb105a3_0 pkgs/main
pytorch 2.3.0 cuda118_py310h6f85f1b_300 conda-forge
pytorch 2.3.0 cuda118_py310h954aa82_301 conda-forge
pytorch 2.3.0 cuda118_py311h4ee7bbc_301 conda-forge
pytorch 2.3.0 cuda118_py311h6c9cb27_300 conda-forge
pytorch 2.3.0 cuda118_py312h3690e1b_301 conda-forge
pytorch 2.3.0 cuda118_py312h4faf3bd_300 conda-forge
pytorch 2.3.0 cuda118_py38h25d1429_300 conda-forge
pytorch 2.3.0 cuda118_py38h32d93a2_301 conda-forge
pytorch 2.3.0 cuda118_py39hbf661d7_301 conda-forge
pytorch 2.3.0 cuda118_py39hd44be3b_300 conda-forge
pytorch 2.3.0 cuda120_py310h2c91c31_301 conda-forge
pytorch 2.3.0 cuda120_py310h7891b24_300 conda-forge
pytorch 2.3.0 cuda120_py311h2667f23_300 conda-forge
pytorch 2.3.0 cuda120_py311hf6aebf0_301 conda-forge
pytorch 2.3.0 cuda120_py312h26b3cf7_301 conda-forge
pytorch 2.3.0 cuda120_py312hf9a1e0a_300 conda-forge
pytorch 2.3.0 cuda120_py38hc4689d7_301 conda-forge
pytorch 2.3.0 cuda120_py38heb61fd4_300 conda-forge
pytorch 2.3.0 cuda120_py39h17b67e0_301 conda-forge
pytorch 2.3.0 cuda120_py39h365aa7c_300 conda-forge
pytorch 2.3.0 gpu_cuda118py310h15c2a99_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py310h7338b40_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py310h796af20_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py310hb74dfbf_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py311h3118142_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py311h3911fe7_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py311h6b76543_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py311hd2d20a8_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py38h15c2a99_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py38h7338b40_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py38h796af20_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py38hb74dfbf_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py39h15c2a99_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py39h7338b40_100 pkgs/main
pytorch 2.3.0 gpu_cuda118py39h796af20_101 pkgs/main
pytorch 2.3.0 gpu_cuda118py39hb74dfbf_101 pkgs/main
pytorch 2.3.0 py3.10_cpu_0 pytorch
pytorch 2.3.0 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.0 py3.10_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.0 py3.11_cpu_0 pytorch
pytorch 2.3.0 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.0 py3.11_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.0 py3.12_cpu_0 pytorch
pytorch 2.3.0 py3.12_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.0 py3.12_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.0 py3.8_cpu_0 pytorch
pytorch 2.3.0 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.0 py3.8_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.0 py3.9_cpu_0 pytorch
pytorch 2.3.0 py3.9_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.0 py3.9_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.1 cpu_generic_py310ha4c588e_0 conda-forge
pytorch 2.3.1 cpu_generic_py311h8ca351a_0 conda-forge
pytorch 2.3.1 cpu_generic_py312h2f1fc2b_0 conda-forge
pytorch 2.3.1 cpu_generic_py38h1fa1760_0 conda-forge
pytorch 2.3.1 cpu_generic_py39he75b87c_0 conda-forge
pytorch 2.3.1 cpu_mkl_py310h75865b9_100 conda-forge
pytorch 2.3.1 cpu_mkl_py311hcb16b95_100 conda-forge
pytorch 2.3.1 cpu_mkl_py312h3b258cc_100 conda-forge
pytorch 2.3.1 cpu_mkl_py38h51400c9_100 conda-forge
pytorch 2.3.1 cpu_mkl_py39h85c4de8_100 conda-forge
pytorch 2.3.1 cuda118_py310he8d5cbe_300 conda-forge
pytorch 2.3.1 cuda118_py311h0047a46_300 conda-forge
pytorch 2.3.1 cuda118_py312h409cda2_300 conda-forge
pytorch 2.3.1 cuda118_py38h63640cd_300 conda-forge
pytorch 2.3.1 cuda118_py39hd3e083d_300 conda-forge
pytorch 2.3.1 cuda120_py310h2c91c31_300 conda-forge
pytorch 2.3.1 cuda120_py311hf6aebf0_300 conda-forge
pytorch 2.3.1 cuda120_py312h26b3cf7_300 conda-forge
pytorch 2.3.1 cuda120_py38hc4689d7_300 conda-forge
pytorch 2.3.1 cuda120_py39h17b67e0_300 conda-forge
pytorch 2.3.1 py3.10_cpu_0 pytorch
pytorch 2.3.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.1 py3.10_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.1 py3.11_cpu_0 pytorch
pytorch 2.3.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.1 py3.11_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.1 py3.12_cpu_0 pytorch
pytorch 2.3.1 py3.12_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.1 py3.12_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.1 py3.8_cpu_0 pytorch
pytorch 2.3.1 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.1 py3.8_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.3.1 py3.9_cpu_0 pytorch
pytorch 2.3.1 py3.9_cuda11.8_cudnn8.7.0_0 pytorch
pytorch 2.3.1 py3.9_cuda12.1_cudnn8.9.2_0 pytorch
pytorch 2.4.0 cpu_generic_py310h6ad04bf_1 conda-forge
pytorch 2.4.0 cpu_generic_py310ha4c588e_0 conda-forge
pytorch 2.4.0 cpu_generic_py311h7a8ff39_1 conda-forge
pytorch 2.4.0 cpu_generic_py311h8ca351a_0 conda-forge
pytorch 2.4.0 cpu_generic_py312h1576ffb_1 conda-forge
pytorch 2.4.0 cpu_generic_py312h2f1fc2b_0 conda-forge
pytorch 2.4.0 cpu_generic_py38h1fa1760_0 conda-forge
pytorch 2.4.0 cpu_generic_py38hbd07d99_1 conda-forge
pytorch 2.4.0 cpu_generic_py39h7552c89_1 conda-forge
pytorch 2.4.0 cpu_generic_py39he75b87c_0 conda-forge
pytorch 2.4.0 cpu_mkl_py310h0b5cf2a_101 conda-forge
pytorch 2.4.0 cpu_mkl_py310h75865b9_100 conda-forge
pytorch 2.4.0 cpu_mkl_py311h02aef37_101 conda-forge
pytorch 2.4.0 cpu_mkl_py311hcb16b95_100 conda-forge
pytorch 2.4.0 cpu_mkl_py312h31352b0_101 conda-forge
pytorch 2.4.0 cpu_mkl_py312h3b258cc_100 conda-forge
pytorch 2.4.0 cpu_mkl_py38h51400c9_100 conda-forge
pytorch 2.4.0 cpu_mkl_py38ha4c0195_101 conda-forge
pytorch 2.4.0 cpu_mkl_py39h060493f_101 conda-forge
pytorch 2.4.0 cpu_mkl_py39h85c4de8_100 conda-forge
pytorch 2.4.0 cuda118_py310h954aa82_300 conda-forge
pytorch 2.4.0 cuda118_py310h954aa82_301 conda-forge
pytorch 2.4.0 cuda118_py311h4ee7bbc_300 conda-forge
pytorch 2.4.0 cuda118_py311h4ee7bbc_301 conda-forge
pytorch 2.4.0 cuda118_py312h3690e1b_300 conda-forge
pytorch 2.4.0 cuda118_py312h3690e1b_301 conda-forge
pytorch 2.4.0 cuda118_py38h32d93a2_300 conda-forge
pytorch 2.4.0 cuda118_py38h32d93a2_301 conda-forge
pytorch 2.4.0 cuda118_py39hbf661d7_300 conda-forge
pytorch 2.4.0 cuda118_py39hbf661d7_301 conda-forge
pytorch 2.4.0 cuda120_py310h2c91c31_300 conda-forge
pytorch 2.4.0 cuda120_py310h2c91c31_301 conda-forge
pytorch 2.4.0 cuda120_py311hf6aebf0_300 conda-forge
pytorch 2.4.0 cuda120_py311hf6aebf0_301 conda-forge
pytorch 2.4.0 cuda120_py312h26b3cf7_300 conda-forge
pytorch 2.4.0 cuda120_py312h26b3cf7_301 conda-forge
pytorch 2.4.0 cuda120_py38hc4689d7_300 conda-forge
pytorch 2.4.0 cuda120_py38hc4689d7_301 conda-forge
pytorch 2.4.0 cuda120_py39h17b67e0_300 conda-forge
pytorch 2.4.0 cuda120_py39h17b67e0_301 conda-forge
pytorch 2.4.0 py3.10_cpu_0 pytorch
pytorch 2.4.0 py3.10_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.10_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.11_cpu_0 pytorch
pytorch 2.4.0 py3.11_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.12_cpu_0 pytorch
pytorch 2.4.0 py3.12_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.8_cpu_0 pytorch
pytorch 2.4.0 py3.8_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.8_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.8_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.9_cpu_0 pytorch
pytorch 2.4.0 py3.9_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.9_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.0 py3.9_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.1 cpu_generic_py310h6bb2ca9_1 conda-forge
pytorch 2.4.1 cpu_generic_py310h6bb2ca9_2 conda-forge
pytorch 2.4.1 cpu_generic_py310hae68ee8_3 conda-forge
pytorch 2.4.1 cpu_generic_py310hcbfaffa_0 conda-forge
pytorch 2.4.1 cpu_generic_py311h71636e0_0 conda-forge
pytorch 2.4.1 cpu_generic_py311hd3aefb3_3 conda-forge
pytorch 2.4.1 cpu_generic_py311he611d14_1 conda-forge
pytorch 2.4.1 cpu_generic_py311he611d14_2 conda-forge
pytorch 2.4.1 cpu_generic_py312h2b7556c_3 conda-forge
pytorch 2.4.1 cpu_generic_py312h411db4e_0 conda-forge
pytorch 2.4.1 cpu_generic_py312h916ba9d_1 conda-forge
pytorch 2.4.1 cpu_generic_py312h916ba9d_2 conda-forge
pytorch 2.4.1 cpu_generic_py313h30720f7_1 conda-forge
pytorch 2.4.1 cpu_generic_py313h30720f7_2 conda-forge
pytorch 2.4.1 cpu_generic_py313h72fb371_0 conda-forge
pytorch 2.4.1 cpu_generic_py313h8874172_3 conda-forge
pytorch 2.4.1 cpu_generic_py39h0079ae9_1 conda-forge
pytorch 2.4.1 cpu_generic_py39h0079ae9_2 conda-forge
pytorch 2.4.1 cpu_generic_py39h7d91780_0 conda-forge
pytorch 2.4.1 cpu_generic_py39hbaadbe5_3 conda-forge
pytorch 2.4.1 cpu_mkl_py310h1581fbd_100 conda-forge
pytorch 2.4.1 cpu_mkl_py310h218c519_103 conda-forge
pytorch 2.4.1 cpu_mkl_py310h4ef1421_101 conda-forge
pytorch 2.4.1 cpu_mkl_py310h4ef1421_102 conda-forge
pytorch 2.4.1 cpu_mkl_py311h4c611e5_101 conda-forge
pytorch 2.4.1 cpu_mkl_py311h4c611e5_102 conda-forge
pytorch 2.4.1 cpu_mkl_py311hb499fb8_100 conda-forge
pytorch 2.4.1 cpu_mkl_py311hb71f701_103 conda-forge
pytorch 2.4.1 cpu_mkl_py312h1b0a35b_103 conda-forge
pytorch 2.4.1 cpu_mkl_py312ha1f5ba4_101 conda-forge
pytorch 2.4.1 cpu_mkl_py312ha1f5ba4_102 conda-forge
pytorch 2.4.1 cpu_mkl_py312hf535c18_100 conda-forge
pytorch 2.4.1 cpu_mkl_py313hbc6f0e9_101 conda-forge
pytorch 2.4.1 cpu_mkl_py313hbc6f0e9_102 conda-forge
pytorch 2.4.1 cpu_mkl_py313he7ed12f_103 conda-forge
pytorch 2.4.1 cpu_mkl_py313hf50a166_100 conda-forge
pytorch 2.4.1 cpu_mkl_py39h2fcb8f5_101 conda-forge
pytorch 2.4.1 cpu_mkl_py39h2fcb8f5_102 conda-forge
pytorch 2.4.1 cpu_mkl_py39h32901ce_100 conda-forge
pytorch 2.4.1 cpu_mkl_py39ha1b8702_103 conda-forge
pytorch 2.4.1 cuda118_py310h22ea9a0_300 conda-forge
pytorch 2.4.1 cuda118_py310h8b36b8a_303 conda-forge
pytorch 2.4.1 cuda118_py310hd65b3e3_301 conda-forge
pytorch 2.4.1 cuda118_py310hd65b3e3_302 conda-forge
pytorch 2.4.1 cuda118_py311h156befe_303 conda-forge
pytorch 2.4.1 cuda118_py311h1771f17_300 conda-forge
pytorch 2.4.1 cuda118_py311hb6eb748_301 conda-forge
pytorch 2.4.1 cuda118_py311hb6eb748_302 conda-forge
pytorch 2.4.1 cuda118_py312h02e3f75_303 conda-forge
pytorch 2.4.1 cuda118_py312h1e5d2cd_301 conda-forge
pytorch 2.4.1 cuda118_py312h1e5d2cd_302 conda-forge
pytorch 2.4.1 cuda118_py312he805367_300 conda-forge
pytorch 2.4.1 cuda118_py313h0a01257_303 conda-forge
pytorch 2.4.1 cuda118_py313h49748f1_301 conda-forge
pytorch 2.4.1 cuda118_py313h49748f1_302 conda-forge
pytorch 2.4.1 cuda118_py313h5b1df02_300 conda-forge
pytorch 2.4.1 cuda118_py39h31bdb47_303 conda-forge
pytorch 2.4.1 cuda118_py39h7622074_301 conda-forge
pytorch 2.4.1 cuda118_py39h7622074_302 conda-forge
pytorch 2.4.1 cuda118_py39hc022698_300 conda-forge
pytorch 2.4.1 cuda120_py310h5d94b2e_301 conda-forge
pytorch 2.4.1 cuda120_py310h5d94b2e_302 conda-forge
pytorch 2.4.1 cuda120_py310haf35510_300 conda-forge
pytorch 2.4.1 cuda120_py310hf7eb567_303 conda-forge
pytorch 2.4.1 cuda120_py311h5e7e484_300 conda-forge
pytorch 2.4.1 cuda120_py311h9de5d04_301 conda-forge
pytorch 2.4.1 cuda120_py311h9de5d04_302 conda-forge
pytorch 2.4.1 cuda120_py311he27b719_303 conda-forge
pytorch 2.4.1 cuda120_py312h257e401_300 conda-forge
pytorch 2.4.1 cuda120_py312h6defd05_303 conda-forge
pytorch 2.4.1 cuda120_py312hf8d5e09_301 conda-forge
pytorch 2.4.1 cuda120_py312hf8d5e09_302 conda-forge
pytorch 2.4.1 cuda120_py313h37013bb_303 conda-forge
pytorch 2.4.1 cuda120_py313h3885a58_300 conda-forge
pytorch 2.4.1 cuda120_py313h6ccb88c_301 conda-forge
pytorch 2.4.1 cuda120_py313h6ccb88c_302 conda-forge
pytorch 2.4.1 cuda120_py39h13e8a3a_300 conda-forge
pytorch 2.4.1 cuda120_py39h2e0a0f3_303 conda-forge
pytorch 2.4.1 cuda120_py39hb75c377_301 conda-forge
pytorch 2.4.1 cuda120_py39hb75c377_302 conda-forge
pytorch 2.4.1 py3.10_cpu_0 pytorch
pytorch 2.4.1 py3.10_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.10_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.11_cpu_0 pytorch
pytorch 2.4.1 py3.11_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.12_cpu_0 pytorch
pytorch 2.4.1 py3.12_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.8_cpu_0 pytorch
pytorch 2.4.1 py3.8_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.8_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.8_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.9_cpu_0 pytorch
pytorch 2.4.1 py3.9_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.9_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.4.1 py3.9_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.10_cpu_0 pytorch
pytorch 2.5.0 py3.10_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.10_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.11_cpu_0 pytorch
pytorch 2.5.0 py3.11_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.12_cpu_0 pytorch
pytorch 2.5.0 py3.12_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.9_cpu_0 pytorch
pytorch 2.5.0 py3.9_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.9_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.0 py3.9_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.1 cpu_generic_py310hae68ee8_0 conda-forge
pytorch 2.5.1 cpu_generic_py310hae68ee8_2 conda-forge
pytorch 2.5.1 cpu_generic_py310hae68ee8_3 conda-forge
pytorch 2.5.1 cpu_generic_py311hd3aefb3_0 conda-forge
pytorch 2.5.1 cpu_generic_py311hd3aefb3_2 conda-forge
pytorch 2.5.1 cpu_generic_py311hd3aefb3_3 conda-forge
pytorch 2.5.1 cpu_generic_py312h2b7556c_0 conda-forge
pytorch 2.5.1 cpu_generic_py312h2b7556c_2 conda-forge
pytorch 2.5.1 cpu_generic_py312h2b7556c_3 conda-forge
pytorch 2.5.1 cpu_generic_py313h8874172_0 conda-forge
pytorch 2.5.1 cpu_generic_py313h8874172_2 conda-forge
pytorch 2.5.1 cpu_generic_py313h8874172_3 conda-forge
pytorch 2.5.1 cpu_generic_py39hbaadbe5_0 conda-forge
pytorch 2.5.1 cpu_generic_py39hbaadbe5_2 conda-forge
pytorch 2.5.1 cpu_generic_py39hbaadbe5_3 conda-forge
pytorch 2.5.1 cpu_mkl_py310h218c519_100 conda-forge
pytorch 2.5.1 cpu_mkl_py310h218c519_102 conda-forge
pytorch 2.5.1 cpu_mkl_py310h89e431c_103 conda-forge
pytorch 2.5.1 cpu_mkl_py311hb71f701_100 conda-forge
pytorch 2.5.1 cpu_mkl_py311hb71f701_102 conda-forge
pytorch 2.5.1 cpu_mkl_py311hc928171_103 conda-forge
pytorch 2.5.1 cpu_mkl_py312h01fbe9c_103 conda-forge
pytorch 2.5.1 cpu_mkl_py312h1b0a35b_100 conda-forge
pytorch 2.5.1 cpu_mkl_py312h1b0a35b_102 conda-forge
pytorch 2.5.1 cpu_mkl_py313h9aca207_103 conda-forge
pytorch 2.5.1 cpu_mkl_py313he7ed12f_100 conda-forge
pytorch 2.5.1 cpu_mkl_py313he7ed12f_102 conda-forge
pytorch 2.5.1 cpu_mkl_py39h5c24141_103 conda-forge
pytorch 2.5.1 cpu_mkl_py39ha1b8702_100 conda-forge
pytorch 2.5.1 cpu_mkl_py39ha1b8702_102 conda-forge
pytorch 2.5.1 cuda118_py310h8b36b8a_300 conda-forge
pytorch 2.5.1 cuda118_py310h8b36b8a_302 conda-forge
pytorch 2.5.1 cuda118_py310h920319e_303 conda-forge
pytorch 2.5.1 cuda118_py311h156befe_300 conda-forge
pytorch 2.5.1 cuda118_py311h156befe_302 conda-forge
pytorch 2.5.1 cuda118_py311hb9b6578_303 conda-forge
pytorch 2.5.1 cuda118_py312h02e3f75_300 conda-forge
pytorch 2.5.1 cuda118_py312h02e3f75_302 conda-forge
pytorch 2.5.1 cuda118_py312h919e71f_303 conda-forge
pytorch 2.5.1 cuda118_py313h0a01257_300 conda-forge
pytorch 2.5.1 cuda118_py313h0a01257_302 conda-forge
pytorch 2.5.1 cuda118_py313h40cdc2d_303 conda-forge
pytorch 2.5.1 cuda118_py39h31bdb47_300 conda-forge
pytorch 2.5.1 cuda118_py39h31bdb47_302 conda-forge
pytorch 2.5.1 cuda118_py39h89da91e_303 conda-forge
pytorch 2.5.1 cuda120_py310h9d63651_303 conda-forge
pytorch 2.5.1 cuda120_py310hf7eb567_300 conda-forge
pytorch 2.5.1 cuda120_py310hf7eb567_302 conda-forge
pytorch 2.5.1 cuda120_py311h7a71dd8_303 conda-forge
pytorch 2.5.1 cuda120_py311he27b719_300 conda-forge
pytorch 2.5.1 cuda120_py311he27b719_302 conda-forge
pytorch 2.5.1 cuda120_py312h6defd05_300 conda-forge
pytorch 2.5.1 cuda120_py312h6defd05_302 conda-forge
pytorch 2.5.1 cuda120_py312hd285dae_303 conda-forge
pytorch 2.5.1 cuda120_py313h37013bb_300 conda-forge
pytorch 2.5.1 cuda120_py313h37013bb_302 conda-forge
pytorch 2.5.1 cuda120_py313h869cad7_303 conda-forge
pytorch 2.5.1 cuda120_py39h2e0a0f3_300 conda-forge
pytorch 2.5.1 cuda120_py39h2e0a0f3_302 conda-forge
pytorch 2.5.1 cuda120_py39hfb32a81_303 conda-forge
pytorch 2.5.1 cuda126_py310h4acf282_301 conda-forge
pytorch 2.5.1 cuda126_py310h4acf282_302 conda-forge
pytorch 2.5.1 cuda126_py310he4c8055_303 conda-forge
pytorch 2.5.1 cuda126_py311h8adc4d4_301 conda-forge
pytorch 2.5.1 cuda126_py311h8adc4d4_302 conda-forge
pytorch 2.5.1 cuda126_py311hd4abd4e_303 conda-forge
pytorch 2.5.1 cuda126_py312h7c58cdf_303 conda-forge
pytorch 2.5.1 cuda126_py312hb0dc81f_301 conda-forge
pytorch 2.5.1 cuda126_py312hb0dc81f_302 conda-forge
pytorch 2.5.1 cuda126_py313ha14af55_301 conda-forge
pytorch 2.5.1 cuda126_py313ha14af55_302 conda-forge
pytorch 2.5.1 cuda126_py313he9a4f5b_303 conda-forge
pytorch 2.5.1 cuda126_py39h07e2c9a_303 conda-forge
pytorch 2.5.1 cuda126_py39hfe5c751_301 conda-forge
pytorch 2.5.1 cuda126_py39hfe5c751_302 conda-forge
pytorch 2.5.1 py3.10_cpu_0 pytorch
pytorch 2.5.1 py3.10_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.10_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.11_cpu_0 pytorch
pytorch 2.5.1 py3.11_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.12_cpu_0 pytorch
pytorch 2.5.1 py3.12_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.9_cpu_0 pytorch
pytorch 2.5.1 py3.9_cuda11.8_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.9_cuda12.1_cudnn9.1.0_0 pytorch
pytorch 2.5.1 py3.9_cuda12.4_cudnn9.1.0_0 pytorch
(2) The CUDA installed locally is 12.5 and the author's commands use Python 3.8, so based on the version list above, pick a PyTorch build that satisfies both constraints and install it directly; the handful of CUDA packages that PyTorch needs are installed automatically along the way.
(base) lee@lee-System-Product-Name:~/project/PhotoRegCodes$ conda create -n photoreg pytorch=2.4.1=py3.8_cuda12.4_cudnn9.1.0_0 torchvision torchaudio -c pytorch -c nvidia
Channels:
- pytorch
- nvidia
- conda-forge
- http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
- defaults
Platform: linux-64
Collecting package metadata (repodata.json): - Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /pkgs/r/noarch/repodata.json.zst
\ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /nvidia/noarch/repodata.json.zst
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /conda-forge/linux-64/repodata.json.zst
/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /conda-forge/noarch/repodata.json.zst
| Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /conda-forge/linux-64/repodata.json.zst
/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /nvidia/noarch/repodata.json.zst
Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /conda-forge/noarch/repodata.json.zst
- Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /pkgs/r/noarch/repodata.json.zst
\ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /pkgs/r/noarch/repodata.json.zst
| Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /conda-forge/noarch/repodata.json.zst
Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /conda-forge/linux-64/repodata.json.zst
failed
ProxyError: Conda cannot proceed due to an error in your proxy configuration.
Check for typos and other configuration errors in any '.netrc' file in your home directory,
any environment variables ending in '_PROXY', and any other system-wide proxy
configuration settings.
Network problems then kept the installation from finishing, so I configured several domestic (China) mirror sources.
(3) Configure domestic mirror sources for conda
conda添加清华镜像源_conda配置清华镜像源-CSDN博客
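For reference, the usual way to add the Tsinghua (TUNA) mirror channels is a handful of conda config commands (the channel URLs below follow the commonly published TUNA layout; adjust as needed):

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
conda config --set show_channel_urls yes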
With the mirrors in place, PyTorch and the CUDA packages it pulls in were finally installed. Redundant packages such as the CUDA compiler do not need to go into the environment: if the conda environment does not provide one, the build simply falls back to the global, system-level CUDA compiler.
2. QObject::moveToThread: Current thread(...) is not the object's thread. Cannot move to target thread(
This error appeared while running PhotoReg. The common explanation online is that the PyQt library from conda conflicts with the pip-installed OpenCV, and three workarounds are usually suggested.
My OpenCV version was 4.10, but none of those methods worked on my machine, so I simply downgraded OpenCV, going all the way down to a 4.2 release before it ran normally!
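For reference, the downgrade is just a pip reinstall of an older wheel (the exact 4.2.x build number below is illustrative; pick whichever 4.2 release pip offers for your Python):

pip install opencv-python==4.2.0.34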
3.ERROR: Exception in ASGI application
After setting up TripoSR, running python gradio_app.py failed with this error:
(triposr) lee@lee-System-Product-Name:~/project/TripoSR$ python gradio_app.py
/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/numba/np/ufunc/parallel.py:371: NumbaWarning: The TBB threading layer requires TBB version 2021 update 6 or later i.e., TBB_INTERFACE_VERSION >= 12060. Found TBB_INTERFACE_VERSION = 12050. The TBB threading layer is disabled.
warnings.warn(problem)
/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 4.8.0, however version 4.44.1 is available, please upgrade.
--------
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/type_adapter.py", line 270, in _init_core_attrs
self._core_schema = _getattr_no_parents(self._type, '__pydantic_core_schema__')
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/type_adapter.py", line 112, in _getattr_no_parents
raise AttributeError(attribute)
AttributeError: __pydantic_core_schema__
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 406, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/middleware/cors.py", line 93, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/middleware/cors.py", line 144, in simple_response
await self.app(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/routing.py", line 291, in app
solved_result = await solve_dependencies(
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 666, in solve_dependencies
) = await request_body_to_args( # body_params checked above
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 891, in request_body_to_args
fields_to_extract = get_cached_model_fields(first_field.type_)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/_compat.py", line 659, in get_cached_model_fields
return get_model_fields(model)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/_compat.py", line 285, in get_model_fields
return [
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/_compat.py", line 286, in <listcomp>
ModelField(field_info=field_info, name=name)
File "<string>", line 6, in __init__
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/fastapi/_compat.py", line 111, in __post_init__
self._type_adapter: TypeAdapter[Any] = TypeAdapter(
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/type_adapter.py", line 257, in __init__
self._init_core_attrs(rebuild_mocks=False)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/type_adapter.py", line 135, in wrapped
return func(self, *args, **kwargs)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/type_adapter.py", line 277, in _init_core_attrs
self._core_schema = _get_schema(self._type, config_wrapper, parent_depth=self._parent_depth)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/type_adapter.py", line 95, in _get_schema
schema = gen.generate_schema(type_)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 655, in generate_schema
schema = self._generate_schema_inner(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 908, in _generate_schema_inner
return self._annotated_schema(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2028, in _annotated_schema
schema = self._apply_annotations(source_type, annotations)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2107, in _apply_annotations
schema = get_inner_schema(source_type)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_schema_generation_shared.py", line 83, in __call__
schema = self._handler(source_type)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2189, in new_handler
schema = metadata_get_schema(source, get_inner_schema)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2185, in <lambda>
lambda source, handler: handler(source)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_schema_generation_shared.py", line 83, in __call__
schema = self._handler(source_type)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2088, in inner_handler
schema = self._generate_schema_inner(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 929, in _generate_schema_inner
return self.match_type(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1029, in match_type
return self._match_generic_type(obj, origin)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1058, in _match_generic_type
return self._union_schema(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1378, in _union_schema
choices.append(self.generate_schema(arg))
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 655, in generate_schema
schema = self._generate_schema_inner(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 929, in _generate_schema_inner
return self.match_type(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1038, in match_type
return self._unknown_type_schema(obj)
File "/home/lee/miniconda3/envs/triposr/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 558, in _unknown_type_schema
raise PydanticSchemaGenerationError(
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'starlette.requests.Request'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
This error is caused by a mismatch between the gradio version and the pydantic version.
Upgrading to suitable versions fixes it, but it can take several attempts.
I checked requirements.txt: the author recommends Python >3.8 while mine is 3.10, so upgrading only gradio conflicts with the older pinned versions of the other libraries, for example:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tokenizers 0.14.1 requires huggingface_hub<0.18,>=0.16.4, but you have huggingface-hub 0.26.2 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
gradio 4.43.0 requires huggingface-hub>=0.19.3, but you have huggingface-hub 0.17.0 which is incompatible.
gradio-client 1.3.0 requires huggingface-hub>=0.19.3, but you have huggingface-hub 0.17.0 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tokenizers 0.14.1 requires huggingface_hub<0.18,>=0.16.4, but you have huggingface-hub 0.26.2 which is incompatible.
So my solution was to upgrade all of the current libraries in one go and let pip work out the version relationships:
pip install --upgrade -r requirements.txt
After that it ran without errors.
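To confirm which versions ended up installed, a quick check like this works:

pip list | grep -Ei "gradio|pydantic|huggingface"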
III. Python code bugs
1. Corrupted image files: OSError: image file is truncated (7 bytes not processed)
【Bug解决】OSError: image file is truncated (7 bytes not processed)-CSDN博客
That blogger also wrote a script to check whether images in a folder are corrupted, which is very useful!
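For reference, a minimal sketch of the same idea using Pillow (my own sketch, not the blogger's script; the folder path and extensions are placeholders):

import os
from PIL import Image

def find_broken_images(folder, exts=(".png", ".jpg", ".jpeg")):
    broken = []
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(exts):
            continue
        path = os.path.join(folder, name)
        try:
            with Image.open(path) as img:
                img.verify()        # quick header/consistency check
            with Image.open(path) as img:
                img.load()          # full decode, catches truncated files
        except Exception as e:
            broken.append((path, repr(e)))
    return broken

if __name__ == "__main__":
    for path, err in find_broken_images("./input"):
        print(path, err)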
2.RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
This error appeared while running the PhotoReg code. The cause is that the pretrained weight file being loaded is corrupted or incomplete. Earlier in the code, torch.hub.load() is used to download the DINOv2 model, and that download was probably interrupted partway, so you need to find the download cache path, delete the partially downloaded files, and download again. The call in question is:
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
You can ask GPT where the cache lives; by default it is ~/.cache/torch/hub on Linux.
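A quick way to locate it programmatically (a small sketch; the checkpoints subfolder is the usual torch.hub convention):

import os
import torch

hub_dir = torch.hub.get_dir()                    # usually ~/.cache/torch/hub
print("torch.hub cache:", hub_dir)
ckpt_dir = os.path.join(hub_dir, "checkpoints")  # partially downloaded weights end up here
if os.path.isdir(ckpt_dir):
    print(os.listdir(ckpt_dir))                  # delete the broken .pth file, then re-run torch.hub.load()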
3.AttributeError: 'PosixPath' object has no attribute 'lower'
This error came up in the PhotoReg code after switching to manually supplied input images for matching the two splats:
Using cache found in /home/lee/.cache/torch/hub/facebookresearch_dinov2_main
/home/lee/.cache/torch/hub/facebookresearch_dinov2_main/dinov2/layers/swiglu_ffn.py:51: UserWarning: xFormers is not available (SwiGLU)
warnings.warn("xFormers is not available (SwiGLU)")
/home/lee/.cache/torch/hub/facebookresearch_dinov2_main/dinov2/layers/attention.py:33: UserWarning: xFormers is not available (Attention)
warnings.warn("xFormers is not available (Attention)")
/home/lee/.cache/torch/hub/facebookresearch_dinov2_main/dinov2/layers/block.py:40: UserWarning: xFormers is not available (Block)
warnings.warn("xFormers is not available (Block)")
/home/lee/project/PhotoRegCodes/dust3r/cloud_opt/base_opt.py:275: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@torch.cuda.amp.autocast(enabled=False)
Optimizing
>> Loading a list of 2 images
Traceback (most recent call last):
File "main_input_image.py", line 89, in <module>
iniR, iniT = coarseReg(args.img1_path, args.img2_path)
File "/home/lee/project/PhotoRegCodes/coarseReg.py", line 49, in coarseReg
imgs = load_images(paths, size=image_size)
File "/home/lee/project/PhotoRegCodes/dust3r/utils/image.py", line 96, in load_images
if not path.lower().endswith(supported_images_extensions):
AttributeError: 'PosixPath' object has no attribute 'lower'
Going to the indicated location, PhotoRegCodes/dust3r/utils/image.py, line 96, the cause is that the image-path arguments added in the driver script were declared with type set to Path, as follows:
parser.add_argument("--img2_path", type=Path, default=None)
Changing the type here to str restores normal behaviour; see the sketch below for the two equivalent fixes.
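A minimal sketch (only the --img2_path line and the coarseReg call come from the original code; the rest is illustrative):

# Option 1: declare the argument as a plain string instead of pathlib.Path
parser.add_argument("--img2_path", type=str, default=None)

# Option 2: keep Path in the script and convert to str at the call site,
# since dust3r's load_images calls .lower() on each path
iniR, iniT = coarseReg(str(args.img1_path), str(args.img2_path))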
IV. COLMAP-related bugs
1. When triangulating against an empty points3D.txt and an images.txt containing only pose information (no feature-point/3D-point correspondences), colmap point_triangulator aborts with: terminate called after throwing an instance of 'std::out_of_range'
(base) lee@lee-System-Product-Name:/media/lee/软件/project_exp/粮仓渲染/liangchangrender$ colmap point_triangulator --database_path ./distorted/database.db --image_path ./input/ --input_path ./distorted/sparse/1/ --output_path ./distorted/sparse/0
I1128 22:07:43.246433 3819093 misc.cc:198]
==============================================================================
Loading model
==============================================================================
I1128 22:07:43.502694 3819093 incremental_mapper.cc:225] Loading database
I1128 22:07:43.691694 3819093 database_cache.cc:65] Loading cameras...
I1128 22:07:43.727530 3819093 database_cache.cc:75] 1 in 0.036s
I1128 22:07:43.727545 3819093 database_cache.cc:83] Loading matches...
I1128 22:07:43.880146 3819093 database_cache.cc:89] 94523 in 0.153s
I1128 22:07:43.880167 3819093 database_cache.cc:105] Loading images...
I1128 22:07:43.960179 3819093 database_cache.cc:155] 2477 in 0.080s (connected 2463)
I1128 22:07:43.960522 3819093 database_cache.cc:166] Building correspondence graph...
I1128 22:07:45.373440 3819093 database_cache.cc:195] in 1.413s (ignored 0)
I1128 22:07:45.379315 3819093 timer.cc:91] Elapsed time: 0.028 [minutes]
terminate called after throwing an instance of 'std::out_of_range'
what(): _Map_base::at
*** Aborted at 1732802865 (unix time) try "date -d @1732802865" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGABRT (@0x3e8003a4655) received by PID 3819093 (TID 0x727cdddb5000) from PID 3819093; stack trace: ***
@ 0x727ce3cd9046 (unknown)
@ 0x727ce1642520 (unknown)
@ 0x727ce16969fc pthread_kill
@ 0x727ce1642476 raise
@ 0x727ce16287f3 abort
@ 0x727ce1aa2b9e (unknown)
@ 0x727ce1aae20c (unknown)
@ 0x727ce1aae277 std::terminate()
@ 0x727ce1aae4d8 __cxa_throw
@ 0x727ce1aa54a0 std::__throw_out_of_range()
@ 0x5c52cbb5b232 colmap::ObservationManager::ObservationManager()
@ 0x5c52cbb4e036 colmap::IncrementalMapper::BeginReconstruction()
@ 0x5c52cbada3db colmap::IncrementalMapperController::TriangulateReconstruction()
@ 0x5c52cb9fe700 colmap::RunPointTriangulatorImpl()
@ 0x5c52cb9fed01 colmap::RunPointTriangulator()
@ 0x5c52cb9e123b main
@ 0x727ce1629d90 (unknown)
@ 0x727ce1629e40 __libc_start_main
@ 0x5c52cb9e7ac5 _start
Aborted (core dumped)
This happens because some pure-black images from the image set were added to database.db. The offending black images must be deleted, and note that their pose entries must also be removed from images.txt!!! (If the cleanup is incomplete, the next run fails with the error below, "*.png does not exist", meaning images.txt still lists an image that has already been deleted from the dataset.) After deleting them, reopen COLMAP and redo feature extraction and matching to produce a new database.db.
On that basis, re-run colmap point_triangulator.
It then reports that image 0712.png (and in fact 0239.png as well) does not exist:
(base) lee@lee-System-Product-Name:/media/lee/软件/project_exp/粮仓渲染/liangchangrender$ colmap point_triangulator --database_path ./distorted/database.db --image_path ./input/ --input_path ./distorted/sparse/1/ --output_path ./distorted/sparse/0
I1128 22:38:13.733253 3951415 misc.cc:198]
==============================================================================
Loading model
==============================================================================
E1128 22:38:14.055574 3951415 reconstruction.cc:445] Image with name 0712.png does not exist in database
terminate called after throwing an instance of 'std::invalid_argument'
what(): [reconstruction.cc:445] Image with name 0712.png does not exist in database
*** Aborted at 1732804694 (unix time) try "date -d @1732804694" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGABRT (@0x3e8003c4b37) received by PID 3951415 (TID 0x72f7f757a000) from PID 3951415; stack trace: ***
@ 0x72f7fd357046 (unknown)
@ 0x72f7fae42520 (unknown)
@ 0x72f7fae969fc pthread_kill
@ 0x72f7fae42476 raise
@ 0x72f7fae287f3 abort
@ 0x72f7fb2a2b9e (unknown)
@ 0x72f7fb2ae20c (unknown)
@ 0x72f7fb2ae277 std::terminate()
@ 0x72f7fb2ae4d8 __cxa_throw
@ 0x5fda2980c176 _ZN6colmap14Reconstruction28TranscribeImageIdsToDatabaseERKNS_8DatabaseE.cold
@ 0x5fda2985f2b4 colmap::RunPointTriangulatorImpl()
@ 0x5fda2985fd01 colmap::RunPointTriangulator()
@ 0x5fda2984223b main
@ 0x72f7fae29d90 (unknown)
@ 0x72f7fae29e40 __libc_start_main
@ 0x5fda29848ac5 _start
Aborted (core dumped)
234 -0.652954 -0.690852 -0.280519 -0.132975 119.071 10.8471 -22.9721 234 0234.png
235 -0.720245 -0.620382 -0.292658 -0.10356 119.071 5.9413 -24.6997 235 0235.png
236 -0.794446 -0.521998 -0.303595 -0.0648315 119.071 -0.594246 -25.3973 236 0236.png
237 -0.855748 -0.413911 -0.309458 -0.024677 119.071 -7.14299 -24.3794 237 0237.png
238 -0.90245 -0.298681 -0.310032 0.0159237 119.071 -13.2091 -21.7002 238 0238.png
246 -0.947572 -0.0757217 -0.297372 0.0891263 119.071 -18.0732 -20.8251 246 0246.png
247 -0.929594 -0.198698 -0.306458 0.0495684 119.071 -12.0694 -24.793 247 0247.png
248 -0.915416 -0.256203 -0.30895 0.0303873 119.071 -8.89378 -26.101 248 0248.png
249 -0.914643 -0.258948 -0.30904 0.0294604 119.071 -8.73781 -29.0829 249 0249.png
250 -0.918272 -0.24577 -0.308585 0.0339011 119.071 -9.62785 -30.796 250 0250.png
251 -0.890929 -0.331469 -0.310408 0.00458033 119.071 -3.64567 -32.1455 251 0251.png
252 -0.84073 -0.443621 -0.308405 -0.0354975 119.071 4.70081 -32.0119 252 0252.png
253 -0.775668 -0.549513 -0.301138 -0.0754294 119.071 12.8222 -29.7062 253 0253.png
254 -0.732787 -0.605516 -0.294716 -0.097551 119.071 17.0745 -27.4833 254 0254.png
images.txt originally also contained an entry for 0239.png, which I later deleted.
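As referenced above, here is a minimal cleanup sketch (not part of the original workflow; paths and file names are placeholders). It drops images.txt entries whose image file no longer exists on disk, assuming the standard COLMAP text layout: '#' comment lines, then two lines per image (the pose line ending with the file name, followed by the 2D-points line, which may be empty).

import os

image_dir = "input"
src = "distorted/sparse/1/images.txt"
dst = "distorted/sparse/1/images_clean.txt"

with open(src) as f:
    lines = f.readlines()

out = []
i = 0
while i < len(lines):
    line = lines[i]
    if line.startswith("#") or not line.strip():
        out.append(line)
        i += 1
        continue
    pose_line = line
    points_line = lines[i + 1] if i + 1 < len(lines) else "\n"
    name = pose_line.split()[-1]  # file name is the last field of the pose line
    if os.path.exists(os.path.join(image_dir, name)):
        out.append(pose_line)
        out.append(points_line)
    else:
        print(f"dropping {name} (file not found)")
    i += 2

with open(dst, "w") as f:
    f.writelines(out)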
2. CuTexImage::BindTexture: invalid argument, FilterV: out of memory
This happened while pixelgs was training in the background with GPU memory around 95% full: running colmap feature_extractor on another dataset at the same time produced this error (in the GUI it simply crashes). The cause is that feature extraction uses the GPU by default, and feature matching behaves the same way. There are two fixes: wait until the GPU is free, or run the extraction on the CPU instead (via the --SiftExtraction.use_gpu option).
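For reference, a hedged sketch of invoking the CPU path from Python; the paths below are placeholders, and --SiftExtraction.use_gpu 0 is the relevant switch. The log that follows shows the CPU SIFT extractors being created after switching.

import subprocess

# Run COLMAP feature extraction on the CPU so it does not compete with a
# training job for GPU memory. Paths are placeholders for your dataset.
subprocess.run([
    "colmap", "feature_extractor",
    "--database_path", "distorted/database.db",
    "--image_path", "input",
    "--SiftExtraction.use_gpu", "0",
], check=True)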
W1203 11:17:49.795233 521801 feature_extraction.cc:406] Your current options use the maximum number of threads on the machine to extract features. Extracting SIFT features on the CPU can consume a lot of RAM per thread for large images. Consider reducing the maximum image size and/or the first octave or manually limit the number of extraction threads. Ignore this warning, if your machine has sufficient memory for the current settings.
I1203 11:17:49.795477 521805 misc.cc:198]
==============================================================================
Feature extraction
==============================================================================
I1203 11:17:49.796958 521830 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797012 521831 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797065 521832 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797113 521833 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797166 521834 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797221 521835 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797273 521836 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797326 521837 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797379 521838 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797432 521839 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797483 521840 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797535 521841 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797586 521842 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797638 521843 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797693 521844 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797740 521845 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797796 521846 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797850 521847 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797909 521848 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.797953 521849 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.798004 521850 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.798055 521851 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.798107 521852 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:17:49.798157 521853 sift.cc:722] Creating SIFT CPU feature extractor
I1203 11:18:02.733354 521854 feature_extraction.cc:257] Processed file [1/1000]
I1203 11:18:02.733392 521854 feature_extraction.cc:260] Name: Forward/F1-Q-0408-0009.JPG
I1203 11:18:02.733397 521854 feature_extraction.cc:286] Dimensions: 6000 x 4000
I1203 11:18:02.733400 521854 feature_extraction.cc:289] Camera: #1 - PINHOLE
I1203 11:18:02.733405 521854 feature_extraction.cc:292] Focal Length: 7200.00px
I1203 11:18:02.733417 521854 feature_extraction.cc:296] Features: 11265
I1203 11:18:06.528110 521854 feature_extraction.cc:257] Processed file [2/1000]
I1203 11:18:06.528149 521854 feature_extraction.cc:260] Name: Forward/F1-Q-0408-0008.JPG
I1203 11:18:06.528157 521854 feature_extraction.cc:286] Dimensions: 6000 x 4000
I1203 11:18:06.528164 521854 feature_extraction.cc:289] Camera: #1 - PINHOLE
I1203 11:18:06.528170 521854 feature_extraction.cc:292] Focal Length: 7200.00px
I1203 11:18:06.528183 521854 feature_extraction.cc:296] Features: 13441
3. A normally compiled COLMAP build, when running vocab tree feature matching on 4000+ 4K images, fails with "database disk image is malformed".
This is a multi-threading bug in COLMAP itself; there are two workarounds:
The first is to modify the source in your build tree, around line 89 of colmap/src/colmap/controllers/feature_matching.cc (I forget the exact line; just diff against the snippet below and apply the change), then rebuild COLMAP:
  }
  cache_->Setup();
  DerivedPairGenerator pair_generator(pair_options_, cache_);
  while (!pair_generator.HasFinished()) {
    if (IsStopped()) {
      run_timer.PrintMinutes();
      return;
    }
    Timer timer;
    timer.Start();
    const std::vector<std::pair<image_t, image_t>> image_pairs =
        pair_generator.Next();
    // Key change: skip the DatabaseTransaction for the vocab-tree matcher;
    // only the other pair generators keep the transaction around Match().
    if constexpr (std::is_same<DerivedPairGenerator, VocabTreePairGenerator>::value) {
      matcher_.Match(image_pairs);
    } else {
      DatabaseTransaction database_transaction(database_.get());
      matcher_.Match(image_pairs);
    }
    PrintElapsedTime(timer);
  }
The second is to run single-threaded. It is much slower, but you only need to set the thread count to 1, either in the vocab tree matching GUI or on the command line.
五、Software usage
1. MeshLab usage notes
See the CSDN blog post "MeshLab使用经验_meshlab背景颜色" (MeshLab usage notes / background color).
六、Using Python
6.1 Common Python terminology
1. argument
A value passed to a function (or method) when calling it. There are two kinds of argument:
- Keyword arguments: preceded by an identifier (e.g. name=) in a function call, or passed as values in a dictionary preceded by **. For example, 3 and 5 are both keyword arguments in the following calls to complex():
  complex(real=3, imag=5)
  complex(**{'real': 3, 'imag': 5})
- Positional arguments: arguments that are not keyword arguments. Positional arguments can appear at the beginning of an argument list and/or be passed as elements of an iterable preceded by *. For example, 3 and 5 are both positional arguments in the following calls:
  complex(3, 5)
  complex(*(3, 5))
Arguments are assigned to the named local variables in the function body. See the Calls section for the rules governing this assignment. Syntactically, any expression can be used to represent an argument; the evaluated value is assigned to the corresponding local variable.
2. callable
A callable is an object that can be called, possibly with a set of arguments (see argument), using the following syntax:
callable(argument1, argument2, argumentN)
A function, and by extension a method, is a callable. Instances of a class that implements the __call__() method are also callables.
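To illustrate the last point, a minimal sketch (the class name Multiplier is made up) of a class whose instances are callable:

class Multiplier:
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        # Invoked when the instance itself is called like a function.
        return self.factor * x

double = Multiplier(2)
print(callable(double))  # True
print(double(21))        # 42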
3. callback (i.e., a function passed in as an argument to handle follow-up processing)
A subroutine function that is passed as an argument, to be called at some later point.
Suppose we have a function read_file that reads the contents of a file and, once reading is done, calls a callback function process_data to process those contents.
def read_file(file_path, callback):
    try:
        with open(file_path, 'r') as file:
            data = file.read()
            callback(data)
    except FileNotFoundError:
        print(f"File {file_path} not found")

def process_data(data):
    print("Processing data:")
    print(data)

# Call the file-reading function and pass the callback
read_file('example.txt', process_data)
4. class
A template for creating user-defined objects. Class definitions normally contain method definitions which operate on instances of the class.
5. class variable
A variable defined in a class and intended to be modified only at class level (i.e., not in an instance of the class).
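A minimal sketch (made-up names) contrasting a class variable with an instance variable:

class Dog:
    species = "Canis familiaris"    # class variable, shared by all instances

    def __init__(self, name):
        self.name = name            # instance variable, unique to each instance

a, b = Dog("Rex"), Dog("Fido")
print(a.species, b.species)   # both read the same class-level value
Dog.species = "dog"           # modified at class level: affects all instances
print(a.species, b.species)   # dog dog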
6. context
This term has different meanings depending on where and how it is used. Some common meanings:
- The temporary state or environment created by a context manager via the with statement, e.g.:
  with open('example.txt', 'r') as file:
      content = file.read()
      print(content)
- The collection of key-value bindings associated with a particular contextvars.Context object and accessed via ContextVar objects. See also context variable.
- A contextvars.Context object. See also current context.
context management protocol
The __enter__() and __exit__() methods called by the with statement. See PEP 343.
context manager
An object which implements the context management protocol and controls the environment seen inside a with statement. See PEP 343.
context variable
A variable whose value depends on which context is the current context. The values are accessed through contextvars.ContextVar objects. Context variables are primarily used to isolate state between concurrent asynchronous tasks.
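A minimal sketch of a context variable (the names request_id and handler are made up); the copied context keeps the value that was current when it was captured:

import contextvars

request_id = contextvars.ContextVar("request_id", default="none")

def handler():
    # Reads whatever value is current in the context this call runs in.
    print("handling request", request_id.get())

request_id.set("req-1")
ctx = contextvars.copy_context()   # snapshot of the current context
request_id.set("req-2")

handler()          # prints: handling request req-2
ctx.run(handler)   # prints: handling request req-1 (isolated from the later set())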
As an example of a context manager: suppose we have a file read/write operation and want to make sure the file is closed properly after use. We can implement this with a custom context manager.
class SimpleFileContextManager:
    def __init__(self, file_path, mode):
        self.file_path = file_path
        self.mode = mode
        self.file = None

    def __enter__(self):
        # Open the file when entering the context
        self.file = open(self.file_path, self.mode)
        return self.file

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Close the file when leaving the context
        if self.file:
            self.file.close()
        # Handle exceptions (optional)
        if exc_type:
            print(f"Handling exception: {exc_val}")
            return True  # returning True means the exception has been handled

# Using the custom context manager
file_path = 'example.txt'
with SimpleFileContextManager(file_path, 'r') as file:
    content = file.read()
    print(f"File contents: {content}")
- Defining the context manager class: __init__ stores the file path and open mode; __enter__ is called when the with block is entered, opens the file and returns the file object; __exit__ is called when the with block is left and closes the file. If an exception is raised inside the with block, __exit__ is called and handles it.
- Using the context manager:
  - Use the with statement together with the custom context manager class to read the file contents.
  - Inside the with block the file is opened and closed automatically, ensuring the resource is managed correctly.
7. CPython -- the C implementation of Python
The canonical implementation of the Python programming language, distributed on python.org. The term "CPython" is used when necessary to distinguish this implementation from others such as Jython or IronPython.
CPython is the reference implementation of Python and the most widely used one. It is written in C (hence the name) and compiled to native machine code, and it is what interprets and executes your Python code. When you download and install Python from python.org, what you actually install is the CPython interpreter and its standard library.
Although CPython is the most common implementation, there are others for specific scenarios:
- Jython: a Java implementation of Python that runs on the Java Virtual Machine (JVM); suitable for integrating with Java applications and libraries.
- IronPython: a .NET implementation of Python that runs on the .NET platform; suitable for integrating with the .NET framework and other .NET languages such as C#.
- PyPy: an alternative implementation aimed at performance. It uses just-in-time (JIT) compilation and is often faster than CPython, though some libraries may not work on it.
8. decorator
A decorator is a powerful Python tool for modifying or extending the behaviour of functions and methods. A decorator is itself a higher-order function that returns a function; it wraps another function or method so that new behaviour can be added without changing the original definition.
(1) Using the @decorator syntactic sugar:
In the example below, my_decorator is a decorator function: it takes a function func as its argument and returns a new function wrapper. The wrapper prints some information before and after calling func.
With the @my_decorator syntax, say_hello is automatically passed through my_decorator, i.e. say_hello = my_decorator(say_hello).
def my_decorator(func):
    def wrapper(*args, **kwargs):
        print("Before the call")
        result = func(*args, **kwargs)
        print("After the call")
        return result
    return wrapper

@my_decorator
def say_hello(name):
    print(f"Hello, {name}")

say_hello("Alice")
(2) Without the @decorator syntactic sugar:
def my_decorator(func):
    def wrapper(*args, **kwargs):
        print("Before the call")
        result = func(*args, **kwargs)
        print("After the call")
        return result
    return wrapper

def say_hello(name):
    print(f"Hello, {name}")

say_hello = my_decorator(say_hello)
say_hello("Alice")
Common examples of decorators include classmethod() and staticmethod().
(3) classmethod()
Using @classmethod is equivalent to calling classmethod() directly.
The classmethod decorator turns a method into a class method; a class method receives the class itself as its first argument, conventionally named cls.
class MyClass:
    @classmethod
    def class_method(cls, arg):
        print(f"Calling the class method, argument: {arg}")

MyClass.class_method("this is a class method")

# The equivalent without the decorator syntax:
class MyClass:
    def class_method(cls, arg):
        print(f"Calling the class method, argument: {arg}")
    class_method = classmethod(class_method)

MyClass.class_method("this is a class method")
(4) The staticmethod decorator
The staticmethod decorator turns a method into a static method. A static method receives no special first argument; it can be called like an ordinary function but lives in the class's namespace.
class MyClass:
    @staticmethod
    def static_method(arg):
        print(f"Calling the static method, argument: {arg}")

MyClass.static_method("this is a static method")

which is equivalent to:

class MyClass:
    def static_method(arg):
        print(f"Calling the static method, argument: {arg}")
    static_method = staticmethod(static_method)

MyClass.static_method("this is a static method")

The same concept also applies to classes, though that is less common. See the documentation on function definitions and class definitions for more details about decorators.
9. descriptor
Any object which defines the __get__(), __set__() or __delete__() methods. When a class attribute is a descriptor, its special binding behaviour is triggered upon attribute lookup. Normally, using a.b to get, set or delete an attribute looks up the object named b in the class dictionary of a, but if b is a descriptor, the corresponding descriptor method is called instead. Understanding descriptors is key to a deep understanding of Python, because they are the basis of many important features, including functions, methods, properties, class methods, static methods, and references to super classes.
Descriptors have several use cases in Python, mainly for customizing attribute behaviour.
(1) For example, a descriptor can log attribute access, assignment and deletion:
class LoggedAttribute:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        print(f"Getting {self.name}")
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        print(f"Setting {self.name} to {value}")
        instance.__dict__[self.name] = value

    def __delete__(self, instance):
        print(f"Deleting {self.name}")
        del instance.__dict__[self.name]

class MyClass:
    attr = LoggedAttribute('attr')

# Instantiate MyClass
obj = MyClass()
# Set the attribute
obj.attr = 10       # prints: Setting attr to 10
# Read the attribute
print(obj.attr)     # prints: Getting attr
                    # then: 10
# Delete the attribute
del obj.attr        # prints: Deleting attr
(2) @classmethod and @staticmethod are themselves descriptors, used to bind methods to the class or to define static methods.
class MyClass:
    @classmethod
    def class_method(cls):
        print(f"This is a class method of {cls.__name__}")

    @staticmethod
    def static_method():
        print("This is a static method")

# Call the class method
MyClass.class_method()   # prints: This is a class method of MyClass
# Call the static method
MyClass.static_method()  # prints: This is a static method
10. Differences between descriptors and decorators
Summary of the differences:
1) How they are defined:
   - Descriptor: a class that defines the __get__(), __set__() and/or __delete__() methods.
   - Decorator: a function or class, applied with the @ symbol.
2) Scope:
   - Descriptor: usually attached to a class attribute, affecting how all instances get, set and delete that attribute.
   - Decorator: can be applied to functions, class methods, static methods, etc., modifying the behaviour of those functions or methods.
3) When they take effect:
   - Descriptor: triggered whenever the descriptor attribute is accessed, set or deleted through an instance (or the class).
   - Decorator: applied once when the function or method is defined; the wrapper it returns then runs each time the function is called.
4) Typical uses:
   - Descriptor: customizing attribute behaviour, e.g. validation, logging, caching, computed attributes.
   - Decorator: modifying function or method behaviour, e.g. logging, timing, transactions, caching, permission checks.
11. dictionary
An associative array where arbitrary keys are mapped to values. The keys can be any object with __hash__() and __eq__() methods.
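A small illustration of the key requirement (hashable keys work, mutable unhashable ones do not):

d = {("x", 1): "tuple keys are hashable", 42: "so are ints and strings"}
print(d[("x", 1)])

try:
    d[["a", "b"]] = "lists are mutable and unhashable"
except TypeError as e:
    print("TypeError:", e)   # unhashable type: 'list'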
dictionary comprehension
A compact way to process all or part of the elements of an iterable and return a dictionary with the results. results = {n: n ** 2 for n in range(10)} generates a dictionary mapping each key n to the value n ** 2. See the section on displays for lists, sets and dictionaries.
dictionary view
The objects returned from dict.keys(), dict.values() and dict.items() are called dictionary views. They provide a dynamic view of the dictionary's entries, which means that when the dictionary changes, the view reflects those changes. To force a dictionary view to become a full list, use list(dictview). See Dictionary view objects.
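A quick demonstration of the dynamic behaviour described above:

d = {"a": 1, "b": 2}
keys = d.keys()        # a view, not a copy
print(list(keys))      # ['a', 'b']

d["c"] = 3             # mutate the dictionary
print(list(keys))      # ['a', 'b', 'c'] -- the view reflects the change

snapshot = list(keys)  # force a real list if a fixed copy is needed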
12. expression
A piece of syntax which can be evaluated to some value. In other words, an expression is an accumulation of expression elements such as literals, names, attribute access, operators or function calls which all return a value. In contrast to many other languages, not all language constructs are expressions. There are also statements which cannot be used as expressions, such as while. Assignments are also statements, not expressions.
In Python, expressions are a core part of the language and are used in every kind of programming task. An expression can be a simple value, a variable, a combination of operators, or a more complex function or method call.
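A small illustration of the expression/statement distinction (the commented-out line would be a syntax error):

total = 2 + 3 * len("abc")     # the right-hand side is an expression: it evaluates to 11
values = [abs(-4), max(1, 5)]  # function calls are expressions too

# An assignment is a statement, not an expression, so it cannot appear where a value is expected:
# result = (y = 1)             # SyntaxError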
13. file object
An object exposing a file-oriented API (with methods such as read() or write()) to an underlying resource. Depending on the way it was created, a file object can mediate access to a real on-disk file or to another type of storage or communication device (for example standard input/output, in-memory buffers, sockets, pipes, etc.). File objects are also called file-like objects or streams.
There are actually three categories of file objects: raw binary files, buffered binary files and text files. Their interfaces are defined in the io module. The canonical way to create a file object is with the open() function.
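A short sketch showing how open() arguments select the three categories mentioned above (example.txt is a placeholder file):

import io

text = open("example.txt", "w", encoding="utf-8")   # text file
buffered = open("example.txt", "rb")                # buffered binary file
raw = open("example.txt", "rb", buffering=0)        # raw binary file

print(isinstance(text, io.TextIOBase))         # True
print(isinstance(buffered, io.BufferedIOBase)) # True
print(isinstance(raw, io.RawIOBase))           # True

for f in (text, buffered, raw):
    f.close()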
14. generator
A function which returns a generator iterator. It looks like a normal function except that it contains yield expressions for producing a series of values usable in a for-loop or that can be retrieved one at a time with the next() function.
Usually this refers to a generator function, but in some contexts it may refer to a generator iterator. Where the intended meaning is not obvious, using the full terms avoids ambiguity.
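A minimal sketch of a generator function and the two ways of consuming it mentioned above:

def countdown(n):
    while n > 0:
        yield n        # each yield produces one value and pauses the function
        n -= 1

gen = countdown(3)     # calling the function returns a generator iterator
print(next(gen))       # 3 -- values can be pulled one at a time with next()

for value in gen:      # or consumed by a for-loop (continues where next() left off)
    print(value)       # 2, then 1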