https://www.bilibili.com/video/BV14v411b733#reply3775162098
Configuring and using CUDA in VS2019
Installing and using LibTorch on Windows 10 (CUDA 10.1)
Link: libtorch_cuda10.1_win10
Extraction code: 1234
Link: cuda10.1 + cudnn10.1
Extraction code: 1234
Installing and using LibTorch
https://pytorch.org/get-started/locally/
LibTorch is on the regular PyTorch install page; under "Package", just select "LibTorch".
First: in VS2019, remember to select x64 when building. That one gotcha kept me at it until 11:09 PM; a whole evening gone.
Add the following include and library directories in the project properties (adjust to wherever you extracted LibTorch):
E:\BaiduNetdiskDownload\libtorch\include
E:\BaiduNetdiskDownload\libtorch\include\torch\csrc\api\include
E:\BaiduNetdiskDownload\libtorch\lib
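The directories above go under VC++ Directories in the VS2019 project properties. As an alternative to configuring the project by hand, a minimal CMake setup (a sketch, assuming LibTorch is extracted to the path above) lets find_package(Torch) fill in the include paths and the whole lib list automatically:

```cmake
cmake_minimum_required(VERSION 3.18)
project(libtorch_demo LANGUAGES CXX)

# Point CMake at the extracted LibTorch folder (adjust to your own path).
set(CMAKE_PREFIX_PATH "E:/BaiduNetdiskDownload/libtorch")
find_package(Torch REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo "${TORCH_LIBRARIES}")
set_property(TARGET demo PROPERTY CXX_STANDARD 17)
```

Generate with `cmake -G "Visual Studio 16 2019" -A x64 ..` so the x64 platform is selected from the start.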
Lib dependencies to add when configuring LibTorch (GPU) in VS2019:
GPU build:
torch.lib
torch_cuda.lib
caffe2_detectron_ops_gpu.lib
caffe2_module_test_dynamic.lib
torch_cpu.lib
c10_cuda.lib
caffe2_nvrtc.lib
mkldnn.lib
c10.lib
dnnl.lib
libprotoc.lib
libprotobuf.lib
libprotobuf-lite.lib
fbgemm.lib
asmjit.lib
cpuinfo.lib
clog.lib
1.7.0 GPU build:
asmjit.lib
c10.lib
c10d.lib
caffe2_module_test_dynamic.lib
caffe2_detectron_ops_gpu.lib
caffe2_nvrtc.lib
clog.lib
cpuinfo.lib
dnnl.lib
fbgemm.lib
gloo.lib
gloo_cuda.lib
mkldnn.lib
torch.lib
torch_cpu.lib
torch_cuda.lib
c10_cuda.lib
libprotoc.lib
libprotobuf.lib
libprotobuf-lite.lib
deps-debug-1.12.1+cu113 (debug build; note the trailing "d" on the protobuf libs):
asmjit.lib
c10.lib
c10_cuda.lib
caffe2_nvrtc.lib
clog.lib
cpuinfo.lib
dnnl.lib
fbgemm.lib
kineto.lib
XNNPACK.lib
torch.lib
torch_cpu.lib
torch_cuda.lib
torch_cuda_cpp.lib
torch_cuda_cu.lib
libprotocd.lib
libprotobufd.lib
libprotobuf-lited.lib
pthreadpool.lib
CPU build:
asmjit.lib
c10.lib
c10d.lib
caffe2_detectron_ops.lib
caffe2_module_test_dynamic.lib
clog.lib
cpuinfo.lib
dnnl.lib
fbgemm.lib
gloo.lib
libprotobufd.lib
libprotobuf-lited.lib
libprotocd.lib
mkldnn.lib
torch.lib
torch_cpu.lib
Set the environment variable so the DLLs can be found at runtime:
PATH=E:\BaiduNetdiskDownload\libtorch\lib;%PATH%
#include <iostream>
#include <torch/torch.h>
int main() {
    // Create a 5x3 tensor of random values to verify the install works.
    torch::Tensor tensor = torch::rand({5, 3});
    std::cout << tensor << std::endl;
    return EXIT_SUCCESS;
}
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
int main() {
    using torch::jit::script::Module;
    // Deserialize the TorchScript module exported from Python.
    Module module = torch::jit::load("UNet_model.pt");
    std::cout << "ok\n";
    // Create a vector of inputs.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 512, 512}));
    // Execute the model and turn its output into a tensor.
    at::Tensor output = module.forward(inputs).toTensor();
    std::cout << output << '\n';
    return 0;
}
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
int main() {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    // (Older LibTorch versions returned a std::shared_ptr<Module> here:)
    //std::shared_ptr<torch::jit::script::Module> module =
    //    torch::jit::load("E:/HM_DL/torch_test/traced_resnet_model.pt");
    using torch::jit::script::Module;
    Module module = torch::jit::load("C:/Users/major/Desktop/AlexNet_model.pt");
    std::cout << "model loaded successfully\n";
    // Create a vector of inputs.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));
    // Execute the model and turn its output into a tensor.
    at::Tensor output = module.forward(inputs).toTensor();
    std::cout << output << '\n';
    std::cout << "done\n";
    return 0;
}