Debugging libtorch: how to save a tensor from libtorch and load it in Python

C++ --> Python

Saving in C++

#include <torch/script.h>

#include <fstream>
#include <iostream>

int main() {
  auto x = torch::ones({3, 3});
  // Serialize the tensor into a zip-format byte buffer.
  std::vector<char> bytes = torch::jit::pickle_save(x);
  std::ofstream fout("x.zip", std::ios::out | std::ios::binary);
  fout.write(bytes.data(), bytes.size());
  fout.close();
  return 0;
}

Loading in Python

import torch

x = torch.load("x.zip")
print(x)
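The compatibility between the two serializers can be sanity-checked without leaving Python: torch.save with the zipfile format writes the same kind of zip container that torch::jit::pickle_save produces, so torch.load reads either one. A minimal round-trip sketch (assumes a PyTorch version recent enough to have the zipfile format):

```python
import io

import torch

# Save a tensor to an in-memory buffer using the zipfile format,
# then load it back and check the values survived the trip.
x = torch.ones(3, 3)
buf = io.BytesIO()
torch.save(x, buf, _use_new_zipfile_serialization=True)
buf.seek(0)
y = torch.load(buf)
assert torch.equal(x, y)
print(y)
```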

Python --> C++

Saving in Python

import io

import torch


def save_tensor(device):
    my_tensor = torch.rand(3, 3).to(device)
    print("[python] my_tensor: ", my_tensor)
    f = io.BytesIO()
    torch.save(my_tensor, f, _use_new_zipfile_serialization=True)
    with open('my_tensor_%s.pt' % device, "wb") as out_f:
        # Copy the BytesIO stream to the output file
        out_f.write(f.getbuffer())


if __name__ == '__main__':
    save_tensor('cpu')
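Several tensors can also go into one file by saving a Python dict; on the C++ side, torch::pickle_load then returns an IValue whose toGenericDict() exposes the entries. A sketch of the Python half (the key names here are illustrative, not from the original):

```python
import io

import torch

# Pack multiple tensors into a single file via a dict. C++ code can
# unpack the result with x.toGenericDict() instead of x.toTensor().
tensors = {"weight": torch.rand(3, 3), "bias": torch.rand(3)}
buf = io.BytesIO()
torch.save(tensors, buf, _use_new_zipfile_serialization=True)

# Round-trip in Python to confirm the dict structure is preserved.
buf.seek(0)
loaded = torch.load(buf)
assert set(loaded) == {"weight", "bias"}
assert torch.equal(loaded["weight"], tensors["weight"])
```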

Loading in C++

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include <torch/torch.h>

std::vector<char> get_the_bytes(const std::string& filename) {
    std::ifstream input(filename, std::ios::binary);
    // Read the whole file into a byte vector.
    std::vector<char> bytes(
        (std::istreambuf_iterator<char>(input)),
        (std::istreambuf_iterator<char>()));

    input.close();
    return bytes;
}

int main()
{
    std::vector<char> f = get_the_bytes("my_tensor_cpu.pt");
    torch::IValue x = torch::pickle_load(f);
    torch::Tensor my_tensor = x.toTensor();
    std::cout << "[cpp] my_tensor: " << my_tensor << std::endl;

    return 0;
}
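Note that the _use_new_zipfile_serialization=True flag matters for this direction: torch::pickle_load expects the zip-container format, while the legacy format is a raw pickle stream it cannot read. The difference is visible in the file's magic bytes (a sketch; assumes a PyTorch version that still supports the legacy flag):

```python
import io

import torch

t = torch.rand(2, 2)

# New zipfile format (the default in recent PyTorch): a regular zip
# archive, which is what torch::pickle_load on the C++ side expects.
new_buf = io.BytesIO()
torch.save(t, new_buf, _use_new_zipfile_serialization=True)

# Legacy format: a raw pickle stream, not readable by torch::pickle_load.
old_buf = io.BytesIO()
torch.save(t, old_buf, _use_new_zipfile_serialization=False)

print(new_buf.getvalue()[:2])  # zip archives start with the b'PK' magic
assert new_buf.getvalue()[:2] == b'PK'
assert old_buf.getvalue()[:2] != b'PK'
```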