PyTorch Source Code Walkthrough: How a Tensor Is Created (with a gdb session for C-level debugging of PyTorch)

pytorch v1.0

Preparation

First, I assume you already have a debug build of PyTorch installed. If not, see:
PyTorch source development: building and debugging on Ubuntu (C-level source debugging)
When compiling PyTorch, be sure to build with python setup.py build develop and set DEBUG=1; otherwise you will not be able to step into the source.

Below is a gdb session stepping through a PyTorch program, for anyone interested. To be clear, gdb is not mandatory: if your C++ is solid, reading the source directly works well too, and PyTorch's code is rarely truly obscure (the obscure parts tend to live in the algorithms, ^_^ ). If you prefer, you can jump straight to the overview further down. Before digging into PyTorch, it helps to have at least a rough idea of Python's C-level object machinery; what follows is only a coarse sketch of where a Tensor comes from and where it goes.

(base) ~/edu/pytorchedu/tutorial$ gdb --args python3  test.py

GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python3...done.

(gdb) b python_arg_parser.cpp:428
No source file named python_arg_parser.cpp.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (python_arg_parser.cpp:428) pending.

(gdb) r
Starting program: /home/matthew/anaconda3/bin/python3 two_layer_net_nn_test.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Breakpoint 1, torch::FunctionSignature::parse (this=0x5555561bcc90, args=0x7fffd7fa19e8, kwargs=0x0, 
    dst=0x7fffffffa930, raise_exception=false) at ../torch/csrc/utils/python_arg_parser.cpp:429
429	                              bool raise_exception) {

(gdb) l
424	  // this should never be hit
425	  throw TypeError("invalid keyword arguments");
426	}
427	
428	bool FunctionSignature::parse(PyObject* args, PyObject* kwargs, PyObject* dst[],
429	                              bool raise_exception) {
430	  auto nargs = PyTuple_GET_SIZE(args);
431	  ssize_t remaining_kwargs = kwargs ? PyDict_Size(kwargs) : 0;
432	  ssize_t arg_pos = 0;
433	  bool allow_varargs_intlist = false;
(gdb) 

My test.py is trivial; its only purpose is to watch a Tensor being constructed:

import torch
#x = torch.tensor( [[-0.2497,  2.0979,  1.7150],[ 0.6786,  0.4429,  0.7582]])
#print(x)
#y = torch.tensor([[-0.0217,  0.8911],  [-1.0743, -1.1462]])
#print(y)
z = torch.Tensor(3,2)
print(z)

(The "previous section" referenced below is: PyTorch Source Code Walkthrough: A Brief Note on C Extensions.)

Down to Business

The previous section, on PyTorch's basic construction, mentioned the Tensor type's tp_new slot: THPVariable_pynew. Under Python's object protocol, tp_new is the first thing called whenever an object is created. So when you construct a tensor with arguments like

torch.Tensor(out, in)

THPVariable_pynew is invoked first.
To watch this call, just set a gdb breakpoint: (gdb) b python_variable.cpp:129

static PyObject *THPVariable_pynew(PyTypeObject *type, PyObject *args, PyObject *kwargs)
{
  HANDLE_TH_ERRORS
  jit::tracer::warn("torch.Tensor", jit::tracer::WARN_CONSTRUCTOR);
  auto& default_type = torch::tensors::get_default_tensor_type();
  auto tensor = torch::utils::legacy_tensor_ctor(default_type, args, kwargs);
  return THPVariable_NewWithVar(type, std::move(tensor));
  END_HANDLE_TH_ERRORS
}
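The tp_new-before-init ordering is easy to observe from the Python side, since tp_new corresponds to __new__ and tp_init to __init__. A minimal pure-Python analogue (not PyTorch code) of that ordering:

```python
# Pure-Python analogue of the tp_new slot: __new__ runs first whenever an
# instance is created, just as THPVariable_pynew runs for torch.Tensor(...).
calls = []

class Demo:
    def __new__(cls, *args, **kwargs):
        calls.append(("__new__", args))   # tp_new: allocate the object
        return super().__new__(cls)

    def __init__(self, *args):
        calls.append(("__init__", args))  # tp_init: initialize it

Demo(3, 2)
print(calls)  # __new__ is recorded before __init__
```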

First look at legacy_tensor_ctor, which creates the tensor's initial storage:

Tensor legacy_tensor_ctor(const Type& type, PyObject* args, PyObject* kwargs) {
  static PythonArgParser parser({
    "new(*, Device? device=None)",
    "new(Storage storage)",
    "new(*, int64_t cdata)|hidden",
    "new(Tensor other)",
    "new(IntList size, *, Device? device=None)",
    "new(PyObject* data, *, Device? device=None)",
  });

  if (type.is_sparse()) {
    return legacy_sparse_tensor_ctor(type, args, kwargs);
  }

  ParsedArgs<2> parsed_args;
  auto r = parser.parse(args, kwargs, parsed_args);
  if (r.idx == 0) {
    auto deviceOptional = r.deviceOptional(0);
    check_legacy_ctor_device(type, deviceOptional);
    at::OptionalDeviceGuard device_guard(deviceOptional);
    return at::empty({0}, type.options());
  } else if (r.idx == 1) {
    return new_with_storage(type, r.storage(0));
  } else if (r.idx == 2) {
    auto cdata = reinterpret_cast<void*>(r.toInt64(0));
    return type.unsafeTensorFromTH(cdata, true);
  } else if (r.idx == 3) {
    return new_with_tensor(type, r.tensor(0));
  } else if (r.idx == 4) {
    PyObject* arg = r.pyobject(0);
    auto deviceOptional = r.deviceOptional(1);
    check_legacy_ctor_device(type, deviceOptional);
    if (!THPSize_Check(arg) && PyTuple_GET_SIZE(args) >= 1 && arg == PyTuple_GET_ITEM(args, 0)) {
      // new(sequence) binds to this signature but should be treated differently
      // unless the sequences is a torch.Size
      return legacy_new_from_sequence(type, deviceOptional, r.pyobject(0));
    }
    return new_with_sizes(type, r.deviceOptional(1), r.intlist(0));
  } else if (r.idx == 5) {
    auto deviceOptional = r.deviceOptional(1);
    check_legacy_ctor_device(type, deviceOptional);
    return legacy_new_from_sequence(type, deviceOptional, r.pyobject(0));
  }
  throw std::runtime_error("new(): invalid arguments");
}

The rough sequence for torch.Tensor(3, 2) is:

torch.Tensor(3, 2) ===>>> new_with_sizes ===>>> r.intlist(0) ===>>> PythonArgs::intlist
===>>> intlistWithDefault ===>>> THPUtils_unpackIndex ===>>> THPUtils_unpackLong
===>>> value = PyLong_AsLongLongAndOverflow

The unpacked value is returned to intlistWithDefault and collected into res, the size buffer (here res[0] = 3, res[1] = 2).
Finally, legacy_tensor_ctor returns new_with_sizes(type, r.deviceOptional(1), r.intlist(0)).
new_with_sizes is defined in tensor_new.cpp as follows:

Tensor new_with_sizes(const Type& type, optional<Device> device, IntList sizes) {
  maybe_initialize_cuda(type);
  AutoNoGIL no_gil;
  return torch::empty(sizes, type.options(std::move(device)));
}
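The size-unpacking half of that chain (PythonArgs::intlist ===>>> intlistWithDefault ===>>> THPUtils_unpackLong) can be sketched in pure Python. This is a simplified model, not the real implementation: each positional argument is converted to an integer index and written into res in order.

```python
import operator

def intlist_with_default(args):
    """Simplified model of intlistWithDefault: unpack each positional
    argument as an integer (cf. THPUtils_unpackIndex -> THPUtils_unpackLong)."""
    res = []
    for a in args:
        # operator.index rejects non-integers, standing in for the C-side checks
        res.append(operator.index(a))
    return res

print(intlist_with_default((3, 2)))  # res[0]=3, res[1]=2 -> [3, 2]
```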

The implementation of torch::empty here lives in variable_factories.h; its job is to return an uninitialized tensor. Interested readers can look up the source; we won't go deeper here.

Next, a new Python object is created for the return value:

// Creates a new Python object for a Variable. The Variable must not already
// have a PyObject* associated with it.
static PyObject* THPVariable_NewWithVar(PyTypeObject* type, Variable var)
{
  PyObject* obj = type->tp_alloc(type, 0);
  if (obj) {
    auto v = (THPVariable*) obj;
    new (&v->cdata) Variable(std::move(var));
    v->cdata.set_pyobj(obj);
    if (auto fn = dynamic_cast<PyFunction*>(v->cdata.grad_fn_unsafe())) {
      // Create a new reference to the THPFunction. This ensures that ref count
      // of the THPFunction is at least the number of referring THPVariables.
      const auto output_nr = v->cdata.output_nr();
      auto grad_fn = THPFunction_asFunction((THPFunction*)fn->obj);
      v->cdata.set_gradient_edge({std::move(grad_fn), output_nr});
    }
  }
  return obj;
}

Now let's look at the other piece the function above relies on: argument parsing. I won't trace it step by step here.

torch/csrc/utils/python_arg_parser.h

template<int N>
inline PythonArgs PythonArgParser::parse(PyObject* args, PyObject* kwargs, ParsedArgs<N>& dst) {
  if (N < max_args) {
    throw ValueError("PythonArgParser: dst ParsedArgs buffer does not have enough capacity, expected %d (got %d)",
        (int)max_args, N);
  }
  return raw_parse(args, kwargs, dst.args);
}

Call chain: parse ===>>> raw_parse ===>>> FunctionSignature::parse.

In FunctionSignature::parse you can see that, when a tensor is created, the arguments (the tensor's constructor arguments) are stored into dst[].

torch/csrc/utils/python_arg_parser.cpp


bool FunctionSignature::parse(PyObject* args, PyObject* kwargs, PyObject* dst[],
                              bool raise_exception) {
  auto nargs = PyTuple_GET_SIZE(args);
  ssize_t remaining_kwargs = kwargs ? PyDict_Size(kwargs) : 0;
  ssize_t arg_pos = 0;
  bool allow_varargs_intlist = false;

  // if there is a single positional IntList argument, i.e. expand(..), view(...),
  // allow a var-args style IntList, so expand(5,3) behaves as expand((5,3))
  if (max_pos_args == 1 && params[0].type_ == ParameterType::INT_LIST) {
    allow_varargs_intlist = true;
  }

  if (nargs > max_pos_args && !allow_varargs_intlist) {
    if (raise_exception) {
      // foo() takes takes 2 positional arguments but 3 were given
      extra_args(*this, nargs);
    }
    return false;
  }

  int i = 0;
  for (auto& param : params) {
    PyObject* obj = nullptr;
    bool is_kwd = false;
    if (arg_pos < nargs) {
      // extra positional args given after single positional IntList arg
      if (param.keyword_only) {
        if (raise_exception) {
          extra_args(*this, nargs);
        }
        return false;
      }
      obj = PyTuple_GET_ITEM(args, arg_pos);
    } else if (kwargs) {
      obj = PyDict_GetItem(kwargs, param.python_name);
      is_kwd = true;
    }

    if ((!obj && param.optional) || (obj == Py_None && param.allow_none)) {
      dst[i++] = nullptr;
    } else if (!obj) {
      if (raise_exception) {
        // foo() missing 1 required positional argument: "b"
        missing_args(*this, i);
      }
      return false;
    } else if (param.check(obj)) {
      dst[i++] = obj;
    // XXX: the Variable check is necessary because sizes become tensors when
    // tracer is enabled. This behavior easily leads to ambiguities, and we
    // should avoid having complex signatures that make use of it...
    } else if (allow_varargs_intlist && arg_pos == 0 && !is_kwd &&
               THPUtils_checkIndex(obj)) {
      // take all positional arguments as this parameter
      // e.g. permute(1, 2, 3) -> permute((1, 2, 3))
      dst[i++] = args;
      arg_pos = nargs;
      continue;
    } else if (raise_exception) {
      if (is_kwd) {
        // foo(): argument 'other' must be str, not int
        throw TypeError("%s(): argument '%s' must be %s, not %s",
            name.c_str(), param.name.c_str(), param.type_name().c_str(),
            Py_TYPE(obj)->tp_name);
      } else {
        // foo(): argument 'other' (position 2) must be str, not int
        throw TypeError("%s(): argument '%s' (position %d) must be %s, not %s",
            name.c_str(), param.name.c_str(), arg_pos + 1,
            param.type_name().c_str(), Py_TYPE(obj)->tp_name);
      }
    } else {
      return false;
    }

    if (!is_kwd) {
      arg_pos++;
    } else if (obj) {
      remaining_kwargs--;
    }
  }

  if (remaining_kwargs > 0) {
    if (raise_exception) {
      // foo() got an unexpected keyword argument "b"
      extra_kwargs(*this, kwargs, nargs);
    }
    return false;
  }

  return true;
}
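To make the control flow above concrete, here is a much-simplified Python model of what parse does for a single signature; it is an illustrative sketch, not the real code. It walks the declared parameters, takes values from the positional tuple or the keyword dict, and writes the matched objects into dst in parameter order; a lone IntList parameter may also absorb all positional integers, which is how expand(5, 3) comes to behave like expand((5, 3)).

```python
def simple_parse(params, args, kwargs):
    """Toy model of FunctionSignature::parse (not the real code).
    params: list of (name, check, optional) tuples, in declaration order.
    Returns the filled dst list on success, None on mismatch
    (the raise_exception=false path)."""
    kwargs = dict(kwargs or {})
    dst = []
    arg_pos = 0
    # var-args IntList: a lone positional int-list parameter may absorb
    # all positional ints, so expand(5, 3) behaves as expand((5, 3))
    if (len(params) == 1 and params[0][0] == "intlist"
            and args and all(isinstance(a, int) for a in args)):
        return [tuple(args)]
    for name, check, optional in params:
        if arg_pos < len(args):
            obj = args[arg_pos]          # next positional argument
            arg_pos += 1
        elif name in kwargs:
            obj = kwargs.pop(name)       # matched by keyword
        elif optional:
            dst.append(None)
            continue
        else:
            return None                  # missing required argument
        if not check(obj):
            return None                  # type mismatch
        dst.append(obj)
    if kwargs:
        return None                      # unexpected keyword argument
    return dst

is_intlist = lambda o: isinstance(o, (tuple, list))
params = [("intlist", is_intlist, False)]
print(simple_parse(params, (5, 3), None))     # var-args form -> [(5, 3)]
print(simple_parse(params, ((5, 3),), None))  # explicit tuple -> [(5, 3)]
```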



PythonArgs PythonArgParser::raw_parse(PyObject* args, PyObject* kwargs, PyObject* parsed_args[]) {
  if (signatures_.size() == 1) {
    auto& signature = signatures_[0];
    signature.parse(args, kwargs, parsed_args, true);
    return PythonArgs(0, traceable, signature, parsed_args);
  }

  int i = 0;
  for (auto& signature : signatures_) {
    if (signature.parse(args, kwargs, parsed_args, false)) {
      return PythonArgs(i, traceable, signature, parsed_args);
    }
    i++;
  }

  print_error(args, kwargs, parsed_args);
}
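The overload-resolution step in raw_parse can also be modeled in a few lines of Python: try each signature in declaration order, and the first one that parses wins, its index becoming r.idx. In the sketch below the predicates are hypothetical stand-ins for the six "new(...)" signatures of legacy_tensor_ctor; only the branches relevant to this walkthrough are modeled, which is enough to see how torch.Tensor(3, 2) ends up bound to "new(IntList size, ...)" (r.idx == 4).

```python
def raw_parse_model(signatures, args, kwargs):
    """Toy model of PythonArgParser::raw_parse: try each signature in
    declaration order; the index of the first match becomes r.idx."""
    for i, matches in enumerate(signatures):
        if matches(args, kwargs):
            return i
    raise TypeError("new(): invalid arguments")

# Hypothetical stand-in predicates; the unmodeled branches never match here.
signatures = [
    lambda a, k: len(a) == 0,                                        # new(*, Device?)
    lambda a, k: False,                                              # new(Storage)
    lambda a, k: False,                                              # new(int64_t cdata)|hidden
    lambda a, k: False,                                              # new(Tensor other)
    lambda a, k: len(a) > 0 and all(isinstance(x, int) for x in a),  # new(IntList size, ...)
    lambda a, k: len(a) == 1,                                        # new(PyObject* data, ...)
]

print(raw_parse_model(signatures, (3, 2), None))  # torch.Tensor(3, 2) -> 4
print(raw_parse_model(signatures, (), None))      # torch.Tensor()     -> 0
```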

Afterword

Knowing how to do C-level debugging is one way into the PyTorch source. The codebase is fairly large; I can only spare time now and then to read the parts relevant to my own work, and even writing this much up took some effort, so many places here remain rough and I was unable to dig further.
If you are interested in a deeper study of the related algorithms, you can also visit my Zhihu column, where we can learn together:
https://zhuanlan.zhihu.com/spaceai
