PyTorch source analysis, C++ debugging (1) - libtorch debugging notes: the most convenient way to step into the torch C++ source

Resources involved

libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip

dendisuhubdy__libtorch_examples

PyTorch v1.9.0 source: commit d69c22dd61a2f006dcfe1e3ea8468a3ecaf931aa (this is exactly the v1.9.0 release)

opencv-4.5.3-vc14_vc15.exe
OpenCV CMake configuration

Development environment: CLion + VS2019 + CMake

Rough walkthrough

libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip
The build-hash file inside the archive contains: d69c22dd61a2f006dcfe1e3ea8468a3ecaf931aa
Looking up that commit d69c22dd61a2f006dcfe1e3ea8468a3ecaf931aa in the pytorch repository shows it is the PyTorch v1.9.0 source.

libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip ships only the .h header files; the .cpp source files are not included.

Single-stepping through mnist.cpp

After a few step-into operations the debugger reaches code that actually lives in a .cpp file, but since no .cpp ships with libtorch, only the disassembly is shown. The screenshot below shows one callq instruction: callq AutogradMetaInterface …
[screenshot: disassembly view with the callq into AutogradMetaInterface]

AutogradMetaInterface actually lives in TensorImpl.cpp, so click "Choose file" in the upper-right corner and select D:\local_external\pytorch\c10\core\TensorImpl.cpp. The debugger's .pdb symbols then map correctly onto the source file in the pytorch checkout, and the rest of the repository's source files get mapped automatically as well. From this point on the PyTorch C++ code can be debugged.

Add a breakpoint at const at::Tensor& TensorImpl::grad() const in TensorImpl.cpp and restart debugging. The debugger stops there, as shown below:
[screenshots: debugger stopped at the breakpoint in TensorImpl::grad()]
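It is not even necessary to run the full mnist example to hit this breakpoint. The following minimal program is an illustrative sketch (not code from the repository): reading .grad() from C++ goes through that accessor, so stepping into the x.grad() line below should land in TensorImpl::grad().

// minimal sketch: build a scalar loss, run backward, then read the gradient
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());  // leaf tensor
  auto loss = (x * x).sum();                             // scalar loss
  loss.backward();                     // runs the autograd engine (engine.cpp)
  std::cout << x.grad() << std::endl;  // stepping in here reaches TensorImpl::grad()
  return 0;
}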

Flow walkthrough

Debugging the loop in backward that traverses the graph nodes
[screenshot: debugger stopped inside the graph-traversal loop]

Call stack of the loop in backward that traverses the graph nodes:

torch::autograd::validate_outputs(const std::vector<torch::autograd::Edge,std::allocator<torch::autograd::Edge> > &,std::vector<at::Tensor,std::allocator<at::Tensor> > &,const std::function<std::basic_string<char,std::char_traits<char>,std::allocator<char> > __cdecl(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &)> &) engine.cpp:626
torch::autograd::call_function(std::shared_ptr<torch::autograd::GraphTask> &,torch::autograd::Node *,torch::autograd::InputBuffer &) engine.cpp:713
torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask> &,torch::autograd::Node *,torch::autograd::InputBuffer &,const std::shared_ptr<torch::autograd::ReadyQueue> &) engine.cpp:761
torch::autograd::Engine::thread_main(const std::shared_ptr<torch::autograd::GraphTask> &) engine.cpp:417
torch::autograd::Engine::execute_with_graph_task(const std::shared_ptr<torch::autograd::GraphTask> &,shared_ptr<torch::autograd::Node>,torch::autograd::InputBuffer &&) engine.cpp:994
torch::autograd::Engine::execute(const std::vector<torch::autograd::Edge,std::allocator<torch::autograd::Edge> > &,const std::vector<at::Tensor,std::allocator<at::Tensor> > &,bool,bool,bool,const std::vector<torch::autograd::Edge,std::allocator<torch::autograd::Edge> > &) engine.cpp:946
torch::autograd::run_backward(const std::vector<at::Tensor,std::allocator<at::Tensor> > &,const std::vector<at::Tensor,std::allocator<at::Tensor> > &,bool,bool,const std::vector<at::Tensor,std::allocator<at::Tensor> > &,bool,bool) autograd.cpp:115
torch::autograd::backward(const std::vector<at::Tensor,std::allocator<at::Tensor> > &,const std::vector<at::Tensor,std::allocator<at::Tensor> > &,optional<bool>,bool,const std::vector<at::Tensor,std::allocator<at::Tensor> > &) autograd.cpp:142
torch::autograd::VariableHooks::_backward(const at::Tensor &,ArrayRef<at::Tensor>,const c10::optional<at::Tensor> &,optional<bool>,bool) variable.cpp:482
at::Tensor::_backward(ArrayRef<at::Tensor>,const c10::optional<at::Tensor> &,optional<bool>,bool) Tensor.cpp:83
at::Tensor::backward(const at::Tensor &,optional<bool>,bool,optional<c10::ArrayRef<at::Tensor> >) TensorBody.h:670
train<torch::data::StatelessDataLoader<torch::data::datasets::MapDataset<torch::data::datasets::MapDataset<torch::data::datasets::MNIST,Normalize>,torch::data::transforms::Stack<torch::data::Example<at::Tensor,at::Tensor> > >,torch::data::samplers::RandomSampler> >(int,const Options &,Net &,Device,torch::data::StatelessDataLoader<torch::data::datasets::MapDataset<torch::data::datasets::MapDataset<torch::data::datasets::MNIST,Normalize>,torch::data::transforms::Stack<torch::data::Example<at::Tensor,at::Tensor> > >,torch::data::samplers::RandomSampler> &,torch::optim::Adam &,unsigned long long) mnist.cpp:27
main(int,const char **) mnist.cpp:107
invoke_main() 0x00007ff64eed2849
__scrt_common_main_seh() 0x00007ff64eed26ee
__scrt_common_main() 0x00007ff64eed25ae
mainCRTStartup(void *) 0x00007ff64eed28de
BaseThreadInitThunk 0x00007ffe53817034
RtlUserThreadStart 0x00007ffe53cc2651
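
At the very bottom of this stack sits the training step of mnist.cpp. As a hedged sketch of what that step looks like (names such as model, optimizer, data and targets are placeholders, not the exact example code), loss.backward() is the call that enters the autograd engine:

#include <torch/torch.h>

int main() {
  torch::nn::Linear model(784, 10);
  torch::optim::Adam optimizer(model->parameters(), torch::optim::AdamOptions(1e-3));

  auto data = torch::randn({64, 784});                       // fake batch
  auto targets = torch::randint(0, 10, {64}, torch::kLong);  // fake labels

  optimizer.zero_grad();
  auto output = torch::log_softmax(model->forward(data), /*dim=*/1);
  auto loss = torch::nll_loss(output, targets);
  loss.backward();   // -> Tensor::backward -> autograd::backward -> Engine::execute
  optimizer.step();
  return 0;
}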
The traversal loop itself lives in Engine::thread_main:

// pytorch/torch/csrc/autograd/engine.cpp
auto Engine::thread_main(const std::shared_ptr<GraphTask>& graph_task) -> void {
//...
// the loop that traverses the computation graph:
  while (graph_task == nullptr || !graph_task->future_result_->completed()) {
  //...
  // Set a breakpoint inside the loop and watch the expression task.fn_.get()
  // to see the node classes. Roughly the following node classes show up:
  /*
	torch::autograd::generated::MulBackward0
	torch::autograd::generated::MkldnnConvolutionBackward
	torch::autograd::AccumulateGrad
	torch::autograd::generated::ReluBackward0
	torch::autograd::generated::MaxPool2DWithIndicesBackward
	torch::autograd::generated::NLLLossBackward  (actually NllLossBackward)
	torch::autograd::generated::LogSoftmaxBackward
	torch::autograd::generated::AddmmBackward
	torch::autograd::generated::TBackward
	torch::autograd::generated::MulBackward0
	torch::autograd::generated::ViewBackward

	The torch::autograd::generated::* ones are clearly generated code.
  */

  //...
  }// end of the traversal loop

//...
}
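
To reproduce a few of these node types without the full mnist model, a tiny graph is enough. This is a hedged sketch (exact node-class names can vary between ops and PyTorch versions): with the breakpoint inside the loop above and task.fn_.get() in the watch window, running backward over this graph should show nodes such as MulBackward0, ReluBackward0, SumBackward0 and AccumulateGrad.

#include <torch/torch.h>

int main() {
  auto w = torch::randn({4, 4}, torch::requires_grad());
  auto y = torch::relu(w * 2.0).sum();   // Mul -> Relu -> Sum
  y.backward();                          // the engine visits MulBackward0 / ReluBackward0 / ... here
  return 0;
}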
Analysis: where do the torch::autograd::generated::* classes come from?

The torch::autograd::generated::* classes live in the libtorch release package, in Functions.h:
libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip\libtorch\include\torch\csrc\autograd\generated\Functions.h

However, these torch::autograd::generated::* classes do not appear anywhere under /torch/csrc/autograd/ in the PyTorch v1.9.0 source tree.

This means torch::autograd::generated::* is generated while building the PyTorch v1.9.0 source (as far as I can tell, they are produced by the autograd code generator under tools/autograd, driven by derivatives.yaml).

The full list of torch::autograd::generated::* classes:



TypeAndSize
AbsBackward
AcosBackward
AddBackward0
AddBackward1
AddbmmBackward
AddcdivBackward
AddcmulBackward
AddmmBackward
SparseAddmmBackward
AddmvBackward
AddrBackward
AffineGridGeneratorBackward
AliasBackward
AngleBackward
AnyBackward0
AnyBackward1
AllBackward0
AllBackward1
AcoshBackward0
AcoshBackward1
AsinhBackward0
AsinhBackward1
AtanhBackward0
AtanhBackward1
AsStridedBackward
AsinBackward
AtanBackward
Atan2Backward
BaddbmmBackward
BernoulliBackward0
BernoulliBackward1
BernoulliBackward2
BmmBackward0
BmmBackward1
CatBackward
CauchyBackward
CeilBackward
CholeskyBackward
LinalgCholeskyExBackward
CholeskySolveBackward
CholeskyInverseBackward
ClampBackward0
ClampBackward1
ClampMinBackward0
ClampMinBackward1
ClampMaxBackward0
ClampMaxBackward1
CloneBackward
CoalesceBackward
ComplexBackward
PolarBackward
ConjBackward
CopysignBackward0
CopysignBackward1
CosBackward
CoshBackward
CrossBackward
LogcumsumexpBackward
CumprodBackward
CumsumBackward
CummaxBackward
CumminBackward
ConvTbcBackward
CtcLossBackward
Deg2RadBackward
LinalgDetBackward
DiagBackward
DiagonalBackward
DistBackward
DivBackward0
DivBackward1
DivBackward2
DivBackward3
DotBackward
VdotBackward
FusedDropoutBackward
EigBackward
EqBackward0
EqBackward1
ErfBackward
ErfcBackward
ErfinvBackward
ExpBackward
Exp2Backward
Expm1Backward
ExpandBackward
ExponentialBackward
FakeQuantizePerTensorAffineCachemaskBackward
FakeQuantizeLearnablePerTensorAffineBackward
FakeQuantizePerChannelAffineCachemaskBackward
FakeQuantizeLearnablePerChannelAffineBackward
FillBackward0
FillBackward1
FloorBackward
FmodBackward0
FmodBackward1
FracBackward
FrexpBackward
GatherBackward
GeBackward0
GeBackward1
GeometricBackward
GeqrfBackward
GerBackward
GridSampler2DBackward
GridSampler3DBackward
GridSampler2DCpuFallbackBackward
GtBackward0
GtBackward1
HardsigmoidBackward
HistcBackward
HardswishBackward
HypotBackward
I0Backward
IgammaBackward
IgammacBackward
IndexBackward
IndexAddBackward
IndexCopyBackward
IndexFillBackward0
IndexFillBackward1
IndexPutBackward
IndexPutImplBackward
IndexSelectBackward
InverseBackward
LinalgInvExBackward
KthvalueBackward
LeBackward0
LeBackward1
LerpBackward0
LerpBackward1
LgammaBackward
DigammaBackward
PolygammaBackward0
PolygammaBackward1
LogBackward
Log10Backward
Log1PBackward
Log2Backward
LogaddexpBackward
Logaddexp2Backward
XlogyBackward0
XlogyBackward1
XlogyBackward2
SpecialXlog1PyBackward0
SpecialXlog1PyBackward1
SpecialXlog1PyBackward2
LogdetBackward
LogNormalBackward
LogsumexpBackward
LstsqBackward
LinalgLstsqBackward
LtBackward0
LtBackward1
LuWithInfoBackward
LuSolveBackward
LuUnpackBackward
MaskedFillBackward0
MaskedFillBackward1
MaskedScatterBackward
MaskedSelectBackward
MatrixExpBackward
MaxBackward0
MaxBackward1
MaximumBackward
FmaxBackward
MeanBackward0
MeanBackward1
MedianBackward0
NanmedianBackward0
MedianBackward1
NanmedianBackward1
MinBackward0
MinBackward1
MinimumBackward
FminBackward
AmaxBackward
AminBackward
MmBackward
ModeBackward
MulBackward0
MulBackward1
MvBackward
MvlgammaBackward
NanToNumBackward
NativeBatchNormBackward
NativeBatchNormBackwardBackward
NativeLayerNormBackward
NativeGroupNormBackward
NeBackward0
NeBackward1
NegBackward
NextafterBackward
NormBackward0
NormBackward1
NormBackward2
NormBackward3
LinalgVectorNormBackward
PdistBackward
PdistBackwardBackward
EuclideanDistBackward
CdistBackward
CdistBackwardBackward
NormalBackward0
NormalBackward1
NormalBackward2
NormalBackward3
LinalgHouseholderProductBackward
OrmqrBackward
PermuteBackward
PoissonBackward
PowBackward0
PowBackward1
PowBackward2
ProdBackward0
ProdBackward1
PutBackward
LinalgQrBackward
Rad2DegBackward
RandomBackward0
RandomBackward1
RandomBackward2
ReciprocalBackward
RemainderBackward0
RemainderBackward1
RenormBackward
RepeatBackward
SpecialEntrBackward
RoundBackward
RsqrtBackward
ScatterBackward0
ScatterBackward1
ScatterAddBackward
SelectBackward
SigmoidBackward
LogitBackward
SignBackward
SgnBackward
SinBackward
SincBackward
SinhBackward
SliceBackward
SlogdetBackward
LinalgSlogdetBackward
SolveBackward
LinalgSolveBackward
SortBackward0
SortBackward1
SplitBackward
UnsafeSplitBackward
SplitWithSizesBackward
UnsafeSplitWithSizesBackward
SqrtBackward
SqueezeBackward0
SqueezeBackward1
SqueezeBackward2
SqueezeBackward3
StdBackward
StdMeanBackward
SubBackward0
SubBackward1
RsubBackward0
RsubBackward1
SumBackward0
SumBackward1
NansumBackward0
NansumBackward1
SvdHelperBackward
SymeigBackward
LinalgEighBackward
LinalgEigBackward
TBackward
FlipBackward
RollBackward
Rot90Backward
TakeBackward
TanBackward
TanhBackward
TopkBackward
TraceBackward
TransposeBackward0
TransposeBackward1
TriangularSolveBackward
TrilBackward
TriuBackward
TruncBackward
ToDenseBackward
ToSparseBackward
ToMkldnnBackward
UnfoldBackward
UnfoldBackwardBackward
UniformBackward
UniqueBackward
UniqueDimBackward
UniqueConsecutiveBackward
UniqueDimConsecutiveBackward
Unique2Backward
UnsafeViewBackward
UnsqueezeBackward0
UnsqueezeBackward1
VarBackward
VarMeanBackward
ViewBackward
ViewAsRealBackward
ViewAsComplexBackward
SWhereBackward
WeightNormCudaInterfaceBackward
ZeroBackward
SparseMaskBackward
SparseCooTensorWithDimsAndTensorsBackward
SparseSumBackward
StandardGammaBackward
StandardGammaGradBackward
ValuesBackward
TrilinearBackward
ConstantPadNdBackward
BinaryCrossEntropyBackward
BinaryCrossEntropyBackwardBackward
BinaryCrossEntropyWithLogitsBackward
EmbeddingBackward
EmbeddingDenseBackwardBackward
EmbeddingBagBackward
EmbeddingRenormBackward
KlDivBackward
L1LossBackward
MseLossBackward
MultiMarginLossBackward
MultilabelMarginLossBackward
NllLossBackward
NllLoss2DBackward
SmoothL1LossBackward
HuberLossBackward
SoftMarginLossBackward
ReluBackward0
ReluBackward1
SiluBackward
MishBackward
EluBackward0
EluBackward1
CeluBackward0
CeluBackward1
GeluBackward
GluBackward
HardshrinkBackward
HardshrinkBackwardBackward
HardtanhBackward0
HardtanhBackward1
LeakyReluBackward0
LeakyReluBackward1
LogSigmoidBackward
LogSoftmaxBackward
SparseLogSoftmaxBackward
PreluBackward
PreluBackwardBackward
RreluWithNoiseBackward0
RreluWithNoiseBackward1
SoftmaxBackward
SparseSoftmaxBackward
SparseSparseMatmulBackward
SoftplusBackward
SoftshrinkBackward
ThresholdBackward0
ThresholdBackward1
ReflectionPad1DBackward
ReflectionPad2DBackward
ReplicationPad1DBackward
ReplicationPad2DBackward
ReplicationPad3DBackward
UpsampleLinear1DBackward0
UpsampleBilinear2DBackward0
UpsampleBicubic2DBackward0
UpsampleTrilinear3DBackward0
UpsampleNearest1DBackward0
UpsampleNearest2DBackward0
UpsampleNearest3DBackward0
UpsampleLinear1DBackward1
UpsampleBilinear2DBackward1
UpsampleTrilinear3DBackward1
UpsampleBicubic2DBackward1
UpsampleNearest1DBackward1
UpsampleNearest2DBackward1
UpsampleNearest3DBackward1
AdaptiveAvgPool2DBackward
AdaptiveAvgPool3DBackward
AdaptiveMaxPool2DBackward
AdaptiveMaxPool3DBackward
AvgPool2DBackward
AvgPool3DBackward
FractionalMaxPool2DBackward
FractionalMaxPool3DBackward
MaxPool2DWithIndicesBackward
MaxPool3DWithIndicesBackward
MaxUnpool2DBackward
MaxUnpool3DBackward
ConvolutionOverrideableBackward
ConvolutionBackwardOverrideableBackward
SlowConvTranspose2DBackward
SlowConvTranspose2DBackwardBackward
SlowConvTranspose3DBackward
SlowConvTranspose3DBackwardBackward
ThnnConv2DBackward
ThnnConv2DBackwardBackward
ThnnConvDepthwise2DBackward
ThnnConvDepthwise2DBackwardBackward
ConvDepthwise3DBackward
ConvDepthwise3DBackwardBackward
SlowConv3DBackward
SlowConv3DBackwardBackward
SlowConvDilated2DBackward
SlowConvDilated2DBackwardBackward
SlowConvDilated3DBackward
SlowConvDilated3DBackwardBackward
Col2ImBackward
Im2ColBackward
Im2ColBackwardBackward
Col2ImBackwardBackward
AdaptiveAvgPool2DBackwardBackward
AdaptiveAvgPool3DBackwardBackward
AdaptiveMaxPool2DBackwardBackward
AdaptiveMaxPool3DBackwardBackward
AvgPool2DBackwardBackward
AvgPool3DBackwardBackward
EluBackwardBackward
FractionalMaxPool2DBackwardBackward
FractionalMaxPool3DBackwardBackward
GluBackwardBackward
HardtanhBackwardBackward
KlDivBackwardBackward
L1LossBackwardBackward
LogSigmoidBackwardBackward
LogSoftmaxBackwardDataBackward
LeakyReluBackwardBackward
MaxPool2DWithIndicesBackwardBackward
MaxPool3DWithIndicesBackwardBackward
MaxUnpool2DBackwardBackward
MseLossBackwardBackward
NllLossBackwardBackward
NllLoss2DBackwardBackward
RreluWithNoiseBackwardBackward
ReflectionPad1DBackwardBackward
ReflectionPad2DBackwardBackward
ReplicationPad1DBackwardBackward
ReplicationPad2DBackwardBackward
ReplicationPad3DBackwardBackward
SmoothL1LossBackwardBackward
HuberLossBackwardBackward
SoftplusBackwardBackward
SoftmaxBackwardDataBackward
SoftMarginLossBackwardBackward
SoftshrinkBackwardBackward
ThresholdBackwardBackward
UpsampleLinear1DBackwardBackward0
UpsampleBilinear2DBackwardBackward0
UpsampleBicubic2DBackwardBackward0
UpsampleTrilinear3DBackwardBackward0
UpsampleNearest1DBackwardBackward0
UpsampleNearest2DBackwardBackward0
UpsampleNearest3DBackwardBackward0
UpsampleLinear1DBackwardBackward1
UpsampleBilinear2DBackwardBackward1
UpsampleTrilinear3DBackwardBackward1
UpsampleBicubic2DBackwardBackward1
UpsampleNearest1DBackwardBackward1
UpsampleNearest2DBackwardBackward1
UpsampleNearest3DBackwardBackward1
SigmoidBackwardBackward
TanhBackwardBackward
CudnnCtcLossBackward
CudnnConvolutionTransposeBackward
CudnnConvolutionTransposeBackwardBackward
CudnnConvolutionBackward
CudnnConvolutionBackwardBackward
CudnnGridSamplerBackward
CudnnAffineGridGeneratorBackward
CudnnBatchNormBackward
CudnnBatchNormBackwardBackward
NnpackSpatialConvolutionBackward
CudnnRnnBackward
CudnnRnnBackwardBackward
MiopenConvolutionTransposeBackward
MiopenConvolutionTransposeBackwardBackward
MiopenConvolutionBackward
MiopenConvolutionBackwardBackward
MiopenDepthwiseConvolutionBackward
MiopenDepthwiseConvolutionBackwardBackward
MiopenBatchNormBackward
MiopenBatchNormBackwardBackward
MiopenRnnBackward
MkldnnConvolutionBackward
MkldnnConvolutionBackwardBackward
MkldnnLinearBackward
MkldnnMaxPool2DBackward
MkldnnMaxPool3DBackward
MkldnnAdaptiveAvgPool2DBackward
MkldnnReshapeBackward
FftR2CBackward
FftC2RBackward
FftC2CBackward
UnbindBackward
StackBackward
ThnnFusedLstmCellBackward
ThnnFusedGruCellBackward
PackPaddedSequenceBackward
SegmentReduceBackward


TODO: a file named roughly Functions.cpp, generated when building the corresponding torch source, is still needed to go along with this debugging.
// libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip
// (https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip)
// \libtorch\include\torch\csrc\autograd\generated\Functions.h

// ...
struct TORCH_API AddmmBackward : public TraceableFunction {
  using TraceableFunction::TraceableFunction;
  variable_list apply(variable_list&& grads) override;
  // The implementation of apply() is also generated while building the pytorch
  // source; see the note below.
  //...
};
//...

The implementation of apply() for AddmmBackward in libtorch-win-shared-with-deps-debug-1.9.0+cpu.zip\libtorch\include\torch\csrc\autograd\generated\Functions.h should likewise be generated when the pytorch source is built, so a full build of the pytorch source should produce something like a Functions.cpp.
That file is certainly not in the libtorch release package, because the package only ships the library compiled from it.
So the matching branch of the pytorch source still has to be built once to obtain Functions.cpp and support debugging at this level.
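
Until that build is available, the following is an illustrative sketch of the math that the generated AddmmBackward::apply in Functions.cpp implements for addmm(self, mat1, mat2, beta, alpha) = beta*self + alpha*(mat1 @ mat2). It is not the generated code itself (which also handles saved variables, undefined gradients, broadcasting of self, and so on), and the helper name addmm_backward_sketch is made up for this example:

#include <torch/torch.h>
#include <iostream>

struct AddmmGrads {
  torch::Tensor grad_self, grad_mat1, grad_mat2;
};

// gradients of addmm w.r.t. its three tensor inputs, given the incoming grad
AddmmGrads addmm_backward_sketch(const torch::Tensor& grad,
                                 const torch::Tensor& mat1,
                                 const torch::Tensor& mat2,
                                 double beta, double alpha) {
  AddmmGrads g;
  g.grad_self = grad * beta;                // d out / d self
  g.grad_mat1 = grad.mm(mat2.t()) * alpha;  // d out / d mat1
  g.grad_mat2 = mat1.t().mm(grad) * alpha;  // d out / d mat2
  return g;
}

int main() {
  auto self = torch::randn({3, 5}, torch::requires_grad());
  auto mat1 = torch::randn({3, 4}, torch::requires_grad());
  auto mat2 = torch::randn({4, 5}, torch::requires_grad());

  // autograd path: this backward pass goes through the generated AddmmBackward node
  torch::addmm(self, mat1, mat2).sum().backward();

  // manual path: grad of sum() w.r.t. the addmm output is all ones
  auto g = addmm_backward_sketch(torch::ones({3, 5}), mat1, mat2, /*beta=*/1.0, /*alpha=*/1.0);
  std::cout << torch::allclose(g.grad_mat1, mat1.grad()) << std::endl;  // expect 1
  return 0;
}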
