Ubuntu 16.04 + MATLAB R2016b: the painful process of building matcaffe

This post documents in detail the process of setting up Matcaffe with MATLAB 2016b on Ubuntu 16.04, including the GCC version conflicts, MATLAB crashes, and other problems encountered along the way and how they were solved, ending with a successful build and run of mattest.


Author: hexin
My luck was bad: it took four full days to get caffe's MATLAB interface (matcaffe) working, and along the way I hit several bugs I had never seen mentioned in any blog post, all of which are written up below. At one point I considered giving up on this machine entirely, but it worked out in the end. I hope anyone in the same situation can get some help from this. (This is a CPU-only build on a personal machine, skipping CUDA, on a freshly installed system.)

- Preparation

  1. Follow https://blog.csdn.net/yhaolpz/article/details/71375762/ to set up the environment. I hit one error along the way; as I recall, I had forgotten to run ldconfig.
  2. make all -j8
  3. make runtest
  4. Once everything passes, back up your files. It is also worth writing a small shell script that runs make clean && make all -j8 && make runtest && make matcaffe && sudo make mattest for later use (I certainly ran it countless times; my luck was that bad).
    Once this passes cleanly, you are ready to build the interface.
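The rebuild one-liner from step 4 can be kept as a small script. A sketch (the targets are caffe's standard Makefile targets; the script name and the dry-run behavior are my own additions — by default it only prints the steps, pass --run to actually execute them from the caffe source root):

```shell
#!/bin/bash
# rebuild.sh -- clean rebuild of caffe plus the MATLAB interface.
# Run from the caffe source root. Dry run by default: pass --run
# to actually execute the make targets; stops at the first failure.
set -e
steps=(
  "make clean"
  "make all -j8"
  "make runtest"
  "make matcaffe"
  "sudo make mattest"
)
for s in "${steps[@]}"; do
  echo "==> $s"
  if [ "$1" = "--run" ]; then
    $s
  fi
done
```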

- Make matcaffe

This step rarely goes wrong.
The build will print a warning about the GCC version (something about GCC 4.9 being the supported version). Ignore it and move on.
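For reference, that warning appears because R2016b's MEX setup officially supports GCC 4.9, while Ubuntu 16.04 ships GCC 5.x. A rough sketch of a check for whether to expect it (the function name is mine, not part of any tool, and it only compares the major version):

```shell
# Report whether the given GCC version is newer than the 4.9 that
# MATLAB R2016b officially supports for MEX -- i.e. whether the
# "unsupported compiler" warning is to be expected.
gcc_mex_check() {
  local have=$1           # a version string such as "5.4.0"
  local major=${have%%.*} # keep only the major version number
  if [ "$major" -gt 4 ]; then
    echo "gcc $have is newer than the supported 4.9: expect the warning"
  else
    echo "gcc $have is within the officially supported range"
  fi
}
# Usage: gcc_mex_check "$(gcc -dumpversion)"
```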

- Make mattest

This is the step that cost me days, with all kinds of errors.
1. GCC version mismatch
Probably everyone hits this one. Let me be emphatic: downgrading GCC does not work! Downgrading does not work! Downgrading does not work!
The approach that worked for me was to symlink MATLAB's bundled libstdc++.so.6 (and a few related libraries) to the system copies (following this page:
https://blog.csdn.net/luanmaqianzhao/article/details/54669860)
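Concretely, the workaround replaces MATLAB's bundled GCC runtime libraries with symlinks to the system copies. A sketch, wrapped in a function so the paths can be checked first (the default MATLAB_ROOT and the exact library list are assumptions from my machine; inspect sys/os/glnxa64 yourself before running, and use sudo if MATLAB is owned by root):

```shell
# Replace MATLAB's bundled libstdc++ (and friends) with symlinks to
# the system copies, keeping .bak backups so the change is reversible.
# The default paths and library names are assumptions -- verify them
# against your own install first.
matlab_use_system_libs() {
  local root=${1:-/usr/local/MATLAB/R2016b}
  local syslib=${2:-/usr/lib/x86_64-linux-gnu}
  local lib
  cd "$root/sys/os/glnxa64" || return 1
  for lib in libstdc++.so.6 libgfortran.so.3; do
    # Only touch real files; skip libraries that are already symlinks.
    if [ -e "$lib" ] && [ ! -L "$lib" ]; then
      mv "$lib" "$lib.bak"
      ln -s "$syslib/$lib" "$lib"
    fi
  done
}
# e.g.: sudo bash -c '. ./linklibs.sh; matlab_use_system_libs'
```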
2. Then came the best part: halfway through mattest, MATLAB crashed outright.

cd matlab; /home/×××/MATLAB/R2016b/bin/matlab -nodisplay -r 'caffe.run_tests(), exit()'

                            < M A T L A B (R) >
                  Copyright 1984-2016 The MathWorks, Inc.
                   R2016b (9.1.0.441655) 64-bit (glnxa64)
                             September 7, 2016


To get started, type one of these: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.

Cleared 0 solvers and 0 stand-alone nets
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0809 20:00:28.987077  8607 net.cpp:51] Initializing net from parameters: 
name: "testnet"
force_backward: true
state {
  phase: TRAIN
  level: 0
}
layer {
  name: "data"
  type: "DummyData"
  top: "data"
  top: "label"
  dummy_data_param {
    data_filler {
      type: "gaussian"
      std: 1
    }
    data_filler {
      type: "constant"
    }
    num: 5
    num: 5
    channels: 2
    channels: 1
    height: 3
    height: 1
    width: 4
    width: 1
  }
}
layer {
  name: "conv"
  type: "Convolution"
  bottom: "data"
  top: "conv"
  param {
    decay_mult: 1
  }
  param {
    decay_mult: 0
  }
  convolution_param {
    num_output: 11
    pad: 3
    kernel_size: 2
    weight_filler {
      type: "gaussian"
      std: 1
    }
    bias_filler {
      type: "constant"
      value: 2
    }
  }
}
layer {
  name: "ip"
  type: "InnerProduct"
  bottom: "conv"
  top: "ip"
  inner_product_param {
    num_output: 13
    weight_filler {
      type: "gaussian"
      std: 2.5
    }
    bias_filler {
      type: "constant"
      value: -3
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip"
  bottom: "label"
  top: "loss"
}
I0809 20:00:28.987859  8607 layer_factory.hpp:77] Creating layer data
I0809 20:00:28.987896  8607 net.cpp:84] Creating Layer data
I0809 20:00:28.987902  8607 net.cpp:380] data -> data
I0809 20:00:28.987916  8607 net.cpp:380] data -> label
I0809 20:00:28.988409  8607 net.cpp:122] Setting up data
I0809 20:00:28.988448  8607 net.cpp:129] Top shape: 5 2 3 4 (120)
I0809 20:00:28.988478  8607 net.cpp:129] Top shape: 5 1 1 1 (5)
I0809 20:00:28.988487  8607 net.cpp:137] Memory required for data: 500
I0809 20:00:28.988494  8607 layer_factory.hpp:77] Creating layer conv
I0809 20:00:28.988508  8607 net.cpp:84] Creating Layer conv
I0809 20:00:28.988517  8607 net.cpp:406] conv <- data
I0809 20:00:28.988543  8607 net.cpp:380] conv -> conv
I0809 20:00:28.988606  8607 net.cpp:122] Setting up conv
I0809 20:00:28.988618  8607 net.cpp:129] Top shape: 5 11 8 9 (3960)
I0809 20:00:28.988626  8607 net.cpp:137] Memory required for data: 16340
I0809 20:00:28.988641  8607 layer_factory.hpp:77] Creating layer ip
I0809 20:00:28.988658  8607 net.cpp:84] Creating Layer ip
I0809 20:00:28.988682  8607 net.cpp:406] ip <- conv
I0809 20:00:28.988706  8607 net.cpp:380] ip -> ip
I0809 20:00:28.988867  8607 net.cpp:122] Setting up ip
I0809 20:00:28.988875  8607 net.cpp:129] Top shape: 5 13 (65)
I0809 20:00:28.988899  8607 net.cpp:137] Memory required for data: 16600
I0809 20:00:28.988907  8607 layer_factory.hpp:77] Creating layer loss
I0809 20:00:28.988917  8607 net.cpp:84] Creating Layer loss
I0809 20:00:28.988925  8607 net.cpp:406] loss <- ip
I0809 20:00:28.988934  8607 net.cpp:406] loss <- label
I0809 20:00:28.988960  8607 net.cpp:380] loss -> loss
I0809 20:00:28.988973  8607 layer_factory.hpp:77] Creating layer loss
I0809 20:00:28.988996  8607 net.cpp:122] Setting up loss
I0809 20:00:28.989008  8607 net.cpp:129] Top shape: (1)
I0809 20:00:28.989017  8607 net.cpp:132]     with loss weight 1
I0809 20:00:28.989040  8607 net.cpp:137] Memory required for data: 16604
I0809 20:00:28.989049  8607 net.cpp:198] loss needs backward computation.
I0809 20:00:28.989058  8607 net.cpp:198] ip needs backward computation.
I0809 20:00:28.989068  8607 net.cpp:198] conv needs backward computation.
I0809 20:00:28.989076  8607 net.cpp:200] data does not need backward computation.
I0809 20:00:28.989085  8607 net.cpp:242] This network produces output loss
I0809 20:00:28.989097  8607 net.cpp:255] Network initialization done.
I0809 20:00:29.033741  8607 net.cpp:51] Initializing net from parameters: 
name: "testnet"
force_backward: true
state {
  phase: TRAIN
  level: 0
}
layer {
  name: "data"
  type: "DummyData"
  top: "data"
  top: "label"
  dummy_data_param {
    data_filler {
      type: "gaussian"
      std: 1
    }
    data_filler {
      type: "constant"
    }
    num: 5
    num: 5
    channels: 2
    channels: 1
    height: