Notes on resource-exhaustion problems with the GPU build of TensorFlow

Problem 1: training and testing models from several programs at the same time throws the error below

Caused by op 'MatMul', defined at:
  File "F:/python/DeepFM/test/cs.py", line 214, in <module>
    y_deep = tf.add(tf.matmul(y_deep, weights["layer_%d" % i]), weights["bias_%d" % i])
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2014, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 4278, in mat_mul
    name=name)
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3414, in create_op
    op_def=op_def)
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1740, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(1000, 60), b.shape=(60, 32), m=1000, n=32, k=60
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_1, Variable/read)]]


Reference solution 1:

https://blog.csdn.net/Vinsuan1993/article/details/81142855
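Independent of the linked post, a common mitigation for a "Blas GEMM launch failed" error when two TensorFlow processes share one GPU is to keep each process from reserving the whole card up front. Below is a minimal sketch, assuming TF 1.x (matching the traceback above); the memory fraction value is only illustrative:

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing the whole card at start-up.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Optionally cap how much of the GPU one process may take (0.4 is illustrative).
config.gpu_options.per_process_gpu_memory_fraction = 0.4

with tf.Session(config=config) as sess:
    # build and run the graph as usual
    pass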


Reference solution 2:

In my case I was simply training a model in Python while a second program loaded and tested the models saved at intervals, so only two programs in total. Since the GPU build of TensorFlow I had installed could not be used to test the model on the CPU from Python, I moved the test calls to Scala instead. The main code is as follows:

import java.io.FileInputStream

import org.apache.commons.io.IOUtils
import org.tensorflow.{Graph, Session, Tensor}

try {
  // Load the frozen graph exported from the Python training side.
  val graph = new Graph()
  val graphBytes = IOUtils.toByteArray(
    new FileInputStream("F:/python/model/deepFM/" + "model.pb"))
  graph.importGraphDef(graphBytes)

  // Create a Session on the imported graph.
  val session = new Session(graph)
  println(session)

  // Buffer that will receive the (5, 1) output tensor.
  val arr = Array.ofDim[Float](5, 1)

  // Feature indices and feature values for five sample rows.
  val a: Array[Array[Int]] = Array(
    Array(81971, 81806, 483217, 483216, 81917, 2),
    Array(81972, 81806, 483217, 483216, 81918, 3),
    Array(81973, 81806, 483217, 483216, 81919, 4),
    Array(148077, 81813, 537857, 483216, 81920, 11772),
    Array(153210, 81813, 537857, 483216, 81920, 2360))
  val b: Array[Array[Float]] = Array(
    Array(1, 1, 1, 0.721348F, 1, 1),
    Array(1, 1, 1, 1.0F, 1, 1),
    Array(1, 1, 1, 0.225091F, 1, 1),
    Array(1, 1, 1, 1.0F, 1, 1),
    Array(1, 1, 1, 1.0F, 1, 1))

  // Equivalent to sess.run(z, feed_dict={...}) in TensorFlow Python.
  val z = session.runner()
    .feed("feat_index", Tensor.create(a))
    .feed("feat_value", Tensor.create(b))
    .fetch("add_out")
    .run().get(0)
  println("z = " + z)
  z.copyTo(arr).foreach(_.foreach(println))

  // Release native resources.
  z.close()
  session.close()
  graph.close()
} catch {
  case e: Exception => e.printStackTrace()
}
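For completeness: the model.pb loaded above has to be a frozen graph exported from the Python training side. Below is a minimal sketch of that export step, assuming TF 1.x; the helper name export_frozen_graph, the sess argument and the path are placeholders of my own, while "add_out" is the node fetched from Scala above:

import tensorflow as tf

def export_frozen_graph(sess, path, output_node="add_out"):
    # Fold the trained variables into constants so the graph is self-contained,
    # then write the resulting GraphDef to disk for the Scala/Java loader.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [output_node])
    with tf.gfile.GFile(path, "wb") as f:
        f.write(frozen.SerializeToString())

# Example call from inside the training script (path is illustrative):
# export_frozen_graph(sess, "F:/python/model/deepFM/model.pb")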
