Table of Contents
- TensorFlow 2.1 Error Roundup
- RuntimeError: `loss` passed to Optimizer.compute_gradients should be a function when eager execution is enabled.
- RuntimeError: Attempting to capture an EagerTensor without building a function.
- RuntimeError: When eager execution is enabled, `var_list` must specify a list or dict of variables to save
- tf.compat.v1.placeholder(tf.float32, shape=(1024, 1024))
- RuntimeError: tf.placeholder() is not compatible with eager execution.
- AttributeError: module 'tensorflow' has no attribute 'random_normal'
- AttributeError: module 'tensorflow' has no attribute 'global_variables_initializer'
- AttributeError: module 'tensorflow' has no attribute 'Session'
- AttributeError: module 'tensorflow' has no attribute 'assign'
- AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'Saver'
- AttributeError: module 'tensorflow' has no attribute 'placeholder'
- AttributeError: module 'tensorflow' has no attribute 'mul'
- AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'SummaryWriter'
- ModuleNotFoundError: No module named 'tensorflow.examples.tutorials'
- input_data.read_data_sets() fails to download the dataset
- AttributeError: module 'tensorflow' has no attribute 'log'
- TypeError: reduce_sum() got an unexpected keyword argument 'reduction_indices'
- AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'GradientDescentOptimizer'
- AttributeError: module 'tensorflow' has no attribute 'InteractiveSession'
- ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`
- AttributeError: module 'tensorflow' has no attribute 'Print'
- AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'AdamOptimizer'
- NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
- NotFoundError: Key v1_1 not found in checkpoint [[node save_1/RestoreV2 (defined at :7) ]]
- AttributeError: module 'scipy.misc' has no attribute 'imread'
- Depth of input (32) is not a multiple of input depth of filter (3) for 'conv1_1/Conv2D' (op: 'Conv2D') with input shapes: [1,3,24,32], [3,3,3,64].
- TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.
- AttributeError: module 'tensorflow' has no attribute 'variable_scope'
- AttributeError: module 'tensorflow_core._api.v2.nn' has no attribute 'rnn_cell'
- AttributeError: module 'tensorflow_core._api.v2.nn' has no attribute 'rnn'
- AttributeError: module 'tensorflow.python.ops.rnn' has no attribute 'rnn'
- AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'
- Variable basic/rnn/basic_lstm_cell/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope
- Variable rnn/basic_lstm_cell/kernel already exists, disallowed
- ValueError: Variable basic11111/rnn/basic_lstm_cell/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
TensorFlow 2.1 Error Roundup
RuntimeError: loss passed to Optimizer.compute_gradients should be a function when eager execution is enabled.
RuntimeError: Attempting to capture an EagerTensor without building a function.
RuntimeError: When eager execution is enabled, var_list must specify a list or dict of variables to save
- When eager execution is enabled, `loss` should be a Python function.
- In TensorFlow 2.0, eager execution is enabled by default.
- So eager execution needs to be disabled first (a short sketch follows below):
- tf.compat.v1.disable_eager_execution()
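A minimal sketch (the variable names are illustrative) of the graph-mode workaround; alternatively, in eager mode the loss can be passed as a Python callable, which is exactly what the error message asks for:

import tensorflow as tf

# Disable eager execution so the v1-style optimizer accepts a loss tensor
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 1))
w = tf.Variable(0.5, dtype=tf.float32)
loss = tf.reduce_mean(tf.square(x * w - 1.0))   # a tensor is fine in graph mode
train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(loss)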
tf.compat.v1.placeholder(tf.float32, shape=(1024, 1024))
RuntimeError: tf.placeholder() is not compatible with eager execution.
- If you are converting code from TensorFlow v1 to v2, you must go through tf.compat.v1; the placeholder lives at tf.compat.v1.placeholder;
- In TensorFlow 2.0, eager execution is enabled by default, and tf.placeholder() only works in graph mode.
- So eager execution needs to be disabled first (a short sketch follows below):
- tf.compat.v1.disable_eager_execution()
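A minimal sketch (the fed array is illustrative) of using the placeholder once eager execution is disabled:

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(1024, 1024))
y = tf.reduce_sum(x)

with tf.compat.v1.Session() as sess:
    # Feed a concrete array for the placeholder when running the graph
    print(sess.run(y, feed_dict={x: np.ones((1024, 1024), dtype=np.float32)}))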
AttributeError: module 'tensorflow' has no attribute 'random_normal'
tf.random_normal → tf.random.normal
AttributeError: module 'tensorflow' has no attribute 'global_variables_initializer'
tf.global_variables_initializer() → tf.compat.v1.global_variables_initializer()
AttributeError: module 'tensorflow' has no attribute 'Session'
tf.Session() → tf.compat.v1.Session()
AttributeError: module 'tensorflow' has no attribute 'assign'
tf.assign() → tf.compat.v1.assign()
AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'Saver'
tf.train.Saver() → tf.compat.v1.train.Saver()
AttributeError: module 'tensorflow' has no attribute 'placeholder'
tf.placeholder(tf.float32) → tf.compat.v1.placeholder(tf.float32)
AttributeError: module 'tensorflow' has no attribute 'mul'
tf.mul(input1, input2) → tf.multiply(input1, input2)
AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'SummaryWriter'
tf.train.SummaryWriter("./tmp", sess.graph) → tf.compat.v1.summary.FileWriter("./tmp", sess.graph)
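For reference, a minimal sketch (variable names are illustrative) that exercises several of the replacements above in one graph-mode snippet:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.compat.v1.placeholder(tf.float32)            # tf.placeholder → tf.compat.v1.placeholder
w = tf.Variable(tf.random.normal([1]))               # tf.random_normal → tf.random.normal
b = tf.multiply(a, 2.0)                               # tf.mul → tf.multiply
init = tf.compat.v1.global_variables_initializer()   # global_variables_initializer → compat.v1

with tf.compat.v1.Session() as sess:                  # tf.Session → tf.compat.v1.Session
    sess.run(init)
    writer = tf.compat.v1.summary.FileWriter("./tmp", sess.graph)  # SummaryWriter → FileWriter
    print(sess.run(b, feed_dict={a: 3.0}))
    writer.close()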
ModuleNotFoundError: No module named 'tensorflow.examples.tutorials'
Solution 1: fixing the error caused by the missing tutorials folder
- First check whether your tensorflow installation contains tutorials;
- Check the path: …\Lib\site-packages\tensorflow_core\examples;
- The folder only contains the saved_model folder and has no tutorials folder;
- Download the missing files from the TensorFlow GitHub repository
Download link: tensorflow tutorials
- Copy the entire downloaded tutorials folder
- Unzip the downloaded package and locate the tutorials folder;
- Copy it into …\Lib\site-packages\tensorflow_core\examples;
- Load the module again
from tensorflow.examples.tutorials.mnist import input_data
Solution 2
In TensorFlow 2.0 the datasets are integrated into the Keras high-level API; the following code can usually download MNIST directly:
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
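A quick sanity check after loading (the normalization step is an illustrative follow-up, not part of the original fix):

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Pixel values arrive as uint8 in [0, 255]; scale to [0, 1] before training
x_train, x_test = x_train / 255.0, x_test / 255.0
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)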
input_data.read_data_sets() fails to download the dataset
from tensorflow.examples.tutorials.mnist import input_data
print ("Download and Extract MNIST dataset")
mnist = input_data.read_data_sets('data/', one_hot=True)
- The mnist dataset can be downloaded from Yann LeCun's official website;
- Import the mnist dataset downloaded locally; "data/" is the directory where the dataset is stored;
AttributeError: module 'tensorflow' has no attribute 'log'
tf.log() → tf.math.log()
TypeError: reduce_sum() got an unexpected keyword argument 'reduction_indices'
tf.reduce_sum(actv, reduction_indices=1) → tf.reduce_sum(actv, axis=1)
- reduction_indices: the old (deprecated) name for axis.
- keep_dims: deprecated alias for keepdims.
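A tiny sketch (the actv tensor is illustrative) showing the keyword change:

import tensorflow as tf

actv = tf.constant([[0.2, 0.8], [0.6, 0.4]])
# Old: tf.reduce_sum(actv, reduction_indices=1)
row_sums = tf.reduce_sum(actv, axis=1)   # sums each row -> [1.0, 1.0]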
AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'GradientDescentOptimizer'
optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) → optm = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(cost)
AttributeError: module 'tensorflow' has no attribute 'InteractiveSession'
tf.InteractiveSession() → tf.compat.v1.InteractiveSession()
ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`
- This is related to tf.compat.v1.disable_eager_execution(); a fuller sketch of both options follows below.
- Option 1
sess = tf.compat.v1.InteractiveSession()
- Option 2
# SESSION
with tf.compat.v1.Session() as sess:
    sess.run(init)
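A minimal sketch (the tensor t is illustrative) of the two ways to give eval() a session:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
t = tf.constant([1.0, 2.0])

# Option 1: an InteractiveSession installs itself as the default session
sess = tf.compat.v1.InteractiveSession()
print(t.eval())
sess.close()

# Option 2: pass the session explicitly (or use `with sess.as_default():`)
with tf.compat.v1.Session() as sess:
    print(t.eval(session=sess))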
AttributeError: module 'tensorflow' has no attribute 'Print'
tf.Print(a, [a], "a: ") → tf.compat.v1.Print(a, [a], "a: ")
AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'AdamOptimizer'
tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost) → tf.compat.v1.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
NotFoundError: Key v1_1 not found in checkpoint [[node save_1/RestoreV2 (defined at :7) ]]
- Do not run the code repeatedly
- Run saving and restoring separately (see the sketch below)
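A minimal save/restore sketch, assuming a single variable v1 and an illustrative checkpoint path; the save part and the restore part are meant to run separately:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.reset_default_graph()

v1 = tf.compat.v1.get_variable("v1", shape=[3], initializer=tf.zeros_initializer())
saver = tf.compat.v1.train.Saver()

# Save
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    saver.save(sess, "./model.ckpt")

# Restore: the graph must define the same variable names as the checkpoint
with tf.compat.v1.Session() as sess:
    saver.restore(sess, "./model.ckpt")
    print(sess.run(v1))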
AttributeError: module 'scipy.misc' has no attribute 'imread'
scipy.misc.imread(path): imread was deprecated in SciPy 1.0.0 and removed in SciPy 1.2;
- Use import imageio and content_image = imageio.imread() instead (see the sketch below);
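A minimal replacement sketch (the file path is illustrative):

import imageio

# imageio.imread returns a NumPy array, just as scipy.misc.imread did
content_image = imageio.imread("vgg_data/cat.jpg")
print(content_image.shape)  # e.g. (height, width, 3)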
Depth of input (32) is not a multiple of input depth of filter (3) for 'conv1_1/Conv2D' (op: 'Conv2D') with input shapes: [1,3,24,32], [3,3,3,64].
- tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
IMG_PATH = "/vgg_data/cat.png" → IMG_PATH = "/vgg_data/cat.jpg"
- Note the image format: use a jpg; the loaded image must be a 3-channel (H, W, 3) array so that the input depth matches the filter's expected depth of 3 (a pre-processing sketch follows below).
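A hedged pre-processing sketch (the file name and helper steps are illustrative) to make sure the array fed to conv2d is a 3-channel NHWC batch:

import imageio
import numpy as np

img = imageio.imread("vgg_data/cat.png")
if img.ndim == 3 and img.shape[-1] == 4:
    img = img[..., :3]                              # drop the alpha channel of an RGBA png
batch = img.astype(np.float32)[np.newaxis, ...]     # shape (1, H, W, 3) for NHWC conv2d
print(batch.shape)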
TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.
x = tf.split(0, n_steps, x)   # old signature: tf.split(axis, num_or_size_splits, value)
→ x = tf.split(x, n_steps, 0)  # new signature: tf.split(value, num_or_size_splits, axis)
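A tiny sketch (n_steps and the tensor are illustrative) of the new argument order:

import tensorflow as tf

n_steps = 4
x = tf.reshape(tf.range(8.0), (4, 2))
pieces = tf.split(x, n_steps, 0)   # split along axis 0 into 4 tensors of shape (1, 2)
print(len(pieces), pieces[0].shape)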
AttributeError: module 'tensorflow' has no attribute 'variable_scope'
tf.variable_scope → tf.compat.v1.variable_scope
AttributeError: module 'tensorflow_core._api.v2.nn' has no attribute 'rnn_cell'
tf.nn.rnn_cell.BasicLSTMCell() → tf.compat.v1.nn.rnn_cell.BasicLSTMCell()
- or: from tensorflow.python.ops import rnn, rnn_cell
AttributeError: module 'tensorflow_core._api.v2.nn' has no attribute 'rnn'
tf.nn.rnn → from tensorflow.python.ops import rnn, rnn_cell
AttributeError: module 'tensorflow.python.ops.rnn' has no attribute 'rnn'
_LSTM_O, _LSTM_S = rnn.rnn(lstm_cell, _Hsplit, dtype=tf.float32) → _LSTM_O, _LSTM_S = rnn.static_rnn(lstm_cell, _Hsplit, dtype=tf.float32)
AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'
tf.reset_default_graph() → tf.compat.v1.reset_default_graph()
Variable basic/rnn/basic_lstm_cell/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope
The RNN network was called twice, and this error appeared on the second call, because the second call used the same variable names as the first and the names collided. TensorFlow has two ways to create a variable: tf.get_variable() and tf.Variable(). Inside a tf.name_scope(), tf.Variable() automatically uniquifies names, so even if the name argument is identical the resulting variables are distinct objects. A variable created with tf.get_variable() is not affected by the tf.name_scope() name, but if the same name is reused without declaring variable sharing, an error is raised. To share variables, create a scope with the same name using tf.variable_scope(..., reuse=tf.AUTO_REUSE); a fuller sketch follows the snippet below.
# Code that raises the error
with tf.name_scope("fw_side"):
# Change it to
with tf.name_scope("fw_side"), tf.variable_scope("fw_side", reuse=tf.AUTO_REUSE):
Variable rnn/basic_lstm_cell/kernel already exists, disallowed
- Use with tf.variable_scope('scope', reuse=True) to give each RNN model its own variable scope
- Add tf.reset_default_graph() to the code
ValueError: Variable basic11111/rnn/basic_lstm_cell/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
- tf.compat.v1.variable_scope(_name);
- _name was run repeatedly, so the variable already exists;
Option 1
with tf.compat.v1.variable_scope(_name) as scope:
    # share the variables
    scope.reuse_variables()
Option 2
with tf.compat.v1.variable_scope(_name, reuse=tf.compat.v1.AUTO_REUSE) as scope:
This article collects common errors seen under TensorFlow 2.x together with their fixes, covering eager-execution issues, API migration, dataset download failures, variable naming conflicts, and more, as a practical debugging reference for developers.