tensorflow.initialize_all_variables has been replaced by tensorflow.global_variables_initializer()
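A minimal sketch of the rename (note that the returned op still has to be run in a session):

import tensorflow as tf

# v0.11 (removed in v1.0):
# init = tf.initialize_all_variables()
# v1.0:
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # creating the op alone does nothing; it must be run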
AttributeError: module 'tensorflow.python.training.training' has no attribute 'SummaryWriter'
tf.train.SummaryWriter has been removed.
Use tf.summary.FileWriter instead.
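A minimal sketch of the new call ("logs/" is a placeholder log directory):

import tensorflow as tf

sess = tf.Session()
# v0.11 (removed in v1.0):
# writer = tf.train.SummaryWriter("logs/", sess.graph)
# v1.0:
writer = tf.summary.FileWriter("logs/", sess.graph)
writer.close()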
AttributeError: module 'tensorflow' has no attribute 'sub'
Subtraction: tf.sub() has been renamed to tf.subtract().
Reference: http://blog.csdn.net/edwards_june/article/details/65652385
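A quick sketch of the tf.sub rename:

import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(1.5)
# v0.11: diff = tf.sub(a, b)
diff = tf.subtract(a, b)  # v1.0
with tf.Session() as sess:
    print(sess.run(diff))  # 1.5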
The first four errors come from using the V0.11 API on V1.0.
5.1. AttributeError: 'module' object has no attribute 'merge_all_summaries'
>> change tf.merge_all_summaries() to: summary_op = tf.summary.merge_all()
5.2. AttributeError: 'module' object has no attribute 'SummaryWriter'
>> change tf.train.SummaryWriter to: tf.summary.FileWriter
5.3. AttributeError: 'module' object has no attribute 'scalar_summary'
>> change tf.scalar_summary to: tf.summary.scalar
5.4. AttributeError: 'module' object has no attribute 'histogram_summary'
>> change tf.histogram_summary to: tf.summary.histogram (all four renames are combined in the sketch after this list)
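Putting the four renames together, a minimal v1.0 summary pipeline looks roughly like this (a sketch; the random tensor and the 'logs/' directory are placeholders):

import tensorflow as tf

values = tf.random_normal([100])
loss = tf.reduce_mean(values)
tf.summary.scalar('loss', loss)          # was tf.scalar_summary
tf.summary.histogram('values', values)   # was tf.histogram_summary
summary_op = tf.summary.merge_all()      # was tf.merge_all_summaries
with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs/', sess.graph)  # was tf.train.SummaryWriter
    summary = sess.run(summary_op)
    writer.add_summary(summary, 0)
    writer.close()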
The following error comes from using the V1.0 API on V0.11:
File "dis-alexnet_benchmark.py", line 110, in alexnet_v2
biases_initializer=tf.zeros_initializer(),
TypeError: zeros_initializer() takes at least 1 argument (0 given)
>> change biases_initializer=tf.zeros_initializer() to: biases_initializer=tf.zeros_initializer
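A minimal sketch of the two styles with tf.contrib.slim (as in the traceback above; the layer sizes are placeholders):

import tensorflow as tf
slim = tf.contrib.slim

x = tf.placeholder(tf.float32, [None, 4])
# v1.0: tf.zeros_initializer is a class, so it is instantiated:
net = slim.fully_connected(x, 2, biases_initializer=tf.zeros_initializer())
# v0.11: tf.zeros_initializer is a plain function taking a shape, so it is passed uncalled:
# net = slim.fully_connected(x, 2, biases_initializer=tf.zeros_initializer)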
Program
"""
Please note, this code is only for Python 3+. If you are using Python 2, please modify the code accordingly.
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# number 1 to 10 data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
    return result
# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 784]) # 28x28
ys = tf.placeholder(tf.float32, [None, 10])
# add output layer
prediction = add_layer(xs, 784, 10, activation_function=tf.nn.softmax)
# the error between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))  # loss
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
# important step: the init op must actually be run, not just constructed
sess.run(tf.global_variables_initializer())
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
    if i % 50 == 0:
        print(compute_accuracy(
            mnist.test.images, mnist.test.labels))
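To exercise the renamed summary API on this network, one could add lines like the following to the program above (a sketch; 'logs/' is a placeholder directory, and xs, ys, cross_entropy, and sess come from the code):

# after cross_entropy is defined:
tf.summary.scalar('loss', cross_entropy)
merged = tf.summary.merge_all()
# after sess is created:
writer = tf.summary.FileWriter('logs/', sess.graph)
# inside the training loop, e.g. every 50 steps:
# summary = sess.run(merged, feed_dict={xs: batch_xs, ys: batch_ys})
# writer.add_summary(summary, i)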