'import module' or 'from module import'

The difference between 'import module' and 'from module import' is largely subjective. Pick one style and stick with it in your code going forward. Below is a summary of both approaches.

  • import module
    • Pros
      Less to maintain in your import statements: you don't need extra imports to use additional names from the module.
    • Cons
      Referring to everything as module.foo can feel long and repetitive; the verbosity can be reduced with import module as mo and then mo.foo (see the example after this list).
  • from module import foo
    • Pros
      To use foo, you only have to type foo. It also gives finer control over exactly which names from the module you use.
    • Cons
      To use another name from the module, you need yet another import statement. And foo loses its context; module.foo is clearer about where the name comes from.
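
To make the trade-off concrete, here is a minimal sketch (math and sqrt are just stand-ins from the standard library, not anything specific to the discussion above):

# Style 1: import the module and qualify every use.
import math
print(math.sqrt(16.0))   # the origin of sqrt is explicit

# Style 1, shortened with an alias.
import math as m
print(m.sqrt(16.0))

# Style 2: import one name directly.
from math import sqrt
print(sqrt(16.0))        # shorter to type, but the module context is gone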

In short, import module is the clearer style, but when you use a particular name from a module frequently, from module import is a fine choice too.

Reference: http://stackoverflow.com/questions/710551/import-module-or-from-module-import



import mnist_forward fails with a "No module named mnist_forward" error


This is the backpropagation part of an MNIST handwritten-digit recognition example. import mnist_forward fails with a "No module named mnist_forward" error, and I haven't found a solution; any pointers would be appreciated.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_forward
import os

BATCH_SIZE = 200
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
REGULARIZER = 0.0001
STEPS = 50000
MOVING_AVERAGE_DECAY = 0.99
MODEL_SAVE_PATH = "./model/"
MODEL_NAME = "mnist_model"


def backward(mnist):
    x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
    y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE])
    y = mnist_forward.forward(x, REGULARIZER)
    global_step = tf.Variable(0, trainable=False)

    # Cross-entropy loss plus the regularization losses collected in 'losses'.
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cem = tf.reduce_mean(ce)
    loss = cem + tf.add_n(tf.get_collection('losses'))

    # Exponentially decaying learning rate.
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)

    # Maintain exponential moving averages of the trainable variables.
    ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    ema_op = ema.apply(tf.trainable_variables())
    with tf.control_dependencies([train_step, ema_op]):
        train_op = tf.no_op(name='train')

    saver = tf.train.Saver()

    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)

        for i in range(STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run(
                [train_op, loss, global_step], feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME),
                           global_step=global_step)


def main():
    mnist = input_data.read_data_sets("./data/", one_hot=True)
    backward(mnist)


if __name__ == '__main__':
    main()
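
For context on the error itself: mnist_forward is not a pip-installable package but a companion file from the same tutorial, so Python raises "No module named mnist_forward" whenever mnist_forward.py is not in the same directory as this script (or elsewhere on sys.path). Below is a minimal sketch of such a file; only INPUT_NODE, OUTPUT_NODE, and forward(x, regularizer) are actually required by the code above, while the single 500-unit hidden layer and the use of tf.contrib.layers.l2_regularizer are assumptions consistent with a TF 1.x setup.

# mnist_forward.py -- save this next to the training script above.
import tensorflow as tf

INPUT_NODE = 784    # 28 x 28 pixels per MNIST image
OUTPUT_NODE = 10    # ten digit classes
LAYER1_NODE = 500   # assumed hidden-layer width

def get_weight(shape, regularizer):
    w = tf.Variable(tf.truncated_normal(shape, stddev=0.1))
    if regularizer is not None:
        # Register the L2 penalty in the 'losses' collection read by backward().
        tf.add_to_collection('losses',
                             tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    return tf.Variable(tf.zeros(shape))

def forward(x, regularizer):
    w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer)
    b1 = get_bias([LAYER1_NODE])
    y1 = tf.nn.relu(tf.matmul(x, w1) + b1)

    w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer)
    b2 = get_bias([OUTPUT_NODE])
    y = tf.matmul(y1, w2) + b2   # raw logits; softmax is applied in the loss
    return y

With a file like this saved as mnist_forward.py next to the training script, the import should resolve.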
