tf.logging is used to record information (e.g. parameter values) during model training.
The most commonly used calls:
tf.logging.set_verbosity(tf.logging.INFO)
Sets the log level, controlling which messages get printed to the screen.
tf.logging.info(msg, *args, **kwargs)
Logs a message at INFO level. args fills the placeholders in msg, e.g. info("I have been in love with %s for %d years.", "yichu", 7).
tf.logging.log_every_n( level, msg, n, *args)
This line emits the message once for every n executions; it fires on calls 1, n+1, 2n+1, ….
A worked example:
def main():
    tf.logging.set_verbosity(tf.logging.INFO)
    tf.logging.info("I have been in love with %s for %d years.", "yichu", 7)
    # set_verbosity is set to INFO here, so only the "I have ... years" line
    # above is printed; the "this is a debug info" line below is suppressed.
    # To see both INFO and DEBUG messages, call
    # tf.logging.set_verbosity(tf.logging.DEBUG) instead.
    tf.logging.debug("this is a debug info")
    train_op = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE).\
        minimize(loss_tensor, global_step=tf.train.create_global_step())
    with tf.Session() as sess:
        ...
        tf.logging.log_every_n(tf.logging.INFO, "np.mean(loss_evl)= %f at step %d", 100,
                               np.mean(loss_evl), sess.run(tf.train.get_global_step()))
        # Note: the tf.logging.log_every_n call above prints once every 100
        # executions, producing output like:
        # INFO:tensorflow:np.mean(loss_evl)= 1.396970 at step 1
        # INFO:tensorflow:np.mean(loss_evl)= 1.221397 at step 101
        # INFO:tensorflow:np.mean(loss_evl)= 1.061688 at step 201
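Note that tf.logging was removed in TensorFlow 2.x; tf.get_logger() there returns TensorFlow's underlying Python logger, which is the stdlib logger named "tensorflow". A rough stdlib-only equivalent of the setup above (a sketch, assuming that logger name):

```python
import logging

# In TF 2.x, logger = tf.get_logger() returns this same stdlib logger.
logger = logging.getLogger("tensorflow")
logger.setLevel(logging.INFO)  # replaces tf.logging.set_verbosity(tf.logging.INFO)
if not logger.handlers:
    logger.addHandler(logging.StreamHandler())
logger.info("np.mean(loss_evl)= %f at step %d", 1.396970, 1)
```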