Basic usage of tf.app.run(), tf.logging, and the tf.app.flags mechanism

1. Testing flag input

# file name: tf_app.py
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('string', 'train', 'This is a string')
tf.app.flags.DEFINE_float('learning_rate', 0.001, 'This is the rate in training')
tf.app.flags.DEFINE_boolean('flag', True, 'This is a flag')

print('string: ', FLAGS.string)
print('learning_rate: ', FLAGS.learning_rate)
print('flag: ', FLAGS.flag)


# python3 tf_app.py --string 'test' --learning_rate 0.2 --flag 0

string:  train
learning_rate:  0.001
flag:  True
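
As an aside, this script reads FLAGS at module level, outside of tf.app.run(). If you want to control exactly when the command line is parsed, one option is to parse sys.argv explicitly before the first flag access. A minimal sketch, assuming the absl-backed tf.app.flags of TF 1.x (where the FLAGS object can be called with an argv list); the file name tf_app_parse.py is made up for this example:

import sys
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('string', 'train', 'This is a string')
tf.app.flags.DEFINE_float('learning_rate', 0.001, 'This is the rate in training')
tf.app.flags.DEFINE_boolean('flag', True, 'This is a flag')

# Assumption: FLAGS forwards to an absl FlagValues object, so calling it with
# argv parses the command line and returns any unconsumed arguments.
FLAGS(sys.argv)

print('string: ', FLAGS.string)
print('learning_rate: ', FLAGS.learning_rate)
print('flag: ', FLAGS.flag)

# python3 tf_app_parse.py --string 'test' --learning_rate 0.2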

2. Passing arguments via tf.app.run()

import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('string', 'train', 'This is a string')
tf.app.flags.DEFINE_float('learning_rate', 0.001, 'This is the rate in training')
tf.app.flags.DEFINE_boolean('flag', True, 'This is a flag')


def main(_):
    print('string: ', FLAGS.string)
    print('learning_rate: ', FLAGS.learning_rate)
    print('flag: ', FLAGS.flag)

def test_name(args):
    print('string (from test_name): ', FLAGS.string)
    print('learning_rate (from test_name): ', FLAGS.learning_rate)
    print('flag (from test_name): ', FLAGS.flag)

if __name__ == '__main__':
    # tf.app.run()         # calls main() by default
    tf.app.run(test_name)  # call the named function instead of main()

# python3 tf_run.py --string 'test' --learning_rate 0.2 --flag 0
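
In TF 1.x, tf.app.run() first parses the defined flags out of sys.argv and then calls the chosen function with the leftover arguments (the program name plus anything that was not consumed as a flag). A small sketch to make that visible; the file name tf_run_argv.py and the extra positional arguments are made up for illustration:

import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('string', 'train', 'This is a string')

def main(argv):
    # argv holds the program name plus any arguments not consumed as flags
    print('argv: ', argv)
    print('string: ', FLAGS.string)

if __name__ == '__main__':
    tf.app.run()

# python3 tf_run_argv.py --string test extra1 extra2
# argv is roughly ['tf_run_argv.py', 'extra1', 'extra2'], and FLAGS.string is 'test'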

3. The tf.logging mechanism

tf.logging.set_verbosity() sets the minimum severity that gets printed to the screen:

- tf.logging.set_verbosity(tf.logging.DEBUG): everything is printed (DEBUG and above).
- tf.logging.set_verbosity(tf.logging.INFO): only INFO and above are printed.
- tf.logging.set_verbosity(tf.logging.WARN): only WARN and above are printed.
- tf.logging.set_verbosity(tf.logging.ERROR): only ERROR and above are printed.
- tf.logging.set_verbosity(tf.logging.FATAL): only FATAL messages are printed.

import tensorflow as tf
import numpy as np

tf.logging.set_verbosity(tf.logging.DEBUG)
# tf.logging.set_verbosity(tf.logging.INFO)
# tf.logging.set_verbosity(tf.logging.WARN)
# tf.logging.set_verbosity(tf.logging.ERROR)
# tf.logging.set_verbosity(tf.logging.FATAL)
tf.logging.debug('Test tf logging output image size: %dx%d' % (100, 100))
tf.logging.info('Test tf logging output image size: %dx%d' % (100, 100))
tf.logging.warn('Test tf logging output image size: %dx%d' % (100, 100))
tf.logging.error('Test tf logging output image size: %dx%d' % (100, 100))
tf.logging.fatal('Test tf logging output image size: %dx%d' % (100, 100))

a = np.array([[1, 0, 0], [0, 1, 1]])
a1 = np.array([[3, 2, 3], [4, 5, 6]])

equal_one = tf.equal(a, 1)
equal_one_index = tf.where(equal_one)
new_al = tf.where(tf.equal(a, 1), a1, 1 - a1)

with tf.Session() as sess:
    print('equal_one --------------------------')
    print(equal_one.eval())
    print('equal_one_index --------------------------')
    print(equal_one_index.eval())
    print('new_al --------------------------')
    print(new_al.eval())

Output:

2021-11-19 14:04:05.524456: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-11-19 14:04:05.543600: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2299965000 Hz
2021-11-19 14:04:05.543867: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x28dabf0 executing computations on platform Host. Devices:
2021-11-19 14:04:05.543880: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2021-11-19 14:04:05.545813: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
equal_one --------------------------
[[ True False False]
 [False  True  True]]
equal_one_index --------------------------
[[0 0]
 [1 1]
 [1 2]]
new_al --------------------------
[[ 3 -1 -2]
 [-3  5  6]]
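
The second half of the example is really about tf.equal and the two forms of tf.where: with one argument it returns the coordinates of the True entries, and with three arguments it selects element-wise between two tensors. The same result can be reproduced in plain NumPy, which may make the printed tensors easier to follow (a side-by-side sketch, not from the original post):

import numpy as np

a  = np.array([[1, 0, 0], [0, 1, 1]])
a1 = np.array([[3, 2, 3], [4, 5, 6]])

# one-argument form of tf.where ~ coordinates where the condition holds
print(np.argwhere(a == 1))           # [[0 0] [1 1] [1 2]]

# three-argument form of tf.where ~ pick from a1 where a == 1, else from 1 - a1
print(np.where(a == 1, a1, 1 - a1))  # [[ 3 -1 -2] [-3  5  6]]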

4. The tf.app.flags mechanism

import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('train_directory', './',
                           'Training data directory')
tf.app.flags.DEFINE_string('string', 'train', 'This is a string')
tf.app.flags.DEFINE_float('learning_rate', 0.001, 'This is the rate in training')
tf.app.flags.DEFINE_boolean('flag', True, 'This is a flag')


def main(unuse_args):
    print('train_directory', FLAGS.train_directory)

    print('string: ', FLAGS.string)
    print('learning_rate: ', FLAGS.learning_rate)
    print('flag: ', FLAGS.flag)

if __name__ == '__main__':
    tf.app.run()
# python tf_flag.py --train_directory test1  --string test2 --learning_rate 0.002 --flag T

Output:

WARNING:tensorflow:From tf_flag.py:20: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.

train_directory test1
string:  test2
learning_rate:  0.002
flag:  True
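
The deprecation warning in the output already points at tf.compat.v1.app.run. The same script can also be written directly against absl-py, which is the library tf.app.flags wraps in TF 1.x; a minimal sketch, assuming absl-py is available (it is installed as a TensorFlow dependency), with the file name absl_flag.py made up for this example:

from absl import app, flags

FLAGS = flags.FLAGS

flags.DEFINE_string('train_directory', './', 'Training data directory')
flags.DEFINE_string('string', 'train', 'This is a string')
flags.DEFINE_float('learning_rate', 0.001, 'This is the rate in training')
flags.DEFINE_boolean('flag', True, 'This is a flag')

def main(unused_argv):
    print('train_directory', FLAGS.train_directory)
    print('string: ', FLAGS.string)
    print('learning_rate: ', FLAGS.learning_rate)
    print('flag: ', FLAGS.flag)

if __name__ == '__main__':
    app.run(main)

# python absl_flag.py --train_directory test1 --string test2 --learning_rate 0.002 --flag=True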

Reference: https://blog.csdn.net/u011089570/article/details/99636150
