Getting Started with Transfer Learning

Transfer learning: as the name suggests, you take a model that has already been trained, modify it slightly, and transfer it to a new dataset.

First run the data through once to get a hands-on feel for it, then analyze how the code is structured: imitate first, then practice.

Implementation based on TensorFlow:

0. Prepare the data

https://github.com/tensorflow/tensorflow/tree/master/tensorflow (the data is all in there)
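retrain.py expects a directory of training images with one subfolder per class label. A minimal sketch for fetching the flowers dataset used by the official retraining tutorial (the URL is from that tutorial; the destination path is an assumption, adjust as needed):

import os
import tarfile
import urllib.request

# Flowers dataset used by the official TensorFlow retraining tutorial.
DATA_URL = 'http://download.tensorflow.org/example_images/flower_photos.tgz'
DEST_DIR = '/tmp'  # assumed destination; change to taste

archive_path = os.path.join(DEST_DIR, 'flower_photos.tgz')
if not os.path.exists(archive_path):
    urllib.request.urlretrieve(DATA_URL, archive_path)
with tarfile.open(archive_path) as tar:
    tar.extractall(DEST_DIR)
# Resulting layout: /tmp/flower_photos/<label_name>/*.jpg,
# i.e. one subdirectory per class, which is what retrain.py expects.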

1. Load the model to transfer from

   In the past, when building a network we defined every tensor ourselves; here the graph is loaded from a file instead.

import os

import tensorflow as tf
from tensorflow.python.platform import gfile

  with tf.Graph().as_default() as graph:
    model_path = os.path.join(FLAGS.model_dir, model_info['model_file_name'])
    print('Model path: ', model_path)
    # Read the frozen GraphDef from disk and import it, keeping handles to
    # the bottleneck tensor and the resized-input tensor.
    with gfile.FastGFile(model_path, 'rb') as f:
      graph_def = tf.GraphDef()
      graph_def.ParseFromString(f.read())
      bottleneck_tensor, resized_input_tensor = (tf.import_graph_def(
          graph_def,
          name='',
          return_elements=[
              model_info['bottleneck_tensor_name'],
              model_info['resized_input_tensor_name'],
          ]))
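The model_info dict above is produced by retrain.py's create_model_info(). For Inception V3 the relevant entries look roughly like this (values taken from the retrain.py source, abridged):

# From retrain.py's create_model_info('inception_v3') (abridged).
model_info = {
    'data_url': 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz',
    'bottleneck_tensor_name': 'pool_3/_reshape:0',
    'bottleneck_tensor_size': 2048,
    'input_width': 299,
    'input_height': 299,
    'input_depth': 3,
    'resized_input_tensor_name': 'Mul:0',
    'model_file_name': 'classify_image_graph_def.pb',
    'input_mean': 128,
    'input_std': 128,
}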
2. Transfer the data: run each image once through the transferred network to obtain its bottleneck (transferred) representation, then redefine the output layer. (Compared with the usual workflow, this step is the main addition; we no longer define our own hidden layers, and the model already ships with trained parameter values, so effectively only the final output layer is trained, which saves a lot of time and lowers the hardware requirements.)
bottleneck_values = run_bottleneck_on_image(
    sess, image_data, jpeg_data_tensor, decoded_image_tensor,
    resized_input_tensor, bottleneck_tensor)
   The bottleneck outputs are then fed into a fully connected output layer; everything after that follows the basic framework.
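For reference, run_bottleneck_on_image in retrain.py is essentially two sess.run calls: decode and resize the JPEG, then feed the result through the pre-trained network. The sketch below follows the retrain.py source:

import numpy as np

def run_bottleneck_on_image(sess, image_data, image_data_tensor,
                            decoded_image_tensor, resized_input_tensor,
                            bottleneck_tensor):
  # Step 1: decode the JPEG, then resize and rescale the pixel values.
  resized_input_values = sess.run(decoded_image_tensor,
                                  {image_data_tensor: image_data})
  # Step 2: run the preprocessed image through the recognition network.
  bottleneck_values = sess.run(bottleneck_tensor,
                               {resized_input_tensor: resized_input_values})
  return np.squeeze(bottleneck_values)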


3. Set up the network's output layer (the code below is from the TensorFlow retrain.py source; if you prefer, you can also start from the original MNIST classification network code)

def add_final_training_ops(class_count, final_tensor_name, bottleneck_tensor,
                           bottleneck_tensor_size, quantize_layer):
  # Placeholders feeding the new output layer: the cached bottleneck values
  # and the integer class labels.
  with tf.name_scope('input'):
    bottleneck_input = tf.placeholder_with_default(
        bottleneck_tensor,
        shape=[None, bottleneck_tensor_size],
        name='BottleneckInputPlaceholder')
    ground_truth_input = tf.placeholder(
        tf.int64, [None], name='GroundTruthInput')

  # Organizing the following ops as `final_training_ops` so they're easier
  # to see in TensorBoard.
  layer_name = 'final_training_ops'
  with tf.name_scope(layer_name):
    with tf.name_scope('weights'):
      initial_value = tf.truncated_normal(
          [bottleneck_tensor_size, class_count], stddev=0.001)
      layer_weights = tf.Variable(initial_value, name='final_weights')
      if quantize_layer:
        quantized_layer_weights = quant_ops.MovingAvgQuantize(
            layer_weights, is_training=True)
        variable_summaries(quantized_layer_weights)

      variable_summaries(layer_weights)
    with tf.name_scope('biases'):
      layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases')
      if quantize_layer:
        quantized_layer_biases = quant_ops.MovingAvgQuantize(
            layer_biases, is_training=True)
        variable_summaries(quantized_layer_biases)

      variable_summaries(layer_biases)

    with tf.name_scope('Wx_plus_b'):
      if quantize_layer:
        logits = tf.matmul(bottleneck_input,
                           quantized_layer_weights) + quantized_layer_biases
        logits = quant_ops.MovingAvgQuantize(
            logits,
            init_min=-32.0,
            init_max=32.0,
            is_training=True,
            num_bits=8,
            narrow_range=False,
            ema_decay=0.5)
        tf.summary.histogram('pre_activations', logits)
      else:
        logits = tf.matmul(bottleneck_input, layer_weights) + layer_biases
        tf.summary.histogram('pre_activations', logits)

  final_tensor = tf.nn.softmax(logits, name=final_tensor_name)

  tf.summary.histogram('activations', final_tensor)

  with tf.name_scope('cross_entropy'):
    cross_entropy_mean = tf.losses.sparse_softmax_cross_entropy(
        labels=ground_truth_input, logits=logits)

  tf.summary.scalar('cross_entropy', cross_entropy_mean)

  with tf.name_scope('train'):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    train_step = optimizer.minimize(cross_entropy_mean)

  return (train_step, cross_entropy_mean, bottleneck_input, ground_truth_input,
          final_tensor)


4. Train / validate / test. (Just plug into the standard TensorFlow beginner framework.)

optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
train_step = optimizer.minimize(cross_entropy_mean)
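Putting it together, training follows the standard session loop. A minimal sketch, assuming the bottlenecks have been cached and are fetched by retrain.py's get_random_cached_bottlenecks helper (the call below follows the retrain.py source):

with tf.Session(graph=graph) as sess:
  sess.run(tf.global_variables_initializer())
  for i in range(FLAGS.how_many_training_steps):
    # Pull a random batch of cached bottleneck values and their labels.
    (train_bottlenecks,
     train_ground_truth, _) = get_random_cached_bottlenecks(
         sess, image_lists, FLAGS.train_batch_size, 'training',
         FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor,
         decoded_image_tensor, resized_input_tensor, bottleneck_tensor,
         FLAGS.architecture)
    # One gradient-descent step; only the final layer's variables get updated.
    sess.run(train_step,
             feed_dict={bottleneck_input: train_bottlenecks,
                        ground_truth_input: train_ground_truth})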

