TensorFlow feature_columns

In summary:

  • A feature_column defines a preprocessing scheme for the data; you can think of it as a format spec. It names a key that is later used to read the corresponding column from the input stream.
  • A feature_column is not a tensor, so when the next step (the model) needs a tensor, the feature_column must first be converted via feature_column.input_layer.

If you feed the feature columns into an Estimator, you do not need to convert them to tensors yourself (see the wide-and-deep example code).
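A minimal sketch (TF 1.x, with hypothetical toy data) of handing feature columns straight to an Estimator, which converts them internally:

import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('x')]

def input_fn():
    # (features, labels) pairs; the dict key 'x' matches the column's key
    return tf.data.Dataset.from_tensor_slices(
        ({'x': [[1.0], [2.0], [3.0], [4.0]]}, [0, 0, 1, 1])).batch(2)

est = tf.estimator.DNNClassifier(hidden_units=[4],
                                 feature_columns=feature_columns)
est.train(input_fn, steps=10)  # no manual input_layer call is needed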

  • input_layer(rawData, featureColumns)
    • The keys in rawData correspond one-to-one with the key of each feature_column in featureColumns, as the sketch below shows.
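A minimal sketch (TF 1.x, hypothetical feature names):

import tensorflow as tf

raw_data = {'age': [[23.0], [31.0]],     # key 'age'
            'price': [[1.5], [2.5]]}     # key 'price'

feature_columns = [
    tf.feature_column.numeric_column('age'),    # reads raw_data['age']
    tf.feature_column.numeric_column('price'),  # reads raw_data['price']
]

dense = tf.feature_column.input_layer(raw_data, feature_columns)
with tf.Session() as sess:
    print(sess.run(dense))  # shape (2, 2): one output column per feature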

References

eat tensorflow2 in 30 days: the most systematic treatment

Tensorflow.feature_column的总结: tests every column type and prints the results; quite intuitive

杨旭东: 基于Tensorflow高阶API构建大规模分布式深度学习模型系列之特征工程Feature Columns
This one goes deeper into how feature_column is used and explains several pitfalls clearly.

  • weighted_categorical_column, shared_column, and sequence_dense_column have not been reviewed yet

Zhihu: 千寻: Tensorflow模型的Feature column是如何处理原始数据的. Excellent; adds many pitfalls and details.

Zhihu: Tensorflow Feature Column Summary

  • The trainable argument of tf.feature_column.input_layer only takes effect for feature columns that can themselves be set trainable, such as tf.feature_column.embedding_column, and the column is trainable only when both flags are True. If either is set to trainable=False, the column's values cannot change during training, i.e. it is not trainable.
  • For other feature columns, e.g. tf.feature_column.indicator_column and tf.feature_column.numeric_column, the trainable argument of tf.feature_column.input_layer has no effect: those columns do not support trainable at all, so no setting on input_layer makes them trainable.
  • Concatenating the output of tf.feature_column.input_layer with other tensors changes nothing: columns that were trainable remain trainable, and those that were not still are not.
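A minimal sketch (TF 1.x, hypothetical feature names) of the embedding_column case:

import tensorflow as tf

cat = tf.feature_column.categorical_column_with_identity('id', num_buckets=10)
# trainable flag #1: on the column itself
emb = tf.feature_column.embedding_column(cat, dimension=4, trainable=True)

features = {'id': [[1], [3]]}
# trainable flag #2: on input_layer; only when both are True does the
# embedding table end up in GraphKeys.TRAINABLE_VARIABLES
out = tf.feature_column.input_layer(features, [emb], trainable=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(tf.trainable_variables())  # contains the embedding weights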

Official documentation

misc

学习TensorFlow中有关特征工程的API

For more on sparse matrices, see section 9.4.17 of 《深度学习之TensorFlow——入门、原理与进阶实战》.

  • SparseTensor is explained quite clearly in the official docs

  • Think of SparseTensor as one way of storing a matrix. A dense matrix can be stored this way too, but with little benefit; the format pays off for matrices where only a small fraction of the entries are non-zero (see the sketch after this list).

  • You can also follow section 7.5 of 《深度学习之TensorFlow工程化项目实战》 and give the word vectors explicit initial values; concrete numbers make the embedding output easier to inspect
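A minimal sketch (TF 1.x): only the coordinates and values of the non-zero entries are stored.

import tensorflow as tf

# Dense form of the matrix below:
# [[1, 0, 0, 0],
#  [0, 0, 2, 0],
#  [0, 0, 0, 0]]
sparse = tf.SparseTensor(indices=[[0, 0], [1, 2]],  # positions of the non-zeros
                         values=[1, 2],
                         dense_shape=[3, 4])

with tf.Session() as sess:
    print(sess.run(tf.sparse.to_dense(sparse)))  # recovers the dense matrix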

Summary

[figure: relationships among all the feature-column classes]

The numeric_column and friends in the figure below actually just return instances of the classes above.

[figure: the factory functions and the class instances they return]

  • The relationship among Categorical Column, Embedding Column, and Indicator Column

    • In short, Indicator Column and Embedding Column cannot act directly on raw features; they act on a Categorical Column, i.e. they are wrappers around a Categorical Column.
      [figure: Indicator/Embedding Column wrapping a Categorical Column]

    After the Categorical Column transformation, the id_tensor looks something like this:
    [figure: example id_tensor]
    After a further Indicator Column transformation it becomes a one-hot vector:
    [figure: the resulting one-hot vector]

    • Indicator Column belongs to DenseColumn; its main purpose is to turn a Categorical Column into a Dense Column, because whatever enters model inference must be a Dense Column.
    • The input of an Embedding Column is a Categorical Column. Could it be an Indicator Column? Semantically yes, but an Embedding Column is itself a wrapper around a Categorical Column, and feeding the Categorical Column directly into the Embedding Column already produces a Dense Column that can be fed to the model.
  • The difference between with_identity and with_hash_bucket Categorical Columns

    • The former's bucket count must be set no smaller than the number of categories; the latter exists precisely to allow mappings with collisions.
      [figure: identity vs. hash_bucket mapping]
  • categorical_column_with_vocabulary_list

    • How the default_value and num_oov_buckets arguments interact
      • default_value: the value used for inputs not in vocabulary_list; in this case num_oov_buckets must be 0.
      • num_oov_buckets: handles values not in vocabulary_list. If it is 0, default_value is used as the fill; if it is greater than 0, the feature is re-hashed into the range [len(vocabulary_list), len(vocabulary_list) + num_oov_buckets).
    • The default value -1 is mapped to the zero vector by an embedding column, which is a very useful property.

You can pad a variable-length ID sequence with -1 to obtain a fixed-length sequence; after the embedding column, the padded -1 entries do not affect the original result, as the sketch below shows.
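A minimal sketch (TF 1.x, hypothetical ids) of that padding trick:

import tensorflow as tf

cat = tf.feature_column.categorical_column_with_identity('ids', num_buckets=5)
emb = tf.feature_column.embedding_column(cat, dimension=3, combiner='sum')

# The second row is padded with -1 up to the fixed length of 3.
features = {'ids': [[1, 2, 4], [3, -1, -1]]}
out = tf.feature_column.input_layer(features, [emb])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Row 2 equals the embedding of id 3 alone: the -1 entries are dropped
    # before the lookup and contribute nothing to the sum.
    print(sess.run(out))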

  • Choosing the dimension of an Embedding Column
    The recommendation is the fourth root of the number of input categories, i.e. dimension ≈ num_categories ** 0.25.
  • crossed_column can also be given a hash_bucket_size; it applies only to sparse features, and what it produces is again a sparse feature.
tf.feature_column.crossed_column(
    keys,
    hash_bucket_size,
    hash_key=None
)
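A minimal sketch (TF 1.x, hypothetical feature names): the crossed result is still sparse, so it is wrapped in an indicator_column before entering input_layer.

import tensorflow as tf

lat = tf.feature_column.categorical_column_with_hash_bucket('lat', hash_bucket_size=10)
lon = tf.feature_column.categorical_column_with_hash_bucket('lon', hash_bucket_size=10)
cross = tf.feature_column.crossed_column([lat, lon], hash_bucket_size=100)

features = {'lat': [['a'], ['b']], 'lon': [['x'], ['y']]}
out = tf.feature_column.input_layer(
    features, [tf.feature_column.indicator_column(cross)])

with tf.Session() as sess:
    sess.run(tf.tables_initializer())
    print(sess.run(out).shape)  # (2, 100): one hashed cross bucket per row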
  • Classes inheriting from Categorical Column have a _get_sparse_tensors method; classes inheriting from Dense Column have a _get_dense_tensor method (Bucketized Column uses multiple inheritance, so it has both). Given transformation_cache data as input, these methods return the corresponding tensor.

input_layer() is in effect a call to the _get_dense_tensor method.

Below are some takeaways from Zhihu: 千寻: Tensorflow模型的Feature column是如何处理原始数据的.

  • Handling multi-valued features [in practice everything boils down to a sum; see the sketch after this list]

    • categorical_column_with_vocabulary_list followed by an Indicator Column: the result is the element-wise sum of the vectors of the individual values [there is no combiner parameter to change this]
    • hashed_column followed by an Indicator Column: likewise the element-wise sum of the individual vectors [no combiner parameter]
    • embedding_column does expose this as a parameter; the default combiner is mean
  • categorical_column_with_hash_bucket

    • Raw values of -1 or the empty string "" are ignored and produce no output.
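A minimal sketch (TF 1.x) of the multi-value sum behavior with indicator_column:

import tensorflow as tf

color = tf.feature_column.categorical_column_with_vocabulary_list(
    'color', ['R', 'G', 'B'])
ind = tf.feature_column.indicator_column(color)

features = {'color': [['R', 'R'], ['G', 'B']]}  # two values per row
out = tf.feature_column.input_layer(features, [ind])

with tf.Session() as sess:
    sess.run(tf.tables_initializer())  # the vocabulary is a lookup table
    print(sess.run(out))
    # [[2. 0. 0.]   'R' appears twice, so its one-hot vector is summed twice
    #  [0. 1. 1.]]  'G' + 'B'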

Gotchas

Feature ordering in tf.feature_column.input_layer

The feature order in the output of tf.feature_column.input_layer(raw_features, feature_columns) does not follow the order in which feature_columns are listed; the columns are sorted lexicographically by name.
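A minimal sketch (TF 1.x, hypothetical column names):

import tensorflow as tf

a = tf.feature_column.numeric_column('a_col')
b = tf.feature_column.numeric_column('b_col')

features = {'a_col': [[1.0]], 'b_col': [[2.0]]}
out = tf.feature_column.input_layer(features, [b, a])  # b listed first

with tf.Session() as sess:
    print(sess.run(out))  # [[1. 2.]]: ordered by column name, not list order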

Annotated code example

import tensorflow as tf
from tensorflow import feature_column
from tensorflow.python.feature_column.feature_column import _LazyBuilder

def test_categorical_column_with_vocabulary_list():
    color_data = {'color': [['R', 'R'], ['G', 'R'], ['B', 'G'], ['A', 'A']]}  # 4 sample rows
    builder = _LazyBuilder(color_data)
    color_column = feature_column.categorical_column_with_vocabulary_list(
        'color', ['R', 'G', 'B'], dtype=tf.string, default_value=-1
    )

    color_column_tensor = color_column._get_sparse_tensors(builder)
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        session.run(tf.tables_initializer())
        print(session.run([color_column_tensor.id_tensor]))

color_column_tensor = color_column._get_sparse_tensors(builder)

For a categorical_column, this line passes the raw data to _get_sparse_tensors to obtain the transformed tensor (feature_column.input_layer() itself also just calls the corresponding _get_sparse_tensors or _get_dense_tensor function).

_LazyBuilder here converts the raw dict-shaped data into the transformed-cache form of input that _get_sparse_tensors expects.

print(session.run([color_column_tensor.id_tensor]))

For initialization, besides session.run(tf.global_variables_initializer()) you also need session.run(tf.tables_initializer()), because the feature_column internals use a lookup table, and its initializer has to be run as well.


You cannot simply session.run([color_column_tensor]) on this line, because color_column._get_sparse_tensors actually returns a CategoricalColumn IdWeightPair.

IdWeightPair is defined as follows.
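A sketch paraphrased from the TF 1.x feature_column source:

import collections

# weight_tensor is optional; it stays None except for weighted columns
IdWeightPair = collections.namedtuple('IdWeightPair',
                                      ['id_tensor', 'weight_tensor'])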

So we must print(color_column_tensor.id_tensor); both print(color_column_tensor) and print(color_column_tensor.weight_tensor) raise an error about accessing None. (The former is not a tensor but a class instance, and sess.run(classInstance) naturally fails; the latter fails because weight_tensor is None in the categorical_column_with_identity case, whereas it would not be None for a weighted_categorical_column.)


input_from_feature_columns

Here, _input_from_feature_columns is the implementation behind input_from_feature_columns.

def _input_from_feature_columns(columns_to_tensors,
                                feature_columns,
                                weight_collections,
                                trainable,
                                scope,
                                output_rank,
                                default_name,
                                cols_to_outs=None):
  """Implementation of `input_from(_sequence)_feature_columns`."""
  columns_to_tensors = columns_to_tensors.copy()
  check_feature_columns(feature_columns)
  if cols_to_outs is not None and not isinstance(cols_to_outs, dict):
    raise ValueError('cols_to_outs must be a dict unless None')

  with variable_scope.variable_scope(scope,
                                     default_name=default_name,
                                     values=columns_to_tensors.values()):
    output_tensors = []
    """_Transformer类以原本的tensor数据作为初始化参数,用于维护其_columns_to_tensors属性"""
    transformer = _Transformer(columns_to_tensors)
    if weight_collections:
      weight_collections = list(set(list(weight_collections) +
                                    [ops.GraphKeys.GLOBAL_VARIABLES]))

    for column in sorted(set(feature_columns), key=lambda x: x.key):
      with variable_scope.variable_scope(None,
                                         default_name=column.name,
                                         values=columns_to_tensors.values()):
		"""根据每种fc的规则去转换对应的原始数据"""
        transformed_tensor = transformer.transform(column)

        if output_rank == 3:
          transformed_tensor = nest.map_structure(
              functools.partial(
                  _maybe_reshape_input_tensor,
                  column_name=column.name,
                  output_rank=output_rank), transformed_tensor)
        try:
          """构建_EmbeddingColumn 的 embedding""" # 上面的函数和这个函数,只会实现一个
          arguments = column._deep_embedding_lookup_arguments(
              transformed_tensor)
          output_tensors.append(
              fc._embeddings_from_arguments(  # pylint: disable=protected-access
                  column,
                  arguments,
                  weight_collections,
                  trainable,
                  output_rank=output_rank))

        except NotImplementedError as ee:
          try:
            """构建_RealValuedColumn 的 tensor"""
            output_tensors.append(column._to_dnn_input_layer(
                transformed_tensor,
                weight_collections,
                trainable,
                output_rank=output_rank))
          except ValueError as e:
            raise ValueError('Error creating input layer for column: {}.\n'
                             '{}, {}'.format(column.name, e, ee))
        if cols_to_outs is not None:
          cols_to_outs[column] = output_tensors[-1]

	"""output_tensors 数组每个元素对应一个 feature column 在该批次训练数据中每个训练实例上生成的 tensor,
	根据 output_rank - 1 对 output_tensor 中的每个 tensor 在指定维度上进行拼接"""
    return array_ops.concat(output_tensors, output_rank - 1)


def input_from_feature_columns(columns_to_tensors,
                               feature_columns,
                               weight_collections=None,
                               trainable=True,
                               scope=None,
                               cols_to_outs=None):
  """A tf.contrib.layers style input layer builder based on FeatureColumns.

  Generally a single example in training data is described with feature columns.
  At the first layer of the model, this column oriented data should be converted
  to a single tensor. Each feature column needs a different kind of operation
  during this conversion. For example sparse features need a totally different
  handling than continuous features.

  Example:

  """python
    # Building model for training
    columns_to_tensor = tf.io.parse_example(...)
    first_layer = input_from_feature_columns(
        columns_to_tensors=columns_to_tensor,
        feature_columns=feature_columns)
    second_layer = fully_connected(inputs=first_layer, ...)
    ...
  """

  where feature_columns can be defined as follows:

  """python
    sparse_feature = sparse_column_with_hash_bucket(
        column_name="sparse_col", ...)
    sparse_feature_emb = embedding_column(sparse_id_column=sparse_feature, ...)
    real_valued_feature = real_valued_column(...)
    real_valued_buckets = bucketized_column(
        source_column=real_valued_feature, ...)

    feature_columns=[sparse_feature_emb, real_valued_buckets]
"""

  Args:
    columns_to_tensors: A mapping from feature column to tensors. 'string' key
      means a base feature (not-transformed). It can have FeatureColumn as a
      key too. That means that FeatureColumn is already transformed by input
      pipeline.
    feature_columns: A set containing all the feature columns. All items in the
      set should be instances of classes derived by FeatureColumn.
    weight_collections: List of graph collections to which weights are added.
    trainable: If `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
    scope: Optional scope for variable_scope.
    cols_to_outs: Optional dict from feature column to output tensor,
      which is concatenated into the returned tensor.

  Returns:
    A Tensor which can be consumed by hidden layers in the neural network.

  Raises:
    ValueError: if FeatureColumn cannot be consumed by a neural network.
  """
  return _input_from_feature_columns(columns_to_tensors,
                                     feature_columns,
                                     weight_collections,
                                     trainable,
                                     scope,
                                     output_rank=2,
                                     default_name='input_from_feature_columns',
                                     cols_to_outs=cols_to_outs)


The _Transformer class used above is attached below.
This class mainly maintains a _columns_to_tensors map whose keys are feature columns or strings and whose values are the processed tensors. When a key is an instance of a feature-column class, that feature has already been transformed according to the column's rules and can simply be reused; when a key is a string, the feature has not been transformed yet (at initialization, all the keys are strings).
When transforming the raw input data for a given feature column, the class in turn calls that feature-column instance's insert_transformed_feature method to process the data.

class _Transformer(object):
  """Handles all the transformations defined by FeatureColumn if needed.

  FeatureColumn specifies how to digest an input column to the network. Some
  feature columns require data transformations. This class handles those
  transformations if they are not handled already.

  Some features may be used in more than one place. For example, one can use a
  bucketized feature by itself and a cross with it. In that case Transformer
  should create only one bucketization op instead of multiple ops for each
  feature column. To handle re-use of transformed columns, Transformer keeps all
  previously transformed columns.

  Example:

  """python
    sparse_feature = sparse_column_with_hash_bucket(...)
    real_valued_feature = real_valued_column(...)
    real_valued_buckets = bucketized_column(source_column=real_valued_feature,
                                            ...)
    sparse_x_real = crossed_column(
        columns=[sparse_feature, real_valued_buckets], hash_bucket_size=10000)

    columns_to_tensor = tf.io.parse_example(...)
    transformer = Transformer(columns_to_tensor)

    sparse_x_real_tensor = transformer.transform(sparse_x_real)
    sparse_tensor = transformer.transform(sparse_feature)
    real_buckets_tensor = transformer.transform(real_valued_buckets)
  """
  """

  def __init__(self, columns_to_tensors):
    """Initializes transformer.

    Args:
      columns_to_tensors: A mapping from feature columns to tensors. 'string'
        key means a base feature (not-transformed). It can have FeatureColumn as
        a key too. That means that FeatureColumn is already transformed by input
        pipeline. For example, `inflow` may have handled transformations.
        Transformed features are inserted in columns_to_tensors.
    """
    self._columns_to_tensors = columns_to_tensors

  def transform(self, feature_column):
    """Returns a Tensor which represents given feature_column.

    Args:
      feature_column: An instance of FeatureColumn.

    Returns:
      A Tensor which represents given feature_column. It may create a new Tensor
      or re-use an existing one.

    Raises:
      ValueError: if FeatureColumn cannot be handled by this Transformer.
    """
    logging.debug('Transforming feature_column %s', feature_column)
    if feature_column in self._columns_to_tensors:
      # Feature_column is already transformed.
      return self._columns_to_tensors[feature_column]

    feature_column.insert_transformed_feature(self._columns_to_tensors)

    if feature_column not in self._columns_to_tensors:
      raise ValueError('Column {} is not supported.'.format(
          feature_column.name))

    return self._columns_to_tensors[feature_column]
