BERT training process, part 3

Logged parameters

INFO:tensorflow:*** Features ***
INFO:tensorflow:  name = input_ids, shape = (8, 128)
INFO:tensorflow:  name = input_mask, shape = (8, 128)
INFO:tensorflow:  name = masked_lm_ids, shape = (8, 20)
INFO:tensorflow:  name = masked_lm_positions, shape = (8, 20)
INFO:tensorflow:  name = masked_lm_weights, shape = (8, 20)
INFO:tensorflow:  name = next_sentence_labels, shape = (8, 1)
INFO:tensorflow:  name = segment_ids, shape = (8, 128)

INFO:tensorflow:**** Trainable Variables ****

INFO:tensorflow:  name = bert/embeddings/word_embeddings:0, shape = (30522, 768)
INFO:tensorflow:  name = bert/embeddings/token_type_embeddings:0, shape = (2, 768)
INFO:tensorflow:  name = bert/embeddings/position_embeddings:0, shape = (512, 768)
INFO:tensorflow:  name = bert/embeddings/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/embeddings/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_0/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_0/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_0/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_0/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_1/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_1/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_1/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_1/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_2/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_2/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_2/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_2/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_3/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_3/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_3/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_3/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_4/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_4/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_4/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_4/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_5/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_5/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_5/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_5/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_6/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_6/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_6/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_6/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_7/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_7/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_7/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_7/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_8/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_8/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_8/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_8/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_9/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_9/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_9/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_9/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_10/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_10/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_10/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_10/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/self/query/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/self/query/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/self/key/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/self/key/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/self/value/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/self/value/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/output/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/attention/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/intermediate/dense/kernel:0, shape = (768, 3072)
INFO:tensorflow:  name = bert/encoder/layer_11/intermediate/dense/bias:0, shape = (3072,)
INFO:tensorflow:  name = bert/encoder/layer_11/output/dense/kernel:0, shape = (3072, 768)
INFO:tensorflow:  name = bert/encoder/layer_11/output/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/output/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = bert/encoder/layer_11/output/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = bert/pooler/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = bert/pooler/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = cls/predictions/transform/dense/kernel:0, shape = (768, 768)
INFO:tensorflow:  name = cls/predictions/transform/dense/bias:0, shape = (768,)
INFO:tensorflow:  name = cls/predictions/transform/LayerNorm/beta:0, shape = (768,)
INFO:tensorflow:  name = cls/predictions/transform/LayerNorm/gamma:0, shape = (768,)
INFO:tensorflow:  name = cls/predictions/output_bias:0, shape = (30522,)
INFO:tensorflow:  name = cls/seq_relationship/output_weights:0, shape = (2, 768)
INFO:tensorflow:  name = cls/seq_relationship/output_bias:0, shape = (2,)
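Summing the shapes in this log is a quick sanity check on the model size: it recovers BERT-base's roughly 110M parameters. A minimal script (plain Python; every number is hard-coded from the log above):

V, H, P, I, L = 30522, 768, 512, 3072, 12  # vocab, hidden, max positions, intermediate, layers

embeddings = V * H + 2 * H + P * H + 2 * H  # word/type/position embeddings + LayerNorm
per_layer = (
    3 * (H * H + H)   # query/key/value kernels and biases
    + (H * H + H)     # attention output dense
    + 2 * H           # attention output LayerNorm (beta, gamma)
    + (H * I + I)     # intermediate dense
    + (I * H + H)     # output dense
    + 2 * H           # output LayerNorm
)
pooler = H * H + H
cls_heads = (H * H + H) + 2 * H + V + (2 * H + 2)  # MLM transform + LayerNorm + output_bias; NSP weights + bias

print(embeddings + L * per_layer + pooler + cls_heads)  # 110106428, i.e. ~110M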

Normalizing the dataset
The Estimator requires the model's input in a particular format (a dataset built with from_tensor_slices), so the data has to be wrapped accordingly.

"""Creates an `input_fn` closure to be passed to TPUEstimator."""
  def input_fn(params):
    """The actual input function."""
    batch_size = params["batch_size"]  #32
    # tf.FixedLenFeature returns a fixed-length tensor.
    name_to_features = {
        "input_ids":
            tf.FixedLenFeature([max_seq_length], tf.int64),
        "input_mask":
            tf.FixedLenFeature([max_seq_length], tf.int64),
        "segment_ids":
            tf.FixedLenFeature([max_seq_length], tf.int64),
        "masked_lm_positions":
            tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
        "masked_lm_ids":
            tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
        "masked_lm_weights":
            tf.FixedLenFeature([max_predictions_per_seq], tf.float32),
        "next_sentence_labels":
            tf.FixedLenFeature([1], tf.int64),
    }

    # For training, we want a lot of parallel reading and shuffling.
    # For eval, we want no shuffling and parallel reading doesn't matter.
    if is_training:
      # tf.data.Dataset.from_tensor_slices slices the first dimension of the
      # tensor passed in and builds a dataset from the slices. For example,
      # tf.data.Dataset.from_tensor_slices(np.random.uniform(size=(5, 2)))
      # takes a matrix of shape (5, 2) and slices its first dimension, so the
      # resulting dataset has 5 elements, each of shape (2,), i.e. one row of
      # the matrix per element.
      #
      # For more complex cases an element can be a Python tuple or dict; in
      # image recognition an element might look like
      # {"image": image_tensor, "label": label_tensor}. With
      # dataset = tf.data.Dataset.from_tensor_slices(
      #     {"a": np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
      #      "b": np.random.uniform(size=(5, 2))})
      # the values of "a" and of "b" are sliced separately, so one element of
      # the dataset looks like {"a": 1.0, "b": [0.9, 0.1]}. In other words,
      # from_tensor_slices really does slice the first dimension of the input
      # tensors: the first dimension is the number of examples, and batching
      # and similar operations all work on it.
      # (See http://www.cnblogs.com/hellcat/p/8569651.html)
      #
      # repeat() repeats the whole sequence and is used for epochs in machine
      # learning: if the original data is one epoch, repeat(2) makes it two.
      d = tf.data.Dataset.from_tensor_slices(tf.constant(input_files))
      d = d.repeat()
      d = d.shuffle(buffer_size=len(input_files))
      # `cycle_length` is the number of parallel files that get read.
      cycle_length = min(num_cpu_threads, len(input_files))
      # `sloppy` mode means that the interleaving is not exact. This adds
      # even more randomness to the training pipeline.
      d = d.apply(
          tf.contrib.data.parallel_interleave(
              tf.data.TFRecordDataset,
              sloppy=is_training,
              cycle_length=cycle_length))
      d = d.shuffle(buffer_size=100)
    else:
      d = tf.data.TFRecordDataset(input_files)
      # Since we evaluate for a fixed number of steps we don't want to encounter
      # out-of-range exceptions.
      d = d.repeat()
    # We must `drop_remainder` on training because the TPU requires fixed
    # size dimensions. For eval, we assume we are evaluating on the CPU or GPU
    # and we *don't* want to drop the remainder, otherwise we wont cover
    # every sample.
    d = d.apply(
        tf.contrib.data.map_and_batch(
            lambda record: _decode_record(record, name_to_features),  # map_func: maps a nested structure of tensors to another nested structure of tensors.
            batch_size=batch_size,
            num_parallel_batches=num_cpu_threads,
            drop_remainder=True))
    return d
  return input_fn
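To make the from_tensor_slices behavior described in the comments above concrete, here is a small runnable demo (TF 1.x session style to match this post; the array values are arbitrary):

import numpy as np
import tensorflow as tf

# "a" and "b" are sliced along their shared first dimension (size 5), so each
# dataset element is a dict {"a": scalar, "b": row of shape (2,)}.
dataset = tf.data.Dataset.from_tensor_slices({
    "a": np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    "b": np.random.uniform(size=(5, 2)),
})
dataset = dataset.repeat(2)  # 2 epochs: the 5 elements repeated twice

element = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
  for _ in range(3):
    print(sess.run(element))  # e.g. {'a': 1.0, 'b': array([0.9, 0.1])}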

tf.contrib.data.map_and_batch(
    map_func,
    batch_size,
    num_parallel_batches=None,
    drop_remainder=False,
    num_parallel_calls=None
)
Defined in tensorflow/contrib/data/python/ops/batching.py.
A fused implementation of map and batch. map_func runs across batch_size consecutive elements of the dataset, which are then combined into a single batch. Functionally it is equivalent to map followed by batch, but fusing the two transformations lets the implementation be more efficient. Exposing this transformation in the API is temporary: once automatic input-pipeline optimization is implemented, the fusion of map and batch will happen automatically and this API will be deprecated.
Args:
map_func: A function that maps a nested structure of tensors to another nested structure of tensors.
batch_size: A tf.int64 scalar tf.Tensor, the number of consecutive elements of this dataset to combine into a single batch.
num_parallel_batches: (Optional.) A tf.int64 scalar tf.Tensor, the number of batches to create in parallel. On the one hand, higher values can help mitigate the effect of stragglers; on the other hand, they can increase contention when CPU is scarce.
drop_remainder: (Optional.) A tf.bool scalar tf.Tensor, whether the last batch should be dropped if it is smaller than the requested size; by default the smaller batch is kept.
num_parallel_calls: (Optional.) A tf.int32 scalar tf.Tensor, the number of elements to process in parallel. If unspecified, batch_size * num_parallel_batches elements are processed in parallel.
Returns:
A Dataset transformation function that can be passed to tf.data.Dataset.apply.
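Since, as the documentation says, the fused transform is functionally map followed by batch, the call in input_fn could equivalently (if less efficiently) be written unfused. My sketch, not code from the BERT repo, assuming a TF version where Dataset.batch accepts drop_remainder:

# Unfused equivalent of the map_and_batch call in input_fn above.
d = d.map(lambda record: _decode_record(record, name_to_features),
          num_parallel_calls=num_cpu_threads)
d = d.batch(batch_size, drop_remainder=True)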

def _decode_record(record, name_to_features):
  """Decodes a record to a TensorFlow example."""
  example = tf.parse_single_example(record, name_to_features)
  # tf.Example only supports tf.int64, but the TPU only supports tf.int32.
  # So cast all int64 to int32.
  for name in list(example.keys()):
    t = example[name]
    if t.dtype == tf.int64:
      t = tf.to_int32(t)
    example[name] = t
  #print(example)
  return example

#print(example):

{'masked_lm_weights': <tf.Tensor 'ParseSingleExample/ParseSingleExample:4' shape=(20,) dtype=float32>, 'segment_ids': <tf.Tensor 'ToInt32:0' shape=(128,) dtype=int32>, 'masked_lm_positions': <tf.Tensor 'ToInt32_1:0' shape=(20,) dtype=int32>, 'masked_lm_ids': <tf.Tensor 'ToInt32_2:0' shape=(20,) dtype=int32>, 'next_sentence_labels': <tf.Tensor 'ToInt32_3:0' shape=(1,) dtype=int32>, 'input_ids': <tf.Tensor 'ToInt32_4:0' shape=(128,) dtype=int32>, 'input_mask': <tf.Tensor 'ToInt32_5:0' shape=(128,) dtype=int32>}

Related post: "TensorFlow" data reading — tf.data.Dataset

# Takes the final encoder layer of the BERT model and returns the loss and
# log-probability matrix for the masked-word prediction task.
def get_masked_lm_output(bert_config, input_tensor, output_weights, positions,
                         label_ids, label_weights):
  """Get loss and log probs for the masked LM."""
  # input_tensor = model.get_sequence_output(): the final encoder layer,
  #   shape [batch_size, seq_length, hidden_size].
  # output_weights = model.get_embedding_table(): the word-embedding table,
  #   shape [vocab_size, embedding_size].
  # positions = masked_lm_positions
  # label_ids = masked_lm_ids
  # label_weights = masked_lm_weights

  # Gather the encoder outputs at `positions` (the positions to predict).
  input_tensor = gather_indexes(input_tensor, positions)  # [batch_size*max_pred_per_seq, hidden_size]
  # print("input_tensor", input_tensor)  # shape=(640, 768): the 20 masked positions per example

  with tf.variable_scope("cls/predictions"):
    # We apply one more non-linear transformation before the output layer.
    # This matrix is not used after pre-training.
    with tf.variable_scope("transform"):
      input_tensor = tf.layers.dense(  # one more dense layer; output shape [batch_size*max_pred_per_seq, hidden_size]
          input_tensor,
          units=bert_config.hidden_size,
          activation=modeling.get_activation(bert_config.hidden_act),
          kernel_initializer=modeling.create_initializer(
              bert_config.initializer_range))
      input_tensor = modeling.layer_norm(input_tensor)

    # The output weights are the same as the input embeddings, but there is
    # an output-only bias for each token.
    output_bias = tf.get_variable(
        "output_bias",
        shape=[bert_config.vocab_size],
        initializer=tf.zeros_initializer())
    # output_weights is the embedding table; it is transposed in the matmul.
    logits = tf.matmul(input_tensor, output_weights, transpose_b=True)  # [batch_size*max_pred_per_seq, vocab_size]
    logits = tf.nn.bias_add(logits, output_bias)
    log_probs = tf.nn.log_softmax(logits, axis=-1)

    label_ids = tf.reshape(label_ids, [-1])
    label_weights = tf.reshape(label_weights, [-1])

    one_hot_labels = tf.one_hot(
        label_ids, depth=bert_config.vocab_size, dtype=tf.float32)
    #print(one_hot_labels) #bert-master/run_pretraining.py:284
    # The `positions` tensor might be zero-padded (if the sequence is too
    # short to have the maximum number of predictions). The `label_weights`
    # tensor has a value of 1.0 for every real prediction and 0.0 for the
    # padding predictions.
    per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1])
    numerator = tf.reduce_sum(label_weights * per_example_loss)
    denominator = tf.reduce_sum(label_weights) + 1e-5
    loss = numerator / denominator
  return (loss, per_example_loss, log_probs)
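For context, run_pretraining.py's model_fn calls this function on the model outputs roughly as follows (shapes per the comments above):

(masked_lm_loss, masked_lm_example_loss,
 masked_lm_log_probs) = get_masked_lm_output(
     bert_config,
     model.get_sequence_output(),   # [32, 128, 768]
     model.get_embedding_table(),   # [30522, 768]
     masked_lm_positions,           # [32, 20]
     masked_lm_ids,                 # [32, 20]
     masked_lm_weights)             # [32, 20]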

Input: model.get_sequence_output(), the final encoder layer returned by the model, shape [batch_size, seq_length, hidden_size] = [32, 128, 768].
Labels: label_ids.
output_weights is the embedding table.
Comparison against the labels: per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1]), where one_hot_labels has shape (640, 30522).
When masking, why use 80% [MASK], 10% the original (correct) word, and 10% a wrong word?
Why not replace everything with [MASK]? Does the 10% of wrong words have any effect?
"Google finally open-sources the BERT code: 300 million parameters" — full analysis by Synced (机器之心)
Pre-training process. Pre-training is the very core of BERT, and the highlight of the paper. Briefly, the model draws two sentences from the dataset, where sentence B has a 50% probability of being the actual next sentence of A, and converts the pair into the input representation shown earlier. Then 15% of the tokens in the input sequence are randomly masked, and the Transformer is asked to perform two tasks: predict the masked tokens, and predict the probability that B is the sentence that follows A.
For the binary classification task: when a sequence (A+B) is drawn, B is the real next sentence of A with 50% probability. If it is, the pair is labeled "IsNext"; otherwise "NotNext". These labels serve as the ground truth for judging the model's binary predictions.
For the mask prediction task, 15% of the tokens in the sequence are first randomly selected for masking. The masking is not simply replacing tokens with the "[MASK]" symbol, since that would create a mismatch between pre-training and fine-tuning. So once the tokens to be masked are chosen, Google replaces the token with "[MASK]" 80% of the time, with some other arbitrary word 10% of the time, and keeps the original word the remaining 10% of the time.
Original sentence: my dog is hairy
80%: my dog is [MASK]
10%: my dog is apple
10%: my dog is hairy
Note that keeping the original word in the last 10% of cases biases the representation toward the actually observed token, while the 10% replaced by other words does not hurt the model's language understanding, since it affects only 1.5% of all tokens (0.1 × 0.15). The authors also note that because only 15% of tokens are predicted at a time, the model converges rather slowly.
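A minimal sketch of this 80/10/10 replacement rule, mirroring the logic in create_pretraining_data.py (the function name replace_masked_token and the toy vocabulary below are my own, not from the repo):

import random

def replace_masked_token(original_token, vocab_words, rng=random):
  """80%: [MASK]; 10%: keep the original token; 10%: a random vocab word."""
  if rng.random() < 0.8:
    return "[MASK]"
  if rng.random() < 0.5:            # half of the remaining 20%
    return original_token           # keep the original word
  return rng.choice(vocab_words)    # replace with a random word

# e.g. replace_masked_token("hairy", ["my", "dog", "is", "apple", "hairy"])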
Next-sentence prediction
The input is the BERT model's encoding of the [CLS] token, i.e. model.get_pooled_output(); the output is the loss and probability matrix of the next-sentence prediction task.
Labels: 0 means the second sentence really is the next sentence, 1 means it is a random sentence.
Open question: what exactly goes into the Transformer's encoder input and decoder output??
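The corresponding function in run_pretraining.py looks roughly like this (reproduced from the repo, so treat details as approximate; it relies on the tf and modeling imports of that file, and the (2, 768) and (2,) shapes match the cls/seq_relationship variables in the log at the top of this post):

def get_next_sentence_output(bert_config, input_tensor, labels):
  """Get loss and log probs for the next sentence prediction."""
  # input_tensor = model.get_pooled_output(), the [CLS] encoding of shape
  # [batch_size, hidden_size]. Label 0 = is the next sentence, 1 = random.
  with tf.variable_scope("cls/seq_relationship"):
    output_weights = tf.get_variable(
        "output_weights",
        shape=[2, bert_config.hidden_size],
        initializer=modeling.create_initializer(bert_config.initializer_range))
    output_bias = tf.get_variable(
        "output_bias", shape=[2], initializer=tf.zeros_initializer())

    logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
    logits = tf.nn.bias_add(logits, output_bias)
    log_probs = tf.nn.log_softmax(logits, axis=-1)
    labels = tf.reshape(labels, [-1])
    one_hot_labels = tf.one_hot(labels, depth=2, dtype=tf.float32)
    per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
    loss = tf.reduce_mean(per_example_loss)
    return (loss, per_example_loss, log_probs)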
Code


def gather_indexes(sequence_tensor, positions):
  """Gathers the vectors at the specific positions over a minibatch."""
  sequence_shape = modeling.get_shape_list(sequence_tensor, expected_rank=3)
  batch_size = sequence_shape[0] #32
  seq_length = sequence_shape[1] #128
  width = sequence_shape[2] #768
  # tf.range(start, limit, delta), e.g. tf.range(3, 18, 3) -> [3, 6, 9, 12, 15]
  flat_offsets = tf.reshape(
      tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1])  # per-example row offsets
  #print(tf.Session().run(flat_offsets))
  #print(seq_length) #128
  #print('flat_offsets',flat_offsets) #flat_offsets Tensor("Reshape:0", shape=(32, 1), dtype=int32)
  flat_positions = tf.reshape(positions + flat_offsets, [-1])  # add each example's row offset (a multiple of 128) to its 20 positions
  #print((positions + flat_offsets)) #Tensor("add_1:0", shape=(32, 20), dtype=int32)
  #print(positions) #Tensor("IteratorGetNext:3", shape=(32, 20), dtype=int32)
  #print('flat_positions',flat_positions) #flat_positions Tensor("Reshape_1:0", shape=(640,), dtype=int32)
  flat_sequence_tensor = tf.reshape(sequence_tensor,
                                    [batch_size * seq_length, width])  #[32*128,768]
  #print(sequence_tensor) #shape=(32, 128, 768)
  #print(width) #hidden 768
  #print('flat_sequence_tensor',flat_sequence_tensor) #flat_sequence_tensor Tensor("Reshape_2:0", shape=(4096, 768), dtype=float32)
  # tf.gather collects slices from the params axis according to the indices.
  # indices must be an integer tensor of any rank (usually 0-D or 1-D). The
  # output tensor has shape params.shape[:axis] + indices.shape + params.shape[axis + 1:].
  output_tensor = tf.gather(flat_sequence_tensor, flat_positions)
  #print('output_tensor',output_tensor) #output_tensor Tensor("GatherV2:0", shape=(640, 768), dtype=float32)
  # In essence this pulls out the trained vectors at each example's 20 masked positions, giving (32*20, 768).
  return output_tensor

flat_offsets = tf.reshape(
tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1])

[[   0]
 [ 128]
 [ 256]
 [ 384]
 [ 512]
 [ 640]
 [ 768]
 [ 896]
 [1024]
 [1152]
 [1280]
 [1408]
 [1536]
 [1664]
 [1792]
 [1920]
 [2048]
 [2176]
 [2304]
 [2432]
 [2560]
 [2688]
 [2816]
 [2944]
 [3072]
 [3200]
 [3328]
 [3456]
 [3584]
 [3712]
 [3840]
 [3968]]

positions = masked_lm_positions, shape = [32, 20] (20 predictions per example):
[0      + [p1, ..., p20]]
[128    + [p1, ..., p20]]
[128*2  + [p1, ..., p20]]
.
.
[128*31 + [p1, ..., p20]]
sequence_tensor = model.get_sequence_output(), reshaped to [32*128, 768].
Lining these up against sequence_tensor: the first example's 20 masked positions index into the first block of 128 rows of [32*128, 768], the second example's into the second block, and so on.

import tensorflow as tf

temp = tf.range(0, 10) * 10 + tf.constant(1, shape=[10])
temp2 = tf.gather(temp, [1, 5, 9])
with tf.Session() as sess:
  print(sess.run(temp))
  print(sess.run(temp2))

[ 1 11 21 31 41 51 61 71 81 91]
[11 51 91]
get_masked_lm_output() works as follows: the input is the final Transformer output; from it the vectors at the 20 masked positions are gathered, giving a (32*20, 768) input_tensor, which is then multiplied by the Transformer's embedding table.
The result is log_probs of shape (640, 30522) = (32*20, 30522): each token position gets a vector of size 30522, the vocabulary size. This is finally compared against the labels (640,) to compute the loss.
What is label_weights for?

INFO:tensorflow:masked_lm_ids: 1011 1011 2171 2003 6442 1010 6697 1998 2015 8835 1010 2909 25636 4308 1011 1997 2015 1011 13610 0
INFO:tensorflow:masked_lm_weights: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0

For examples with fewer than 20 masked words, the remaining slots are zero-padded, and label_weights is 0 at those padded positions. This keeps the padded predictions out of the loss computation, saves some time, and eases the prediction count during training; it may help accuracy somewhat, though this is just my own understanding.
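A tiny NumPy illustration of this (the loss values are made up; the weight pattern matches the masked_lm_weights log above):

import numpy as np

# Per-prediction losses for one example's 20 masked slots; the last slot is padding.
per_example_loss = np.array([2.3, 1.1, 0.7] + [1.0] * 16 + [5.0])
label_weights = np.array([1.0] * 19 + [0.0])  # 0.0 at the padded slot

numerator = np.sum(label_weights * per_example_loss)  # the padded loss is ignored
denominator = np.sum(label_weights) + 1e-5            # the number of real predictions
print(numerator / denominator)  # mean loss over the 19 real predictions only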
1) tokens: the actual word pieces.
2) input_ids: the tokens converted to their indices in the vocabulary.
3) input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 — length 128. Positions holding real tokens are 1; sequences shorter than 128 are padded with 0.
4) segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0. The leading 0s are sentence A (which can be seen as the question part), the 1s in the middle are sentence B (which can be seen as the answer), and the trailing 0s are padding up to 128.
5) masked_lm_positions: the positions of the masked tokens in the sentence.
6) masked_lm_ids: the vocabulary indices of the masked tokens.
7) masked_lm_weights: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0. The weight is 0.0 wherever there are fewer than 20 masked tokens.
8) next_sentence_labels: whether the pair is a genuine consecutive sentence pair.
