Building Deeper Neural Networks with TensorFlow

1 Building Deeper Neural Networks with TensorFlow

from IPython.display import Image
%matplotlib inline

1.1 Core Features of TensorFlow

TensorFlow's core is the computation graph. The static computation graphs of version 1.x have some advantages, such as backend graph optimizations and support for a wider range of hardware devices. However, a static graph requires separate graph-declaration and graph-execution steps, which makes interactive development and use of NNs cumbersome.

TensorFlow 2 supports dynamic computation graphs, which allow graph declaration and graph execution to be interleaved. This feels much more natural to Python and NumPy users.

Version 1.x behavior is still supported in TensorFlow 2, but it has to be accessed through the tf.compat submodule.
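As a quick illustration of the two modes (a minimal sketch, assuming a TF 2.x installation), eager execution can be verified with tf.executing_eagerly(), and v1-style APIs remain reachable under tf.compat.v1:

import tensorflow as tf

# TF 2.x executes operations eagerly by default
print(tf.executing_eagerly())   # True

# legacy v1 APIs are still available through the compat submodule
print(tf.compat.v1.Session)     # the legacy Session class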

1.1.1 Computation Graphs in TensorFlow 2

TensorFlow's computations are organized as a directed acyclic graph (DAG).

1.1.2 Understanding Computation Graphs by Example

TensorFlow uses the computation graph to derive the relationships between the tensors, from input to output.

For scalars $a, b, c$ (i.e. rank-0 tensors), suppose we want to evaluate the expression:

$z = 2\times(a-b)+c$

The graphical representation of this computation is shown below:

# graphical representation of the computation above
Image(filename='images/01.png', width=500)

[Figure: computation graph for z = 2(a - b) + c]

As shown above, the computation graph is simply a network of nodes. Each node resembles an operation that applies a function to its input tensor(s) and returns zero or more tensors as output. TensorFlow constructs this computation graph and uses it to compute the corresponding gradients.

1.1.3 Creating a Computation Graph in TensorFlow v1.x

In earlier TensorFlow versions, the computation graph had to be declared explicitly. The main steps in v1.x are:

1. Instantiate a new, empty computation graph;

2. Add nodes (i.e. tensors and operations) to the graph;

3. Execute the graph:

- start a new session;

- initialize the variables in the graph;

- run the computation graph in that session.

Below, the computation is implemented in TensorFlow 1.x style, where $a, b, c$ are scalars (single numbers). In TensorFlow 1.x the graph is declared explicitly by calling *tf.Graph()*.

import tensorflow as tf
import numpy as np
import pandas as pd

import matplotlib.pyplot as plt
%matplotlib inline

Note: if a graph is not created explicitly, there is always a default graph to which variables and computations are added automatically.

In TensorFlow v1, a session is an environment in which the operations and tensors of a graph can be executed. TensorFlow v2 removed the Session class.

The following code explicitly creates a computation graph in TensorFlow v1 style and uses the tf.compat submodule so that the v1 code runs under v2.

A session object can be created via tf.compat.v1.Session(), which accepts an existing graph (here g) as an argument.

# TF-v1.x code example

g = tf.Graph() # explicitly define the computation graph by calling tf.Graph()
# add nodes to the graph via g.as_default(), which makes g the default graph
with g.as_default():
    a = tf.constant(1, name='a')
    b = tf.constant(2, name='b')
    c = tf.constant(3, name='c')
    z = 2*(a - b) + c

# execute the v1-style code through tf.compat
with tf.compat.v1.Session(graph=g) as sess:
    print('Result: z =', sess.run(z))
    print('Result: z =', z.eval())
Result: z = 1
Result: z = 1

After launching a graph in a TensorFlow session, we can execute its nodes, that is, evaluate its tensors or run its operators. Evaluating a tensor requires calling its eval() method within the current session. When a particular tensor in the graph is evaluated, TensorFlow has to execute all of the preceding nodes in the graph until it reaches the node of interest. If there are one or more placeholder variables, we also need to provide values for them through the session's run method.

1.1.4 Creating a Computation Graph in TensorFlow v2

TensorFlow v2 uses dynamic computation graphs by default, so operations are executed eagerly. There is no need to explicitly create a graph or a session. The computation above is implemented in v2 as follows:

## TF v2 code example
a = tf.constant(1, name='a')
b = tf.constant(2, name='b')
c = tf.constant(3, name='c')

z = 2*(a - b) + c
# call tf.print()
tf.print('Result: z =', z)
Result: z = 1

1.1.5 TensorFlow v1: Loading Input Data into a Model

Another important improvement from TensorFlow v1 to v2 concerns how data is loaded into a model.

In TensorFlow v2, data can be provided directly as Python variables or NumPy arrays.

In TensorFlow v1, however, placeholders had to be created to feed input data into the model.

For the earlier example $z = 2\times(a-b)+c$, assuming $a, b, c$ are rank-0 tensors, we define three placeholders and then provide the data to the model via a feed_dict.

## TF-v1.x code example
g = tf.Graph()

with g.as_default():
    a = tf.compat.v1.placeholder(shape=None, dtype=tf.int32, name='tf_a')
    b = tf.compat.v1.placeholder(shape=None, dtype=tf.int32, name='tf_b')
    c = tf.compat.v1.placeholder(shape=None, dtype=tf.int32, name='tf_c')
    z = 2*(a - b) + c
    
with tf.compat.v1.Session(graph=g) as sess:
    feed_dict = {a:1, b:2, c:3}
    print('Result: z =', sess.run(z, feed_dict=feed_dict))
Result: z = 1

1.1.6 TensorFlow v2: Loading Input Data into a Model

In TensorFlow v2, the same task only requires defining a regular function, as in the following example:

## TF-v2 code example
def compute_z(a, b, c):
    r1 = tf.subtract(a, b)
    r2 = tf.multiply(2, r1)
    z = tf.add(r2, c)
    return z

"""
Tensorflow函数允许以Tensorflow张量对象、Numpy数组或者其他Python对象(如列表和元组)形式提供更高阶的输入。

如下代码中,分别以0阶,1阶,2阶形式作为输入。
"""

tf.print('Scalar Inputs:', compute_z(1, 2, 3)) # 0阶输入
tf.print('Rank 1 Inputs:', compute_z([1], [2], [3])) # 1阶输入
tf.print('Rank 2 Inputs:', compute_z([[1]], [[2]], [[3]])) # 2阶输入
Scalar Inputs: 1
Rank 1 Inputs: [1]
Rank 2 Inputs: [[1]]

1.1.7 Improving Computational Performance with Function Decorators

All of the code below is written for TensorFlow v2.

Executing the dynamic computation graph directly, as above, is actually not as efficient as the static graphs of v1.

TensorFlow v2 provides a tool called AutoGraph that automatically transforms Python code into TensorFlow graph code, enabling faster execution.

In addition, TensorFlow provides a mechanism that compiles an ordinary Python function into a static computation graph in order to improve computational efficiency.

# annotate the function with @tf.function so that it is compiled into a graph
@tf.function
def compute_z(a, b, c):
    r1 = tf.subtract(a, b)
    r2 = tf.multiply(2, r1)
    z = tf.add(r2, c)
    return z

tf.print('Scalar Inputs:', compute_z(1, 2, 3))
tf.print('Rank 1 Inputs:', compute_z([1], [2], [3]))
tf.print('Rank 2 Inputs:', compute_z([[1]], [[2]], [[3]]))
Scalar Inputs: 1
Rank 1 Inputs: [1]
Rank 2 Inputs: [[1]]

The function can still be called in the same way as before, but TensorFlow will now construct a static graph based on the input arguments.

TensorFlow does this through a tracing mechanism: for a given call it generates a tuple of keys from the function's input signature.

The keys are generated as follows:

- for tf.Tensor arguments, the key is based on their shapes and dtypes;

- for Python types such as lists, their id() is used to generate the cache keys;

- for Python primitive values, the cache keys are based on the input values themselves.

When a decorated function is called, TensorFlow checks whether a graph with the corresponding key has already been generated. If no such graph exists, TensorFlow generates a new graph and stores it under the new key.
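A minimal sketch of this tracing behavior (traced_sum is a hypothetical helper, not from the original text; assumes TF 2.x). The Python-level print only runs while a new graph is being traced, not when a cached graph is re-executed:

@tf.function
def traced_sum(a, b):
    print('Tracing with', a, b)   # executes only during tracing, not on cached calls
    return a + b

traced_sum(tf.constant(1), tf.constant(2))      # first int32 scalar call -> traces a new graph
traced_sum(tf.constant(3), tf.constant(4))      # same shape/dtype key -> reuses the cached graph
traced_sum(tf.constant(1.0), tf.constant(2.0))  # new dtype -> a second trace is created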

On the other hand, when defining the function we can restrict the ways it may be called by specifying its input signature via tf.TensorSpec objects.

Example:

@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),      # tf.TensorSpec specifies rank-1 inputs of dtype tf.int32
                              tf.TensorSpec(shape=[None], dtype=tf.int32),
                              tf.TensorSpec(shape=[None], dtype=tf.int32),))
def compute_z(a, b, c):
    r1 = tf.subtract(a, b)
    r2 = tf.multiply(2, r1)
    z = tf.add(r2, c)
    return z

# call the function with rank-1 inputs, as specified by the signature
tf.print('Rank 1 Inputs:', compute_z([1], [2], [3]))
tf.print('Rank 1 Inputs:', compute_z([1, 2], [2, 4], [3, 6]))
Rank 1 Inputs: [1]
Rank 1 Inputs: [1 2]
# call the function with rank-2 inputs and observe the result
# tf.print('Rank 1 Inputs:', compute_z([[1], [2], [3]]))

"""
Running this immediately fails with the following error:

ValueError: Structure of Python function inputs does not match input_signature:
  inputs: (
    [[1], [2], [3]])
  input_signature: (
    TensorSpec(shape=(None,), dtype=tf.int32, name=None),
    TensorSpec(shape=(None,), dtype=tf.int32, name=None),
    TensorSpec(shape=(None,), dtype=tf.int32, name=None))
"""
# calling the function with rank-0 inputs would fail in the same way
# tf.print('Rank 1 Inputs:', compute_z(1, 2, 3))
tf.TensorSpec(shape=[None], dtype=tf.int32)
TensorSpec(shape=(None,), dtype=tf.int32, name=None)

1.2 TensorFlow Variable Objects for Storing and Updating Model Parameters

In TensorFlow, a Variable is a special kind of tensor that allows us to store and update the model parameters during training.

Variables are created with tf.Variable. Code example:

a = tf.Variable(initial_value=3.14, name='var_a')
b = tf.Variable(initial_value=[1, 2, 3], name='var_b')
c = tf.Variable(initial_value=[True, False], dtype=tf.bool)
d = tf.Variable(initial_value=['abc'], dtype=tf.string)

print(a)
print(b)
print(c)
print(d)
<tf.Variable 'var_a:0' shape=() dtype=float32, numpy=3.14>
<tf.Variable 'var_b:0' shape=(3,) dtype=int32, numpy=array([1, 2, 3])>
<tf.Variable 'Variable:0' shape=(2,) dtype=bool, numpy=array([ True, False])>
<tf.Variable 'Variable:0' shape=(1,) dtype=string, numpy=array([b'abc'], dtype=object)>
print(a.trainable,
b.trainable,
c.trainable,
d.trainable)
True True True True

Note: when creating a variable, an initial value must be specified. Variables have a trainable attribute, which defaults to True.

TensorFlow's high-level APIs rely on this attribute when handling variables. The following example defines a non-trainable variable:

w = tf.Variable([1, 2, 3], trainable=False)

print(w.trainable)
False

The value of a variable can be modified by calling functions such as .assign(), .assign_add(), and so on.

The shape and dtype of the tensor cannot be changed during assignment.

print(w.assign([3, 1, 4], read_value=True)) # read_value defaults to True, so the updated value is returned automatically after the assignment


w.assign_add([2, -1, 2], read_value=False) # read_value=False suppresses the automatic return of the updated value; call w.value() to get it as a tensor
print(w.value())
<tf.Variable 'UnreadVariable' shape=(3,) dtype=int32, numpy=array([3, 1, 4])>
tf.Tensor([5 0 6], shape=(3,), dtype=int32)
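A quick hedged check of the shape constraint mentioned above (a sketch; the exact exception type and message depend on the TensorFlow version):

try:
    w.assign([1, 2])                 # wrong shape: w holds three elements
except (ValueError, tf.errors.InvalidArgumentError) as err:
    print('Assignment failed:', err)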

Analogous to the random initialization of neural-network model parameters, TensorFlow variables can also be initialized randomly, here via tf.random and Keras initializers.

"""
Create variables using Glorot initialization, a classic random
initialization scheme proposed by Glorot, Bengio, and co-workers.
"""

tf.random.set_seed(1)

init = tf.keras.initializers.GlorotNormal()

tf.print(init(shape=(3,)))
[-0.722795904 1.01456821 0.251808226]
v = tf.Variable(init(shape=(2, 3)))
tf.print(v)
[[0.28982234 -0.782292783 -0.0453658961]
 [0.960991383 -0.120003454 0.708528221]]
init_Uniform = tf.keras.initializers.GlorotUniform()

tf.print(init_Uniform(shape=(3, 4)))
[[0.609703302 0.248541951 0.0272701979 -0.201679051]
 [0.149802923 -0.836049199 -0.596909404 0.379033804]
 [-0.0149827 -0.285073876 0.0146478415 -0.515359282]]
tf.print(tf.math.reduce_mean(v))

tf.print(tf.math.reduce_std(v))
0.168613315
0.572621465

About the Xavier (Glorot) initialization scheme

This initialization scheme was proposed by Glorot and Bengio in 2010. They studied the effect of parameter initialization on model performance and, based on their analysis, proposed this new, more robust initialization scheme.

The main idea of Xavier initialization

is to roughly balance the variance of the gradients across the different layers of the network, so that no layer receives excessive attention during training while other layers are effectively ignored.

(Link to the original paper.)

According to the paper, if we want to initialize the weights from a uniform distribution, the interval of that uniform distribution should be chosen as:

$$W \sim \text{Uniform}\left(-\frac{\sqrt{6}}{\sqrt{n_{\text{in}}+n_{\text{out}}}},\ \frac{\sqrt{6}}{\sqrt{n_{\text{in}}+n_{\text{out}}}}\right)$$

Here $n_{in}$ is the number of input neurons (those that are multiplied with the weights), and $n_{out}$ is the number of output neurons feeding into the next layer. To initialize the weights from a normal distribution instead, it is recommended to choose the standard deviation

$$\sigma=\frac{\sqrt{2}}{\sqrt{n_{in}+n_{out}}}$$

TensorFlow supports both the uniform and the normal variant of this weight initialization.
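As a small sanity check of the normal-distribution formula above (a sketch, not from the original text; fan_in=2 and fan_out=3 match the shape of the variable v created earlier), we can compare the theoretical σ with the empirical standard deviation of many GlorotNormal draws. Keras' GlorotNormal samples from a truncated normal, so the empirical value comes out somewhat below the theoretical σ:

import numpy as np

fan_in, fan_out = 2, 3
sigma = np.sqrt(2.0 / (fan_in + fan_out))    # theoretical Glorot std, about 0.632
init = tf.keras.initializers.GlorotNormal()
# draw many independent (2, 3) weight matrices and estimate their empirical std
samples = tf.stack([init(shape=(fan_in, fan_out)) for _ in range(1000)])
tf.print('theoretical std:', sigma)
tf.print('empirical std (truncated normal):', tf.math.reduce_std(samples))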

class MyModule(tf.Module):
    def __init__(self):
        init = tf.keras.initializers.GlorotNormal()
        self.w1 = tf.Variable(init(shape=(2, 3)), trainable=True)
        self.w2 = tf.Variable(init(shape=(1, 2)), trainable=False)
                
m = MyModule()
print('All module variables: ', [v.shape for v in m.variables])
print('Trainable variable:   ', [v.shape for v in
                                 m.trainable_variables])
All module variables:  [TensorShape([2, 3]), TensorShape([1, 2])]
Trainable variable:    [TensorShape([2, 3])]

Creating a variable inside a decorated function raises an error, as the following example shows:

# @tf.function
# def f(x):
#     w = tf.Variable([1, 2, 3])

# f([1])

"""
ValueError: tf.function-decorated function tried to create variables on non-first call.
"""

One way to avoid this error is to define the Variable outside the decorated function and only use it inside the decorated function:

import tensorflow as tf

tf.random.set_seed(1)
# define the Variable outside the decorated function
w = tf.Variable(tf.random.uniform((3, 3)))

@tf.function
def compute_z(x):    
    return tf.matmul(w, x)

x = tf.constant([[1], [2], [3]], dtype=tf.float32)
tf.print(compute_z(x))
[[3.8610158]
 [2.94593048]
 [3.82629013]]
import tensorflow as tf

tf.random.set_seed(1)
# define the Variable outside the decorated function
w = tf.Variable(tf.random.uniform((3, 3)))

@tf.function
def compute_z(x):    
    return tf.linalg.matmul(w, x)   # tf.linalg.matmul is the same op as tf.matmul

x = tf.constant([[1], [2], [3]], dtype=tf.float32)
tf.print(compute_z(x))
[[3.8610158]
 [2.94593048]
 [3.82629013]]

1.3 Computing Gradients via Automatic Differentiation and GradientTape

Optimizing a neural network requires computing the derivatives of the loss function with respect to the network weights.

1.3.1 Computing the Gradient of the Loss with Respect to Trainable Variables

TensorFlow supports automatic differentiation, which can be thought of as an implementation of the chain rule for computing the derivatives of nested functions. It is provided mainly through tf.GradientTape.

Consider the simple function $z = wx + b$ with the loss $Loss = (y-z)^2$. In the general case with many samples, the loss is written as $Loss = \sum_i (y_i - z_i)^2$. The model parameters are $w, b$, and the input and target are $x, y$, all given as tensors.

import tensorflow as tf

w = tf.Variable(1.0)
b = tf.Variable(0.5)
print(w.trainable, b.trainable) # w and b are both trainable variables

x = tf.convert_to_tensor([1.4])
y = tf.convert_to_tensor([2.1])

with tf.GradientTape() as tape:
    z = tf.add(tf.multiply(w, x), b)
    loss = tf.reduce_sum(tf.square(y - z))

dloss_dw = tape.gradient(loss, w)

tf.print('dL/dw : ', dloss_dw)
True True
dL/dw :  -0.559999764

For the example above, the analytical gradient is:

$$\frac{\partial Loss}{\partial w}=2x(wx+b-y)$$

# verify the computed gradient
#tf.print(-2*x * (-b - w*x + y))

tf.print(2*x * ((w*x + b) - y))
[-0.559999764]

1.3.2 Computing Gradients with Respect to Non-Trainable Tensors

tf.GradientTape automatically supports differentiation with respect to trainable variables. For non-trainable variables or plain tensor objects, however, the tensor has to be monitored explicitly by adding **tape.watch()** to the GradientTape.

For example, to compute the derivative of the loss with respect to the input:

$$\frac{\partial Loss}{\partial x}$$

with tf.GradientTape() as tape:
    tape.watch(x)
    z = tf.add(tf.multiply(w, x), b)
    loss = tf.square(y - z)

dloss_dx = tape.gradient(loss, x)

tf.print('dL/dx:', dloss_dx)
dL/dx: [-0.399999857]
# verifying the computed gradient
tf.print(2*w * ((w*x + b) - y))
[-0.399999857]


Adversarial examples:

Computing the gradient of the loss with respect to an input example is used to generate adversarial examples (adversarial attacks). In computer vision, an adversarial example is produced by adding small, imperceptible noise (perturbations) to an input example. (Link to a paper on adversarial examples.)

1.3.3 Keeping Resources for Multiple Gradient Computations

This is done by setting persistent=True.

When computing gradients with tf.GradientTape, by default the tape keeps its resources only for a single gradient computation; after tape.gradient() has been called, the resources are released.

If several gradients are needed, for example $\frac{\partial Loss}{\partial w}$ and $\frac{\partial Loss}{\partial b}$, the tape has to be made persistent:

with tf.GradientTape(persistent=True) as tape: # keep the tape by setting persistent=True
    z = tf.add(tf.multiply(w, x), b)
    loss = tf.reduce_sum(tf.square(y - z))

dloss_dw = tape.gradient(loss, w)
dloss_db = tape.gradient(loss, b)

tf.print('dL/dw:', dloss_dw)
tf.print('dL/db:', dloss_db)
dL/dw: -0.559999764
dL/db: -0.399999857

Use this setting only when several gradients need to be computed: recording and keeping the tape is less memory-efficient than releasing the resources after a single gradient computation, which is why persistent defaults to False.
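A small hedged sketch of releasing a persistent tape explicitly (this follows the GradientTape documentation; w, b, x, y are the tensors defined above): when the tape is no longer needed, deleting the Python reference frees its resources.

with tf.GradientTape(persistent=True) as tape:
    z = tf.add(tf.multiply(w, x), b)
    loss = tf.reduce_sum(tf.square(y - z))

grads = tape.gradient(loss, [w, b])   # several gradients can also be requested in one call
del tape                              # free the resources held by the persistent tape
tf.print('dL/dw, dL/db:', grads)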

We can define an optimizer and use the tf.keras API to apply the gradients and optimize the model parameters:

optimizer = tf.keras.optimizers.SGD()

optimizer.apply_gradients(zip([dloss_dw, dloss_db], [w, b]))

tf.print('Updated w:', w)
tf.print('Updated bias:', b)
Updated w: 1.0056
Updated bias: 0.504

1.4 Simplifying the Implementation of Common Architectures with the Keras API

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=16, activation='relu'))
model.add(tf.keras.layers.Dense(units=32, activation='relu'))

## late variable creation
model.build(input_shape=(None, 4))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                multiple                  80        
_________________________________________________________________
dense_1 (Dense)              multiple                  544       
=================================================================
Total params: 624
Trainable params: 624
Non-trainable params: 0
_________________________________________________________________
## print the model variables
for v in model.variables:
    print('{:20s}'.format(v.name), v.trainable, v.shape)
dense/kernel:0       True (4, 16)
dense/bias:0         True (16,)
dense_1/kernel:0     True (16, 32)
dense_1/bias:0       True (32,)
## print the trainable model variables
for v in model.trainable_variables:
    print('{:20s}'.format(v.name), v.trainable, v.shape)
dense/kernel:0       True (4, 16)
dense/bias:0         True (16,)
dense_1/kernel:0     True (16, 32)
dense_1/bias:0       True (32,)

1.4.1 Configuring the Individual Layers of the Model

We can specify different initializers, regularizers, activation functions, and so on:

  • Keras Initializers tf.keras.initializers: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/initializers
  • Keras Regularizers tf.keras.regularizers: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/regularizers
  • Activations tf.keras.activations: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations
model = tf.keras.Sequential()

"""
对于第一层,指定了具体的初始化方法GlorotNormal、激活函数为relu、以及kernel和bias
"""

model.add(
    tf.keras.layers.Dense(
        units=16, 
        activation=tf.keras.activations.relu,
        kernel_initializer=tf.keras.initializers.GlorotNormal(),
        bias_initializer=tf.keras.initializers.Constant(2.0)
    ))

"""
对于第二层,明确指定了通过L1正则化对权重矩阵(weight matrix)施加惩罚
"""
model.add(
    tf.keras.layers.Dense(
        units=32, 
        activation=tf.keras.activations.sigmoid,
        kernel_regularizer=tf.keras.regularizers.l1
    ))

model.build(input_shape=(None, 4))
model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_5 (Dense)              multiple                  80        
_________________________________________________________________
dense_6 (Dense)              multiple                  544       
=================================================================
Total params: 624
Trainable params: 624
Non-trainable params: 0
_________________________________________________________________

1.4.2 Compiling the Model

The model can be configured further when it is compiled, for example by specifying the optimizer, the training loss function, and the evaluation metrics:

  • Keras Optimizers tf.keras.optimizers: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers
  • Keras Loss Functions tf.keras.losses: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses
  • Keras Metrics tf.keras.metrics: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/metrics

Choosing the loss function

Regarding the choice of optimization algorithm, SGD and Adam are the most widely used methods. The choice of loss function depends on the task:

for example, mean squared error can be used for regression tasks,

and cross-entropy losses for classification tasks (a contrasting regression-style compile sketch follows below).
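For contrast with the classification setup compiled next, a minimal hedged sketch of how a regression model might be compiled (regression_model is a hypothetical example, not part of the original text; the loss, metric, and optimizer names are standard tf.keras ones):

# hypothetical regression model: compile with MSE loss and MAE metric
regression_model = tf.keras.Sequential([tf.keras.layers.Dense(units=1)])
regression_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.MeanAbsoluteError()])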

"""
这里采用的是SGD优化器,二进制的交叉熵损失函数,以及精确率、召回率、准确率等评估指标;
"""

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.Accuracy(), 
             tf.keras.metrics.Precision(),
             tf.keras.metrics.Recall(),])

1.5 Solving the XOR Classification Problem

We generate a dataset of 200 samples with two features, drawn from a uniform distribution.

The data are labeled as follows:

$$y^{(i)}=\begin{cases} 0 & \text{if } x_0^{(i)} \times x_1^{(i)} < 0 \\ 1 & \text{otherwise} \end{cases}$$

100 samples are used for training and 100 for validation.

tf.random.set_seed(1)
np.random.seed(1)

x = np.random.uniform(low=-1, high=1, size=(200, 2))
y = np.ones(len(x))
y[x[:, 0] * x[:, 1]<0] = 0

x_train = x[:100, :]
y_train = y[:100]
x_valid = x[100:, :]
y_valid = y[100:]

fig = plt.figure(figsize=(6, 6))
plt.plot(x[y==0, 0], 
         x[y==0, 1], 'o', alpha=0.75, markersize=10)
plt.plot(x[y==1, 0], 
         x[y==1, 1], '<', alpha=0.75, markersize=10)
plt.xlabel(r'$x_1$', size=15)
plt.ylabel(r'$x_2$', size=15)
plt.show()

[Figure: scatter plot of the two-class XOR dataset]

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=1, 
                                input_shape=(2,), 
                                activation='sigmoid'))

model.summary()
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_7 (Dense)              (None, 1)                 3         
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
# compile the model
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])
  • batch_size: deep learning generally uses (mini-batch) SGD, i.e. each update step trains on batch_size samples;

  • iteration: one iteration corresponds to training once on batch_size samples;

  • epoch: one epoch corresponds to training once on all samples of the training set, i.e. one forward pass and one backward pass over all training samples.

Example: with 100 training samples and batch_size = 2, training over the whole set once takes 50 iterations, i.e. 1 epoch; the short sketch below spells out this bookkeeping.
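A small sketch of this arithmetic for the training run below (the values of n_samples, batch_size, and epochs match the fit() call that follows):

import numpy as np

n_samples, batch_size, epochs = 100, 2, 200
steps_per_epoch = int(np.ceil(n_samples / batch_size))   # 50 iterations per epoch
total_update_steps = steps_per_epoch * epochs            # 10000 update steps in total
print(steps_per_epoch, total_update_steps)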

# train the model for 200 epochs with batch_size = 2
hist = model.fit(x_train, y_train, 
                 validation_data=(x_valid, y_valid), 
                 epochs=200, batch_size=2, verbose=0) # verbose controls how much training information is printed; the default is 1
# use the mlxtend package to plot the learning curves and decision regions
from mlxtend.plotting import plot_decision_regions

history = hist.history

fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(1, 3, 1)
plt.plot(history['loss'], lw=4)
plt.plot(history['val_loss'], lw=4)
plt.legend(['Train loss', 'Validation loss'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 2)
plt.plot(history['binary_accuracy'], lw=4)
plt.plot(history['val_binary_accuracy'], lw=4)
plt.legend(['Train Acc.', 'Validation Acc.'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 3)
plot_decision_regions(X=x_valid, y=y_valid.astype(np.integer),
                      clf=model)
ax.set_xlabel(r'$x_1$', size=15)
ax.xaxis.set_label_coords(1, -0.025)
ax.set_ylabel(r'$x_2$', size=15)
ax.yaxis.set_label_coords(-0.025, 1)
plt.show()

[Figure: training/validation loss, accuracy, and decision regions for the model without hidden layers]

As we can see, this simple model with no hidden layer can only derive a linear decision boundary, which is unable to solve the XOR problem. Consequently, we observe high loss terms and low classification accuracy on both the training and the validation datasets. To obtain a nonlinear decision boundary, we can add one or more hidden layers connected via nonlinear activation functions. The universal approximation theorem states that a feedforward NN with a single hidden layer and a relatively large number of hidden units can approximate arbitrary continuous functions relatively well. Thus, a more satisfying approach to the XOR problem is to add hidden layers and compare different numbers of hidden units until we observe satisfactory results on the validation dataset. Adding more hidden units corresponds to increasing the width of a layer. Alternatively, we can add more hidden layers, which makes the model deeper. The advantage of making a network deeper rather than wider is that fewer parameters are required to achieve a comparable model capacity. A downside of deep (as opposed to wide) models, however, is that they are prone to vanishing and exploding gradients, which makes them harder to train. In the following example, we look at the results of a feedforward NN with three hidden layers:

tf.random.set_seed(1)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=4, input_shape=(2,), activation='relu'))
model.add(tf.keras.layers.Dense(units=4, activation='relu'))
model.add(tf.keras.layers.Dense(units=4, activation='relu'))
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))

model.summary()

## compile:
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

## train:
hist = model.fit(x_train, y_train, 
                 validation_data=(x_valid, y_valid), 
                 epochs=200, batch_size=2, verbose=0)

history = hist.history
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_8 (Dense)              (None, 4)                 12        
_________________________________________________________________
dense_9 (Dense)              (None, 4)                 20        
_________________________________________________________________
dense_10 (Dense)             (None, 4)                 20        
_________________________________________________________________
dense_11 (Dense)             (None, 1)                 5         
=================================================================
Total params: 57
Trainable params: 57
Non-trainable params: 0
_________________________________________________________________
fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(1, 3, 1)
plt.plot(history['loss'], lw=4)
plt.plot(history['val_loss'], lw=4)
plt.legend(['Train loss', 'Validation loss'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 2)
plt.plot(history['binary_accuracy'], lw=4)
plt.plot(history['val_binary_accuracy'], lw=4)
plt.legend(['Train Acc.', 'Validation Acc.'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 3)
plot_decision_regions(X=x_valid, y=y_valid.astype(np.integer),
                      clf=model)
ax.set_xlabel(r'$x_1$', size=15)
ax.xaxis.set_label_coords(1, -0.025)
ax.set_ylabel(r'$x_2$', size=15)
ax.yaxis.set_label_coords(-0.025, 1)
plt.show()

[Figure: training/validation loss, accuracy, and decision regions for the model with three hidden layers]

As we can see, the model is able to derive a nonlinear decision boundary for this data and reaches 100% accuracy on the training set. The validation accuracy is 95%, which indicates slight overfitting.

1.6 Building Models More Conveniently with Keras' Functional API

tf.random.set_seed(1)

## input layer:
inputs = tf.keras.Input(shape=(2,))

## hidden layers
h1 = tf.keras.layers.Dense(units=4, activation='relu')(inputs)
h2 = tf.keras.layers.Dense(units=4, activation='relu')(h1)
h3 = tf.keras.layers.Dense(units=4, activation='relu')(h2)

## output layer
outputs = tf.keras.layers.Dense(units=1, activation='sigmoid')(h3)

## build the model:
model = tf.keras.Model(inputs=inputs, outputs=outputs)

model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 2)]               0         
_________________________________________________________________
dense_12 (Dense)             (None, 4)                 12        
_________________________________________________________________
dense_13 (Dense)             (None, 4)                 20        
_________________________________________________________________
dense_14 (Dense)             (None, 4)                 20        
_________________________________________________________________
dense_15 (Dense)             (None, 1)                 5         
=================================================================
Total params: 57
Trainable params: 57
Non-trainable params: 0
_________________________________________________________________
## compile:
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

## train:
hist = model.fit(x_train, y_train, 
                 validation_data=(x_valid, y_valid), 
                 epochs=200, batch_size=2, verbose=0)

## Plotting
history = hist.history

fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(1, 3, 1)
plt.plot(history['loss'], lw=4)
plt.plot(history['val_loss'], lw=4)
plt.legend(['Train loss', 'Validation loss'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 2)
plt.plot(history['binary_accuracy'], lw=4)
plt.plot(history['val_binary_accuracy'], lw=4)
plt.legend(['Train Acc.', 'Validation Acc.'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 3)
plot_decision_regions(X=x_valid, y=y_valid.astype(np.integer),
                      clf=model)
ax.set_xlabel(r'$x_1$', size=15)
ax.xaxis.set_label_coords(1, -0.025)
ax.set_ylabel(r'$x_2$', size=15)
ax.yaxis.set_label_coords(-0.025, 1)
plt.show()

[Figure: training/validation loss, accuracy, and decision regions for the functional-API model]

1.7 Building Models with Keras' Model Class

Another approach is to subclass tf.keras.Model: we define __init__() as the constructor and the call() method to specify the forward pass.

  • define __init__()
  • define call()

In __init__(), the layers are defined as attributes of the class so that they can be accessed via self.

In call(), we specify how these layers are used in the forward pass of the network.

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.hidden_1 = tf.keras.layers.Dense(units=4, activation='relu')
        self.hidden_2 = tf.keras.layers.Dense(units=4, activation='relu')
        self.hidden_3 = tf.keras.layers.Dense(units=4, activation='relu')
        self.output_layer = tf.keras.layers.Dense(units=1, activation='sigmoid')
        
    def call(self, inputs):
        h = self.hidden_1(inputs)
        h = self.hidden_2(h)
        h = self.hidden_3(h)
        return self.output_layer(h)
    
    """
    这里对所有隐藏层使用了相同的输出名称h,这将使得代码更具可读性,也更容易理解
    
    子类从tf.keras.Model中继承了通用的模型属性,如build()、compile()、fit(),等,因此这里可以直接调用;
    """
tf.random.set_seed(1)

## testing:
model = MyModel()
model.build(input_shape=(None, 2))

model.summary()

## compile:
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

## train:
hist = model.fit(x_train, y_train, 
                 validation_data=(x_valid, y_valid), 
                 epochs=200, batch_size=2, verbose=0)

## Plotting
history = hist.history

fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(1, 3, 1)
plt.plot(history['loss'], lw=4)
plt.plot(history['val_loss'], lw=4)
plt.legend(['Train loss', 'Validation loss'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 2)
plt.plot(history['binary_accuracy'], lw=4)
plt.plot(history['val_binary_accuracy'], lw=4)
plt.legend(['Train Acc.', 'Validation Acc.'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 3)
plot_decision_regions(X=x_valid, y=y_valid.astype(np.integer),
                      clf=model)
ax.set_xlabel(r'$x_1$', size=15)
ax.xaxis.set_label_coords(1, -0.025)
ax.set_ylabel(r'$x_2$', size=15)
ax.yaxis.set_label_coords(-0.025, 1)
plt.show()
Model: "my_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_16 (Dense)             multiple                  12        
_________________________________________________________________
dense_17 (Dense)             multiple                  20        
_________________________________________________________________
dense_18 (Dense)             multiple                  20        
_________________________________________________________________
dense_19 (Dense)             multiple                  5         
=================================================================
Total params: 57
Trainable params: 57
Non-trainable params: 0
_________________________________________________________________

[Figure: training/validation loss, accuracy, and decision regions for the subclassed model]

1.8 Creating Custom Keras Layers

If we want to define a new layer that Keras does not yet support, we can define a new class derived from tf.keras.layers.Layer. This works well when designing new layers or customizing existing ones.

In the constructor, we define the variables and other tensors the custom layer needs. If input_shape is provided to the constructor, we can create the variables and initialize them right there. Alternatively, we can delay variable initialization (for example, if we do not know the exact input shape in advance) and delegate it to the build() method, so that the variables are created later. In addition, we can define get_config() for serialization, which means that a model using the custom layer can be saved and restored efficiently with TensorFlow's model saving and loading functionality.

Defining a custom layer:

  • Define __init__()
  • Define build() for late-variable creation
  • Define call()
  • Define get_config() for serialization

Suppose we want to compute

$$w(x+\epsilon)+b$$

where $\epsilon$ is a noise variable. Adding Gaussian noise to the inputs in this way acts as a regularizer, similar in spirit to dropout.

"""
In the constructor, we add an argument noise_stddev that specifies the standard deviation of the Gaussian distribution from which $\epsilon$ is sampled.

The call() method takes an additional argument training=False.

In Keras, training is a special Boolean argument that distinguishes whether a model or layer is being used during training or only for prediction (the latter is also called inference or evaluation).

"""

class NoisyLinear(tf.keras.layers.Layer):
    def __init__(self, output_dim, noise_stddev=0.1, **kwargs):
        self.output_dim = output_dim
        self.noise_stddev = noise_stddev
        super(NoisyLinear, self).__init__(**kwargs)
    def build(self, input_shape):
        self.w = self.add_weight(name='weights',
                                 shape=(input_shape[1], self.output_dim),
                                 initializer='random_normal',
                                 trainable=True)
        
        self.b = self.add_weight(shape=(self.output_dim,),
                                 initializer='zeros',
                                 trainable=True)

    def call(self, inputs, training=False):
        if training:
            batch = tf.shape(inputs)[0]
            dim = tf.shape(inputs)[1]
            noise = tf.random.normal(shape=(batch, dim),
                                     mean=0.0,
                                     stddev=self.noise_stddev)

            noisy_inputs = tf.add(inputs, noise)
        else:
            noisy_inputs = inputs
        z = tf.matmul(noisy_inputs, self.w) + self.b
        return tf.keras.activations.relu(z)
    
    def get_config(self):
        config = super(NoisyLinear, self).get_config()
        config.update({'output_dim': self.output_dim,
                       'noise_stddev': self.noise_stddev})
        return config

"""
定义该层的一个实例,通过调用build()对其进行初始化,并在输入张量上执行它。

然后通过.get_config()对其进行序列化,并通过.from_config()恢复序列化之后的对象;
"""
## testing:

tf.random.set_seed(1)

noisy_layer = NoisyLinear(4)
noisy_layer.build(input_shape=(None, 4))

x = tf.zeros(shape=(1, 4))
tf.print(noisy_layer(x, training=True))

## re-building from config:
config = noisy_layer.get_config()
new_layer = NoisyLinear.from_config(config)
tf.print(new_layer(x, training=True))
[[0 0.00821428 0 0]]
[[0 0.0108502861 0 0]]

Now we create a new model, similar to the previous one, to solve the XOR classification problem. We again use Keras' Sequential class; the difference is that the NoisyLinear layer is used as the first hidden layer.

tf.random.set_seed(1)

model = tf.keras.Sequential([
    NoisyLinear(4, noise_stddev=0.1),
    tf.keras.layers.Dense(units=4, activation='relu'),
    tf.keras.layers.Dense(units=4, activation='relu'),
    tf.keras.layers.Dense(units=1, activation='sigmoid')])

model.build(input_shape=(None, 2))
model.summary()

## compile:
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

## train:
hist = model.fit(x_train, y_train, 
                 validation_data=(x_valid, y_valid), 
                 epochs=200, batch_size=2, 
                 verbose=0)

## Plotting
history = hist.history

fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(1, 3, 1)
plt.plot(history['loss'], lw=4)
plt.plot(history['val_loss'], lw=4)
plt.legend(['Train loss', 'Validation loss'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 2)
plt.plot(history['binary_accuracy'], lw=4)
plt.plot(history['val_binary_accuracy'], lw=4)
plt.legend(['Train Acc.', 'Validation Acc.'], fontsize=15)
ax.set_xlabel('Epochs', size=15)

ax = fig.add_subplot(1, 3, 3)
plot_decision_regions(X=x_valid, y=y_valid.astype(np.integer),
                      clf=model)
ax.set_xlabel(r'$x_1$', size=15)
ax.xaxis.set_label_coords(1, -0.025)
ax.set_ylabel(r'$x_2$', size=15)
ax.yaxis.set_label_coords(-0.025, 1)
plt.show()
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
noisy_linear_1 (NoisyLinear) multiple                  12        
_________________________________________________________________
dense_20 (Dense)             multiple                  20        
_________________________________________________________________
dense_21 (Dense)             multiple                  20        
_________________________________________________________________
dense_22 (Dense)             multiple                  5         
=================================================================
Total params: 57
Trainable params: 57
Non-trainable params: 0
_________________________________________________________________

[Figure: training/validation loss, accuracy, and decision regions for the model with the NoisyLinear layer]

1.9 TensorFlow Estimators

The tf.estimator API encapsulates the low-level steps of machine-learning tasks such as training, prediction, and evaluation. It also adds support for running models on multiple platforms without requiring major code changes.

TensorFlow also provides a number of pre-made estimators, which are very useful for comparison studies.

Steps for using pre-made estimators

  • Step 1: Define the input function for importing the data
  • Step 2: Define the feature columns to bridge between the estimator and the data
  • Step 3: Instantiate an estimator or convert a Keras model to an estimator
  • Step 4: Use the estimator: train() evaluate() predict()
import numpy as np
import tensorflow as tf
import pandas as pd

from IPython.display import Image
tf.random.set_seed(1)
np.random.seed(1)

1.9.1 Working with Feature Columns

  • See definition: https://developers.google.com/machine-learning/glossary/#feature_columns
  • Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column

The features shown in the figure below (model year, cylinders, displacement, horsepower, weight, acceleration, and origin) come from the Auto MPG dataset, a benchmark dataset for predicting the fuel efficiency of a car in miles per gallon (MPG). (Dataset link.)

Image(filename='images/02.png', width=700)

[Figure: features of the Auto MPG dataset]

dataset_path = tf.keras.utils.get_file("auto-mpg.data", 
                                       ("http://archive.ics.uci.edu/ml/machine-learning-databases"
                                        "/auto-mpg/auto-mpg.data"))

column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
                'Weight', 'Acceleration', 'ModelYear', 'Origin']

df = pd.read_csv(dataset_path, names=column_names,
                 na_values = "?", comment='\t',
                 sep=" ", skipinitialspace=True)

df.tail()
      MPG  Cylinders  Displacement  Horsepower  Weight  Acceleration  ModelYear  Origin
393  27.0          4         140.0        86.0  2790.0          15.6         82       1
394  44.0          4          97.0        52.0  2130.0          24.6         82       2
395  32.0          4         135.0        84.0  2295.0          11.6         82       1
396  28.0          4         120.0        79.0  2625.0          18.6         82       1
397  31.0          4         119.0        82.0  2720.0          19.4         82       1
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 398 entries, 0 to 397
Data columns (total 8 columns):
 #   Column        Non-Null Count  Dtype  
---  ------        --------------  -----  
 0   MPG           398 non-null    float64
 1   Cylinders     398 non-null    int64  
 2   Displacement  398 non-null    float64
 3   Horsepower    392 non-null    float64
 4   Weight        398 non-null    float64
 5   Acceleration  398 non-null    float64
 6   ModelYear     398 non-null    int64  
 7   Origin        398 non-null    int64  
dtypes: float64(5), int64(3)
memory usage: 25.0 KB
# handle missing values
print(df.isna().sum())

df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
MPG             0
Cylinders       0
Displacement    0
Horsepower      6
Weight          0
Acceleration    0
ModelYear       0
Origin          0
dtype: int64

      MPG  Cylinders  Displacement  Horsepower  Weight  Acceleration  ModelYear  Origin
387  27.0          4         140.0        86.0  2790.0          15.6         82       1
388  44.0          4          97.0        52.0  2130.0          24.6         82       2
389  32.0          4         135.0        84.0  2295.0          11.6         82       1
390  28.0          4         120.0        79.0  2625.0          18.6         82       1
391  31.0          4         119.0        82.0  2720.0          19.4         82       1
import missingno as msno

# msno.bar(df, color='red', figsize=(12, 5))
msno.bar(df)
<matplotlib.axes._subplots.AxesSubplot at 0x2934d8155c8>

[Figure: missingno bar chart of non-missing values per column]

import sklearn
import sklearn.model_selection

df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
              count         mean         std     min     25%     50%     75%     max
MPG           313.0    23.404153    7.666909     9.0    17.5    23.0    29.0    46.6
Cylinders     313.0     5.402556    1.701506     3.0     4.0     4.0     8.0     8.0
Displacement  313.0   189.512780  102.675646    68.0   104.0   140.0   260.0   455.0
Horsepower    313.0   102.929712   37.919046    46.0    75.0    92.0   120.0   230.0
Weight        313.0  2961.198083  848.602146  1613.0  2219.0  2755.0  3574.0  5140.0
Acceleration  313.0    15.704473    2.725399     8.5    14.0    15.5    17.3    24.8
ModelYear     313.0    75.929712    3.675305    70.0    73.0    76.0    79.0    82.0
Origin        313.0     1.591054    0.807923     1.0     1.0     1.0     2.0     3.0
df_train.describe()
              MPG   Cylinders  Displacement  Horsepower       Weight  Acceleration   ModelYear      Origin
count  313.000000  313.000000    313.000000  313.000000   313.000000    313.000000  313.000000  313.000000
mean    23.404153    5.402556    189.512780  102.929712  2961.198083     15.704473   75.929712    1.591054
std      7.666909    1.701506    102.675646   37.919046   848.602146      2.725399    3.675305    0.807923
min      9.000000    3.000000     68.000000   46.000000  1613.000000      8.500000   70.000000    1.000000
25%     17.500000    4.000000    104.000000   75.000000  2219.000000     14.000000   73.000000    1.000000
50%     23.000000    4.000000    140.000000   92.000000  2755.000000     15.500000   76.000000    1.000000
75%     29.000000    8.000000    260.000000  120.000000  3574.000000     17.300000   79.000000    2.000000
max     46.600000    8.000000    455.000000  230.000000  5140.000000     24.800000   82.000000    3.000000
# standardize the numeric columns
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']

df_train_norm, df_test_norm = df_train.copy(), df_test.copy()

for col_name in numeric_column_names:    
    mean = train_stats.loc[col_name, 'mean']
    std  = train_stats.loc[col_name, 'std']
    df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
    df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std

df_train_norm.tail()
      MPG  Cylinders  Displacement  Horsepower    Weight  Acceleration  ModelYear  Origin
203  28.0  -0.824303     -0.901020   -0.736562 -0.950031      0.255202         76       3
255  19.4   0.351127      0.413800   -0.340982  0.293190      0.548737         78       1
72   13.0   1.526556      1.144256    0.713897  1.339617     -0.625403         72       1
235  30.5  -0.824303     -0.891280   -1.053025 -1.072585      0.475353         77       1
37   14.0   1.526556      1.563051    1.636916  1.470420     -1.359240         71       1

1.9.2 Converting the Data into Structures that TensorFlow Estimators Can Use

With the preprocessing above we have obtained five columns of float values. Here we transform these continuous features with TensorFlow's feature_column functions:

numeric_features = []

for col_name in numeric_column_names:
    numeric_features.append(tf.feature_column.numeric_column(key=col_name))

numeric_features
[NumericColumn(key='Cylinders', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Displacement', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Horsepower', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Weight', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Acceleration', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)]

Bucketizing the data:

$$\text{bucket}=\begin{cases} 0 & \text{if year} < 73 \\ 1 & \text{if } 73 \le \text{year} < 76 \\ 2 & \text{if } 76 \le \text{year} < 79 \\ 3 & \text{if year} \ge 79 \end{cases}$$

feature_year = tf.feature_column.numeric_column(key="ModelYear")

bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
    source_column=feature_year,
    boundaries=[73, 76, 79]))

print(bucketized_features)
[BucketizedColumn(source_column=NumericColumn(key='ModelYear', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), boundaries=(73, 76, 79))]

For consistency, the bucketized feature is appended to a Python list. Later on, this list is merged with the other feature columns.

Next, we handle the unordered categorical feature Origin.

feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
    key='Origin',
    vocabulary_list=[1, 2, 3])

categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))

print(categorical_indicator_features)
[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int32, default_value=-1, num_oov_buckets=0))]

Some estimators, such as DNNClassifier and DNNRegressor, only accept so-called "dense columns". The next step is therefore to convert the existing categorical feature column into such a dense column. There are two ways to do this:

using an embedding column via embedding_column, or an indicator column via indicator_column.

An indicator column converts the categorical indices into one-hot encoded vectors; for example, index 0 is encoded as [1, 0, 0], index 1 as [0, 1, 0], and so on. An embedding column, on the other hand, maps each index to a random vector of float values that can be trained.

When the number of categories is large, an embedding column with fewer dimensions than the number of categories can improve performance. In the code snippet above we used the indicator-column approach for the categorical feature to convert it into a dense format; an embedding-column sketch follows below for comparison.
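For comparison, a minimal hedged sketch of the alternative embedding-column approach (the dimension value 2 is an arbitrary illustrative choice, not from the original text):

# alternative: map each Origin category to a trainable 2-dimensional embedding vector
categorical_embedding_features = [
    tf.feature_column.embedding_column(feature_origin, dimension=2)]
print(categorical_embedding_features)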

1.10 Machine Learning with Pre-made Estimators

Once the feature columns have been constructed, the pre-made Estimators can be used:

  • define an input function for loading the data;

  • convert the dataset into feature columns;

  • instantiate an estimator;

  • use the estimator's methods: train(), evaluate(), predict().

# define a function that processes the data and returns a TensorFlow Dataset
def train_input_fn(df_train, batch_size=8):
    df = df_train.copy()
    train_x, train_y = df, df.pop('MPG')
    dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))

    # shuffle, repeat, and batch the examples
    return dataset.shuffle(1000).repeat().batch(batch_size)

Above, dict(train_x) converts the DataFrame object into a Python dictionary. Let's load one batch from this dataset to inspect its contents:

## inspection
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('Keys:', batch[0].keys())
print('Batch Model Years:', batch[0]['ModelYear'])
Keys: dict_keys(['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'ModelYear', 'Origin'])
Batch Model Years: tf.Tensor([82 78 76 72 78 73 70 78], shape=(8,), dtype=int32)
def eval_input_fn(df_test, batch_size=8):
    df = df_test.copy()
    test_x, test_y = df, df.pop('MPG')
    dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))

    # only batch the examples (no shuffling or repeating for evaluation)
    return dataset.batch(batch_size)

We have now defined a list of continuous features, a list of bucketized features, and a list of categorical features. These individual lists can now be concatenated:

all_feature_columns = (numeric_features + 
                       bucketized_features + 
                       categorical_indicator_features)

print(all_feature_columns)
[NumericColumn(key='Cylinders', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Displacement', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Horsepower', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Weight', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='Acceleration', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), BucketizedColumn(source_column=NumericColumn(key='ModelYear', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), boundaries=(73, 76, 79)), IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int32, default_value=-1, num_oov_buckets=0))]

Next we instantiate the estimator. The problem at hand is a regression task, so we use tf.estimator.DNNRegressor.

hidden_units specifies the number of hidden units per layer.

Here we use two hidden layers with 32 and 10 units, respectively.

regressor = tf.estimator.DNNRegressor(
    feature_columns=all_feature_columns,
    hidden_units=[32, 10],
    model_dir='models/autompg-dnnregressor/')  # directory where the model parameters (checkpoints) are saved
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': 'models/autompg-dnnregressor/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

One advantage of Estimators is that they automatically checkpoint the model during training, so that if training crashes unexpectedly, the last saved checkpoint can easily be loaded and training can be resumed from there.
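A minimal hedged sketch of this checkpointing behavior (reloaded_regressor is illustrative and not part of the original text): re-creating the estimator with the same model_dir makes it restore the most recent checkpoint automatically, so evaluation or further training continues from the saved state.

# hypothetical reload: a new estimator pointed at the same model_dir restores
# the latest checkpoint saved in 'models/autompg-dnnregressor/'
reloaded_regressor = tf.estimator.DNNRegressor(
    feature_columns=all_feature_columns,
    hidden_units=[32, 10],
    warm_start_from='models/autompg-dnnregressor/',
    model_dir='models/autompg-dnnregressor/')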

EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))

print('Training Steps:', total_steps)
Training Steps: 40000
"""InternalError: 2 root error(s) found.  (0) Internal: Blas GEMM launch failed : a.shape=(8, 12), b.shape=(12, 32), m=8, n=32, k=12	 [[{{node dnn/hiddenlayer_0/MatMul}}]]	 [[dnn/zero_fraction_1/counts_to_fraction/truediv/_197]]  (1) Internal: Blas GEMM launch failed : a.shape=(8, 12), b.shape=(12, 32), m=8, n=32, k=12	 [[{{node dnn/hiddenlayer_0/MatMul}}]]0 successful operations.0 derived errors ignored."""# 显存不足,无法运行,最直接的原因是本地打开了两个以上的Tensorflow相关的notebook,关闭其余的,再次运行当前notebook
regressor.train(
    input_fn=lambda: train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
    steps=total_steps)
WARNING:tensorflow:From D:\installation\anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-0
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into models/autompg-dnnregressor/model.ckpt.
INFO:tensorflow:loss = 334.0019, step = 0
INFO:tensorflow:global_step/sec: 360.294
INFO:tensorflow:loss = 506.17786, step = 100 (0.278 sec)
INFO:tensorflow:global_step/sec: 474.035
INFO:tensorflow:loss = 512.62555, step = 200 (0.210 sec)
INFO:tensorflow:global_step/sec: 476.012
INFO:tensorflow:loss = 529.2986, step = 300 (0.211 sec)
...
INFO:tensorflow:loss = 21.672173, step = 13400 (0.222 sec)
INFO:tensorflow:global_step/sec:
477.462INFO:tensorflow:loss = 26.24228, step = 13500 (0.208 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 7.7528048, step = 13600 (0.211 sec)INFO:tensorflow:global_step/sec: 466.359INFO:tensorflow:loss = 24.990301, step = 13700 (0.213 sec)INFO:tensorflow:global_step/sec: 477.463INFO:tensorflow:loss = 20.220085, step = 13800 (0.210 sec)INFO:tensorflow:global_step/sec: 470.741INFO:tensorflow:loss = 7.8597417, step = 13900 (0.212 sec)INFO:tensorflow:global_step/sec: 482.052INFO:tensorflow:loss = 28.036829, step = 14000 (0.206 sec)INFO:tensorflow:global_step/sec: 482.053INFO:tensorflow:loss = 9.270989, step = 14100 (0.207 sec)INFO:tensorflow:global_step/sec: 468.539INFO:tensorflow:loss = 31.381905, step = 14200 (0.214 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 22.113922, step = 14300 (0.207 sec)INFO:tensorflow:global_step/sec: 464.2INFO:tensorflow:loss = 3.7733753, step = 14400 (0.215 sec)INFO:tensorflow:global_step/sec: 484.383INFO:tensorflow:loss = 6.536296, step = 14500 (0.206 sec)INFO:tensorflow:global_step/sec: 479.747INFO:tensorflow:loss = 22.178534, step = 14600 (0.207 sec)INFO:tensorflow:global_step/sec: 475.198INFO:tensorflow:loss = 7.1920033, step = 14700 (0.210 sec)INFO:tensorflow:global_step/sec: 484.385INFO:tensorflow:loss = 25.886337, step = 14800 (0.207 sec)INFO:tensorflow:global_step/sec: 466.36INFO:tensorflow:loss = 24.276703, step = 14900 (0.213 sec)INFO:tensorflow:global_step/sec: 484.38INFO:tensorflow:loss = 13.244961, step = 15000 (0.206 sec)INFO:tensorflow:global_step/sec: 489.109INFO:tensorflow:loss = 16.167114, step = 15100 (0.204 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 17.560364, step = 15200 (0.207 sec)INFO:tensorflow:global_step/sec: 486.734INFO:tensorflow:loss = 11.527983, step = 15300 (0.205 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 9.782318, step = 15400 (0.208 sec)INFO:tensorflow:global_step/sec: 459.941INFO:tensorflow:loss = 21.247673, step = 15500 (0.218 sec)INFO:tensorflow:global_step/sec: 462.061INFO:tensorflow:loss = 36.103348, step = 15600 (0.215 sec)INFO:tensorflow:global_step/sec: 470.738INFO:tensorflow:loss = 47.77614, step = 15700 (0.212 sec)INFO:tensorflow:global_step/sec: 479.749INFO:tensorflow:loss = 21.15739, step = 15800 (0.208 sec)INFO:tensorflow:global_step/sec: 474.085INFO:tensorflow:loss = 11.973942, step = 15900 (0.211 sec)INFO:tensorflow:global_step/sec: 453.699INFO:tensorflow:loss = 26.631302, step = 16000 (0.221 sec)INFO:tensorflow:global_step/sec: 475.2INFO:tensorflow:loss = 8.51922, step = 16100 (0.209 sec)INFO:tensorflow:global_step/sec: 482.052INFO:tensorflow:loss = 19.000793, step = 16200 (0.208 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 21.718674, step = 16300 (0.210 sec)INFO:tensorflow:global_step/sec: 475.199INFO:tensorflow:loss = 5.054392, step = 16400 (0.210 sec)INFO:tensorflow:global_step/sec: 462.062INFO:tensorflow:loss = 22.084238, step = 16500 (0.216 sec)INFO:tensorflow:global_step/sec: 466.36INFO:tensorflow:loss = 34.220917, step = 16600 (0.214 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 24.765583, step = 16700 (0.207 sec)INFO:tensorflow:global_step/sec: 475.199INFO:tensorflow:loss = 14.8624935, step = 16800 (0.210 sec)INFO:tensorflow:global_step/sec: 468.54INFO:tensorflow:loss = 3.1536963, step = 16900 (0.212 sec)INFO:tensorflow:global_step/sec: 470.737INFO:tensorflow:loss = 14.53073, step = 17000 (0.212 sec)INFO:tensorflow:global_step/sec: 470.738INFO:tensorflow:loss = 37.877678, 
step = 17100 (0.212 sec)INFO:tensorflow:global_step/sec: 486.736INFO:tensorflow:loss = 22.535511, step = 17200 (0.206 sec)INFO:tensorflow:global_step/sec: 468.536INFO:tensorflow:loss = 11.86511, step = 17300 (0.212 sec)INFO:tensorflow:global_step/sec: 477.464INFO:tensorflow:loss = 12.693979, step = 17400 (0.210 sec)INFO:tensorflow:global_step/sec: 472.959INFO:tensorflow:loss = 17.272137, step = 17500 (0.210 sec)INFO:tensorflow:global_step/sec: 479.747INFO:tensorflow:loss = 15.169901, step = 17600 (0.208 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 12.819642, step = 17700 (0.211 sec)INFO:tensorflow:global_step/sec: 465.258INFO:tensorflow:loss = 26.261839, step = 17800 (0.214 sec)INFO:tensorflow:global_step/sec: 464.199INFO:tensorflow:loss = 51.766647, step = 17900 (0.215 sec)INFO:tensorflow:global_step/sec: 482.053INFO:tensorflow:loss = 16.25539, step = 18000 (0.208 sec)INFO:tensorflow:global_step/sec: 475.2INFO:tensorflow:loss = 14.337395, step = 18100 (0.209 sec)INFO:tensorflow:global_step/sec: 466.359INFO:tensorflow:loss = 14.794557, step = 18200 (0.214 sec)INFO:tensorflow:global_step/sec: 475.202INFO:tensorflow:loss = 21.373344, step = 18300 (0.211 sec)INFO:tensorflow:global_step/sec: 466.358INFO:tensorflow:loss = 46.23558, step = 18400 (0.213 sec)INFO:tensorflow:global_step/sec: 468.539INFO:tensorflow:loss = 20.706932, step = 18500 (0.213 sec)INFO:tensorflow:global_step/sec: 472.957INFO:tensorflow:loss = 7.8188047, step = 18600 (0.211 sec)INFO:tensorflow:global_step/sec: 479.749INFO:tensorflow:loss = 12.587075, step = 18700 (0.208 sec)INFO:tensorflow:global_step/sec: 472.961INFO:tensorflow:loss = 6.5228834, step = 18800 (0.211 sec)INFO:tensorflow:global_step/sec: 447.62INFO:tensorflow:loss = 13.267336, step = 18900 (0.223 sec)INFO:tensorflow:global_step/sec: 455.762INFO:tensorflow:loss = 27.55281, step = 19000 (0.220 sec)INFO:tensorflow:global_step/sec: 468.538INFO:tensorflow:loss = 30.267008, step = 19100 (0.212 sec)INFO:tensorflow:global_step/sec: 484.381INFO:tensorflow:loss = 14.5306225, step = 19200 (0.206 sec)INFO:tensorflow:global_step/sec: 475.2INFO:tensorflow:loss = 9.910925, step = 19300 (0.210 sec)INFO:tensorflow:global_step/sec: 455.759INFO:tensorflow:loss = 17.336899, step = 19400 (0.220 sec)INFO:tensorflow:global_step/sec: 468.54INFO:tensorflow:loss = 12.301694, step = 19500 (0.212 sec)INFO:tensorflow:global_step/sec: 457.841INFO:tensorflow:loss = 47.56683, step = 19600 (0.218 sec)INFO:tensorflow:global_step/sec: 468.54INFO:tensorflow:loss = 20.995422, step = 19700 (0.214 sec)INFO:tensorflow:global_step/sec: 468.096INFO:tensorflow:loss = 66.49049, step = 19800 (0.213 sec)INFO:tensorflow:global_step/sec: 464.201INFO:tensorflow:loss = 31.611864, step = 19900 (0.216 sec)INFO:tensorflow:global_step/sec: 464.201INFO:tensorflow:loss = 34.991585, step = 20000 (0.215 sec)INFO:tensorflow:global_step/sec: 455.759INFO:tensorflow:loss = 22.152897, step = 20100 (0.218 sec)INFO:tensorflow:global_step/sec: 468.54INFO:tensorflow:loss = 10.452751, step = 20200 (0.214 sec)INFO:tensorflow:global_step/sec: 468.538INFO:tensorflow:loss = 6.076998, step = 20300 (0.212 sec)INFO:tensorflow:global_step/sec: 465.256INFO:tensorflow:loss = 30.571922, step = 20400 (0.215 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 14.499392, step = 20500 (0.207 sec)INFO:tensorflow:global_step/sec: 482.052INFO:tensorflow:loss = 23.927994, step = 20600 (0.207 sec)INFO:tensorflow:global_step/sec: 441.706INFO:tensorflow:loss = 30.936108, step = 20700 (0.226 
sec)INFO:tensorflow:global_step/sec: 468.54INFO:tensorflow:loss = 18.862068, step = 20800 (0.213 sec)INFO:tensorflow:global_step/sec: 472.957INFO:tensorflow:loss = 7.4329357, step = 20900 (0.212 sec)INFO:tensorflow:global_step/sec: 477.464INFO:tensorflow:loss = 6.3936844, step = 21000 (0.209 sec)INFO:tensorflow:global_step/sec: 470.739INFO:tensorflow:loss = 9.487746, step = 21100 (0.211 sec)INFO:tensorflow:global_step/sec: 472.959INFO:tensorflow:loss = 6.517421, step = 21200 (0.211 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 23.223011, step = 21300 (0.208 sec)INFO:tensorflow:global_step/sec: 468.537INFO:tensorflow:loss = 33.165955, step = 21400 (0.212 sec)INFO:tensorflow:global_step/sec: 485.356INFO:tensorflow:loss = 6.438647, step = 21500 (0.207 sec)INFO:tensorflow:global_step/sec: 477.462INFO:tensorflow:loss = 6.1409864, step = 21600 (0.208 sec)INFO:tensorflow:global_step/sec: 470.738INFO:tensorflow:loss = 8.393456, step = 21700 (0.212 sec)INFO:tensorflow:global_step/sec: 472.957INFO:tensorflow:loss = 19.489632, step = 21800 (0.212 sec)INFO:tensorflow:global_step/sec: 472.961INFO:tensorflow:loss = 8.656776, step = 21900 (0.211 sec)INFO:tensorflow:global_step/sec: 482.054INFO:tensorflow:loss = 22.226353, step = 22000 (0.206 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 14.260347, step = 22100 (0.210 sec)INFO:tensorflow:global_step/sec: 475.199INFO:tensorflow:loss = 7.9998455, step = 22200 (0.210 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 19.34966, step = 22300 (0.211 sec)INFO:tensorflow:global_step/sec: 482.054INFO:tensorflow:loss = 14.79817, step = 22400 (0.206 sec)INFO:tensorflow:global_step/sec: 482.053INFO:tensorflow:loss = 30.882837, step = 22500 (0.208 sec)INFO:tensorflow:global_step/sec: 479.749INFO:tensorflow:loss = 13.485092, step = 22600 (0.207 sec)INFO:tensorflow:global_step/sec: 482.052INFO:tensorflow:loss = 5.355327, step = 22700 (0.207 sec)INFO:tensorflow:global_step/sec: 477.464INFO:tensorflow:loss = 7.544854, step = 22800 (0.209 sec)INFO:tensorflow:global_step/sec: 486.838INFO:tensorflow:loss = 16.801712, step = 22900 (0.206 sec)INFO:tensorflow:global_step/sec: 483.813INFO:tensorflow:loss = 5.0712447, step = 23000 (0.206 sec)INFO:tensorflow:global_step/sec: 479.749INFO:tensorflow:loss = 5.4016266, step = 23100 (0.209 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 17.210314, step = 23200 (0.207 sec)INFO:tensorflow:global_step/sec: 453.697INFO:tensorflow:loss = 13.114401, step = 23300 (0.220 sec)INFO:tensorflow:global_step/sec: 464.2INFO:tensorflow:loss = 28.91079, step = 23400 (0.216 sec)INFO:tensorflow:global_step/sec: 451.655INFO:tensorflow:loss = 13.396203, step = 23500 (0.221 sec)INFO:tensorflow:global_step/sec: 478.585INFO:tensorflow:loss = 9.627031, step = 23600 (0.208 sec)INFO:tensorflow:global_step/sec: 472.96INFO:tensorflow:loss = 17.649878, step = 23700 (0.212 sec)INFO:tensorflow:global_step/sec: 466.358INFO:tensorflow:loss = 29.884354, step = 23800 (0.214 sec)INFO:tensorflow:global_step/sec: 397.886INFO:tensorflow:loss = 13.614048, step = 23900 (0.250 sec)INFO:tensorflow:global_step/sec: 477.463INFO:tensorflow:loss = 16.63028, step = 24000 (0.210 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 8.562258, step = 24100 (0.207 sec)INFO:tensorflow:global_step/sec: 474.054INFO:tensorflow:loss = 10.901126, step = 24200 (0.212 sec)INFO:tensorflow:global_step/sec: 424.861INFO:tensorflow:loss = 15.733988, step = 24300 (0.234 sec)INFO:tensorflow:global_step/sec: 
424.862INFO:tensorflow:loss = 7.1398253, step = 24400 (0.236 sec)INFO:tensorflow:global_step/sec: 477.464INFO:tensorflow:loss = 10.8976555, step = 24500 (0.208 sec)INFO:tensorflow:global_step/sec: 430.331INFO:tensorflow:loss = 11.259674, step = 24600 (0.232 sec)INFO:tensorflow:global_step/sec: 457.842INFO:tensorflow:loss = 12.008896, step = 24700 (0.219 sec)INFO:tensorflow:global_step/sec: 461.935INFO:tensorflow:loss = 10.050714, step = 24800 (0.215 sec)INFO:tensorflow:global_step/sec: 475.2INFO:tensorflow:loss = 33.29493, step = 24900 (0.210 sec)INFO:tensorflow:global_step/sec: 432.186INFO:tensorflow:loss = 26.459604, step = 25000 (0.231 sec)INFO:tensorflow:global_step/sec: 441.706INFO:tensorflow:loss = 13.845991, step = 25100 (0.226 sec)INFO:tensorflow:global_step/sec: 466.36INFO:tensorflow:loss = 14.557849, step = 25200 (0.214 sec)INFO:tensorflow:global_step/sec: 477.465INFO:tensorflow:loss = 19.257524, step = 25300 (0.210 sec)INFO:tensorflow:global_step/sec: 477.459INFO:tensorflow:loss = 19.799717, step = 25400 (0.209 sec)INFO:tensorflow:global_step/sec: 381.327INFO:tensorflow:loss = 40.00123, step = 25500 (0.262 sec)INFO:tensorflow:global_step/sec: 358.097INFO:tensorflow:loss = 17.50845, step = 25600 (0.279 sec)INFO:tensorflow:global_step/sec: 405.813INFO:tensorflow:loss = 4.8813953, step = 25700 (0.246 sec)INFO:tensorflow:global_step/sec: 447.621INFO:tensorflow:loss = 49.33701, step = 25800 (0.222 sec)INFO:tensorflow:global_step/sec: 468.541INFO:tensorflow:loss = 7.751701, step = 25900 (0.214 sec)INFO:tensorflow:global_step/sec: 396.817INFO:tensorflow:loss = 32.643646, step = 26000 (0.251 sec)INFO:tensorflow:global_step/sec: 434.054INFO:tensorflow:loss = 11.188934, step = 26100 (0.231 sec)INFO:tensorflow:global_step/sec: 451.654INFO:tensorflow:loss = 15.741041, step = 26200 (0.220 sec)INFO:tensorflow:global_step/sec: 452.391INFO:tensorflow:loss = 3.752925, step = 26300 (0.222 sec)INFO:tensorflow:global_step/sec: 391.671INFO:tensorflow:loss = 16.379671, step = 26400 (0.254 sec)INFO:tensorflow:global_step/sec: 462.059INFO:tensorflow:loss = 11.137123, step = 26500 (0.216 sec)INFO:tensorflow:global_step/sec: 480.88INFO:tensorflow:loss = 5.845155, step = 26600 (0.208 sec)INFO:tensorflow:global_step/sec: 472.957INFO:tensorflow:loss = 12.275885, step = 26700 (0.211 sec)INFO:tensorflow:global_step/sec: 479.747INFO:tensorflow:loss = 10.686385, step = 26800 (0.208 sec)INFO:tensorflow:global_step/sec: 486.73INFO:tensorflow:loss = 18.116901, step = 26900 (0.206 sec)INFO:tensorflow:global_step/sec: 484.386INFO:tensorflow:loss = 5.5666404, step = 27000 (0.205 sec)INFO:tensorflow:global_step/sec: 477.464INFO:tensorflow:loss = 18.93435, step = 27100 (0.209 sec)INFO:tensorflow:global_step/sec: 481.784INFO:tensorflow:loss = 17.85114, step = 27200 (0.209 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 40.934975, step = 27300 (0.206 sec)INFO:tensorflow:global_step/sec: 484.383INFO:tensorflow:loss = 20.435343, step = 27400 (0.207 sec)INFO:tensorflow:global_step/sec: 484.382INFO:tensorflow:loss = 19.137032, step = 27500 (0.205 sec)INFO:tensorflow:global_step/sec: 472.853INFO:tensorflow:loss = 10.130213, step = 27600 (0.211 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 1.0732348, step = 27700 (0.208 sec)INFO:tensorflow:global_step/sec: 475.725INFO:tensorflow:loss = 10.930531, step = 27800 (0.210 sec)INFO:tensorflow:global_step/sec: 484.379INFO:tensorflow:loss = 44.685776, step = 27900 (0.206 sec)INFO:tensorflow:global_step/sec: 482.058INFO:tensorflow:loss = 
19.191761, step = 28000 (0.208 sec)INFO:tensorflow:global_step/sec: 410.931INFO:tensorflow:loss = 26.056076, step = 28100 (0.243 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 4.818268, step = 28200 (0.209 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 8.330089, step = 28300 (0.208 sec)INFO:tensorflow:global_step/sec: 479.75INFO:tensorflow:loss = 20.212261, step = 28400 (0.208 sec)INFO:tensorflow:global_step/sec: 486.732INFO:tensorflow:loss = 16.810787, step = 28500 (0.206 sec)INFO:tensorflow:global_step/sec: 472.709INFO:tensorflow:loss = 13.044697, step = 28600 (0.211 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 13.440829, step = 28700 (0.207 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 5.0004406, step = 28800 (0.207 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 6.756329, step = 28900 (0.207 sec)INFO:tensorflow:global_step/sec: 477.463INFO:tensorflow:loss = 10.694434, step = 29000 (0.210 sec)INFO:tensorflow:global_step/sec: 484.382INFO:tensorflow:loss = 14.040869, step = 29100 (0.205 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 17.21648, step = 29200 (0.207 sec)INFO:tensorflow:global_step/sec: 482.053INFO:tensorflow:loss = 11.488966, step = 29300 (0.208 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 5.529343, step = 29400 (0.206 sec)INFO:tensorflow:global_step/sec: 476.55INFO:tensorflow:loss = 15.432244, step = 29500 (0.210 sec)INFO:tensorflow:global_step/sec: 479.749INFO:tensorflow:loss = 16.552809, step = 29600 (0.208 sec)INFO:tensorflow:global_step/sec: 470.737INFO:tensorflow:loss = 7.043897, step = 29700 (0.212 sec)INFO:tensorflow:global_step/sec: 482.053INFO:tensorflow:loss = 12.584684, step = 29800 (0.207 sec)INFO:tensorflow:global_step/sec: 481.233INFO:tensorflow:loss = 24.733639, step = 29900 (0.208 sec)INFO:tensorflow:global_step/sec: 475.203INFO:tensorflow:loss = 12.632414, step = 30000 (0.211 sec)INFO:tensorflow:global_step/sec: 441.704INFO:tensorflow:loss = 12.68136, step = 30100 (0.225 sec)INFO:tensorflow:global_step/sec: 412.621INFO:tensorflow:loss = 37.484184, step = 30200 (0.242 sec)INFO:tensorflow:global_step/sec: 449.631INFO:tensorflow:loss = 15.617801, step = 30300 (0.222 sec)INFO:tensorflow:global_step/sec: 482.054INFO:tensorflow:loss = 14.349453, step = 30400 (0.208 sec)INFO:tensorflow:global_step/sec: 474.393INFO:tensorflow:loss = 12.567123, step = 30500 (0.211 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 7.4749293, step = 30600 (0.206 sec)INFO:tensorflow:global_step/sec: 475.199INFO:tensorflow:loss = 30.612448, step = 30700 (0.210 sec)INFO:tensorflow:global_step/sec: 489.111INFO:tensorflow:loss = 19.400856, step = 30800 (0.205 sec)INFO:tensorflow:global_step/sec: 453.699INFO:tensorflow:loss = 9.469748, step = 30900 (0.219 sec)INFO:tensorflow:global_step/sec: 457.839INFO:tensorflow:loss = 24.343462, step = 31000 (0.218 sec)INFO:tensorflow:global_step/sec: 472.961INFO:tensorflow:loss = 15.765479, step = 31100 (0.212 sec)INFO:tensorflow:global_step/sec: 455.17INFO:tensorflow:loss = 10.633205, step = 31200 (0.219 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 20.93686, step = 31300 (0.208 sec)INFO:tensorflow:global_step/sec: 378.363INFO:tensorflow:loss = 23.552933, step = 31400 (0.264 sec)INFO:tensorflow:global_step/sec: 430.336INFO:tensorflow:loss = 13.070565, step = 31500 (0.231 sec)INFO:tensorflow:global_step/sec: 459.941INFO:tensorflow:loss = 6.2678103, step = 31600 (0.217 
sec)INFO:tensorflow:global_step/sec: 414.328INFO:tensorflow:loss = 10.120808, step = 31700 (0.241 sec)INFO:tensorflow:global_step/sec: 475.2INFO:tensorflow:loss = 18.301664, step = 31800 (0.210 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 16.886845, step = 31900 (0.208 sec)INFO:tensorflow:global_step/sec: 464.2INFO:tensorflow:loss = 16.299747, step = 32000 (0.215 sec)INFO:tensorflow:global_step/sec: 462.061INFO:tensorflow:loss = 21.316154, step = 32100 (0.217 sec)INFO:tensorflow:global_step/sec: 457.841INFO:tensorflow:loss = 12.70973, step = 32200 (0.217 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 5.452079, step = 32300 (0.207 sec)INFO:tensorflow:global_step/sec: 472.96INFO:tensorflow:loss = 4.1584835, step = 32400 (0.211 sec)INFO:tensorflow:global_step/sec: 477.462INFO:tensorflow:loss = 2.599182, step = 32500 (0.209 sec)INFO:tensorflow:global_step/sec: 428.492INFO:tensorflow:loss = 5.45612, step = 32600 (0.234 sec)INFO:tensorflow:global_step/sec: 384.166INFO:tensorflow:loss = 15.949482, step = 32700 (0.260 sec)INFO:tensorflow:global_step/sec: 381.244INFO:tensorflow:loss = 3.2476845, step = 32800 (0.261 sec)INFO:tensorflow:global_step/sec: 393.203INFO:tensorflow:loss = 3.9147024, step = 32900 (0.255 sec)INFO:tensorflow:global_step/sec: 466.129INFO:tensorflow:loss = 13.132782, step = 33000 (0.214 sec)INFO:tensorflow:global_step/sec: 475.198INFO:tensorflow:loss = 39.466587, step = 33100 (0.211 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 4.681825, step = 33200 (0.206 sec)INFO:tensorflow:global_step/sec: 486.736INFO:tensorflow:loss = 12.494517, step = 33300 (0.206 sec)INFO:tensorflow:global_step/sec: 479.744INFO:tensorflow:loss = 22.186172, step = 33400 (0.207 sec)INFO:tensorflow:global_step/sec: 470.74INFO:tensorflow:loss = 6.183546, step = 33500 (0.213 sec)INFO:tensorflow:global_step/sec: 477.464INFO:tensorflow:loss = 17.981438, step = 33600 (0.208 sec)INFO:tensorflow:global_step/sec: 475.471INFO:tensorflow:loss = 13.775921, step = 33700 (0.210 sec)INFO:tensorflow:global_step/sec: 482.056INFO:tensorflow:loss = 6.911639, step = 33800 (0.207 sec)INFO:tensorflow:global_step/sec: 479.745INFO:tensorflow:loss = 15.76024, step = 33900 (0.208 sec)INFO:tensorflow:global_step/sec: 472.955INFO:tensorflow:loss = 13.304266, step = 34000 (0.211 sec)INFO:tensorflow:global_step/sec: 483.562INFO:tensorflow:loss = 2.8294363, step = 34100 (0.208 sec)INFO:tensorflow:global_step/sec: 477.462INFO:tensorflow:loss = 19.316208, step = 34200 (0.208 sec)INFO:tensorflow:global_step/sec: 393.939INFO:tensorflow:loss = 44.878494, step = 34300 (0.254 sec)INFO:tensorflow:global_step/sec: 394.753INFO:tensorflow:loss = 2.0533133, step = 34400 (0.254 sec)INFO:tensorflow:global_step/sec: 409.255INFO:tensorflow:loss = 9.190418, step = 34500 (0.243 sec)INFO:tensorflow:global_step/sec: 399.47INFO:tensorflow:loss = 26.359379, step = 34600 (0.250 sec)INFO:tensorflow:global_step/sec: 464.201INFO:tensorflow:loss = 18.134321, step = 34700 (0.215 sec)INFO:tensorflow:global_step/sec: 422.232INFO:tensorflow:loss = 10.139101, step = 34800 (0.237 sec)INFO:tensorflow:global_step/sec: 402.686INFO:tensorflow:loss = 10.342932, step = 34900 (0.249 sec)INFO:tensorflow:global_step/sec: 457.843INFO:tensorflow:loss = 4.9976006, step = 35000 (0.217 sec)INFO:tensorflow:global_step/sec: 462.061INFO:tensorflow:loss = 29.88851, step = 35100 (0.217 sec)INFO:tensorflow:global_step/sec: 447.623INFO:tensorflow:loss = 18.480728, step = 35200 (0.222 sec)INFO:tensorflow:global_step/sec: 
472.957INFO:tensorflow:loss = 11.742952, step = 35300 (0.211 sec)INFO:tensorflow:global_step/sec: 477.465INFO:tensorflow:loss = 13.0005455, step = 35400 (0.209 sec)INFO:tensorflow:global_step/sec: 464.197INFO:tensorflow:loss = 13.378135, step = 35500 (0.216 sec)INFO:tensorflow:global_step/sec: 397.887INFO:tensorflow:loss = 14.961187, step = 35600 (0.251 sec)INFO:tensorflow:global_step/sec: 466.359INFO:tensorflow:loss = 5.486117, step = 35700 (0.213 sec)INFO:tensorflow:global_step/sec: 472.959INFO:tensorflow:loss = 4.022088, step = 35800 (0.211 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 6.2967634, step = 35900 (0.209 sec)INFO:tensorflow:global_step/sec: 475.201INFO:tensorflow:loss = 4.9626236, step = 36000 (0.209 sec)INFO:tensorflow:global_step/sec: 470.406INFO:tensorflow:loss = 12.672366, step = 36100 (0.214 sec)INFO:tensorflow:global_step/sec: 443.661INFO:tensorflow:loss = 16.372377, step = 36200 (0.224 sec)INFO:tensorflow:global_step/sec: 443.246INFO:tensorflow:loss = 6.994566, step = 36300 (0.227 sec)INFO:tensorflow:global_step/sec: 472.957INFO:tensorflow:loss = 6.4031715, step = 36400 (0.211 sec)INFO:tensorflow:global_step/sec: 421.293INFO:tensorflow:loss = 16.94249, step = 36500 (0.237 sec)INFO:tensorflow:global_step/sec: 443.661INFO:tensorflow:loss = 17.236038, step = 36600 (0.224 sec)INFO:tensorflow:global_step/sec: 462.062INFO:tensorflow:loss = 12.616972, step = 36700 (0.216 sec)INFO:tensorflow:global_step/sec: 475.199INFO:tensorflow:loss = 9.171519, step = 36800 (0.210 sec)INFO:tensorflow:global_step/sec: 416.045INFO:tensorflow:loss = 17.064512, step = 36900 (0.241 sec)INFO:tensorflow:global_step/sec: 486.736INFO:tensorflow:loss = 18.815414, step = 37000 (0.204 sec)INFO:tensorflow:global_step/sec: 496.373INFO:tensorflow:loss = 10.074402, step = 37100 (0.201 sec)INFO:tensorflow:global_step/sec: 496.372INFO:tensorflow:loss = 26.890045, step = 37200 (0.202 sec)INFO:tensorflow:global_step/sec: 489.111INFO:tensorflow:loss = 5.393712, step = 37300 (0.204 sec)INFO:tensorflow:global_step/sec: 482.053INFO:tensorflow:loss = 13.061617, step = 37400 (0.206 sec)INFO:tensorflow:global_step/sec: 472.958INFO:tensorflow:loss = 3.7344925, step = 37500 (0.211 sec)INFO:tensorflow:global_step/sec: 476.309INFO:tensorflow:loss = 17.618496, step = 37600 (0.210 sec)INFO:tensorflow:global_step/sec: 470.735INFO:tensorflow:loss = 14.651289, step = 37700 (0.212 sec)INFO:tensorflow:global_step/sec: 484.383INFO:tensorflow:loss = 3.8855934, step = 37800 (0.207 sec)INFO:tensorflow:global_step/sec: 479.749INFO:tensorflow:loss = 4.125215, step = 37900 (0.207 sec)INFO:tensorflow:global_step/sec: 479.75INFO:tensorflow:loss = 10.512346, step = 38000 (0.208 sec)INFO:tensorflow:global_step/sec: 469.651INFO:tensorflow:loss = 5.511177, step = 38100 (0.213 sec)INFO:tensorflow:global_step/sec: 482.055INFO:tensorflow:loss = 16.000267, step = 38200 (0.207 sec)INFO:tensorflow:global_step/sec: 498.843INFO:tensorflow:loss = 11.044554, step = 38300 (0.201 sec)INFO:tensorflow:global_step/sec: 491.507INFO:tensorflow:loss = 5.934253, step = 38400 (0.202 sec)INFO:tensorflow:global_step/sec: 489.108INFO:tensorflow:loss = 13.422502, step = 38500 (0.204 sec)INFO:tensorflow:global_step/sec: 490.285INFO:tensorflow:loss = 12.331272, step = 38600 (0.204 sec)INFO:tensorflow:global_step/sec: 493.929INFO:tensorflow:loss = 3.0388405, step = 38700 (0.202 sec)INFO:tensorflow:global_step/sec: 486.274INFO:tensorflow:loss = 6.4854827, step = 38800 (0.206 sec)INFO:tensorflow:global_step/sec: 479.748INFO:tensorflow:loss = 
2.552942, step = 38900 (0.209 sec)INFO:tensorflow:global_step/sec: 484.384INFO:tensorflow:loss = 4.9176054, step = 39000 (0.205 sec)INFO:tensorflow:global_step/sec: 489.13INFO:tensorflow:loss = 7.4359365, step = 39100 (0.204 sec)INFO:tensorflow:global_step/sec: 489.107INFO:tensorflow:loss = 10.91906, step = 39200 (0.204 sec)INFO:tensorflow:global_step/sec: 485.536INFO:tensorflow:loss = 8.183137, step = 39300 (0.206 sec)INFO:tensorflow:global_step/sec: 491.507INFO:tensorflow:loss = 40.337734, step = 39400 (0.203 sec)INFO:tensorflow:global_step/sec: 489.107INFO:tensorflow:loss = 9.783309, step = 39500 (0.205 sec)INFO:tensorflow:global_step/sec: 491.507INFO:tensorflow:loss = 26.599018, step = 39600 (0.202 sec)INFO:tensorflow:global_step/sec: 490.141INFO:tensorflow:loss = 10.325868, step = 39700 (0.204 sec)INFO:tensorflow:global_step/sec: 493.929INFO:tensorflow:loss = 6.5988894, step = 39800 (0.202 sec)INFO:tensorflow:global_step/sec: 486.737INFO:tensorflow:loss = 4.4995484, step = 39900 (0.205 sec)INFO:tensorflow:Saving checkpoints for 40000 into models/autompg-dnnregressor/model.ckpt.INFO:tensorflow:Loss for final step: 8.175077.





<tensorflow_estimator.python.estimator.canned.dnn.DNNRegressorV2 at 0x2934fc23a88>

Calling .train() automatically saves checkpoints during training; we can also reload the model from the most recent checkpoint:

reloaded_regressor = tf.estimator.DNNRegressor(
    feature_columns=all_feature_columns,
    hidden_units=[32, 10],
    warm_start_from='models/autompg-dnnregressor/',
    model_dir='models/autompg-dnnregressor/')
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': 'models/autompg-dnnregressor/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
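When model_dir already contains checkpoints, the estimator restores from the most recent one. A minimal sketch for checking which checkpoint file that is (using the standard tf.train.latest_checkpoint helper; the directory name simply matches the model_dir used above):

import tensorflow as tf

# Returns the path of the newest checkpoint in the directory,
# e.g. 'models/autompg-dnnregressor/model.ckpt-40000', or None if the directory is empty
latest = tf.train.latest_checkpoint('models/autompg-dnnregressor/')
print(latest)
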
def eval_input_fn(df_test, batch_size=8):
    df = df_test.copy()
    test_x, test_y = df, df.pop('MPG')
    dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
    return dataset.batch(batch_size)

"""
As with the training data, an input function is also needed for the test set;
it is used here to evaluate the model.
"""

eval_results = reloaded_regressor.evaluate(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))

for key in eval_results:
    print('{:15s} {}'.format(key, eval_results[key]))

print('Average-Loss {:.4f}'.format(eval_results['average_loss']))
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-06-13T19:45:35Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.44349s
INFO:tensorflow:Finished evaluation at 2021-06-13-19:45:36
INFO:tensorflow:Saving dict for global step 40000: average_loss = 15.717226, global_step = 40000, label/mean = 23.611391, loss = 15.636656, prediction/mean = 22.074104
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 40000: models/autompg-dnnregressor/model.ckpt-40000
average_loss    15.717226028442383
label/mean      23.611391067504883
loss            15.636655807495117
prediction/mean 22.07410430908203
global_step     40000
Average-Loss 15.7172
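Note that the reported average_loss is (by default) the per-example mean squared error of the regression head, so its square root gives an error in the original MPG units. A minimal sketch, assuming the eval_results dict from above:

import numpy as np

# Square root of the per-example mean squared error,
# roughly 3.96 MPG for the values reported above
rmse = np.sqrt(eval_results['average_loss'])
print('RMSE: {:.2f} MPG'.format(rmse))
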

To use the model to predict target values for new data points, we can call its predict() method.

Here, we treat the test dataset as if it were new, unlabeled real-world data.

pred_res = regressor.predict(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))

print(next(iter(pred_res)))
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, ... (same dtype-casting warning as above)
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([23.095266], dtype=float32)}
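predict() returns a generator that yields one dictionary per test example. A minimal sketch for collecting all predictions into a NumPy array and comparing them with the true targets (this assumes the un-normalized MPG column is still present in df_test_norm, as in the eval_input_fn above):

import numpy as np

pred_res = regressor.predict(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))

# Each element is a dict such as {'predictions': array([23.09], dtype=float32)}
y_pred = np.array([p['predictions'][0] for p in pred_res])
y_true = df_test_norm['MPG'].values

print('MAE: {:.2f} MPG'.format(np.mean(np.abs(y_true - y_pred))))
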

1.11Decision tree boosting

boosted_tree = tf.estimator.BoostedTreesRegressor(
    feature_columns=all_feature_columns,
    n_batches_per_layer=20,
    n_trees=200)

boosted_tree.train(
    input_fn=lambda: train_input_fn(df_train_norm, batch_size=BATCH_SIZE))

eval_results = boosted_tree.evaluate(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))

print(eval_results)
print('Average-Loss {:.4f}'.format(eval_results['average_loss']))
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: C:\Users\xiaoyao\AppData\Local\Temp\tmpb9neejqy
INFO:tensorflow:Using config: {'_model_dir': 'C:\\Users\\xiaoyao\\AppData\\Local\\Temp\\tmpb9neejqy', ...}  (remaining run-config entries are the same defaults as shown above)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. '_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into C:\Users\xiaoyao\AppData\Local\Temp\tmpb9neejqy\model.ckpt.
INFO:tensorflow:loss = 594.5, step = 0
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.  (this warning is repeated several times)
INFO:tensorflow:loss = 397.9144, step = 80 (0.719 sec)
INFO:tensorflow:loss = 92.11904, step = 180 (0.606 sec)
INFO:tensorflow:loss = 15.642084, step = 380 (0.480 sec)
INFO:tensorflow:loss = 2.1725254, step = 980 (0.471 sec)
INFO:tensorflow:loss = 0.86018693, step = 4980 (0.493 sec)
INFO:tensorflow:loss = 0.010137219, step = 10080 (0.535 sec)
... (intermediate log lines omitted; the loss keeps decreasing as training proceeds) ...
INFO:tensorflow:loss = 5.7688507e-05, step = 23980 (0.553 sec)
INFO:tensorflow:Saving checkpoints for 24000 into C:\Users\xiaoyao\AppData\Local\Temp\tmpb9neejqy\model.ckpt.
INFO:tensorflow:Loss for final step: 0.0003769357.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-06-13T19:57:30Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from C:\Users\xiaoyao\AppData\Local\Temp\tmpb9neejqy\model.ckpt-24000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.23330s
INFO:tensorflow:Finished evaluation at 2021-06-13-19:57:30
INFO:tensorflow:Saving dict for global step 24000: average_loss = 11.562195, global_step = 24000, label/mean = 23.611391, loss = 11.448849, prediction/mean = 22.46442
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 24000: C:\Users\xiaoyao\AppData\Local\Temp\tmpb9neejqy\model.ckpt-24000
{'average_loss': 11.562195, 'label/mean': 23.611391, 'loss': 11.448849, 'prediction/mean': 22.46442, 'global_step': 24000}
Average-Loss 11.5622

1.12使用 Estimators 进行 MNIST 手写数字分类

import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np

由于TensorFlow 2.0的某些部分仍然不够完善,在执行下一个代码块时可能会遇到如下错误:RuntimeError: Graph is finalized and cannot be modified(计算图已被固化,无法再修改)。目前该问题尚无好的解决办法,建议在执行下一个代码块之前重新启动Python、IPython或Jupyter Notebook会话。

BUFFER_SIZE = 10000
BATCH_SIZE = 64
NUM_EPOCHS = 20
steps_per_epoch = np.ceil(60000 / BATCH_SIZE)

steps_per_epoch决定了每个epoch中的迭代次数;由于训练数据集会被无限重复(repeat()),必须用它换算出总的训练步数,才能控制训练何时结束。
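下面用一段简单的换算说明总训练步数是如何得到的(沿用上面定义的 BATCH_SIZE 与 NUM_EPOCHS,MNIST 训练集共 60000 张图像):

# 每个 epoch 的迭代次数:60000 / 64 = 937.5,向上取整为 938
steps_per_epoch = np.ceil(60000 / BATCH_SIZE)

# 传给 dnn_classifier.train() 的总步数:20 * 938 = 18760
total_steps = int(NUM_EPOCHS * steps_per_epoch)
print(steps_per_epoch, total_steps)   # 938.0 18760

这也正是后面训练日志中最后一次保存检查点所对应的步数(18760)。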

定义预处理函数,用于对图像数据进行预处理。

def preprocess(item):
    image = item['image']
    label = item['label']
    image = tf.image.convert_image_dtype(
        image, tf.float32)
    image = tf.reshape(image, (-1,))

    return {'image-pixels':image}, label[..., tf.newaxis]
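
为直观了解preprocess的效果,可以构造一个与tfds返回的MNIST样本结构相同的伪造样本进行检查。下面是一段示意代码,其中dummy_item是为演示而假设的样本,'image'与'label'的键名与tfds的MNIST数据集一致:

# 伪造一个样本:28x28x1 的 uint8 图像和一个标量标签
dummy_item = {'image': tf.zeros((28, 28, 1), dtype=tf.uint8),
              'label': tf.constant(5, dtype=tf.int64)}

features, label = preprocess(dummy_item)
print(features['image-pixels'].shape)  # (784,)  展平后的像素向量
print(label.shape)                     # (1,)    标签被扩展出一个维度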

## Step 1: Defining the input functions (one for training and one for evaluation)
## define input-function for training:
def train_input_fn():
    datasets = tfds.load(name='mnist')
    mnist_train = datasets['train']

    dataset = mnist_train.map(preprocess)
    dataset = dataset.shuffle(BUFFER_SIZE)
    dataset = dataset.batch(BATCH_SIZE)
    return dataset.repeat()

## define input-function for evaluation:
def eval_input_fn():
    datasets = tfds.load(name='mnist')
    mnist_test = datasets['test']
    dataset = mnist_test.map(preprocess).batch(BATCH_SIZE)
    return dataset
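
在正式训练前,也可以先从输入函数中取出一个批次,确认特征与标签的形状是否符合预期。下面是一段示意代码(首次运行时tfds会下载MNIST数据集):

ds = train_input_fn()
batch_features, batch_labels = next(iter(ds))
print(batch_features['image-pixels'].shape)  # (64, 784)
print(batch_labels.shape)                    # (64, 1)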

由于输入图像是uint8类型,取值范围为[0, 255],因此使用tf.image.convert_image_dtype()将其转换为tf.float32类型,并缩放到[0, 1]范围。
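下面用一个最小示例验证这一缩放行为(示意代码):

pixels = tf.constant([0, 127, 255], dtype=tf.uint8)
tf.print(tf.image.convert_image_dtype(pixels, tf.float32))
# 输出约为 [0 0.498 1]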

## Step 2: feature column
image_feature_column = tf.feature_column.numeric_column(
    key='image-pixels', shape=(28*28))
## Step 3: instantiate the estimator
dnn_classifier = tf.estimator.DNNClassifier(
    feature_columns=[image_feature_column],
    hidden_units=[32, 16],
    n_classes=10,
    model_dir='./models/mnist-dnn/')


## Step 4: train
dnn_classifier.train(
    input_fn=train_input_fn,
    steps=NUM_EPOCHS * steps_per_epoch)
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': './models/mnist-dnn/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From D:\installation\anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From D:\installation\anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\training\training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From D:\installation\anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\optimizer_v2\adagrad.py:103: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into ./models/mnist-dnn/model.ckpt.
INFO:tensorflow:loss = 2.341197, step = 0
……(训练日志:每 100 步输出一次 loss,从约 2.34 逐步下降到 0.3~0.6 左右,此处省略)
INFO:tensorflow:Saving checkpoints for 18760 into ./models/mnist-dnn/model.ckpt.
INFO:tensorflow:Loss for final step: 0.67470294.

<tensorflow_estimator.python.estimator.canned.dnn.DNNClassifierV2 at 0x1781f882388>
eval_result = dnn_classifier.evaluate(
    input_fn=eval_input_fn)

print(eval_result)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-06-13T20:14:53Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from ./models/mnist-dnn/model.ckpt-18760
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 1.23237s
INFO:tensorflow:Finished evaluation at 2021-06-13-20:14:55
INFO:tensorflow:Saving dict for global step 18760: accuracy = 0.8985, average_loss = 0.37901935, global_step = 18760, loss = 0.37936696
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 18760: ./models/mnist-dnn/model.ckpt-18760
{'accuracy': 0.8985, 'average_loss': 0.37901935, 'loss': 0.37936696, 'global_step': 18760}
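
得到约 89.85% 的测试准确率之后,还可以调用predict()对测试集进行预测。下面是一段示意代码,假设predict()返回的每个元素都是字典,其中通常包含'class_ids'、'probabilities'等键(这是tf.estimator.DNNClassifier的默认输出):

pred_gen = dnn_classifier.predict(input_fn=eval_input_fn)
first_pred = next(iter(pred_gen))
print(first_pred['class_ids'])            # 预测的类别编号,例如 [7]
print(first_pred['probabilities'].shape)  # (10,) 各类别的预测概率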

1.13从现有的 Keras模型创建自定义的Estimator

如果已经开发了一个Keras模型,并希望将其发布或与组织中的其他成员共享,那么将Keras模型转换为Estimator在学术界和工业界都很有用。这样的转换使我们能够利用Estimator的优点,例如分布式训练和自动保存检查点。此外,通过指定特征列和输入函数,也便于其他人使用该模型,尤其可以避免在解释输入特征时产生混淆。

## Set random seeds for reproducibility
tf.random.set_seed(1)
np.random.seed(1)

## Create the data
x = np.random.uniform(low=-1, high=1, size=(200, 2))
y = np.ones(len(x))
y[x[:, 0] * x[:, 1] < 0] = 0

x_train = x[:100, :]
y_train = y[:100]
x_valid = x[100:, :]
y_valid = y[100:]

## Step 1: Define the input functions
def train_input_fn(x_train, y_train, batch_size=8):
    dataset = tf.data.Dataset.from_tensor_slices(
        ({'input-features':x_train}, y_train.reshape(-1, 1)))
    # Shuffle, repeat, and batch the examples.
    return dataset.shuffle(100).repeat().batch(batch_size)

def eval_input_fn(x_test, y_test=None, batch_size=8):
    if y_test is None:
        dataset = tf.data.Dataset.from_tensor_slices(
            {'input-features':x_test})
    else:
        dataset = tf.data.Dataset.from_tensor_slices(
            ({'input-features':x_test}, y_test.reshape(-1, 1)))
    # Batch the examples (no shuffling or repeating for evaluation).
    return dataset.batch(batch_size)
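
可以先从验证集输入函数中取出一个批次,检查特征与标签的形状(示意代码):

valid_ds = eval_input_fn(x_valid, y_valid, batch_size=8)
batch = next(iter(valid_ds))
print(batch[0]['input-features'].shape)  # (8, 2)
print(batch[1].shape)                    # (8, 1)
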
## Step 2: Define the feature columns
features = [
    tf.feature_column.numeric_column(
        key='input-features:', shape=(2,))
]

features
[NumericColumn(key='input-features:', shape=(2,), default_value=None, dtype=tf.float32, normalizer_fn=None)]
## Step 3: Create the estimator: convert from a Keras model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,), name='input-features'),
    tf.keras.layers.Dense(units=4, activation='relu'),
    tf.keras.layers.Dense(units=4, activation='relu'),
    tf.keras.layers.Dense(units=4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.summary()

model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

my_estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model,
    model_dir='./models/estimator-for-XOR/')

Model: "sequential"_________________________________________________________________Layer (type)                 Output Shape              Param #   =================================================================dense (Dense)                (None, 4)                 12        _________________________________________________________________dense_1 (Dense)              (None, 4)                 20        _________________________________________________________________dense_2 (Dense)              (None, 4)                 20        _________________________________________________________________dense_3 (Dense)              (None, 1)                 5         =================================================================Total params: 57Trainable params: 57Non-trainable params: 0_________________________________________________________________INFO:tensorflow:Using default config.


INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': './models/estimator-for-XOR/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
## Step 4: use the estimator: train/evaluate/predict
num_epochs = 200
batch_size = 2
steps_per_epoch = np.ceil(len(x_train) / batch_size)

my_estimator.train(
    input_fn=lambda: train_input_fn(x_train, y_train, batch_size),
    steps=num_epochs * steps_per_epoch)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='models/estimator-for-XOR/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: models/estimator-for-XOR/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-started 8 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into models/estimator-for-XOR/model.ckpt.
INFO:tensorflow:loss = 0.67599964, step = 0
INFO:tensorflow:loss = 0.6942914, step = 1000 (0.047 sec)
INFO:tensorflow:loss = 0.7336613, step = 2000 (0.051 sec)
INFO:tensorflow:loss = 0.68522173, step = 3000 (0.050 sec)
INFO:tensorflow:loss = 0.51575875, step = 4000 (0.046 sec)
INFO:tensorflow:loss = 0.13730438, step = 5000 (0.050 sec)
INFO:tensorflow:loss = 0.015422126, step = 6000 (0.051 sec)
INFO:tensorflow:loss = 0.032709546, step = 7000 (0.049 sec)
INFO:tensorflow:loss = 0.012509959, step = 8000 (0.048 sec)
INFO:tensorflow:loss = 0.005111583, step = 9000 (0.046 sec)
INFO:tensorflow:Saving checkpoints for 10000 into models/estimator-for-XOR/model.ckpt.
INFO:tensorflow:Loss for final step: 0.00016663199.





<tensorflow_estimator.python.estimator.estimator.EstimatorV2 at 0x7fe829ea0f28>
my_estimator.evaluate(
    input_fn=lambda: eval_input_fn(x_valid, y_valid, batch_size))
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2019-10-29T21:23:01Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/estimator-for-XOR/model.ckpt-10000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2019-10-29-21:23:01
INFO:tensorflow:Saving dict for global step 10000: binary_accuracy = 0.96, global_step = 10000, loss = 0.081909806
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: models/estimator-for-XOR/model.ckpt-10000





{'binary_accuracy': 0.96, 'loss': 0.081909806, 'global_step': 10000}
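
The Step 4 comment above lists predict alongside train and evaluate. As a minimal, hedged sketch of that last step (the pred_input_fn helper, the 'input-features' feature key, and the query points below are illustrative assumptions rather than code from the original notebook; the key must match the name of the Keras model's input layer defined earlier), prediction on new points could look like this:

## TF-v2 code example (illustrative sketch)
import numpy as np
import tensorflow as tf

def pred_input_fn(x, batch_size):
    # Hypothetical helper modeled on eval_input_fn, but without labels;
    # 'input-features' is assumed to be the name of the model's input layer.
    dataset = tf.data.Dataset.from_tensor_slices({'input-features': x})
    return dataset.batch(batch_size)

x_new = np.array([[0.1, 0.9], [0.9, 0.8]], dtype=np.float32)  # assumed query points

# predict() returns a generator yielding one prediction dict per example
pred_results = my_estimator.predict(
    input_fn=lambda: pred_input_fn(x_new, batch_size=2))
for pred in pred_results:
    print(pred)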

Converting a Keras model into an Estimator makes it easy to take advantage of the Estimator API's benefits, such as distributed training and automatic checkpoint saving during training.
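
To make that configurability concrete, here is a minimal sketch (the small Sequential model, its layer names, and the specific RunConfig values are illustrative assumptions, not the notebook's original definitions) of converting a compiled Keras model with tf.keras.estimator.model_to_estimator while a tf.estimator.RunConfig controls checkpointing:

## TF-v2 code example (illustrative sketch)
import tensorflow as tf

# A small stand-in for the compiled Keras XOR model built earlier in this section;
# the exact architecture and layer names here are assumptions for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(2,), name='input-features'),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['binary_accuracy'])

# RunConfig controls checkpointing behaviour (and, via train_distribute,
# could carry a tf.distribute strategy for distributed training).
run_config = tf.estimator.RunConfig(
    model_dir='models/estimator-for-XOR/',
    save_checkpoints_secs=600,
    keep_checkpoint_max=5)

# Convert the compiled Keras model into an Estimator that uses this configuration
new_estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, config=run_config)

Because the checkpointing and distribution settings live in RunConfig rather than in the model code, the same Keras model can be reused unchanged across different training setups.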

