TensorFlow Python Exercises on Windows 10

To be continued.

0. Overview

0.1 Basic architecture

Layer          | Function                                                                      | Component
View layer     | Computation-graph visualization                                               | TensorBoard
Workflow layer | Dataset preparation, storage, and loading                                     | Keras / TF-Slim
Graph layer    | Graph construction and optimization; forward computation and backpropagation | TensorFlow Core

0.2 The Computation Graph (Data Flow Graph)

A computation graph (a directed graph, also called a data-flow graph) describes how tensor data flows through the computation and is responsible for maintaining and updating state. Once all the tensors feeding a node are ready, the node executes asynchronously and in parallel.
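A minimal sketch of this deferred-execution model (TF 1.x style, matching the rest of these notes): building the graph computes nothing, and values flow only when a session runs a node.

import tensorflow as tf

a = tf.constant(1.0)    # a source op: needs no input
b = tf.constant(2.0)
c = tf.add(a, b)        # defines a graph node; nothing is computed yet
print(c)                # prints a Tensor description, not 3.0
with tf.Session() as sess:
    print(sess.run(c))  # the node executes only now, printing 3.0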

1. Getting Started

1.1 Installation

Download and install Python 3.5.3.

Download and install pip, Python's package manager.

In cmd, install numpy with pip:

pip install numpy-1.11.1+mkl-cp35-cp35m-win32.whl

In PowerShell (run as administrator), install tensorflow with pip:

pip install tensorflow
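A quick sanity check from cmd that the install succeeded:

python -c "import tensorflow as tf; print(tf.__version__)"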

1.2 Building a computation graph

  1. TensorFlow's central data type is the tensor; in Python it appears as a nested (multi-dimensional) list, e.g. [[0,1],[0,2]].
  2. TF code typically falls into four phases: inference, loss, train, evaluate.
  3. A graph is a network made of ops (operation nodes).
  4. An op consumes tensors and, after computing, emits tensors. (Source ops in the first layer need no input.)
import tensorflow as tf
import numpy as np
# Generate synthetic data with numpy: 100 points
x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300
# Build a linear model
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b
# Minimize the squared error
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
# Launch the graph and fit the parameters
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))

Figure 1.2: The computation graph

1.3 Computation graphs in depth

1.3.0 Tensor operation nodes (ops)

Operation type     | Examples
Scalar math        | add, sub, mul, div, exp, log, greater, less, equal
Array/vector       | concat, slice, split, constant, rank, shape, shuffle
Matrix             | matmul, matrixInverse, matrixDeterminant
Stateful           | Variable, Assign, AssignAdd
NN building blocks | softmax, sigmoid, relu, convolution2D, maxPooling
Checkpointing      | save, restore
Queues & sync      | enqueue, dequeue, mutexAcquire, mutexRelease
Control flow       | Merge, Switch, Enter, Leave, NextIteration
product = tf.matmul(matrix1, matrix2)  # matrix multiplication
intermed = tf.add(input2, input3)      # addition
mul = tf.multiply(input1, intermed)    # element-wise multiplication

1.3.1 Variables (Variable)

Creation
When you create a variable, you pass a tensor to the Variable() constructor as its initial value. TensorFlow provides a set of ops for producing these initial tensors from constants or random values.

weight = tf.Variable(tf.random_normal([784,200],stddev=0.35),name="weight")
biases = tf.Variable(tf.zeros([200]),name="biases")

An initializer op sets a variable to its initial value; under the hood this is a tf.assign operation:

update = tf.assign(state, new_value)  # state = new_value
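For context, state and new_value come from the classic counter example in the TF 1.x docs; the full pattern looks like this:

import tensorflow as tf

state = tf.Variable(0, name="counter")  # the variable being updated
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)    # running this sets state = new_value

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(state))              # 0
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))          # 1, 2, 3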

Initialization
Variables must be initialized before the graph is launched:

init=tf.initialize_all_variables()#initialize the variables
sess=tf.Session()#launch the graph
sess.run(init)#run the init node

Initializing from another variable

weights=tf.Variable(tf.random_normal([784,200],stddev=0.35),name="weights")
w2=tf.Variable(weights.initialized_value(),name="w2")
w_twice=tf.Variable(weights.initialized_value()+0.2,name="w_twice")

Saving
Use tf.train.Saver. The checkpoint data is essentially a mapping from variable names to tensors.

v1=tf.Variable(tf.zeros([200]),name="v1")
v2=tf.Variable(tf.zeros([200]),name="v2")
init_op=tf.initialize_all_variables()
saver=tf.train.Saver()
with tf.Session() as sess:
    sess.run(init_op)
    save_path=saver.save(sess,"tfsave/model.ckpt")
    print ("Model saved in file:",save_path)

Saving a subset of variables

saver=tf.train.Saver({"my_v2": v2})

Restoring

v1=tf.Variable(tf.zeros([200]),name="v1")
v2=tf.Variable(tf.zeros([200]),name="v2")
init_op=tf.initialize_all_variables()
saver=tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess,"tfsave/model.ckpt")
    print ("Model restored")

1.3.2 Sessions (Session)

Creation
Method 1: globally visible

sess=tf.Session()

Method 2: scoped execution

with tf.Session() as sess:

Running

sess.run(init)

Training loop

for step in range(0,201):
    sess.run(train)
    if step % 20 == 0:
        print(step,sess.run(W),sess.run(b))

1.4 HelloWorld

1.4.1 helloworld

import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
hello=tf.constant("HelloWorld")
sess=tf.Session()
print(sess.run(hello))

1.4.2 Constant operations

import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
a=tf.constant(2)
b=tf.constant(3)
with tf.Session() as sess:
    print("a=2,b=3")
    print("常量节点相加: %i" % sess.run(a+b))
    print("常量节点相乘: %i" % sess.run(a*b))

1.4.3 Placeholder operations

import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
a=tf.placeholder(tf.int16)
b=tf.placeholder(tf.int16)
add=tf.add(a,b)
mul= tf.multiply(a,b)
with tf.Session() as sess:
    print("变量节点相加: %i" % sess.run(add,feed_dict={a:2,b:3}))
    print("变量节点相乘: %i" % sess.run(mul,feed_dict={a:2,b:3}))

1.4.4 Matrix operations (saving the graph)

import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
matrix1=tf.constant([[3.,3.]])
matrix2=tf.constant([[2.],[2.]])
product=tf.matmul(matrix1,matrix2)
with tf.Session() as sess:
    print("变量节点相乘: %i" % sess.run(product))
#保存计算图
writer=tf.summary.FileWriter(logdir='logs',graph=tf.get_default_graph())
writer.flush()

Run TensorBoard from cmd to visualize the graph:

tensorboard --logdir=C:\workplace\py\tf\logs

Open 127.0.0.1:6006 in a browser to see the result.

1.5 TensorBoard

1.5.1 Theory

TensorBoard's source is built on Node.js; the dashboard visualizes serialized data sets, including scalars, images, audio, the computation graph (graphs), distributions, histograms, and embeddings.

  1. The data TensorBoard displays comes from a running TensorFlow program writing summary-type records into event files in a log directory.
  2. Summary ops include summary.scalar, summary.histogram, summary.image, and so on; each outputs a summary protobuf, which is finally written to the event file by a summary FileWriter.

Figure 1.5.1: How TensorBoard works internally

Scalars

tf.summary.scalar(tags,values,collections=None,name=None)

Images

tf.summary.image(tag,tensor,max_images=3,collections=None,name=None)

The tensor must be 4-D, shaped [batch_size, height, width, channels].
The image summary shown in TensorBoard is always the one from the last global step.
Audio

tf.summary.audio(tag,tensor,sample_rate,max_outputs=3,collections=None,name=None)

Histograms
Records a variable's histogram and outputs a summary protobuf containing it.

tf.summary.histogram(tag,values,collections=None,name=None)

values may have arbitrary shape.
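The summary ops above only define graph nodes; to get data into the event file you evaluate them and hand the serialized result to a FileWriter. A minimal end-to-end sketch (the names x, loss, and the log directory are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.square(x)                              # stand-in for a real loss
tf.summary.scalar('loss', loss)                  # summary op -> protobuf
tf.summary.histogram('x_hist', x)
merged = tf.summary.merge_all()                  # one op evaluating all summaries

writer = tf.summary.FileWriter('logs/summary_demo', tf.get_default_graph())
with tf.Session() as sess:
    for step in range(100):
        summary = sess.run(merged, feed_dict={x: 10.0 - 0.1 * step})
        writer.add_summary(summary, global_step=step)  # append to the event file
writer.close()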

1.5.2 Practice

Run the Python script:

import os
import tensorflow as tf
import math
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

IMAGE_PIXELS=10
images=tf.placeholder(tf.float32,shape=(None,IMAGE_PIXELS))

#hidden 1: y1=relu(W*x1+b1)
hidden1_units=20
with tf.name_scope('hidden1'):
    weights=tf.Variable(tf.truncated_normal([IMAGE_PIXELS,hidden1_units],stddev=1.0/math.sqrt(float(IMAGE_PIXELS))),name='weights')
    biases=tf.Variable(tf.zeros([hidden1_units]),name='biases')
    hidden1=tf.nn.relu(tf.matmul(images,weights)+biases)
    
#hidden 2:y2=relu(W*x2+b2)
hidden2_units=10
with tf.name_scope('hidden2'):
    weights=tf.Variable(tf.truncated_normal([hidden1_units,hidden2_units],stddev=1.0/math.sqrt(float(hidden1_units))),name='weights')
    biases=tf.Variable(tf.zeros([hidden2_units]),name='biases')
    hidden2=tf.nn.relu(tf.matmul(hidden1,weights)+biases)
# Save the computation graph
writer=tf.summary.FileWriter("logs/test_tensorboard",tf.get_default_graph())
writer.close()

Then run from cmd:

tensorboard --logdir=C:\workplace\py\tf\logs

Open http://127.0.0.1:6006/#graphs in a browser.

Figure 1.5.2-2: Viewing the graph in TensorBoard

1.6 Univariate Linear Regression

1.6.1 Theory

Regression model: given data points {[x1,y1],[x2,y2],...,[xn,yn]} of one-dimensional random variables x and y, find the function y = wx + b.
Optimization model: minimize (half of) the mean squared error, matching the code below:

$$ L(w,b) = \frac{1}{2n}\sum_{i=1}^{n}\bigl(y_i - (w x_i + b)\bigr)^2 $$

Optimizer: gradient descent
Gradient descent is a common first-order optimization method and one of the simplest, most classical approaches to unconstrained optimization. Consider the unconstrained problem $\min_x f(x)$, where $f(x)$ is continuously differentiable. If we can construct a sequence $x^0,x^1,...,x^t$ satisfying:

$$ f(x^{t+1}) < f(x^t), \quad t=0,1,2,\dots $$

then repeating this process converges to a local minimum. From the Taylor expansion we know:

$$ f(x+\Delta x) \simeq f(x) + \Delta x^T \nabla f(x) $$

So, to satisfy $f(x+\Delta x) < f(x)$, we can choose:

$$ \Delta x = -{step} \nabla f(x) $$

where $step$ is a small constant, the step size. Taking minimization as the goal, gradient descent can run into the following situations (a NumPy sketch of the update rule follows this list):

  • When the objective is convex, a local minimum is also the global minimum, and the method finds the optimum quickly.
  • When the objective has several local minima, the search may get stuck in a local optimum, so it helps to restart from multiple random starting points.
  • When the objective has no minimum, the iteration may loop forever, so it is necessary to cap the number of iterations.
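A pure-NumPy sketch of the update rule $\Delta x = -step\,\nabla f(x)$ on the convex example $f(x)=x^2$ (illustrative only; TensorFlow's GradientDescentOptimizer below automates exactly this):

import numpy as np

def grad(x):
    return 2.0 * x       # gradient of f(x) = x**2

x = 5.0                  # starting point
step = 0.1               # small constant step size
for t in range(100):     # cap the iterations, per the caveats above
    x = x - step * grad(x)
print(x)                 # approaches the global minimizer x = 0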

1.6.2 Practice

import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

with tf.Graph().as_default():
    with tf.name_scope('Input'):
        X=tf.placeholder(tf.float32,name='X')
        Y_true=tf.placeholder(tf.float32,name='Y_true')
    with tf.name_scope('Inference'):
        W=tf.Variable(tf.zeros([1]),name='Weight')
        b=tf.Variable(tf.zeros([1]),name='Bias')
        Y_pred=tf.add(tf.multiply(X,W),b)
    with tf.name_scope('Loss'):
        # Loss
        Train_loss=tf.reduce_mean(tf.pow((Y_true-Y_pred),2))/2
    # Gradient descent optimizer
    with tf.name_scope('Train'):
        Optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.1)
        # Training node
        TrainOp=Optimizer.minimize(Train_loss)
    with tf.name_scope('Eval'):
        # Evaluation node
        EvalLoss=tf.reduce_mean(tf.pow((Y_true-Y_pred),2))/2
    
    
    writer=tf.summary.FileWriter("logs/test_tensorboard",tf.get_default_graph())
    writer.close()
    

Run the script, then TensorBoard.
Delete the logs folder first, then rerun TensorBoard.

Figure 1.6.2-1: Univariate linear regression in TensorBoard

Version with data plots

import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

import numpy as np
import matplotlib.pyplot as plt
# Default font settings (only needed for CJK plot labels)
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['font.family']='sans-serif'
# Render the minus sign '-' correctly instead of as a box
plt.rcParams['axes.unicode_minus'] = False

# Generate the training set
train_X=np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y=np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_train_samples=train_X.shape[0]
print('Number of training samples:',n_train_samples)
# Generate the test set
test_X=np.asarray([6.83,4.668,8.9,7.91,5.7,8.7,3.1,2.1])
test_Y=np.asarray([1.84,2.273,3.2,2.831,2.92,3.24,1.35,1.03])
n_test_samples=test_X.shape[0]
print('Number of test samples:',n_test_samples)
# Computation graph
with tf.Graph().as_default():
    with tf.name_scope('Input'):
        X=tf.placeholder(tf.float32,name='X')
        Y_true=tf.placeholder(tf.float32,name='Y_true')
    with tf.name_scope('Inference'):
        W=tf.Variable(tf.zeros([1]),name='Weight')
        b=tf.Variable(tf.zeros([1]),name='Bias')
        Y_pred=tf.add(tf.multiply(X,W),b)
    with tf.name_scope('Loss'):
        # Loss
        TrainLoss=tf.reduce_mean(tf.pow((Y_pred-Y_true),2))/2
    # Gradient descent optimizer
    with tf.name_scope('Train'):
        Optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.01)
        # Training node
        TrainOp=Optimizer.minimize(TrainLoss)
    with tf.name_scope('Eval'):
        # Evaluation node
        EvalLoss=tf.reduce_mean(tf.pow((Y_pred-Y_true),2))/2
    # Initialization node
    InitOp=tf.global_variables_initializer()
    # Save the computation graph
    writer=tf.summary.FileWriter(logdir="logs/test_tensorboard",graph=tf.get_default_graph())
    writer.close()
    # Launch the graph
    sess=tf.Session()
    sess.run(InitOp)
    for step in range(1000):
        for tx,ty in zip(train_X,train_Y):
            _,train_loss,train_w,train_b=sess.run([TrainOp,TrainLoss,W,b],feed_dict={X:tx,Y_true:ty})
        if(step+1)%20==0:
            print("Train",'%04d' % (step+1),"loss","{:.5f}".format(train_loss))
        if(step+1)%100==0:
            test_loss=sess.run(EvalLoss,feed_dict={X:test_X,Y_true:test_Y})
            print("-Test","loss=","{:.9f}".format(train_loss),"W=",train_w,"b=",train_b)
    print("---End---")
    # Plot the results
    W,b=sess.run([W,b])
    train_loss=sess.run(TrainLoss,feed_dict={X:train_X,Y_true:train_Y})
    test_loss=sess.run(EvalLoss,feed_dict={X:test_X,Y_true:test_Y})
    print("Final W=",W," ,b=",b," ,trainloss=",train_loss," ,testloss=",test_loss)    
    
    plt.plot(train_X,train_Y,'ro',label='Training samples')
    plt.plot(test_X,test_Y,'b*',label='Test samples')
    plt.plot(train_X,W*train_X+b,label='Fitted line')
    plt.legend()
    plt.show()

Figure 1.6.2-2: The fitted line

1.6.3 Multivariate Linear Regression

Adapted from a Deep Learning reference.
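The script below expects data files ex3x.dat and ex3y.dat (this appears to be the Stanford exercise-3 housing data: living area and bedroom count predicting price). If they are not at hand, a synthetic stand-in with known, arbitrary coefficients works for experimenting (illustrative):

import numpy as np

# Hypothetical stand-in: price = 140*area + 8000*bedrooms + 90000, plus noise
rng = np.random.RandomState(0)
x = np.column_stack([rng.uniform(800, 4500, 47), rng.randint(1, 6, 47)])
y = x.dot([140.0, 8000.0]) + 90000.0 + rng.normal(0, 5000, 47)
np.savetxt("ex3x.dat", x)
np.savetxt("ex3y.dat", y)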

import numpy as np
import tensorflow as tf
from sklearn import linear_model
from sklearn import preprocessing
x_data = np.loadtxt("ex3x.dat").astype(np.float32)
y_data = np.loadtxt("ex3y.dat").astype(np.float32)
reg = linear_model.LinearRegression()
reg.fit(x_data, y_data)
print("Coefficients of sklearn: K=%s, b=%f" % (reg.coef_, reg.intercept_))
scaler = preprocessing.StandardScaler().fit(x_data)
print(scaler.mean_, scaler.scale_)
x_data_standard = scaler.transform(x_data)
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1, 1]))
y = tf.matmul(x_data_standard, W) + b
loss = tf.reduce_mean(tf.square(y - y_data.reshape(-1, 1)))/2
optimizer = tf.train.GradientDescentOptimizer(0.3)
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for step in range(100):
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(W).flatten(), sess.run(b).flatten())
print("Coefficients of tensorflow (input should be standardized): K=%s, b=%s" % (sess.run(W).flatten(), sess.run(b).flatten()))
print("Coefficients of tensorflow (raw input): K=%s, b=%s" % (sess.run(W).flatten() / scaler.scale_, sess.run(b).flatten() - np.dot(scaler.mean_ / scaler.scale_, sess.run(W))))

1.7 Softmax Regression

1.7.1 Theory

Softmax regression model: turn unnormalized output predictions into a valid probability distribution, so that the log loss (cross-entropy loss) can be computed.
Optimization model: gradient descent.
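Concretely, for logits $z$ and one-hot labels $y$, the softmax output and the cross-entropy loss used in the code below are:

$$ \hat{y}_i = \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}}, \qquad L = -\sum_i y_i \log \hat{y}_i $$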

1.7.2 Practice

import os
import tensorflow as tf
import math
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

with tf.Graph().as_default():
    # Input nodes
    with tf.name_scope('Input'):
        X=tf.placeholder(tf.float32,shape=[None,784],name='X')
        Y_true=tf.placeholder(tf.float32,shape=[None,10],name='Y_true')
    # Forward inference
    with tf.name_scope('Inference'):
        W=tf.Variable(tf.zeros([784,10]),name="Weight")
        b=tf.Variable(tf.zeros([10]),name="Bias")
        logits=tf.add(tf.matmul(X,W),b)
    # Softmax turns the logits into a probability distribution
    with tf.name_scope('Softmax'):
        Y_pred=tf.nn.softmax(logits=logits)
    # Loss node
    with tf.name_scope('Loss'):
        TrainLoss=tf.reduce_mean(-tf.reduce_sum(Y_true*tf.log(Y_pred),axis=1))
    # Training node
    with tf.name_scope('Train'):
        TrainStep=tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(TrainLoss)
    # Evaluation node
    with tf.name_scope('Eval'):
        correct_prediction=tf.equal(tf.argmax(Y_pred,1),tf.argmax(Y_true,1))
        accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    InitOp=tf.global_variables_initializer()
    writer=tf.summary.FileWriter(logdir='logs/mnist_softmax',graph=tf.get_default_graph())
    writer.close()
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    sess = tf.InteractiveSession()
    sess.run(InitOp)
    for step in range(1000):
        batch_xs,batch_ys = mnist.train.next_batch(100)
        _,train_loss=sess.run([TrainStep,TrainLoss],feed_dict={X: batch_xs, Y_true: batch_ys})
        if step%100==0:
            print("step",step,"loss",train_loss)
    # Final result
    print("softmax_result:")
    print(sess.run(accuracy, feed_dict={X: mnist.test.images, Y_true: mnist.test.labels}))

1.8 k-Nearest Neighbors (kNN)

1.8.1 Theory

Computation steps
1. Compute distances: given a test sample's feature vector, compute its distance to every training sample's feature vector.
2. Find neighbors: take the k training samples with the smallest distances as the test sample's neighbors.
3. Classify: assign the test sample to the majority class among those k neighbors (a NumPy sketch for general k follows below).
Optimization model
There is no explicit model-parameter optimization step.
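The TensorFlow example below uses L1 distance with k = 1. For general k, the three steps above map onto a short NumPy sketch (illustrative; it assumes integer class labels rather than the one-hot labels used below):

import numpy as np

def knn_predict(Xtrain, ytrain, x, k=5):
    dists = np.sum(np.abs(Xtrain - x), axis=1)  # 1. L1 distance to every training sample
    nn = np.argpartition(dists, k)[:k]          # 2. indices of the k nearest neighbors
    return np.bincount(ytrain[nn]).argmax()     # 3. majority vote among their labels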

1.8.2 Practice

knn4mnist.py

import numpy as np
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
# Import the MNIST data set
mnist=input_data.read_data_sets("mnist-data/",one_hot=True)
# 5000 samples for training, 200 for testing
Xtrain,Ytrain=mnist.train.next_batch(5000)
Xtest,Ytest=mnist.test.next_batch(200)
print('Xtrain.shape:',Xtrain.shape,'Xtest.shape:',Xtest.shape)
print('Ytrain.shape:',Ytrain.shape,'Ytest.shape:',Ytest.shape)
# Graph placeholders
xtrain=tf.placeholder("float",[None,784])
xtest=tf.placeholder("float",[784])
# Compute the L1 distance
distance=tf.reduce_sum(tf.abs(tf.add(xtrain,tf.negative(xtest))),axis=1)
# Prediction: index of the minimum distance (classify by the nearest neighbor's label)
pred=tf.argmin(distance,0)
# Evaluation: check whether each test sample is predicted correctly
accuracy=0
init=tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    Ntest=len(Xtest)
    for i in range(Ntest):
        # Nearest neighbor of the current test sample
        nn_index=sess.run(pred,feed_dict={xtrain:Xtrain,xtest:Xtest[i,:]})
        # Compare the neighbor's predicted label with the true label
        pred_class_label=np.argmax(Ytrain[nn_index])
        true_class_label=np.argmax(Ytest[i])
        print("Test",i,"Predicted Class Label:",pred_class_label,"True Class Label:",true_class_label)
        # Accumulate accuracy
        if pred_class_label==true_class_label:
            accuracy+=1
    print("Done")
    accuracy /=Ntest
    print("Accuracy:",accuracy)

2 Official MNIST Examples

2.1 Softmax with two hidden layers

2.1.1 Theory

Figure 2.1.1-1: Softmax regression classifier with two hidden layers (source)

Softmax math in detail (link)
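A condensed, illustrative sketch of the inference graph the referenced mnist.py builds (two ReLU hidden layers feeding a linear softmax output layer; the layer sizes are the tutorial defaults):

import math
import tensorflow as tf

IMAGE_PIXELS, NUM_CLASSES = 784, 10

def inference(images, hidden1_units=128, hidden2_units=32):
    with tf.name_scope('hidden1'):
        w = tf.Variable(tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                        stddev=1.0/math.sqrt(float(IMAGE_PIXELS))), name='weights')
        b = tf.Variable(tf.zeros([hidden1_units]), name='biases')
        hidden1 = tf.nn.relu(tf.matmul(images, w) + b)
    with tf.name_scope('hidden2'):
        w = tf.Variable(tf.truncated_normal([hidden1_units, hidden2_units],
                        stddev=1.0/math.sqrt(float(hidden1_units))), name='weights')
        b = tf.Variable(tf.zeros([hidden2_units]), name='biases')
        hidden2 = tf.nn.relu(tf.matmul(hidden1, w) + b)
    with tf.name_scope('softmax_linear'):
        w = tf.Variable(tf.truncated_normal([hidden2_units, NUM_CLASSES],
                        stddev=1.0/math.sqrt(float(hidden2_units))), name='weights')
        b = tf.Variable(tf.zeros([NUM_CLASSES]), name='biases')
        logits = tf.matmul(hidden2, w) + b
    return logits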

2.1.2 Practice

See the source at tensorflow\examples\tutorials\mnist\mnist.py

2.2 Fully connected network with two hidden layers

2.2.1 Theory

2.2.2 Practice

See the source at tensorflow\examples\tutorials\mnist\fully_connected_feed.py

2.3 Visualizing MNIST with TensorBoard

2.3.1 Theory

2.3.2 Practice

See the source at tensorflow\examples\tutorials\mnist\mnist_with_summaries.py

2.4 XLA example

2.4.1 Theory

2.4.2 Practice

See the source at tensorflow\examples\tutorials\mnist\mnist_softmax_xla.py

3 Advanced

3.1 Denoising Autoencoder

3.1.1 Theory

3.1.2 Practice

# Denoising autoencoder
import numpy as np
import sklearn.preprocessing as prep
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
# Xavier uniform initialization
def xavier_init(fan_in,fan_out,constant=1):
    low=-constant*np.sqrt(6.0/(fan_in+fan_out))
    high=constant*np.sqrt(6.0/(fan_in +fan_out))
    return tf.random_uniform((fan_in,fan_out),minval=low,maxval=high,dtype=tf.float32)
# Autoencoder with additive Gaussian noise
# n_input: number of input variables
# n_hidden: number of hidden-layer nodes
# transfer_function: hidden-layer activation function
# optimizer: the optimizer
# scale: Gaussian noise coefficient
class AdditiveGaussianNoiseAutoencoder(object):
    def __init__(self,n_input,n_hidden,transfer_function=tf.nn.softplus,optimizer=tf.train.AdamOptimizer(),scale=0.1):
        self.n_input=n_input
        self.n_hidden=n_hidden
        self.transfer=transfer_function
        self.scale=tf.placeholder(tf.float32)
        self.training_scale=scale
        network_weights=self._initialize_weights()
        self.weights=network_weights
        self.x=tf.placeholder(tf.float32,[None,self.n_input])
        # Use the self.scale placeholder so the noise level can be fed at run time
        self.hidden=self.transfer(tf.add(tf.matmul(self.x+self.scale*tf.random_normal((n_input,)),self.weights['w1']),self.weights['b1']))
        self.reconstruction=tf.add(tf.matmul(self.hidden,self.weights['w2']),self.weights['b2'])
        self.cost=0.5*tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction,self.x),2))
        self.optimizer=optimizer.minimize(self.cost)
        init=tf.global_variables_initializer()
        self.sess=tf.Session()
        self.sess.run(init)
        print('begin to run session..')
    def _initialize_weights(self):
        all_weights=dict()
        all_weights['w1']=tf.Variable(xavier_init(self.n_input,self.n_hidden))
        all_weights['b1']=tf.Variable(tf.zeros([self.n_hidden],dtype=tf.float32))
        all_weights['w2']=tf.Variable(tf.zeros([self.n_hidden,self.n_input],dtype=tf.float32))
        all_weights['b2']=tf.Variable(tf.zeros([self.n_input],dtype=tf.float32))
        return all_weights

AGN_AC=AdditiveGaussianNoiseAutoencoder(n_input=784,n_hidden=200,transfer_function=tf.nn.softplus,optimizer=tf.train.AdamOptimizer(learning_rate=0.01),scale=0.01)
print('Writing graph for TensorBoard...')
writer=tf.summary.FileWriter(logdir='logs',graph=AGN_AC.sess.graph)
writer.close()        
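The class above only builds the graph and has no training method. Since it exposes its optimizer, cost, and placeholders, a training loop can be driven from outside, e.g. (a sketch, reusing the input_data loader already imported above and the MNIST directory used in later sections):

mnist = input_data.read_data_sets('mnistdata/', one_hot=True)
for step in range(1000):
    batch_x, _ = mnist.train.next_batch(128)
    _, cost = AGN_AC.sess.run(
        [AGN_AC.optimizer, AGN_AC.cost],
        feed_dict={AGN_AC.x: batch_x, AGN_AC.scale: AGN_AC.training_scale})
    if step % 100 == 0:
        print('step', step, 'reconstruction cost', cost)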

3.2 K-Means Clustering

Adapted from the reference kmeans.py.

import numpy as np
import tensorflow as tf
from tensorflow.contrib.factorization import KMeans

import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=True)
full_data_x = mnist.train.images

# Parameters
num_steps = 50 # Total steps to train
batch_size = 1024 # The number of samples per batch
k = 25 # The number of clusters
num_classes = 10 # The 10 digits
num_features = 784 # Each image is 28x28 pixels

# Input images
X = tf.placeholder(tf.float32, shape=[None, num_features])
# Labels (for assigning a label to a centroid and testing)
Y = tf.placeholder(tf.float32, shape=[None, num_classes])

# K-Means Parameters
kmeans = KMeans(inputs=X, num_clusters=k, distance_metric='cosine',
                use_mini_batch=True)

# Build KMeans graph
(all_scores, cluster_idx, scores, cluster_centers_initialized, init_op,
train_op) = kmeans.training_graph()
cluster_idx = cluster_idx[0] # fix for cluster_idx being a tuple
avg_distance = tf.reduce_mean(scores)

# Initialize the variables (i.e. assign their default value)
init_vars = tf.global_variables_initializer()

# Start TensorFlow session
sess = tf.Session()

# Run the initializer
sess.run(init_vars, feed_dict={X: full_data_x})
sess.run(init_op, feed_dict={X: full_data_x})

# Training
for i in range(1, num_steps + 1):
    _, d, idx = sess.run([train_op, avg_distance, cluster_idx],
                         feed_dict={X: full_data_x})
    if i % 10 == 0 or i == 1:
        print("Step %i, Avg Distance: %f" % (i, d))

# Assign a label to each centroid
# Count total number of labels per centroid, using the label of each training
# sample to their closest centroid (given by 'idx')
counts = np.zeros(shape=(k, num_classes))
for i in range(len(idx)):
    counts[idx[i]] += mnist.train.labels[i]
# Assign the most frequent label to the centroid
labels_map = [np.argmax(c) for c in counts]
labels_map = tf.convert_to_tensor(labels_map)

# Evaluation ops
# Lookup: centroid_id -> label
cluster_label = tf.nn.embedding_lookup(labels_map, cluster_idx)
# Compute accuracy
correct_prediction = tf.equal(cluster_label, tf.cast(tf.argmax(Y, 1), tf.int32))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Test Model
test_x, test_y = mnist.test.images, mnist.test.labels
print("Test Accuracy:", sess.run(accuracy_op, feed_dict={X: test_x, Y: test_y}))

3.3 Random Forest

Adapted from the reference random_forest.py.

import tensorflow as tf
from tensorflow.contrib.tensor_forest.python import tensor_forest
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=False)

# Parameters
num_steps = 500 # Total steps to train
batch_size = 1024 # The number of samples per batch
num_classes = 10 # The 10 digits
num_features = 784 # Each image is 28x28 pixels
num_trees = 10
max_nodes = 1000

# Input and Target data
X = tf.placeholder(tf.float32, shape=[None, num_features])
# For random forest, labels must be integers (the class id)
Y = tf.placeholder(tf.int32, shape=[None])

# Random Forest Parameters
hparams = tensor_forest.ForestHParams(num_classes=num_classes,
                                      num_features=num_features,
                                      num_trees=num_trees,
                                      max_nodes=max_nodes).fill()

# Build the Random Forest
forest_graph = tensor_forest.RandomForestGraphs(hparams)
# Get training graph and loss
train_op = forest_graph.training_graph(X, Y)
loss_op = forest_graph.training_loss(X, Y)

# Measure the accuracy
infer_op = forest_graph.inference_graph(X)
correct_prediction = tf.equal(tf.argmax(infer_op, 1), tf.cast(Y, tf.int64))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize the variables (i.e. assign their default value)
init_vars = tf.global_variables_initializer()

# Start TensorFlow session
sess = tf.Session()

# Run the initializer
sess.run(init_vars)

# Training
for i in range(1, num_steps + 1):
    # Prepare Data
    # Get the next batch of MNIST data (only images are needed, not labels)
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    _, l = sess.run([train_op, loss_op], feed_dict={X: batch_x, Y: batch_y})
    if i % 50 == 0 or i == 1:
        acc = sess.run(accuracy_op, feed_dict={X: batch_x, Y: batch_y})
        print('Step %i, Loss: %f, Acc: %f' % (i, l, acc))

# Test Model
test_x, test_y = mnist.test.images, mnist.test.labels
print("Test Accuracy:", sess.run(accuracy_op, feed_dict={X: test_x, Y: test_y}))

3.4 Simple Neural Network (NN)

Adapted from the reference neural_network.py.

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=False)

import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Parameters
learning_rate = 0.1
num_steps = 1000
batch_size = 128
display_step = 100

# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)


# Define the neural network
def neural_net(x_dict):
    # TF Estimator input is a dict, in case of multiple inputs
    x = x_dict['images']
    # Hidden fully connected layer with 256 neurons
    layer_1 = tf.layers.dense(x, n_hidden_1)
    # Hidden fully connected layer with 256 neurons
    layer_2 = tf.layers.dense(layer_1, n_hidden_2)
    # Output fully connected layer with a neuron for each class
    out_layer = tf.layers.dense(layer_2, num_classes)
    return out_layer


# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
    # Build the neural network
    logits = neural_net(features)

    # Predictions
    pred_classes = tf.argmax(logits, axis=1)
    pred_probas = tf.nn.softmax(logits)

    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op,
                                  global_step=tf.train.get_global_step())

    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    # TF Estimators requires to return a EstimatorSpec, that specify
    # the different ops for training, evaluating, ...
    estim_specs = tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})

    return estim_specs

# Build the Estimator
model = tf.estimator.Estimator(model_fn)

# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)

# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
e = model.evaluate(input_fn)

print("Testing Accuracy:", e['accuracy'])

Adapted from the reference neural_network_raw.py.

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=True)

import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Parameters
learning_rate = 0.1
num_steps = 500
batch_size = 128
display_step = 100

# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder("float", [None, num_input])
Y = tf.placeholder("float", [None, num_classes])

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([num_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, num_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([num_classes]))
}


# Create model
def neural_net(x):
    # Hidden fully connected layer with 256 neurons
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    # Hidden fully connected layer with 256 neurons
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    # Output fully connected layer with a neuron for each class
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Construct model
logits = neural_net(X)
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    for step in range(1, num_steps+1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for MNIST test images
    print("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={X: mnist.test.images,
                                      Y: mnist.test.labels}))

3.5 Convolutional Neural Network (CNN)

Adapted from the reference convolutional_network.py.

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=False)

import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Training Parameters
learning_rate = 0.001
num_steps = 2000
batch_size = 128

# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units


# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
    # Define a scope for reusing the variables
    with tf.variable_scope('ConvNet', reuse=reuse):
        # TF Estimator input is a dict, in case of multiple inputs
        x = x_dict['images']

        # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
        # Reshape to match picture format [Height x Width x Channel]
        # Tensor input become 4-D: [Batch Size, Height, Width, Channel]
        x = tf.reshape(x, shape=[-1, 28, 28, 1])

        # Convolution Layer with 32 filters and a kernel size of 5
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv1 = tf.layers.max_pooling2d(conv1, 2, 2)

        # Convolution Layer with 64 filters and a kernel size of 3
        conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv2 = tf.layers.max_pooling2d(conv2, 2, 2)

        # Flatten the data to a 1-D vector for the fully connected layer
        fc1 = tf.contrib.layers.flatten(conv2)

        # Fully connected layer (in tf contrib folder for now)
        fc1 = tf.layers.dense(fc1, 1024)
        # Apply Dropout (if is_training is False, dropout is not applied)
        fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)

        # Output layer, class prediction
        out = tf.layers.dense(fc1, n_classes)

    return out


# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
    # Build the neural network
    # Because Dropout have different behavior at training and prediction time, we
    # need to create 2 distinct computation graphs that still share the same weights.
    logits_train = conv_net(features, num_classes, dropout, reuse=False,
                            is_training=True)
    logits_test = conv_net(features, num_classes, dropout, reuse=True,
                           is_training=False)

    # Predictions
    pred_classes = tf.argmax(logits_test, axis=1)
    pred_probas = tf.nn.softmax(logits_test)

    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op,
                                  global_step=tf.train.get_global_step())

    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    # TF Estimators requires to return a EstimatorSpec, that specify
    # the different ops for training, evaluating, ...
    estim_specs = tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})

    return estim_specs

# Build the Estimator
model = tf.estimator.Estimator(model_fn)

# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)

# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
e = model.evaluate(input_fn)

print("Testing Accuracy:", e['accuracy'])

Adapted from the reference convolutional_network_raw.py.

import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=True)

# Training Parameters
learning_rate = 0.001
num_steps = 200
batch_size = 128
display_step = 10

# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units

# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])
keep_prob = tf.placeholder(tf.float32) # dropout (keep probability)


# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)


def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')


# Create model
def conv_net(x, weights, biases, dropout):
    # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
    # Reshape to match picture format [Height x Width x Channel]
    # Tensor input become 4-D: [Batch Size, Height, Width, Channel]
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution Layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling)
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling)
    conv2 = maxpool2d(conv2, k=2)

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    # Output, class prediction
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out

# Store layers weight & bias
weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # fully connected, 7*7*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
    # 1024 inputs, 10 outputs (class prediction)
    'out': tf.Variable(tf.random_normal([1024, num_classes]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([num_classes]))
}

# Construct model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)


# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    for step in range(1, num_steps+1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y,
                                                                 keep_prob: 1.0})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for 256 MNIST test images
    print("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={X: mnist.test.images[:256],
                                      Y: mnist.test.labels[:256],
                                      keep_prob: 1.0}))

3.6 Recurrent Neural Network (RNN)

Adapted from the reference recurrent_network.py.

import tensorflow as tf
from tensorflow.contrib import rnn
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=True)


# Training Parameters
learning_rate = 0.001
training_steps = 10000
batch_size = 128
display_step = 200

# Network Parameters
num_input = 28 # MNIST data input (img shape: 28*28)
timesteps = 28 # timesteps
num_hidden = 128 # hidden layer num of features
num_classes = 10 # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])

# Define weights
weights = {
    'out': tf.Variable(tf.random_normal([num_hidden, num_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([num_classes]))
}


def RNN(x, weights, biases):

    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, timesteps, n_input)
    # Required shape: 'timesteps' tensors list of shape (batch_size, n_input)

    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input)
    x = tf.unstack(x, timesteps, 1)

    # Define a lstm cell with tensorflow
    lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # Get lstm cell output
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

logits = RNN(X, weights, biases)
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    for step in range(1, training_steps+1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, timesteps, num_input))
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for 128 mnist test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))

3.7 Bidirectional LSTM (Bi-directional RNN)

Adapted from the reference bidirectional_rnn.py.

import tensorflow as tf
from tensorflow.contrib import rnn
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnistdata/", one_hot=True)

'''
To classify images using a bidirectional recurrent neural network, we consider
every image row as a sequence of pixels. Because MNIST image shape is 28*28px,
we will then handle 28 sequences of 28 steps for every sample.
'''

# Training Parameters
learning_rate = 0.001
training_steps = 10000
batch_size = 128
display_step = 200

# Network Parameters
num_input = 28 # MNIST data input (img shape: 28*28)
timesteps = 28 # timesteps
num_hidden = 128 # hidden layer num of features
num_classes = 10 # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])

# Define weights
weights = {
    # Hidden layer weights => 2*n_hidden because of forward + backward cells
    'out': tf.Variable(tf.random_normal([2*num_hidden, num_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([num_classes]))
}


def BiRNN(x, weights, biases):

    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, timesteps, n_input)
    # Required shape: 'timesteps' tensors list of shape (batch_size, num_input)

    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
    x = tf.unstack(x, timesteps, 1)

    # Define lstm cells with tensorflow
    # Forward direction cell
    lstm_fw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # Get lstm cell output
    try:
        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                              dtype=tf.float32)
    except Exception: # Old TensorFlow version only returns outputs not states
        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                        dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

logits = BiRNN(X, weights, biases)
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    for step in range(1, training_steps+1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, timesteps, num_input))
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for 128 mnist test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))

4 Applications

4.1 Games

flappy bird

4.2 Text Analysis

chatbot
QA_1
QA_2
CNN-based Chinese text classification (applicable to spam filtering, sentiment analysis, and similar scenarios)

4.3 Image Recognition

Facial expression recognition
Gesture recognition
Handwritten Chinese character recognition
Automatic completion of face photos
Automatic colorization of black-and-white photos
Automatic caption generation for photos
Age and gender detection with deep convolutional neural networks

Other

Simple applications

1. Voice Cluster Visualization
2. Teaching Computers to Sketch
3. Using Deep Learning to Generate Images from Descriptions
4. Using Deep Learning to Generate Descriptions from Images
5. How AI Can Play Games: Google's Running T-Rex
6. Image Positioning Based on Deep Learning
7. Using Deep Learning to Transform Image Style
8. Generating Handwritten Figures with Artificial Intelligence

Economics and finance

1. Economic Analysis of Online Shopping
2. An Economic Analysis of the Value of Health Insurance Plans
3. Predicting Housing Prices in China through Machine Learning and Macroeconomic Data
4. Stock prediction

Life and society

1. Shot Selection for Kobe Bryant
2. Demand for Shared Bikes
3. San Francisco Crime Classification
4. Welfare Analysis of Animals in Shelters
5. Predicting Types of Forest Cover

Biomedicine

1. Passive Smoking and Preterm Birth in Urban China
2. Chromosome Breakage Signal within a Protist Genome
