Notes on 《Web安全之机器学习入门》: Chapter 15, Section 15.6 — TensorFlow DNN for CAPTCHA Recognition (Part 3)

This section uses a DNN to recognize CAPTCHA images, with MNIST as the dataset.

1. DNN architecture

(Schematic in the original post: 784 input pixels → three fully connected ReLU hidden layers of 300, 200, and 100 units → 10-class output, matching the code below.)

2. Defining the DNN

Three hidden layers are configured here, each followed by a ReLU activation:

n_hidden_1 = 300  # units in hidden layer 1
n_hidden_2 = 200  # units in hidden layer 2
n_hidden_3 = 100  # units in hidden layer 3
n_input = 784     # a flattened 28x28 MNIST image
n_classes = 10    # digits 0-9

x = tf.placeholder("float", [None, n_input])    # flattened input images
y = tf.placeholder("float", [None, n_classes])  # one-hot digit labels


def multilayer_perceptron(x, weights, biases):
    # Hidden layer 1: fully connected + ReLU
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)

    # Hidden layer 2: fully connected + ReLU
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)

    # Hidden layer 3: fully connected + ReLU
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.relu(layer_3)

    # Output layer: raw logits (softmax is applied inside the loss)
    out_layer = tf.matmul(layer_3, weights['out']) + biases['out']
    return out_layer

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = multilayer_perceptron(x, weights, biases)
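A side note of my own (not from the book): tf.random_normal draws weights with a default standard deviation of 1.0, so the initial logits are large, which is one reason the first-epoch cost in the results below is enormous. A common tweak, sketched here as an assumption rather than the book's method, is to pass a smaller stddev:

weights = {
    # smaller initial weights keep the initial logits (and the cost) moderate
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0.1)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.1)),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], stddev=0.1)),
    'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes], stddev=0.1))
}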

The loss function is the softmax cross-entropy. The book mentions Adagrad's adaptive learning-rate adjustment, but the code here actually uses plain gradient descent:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
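If you want the adaptive behavior the book describes, a minimal sketch (assuming the same cost and learning_rate as above) swaps in the Adagrad optimizer:

# Adagrad adapts the effective learning rate per parameter
train_step = tf.train.AdagradOptimizer(learning_rate).minimize(cost)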

3. Complete code

After modifying the book's companion source so that it actually runs, the Python 3 version is as follows:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # silence TensorFlow's info-level logs

import tensorflow.compat.v1 as tf
from tensorflow.examples.tutorials.mnist import input_data

tf.disable_v2_behavior()  # run this TF1-style graph code where eager mode is the default

mnist = input_data.read_data_sets("../data/mnist", one_hot=True)  # labels as one-hot vectors

learning_rate = 0.001
training_epochs = 10
batch_size = 100
display_step = 1

n_hidden_1 = 300
n_hidden_2 = 200
n_hidden_3 = 100
n_input = 784
n_classes = 10

x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])


def multilayer_perceptron(x, weights, biases):
    # Three fully connected ReLU hidden layers
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)

    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)

    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.relu(layer_3)

    # Output layer: raw logits (softmax is applied inside the loss)
    out_layer = tf.matmul(layer_3, weights['out']) + biases['out']
    return out_layer

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = multilayer_perceptron(x, weights, biases)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # One pass over the training set, mini-batch by mini-batch
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            _, c = sess.run([train_step, cost], feed_dict={x: batch_x, y: batch_y})
            avg_cost += c / total_batch
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    # Evaluate on the test set: fraction of samples whose argmax matches the label
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

4. Results

Epoch: 0001 cost= 160.255845963
Epoch: 0002 cost= 3.702655527
Epoch: 0003 cost= 2.998886849
Epoch: 0004 cost= 2.768014714
Epoch: 0005 cost= 2.632289400
Epoch: 0006 cost= 2.541885174
Epoch: 0007 cost= 2.471330396
Epoch: 0008 cost= 2.419655328
Epoch: 0009 cost= 2.382263158
Epoch: 0010 cost= 2.350417513
Accuracy: 0.1641

5. Tuning the program

(1) Following the book, I raised the learning rate (0.001 in the code above) to 0.3; on its own this brought no obvious improvement.

Raising the number of epochs from 10 to 100 as well gives the following result:

test_accuracy= 0.8123

(2) Swap out the optimizer used for train_step, changing the learning rate at the same time:

train_step = tf.train.AdamOptimizer(1e-4).minimize(cost)

With 10 epochs the result is:

test_accuracy= 0.8571

With 100 epochs:

test_accuracy= 0.9289
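Putting these tweaks together, a sketch of the adjusted settings (values taken from the experiments above; the rest of the full code is unchanged):

training_epochs = 100  # up from 10

# Adam with a 1e-4 learning rate replaces plain gradient descent
train_step = tf.train.AdamOptimizer(1e-4).minimize(cost)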

The reason for these small tweaks is that running the author's companion code as-is produced very low accuracy. This is really just a matter of tuning the network's learning rate, optimizer, and loss; readers can experiment themselves. Note that the author picked the simplest introductory dataset, the MNIST digit images, and labeled the task "CAPTCHA recognition" only as an application example.
