Handwritten Digit Recognition with a Convolutional Neural Network Using Library Calls

import tensorflow as tf
import random
from tensorflow.examples.tutorials.mnist import input_data

Set the random seed for reproducibility

tf.set_random_seed(777)

Load the data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

training_epoch = 15
batch_size = 100

dropout = tf.placeholder(tf.float32)

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

Reshape each flat 784-vector into a 28x28x1 image

x_img = tf.reshape(x, [-1, 28, 28, 1])

Call tf.layers.conv2d on the image tensor x_img with 32 convolution kernels of size (3,3) and strides=[2,2]. padding='same' pads the input so the output spatial size is the input size divided by the stride (rounded up); with stride 1 it would match the input exactly. activation=tf.nn.relu applies ReLU (any activation can be used), and name='con_1' is just an arbitrary layer name.

con_1 = tf.layers.conv2d(x_img, 32, (3, 3), strides=[2, 2], padding='same', activation=tf.nn.relu, name='con_1')
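As a sanity check on the shapes: with padding='same', the output spatial size depends only on the input size and the stride, not on the kernel size. A minimal sketch in plain Python (not part of the TensorFlow graph):

```python
import math

def same_padding_out(size, stride):
    # With padding='same', TensorFlow pads so that
    # output = ceil(input / stride), regardless of kernel size.
    return math.ceil(size / stride)

# con_1: 28x28 input, stride 2 -> 14x14
print(same_padding_out(28, 2))  # 14
```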

Pooling layer: apply [2,2] max pooling to con_1 with strides=[2,2], and padding='same' as above.

pool_1 = tf.layers.max_pooling2d(con_1, [2, 2], strides=[2, 2], padding='same')
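To illustrate what max pooling does, here is a hand-rolled 2x2, stride-2 pooling over a plain Python grid. This is only a sketch of the operation on a single channel; tf.layers.max_pooling2d handles batching, channels, and padding itself:

```python
def max_pool_2x2(grid):
    # Take the maximum of each non-overlapping 2x2 block
    # of an even-sided square grid.
    n = len(grid)
    return [[max(grid[i][j], grid[i][j + 1],
                 grid[i + 1][j], grid[i + 1][j + 1])
             for j in range(0, n, 2)]
            for i in range(0, n, 2)]

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool_2x2(grid))  # [[6, 8], [14, 16]]
```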

pool_1 = tf.nn.dropout(pool_1,keep_prob=dropout)
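tf.nn.dropout zeroes each unit with probability 1 - keep_prob and scales the survivors by 1/keep_prob, so the expected activation is unchanged (so-called inverted dropout). A plain-Python sketch of that behaviour:

```python
import random

def inverted_dropout(values, keep_prob, rng):
    # Keep each value with probability keep_prob; scale kept values
    # by 1/keep_prob so the expected sum matches the input.
    return [v / keep_prob if rng.random() < keep_prob else 0.0
            for v in values]

rng = random.Random(0)
out = inverted_dropout([1.0] * 10, 0.7, rng)
# Every output is either 0.0 or 1/0.7
```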

con_2 = tf.layers.conv2d(pool_1, 64, (3, 3), strides=[2, 2], padding='same',
                         activation=tf.nn.relu)
pool_2 = tf.layers.max_pooling2d(con_2, [2, 2], strides=[2, 2], padding='same')
pool_2 = tf.nn.dropout(pool_2, keep_prob=dropout)

con_3 = tf.layers.conv2d(pool_2, 128, (3, 3), strides=[2, 2], padding='same',
                         activation=tf.nn.relu)
pool_3 = tf.layers.max_pooling2d(con_3, [2, 2], strides=[2, 2], padding='same')
pool_3 = tf.nn.dropout(pool_3, keep_prob=dropout)

con_4 = tf.layers.conv2d(pool_3, 256, (3, 3), strides=[2, 2], padding='same',
                         activation=tf.nn.relu)
pool_4 = tf.layers.max_pooling2d(con_4, [2, 2], strides=[2, 2], padding='same')
pool_4 = tf.nn.dropout(pool_4, keep_prob=dropout)  # keep_prob must be a float or tensor, not True

con_5 = tf.layers.conv2d(pool_4, 32, (3, 3), strides=[2, 2], padding='same',
                         activation=tf.nn.relu)
pool_5 = tf.layers.max_pooling2d(con_5, [2, 2], strides=[2, 2], padding='same')
pool_5 = tf.nn.dropout(pool_5, keep_prob=dropout)  # keep_prob must be a float or tensor, not True

flatten = tf.layers.flatten(pool_5)  # flatten the last pooling output; flattening pool_3 would leave con_4 and con_5 unused
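Tracing the spatial size through the stack explains the flatten step: each block applies a stride-2 conv followed by a stride-2 pool, both with padding='same', so the size is ceil-divided by 2 twice per block. A 28x28 image is therefore already 1x1 after the third block, and blocks 4 and 5 cannot shrink it further. A quick trace in plain Python:

```python
import math

def trace_spatial_sizes(size=28, n_blocks=5):
    # Each block: stride-2 'same' conv, then stride-2 'same' pool.
    sizes = []
    for _ in range(n_blocks):
        size = math.ceil(size / 2)  # conv, strides=[2, 2]
        size = math.ceil(size / 2)  # pool, strides=[2, 2]
        sizes.append(size)
    return sizes

print(trace_spatial_sizes())  # [7, 2, 1, 1, 1]
```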

The rest is almost all library calls as well; consult the documentation of each library function as needed.

a1 = tf.layers.dense(flatten,625,activation=tf.nn.relu)
a1 = tf.nn.dropout(a1,keep_prob=dropout)
a2 = tf.layers.dense(a1,10,activation=tf.nn.softmax)
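For reference, softmax turns the 10 output scores into a probability distribution. A minimal stand-alone version (subtracting the maximum first for numerical stability, as production implementations do):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max so exp() cannot overflow
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([1.0, 1.0]))  # [0.5, 0.5]
```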

cost = -tf.reduce_mean(tf.reduce_sum(y*tf.log(a2), axis=1))
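The cost line computes mean categorical cross-entropy by hand. The same arithmetic in plain Python, as a sketch; note that tf.log(a2) blows up if a predicted probability reaches 0, which is why tf.nn.softmax_cross_entropy_with_logits is usually preferred in practice:

```python
import math

def cross_entropy(probs, one_hot):
    # Mean over the batch of -sum_k y_k * log(p_k), matching
    # -tf.reduce_mean(tf.reduce_sum(y * tf.log(a2), axis=1)).
    # The `if y` guard skips terms where the label is 0.
    per_example = [-sum(y * math.log(p) for y, p in zip(ys, ps) if y)
                   for ys, ps in zip(one_hot, probs)]
    return sum(per_example) / len(per_example)

probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
labels = [[1, 0, 0], [0, 1, 0]]
print(round(cross_entropy(probs, labels), 4))  # 0.2899
```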

accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(a2, 1), tf.argmax(y, 1)), tf.float32))

optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(training_epoch):
    avg_cost = 0
    m = int(mnist.train.num_examples / batch_size)
    for j in range(m):
        xdata, ydata = mnist.train.next_batch(batch_size)
        cost_var, _ = sess.run([cost, optimizer],
                               feed_dict={x: xdata, y: ydata, dropout: 0.7})
        avg_cost += cost_var / m
    print(i + 1, '----', avg_cost)

print(sess.run(accuracy, feed_dict={x:mnist.test.images, y:mnist.test.labels,dropout:1}))
