[Machine Learning] Hand-Writing a Neural Network for the MNIST Dataset

The MNIST dataset, published by the deep-learning pioneer Yann LeCun, is a collection of handwritten digits: the training set contains 60,000 labeled samples and the test set contains 10,000. It is a good dataset for beginners to practice on.
The previous post covered how to parse the dataset and plot its digits with pyplot. In this post, following the treatment of neural networks in Andrew Ng's Machine Learning course, we will hand-write a neural network in Python and evaluate it on the test set.

Libraries used
  • numpy: Python's scientific computing library
  • sklearn.preprocessing.OneHotEncoder: data preprocessing; converts each scalar training label into a vector. For example, the label 2 (the digit 2) becomes [0, 0, 1, 0, 0, 0, 0, 0, 0, 0];
  • sklearn.preprocessing.MinMaxScaler: rescales the features of the training samples into a common range; both preprocessors are demonstrated in the short sketch after this list.
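
A minimal standalone sketch of what these two preprocessors do (made-up toy data here, not the MNIST arrays used later):

import numpy as np
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler

labels = np.arange(10).reshape(-1, 1)              # digits 0..9 as a column vector
onehot = OneHotEncoder(sparse=False).fit_transform(labels)
print(onehot[2])                                   # label 2 -> [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]

pixels = np.array([[0.0, 128.0], [255.0, 64.0]])   # two fake samples, two features
print(MinMaxScaler().fit_transform(pixels))        # each column rescaled into [0, 1]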
Method
  1. Activation function
    We use the standard sigmoid function as the activation function.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))
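
A quick sanity check (my own snippet): sigmoid maps 0 to 0.5 and saturates toward 0 and 1 at the extremes.

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # ≈ [4.54e-05, 0.5, 0.99995]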
  2. Forward propagation
def forward_propagate(X, theta1, theta2):
    m = X.shape[0]

    a1 = np.insert(X, 0, values=np.ones(m), axis=1)            # prepend the bias column
    z2 = a1 * theta1.T                                         # np.matrix: * is matrix product
    a2 = np.insert(sigmoid(z2), 0, values=np.ones(m), axis=1)  # bias for the hidden layer
    z3 = a2 * theta2.T
    h = sigmoid(z3)

    return a1, z2, a2, z3, h
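
Before moving on, it helps to confirm the shapes flowing through the network. A throwaway check (my own, with random weights and the layer sizes chosen later in this post):

X_demo = np.matrix(np.random.random((5, 784)))  # 5 fake 28x28 images, flattened
t1 = np.matrix(np.random.random((25, 785)))     # hidden weights, +1 for the bias column
t2 = np.matrix(np.random.random((10, 26)))      # output weights, +1 for the bias column
a1, z2, a2, z3, h = forward_propagate(X_demo, t1, t2)
print(a1.shape, z2.shape, a2.shape, h.shape)    # (5, 785) (5, 25) (5, 26) (5, 10)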
  3. Cost function
def cost(theta1, theta2, input_size, hidden_size, num_labels, X, y, learning_rate):
    # unregularized cross-entropy cost; learning_rate (the regularization
    # strength lambda in Ng's course naming) is unused in this plain version
    m = X.shape[0]
    X = np.matrix(X)
    y = np.matrix(y)

    a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)

    J = 0
    for i in range(m):
        first_term = np.multiply(-y[i,:], np.log(h[i,:]))
        second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))
        J += np.sum(first_term - second_term)

    J = J / m

    return J
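
The per-sample loop is faithful to the course notation but slow in Python; an equivalent vectorized version (my own rewrite, same value up to floating-point rounding) computes the cross-entropy in one pass:

def cost_vectorized(theta1, theta2, X, y):
    m = X.shape[0]
    _, _, _, _, h = forward_propagate(np.matrix(X), theta1, theta2)
    y = np.matrix(y)
    # element-wise cross-entropy summed over all samples and labels
    J = np.sum(np.multiply(-y, np.log(h)) - np.multiply(1 - y, np.log(1 - h)))
    return J / m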
  4. Sigmoid gradient
def sigmoid_gradient(z):
    # derivative of the sigmoid: sigmoid(z) * (1 - sigmoid(z))
    return np.multiply(sigmoid(z), (1 - sigmoid(z)))
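
The identity sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)) follows from the chain rule; a central-difference check (my own snippet) confirms the implementation:

z = np.array([-2.0, 0.0, 3.0])
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.max(np.abs(numeric - sigmoid_gradient(z))))  # should be tiny, ~1e-10 or less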
  5. Backpropagation
def backpropReg(params, input_size, hidden_size, num_labels, X, y, learning_rate):
    m = X.shape[0]
    X = np.matrix(X)
    y = np.matrix(y)

    # reshape the parameter array into parameter matrices for each layer
    theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
    theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))

    # run the feed-forward pass
    a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)

    # initializations
    J = 0
    delta1 = np.zeros(theta1.shape)  # (25, 785)
    delta2 = np.zeros(theta2.shape)  # (10, 26)

    # compute the cost
    for i in range(m):
        first_term = np.multiply(-y[i,:], np.log(h[i,:]))
        second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))
        J += np.sum(first_term - second_term)

    J = J / m

    # add the cost regularization term
    J += (float(learning_rate) / (2 * m)) * (np.sum(np.power(theta1[:,1:], 2)) + np.sum(np.power(theta2[:,1:], 2)))

    # perform backpropagation
    for t in range(m):
        a1t = a1[t,:]  # (1, 785)
        z2t = z2[t,:]  # (1, 25)
        a2t = a2[t,:]  # (1, 26)
        ht = h[t,:]  # (1, 10)
        yt = y[t,:]  # (1, 10)

        d3t = ht - yt  # (1, 10)

        z2t = np.insert(z2t, 0, values=np.ones(1))  # (1, 26)
        d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t))  # (1, 26)

        delta1 = delta1 + (d2t[:,1:]).T * a1t
        delta2 = delta2 + d3t.T * a2t

    delta1 = delta1 / m
    delta2 = delta2 / m

    # gradient of the regularization term (the bias column is not regularized)
    delta1[:,1:] = delta1[:,1:] + (theta1[:,1:] * learning_rate) / m
    delta2[:,1:] = delta2[:,1:] + (theta2[:,1:] * learning_rate) / m

    grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))

    return J, grad
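
Before handing backpropReg to an optimizer, it is worth spot-checking the analytic gradient against a numerical one. This is far too slow at MNIST scale, so the sketch below (my own, with hypothetical toy sizes) uses a tiny random network:

def gradient_check(params, *args, eps=1e-4):
    _, grad = backpropReg(params, *args)
    for i in np.random.choice(len(params), 5, replace=False):  # spot-check 5 weights
        plus, minus = params.copy(), params.copy()
        plus[i] += eps
        minus[i] -= eps
        numeric = (backpropReg(plus, *args)[0] - backpropReg(minus, *args)[0]) / (2 * eps)
        print(i, grad[i], numeric)  # the two values should agree to ~4 decimals

# toy network: 4 inputs, 3 hidden units, 2 labels, 6 random samples
Xs = np.random.random((6, 4))
ys = np.eye(2)[np.random.randint(0, 2, 6)]
ps = (np.random.random(3 * 5 + 2 * 4) - 0.5) * 0.24
gradient_check(ps, 4, 3, 2, Xs, ys, 1.0)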
Main flow
import numpy as np
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
import load_data  # the MNIST parsing helper from the previous post

X, y = load_data.load_mnist('/Users/wowo/0-Tensorflow')
X = np.matrix(X)
y = np.matrix(y).T
X = X[0:1000, :]  # keep only the first 1,000 samples to shorten training
y = y[0:1000, :]

# network sizes and regularization strength
input_size = 784
hidden_size = 25
num_labels = 10
learning_rate = 1  # passed to backpropReg as the regularization strength

encoder = OneHotEncoder(sparse=False)  # on scikit-learn >= 1.2 this keyword is sparse_output
y_onehot = encoder.fit_transform(y)

scaler = MinMaxScaler()
X = scaler.fit_transform(X)

# random initialization in the range [-0.12, 0.12]
params = (np.random.random(size=hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)) - 0.5) * 0.24
theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))

from scipy.optimize import minimize

# minimize the objective function
fmin = minimize(fun=backpropReg, x0=(params), args=(input_size, hidden_size, num_labels, X, y_onehot, learning_rate),
                method='TNC', jac=True, options={'maxiter': 100})
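# backpropReg returns the pair (J, grad), so jac=True lets the optimizer take
# the analytic gradient from the same call instead of estimating it
# numerically; 'TNC' (truncated Newton) can exploit it. The returned
# OptimizeResult exposes the usual SciPy fields:
print(fmin.success, fmin.fun)  # convergence flag and final cost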


# evaluate on the 10,000-sample test set ('t10k')
X, y = load_data.load_mnist('/Users/wowo/0-Tensorflow', 't10k')
X = np.matrix(X)
y = np.matrix(y).T
X = scaler.transform(X)  # reuse the scaling fitted on the training set
thetafinal1 = np.matrix(np.reshape(fmin.x[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
thetafinal2 = np.matrix(np.reshape(fmin.x[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
a1, z2, a2, z3, h = forward_propagate(X, thetafinal1, thetafinal2)
y_pred = np.array(np.argmax(h, axis=1))
from sklearn.metrics import classification_report
print(classification_report(y, y_pred))
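
classification_report prints per-class precision and recall; for a single accuracy number one line is enough (my addition; the argmax index equals the digit label because OneHotEncoder sorts the categories 0-9):

accuracy = np.mean(y_pred.flatten() == np.array(y).flatten())
print('test accuracy: {:.2%}'.format(accuracy))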
Results

Since my machine is underpowered, I used only the first 1,000 training samples, trained for 100 iterations, and then evaluated the network on the test set.
