Stanford Coursera Assignment: Training a Neural Network for Digit Recognition — Feedforward Propagation and Prediction

The feedforward pass of the neural network is implemented following the network diagram from the assignment (400 input units, 25 hidden units, 10 output units).
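Before looking at the full cost function, the forward pass by itself can be sketched as two sigmoid layers, each preceded by adding a bias column (the 400/25/10 architecture is taken from the assignment; zero weights here are just a shape check):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(theta1, theta2, x):
    # x: m x 400, theta1: 25 x 401, theta2: 10 x 26
    m = x.shape[0]
    a1 = np.column_stack((np.ones(m), x))    # add bias column -> m x 401
    a2 = sigmoid(a1 @ theta1.T)              # hidden activations -> m x 25
    a2 = np.column_stack((np.ones(m), a2))   # add bias column -> m x 26
    h = sigmoid(a2 @ theta2.T)               # output hypothesis -> m x 10
    return h

h = forward(np.zeros((25, 401)), np.zeros((10, 26)), np.zeros((3, 400)))
print(h.shape)  # (3, 10)
```

With all-zero weights every unit outputs sigmoid(0) = 0.5, which is a quick sanity check that the shapes line up.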


import numpy as np
import scipy.io as sio

def nnCostFunction(theta1, theta2, num_labels, x, y, lamda):
    m = len(x)
    a1 = np.column_stack((np.ones(m), x))       # add bias column: m x 401
    z2 = np.dot(theta1, a1.T)                   # 25 x m
    a2 = np.vstack((np.ones(m), 1 / (1 + np.exp(-z2))))  # add bias row: 26 x m
    z3 = np.dot(theta2, a2).T                   # m x 10
    htheta = 1 / (1 + np.exp(-z3))              # hypothesis h_theta(x)
    log_h = np.log(htheta)
    log_1mh = np.log(1 - htheta)
    cost = 0
    for i in range(len(y)):
        yi = np.zeros((1, num_labels))
        j = int(y[i])
        if j != 10:
            yi[0][j - 1] = 1                    # labels 1..9 -> columns 0..8
        else:
            yi[0][9] = 1                        # label 10 represents the digit 0
        cost += float(-np.dot(yi, log_h[i].T) - np.dot(1 - yi, log_1mh[i].T))
    return cost / m                             # average over the m examples
data = sio.loadmat('ex4data1.mat')        # x: 5000x400, y: 5000x1
y = data['y']
x = data['X']
weight = sio.loadmat('ex4weights.mat')    # theta1: 25x401, theta2: 10x26
theta2 = weight['Theta2']
theta1 = weight['Theta1']
lamda = 1
num_labels = 10
J = nnCostFunction(theta1, theta2, num_labels, x, y, lamda)
print(J)

A side note here: at first, no matter how I changed the program, the result was wrong. It turned out that the dataset stores the digits in the order 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, while I had assumed they ran from 0 to 9. Labels start at 1 to match MATLAB's 1-based indexing, so when porting to Python every label has to be shifted back by one.
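That label convention (integers 1..10, where 10 stands for the digit 0) can be checked with a small one-hot mapping sketch; `labels_to_one_hot` is a hypothetical helper, not part of the assignment code:

```python
import numpy as np

def labels_to_one_hot(y, num_labels=10):
    # y holds integers 1..10; label 10 represents the digit 0 (MATLAB 1-based indexing)
    y = np.asarray(y).ravel()
    Y = np.zeros((y.size, num_labels))
    Y[np.arange(y.size), y - 1] = 1  # label k -> column k-1, so 10 -> column 9
    return Y

print(labels_to_one_hot([10, 1, 3]))
```

The `y - 1` shift is exactly what the if/else branch inside the loop does element by element: label 10 lands in column 9 and labels 1..9 land in columns 0..8.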

def nnCostFunction(theta1, theta2, num_labels, x, y, lamda):
    m = len(x)
    a1 = np.column_stack((np.ones(m), x))       # add bias column: m x 401
    z2 = np.dot(theta1, a1.T)                   # 25 x m
    a2 = np.vstack((np.ones(m), 1 / (1 + np.exp(-z2))))  # add bias row: 26 x m
    z3 = np.dot(theta2, a2).T                   # m x 10
    htheta = 1 / (1 + np.exp(-z3))              # hypothesis h_theta(x)
    log_h = np.log(htheta)
    log_1mh = np.log(1 - htheta)
    cost = 0
    for i in range(len(y)):
        yi = np.zeros((1, num_labels))
        j = int(y[i])
        if j != 10:
            yi[0][j - 1] = 1                    # labels 1..9 -> columns 0..8
        else:
            yi[0][9] = 1                        # label 10 represents the digit 0
        cost += float(-np.dot(yi, log_h[i].T) - np.dot(1 - yi, log_1mh[i].T))
    # regularization: sum of squared weights, excluding each bias column
    theta_sum = np.sum(theta1[:, 1:] ** 2) + np.sum(theta2[:, 1:] ** 2)
    cost = cost + 0.5 * lamda * theta_sum
    return cost / m       # = unregularized cost / m + lamda / (2m) * theta_sum
data = sio.loadmat('ex4data1.mat')        # x: 5000x400, y: 5000x1
y = data['y']
x = data['X']
weight = sio.loadmat('ex4weights.mat')    # theta1: 25x401, theta2: 10x26
theta2 = weight['Theta2']
theta1 = weight['Theta1']
lamda = 1
num_labels = 10
J = nnCostFunction(theta1, theta2, num_labels, x, y, lamda)
print(J)
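The per-example loop above can also be written fully vectorized by building the whole one-hot matrix at once. A minimal sketch: the ex4 shapes (25x401 and 10x26 weights) are assumed, and since the `.mat` files are not available here, random weights and a few synthetic examples stand in:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost_vectorized(theta1, theta2, num_labels, x, y, lamda):
    m = x.shape[0]
    a1 = np.column_stack((np.ones(m), x))   # m x 401
    a2 = sigmoid(a1 @ theta1.T)             # m x 25
    a2 = np.column_stack((np.ones(m), a2))  # m x 26
    h = sigmoid(a2 @ theta2.T)              # m x 10
    Y = np.zeros((m, num_labels))
    Y[np.arange(m), y.ravel() - 1] = 1      # label 10 -> column 9
    cost = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m
    # regularization skips the bias column of each weight matrix
    reg = lamda / (2 * m) * (np.sum(theta1[:, 1:] ** 2) + np.sum(theta2[:, 1:] ** 2))
    return cost + reg

# smoke test with random data of the ex4 shapes
rng = np.random.default_rng(0)
t1 = rng.normal(scale=0.1, size=(25, 401))
t2 = rng.normal(scale=0.1, size=(10, 26))
X = rng.normal(size=(5, 400))
y = np.array([[10], [1], [2], [3], [4]])
print(nn_cost_vectorized(t1, t2, 10, X, y, 1.0))
```

With the real ex4 data this should agree with the looped version; the vectorized form just trades the explicit per-row dot products for one elementwise product and sum.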
