Backpropagation Algorithm

"Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and linear regression. Our goal is to compute:

\min_{\Theta} J(\Theta)
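For a network with one hidden layer, K output units, and L layers, the regularized cost that the function below evaluates can be written in the standard course notation (h_Θ(x) is the network output, s_l the number of units in layer l):

J(\Theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \Big[ -y_k^{(i)} \log\big((h_\Theta(x^{(i)}))_k\big) - \big(1 - y_k^{(i)}\big) \log\big(1 - (h_\Theta(x^{(i)}))_k\big) \Big] + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \big(\Theta_{j,i}^{(l)}\big)^2

The first double sum is the cross-entropy cost over all training examples and output classes; the second term penalizes every weight except the bias columns.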


import numpy as np
# Note: sigmoid and sigmoidGradient are assumed to be defined elsewhere (see the sketch below).

def nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, Lambda):
    # Reshape nn_params back into the parameters Theta1 and Theta2
    Theta1 = nn_params[:((input_layer_size + 1) * hidden_layer_size)].reshape(hidden_layer_size, input_layer_size + 1)
    Theta2 = nn_params[((input_layer_size + 1) * hidden_layer_size):].reshape(num_labels, hidden_layer_size + 1)

    m = X.shape[0]
    J = 0
    X = np.hstack((np.ones((m, 1)), X))      # add the bias column
    y10 = np.zeros((m, num_labels))

    # Forward propagation
    a1 = sigmoid(X @ Theta1.T)
    a1 = np.hstack((np.ones((m, 1)), a1))    # hidden layer
    a2 = sigmoid(a1 @ Theta2.T)              # output layer

    # Recode the labels (1..num_labels) as one-hot rows of y10
    for i in range(1, num_labels + 1):
        y10[:, i - 1][:, np.newaxis] = np.where(y == i, 1, 0)

    # Unregularized and regularized cost
    for j in range(num_labels):
        J = J + sum(-y10[:, j] * np.log(a2[:, j]) - (1 - y10[:, j]) * np.log(1 - a2[:, j]))
    cost = 1 / m * J
    reg_J = cost + Lambda / (2 * m) * (np.sum(Theta1[:, 1:] ** 2) + np.sum(Theta2[:, 1:] ** 2))

    # Implement the backpropagation algorithm to compute the gradients
    grad1 = np.zeros((Theta1.shape))
    grad2 = np.zeros((Theta2.shape))

    for i in range(m):
        xi = X[i, :]      # 1 x 401
        a1i = a1[i, :]    # 1 x 26
        a2i = a2[i, :]    # 1 x 10
        d2 = a2i - y10[i, :]
        d1 = Theta2.T @ d2.T * sigmoidGradient(np.hstack((1, xi @ Theta1.T)))
        grad1 = grad1 + d1[1:][:, np.newaxis] @ xi[:, np.newaxis].T
        grad2 = grad2 + d2.T[:, np.newaxis] @ a1i[:, np.newaxis].T

    grad1 = 1 / m * grad1
    grad2 = 1 / m * grad2

    # Regularized gradients: the bias column is not regularized
    grad1_reg = grad1 + (Lambda / m) * np.hstack((np.zeros((Theta1.shape[0], 1)), Theta1[:, 1:]))
    grad2_reg = grad2 + (Lambda / m) * np.hstack((np.zeros((Theta2.shape[0], 1)), Theta2[:, 1:]))

    return cost, grad1, grad2, reg_J, grad1_reg, grad2_reg
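The function above assumes that sigmoid and sigmoidGradient are defined elsewhere in the exercise. Below is a minimal sketch of those helpers plus a small smoke test on randomly generated toy data; the layer sizes, data, and weight initialization here are illustrative assumptions, not part of the original post (the course exercise itself uses a 400-25-10 network).

import numpy as np

def sigmoid(z):
    # Logistic activation, applied element-wise
    return 1 / (1 + np.exp(-z))

def sigmoidGradient(z):
    # Derivative of the sigmoid: g'(z) = g(z) * (1 - g(z))
    g = sigmoid(z)
    return g * (1 - g)

# Toy dimensions for a quick check (illustrative only)
input_layer_size, hidden_layer_size, num_labels = 4, 3, 2
m = 5

rng = np.random.default_rng(0)
X = rng.standard_normal((m, input_layer_size))
y = rng.integers(1, num_labels + 1, size=(m, 1))   # labels are 1-indexed, as in the exercise

# Randomly initialize the weights and unroll them into a single parameter vector
Theta1 = rng.standard_normal((hidden_layer_size, input_layer_size + 1)) * 0.12
Theta2 = rng.standard_normal((num_labels, hidden_layer_size + 1)) * 0.12
nn_params = np.concatenate((Theta1.ravel(), Theta2.ravel()))

cost, grad1, grad2, reg_J, grad1_reg, grad2_reg = nnCostFunction(
    nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, Lambda=1)
print("cost:", cost, "regularized cost:", reg_J)
print("gradient shapes:", grad1_reg.shape, grad2_reg.shape)

The unrolled gradients returned here are what an optimizer (or a gradient-checking routine) would consume to carry out the minimization of J(Θ) described above.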