cs231n_two_layer_net

two_layer_net

two_layer_net principles

The main formula of this model is $f = W_2 \max(0, W_1 x + b_1) + b_2$. When computing gradients, we first compute the gradient with respect to $W_2$ and then the gradient with respect to $W_1$.

Forward propagation:
$\mathrm{FC1\_out} = X \cdot W_1 + b_1$

$\mathrm{H\_out} = \mathrm{maximum}(0, \mathrm{FC1\_out})$

$\mathrm{FC2\_out} = \mathrm{H\_out} \cdot W_2 + b_2$

$\mathrm{final\_output} = \mathrm{softmax}(\mathrm{FC2\_out})$

Backpropagation (let $Y_{[N \times C]}$ denote the one-hot label matrix):

$\frac{\partial L}{\partial \mathrm{FC2\_out}} = \mathrm{final\_output}_{[N \times C]} - Y_{[N \times C]}$

$\frac{\partial L}{\partial W_2} = \frac{\partial \mathrm{FC2\_out}}{\partial W_2} \cdot \frac{\partial L}{\partial \mathrm{FC2\_out}} = \mathrm{H\_out}^T \cdot \frac{\partial L}{\partial \mathrm{FC2\_out}}$

$\frac{\partial L}{\partial b_2} = \frac{\partial \mathrm{FC2\_out}}{\partial b_2} \cdot \frac{\partial L}{\partial \mathrm{FC2\_out}} = [1 \ldots 1]_{[1 \times N]} \cdot \frac{\partial L}{\partial \mathrm{FC2\_out}}$

$\frac{\partial L}{\partial \mathrm{H\_out}} = \left( \frac{\partial L}{\partial \mathrm{FC2\_out}} \cdot W_2^T \right) \odot \mathbb{1}[\mathrm{FC1\_out} > 0]$ (the ReLU only passes gradient where its input was positive)

$\frac{\partial L}{\partial W_1} = \frac{\partial \mathrm{H\_out}}{\partial W_1} \cdot \frac{\partial L}{\partial \mathrm{H\_out}} = X^T \cdot \frac{\partial L}{\partial \mathrm{H\_out}}$

$\frac{\partial L}{\partial b_1} = \frac{\partial \mathrm{H\_out}}{\partial b_1} \cdot \frac{\partial L}{\partial \mathrm{H\_out}} = [1 \ldots 1]_{[1 \times N]} \cdot \frac{\partial L}{\partial \mathrm{H\_out}}$
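To make these equations concrete, here is a minimal, self-contained NumPy sketch (toy shapes and random data, not the assignment code itself) that runs the forward pass, applies the backward formula for $W_2$, and checks the result against a central difference:

import numpy as np

# Toy shapes: N samples, D input dims, H hidden units, C classes.
np.random.seed(0)
N, D, H, C = 5, 4, 10, 3
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
W1, b1 = 1e-1 * np.random.randn(D, H), np.zeros(H)
W2, b2 = 1e-1 * np.random.randn(H, C), np.zeros(C)

def forward(W2_):
    # FC1 -> ReLU -> FC2 -> softmax -> averaged cross-entropy loss.
    FC1_out = X.dot(W1) + b1
    H_out = np.maximum(0, FC1_out)
    FC2_out = H_out.dot(W2_) + b2
    FC2_out -= FC2_out.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(FC2_out) / np.exp(FC2_out).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean()
    return loss, probs, H_out

loss, probs, H_out = forward(W2)

# Backward pass for W2, following the equations above (the 1/N factor comes
# from averaging the loss over the batch).
Y = np.zeros((N, C)); Y[np.arange(N), y] = 1
dFC2_out = (probs - Y) / N
dW2 = H_out.T.dot(dFC2_out)

# Central-difference check on one entry of W2.
h_step, (i, j) = 1e-5, (2, 1)
W2_plus, W2_minus = W2.copy(), W2.copy()
W2_plus[i, j] += h_step; W2_minus[i, j] -= h_step
numeric = (forward(W2_plus)[0] - forward(W2_minus)[0]) / (2 * h_step)
print(dW2[i, j], numeric)   # the two numbers should agree to several decimals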

Assignment implementation

neural_net.py

  1. Implementing the loss function
def loss(self, X, y=None, reg=0.0):
    """
    Compute the loss and gradients for a two layer fully connected neural
    network.

    Inputs:
    - X: Input data of shape (N, D). Each X[i] is a training sample.
    - y: Vector of training labels. y[i] is the label for X[i], and each y[i] is
      an integer in the range 0 <= y[i] < C. This parameter is optional; if it
      is not passed then we only return scores, and if it is passed then we
      instead return the loss and gradients.
    - reg: Regularization strength.

    Returns:
    If y is None, return a matrix scores of shape (N, C) where scores[i, c] is
    the score for class c on input X[i].

    If y is not None, instead return a tuple of:
    - loss: Loss (data loss and regularization loss) for this batch of training
      samples.
    - grads: Dictionary mapping parameter names to gradients of those parameters
      with respect to the loss function; has the same keys as self.params.
    """
    # Unpack variables from the params dictionary
    W1, b1 = self.params['W1'], self.params['b1']
    W2, b2 = self.params['W2'], self.params['b2']
    N, D = X.shape

    # Compute the forward pass
    scores = None
    #############################################################################
    # TODO: Perform the forward pass, computing the class scores for the input. #
    # Store the result in the scores variable, which should be an array of      #
    # shape (N, C).                                                             #
    #############################################################################
    Z1 = X.dot(W1) + b1        # first fully connected layer: (N, H)
    A1 = np.maximum(0, Z1)     # ReLU non-linearity
    scores = A1.dot(W2) + b2   # second fully connected layer: class scores (N, C)
    #############################################################################
    #                              END OF YOUR CODE                             #
    #############################################################################
    
    # If the targets are not given then jump out, we're done
    if y is None:
      return scores

    # Compute the loss
    loss = None
    #############################################################################
    # TODO: Finish the forward pass, and compute the loss. This should include  #
    # both the data loss and L2 regularization for W1 and W2. Store the result  #
    # in the variable loss, which should be a scalar. Use the Softmax           #
    # classifier loss.                                                          #
    #############################################################################
    scores -= np.max(scores, axis=1, keepdims=True)     # shift scores for numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)   # softmax probabilities
    y_label = np.zeros((N, probs.shape[1]))
    y_label[np.arange(N), y] = 1                         # one-hot label matrix
    loss = -np.sum(np.log(probs) * y_label) / N          # average cross-entropy loss
    loss += reg * (np.sum(W1 * W1) + np.sum(W2 * W2))    # L2 regularization
    #############################################################################
    #                              END OF YOUR CODE                             #
    #############################################################################

    # Backward pass: compute gradients
    grads = {}
    #############################################################################
    # TODO: Compute the backward pass, computing the derivatives of the weights #
    # and biases. Store the results in the grads dictionary. For example,       #
    # grads['W1'] should store the gradient on W1, and be a matrix of same size #
    #############################################################################
    dZ2 = probs - y_label                    # dL/dFC2_out (before averaging over the batch)
    dW2 = A1.T.dot(dZ2) / N + 2 * reg * W2   # data-loss gradient plus regularization term
    db2 = np.sum(dZ2, axis=0) / N            # sum over samples
    dZ1 = dZ2.dot(W2.T) * (A1 > 0)           # backprop through W2 and the ReLU mask
    dW1 = X.T.dot(dZ1) / N + 2 * reg * W1
    db1 = np.sum(dZ1, axis=0) / N
    grads['W2'] = dW2
    grads['b2'] = db2
    grads['W1'] = dW1
    grads['b1'] = db1
    
    #############################################################################
    #                              END OF YOUR CODE                             #
    #############################################################################

    return loss, grads
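Before training on real data, it is worth checking the analytic gradients returned by loss() against numerical gradients, as the notebook does. A minimal sketch, assuming a small TwoLayerNet instance `net` and toy inputs `X`, `y` set up as in the notebook, and using the course helper eval_numerical_gradient from cs231n/gradient_check.py:

import numpy as np
from cs231n.gradient_check import eval_numerical_gradient  # course helper

def rel_error(x, y):
    # Maximum relative error between two arrays.
    return np.max(np.abs(x - y) / np.maximum(1e-8, np.abs(x) + np.abs(y)))

loss, grads = net.loss(X, y, reg=0.05)
for param_name in grads:
    # Re-evaluate the loss while eval_numerical_gradient perturbs each entry
    # of the parameter in place.
    f = lambda W: net.loss(X, y, reg=0.05)[0]
    grad_numerical = eval_numerical_gradient(f, net.params[param_name], verbose=False)
    print('%s max relative error: %e' % (param_name, rel_error(grad_numerical, grads[param_name])))

Relative errors around 1e-8 or smaller indicate the backward pass is consistent with the loss.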
  2. Implementing the train function
def train(self, X, y, X_val, y_val,
            learning_rate=1e-3, learning_rate_decay=0.95,
            reg=5e-6, num_iters=100,
            batch_size=200, verbose=False):
    """
    Train this neural network using stochastic gradient descent.

    Inputs:
    - X: A numpy array of shape (N, D) giving training data.
    - y: A numpy array of shape (N,) giving training labels; y[i] = c means that
      X[i] has label c, where 0 <= c < C.
    - X_val: A numpy array of shape (N_val, D) giving validation data.
    - y_val: A numpy array of shape (N_val,) giving validation labels.
    - learning_rate: Scalar giving learning rate for optimization.
    - learning_rate_decay: Scalar giving factor used to decay the learning rate
      after each epoch.
    - reg: Scalar giving regularization strength.
    - num_iters: Number of steps to take when optimizing.
    - batch_size: Number of training examples to use per step.
    - verbose: boolean; if true print progress during optimization.
    """
    num_train = X.shape[0]
    iterations_per_epoch = max(num_train // batch_size, 1)

    # Use SGD to optimize the parameters in self.model
    loss_history = []
    train_acc_history = []
    val_acc_history = []

    for it in range(num_iters):
      X_batch = None
      y_batch = None

      #########################################################################
      # TODO: Create a random minibatch of training data and labels, storing  #
      # them in X_batch and y_batch respectively.                             #
      #########################################################################
      # Sample a random minibatch (with replacement, which is acceptable for SGD)
      batch_inx = np.random.choice(num_train, batch_size)
      X_batch = X[batch_inx, :]
      y_batch = y[batch_inx]
    
      #########################################################################
      #                             END OF YOUR CODE                          #
      #########################################################################

      # Compute loss and gradients using the current minibatch
      loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
      loss_history.append(loss)

      #########################################################################
      # TODO: Use the gradients in the grads dictionary to update the         #
      # parameters of the network (stored in the dictionary self.params)      #
      # using stochastic gradient descent. You'll need to use the gradients   #
      # stored in the grads dictionary defined above.                         #
      #########################################################################
      self.params['W1'] -= learning_rate * grads['W1']
      self.params['b1'] -= learning_rate * grads['b1']
      self.params['W2'] -= learning_rate * grads['W2']
      self.params['b2'] -= learning_rate * grads['b2']
      #########################################################################
      #                             END OF YOUR CODE                          #
      #########################################################################

      if verbose and it % 100 == 0:
        print('iteration %d / %d: loss %f' % (it, num_iters, loss))

      # Every epoch, check train and val accuracy and decay learning rate.
      if it % iterations_per_epoch == 0:
        # Check accuracy
        train_acc = (self.predict(X_batch) == y_batch).mean()
        val_acc = (self.predict(X_val) == y_val).mean()
        train_acc_history.append(train_acc)
        val_acc_history.append(val_acc)

        # Decay learning rate
        learning_rate *= learning_rate_decay

    return {
      'loss_history': loss_history,
      'train_acc_history': train_acc_history,
      'val_acc_history': val_acc_history,
    }
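A brief usage sketch of train, assuming the CIFAR-10 splits X_train, y_train, X_val, y_val and an initialized TwoLayerNet called `net` as set up in the notebook (the hyperparameter values here are only placeholders):

import matplotlib.pyplot as plt

stats = net.train(X_train, y_train, X_val, y_val,
                  num_iters=1000, batch_size=200,
                  learning_rate=1e-4, learning_rate_decay=0.95,
                  reg=0.25, verbose=True)

print('Final training loss:', stats['loss_history'][-1])

# The returned histories make it easy to spot divergence or under-fitting.
plt.plot(stats['loss_history'])
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()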
  3. Implementing the predict function
def predict(self, X):
    """
    Use the trained weights of this two-layer network to predict labels for
    data points. For each data point we predict scores for each of the C
    classes, and assign each data point to the class with the highest score.

    Inputs:
    - X: A numpy array of shape (N, D) giving N D-dimensional data points to
      classify.

    Returns:
    - y_pred: A numpy array of shape (N,) giving predicted labels for each of
      the elements of X. For all i, y_pred[i] = c means that X[i] is predicted
      to have class c, where 0 <= c < C.
    """
    y_pred = None

    ###########################################################################
    # TODO: Implement this function; it should be VERY simple!                #
    ###########################################################################
    scores = self.loss(X)               # with y=None, loss() returns the class scores
    y_pred = np.argmax(scores, axis=1)  # pick the highest-scoring class per sample
    ###########################################################################
    #                              END OF YOUR CODE                           #
    ###########################################################################

    return y_pred
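Once trained, predict is typically used to measure accuracy on a held-out split; a minimal sketch, assuming the same `net` and validation data as above:

val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy:', val_acc)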

two_layer_net.ipynb

  1. Compute the scores


  2. Check the difference between the computed loss and the reference loss


  3. Check the difference between the backpropagated gradients and numerical gradients


  4. Train the network


  5. Tune the hyperparameters

    best_net = None # store the best model into this 
    
    #################################################################################
    # TODO: Tune hyperparameters using the validation set. Store your best trained  #
    # model in best_net.                                                            #
    #                                                                               #
    # To help debug your network, it may help to use visualizations similar to the  #
    # ones we used above; these visualizations will have significant qualitative    #
    # differences from the ones we saw above for the poorly tuned network.          #
    #                                                                               #
    # Tweaking hyperparameters by hand can be fun, but you might find it useful to  #
    # write code to sweep through possible combinations of hyperparameters          #
    # automatically like we did on the previous exercises.                          #
    #################################################################################
    input_size = 32 * 32 * 3
    hidden_size = [100, 200, 250]
    num_classes = 10
    reg = [0.03,0.05, 0.09]
    learning_rate = [1e-3, 1e-4]
    best_acc = 0
    for hs in hidden_size:
        for r in reg:
            for lr in learning_rate:
                # Re-initialize the network for every combination so that a run
                # does not inherit weights from the previous setting.
                net = TwoLayerNet(input_size, hs, num_classes)
                stats = net.train(X_train, y_train, X_val, y_val, num_iters=3000, batch_size=400,
                                  learning_rate=lr, learning_rate_decay=0.95, reg=r, verbose=False)
                val_acc = (net.predict(X_val) == y_val).mean()
                print('hidden_size:%d, reg:%f, learning_rate:%f' % (hs, r, lr))
                print('val accuracy:', val_acc)
                if val_acc > best_acc:
                    best_acc = val_acc
                    best_net = net
    #################################################################################
    #                               END OF YOUR CODE                                #
    #################################################################################
    

    The final training results are shown in the notebook output.


    The accuracy of the best model is then measured on the test set; a minimal sketch of that evaluation follows.
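    Assuming the CIFAR-10 test split X_test, y_test prepared earlier in the notebook:

    test_acc = (best_net.predict(X_test) == y_test).mean()
    print('Test accuracy:', test_acc)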

