Deep Learning cs182 Assignment 1

This post covers configuring a Python virtual environment and resolving a problem hit during setup. It then walks through the implementation of the fully-connected layer's backward pass (affine_backward), in particular the gradient computation, and discusses the L2 regularization loss and its role in backpropagation, showing how to add an L2 regularization term to a two-layer network's loss function.

Hoping I can actually get through this programming assignment!

First, set up the virtual environment.

C:\Users\Yasmin\Downloads\hw1>cd .env/Scripts

C:\Users\Yasmin\Downloads\hw1\.env\Scripts>activate

(.env) C:\Users\Yasmin\Downloads\hw1\.env\Scripts>deactivate
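(For reference, if the .env folder doesn't exist yet, it can be created first with Python's built-in venv module, something like:)

C:\Users\Yasmin\Downloads\hw1>python -m venv .env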

Then I hit a build failure:

running build_ext
building 'im2col_cython' extension
error: Unable to find vcvarsall.bat

The fix:

pip install imageio

and it worked fine after that....

Now, straight into the assignment!

Fully-connected layer

affine_backward — I didn't understand this at first.

# dout = dloss/dy, the upstream gradient (y is this layer's output)

So it's just the chain rule: keep multiplying the upstream gradient by each local gradient.
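Writing the forward pass as y = x·w + b, with x flattened to shape (N, D), the three gradients drop out of the chain rule (standard matrix-calculus results, spelled out with shapes):

dL/dx = dout · w^T    : (N, M) · (M, D) -> (N, D), then reshape back to x's original shape
dL/dw = x^T · dout    : (D, N) · (N, M) -> (D, M)
dL/db = sum of dout over the batch axis -> (M,)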

import numpy as np

def affine_backward(dout, cache):
  """
  Computes the backward pass for an affine layer.
  Inputs:
  - dout: Upstream derivative, of shape (N, M)
  - cache: Tuple of:
    - x: Input data, of shape (N, d_1, ..., d_k)
    - w: Weights, of shape (D, M)
    - b: A numpy array of biases, of shape (M,)
  Returns a tuple of:
  - dx: Gradient with respect to x, of shape (N, d_1, ..., d_k)
  - dw: Gradient with respect to w, of shape (D, M)
  - db: Gradient with respect to b, of shape (M,)
  """
  x, w, b = cache
  dx, dw, db = None, None, None
  #############################################################################
  # TODO: Implement the affine backward pass.                                 #
  #############################################################################
  # Forward pass was: out = x.reshape(N, D).dot(w) + b
  # dx = dout . w^T : (N, M) x (M, D) -> (N, D), reshaped back to x.shape
  dx = dout.dot(w.T).reshape(x.shape)

  # dw = x^T . dout : (D, N) x (N, M) -> (D, M), with x flattened to (N, D)
  dw = x.reshape(x.shape[0], w.shape[0]).T.dot(dout)

  # db = sum of dout over the batch axis -> (M,)
  db = np.sum(dout, axis=0)
  #############################################################################
  #                             END OF YOUR CODE                              #
  #############################################################################
  return dx, dw, db
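To sanity-check the implementation, a quick numerical gradient comparison works (a minimal self-contained sketch — num_grad and the test setup here are my own, not the assignment's gradient_check utilities; the forward pass is assumed to be out = x.reshape(N, -1).dot(w) + b):

def num_grad(f, x, h=1e-5):
  # Central-difference numerical gradient of the scalar function f w.r.t. x.
  grad = np.zeros_like(x)
  it = np.nditer(x, flags=['multi_index'])
  while not it.finished:
    idx = it.multi_index
    old = x[idx]
    x[idx] = old + h
    fp = f()
    x[idx] = old - h
    fm = f()
    x[idx] = old
    grad[idx] = (fp - fm) / (2 * h)
    it.iternext()
  return grad

np.random.seed(0)
x = np.random.randn(4, 2, 3)               # N=4, d_1=2, d_2=3 -> D=6
w = np.random.randn(6, 5)                  # D=6, M=5
b = np.random.randn(5)

fwd = lambda: x.reshape(4, -1).dot(w) + b  # affine forward pass
loss = lambda: np.sum(fwd() ** 2)          # arbitrary scalar loss on the output
dout = 2 * fwd()                           # dloss/dout for that loss

dx, dw, db = affine_backward(dout, (x, w, b))
print(np.max(np.abs(dx - num_grad(loss, x))))  # all three should be ~1e-8
print(np.max(np.abs(dw - num_grad(loss, w))))
print(np.max(np.abs(db - num_grad(loss, b))))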

I wasn't sure about the reg_loss here at first.

Then it suddenly clicked...

The regularization here is applied to W1 and W2:

reg_loss = 0.5 * self.reg * (||W1||_2^2 + ||W2||_2^2)

OK.
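The factor of 0.5 is there purely to make the gradient clean: d/dW [0.5 * reg * sum(W*W)] = reg * W, which is exactly the self.reg * self.params[...] contribution added to grads['W1'] and grads['W2'] at the end of the backward pass below.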

    loss, grads = 0, {}
    ############################################################################
    # TODO: Implement the backward pass for the two-layer net. Store the loss  #
    # in the loss variable and gradients in the grads dictionary. Compute data #
    # loss using softmax, and make sure that grads[k] holds the gradients for  #
    # self.params[k]. Don't forget to add L2 regularization!                   #
    #                                                                          #
    # NOTE: To ensure that your implementation matches ours and you pass the   #
    # automated tests, make sure that your L2 regularization includes a factor #
    # of 0.5 to simplify the expression for the gradient.                      #
    ############################################################################

    # FC scores -> Softmax.  Calculate loss and gradients.
    loss, sm_grads = softmax_loss(scores, y)

    # Adding L2 Reg to softmax loss
    # http://cs231n.github.io/neural-networks-case-study/#grad
    reg_loss = 0.5 * self.reg * (np.sum(self.params['W1'] * self.params['W1']) +
                                 np.sum(self.params['W2'] * self.params['W2']))
    loss = loss + reg_loss

    # Backprop to get gradients
    dx_l2, grads['W2'], grads['b2'] = affine_backward(sm_grads, cache_l2)
    _, grads['W1'], grads['b1'] = affine_relu_backward(dx_l2, cache_l1)

    # add regularization gradients contribution
    grads['W2'] += self.reg * self.params['W2']
    grads['W1'] += self.reg * self.params['W1']
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################

    return loss, grads
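For context, the forward pass earlier in loss() that produces scores, cache_l1 and cache_l2 presumably looks something like this sketch (using the assignment's affine_relu_forward / affine_forward helpers; the intermediate name h1 is my own guess):

    # hidden layer: affine followed by ReLU
    h1, cache_l1 = affine_relu_forward(X, self.params['W1'], self.params['b1'])
    # output layer: plain affine; softmax_loss handles the rest
    scores, cache_l2 = affine_forward(h1, self.params['W2'], self.params['b2'])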
