1. Deep Learning Exercise: Python Basics with Numpy (Optional)

This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization; thanks to the original authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization


Contents

1 - Building basic functions with numpy

1.1 - np.exp(), sigmoid function

1.2 - Sigmoid gradient

1.3 - Reshaping arrays (commonly used)

1.4 - Normalizing rows (commonly used)

1.5 - Broadcasting and the softmax function (important to understand)

2 - Vectorization

2.1 - Implement the L1 and L2 loss functions

3 - Recommended reading:


1 - Building basic functions with numpy

1.1 - np.exp(), sigmoid function

import numpy as np

x = np.array([1, 2, 3])
print(np.exp(x))   # np.exp is applied element-wise: e**1, e**2, e**3

x = np.array([1, 2, 3])
print(x + 3)       # the scalar 3 is broadcast to every element: [4 5 6]

Exercise: Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.

import numpy as np

def sigmoid(x):
    # works element-wise on a real number, a vector, or a matrix
    s = 1 / (1 + np.exp(-x))
    return s
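
As a quick check, sigmoid can be applied to the same array used above (a minimal usage example; the printed values are approximate):

x = np.array([1, 2, 3])
print(sigmoid(x))   # approximately [0.73105858 0.88079708 0.95257413]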

1.2 - Sigmoid gradient

Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))

You often code this function in two steps:

  1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
  2. Compute the gradient as ds = s * (1 - s).

def sigmoid_derivative(x):
    # Step 1: set s to be the sigmoid of x (reusing sigmoid(x) from above)
    s = sigmoid(x)
    # Step 2: compute the gradient
    ds = s * (1 - s)
    return ds

x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))

1.3 - Reshaping arrays (commonly used)

Two common numpy functions used in deep learning are np.shape and np.reshape().

  • X.shape is used to get the shape (dimension) of a matrix/vector X
  • X.reshape(...) is used to reshape X into some other dimension

For example, in computer science, an image is represented by a 3D array of shape (length,height,depth=3). However, when you read an image as the input of an algorithm you convert it to a vector of shape (length∗height∗3,1). In other words, you "unroll", or reshape, the 3D array into a 1D vector.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
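
For instance (v is assumed here to be an existing numpy array of shape (a, b, c)):

v = v.reshape((v.shape[0] * v.shape[1], v.shape[2]))   # v.shape is now (a*b, c)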

def image2vector(image):
    # image: a numpy array of shape (length, height, depth)
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    return v
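
A minimal usage example (the 3 x 3 x 2 random array below is just an illustrative stand-in for a real image):

image = np.random.rand(3, 3, 2)        # a small random "image" of shape (3, 3, 2)
print(image2vector(image).shape)       # expected: (18, 1)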

1.4 - Normalizing rows (commonly used)

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to \frac{x}{\| x\|} (dividing each row vector of x by its norm).
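
For example, the row vector [0, 3, 4] has norm \sqrt{0^2 + 3^2 + 4^2} = 5, so after normalization it becomes [0, 0.6, 0.8].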

Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

def normalizeRows(x):
    """
    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix.
    """

    # Compute x_norm as the L2 norm of each row. Use np.linalg.norm(..., ord=2, axis=1, keepdims=True)
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)

    # Divide x by its norm (broadcasting divides each row by its own norm).
    x = x / x_norm

    return x
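
A small usage example (the input matrix below is just an illustrative assumption):

x = np.array([[0., 3., 4.],
              [1., 6., 4.]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
# the first row is divided by its norm 5, the second by sqrt(53), roughly 7.28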

1.5 - Broadcasting and the softmax function (important to understand)

A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
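
A tiny illustration of broadcasting (the arrays below are an assumed example): adding a (1, 3) row vector to a (2, 3) matrix implicitly stretches the row across both rows of the matrix.

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # shape (2, 3)
b = np.array([[10, 20, 30]])       # shape (1, 3)
print(A + b)                       # b is broadcast across both rows, giving [[11 22 33] [14 25 36]]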

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.

Instructions: for a row vector x with entries x_1, \dots, x_n, softmax(x) = \left[\frac{e^{x_1}}{\sum_j e^{x_j}}, \frac{e^{x_2}}{\sum_j e^{x_j}}, \dots, \frac{e^{x_n}}{\sum_j e^{x_j}}\right]. For a matrix, softmax is applied to each row independently, as the code below does.

def softmax(x):
    """
    Argument:
    x -- A numpy matrix of shape (n,m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n,m)
    """
    
    # Apply np.exp() element-wise to x.
    x_exp = np.exp(x)

    # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
    x_sum = np.sum(x_exp, axis=1, keepdims=True)

    # Divide x_exp by x_sum; broadcasting divides each row element-wise by its row sum.
    s = x_exp / x_sum

    return s

x = np.array([
    [9, 2, 5, 0, 0],
    [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))

**What you need to remember:**

  • np.exp(x) works for any np.array x and applies the exponential function to every coordinate
  • the sigmoid function and its gradient
  • image2vector is commonly used in deep learning
  • np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
  • numpy has efficient built-in functions
  • broadcasting is extremely useful

2 - Vectorization

2.1 - Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

Reminder:

  • The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions are from the true values. In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
  • L1 loss is defined as:

L_1(\hat{y}, y) = \sum_{i=0}^{m}|y^{(i)} - \hat{y}^{(i)}|

def L1(yhat, y):
    # element-wise absolute differences, summed with the vectorized np.sum
    loss = np.sum(np.abs(yhat - y))
    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))

L2 loss is defined as:

L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2

def L2(yhat, y):
    # np.dot of the difference vector with itself gives the sum of squared differences
    loss = np.dot(y - yhat, y - yhat)
    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))

**What to remember:**

  • Vectorization is very important in deep learning. It provides computational efficiency and clarity.
  • You have reviewed the L1 and L2 loss.
  • You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc.

3 - Recommended reading:
