NetEase Cloud Classroom Deep Learning: Course 1, Week 1 Programming Assignment

1.1 Python Basics with Numpy (optional assignment)

Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you’ve used Python before, this will help familiarize you with functions we’ll need.

Instructions:
1. You will be using Python 3.
2. Avoid using for-loops and while-loops, unless you are explicitly told to do so.
3. Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
4. After coding your function, run the cell right below it to check if your result is correct.

After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code

About iPython Notebooks: an iPython Notebook is an interactive coding environment embedded in a webpage. You will be using iPython notebooks throughout this course. You only need to write your code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by pressing Shift+Enter or by clicking "Run Cell".

The "(≈ X lines of code)" comments tell you roughly how many lines of code you are expected to write. This is only a rough estimate, so don't worry about the exact length of your code.

Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.

### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
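The second of the two cells simply prints the variable; a minimal sketch of it (the exact wording in the notebook may differ):

print("test: " + test)   # prints: test: Hello World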


1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

1.1 - sigmoid function, np.exp()
Before using np.exp(), you will first implement the sigmoid function with math.exp(). By comparing the two, you will see why np.exp() is preferable to math.exp().

Note: sigmoid(x) = 1/(1 + e^(-x)), also known as the logistic function, is a non-linear function.

To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().

# GRADED FUNCTION: basic_sigmoid

import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + math.exp(-x))
    ### END CODE HERE ###

    return s

Expected Output:
basic_sigmoid(3)=0.9525741268224334

In practice, however, we rarely use the math library in deep learning, because the inputs are mostly matrices and vectors, whereas the math library only accepts real-number inputs. This is why numpy is more useful.

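For example, calling basic_sigmoid() on a vector fails:

x = [1, 2, 3]
basic_sigmoid(x)  # raises a TypeError, because x is a list and math.exp() expects a scalar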

numpy, by contrast, applies the same operation to every element of x.
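A quick demonstration:

import numpy as np

x = np.array([1, 2, 3])
print(np.exp(x))  # [ 2.71828183  7.3890561  20.08553692], i.e. (e^1, e^2, e^3)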

Exercise: Implement the sigmoid function using numpy.
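A sketch of the graded function, following the same structure as basic_sigmoid() above:

# GRADED FUNCTION: sigmoid

import numpy as np

def sigmoid(x):
    """
    Compute the sigmoid of x.

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-x))
    ### END CODE HERE ###

    return s

For example, sigmoid(np.array([1, 2, 3])) returns approximately [0.73105858, 0.88079708, 0.95257413].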

1.2 - Sigmoid gradient

You will need to compute gradients and use the backpropagation algorithm to optimize your loss functions.

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x.
The formula is: sigmoid_derivative(x) = σ′(x) = σ(x)(1 − σ(x))
You can code this function in two steps:
1. Set s to be the sigmoid of x.
2. Compute σ′(x) = s(1 − s).

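A sketch of the graded function, reusing the sigmoid() defined above:

# GRADED FUNCTION: sigmoid_derivative

def sigmoid_derivative(x):
    """
    Compute the gradient of the sigmoid function with respect to its input x.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- the computed gradient
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    s = sigmoid(x)      # step 1: compute the sigmoid of x
    ds = s * (1 - s)    # step 2: apply sigma'(x) = s(1 - s)
    ### END CODE HERE ###

    return ds

For example, sigmoid_derivative(np.array([1, 2, 3])) gives approximately [0.19661193, 0.10499359, 0.04517666].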

1.3 - Reshaping arrays

Two numpy operations are used all the time in deep learning: np.shape and np.reshape().
X.shape is used to get the shape (dimensions) of a matrix/vector X.
X.reshape(…) is used to reshape X into some other dimension.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1).
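A sketch of the graded function (np as imported above); image.shape[k] reads the size of dimension k, so no dimension is hard-coded:

# GRADED FUNCTION: image2vector

def image2vector(image):
    """
    Arguments:
    image -- a numpy array of shape (length, height, 3)

    Returns:
    v -- a vector of shape (length*height*3, 1)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
    ### END CODE HERE ###

    return v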

1.4 - Normalizing rows

A technique commonly used in machine learning and deep learning is to normalize our data. After normalization, gradient descent converges better and faster.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

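A sketch using np.linalg.norm(); keepdims=True keeps x_norm as a column of shape (n, 1), so the division broadcasts across each row:

# GRADED FUNCTION: normalizeRows

def normalizeRows(x):
    """
    Normalize each row of the matrix x to have unit length (L2 norm of 1).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)  # L2 norm of each row, shape (n, 1)
    x = x / x_norm                                     # broadcasting divides every row by its norm
    ### END CODE HERE ###

    return x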

Note: In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You’ll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we’ll talk about it now!

1.5 - Broadcasting and the softmax function

A very important concept to understand in numpy is “broadcasting”.
It is very useful for performing mathematical operations between arrays of different shapes.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
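For each row of x, softmax exponentiates every entry and divides by the row sum: softmax(x)_j = e^(x_j) / Σ_k e^(x_k). A sketch of the graded function, built on np.exp(), np.sum(), and broadcasting:

# GRADED FUNCTION: softmax

def softmax(x):
    """
    Compute the softmax of each row of the input x.

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    s -- A numpy matrix of shape (n, m) whose rows each sum to 1
    """

    ### START CODE HERE ### (≈ 3 lines of code)
    x_exp = np.exp(x)                             # element-wise exponential, shape (n, m)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)  # row sums, shape (n, 1)
    s = x_exp / x_sum                             # broadcasting divides each row by its sum
    ### END CODE HERE ###

    return s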

What you need to remember:
1.np.exp(x) works for any np.array x and applies the exponential function to every coordinate
2.the sigmoid function and its gradient
3.image2vector is commonly used in deep learning
4.np.reshape is widely used. In the future, you’ll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
5.numpy has efficient built-in functions
6.broadcasting is extremely useful

2 - Vectorization

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make your code computationally efficient, you should vectorize it. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
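As a minimal illustration (the vector size here is chosen arbitrarily), compare a for-loop dot product with np.dot():

import time
import numpy as np

x1 = np.random.rand(1000000)
x2 = np.random.rand(1000000)

# classic (non-vectorized) dot product
tic = time.time()
dot = 0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
print("loop dot: " + str(1000 * (time.time() - tic)) + " ms")

# vectorized dot product
tic = time.time()
dot = np.dot(x1, x2)
print("np.dot:   " + str(1000 * (time.time() - tic)) + " ms")

On typical hardware the vectorized version is orders of magnitude faster, because np.dot() runs in optimized compiled code instead of a Python loop.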

2.1 - Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions (y^) are from the true values (y). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.

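The L1 loss is defined as L1(ŷ, y) = Σ_i |y_i − ŷ_i|. A sketch of the graded function (reusing np from above):

# GRADED FUNCTION: L1

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """

    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.abs(y - yhat))
    ### END CODE HERE ###

    return loss

For yhat = [0.9, 0.2, 0.1, 0.4, 0.9] and y = [1, 0, 0, 1, 1], L1 returns 1.1.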

Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful.
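The L2 loss is defined as L2(ŷ, y) = Σ_i (y_i − ŷ_i)². Since np.dot(v, v) of a vector v is the sum of its squared entries, a sketch is:

# GRADED FUNCTION: L2

def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """

    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.dot(y - yhat, y - yhat)
    ### END CODE HERE ###

    return loss

With the same example vectors as above, L2 returns 0.43.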

What to remember:
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc…
