2019.10.8-2019.10.13 Study Progress

2019.10.8 Study Progress

**Today is my 10th day in STAR Pro. Today I finished the remaining parts of the week 2 assignment, Logistic Regression as a Neural Network:**

4.3 - Forward and Backward propagation

Implement the function propagate(), which computes the cost and the gradients of the network.

Code:


# GRADED FUNCTION: propagate

import numpy as np

def propagate(w, b, X, Y):

    
    m = X.shape[1]
    
    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                                       # compute activation; note that w must be transposed here
    cost = (-1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))     # compute cost; Y multiplies log(A) element-wise, no transpose needed
    ### END CODE HERE ###
    
    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = (1/m)*np.dot(X,(A-Y).T)        # watch the dimensions of both matrices when using np.dot
    db = (1/m)*np.sum(A-Y)
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())
    
    grads = {"dw": dw,
             "db": db}
    
    return grads, cost

Output: (screenshot omitted)
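Since the output screenshot did not survive, here is a toy call of propagate() that I put together to show the shapes involved (the sigmoid helper below matches the one defined earlier in the assignment; the input values are made up, not the notebook's official test case):

import numpy as np

def sigmoid(z):
    # helper defined earlier in the assignment
    return 1 / (1 + np.exp(-z))

# made-up toy inputs: w is (n_x, 1), X is (n_x, m), Y is (1, m)
w = np.array([[1.0], [2.0]])                  # shape (2, 1)
b = 1.5
X = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.1, -1.0]])             # shape (2, 3): 2 features, 3 examples
Y = np.array([[1, 0, 1]])                     # shape (1, 3)

grads, cost = propagate(w, b, X, Y)
print(grads["dw"].shape)   # (2, 1), same shape as w
print(grads["db"])         # a scalar
print(cost)                # scalar cross-entropy cost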

4.4 - Optimization

Use gradient descent to iteratively optimize the parameters w and b.

Code:

# GRADED FUNCTION: optimize

import numpy as np

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):

    costs = []
    
    for i in range(num_iterations):
        
        
        # Cost and gradient calculation (≈ 1-4 lines of code)
        ### START CODE HERE ### 
        grads, cost = propagate(w, b, X, Y)
        ### END CODE HERE ###
        
        # Retrieve derivatives from grads             
        dw = grads["dw"]
        db = grads["db"]
        
        # update rule (≈ 2 lines of code)
        ### START CODE HERE ###
        w = w-learning_rate*dw
        b = b-learning_rate*db
        ### END CODE HERE ###
        
        # Record the costs
        if i % 100 == 0:
            costs.append(cost)
        
        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
    
    params = {"w": w,
              "b": b}
    
    grads = {"dw": dw,
             "db": db}
    
    return params, grads, costs  # the parameters and gradients are returned in dicts

Result: (screenshot omitted)
Building on the functions above, we now write the function that predicts Y.

Code:


# GRADED FUNCTION: predict

import numpy as np

def predict(w, b, X):

    m = X.shape[1]
    Y_prediction = np.zeros((1,m))
    w = w.reshape(X.shape[0], 1)
    
    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T,X)+b)
    ### END CODE HERE ###
    
    for i in range(A.shape[1]):
        
        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
       
        if A[0, i] <= 0.5:
            Y_prediction[0, i] = 0
        else:
            Y_prediction[0, i] = 1
        ### END CODE HERE ###
    
    assert(Y_prediction.shape == (1, m))
    
    return Y_prediction

Result: (screenshot omitted)

5 - Merge all functions into a model

Combine the functions written in this assignment into a single model.

Code:

# GRADED FUNCTION: model

import numpy as np

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):

    
    ### START CODE HERE ###
    
    # initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)   # pass print_cost through so the cost can be printed during training
    
    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]                           #获取参数
    b = parameters["b"]
    
    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)            # compute predictions
    Y_prediction_train = predict(w, b, X_train)

    ### END CODE HERE ###

    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test, 
         "Y_prediction_train" : Y_prediction_train, 
         "w" : w, 
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}
    
    return d

Output: (screenshot omitted)

Questions encountered:

1. I don't understand how the dimensions of the weight parameter w and the bias parameter b are determined.
2. I don't understand how numpy orders the dimensions (shape) of a matrix.
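A note to myself on question 2 (my own experiment, not part of the assignment): numpy reports a shape as (rows, columns), so shape[0] is the number of rows and shape[1] the number of columns, and axis=0 / axis=1 tell reductions which dimension to collapse. In the course's convention X has shape (n_x, m), each column being one training example, so m = X.shape[1].

import numpy as np

M = np.zeros((3, 5))           # 3 rows, 5 columns
print(M.shape)                 # (3, 5)
print(M.shape[0], M.shape[1])  # 3 5

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # shape (2, 3)
print(A.sum(axis=0))           # [5 7 9]  -> collapses rows, one value per column
print(A.sum(axis=1))           # [ 6 15]  -> collapses columns, one value per row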

2019.10.9 Study Progress

Today is my 11th day in STAR Pro. Today I finished the week 3 programming assignment, Planar_data_classification_with_onehidden_layer_v6c, and learned how a 2-layer NN is implemented in code.

4 - Neural Network model


4.1 - Defining the neural network structure

This subsection builds a function that returns the layer sizes of the network (input, hidden, and output).
Code:

# GRADED FUNCTION: layer_sizes

def layer_sizes(X, Y):

    ### START CODE HERE ### (≈ 3 lines of code)
    n_x = X.shape[0]  # size of input layer
    n_h = 4           # the hidden layer size is hard-coded to 4 in this assignment
    n_y = Y.shape[0]  # size of output layer
    ### END CODE HERE ###
    return (n_x, n_h, n_y)

Result: (screenshot omitted)

4.2 - Initialize the model’s parameters

This subsection builds a function that initializes the weight parameters W and bias parameters b; their dimensions depend on the layer sizes n.
Code:

# GRADED FUNCTION: initialize_parameters
import numpy as np

def initialize_parameters(n_x, n_h, n_y):  # initialize the parameters W1, b1, W2, b2
    
    np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
    
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = np.random.randn(n_h,n_x) * 0.01  # multiply by a small factor after random initialization
    b1 = np.zeros((n_h,1))
    W2 = np.random.randn(n_y,n_h) * 0.01   # note that W2 has shape (n_y, n_h)
    b2 = np.zeros((n_y,1))  
    ### END CODE HERE ###
    
    assert (W1.shape == (n_h, n_x))
    assert (b1.shape == (n_h, 1))
    assert (W2.shape == (n_y, n_h))
    assert (b2.shape == (n_y, 1))
    
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    
    return parameters

Result: (screenshot omitted)

4.3 - The Loop

In this subsection, forward propagation computes the network's predictions, backward propagation computes the parameter gradients, and gradient descent uses them to optimize each parameter.

forward_propagation

Code:

# GRADED FUNCTION: forward_propagation
import numpy as np

def forward_propagation(X, parameters):  # forward propagation for the 2-layer network; returns A2 plus a cache holding Z1, A1, Z2, A2

    # Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters["W1"]   #python中dict用法,直接输入元素名即可获取元素值
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    ### END CODE HERE ###
    
    # Implement Forward Propagation to calculate A2 (probabilities)
    ### START CODE HERE ### (≈ 4 lines of code)
    Z1 = np.dot(W1,X)+b1    # no transpose of W1 is needed here: W1 was initialized with shape (n_h, n_x)
    A1 = np.tanh(Z1)        # the first layer's output is the second layer's input
    Z2 = np.dot(W2,A1)+b2   # to match the expected output, the hidden layer uses tanh() and the output layer uses sigmoid()
    A2 = sigmoid(Z2)
    ### END CODE HERE ###
    
    assert(A2.shape == (1, X.shape[1]))
    
    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}
    
    return A2, cache

Result: (screenshot omitted)

compute_cost

Code:

# GRADED FUNCTION: compute_cost
import numpy as np

def compute_cost(A2, Y, parameters):  # compute the cross-entropy cost
    m = Y.shape[1] # number of examples

    # Compute the cross-entropy cost
    ### START CODE HERE ### (≈ 2 lines of code)
    logprobs = np.multiply(np.log(A2),Y)+np.multiply(np.log(1-A2),1-Y)  # np.multiply is element-wise, so no transpose is needed
    cost = -(1/m)*np.sum(logprobs) 
    ### END CODE HERE ###
    
    cost = float(np.squeeze(cost))  # makes sure cost is the dimension we expect. 
                                    # E.g., turns [[17]] into 17 
    assert(isinstance(cost, float))
    
    return cost

Result: (screenshot omitted)

backward_propagation

Code:

# GRADED FUNCTION: backward_propagation

import numpy as np

def backward_propagation(parameters, cache, X, Y):  # backward propagation
    m = X.shape[1]
    
    # First, retrieve W1 and W2 from the dictionary "parameters".
    ### START CODE HERE ### (≈ 2 lines of code)
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    ### END CODE HERE ###
        
    # Retrieve also A1 and A2 from dictionary "cache".
    ### START CODE HERE ### (≈ 2 lines of code)
    A1 = cache["A1"]
    A2 = cache["A2"]
    ### END CODE HERE ###
    
    # Backward propagation: calculate dW1, db1, dW2, db2. 
    ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
    dZ2 = A2-Y                                            # compute each gradient from the formulas, to be used later for gradient descent
    dW2 = (1/m)*np.dot(dZ2,A1.T)
    db2 = (1/m)*np.sum(dZ2,axis=1,keepdims=True)
    dZ1 = np.dot(W2.T,dZ2)*(1 - np.power(A1, 2))
    dW1 = (1/m)*np.dot(dZ1,X.T)
    db1 = (1/m)*np.sum(dZ1,axis=1,keepdims=True)
    ### END CODE HERE ###
    
    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}
    
    return grads

Result: (screenshot omitted)

update_parameters (parameter update)

Code:

# GRADED FUNCTION: update_parameters

def update_parameters(parameters, grads, learning_rate = 1.2):

    # Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    ### END CODE HERE ###
    
    # Retrieve each gradient from the dictionary "grads"
    ### START CODE HERE ### (≈ 4 lines of code)
    dW1 = grads["dW1"]
    db1 = grads["db1"]
    dW2 = grads["dW2"]
    db2 = grads["db2"]
    ### END CODE HERE ###
    
    # Update rule for each parameter
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = W1-learning_rate*dW1       # the learning rate here is the default argument
    b1 = b1-learning_rate*db1
    W2 = W2-learning_rate*dW2
    b2 = b2-learning_rate*db2
    ### END CODE HERE ###
    
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    
    return parameters

Result: (screenshot omitted)

4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()

Code:

# GRADED FUNCTION: nn_model
import numpy as np
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):  # wraps the functions defined above
    
    np.random.seed(3)
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]
    
    # Initialize parameters
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y) 
    ### END CODE HERE ###
    
    # Loop (gradient descent)

    for i in range(0, num_iterations):
         
        ### START CODE HERE ### (≈ 4 lines of code)
        # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
        A2, cache = forward_propagation(X, parameters)   # forward pass returns A2 and the cache
        
        # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
        cost = compute_cost(A2, Y, parameters)           # compute the cost
 
        # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
        grads = backward_propagation(parameters, cache, X, Y)  # backward propagation computes the gradients
 
        # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
        parameters = update_parameters(parameters, grads)      # gradient descent update
        
        ### END CODE HERE ###
        
        # Print the cost every 1000 iterations
        if print_cost and i % 1000 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))

    return parameters

Result: (screenshot omitted)

4.5 Predictions

Code:

# GRADED FUNCTION: predict

import numpy as np

def predict(parameters, X):

    
    # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
    ### START CODE HERE ### (≈ 2 lines of code)
    A2, cache = forward_propagation(X, parameters)
    predictions = np.around(A2)  # np.around rounds each probability to 0 or 1
    ### END CODE HERE ###
    
    return predictions

Result: (screenshot omitted)

References:
https://blog.csdn.net/tz_zs/article/details/80775256
https://blog.csdn.net/tz_zs/article/details/90209220

Questions encountered:

1. How should the dimensions of the weight parameter w and bias parameter b in a neural network be determined? (Sometimes a transpose is needed inside np.dot, sometimes not.)
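A partial answer to question 1, based only on the code in these two assignments: whether a transpose is needed inside np.dot depends on how the weight matrix was stored. The logistic regression assignment stores w as a column vector of shape (n_x, 1), so w.T is needed to get a (1, m) output; this assignment initializes W1 directly with shape (n_h, n_x), so np.dot(W1, X) already lines up. A small check:

import numpy as np

n_x, n_h, m = 4, 3, 5
X = np.random.randn(n_x, m)

# logistic regression stored w as a column vector (n_x, 1) -> transpose needed
w = np.random.randn(n_x, 1)
Z_lr = np.dot(w.T, X) + 0.1          # (1, n_x) @ (n_x, m) -> (1, m)

# the 2-layer net stores W1 as (n_h, n_x) -> no transpose needed
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
Z1 = np.dot(W1, X) + b1              # (n_h, n_x) @ (n_x, m) -> (n_h, m)

print(Z_lr.shape, Z1.shape)          # (1, 5) (3, 5)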

2019.10.10 Study Progress

Today is my 12th day in STAR Pro. Today I mainly accomplished two things.

Deep neural networks (lectures)

I learned the concept of deep neural networks, their advantages over shallow networks, forward propagation in a deep network, and how the matrix dimensions should be set up, which answered the question I ran into over the past two days (a small dimension-rule sketch follows).

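To write the rule down for later reference (my own summary of the lecture, not assignment code): for layer l with n[l] units, W[l] has shape (n[l], n[l-1]), b[l] has shape (n[l], 1), and Z[l], A[l] have shape (n[l], m). A quick numpy check:

import numpy as np

layer_dims = [5, 4, 3, 1]   # n[0]=5 input features, then layers with 4, 3 and 1 units
m = 10                      # number of examples
A = np.random.randn(layer_dims[0], m)   # A[0] = X has shape (n[0], m)

for l in range(1, len(layer_dims)):
    W = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01  # (n[l], n[l-1])
    b = np.zeros((layer_dims[l], 1))                            # (n[l], 1)
    Z = np.dot(W, A) + b                                        # (n[l], m)
    A = np.tanh(Z)
    print("layer", l, "W:", W.shape, "b:", b.shape, "A:", A.shape)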

Week 4 assignment: Building_your_Deep_Neural_Network_Step_by_Step_v8a

Today I completed sections 1-5 of Part 1 of this assignment. Building on the previous assignments, the progression goes from logistic regression, to a shallow neural network, to this deep neural network, moving from shallow to deep.

3 - Initialization

3.1 - 2-layer Neural Network

This subsection reviews how the parameter dimensions of a 2-layer neural network are set up.
Code:

# GRADED FUNCTION: initialize_parameters
import numpy as np
def initialize_parameters(n_x, n_h, n_y):    
    np.random.seed(1)  # seed() fixes the starting state of the random number generator; with the same seed you always get the same "random" numbers
    
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = np.random.randn(n_h,n_x)*0.01
    b1 = np.zeros((n_h, 1))                  # note that np.zeros() takes the shape as a tuple, hence the inner parentheses
    W2 = np.random.randn(n_y,n_h)*0.01
    b2 = np.zeros((n_y, 1))
    ### END CODE HERE ###
    
    assert(W1.shape == (n_h, n_x))
    assert(b1.shape == (n_h, 1))
    assert(W2.shape == (n_y, n_h))
    assert(b2.shape == (n_y, 1))
    
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    
    return parameters    

Result: (screenshot omitted)

3.2 - L-layer Neural Network

Extending from the 2-layer network to an L-layer deep neural network.
Code (**parameter initialization**):

# GRADED FUNCTION: initialize_parameters_deep
import numpy as np

def initialize_parameters_deep(layer_dims):    
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)            # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layer_dims[l],layer_dims[l-1]) * 0.01  # layer_dims holds the number of units in each layer
        parameters['b' + str(l)] = np.zeros((layer_dims[l],1))
        ### END CODE HERE ###
        
        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

        
    return parameters

Result: (screenshot omitted)

4 - Forward propagation module

4.1 - Linear Forward

Code:

# GRADED FUNCTION: linear_forward
import numpy as np

def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter 
    cache -- a python tuple containing "A", "W" and "b" ; stored for computing the backward pass efficiently
    """
    
    ### START CODE HERE ### (≈ 1 line of code)
    Z = np.dot(W,A)+b
    ### END CODE HERE ###
    
    assert(Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)
    
    return Z, cache

Result: (screenshot omitted)

4.2 - Linear-Activation Forward

This subsection builds a function that applies one of two activation functions (sigmoid or ReLU) on top of the linear step and returns the result.
Code:

# GRADED FUNCTION: linear_activation_forward

def linear_activation_forward(A_prev, W, b, activation):

    if activation == "sigmoid":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)     # the activation cache contains "Z" (it will be fed into the corresponding backward function)
        ### END CODE HERE ###
    
    elif activation == "relu":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)
        ### END CODE HERE ###
    
    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)

    return A, cache

Result: (screenshot omitted)

d) L-Layer Model

Code:

# GRADED FUNCTION: L_model_forward

def L_model_forward(X, parameters):

    caches = []
    A = X
    L = len(parameters) // 2                  # number of layers in the neural network
    
    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A 
        ### START CODE HERE ### (≈ 2 lines of code)
        A, cache = linear_activation_forward(A_prev, parameters["W"+str(l)], parameters["b"+str(l)], activation = "relu")
        caches.append(cache)   
        ### END CODE HERE ###
    
    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    ### START CODE HERE ### (≈ 2 lines of code)
    AL, cache = linear_activation_forward(A, parameters["W"+str(L)], parameters["b"+str(L)], activation = "sigmoid") # at this point A is A[L-1]
    caches.append(cache)
    ### END CODE HERE ###
    
    assert(AL.shape == (1,X.shape[1]))
            
    return AL, caches

Result: (screenshot omitted)

5 - Cost function

Compute the cost function.
Code:

# GRADED FUNCTION: compute_cost
import numpy as np
def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    
    m = Y.shape[1]

    # Compute loss from aL and y.
    ### START CODE HERE ### (≈ 1 lines of code)
    cost =(-1/m)*np.sum(Y*np.log(AL)+(1-Y)*np.log(1-AL))
    ### END CODE HERE ###
    
    cost = np.squeeze(cost)      # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
    assert(cost.shape == ())
    
    return cost

Result: (screenshot omitted)

2019.10.11 Study Progress

Today is my 13th day in STAR Pro. Today I covered two main areas.

Deep neural networks (lectures)

I finished the remaining lectures on deep neural networks: how the building blocks of a deep network are assembled, some concepts about its parameters, and finally the relationship between neural networks and the brain.

Week 4 programming assignment

Today I finished the rest of the week 4 programming assignment: building each module of a deep neural network and finally combining them into a single model.

Building your Deep Neural Network: Step by Step

Continuing from yesterday, today I completed the remaining parts.

6 - Backward propagation module
6.1 - Linear backward

Code:

# GRADED FUNCTION: linear_backward
import numpy as np

def linear_backward(dZ, cache):

    A_prev, W, b = cache
    m = A_prev.shape[1]

    ### START CODE HERE ### (≈ 3 lines of code)
    dW = (1/m)*np.dot(dZ,A_prev.T)                         # note the transposes needed when computing dW and dA_prev
    db = (1/m)*np.sum(dZ,axis=1,keepdims=True)            
    dA_prev =np.dot(W.T,dZ) 
    ### END CODE HERE ###
    
    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)
    
    return dA_prev, dW, db

Result: (screenshot omitted)

6.2 - Linear-Activation backward

Code:

# GRADED FUNCTION: linear_activation_backward

def linear_activation_backward(dA, cache, activation):

    linear_cache, activation_cache = cache              # cache holds two caches: linear_cache = (A_prev, W, b) and activation_cache = Z
    
    if activation == "relu":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = relu_backward(dA, activation_cache)        # relu_backward and sigmoid_backward come from the assignment's helper file (see the note under the questions below)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###
        
    elif activation == "sigmoid":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###
    
    return dA_prev, dW, db

Result: (screenshot omitted)

6.3 - L-Model Backward

Code:

# GRADED FUNCTION: L_model_backward
import numpy as np 

def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches) # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
    
    # Initializing the backpropagation
    ### START CODE HERE ### (1 line of code)
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
    ### END CODE HERE ###
    
    # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]
    ### START CODE HERE ### (approx. 2 lines)
    current_cache = caches[L-1]
    grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation="sigmoid")
    ### END CODE HERE ###
    
    # Loop from l=L-2 to l=0
    for l in reversed(range(L-1)):
        # lth layer: (RELU -> LINEAR) gradients.
        # Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] 
        ### START CODE HERE ### (approx. 5 lines)
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA"+str(l+1)], current_cache, activation="relu") # l counts down from L-2 to 0 here
        grads["dA" + str(l)] = dA_prev_temp  # current_cache stores (linear_cache, activation_cache) = ((A_prev, W, b), Z)
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp
        ### END CODE HERE ###

    return grads

6.4 - Update Parameters

Code:

# GRADED FUNCTION: update_parameters

def update_parameters(parameters, grads, learning_rate):

    L = len(parameters) // 2 # number of layers in the neural network

    # Update rule for each parameter. Use a for loop.
    ### START CODE HERE ### (≈ 3 lines of code)
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)]-learning_rate*grads["dW"+str(l+1)]  #grads为L_model_backward(AL, Y, caches)
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)]-learning_rate*grads["db"+str(l+1)]  #的返回值
    ### END CODE HERE ###
    return parameters

Result: (screenshot omitted)

Questions from week 4, Part 1:

1. In 6.3, I don't understand what the object shown in the (missing) screenshot stores.
2. In 6.2, when were activation helpers such as relu_backward() defined?
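A note on question 2: relu_backward() and sigmoid_backward() are not numpy built-ins; they come from the helper file shipped with the assignment (dnn_utils), which the notebook imports at the top. A minimal sketch of what such helpers typically compute (my own reconstruction, not the exact assignment code; the cache argument is assumed to hold Z from the forward pass):

import numpy as np

def relu_backward(dA, cache):
    # relu'(Z) is 1 where Z > 0 and 0 elsewhere, so the gradients are simply masked
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, cache):
    # sigmoid'(Z) = s * (1 - s) with s = sigmoid(Z)
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)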

Deep Neural Network for Image Classification: Application

This part of the assignment combines the functions defined in Part 1 into one model, showing how a deep neural network is built and applied in supervised learning.

4 - Two-layer neural network

Code:

# GRADED FUNCTION: two_layer_model
import numpy as np
import matplotlib.pyplot as plt

def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):

    np.random.seed(1)
    grads = {}
    costs = []                              # to keep track of the cost
    m = X.shape[1]                           # number of examples
    (n_x, n_h, n_y) = layers_dims
    
    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x,n_h,n_y)
    ### END CODE HERE ###
    
    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    
    # Loop (gradient descent)

    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")
        ### END CODE HERE ###
        
        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2,Y)
        ### END CODE HERE ###
        
        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
        
        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2,cache2,activation="sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1,cache1,activation="relu")
        ### END CODE HERE ###
        
        # Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2
        
        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads,learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]
        
        # Print the cost every 100 training example
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)
       
    # plot the cost

    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    
    return parameters

Result: (screenshot omitted)

5 - L-layer Neural Network

Code:

# GRADED FUNCTION: L_layer_model
import numpy as np
import matplotlib.pyplot as plt

def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009

    np.random.seed(1)
    costs = []                         # keep track of cost
    
    # Parameters initialization. (≈ 1 line of code)
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###
    
    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###
        
        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###
    
        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###
 
        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###
                
        # Print the cost every 100 training example
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)
            
    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    
    return parameters

Result: (screenshot omitted)

2019.10.12 Study Progress

Today is my 14th day in STAR Pro. So far I have finished the Python basics, the four weeks of the Neural Networks and Deep Learning course with the corresponding four programming assignments, and two lectures of Hung-yi Lee's Machine Learning (2019) course.
Today I started the course "Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization".

Week 1: Practical Aspects of Deep Learning

Today I studied the first five lectures. I mainly learned about the decisions that have to be made when setting up a deep neural network, how to tune these settings to make the network work better, and the concept of regularization (a small sketch of the L2 regularization idea follows).
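As a note on the regularization idea from these lectures (my own sketch, not course code; the function name compute_cost_with_l2 and the parameter lambd are just for illustration): L2 regularization adds a penalty of lambda/(2m) times the sum of the squared Frobenius norms of all weight matrices to the cross-entropy cost, which discourages large weights.

import numpy as np

def compute_cost_with_l2(AL, Y, parameters, lambd):
    # cross-entropy cost plus an L2 penalty on every weight matrix W1, W2, ...
    m = Y.shape[1]
    cross_entropy = -(1 / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
    L = len(parameters) // 2
    l2_penalty = (lambd / (2 * m)) * sum(
        np.sum(np.square(parameters["W" + str(l)])) for l in range(1, L + 1))
    return cross_entropy + l2_penalty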

2019.10.13 Study Progress

Today is my 15th day in STAR Pro. Continuing from yesterday, I kept studying the practical aspects of deep learning. Today I learned about dropout regularization, other regularization methods, and normalizing inputs: how each of them works and their pros and cons (a small sketch of inverted dropout follows).
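A minimal sketch of the inverted-dropout idea from these lectures (my own illustration, not course code; dropout_forward is a name I made up): each unit is kept with probability keep_prob and zeroed otherwise, then the surviving activations are scaled by 1/keep_prob so their expected value stays the same.

import numpy as np

def dropout_forward(A, keep_prob=0.8):
    # A: activations of one layer, shape (n, m)
    D = np.random.rand(*A.shape) < keep_prob   # random mask of kept units
    A = A * D                                  # shut down the dropped units
    A = A / keep_prob                          # inverted dropout: rescale the rest
    return A, D

A1 = np.random.randn(4, 5)
A1_dropped, mask = dropout_forward(A1, keep_prob=0.8)
print(A1_dropped.shape)   # (4, 5)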
