Initialization
Arguments:
layer_dims – python array (list) containing the dimensions of each layer in our network
Returns:
parameters – python dictionary containing your parameters “W1”, “b1”, …, “WL”, “bL”:
Wl – weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl – bias vector of shape (layer_dims[l], 1)
```python
import numpy as np

def initialize_parameters_deep(layer_dims):
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network, counting the input layer

    for l in range(1, L):
        # Small random weights break symmetry; biases can safely start at zero
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters
```
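As a quick usage check (the layer sizes below are illustrative, not from the original text):

```python
# Hypothetical 3-layer network: 5 input features, hidden layers of 4 and 3 units, 1 output unit
parameters = initialize_parameters_deep([5, 4, 3, 1])
print(parameters["W1"].shape)  # (4, 5)
print(parameters["b3"].shape)  # (1, 1)
```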
Forward propagation module
Linear Forward
The linear forward module (vectorized over all the examples) computes the following equation:

$$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$$

where $A^{[0]} = X$.
Arguments:
A – activations from previous layer (or input data): (size of previous layer, number of examples)
W – weights matrix: numpy array of shape (size of current layer, size of previous layer)
b – bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z – the input of the activation function, also called pre-activation parameter
cache – a python tuple containing “A”, “W” and “b” ; stored for computing the backward pass efficiently
```python
def linear_forward(A, W, b):
    Z = np.dot(W, A) + b
    assert(Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)
    return Z, cache
```
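A quick shape check of `linear_forward` on random data (the dimensions are illustrative):

```python
# One layer with 3 units fed by 2 activations, over 4 examples
np.random.seed(1)
A = np.random.randn(2, 4)
W = np.random.randn(3, 2)
b = np.random.randn(3, 1)
Z, cache = linear_forward(A, W, b)
print(Z.shape)  # (3, 4)
```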
Linear-Activation Forward
- Sigmoid: $\sigma(Z) = \sigma(WA + b) = \frac{1}{1 + e^{-(WA + b)}}$.
  `A, activation_cache = sigmoid(Z)`
- ReLU: the mathematical formula for ReLU is $A = \mathrm{ReLU}(Z) = \max(0, Z)$.
  `A, activation_cache = relu(Z)`
- These two functions each return two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function); a minimal sketch of both helpers follows this list.
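The `sigmoid` and `relu` helpers are provided by the assignment's utility module rather than defined in this text; a minimal sketch consistent with how they are called above (each returns the activation and caches `Z`):

```python
def sigmoid(Z):
    # Element-wise logistic function; Z is cached for the backward pass
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu(Z):
    # Element-wise rectifier; Z is cached for the backward pass
    A = np.maximum(0, Z)
    return A, Z
```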
Arguments:
A_prev – activations from previous layer (or input data): (size of previous layer, number of examples)
W – weights matrix: numpy array of shape (size of current layer, size of previous layer)
b – bias vector, numpy array of shape (size of the current layer, 1)
activation – the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A – the output of the activation function, also called the post-activation value
cache – a python tuple containing “linear_cache” and “activation_cache”; stored for computing the backward pass efficiently
```python
def linear_activation_forward(A_prev, W, b, activation):
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)

    assert(A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)
    return A, cache
```
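Reusing the arrays from the `linear_forward` snippet above, both supported activations can be exercised (illustrative):

```python
A_sig, _ = linear_activation_forward(A, W, b, activation="sigmoid")
A_rel, _ = linear_activation_forward(A, W, b, activation="relu")
print(A_sig.shape, A_rel.shape)  # (3, 4) (3, 4)
```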
L-Layer Model
Arguments:
X – data, numpy array of shape (input size, number of examples)
parameters – output of initialize_parameters_deep()
Returns:
AL – last post-activation value
caches – list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1: the first L-1 come from the "relu" layers, the last from the "sigmoid" output layer)
```python
def L_model_forward(X, parameters):
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A  # the input to this layer is the previous layer's activation
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation="relu")
        caches.append(cache)

    # Implement LINEAR -> SIGMOID for the output layer. Add "cache" to the "caches" list.
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation="sigmoid")
    caches.append(cache)

    assert(AL.shape == (1, X.shape[1]))
    return AL, caches
```
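A hypothetical smoke test of the full forward pass (sizes are illustrative):

```python
np.random.seed(1)
X = np.random.randn(5, 7)                           # 5 features, 7 examples
parameters = initialize_parameters_deep([5, 4, 1])  # 2-layer network
AL, caches = L_model_forward(X, parameters)
print(AL.shape)     # (1, 7)
print(len(caches))  # 2 (one cache per layer)
```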
Cost Function
Compute the cross-entropy cost $J$, using the following formula:

$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right) \tag{7}$$
Arguments:
AL – probability vector corresponding to your label predictions, shape (1, number of examples)
Y – true “label” vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost – cross-entropy cost
```python
def compute_cost(AL, Y):
    m = Y.shape[1]
    # Cross-entropy cost, averaged over the m examples
    cost = -np.sum(np.multiply(np.log(AL), Y) + np.multiply(np.log(1 - AL), 1 - Y)) / m
    cost = np.squeeze(cost)  # e.g. turns np.array([[17]]) into 17
    assert(cost.shape == ())
    return cost
```
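A toy check with three examples (the numbers are illustrative):

```python
Y  = np.array([[1, 1, 0]])
AL = np.array([[0.8, 0.9, 0.4]])
print(compute_cost(AL, Y))  # ~0.2798
```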
Backward propagation module
Linear backward
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$:

$$dW^{[l]} = \frac{\partial \mathcal{J}}{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T}$$

$$db^{[l]} = \frac{\partial \mathcal{J}}{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}$$

$$dA^{[l-1]} = \frac{\partial \mathcal{L}}{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]}$$
Arguments:
dZ – Gradient of the cost with respect to the linear output (of current layer l)
cache – tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev – Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW – Gradient of the cost with respect to W (current layer l), same shape as W
db – Gradient of the cost with respect to b (current layer l), same shape as b
```python
def linear_backward(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)

    assert(dA_prev.shape == A_prev.shape)
    assert(dW.shape == W.shape)
    assert(db.shape == b.shape)

    return dA_prev, dW, db
```
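A shape sanity check for `linear_backward` with random values (dimensions are illustrative):

```python
np.random.seed(2)
A_prev = np.random.randn(5, 4)  # previous layer: 5 units, 4 examples
W = np.random.randn(3, 5)       # current layer: 3 units
b = np.random.randn(3, 1)
dZ = np.random.randn(3, 4)
dA_prev, dW, db = linear_backward(dZ, (A_prev, W, b))
print(dA_prev.shape, dW.shape, db.shape)  # (5, 4) (3, 5) (3, 1)
```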
Linear-Activation backward
`sigmoid_backward`: implements the backward propagation for the SIGMOID unit.
`dZ = sigmoid_backward(dA, activation_cache)`
`relu_backward`: implements the backward propagation for the RELU unit.
`dZ = relu_backward(dA, activation_cache)`
Both `sigmoid_backward` and `relu_backward` compute

$$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$

where $g(\cdot)$ is the layer's activation function; a minimal sketch of both helpers follows.
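Like `sigmoid` and `relu`, these two backward helpers come from the assignment's utility module; a minimal sketch matching the cached-`Z` convention used above:

```python
def relu_backward(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0  # ReLU passes the gradient only where Z > 0
    return dZ

def sigmoid_backward(dA, cache):
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    dZ = dA * s * (1 - s)  # sigma'(Z) = sigma(Z) * (1 - sigma(Z))
    return dZ
```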
Arguments:
dA – post-activation gradient for current layer l
cache – tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation – the activation to be used in this layer, stored as a text string: “sigmoid” or “relu”
Returns:
dA_prev – Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW – Gradient of the cost with respect to W (current layer l), same shape as W
db – Gradient of the cost with respect to b (current layer l), same shape as b
```python
def linear_activation_backward(dA, cache, activation):
    linear_cache, activation_cache = cache

    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db
```
L-Model Backward
Arguments:
AL – probability vector, output of the forward propagation (L_model_forward())
Y – true “label” vector (containing 0 if non-cat, 1 if cat)
caches – list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1), i.e. l = 0…L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads – A dictionary with the gradients
grads["dA" + str(l)] = …
grads["dW" + str(l)] = …
grads["db" + str(l)] = …
```python
def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation: derivative of the cost with respect to AL
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Lth layer (SIGMOID -> LINEAR) gradients.
    # Inputs: dAL, current_cache.
    # Outputs: grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)]
    current_cache = caches[L-1]
    grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation="sigmoid")

    # Loop from l=L-2 down to l=0
    for l in reversed(range(L-1)):
        # lth layer: (RELU -> LINEAR) gradients.
        # Inputs: grads["dA" + str(l + 1)], current_cache.
        # Outputs: grads["dA" + str(l)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)]
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, activation="relu")
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads
```
Update Parameters
Update the parameters of the model using gradient descent, where $\alpha$ is the learning rate:

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$$

$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$
Arguments:
parameters – python dictionary containing your parameters
grads – python dictionary containing your gradients, output of L_model_backward
Returns:
parameters – python dictionary containing your updated parameters
parameters["W" + str(l)] = …
parameters["b" + str(l)] = …
```python
def update_parameters(parameters, grads, learning_rate):
    L = len(parameters) // 2  # number of layers in the neural network

    # Gradient-descent update rule for each parameter
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]

    return parameters
```
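Putting it all together, a minimal training loop built from the functions above (the function name `L_layer_model` and the hyperparameter defaults are illustrative, not from the original text):

```python
def L_layer_model(X, Y, layer_dims, learning_rate=0.0075, num_iterations=2500):
    parameters = initialize_parameters_deep(layer_dims)
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)    # forward pass
        cost = compute_cost(AL, Y)                     # cross-entropy cost
        grads = L_model_backward(AL, Y, caches)        # backward pass
        parameters = update_parameters(parameters, grads, learning_rate)
        if i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    return parameters
```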