This is the Week 4 programming assignment of Course 5 (Sequence Models) in Professor Andrew Ng's Deep Learning Specialization on Coursera. I'm posting the complete code here for everyone to reference and learn from; please point out anything I got wrong. Coding isn't easy, so how about a like? ^_^
Transformer Network
Welcome to Week 4's assignment, the last assignment of Course 5 of the Deep Learning Specialization! And congratulations on making it to the last assignment of the entire Deep Learning Specialization - you're almost done!
Earlier in the course, you've implemented sequential neural networks such as RNNs, GRUs, and LSTMs. In this notebook you'll explore the Transformer architecture, a neural network that takes advantage of parallel processing and allows you to substantially speed up the training process.
After this assignment you'll be able to:
- Create positional encodings to capture sequential relationships in data
- Calculate scaled dot-product self-attention with word embeddings
- Implement masked multi-head attention
- Build and train a Transformer model
For the last time, let's get started!
Table of Contents
- Packages
- 1 - Positional Encoding
- 2 - Masking
- 3 - Self-Attention
- 4 - Encoder
- 5 - Decoder
- 6 - Transformer
- 7 - References
Packages
Run the following cell to load the packages you'll need.
import tensorflow as tf
import pandas as pd
import time
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Embedding, MultiHeadAttention, Dense, Input, Dropout, LayerNormalization
from transformers import DistilBertTokenizerFast #, TFDistilBertModel
from transformers import TFDistilBertForTokenClassification
from tqdm import tqdm_notebook as tqdm
1 - Positional Encoding
In sequence to sequence tasks, the relative order of your data is extremely important to its meaning. When you were training sequential neural networks such as RNNs, you fed your inputs into the network in order. Information about the order of your data was automatically fed into your model. However, when you train a Transformer network, you feed your data into the model all at once. While this dramatically reduces training time, there is no information about the order of your data. This is where positional encoding is useful - you can specifically encode the positions of your inputs and pass them into the network using these sine and cosine formulas:
$$PE_{(pos, 2i)} = \sin\left(\frac{pos}{10000^{\frac{2i}{d}}}\right) \tag{1}$$
$$PE_{(pos, 2i+1)} = \cos\left(\frac{pos}{10000^{\frac{2i}{d}}}\right) \tag{2}$$
- $d$ is the dimension of the word embedding and positional encoding
- $pos$ is the position of the word.
- $i$ refers to each of the different dimensions of the positional encoding.
The values of the sine and cosine equations are small enough (between -1 and 1) that when you add the positional encoding to a word embedding, the word embedding is not significantly distorted. The sum of the positional encoding and word embedding is ultimately what is fed into the model. Using a combination of these two equations helps your Transformer network attend to the relative positions of your input data. Note that while in the lectures Andrew uses vertical vectors, in this assignment all vectors are horizontal. All matrix multiplications should be adjusted accordingly.
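To make this concrete, here is a tiny numpy sketch (the sizes and embedding values are made up purely for illustration, they are not part of the assignment) that evaluates equations (1) and (2) for a single position and adds the result to a word embedding, showing that the encoding only nudges the embedding because every encoding value lies in [-1, 1]:
import numpy as np
d, pos = 4, 3                                              # made-up encoding size and word position
i = np.arange(d)                                           # dimension indices 0 .. d-1
angles = pos / (10000 ** ((2 * (i // 2)) / d))             # inner term of equations (1) and (2)
pe = np.where(i % 2 == 0, np.sin(angles), np.cos(angles))  # sin on even dims, cos on odd dims
word_embedding = np.array([0.5, -0.2, 1.3, 0.7])           # made-up embedding values
print(word_embedding + pe)                                 # what would ultimately be fed to the model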
1.1 - Sine and Cosine Angles
Get the possible angles used to compute the positional encodings by calculating the inner term of the sine and cosine equations:
$$\frac{pos}{10000^{\frac{2i}{d}}} \tag{3}$$
Exercise 1 - get_angles
Implement the function get_angles() to calculate the possible angles for the sine and cosine positional encodings.
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION get_angles
def get_angles(pos, i, d):
"""
Get the angles for the positional encoding
Arguments:
pos -- Column vector containing the positions [[0], [1], ...,[N-1]]
i -- Row vector containing the dimension span [[0, 1, 2, ..., M-1]]
d(integer) -- Encoding size
Returns:
angles -- (pos, d) numpy array
"""
    # START CODE HERE
angles = pos / (10000 ** ((2 * (i // 2)) / d))
# END CODE HERE
return angles
# UNIT TEST
def get_angles_test(target):
position = 4
d_model = 16
pos_m = np.arange(position)[:, np.newaxis]
dims = np.arange(d_model)[np.newaxis, :]
result = target(pos_m, dims, d_model)
assert type(result) == np.ndarray, "You must return a numpy ndarray"
assert result.shape == (position, d_model), f"Wrong shape. We expected: ({position}, {d_model})"
assert np.sum(result[0, :]) == 0
assert np.isclose(np.sum(result[:, 0]), position * (position - 1) / 2)
even_cols = result[:, 0::2]
odd_cols = result[:, 1::2]
assert np.all(even_cols == odd_cols), "Submatrices of odd and even columns must be equal"
limit = (position - 1) / np.power(10000,14.0/16.0)
assert np.isclose(result[position - 1, d_model -1], limit ), f"Last value must be {limit}"
print("\033[92mAll tests passed")
get_angles_test(get_angles)
# Example
position = 4
d_model = 8
pos_m = np.arange(position)[:, np.newaxis]
dims = np.arange(d_model)[np.newaxis, :]
get_angles(pos_m, dims, d_model)
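As a sanity check, the call above can be worked out by hand (verify this against the output you actually get): the denominator 10000 ** (2 * (i // 2) / 8) takes the values 1, 10, 100, 1000 across the four column pairs, so row 0 is all zeros and each later row is the position times the same pattern.
# Hand-computed expectation for position = 4, d_model = 8:
# array([[0. , 0. , 0.  , 0.  , 0.   , 0.   , 0.    , 0.    ],
#        [1. , 1. , 0.1 , 0.1 , 0.01 , 0.01 , 0.001 , 0.001 ],
#        [2. , 2. , 0.2 , 0.2 , 0.02 , 0.02 , 0.002 , 0.002 ],
#        [3. , 3. , 0.3 , 0.3 , 0.03 , 0.03 , 0.003 , 0.003 ]])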
1.2 - Sine and Cosine Positional Encodings
Now you can use the angles you computed to calculate the sine and cosine positional encodings.
$$PE_{(pos, 2i)} = \sin\left(\frac{pos}{10000^{\frac{2i}{d}}}\right)$$
$$PE_{(pos, 2i+1)} = \cos\left(\frac{pos}{10000^{\frac{2i}{d}}}\right)$$
Exercise 2 - positional_encoding
Implement the function positional_encoding() to calculate the sine and cosine positional encodings.
Reminder: Use the sine equation when $i$ is an even number and the cosine equation when $i$ is an odd number.
Additional Hints
- You may find np.newaxis useful depending on the implementation you choose (see the short sketch below).
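For instance, np.newaxis turns a 1-D array into a column or row vector so that broadcasting produces the full (positions, d) grid of angles. A minimal illustration with arbitrary sizes:
pos = np.arange(3)[:, np.newaxis]    # shape (3, 1) -- column of positions
dims = np.arange(4)[np.newaxis, :]   # shape (1, 4) -- row of dimension indices
print((pos * dims).shape)            # broadcasting gives (3, 4)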
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION positional_encoding
def positional_encoding(positions, d):
"""
Precomputes a matrix with all the positional encodings
Arguments:
positions (int) -- Maximum number of positions to be encoded
d (int) -- Encoding size
Returns:
pos_encoding -- (1, position, d_model) A matrix with the positional encodings
"""
# START CODE HERE
# initialize a matrix angle_rads of all the angles
angle_rads = get_angles(np.arange(positions)[:, np.newaxis],
np.arange(d)[np.newaxis, :],
d)
# apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
# END CODE HERE
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
# UNIT TEST
def positional_encoding_test(target):
position = 8
d_model = 16
pos_encoding = target(position, d_model)
sin_part = pos_encoding[:, :, 0::2]
cos_part = pos_encoding[:, :, 1::2]
assert tf.is_tensor(pos_encoding), "Output is not a tensor"
assert pos_encoding.shape == (1, position, d_model), f"Wrong shape. We expected: (1, {position}, {d_model})"
ones = sin_part ** 2 + cos_part ** 2
assert np.allclose(ones, np.ones((1, position, d_model // 2))), "Sum of square pairs must be 1 = sin(a)**2 + cos(a)**2"
angs = np.arctan(sin_part / cos_part)
angs[angs < 0] += np.pi
angs[sin_part.numpy() < 0] += np.pi
angs = angs % (2 * np.pi)
pos_m = np.arange(position)[:, np.newaxis]
dims = np.arange(d_model)[np.newaxis, :]
trueAngs = get_angles(pos_m, dims, d_model)[:, 0::2] % (2 * np.pi)
assert np.allclose(angs[0], trueAngs), "Did you apply sin and cos to even and odd parts respectively?"
print("\033[92mAll tests passed")
positional_encoding_test(positional_encoding)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('d')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)
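In the full model, this precomputed encoding is simply added to the output of the embedding layer before the data enters the encoder. A minimal sketch of that step, assuming the imports cell above has been run (the batch size, vocabulary size, sequence length, and model dimension are made-up illustration values, not the assignment's):
vocab_size, seq_len, d_model = 1000, 50, 512                    # made-up sizes
embedding_layer = Embedding(vocab_size, d_model)
tokens = tf.random.uniform((2, seq_len), maxval=vocab_size, dtype=tf.int32)
x = embedding_layer(tokens)                                     # (2, seq_len, d_model)
x += positional_encoding(seq_len, d_model)                      # broadcasts over the batch dimension
print(x.shape)                                                  # (2, 50, 512)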
2 - Masking
There are two types of masks that are useful when building your Transformer network: the padding mask and the look-ahead mask. Both help the softmax computation give the appropriate weights to the words in your input sentence.
2.1 - Padding Mask
Oftentimes your input sequence will exceed the maximum length of a sequence your network can process.
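As a preview of the idea (this is only an illustrative sketch, not the graded implementation that comes later in the notebook), one common convention marks real tokens with 1 and padded positions with 0, assuming token id 0 is the padding token; the mask can then be used to drive the attention scores of padded positions toward a large negative value before the softmax:
seq = tf.constant([[7, 6, 0, 0, 0],
                   [1, 2, 3, 4, 0]])                   # assumes 0 is the padding token id
mask = tf.cast(tf.math.not_equal(seq, 0), tf.float32)  # 1.0 for real tokens, 0.0 for padding
print(mask)
# the attention scores could later receive something like (1.0 - mask) * -1e9 before the softmax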