Learning Python from Scratch: Kaggle Deep Learning 001 Code Reference
I. A Single Neuron
1) Input shape
# YOUR CODE HERE
input_shape = [11]
# Check your answer
q_1.check()
2) Define a linear model
from tensorflow import keras
from tensorflow.keras import layers
# YOUR CODE HERE
model = keras.Sequential([
    layers.Dense(units=1, input_shape=input_shape)
])
# Check your answer
q_2.check()
3) Look at the weights
# YOUR CODE HERE
w, b = model.weights
print("Weights\n{}\n\nBias\n{}".format(w, b))
# Check your answer
q_3.check()
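The two tensors printed above are all a single neuron has: one weight per input feature plus a bias. As a minimal NumPy sketch (not the Keras internals, and with made-up random weights standing in for the untrained layer's initialization):

```python
import numpy as np

# A Dense(units=1) layer on 11 inputs computes y = w . x + b.
rng = np.random.default_rng(0)
w = rng.normal(size=11)   # one weight per input feature (random, like an untrained layer)
b = 0.0                   # Keras initializes the bias to zero

x = np.ones(11)           # one example with 11 features
y = x @ w + b             # for this all-ones input, y is just the sum of the weights
print(y)
```

Because the weights start random, the printed prediction is meaningless until the model is trained.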
II. Deep Neural Networks
1) Input Shape
# YOUR CODE HERE
input_shape = [8]
# Check your answer
q_1.check()
2) Define a Model with Hidden Layers
from tensorflow import keras
from tensorflow.keras import layers
# YOUR CODE HERE
model = keras.Sequential([
    # the hidden ReLU layers; only the first layer needs input_shape
    layers.Dense(units=512, activation='relu', input_shape=input_shape),
    layers.Dense(units=512, activation='relu'),
    layers.Dense(units=512, activation='relu'),
    # the linear output layer
    layers.Dense(units=1),
])
# Check your answer
q_2.check()
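Stacking layers just means feeding each layer's output into the next. A scaled-down NumPy sketch of the same pattern (tiny hidden sizes chosen arbitrarily; Keras does the equivalent with 512-unit layers):

```python
import numpy as np

def dense(x, W, b):
    # A dense layer is a matrix multiply plus a bias.
    return x @ W + b

def relu(z):
    # ReLU keeps positive values and zeroes out negatives.
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
# 8 inputs -> two hidden ReLU layers of width 4 -> 1 linear output
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(4, 1)), np.zeros(1)

x = rng.normal(size=(2, 8))        # a batch of 2 examples
h = relu(dense(x, W1, b1))         # hidden layer 1
h = relu(dense(h, W2, b2))         # hidden layer 2
out = dense(h, W3, b3)             # linear output layer
print(out.shape)                   # (2, 1): one prediction per example
```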
3) Activation Layers
### YOUR CODE HERE: rewrite this to use activation layers
model = keras.Sequential([
    layers.Dense(units=32, input_shape=[8]),
    layers.Activation('relu'),
    layers.Dense(units=32),
    layers.Activation('relu'),
    layers.Dense(1),
])
# Check your answer
q_3.check()
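Writing `Dense(..., activation='relu')` and writing `Dense(...)` followed by `Activation('relu')` are two spellings of the same computation. A quick NumPy check of that equivalence (random weights are stand-ins for a layer's parameters):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(8, 32)), np.zeros(32)
x = rng.normal(size=(1, 8))

fused = relu(x @ W + b)        # like Dense(32, activation='relu')
linear = x @ W + b             # like Dense(32) with no activation...
separate = relu(linear)        # ...followed by Activation('relu')

print(np.allclose(fused, separate))  # True: the two spellings are the same function
```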
III. Stochastic Gradient Descent
1) Add Loss and Optimizer
# YOUR CODE HERE
model.compile(
    optimizer='adam',
    loss='mae',
)
# Check your answer
q_1.check()
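The `'mae'` loss is just the mean of the absolute prediction errors, computed by hand with made-up numbers:

```python
# MAE averages the absolute differences between targets and predictions.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # 0.5
```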
2) Train Model
# YOUR CODE HERE
history = model.fit(
    X, y,
    validation_data=(X, y),  # note: this validates on the training data, so val_loss just mirrors loss
    batch_size=128,
    epochs=200,
)
# Check your answer
q_2.check()
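Under the hood, every optimizer in this lesson repeats one update rule: nudge each weight against its gradient. A toy sketch of that loop for a one-weight model (using squared error instead of the MAE above, since its gradient is simpler; the point `(x=2, y=4)` and the learning rate are arbitrary):

```python
# One gradient-descent step: w <- w - learning_rate * dL/dw.
def sgd_step(w, grad, lr=0.05):
    return w - lr * grad

# Fit y = w * x to the single point (x=2, y=4) with loss L = (y - w*x)**2.
w = 0.0
x_i, y_i = 2.0, 4.0
for _ in range(50):
    grad = -2.0 * x_i * (y_i - w * x_i)   # dL/dw
    w = sgd_step(w, grad)
print(round(w, 3))  # 2.0: the weight converges to the exact solution
```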
IV. Overfitting and Underfitting
3) Define Early Stopping Callback
from tensorflow.keras import callbacks
# YOUR CODE HERE: define an early stopping callback
early_stopping = callbacks.EarlyStopping(
    min_delta=0.001,  # minimum amount of change to count as an improvement
    patience=5,       # how many epochs to wait before stopping
    restore_best_weights=True,
)
# Check your answer
q_3.check()
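The callback's rule is simple enough to sketch in plain Python (a simplified model of the behavior, not the Keras implementation): stop once `val_loss` has gone `patience` epochs without improving by at least `min_delta`.

```python
# Sketch of the early-stopping rule configured above.
def early_stop_epoch(val_losses, min_delta=0.001, patience=5):
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, wait = loss, 0     # a real improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch         # training would stop after this epoch
    return None                      # never triggered

losses = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.6]
print(early_stop_epoch(losses))  # 7: five non-improving epochs after the best loss at epoch 2
```

With `restore_best_weights=True`, Keras would also roll the model back to its epoch-2 weights.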
V. Dropout and Batch Normalization
1) Add Dropout to Spotify Model
# YOUR CODE HERE: Add two 30% dropout layers, one after 128 and one after 64
model = keras.Sequential([
    layers.Dense(128, activation='relu', input_shape=input_shape),
    layers.Dropout(rate=0.3),
    layers.Dense(64, activation='relu'),
    layers.Dropout(rate=0.3),
    layers.Dense(1),
])
# Check your answer
q_1.check()
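At training time, a 30% dropout layer zeroes each unit with probability 0.3 and rescales the survivors so the expected activation is unchanged ("inverted dropout"). A NumPy sketch of that mechanic:

```python
import numpy as np

def dropout(x, rate, rng):
    # Zero each unit with probability `rate`, scale survivors by 1/(1-rate).
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((1, 10))
out = dropout(x, rate=0.3, rng=rng)
print(out)  # surviving units become 1/0.7 (about 1.43), dropped units become 0
```

At inference time dropout does nothing: the layer passes its input through unchanged.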
3) Add Batch Normalization Layers
# YOUR CODE HERE: Add a BatchNormalization layer before each Dense layer
model = keras.Sequential([
    layers.BatchNormalization(),
    layers.Dense(512, activation='relu', input_shape=input_shape),
    layers.BatchNormalization(),
    layers.Dense(512, activation='relu'),
    layers.BatchNormalization(),
    layers.Dense(512, activation='relu'),
    layers.BatchNormalization(),
    layers.Dense(1),
])
# Check your answer
q_3.check()
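The core of batch normalization is easy to state: standardize each feature over the batch, then apply a learnable scale and shift. A simplified NumPy sketch of the training-time computation (ignoring the moving averages Keras keeps for inference):

```python
import numpy as np

def batchnorm(x, gamma=1.0, beta=0.0, eps=1e-3):
    # Standardize each feature (column) over the batch, then scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # batch of 64, 4 features
out = batchnorm(x)
print(out.mean(axis=0).round(6))  # each feature now has mean ~0 and std ~1
```

The learnable `gamma` and `beta` let the network undo the normalization where that helps.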
VI. Binary Classification
1) Define Model
from tensorflow import keras
from tensorflow.keras import layers
# YOUR CODE HERE: define the model given in the diagram
model = keras.Sequential([
    layers.BatchNormalization(input_shape=input_shape),
    layers.Dense(256, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(rate=0.3),
    layers.Dense(256, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(rate=0.3),
    layers.Dense(1, activation='sigmoid'),
])
# Check your answer
q_1.check()
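The `'sigmoid'` on the final layer is what turns a raw score into a probability: it squashes any real number into (0, 1), with 0 mapping to exactly 0.5.

```python
import math

def sigmoid(z):
    # Squash a raw score into a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

for z in [-4.0, 0.0, 4.0]:
    print(round(sigmoid(z), 3))  # 0.018, 0.5, 0.982
```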
2) Add Optimizer, Loss, and Metric
# YOUR CODE HERE
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)
# Check your answer
q_2.check()
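Both quantities compiled here can be computed by hand. A sketch with made-up labels and probabilities (Keras also clips probabilities away from 0 and 1 to avoid `log(0)`, which this version omits):

```python
import math

def binary_crossentropy(y_true, y_pred):
    # Average of -log(p) for positive labels and -log(1-p) for negative ones.
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_accuracy(y_true, y_pred, threshold=0.5):
    # Fraction of examples whose thresholded probability matches the label.
    return sum((p > threshold) == bool(t)
               for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 0, 1, 1]
y_pred = [0.9, 0.2, 0.6, 0.4]
print(round(binary_crossentropy(y_true, y_pred), 4))  # 0.4389
print(binary_accuracy(y_true, y_pred))                # 0.75: three of four thresholded correctly
```

Note why both are needed: accuracy only jumps when a prediction crosses the threshold, while cross-entropy changes smoothly, which is what makes it usable as a training loss.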