Sorry the code crashed your kernel. Below is an optimized version that reduces memory usage and improves efficiency:
```python
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Conv2D, AveragePooling2D, Flatten, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical
import gc
# Load the MNIST dataset
(train_x, train_y), (test_x, test_y) = mnist.load_data()
# Preprocess: reshape to (N, 28, 28, 1) and scale pixel values to [0, 1]
train_x = train_x.reshape(train_x.shape[0], 28, 28, 1).astype('float32') / 255
test_x = test_x.reshape(test_x.shape[0], 28, 28, 1).astype('float32') / 255
train_y = to_categorical(train_y, 10)
test_y = to_categorical(test_y, 10)
# Build the LeNet-5 model
model = Sequential()
model.add(Conv2D(filters=6, kernel_size=(5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(5, 5), activation='relu'))
model.add(AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(units=120, activation='relu'))
model.add(Dense(units=84, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
# Compile the model
model.compile(optimizer=SGD(learning_rate=0.1), loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model one mini-batch at a time
for epoch in range(20):
    for i in range(0, train_x.shape[0], 128):
        batch_x, batch_y = train_x[i:i+128], train_y[i:i+128]
        model.train_on_batch(batch_x, batch_y)
    test_loss, test_accuracy = model.evaluate(test_x, test_y)
    print('Epoch: {}, Test loss: {}, Test accuracy: {}'.format(epoch + 1, test_loss, test_accuracy))
    gc.collect()
# Report final loss and accuracy on the test set
test_loss, test_accuracy = model.evaluate(test_x, test_y)
print('Test loss: {}, Test accuracy: {}'.format(test_loss, test_accuracy))
```
In this code, we train the model batch by batch with an explicit `for` loop and `train_on_batch` instead of calling `fit`, so only one 128-sample mini-batch is being processed at a time, which reduces peak memory pressure and helps avoid kernel crashes. We also call `gc.collect()` at the end of each epoch to encourage Python to release unused memory promptly.

After running the optimized code, the kernel crash should no longer occur.
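The slicing pattern in the training loop is worth noting: `train_x[i:i+128]` is NumPy basic slicing, which returns a view rather than a copy, so iterating this way adds almost no memory overhead. Here is a minimal, framework-free sketch of that batching pattern (the array shapes and the `iter_batches` helper are illustrative, not part of the original code); note that the last batch is smaller when the dataset size is not a multiple of the batch size:

```python
import numpy as np

def iter_batches(x, y, batch_size=128):
    """Yield consecutive (batch_x, batch_y) slices; the last batch may be smaller."""
    for i in range(0, x.shape[0], batch_size):
        # Basic slicing returns views into x and y, so no data is copied here
        yield x[i:i + batch_size], y[i:i + batch_size]

# Small stand-ins for train_x / train_y (300 samples, MNIST-like shapes)
x = np.zeros((300, 28, 28, 1), dtype=np.float32)
y = np.zeros((300, 10), dtype=np.float32)

sizes = [batch_x.shape[0] for batch_x, _ in iter_batches(x, y, batch_size=128)]
# 300 samples in batches of 128 -> sizes == [128, 128, 44]
```

Every sample is visited exactly once per epoch, including the ragged final batch, which matches what the `train_on_batch` loop above does.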