The full code is as follows:
# Training set
train_gen = train_val_generator(
    data_dir='../train',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',  # note: 'class_model' is not a valid keyword; the parameter is class_mode
    subset='training'
)
# Validation set
val_gen = train_val_generator(
    data_dir='../test',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    subset='validation'
)
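For context, the helper `train_val_generator` isn't shown in the post; a minimal sketch of how such a helper is typically built on top of Keras's `ImageDataGenerator` (the parameter names mirror the call sites above, everything else here is an assumption):

```python
def train_val_generator(data_dir, target_size, batch_size, class_mode, subset):
    # Imported inside the function so the sketch stays importable
    # even where TensorFlow is not installed.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # A single ImageDataGenerator with validation_split lets the same
    # directory be divided into 'training' and 'validation' subsets.
    datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
    return datagen.flow_from_directory(
        data_dir,
        target_size=target_size,
        batch_size=batch_size,
        class_mode=class_mode,
        subset=subset,
    )
```

With `validation_split` both generators could read from one directory; the post instead points `../train` and `../test` at separate folders, which also works as long as each call uses a matching `subset`.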
# Pull one batch from each generator and preview it
train_batch, train_label_batch = next(train_gen)
plot_images(train_batch, train_label_batch)
val_batch, val_label_batch = next(val_gen)
plot_images(val_batch, val_label_batch)
import tensorflow as tf

model = ConvModel()
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              metrics=['accuracy'])
history = model.fit(x=train_gen, steps_per_epoch=351, epochs=100,
                    validation_data=val_gen, validation_steps=88,
                    shuffle=True)
Training fails at around epoch 50 with:
MemoryError: Unable to allocate 1.50 MiB for an array with shape (32, 64, 64, 3) and data type float32
I still haven't been able to solve this. Does anyone have a good fix? Any help would be appreciated.
Update: after monitoring GPU, memory, and disk usage, I found the root cause was insufficient free space on the C: drive!!!!
Presumably some library or plugin writes temporary data to disk during training; I'm not sure how to track down which one. Cleaning up the C: drive and then expanding the partition fixed it for me.
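For anyone hitting the same thing: the free space of the drive that holds the system temp directory (where most libraries drop temporary files) can be watched from Python with the standard library alone. A minimal sketch; `report_disk_space` is just an illustrative name:

```python
import shutil
import tempfile

def report_disk_space(path):
    # shutil.disk_usage returns total/used/free in bytes for the
    # filesystem that contains `path`.
    usage = shutil.disk_usage(path)
    free_gib = usage.free / 2**30
    print(f"{path}: {free_gib:.1f} GiB free")
    return free_gib

# Temporary files usually land in the default temp directory, so
# watching it during training shows whether that drive is filling up.
tmp_dir = tempfile.gettempdir()
report_disk_space(tmp_dir)
```

Instead of expanding the C: drive, another option is to point the temp directory at a larger drive by setting the `TEMP`/`TMP` environment variables (Windows) or `TMPDIR` (Linux/macOS) before starting training.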