Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 3000 batches). You may need to use the repeat() function when building your dataset.
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50)
The code above is what triggered the error. I was training with 2000 training samples and 1000 validation samples at batch_size=32; the error occurred because steps_per_epoch and validation_steps were set incorrectly.
The correct values are steps_per_epoch = 2000/32 = 62.5, rounded up to 63, and validation_steps = 1000/32 = 31.25, rounded up to 32.
How these values are computed:
Each epoch should cover the entire training set, i.e., all 2000 training examples. Those 2000 examples are fed to the model in batches of 32 (batch_size=32), so one epoch needs 2000/32 ≈ 63 batches. In other words,
steps_per_epoch = num_training_examples / batch_size, rounded up to the nearest integer.
Likewise, validation_steps = 1000/32, rounded up to 32.
epochs is the number of times the whole dataset is used to train the model; I chose to train on the 2000 examples for 100 epochs, and that value stays the same.
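The rounding described above can be sketched with math.ceil (the sample counts and batch size are the ones from this post):

```python
import math

num_train = 2000       # training samples
num_val = 1000         # validation samples
batch_size = 32

# Round up so that the final, partially filled batch is still counted.
steps_per_epoch = math.ceil(num_train / batch_size)
validation_steps = math.ceil(num_val / batch_size)

print(steps_per_epoch)    # 63
print(validation_steps)   # 32
```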
The code should be changed to:
history = model.fit(
    train_generator,
    steps_per_epoch=63,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=32)
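To see why the original setting fails, the situation can be reproduced with a plain Python generator (a simplified stand-in for the data pipeline, not Keras itself): a non-repeating pipeline over 2000 samples can only yield 63 batches, so asking for 100 steps per epoch exhausts it mid-epoch. This is the "ran out of data" condition the error message describes; the alternative fix it mentions, repeat(), works by making the pipeline loop over the data indefinitely instead of stopping.

```python
import math

def batches(num_examples, batch_size):
    """A finite generator: yields one batch per step, then stops,
    mimicking a data pipeline that does not repeat."""
    for i in range(math.ceil(num_examples / batch_size)):
        yield i

# A 2000-sample, batch_size=32 pipeline can supply at most 63 batches.
consumed = sum(1 for _ in batches(2000, 32))
print(consumed)  # 63 batches available, fewer than the 100 steps requested
```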