Problem:
Multiple datasets each need their own model, so a neural network is trained inside a loop. The program gets slower and slower as the loop runs.
Solution:
Use the memory_profiler module to check memory usage (install with: pip install memory_profiler). A function decorated with @profile prints a line-by-line memory report each time it is called.
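If installing a third-party package is not an option, the standard library's tracemalloc gives a similar per-call check. A minimal sketch (report_memory and leaky are hypothetical names for illustration, not part of any library):

```python
import tracemalloc

def report_memory(fn, *args, **kwargs):
    """Call fn and report how much traced memory the call left allocated."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    result = fn(*args, **kwargs)
    after, _ = tracemalloc.get_traced_memory()
    print(f"{fn.__name__}: {(after - before) / 1e6:.2f} MB still allocated after call")
    tracemalloc.stop()
    return result

leaked = []
def leaky():
    # simulates state that survives the call (like a growing global graph)
    leaked.append([0.0] * 100_000)

_ = report_memory(leaky)
```

Calling report_memory(leaky) repeatedly would show each call leaving more memory behind, which is exactly the symptom to look for.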
from memory_profiler import profile
# model-building imports (assuming the tf.keras API)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, BatchNormalization, Flatten, Dense

@profile
def Conv_LSTM_2D(n_seq, n_steps, n_features):
    model = Sequential()
    model.add(ConvLSTM2D(filters=64, kernel_size=(1, 2), padding='same',
                         activation='relu', return_sequences=True,
                         input_shape=(n_seq, 1, n_steps, n_features)))
    model.add(BatchNormalization())
    model.add(ConvLSTM2D(filters=64, kernel_size=(1, 2), activation='relu'))
    model.add(Flatten())
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model
for i in range(len(data)):
    info = data[i, 17:75]
    n_steps = 4                      # window length for split_sequence
    num_features = 1
    X, y = split_sequence(info, n_steps)
    # split each 4-step window into 2 subsequences of 2 steps for ConvLSTM2D
    n_seq, n_steps = 2, 2
    model = Conv_LSTM_2D(n_seq, n_steps, num_features)  # model rebuilt on every iteration
    X = X.reshape((X.shape[0], n_seq, 1, n_steps, num_features))
    model.fit(X, y, epochs=500, verbose=0)
The @profile decorator reports memory usage on each call to Conv_LSTM_2D. As the iterations go on, the reported memory keeps rising and the program keeps slowing down. The cause is that the model is built inside the loop: every iteration adds a fresh set of ops to the global computation graph, and nothing from previous iterations is released, so memory usage grows until the program eventually crashes.
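The growth pattern can be reproduced without TensorFlow. The sketch below (pure Python, hypothetical names) mimics a framework that keeps every op it builds in a module-level graph, so models created inside a loop are never garbage-collected:

```python
import tracemalloc

# Hypothetical stand-in for a framework's global computation graph:
# every op created by build_model() stays referenced here forever.
GLOBAL_GRAPH = []

def build_model():
    ops = [float(i) for i in range(50_000)]  # stand-in for layer tensors/ops
    GLOBAL_GRAPH.append(ops)                 # the framework keeps a reference
    return ops

tracemalloc.start()
sizes = []
for i in range(5):
    model = build_model()  # re-created every iteration, like in the loop above
    current, _ = tracemalloc.get_traced_memory()
    sizes.append(current)
tracemalloc.stop()

# Memory held grows monotonically: nothing from earlier iterations is freed.
print([f"{s / 1e6:.1f} MB" for s in sizes])
```

Each iteration adds roughly the same amount of never-released memory, which matches the steadily climbing numbers memory_profiler reports.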
Moving the model-building line, model = Conv_LSTM_2D(2, 2, 1), out of the loop solves the problem. (If a genuinely fresh model is needed per dataset, a common alternative is to call tf.keras.backend.clear_session() at the top of each iteration so the previous graph is released.)
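Under the same toy setup as the growth demonstration (hypothetical names, not the actual Keras API), hoisting the model construction out of the loop keeps memory flat:

```python
import tracemalloc

GLOBAL_GRAPH = []  # hypothetical global graph that never releases references

def build_model():
    ops = [float(i) for i in range(50_000)]  # stand-in for layer tensors/ops
    GLOBAL_GRAPH.append(ops)
    return ops

tracemalloc.start()
model = build_model()  # built once, OUTSIDE the loop
baseline, _ = tracemalloc.get_traced_memory()
sizes = []
for i in range(5):
    # reuse the same model; only transient per-iteration data is allocated
    batch = [float(i)] * 1_000
    current, _ = tracemalloc.get_traced_memory()
    sizes.append(current)
tracemalloc.stop()

# Memory stays near the baseline instead of growing with the iteration count.
print(f"baseline {baseline / 1e6:.1f} MB, final {sizes[-1] / 1e6:.1f} MB")
```

Only one entry ever lands in the global graph, so each iteration's transient allocations are freed and total usage no longer depends on the number of loop iterations.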