Split the dataset first, then standardize.
1) Use fit_transform on the training data and transform on the test data
scaler_x = StandardScaler()
x_train_scaled = scaler_x.fit_transform(x_train_np)
x_test_scaled = scaler_x.transform(x_test_np)
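A minimal check (with toy stand-ins for x_train_np / x_test_np, not the real data) that transform on the test set reuses the statistics fitted on the training set:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy data standing in for x_train_np / x_test_np
x_train_np = np.array([[0.0], [2.0], [4.0]])  # mean 2.0
x_test_np = np.array([[2.0], [6.0]])

scaler_x = StandardScaler()
x_train_scaled = scaler_x.fit_transform(x_train_np)  # fits mean/std on train only
x_test_scaled = scaler_x.transform(x_test_np)        # reuses the train statistics

print(scaler_x.mean_)       # the mean comes from the training data: [2.]
print(x_test_scaled[0, 0])  # a test value equal to the train mean maps to 0.0
```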
2) The target (output) variable needs its own, separate StandardScaler()
scaler_y = StandardScaler()
y_train_scaled = scaler_y.fit_transform(y_train_np.reshape(-1, 1))
y_test_scaled = scaler_y.transform(y_test_np.reshape(-1, 1))
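As a quick illustration (toy y array, hypothetical values) of why the reshape(-1, 1) is needed: StandardScaler expects a 2-D array of shape (n_samples, n_features), so a 1-D target vector must be reshaped into a single column first.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

y_train_np = np.array([10.0, 20.0, 30.0])  # 1-D target vector

scaler_y = StandardScaler()
try:
    scaler_y.fit_transform(y_train_np)  # 1-D input raises a ValueError
except ValueError as e:
    print("1-D input rejected:", e)

# reshape(-1, 1) turns the vector into an (n_samples, 1) column
y_train_scaled = scaler_y.fit_transform(y_train_np.reshape(-1, 1))
print(y_train_scaled.shape)  # (3, 1)
```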
3) To undo the standardization, use inverse_transform on the same scaler that was fitted on y
y_train = scaler_y.inverse_transform(y_train_scaled)
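A small round-trip sketch (toy y array, hypothetical values) confirming that inverse_transform with the same fitted scaler recovers the original targets:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

y_train_np = np.array([1.0, 2.0, 3.0, 4.0])

scaler_y = StandardScaler()
y_train_scaled = scaler_y.fit_transform(y_train_np.reshape(-1, 1))

# Undo the standardization with the SAME scaler that was fit on y
y_back = scaler_y.inverse_transform(y_train_scaled)
print(np.allclose(y_back.ravel(), y_train_np))  # True
```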
4) When training a neural network, the training data can be fed to the model in batches
epochs = 1000
split_num = 4
part_size = x_train_ts.shape[0] // split_num
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    for i in range(1, split_num + 1):
        if i == split_num:
            # The last part also takes any remainder samples
            x_in = x_train_ts[(i - 1) * part_size:]
            y_in = y_train_ts[(i - 1) * part_size:]
        else:
            x_in = x_train_ts[(i - 1) * part_size: i * part_size]
            y_in = y_train_ts[(i - 1) * part_size: i * part_size]
        train(x_in, y_in, device, model, loss_fn, optimizer)
    test(x_test_ts, y_test_ts, device, model, loss_fn)
print("Done!")
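The manual slicing scheme above can be sketched with plain NumPy (stand-in data and hypothetical names, no real training) to confirm that every sample is covered exactly once, with the remainder going to the last part:

```python
import numpy as np

x_train = np.arange(10).reshape(10, 1)  # 10 samples, not divisible by 4
split_num = 4
part_size = x_train.shape[0] // split_num  # 2

seen = []
for i in range(1, split_num + 1):
    if i == split_num:
        # Last part takes the remainder as well
        x_in = x_train[(i - 1) * part_size:]
    else:
        x_in = x_train[(i - 1) * part_size: i * part_size]
    seen.append(x_in)

sizes = [len(p) for p in seen]
print(sizes)                          # [2, 2, 2, 4]
print(np.concatenate(seen).shape[0])  # 10 -> all samples used exactly once
```

Note that with this scheme the last batch can be up to twice the size of the others; PyTorch's torch.utils.data.DataLoader with batch_size set achieves the same batching with more even splits and optional shuffling.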