I keep getting occasional NaN outputs from my neural network in Keras — only about one NaN per 10,000 results. Originally I had a relu activation feeding into the final softmax layer, and that produced even more NaNs. I changed the activations of the last two dense layers from relu to sigmoid, which improved things, but I still get the occasional NaN. Any suggestions on how to eliminate the NaNs entirely?
from keras.models import Sequential
from keras.layers import (InputLayer, Convolution2D, MaxPooling2D,
                          Dropout, Flatten, Dense)

# IMG_H, IMG_W, and categories are defined elsewhere in my code.
model = Sequential()
model.add(InputLayer((1, IMG_H, IMG_W)))

# Two conv blocks with relu, max pooling, and dropout
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

# Dense head — these two layers were relu before I switched them to sigmoid
model.add(Flatten())
model.add(Dense(256, activation='sigmoid'))
model.add(Dropout(0.3))
model.add(Dense(64, activation='sigmoid'))
model.add(Dropout(0.3))
model.add(Dense(categories, activation='softmax'))
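For reference, this is roughly how I spot the bad outputs — a minimal sketch (the helper name `find_nan_rows` is just something I made up for illustration) that flags any prediction row containing a NaN or inf:

```python
import numpy as np

def find_nan_rows(preds):
    """Return indices of prediction rows containing NaN or inf."""
    bad = ~np.isfinite(preds).all(axis=1)
    return np.where(bad)[0]

# Typical use after predicting on a batch, e.g.:
#   preds = model.predict(x_batch)
#   print(find_nan_rows(preds))
```

On ~10,000 predictions this usually reports a single index, which matches the failure rate I described above.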