Choosing an Optimizer for a Neural Network
When constructing a neural network, the Keras API offers several optimizers to choose from.
An optimizer is used to minimise the loss of a network by appropriately modifying the weights and learning rate.
For regression-based problems (where the response variable is in numerical format), the most frequently encountered optimizer is Adam, a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments.
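To make this concrete, here is a minimal sketch of compiling a regression model with Adam in Keras. The input dimension, layer sizes, and learning rate below are illustrative assumptions, not taken from the worked example later in this piece:

```python
import tensorflow as tf

# A small fully connected regression network; the layer sizes and
# input dimension here are arbitrary illustrative choices.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1)  # single numerical output for regression
])

# Adam with the Keras default learning rate of 0.001; mean squared
# error is a typical loss for regression.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='mse',
              metrics=['mae'])
```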
The available optimizers in the Keras API are as follows (a short sketch of instantiating them appears after the list):
- SGD
- RMSprop
- Adam
- Adadelta
- Adagrad
- Adamax
- Nadam
- Ftrl
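Each of these can be passed to compile() either by its string identifier or as an instance from tf.keras.optimizers when hyperparameters need to be set. A minimal sketch (the placeholder model exists only so the calls are runnable):

```python
import tensorflow as tf

# A placeholder model purely so the compile() calls below are runnable.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1)
])

# Passing an optimizer by its string identifier uses default settings...
model.compile(optimizer='rmsprop', loss='mse')

# ...while instantiating the class lets you set hyperparameters such as
# the learning rate or momentum.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,
                                                momentum=0.9),
              loss='mse')
```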
The purpose of choosing the most suitable optimizer is not necessarily to achieve the highest accuracy per se, but rather to minimise the training required by the neural network to achieve a given level of accuracy. After all, it is much more efficient if a neural network can be trained to reach a certain level of accuracy after 10 epochs rather than 50, for instance.
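One way to make this comparison concrete is to train the same model with each optimizer and count how many epochs it takes to reach a target training loss. This is a minimal sketch; the synthetic data, architecture, loss threshold, and epoch budget are all illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = X @ np.array([1.5, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=1000)

def epochs_to_reach(optimizer, target_loss=0.5, max_epochs=50):
    """Train a fresh model and return the first epoch at which the
    training loss drops below target_loss (max_epochs if it never does)."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation='relu'),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer=optimizer, loss='mse')
    history = model.fit(X, y, epochs=max_epochs, verbose=0)
    for epoch, loss in enumerate(history.history['loss'], start=1):
        if loss < target_loss:
            return epoch
    return max_epochs

for name in ['sgd', 'rmsprop', 'adam']:
    print(name, epochs_to_reach(name))
```

In practice such a comparison should be run on validation metrics over several random seeds, since a single training run can be noisy.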
Predicting Average Daily Rates for Hotels
Let’s illustrate this using an example: predicting average daily rates (ADR) for hotels, with ADR as the output variable.
The