Bayesian Optimization for XGBoost
Hyperparameters: These are the values/weights that determine the learning process of an algorithm.
Certain parameters for a machine learning model: learning rate, alpha, max depth, column-sample ratios, weights, gamma, and so on.
Certain parameters for a deep learning model: number of units, number of layers, dropout ratio, kernel regularizers, activation function, and so on.
Hyperparameter optimization is the selection of the best set of parameters for a machine learning / deep learning algorithm. Often, we end up tuning or training the model manually over various possible ranges of parameters until a best-fit model is obtained. Hyperparameter tuning helps determine the optimal parameters automatically and returns the best-fit model, which is the best practice to follow while building an ML/DL model.
In this section, let's discuss one of the most accurate and successful hyperparameter optimization methods, HYPEROPT, and the algorithms it applies.
Optimization is nothing but finding a minimum of a cost function, which determines an overall better performance of a model on both the train set and the test set.
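To make this concrete, here is a minimal sketch of "optimization as cost minimization" using only plain Python: a toy one-parameter model, a mean-squared-error cost function, and a brute-force search over a grid of candidate values (the data and grid are illustrative assumptions, not from any real dataset).

```python
# Toy cost function: mean squared error of a one-parameter model y = w * x.
def cost(w):
    data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # synthetic (x, y) pairs, y is roughly 2x
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Brute-force search over a grid of candidate parameter values.
candidates = [i / 100 for i in range(100, 301)]  # 1.00 .. 3.00 in steps of 0.01
best_w = min(candidates, key=cost)  # the grid point with the lowest cost
```

Hyperparameter tuning libraries do the same thing in spirit, but search the space far more efficiently than an exhaustive grid.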
HYPEROPT: A powerful Python library that searches through a space of hyperparameter values. It implements three algorithms for minimizing the cost function:
- Random Search
- TPE (Tree-structured Parzen Estimators)
- Adaptive TPE
Importing the required packages:
import hyperopt
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
Hyperopt functions for optimization: