Understanding Neural Network Hyperparameters

What are the hyperparameters of a neural network?

The main hyperparameters of a neural network fall into eleven categories:

1. the learning rate η
2. the regularization parameter λ
3. the number of layers L
4. the number of neurons j in each hidden layer
5. the number of training epochs
6. the minibatch size
7. the encoding of the output neurons
8. the choice of cost function
9. the weight-initialization method
10. the type of activation function used by the neurons
11. the size of the training dataset

Setting the number of hidden layers and the number of neurons per layer

There are general rules of thumb rather than exact formulas. In practice, for fully connected networks a three-layer network usually performs better than a two-layer one, but going deeper (4, 5 or 6 layers) rarely improves performance much; this is in sharp contrast to convolutional networks, where depth is essential for a good recognition system. For details, see the article BP神经网络隐藏层节点数如何确定 in the references.
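
As a rough illustration, one frequently quoted rule of thumb (an assumption borrowed from the BP-network folklore discussed in the cited article, not something scikit-learn prescribes) estimates the hidden-layer size as the square root of the number of inputs plus outputs, plus a small constant; the helper name below is made up for illustration.

import math

def estimate_hidden_nodes(n_inputs, n_outputs, a=5):
    # Rule of thumb: sqrt(n_inputs + n_outputs) + a, with a usually taken in [1, 10].
    # This only gives a starting point for tuning, not an optimal value.
    return int(round(math.sqrt(n_inputs + n_outputs) + a))

# Example: 64 input features, 10 output classes
print(estimate_hidden_nodes(64, 10))  # about 14, depending on the constant a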

Main parameters of sklearn.MLPClassifier

Note: the English descriptions below are quoted from the scikit-learn documentation; a short tuning note follows each parameter.

hidden_layer_sizes : tuple, length = n_layers - 2, default (100,)
The ith element represents the number of neurons in the ith hidden layer.
Sets the sizes of the hidden layers; this is one of the most important parameters and is well suited to automated tuning, as sketched below.
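
A minimal sketch of automating that choice with GridSearchCV; the candidate layer layouts and the digits dataset are arbitrary examples, not recommended values.

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Search over a handful of hidden-layer layouts; each tuple is one candidate.
param_grid = {"hidden_layer_sizes": [(50,), (100,), (50, 50), (100, 50)]}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)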

activation : {‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default ‘relu’
Activation function for the hidden layer.
‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x
‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)).
‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x).
‘relu’, the rectified linear unit function, returns f(x) = max(0, x)
The activation function, default 'relu'; typically tuned by hand.
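
A quick sketch for comparing the four activation options on a small dataset; the scores depend entirely on the data, so this is not a claim about which activation is best in general.

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
for act in ["identity", "logistic", "tanh", "relu"]:
    clf = MLPClassifier(activation=act, max_iter=500, random_state=0)
    # 3-fold cross-validated accuracy for each activation function
    print(act, cross_val_score(clf, X, y, cv=3).mean())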

solver : {‘lbfgs’, ‘sgd’, ‘adam’}, default ‘adam’
The solver for weight optimization.
‘lbfgs’ is an optimizer in the family of quasi-Newton methods.
‘sgd’ refers to stochastic gradient descent.
‘adam’ refers to a stochastic gradient-based optimizer proposed by Kingma, Diederik, and Jimmy Ba
Note: The default solver ‘adam’ works pretty well on relatively large datasets (with thousands of training samples or more) in terms of both training time and validation score. For small datasets, however, ‘lbfgs’ can converge faster and perform better.
On relatively large datasets 'adam' is usually the better choice; pick the solver by hand based on prior knowledge of the dataset size.
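
One way to encode that prior knowledge is a simple size-based switch; the 1000-sample threshold and the helper name below are illustrative assumptions, not rules from the documentation.

from sklearn.neural_network import MLPClassifier

def make_mlp(n_samples):
    # 'lbfgs' tends to converge faster on small datasets,
    # 'adam' scales better to thousands of samples or more.
    solver = "lbfgs" if n_samples < 1000 else "adam"
    return MLPClassifier(hidden_layer_sizes=(100,), solver=solver,
                         max_iter=1000, random_state=0)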

alpha : float, optional, default 0.0001
L2 penalty (regularization term) parameter.
The regularization coefficient; it directly affects how well the model generalizes.

batch_size : int, optional, default ‘auto’
Size of minibatches for stochastic optimizers. If the solver is ‘lbfgs’, the classifier will not use minibatch. When set to “auto”, batch_size=min(200, n_samples)
Number of samples used per gradient step (the minibatch size).

learning_rate : {‘constant’, ‘invscaling’, ‘adaptive’}, default ‘constant’
Learning rate schedule for weight updates.
‘constant’ is a constant learning rate given by ‘learning_rate_init’.
‘invscaling’ gradually decreases the learning rate at each time step ‘t’ using an inverse scaling exponent of ‘power_t’. effective_learning_rate = learning_rate_init / pow(t, power_t)
‘adaptive’ keeps the learning rate constant to ‘learning_rate_init’ as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if ‘early_stopping’ is on, the current learning rate is divided by 5.
Only used when solver=‘sgd’.
This parameter is only used when solver='sgd'.
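
A minimal sketch of an SGD configuration in which the schedule actually matters; all values here are illustrative.

from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(solver="sgd",
                    learning_rate="adaptive",   # divide the rate by 5 whenever progress stalls
                    learning_rate_init=0.01,    # starting step size
                    momentum=0.9,
                    max_iter=500,
                    random_state=0)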

learning_rate_init : double, optional, default 0.001
The initial learning rate used. It controls the step-size in updating the weights. Only used when solver=’sgd’ or ‘adam’.
The initial learning rate.

power_t : double, optional, default 0.5
The exponent for inverse scaling learning rate. It is used in updating effective learning rate when the learning_rate is set to ‘invscaling’. Only used when solver=’sgd’.
Used only when solver='sgd'.

max_iter : int, optional, default 200
Maximum number of iterations. The solver iterates until convergence (determined by ‘tol’) or this number of iterations. For stochastic solvers (‘sgd’, ‘adam’), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps.
Maximum number of iterations (for the stochastic solvers, the number of epochs).
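
If the solver hits max_iter before meeting tol, scikit-learn emits a ConvergenceWarning; a small sketch for surfacing that, with max_iter=10 deliberately set far too small on an example dataset.

import warnings
from sklearn.datasets import load_digits
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
with warnings.catch_warnings():
    warnings.simplefilter("error", ConvergenceWarning)  # turn the warning into an exception
    try:
        MLPClassifier(max_iter=10, random_state=0).fit(X, y)
    except ConvergenceWarning:
        print("stopped at max_iter=10 without reaching tol; raise max_iter")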

shuffle : bool, optional, default True
Whether to shuffle samples in each iteration. Only used when solver=’sgd’ or ‘adam’.

random_state : int, RandomState instance or None, optional, default None
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

tol : float, optional, default 1e-4
Tolerance for the optimization. When the loss or score is not improving by at least tol for n_iter_no_change consecutive iterations, unless learning_rate is set to ‘adaptive’, convergence is considered to be reached and training stops.
Tolerance threshold used to decide when to stop training.

verbose : bool, optional, default False
Whether to print progress messages to stdout.
Whether to print progress information during training.

warm_start : bool, optional, default False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.

momentum : float, default 0.9
Momentum for gradient descent update. Should be between 0 and 1. Only used when solver=’sgd’.

nesterovs_momentum : boolean, default True
Whether to use Nesterov’s momentum. Only used when solver=’sgd’ and momentum > 0.

early_stopping : bool, default False
Whether to use early stopping to terminate training when validation score is not improving. If set to true, it will automatically set aside 10% of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. Only effective when solver=’sgd’ or ‘adam’

validation_fraction : float, optional, default 0.1
The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True
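
A minimal early-stopping configuration that ties these two parameters together; the values spelled out below are simply the library defaults, shown explicitly for clarity.

from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(solver="adam",
                    early_stopping=True,        # hold out part of the training data
                    validation_fraction=0.1,    # 10% used as the validation set
                    n_iter_no_change=10,        # patience, in epochs
                    tol=1e-4,
                    random_state=0)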

beta_1 : float, optional, default 0.9
Exponential decay rate for estimates of first moment vector in adam, should be in [0, 1). Only used when solver=’adam’

beta_2 : float, optional, default 0.999
Exponential decay rate for estimates of second moment vector in adam, should be in [0, 1). Only used when solver=’adam’

epsilon : float, optional, default 1e-8
Value for numerical stability in adam. Only used when solver=’adam’

n_iter_no_change : int, optional, default 10
Maximum number of epochs to not meet tol improvement. Only effective when solver=’sgd’ or ‘adam’
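
For completeness, a sketch that spells out the adam-specific knobs; the Kingma & Ba defaults (beta_1=0.9, beta_2=0.999, epsilon=1e-8) are rarely changed in practice, so this is illustration rather than a tuning recommendation.

from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(solver="adam",
                    beta_1=0.9,        # decay rate for the first-moment estimate
                    beta_2=0.999,      # decay rate for the second-moment estimate
                    epsilon=1e-8,      # numerical-stability term
                    n_iter_no_change=10,
                    max_iter=300,
                    random_state=0)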

References

BP神经网络隐藏层节点数如何确定
如何选择神经网络的超参数
sklearn 神经网络MLPclassifier参数详解
卷积神经网络训练三个概念(epoch,迭代次数,batchsize)
我搭的神经网络不work该怎么办!看看这11条新手最容易犯的错误
归一化,标准化,正则化的概念和区别
sklearn.neural_network.MLPClassifier
如何得出神经网络需要多少隐藏层、每层需要多少神经元?
