[Python Learning Notes] An Optimization Function: shgo()

Finding the optimum of a function: the shgo function

SHGO is short for simplicial homology global optimization; the function uses this algorithm to find the global minimum of an objective function. Before use, import it from the module:

from scipy.optimize import shgo

Function signature

shgo(func, bounds, args=(), constraints=None, n=None, iters=1, callback=None, 
minimizer_kwargs=None, options=None, sampling_method='simplicial')

Parameters:

func: callable, the objective function to be minimized, of the form f(x, *args), where x is the independent variable and *args is a tuple of any additional fixed parameters;

bounds: sequence, the range of each component of x; len(bounds) == len(x) must hold. Use None for a component with no bound;

args: tuple, optional, any additional fixed parameters passed to the objective function (demonstrated in the sketch after this parameter list);

constraints: dict or sequence of dict, optional, the constraint definitions. Each constraint dict has the fields:

type : str
    Constraint type: 'eq' for equality, 'ineq' for inequality.
fun : callable
    The function defining the constraint.
jac : callable, optional
    The Jacobian of `fun` (only for SLSQP).
args : sequence, optional
    Extra arguments to be passed to the function and Jacobian.

n: int, optional, the number of sampling points used in the construction of the simplicial complex; not an easy concept to grasp, but this parameter rarely needs to be set;

iters: int, optional, the number of iterations; defaults to 1;

callback: callable, optional, called after each iteration as ``callback(xk)``, where ``xk`` is the current parameter vector;

minimizer_kwargs: dict, optional, extra keyword arguments to be passed to the local minimizer scipy.optimize.minimize. Some important options are:

method : str
    The minimization method; the default is ``SLSQP``.
args : tuple
    Extra arguments passed to the objective function (``func``) and its derivatives (Jacobian, Hessian).
options : dict, optional
    Note that by default the tolerance is specified as ``{ftol: 1e-12}``.

options: dict, optional, a dictionary of solver options;

sampling_method: str or function, optional, the sampling method; the built-in choices include 'simplicial' (the default) and 'sobol'.
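
To make args, callback, and minimizer_kwargs concrete, here is a minimal sketch; the shifted quadratic, its fixed parameter a, and the printing callback are illustrative assumptions, not part of the scipy documentation:

import numpy as np
from scipy.optimize import shgo

# Toy objective: a shifted quadratic. `a` is a fixed parameter
# supplied through `args`, not an optimization variable.
def quadratic(x, a):
    return np.sum((x - a) ** 2)

# Called after each iteration with the current parameter vector.
def report(xk):
    print("current best:", xk)

result = shgo(
    quadratic,
    bounds=[(-5, 5), (-5, 5)],
    args=(1.5,),                           # fixed parameter a = 1.5
    callback=report,
    minimizer_kwargs={"method": "SLSQP"},  # local minimizer settings
)
print(result.x)  # expected to be close to [1.5, 1.5]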

Examples

First consider the problem of minimizing the Rosenbrock function, `rosen`:
    
from scipy.optimize import rosen, shgo
bounds = [(0,2), (0, 2), (0, 2), (0, 2), (0, 2)]
result = shgo(rosen, bounds)
result.x, result.fun
    (array([1., 1., 1., 1., 1.]), 2.920392374190081e-18)
    
Note that the bounds determine the dimensionality of the objective function and are therefore a required input; however, you can specify empty bounds using ``None`` or objects like ``np.inf``, which will be converted to large float numbers.
    
bounds = [(None, None), ]*4
result = shgo(rosen, bounds)
result.x
    array([0.99999851, 0.99999704, 0.99999411, 0.9999882 ])
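
Bounds given as ``np.inf`` behave the same way. A minimal sketch reusing `rosen` from above (the name bounds_inf is illustrative):

import numpy as np
bounds_inf = [(-np.inf, np.inf)] * 4
result = shgo(rosen, bounds_inf)
print(result.x)  # again close to [1, 1, 1, 1]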
    
Next, we consider the Eggholder function, a problem with several local minima and one global minimum. We will demonstrate the use of arguments and the capabilities of `shgo`.
(https://en.wikipedia.org/wiki/Test_functions_for_optimization)
    
import numpy as np

def eggholder(x):
    return (-(x[1] + 47.0)
            * np.sin(np.sqrt(abs(x[0]/2.0 + (x[1] + 47.0))))
            - x[0] * np.sin(np.sqrt(abs(x[0] - (x[1] + 47.0))))
            )

bounds = [(-512, 512), (-512, 512)]
    
`shgo` has built-in low-discrepancy sampling sequences. First, we will input 64 initial sampling points of the Sobol' sequence:
    
result = shgo(eggholder, bounds, n=64, sampling_method='sobol')
result.x, result.fun
    (array([512.        , 404.23180824]), -959.6406627208397)
    
`shgo` also returns any other local minima that were found; these can be accessed via:
    
result.xl
    array([[ 512.        ,  404.23180824],
           [ 283.0759062 , -487.12565635],
           [-294.66820039, -462.01964031],
           [-105.87688911,  423.15323845],
           [-242.97926   ,  274.38030925],
           [-506.25823477,    6.3131022 ],
           [-408.71980731, -156.10116949],
           [ 150.23207937,  301.31376595],
           [  91.00920901, -391.283763  ],
           [ 202.89662724, -269.38043241],
           [ 361.66623976, -106.96493868],
           [-219.40612786, -244.06020508]])
    
result.funl
    array([-959.64066272, -718.16745962, -704.80659592, -565.99778097,
           -559.78685655, -557.36868733, -507.87385942, -493.9605115 ,
           -426.48799655, -421.15571437, -419.31194957, -410.98477763])
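
The rows of ``result.xl`` pair element-wise with the entries of ``result.funl``, ordered from the best local minimum found to the worst (as in the output above). A minimal sketch for inspecting them together:

for x_min, f_min in zip(result.xl, result.funl):
    print(f"f({x_min}) = {f_min:.4f}")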
    
These results are useful in applications where there are many global minima and the values of other global minima are desired, or where the local minima can provide insight into the system (for example, morphologies in physical chemistry [4]_).
    
If we want to find a larger number of local minima, we can increase the number of sampling points or the number of iterations. We'll increase the number of sampling points to 64 and the number of iterations from the default of 1 to 3. Using ``simplicial`` this would have given us 64 x 3 = 192 initial sampling points.
    
result_2 = shgo(eggholder, bounds, n=64, iters=3, sampling_method='sobol')
len(result.xl), len(result_2.xl)
    (12, 20)
    
Note the difference between, e.g., ``n=192, iters=1`` and ``n=64, iters=3``. In the first case the promising points contained in the minimiser pool are processed only once. In the latter case the pool is processed after every 64 sampling points, for a total of 3 times.
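
As a quick check of that distinction, the following sketch runs both configurations (the names result_a and result_b are illustrative); the number of local minima found may differ between the two:

result_a = shgo(eggholder, bounds, n=192, iters=1, sampling_method='sobol')
result_b = shgo(eggholder, bounds, n=64, iters=3, sampling_method='sobol')
print(len(result_a.xl), len(result_b.xl))  # counts of local minima found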
    
To demonstrate solving problems with non-linear constraints, consider the following example from Hock and Schittkowski, problem 73 (cattle-feed) [3]_:
    
        minimize: f = 24.55 * x_1 + 26.75 * x_2 + 39 * x_3 + 40.50 * x_4
    
        subject to: 2.3 * x_1 + 5.6 * x_2 + 11.1 * x_3 + 1.3 * x_4 - 5     >= 0,
    
                    12 * x_1 + 11.9 * x_2 + 41.8 * x_3 + 52.1 * x_4 - 21
                        -1.645 * sqrt(0.28 * x_1**2 + 0.19 * x_2**2 +
                                      20.5 * x_3**2 + 0.62 * x_4**2)       >= 0,
    
                    x_1 + x_2 + x_3 + x_4 - 1                              == 0,
    
                    1 >= x_i >= 0 for all i
    
The approximate answer given in [3]_ is:

        f([0.6355216, -0.12e-11, 0.3127019, 0.05177655]) = 29.894378
    
def f(x):  # (cattle-feed)
    return 24.55*x[0] + 26.75*x[1] + 39*x[2] + 40.50*x[3]

def g1(x):
    return 2.3*x[0] + 5.6*x[1] + 11.1*x[2] + 1.3*x[3] - 5  # >= 0

def g2(x):
    return (12*x[0] + 11.9*x[1] + 41.8*x[2] + 52.1*x[3] - 21
            - 1.645 * np.sqrt(0.28*x[0]**2 + 0.19*x[1]**2
                              + 20.5*x[2]**2 + 0.62*x[3]**2)
            )  # >= 0

def h1(x):
    return x[0] + x[1] + x[2] + x[3] - 1  # == 0

cons = ({'type': 'ineq', 'fun': g1},
        {'type': 'ineq', 'fun': g2},
        {'type': 'eq', 'fun': h1})
bounds = [(0, 1.0),]*4
res = shgo(f, bounds, iters=3, constraints=cons)
res
         fun: 29.894378159142136
        funl: array([29.89437816])
     message: 'Optimization terminated successfully.'
        nfev: 114
         nit: 3
       nlfev: 35
       nlhev: 0
       nljev: 5
     success: True
           x: array([6.35521569e-01, 1.13700270e-13, 3.12701881e-01, 5.17765506e-02])
          xl: array([[6.35521569e-01, 1.13700270e-13, 3.12701881e-01, 5.17765506e-02]])
    
g1(res.x), g2(res.x), h1(res.x)
    (-5.062616992290714e-14, -2.9594104944408173e-12, 0.0)
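
The residuals above are zero to within the local minimizer's tolerance. As a final sanity check (a minimal sketch; the atol value is an assumption, not part of the original example), we can compare the result against the reference answer:

print(np.isclose(res.fun, 29.894378, atol=1e-4))          # matches [3]_
print(np.allclose([g1(res.x), g2(res.x), h1(res.x)], 0))  # constraints hold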