Nonlinear Optimization - MATLAB Function Library: optimset

Create or edit an optimization options structure.

Syntax

options = optimset('param1',value1,'param2',value2,...)  % set the named parameters; any parameter not set keeps its default value

options = optimset(optimfun)                              % return the default options relevant to the optimization function optimfun
options = optimset(oldopts,'param1',value1,...)           % copy an existing options structure and modify the specified parameters
options = optimset(oldopts,newopts)                       % merge newopts into oldopts; nonempty parameters in newopts overwrite those in oldopts
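
A minimal sketch of the four calling forms in practice (fminbnd is used here only as an illustrative solver name; any Optimization Toolbox function works):

% 1) Set named parameters directly; everything else keeps its default value.
opts = optimset('Display','iter','MaxIter',200);

% 2) Query the default options relevant to a particular solver.
defaults = optimset('fminbnd');

% 3) Copy an existing structure and override selected parameters.
opts2 = optimset(opts,'TolX',1e-6);

% 4) Merge two structures; nonempty fields of the second override the first.
merged = optimset(opts,opts2);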

Common parameters, with their allowed values in parentheses:

Display ('off' | 'iter' | 'final' | 'notify')
  'off' displays no output; 'iter' displays output at each iteration; 'final' displays only the final result; 'notify' displays output only when the function does not converge.

MaxFunEvals (positive integer)
  Maximum number of function evaluations allowed.

MaxIter (positive integer)
  Maximum number of iterations allowed.

TolFun (positive scalar)
  Termination tolerance on the function value.

TolX (positive scalar)
  Termination tolerance on x.
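
A short sketch of how these common parameters are typically combined and passed to a solver; the Rosenbrock objective and the fminsearch call below are illustrative choices, not part of the original text:

% Tighten the stopping tolerances and cap the amount of work allowed.
opts = optimset('Display','final', ...   % print only the final summary
                'MaxFunEvals',2000, ...  % at most 2000 objective evaluations
                'MaxIter',1000, ...      % at most 1000 iterations
                'TolFun',1e-8, ...       % stop when the objective change falls below 1e-8
                'TolX',1e-8);            % stop when the step in x falls below 1e-8

% Minimize the Rosenbrock function from a standard starting point.
rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
[xmin, fval] = fminsearch(rosen, [-1.2, 1], opts);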


In the full parameter list that follows, each parameter is marked with a scope code:

L - used by the large-scale algorithms only
M - used by the medium-scale algorithms only
B - used by both the large- and medium-scale algorithms

Each entry below gives the parameter name, its scope code (L, M, or B) in parentheses, its description, and the functions that use it.

DerivativeCheck (M)
  Compare user-supplied analytic derivatives (gradients or Jacobian) to finite differencing derivatives.
  Used by: fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

Diagnostics (B)
  Print diagnostic information about the function to be minimized or solved.
  Used by: all functions except fminbnd, fminsearch, fzero, and lsqnonneg

DiffMaxChange (M)
  Maximum change in variables for finite difference derivatives.
  Used by: fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

DiffMinChange (M)
  Minimum change in variables for finite difference derivatives.
  Used by: fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

Display (B)
  Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' displays just the final output; 'notify' displays output only if the function does not converge.
  Used by: all functions. See the individual function reference pages for the values that apply.

GoalsExactAchieve (M)
  Number of goals to achieve exactly (do not over- or underachieve).
  Used by: fgoalattain

GradConstr (M)
  Gradients for the nonlinear constraints defined by the user.
  Used by: fgoalattain, fmincon, fminimax

GradObj (B)
  Gradient(s) for the objective function(s) defined by the user.
  Used by: fgoalattain, fmincon, fminimax, fminunc, fseminf

Hessian (L)
  If 'on', the function uses the user-defined Hessian, or Hessian information (when using HessMult), for the objective function. If 'off', the function approximates the Hessian using finite differences.
  Used by: fmincon, fminunc

HessMult (L)
  Hessian multiply function defined by the user.
  Used by: fmincon, fminunc, quadprog

HessPattern (L)
  Sparsity pattern of the Hessian for finite differencing. The size of the matrix is n-by-n, where n is the number of elements in x0, the starting point.
  Used by: fmincon, fminunc

HessUpdate (M)
  Quasi-Newton updating scheme.
  Used by: fminunc

Jacobian (B)
  If 'on', the function uses the user-defined Jacobian, or Jacobian information (when using JacobMult), for the objective function. If 'off', the function approximates the Jacobian using finite differences.
  Used by: fsolve, lsqcurvefit, lsqnonlin

JacobMult (L)
  Jacobian multiply function defined by the user.
  Used by: fsolve, lsqcurvefit, lsqlin, lsqnonlin

JacobPattern (L)
  Sparsity pattern of the Jacobian for finite differencing. The size of the matrix is m-by-n, where m is the number of values in the first argument returned by the user-specified function fun, and n is the number of elements in x0, the starting point.
  Used by: fsolve, lsqcurvefit, lsqnonlin

LargeScale (B)
  Use the large-scale algorithm if possible.
  Used by: fmincon, fminunc, fsolve, linprog, lsqcurvefit, lsqlin, lsqnonlin, quadprog

LevenbergMarquardt (M)
  Chooses the Levenberg-Marquardt algorithm over the Gauss-Newton algorithm.
  Used by: lsqcurvefit, lsqnonlin

LineSearchType (M)
  Line search algorithm choice.
  Used by: fminunc, fsolve, lsqcurvefit, lsqnonlin

MaxFunEvals (B)
  Maximum number of function evaluations allowed.
  Used by: fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

MaxIter (B)
  Maximum number of iterations allowed.
  Used by: all functions except fzero and lsqnonneg

MaxPCGIter (L)
  Maximum number of PCG (preconditioned conjugate gradient) iterations allowed.
  Used by: fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

MeritFunction (M)
  Use the goal attainment/minimax merit function (multiobjective) vs. the fmincon merit function (single objective).
  Used by: fgoalattain, fminimax

MinAbsMax (M)
  Number of F(x) values for which to minimize the worst-case absolute value.
  Used by: fminimax

NonlEqnAlgorithm (M)
  Choose the Levenberg-Marquardt or Gauss-Newton algorithm over the trust-region dogleg algorithm.
  Used by: fsolve

PrecondBandWidth (L)
  Upper bandwidth of the preconditioner for PCG.
  Used by: fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

TolCon (B)
  Termination tolerance on the constraint violation.
  Used by: fgoalattain, fmincon, fminimax, fseminf

TolFun (B)
  Termination tolerance on the function value.
  Used by: fgoalattain, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, linprog (large-scale only), lsqcurvefit, lsqlin (large-scale only), lsqnonlin, quadprog (large-scale only)

TolPCG (L)
  Termination tolerance on the PCG iteration.
  Used by: fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

TolX (B)
  Termination tolerance on x.
  Used by: all functions except the medium-scale algorithms for linprog, lsqlin, and quadprog

TypicalX (L)
  Typical x values. The length of the vector is equal to the number of elements in x0, the starting point.
  Used by: fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog
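
To show how several of these parameters interact, here is a hedged sketch that supplies an analytic gradient through GradObj and asks the medium-scale fminunc algorithm to verify it with DerivativeCheck. The objective file quadObj.m is a hypothetical name introduced for this example:

% Contents of quadObj.m (illustrative file, not from the original post):
% function [f, g] = quadObj(x)
%     f = x(1)^2 + 3*x(2)^2;        % objective value
%     if nargout > 1
%         g = [2*x(1); 6*x(2)];     % analytic gradient, returned on request
%     end
% end

opts = optimset('GradObj','on', ...          % objective supplies its own gradient
                'DerivativeCheck','on', ...  % compare it once against finite differences
                'LargeScale','off', ...      % medium-scale algorithm, where DerivativeCheck applies
                'Display','iter');
[x, fval] = fminunc(@quadObj, [1; 1], opts);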

Examples

This statement creates an options structure called options in which Display is set to 'iter' and TolFun is set to 1e-8:

options = optimset('Display','iter','TolFun',1e-8)

This statement makes a copy of the options structure called options, changes the value of the TolX parameter, and stores the new values in optnew:

optnew = optimset(options,'TolX',1e-4);

This statement returns an optimization options structure that contains all the parameter names and default values relevant to the function fminbnd.

optimset('fminbnd')
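
As a follow-up sketch (the function-handle form and optimget are standard MATLAB, but this pairing is an addition to the original examples), the same defaults can be requested with a handle, read back with optimget, and passed to fminbnd:

opts = optimset(@fminbnd);              % equivalent to optimset('fminbnd')
tolx = optimget(opts,'TolX')            % read a single parameter back out of the structure
[x, fval] = fminbnd(@cos, 3, 4, opts);  % pass the options to the solver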

