Lagrange Dual Theory for NLP

  1. Classic form of nonlinear programming
    F1: \(f,h,g\) are arbitrary (not necessarily differentiable or continuous) functions:
    \[\begin{align*} \min \; & f(x)\\ \textrm{s.t.} \; & g_i(x)\leq 0,\quad i=1,\dots,m\\ & h_j(x)=0,\quad j=1,\dots,l\\ & x\in X; \end{align*}\]
    F2: (equivalent formulation, given as a figure in the original)
    F3: in vector form,
    \[\begin{align*} \min \; & f(x)\\ \textrm{s.t.} \; & g(x)\leq 0\\ & h(x)=0 \\ & x\in X; \end{align*}\]
    As \(h(x)=0\) can be equivalently written as two inequality constraints \(h(x)\leq 0\) and \(-h(x)\leq 0\), we only consider
    \[\begin{align*} \min \; & f(x)\\ \textrm{s.t.} \; & g(x)\leq 0\\ & x\in X. \end{align*}\]
    \(\color{red}{\mbox{Denote the primal feasible set by}}\) \(D=X\cap \{x|g(x)\leq 0, h(x)=0\}\).

  2. Lagrange function and its dual
    1) Lagrange function: \(\lambda\) (free, for the equality constraints) and \(\mu \geq 0\) (for the inequality constraints) are called the Lagrange multipliers.
    \[L(x,\lambda,\mu)=f(x)+\lambda^T h(x)+\mu^T g(x)\]
    2) Lagrange dual function: the infimum of the Lagrangian over \(x\in X\),
    \[g(\lambda,\mu)=\inf_{x\in X} L(x,\lambda,\mu).\]
    [Remark] Observe that the minimization defining the dual is carried out over all \(x \in X\), rather than just over the constraint set. For this reason, for any primal feasible \(\bar{x}\in D\) and dual feasible \((\bar{\lambda},\bar{\mu})\) with \(\bar{\mu} \geq 0\), we have
    \[g(\bar{\lambda},\bar{\mu})=\inf_{x\in X}L(x,\bar{\lambda},\bar{\mu})\leq L(\bar{x},\bar{\lambda},\bar{\mu})=f(\bar{x})+\bar{\lambda}^T h(\bar{x})+\bar{\mu}^T g(\bar{x})\leq f(\bar{x}),\]
    since \(h(\bar{x})=0\), \(g(\bar{x})\leq 0\) and \(\bar{\mu}\geq 0\). Taking the supremum over the dual feasible pairs and the infimum over \(x\in D\) gives
    \[d^*=\sup_{\mu\geq 0} g(\lambda,\mu)\leq \inf_{x\in D} f(x)=f^*,\]
    which is called the weak duality theorem.
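As a numerical sanity check of weak duality, here is a minimal sketch on an assumed toy problem (not from these notes): \(f(x)=(x-2)^2\), \(g(x)=x-1\), \(X=\mathbb{R}\). Analytically \(x^*=1\), \(f^*=1\), and the dual \(\mu-\mu^2/4\) peaks at \(\mu=2\) with value 1.

```python
import numpy as np

# Assumed toy problem: min (x-2)^2  s.t. g(x) = x - 1 <= 0,  X = R.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0

xs = np.linspace(-10.0, 10.0, 20001)       # grid standing in for X

def dual(mu):
    """Dual function: inf over x in X of the Lagrangian f(x) + mu*g(x)."""
    return np.min(f(xs) + mu * g(xs))

mus = np.linspace(0.0, 10.0, 1001)         # dual feasible region: mu >= 0
d_star = max(dual(mu) for mu in mus)       # d* = sup_{mu >= 0} dual(mu)

feasible = xs[g(xs) <= 0]                  # primal feasible grid points
f_star = f(feasible).min()                 # f* = inf over the feasible set

print(d_star, f_star)                      # both ~1.0: d* <= f*, and here the gap is 0
```

Every dual value is a lower bound on every feasible primal value, so the computed `d_star` can never exceed `f_star`; for this convex instance the two coincide.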

  3. Weak duality
    If strong duality holds, then an optimal pair \((x,\lambda,\mu)\) must satisfy the KKT conditions:
    \[\mbox{(optimal solution, geometric multiplier) pair} \Leftrightarrow \mbox{duality gap}=0 \Leftrightarrow \mbox{saddle point of } L \Rightarrow \mbox{KKT conditions}.\]
    If the primal problem is convex, then the KKT conditions are also sufficient.

  4. Strong duality: convex optimization with Slater's condition
    \(f, g\) are convex, \(h\) is affine, and there exists a point \(x\) in the relative interior of the constraint set at which all of the (nonlinear convex) inequality constraints hold with strict inequality.
    In this case the duality gap vanishes, and the KKT conditions are both necessary and sufficient for \(x\) to be a global solution of the primal problem.
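A concrete (assumed) instance where Slater's condition holds: minimize \(x_1^2+x_2^2\) subject to the affine constraint \(x_1+x_2=1\). The inner minimizer of the Lagrangian is \(x_1=x_2=-\lambda/2\), so the dual is \(q(\lambda)=-\lambda^2/2-\lambda\), maximized at \(\lambda^*=-1\). The sketch below checks that the gap is zero.

```python
import numpy as np

# Assumed convex problem: min x1^2 + x2^2  s.t. h(x) = x1 + x2 - 1 = 0.
# h is affine and f is convex, so Slater's condition holds and there is no gap.
# Analytic primal optimum: x* = (1/2, 1/2), f* = 1/2.

# Dual function q(lam) = inf_x ||x||^2 + lam*(x1 + x2 - 1) = -lam^2/2 - lam,
# obtained by plugging in the inner minimizer x1 = x2 = -lam/2.
q = lambda lam: -lam ** 2 / 2.0 - lam

lams = np.linspace(-5.0, 5.0, 100001)   # lambda is free (equality constraint)
d_star = q(lams).max()                  # maximized at lam* = -1

f_star = 0.5                            # primal optimum, from x* = (1/2, 1/2)

print(d_star, f_star)                   # equal: zero duality gap
```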

  5. Saddle point and duality gap
    A pair \((\bar{x},(\bar{\lambda},\bar{\mu}))\) with \(\bar{x}\in X\) and \(\bar{\mu}\geq 0\) is a saddle point of the Lagrangian if
    \[L(\bar{x},\lambda,\mu)\leq L(\bar{x},\bar{\lambda},\bar{\mu})\leq L(x,\bar{\lambda},\bar{\mu})\quad \mbox{for all } x\in X,\; \mu\geq 0.\]
    \((\bar{x},(\bar{\lambda},\bar{\mu}))\) is a saddle point of \(L\) if and only if \(\bar{x}\) is primal optimal, \((\bar{\lambda},\bar{\mu})\) is dual optimal, and the duality gap is zero.
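The two saddle inequalities can be checked numerically. A sketch on an assumed toy problem: \(\min (x-2)^2\) s.t. \(x-1\leq 0\), whose saddle point is \((x^*,\mu^*)=(1,2)\).

```python
import numpy as np

# Assumed toy problem: min (x-2)^2  s.t. x - 1 <= 0.
# Candidate saddle point of L(x, mu) = (x-2)^2 + mu*(x-1): (x*, mu*) = (1, 2).
L = lambda x, mu: (x - 2.0) ** 2 + mu * (x - 1.0)
x_star, mu_star = 1.0, 2.0

xs = np.linspace(-10.0, 10.0, 10001)    # grid over x in X
mus = np.linspace(0.0, 10.0, 10001)     # grid over mu >= 0

# Left inequality: L(x*, mu) <= L(x*, mu*) for all mu >= 0
# (here g(x*) = 0, so the left side is constant in mu).
assert np.all(L(x_star, mus) <= L(x_star, mu_star) + 1e-12)

# Right inequality: L(x*, mu*) <= L(x, mu*) for all x,
# since L(x, 2) = (x-1)^2 + 1 is minimized at x* = 1.
assert np.all(L(x_star, mu_star) <= L(xs, mu_star) + 1e-12)

print("saddle inequalities hold on the grid")
```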

  6. Saddle point and KKT conditions
    If \(L\) is differentiable and \((\bar{x},(\bar{\lambda},\bar{\mu}))\) is a saddle point, then the KKT conditions hold at \(\bar{x}\):
    \[\nabla f(\bar{x})+\bar{\lambda}^T\nabla h(\bar{x})+\bar{\mu}^T\nabla g(\bar{x})=0,\quad h(\bar{x})=0,\quad g(\bar{x})\leq 0,\quad \bar{\mu}\geq 0,\quad \bar{\mu}^Tg(\bar{x})=0.\]
    \(\color{red}{\mbox{Remark:}}\) \(\mu^Tg(x)=0\) together with \(g(x)\leq 0\) and \(\mu\geq 0\) implies \(\mu_i g_i(x)=0\) for each \(i\) (complementary slackness).

  7. A KKT point is an optimizer for convex optimization
    For a convex problem, any point which satisfies the KKT conditions is an optimizer, whether or not Slater's condition holds; conversely, if Slater's condition holds, then every optimizer must satisfy the KKT conditions.
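To make this concrete, here is a sketch that verifies all four KKT conditions at a candidate point of an assumed convex problem (minimize \((x_1-2)^2+(x_2-2)^2\) subject to \(x_1+x_2\leq 2\)); since the problem is convex, passing these checks certifies global optimality.

```python
import numpy as np

# Assumed convex problem: min (x1-2)^2 + (x2-2)^2  s.t. g(x) = x1 + x2 - 2 <= 0.
# Candidate KKT point: x* = (1, 1) with multiplier mu* = 2.
x = np.array([1.0, 1.0])
mu = 2.0

grad_f = 2.0 * (x - 2.0)           # gradient of the objective at x
grad_g = np.array([1.0, 1.0])      # gradient of the constraint
g = x.sum() - 2.0                  # constraint value at x

stationarity = grad_f + mu * grad_g   # should be the zero vector

# KKT checks: stationarity, primal feasibility, dual feasibility,
# and complementary slackness mu * g(x) = 0.
assert np.allclose(stationarity, 0.0)
assert g <= 0 and mu >= 0 and abs(mu * g) < 1e-12

print(stationarity, g, mu * g)
```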

Reposted from: https://www.cnblogs.com/mathlife/p/9060544.html
