Convex Optimization Reading Notes (3)

4 Convex optimization problems

4.1 Optimization problems

4.1.1 Basic terminology

We use the notation
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$
to describe the problem of finding an $x$ that minimizes $f_0(x)$ among all $x$ that satisfy the conditions $f_i(x)\leq 0$, $i=1,\dots,m$, and $h_i(x)=0$, $i=1,\dots,p$. We call $x\in\mathbf{R}^n$ the optimization variable and the function $f_0:\mathbf{R}^n\rightarrow\mathbf{R}$ the objective function or cost function. The inequalities $f_i(x)\leq 0$ are called inequality constraints, and the corresponding functions $f_i:\mathbf{R}^n\rightarrow\mathbf{R}$ are called the inequality constraint functions. The equations $h_i(x)=0$ are called the equality constraints, and the functions $h_i:\mathbf{R}^n\rightarrow\mathbf{R}$ are the equality constraint functions.
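As a concrete illustration, here is a minimal sketch of posing a problem in this form with the cvxpy modeling library (my choice of tool; the toy objective and constraints are illustrative, not from the book):

```python
# Minimal sketch: minimize f0(x) subject to one inequality and one
# equality constraint, using cvxpy (toy data).
import cvxpy as cp
import numpy as np

n = 2
x = cp.Variable(n)                              # optimization variable x in R^n
f0 = cp.sum_squares(x - np.array([1.0, 2.0]))   # objective f_0(x)
constraints = [
    cp.sum(x) - 1 <= 0,                         # inequality constraint f_1(x) <= 0
    x[0] - x[1] == 0,                           # equality constraint h_1(x) = 0
]
prob = cp.Problem(cp.Minimize(f0), constraints)
prob.solve()
print(prob.value, x.value)                      # optimal value p* and an optimal x*
```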

Optimal and locally optimal points

We say $x^\star$ is an optimal point, or solves the problem above, if $x^\star$ is feasible and $f_0(x^\star)=p^\star$.

The set of all optimal points is the optimal set, denoted
$$
X_{\rm opt}=\{x\mid f_i(x)\leq 0,\ i=1,\dots,m,\ h_i(x)=0,\ i=1,\dots,p,\ f_0(x)=p^\star\}
$$
A feasible point $x$ with $f_0(x)\leq p^\star+\epsilon$ (where $\epsilon>0$) is called $\epsilon$-suboptimal, and the set of all $\epsilon$-suboptimal points is called the $\epsilon$-suboptimal set for the problem.

We say a feasible point $x$ is locally optimal if there is an $R>0$ such that
$$
f_0(x)=\inf\{f_0(z)\mid f_i(z)\leq 0,\ i=1,\dots,m,\ h_i(z)=0,\ i=1,\dots,p,\ \|z-x\|_2\leq R\}
$$
or, in other words, $x$ solves the optimization problem
$$
\begin{aligned}
{\rm minimize}\quad & f_0(z)\\
{\rm subject\ to}\quad & f_i(z)\leq 0,\quad i=1,\dots,m \\
& h_i(z)=0,\quad i=1,\dots,p \\
& \|z-x\|_2\leq R
\end{aligned}
$$

Feasibility problems

If the objective is identically zero, the optimal value is either $0$ (if the feasible set is nonempty) or $\infty$ (if it is empty), so the problem reduces to determining whether the constraints are consistent. We call this the feasibility problem, and will sometimes write it as
$$
\begin{aligned}
{\rm find}\quad & x\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$

4.1.2 Expressing problems in standard form

A standard form of an optimization problem is
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$

Maximization problems

We solve the problem of maximizing $f_0$ by minimizing $-f_0$ subject to the same constraints.

4.1.3 Equivalent problems

We call two problems equivalent if from a solution of one, a solution of the other is readily found, and vice versa.

Change of variables

Suppose $\phi:\mathbf{R}^n\rightarrow\mathbf{R}^n$ is one-to-one, with image covering the problem domain $D$. We define functions $\tilde{f}_i$ and $\tilde{h}_i$ as
$$
\tilde{f}_i(z)=f_i(\phi(z)),\quad i=0,\dots,m, \qquad \tilde{h}_i(z)=h_i(\phi(z)),\quad i=1,\dots,p
$$
The problem in the variable $z$,
$$
\begin{aligned}
{\rm minimize}\quad & \tilde{f}_0(z)\\
{\rm subject\ to}\quad & \tilde{f}_i(z)\leq 0,\quad i=1,\dots,m \\
& \tilde{h}_i(z)=0,\quad i=1,\dots,p,
\end{aligned}
$$
is equivalent to the original.
We say that the problems are related by the change of variable or substitution of variable $x=\phi(z)$.

Transformation of objective and constraint functions

Suppose that $\psi_0:\mathbf{R}\rightarrow\mathbf{R}$ is monotone increasing, $\psi_1,\dots,\psi_m:\mathbf{R}\rightarrow\mathbf{R}$ satisfy $\psi_i(u)\leq 0$ if and only if $u\leq 0$, and $\psi_{m+1},\dots,\psi_{m+p}:\mathbf{R}\rightarrow\mathbf{R}$ satisfy $\psi_i(u)=0$ if and only if $u=0$. We define functions $\tilde{f}_i$ and $\tilde{h}_i$ as the compositions
$$
\tilde{f}_i(x)=\psi_i(f_i(x)),\quad i=0,\dots,m, \qquad \tilde{h}_i(x)=\psi_{m+i}(h_i(x)),\quad i=1,\dots,p
$$
The following problem is equivalent to the original:
$$
\begin{aligned}
{\rm minimize}\quad & \tilde{f}_0(x)\\
{\rm subject\ to}\quad & \tilde{f}_i(x)\leq 0,\quad i=1,\dots,m \\
& \tilde{h}_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$

Slack variables

One simple transformation is based on the observation that $f_i(x)\leq 0$ if and only if there is an $s_i\geq 0$ that satisfies $f_i(x)+s_i=0$. The following problem, with variables $x\in\mathbf{R}^n$ and $s\in\mathbf{R}^m$, is equivalent:
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x)\\
{\rm subject\ to}\quad & s_i\geq 0,\quad i=1,\dots,m \\
& f_i(x)+s_i=0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$
The new variables $s_i$ are called slack variables.
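A quick numerical check of this equivalence (toy data of my own), again with cvxpy:

```python
# Verify that adding a slack variable leaves the optimal value unchanged.
import cvxpy as cp

x = cp.Variable(2)
f0 = cp.sum_squares(x)

# Original form: f_1(x) = 1 - x_0 - x_1 <= 0.
orig = cp.Problem(cp.Minimize(f0), [1 - x[0] - x[1] <= 0])
orig.solve()

# Slack form: f_1(x) + s = 0 with s >= 0.
s = cp.Variable()
slacked = cp.Problem(cp.Minimize(f0), [s >= 0, 1 - x[0] - x[1] + s == 0])
slacked.solve()

print(orig.value, slacked.value)   # both 0.5, attained at x = (0.5, 0.5)
```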

Eliminating equality constraints

Suppose the function $\phi:\mathbf{R}^k\rightarrow\mathbf{R}^n$ is such that $x$ satisfies the equality constraints $h_i(x)=0$, $i=1,\dots,p$, if and only if there is some $z\in\mathbf{R}^k$ such that $x=\phi(z)$. The optimization problem
$$
\begin{aligned}
{\rm minimize}\quad & \tilde{f}_0(z)=f_0(\phi(z))\\
{\rm subject\ to}\quad & \tilde{f}_i(z)=f_i(\phi(z))\leq 0,\quad i=1,\dots,m
\end{aligned}
$$
in the variable $z$ is then equivalent to the original problem, with the equality constraints eliminated.

Eliminating linear equality constraints

The solutions of $Ax=b$ can be written as $x=Fz+x_0$, where $x_0$ is any particular solution and the columns of $F\in\mathbf{R}^{n\times k}$ span the nullspace of $A$. Substituting $x=Fz+x_0$ into the original problem yields the problem
$$
\begin{aligned}
{\rm minimize}\quad & f_0(Fz+x_0)\\
{\rm subject\ to}\quad & f_i(Fz+x_0)\leq 0,\quad i=1,\dots,m
\end{aligned}
$$
in the variable $z\in\mathbf{R}^k$.
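In practice $x_0$ and $F$ can be computed numerically; here is a hedged numpy sketch (toy data of my own, using least squares and the SVD):

```python
# Parametrize {x | Ax = b} as x = F z + x0: x0 from least squares,
# F from the rows of V^T spanning the null space of A.
import numpy as np

rng = np.random.default_rng(0)
p, n = 2, 4
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)

x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution of Ax = b
_, sv, Vt = np.linalg.svd(A)
rank = int(np.sum(sv > 1e-10))
F = Vt[rank:].T                              # columns of F span null(A)

z = rng.standard_normal(F.shape[1])          # any z gives a feasible x
x = F @ z + x0
print(np.allclose(A @ x, b))                 # True
```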

Introducing equality constraints

We can also introduce equality constraints and new variables into a problem.

Optimizing over some variables

We always have
$$
\inf_{x,y}f(x,y)=\inf_x\inf_yf(x,y)
$$
In other words, we can always minimize a function by first minimizing over some of the variables, and then minimizing over the remaining ones.
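For instance, take the convex function $f(x,y)=(x-y)^2+(y-1)^2$ (a small worked example of my own, not from the book). Minimizing over $y$ with $x$ fixed gives $y=(x+1)/2$, so
$$
\inf_y f(x,y)=\tfrac{1}{2}(x-1)^2,
$$
and minimizing this over $x$ gives $x=1$ and optimal value $0$, matching the joint minimization at $(x,y)=(1,1)$.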

Epigraph problem form

The epigraph form of the standard problem is the problem
$$
\begin{aligned}
{\rm minimize}\quad & t\\
{\rm subject\ to}\quad & f_0(x)-t\leq 0\\
& f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$
We can easily see that it is equivalent to the original problem: $(x,t)$ is optimal if and only if $x$ is optimal and $t=f_0(x)$.

Implicit and explicit constraints

As an extreme example, the standard form problem can be expressed as the unconstrained problem
$$
{\rm minimize}\quad F(x)
$$
where we define the function $F$ as $f_0$, but with domain restricted to the feasible set:
$$
\mathbf{dom}\ F=\{x\in\mathbf{dom}\ f_0\mid f_i(x)\leq 0,\ i=1,\dots,m,\ h_i(x)=0,\ i=1,\dots,p\}
$$

4.1.4 Parameter and oracle problem descriptions

4.2 Convex optimization

4.2.1 Convex optimization problems in standard form

A convex optimization problem is one of the form
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& a_i^Tx=b_i,\quad i=1,\dots,p
\end{aligned}
$$
where $f_0,\dots,f_m$ are convex functions.

If $f_0$ is quasiconvex instead of convex, we say the problem is a (standard form) quasiconvex optimization problem.

Concave maximization problems

If $f_0$ is concave and is to be maximized (with the constraint functions as before), the problem is called a concave maximization problem. It is readily solved by minimizing the convex objective function $-f_0$.

Abstract form convex optimization problem

4.2.2 Local and global optima

A fundamental property of convex optimization problems is that any locally optimal point is also (globally) optimal.

4.2.3 An optimality criterion for differentiable f0

Suppose that the objective $f_0$ in a convex optimization problem is differentiable. Then $x$ is optimal if and only if $x\in X$, the feasible set, and
$$
\nabla f_0(x)^T(y-x)\geq 0 \quad \text{for all } y\in X
$$

4.2.4 Equivalent convex problems

Eliminating equality constraints
Introducing equality constraints
Slack variables

Introducing slack variables for linear inequalities preserves convexity of a problem.

Epigraph problem form
Minimizing over some variables

4.2.5 Quasiconvex optimization

Solving a quasiconvex optimization problem can be reduced to solving a sequence of convex optimization problems.

Locally optimal solutions and optimality conditions

The most important difference between convex and quasiconvex optimization is that a quasiconvex optimization problem can have locally optimal solutions that are not (globally) optimal.

It follows from the first-order condition for quasiconvexity that $x$ is optimal if
$$
\nabla f_0(x)^T(y-x)>0 \quad \text{for all } y\in X\setminus\{x\}
$$

Quasiconvex optimization via convex feasibility problems

Let $\phi_t:\mathbf{R}^n\rightarrow\mathbf{R}$, $t\in\mathbf{R}$, be a family of convex functions that satisfy
$$
f_0(x)\leq t \iff \phi_t(x)\leq 0,
$$
and also, for each $x$, $\phi_s(x)\leq\phi_t(x)$ whenever $s\geq t$.

Let $p^\star$ denote the optimal value of the quasiconvex optimization problem. For a fixed $t$, consider the convex feasibility problem
$$
\begin{aligned}
{\rm find}\quad & x\\
{\rm subject\ to}\quad & \phi_t(x)\leq 0 \\
& f_i(x)\leq 0,\quad i=1,\dots,m \\
& Ax=b
\end{aligned}
$$
If it is feasible, then $p^\star\leq t$; if it is infeasible, then $p^\star\geq t$. This suggests bisection: keep an interval $[l,u]$ known to contain $p^\star$, solve the feasibility problem at its midpoint $t=(l+u)/2$, set $u=t$ if feasible and $l=t$ otherwise, and repeat until the width of the interval is small enough, as in the sketch below.
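A hedged Python sketch of this bisection loop; the helper `feasible(t)` is a hypothetical user-supplied routine that solves the convex feasibility problem above (here implemented with cvxpy for a toy linear-fractional objective $(x+2)/(x+1)$ on $0\leq x\leq 3$):

```python
import cvxpy as cp

def bisect(feasible, l, u, eps=1e-6):
    """Bisection for quasiconvex optimization: shrink [l, u] around p*."""
    x_best = None
    while u - l > eps:
        t = (l + u) / 2.0
        ok, x = feasible(t)
        if ok:
            u, x_best = t, x        # problem feasible at t, so p* <= t
        else:
            l = t                   # infeasible, so p* > t
    return u, x_best

def feasible(t):
    # phi_t(x) = (x + 2) - t (x + 1) <= 0 encodes (x+2)/(x+1) <= t on x+1 > 0.
    x = cp.Variable()
    cons = [x + 2 - t * (x + 1) <= 0, x >= 0, x <= 3]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL, x.value

p_star, x_star = bisect(feasible, l=1.0, u=2.0)
print(p_star, x_star)               # about 1.25, attained at x = 3
```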

4.3 Linear optimization problems

When the objective and constraint functions are all affine, the problem is called a linear program (LP). A general linear program has the form
$$
\begin{aligned}
{\rm minimize}\quad & c^Tx+d \\
{\rm subject\ to}\quad & Gx\preceq h \\
& Ax=b
\end{aligned}
$$
where $G\in\mathbf{R}^{m\times n}$ and $A\in\mathbf{R}^{p\times n}$.

Standard and inequality form linear programs

In a standard form LP the only inequalities are componentwise nonnegativity constraints $x\succeq 0$:
$$
\begin{aligned}
{\rm minimize}\quad & c^Tx \\
{\rm subject\ to}\quad & Ax=b\\
& x\succeq 0
\end{aligned}
$$
If the LP has no equality constraints, it is called an inequality form LP:
$$
\begin{aligned}
{\rm minimize}\quad & c^Tx \\
{\rm subject\ to}\quad & Ax\preceq b
\end{aligned}
$$
Converting LPs to standard form

It is sometimes useful to transform a general LP to one in standard form. The first step is to introduce slack variables $s_i$ for the inequalities, which results in
$$
\begin{aligned}
{\rm minimize}\quad & c^Tx+d \\
{\rm subject\ to}\quad & Gx+s=h \\
& Ax=b \\
& s\succeq 0
\end{aligned}
$$
The second step is to express the variable $x$ as the difference of two nonnegative variables $x^+$ and $x^-$, i.e., $x=x^+-x^-$ with $x^+,x^-\succeq 0$. This yields the problem
$$
\begin{aligned}
{\rm minimize}\quad & c^T(x^+-x^-)+d \\
{\rm subject\ to}\quad & G(x^+-x^-)+s=h\\
& A(x^+-x^-)=b \\
& x^+,\ x^-,\ s\succeq 0
\end{aligned}
$$
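As a hedged numerical check (toy data of my own, constructed to be feasible and bounded), both forms give the same optimal value:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 1
c, d = rng.standard_normal(n), 0.5
A = rng.standard_normal((p, n))

# Build G, h so the box |x_i| <= 1 is included (boundedness) and
# x_feas is strictly feasible.
G0 = rng.standard_normal((4, n))
x_feas = rng.uniform(-0.5, 0.5, n)
G = np.vstack([G0, np.eye(n), -np.eye(n)])
h = np.concatenate([G0 @ x_feas + 1.0, np.ones(n), np.ones(n)])
b = A @ x_feas

# General form LP.
x = cp.Variable(n)
orig = cp.Problem(cp.Minimize(c @ x + d), [G @ x <= h, A @ x == b])
orig.solve()

# Standard form LP in (x+, x-, s), all nonnegative.
xp, xm, s = cp.Variable(n), cp.Variable(n), cp.Variable(G.shape[0])
std = cp.Problem(
    cp.Minimize(c @ (xp - xm) + d),
    [G @ (xp - xm) + s == h, A @ (xp - xm) == b, xp >= 0, xm >= 0, s >= 0],
)
std.solve()
print(orig.value, std.value)   # agree up to solver tolerance
```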

4.3.1 Examples

4.3.2 Linear-fractional programming

The problem of minimizing a ratio of affine functions over a polyhedron is called a linear-fractional program:
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x) \\
{\rm subject\ to}\quad & Gx\preceq h \\
& Ax=b
\end{aligned}
$$
where $f_0(x)=\dfrac{c^Tx+d}{e^Tx+f}$, with $\mathbf{dom}\ f_0=\{x\mid e^Tx+f>0\}$.

Transforming to a linear program

The linear-fractional program can be transformed to an equivalent linear program:
$$
\begin{aligned}
{\rm minimize}\quad & c^Ty+dz \\
{\rm subject\ to}\quad & Gy-hz\preceq 0\\
& Ay-bz=0 \\
& e^Ty+fz=1 \\
& z\geq 0
\end{aligned}
$$
with variables $y=\dfrac{x}{e^Tx+f}$ and $z=\dfrac{1}{e^Tx+f}$.
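A hedged sketch of this transformation on a toy instance, minimizing $(x+2)/(x+1)$ over $0\leq x\leq 3$ (the same example as in the bisection sketch; there are no equality constraints here, so the $Ay-bz=0$ row is dropped):

```python
import cvxpy as cp
import numpy as np

c, d = np.array([1.0]), 2.0         # numerator   c^T x + d = x + 2
e, f = np.array([1.0]), 1.0         # denominator e^T x + f = x + 1
G = np.array([[1.0], [-1.0]])       # Gx <= h encodes 0 <= x <= 3
h = np.array([3.0, 0.0])

y, z = cp.Variable(1), cp.Variable()
prob = cp.Problem(
    cp.Minimize(c @ y + d * z),
    [G @ y - h * z <= 0, e @ y + f * z == 1, z >= 0],
)
prob.solve()
print(prob.value, y.value / z.value)   # optimal ratio 1.25, recovered x = 3
```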

Generalized linear-fractional programming

A generalization of the linear-fractional program is the generalized linear-fractional program in which
$$
f_0(x)=\max_{i=1,\dots,r}\frac{c_i^Tx+d_i}{e_i^Tx+f_i}, \qquad \mathbf{dom}\ f_0=\{x\mid e_i^Tx+f_i>0,\ i=1,\dots,r\}
$$

4.4 Quadratic optimization problems

A convex optimization problem is called a quadratic program (QP) if the objective function is (convex) quadratic and the constraint functions are affine:
$$
\begin{aligned}
{\rm minimize}\quad & \tfrac{1}{2}x^TPx+q^Tx+r \\
{\rm subject\ to}\quad & Gx\preceq h\\
& Ax=b
\end{aligned}
$$
where $P\in\mathbf{S}^n_+$, $G\in\mathbf{R}^{m\times n}$, and $A\in\mathbf{R}^{p\times n}$.
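A hedged cvxpy sketch of a small QP (toy data of my own, built so that $P\succeq 0$ and a strictly feasible point exists):

```python
import cvxpy as cp
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
P = M.T @ M + np.eye(n)                  # P in S^n_+ by construction
q, r = rng.standard_normal(n), 0.0
G = rng.standard_normal((m, n))
x_feas = np.ones(n) / n
h = G @ x_feas + 1.0                     # x_feas is strictly feasible
A, b = np.ones((1, n)), np.array([1.0])  # sum(x) = 1 (x_feas satisfies it)

x = cp.Variable(n)
obj = 0.5 * cp.quad_form(x, P) + q @ x + r
prob = cp.Problem(cp.Minimize(obj), [G @ x <= h, A @ x == b])
prob.solve()
print(prob.value, x.value)
```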

4.4.1 Examples

4.4.2 Second-order cone programming

A problem that is closely related to quadratic programming is the second-order cone program (SOCP):
$$
\begin{aligned}
{\rm minimize}\quad & f^Tx \\
{\rm subject\ to}\quad & \|A_ix+b_i\|_2\leq c_i^Tx+d_i,\quad i=1,\dots,m\\
& Fx=g
\end{aligned}
$$
A constraint of the form
$$
\|Ax+b\|_2\leq c^Tx+d,
$$
with $A\in\mathbf{R}^{k\times n}$, is called a second-order cone constraint.
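A hedged cvxpy sketch of an SOCP with one second-order cone constraint (toy data of my own; `cp.SOC(t, u)` encodes $\|u\|_2\leq t$):

```python
import cvxpy as cp
import numpy as np

n, k = 3, 2
rng = np.random.default_rng(2)
f = rng.standard_normal(n)
A1 = rng.standard_normal((k, n))
b1 = 0.1 * rng.standard_normal(k)       # small, so x = 0 is strictly feasible
c1, d1 = rng.standard_normal(n), 5.0

x = cp.Variable(n)
constraints = [
    cp.SOC(c1 @ x + d1, A1 @ x + b1),   # ||A1 x + b1||_2 <= c1^T x + d1
    cp.norm(x, 2) <= 10,                # keeps the toy problem bounded
]
prob = cp.Problem(cp.Minimize(f @ x), constraints)
prob.solve()
print(prob.value, x.value)
```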

4.5 Geometric programming

4.5.1 Monomials and posynomials

A function $f:\mathbf{R}^n\rightarrow\mathbf{R}$ with $\mathbf{dom}\ f=\mathbf{R}_{++}^n$ is called a monomial function, or simply a monomial, if
$$
f(x)=c\,x_1^{a_1}\cdots x_n^{a_n}
$$
where $c>0$ and $a_i\in\mathbf{R}$.

A sum of monomials
$$
f(x)=\sum_{k=1}^{K}c_kx_1^{a_{1k}}\cdots x_n^{a_{nk}}
$$
where $c_k>0$, is called a posynomial function (with $K$ terms), or simply, a posynomial.

4.5.2 Geometric programming

An optimization problem of the form
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 1,\quad i=1,\dots,m \\
& h_i(x)=1,\quad i=1,\dots,p
\end{aligned}
$$
where $f_0,\dots,f_m$ are posynomials and $h_1,\dots,h_p$ are monomials, is called a geometric program (GP).

4.5.3 Geometric program in convex form

Define $y_i=\log x_i$ (so $x_i=e^{y_i}$), and take the logarithm of the objective and constraint functions. The GP becomes the convex problem
$$
\begin{aligned}
{\rm minimize}\quad & \log\sum_{k=1}^{K_0}e^{a_{0k}^Ty+b_{0k}}\\
{\rm subject\ to}\quad & \log\sum_{k=1}^{K_i}e^{a_{ik}^Ty+b_{ik}}\leq 0,\quad i=1,\dots,m\\
& g_i^Ty+h_i=0,\quad i=1,\dots,p
\end{aligned}
$$
with $b_{ik}=\log c_{ik}$.
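cvxpy can handle GPs directly through its log-log (disciplined geometric programming) mode, which performs this transformation internally; a hedged toy example of my own:

```python
import cvxpy as cp

# GP variables must be declared positive in cvxpy's gp mode.
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

obj = 1 / (x * y * z)                   # a monomial objective
constraints = [
    4 * x * y * z + 2 * x * z <= 10,    # posynomial <= constant
    x <= 2 * y,                         # monomial inequalities
    y <= 2 * x,
    z >= 1,
]
prob = cp.Problem(cp.Minimize(obj), constraints)
prob.solve(gp=True)                     # log-log transform done internally
print(prob.value, x.value, y.value, z.value)
```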

4.5.4 Examples

4.6 Generalized inequality constraints

The standard form optimization problem can be extended to vector-valued constraint functions:
$$
\begin{aligned}
{\rm minimize}\quad & f_0(x)\\
{\rm subject\ to}\quad & f_i(x)\preceq_{K_i}0,\quad i=1,\dots,m \\
& Ax=b
\end{aligned}
$$
where $f_0:\mathbf{R}^n\rightarrow\mathbf{R}$, the $K_i\subseteq\mathbf{R}^{k_i}$ are proper cones, and the $f_i:\mathbf{R}^n\rightarrow\mathbf{R}^{k_i}$ are $K_i$-convex. We refer to this problem as a (standard form) convex optimization problem with generalized inequality constraints.

4.6.1 Conic form problems

Among the simplest convex optimization problems with generalized inequalities are the conic form problems (or cone programs)
$$
\begin{aligned}
{\rm minimize}\quad & c^Tx\\
{\rm subject\ to}\quad & Fx+g\preceq_{K}0 \\
& Ax=b
\end{aligned}
$$

4.6.2 Semidefinite programming

When $K$ is $\mathbf{S}^k_+$, the cone of positive semidefinite $k\times k$ matrices, the associated conic form problem is called a semidefinite program (SDP), and has the form
$$
\begin{aligned}
{\rm minimize}\quad & c^Tx\\
{\rm subject\ to}\quad & x_1F_1+\cdots+x_nF_n+G\preceq 0 \\
& Ax=b
\end{aligned}
$$
where $G,F_1,\dots,F_n\in\mathbf{S}^k$ and $A\in\mathbf{R}^{p\times n}$.
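A hedged cvxpy sketch of this LMI form with toy symmetric matrices of my own ($n=2$ variables, $3\times 3$ matrices); in cvxpy, `expr << 0` constrains a symmetric expression to be negative semidefinite:

```python
import cvxpy as cp
import numpy as np

n, k = 2, 3
rng = np.random.default_rng(3)
sym = lambda M: (M + M.T) / 2                 # symmetrize
F = [sym(rng.standard_normal((k, k))) for _ in range(n)]
G = -np.eye(k)                                # chosen so x = 0 is strictly feasible
c = rng.standard_normal(n)

x = cp.Variable(n)
lmi = x[0] * F[0] + x[1] * F[1] + G << 0      # the matrix inequality
prob = cp.Problem(cp.Minimize(c @ x), [lmi, cp.norm(x, 2) <= 10])
prob.solve()
print(prob.value, x.value)
```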

4.7 Vector optimization

4.7.1 General and convex vector optimization problems

We denote a general vector optimization problem as
$$
\begin{aligned}
{\rm minimize\ (with\ respect\ to\ }K)\quad & f_0(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$

4.7.2 Optimal points and values

Consider the set of objective values of feasible points,
$$
\mathcal{O}=\{f_0(x)\mid x\in\mathcal{D},\ f_i(x)\leq 0,\ i=1,\dots,m,\ h_i(x)=0,\ i=1,\dots,p\}\subseteq\mathbf{R}^{q}
$$
which is called the set of achievable objective values. If this set has a minimum element, i.e., there is a feasible $x$ such that $f_0(x)\preceq_K f_0(y)$ for all feasible $y$, then we say $x$ is optimal for the problem, and refer to $f_0(x)$ as the optimal value of the problem.

4.7.3 Pareto optimal points and values

Thus, a point $x$ is Pareto optimal if it is feasible and, for any feasible $y$, $f_0(y)\preceq_K f_0(x)$ implies $f_0(y)=f_0(x)$.


4.7.4 Scalarization

Scalarization is a standard technique for finding Pareto optimal (or optimal) points for a vector optimization problem. Choose any $\lambda\succ_{K^*}0$, i.e., any vector that is positive in the dual generalized inequality. Now consider the scalar optimization problem
$$
\begin{aligned}
{\rm minimize}\quad & \lambda^Tf_0(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$
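A hedged sketch of scalarization for a bicriterion problem ($K=\mathbf{R}^2_+$, so $K^*=\mathbf{R}^2_+$ and any $\lambda\succ 0$ qualifies): sweeping $\lambda$ traces out Pareto optimal points. The two toy objectives are my own:

```python
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
F1 = cp.sum_squares(x - np.array([1.0, 0.0]))   # first objective
F2 = cp.sum_squares(x - np.array([0.0, 1.0]))   # second objective

for lam in np.linspace(0.1, 0.9, 5):
    # Each positive weight vector (lam, 1 - lam) yields a Pareto optimal point.
    prob = cp.Problem(cp.Minimize(lam * F1 + (1 - lam) * F2))
    prob.solve()
    print(round(lam, 2), np.round(x.value, 3),
          round(F1.value, 3), round(F2.value, 3))
```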

4.7.5 Multicriterion optimization

When the cone is $K=\mathbf{R}^q_+$ and the objective is $f_0(x)=(F_1(x),\dots,F_q(x))$, the problem is called a multicriterion or multi-objective optimization problem. In other words, a point $x^\star$ is optimal when it is simultaneously optimal for each of the scalar problems
$$
\begin{aligned}
{\rm minimize}\quad & F_j(x)\\
{\rm subject\ to}\quad & f_i(x)\leq 0,\quad i=1,\dots,m \\
& h_i(x)=0,\quad i=1,\dots,p
\end{aligned}
$$
for $j=1,\dots,q$.
