Notes on linear programming, compiled while taking the Discrete Optimization course on Coursera.

Linear programming

Definition (from Wikipedia)

Linear programming (LP, also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints.

Convex set

Convex set: for any two points A and B in the set, every point on the line segment AB is also in the set.
Convex combination: a1v1 + … + anvn is a convex combination of the points v1, …, vn if a1 + … + an = 1 and every ai >= 0 (here the vi are points, typically vertices of the feasible region).
- The intersection of convex sets is a convex set.
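For example, a convex combination with a1 = a2 = 1/2 gives (v1 + v2)/2, the midpoint of the segment between v1 and v2; as a1 varies from 0 to 1 (with a2 = 1 - a1), the combination traces out the whole segment.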
Theorem: at least one of the points where the objective value is minimal is a vertex.
Proof sketch: any point of the (bounded, nonempty) feasible polytope is a convex combination of its vertices; since the objective is linear, its value at that point is the same convex combination of its values at the vertices, so at least one vertex achieves a value that is no larger.

In linear programming, the feasible region is a convex set. It can be proved that the optimal solution, if it exists, is attained either at a single vertex or along a line segment (an edge whose endpoints are both optimal).

Algorithms for linear programming

In this note I will only cover the simplex algorithm, which solves general linear programs quickly in practice. However, on some carefully contrived inputs, the simplex algorithm can require exponential time. The first polynomial-time algorithm for linear programming was the ellipsoid algorithm, which runs slowly in practice. Another family of polynomial-time algorithms is the interior-point methods.

Standard form and slack form:

Standard form:
Maximize the objective function subject only to inequality constraints (less-than-or-equal-to) and nonnegativity constraints; that is, maximize cx subject to Ax <= b and x >= 0.

Converting linear programs into standard form:
A linear program might not be in standard form for any of four possible reasons:

  1. The objective function might be a minimization rather than a maximization.
  2. There might be variables without nonnegativity constraints.
  3. There might be equality constraints.
  4. There might be inequality constraints, but with a greater-than-or-equal-to sign instead of a less-than-or-equal-to sign.

Solutions (one for each reason above; a worked example follows this list):
1. Negate the coefficients in the objective function.
2. Replace each such variable with the difference of two new nonnegative variables.
3. Replace the equality constraint with two inequality constraints, one less-than-or-equal-to and one greater-than-or-equal-to.
4. Negate the coefficients on both sides of the inequality and flip its sign to less-than-or-equal-to (i.e., multiply the constraint by -1).
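For example (the numbers here are my own, not from the course), take the program: minimize -2x1 + 3x2 subject to x1 + x2 = 7 and x1 - 2x2 >= 4, with x1 >= 0 and x2 unrestricted in sign. Applying the four steps, with x2 replaced by x2' - x2'', gives the standard-form program:

maximize 2x1 - 3x2' + 3x2''
subject to
x1 + x2' - x2'' <= 7
-x1 - x2' + x2'' <= -7
-x1 + 2x2' - 2x2'' <= -4
x1, x2', x2'' >= 0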

Slack form:
The nonnegativity constraints are the only inequality constraints and the remaining constraints are equalities.

Converting standard form to slack form:
Add a slack variable to each inequality constraint and turn the inequality into an equality; the slack variable measures the gap between the two sides and must itself be nonnegative.
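Continuing the example above, the constraint x1 + x2' - x2'' <= 7 becomes x3 = 7 - x1 - x2' + x2'' with a new slack variable x3 >= 0; in the basic solution, the nonbasic variables x1, x2', x2'' are set to 0 and the basic variable x3 takes the value 7.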

The simplex algorithm:

Goal:
You want to solve a linear program.

Facts:

  1. An optimal solution is located at a vertex.
  2. A vertex corresponds to a basic feasible solution (BFS).
  3. You can move from one BFS to a neighboring BFS.
  4. You can detect whether a BFS is optimal.
  5. From any BFS, you can move to a BFS with a better cost.

Procedure:
A. Initialize (two-phase method): find an initial basic feasible solution.
If you convert a linear program to slack form and its basic solution is infeasible, you cannot run the simplex algorithm on it directly. The initialization step below produces a basic feasible solution (BFS) if one exists; a small example of the auxiliary program follows the pseudocode.

1.  Let k be the index of the minimum bi
2.  If bk >= 0
3.      Return the original slack form (its basic solution is already feasible)
4.  Form Laux by adding -x0 to the left-hand side of each constraint and setting the objective function to -x0
5.  Convert Laux to slack form
6.  l = n + k (the slack variable of the constraint with the most negative bi)
7.  Set x0 as the entering variable and xl as the leaving variable, and pivot
8.  Iterate the pivoting to reach an optimal solution of Laux
9.  If the optimal solution to Laux sets x0 to 0
10.     If x0 is basic
11.         Perform one (degenerate) pivot to make it nonbasic
12.     From the final slack form, remove x0 everywhere it appears, restore the original objective function, and replace each basic variable in that function by the right-hand side of its associated constraint
13.     Return the modified final slack form
14. Else return infeasible
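For example (the numbers are my own): for the program maximize 2x1 - x2 subject to 2x1 - x2 <= 2, x1 - 5x2 <= -4, x1, x2 >= 0, the basic solution of the slack form sets x1 = x2 = 0, which is infeasible because the second constraint has b2 = -4 < 0. The auxiliary program Laux is: maximize -x0 subject to 2x1 - x2 - x0 <= 2, x1 - 5x2 - x0 <= -4, x0, x1, x2 >= 0. Its optimum turns out to set x0 = 0, so the original program has a basic feasible solution and phase two of the simplex algorithm can start from it.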

B. Pivoting

  1. Select a nonbasic variable Xe that has a positive coefficient in the objective function. Xe is called the entering variable.
  2. Select the tightest constraint, that is, the constraint that would be violated first as Xe increases. The basic variable Xl of this constraint is called the leaving variable.
  3. Rewrite that constraint so that Xe becomes a basic variable and Xl becomes a nonbasic variable.
  4. Adjust the other constraints and the objective function: wherever Xe appears, replace it with the right-hand side of the constraint whose basic variable is now Xe (see the code sketch after this list).
  5. Iterate until every variable remaining in the objective function has a nonpositive coefficient.
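Here is a minimal Python sketch of a single pivot on a slack form (the data layout, namely sets N and B of nonbasic and basic indices, nested dictionaries A, right-hand sides b, objective coefficients c and constant v, as well as the function name pivot, are my own choices, not from the course):

def pivot(N, B, A, b, c, v, l, e):
    # Slack form: x_i = b[i] - sum_{j in N} A[i][j] * x_j for i in B,
    # objective z = v + sum_{j in N} c[j] * x_j.  e = entering, l = leaving.
    A_new, b_new, c_new = {}, {}, {}
    # Rewrite the constraint of the leaving variable so that x_e is on its left-hand side.
    b_new[e] = b[l] / A[l][e]
    A_new[e] = {}
    for j in N - {e}:
        A_new[e][j] = A[l][j] / A[l][e]
    A_new[e][l] = 1.0 / A[l][e]
    # Substitute that expression for x_e into every other constraint.
    for i in B - {l}:
        A_new[i] = {}
        b_new[i] = b[i] - A[i][e] * b_new[e]
        for j in N - {e}:
            A_new[i][j] = A[i][j] - A[i][e] * A_new[e][j]
        A_new[i][l] = -A[i][e] * A_new[e][l]
    # Substitute it into the objective function as well.
    v_new = v + c[e] * b_new[e]
    for j in N - {e}:
        c_new[j] = c[j] - c[e] * A_new[e][j]
    c_new[l] = -c[e] * A_new[e][l]
    # The entering variable becomes basic and the leaving variable becomes nonbasic.
    return (N - {e}) | {l}, (B - {l}) | {e}, A_new, b_new, c_new, v_new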

C. Termination

  1. Normally the simplex algorithm terminates when every variable in the objective function has a nonpositive coefficient. The constant term in the objective function is then the optimal value; in the optimal solution each basic variable is assigned its bi and each nonbasic variable is assigned 0.
  2. Sometimes, after an entering variable has been selected, none of the constraints is ever violated as that variable increases. This means the linear program is unbounded: no optimal solution exists, because the objective value can be made arbitrarily large.
  3. In another case, an iteration of pivoting leaves the objective value associated with the basic solution unchanged. This phenomenon is called degeneracy. Degeneracy can prevent the simplex algorithm from terminating, because it can lead to a phenomenon known as cycling. Cycling is theoretically possible but extremely rare. We can prevent it by choosing the entering and leaving variables somewhat more carefully. Bland's rule is one such strategy for choosing the entering and leaving variables. Here is a brief introduction to Bland's rule from Wikipedia:

One uses Bland’s rule during an iteration of the simplex method to decide first what column (known as the entering variable) and then row (known as the leaving variable) in the tableau to pivot on. Assuming that the problem is to minimize the objective function, the algorithm is loosely defined as follows:
a) Choose the lowest-numbered (i.e., leftmost) nonbasic column with a negative (reduced) cost.
b) Now among the rows, choose the one with the lowest ratio between the (transformed) right hand side and the coefficient in the pivot tableau where the coefficient is greater than zero. If the minimum ratio is shared by several rows, choose the row with the lowest-numbered column (variable) basic in it.
It can be formally proved that, with Bland's selection rule, the simplex algorithm never cycles, so it is guaranteed to terminate after a bounded number of iterations.
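For the maximization slack form used in this note, Bland's rule can be sketched as follows (a rough Python sketch with my own function names; a positive objective coefficient plays the role of the negative reduced cost in the minimization wording quoted above):

def bland_entering(N, c):
    # Lowest-indexed nonbasic variable whose objective coefficient is positive.
    candidates = [j for j in sorted(N) if c[j] > 0]
    return candidates[0] if candidates else None   # None: the current BFS is optimal

def bland_leaving(B, A, b, e):
    # Minimum ratio b[i] / A[i][e] over rows with A[i][e] > 0; ties are broken by
    # the lowest basic-variable index (B is scanned in increasing order).
    best_ratio, best_row = None, None
    for i in sorted(B):
        if A[i][e] > 0:
            ratio = b[i] / A[i][e]
            if best_ratio is None or ratio < best_ratio:
                best_ratio, best_row = ratio, i
    return best_row   # None: the LP is unbounded in the direction of x_e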

Fundamental theorem of linear programming

Any linear program L, given in standard form, satisfies exactly one of the following (a small solver check illustrating the three cases appears after the list):

  1. It has an optimal solution with a finite objective value.
  2. It is infeasible.
  3. It is unbounded.
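The three cases can be observed with an off-the-shelf solver. Below is a small check using scipy.optimize.linprog (the toy problems are my own; linprog minimizes, so a maximization objective is passed with negated coefficients):

from scipy.optimize import linprog

# 1. Optimal: maximize x1 + x2 subject to x1 + x2 <= 1, x >= 0
opt = linprog(c=[-1, -1], A_ub=[[1, 1]], b_ub=[1])
# 2. Infeasible: x1 <= -1 conflicts with the default bound x1 >= 0
inf = linprog(c=[1], A_ub=[[1]], b_ub=[-1])
# 3. Unbounded: maximize x1 with nothing limiting its growth
unb = linprog(c=[-1], A_ub=[[-1]], b_ub=[0])

for name, res in [("optimal", opt), ("infeasible", inf), ("unbounded", unb)]:
    print(name, res.status, res.message)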

Duality theory

Every linear programming problem has associated with it another linear programming problem called the dual. The relationships between the dual problem and the original problem (called the primal) prove to be extremely useful in a variety of ways.

Given a primal problem, say minimize cx subject to Ax >= b and x >= 0, the corresponding dual problem is: maximize yb subject to yA <= c and y >= 0.

The dual problem uses exactly the same parameters as the primal problem, but in different locations, as summarized below.

  1. The coefficients in the objective function of the primal problem are the right-hand-sides of the functional constraints in the dual problem.
  2. The right-hand-sides of the functional constraints in the primal problem are the coefficients in the objective function of the dual problem.
  3. The coefficients of a variable in the functional constraints of the primal problem (a column of coefficients) become the coefficients of one functional constraint of the dual problem (a row).

If the primal problem has a finite optimal value, so does the dual, and the two optimal values are equal. If the primal problem is infeasible, the dual is either infeasible or unbounded, and vice versa.

Consequence: to prove that a specific feasible solution x+ is optimal, it suffices to find a y+ that satisfies y+A <= c and y+ >= 0, and to show that cx+ = y+b.
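A minimal numeric sketch of this certificate idea, using scipy.optimize.linprog with made-up numbers (primal: minimize cx subject to Ax >= b, x >= 0; dual: maximize yb subject to yA <= c, y >= 0):

import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

# Primal: linprog minimizes and expects <=-constraints, so Ax >= b becomes -Ax <= -b.
primal = linprog(c, A_ub=-A, b_ub=-b)
# Dual: maximize yb is the same as minimize -by, subject to (A^T) y <= c, y >= 0.
dual = linprog(-b, A_ub=A.T, b_ub=c)

print("primal optimum cx+ =", primal.fun)   # both lines print 10.0 here:
print("dual optimum   y+b =", -dual.fun)    # cx+ = y+b certifies optimality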

The essence of the dual:
Take a nonnegative linear combination of the constraints to bound the objective function; the coefficients of that combination are exactly the dual variables y.
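Concretely, in the minimization convention above: if x is primal-feasible (Ax >= b, x >= 0) and y is dual-feasible (yA <= c, y >= 0), then cx >= (yA)x = y(Ax) >= yb. Every dual-feasible y therefore gives a lower bound yb on the primal objective, and the dual problem searches for the best such bound.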

Why should we use the dual?
While the simplex algorithm is running on the primal, the primal solution always stays feasible but the corresponding dual solution generally does not, and vice versa. So suppose I already have an optimal solution and I then want to add a new constraint. At that moment the old optimal solution may no longer be a BFS of the primal, but in the dual the change only adds a variable, so the dual solution is still feasible. The next step is therefore to optimize the dual; when the dual reaches optimality, the primal becomes feasible again (this is the idea behind the dual simplex method).
