JuMP Study Notes

This document covers the use of the JuMP library, including creating variables, defining constraints, and building nonlinear models. It explains variable bounds, names, containers (Arrays, DenseAxisArrays, SparseAxisArrays), and integrality constraints (binary, integer). It also describes how to add and delete constraints, how nonlinear expressions are handled, and how nonlinear parameters are used.

variables

julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> @variable(model, x[1:2])
2-element Array{VariableRef,1}:
 x[1]
 x[2]
  • adds two optimization variables to model
  • creates two JuMP variables that act as references to those optimization variables
  • binds these JuMP variables, as a vector, to the Julia variable x.

JuMP variables can have attributes, such as a name or an initial start value.

julia> @variable(model, y, base_name="decision variable")
decision variable

Because y is a Julia variable, it can be rebound to some other value, in which case it no longer refers to the JuMP variable. The variable does not disappear from the model, however; it can still be accessed (or rebound) via model[:y].
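A minimal REPL sketch continuing the example above (the output shown is assumed):

julia> y = 1  # rebind the Julia variable y to an ordinary value
1

julia> model[:y]  # the JuMP variable is still stored in the model
decision variable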

variable bounds

Adding lower and upper bounds:

julia> @variable(model, x_free)
x_free

julia> @variable(model, x_lower >= 0)
x_lower

julia> @variable(model, x_upper <= 1)
x_upper

julia> @variable(model, 2 <= x_interval <= 3)
x_interval

julia> @variable(model, x_fixed == 4)
x_fixed

Use has_lower_bound() and has_upper_bound() to query whether a variable has a lower or upper bound.
If a bound exists, lower_bound() and upper_bound() return it.
Use set_lower_bound() and set_upper_bound() to add or modify bounds, and delete_lower_bound() and delete_upper_bound() to remove them.

Fixed variables support the analogous operations is_fixed, fix_value, and unfix.
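A minimal REPL sketch of these functions, continuing the variables defined above (outputs are assumed):

julia> has_lower_bound(x_lower)
true

julia> lower_bound(x_lower)
0.0

julia> set_upper_bound(x_lower, 10)

julia> upper_bound(x_lower)
10.0

julia> delete_upper_bound(x_lower)

julia> has_upper_bound(x_lower)
false

julia> is_fixed(x_fixed)
true

julia> fix_value(x_fixed)
4.0

julia> unfix(x_fixed)

julia> is_fixed(x_fixed)
false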

variable name

name() returns a variable's name.
set_name(variable, name) sets or modifies it.
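For example, continuing with x_free from above (outputs are assumed):

julia> name(x_free)
"x_free"

julia> set_name(x_free, "free_variable")

julia> name(x_free)
"free_variable"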

variable containers

JuMP provides a mechanism for creating three types of data structures, which we call containers. The three types are Arrays, DenseAxisArrays, and SparseAxisArrays.

Arrays

As shown above, x[1:2] creates a vector of JuMP variables; multidimensional arrays can be created the same way:

julia> @variable(model, x[1:2, 1:2])
2×2 Array{VariableRef,2}:
 x[1,1]  x[1,2]
 x[2,1]  x[2,2]

Indexing:

julia> x[1, 2]
x[1,2]

julia> x[2, :]
2-element Array{VariableRef,1}:
 x[2,1]
 x[2,2]

Bounds can depend on the indices:

julia> @variable(model, x[i=1:2, j=1:2] >= 2i + j)
2×2 Array{VariableRef,2}:
 x[1,1]  x[1,2]
 x[2,1]  x[2,2]

# Note the dot between the function name and the parentheses (broadcasting).
julia> lower_bound.(x)
2×2 Array{Float64,2}:
 3.0  4.0
 5.0  6.0

When JuMP can determine at compile time that the indices are integer ranges starting at 1, it creates an Array of JuMP variables. So x[1:b] creates an Array of JuMP variables, but x[a:b] does not: when JuMP cannot determine that the indices are one-based integer ranges (as in the x[a:b] case), it creates a DenseAxisArray instead.

DenseAxisArrays

We often want to create arrays whose indices are not a one-based integer range. For example, we may want a variable indexed by product names or locations. The syntax is the same as above, except that arbitrary vectors are used as indices instead of one-based ranges. The main difference is that JuMP returns a DenseAxisArray rather than an Array of JuMP variables. For example:

julia> @variable(model, x[1:2, [:A,:B]])
2-dimensional DenseAxisArray{VariableRef,2,...} with index sets:
    Dimension 1, Base.OneTo(2)
    Dimension 2, Symbol[:A, :B]
And data, a 2×2 Array{VariableRef,2}:
 x[1,A]  x[1,B]
 x[2,A]  x[2,B]

Indexing:

julia> x[1, :A]
x[1,A]

julia> x[2, :]
1-dimensional DenseAxisArray{VariableRef,1,...} with index sets:
    Dimension 1, Symbol[:A, :B]
And data, a 2-element Array{VariableRef,1}:
 x[2,A]
 x[2,B]

SparseAxisArrays

A SparseAxisArray is created when the indices do not form a rectangular (Cartesian-product) set, for example when one index depends on another (triangular indexing) or when indices are filtered by a condition:

Triangular indexing:

julia> @variable(model, x[i=1:2, j=i:2])
JuMP.Containers.SparseAxisArray{VariableRef,2,Tuple{Int64,Int64}} with 3 entries:
  [1, 2]  =  x[1,2]
  [2, 2]  =  x[2,2]
  [1, 1]  =  x[1,1]

Conditional indexing:

julia> @variable(model, x[i=1:4; mod(i, 2)==0])
JuMP.Containers.SparseAxisArray{VariableRef,1,Tuple{Int64}} with 2 entries:
  [4]  =  x[4]
  [2]  =  x[2]

Integrality utilities

Adding integrality constraints to a model is a common operation, so JuMP provides two shortcuts for adding them.

Binary (zero-one) constraints

julia> @variable(model, x, Bin)
x

Query and remove with is_binary() and unset_binary().
The same constraint can be added at creation time with binary=true:

julia> @variable(model, x, binary=true)
x

Integer constraints

julia> @variable(model, x, Int)
x

Query and remove with is_integer() and unset_integer().
The same constraint can be added at creation time with integer=true.
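For example, mirroring the binary=true form above:

julia> @variable(model, x, integer=true)
x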

JuMP also supports modeling semidefinite variables. A square symmetric matrix X is positive semidefinite if all of its eigenvalues are nonnegative. A matrix of JuMP variables can be declared positive semidefinite as follows:

julia> @variable(model, x[1:2, 1:2], PSD)
2×2 LinearAlgebra.Symmetric{VariableRef,Array{VariableRef,2}}:
 x[1,1]  x[1,2]
 x[1,2]  x[2,2]

deleting variables

delete(model, variable)

Listing all variables

all_variables(model)
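A minimal sketch combining deletion and listing on a fresh model (outputs are assumed):

julia> model = Model();

julia> @variable(model, x);

julia> @variable(model, y);

julia> all_variables(model)
2-element Array{VariableRef,1}:
 x
 y

julia> delete(model, x)

julia> all_variables(model)
1-element Array{VariableRef,1}:
 y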

Start values

Set a variable's initial (start) value either:

  • with the start keyword when the variable is created, or
  • with set_start_value().

Query it with start_value(), which returns nothing when no start value has been set.
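For example (outputs are assumed):

julia> @variable(model, z, start = 0.5)
z

julia> start_value(z)
0.5

julia> set_start_value(z, 1.0)

julia> start_value(z)
1.0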

@variables macro

julia> @variables(model, begin
           x
           y[i=1:2] >= i, (start = i, base_name = "Y_$i")
           z, Bin
       end)

julia> print(model)
Feasibility
Subject to
 Y_1[1] ≥ 1.0
 Y_2[2] ≥ 2.0
 z binary

constraints

macro

julia> @constraint(model, con, 2x <= 1)
con : 2 x <= 1.0
  • the mathematical constraint 2x ≤ 1 is added to the model
  • a Julia variable named con is created as a reference to the constraint
  • the reference is stored in the model and can be accessed via model[:con]
  • the constraint's name attribute (shown when printing) is set to "con"

Greater-than (>=) and equality (==) constraints are created analogously.

Plural @constraints macro

julia> @constraints(model, begin
           2x <=  1
            x >= -1
       end)

julia> print(model)
Feasibility
Subject to
 x ≥ -1.0
 2 x ≤ 1.0

names

Query: name()
Set or modify: set_name()
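For example, continuing with the con constraint created earlier (outputs are assumed):

julia> name(con)
"con"

julia> set_name(con, "con_upper")

julia> name(con)
"con_upper"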

containers

Arrays

Add a group of constraints:

julia> @constraint(model, con[i = 1:3], i * x <= i + 1)
3-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}},ScalarShape},1}:
 con[1] : x <= 2.0
 con[2] : 2 x <= 3.0
 con[3] : 3 x <= 4.0

Vectorized constraints

julia> @variable(model, x[i=1:2])
2-element Array{VariableRef,1}:
 x[1]
 x[2]

julia> A = [1 2; 3 4]
2×2 Array{Int64,2}:
 1  2
 3  4

julia> b = [5, 6]
2-element Array{Int64,1}:
 5
 6

# Remember the broadcast dot (.==).
julia> @constraint(model, con, A * x .== b)
2-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}},ScalarShape},1}:
 x[1] + 2 x[2] == 5.0
 3 x[1] + 4 x[2] == 6.0

Constraints on a single variable

Binary (zero-one) constraint

julia> @constraint(model, x in MOI.ZeroOne())
x binary

Integer constraint

julia> @constraint(model, x in MOI.Integer())
x integer

Quadratic constraint

julia> @variable(model, x[i=1:2])
2-element Array{VariableRef,1}:
 x[1]
 x[2]

julia> @variable(model, t >= 0)
t

julia> @constraint(model, x[1]^2 + x[2]^2 <= t^2)
x[1]² + x[2]² - t² <= 0.0

constraint modifications

Modifying a constant term

Query the right-hand-side constant with normalized_rhs().
Modify it with set_normalized_rhs().

julia> @constraint(model, con, 2x <= 1)
con : 2 x <= 1.0

julia> set_normalized_rhs(con, 3)

julia> con
con : 2 x <= 3.0

julia> normalized_rhs(con)
3.0

JuMP includes the ability to fix a variable to a value with the fix function. Fixing a variable sets its lower and upper bounds to the same value, so a change to a constant term can be simulated by adding a dummy variable and fixing it to different values.

julia> @variable(model, const_term)
const_term

julia> @constraint(model, con, 2x <= const_term + 1)
con : 2 x - const_term <= 1.0

julia> fix(const_term, 1.0)

constraint deletion

delete(model, constraint)
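A minimal sketch (the constraint name temp_con is illustrative; outputs are assumed). is_valid confirms the deletion:

julia> @constraint(model, temp_con, 2x <= 1)
temp_con : 2 x <= 1.0

julia> delete(model, temp_con)

julia> is_valid(model, temp_con)
false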

Nonlinear Modeling

JuMP supports general smooth nonlinear (convex or nonconvex) optimization problems and can supply solvers with exact, sparse second-order derivatives to improve their accuracy and performance.
Nonlinear objectives and constraints are specified with @NLobjective and @NLconstraint, which support syntax such as sum() and prod().
Note: @objective and @constraint do not accept nonlinear expressions, but a model may mix linear, quadratic, and nonlinear constraints and objectives. Initial values can be supplied via the start keyword of @variable.

A worked example

julia> using JuMP

julia> using Ipopt

# Build the model
julia> model = Model(Ipopt.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Ipopt
# Add variables
julia> @variable(model, x, start = 0.0)
x

julia> @variable(model, y, start = 0.0)
y

# Add the objective
julia> @NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)

# Solve (timed)
julia> @time optimize!(model)

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0000000e+00 0.00e+00 2.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  9.5312500e-01 0.00e+00 1.25e+01  -1.0 1.00e+00    -  1.00e+00 2.50e-01f  3
   2  4.8320569e-01 0.00e+00 1.01e+00  -1.0 9.03e-02    -  1.00e+00 1.00e+00f  1
   3  4.5708829e-01 0.00e+00 9.53e+00  -1.0 4.29e-01    -  1.00e+00 5.00e-01f  2
   4  1.8894205e-01 0.00e+00 4.15e-01  -1.0 9.51e-02    -  1.00e+00 1.00e+00f  1
   5  1.3918726e-01 0.00e+00 6.51e+00  -1.7 3.49e-01    -  1.00e+00 5.00e-01f  2
   6  5.4940990e-02 0.00e+00 4.51e-01  -1.7 9.29e-02    -  1.00e+00 1.00e+00f  1
   7  2.9144630e-02 0.00e+00 2.27e+00  -1.7 2.49e-01    -  1.00e+00 5.00e-01f  2
   8  9.8586451e-03 0.00e+00 1.15e+00  -1.7 1.10e-01    -  1.00e+00 1.00e+00f  1
   9  2.3237475e-03 0.00e+00 1.00e+00  -1.7 1.00e-01    -  1.00e+00 1.00e+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  2.3797236e-04 0.00e+00 2.19e-01  -1.7 5.09e-02    -  1.00e+00 1.00e+00f  1
  11  4.9267371e-06 0.00e+00 5.95e-02  -1.7 2.53e-02    -  1.00e+00 1.00e+00f  1
  12  2.8189505e-09 0.00e+00 8.31e-04  -2.5 3.20e-03    -  1.00e+00 1.00e+00f  1
  13  1.0095040e-15 0.00e+00 8.68e-07  -5.7 9.78e-05    -  1.00e+00 1.00e+00f  1
  14  1.3288608e-28 0.00e+00 2.02e-13  -8.6 4.65e-08    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 14

                                   (scaled)                 (unscaled)
Objective...............:   1.3288608467480825e-28    1.3288608467480825e-28
Dual infeasibility......:   2.0183854587685121e-13    2.0183854587685121e-13
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   2.0183854587685121e-13    2.0183854587685121e-13


Number of objective function evaluations             = 36
Number of objective gradient evaluations             = 15
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 14
Total CPU secs in IPOPT (w/o function evaluations)   =      1.127
Total CPU secs in NLP function evaluations           =      1.489

EXIT: Optimal Solution Found.
	22.836868 seconds (69.70 M allocations: 4.054 GiB, 4.99% gc time, 1.17% compilation time)
	
# Values after optimization
julia> println("x = ", value(x), " y = ", value(y))
x = 0.9999999999999899 y = 0.9999999999999792

# Add a linear constraint
julia> @constraint(model, x + y == 10)
x + y == 10.0

# Re-solve
julia> @time optimize!(model)
This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        2
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0000000e+00 1.00e+01 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  9.6315968e+05 1.78e-15 3.89e+05  -1.0 9.91e+00    -  1.00e+00 1.00e+00h  1
   2  1.6901461e+05 0.00e+00 1.16e+05  -1.0 3.24e+00    -  1.00e+00 1.00e+00f  1
   3  2.5433173e+04 0.00e+00 3.18e+04  -1.0 2.05e+00    -  1.00e+00 1.00e+00f  1
   4  2.6527756e+03 0.00e+00 7.79e+03  -1.0 1.19e+00    -  1.00e+00 1.00e+00f  1
   5  1.1380324e+02 0.00e+00 1.35e+03  -1.0 5.62e-01    -  1.00e+00 1.00e+00f  1
   6  3.3745506e+00 0.00e+00 8.45e+01  -1.0 1.50e-01    -  1.00e+00 1.00e+00f  1
   7  2.8946196e+00 0.00e+00 4.22e-01  -1.0 1.07e-02    -  1.00e+00 1.00e+00f  1
   8  2.8946076e+00 0.00e+00 1.07e-05  -1.7 5.42e-05    -  1.00e+00 1.00e+00f  1
   9  2.8946076e+00 0.00e+00 5.91e-13  -8.6 1.38e-09    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 9

                                   (scaled)                 (unscaled)
Objective...............:   2.8946075504894599e+00    2.8946075504894599e+00
Dual infeasibility......:   5.9130478291535837e-13    5.9130478291535837e-13
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   5.9130478291535837e-13    5.9130478291535837e-13


Number of objective function evaluations             = 10
Number of objective gradient evaluations             = 10
Number of equality constraint evaluations            = 10
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 9
Total CPU secs in IPOPT (w/o function evaluations)   =      0.178
Total CPU secs in NLP function evaluations           =      0.100

EXIT: Optimal Solution Found.
	0.593810 seconds (295.21 k allocations: 17.634 MiB, 4.24% gc time, 62.95% compilation time)
# Print the values
julia> println("x = ", value(x), " y = ", value(y))
x = 2.701147124098218 y = 7.2988528759017814

Because of Julia's JIT compilation, the second solve takes far less time and allocates far fewer resources than the first.

Syntax notes

Nonlinear expressions are subject to stricter rules than linear or quadratic expressions:

  • Apart from the splatting syntax described below, all expressions must be simple scalar operations. Broadcasting dots, matrix-vector products, vector slices, and so on cannot be used. Convert vector operations into explicit sum() calls (see the sketch after the auxiliary-variable example below), or use the AffExpr-plus-auxiliary-variable trick described below.
  • No operator overloading is provided for building nonlinear expressions outside the macros. For example, if x is a JuMP variable, the code 3x returns an AffExpr object that can be used in later expressions and linear constraints, but the code sin(x) is an error. All nonlinear expressions must be written inside the macros.
  • User-defined functions may be used in nonlinear expressions only after they are registered. For example, the code below raises an error because register() must be called first (a registration sketch follows the error output).
model = Model()
my_function(a, b) = exp(a) * b
@variable(model, x)
@variable(model, y)
@NLobjective(model, Min, my_function(x, y))

# output

ERROR: Unrecognized function "my_function" used in nonlinear expression.
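Registering the function first makes the example above work. A minimal sketch, assuming JuMP's register function with automatic differentiation:

model = Model()
my_function(a, b) = exp(a) * b
@variable(model, x)
@variable(model, y)
# register my_function under the symbol :my_function, taking 2 arguments,
# and let JuMP apply automatic differentiation
register(model, :my_function, 2, my_function, autodiff = true)
@NLobjective(model, Min, my_function(x, y))  # now accepted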

AffExpr and QuadExpr objects cannot currently be used inside nonlinear expressions; introduce an auxiliary variable instead:

my_expr = dot(c, x) + 3y # where x and y are variables
@variable(model, aux)
@constraint(model, aux == my_expr)
@NLobjective(model, Min, sin(aux))
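For the vector-operation point in the first bullet, a hedged sketch of the explicit-sum rewrite (the vector c and the dimension n are illustrative):

# instead of dot(c, x) + 3y, write the sum out explicitly inside the NL macro
n = 3
c = [1.0, 2.0, 3.0]
model = Model()
@variable(model, x[1:n])
@variable(model, y)
@NLobjective(model, Min, sin(sum(c[i] * x[i] for i in 1:n) + 3 * y))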

Nonlinear expressions that can be embedded elsewhere are declared with @NLexpression:

@NLexpression(model, my_expr[i = 1:n], sin(x[i]))
@NLconstraint(model, my_constr[i = 1:n], my_expr[i] <= 0.5)

@NLexpression and @NLconstraint also support anonymous syntax:

my_expr = @NLexpression(model, [i = 1:n], sin(x[i]))
my_constr = @NLconstraint(model, [i = 1:n], my_expr[i] <= 0.5)

The splatting operator ... is recognized in a restricted setting for expanding function arguments. Only a symbol may be splatted; more complex expressions are not recognized.

julia> model = Model();

julia> @variable(model, x[1:3]);

julia> @NLconstraint(model, *(x...) <= 1.0)
x[1] * x[2] * x[3] - 1.0 ≤ 0

julia> @NLconstraint(model, *((x / 2)...) <= 0.0)
ERROR: LoadError: Unexpected expression in (*)(x / 2...). JuMP supports splatting only symbols. For example, x... is ok, but (x + 1)..., [x; y]... and g(f(y)...) are not.

Nonlinear parameters

For nonlinear models only, JuMP provides a syntax for explicit "parameter" objects, which allow a model to be modified in place simply by updating parameter values. Nonlinear parameters are declared with the @NLparameter macro and can be indexed by arbitrary sets, analogous to JuMP variables and expressions. A parameter's initial value must be provided on the right-hand side of the == sign. There is no anonymous syntax for creating parameters.

Query or update a parameter's value with value and set_value:

value(::JuMP.NonlinearParameter)
set_value(::JuMP.NonlinearParameter, ::Number)

Nonlinear parameters can only be used inside nonlinear expressions:

@NLparameter(model, x == 10)
@variable(model, z)
@objective(model, Max, x * z)             # Error: x is a nonlinear parameter.
@NLobjective(model, Max, x * z)           # Ok.
@expression(model, my_expr, x * z^2)      # Error: x is a nonlinear parameter.
@NLexpression(model, my_nl_expr, x * z^2) # Ok.

Nonlinear parameters are useful when solving a sequence of nonlinear models:

using Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, z)
@NLparameter(model, x == 1.0)
@NLobjective(model, Min, (z - x)^2)
optimize!(model)
value(z) # Equals 1.0.

# Now, update the value of x to solve a different problem.
set_value(x, 5.0)
optimize!(model)
value(z) # Equals 5.0

Using nonlinear parameters can be faster than building a new model from scratch with updated data, because JuMP can avoid repeating many of the steps of processing the model before handing it to the solver.
