Using the FORCES_PRO Solver for NMPC Control

Suppose we are given the following optimization problem:

\quad \quad \min \ \|x_N\|_P^2 + \sum_{i=1}^{N-1} \left( \|x_i\|_Q^2 + \|u_i\|_R^2 \right) \\ s.t. \\ x_1 = x \\ x_{i+1} = A x_i + B u_i \quad i = 1 \dots N-1 \\ x_{\min} \le x_i \le x_{\max} \quad i = 1 \dots N \\ u_{\min} \le u_i \le u_{\max} \quad i = 1 \dots N
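Here $\|x\|_Q^2$ denotes the weighted squared norm $x^\top Q x$. As a quick numeric sanity check (written in Python/NumPy rather than MATLAB, and assuming the weights Q = I and R = I and initial state used later in this post):

```python
import numpy as np

# Weighted squared norm ||x||_Q^2 = x' * Q * x
def wnorm2(x, Q):
    return float(x @ Q @ x)

Q = np.eye(2)               # state weight, as in the example below
R = np.eye(1)               # input weight
x = np.array([-4.0, 2.0])   # the example's initial state
u = np.array([0.5])         # an input at its upper bound

stage_cost = wnorm2(x, Q) + wnorm2(u, R)
print(stage_cost)           # 16 + 4 + 0.25 = 20.25
```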

The code is written in the following steps:

Step 1: Create the model parameters

The fields of model are:

N: horizon length (here the prediction horizon equals the control horizon; how to set them differently is left open)

nvar: number of variables per stage, i.e. the size of z; here we take z = [x1; x2; u], but the ordering can be chosen freely

neq: number of equality constraints

objective: stage cost function

objectiveN: terminal cost of the MPC problem

eq: equality constraints (the system dynamics)

E: selection matrix

xinitidx: indices of the state variables within z; this tells the solver which entries are states, so the initial state x0 can be supplied later

lb: lower bound on z

ub: upper bound on z

Note: with z = [x1; x2; u], the selection matrix E decides which entries of z(i+1) are fixed by the stage equality E*z(i+1) = eq(z(i)). If E = eye(3), i.e. no selection, this equality is applied over all N stages (prediction horizon = control horizon), producing z(k+1), z(k+2), ..., z(k+N); correspondingly, the objective sums the stage costs over the N stages.
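To see what E = eye(3) implies, the stage equality E*z(i+1) = eq(z(i)) can be unrolled by hand. A small NumPy sketch (A and B taken from the example below) shows that besides propagating the two states, the third row of eq, which returns z(3) itself, forces the input to stay constant across the horizon:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 1.0]])
B = np.array([1.0, 0.5])

# eq(z) from the post: next state A*x + B*u, plus u itself as the third row
def eq(z):
    x, u = z[:2], z[2]
    return np.concatenate([A @ x + B * u, [u]])

E = np.eye(3)   # no selection: E*z(i+1) = eq(z(i)) fixes all of z(i+1)

# unroll over an N = 5 horizon from some first-stage z(1) = [x1; x2; u1]
z = np.array([-4.0, 2.0, 0.3])
stages = [z]
for _ in range(4):
    z = np.linalg.solve(E, eq(z))   # E*z_next = eq(z)  =>  z_next
    stages.append(z)

inputs = [s[2] for s in stages]
print(inputs)   # the input is identical at every stage
```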

The code is as follows:

%% FORCESPRO multistage form
% assume variable ordering z(i) = [xi; ui] for i=1...N

% dimensions
model.N     = 5;         % horizon length
model.nvar  = nx+nu;     % number of variables per stage
model.neq   = nx+nu;     % number of equality constraints per stage

% objective 
model.objective = @(z) z(3)*R*z(3) + [z(1);z(2)]'*Q*[z(1);z(2)];
model.objectiveN = @(z) [z(1);z(2)]'*P*[z(1);z(2)];

% equalities
model.eq = @(z) [ A(1,:)*[z(1);z(2)] + B(1)*z(3);
                  A(2,:)*[z(1);z(2)] + B(2)*z(3);
                  z(3)];

model.E = eye(3);        % selection matrix (no selection here)

% initial state
model.xinitidx = 1:2;

% inequalities
model.lb = [ xmin,    umin  ];
model.ub = [ xmax,    umax  ];

Step 2: Generate the solver

Feed the model parameters above into code generation. This step needs an internet connection to download the generated solver and may take a while, which is a drawback of FORCES_PRO: every parameter change requires regenerating (and re-downloading) the solver, although subsequent runs of an unchanged solver need no download.

%% Generate FORCESPRO solver

% get options
codeoptions = getOptions('FORCESNLPsolver');

%codeoptions.nlp.ad_tool = 'symbolic-math-tbx';

% generate code
FORCES_NLP(model, codeoptions);

Step 3: Run the simulation and plot the results

%% simulate
x1 = [-4; 2];
kmax = 50;
X = zeros(2,kmax+1); X(:,1) = x1;
U = zeros(1,kmax);
problem.x0 = zeros(model.N*model.nvar,1); % initial guess for the stacked variable z

for k = 1:kmax

    problem.xinit = X(:,k);

    [solverout,exitflag,info] = FORCESNLPsolver(problem); 

    if( exitflag == 1 )
        U(:,k) = solverout.x1(3);
        
    else
        error('Some problem in solver');
    end

    % propagate the plant: x(k+1) = A*x(k) + B*u(k), reusing model.eq
    update = model.eq([X(:,k); U(:,k)]);
    X(:,k+1) = update(1:2);

end

%% plot
figure; clf;
subplot(2,1,1); grid on; title('states'); hold on;
plot([1 kmax], [5 5], 'r--'); plot([1 kmax], [-5 -5], 'r--');
ylim(1.1*[-5, 5]); stairs(1:kmax, X(:,1:kmax)');

subplot(2,1,2); grid on; title('input'); hold on;
plot([1 kmax], [0.5 0.5], 'r--'); plot([1 kmax], [-0.5 -0.5], 'r--');
ylim(1.1*[-0.5, 0.5]); stairs(1:kmax, U(:,1:kmax)');
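The simulation loop above is standard receding-horizon bookkeeping: solve, apply the first-stage input, propagate the plant, repeat. A solver-independent sketch in Python, where the hypothetical stub solve_first_input stands in for the FORCESNLPsolver call (here it simply returns a fixed input, purely to exercise the loop structure):

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.5]])

def solve_first_input(x):
    """Hypothetical stand-in for the MPC solver: returns the first-stage input.
    A real implementation would solve the horizon problem starting from x."""
    return np.array([-0.1])

kmax = 50
X = np.zeros((2, kmax + 1))
U = np.zeros((1, kmax))
X[:, 0] = [-4.0, 2.0]

for k in range(kmax):
    U[:, k] = solve_first_input(X[:, k])     # "solverout.x1(3)" in the MATLAB code
    X[:, k + 1] = A @ X[:, k] + B @ U[:, k]  # plant update

print(X[:, 1])   # first propagated state: [-2.5, 1.95]
```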

Results: (the resulting state and input plots are not reproduced here)

The complete code is as follows:

% Simple MPC - double integrator example for use with FORCESPRO
% 
%  min   xN'*P*xN + sum_{i=1}^{N-1} xi'*Q*xi + ui'*R*ui
% xi,ui
%       s.t. x1 = x
%            x_{i+1} = A*xi + B*ui  for i = 1...N-1
%            xmin <= xi <= xmax     for i = 1...N
%            umin <= ui <= umax     for i = 1...N
%
% Ideally P is the solution of the Riccati eqn. from the LQR problem
% (this script simply uses P = 10*Q)
%
% (c) Embotech AG, Zurich, Switzerland, 2015-2023.

clc;
close all;
addpath('D:\FORCE_PRO\forces_pro_client')
%% system
A = [1.1 1; 0 1];
B = [1; 0.5];
[nx,nu] = size(B);

%% MPC setup
Q = eye(nx);
R = eye(nu);
P = 10*Q;        % terminal weight (simple stand-in for the LQR Riccati solution)
umin = -0.5;
umax = 0.5;
xmin = [-5, -5]; xmax = [5, 5];


%% FORCESPRO multistage form
% assume variable ordering z(i) = [xi; ui] for i=1...N

% dimensions
model.N     = 5;         % horizon length
model.nvar  = nx+nu;     % number of variables per stage
model.neq   = nx+nu;     % number of equality constraints per stage

% objective 
model.objective = @(z) z(3)*R*z(3) + [z(1);z(2)]'*Q*[z(1);z(2)];
model.objectiveN = @(z) [z(1);z(2)]'*P*[z(1);z(2)];

% equalities
model.eq = @(z) [ A(1,:)*[z(1);z(2)] + B(1)*z(3);
                  A(2,:)*[z(1);z(2)] + B(2)*z(3);
                  z(3)];

model.E = eye(3);        % selection matrix (no selection here)

% initial state
model.xinitidx = 1:2;

% inequalities
model.lb = [ xmin,    umin  ];
model.ub = [ xmax,    umax  ];


%% Generate FORCESPRO solver

% get options
codeoptions = getOptions('FORCESNLPsolver');

%codeoptions.nlp.ad_tool = 'symbolic-math-tbx';

% generate code
FORCES_NLP(model, codeoptions);

%% simulate
x1 = [-4; 2];
kmax = 50;
X = zeros(2,kmax+1); X(:,1) = x1;
U = zeros(1,kmax);
problem.x0 = zeros(model.N*model.nvar,1); % initial guess for the stacked variable z

for k = 1:kmax

    problem.xinit = X(:,k);

    [solverout,exitflag,info] = FORCESNLPsolver(problem); 

    if( exitflag == 1 )
        U(:,k) = solverout.x1(3);
        
    else
        error('Some problem in solver');
    end

    % propagate the plant: x(k+1) = A*x(k) + B*u(k), reusing model.eq
    update = model.eq([X(:,k); U(:,k)]);
    X(:,k+1) = update(1:2);

end

%% plot
figure; clf;
subplot(2,1,1); grid on; title('states'); hold on;
plot([1 kmax], [5 5], 'r--'); plot([1 kmax], [-5 -5], 'r--');
ylim(1.1*[-5, 5]); stairs(1:kmax, X(:,1:kmax)');

subplot(2,1,2); grid on; title('input'); hold on;
plot([1 kmax], [0.5 0.5], 'r--'); plot([1 kmax], [-0.5 -0.5], 'r--');
ylim(1.1*[-0.5, 0.5]); stairs(1:kmax, U(:,1:kmax)');
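The header comment notes that P should ideally come from the LQR Riccati equation, while the script simply uses P = 10*Q. For reference, the discrete-time Riccati solution can be computed by fixed-point iteration; a NumPy sketch with the A, B, Q, R from this example (this illustrates only the math behind the terminal weight, not any FORCESPRO API):

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.5]])
Q = np.eye(2)
R = np.eye(1)

# Value iteration on the discrete algebraic Riccati equation:
#   P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR gain at this iterate
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ K
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

print(np.round(P, 3))   # terminal weight from the Riccati recursion
```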

