【EI Reproduction】Optimal Dispatch of a Virtual Power Plant with P2G-CCS Coupling and Hydrogen-Blended Gas Based on Ladder Carbon Trading (Matlab implementation)

💥💥💞💞Welcome to this blog❤️❤️💥💥

🏆Blogger's strengths: 🌞🌞🌞The content of this blog aims to be rigorous in reasoning and clear in logic, for the reader's convenience.

⛳️Motto: On a hundred-mile journey, ninety miles is only the halfway point.

📋📋📋The table of contents of this article is as follows:🎁🎁🎁

Contents

💥1 Overview

📚2 Results

🎉3 References

🌈4 Matlab code, data, and article


💥1 Overview

Literature source:

Abstract: Under China's "30·60" dual-carbon targets, achieving low carbon emissions requires coordination along two paths: low-carbon policy and low-carbon technology. To this end, a virtual power plant (VPP) containing P2G-CCS (power to gas and carbon capture system) coupling and hydrogen-blended gas is established, and a VPP optimal dispatch strategy based on a ladder carbon trading mechanism is proposed. First, at the low-carbon technology level, mathematical models of the hydrogen-blended gas turbine, hydrogen-blended gas boiler, two-stage power to gas (P2G) unit, and carbon capture system (CCS) are established for the P2G-CCS coupling and hydrogen-blended gas subsystems. Second, at the low-carbon policy level, a ladder carbon trading model is established to constrain the system's carbon emissions. Finally, on this modeling basis, an optimal dispatch strategy is proposed whose objective is to minimize the sum of the carbon trading cost, the gas purchase and coal consumption cost, the carbon sequestration cost, the unit start-stop cost, and the wind curtailment cost. After linearizing the model, MATLAB is used to call CPLEX together with a particle swarm algorithm to solve it. Comparisons across different scenarios verify the effectiveness of the proposed model, and the effects of different fixed hydrogen blending ratios, variable blending ratios, and different ladder carbon trading parameters on the low-carbon performance and economy of the VPP are analyzed.

Keywords:

low carbon; carbon capture; ladder carbon trading; hydrogen blending ratio; virtual power plant
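The ladder carbon trading mechanism in the abstract prices emissions in tiers: emissions beyond the free quota are split into fixed-length intervals, and each successive interval is charged at a higher unit price. A minimal sketch of such a tiered cost function is given below; the parameter names and numbers (base price `c`, interval length `d`, growth rate `alpha`, free quota `Eq`) are illustrative, not the paper's values.

```matlab
function F = ladder_carbon_cost(E, Eq, c, d, alpha)
% Minimal sketch of a ladder (tiered) carbon trading cost.
%   E     - actual carbon emission of the VPP
%   Eq    - free emission quota allocated to the VPP
%   c     - base carbon price; d - tier interval length
%   alpha - price growth rate per tier
dE = E - Eq;                        % emission exceeding the free quota
F  = 0;
k  = 0;                             % tier index
while dE > 0
    seg = min(dE, d);               % emission falling into this tier
    F   = F + c*(1 + alpha*k)*seg;  % unit price grows by alpha per tier
    dE  = dE - seg;
    k   = k + 1;
end
if E < Eq                           % surplus quota sold at the base price
    F = -c*(Eq - E);
end
end
```

For example, with `c = 0.25`, `d = 5`, `alpha = 0.2`, a quota of 100 and an emission of 120, the four exceeded tiers are priced at factors 1.0, 1.2, 1.4, and 1.6 of the base price.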

Carbon capture is a low-carbon technology: retrofitting thermal power plants with carbon capture allows high-carbon thermal units to operate with low emissions, which makes it an important research topic in the low-carbon power trend. Reference [1] analyzed in depth the internal energy flows of a carbon capture power plant, quantitatively characterized its operating region with a mathematical model, and showed that carbon capture plants have a wider regulation range and a faster response. Reference [2] explored the wind power accommodation capability of carbon capture plants on day-ahead, intra-day, and real-time timescales. The CO2 captured by CCS can serve as a high-quality carbon feedstock for the P2G process; reference [3] treated the P2G-carbon-capture plant as a whole and established a coordinated optimization model for it. Gas-fired units are also carbon emission sources whose CO2-containing flue gas must be treated; reference [4] fed the CO2 captured at a gas-fired CHP plant into power-to-gas equipment to synthesize gas supplied back to the plant, reducing carbon emissions, gas purchases, and wind curtailment. On the basis of CCS-P2G coupling, reference [5] further used the flue-gas treatment of CCS and of a waste-incineration plant for load shifting to smooth renewable fluctuations. Reference [6] coupled P2G with CCS and extended the coupling to an integrated energy system with diverse energy carriers. References [7-8] connected P2G and CCS through carbon storage equipment, decoupling CO2 capture from CO2 utilization. References [9-10] established a CCS configured with solvent storage, which decouples carbon absorption from solvent regeneration; this yields a wider net-output regulation range and provides more abundant flexible capacity when the plant participates in system peak shaving. To address the large capture energy cost of carbon capture, reference [11] adopted a flexible capture operation mode to adjust the capture level and reduce this cost, while using solvent storage tanks to shift the capture energy consumption in time. The above works fully exploited the regulation flexibility and low-carbon characteristics of CCS, both alone and coupled with other units, but in CCS-P2G coupled systems they ignored the electricity-to-hydrogen stage, the other utilization paths for hydrogen, and the low efficiency of methanation, and none of them combined the system with a ladder carbon trading mechanism. This paper adopts hydrogen-blended gas to improve hydrogen utilization. Regarding hydrogen-blended gas turbines, reference [12] summarized the modes of hydrogen gas-turbine combined cycles, reference [13] performed an energy-efficiency analysis of a hydrogen-fuel chemical-looping combustion gas-turbine cycle, reference [14] carried out CFD numerical simulation of the combustion conditions of a hydrogen-gas-blended micro gas turbine, and reference [15] proposed a new hydrogen-storage-coupled natural gas combined cycle system and analyzed its energy performance. However, these works all focus on CFD simulation and efficiency calculation of the gas turbine itself, without taking a macroscopic multi-energy-system perspective.

The proposed VPP with P2G-CCS coupling and hydrogen-blended gas is shown in Fig. 1. It includes wind turbines, a gas turbine, a coal-fired unit, a gas boiler, a two-stage P2G unit, an electric boiler, and electricity and heat storage units. The loads comprise electric and heat demand: the electric load is met by the gas turbine, the coal-fired unit, and wind power, while the heat load is supplied in coordination by the gas turbine, the gas boiler, and the electric boiler.
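The coordination described above reduces, at its core, to per-period electric and heat balance constraints over the listed units. A minimal sketch of these two balances is given below, assuming YALMIP with the CPLEX solver (the paper linearizes its model and solves it via MATLAB/CPLEX); all variable names, bounds, loads, and cost coefficients here are illustrative placeholders, and the P2G, CCS, and storage units are omitted for brevity.

```matlab
% Minimal sketch of the VPP's electric and heat balances over T periods.
% Assumes YALMIP + CPLEX; numbers are hypothetical, not the paper's data.
T     = 24;
Pload = 100*ones(1,T);  Hload = 60*ones(1,T);   % hypothetical load profiles
P_gt  = sdpvar(1,T);    P_coal = sdpvar(1,T);   % gas turbine / coal unit output
P_w   = sdpvar(1,T);    P_eb   = sdpvar(1,T);   % wind output / electric boiler input
H_gt  = sdpvar(1,T);    H_gb   = sdpvar(1,T);   % heat from gas turbine / gas boiler
C = [0 <= P_w <= 80, 0 <= P_gt <= 120, 0 <= P_coal <= 150, ...
     0 <= P_eb, 0 <= H_gt, 0 <= H_gb];
C = [C, P_gt + P_coal + P_w == Pload + P_eb];   % electric balance
C = [C, H_gt + H_gb + 0.9*P_eb == Hload];       % heat balance (boiler eff. 0.9)
obj = sum(30*P_coal + 25*P_gt + 20*H_gb);       % illustrative fuel costs only
optimize(C, obj, sdpsettings('solver','cplex'));
```

In the full model, the objective would additionally carry the carbon trading, carbon sequestration, start-stop, and wind curtailment cost terms named in the abstract.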

 

📚2 Results

 

 

🎉3 References

Some content in this article is drawn from the internet; sources are noted or cited as references. Omissions are hard to avoid entirely; if anything is inappropriate, please contact us at any time for removal.

[1] Chen Dengyong, Liu Fang, Liu Shuai. Optimal dispatch of virtual power plant with P2G-CCS coupling and hydrogen-blended gas based on ladder carbon trading [J]. Power System Technology (电网技术), 2022, 46(06): 2042-2054. DOI: 10.13335/j.1000-3673.pst.2021.2177.

🌈4 Matlab code, data, and article

The following is example Matlab code for a simple function optimizer based on Bayesian optimization:

```matlab
function [xopt,fopt] = bayesopt_fun(f,x0,lb,ub,opts)
% BAYESOPT_FUN: Bayesian optimization of a function
%   [XOPT,FOPT] = BAYESOPT_FUN(F,X0,LB,UB,OPTS) finds the minimum of a
%   function F using Bayesian optimization. X0 is the initial guess,
%   LB and UB are the lower and upper bounds of the variables, and OPTS
%   is an options structure created using BAYESOPT_OPTIONS. The function
%   F should take a vector of variables as input and return a scalar output.
%
%   Example usage:
%     f = @(x) sin(3*x) + x.^2 - 0.7*x;
%     opts = bayesopt_options('AcquisitionFunctionName','expected-improvement-plus');
%     [xopt,fopt] = bayesopt_fun(f,0,0,1,opts);
%
%   See also BAYESOPT_OPTIONS.

% Check inputs
narginchk(4,5);
if nargin < 5, opts = bayesopt_options(); end
assert(isa(f,'function_handle'),'F must be a function handle');
assert(isvector(x0) && isnumeric(x0),'X0 must be a numeric vector');
assert(isvector(lb) && isnumeric(lb),'LB must be a numeric vector');
assert(isvector(ub) && isnumeric(ub),'UB must be a numeric vector');
assert(all(size(x0)==size(lb)) && all(size(x0)==size(ub)), ...
    'X0, LB, and UB must have the same size');
opts = bayesopt_options(opts);      % ensure opts has all fields

% Initialize (one row of X per evaluated point)
X = x0(:)';
Y = f(x0);
Xbest = x0; Ybest = Y;
fmin = min(Y); fmax = max(Y);

% Loop over iterations
for i = 1:opts.MaxIterations
    % Train surrogate model (kernel parameters are optional)
    gpArgs = {'Basis','linear','FitMethod','exact','PredictMethod','exact', ...
              'Standardize',true,'KernelFunction',opts.KernelFunction};
    if ~isempty(opts.KernelParameters)
        gpArgs = [gpArgs,{'KernelParameters',opts.KernelParameters}];
    end
    model = fitrgp(X,Y,gpArgs{:});

    % Find next point to evaluate
    if strcmp(opts.AcquisitionFunctionName,'expected-improvement-plus')
        % Expected improvement with a small positive improvement threshold
        impThreshold = 0.01*(fmax-fmin);
        acqFcn = @(mdl,x) expected_improvement_plus(x,mdl,fmin,impThreshold);
    else
        % Use the acquisition function specified in the options
        acqFcn = str2func(opts.AcquisitionFunctionName);
    end
    xnext = bayesopt_acq(acqFcn,model,lb,ub,opts.AcquisitionSamples);

    % Evaluate function at next point and update data
    ynext = f(xnext);
    X = [X; xnext(:)'];
    Y = [Y; ynext];
    if ynext < Ybest, Xbest = xnext; Ybest = ynext; end
    fmin = min(Y); fmax = max(Y);

    % Check stopping criterion (relative improvement below tolerance)
    if i > 1 && abs(Y(end)-Y(end-1)) <= opts.TolFun*abs(Ybest)
        break;
    end
end

% Return best point found
xopt = Xbest;
fopt = Ybest;
end

function EI = expected_improvement_plus(X,model,fmin,impThreshold)
% EXPECTED_IMPROVEMENT_PLUS: Expected improvement with a small positive
% improvement threshold.
%   EI = EXPECTED_IMPROVEMENT_PLUS(X,MODEL,FMIN,IMPTHRESHOLD) computes the
%   expected improvement (EI) of the surrogate MODEL at the points X, where
%   FMIN is the current minimum of the function being modeled:
%     EI = E[max(FMIN - Y, 0)]
%   with Y the posterior prediction of the surrogate at X. Where the
%   predicted value lies within IMPTHRESHOLD of FMIN, EI is set to
%   IMPTHRESHOLD instead, to encourage exploration of the search space
%   even when the expected improvement is very small.
%
%   See also BAYESOPT_ACQ.
narginchk(4,4);
% Predicted mean and standard deviation at X
[Y,sigma] = predict(model,X);
% Compute expected improvement (guard against zero variance)
z  = (fmin - Y - impThreshold)./max(sigma,eps);
EI = (fmin - Y - impThreshold).*normcdf(z) + sigma.*normpdf(z);
EI(sigma==0) = 0;                       % no information where variance is zero
% Encourage exploration where the predicted improvement is negligible
EI(Y >= fmin - impThreshold) = impThreshold;
end

function opts = bayesopt_options(varargin)
% BAYESOPT_OPTIONS: Create options structure for Bayesian optimization
%   OPTS = BAYESOPT_OPTIONS() creates an options structure with default
%   values for all parameters.
%
%   OPTS = BAYESOPT_OPTIONS(P1,V1,P2,V2,...) creates an options structure
%   with parameter names and values specified in pairs; unspecified
%   parameters take their default values.
%
%   OPTS = BAYESOPT_OPTIONS(OLDOPTS,P1,V1,P2,V2,...) starts from the
%   defaults, copies the fields of OLDOPTS, then overwrites any parameters
%   specified in pairs.
%
%   Available parameters:
%     MaxIterations           - Maximum number of iterations (default 100)
%     TolFun                  - Tolerance on function value improvement
%                               (default 1e-6)
%     KernelFunction          - Kernel for Gaussian process regression
%                               (default 'squaredexponential')
%     KernelParameters        - Kernel parameters (default [])
%     AcquisitionFunctionName - Acquisition function used to choose the
%                               next point (default
%                               'expected-improvement-plus')
%     AcquisitionSamples      - Number of samples used to evaluate the
%                               acquisition function (default 1000)
%
%   See also BAYESOPT_FUN, BAYESOPT_ACQ.

% Define default options
opts = struct('MaxIterations',100,'TolFun',1e-6, ...
    'KernelFunction','squaredexponential','KernelParameters',[], ...
    'AcquisitionFunctionName','expected-improvement-plus', ...
    'AcquisitionSamples',1000);

% Overwrite default options with user-specified options
if nargin > 0
    if isstruct(varargin{1})
        % Copy fields of the old structure onto the defaults
        oldopts = varargin{1};
        fn = fieldnames(oldopts);
        for k = 1:numel(fn)
            fieldname = validatestring(fn{k},fieldnames(opts));
            opts.(fieldname) = oldopts.(fn{k});
        end
        % Then overwrite with any additional name/value pairs
        for i = 2:2:nargin-1
            fieldname = validatestring(varargin{i},fieldnames(opts));
            opts.(fieldname) = varargin{i+1};
        end
    else
        % Overwrite defaults with the given name/value pairs
        for i = 1:2:nargin-1
            fieldname = validatestring(varargin{i},fieldnames(opts));
            opts.(fieldname) = varargin{i+1};
        end
    end
end
end

function xnext = bayesopt_acq(acqFcn,model,lb,ub,nSamples)
% BAYESOPT_ACQ: Find the next point to evaluate via an acquisition function
%   XNEXT = BAYESOPT_ACQ(ACQFCN,MODEL,LB,UB,NSAMPLES) maximizes the
%   acquisition function ACQFCN over NSAMPLES random samples drawn within
%   the bounds LB and UB, using the regression model MODEL.
%
%   ACQFCN should be a function handle taking a regression model and a
%   matrix of input points (one row per point, one column per variable)
%   and returning a vector of acquisition values. XNEXT is the sampled
%   point with the largest acquisition value.
%
%   See also BAYESOPT_FUN, EXPECTED_IMPROVEMENT_PLUS.

% Check inputs
narginchk(4,5);
assert(isa(acqFcn,'function_handle'),'ACQFCN must be a function handle');
assert(isa(model,'RegressionGP'),'MODEL must be a RegressionGP object');
assert(isvector(lb) && isnumeric(lb),'LB must be a numeric vector');
assert(isvector(ub) && isnumeric(ub),'UB must be a numeric vector');
assert(all(size(lb)==size(ub)),'LB and UB must have the same size');
if nargin < 5, nSamples = 1000; end

% Generate uniform random samples within the bounds
X = bsxfun(@plus,lb(:)',bsxfun(@times,rand(nSamples,numel(lb)),ub(:)'-lb(:)'));
% Evaluate the acquisition function and pick its maximizer
acq = acqFcn(model,X);
[~,imax] = max(acq);
xnext = X(imax,:);
end
```

This example implements a function optimizer based on Bayesian optimization: a Gaussian process regression model approximates the objective function, and expected improvement plus (EI+) serves as the acquisition function. You can use this code as a starting point for your own optimization problem and modify it as needed.
