Battery SOH Simulation Series: Battery SOH Estimation Based on an LSTM Neural Network

Battery SOH Estimation Based on an LSTM Neural Network

  Unlike a BP (back-propagation) neural network, a recurrent neural network (RNN) does not treat each input independently: it feeds its hidden state forward in time, giving the network a memory of earlier inputs. Although RNNs can achieve high accuracy on sequence tasks, plain RNNs suffer from the vanishing-gradient problem over long sequences. A family of improved RNN architectures addresses this, and the long short-term memory (LSTM) network is among the most effective of them. An LSTM-based battery SOH estimation method is described below.
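For reference, a standard LSTM cell computes the following gate equations at each step $t$, where $\sigma$ is the logistic sigmoid and $\odot$ the elementwise product. Note that the code later in this post implements a common variant in which the three gate pre-activations take the previous cell state $c_{t-1}$ (peephole-style) rather than the hidden state $h_{t-1}$:

$$
\begin{aligned}
i_t &= \sigma\!\left(W_{xi}\,x_t + W_{hi}\,h_{t-1} + b_i\right) && \text{(input gate)}\\
f_t &= \sigma\!\left(W_{xf}\,x_t + W_{hf}\,h_{t-1} + b_f\right) && \text{(forget gate)}\\
o_t &= \sigma\!\left(W_{xo}\,x_t + W_{ho}\,h_{t-1} + b_o\right) && \text{(output gate)}\\
\tilde{c}_t &= \tanh\!\left(W_{xc}\,x_t + W_{hc}\,h_{t-1} + b_c\right) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$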
  (1) Lithium-ion battery cycle-life data
  The data come from the lithium-ion battery test platform at the NASA Prognostics Center of Excellence; battery No. 5 (rated capacity 2 Ah) is used here. The cycling tests were run at room temperature: each battery was charged at a constant current of 1.5 A until the voltage reached the 4.2 V charge cut-off, then held at constant voltage until the charge current fell to 20 mA. Discharge was performed at a constant current (CC) of 2 A until the terminal voltages of the test batteries fell to 2.7 V, 2.5 V, 2.2 V, and 2.5 V respectively (2.7 V for battery No. 5). The experiments were stopped once a battery reached the end-of-life (EOL) criterion, defined as a 30% fade in rated capacity.
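A minimal loading sketch for this dataset is shown below. It assumes the commonly distributed B0005.mat layout, in which B0005.cycle is a struct array whose discharge entries carry a data.Capacity field; verify the field names against your copy of the files.

```matlab
S = load('B0005.mat');                  % NASA PCoE file for battery No. 5 (assumed layout)
cycles = S.B0005.cycle;
cap = [];
for k = 1:numel(cycles)
    if strcmp(cycles(k).type, 'discharge')
        cap(end+1) = cycles(k).data.Capacity; %#ok<AGROW> % measured discharge capacity (Ah)
    end
end
plot(cap), xlabel('Discharge cycle'), ylabel('Capacity (Ah)')
```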
  (2) Data preprocessing
  The data are scaled into the interval [0, 1] with min-max normalization:
$$x_{\text{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$
  where $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of the input data $x$.
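A minimal sketch of this step is given below. The code later in this post uses the legacy premnmx/tramnmx/postmnmx functions, which scale to [-1, 1]; mapminmax is their current MATLAB replacement, and here it is told to use the [0, 1] range from the formula above:

```matlab
x = 2*rand(1,100);                         % illustrative capacity sequence (Ah)
[x_norm, ps] = mapminmax(x, 0, 1);         % ps records x_min and x_max
x_new_norm = mapminmax('apply', 2*rand(1,10), ps);  % reuse the SAME scaling on new data
x_back = mapminmax('reverse', x_norm, ps); % map normalized values back to Ah
```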
  (3) Simulation analysis
  The charge capacity from the NASA lithium-ion battery experiments is used as the model input, and the model output is the battery SOH referenced to the discharge capacity. Training iteratively determines the network weights and biases.
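As a short illustration of the output quantity (assuming the usual capacity-ratio definition of SOH; the 2 Ah rated capacity comes from the test description above, and the fade curve here is a placeholder, not measured data):

```matlab
rated_capacity = 2.0;                         % Ah, battery No. 5
discharge_capacity = linspace(2.0, 1.4, 168); % illustrative fade to 70% (the EOL criterion)
soh = discharge_capacity / rated_capacity;    % SOH = 1 when fresh, 0.7 at EOL
```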
(Figure: predicted vs. measured capacity fade over the battery's cycle life)
  As the figure shows, the model tracks the overall capacity-fade trend closely. The full code is given below:

```matlab
function pre_data = LSTM_main(d,h,train_data,test_data)
%% Preprocessing
lag = 8;   % number of lagged capacity points forming one input sample
% d = 51;
[train_input,train_output] = LSTM_data_process(d,train_data,lag); % build input/output pairs

% Legacy NN Toolbox normalization: premnmx scales each row to [-1,1]
[train_input,min_input,max_input,train_output,min_output,max_output] = premnmx(train_input',train_output');

input_length = size(train_input,1);   % length of one input sample
output_length = size(train_output,1); % length of one output sample
train_num = size(train_input,2);      % number of training samples
test_num = size(test_data,2);         % number of test samples
%% Network parameter initialization
% node counts
input_num = input_length;
cell_num = 10;   % number of LSTM memory cells
output_num = output_length;
% gate biases
bias_input_gate = rand(1,cell_num);
bias_forget_gate = rand(1,cell_num);
bias_output_gate = rand(1,cell_num);
% weight initialization (small random values, scaled down by ab)
ab = 20;
weight_input_x = rand(input_num,cell_num)/ab;  
weight_input_h = rand(output_num,cell_num)/ab;  
weight_inputgate_x = rand(input_num,cell_num)/ab;  
weight_inputgate_h = rand(cell_num,cell_num)/ab;  
weight_forgetgate_x = rand(input_num,cell_num)/ab;  
weight_forgetgate_h = rand(cell_num,cell_num)/ab;  
weight_outputgate_x = rand(input_num,cell_num)/ab;  
weight_outputgate_h = rand(cell_num,cell_num)/ab;  
% hidden-to-output weights
weight_preh_h = rand(cell_num,output_num);
% network state initialization
cost_gate = 1e-6;  % early-stopping threshold on the squared error
h_state = rand(output_num,train_num+test_num);
cell_state = rand(cell_num,train_num+test_num);
%% Training
for iter = 1:3000  % maximum number of training iterations
    yita = 0.01;   % learning rate for the weight updates
    for m = 1:train_num
    
        % forward pass
        if(m==1)
            gate = tanh(train_input(:,m)' * weight_input_x);
            input_gate_input = train_input(:,m)' * weight_inputgate_x + bias_input_gate;
            output_gate_input = train_input(:,m)' * weight_outputgate_x + bias_output_gate;
            for n = 1:cell_num
                input_gate(1,n) = 1 / (1 + exp(-input_gate_input(1,n)));   % input gate (sigmoid)
                output_gate(1,n) = 1 / (1 + exp(-output_gate_input(1,n))); % output gate (sigmoid)
            end
            forget_gate = zeros(1,cell_num);       % no previous state at the first step
            forget_gate_input = zeros(1,cell_num);
            cell_state(:,m) = (input_gate .* gate)';
            
        else
            gate = tanh(train_input(:,m)' * weight_input_x + h_state(:,m-1)' * weight_input_h);
            input_gate_input = train_input(:,m)' * weight_inputgate_x + cell_state(:,m-1)' * weight_inputgate_h + bias_input_gate;
            forget_gate_input = train_input(:,m)' * weight_forgetgate_x + cell_state(:,m-1)' * weight_forgetgate_h + bias_forget_gate;
            output_gate_input = train_input(:,m)' * weight_outputgate_x + cell_state(:,m-1)' * weight_outputgate_h + bias_output_gate;
            for n = 1:cell_num
                input_gate(1,n) = 1/(1+exp(-input_gate_input(1,n)));
                forget_gate(1,n) = 1/(1+exp(-forget_gate_input(1,n)));
                output_gate(1,n) = 1/(1+exp(-output_gate_input(1,n)));
            end
            cell_state(:,m) = (input_gate .* gate + cell_state(:,m-1)' .* forget_gate)';   
        end
  
        pre_h_state = tanh(cell_state(:,m)') .* output_gate;
        h_state(:,m) = (pre_h_state * weight_preh_h)';
        % error computation
        Error = h_state(:,m) - train_output(:,m);
        Error_Cost(1,iter) = sum(Error.^2);  % sum of squared errors over the output points
        if(Error_Cost(1,iter) < cost_gate)   % stop once the error target is met
            flag = 1;
            break;
        else
            [  weight_input_x,...
               weight_input_h,...
               weight_inputgate_x,...
               weight_inputgate_h,...
               weight_forgetgate_x,...
               weight_forgetgate_h,...
               weight_outputgate_x,...
               weight_outputgate_h,...
               weight_preh_h ] = LSTM_updata_weight(m,yita,Error,...
                                                   weight_input_x,...
                                                   weight_input_h,...
                                                   weight_inputgate_x,...
                                                   weight_inputgate_h,...
                                                   weight_forgetgate_x,...
                                                   weight_forgetgate_h,...
                                                   weight_outputgate_x,...
                                                   weight_outputgate_h,...
                                                   weight_preh_h,...
                                                   cell_state,h_state,...
                                                   input_gate,forget_gate,...
                                                   output_gate,gate,...
                                                   train_input,pre_h_state,...
                                                   input_gate_input,...
                                                   output_gate_input,...
                                                   forget_gate_input,input_num,cell_num);
       
        end
    end
    if(Error_Cost(1,iter)<cost_gate)
        break;
    end
end

%% Test phase
% seed the model with the last lag points of the training series
test_input = train_data(end-lag+1:end);
test_input = tramnmx(test_input',min_input,max_input);
% test_input = mapminmax('apply',test_input',ps_input);

% forward pass: recursive multi-step prediction
for m = train_num + 1:train_num + test_num
    gate = tanh(test_input' * weight_input_x + h_state(:,m-1)' * weight_input_h);
    % NOTE: the gates use cell_state here to match the training-phase equations
    % (the original used h_state, which mismatches dimensions when output_num ~= cell_num)
    input_gate_input = test_input' * weight_inputgate_x + cell_state(:,m-1)' * weight_inputgate_h + bias_input_gate;
    forget_gate_input = test_input' * weight_forgetgate_x + cell_state(:,m-1)' * weight_forgetgate_h + bias_forget_gate;
    output_gate_input = test_input' * weight_outputgate_x + cell_state(:,m-1)' * weight_outputgate_h + bias_output_gate;
    for n = 1:cell_num
        input_gate(1,n) = 1/(1+exp(-input_gate_input(1,n)));
        forget_gate(1,n) = 1/(1+exp(-forget_gate_input(1,n)));
        output_gate(1,n) = 1/(1+exp(-output_gate_input(1,n)));  
    end
    cell_state(:,m) = (input_gate .* gate + cell_state(:,m-1)' .* forget_gate)';  
    pre_h_state = tanh(cell_state(:,m)') .* output_gate;
    h_state(:,m) = (pre_h_state * weight_preh_h)';
    
    % feed the current prediction back as part of the next input
    test_input = postmnmx(test_input,min_input,max_input);
    now_prepoint = postmnmx(h_state(:,m),min_output,max_output);
    % test_input = mapminmax('reverse',test_input,ps_input);
    test_input = [test_input(2:end); now_prepoint];
    test_input = tramnmx(test_input,min_input,max_input);
end

pre_data = postmnmx(h_state(:,train_num + h:h:train_num + test_num),min_output,max_output);

all_pre = postmnmx(h_state(:,1:h:train_num + test_num),min_output,max_output);

% plotting
figure
title('LSTM prediction')
hold on
plot(1:size([train_data test_data],2),[train_data test_data], 'o-', 'color','r', 'linewidth', 1);
plot(size(train_data,2) + h:h:size([train_data test_data],2),pre_data, '*-','color','b','linewidth', 1);
plot([size(train_data,2) size(train_data,2)],[-0.01 0.01],'g-','LineWidth',4); % train/test boundary marker
legend({'measured','predicted'});
end
```
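A hypothetical driver for the function above is sketched below. LSTM_data_process is the author's helper and is not shown in this post; d is forwarded to it unchanged, and h is the stride used when reading predictions out of h_state, so the values here are placeholders rather than the author's settings:

```matlab
% cap: per-cycle discharge capacity in Ah (replace with real data, e.g. from
% the loading sketch after the NASA data description above)
cap = linspace(2.0, 1.3, 168) + 0.01*randn(1,168);  % illustrative fade curve

n_train = floor(0.8*numel(cap));   % ~80/20 train/test split
train_data = cap(1:n_train);
test_data  = cap(n_train+1:end);

d = 51;  % forwarded to the author's LSTM_data_process helper (not shown)
h = 1;   % prediction stride
pre_data = LSTM_main(d, h, train_data, test_data);
```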

  If you would like to see more simulation write-ups like this one, you can follow my WeChat official account.

