An example of combining particle swarm optimization with a neural network (code)

function psobp
% BP neural network trained by PSO algorithm
clc
clear all
% generate training samples according to eutrophication evaluation standard
% of surface water
st=cputime;
AllSamIn= ...
[ 70.00 60.00 60.00 60.00 70.00
70.00 65.00 65.00 65.00 70.00
70.00 75.00 70.00 70.00 70.00
70.00 75.00 70.00 65.00 70.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 70.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 65.00 65.00 60.00 68.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 68.00
70.00 75.00 65.00 65.00 68.00
135.00 120.00 135.00 100.00 105.00
135.00 120.00 135.00 100.00 105.00
135.00 120.00 135.00 100.00 105.00
135.00 130.00 135.00 100.00 105.00
135.00 135.00 135.00 90.00 105.00
135.00 120.00 135.00 105.00 105.00
130.00 120.00 85.00 85.00 90.00
110.00 120.00 80.00 80.00 90.00
120.00 120.00 80.00 80.00 90.00
120.00 120.00 80.00 80.00 90.00
120.00 100.00 80.00 80.00 90.00
135.00 135.00 120.00 105.00 105.00
135.00 130.00 130.00 105.00 105.00
135.00 130.00 130.00 105.00 105.00
135.00 135.00 135.00 105.00 105.00
135.00 135.00 135.00 105.00 105.00
110.00 110.00 110.00 85.00 85.00
110.00 110.00 110.00 85.00 85.00
110.00 110.00 110.00 85.00 85.00
110.00 110.00 110.00 85.00 85.00
100.00 100.00 80.00 85.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
100.00 85.00 80.00 80.00 85.00
100.00 85.00 80.00 80.00 85.00
100.00 100.00 80.00 80.00 85.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
135.00 135.00 135.00 135.00 135.00
125.00 135.00 135.00 135.00 135.00
125.00 130.00 130.00 130.00 135.00
110.00 130.00 130.00 130.00 130.00
125.00 130.00 130.00 130.00 135.00
125.00 120.00 115.00 115.00 120.00
125.00 120.00 115.00 115.00 120.00
125.00 120.00 115.00 115.00 120.00
125.00 120.00 115.00 115.00 100.00
70.00 75.00 60.00 60.00 66.00
70.00 75.00 60.00 60.00 66.00
70.00 75.00 60.00 60.00 66.00
70.00 75.00 60.00 60.00 66.00
]';
% add noise to the output (disabled)
% NoiseVar=0.1;
% rand('state',sum(100*clock));
AllSamOut=[ 70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
135.00
135.00
135.00
135.00
135.00
135.00
130.00
110.00
120.00
120.00
120.00
135.00
135.00
135.00
135.00
135.00
110.00
110.00
110.00
110.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
100.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
125.00
125.00
110.00
125.00
125.00
125.00
125.00
125.00
70.00
70.00
70.00
70.00
]'; %+NoiseVar*randn(1,1000);

% preprocess: scale to [-1,+1]
global minAllSamOut;
global maxAllSamOut;
[AllSamInn,minAllSamIn,maxAllSamIn,AllSamOutn,minAllSamOut,maxAllSamOut] = premnmx(AllSamIn,AllSamOut);
% take 10% of all samples as test samples; the rest are training samples

TestSamIn=AllSamIn;
RealTestSamOut=[ 70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
70.00
68.00
68.00
70.00
70.00
70.00
70.00
70.00
70.00
68.00
70.00
70.00
68.00
68.00
70.00
70.00
70.00
70.00
70.00
133.00
133.00
133.00
135.00
135.00
135.00
133.00
110.00
118.00
118.00
118.00
130.00
135.00
135.00
135.00
135.00
110.00
110.00
110.00
110.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
98.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
135.00
130.00
123.00
68.00
68.00
68.00
68.00
]';
[TestSamInn,minTestSamIn,maxTestSamIn,RealTestSamOutn,minRealTestSamOut,maxRealTestSamOut] = premnmx(TestSamIn,RealTestSamOut);

% training samples
TrainSamIn=AllSamIn;
TrainSamOut=AllSamOut;
[TrainSamInn,minTrainSamIn,maxTrainSamIn,TrainSamOutn,minTrainSamOut,maxTrainSamOut] = premnmx(TrainSamIn,TrainSamOut);
% Evaluation Sample
%EvaSamIn= ...
% [6.5400 9.4000 8.2200 5.9100 6.1300 7.8400 8.2600 8.7900 7.2900 5.9300 4.6900 3.5100
% 0.1270 0.0390 0.0820 0.0860 0.1230 0.1370 0.0190 0.0640 0.0880 0.0500 0.0860 0.0750
% 1.2840 1.3580 1.3200 0.4500 2.2200 1.4700 1.0900 1.6600 0.8400 0.6600 0.5500 0.5200
% 3.3400 4.7200 5.0300 6.0000 7.9600 4.6600 4.0000 4.1500 4.0000 3.8100 1.6300 3.4500
%0.3500 0.4500 0.3800 0.4500 0.3000 0.4500 0.4000 0.4000 0.3000 0.3000 0.4000 0.3500];
%EvaSamInn=tramnmx(EvaSamIn,minAllSamIn,maxAllSamIn); % preprocessing
%**********************************************************
% training set
global Ptrain;
Ptrain = TrainSamInn;
global Ttrain;
Ttrain = TrainSamOutn;
% testing set
Ptest = TestSamInn;

%**********************************************************
% Initialize BPN parameters
global indim;
indim=5;
global hiddennum;
hiddennum=3;
global outdim;
outdim=1;
%**********************************************************
% Initialize PSO parameters
vmax=0.5; % Maximum velocity
minerr=0.0001; % Minimum error
wmax=0.90;
wmin=0.30;
global itmax; %Maximum iteration number
itmax=100;
c1=2;
c2=1.8;
%cf=c1+c2;
for iter=1:itmax
W(iter)=wmax-((wmax-wmin)/itmax)*iter; % weight declining linearly
end
% particles are initialized between (a,b) randomly
a=-1;
b=1;
% velocities are initialized between (m,n) (which can also start from zero)
m=-1;
n=1;
global N; % number of particles
N=40;
global D; % length of particle
D=(indim+1)*hiddennum+(hiddennum+1)*outdim;
global fvrec;
MinFit=[];
BestFit=[];
%MaxFit=[];
%MeanFit=[];

% Initialize positions of particles
rand('state',sum(100*clock));
X=a+(b-a)*rand(N,D,1);
%Initialize velocities of particles
V=m+(n-m)*rand(N,D,1);

%**********************************************************
%Function to be minimized: the performance function, i.e., the MSE of the network
global net;
net=newff(minmax(Ptrain),[hiddennum,outdim],{'tansig','purelin'});
fitness=fitcal(X,net,indim,hiddennum,outdim,D,Ptrain,Ttrain,minAllSamOut,maxAllSamOut);
fvrec(:,1,1)=fitness(:,1,1);
%[maxC,maxI]=max(fitness(:,1,1));
%MaxFit=[MaxFit maxC];
%MeanFit=[MeanFit mean(fitness(:,1,1))];
[C,I]=min(fitness(:,1,1));
MinFit=[MinFit C];
BestFit=[BestFit C];
L(:,1,1)=fitness(:,1,1); % record the fitness of every particle at each iteration
B(1,1,1)=C; % record the minimum fitness over the particles
gbest(1,:,1)=X(I,:,1); % the global best x in the population
%********************************************************
%Matrix composed of gbest vector
for p=1:N
G(p,:,1)=gbest(1,:,1);
end
for i=1:N;
pbest(i,:,1)=X(i,:,1);
end
V(:,:,2)=W(1)*V(:,:,1)+c1*rand*(pbest(:,:,1)-X(:,:,1))+c2*rand*(G(:,:,1)-X(:,:,1));
%V(:,:,2)=cf*(W(1)*V(:,:,1)+c1*rand*(pbest(:,:,1)-X(:,:,1))+c2*rand*(G(:,:,1)-X(:,:,1)));
%V(:,:,2)=cf*(V(:,:,1)+c1*rand*(pbest(:,:,1)-X(:,:,1))+c2*rand*(G(:,:,1)-X(:,:,1)));
% limits velocity of particles by vmax
for ni=1:N
for di=1:D
if V(ni,di,2)>vmax
V(ni,di,2)=vmax;
elseif V(ni,di,2)<-vmax
V(ni,di,2)=-vmax;
else
V(ni,di,2)=V(ni,di,2);
end
end
end
X(:,:,2)=X(:,:,1)+V(:,:,2);
%******************************************************
for j=2:itmax
%disp('Iteration and Current Best Fitness')
%disp(j-1)
%disp(B(1,1,j-1))
% Calculation of new positions
fitness=fitcal(X,net,indim,hiddennum,outdim,D,Ptrain,Ttrain,minAllSamOut,maxAllSamOut);
fvrec(:,1,j)=fitness(:,1,j);
%[maxC,maxI]=max(fitness(:,1,j));
%MaxFit=[MaxFit maxC];
%MeanFit=[MeanFit mean(fitness(:,1,j))];
[C,I]=min(fitness(:,1,j));
MinFit=[MinFit C];
BestFit=[BestFit min(MinFit)];
L(:,1,j)=fitness(:,1,j);
B(1,1,j)=C;
gbest(1,:,j)=X(I,:,j);
[C,I]=min(B(1,1,:));
% keep gbest as the best particle found so far
if B(1,1,j)<=C
gbest(1,:,j)=gbest(1,:,j);
else
gbest(1,:,j)=gbest(1,:,I);
end
if C<=minerr, break, end
%Matrix composed of gbest vector
if j>=itmax, break, end
for p=1:N
G(p,:,j)=gbest(1,:,j);
end
for i=1:N;
[C,I]=min(L(i,1,:));
if L(i,1,j)<=C
pbest(i,:,j)=X(i,:,j);
else
pbest(i,:,j)=X(i,:,I);
end
end
V(:,:,j+1)=W(j)*V(:,:,j)+c1*rand*(pbest(:,:,j)-X(:,:,j))+c2*rand*(G(:,:,j)-X(:,:,j));
%V(:,:,j+1)=cf*(W(j)*V(:,:,j)+c1*rand*(pbest(:,:,j)-X(:,:,j))+c2*rand*(G(:,:,j)-X(:,:,j)));
%V(:,:,j+1)=cf*(V(:,:,j)+c1*rand*(pbest(:,:,j)-X(:,:,j))+c2*rand*(G(:,:,j)-X(:,:,j)));
for ni=1:N
for di=1:D
if V(ni,di,j+1)>vmax
V(ni,di,j+1)=vmax;
elseif V(ni,di,j+1)<-vmax
V(ni,di,j+1)=-vmax;
else
V(ni,di,j+1)=V(ni,di,j+1);
end
end
end
X(:,:,j+1)=X(:,:,j)+V(:,:,j+1);
end
disp('Iteration and Current Best Fitness')
disp(j)
disp(B(1,1,j))
disp('Global Best Fitness and the Iteration Where It Occurred')
[C,I]=min(B(1,1,:))
% simulation network
for t=1:hiddennum
x2iw(t,:)=gbest(1,((t-1)*indim+1):(t*indim),j);
end
for r=1:outdim
x2lw(r,:)=gbest(1,(indim*hiddennum+1):(indim*hiddennum+hiddennum),j);
end
x2b=gbest(1,((indim+1)*hiddennum+1):D,j);
x2b1=x2b(1:hiddennum).';
x2b2=x2b(hiddennum+1:hiddennum+outdim).';
net.IW{1,1}=x2iw;
net.LW{2,1}=x2lw;
net.b{1}=x2b1;
net.b{2}=x2b2;
%net.IW{1,1}
%net.LW{2,1}
%net.b{1}
%net.b{2}
%nettesterr=mse(sim(net,Ptest)-Ttest);
TestSamOut = sim(net,Ptest);
[a]=postmnmx(TestSamOut,minRealTestSamOut,maxRealTestSamOut);
ae=abs(a-RealTestSamOut)
mae=mean(ae)
re=(a-RealTestSamOut)./RealTestSamOut
mre=mean(abs(re))
a
%EvaSamOutn = sim(net,EvaSamInn);
%EvaSamOut = postmnmx(EvaSamOutn,minAllSamOut,maxAllSamOut);

figure(1)
grid
hold on
%plot(MaxFit,'k');
plot(log(BestFit),'--k','linewidth',2);
title('Fitness');
xlabel('Iteration');
ylabel('fit');
%plot(log(MinFit),'b');

figure(2)
grid
hold on
plot(a,'b');
plot(RealTestSamOut,'-k','linewidth',2);
title('Electricity price forecast for October 6');
xlabel('Time period');
ylabel('Price (yuan)');
figure(3)
grid
hold on
plot(ae,'k','linewidth',2);
title('Absolute error');
xlabel('Time period');
ylabel('Error');

figure(4)
grid
hold on
plot(re,'k','linewidth',2);
title('Relative error');
xlabel('Time period');
ylabel('Error');
et=cputime-st;

%plot(EvaSamOut,‘k’);

save D:
%--------------------------------------------------------------------------
%sub-function: get the fitness of all particles in a specific generation
%convert each particle to the weight matrices of the BPN, then compute the training error
function fitval = fitcal(pm,net,indim,hiddennum,outdim,D,Ptrain,Ttrain,minAllSamOut,maxAllSamOut)
[x,y,z]=size(pm);
for i=1:x
for j=1:hiddennum
x2iw(j,:)=pm(i,((j-1)*indim+1):(j*indim),z);
end
for k=1:outdim
x2lw(k,:)=pm(i,(indim*hiddennum+1):(indim*hiddennum+hiddennum),z);
end
x2b=pm(i,((indim+1)*hiddennum+1):D,z);
x2b1=x2b(1:hiddennum).';
x2b2=x2b(hiddennum+1:hiddennum+outdim).';
net.IW{1,1}=x2iw;
net.LW{2,1}=x2lw;
net.b{1}=x2b1;
net.b{2}=x2b2;
error=postmnmx(sim(net,Ptrain),minAllSamOut,maxAllSamOut)-postmnmx(Ttrain,minAllSamOut,maxAllSamOut);
fitval(i,1,z)=mse(error);
end
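The fitcal routine above decodes each flat particle into the network's weight matrices and bias vectors. A minimal Python sketch of the same layout (function names are illustrative; dimensions follow the script: indim=5, hiddennum=3, outdim=1, so D=22):

```python
import numpy as np

def decode_particle(p, indim=5, hiddennum=3, outdim=1):
    """Split a flat particle vector into BP-network weights and biases,
    using the same layout as the MATLAB fitcal routine."""
    D = (indim + 1) * hiddennum + (hiddennum + 1) * outdim
    p = np.asarray(p, dtype=float)
    assert p.size == D
    # input-to-hidden weights: first hiddennum*indim entries, row by row
    iw = p[:hiddennum * indim].reshape(hiddennum, indim)
    # hidden-to-output weights: next hiddennum*outdim entries
    lw = p[hiddennum * indim:hiddennum * indim + hiddennum * outdim].reshape(outdim, hiddennum)
    # biases: the remaining hiddennum + outdim entries
    b1 = p[(indim + 1) * hiddennum:(indim + 1) * hiddennum + hiddennum]
    b2 = p[(indim + 1) * hiddennum + hiddennum:]
    return iw, lw, b1, b2

def forward(p, x, indim=5, hiddennum=3, outdim=1):
    """Evaluate the tansig/purelin network encoded by particle p on input x."""
    iw, lw, b1, b2 = decode_particle(p, indim, hiddennum, outdim)
    h = np.tanh(iw @ x + b1)   # tansig hidden layer
    return lw @ h + b2         # purelin output layer
```

Each particle is therefore one candidate set of network weights, and its fitness is the training MSE of the network it encodes.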

Particle swarm optimization (PSO) is a novel bio-inspired, swarm-intelligence optimization algorithm. Its principle is simple, it has few parameters to tune, it converges quickly, and it is easy to implement, so in recent years it has attracted wide attention. To date, however, PSO is not yet mature in either theoretical analysis or practical application, and many problems remain to be studied. This thesis improves the standard PSO algorithm to address its tendency toward "premature" convergence into local minima, and applies the improved algorithm to BP neural networks. The main work is as follows. First, the thesis reviews the domestic and international research status and development of PSO, systematically analyzes its basic theory, and summarizes common improved variants.

Second, it introduces the Hooke-Jeeves pattern search method: its analysis, basic procedure, and fields of application. To address the premature-convergence weakness of standard PSO, the initial population is divided into two identical subpopulations; using a fitness-dominance criterion, each subpopulation is split into a Pareto subset and a non-Pareto (N_Pareto) subset, and the two fitter Pareto subsets are merged into a new population. Because the new population's parameter settings differ from those of standard PSO, the new particles follow different flight trajectories, the population's exploration range widens, and the algorithm's global search ability improves. To balance global and local search ability and to improve solution accuracy and efficiency, the strongly convergent Hooke-Jeeves search is introduced into the new population's optimization process, yielding the IMPSO algorithm. IMPSO is tested on standard benchmark functions such as Griewank and Rastrigin, and the results are compared with those of standard PSO; the simulations demonstrate the effectiveness of the improved algorithm. Finally, the thesis studies the application of the improved PSO to BP neural networks: it introduces the principles of artificial neural networks and of multi-layer feed-forward networks trained by the BP algorithm, then trains a BP network with IMPSO and gives the training flow chart.

The IMPSO-trained BP network is applied to predicting hardened-layer depth in gear heat treatment and to fault diagnosis of diesel-engine cylinder heads and cylinder walls. Compared with a plain BP network and a BP network trained by standard PSO, the results show that the BP network trained by the improved PSO has stronger optimization performance and learning ability.
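The thesis couples PSO with the Hooke-Jeeves pattern search to sharpen local search. A minimal Python sketch of Hooke-Jeeves (exploratory moves along each coordinate plus a pattern move; the step sizes and quadratic test objective are illustrative, not the thesis's settings):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal Hooke-Jeeves pattern search: exploratory moves along each
    coordinate, then a pattern move in the improving direction."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        # exploratory move around the current base point
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:
            # pattern move: jump further along the successful direction
            xp = xe + (xe - x)
            x, fx = xe, fe
            fp = f(xp)
            if fp < fx:
                x, fx = xp, fp
        else:
            step *= shrink  # no improvement: shrink the step size
            if step < tol:
                break
    return x, fx

# minimize a simple quadratic with minimum at (3, 3)
xmin, fmin = hooke_jeeves(lambda v: np.sum((v - 3.0) ** 2), np.zeros(2))
```

Because the method needs only function values (no gradients), it can be dropped into a PSO loop to refine promising particles, which is the role it plays in IMPSO.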
Particle swarm optimization of neural networks is a swarm-intelligence approach: by simulating the collective behaviour of a swarm, it searches for an optimal solution. In a neural network, parameter optimization is an important step, and PSO is one method for optimizing the network's parameters.

In Python, a PSO-optimized neural network can be implemented with the pyswarms library. First import pyswarms and the other required libraries, such as numpy and sklearn. Next define the fitness function, which evaluates the network's performance. Then set the PSO parameters, such as the number of particles, the dimensionality, and the maximum number of iterations. With the parameters defined, run the optimization with pyswarms' GlobalBestPSO, and finally evaluate the trained network with sklearn. A simple example (note that pyswarms passes the whole swarm, of shape (n_particles, dimensions), to the objective function and expects one cost per particle, and that optimize returns the best cost first; bounds are added here to keep the layer sizes and learning rate valid):

```python
import numpy as np
import sklearn.datasets
from sklearn.neural_network import MLPClassifier
import pyswarms as ps

# load data
X, y = sklearn.datasets.load_iris(return_X_y=True)

# fitness function: pyswarms passes the whole swarm (n_particles, dimensions)
# and expects one cost per particle
def fitness_func(positions):
    costs = []
    for position in positions:
        clf = MLPClassifier(hidden_layer_sizes=(int(position[0]), int(position[1])),
                            solver='sgd', learning_rate_init=position[2])
        clf.fit(X, y)
        costs.append(-clf.score(X, y))   # minimize negative accuracy
    return np.array(costs)

# particle swarm optimization parameters
n_particles = 20
dimensions = 3   # layer-1 size, layer-2 size, learning rate
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
bounds = (np.array([2, 2, 0.001]), np.array([20, 20, 0.5]))

# perform particle swarm optimization (returns best cost, then best position)
optimizer = ps.single.GlobalBestPSO(n_particles=n_particles, dimensions=dimensions,
                                    options=options, bounds=bounds)
best_cost, best_position = optimizer.optimize(fitness_func, iters=100)

# train neural network with the best parameters found
clf = MLPClassifier(hidden_layer_sizes=(int(best_position[0]), int(best_position[1])),
                    solver='sgd', learning_rate_init=best_position[2])
clf.fit(X, y)

# evaluate model performance
score = clf.score(X, y)
print("Accuracy: %.2f%%" % (score * 100))
```

The code above uses PSO to tune three network hyperparameters: the sizes of the two hidden layers and the learning rate. It also shows how sklearn is used to train and evaluate the final network.
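Both the MATLAB script and pyswarms rely on the same core PSO update: each velocity combines inertia, attraction to the particle's personal best, and attraction to the global best, with velocities clamped to ±vmax and the inertia weight decreased linearly. A minimal NumPy sketch on a toy sphere objective (the constants are copied from the MATLAB script; the objective and random seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # toy objective: global minimum 0 at the origin
    return np.sum(x * x, axis=1)

N, D, itmax = 40, 5, 100           # particles, particle length, iterations
c1, c2, vmax = 2.0, 1.8, 0.5       # acceleration constants and velocity limit
wmax, wmin = 0.90, 0.30            # inertia weight declines linearly

X = rng.uniform(-1, 1, (N, D))     # positions, initialized in (-1, 1)
V = rng.uniform(-1, 1, (N, D))     # velocities
pbest = X.copy()                   # personal best positions
pbest_f = sphere(X)
init_best = pbest_f.min()
g = pbest[np.argmin(pbest_f)].copy()   # global best position

for it in range(itmax):
    w = wmax - (wmax - wmin) * (it + 1) / itmax   # linear inertia decay
    r1, r2 = rng.random(), rng.random()
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    V = np.clip(V, -vmax, vmax)    # clamp velocities to [-vmax, vmax]
    X = X + V
    f = sphere(X)
    improved = f < pbest_f         # update personal bests
    pbest[improved] = X[improved]
    pbest_f[improved] = f[improved]
    g = pbest[np.argmin(pbest_f)].copy()   # update global best

best_f = pbest_f.min()
```

The only difference in the MATLAB script is that each particle encodes BP-network weights and the objective is the training MSE instead of this toy function.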
