svmtrain.m, from a support vector machine multi-class classification experiment system with a graphical interface. The system is implemented entirely in Matlab, supports several kinds of classification and recognition, and was posted as companion code to the author's graduation project.

function net = svmtrain(net, X, Y, alpha0, dodisplay)
% SVMTRAIN - Train a Support Vector Machine classifier
%
% NET = SVMTRAIN(NET, X, Y)
% Train the SVM given by NET using the training data X with target values
% Y. X is a matrix of size (N,NET.nin) with N training examples (one per
% row). Y is a column vector containing the target values (classes) for
% each example in X. Each element of Y that is >=0 is treated as class
% +1, each element <0 as class -1.
%
% SVMTRAIN normally uses the L1-norm of all training set errors in the
% objective function. If NET.use2norm==1, the L2-norm is used.
%
% All training parameters are given in the structure NET. Relevant
% parameters are mainly NET.c, for fine-tuning also NET.qpsize,
% NET.alphatol and NET.kkttol. See function SVM for a description of
% these fields.
%
% NET.c is a weight for misclassifying a particular example. NET.c may
% either be a scalar (where all errors have the same weight), or it may
% be a column vector (size [N 1]) where entry NET.c(i) corresponds to the
% error weight for example X(i,:). If NET.c is a vector of length 2,
% NET.c(1) specifies the error weight for all positive examples, NET.c(2)
% is the error weight for all negative examples. Specifying a different
% weight for each example may be used for imbalanced data sets.
%
% NET = SVMTRAIN(NET, X, Y, ALPHA0) uses the column vector ALPHA0 as
% the initial values for the coefficients NET.alpha. ALPHA0 may result
% from a previous training run with different parameters.
% NET = SVMTRAIN(NET, X, Y, ALPHA0, 1) displays information on the
% training progress (number of errors in the current iteration, etc.)
%
% SVMTRAIN uses either the function LOQO (Matlab interface to Smola's
% LOQO code) or the routines QP/QUADPROG from the Matlab Optimization
% Toolbox to solve the quadratic programming problem.
%
% See also:
% SVM, SVMKERNEL, SVMFWD
%
%
% Copyright (c) Anton Schwaighofer (2001)
% $Revision: 1.19 $ $Date: 2002/01/09 12:11:41 $
% mailto:anton.schwaighofer@gmx.net
%
% This program is released under the GNU General Public License.
%
% Training an SVM involves solving a quadratic programming problem that
% scales quadratically with the number of examples. SVMTRAIN uses the
% decomposition training algorithm proposed by Osuna, Freund and Girosi,
% where the maximum size of a quadratic program is constant.
% (ftp://ftp.ai.mit.edu/pub/cbcl/nnsp97-svm.ps)
% For selecting the working set, the approximation proposed by Joachims
% (http://www-ai.cs.uni-dortmund.de/DOKUMENTE/joachims_99a.ps.gz) is used.

% Check arguments for consistency
errstring = consist(net, 'svm', X, Y);
if ~isempty(errstring),
  error(errstring);
end
[N, d] = size(X);
if N==0,
  error('No training examples given');
end
net.nbexamples = N;
if nargin<5,
  dodisplay = 0;
end
if nargin<4,
  alpha0 = [];
elseif (~isempty(alpha0)) & (~all(size(alpha0)==[N 1])),
  error(['Initial values ALPHA0 must be a column vector with the same length' ...
         ' as X']);
end

% Find the indices of examples from class +1 and -1
% Y(Y==1)=1;Y(Y==2)=-1;
class1 = logical(uint8(Y>=0));
class0 = logical(uint8(Y<0));
if length(net.c(:))==1,
  C = repmat(net.c, [N 1]);  % The same upper bound for all examples
elseif length(net.c(:))==2,
  C = zeros([N 1]);
  C(class1) = net.c(1);
  C(class0) = net.c(2);
  % Different upper bounds C for the positive and negative examples
else
  C = net.c;
  if ~all(size(C)==[N 1]),
    error(['Upper bound C must be a column vector with the same length' ...
           ' as X']);
  end
end
if min(C)<=net.alphatol,
  error('NET.C must be positive and larger than NET.alphatol');
end
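% Worked example of the NET.c convention described above (the values are
% illustrative, not toolbox defaults): with net.c = [10; 1] on an
% imbalanced problem, the branch above expands it to C(i) = 10 for every
% positive example and C(i) = 1 for every negative one, so errors on the
% rare positive class are penalized ten times as hard.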
if ~isfield(net, 'use2norm'),
  net.use2norm = 0;
end
if ~isfield(net, 'qpsolver'),
  net.qpsolver = '';
end
qpsolver = net.qpsolver;
if isempty(qpsolver),
  % QUADPROG is the fastest solver for both 1norm and 2norm SVMs, if
  % qpsize is around 10-70 (loqo is best for large 1norm SVMs)
  checkseq = {'quadprog', 'loqo', 'qp'};
  i = 1;
  while (i<=length(checkseq)),
    e = exist(checkseq{i});
    if (e==2) | (e==3),
      qpsolver = checkseq{i};
      break;
    end
    i = i+1;
  end
  if isempty(qpsolver),
    error('No quadratic programming solver (QUADPROG,LOQO,QP) found.');
  end
end
% Mind that there may occur problems with the QUADPROG solver. At least in
% early versions of Matlab 5.3 there are severe numerical problems somewhere
% deep in QUADPROG.

% Turn off all messages coming from quadprog, increase the maximum number
% of iterations from 200 to 500 - good for low-dimensional problems
if strcmp(qpsolver, 'quadprog') & (dodisplay==0),
  quadprogopt = optimset('Display', 'off', 'MaxIter', 500);
else
  quadprogopt = [];
end

% Actual size of quadratic program during training may not be larger than
% the number of examples
QPsize = min(N, net.qpsize);
chsize = net.chunksize;

% SVMout contains the output of the SVM decision function for each
% example. This is updated iteratively during training.
SVMout = zeros(N, 1);

% Make sure there are no other values in Y than +1 and -1
Y(class1) = 1;
Y(class0) = -1;
if dodisplay>0,
  fprintf('Training set: %i examples (%i positive, %i negative)\n', ...
          length(Y), length(find(class1)), length(find(class0)));
end

% Start with a vector of zeros for the coefficients alpha, or the
% parameter ALPHA0, if it is given. Those values will be used to perform
% an initial working set selection, by assuming they are the true weights
% for the training set at hand.
if ~any(alpha0),
  net.alpha = zeros([N 1]);
  % If starting with a zero vector: randomize the first working set search
  randomWS = 1;
else
  randomWS = 0;
  % for 1norm SVM: make the initial values conform to the upper bounds
  if ~net.use2norm,
    net.alpha = min(C, alpha0);
  end
end
alphaOld = net.alpha;

if length(find(Y>0))==N,
  % only positive examples
  net.bias = 1;
  net.svcoeff = [];
  net.sv = [];
  net.svind = [];
  net.alpha = zeros([N 1]);
  return;
elseif length(find(Y<0))==N,
  % only negative examples
  net.bias = -1;
  net.svcoeff = [];
  net.sv = [];
  net.svind = [];
  net.alpha = zeros([N 1]);
  return;
end

iteration = 0;
workset = logical(uint8(zeros(N, 1)));
sameWS = 0;
net.bias = 0;

while 1,
  if dodisplay>0,
    fprintf('\nIteration %i: ', iteration+1);
  end
  % Step 1: Determine the Support Vectors.
  [net, SVthresh, SV, SVbound, SVnonbound] = findSV(net, C);
  if dodisplay>0,
    fprintf(['Working set of size %i: %i Support Vectors, %i of them at' ...
             ' bound C\n'], length(find(workset)), length(find(workset & SV)), ...
            length(find(workset & SVbound)));
    fprintf(['Whole training set: %i Support Vectors, %i of them at upper' ...
             ' bound C.\n'], length(net.svind), length(find(SVbound)));
    if dodisplay>1,
      fprintf('The Support Vectors (threshold %g) are the examples\n', ...
              SVthresh);
      fprintf(' %i', net.svind);
      fprintf('\n');
    end
  end

  % Step 2: Find the output of the SVM for all training examples
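  % In update iterations the output is adjusted incrementally: with old
  % coefficients alphaOld, the kernel expansion satisfies
  %   f_new(x) = f_old(x) + sum_j (alpha_j - alphaOld_j)*y_j*k(x, x_j),
  % where the sum runs over the changed coefficients only. The expansion
  % is therefore rebuilt from scratch only every NET.recompute iterations.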
  if (iteration==0) | (mod(iteration, net.recompute)==0),
    % Every NET.recompute iterations the SVM output is built from
    % scratch. Use all Support Vectors for determining the output.
    changedSV = net.svind;
    changedAlpha = net.alpha(changedSV);
    SVMout = zeros(N, 1);
    if strcmp(net.kernel, 'linear'),
      net.normalw = zeros([1 d]);
    end
  else
    % A normal iteration: Find the coefficients that changed and adjust
    % the SVM output only by the difference of old and new alpha
    changedSV = find(net.alpha~=alphaOld);
    changedAlpha = net.alpha(changedSV)-alphaOld(changedSV);
  end
  if strcmp(net.kernel, 'linear'),
    chunks = ceil(length(changedSV)/chsize);
    % Linear kernel: Build the normal vector of the separating
    % hyperplane by computing the weighted sum of all Support Vectors
    for ch = 1:chunks,
      ind = (1+(ch-1)*chsize):min(length(changedSV), ch*chsize);
      temp = changedAlpha(ind).*Y(changedSV(ind));
      net.normalw = net.normalw+temp'*X(changedSV(ind), :);
    end
    % Find the output of the SVM by multiplying the examples with the
    % normal vector
    SVMout = zeros(N, 1);
    chunks = ceil(N/chsize);
    for ch = 1:chunks,
      ind = (1+(ch-1)*chsize):min(N, ch*chsize);
      SVMout(ind) = X(ind,:)*(net.normalw');
    end
  else
    % A normal kernel function: Split both the examples and the Support
    % Vectors into small chunks
    chunks1 = ceil(N/chsize);
    chunks2 = ceil(length(changedSV)/chsize);
    for ch1 = 1:chunks1,
      ind1 = (1+(ch1-1)*chsize):min(N, ch1*chsize);
      for ch2 = 1:chunks2,
        % Compute the kernel function for a chunk of Support Vectors and
        % a chunk of examples
        ind2 = (1+(ch2-1)*chsize):min(length(changedSV), ch2*chsize);
        K12 = svmkernel(net, X(ind1, :), X(changedSV(ind2), :));
        % Add the weighted kernel matrix to the SVM output. In update
        % cycles, the kernel matrix is weighted by the difference of
        % alphas, in other cycles it is weighted by the value alpha alone.
        coeff = changedAlpha(ind2).*Y(changedSV(ind2));
        SVMout(ind1) = SVMout(ind1)+K12*coeff;
      end
      if dodisplay>2,
        K1all = svmkernel(net, X(ind1,:), X(net.svind,:));
        coeff2 = net.alpha(net.svind).*Y(net.svind);
        fprintf('Maximum error due to matrix partitioning: %g\n', ...
                max((SVMout(ind1)-K1all*coeff2)'));
      end
    end
  end

  % Step 3: Compute the bias of the SVM decision function.
  if net.use2norm,
    % The bias can be found from the SVM output for Support Vectors. For
    % those vectors, the output should be 1-alpha/C resp. -1+alpha/C.
    workSV = find(SV & workset);
    if ~isempty(workSV),
      net.bias = mean((1-net.alpha(workSV)./C(workSV)).*Y(workSV)- ...
                      SVMout(workSV));
    end
  else
    % normal 1norm SVM:
    % The bias can be found from Support Vectors whose value alpha is not
    % at the upper bound. For those vectors, the SVM output should be +1
    % resp. -1.
    workSV = find(SVnonbound & workset);
    if ~isempty(workSV),
      net.bias = mean(Y(workSV)-SVMout(workSV));
    end
  end
  % The nasty case that no SVs to determine the bias have been found.
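
The listing above is an excerpt and ends inside the main training loop. For orientation, here is a minimal usage sketch. It is an assumption-laden illustration, not toolbox documentation: only the fields this listing actually reads are set, their values are invented for the example, and in practice NET should be built with the toolbox's SVM constructor (see the See-also line above).

% Hypothetical usage sketch -- all field values are illustrative
% assumptions; the toolbox's SVM function is the authoritative
% constructor and documents the real defaults.
X = [randn(20,2)+1; randn(20,2)-1];  % 40 examples with 2 inputs each
Y = [ones(20,1); -ones(20,1)];       % labels: >=0 -> class +1, <0 -> class -1
net = struct('type', 'svm', 'nin', 2, 'nout', 1, ...  % checked by CONSIST
             'kernel', 'linear', 'c', 10, ...         % kernel and error weight
             'qpsize', 10, 'chunksize', 500, ...      % QP and chunk sizes
             'recompute', 10, ...                     % full-recompute interval
             'alphatol', 1e-2, 'kkttol', 5e-2, ...    % tolerances
             'use2norm', 0, 'qpsolver', '');          % 1norm SVM, auto-pick solver
net = svmtrain(net, X, Y);           % train quietly
% net = svmtrain(net, X, Y, [], 1);  % retrain with progress display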

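The chunking in Step 2 is the memory-saving trick worth noting: the kernel matrix between all N examples and the changed Support Vectors is never formed whole; the output is accumulated block by block, so at most a chunksize-by-|SV| block exists at any time. Here is a standalone sketch of the same pattern, with a simple polynomial kernel standing in for SVMKERNEL (all names, sizes, and the kernel choice are illustrative assumptions):

% Chunked evaluation of f(x_i) = sum_j coeff_j * k(x_i, sv_j), the pattern
% used in Step 2 above. kfun is a stand-in for SVMKERNEL.
N = 1000; d = 5; chsize = 100;
X = randn(N, d);               % all training examples
sv = randn(40, d);             % support vectors (illustrative)
coeff = randn(40, 1);          % alpha_j * y_j for each support vector
kfun = @(A, B) (A*B' + 1).^2;  % assumed kernel: k(a,b) = (a'*b + 1)^2
out = zeros(N, 1);
chunks = ceil(N/chsize);
for ch = 1:chunks,
  ind = (1+(ch-1)*chsize):min(N, ch*chsize);
  % only a chsize-by-40 kernel block is in memory at any time
  out(ind) = out(ind) + kfun(X(ind,:), sv)*coeff;
end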