Implementing Label Smoothing in MATLAB

This article introduces label smoothing, a machine-learning regularization method for reducing overfitting in classification problems. The labels are regularized by energy minimization via graph cuts, implemented in a function named `alpha_expansion`, whose workings and parameter settings are described below.

Label smoothing is a regularization technique in machine learning, typically used for classification. It prevents the model from predicting labels over-confidently during training, which improves poor generalization.
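As a quick illustration of the idea, a one-hot label can be mixed with a uniform distribution, q = ε/K + (1 − ε)·onehot, which is the same 0.05 uniform smoothing the code below applies inside its fidelity terms. A minimal MATLAB sketch (the toy labels and ε value here are illustrative, not part of the function below):

```matlab
% Uniform label smoothing on a few hard labels.
epsilon  = 0.05;           % smoothing strength (illustrative choice)
nClasses = 4;
labels   = [1 3 2];        % hard class indices, one per sample
onehot   = full(sparse(1:numel(labels), labels, 1, numel(labels), nClasses));
smoothed = epsilon / nClasses + (1 - epsilon) * onehot;
% each row still sums to 1, but no entry is exactly 0 or 1
```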

Here is the code:

function [assignment, T, ENERGY, ENERGYAFTER] = alpha_expansion(initial_p, graph, fidelity, lambda, maxIte)

%alpha expansion algorithm to solve penalization by the Potts penalty
%INPUT
%initial_p = classification to regularize (initial per-node label
%            probabilities, one row per node)
%graph     = the adjacency structure of the graph
%fidelity  = which fidelity function to use (default = 0)
%	0 : quadratic
%	1 : linear
%	2 : KL with 0.05 uniform smoothing
%	3 : loglinear with 0.05 uniform smoothing
%lambda    = regularization strength (default = 1)
%maxIte    = maximum number of alpha expansion cycles (default = 30)
%OUTPUT
%assignment  = the regularized labeling
%T           = computing time
%ENERGY      = energy of the initial labeling
%ENERGYAFTER = energy after the optimization
%loic landrieu 2016
%
%When using this method you must cite:
%
%Efficient Approximate Energy Minimization via Graph Cuts 
%		Yuri Boykov, Olga Veksler, Ramin Zabih, 
%       IEEE transactions on PAMI, vol. 20, no. 12, p. 1222-1239
%       , November 2001. 
%
%What Energy Functions can be Minimized via Graph Cuts?
%	    Vladimir Kolmogorov and Ramin Zabih. 
%	    To appear in IEEE Transactions on Pattern Analysis and Machine
%       Intelligence (PAMI). 
%	    Earlier version appeared in European Conference on Computer Vision
%       (ECCV), May 2002. 
%
%An Experimental Comparison of Min-Cut/Max-Flow Algorithms
%	    for Energy Minimization in Vision.
%	    Yuri Boykov and Vladimir Kolmogorov.
%		In IEEE Transactions on Pattern Analysis and Machine
%       Intelligence (PAMI), 
%	    September 2004




if (nargin < 3)
    fidelity = 0;
end
if (nargin < 4)
    lambda = 1;
end
if (nargin < 5)
    maxIte = 30;
end
smoothing = 0.05; %the smoothing parameter
nbNode      = size(initial_p,1);
nClasses    = size(initial_p,2);
pairwise     = sparse(double([graph.source]+1), double([graph.target]+1) ...
    ,double(graph.edge_weight * lambda)); 
mpairwise=size(pairwise,1);
npairwise=size(pairwise,2);
if mpairwise ~= nbNode || npairwise ~= nbNode
    dm=nbNode-mpairwise;
    dn=nbNode-npairwise;
    pairwise=[pairwise,zeros(mpairwise,dn)];
    pairwise=[pairwise;zeros(dm,size(pairwise,2))];
end
switch fidelity
    case 0
        unary      = squeeze(sum(...
                     ((repmat(1:nClasses, [size(initial_p,1) 1 nClasses])...
                   ==  permute(repmat(1:nClasses, [size(initial_p,1) 1 nClasses]), [1 3 2])) ...
                   - repmat(initial_p, [1 1 size(initial_p,2)])).^2,2));
    case 1
        unary      = -initial_p;
    case 2
        smoothObs  = repmat(smoothing/ nClasses + (1 - smoothing) * initial_p ...
                    , [1 1 nClasses]);
        smoothAssi = smoothing / nClasses + (1 - smoothing) * ...
                    (repmat(1:nClasses, [nbNode 1 nClasses])...
                   ==  permute(repmat(1:nClasses, [nbNode 1 nClasses]), [1 3 2]));
        unary      = -squeeze(sum(smoothObs .* ( log(smoothAssi)),2));
   case 3
        smoothObs  = smoothing / nClasses + (1 - smoothing) * initial_p;
        unary      = -log(smoothObs); 
end
%normalization
unary = bsxfun(@minus, unary, min(unary,[],2));
unary = unary ./ mean(unary(:)); %normalization

labelcost  = single(ones(nClasses) - eye(nClasses));
[dump, labelInit] = max(initial_p,[],2);
labelInit = labelInit - 1;
clear('dump')
tic;
% [assignment] = GCMex(double(labelInit), (unary'), (pairwise)...
%                       , single(labelcost), true, int32(maxIte));
[assignment, ENERGY, ENERGYAFTER] = GCMex(double(labelInit), unary', pairwise ...
                      , single(labelcost), true);
assignment = assignment + 1; % GCMex returns 0-based labels; convert back to 1-based
T = toc;

%% GCMex Version 2.3.0
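For completeness, here is a hedged usage sketch on a tiny chain graph. The field names `source`, `target`, and `edge_weight` follow the adjacency structure the function reads; the 0-based endpoints match the `+1` in the `sparse` call above. This assumes the GCMex mex file (v2.3.0) is compiled and on the MATLAB path, and the toy probabilities are illustrative, not from the original post:

```matlab
% Toy example: 4 nodes in a chain, 2 classes.
initial_p = [0.9 0.1; 0.8 0.2; 0.4 0.6; 0.1 0.9]; % per-node class probabilities
graph.source      = uint32([0 1 2]);  % 0-based edge endpoints
graph.target      = uint32([1 2 3]);
graph.edge_weight = [1; 1; 1];
[assignment, T] = alpha_expansion(initial_p, graph, 0, 1, 5);
% assignment holds the regularized 1-based labels; T the elapsed time
```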
