MATLAB svm_struct: a guide to svm-struct (translated from the original)

This is my own translation of the svm_struct_learn help file. The translation is somewhat rough, so please bear with it. I hope it helps you!

SVM_STRUCT_LEARN Calls the SVM-struct solver

MODEL = SVM_STRUCT_LEARN(ARGS, PARM)

runs the SVM-struct solver with parameters ARGS on the problem
PARM. See [1-6] for the theory. PARM is a structure with the
following fields:

PATTERNS:: patterns (X)

A cell array of patterns. The entries can be of any nature
(for example, they can simply be indexes into the actual data).

LABELS:: labels (Y)

A cell array of labels. The entries can be of any nature.

LOSSFN:: loss function callback

A handle to the loss function. This function has the form
L = LOSS(PARAM, Y, YBAR), where PARAM is the PARM structure,
Y is a ground-truth label (in machine learning, "ground truth"
refers to the correct labeling of the training set used for
supervised learning), YBAR is another label, and L is a
non-negative scalar.
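As an illustration, a minimal sketch of a zero-one loss callback
for scalar labels (the name lossCB is hypothetical, not part of
the API):

    function delta = lossCB(param, y, ybar)
      % zero-one loss: 0 if the two labels agree, 1 otherwise
      delta = double(y ~= ybar) ;
    end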

CONSTRAINTFN:: constraint callback

A handle to the constraint generation function. This function
has the form YBAR = FUNC(PARAM, MODEL, X, Y), where PARAM is
the input PARM structure, MODEL is a structure representing the
current model, X is an input pattern, and Y is its ground-truth
label. YBAR is the most violated label.
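A minimal sketch, assuming a linear binary toy problem with
labels in {-1,+1} and the feature map PSI(x,y) = y*x/2 shown
below (constraintCB is a hypothetical name):

    function ybar = constraintCB(param, model, x, y)
      % margin rescaling: pick the label maximizing
      %   delta(y, ybar) + <w, psi(x, ybar)>
      % for labels in {-1,+1} and psi(x,y) = y*x/2 this reduces
      % to a single margin test on the ground-truth label
      if dot(y*x, model.w) > 1, ybar = y ; else ybar = -y ; end
    end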

FEATUREFN:: feature map callback

A handle to the feature map. This function has the form
PSI = FEATURE(PARAM, X, Y), where PARAM is the input PARM
structure, X is a pattern, Y is a label, and PSI is a sparse
vector of dimension PARM.DIMENSION. This handle does not need
to be specified if kernels are used.
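Continuing the toy problem above, a hedged sketch of the
matching feature map callback (featureCB is a hypothetical
name; note the sparse return value):

    function psi = featureCB(param, x, y)
      % joint feature map psi(x,y) = y*x/2, returned as a
      % sparse column vector with param.dimension entries
      psi = sparse(y * x / 2) ;
    end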

DIMENSION:: dimension of the feature map

The dimension of the feature map. This value does not need to
be specified if kernels are used.

KERNELFN:: kernel function callback

A handle to the kernel function. This function has the form
K = KERN(PARAM, X, Y, XP, YP), where PARAM is the input PARM
structure, and (X, Y) and (XP, YP) are two pattern-label pairs,
the inputs of the joint kernel. This handle does not need to be
specified if a feature map is used.
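As a sketch, the joint kernel equivalent to the linear feature
map above is just the inner product of the two joint feature
vectors (kernelCB is a hypothetical name):

    function k = kernelCB(param, x, y, xp, yp)
      % joint kernel K((x,y),(xp,yp)) = <psi(x,y), psi(xp,yp)>
      % which, for psi(x,y) = y*x/2, equals (x'*xp)*y*yp/4
      k = (x' * xp) * y * yp / 4 ;
    end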

MODEL is a structure with fields:

W:: weight vector

This is a sparse vector of size PARAM.DIMENSION. It is used
with feature maps.

ALPHA:: dual variables

SVPATTERNS:: patterns that are support vectors

SVLABELS:: labels that are support vectors

These three fields are used with kernels.
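For example (a hedged sketch reusing the hypothetical featureCB
above), prediction with a learned linear model amounts to
maximizing the score <W, PSI(x,y)> over the candidate labels:

    % score each candidate label and keep the best one
    % (binary toy problem: the candidates are -1 and +1)
    candidates = [-1, +1] ;
    scores = arrayfun(@(y) full(dot(model.w, featureCB(parm, x, y))), candidates) ;
    [~, i] = max(scores) ;
    yhat = candidates(i) ;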

ARGS is a string specifying options in the usual SVM-struct
command-line format. The supported options are listed below,
after a short end-to-end example.
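Putting the pieces together, a minimal sketch of a complete
call (the field names follow the svm-struct-matlab demo
conventions, and the two-point data set is made up):

    % toy training set: two patterns with binary labels
    parm.patterns  = { [1; 2], [-1; -2] } ;
    parm.labels    = { +1, -1 } ;
    parm.lossFn       = @lossCB ;
    parm.constraintFn = @constraintCB ;
    parm.featureFn    = @featureCB ;
    parm.dimension    = 2 ;
    model = svm_struct_learn(' -c 1.0 -o 1 -v 1 ', parm) ;
    disp(model.w) ;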

General Options::

-v [0..3] -> verbosity level (default 1)

-y [0..3] -> verbosity level for svm_light (default 0)

Learning Options::

-c float -> C: trade-off between training error and margin
(default 0.01)

-p [1,2] -> L-norm to use for the slack variables. Use 1 for the
L1-norm, use 2 for squared slacks. (default 1)

(Recall the p-norm ||x||_p = (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p).
The simplest cases are p = 1, 2, and infinity:

1-norm: ||x||_1 = |x1| + |x2| + ... + |xn|

2-norm: ||x||_2 = sqrt(|x1|^2 + |x2|^2 + ... + |xn|^2)

inf-norm: ||x||_inf = max(|x1|, |x2|, ..., |xn|).)
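A quick numeric check of these norms in MATLAB:

    x = [3, -4] ;
    norm(x, 1)     % 1-norm:   |3| + |-4| = 7
    norm(x, 2)     % 2-norm:   sqrt(9 + 16) = 5
    norm(x, Inf)   % inf-norm: max(|3|, |-4|) = 4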

-o [1,2] -> Rescaling method to use for the loss:

1: slack rescaling

2: margin rescaling

-l [0..] -> Loss function to use:

0: zero/one loss

?: see below in the application-specific options
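For example (a sketch with arbitrary values), an ARGS string
selecting L1 slacks, margin rescaling, and the zero/one loss:

    args = ' -c 0.01 -p 1 -o 2 -l 0 ' ;
    model = svm_struct_learn(args, parm) ;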

Optimization Options (see [2][5])::

-w [0,..,9] -> choice of structural learning algorithm:

0: n-slack algorithm described in [2]

1: n-slack algorithm with shrinking heuristic

2: 1-slack algorithm (primal) described in [5]

3: 1-slack algorithm (dual) described in [5]

4: 1-slack algorithm (dual) with constraint cache [5]

9: custom algorithm in svm_struct_learn_custom.c

-e float -> epsilon: tolerance allowed by the termination
criterion

-k [1..] -> number of new constraints to accumulate before
recomputing the QP solution (default 100) (-w 0 and 1 only)

-f [5..] -> number of constraints to cache for each example
(default 5) (used with -w 4)

-b [1..100] -> percentage of the training set for which to
refresh the cache when no epsilon-violating constraint can be
constructed from the current cache (default 100) (used with -w 4)
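For instance (arbitrary values, for illustration only), the
1-slack dual algorithm with a constraint cache could be
requested with:

    args = ' -c 0.1 -w 4 -f 10 -b 50 -e 0.01 ' ;
    model = svm_struct_learn(args, parm) ;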

SVM-light Options for Solving QP Subproblems (see [3])::

-n [2..q] -> number of new variables entering the working set
in each svm-light iteration (default n = q).
Set n < q to prevent zig-zagging.

-m [5..] -> size of the svm-light kernel-evaluation cache in MB
(default 40) (used only for -w 1 with kernels)

-h [5..] -> number of svm-light iterations for which a variable
needs to be optimal before it is considered for shrinking
(default 100)

-# int -> terminate the svm-light QP subproblem optimization if
no progress has been made after this number of iterations
(default 100000)

Kernel Options::

-t int -> type of kernel function:

0: linear (default)

1: polynomial (s a*b + c)^d

2: radial basis function exp(-gamma ||a-b||^2)

3: sigmoid tanh(s a*b + c)

4: user-defined kernel from kernel.h

-d int -> parameter d in the polynomial kernel

-g float -> parameter gamma in the RBF kernel

-s float -> parameter s in the sigmoid/polynomial kernel

-r float -> parameter c in the sigmoid/polynomial kernel

-u string -> parameter of a user-defined kernel
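As a sketch (the values are arbitrary), an ARGS string selecting
the built-in RBF kernel together with its gamma parameter:

    args = ' -c 1.0 -t 2 -g 0.5 ' ;   % RBF kernel, gamma = 0.5
    model = svm_struct_learn(args, parm) ;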

Output Options::

-a string -> write all alphas to this file after learning
(in the same order as in the training set)

References::

[1] T. Joachims, Learning to Align Sequences: A Maximum Margin
Approach. Technical Report, September 2003.

[2] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun,
Large Margin Methods for Structured and Interdependent Output
Variables, Journal of Machine Learning Research (JMLR),
Vol. 6(Sep):1453-1484, 2005.

[3] T. Joachims, Making Large-Scale SVM Learning Practical.
Advances in Kernel Methods - Support Vector Learning,
B. Schölkopf, C. Burges, and A. Smola (eds.), MIT Press, 1999.

[4] T. Joachims, Learning to Classify Text Using Support Vector
Machines: Methods, Theory, and Algorithms. Dissertation, Kluwer,
2002.

[5] T. Joachims, T. Finley, and Chun-Nam Yu, Cutting-Plane
Training of Structural SVMs, Machine Learning Journal, to appear.

[6] http://svmlight.joachims.org/

Source: http://blog.csdn.net/badboy_1990/article/details/40300543
