[Coursera Machine Learning] Personal Notes 6

Table of Contents

Preface

I. Progress

II. Main Content

1.Support Vector Machine

2.SVM hypothesis

3.Large Margin and Mathematics

4.Kernel and Gaussian Kernel

5.Using an SVM

6.Assignment

Summary

Preface


I. Progress

Week 7 (79%)

II. Main Content

1.Support Vector Machine

Recall the cost function of logistic regression from earlier:

(figure: logistic regression cost curves)

In the SVM this function is further simplified, and the activation threshold is shifted to guarantee a margin:

(figure: the simplified SVM cost functions cost1 and cost0)

Personally I think Andrew's blue boxes are drawn a bit off: the second box should also include the minus sign in front of it.

For the left side (y = 1), the cost is 0 only when θᵀx >= 1;

For the right side (y = 0), the cost is 0 only when θᵀx <= -1.
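These two conditions can be checked with a quick sketch (Python here rather than the course's Octave; cost1 and cost0 follow the lecture's names for the piecewise-linear costs):

```python
# Piecewise-linear SVM costs from the lecture: cost1 for y = 1, cost0 for y = 0.
def cost1(z):
    # zero once theta^T x >= 1, linear penalty below that
    return max(0.0, 1.0 - z)

def cost0(z):
    # zero once theta^T x <= -1, linear penalty above that
    return max(0.0, 1.0 + z)

print(cost1(1.5), cost1(0.0))   # 0.0 1.0
print(cost0(-2.0), cost0(0.0))  # 0.0 1.0
```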

2.SVM hypothesis

(figure: SVM hypothesis compared with the logistic regression hypothesis)

First, compare the SVM hypothesis with that of logistic regression. The 1/m factor is dropped: since gradient descent works through partial derivatives, a constant multiplier does not change the resulting minimizer.

Also, λ is dropped and C is introduced, so C plays a role similar to λ's and can be read as C = 1/λ: when C is too large (analogous to λ too small), high variance appears; when C is too small (analogous to λ too large), high bias appears.

When a large λ multiplies the regularization term, regularization carries more weight; when the weighting moves to a factor C in front of the data term instead, the effect is exactly reversed.
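The whole objective can be sketched as follows, a toy example with made-up data, using the lecture's form C · Σ cost + ½ Σ θⱼ² (intercept left unregularized, as is conventional):

```python
import numpy as np

def svm_cost(theta, X, y, C):
    """SVM objective from the lecture: C * sum of hinge terms + 0.5 * ||theta||^2
    (regularization conventionally skips the intercept theta[0])."""
    z = X @ theta
    hinge = np.where(y == 1, np.maximum(0, 1 - z), np.maximum(0, 1 + z))
    return C * hinge.sum() + 0.5 * np.sum(theta[1:] ** 2)

# Tiny made-up example: two points, both classified with margin >= 1.
X = np.array([[1.0, 2.0], [1.0, -2.0]])  # first column is the intercept term
y = np.array([1, 0])
theta = np.array([0.0, 1.0])
print(svm_cost(theta, X, y, C=1.0))  # hinge terms are 0, only 0.5 * 1^2 = 0.5
```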

3.Large Margin and Mathematics

The large-margin (LM) principle ensures that the classification boundary keeps a comfortable distance from each class:

(figure: decision boundaries with small vs. large margins)

In SVMs, the large-margin behavior is obtained by choosing an appropriate C.

Andrew only briefly touched on the mathematics behind the large-margin principle and the SVM hypothesis.

First recall the result of computing uᵀv for two vectors u and v (the dot product):

(figure: inner product uᵀv = p · ‖u‖, where p is the projection of v onto u)

Here p and ‖u‖ are both scalars.

(figure: the projection p illustrated geometrically)

Likewise, introducing θ, each computed θᵀx is itself an inner product of two vectors. And we know that the larger this inner product, the more likely y is 1; the smaller it is (meaning negative), the more likely y is 0.
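The decomposition uᵀv = p · ‖u‖, with p the signed projection of v onto u, can be verified numerically (arbitrary example vectors):

```python
import numpy as np

# u^T v equals p * ||u||, where p is the (signed) projection of v onto u.
u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])

dot = u @ v                   # 3*1 + 4*2 = 11
p = dot / np.linalg.norm(u)   # signed length of v's projection onto u
print(np.isclose(p * np.linalg.norm(u), dot))  # True
```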

(figure: two candidate decision boundaries and the projections p of the examples onto θ)

Look at the lower-left coordinate system first. Suppose our classification boundary is the line of the form y = -kx in the figure. The vector θ is perpendicular to that boundary (in higher dimensions it would be the normal vector). For a positive example with y = 1, we want θᵀx >= 1; but from the figure the projection length p is very small, so ‖θ‖ would have to be very large. The same holds for the negative examples. Since our objective is to minimize ½‖θ‖², this boundary clearly needs improving. After improving it as in the right figure, p becomes large, which matches our goal of minimizing the length of θ.

4.Kernel and Gaussian Kernel

First, an interesting way to understand kernels. A common way to handle linearly inseparable data is to map the samples into a higher-dimensional space and separate them there. I couldn't make sense of this sentence at first, until I saw this figure:

(figure: linearly inseparable 2D data becoming separable after mapping to 3D)

Great figure.

For a cluster of samples that cannot be separated linearly, we previously described the boundary with a polynomial:

(figure: polynomial hypothesis in terms of x₁, x₂ and their products)

Now replace the polynomial terms with functions f of the input x.

(figure: hypothesis rewritten in terms of the features f₁, f₂, …)

f = similarity(x, l) can be understood as a distance-based function of x and l. Clearly, the closer x is to l, the closer f is to 1; otherwise it approaches 0.

Meanwhile, σ sets how much distance still counts as "close": the larger σ is, the larger the acceptable gap.
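A minimal sketch of similarity(x, l) and the effect of σ (made-up points):

```python
import numpy as np

def gaussian_kernel(x, l, sigma):
    """similarity(x, l) = exp(-||x - l||^2 / (2 * sigma^2))"""
    x, l = np.asarray(x, float), np.asarray(l, float)
    return np.exp(-np.sum((x - l) ** 2) / (2 * sigma ** 2))

x, l = [1.0, 1.0], [1.0, 1.0]
print(gaussian_kernel(x, l, sigma=1.0))     # 1.0: identical points
far = [5.0, 5.0]
print(gaussian_kernel(x, far, sigma=0.5) <
      gaussian_kernel(x, far, sigma=2.0))   # True: larger sigma is more tolerant
```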

(figure: Gaussian kernel contours for different values of σ)

Finally:

(figure: decision region carved out by Gaussian-kernel features around the landmarks)

This figure shows the principle behind an SVM with a Gaussian kernel: given a landmark l, every x within a certain distance of l evaluates close to 1 under the kernel, and those x make up the region the classifier labels as positive (or negative).

For choosing the landmarks, the course's approach is to place one l at each training point x. Note that when each x is evaluated against all the landmarks, exactly one term always has distance 0 (each x against its own landmark).

(figure: landmarks placed at the training examples)

For each x, a feature vector f is computed whose dimension equals the number of training examples.
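A sketch of building those f vectors, with landmarks placed at the training points as the course suggests (toy data; note the guaranteed 1 on the diagonal from the distance-0 term):

```python
import numpy as np

def gaussian_sim(a, b, sigma=1.0):
    # similarity(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

# m training points; the landmarks are the training points themselves.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0]])
m = X.shape[0]

# f for training point i is an m-dimensional vector of similarities.
F = np.array([[gaussian_sim(X[i], X[j]) for j in range(m)] for i in range(m)])

print(F.shape)                        # (3, 3): one m-dimensional f per example
print(np.allclose(np.diag(F), 1.0))   # True: each x matches its own landmark
```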

(figure: matrix and vector dimensions in the kernelized SVM objective)

With that, the dimensions in this figure line up.

Reading from the innermost term outward: when y = 1, minimizing the cost means pushing θᵀf to be at least 1. Matching θ against the f vectors is what optimizes the boundary.

5.Using an SVM

First, perform feature scaling before applying the Gaussian kernel.
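Feature scaling here is the usual mean/standard-deviation normalization, so that no single feature dominates the squared distance ‖x - l‖² inside the kernel (a sketch with made-up numbers):

```python
import numpy as np

# Mean/std feature scaling before applying the Gaussian kernel, so no single
# feature's range dominates the squared distance ||x - l||^2.
X = np.array([[1000.0, 1.0], [2000.0, 2.0], [3000.0, 3.0]])
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_scaled = (X - mu) / sigma

print(X_scaled.mean(axis=0))  # approximately zero in each column
print(X_scaled.std(axis=0))   # one in each column
```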

Then there is a comparison worth noting:

(figure: when to prefer a linear kernel vs. a Gaussian kernel, based on n and m)

This introduces the concept of a linear kernel: an SVM that skips the kernel features f and keeps only the modified activation costs, predicting directly from θᵀx.
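A minimal sketch of that linear-kernel prediction rule (θ is a hypothetical set of learned parameters, inputs are toy data):

```python
import numpy as np

# "Linear kernel" SVM: no kernel features f, predict directly from theta^T x.
def predict_linear(theta, X):
    # hypothesis outputs 1 when theta^T x >= 0, else 0
    return (X @ theta >= 0).astype(int)

theta = np.array([-1.0, 1.0, 0.0])   # hypothetical learned parameters
X = np.array([[1.0, 2.0, 0.0],       # theta^T x =  1.0 -> predict 1
              [1.0, 0.5, 0.0]])      # theta^T x = -0.5 -> predict 0
print(predict_linear(theta, X))      # [1 0]
```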

The reasoning behind the comparison can no doubt be explained in terms of high bias/variance, but my understanding isn't thorough enough yet, so I won't write nonsense here :(

6.Assignment

First, the questions I got wrong:

(figure: first quiz question answered incorrectly)

(figure: second quiz question answered incorrectly)

The programming assignment was on the easy side this time, but the spam classifier it implements is still quite interesting.

function sim = gaussianKernel(x1, x2, sigma)
%RBFKERNEL returns a radial basis function kernel between x1 and x2
%   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
%   and returns the value in sim

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% You need to return the following variables correctly.
sim = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the similarity between x1
%               and x2 computed using a Gaussian kernel with bandwidth
%               sigma
%
%
sim = exp(-sum((x1-x2).^2)/(2*sigma^2));
% =============================================================

end
function [C, sigma] = dataset3Params(X, y, Xval, yval)
%DATASET3PARAMS returns your choice of C and sigma for Part 3 of the exercise
%where you select the optimal (C, sigma) learning parameters to use for SVM
%with RBF kernel
%   [C, sigma] = DATASET3PARAMS(X, y, Xval, yval) returns your choice of C and
%   sigma. You should complete this function to return the optimal C and
%   sigma based on a cross-validation set.
%

% You need to return the following variables correctly.
C = 1;
sigma = 0.3;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the optimal C and sigma
%               learning parameters found using the cross validation set.
%               You can use svmPredict to predict the labels on the cross
%               validation set. For example,
%                   predictions = svmPredict(model, Xval);
%               will return the predictions on the cross validation set.
%
%  Note: You can compute the prediction error using
%        mean(double(predictions ~= yval))
%

bestError = Inf;   % lowest cross-validation error seen so far
                   % (renamed so it does not shadow Octave's built-in error())
values = [0.01;0.03;0.1;0.3;1;3;10;30];
len = length(values);
for i = 1:len            %C
  for j = 1:len          %sigma
    model = svmTrain(X,y,values(i),@(x1, x2) gaussianKernel(x1, x2, values(j)));
    predictions = svmPredict(model,Xval);
    nowError = mean(double(predictions ~= yval));
    if nowError<bestError
      bestError = nowError;
      C = values(i);
      sigma = values(j);
    endif
  endfor
endfor
% =========================================================================

end
function word_indices = processEmail(email_contents)
%PROCESSEMAIL preprocesses the body of an email and
%returns a list of word_indices
%   word_indices = PROCESSEMAIL(email_contents) preprocesses
%   the body of an email and returns a list of indices of the
%   words contained in the email.
%

% Load Vocabulary
vocabList = getVocabList();

% Init return value
word_indices = [];

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Looks for any expression that starts with < and ends with > and replace
% and does not have any < or > in the tag it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLS
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');


% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    [str, email_contents] = ...
       strtok(email_contents, ...
              [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str));
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
       continue;
    end

    % Look up the word in the dictionary and add to word_indices if
    % found
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to add the index of str to
    %               word_indices if it is in the vocabulary. At this point
    %               of the code, you have a stemmed word from the email in
    %               the variable str. You should look up str in the
    %               vocabulary list (vocabList). If a match exists, you
    %               should add the index of the word to the word_indices
    %               vector. Concretely, if str = 'action', then you should
    %               look up the vocabulary list to find where in vocabList
    %               'action' appears. For example, if vocabList{18} =
    %               'action', then, you should add 18 to the word_indices
    %               vector (e.g., word_indices = [word_indices ; 18]; ).
    %
    % Note: vocabList{idx} returns the word with index idx in the
    %       vocabulary list.
    %
    % Note: You can use strcmp(str1, str2) to compare two strings (str1 and
    %       str2). It will return 1 only if the two strings are equivalent.
    %
     for i = 1 : length(vocabList)
       % vocabList is a cell array, so index it with {} to get the string
       if strcmp(str, vocabList{i})
         word_indices = [word_indices ; i];
       endif
     endfor
    % =============================================================


    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end
function x = emailFeatures(word_indices)
%EMAILFEATURES takes in a word_indices vector and produces a feature vector
%from the word indices
%   x = EMAILFEATURES(word_indices) takes in a word_indices vector and
%   produces a feature vector from the word indices.

% Total number of words in the dictionary
n = 1899;

% You need to return the following variables correctly.
x = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return a feature vector for the
%               given email (word_indices). To help make it easier to
%               process the emails, we have already pre-processed each
%               email and converted each word in the email into an index in
%               a fixed dictionary (of 1899 words). The variable
%               word_indices contains the list of indices of the words
%               which occur in one email.
%
%               Concretely, if an email has the text:
%
%                  The quick brown fox jumped over the lazy dog.
%
%               Then, the word_indices vector for this text might look
%               like:
%
%                   60  100   33   44   10     53  60  58   5
%
%               where, we have mapped each word onto a number, for example:
%
%                   the   -- 60
%                   quick -- 100
%                   ...
%
%              (note: the above numbers are just an example and are not the
%               actual mappings).
%
%              Your task is take one such word_indices vector and construct
%              a binary feature vector that indicates whether a particular
%              word occurs in the email. That is, x(i) = 1 when word i
%              is present in the email. Concretely, if the word 'the' (say,
%              index 60) appears in the email, then x(60) = 1. The feature
%              vector should look like:
%
%              x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..];
%
%
for i = 1:length(word_indices)
  x(word_indices(i)) = 1;
endfor
% =========================================================================


end

(figure: training/test accuracy of the spam classifier)


Summary

Straight to the results of experimenting with the spam program.

For email sources, I picked three emails from Grand Theft Auto IV's in-game mail system: two spam, and one non-spam from Francis. The results were barely passable: the email it misjudged was so well disguised that even I couldn't tell it was spam at first glance. GTA IV's email system is, honestly, great fun.

(screenshots: classifier output for the three test emails)
