Coursera Machine Learning Week 7 Programming Assignment: Support Vector Machines

This assignment has two parts: the first trains support vector machines (SVMs) on example datasets, and the second uses an SVM to classify spam email.

1 Support Vector Machines

In this part we train SVMs on 2-D example datasets.

1.1 Example Dataset 1

In this part we train a linear classifier on a 2-D dataset. The training set is visualized below:
(figure: training data for Example Dataset 1)
'+' marks the positive (y = 1) examples.
The plot shows one isolated '+' on the far left, clearly separated from the rest of its class. By adjusting the regularization parameter C we change the decision boundary, and the value of C directly determines whether that leftmost '+' is classified correctly.
As C grows, the boundary moves closer to the isolated '+'; when C is very large, the isolated '+' is forced onto the same side as the other '+' examples, which amounts to overfitting that outlier.
When C is very small, the model underfits instead.
The boundaries for two values of C:
C = 1
(figure: decision boundary with C = 1)
C = 100
(figure: decision boundary with C = 100)
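The assignment trains with the provided Octave svmTrain. Purely to illustrate the effect of C, here is a sketch in Python with scikit-learn on made-up toy data (these points are invented for illustration and are not the assignment's ex6data1.mat):

```python
import numpy as np
from sklearn.svm import SVC

# Made-up 2-D toy data: two clusters plus one isolated positive point
# sitting close to the negative cluster.
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # '+' cluster
              [5.0, 4.0], [5.5, 3.5], [6.0, 4.5],   # 'o' cluster
              [4.8, 3.8]])                          # isolated '+'
y = np.array([1, 1, 1, 0, 0, 0, 1])

accs = {}
for C in (1, 100):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Larger C penalizes margin violations more heavily, so the boundary
    # bends toward the isolated point at the cost of a smaller margin.
    accs[C] = clf.score(X, y)
    print("C =", C, "training accuracy =", accs[C])
```

With small C the classifier tolerates misclassifying the outlier to keep a wide margin; with large C it tries to fit it, which is exactly the behavior described above.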

1.2 SVM with Gaussian Kernels

In this part we apply the SVM to classification problems that are not linearly separable, which requires a kernel function.

1.2.1 Gaussian Kernel

The Gaussian kernel is defined as:

K(x1, x2) = exp( -||x1 - x2||^2 / (2 * sigma^2) )

This part asks us to implement the Gaussian kernel in gaussianKernel.m.
The code:

gaussianKernel.m

function sim = gaussianKernel(x1, x2, sigma)
%RBFKERNEL returns a radial basis function kernel between x1 and x2
%   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
%   and returns the value in sim

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% You need to return the following variables correctly.
sim = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the similarity between x1
%               and x2 computed using a Gaussian kernel with bandwidth
%               sigma
%
%
deltax = x1-x2;
sim = exp(-(deltax'*deltax/(2*(sigma^2))));

% =============================================================
    
end
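The same computation translated to numpy, as a quick sanity check of the Octave function (the check values x1 = [1 2 1], x2 = [0 4 -1], sigma = 2 are the ones the exercise script uses):

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma):
    """Similarity exp(-||x1 - x2||^2 / (2 sigma^2)) between two vectors."""
    x1, x2 = np.ravel(x1), np.ravel(x2)   # force both inputs to 1-D
    delta = x1 - x2
    return np.exp(-delta.dot(delta) / (2.0 * sigma ** 2))

# ||x1 - x2||^2 = 1 + 4 + 4 = 9, so the result is exp(-9/8)
print(gaussian_kernel([1, 2, 1], [0, 4, -1], 2))  # ≈ 0.324652
```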

1.2.2 Example Dataset 2

The dataset is visualized below:
(figure: training data for Example Dataset 2)
Training an SVM with the Gaussian kernel on this dataset via the provided svmTrain function yields the following decision boundary:
(figure: learned decision boundary on Example Dataset 2)

1.2.3 Example Dataset 3

This part provides a third dataset and asks us to train an SVM, searching for the C and σ that give the highest accuracy on the cross-validation set, and then to plot the decision boundary for the chosen C and σ.
First, the data visualized:
(figure: training data for Example Dataset 3)
The plot shows that this dataset also cannot be separated by a straight line, so we again use the Gaussian kernel.
We implement the parameter selection in dataset3Params.m. Here I let C and σ each take one of the eight values 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, giving 8 × 8 = 64 combinations. For each combination I train on the training set, evaluate on the cross-validation set, and keep the C and σ with the lowest validation error.

dataset3Params.m

function [C, sigma] = dataset3Params(X, y, Xval, yval)
%DATASET3PARAMS returns your choice of C and sigma for Part 3 of the exercise
%where you select the optimal (C, sigma) learning parameters to use for SVM
%with RBF kernel
%   [C, sigma] = DATASET3PARAMS(X, y, Xval, yval) returns your choice of C and 
%   sigma. You should complete this function to return the optimal C and 
%   sigma based on a cross-validation set.
%

% You need to return the following variables correctly.
C = 1;
sigma = 0.3;
tc = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
tsigma = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];


% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the optimal C and sigma
%               learning parameters found using the cross validation set.
%               You can use svmPredict to predict the labels on the cross
%               validation set. For example, 
%                   predictions = svmPredict(model, Xval);
%               will return the predictions on the cross validation set.
%
%  Note: You can compute the prediction error using 
%        mean(double(predictions ~= yval))
%
best = Inf;          % lowest cross-validation error found so far
bestC = 0;
bestSigma = 0;
for i = 1:length(tc)
  for j = 1:length(tsigma)
    model = svmTrain(X, y, tc(i), @(x1, x2) gaussianKernel(x1, x2, tsigma(j)));
    predictions = svmPredict(model, Xval);
    err = mean(double(predictions ~= yval));
    if err < best
      best = err;
      bestC = tc(i);
      bestSigma = tsigma(j);
    end
  end
end
C = bestC;
sigma = bestSigma;

% On this dataset the search selects C = 1, sigma = 0.1


% =========================================================================

end

The decision boundary plotted with the selected parameters:
(figure: learned decision boundary on Example Dataset 3)
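The same grid-search idea can be sketched in Python with scikit-learn. Note that sklearn's RBF kernel is parameterized by gamma = 1/(2σ²) rather than σ, and the train/validation split below is made-up synthetic data, not the ex6data3.mat files:

```python
import numpy as np
from sklearn.svm import SVC

# Made-up circular data standing in for the assignment's dataset 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)
Xval = rng.normal(size=(40, 2))
yval = (Xval[:, 0] ** 2 + Xval[:, 1] ** 2 > 1).astype(int)

values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30]
best, best_C, best_sigma = np.inf, None, None
for C in values:
    for sigma in values:
        # sklearn's RBF kernel uses gamma = 1 / (2 * sigma^2)
        model = SVC(C=C, kernel="rbf",
                    gamma=1.0 / (2.0 * sigma ** 2)).fit(X, y)
        err = np.mean(model.predict(Xval) != yval)   # validation error
        if err < best:
            best, best_C, best_sigma = err, C, sigma
print("best C =", best_C, "best sigma =", best_sigma, "val error =", best)
```

Keeping the parameters that minimize validation error is equivalent to maximizing validation accuracy, which is what dataset3Params.m does.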

2 Spam Classification

This part implements a classifier that separates spam from non-spam email.
We complete several stages of the learning pipeline here.

2.1 Preprocessing Emails

We preprocess each raw email with the following steps:

  1. Lower-case all letters
  2. Strip HTML tags
  3. Replace every URL with the token httpaddr
  4. Replace every email address with the token emailaddr
  5. Replace every number with the token number
  6. Replace every dollar sign with the token dollar
  7. Reduce inflected word forms to a common stem (word stemming)
  8. Remove non-word characters
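The normalization steps can be sketched with Python's re module, mirroring the regexprep calls in processEmail.m and applied in the same order as that file (the sample string is made up):

```python
import re

def normalize(email):
    email = email.lower()                                    # lowercase
    email = re.sub(r'<[^<>]+>', ' ', email)                  # strip HTML tags
    email = re.sub(r'[0-9]+', 'number', email)               # numbers
    email = re.sub(r'(http|https)://\S*', 'httpaddr', email)  # URLs
    email = re.sub(r'\S+@\S+', 'emailaddr', email)           # email addresses
    email = re.sub(r'[$]+', 'dollar', email)                 # dollar signs
    return email

print(normalize('Visit http://example.com or mail me@foo.com, only $100!'))
# → visit httpaddr or mail emailaddr only dollarnumber!
```

Stemming and punctuation removal happen later, during tokenization, just as in the Octave code.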

2.1.1 Vocabulary List

In this part we use the most frequently occurring words in the emails as features. The provided vocabulary consists of the 1899 words that occur at least 100 times in the spam corpus.
For each word in an email that appears in the vocabulary, we record that word's index in a vector; words not in the vocabulary are skipped.
We implement this in processEmail.m.

processEmail.m

function word_indices = processEmail(email_contents)
%PROCESSEMAIL preprocesses the body of an email and
%returns a list of word_indices 
%   word_indices = PROCESSEMAIL(email_contents) preprocesses 
%   the body of an email and returns a list of indices of the 
%   words contained in the email. 
%

% Load Vocabulary
vocabList = getVocabList();

% Init return value
word_indices = [];

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Looks for any expression that starts with < and ends with > and replace
% and does not have any < or > in the tag it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLS
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');


% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    [str, email_contents] = ...
       strtok(email_contents, ...
              [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);
   
    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word 
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str)); 
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
       continue;
    end

    % Look up the word in the dictionary and add to word_indices if
    % found
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to add the index of str to
    %               word_indices if it is in the vocabulary. At this point
    %               of the code, you have a stemmed word from the email in
    %               the variable str. You should look up str in the
    %               vocabulary list (vocabList). If a match exists, you
    %               should add the index of the word to the word_indices
    %               vector. Concretely, if str = 'action', then you should
    %               look up the vocabulary list to find where in vocabList
    %               'action' appears. For example, if vocabList{18} =
    %               'action', then, you should add 18 to the word_indices 
    %               vector (e.g., word_indices = [word_indices ; 18]; ).
    % 
    % Note: vocabList{idx} returns the word with index idx in the
    %       vocabulary list.
    % 
    % Note: You can use strcmp(str1, str2) to compare two strings (str1 and
    %       str2). It will return 1 only if the two strings are equivalent.
    %
    for i = 1:length(vocabList)
      if strcmp(vocabList{i}, str)
        word_indices = [word_indices ; i];
        break;   % each stemmed word matches at most one vocabulary entry
      end
    end

    % =============================================================


    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end
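The linear scan over vocabList costs O(n) per word. In Python one would typically invert the vocabulary into a dict for constant-time lookups; a sketch with a made-up five-word mini-vocabulary standing in for the 1899-word vocab.txt:

```python
# Made-up mini-vocabulary (the real one has 1899 entries).
vocab_list = ['action', 'click', 'email', 'number', 'spam']

# Invert once: word -> 1-based index, matching the Octave convention.
vocab_index = {word: i + 1 for i, word in enumerate(vocab_list)}

def lookup(tokens):
    """Return indices of tokens found in the vocabulary; skip the rest."""
    return [vocab_index[t] for t in tokens if t in vocab_index]

print(lookup(['click', 'here', 'to', 'stop', 'spam', 'email']))  # [2, 5, 3]
```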

2.2 Extracting Features from Emails

In this part we build the feature vector x: every entry of x whose vocabulary index appears in word_indices is set to 1, and all other entries are set to 0.
We implement this in emailFeatures.m.

emailFeatures.m

function x = emailFeatures(word_indices)
%EMAILFEATURES takes in a word_indices vector and produces a feature vector
%from the word indices
%   x = EMAILFEATURES(word_indices) takes in a word_indices vector and 
%   produces a feature vector from the word indices. 

% Total number of words in the dictionary
n = 1899;

% You need to return the following variables correctly.
x = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return a feature vector for the
%               given email (word_indices). To help make it easier to 
%               process the emails, we have have already pre-processed each
%               email and converted each word in the email into an index in
%               a fixed dictionary (of 1899 words). The variable
%               word_indices contains the list of indices of the words
%               which occur in one email.
% 
%               Concretely, if an email has the text:
%
%                  The quick brown fox jumped over the lazy dog.
%
%               Then, the word_indices vector for this text might look 
%               like:
%               
%                   60  100   33   44   10     53  60  58   5
%
%               where, we have mapped each word onto a number, for example:
%
%                   the   -- 60
%                   quick -- 100
%                   ...
%
%              (note: the above numbers are just an example and are not the
%               actual mappings).
%
%              Your task is take one such word_indices vector and construct
%              a binary feature vector that indicates whether a particular
%              word occurs in the email. That is, x(i) = 1 when word i
%              is present in the email. Concretely, if the word 'the' (say,
%              index 60) appears in the email, then x(60) = 1. The feature
%              vector should look like:
%
%              x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..];
%
%
for i=1:length(word_indices)
  x(word_indices(i))=1;
endfor

% =========================================================================
    
end
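The same mapping in numpy terms, as a sketch (indices are 1-based, matching the Octave code, so they are shifted by one when writing into the array):

```python
import numpy as np

def email_features(word_indices, n=1899):
    """Binary feature vector: x[i-1] = 1 iff vocabulary word i occurs."""
    x = np.zeros(n)
    for idx in word_indices:
        x[idx - 1] = 1          # shift 1-based index to 0-based position
    return x

# The example index list from the comments above; 60 appears twice,
# but the feature is binary, so 8 distinct entries end up set.
x = email_features([60, 100, 33, 44, 10, 53, 60, 58, 5])
print(int(x.sum()))  # → 8
```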

2.3 Training SVM for Spam Classification

Finally, we feed the resulting feature vectors into the SVM for training.
That concludes this assignment.
