Machine Learning Notes 12: Support Vector Machines, One of the Most Powerful Learning Algorithms (Part 2)


1. Email Preprocessing

We are given an email that contains symbols, URLs, numbers, email addresses, and irregular writing:

> Anyone knows how much it costs to host a web portal ?
>
> Well, it depends on how many visitors youre expecting. This can be
> anywhere from less than 10 bucks a month to a couple of $100. You
> should checkout http://www.rackspace.com/ or perhaps Amazon EC2 if
> youre running something big..
> To unsubscribe yourself from this mailing list, send an email to:
> groupname-unsubscribe@egroups.com
1.1 Processing steps
  1. Lower-casing: the entire email is converted to lower case, so capitalization is ignored;
  2. Stripping HTML: all HTML tags are removed from the email. Many emails arrive with HTML formatting; removing the tags keeps only the content;
  3. Normalizing URLs: all URLs are replaced with the text "httpaddr";
  4. Normalizing email addresses: all email addresses are replaced with the text "emailaddr";
  5. Normalizing numbers: all numbers are replaced with the text "number";
  6. Normalizing dollars: all dollar signs ($) are replaced with "dollar";
  7. Word stemming: words are reduced to their stemmed form, so singular/plural and tense variants map to the same token;
  8. Removing non-words: non-words and punctuation are removed, and all whitespace (tabs, newlines, spaces) is trimmed to a single space character.
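These normalization steps can be sketched in Python with the `re` module (a hedged re-implementation of the Octave `regexprep` calls; `normalize_email` is an illustrative name, and the substitutions follow the order used in the code, where numbers are replaced before URLs):

```python
import re

def normalize_email(body):
    """Sketch of the normalization steps: the same regexes as the
    Octave regexprep calls, applied in the same order."""
    body = body.lower()                                     # 1. lower-case
    body = re.sub(r'<[^<>]+>', ' ', body)                   # 2. strip HTML tags
    body = re.sub(r'[0-9]+', 'number', body)                # 5. numbers
    body = re.sub(r'(http|https)://\S*', 'httpaddr', body)  # 3. URLs
    body = re.sub(r'\S+@\S+', 'emailaddr', body)            # 4. email addresses
    body = re.sub(r'[$]+', 'dollar', body)                  # 6. dollar signs
    return body

print(normalize_email('Visit http://www.rackspace.com/ for $100 or mail me@host.com'))
# → visit httpaddr for dollarnumber or mail emailaddr
```

Because numbers are replaced before URLs, a URL containing digits first becomes e.g. `http://...number...` and is then collapsed to `httpaddr` anyway, matching the Octave behavior.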
function word_indices = processEmail(email_contents)
%PROCESSEMAIL preprocesses the body of an email and
%returns a list of word_indices
%   word_indices = PROCESSEMAIL(email_contents) preprocesses
%   the body of an email and returns a list of indices of the
%   words contained in the email.
%

% Load Vocabulary
% the vocabulary of 1899 common words, returned as a cell array
vocabList = getVocabList();

% Init return value
word_indices = [];

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Looks for any expression that starts with < and ends with > and replace
% and does not have any < or > in the tag it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLS
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');


% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    % strtok splits the text at the first run of these delimiter
    % characters, returning the token and the remaining string
    [str, email_contents] = ...
       strtok(email_contents, ...
              [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str));
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
       continue;
    end

    % Look the token up in the vocabulary and record its index
    for i = 1:length(vocabList)
        if strcmp(vocabList{i}, str)   % vocabList{i} is the i-th word
            word_indices = [word_indices; i];
            break;
        end
    end

% =============================================================


    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end
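The tokenize-and-look-up loop has a compact Python analogue (a sketch; `tokenize` and `vocab` are illustrative names, stemming is left out because it needs a Porter stemmer, and the delimiter set is approximated by splitting on non-alphanumeric characters, which the Octave code also reduces tokens to after its cleanup regexprep; the index values in the example are made up):

```python
import re

def tokenize(body, vocab):
    """Split normalized email text into tokens and map each token found
    in the vocabulary to its index (sketch of processEmail's main loop)."""
    indices = []
    for tok in re.split(r'[^a-zA-Z0-9]+', body):
        if len(tok) < 1:          # skip words that are too short
            continue
        if tok in vocab:          # O(1) dict lookup instead of a linear scan
            indices.append(vocab[tok])
    return indices

print(tokenize('anyon know how', {'anyon': 86, 'know': 916, 'how': 794}))
# → [86, 916, 794]
```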

Loading the vocabulary:

function vocabList = getVocabList()
%GETVOCABLIST reads the fixed vocabulary list in vocab.txt and returns a
%cell array of the words
%   vocabList = GETVOCABLIST() reads the fixed vocabulary list in vocab.txt 
%   and returns a cell array of the words in vocabList.


%% Read the fixed vocabulary list
fid = fopen('vocab.txt');

% Store all dictionary words in cell array vocab{}
n = 1899;  % Total number of words in the dictionary

% For ease of implementation, we use a struct to map the strings => integers
% In practice, you'll want to use some form of hashmap
vocabList = cell(n, 1);
for i = 1:n
    % Word Index (can ignore since it will be = i)
    fscanf(fid, '%d', 1);
    % Actual Word
    vocabList{i} = fscanf(fid, '%s', 1);
end
fclose(fid);

end
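As the comment in `getVocabList` notes, a hash map is preferable in practice. A minimal Python sketch, assuming the same two-column `vocab.txt` layout (`index word` per line; `load_vocab` is an illustrative name):

```python
def load_vocab(path='vocab.txt'):
    """Read 'index word' lines into a word -> index dict,
    giving O(1) token lookups instead of a linear scan."""
    vocab = {}
    with open(path) as f:
        for line in f:
            idx, word = line.split()
            vocab[word] = int(idx)
    return vocab
```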

Preprocessing result:

anyon know how much it cost to host a web portal well it depend on how
mani visitor your expect thi can be anywher from less than number buck
a month to a coupl of dollarnumb you should checkout httpaddr or perhap
amazon ecnumb if your run someth big to unsubscrib yourself from thi
mail list send an email to emailaddr

Word indices:


2. Extracting Features from the Email

Features are extracted from the preprocessed email, yielding a binary feature vector such as [0,0,0,1,0,1,1,…,0,1,1,1,0,0,0,1].

function x = emailFeatures(word_indices)
% Convert an email's word indices into a feature vector

n = 1899;

% Create an n x 1 column vector of zeros
x = zeros(n, 1);

% Set the entry for every word that occurs in the email to 1
x(word_indices) = 1;

end
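The same mapping in NumPy (a sketch; `email_features` is an illustrative name, and the 1-based Octave indices are shifted to 0-based):

```python
import numpy as np

def email_features(word_indices, n=1899):
    """Binary feature vector: entry i-1 is 1 iff vocabulary word i
    (1-based, as in the Octave code) occurs in the email."""
    x = np.zeros(n)
    x[np.asarray(word_indices) - 1] = 1   # duplicate indices just re-set the entry
    return x
```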

Extraction result:


3. Training an SVM for Spam Classification

The training procedure is the same as in the previous note.

Training result:

4. Top Predictors for Spam

The 15 words with the largest positive weights in the trained classifier (the strongest indicators of spam) are:

our click remov guarante visit basenumb dollar will price pleas nbsp
most lo ga dollarnumb

5. Main Program

%% Initialization
clear ; close all; clc

%% ==================== Part 1: Email preprocessing ====================
% To classify emails as spam or non-spam with an SVM, each email must
% first be converted into a feature vector; this step produces the
% vector of word indices for a given email.

fprintf('\nPreprocessing sample email:\n');

% Extract the word indices from the email
file_contents = readFile('emailSample1.txt');
word_indices  = processEmail(file_contents);

% Print status
fprintf('Word indices: \n');
fprintf(' %d', word_indices);
fprintf('\n\n');

fprintf('Program paused. Press any key to continue.\n');
pause;

%% ==================== Part 2: Feature extraction ====================
%  Convert the email into a feature vector

fprintf('\nExtracting features from sample email (emailSample1.txt)\n');

% Extract features
file_contents = readFile('emailSample1.txt');
word_indices  = processEmail(file_contents);
features      = emailFeatures(word_indices);

% Print status
fprintf('Length of feature vector: %d\n', length(features));
fprintf('Number of non-zero entries: %d\n', sum(features > 0));

fprintf('Program paused. Press any key to continue.\n');
pause;

%% =========== Part 3: Train a linear SVM on the spam dataset ========
%  Train a linear classifier that decides whether an email is spam

% Load the spam training data
% spamTrain.mat contains:
%     X: 4000 x 1899 (4000 email samples), y: 4000 x 1 (labels)
load('spamTrain.mat');

fprintf('\nTraining a linear SVM (spam classification)\n')
fprintf('(this may take a while -- one drawback of SVMs) ...\n')

C = 0.1;
model = svmTrain(X, y, C, @linearKernel);

p = svmPredict(model, X);

fprintf('Training accuracy: %f\n', mean(double(p == y)) * 100);

%% =================== Part 4: Test the spam classifier ================
%  Evaluate the classifier on the spamTest.mat test set

% The file contains: Xtest, ytest
load('spamTest.mat');

fprintf('\nEvaluating the trained linear SVM on the test set ...\n')

p = svmPredict(model, Xtest);

fprintf('Test accuracy: %f\n', mean(double(p == ytest)) * 100);
pause;


%% ================= Part 5: Top predictors for spam ====================
% Find the words with the largest weights in the classifier

% Sort the weights and look the indices up in the vocabulary
[weight, idx] = sort(model.w, 'descend');
vocabList = getVocabList();

fprintf('\nTop predictors for spam: \n');
for i = 1:15
    fprintf(' %-15s (%f) \n', vocabList{idx(i)}, weight(i));
end

fprintf('\n\n');
fprintf('\nProgram paused. Press any key to continue.\n');
pause;

%% =================== Part 6: Classify your own emails =====================
filename = 'spamSample1.txt';

% Read and predict
file_contents = readFile(filename);
word_indices  = processEmail(file_contents);
x             = emailFeatures(word_indices);
p = svmPredict(model, x);

fprintf('\nProcessed %s\n\nSpam classification: %d\n', filename, p);
fprintf('(1 means spam, 0 means not spam)\n\n');
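Part 5's weight-sorting step has a direct NumPy analogue (a sketch; the function and variable names are illustrative and the example weights are made up):

```python
import numpy as np

def top_predictors(w, vocab_list, k=15):
    """Return the k vocabulary words with the largest weights in a
    linear SVM, i.e. the strongest spam indicators."""
    order = np.argsort(w)[::-1]      # weight indices, descending
    return [(vocab_list[i], float(w[i])) for i in order[:k]]

words = ['our', 'click', 'remov', 'price']
w = np.array([0.5, 0.47, 0.42, 0.1])
print(top_predictors(w, words, k=2))
```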

Spam sample 1:

Do You Want To Make $1000 Or More Per Week?

If you are a motivated and qualified individual - I 
will personally demonstrate to you a system that will 
make you $1,000 per week or more! This is NOT mlm.

Call our 24 hour pre-recorded number to get the 
details.  

000-456-789

I need people who want to make serious money.  Make 
the call and get the facts. 

Invest 2 minutes in yourself now!

000-456-789

Looking forward to your call and I will introduce you 
to people like yourself who
are currently making $10,000 plus per week!

000-456-789

3484lJGv6-241lEaN9080lRmS6-271WxHo7524qiyT5-438rjUv5615hQcf0-662eiDB9057dMtVl72

Spam sample 2:

Best Buy Viagra Generic Online

Viagra 100mg x 60 Pills $125, Free Pills & Reorder Discount, Top Selling 100% Quality & Satisfaction guaranteed!

We accept VISA, Master & E-Check Payments, 90000+ Satisfied Customers!
http://medphysitcstech.ru
5.1 Reading a file
function file_contents = readFile(filename)
% Read a file and return its entire contents in file_contents

% Open the file (fopen returns -1 on failure)
fid = fopen(filename);
if fid ~= -1
    file_contents = fscanf(fid, '%c', inf);
    fclose(fid);
else
    file_contents = '';
    fprintf('Unable to open file %s\n', filename);
end

end

5.2 Linear kernel
function sim = linearKernel(x1, x2)

% Ensure the examples are column vectors
x1 = x1(:); x2 = x2(:);

% Compute the kernel value (plain inner product)
sim = x1' * x2;

end
5.3 Training the SVM
function [model] = svmTrain(X, Y, C, kernelFunction, tol, max_passes)
% SVMTRAIN trains an SVM classifier using a simplified version of the
% SMO (Sequential Minimal Optimization) algorithm:
%    X is the matrix of training examples (4000 x 1899 here);
%    Y holds the labels (0: not spam, 1: spam);
%    C is the SVM regularization parameter;
%    tol is the tolerance used for floating-point comparisons;
%    max_passes is the number of passes without any alpha changes
%    allowed before the algorithm stops.
% Note: this is a simplified SMO. To train a production SVM classifier,
% consider one of these optimized packages instead:
%       LIBSVM   (http://www.csie.ntu.edu.tw/~cjlin/libsvm/)
%       SVMLight (http://svmlight.joachims.org/)
if ~exist('tol', 'var') || isempty(tol)
    tol = 1e-3;
end

if ~exist('max_passes', 'var') || isempty(max_passes)
    max_passes = 5;
end

% Data parameters
m = size(X, 1);
n = size(X, 2);

% Map label 0 to -1
Y(Y==0) = -1;

% Variables
alphas = zeros(m, 1);
b = 0;
E = zeros(m, 1);
passes = 0;
eta = 0;
L = 0;
H = 0;

% Since our dataset is small, we pre-compute the kernel matrix
% (in practice, optimized SVM packages that handle large datasets
%  gracefully will _not_ do this)
%
% A vectorized kernel computation is used here so that SVM training
% runs much faster.
% func2str converts a function handle into its name as a string
if strcmp(func2str(kernelFunction), 'linearKernel')
    % Vectorized computation for the Linear Kernel
    % This is equivalent to computing the kernel on every pair of examples
    K = X*X';
elseif strfind(func2str(kernelFunction), 'gaussianKernel')
    % Vectorized RBF Kernel
    % This is equivalent to computing the kernel on every pair of examples
    X2 = sum(X.^2, 2);
    K = bsxfun(@plus, X2, bsxfun(@plus, X2', - 2 * (X * X')));
    K = kernelFunction(1, 0) .^ K;
else
    % Pre-compute the Kernel Matrix
    % The following can be slow due to the lack of vectorization
    K = zeros(m);
    for i = 1:m
        for j = i:m
             K(i,j) = kernelFunction(X(i,:)', X(j,:)');
             K(j,i) = K(i,j); %the matrix is symmetric
        end
    end
end

% Train
fprintf('\nTraining ...');
dots = 12;
while passes < max_passes,

    num_changed_alphas = 0;
    for i = 1:m,

        % Calculate Ei = f(x(i)) - y(i) using (2).
        % E(i) = b + sum (X(i, :) * (repmat(alphas.*Y,1,n).*X)') - Y(i);
        E(i) = b + sum (alphas.*Y.*K(:,i)) - Y(i);

        if ((Y(i)*E(i) < -tol && alphas(i) < C) || (Y(i)*E(i) > tol && alphas(i) > 0)),

            % In practice, there are many heuristics one can use to select
            % the i and j. In this simplified code, we select them randomly.
            j = ceil(m * rand());
            while j == i,  % Make sure i \neq j
                j = ceil(m * rand());
            end

            % Calculate Ej = f(x(j)) - y(j) using (2).
            E(j) = b + sum (alphas.*Y.*K(:,j)) - Y(j);

            % Save old alphas
            alpha_i_old = alphas(i);
            alpha_j_old = alphas(j);

            % Compute L and H by (10) or (11).
            if (Y(i) == Y(j)),
                L = max(0, alphas(j) + alphas(i) - C);
                H = min(C, alphas(j) + alphas(i));
            else
                L = max(0, alphas(j) - alphas(i));
                H = min(C, C + alphas(j) - alphas(i));
            end

            if (L == H),
                % continue to next i.
                continue;
            end

            % Compute eta by (14).
            eta = 2 * K(i,j) - K(i,i) - K(j,j);
            if (eta >= 0),
                % continue to next i.
                continue;
            end

            % Compute and clip new value for alpha j using (12) and (15).
            alphas(j) = alphas(j) - (Y(j) * (E(i) - E(j))) / eta;

            % Clip
            alphas(j) = min (H, alphas(j));
            alphas(j) = max (L, alphas(j));

            % Check if change in alpha is significant
            if (abs(alphas(j) - alpha_j_old) < tol),
                % continue to next i.
                % replace anyway
                alphas(j) = alpha_j_old;
                continue;
            end

            % Determine value for alpha i using (16).
            alphas(i) = alphas(i) + Y(i)*Y(j)*(alpha_j_old - alphas(j));

            % Compute b1 and b2 using (17) and (18) respectively.
            b1 = b - E(i) ...
                 - Y(i) * (alphas(i) - alpha_i_old) *  K(i,i) ...
                 - Y(j) * (alphas(j) - alpha_j_old) *  K(i,j);
            b2 = b - E(j) ...
                 - Y(i) * (alphas(i) - alpha_i_old) *  K(i,j) ...
                 - Y(j) * (alphas(j) - alpha_j_old) *  K(j,j);

            % Compute b by (19).
            if (0 < alphas(i) && alphas(i) < C),
                b = b1;
            elseif (0 < alphas(j) && alphas(j) < C),
                b = b2;
            else
                b = (b1+b2)/2;
            end

            num_changed_alphas = num_changed_alphas + 1;

        end

    end

    if (num_changed_alphas == 0),
        passes = passes + 1;
    else
        passes = 0;
    end

    fprintf('.');
    dots = dots + 1;
    if dots > 78
        dots = 0;
        fprintf('\n');
    end
    if exist('OCTAVE_VERSION')
        fflush(stdout);
    end
end
fprintf(' Done! \n\n');

% Save the model
idx = alphas > 0;
model.X= X(idx,:);
model.y= Y(idx);
model.kernelFunction = kernelFunction;
model.b= b;
model.alphas= alphas(idx);
model.w = ((alphas.*Y)'*X)';

end
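The vectorized RBF branch relies on the identity ||xi − xj||² = ||xi||² + ||xj||² − 2·xi·xj; in NumPy the `bsxfun` calls become plain broadcasting. A sketch (here `sigma` is an explicit bandwidth parameter, whereas the Octave code folds it into the value `kernelFunction(1, 0)`):

```python
import numpy as np

def rbf_kernel_matrix(X, sigma):
    """Full Gaussian kernel matrix via
    ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2*xi.xj;
    broadcasting plays the role of the bsxfun calls."""
    sq = np.sum(X ** 2, axis=1)                      # ||xi||^2, shape (m,)
    d2 = sq[:, None] + sq[None, :] - 2 * (X @ X.T)   # pairwise squared distances
    return np.exp(-d2 / (2 * sigma ** 2))
```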

5.4 Prediction with the SVM
function pred = svmPredict(model, X)
%SVMPREDICT returns a vector of predictions using a trained SVM model
%(svmTrain).
%   pred = SVMPREDICT(model, X) returns a vector of predictions using a
%   trained SVM model (svmTrain). X is an m x n matrix where each
%   example is a row. model is an svm model returned from svmTrain.
%   pred is an m x 1 column vector of {0, 1} predictions.
%

% Check if we are getting a column vector, if so, then assume that we only
% need to do prediction for a single example
if (size(X, 2) == 1)
    % Examples should be in rows
    X = X';
end

% Dataset 
m = size(X, 1);
p = zeros(m, 1);
pred = zeros(m, 1);

if strcmp(func2str(model.kernelFunction), 'linearKernel')
    % We can use the weights and bias directly if working with the 
    % linear kernel
    p = X * model.w + model.b;
elseif strfind(func2str(model.kernelFunction), 'gaussianKernel')
    % Vectorized RBF Kernel
    % This is equivalent to computing the kernel on every pair of examples
    X1 = sum(X.^2, 2);
    X2 = sum(model.X.^2, 2)';
    K = bsxfun(@plus, X1, bsxfun(@plus, X2, - 2 * X * model.X'));
    K = model.kernelFunction(1, 0) .^ K;
    K = bsxfun(@times, model.y', K);
    K = bsxfun(@times, model.alphas', K);
    p = sum(K, 2);
else
    % Other Non-linear kernel
    for i = 1:m
        prediction = 0;
        for j = 1:size(model.X, 1)
            prediction = prediction + ...
                model.alphas(j) * model.y(j) * ...
                model.kernelFunction(X(i,:)', model.X(j,:)');
        end
        p(i) = prediction + model.b;
    end
end

% Convert predictions into 0 / 1
pred(p >= 0) =  1;
pred(p <  0) =  0;

end
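For the linear kernel, `svmPredict` never touches the support vectors: the score is simply `X * w + b`. A NumPy sketch of that branch (the example model is made up):

```python
import numpy as np

def predict_linear(X, w, b):
    """Linear-kernel SVM prediction: label 1 iff X @ w + b >= 0."""
    return (X @ w + b >= 0).astype(int)

# Hypothetical model: weight on feature 0 only, bias -0.5.
print(predict_linear(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([1.0, 0.0]), -0.5))
# → [1 0]
```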
