Mahout in Action

Referenced from http://www.52ml.net/tags/mahout

The audio commentary that accompanies this book can be viewed directly at http://www.ituring.com.cn/article/74754. Turing Education has added Chinese subtitles to these videos to help readers follow along. Audio and video file icons (as shown below) appear throughout the chapters that follow, linking to the corresponding commentary videos.

No. 1 Audio 1.1 The story of Mahout
Sean introduces the Mahout project and his involvement in it.

No. 2 Audio 2.4.2 Problems with precision and recall
Sean discusses the workings of recommender systems.

No. 3 Audio 3.3.1 When to ignore values
Sean explains why he believes people tend to "listen" to their data too closely.

No. 4 Audio 4.3.2 Problems with the Pearson correlation coefficient
Sean talks about implementing the Pearson correlation.

No. 5 Audio 5.2.4 Evaluating precision and recall
Sean discusses the value of interpreting performance metrics.

No. 6 Audio 6.3 Implementing distributed algorithms with MapReduce
Sean explains the relationship between Mahout and Hadoop.

No. 7 Audio 7.4.1 The Euclidean distance measure
Robin explains how to choose the right distance measure for a dataset.

No. 8 Audio 8.1.1 Converting data into vectors
Robin expands on the apple analogy example.

No. 9 Audio 9.1.1 What you need to know about k-means
Robin explains the iterative process of k-means clustering.

No. 10 Audio 10.2.2 Inter-cluster and intra-cluster distances
Robin discusses strategies for improving clustering quality.

No. 11 Audio 11.3 Batch and online clustering
Robin explains how to improve the performance of large-scale clustering.

No. 12 Video 13.4.1 Stage 1 workflow: training the classification model
Ellen shows how to train a model and refine it step by step.

No. 13 Video 14.4.3 Training code for the 20 Newsgroups data
Ted and Ellen show the inner workings of logistic regression.

No. 14 Video 14.5 Choosing an algorithm to train the classifier
Ted compares the advantages of serial and parallel algorithms.

No. 15 Audio 15.2.1 Computing the AUC
Ted and Ellen discuss the AUC evaluation method.

No. 16 Audio 15.2.3 Computing the average log likelihood
Ted and Ellen discuss why the log-likelihood approach means "never say never."

Reposted from: Startup News

Mahout to migrate from Hadoop to Spark


Apache's machine learning project Mahout has begun to support Spark. The project will gradually migrate from the Hadoop platform to the Spark computing platform, and will also support the new data engine H2O.

Mahout project management committee member Ted Dunning: "H2O is an excellent technology and a great fit for Mahout. It can help Mahout deliver many new features and remove the various limitations of earlier versions. When H2O is combined with Spark in particular, the performance is very strong."

H2O is an in-memory data engine developed by the startup 0xdata, designed specifically for statistical computing tasks, with native support for the R language.

Source: GigaOM

[Original] Mahout 0.8 maintenance notes

(0)
Class:
org.apache.mahout.clustering.spectral.common.VectorMatrixMultiplicationJob
Method:
public static DistributedRowMatrix runJob(Path markovPath, Vector diag, Path outputPath)
    throws IOException, ClassNotFoundException, InterruptedException
Before:
return runJob(markovPath, diag, outputPath, new Path(outputPath, "tmp"));
After:
return runJob(markovPath, diag, outputPath, new Path(outputPath, "_tmp"));
(1)
Class:
org.apache.mahout.classifier.df.mapreduce.BuildForest
Method:
private void buildForest() throws IOException, ClassNotFoundException, InterruptedException
Before:
forestBuilder.setOutputDirName(outputPath.getName());
After:
forestBuilder.setOutputDirName(outputPath.toString());
(2)
Class:
org.apache.mahout.classifier.df.mapreduce.Builder
Method:
public Path getOutputPath(Configuration conf) throws IOException
Before:
FileSystem fs = FileSystem.get(conf);
return new Path(fs.getWorkingDirectory(), outputDirName);
After:
FileSystem fs = FileSystem.get(conf);
return fs.makeQualified(new Path(outputDirName));
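Fix (1) turns on the difference between Path.getName() and Path.toString() in the Hadoop API: getName() keeps only the last path component, so the forest builder was losing the parent directories. A standalone sketch (with a made-up path, not part of the patch) to illustrate:

import org.apache.hadoop.fs.Path;

public class PathDemo {
    public static void main(String[] args) {
        Path outputPath = new Path("/user/hdfs/forest/output");
        // getName() returns only the final component of the path
        System.out.println(outputPath.getName());  // prints: output
        // toString() preserves the full path, which is what setOutputDirName needs
        System.out.println(outputPath.toString()); // prints: /user/hdfs/forest/output
    }
}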

Reposted from: 葱葱的城堡

Installing mahout-0.7-cdh4.5.0

1. Download Mahout: http://archive.cloudera.com/cdh4/cdh/4/mahout-0.7-cdh4.6.0.tar.gz
2. Extract: mahout-0.7-cdh4.5.0.tar.gz
3. Rename it: mv mahout-0.7-cdh4.5.0 mahout
4. Add environment variables to /etc/profile:
export MAHOUT_HOME=/usr/local/mahout
export CLASSPATH=.:$CLASSPATH:$MAHOUT_HOME/lib
export PATH=$PATH:$MAHOUT_HOME/bin
5. Verify:
5.1) Download the test data: wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
5.2) Create the HDFS directory: hadoop fs -mkdir testdata
5.3) Upload the file: hadoop fs -put synthetic_control.data testdata

5.4) Run the job: hadoop jar /usr/local/mahout/mahout-examples-0.5-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job

As you can see, Hadoop must already be installed on any server that runs Mahout.

Reposted from: Linux公社

Mahout Applications (Part 1) – LongYou

Mahout is a data mining tool that runs on Hadoop (no need to belabor that).

Here is a brief introduction to the typical way Mahout is used.

Let's take k-means as an example.

The input that Mahout's k-means expects is somewhat special: it must consist of VectorWritable values stored in SequenceFile format. (To make data easy to inspect, I generally prefer storing it as plain Text.) SequenceFile is used mainly because it is compressible and fast to read; Mahout assumes that most output is not meant to be read by people but to serve as input to later jobs. The use of VectorWritable is a topic for another time.

Mahout provides a class called InputDriver that converts input files into VectorWritable form. Note that it expects input stored as Text and writes output as SequenceFile, which is exactly the format k-means needs; each line must hold one vector, with values separated by spaces.

Since readers may be on different Mahout versions, I will use mahout.jar to stand for the Mahout JAR.

Assuming the input path on HDFS is input and the output is buf1, the conversion can be run directly from the command line:

hadoop jar mahout.jar org.apache.mahout.clustering.conversion.InputDriver -i input -o buf1
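If you would rather prepare the VectorWritable SequenceFile yourself instead of going through InputDriver, a minimal sketch looks like the following (assuming Hadoop 1.x and Mahout 0.7+ APIs; the output location and sample points are made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.VectorWritable;

public class WriteVectors {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("buf1/part-00000"); // made-up output location

        double[][] points = {{1.0, 1.0}, {2.0, 1.0}, {8.0, 8.0}, {9.0, 8.0}};
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path,
                LongWritable.class, VectorWritable.class);
        try {
            long key = 0;
            for (double[] p : points) {
                // One record per vector; k-means ignores the key
                writer.append(new LongWritable(key++), new VectorWritable(new DenseVector(p)));
            }
        } finally {
            writer.close();
        }
    }
}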

Here is a screenshot of the data:

Then you can run Mahout's k-means algorithm to do the clustering.

Here you can work directly through bin/mahout.

Our input is the previous output buf1, the output is output, and we need a path buf2 to hold the cluster centers. An example command line:

bin/mahout kmeans \
  --input buf1 \
  --output output \
  -k 5 \
  -c buf2 \
  --maxIter 100 \
  -cd 0.001 \
  -dm org.apache.mahout.common.distance.SquaredEuclideanDistanceMeasure \
  -cl

To explain the parameters: input and output need no explanation; -k 5 means cluster into 5 groups; -c is the location where the cluster centers are stored; --maxIter is the maximum number of iterations; -cd is the convergence threshold at which iteration stops; -dm is the distance measure to use; and -cl makes the job additionally write the assigned points to clusteredPoints. (The -cl flag is quite useful: it is recommended if you want to inspect the results, and can be omitted otherwise.)
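Since -cl writes the clustered points as a SequenceFile, you can also inspect them programmatically. A sketch (assuming Hadoop 1.x APIs; the part file name is an assumption, and the key/value classes are read from the file header so this works across Mahout versions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class DumpClusteredPoints {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("output/clusteredPoints/part-m-00000"); // assumed file name

        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            // Instantiate key/value holders from the classes recorded in the file header
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
            while (reader.next(key, value)) {
                // key = cluster ID, value = one clustered point
                System.out.println(key + "\t" + value);
            }
        } finally {
            reader.close();
        }
    }
}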

After several rounds of MapReduce, let's look at the results:

bin/mahout clusterdump \
  --seqFileDir output/clusters-4 \
  --pointsDir output/clusteredPoints \
  --output result.txt

In my run the algorithm converged after four iterations, hence clusters-4.

The figure above shows the results; in it, c is the cluster center and r is the radius.

Feedback from experts is welcome; introductions to other tools and algorithms will follow.

Reposted from: 博客园 (cnblogs)

Building a book recommendation system with Mahout 0.8 (dataguru Mahout week 2 homework)

Written assignment
1. Use Maven to set up a Mahout development environment and complete the simplest example on page 26 of the slides. Process notes and screenshots are required.

1.1 Development environment

– Win7 64bit
– Java 1.7.0_51
– Maven 3.2.1
– MyEclipse 2013 SR
– Mahout 0.8
– Hadoop 2.2.0

1.2 Building the Mahout development environment with Maven

1.2.1 Creating a standard Java project with Maven

D:\MyEclipse Professional\java>cd D:\MyEclipse Professional\myMahout

D:\MyEclipse Professional\myMahout>mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=org.conan.mymahout -DartifactId=myMahout -DpackageName=org.conan.mymahout -Dversion=1.0-SNAPSHOT -DinteractiveMode=false

[INFO] Scanning for projects...
[INFO]
[INFO] Using the builder org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder with a thread count of 1
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom >>>
[INFO]
[INFO] <<< maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom <<<
[INFO]
[INFO] --- maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Batch mode
[INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven.archetypes:maven-archetype-quickstart:1.0)
[INFO] ------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Old (1.x) Archetype: maven-archetype-quickstart:1.0
[INFO] ------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: org.conan.mymahout
[INFO] Parameter: packageName, Value: org.conan.mymahout
[INFO] Parameter: package, Value: org.conan.mymahout
[INFO] Parameter: artifactId, Value: myMahout
[INFO] Parameter: basedir, Value: D:\MyEclipse Professional\myMahout
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] project created from Old (1.x) Archetype in dir: D:\MyEclipse Professional\myMahout\myMahout
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:29 min
[INFO] Finished at: 2014-03-10T21:12:36+08:00
[INFO] Final Memory: 16M/108M
[INFO] ------------------------------------------------------------------------

1.2.2 Importing the project into Eclipse

1.2.3 Adding the Mahout dependency: modify pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.conan.mymahout</groupId>
    <artifactId>myMahout</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>myMahout</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <mahout.version>0.8</mahout.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.mahout</groupId>
            <artifactId>mahout-core</artifactId>
            <version>${mahout.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.mahout</groupId>
            <artifactId>mahout-integration</artifactId>
            <version>${mahout.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.mortbay.jetty</groupId>
                    <artifactId>jetty</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.cassandra</groupId>
                    <artifactId>cassandra-all</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>me.prettyprint</groupId>
                    <artifactId>hector-core</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</project>

1.2.4 Downloading the dependencies

D:\MyEclipse Professional\myMahout\myMahout>mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] Using the builder org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder with a thread count of 1
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building myMahout 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ myMahout ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ myMahout ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory D:\MyEclipse Professional\myMahout\myMahout\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ myMahout ---
[INFO] Compiling 1 source file to D:\MyEclipse Professional\myMahout\myMahout\target\classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ myMahout ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory D:\MyEclipse Professional\myMahout\myMahout\src\test\resources
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ myMahout ---
[INFO] Compiling 1 source file to D:\MyEclipse Professional\myMahout\myMahout\target\test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ myMahout ---
[INFO] Surefire report directory: D:\MyEclipse Professional\myMahout\myMahout\target\surefire-reports
Downloading: http://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-junit4/2.12.4/surefire-junit4-2.12.4.pom
Downloaded: http://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-junit4/2.12.4/surefire-junit4-2.12.4.pom (3 KB at 0.5 KB/sec)
Downloading: http://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-providers/2.12.4/surefire-providers-2.12.4.pom
Downloaded: http://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-providers/2.12.4/surefire-providers-2.12.4.pom (3 KB at 3.1 KB/sec)
Downloading: http://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-junit4/2.12.4/surefire-junit4-2.12.4.jar
Downloaded: http://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-junit4/2.12.4/surefire-junit4-2.12.4.jar (37 KB at 16.2 KB/sec)

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.conan.mymahout.AppTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ myMahout ---
[INFO] Building jar: D:\MyEclipse Professional\myMahout\myMahout\target\myMahout-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ myMahout ---
[INFO] Installing D:\MyEclipse Professional\myMahout\myMahout\target\myMahout-1.0-SNAPSHOT.jar to C:\Users\Administrator\.m2\repository\org\conan\mymahout\myMahout\1.0-SNAPSHOT\myMahout-1.0-SNAPSHOT.jar
[INFO] Installing D:\MyEclipse Professional\myMahout\myMahout\pom.xml to C:\Users\Administrator\.m2\repository\org\conan\mymahout\myMahout\1.0-SNAPSHOT\myMahout-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.173 s
[INFO] Finished at: 2014-03-10T21:28:56+08:00
[INFO] Final Memory: 24M/178M
[INFO] ------------------------------------------------------------------------

D:\MyEclipse Professional\myMahout\myMahout>

Refresh the project in Eclipse:


1.3 Implementing user-based collaborative filtering (UserCF) with Mahout
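The original post showed this step only as a screenshot. As a stand-in, here is a minimal sketch of user-based CF with the Mahout 0.8 Taste API (the data file path, the neighborhood size of 2, and user ID 1 are assumptions for illustration, not the assignment's actual values):

import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.EuclideanDistanceSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class UserCF {
    public static void main(String[] args) throws Exception {
        // Assumed CSV layout: userID,itemID,preference
        DataModel model = new FileDataModel(new File("datafile/item.csv"));
        UserSimilarity similarity = new EuclideanDistanceSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
        // Top-3 recommendations for user 1
        List<RecommendedItem> items = recommender.recommend(1, 3);
        for (RecommendedItem item : items) {
            System.out.println(item);
        }
    }
}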


2. Using the case-study dataset and Mahout, choose any algorithm, generate collaborative filtering recommendations for any female user, and explain whether the recommendations are reasonable; the explanation may be written up as a separate document.

Console output (only part of the results is shown):

userEuclidean       => uid:163,(279,5.500000)
itemEuclidean       => uid:163,(374,9.454545)(264,9.000000)(852,8.927536)
userEuclideanNoPref => uid:163,(279,2.000000)(2,1.000000)(415,1.000000)
itemEuclideanNoPref => uid:163,(138,5.150000)(246,4.092857)(288,3.833333)

Looking at the recommendations for user uid=163: book 138 was recommended. Now let's see which users gave book 138 high ratings:

userid  bookid  score  sex  age
152     138     8      F    26
172     138     4      F    56

Among them, user 152 also rated book 973 very highly.

userid  bookid  score  sex  age
152     973     8      F    26
163     973     9      F    32

So the recommendation is reasonable.


3. Building on question 2, add a filter that excludes male users, keeping only female users' ratings, and then generate recommendations and explain whether the results are reasonable. Code, screenshots of the run, documentation of the code, and a written explanation of the results are required.

package org.conan.mymahout.recommendation.book;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.impl.common.LongPrimitiveIterator;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.IDRescorer;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;

public class BookFilterGenderResult {

    final static int NEIGHBORHOOD_NUM = 2;
    final static int RECOMMENDER_NUM = 3;

    public static void main(String[] args) throws TasteException, IOException {
        String file = "datafile/book/rating.csv";
        DataModel dataModel = RecommendFactory.buildDataModel(file);
        RecommenderBuilder rb1 = BookEvaluator.userEuclidean(dataModel);
        RecommenderBuilder rb2 = BookEvaluator.itemEuclidean(dataModel);
        RecommenderBuilder rb3 = BookEvaluator.userEuclideanNoPref(dataModel);
        RecommenderBuilder rb4 = BookEvaluator.itemEuclideanNoPref(dataModel);

        long uid = 152;
        System.out.print("userEuclidean       =>");
        filterGender(uid, rb1, dataModel);
        System.out.print("itemEuclidean       =>");
        filterGender(uid, rb2, dataModel);
        System.out.print("userEuclideanNoPref =>");
        filterGender(uid, rb3, dataModel);
        System.out.print("itemEuclideanNoPref =>");
        filterGender(uid, rb4, dataModel);
    }

    /**
     * Filter recommendations by user gender.
     */
    public static void filterGender(long uid, RecommenderBuilder recommenderBuilder, DataModel dataModel) throws TasteException, IOException {
        //Set<Long> userids = getMale("datafile/book/user.csv");
        Set<Long> userids = getFeMale("datafile/book/user.csv");

        // Collect the books that the female users have rated
        Set<Long> bookids = new HashSet<Long>();
        for (long uids : userids) {
            LongPrimitiveIterator iter = dataModel.getItemIDsFromUser(uids).iterator();
            while (iter.hasNext()) {
                long bookid = iter.next();
                bookids.add(bookid);
            }
        }

        IDRescorer rescorer = new FilterRescorer(bookids);
        List<RecommendedItem> list = recommenderBuilder.buildRecommender(dataModel).recommend(uid, RECOMMENDER_NUM, rescorer);
        RecommendFactory.showItems(uid, list, false);
    }

    /**
     * Get the IDs of male users.
     */
    public static Set<Long> getMale(String file) throws IOException {
        BufferedReader br = new BufferedReader(new FileReader(new File(file)));
        Set<Long> userids = new HashSet<Long>();
        String s = null;
        while ((s = br.readLine()) != null) {
            String[] cols = s.split(",");
            if (cols[1].equals("M")) { // male user
                userids.add(Long.parseLong(cols[0]));
            }
        }
        br.close();
        return userids;
    }

    /**
     * Get the IDs of female users.
     */
    public static Set<Long> getFeMale(String file) throws IOException {
        BufferedReader br = new BufferedReader(new FileReader(new File(file)));
        Set<Long> userids = new HashSet<Long>();
        String s = null;
        while ((s = br.readLine()) != null) {
            String[] cols = s.split(",");
            if (cols[1].equals("F")) { // female user
                userids.add(Long.parseLong(cols[0]));
            }
        }
        br.close();
        return userids;
    }
}

/**
 * Rescore the results: items outside the allowed set are filtered out.
 */
class FilterRescorer implements IDRescorer {

    // Despite the name, this set holds the allowed item (book) IDs
    final private Set<Long> userids;

    public FilterRescorer(Set<Long> userids) {
        this.userids = userids;
    }

    @Override
    public double rescore(long id, double originalScore) {
        // Returning NaN tells the recommender to drop the item entirely
        return isFiltered(id) ? Double.NaN : originalScore;
    }

    @Override
    public boolean isFiltered(long id) {
        return !userids.contains(id);
    }
}

Run results:

userEuclidean
AVERAGE_ABSOLUTE_DIFFERENCE Evaluater Score: 0.11111108462015788
Recommender IR Evaluator: [Precision: 0.3010752688172043, Recall: 0.08542713567839195]

itemEuclidean
AVERAGE_ABSOLUTE_DIFFERENCE Evaluater Score: 1.3536954060693203
Recommender IR Evaluator: [Precision: 0.0, Recall: 0.0]

userEuclideanNoPref
AVERAGE_ABSOLUTE_DIFFERENCE Evaluater Score: 4.61812258478421
Recommender IR Evaluator: [Precision: 0.09045226130653267, Recall: 0.09296482412060306]

itemEuclideanNoPref
AVERAGE_ABSOLUTE_DIFFERENCE Evaluater Score: 2.625455679766278
Recommender IR Evaluator: [Precision: 0.6005025125628134, Recall: 0.6055276381909548]

userEuclidean       => uid:99,
itemEuclidean       => uid:99,(586,10.000000)(378,10.000000)(202,9.666667)
userEuclideanNoPref => uid:99,(616,1.000000)(307,1.000000)(552,1.000000)
itemEuclideanNoPref => uid:99,(96,3.392724)(860,3.250000)(375,3.200000)

Let's analyze the results of the itemEuclideanNoPref algorithm.

The top-ranked recommendation is book 96. Drilling down one step further: which users rated book 96 highly?

userid  bookid  score  sex  age
73      96      8      F    28
79      96      7      F    32
117     96      10     F    34
163     96      8      F    32

All of these users are female, and among them user 117 also rated book 106 highly.

userid  bookid  score  sex  age
99      106     10     F    37
117     106     7      F    34

So the recommendation is reasonable.

Reposted from: CSDN blog

Developing distributed Mahout programs: item-based collaborative filtering (ItemCF) – Django's blog

http://blog.fens.me/hadoop-mahout-mapreduce-itemcf/

The Hadoop family series introduces Hadoop family products. Commonly covered projects include Hadoop, Hive, Pig, HBase, Sqoop, Mahout, Zookeeper, Avro, Ambari, and Chukwa; newly added projects include YARN, Hcatalog, Oozie, Cassandra, Hama, Whirr, Flume, Bigtop, Crunch, Hue, and others.

Since 2011, China has entered a turbulent era of big data, with the Hadoop family of software dominating the big data processing landscape. Open source projects and vendors alike, virtually every piece of data software, have gravitated toward Hadoop. Hadoop has gone from a niche, elite domain to the standard for big data development. On top of the original Hadoop technology, a family of Hadoop products has emerged, continually innovating around the "big data" concept and pushing technical progress forward.

As developers in the IT industry, we too should keep pace, seize the opportunity, and rise together with Hadoop!

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo: @Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/hadoop-mahout-mapreduce-itemcf/

(Figure: mahout-hadoop-itemcf)

Preface

Mahout is a member of the Hadoop family, and by lineage it inherits the traits of Hadoop programs: it supports HDFS access and distributed MapReduce algorithms. As Mahout evolved, it underwent a major upgrade starting with version 0.7: single-machine in-memory computation was removed for some algorithms, leaving only Hadoop-based parallel MapReduce computation.

This shows Mahout's determination to move toward big data and stick with parallelization. Within the larger Hadoop framework, Mahout may well become a star big data product!

Contents

  1. The Mahout development environment
  2. Mahout's distributed environment on Hadoop
  3. Implementing ItemCF collaborative filtering with Mahout
  4. Uploading the template project to GitHub

1. The Mahout development environment

In the article "Building a Mahout project with Maven", we already configured a Maven-based Mahout development environment; here we continue with distributed Mahout program development.

This article uses Mahout version 0.8.

Development environment:

  • Win7 64bit
  • Java 1.6.0_45
  • Maven 3
  • Eclipse Juno Service Release 2
  • Mahout 0.8
  • Hadoop 1.1.2

Find pom.xml and change the Mahout version to 0.8:

<mahout.version>0.8</mahout.version>

Then, download the dependency libraries:

~ mvn clean install

Because the class org.conan.mymahout.cluster06.Kmeans.java is based on mahout-0.6, it will produce compile errors; we can comment out that file for now.

2. Mahout's distributed environment on Hadoop

(Figure: hadoop-mahout-cluster-dev)

As the figure above shows, we can develop either on Win7 or on Linux, and we can debug locally during development; the standard tools in both cases are Maven and Eclipse.

When Mahout runs, it automatically deploys its MapReduce algorithm packages into the Hadoop cluster environment; this development and execution model is very close to a real production environment.

3. Implementing ItemCF collaborative filtering with Mahout

Implementation steps:

  • 1. Prepare the data file: item.csv
  • 2. Java program: HdfsDAO.java
  • 3. Java program: ItemCFHadoop.java
  • 4. Run the program
  • 5. Interpret the recommendation results

1). Prepare the data file: item.csv
Upload the test data to HDFS. For the single-machine, in-memory version of this experiment, see the article: Building a Mahout project with Maven

~ hadoop fs -mkdir /user/hdfs/userCF
~ hadoop fs -copyFromLocal /home/conan/datafiles/item.csv /user/hdfs/userCF
~ hadoop fs -cat /user/hdfs/userCF/item.csv
1,101,5.0
1,102,3.0
1,103,2.5
2,101,2.0
2,102,2.5
2,103,5.0
2,104,2.0
3,101,2.5
3,104,4.0
3,105,4.5
3,107,5.0
4,101,5.0
4,103,3.0
4,104,4.5
4,106,4.0
5,101,4.0
5,102,3.0
5,103,2.0
5,104,4.0
5,105,3.5
5,106,4.0

2). Java program: HdfsDAO.java
HdfsDAO.java is an HDFS utility class that implements the various HDFS commands through the Hadoop API; see the article: Calling HDFS from Hadoop programs

We will use a few of the methods in the HdfsDAO.java class:

HdfsDAO hdfs = new HdfsDAO(HDFS, conf);
hdfs.rmr(inPath);
hdfs.mkdirs(inPath);
hdfs.copyFile(localFile, inPath);
hdfs.ls(inPath);
hdfs.cat(inFile);
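HdfsDAO itself is defined in the referenced article. As a rough sketch of what the methods used above do (assuming the Hadoop 1.x FileSystem API; this is an illustration, not the original class), each one wraps a shell-equivalent call:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsDAOSketch {
    private final Configuration conf;

    public HdfsDAOSketch(Configuration conf) {
        this.conf = conf;
    }

    public void rmr(String path) throws IOException {        // like: hadoop fs -rmr
        FileSystem.get(conf).delete(new Path(path), true);
    }

    public void mkdirs(String path) throws IOException {     // like: hadoop fs -mkdir
        FileSystem.get(conf).mkdirs(new Path(path));
    }

    public void copyFile(String local, String remote) throws IOException { // like: -copyFromLocal
        FileSystem.get(conf).copyFromLocalFile(new Path(local), new Path(remote));
    }

    public void ls(String folder) throws IOException {       // like: hadoop fs -ls
        for (FileStatus f : FileSystem.get(conf).listStatus(new Path(folder))) {
            System.out.println(f.getPath() + ", folder: " + f.isDir() + ", size: " + f.getLen());
        }
    }

    public void cat(String file) throws IOException {        // like: hadoop fs -cat
        FSDataInputStream in = FileSystem.get(conf).open(new Path(file));
        IOUtils.copyBytes(in, System.out, 4096, true);
    }
}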

3). Java program: ItemCFHadoop.java
To implement the distributed algorithm with Mahout, we follow the explanation in Mahout in Action.

(Figure: aglorithm_2)

The implementation:

package org.conan.mymahout.recommendation;

import org.apache.hadoop.mapred.JobConf;
import org.apache.mahout.cf.taste.hadoop.item.RecommenderJob;
import org.conan.mymahout.hdfs.HdfsDAO;

public class ItemCFHadoop {

    private static final String HDFS = "hdfs://192.168.1.210:9000";

    public static void main(String[] args) throws Exception {
        String localFile = "datafile/item.csv";
        String inPath = HDFS + "/user/hdfs/userCF";
        String inFile = inPath + "/item.csv";
        String outPath = HDFS + "/user/hdfs/userCF/result/";
        String outFile = outPath + "/part-r-00000";
        String tmpPath = HDFS + "/tmp/" + System.currentTimeMillis();

        JobConf conf = config();
        HdfsDAO hdfs = new HdfsDAO(HDFS, conf);
        hdfs.rmr(inPath);
        hdfs.mkdirs(inPath);
        hdfs.copyFile(localFile, inPath);
        hdfs.ls(inPath);
        hdfs.cat(inFile);

        StringBuilder sb = new StringBuilder();
        sb.append("--input ").append(inPath);
        sb.append(" --output ").append(outPath);
        sb.append(" --booleanData true");
        sb.append(" --similarityClassname org.apache.mahout.math.hadoop.similarity.cooccurrence.measures.EuclideanDistanceSimilarity");
        sb.append(" --tempDir ").append(tmpPath);
        args = sb.toString().split(" ");

        RecommenderJob job = new RecommenderJob();
        job.setConf(conf);
        job.run(args);

        hdfs.cat(outFile);
    }

    public static JobConf config() {
        JobConf conf = new JobConf(ItemCFHadoop.class);
        conf.setJobName("ItemCFHadoop");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        return conf;
    }
}

RecommenderJob.java essentially encapsulates the execution of the entire distributed, parallel algorithm shown in the figure above. Without this layer of encapsulation, we would have to implement the eight MapReduce steps in the figure ourselves.

For a deeper analysis of the algorithm above, see the article: Implementing the MapReduce collaborative filtering algorithm in R

4). Run the program
Console output:

Delete: hdfs://192.168.1.210:9000/user/hdfs/userCF
Create: hdfs://192.168.1.210:9000/user/hdfs/userCF
copy from: datafile/item.csv to hdfs://192.168.1.210:9000/user/hdfs/userCF
ls: hdfs://192.168.1.210:9000/user/hdfs/userCF
==========================================================
name: hdfs://192.168.1.210:9000/user/hdfs/userCF/item.csv, folder: false, size: 229
==========================================================
cat: hdfs://192.168.1.210:9000/user/hdfs/userCF/item.csv
1,101,5.0
1,102,3.0
1,103,2.5
2,101,2.0
2,102,2.5
2,103,5.0
2,104,2.0
3,101,2.5
3,104,4.0
3,105,4.5
3,107,5.0
4,101,5.0
4,103,3.0
4,104,4.5
4,106,4.0
5,101,4.0
5,102,3.0
5,103,2.0
5,104,4.0
5,105,3.5
5,106,4.0
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2013-10-14 10:26:35 org.apache.hadoop.util.NativeCodeLoader
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-10-14 10:26:35 org.apache.hadoop.io.compress.snappy.LoadSnappy
WARNING: Snappy native library not loaded

[The run then executes eight local MapReduce stages, job_local_0001 through job_local_0008, each ending with "map 100% reduce 100%" and a dump of its counters. Their outputs are saved under hdfs://192.168.1.210:9000/tmp/1381717594500/: preparePreferenceMatrix/itemIDIndex, preparePreferenceMatrix/userVectors, preparePreferenceMatrix/ratingMatrix, weights, pairwiseSimilarity, similarityMatrix, and partialMultiply, with the final stage writing to /user/hdfs/userCF/result.]

cat: hdfs://192.168.1.210:9000/user/hdfs/userCF/result//part-r-00000
1	[104:1.280239,106:1.1462644,105:1.0653841,107:0.33333334]
2	[106:1.560478,105:1.4795978,107:0.69935876]
3	[103:1.2475469,106:1.1944525,102:1.1462644]
4	[102:1.6462644,105:1.5277859,107:0.69935876]
5	[107:1.1993587]

5). Interpreting the recommendation results
The log above can be broken down into three parts:

  • a. Environment initialization
  • b. Algorithm execution
  • c. Printing the recommendation results

a. Environment initialization
Initialize the HDFS data and working directories and upload the data file (a sketch of the equivalent FileSystem API calls follows the output below).

Delete: hdfs://192.168.1.210:9000/user/hdfs/userCF
Create: hdfs://192.168.1.210:9000/user/hdfs/userCF
copy from: datafile/item.csv to hdfs://192.168.1.210:9000/user/hdfs/userCF
ls: hdfs://192.168.1.210:9000/user/hdfs/userCF
==========================================================
name: hdfs://192.168.1.210:9000/user/hdfs/userCF/item.csv, folder: false, size: 229
==========================================================
cat: hdfs://192.168.1.210:9000/user/hdfs/userCF/item.csv
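The original post drives these operations through a helper class of its own. As a rough sketch only, built on the standard Hadoop FileSystem API rather than the post's actual helper (the class name HdfsPrep is hypothetical), the same sequence looks like this:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsPrep {
    public static void main(String[] args) throws Exception {
        String hdfs = "hdfs://192.168.1.210:9000";           // NameNode from the log above
        FileSystem fs = FileSystem.get(URI.create(hdfs), new Configuration());
        Path dir = new Path(hdfs + "/user/hdfs/userCF");

        fs.delete(dir, true);                                // Delete: .../userCF
        fs.mkdirs(dir);                                      // Create: .../userCF
        fs.copyFromLocalFile(new Path("datafile/item.csv"),  // copy from: datafile/item.csv
                new Path(dir, "item.csv"));
        for (FileStatus s : fs.listStatus(dir)) {            // ls: .../userCF
            System.out.println("name: " + s.getPath()
                    + ", folder: " + s.isDir() + ", size: " + s.getLen());
        }
        IOUtils.copyBytes(fs.open(new Path(dir, "item.csv")),// cat: .../item.csv
                System.out, 4096, false);
    }
}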

b. Algorithm execution
Run, in turn, the eight MapReduce jobs of the workflow shown in the figure above (a sketch of how the whole pipeline is launched follows the job list below).
Job complete: job_local_0001
Job complete: job_local_0002
Job complete: job_local_0003
Job complete: job_local_0004
Job complete: job_local_0005
Job complete: job_local_0006
Job complete: job_local_0007
Job complete: job_local_0008
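These eight jobs are chained together internally by Mahout's distributed item-based recommender; you launch the pipeline once rather than submitting them one by one. A minimal sketch, assuming the HDFS paths used above and an arbitrarily chosen similarity measure (not necessarily the one the article used):

import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.cf.taste.hadoop.item.RecommenderJob;

public class ItemCFRunner {
    public static void main(String[] args) throws Exception {
        // one call drives all eight MapReduce jobs of the itemCF workflow
        ToolRunner.run(new RecommenderJob(), new String[] {
                "--input",  "hdfs://192.168.1.210:9000/user/hdfs/userCF",
                "--output", "hdfs://192.168.1.210:9000/user/hdfs/userCF/result",
                "--similarityClassname", "SIMILARITY_EUCLIDEAN_DISTANCE", // an assumption; pick per your data
                "--tempDir", "hdfs://192.168.1.210:9000/tmp"
        });
    }
}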

c. Printing the recommendation results

This lets us inspect the computed recommendations (a small parser for these lines is sketched below the output).

cat: hdfs://192.168.1.210:9000/user/hdfs/userCF/result//part-r-00000
1	[104:1.280239,106:1.1462644,105:1.0653841,107:0.33333334]
2	[106:1.560478,105:1.4795978,107:0.69935876]
3	[103:1.2475469,106:1.1944525,102:1.1462644]
4	[102:1.6462644,105:1.5277859,107:0.69935876]
5	[107:1.1993587]
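Each line is a user ID, a tab, and a bracketed list of itemID:score pairs ordered by predicted preference. A small, hypothetical parser for this text format (it assumes nothing beyond the layout shown above):

import java.util.LinkedHashMap;
import java.util.Map;

public class ResultLineParser {
    // parses e.g. "1\t[104:1.280239,106:1.1462644,...]"
    public static Map<Long, Float> parse(String line) {
        String[] parts = line.split("\t");
        long userId = Long.parseLong(parts[0]);
        String body = parts[1].substring(1, parts[1].length() - 1); // strip [ and ]
        Map<Long, Float> prefs = new LinkedHashMap<Long, Float>();  // keeps score order
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":");
            prefs.put(Long.parseLong(kv[0]), Float.parseFloat(kv[1]));
        }
        System.out.println("user " + userId + " -> " + prefs);
        return prefs;
    }
}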

4. Template project uploaded to GitHub

https://github.com/bsspirit/maven_mahout_template/tree/mahout-0.8

You can clone this project and use it as a starting point for your own development.

~ git clone https://github.com/bsspirit/maven_mahout_template
~ git checkout mahout-0.8

We have now completed the distributed implementation of item-based collaborative filtering. Next we will walk through Mahout's distributed k-means implementation; see the article: Mahout分步式程序开发 聚类Kmeans

Please credit the source when reposting:
http://blog.fens.me/hadoop-mahout-mapreduce-itemcf/

Reposted from: 博客园-所有随笔区

Computing user similarity with Mahout recommendations on Win7 – Django's blog

Half a year ago I wanted to build something around recommender systems and discovered the powerful Apache Mahout. I then went through installing Linux, Hadoop, Apache, Mahout, and Taste; the result, after a week of effort, was failure.
I simply couldn't handle Linux and eventually gave up. But recently I needed to compute user similarities, and I really like Mahout being open source, so I bit the bullet and used Win7 + Eclipse + Maven + Mahout 0.8, which meant downloading a lot of things.
Website: the Mahout download site
With everything downloaded, installation became the next problem. I already had Maven 3.1 installed on my machine,
but importing into Eclipse via Maven still caused problems (red error marks everywhere). A few useful links:
The problems I ran into:
1. mvn install could not download dependencies. Fix: configure a proxy. The cause is that my university network apparently blocks some sites; see http://www.cnblogs.com/chenying99/archive/2013/06/09/3127930.html for details. I used the third method described there; it is reproduced below.
2. The simplest example: http://blog.csdn.net/aidayei/article/details/6626699
Following that method you can use Mahout directly, but I could not figure out how to attach the Javadoc (i.e., browse the source), so I eventually dropped it.
3. Log-likelihood similarity. The pages at http://www.oschina.net/question/780962_125354
http://www.cnblogs.com/dlts26/archive/2012/06/20/2555772.html
and http://www.oschina.net/question/780962_130015
each describe the various similarity measures. My data records only whether a user chose to participate or not, so log-likelihood similarity should be the best fit (a boolean-preference variant is sketched after the code below).
The three sites above explain log-likelihood similarity.
4. Code:
import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;
import org.junit.Test;

  @Test
  public void testCorrelationUserSimilarity() throws Exception {
      // Steps: 1. build the data model  2. compute similarities
      //        3. find the k nearest neighbors  4. build the recommender
      // JVM options if memory is tight: -Xms1024m -Xmx1024m
      DataModel model = new FileDataModel(new File("./data/core_userNo_eventNo.txt")); // the file name must be an absolute path
      UserSimilarity similarityLog = new LogLikelihoodSimilarity(model);
      for (int i = 1; i <= 500; i++) {
          for (int j = i + 1; j <= 500; j++) {
              System.out.println("similarity of users " + i + " and " + j + ": "
                      + similarityLog.userSimilarity(i, j));
          }
      }

      UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarityLog, model);
      Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarityLog);
      List<RecommendedItem> recommendations = recommender.recommend(1, 2); // recommend two item IDs for user 1
      for (RecommendedItem recommendation : recommendations) {
          System.out.println(recommendation);
      }
  }
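Since the data here is binary (a user either joined an event or did not), the preference values carry no information. A sketch, assuming the same input file, of making that explicit with Taste's boolean-preference classes (GenericBooleanPrefDataModel and GenericBooleanPrefUserBasedRecommender):

import java.io.File;
import org.apache.mahout.cf.taste.impl.model.GenericBooleanPrefDataModel;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericBooleanPrefUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class BooleanPrefDemo {
    public static void main(String[] args) throws Exception {
        // keep only "who did what"; any rating column in the file is discarded
        DataModel raw = new FileDataModel(new File("./data/core_userNo_eventNo.txt"));
        DataModel model = new GenericBooleanPrefDataModel(
                GenericBooleanPrefDataModel.toDataMap(raw));
        UserSimilarity similarity = new LogLikelihoodSimilarity(model); // ignores preference values anyway
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarity, model);
        Recommender recommender =
                new GenericBooleanPrefUserBasedRecommender(model, neighborhood, similarity);
        System.out.println(recommender.recommend(1, 2)); // two items for user 1
    }
}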
5. Results:

The similarities either come out as NaN or are still quite large; to interpret them you really need to understand the underlying theory.

----------------------------------------------------
The third method: configuring an HTTP proxy
Sometimes your company, for security reasons, requires you to go through an authenticated proxy to reach the Internet, or blocks direct access. In that case you must configure an HTTP proxy for Maven so that it can reach external repositories and download the resources it needs. First confirm that you really cannot reach the public Maven central repository directly: run ping repo1.maven.org to check the network. If you do need a proxy, check that the proxy server itself is reachable. For example, for a proxy at IP 218.14.227.197, port 3128, run telnet 218.14.227.197 3128 to test whether that port is open. If you get an error, obtain the correct proxy settings first; if the telnet connection succeeds, press Ctrl+], then q and Enter to quit. After checking, edit ~/.m2/settings.xml (if the file does not exist, copy it from $M2_HOME/conf/settings.xml) and add a proxy configuration as follows:
<settings>
  ...
  <proxies>
    <proxy>
      <id>my-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>218.14.227.197</host>
      <port>3128</port>
      <!--
      <username>***</username>
      <password>***</password>
      <nonProxyHosts>repository.mycom.com|*.google.com</nonProxyHosts>
      -->
    </proxy>
  </proxies>
  ...
</settings>
This configuration is straightforward. The proxies element can contain multiple proxy elements; if you declare several, the first active one takes effect by default. Here we declare a proxy with id my-proxy; active set to true enables it, and protocol says the proxy speaks HTTP. Most importantly, set the correct host name (the host element) and port (the port element). In the XML above, the username, password, and nonProxyHosts elements are commented out: when your proxy requires authentication, fill in username and password. nonProxyHosts lists hosts that should not go through the proxy, separated by the | character; wildcards are supported, e.g. *.google.com matches every domain ending in google.com.
6. Attaching the Javadoc: download mahout-distribution-0.8-src.zip (about 12 MB) and attach it as the Javadoc/source archive. Done.

Reposted from: 博客园-所有随笔区

Implementing collaborative filtering with Taste on Win7 – Django's blog

To use Taste, the prerequisites are:

1) JDK, version 1.6. Note that because the build is Eclipse-based, you should define the JAVA_HOME variable before setting the path value.

2) Maven, version 2.0.11 or later, with the Maven plugin (m2eclipse) installed in Eclipse.

3) Apache Mahout, version 0.5.

The installation steps from the Apache Mahout - Taste documentation:


4. Demo
To build and run the demo, follow the instructions below, which are written for Unix-like operating systems:
1. Obtain a copy of the Mahout distribution, either from SVN or as a downloaded archive.
2. Download the "1 Million MovieLens Dataset" from http://www.grouplens.org/.
3. Unpack the archive and copy movies.dat and ratings.dat to trunk/taste-web/src/main/resources/org/apache/mahout/cf/taste/example/ under the Mahout distribution directory.
4. Navigate to the directory where you unpacked the Mahout distribution, and navigate to trunk.
5. Run mvn install, which builds and installs Mahout core to your local repository.
6. cd taste-web
7. cp ../examples/target/grouplens.jar ./lib
8. Edit recommender.properties and fill in the recommender.class:
recommender.class=org.apache.mahout.cf.taste.example.grouplens.GroupLensRecommender
9. mvn package
10. mvn jetty:run-war. You may need to give Maven more memory: in a bash shell,
export MAVEN_OPTS=-Xmx1024M
11. Get recommendations by accessing the web application in your browser:
http://localhost:8080/RecommenderServlet?userID=1
This will produce a simple preference-item ID list which could be consumed by a client application. Get more useful human-readable output with the debug parameter:
http://localhost:8080/RecommenderServlet?userID=1&debug=true
Incidentally, Taste's web service interface may then be found at:
http://localhost:8080/RecommenderService.jws
Its WSDL file will be here...
http://localhost:8080/RecommenderService.jws?wsdl
... and you can even access it in your browser via a simple HTTP request:
.../RecommenderService.jws?method=recommend&userID=1&howMany=10

I. Installing Maven on Windows

With the constant appearance of new Java frameworks such as Struts, Spring, and Hibernate, the growing number of project configuration files causes developers a lot of trouble. In practice, a MyEclipse project keeps growing and depends on more and more third-party JARs, which makes the project bloated and hard to manage, especially in large projects. To address this, the Apache open-source community released Maven, which is well suited to large Java projects.

For an introduction to Maven, see 《Maven权威指南》 (Maven: The Definitive Guide), available at http://www.juvenxu.com/mvn-def-guide/

Installation steps:

1. Download the package from http://maven.apache.org/download.html

2. Unpack it and add its bin directory to the Windows Path environment variable. Maven depends on the JDK, so install the JDK first and configure it in the environment variables.

2.1 Set JAVA_HOME (as the name suggests, the Java installation path). Then edit Path; this variable is what lets the system recognize the java command from any directory, so append ".;%JAVA_HOME%\bin" to it.

2.2 Create a new variable M2_HOME with value E:\maven\apache-maven-2.2.1 (note: without the bin directory), then append ;%M2_HOME%\bin to Path (this time including bin).

3. Test the installation: Start -> Run -> cmd -> mvn -version

Note: if mvn is reported as not an internal or external command, you probably overwrote an existing value when editing Path; just append %SystemRoot%\system32; to Path.

4. Installing the Maven plugin in Eclipse: http://she.iteye.com/blog/1217812 http://www.cnblogs.com/freeliver54/archive/2011/09/07/2169527.html

5. Managing Eclipse plugins with links: http://blog.csdn.net/cfyme/article/details/6099056/

II. Setting up the Apache Mahout environment on Windows

Apache Mahout is an open-source project of the Apache Software Foundation (ASF) whose main goal is to create scalable machine-learning algorithms that developers can use freely under the Apache license. The project is now in its second year and currently has one public release. Mahout contains many implementations, including clustering, classification, collaborative filtering, and evolutionary programming.

For details, see:

1. An introduction to Apache Mahout: http://www.ibm.com/developerworks/cn/java/j-mahout/

2. Maven 2.0: compile, test, deploy, run: http://www.ideagrace.com/html/doc/2006/06/14/00847.html

Building:

1. Building a social recommendation engine with Apache Mahout: http://www.ibm.com/developerworks/cn/java/j-lo-mahout/

This post grew out of that article, so what it implements is exactly "installing Taste and a simple demo".

2. Setting up the Mahout environment with mvn: http://anqiang1900.blog.163.com/blog/static/1141888642010380255296/

In short: download the Mahout source from the official site, then cd to its root directory in a DOS shell and run mvn install.

3. Building Mahout in Eclipse: http://www.cnblogs.com/dlts26/archive/2011/09/13/2174889.html

That is, import the Mahout source into Eclipse as a Maven project, then run mvn install in the mahout directory (if you did not already do so in the previous step).

III. Running the Taste Webapp example in Apache Mahout

Taste is an efficient implementation of collaborative filtering provided by Apache Mahout: a scalable, high-performance recommendation engine written in Java.

1. Edit the pom.xml of the mahout-taste-webapp project and add a dependency on mahout-examples:
<dependency>  
    <groupId>${project.groupId}</groupId>  
    <artifactId>mahout-examples</artifactId>  
    <version>0.5</version>  
</dependency>  
2. In the mahout-taste-webapp project's recommender.properties, add:
recommender.class=org.apache.mahout.cf.taste.example.grouplens.GroupLensRecommender  
3. Download the data file from http://www.grouplens.org/node/73. I used the 1M Ratings Data Set (.tar.gz), which I verified works; please verify other data files yourself. After unpacking, copy ratings.dat into the mahout-taste-webapp project under /org/apache/mahout/cf/taste/example/grouplens/. Why this path? Have a look at the GroupLensDataModel class.
4. The preparation is now basically done; cd to taste-web and run:
mvn jetty:run-war  
5. Visit http://localhost:8080/RecommenderServlet?userID=1 to see the result. The servlet supports other parameters too; see the Javadoc of RecommenderServlet.

For details see http://seanhe.iteye.com/blog/1124682. A minimal Java client for this servlet is sketched below.
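Because the servlet returns a plain-text list, a client needs nothing more than java.net.URL to consume it; a minimal sketch against the URL used above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RecommenderClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/RecommenderServlet?userID=1");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // one preference/item-ID line per recommendation
        }
        in.close();
    }
}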

1. A problem when configuring Maven in Eclipse

When starting Eclipse, you may get a warning that the JDK cannot be found. Fix:
add the following two lines to eclipse.ini (-vm points to the location of javaw.exe; pointing directly at the bin directory also works):
-vm
D:\Development\Java\jdk1.5.0_16\bin\javaw.exe (note: these two lines go between -startup and -launcher.library)

2. Problems when building the Mahout environment on Windows:

2.1 Running "mvn install" in the mahout directory fails with the error

Cannot run program "chmod": CreateProcess error=2

chmod is a Linux command; this error comes from running Cygwin + Hadoop on Windows.

In other words, if you build Mahout on Windows, make sure Cygwin is installed correctly (installing Cygwin per the tutorials below is enough; the rest of the Hadoop configuration need not be completed!)

Here are a few good tutorials on installing a Hadoop cluster on Windows:

http://ebiquity.umbc.edu/Tutorials/Hadoop/00%20-%20Intro.html

http://hayesdavis.net/2008/06/14/running-hadoop-on-windows/

Download hadoop-0.19.1 at http://archive.apache.org/dist/hadoop/core/hadoop-0.19.1/

2.2 In Cygwin, running ssh localhost fails to connect with the error Connection closed by ::1

Cygwin cost me nearly xxxx hours; after combing through Chinese and foreign sources I finally solved it. Problem: on Win7, in Cygwin, the command ssh localhost fails with Connection closed by 127.0.0.1. Solution: 1. Start -> Run -> services.msc. 2. Right-click CYGWIN sshd -> Properties -> Log On tab -> select "This account" -> Browse -> Advanced -> Find Now -> pick your own account (it must have administrator rights) -> enter the password (required; an empty password is not accepted, and it must match your Windows login password) -> OK. 3. Restart the CYGWIN sshd service. The service now starts under your account, and ssh localhost then succeeds. One drawback of this approach is that you have to set a password on your machine.

Details: http://blog.sina.com.cn/s/blog_4abbf0ae0100r8hh.html

3. Problems when running the Taste Webapp

With Mahout configured in Eclipse, you can run the taste-webapp example in Mahout.

Steps 1 and 2 of that article were already covered above, so configuration of mahout-taste-webapp starts directly from step 3.

The problem:

After entering http://localhost:8080/RecommenderServlet?userID=1 in the browser, an error appears:

HTTP ERROR: 404

Problem accessing /RecommenderServlet. Reason:

Not Found

Powered by Jetty://

Looking carefully at step 7 (mvn jetty:run-war), there is an error:

WARN::FAILED taste-recommender: java.lang.OutOfMemoryError: Java heap space 

which means the Maven process ran out of memory.

Fix:

On Windows:

In the Maven installation, find the file %M2_HOME%\bin\mvn.bat, the script that launches Maven. In it you will see a commented line:

  @REM set MAVEN_OPTS=-Xdebug -Xnoagent -Djava.compiler=NONE…

It means you can set Maven options there, so add a line below the comment:
set MAVEN_OPTS=-Xmx1024M

Or, before running mvn jetty:run-war, execute:
F:\mahout-distribution-0.5\taste-web>set MAVEN_OPTS= -Xmx1024M

With the Maven option in effect, the OutOfMemoryError is resolved accordingly.

Reposted from: 博客园-所有随笔区

Building a standalone recommendation engine with Mahout (1) – 随风蔷薇

I recently read the first part of Mahout in Action, on recommender systems, and wrote a recommendation engine of my own following the book's examples.

First, the data: I used the GroupLens data recommended in the book, http://grouplens.org/datasets/movielens/

The downloaded data format is UserId ItemID rating timestamp.

Because FileDataModel's format does not have that last column, simply strip the timestamp first (a small conversion sketch follows).

Choosing a suitable recommendation algorithm
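A tiny conversion sketch (the file names u.data and sat.data here are assumptions; MovieLens u.data is tab-separated userID, itemID, rating, timestamp):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;

public class StripTimestamp {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader("u.data"));   // hypothetical input name
        PrintWriter out = new PrintWriter(new FileWriter("sat.data"));      // hypothetical output name
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = line.split("\t");
            out.println(f[0] + "\t" + f[1] + "\t" + f[2]); // drop the 4th (timestamp) column
        }
        out.close();
        in.close();
    }
}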

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.RMSRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.recommender.slopeone.SlopeOneRecommender;
import org.apache.mahout.cf.taste.impl.recommender.svd.ALSWRFactorizer;
import org.apache.mahout.cf.taste.impl.recommender.svd.SVDRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class Demo {

    // user-based CF: Pearson correlation with an N-nearest-neighbor neighborhood
    public void userbased(DataModel model, int n) throws TasteException {
        System.out.println("-----------------------------------------------------------------------------");
        final int N = n;
        RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
        RecommenderBuilder builder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
                UserNeighborhood neighborhood = new NearestNUserNeighborhood(N, similarity, model);
                return new GenericUserBasedRecommender(model, neighborhood, similarity);
            }
        };
        // 0.7 = train on 70% of each user's data; 1.0 = evaluate against all users
        double score = evaluator.evaluate(builder, null, model, 0.7, 1.0);
        System.out.println("UserBased " + N + "  score is" + score);
    }

    // item-based CF with Pearson correlation
    public void itembased(DataModel model) throws TasteException {
        System.out.println("-----------------------------------------------------------------------------");
        RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
        RecommenderBuilder builder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                ItemSimilarity similarity = new PearsonCorrelationSimilarity(model);
                return new GenericItemBasedRecommender(model, similarity);
            }
        };
        double score = evaluator.evaluate(builder, null, model, 0.7, 1.0);
        System.out.println("ItemBased score is " + score);
    }

    // slope-one recommender
    public void slope_one(DataModel model) throws TasteException {
        System.out.println("-----------------------------------------------------------------------------");
        RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
        RecommenderBuilder builder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                return new SlopeOneRecommender(model);
            }
        };
        double score = evaluator.evaluate(builder, null, model, 0.7, 1);
        System.out.println("Slope  one score is " + score);
    }

    // SVD recommender factorized by ALS-WR: 10 features, lambda 0.05, 10 iterations
    public void SVD(DataModel model) throws TasteException {
        System.out.println("-----------------------------------------------------------------------------");
        RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
        RecommenderBuilder builder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                return new SVDRecommender(model, new ALSWRFactorizer(model, 10, 0.05, 10));
            }
        };
        double score = evaluator.evaluate(builder, null, model, 0.7, 1.0);
        System.out.println("SVD score is " + score);
    }

    public static void main(String[] args) throws Exception {
        String filepath = args[0];
        DataModel model = new FileDataModel(new File(filepath));
        Demo demo = new Demo();
        demo.userbased(model, 2);
        demo.itembased(model);
        demo.slope_one(model);
        demo.SVD(model);
    }
}

Let's look at the run results:

 14/03/02 14:16:56 INFO file.FileDataModel: Creating FileDataModel for file /home/bianwenlong/ml-100k/ml-100k/sat.data
14/03/02 14:16:56 INFO file.FileDataModel: Reading file info…
14/03/02 14:16:56 INFO file.FileDataModel: Read lines: 100000
14/03/02 14:16:56 INFO model.GenericDataModel: Processed 943 users
—————————————————————————–
14/03/02 14:16:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation using 0.7 of FileDataModel[dataFile:/home/bianwenlong/ml-100k/ml-100k/sat.data]
14/03/02 14:16:56 INFO model.GenericDataModel: Processed 943 users
14/03/02 14:16:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation of 942 users
14/03/02 14:16:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Starting timing of 942 tasks in 4 threads
14/03/02 14:16:57 INFO eval.StatsCallable: Average time per recommendation: 193ms
14/03/02 14:16:57 INFO eval.StatsCallable: Approximate memory used: 12MB / 16MB
14/03/02 14:16:57 INFO eval.StatsCallable: Unable to recommend in 341 cases
14/03/02 14:17:50 INFO eval.AbstractDifferenceRecommenderEvaluator: Evaluation result: 1.1750788826753058
UserBased 2  score is1.1750788826753058
—————————————————————————–
14/03/02 14:17:50 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation using 0.7 of FileDataModel[dataFile:/home/bianwenlong/ml-100k/ml-100k/sat.data]
14/03/02 14:17:50 INFO model.GenericDataModel: Processed 943 users
14/03/02 14:17:50 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation of 943 users
14/03/02 14:17:50 INFO eval.AbstractDifferenceRecommenderEvaluator: Starting timing of 943 tasks in 4 threads
14/03/02 14:17:50 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 711
14/03/02 14:17:50 INFO eval.StatsCallable: Average time per recommendation: 35ms
14/03/02 14:17:50 INFO eval.StatsCallable: Approximate memory used: 12MB / 16MB
14/03/02 14:17:50 INFO eval.StatsCallable: Unable to recommend in 2 cases
14/03/02 14:17:51 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 830
14/03/02 14:17:51 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 851
14/03/02 14:17:51 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 851
14/03/02 14:17:51 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1155
14/03/02 14:17:51 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1156
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1325
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1340
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1342
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1343
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1348
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1352
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1363
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1364
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1387
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1414
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1408
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1465
14/03/02 14:17:52 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1461
14/03/02 14:17:53 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1508
14/03/02 14:17:53 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1515
14/03/02 14:17:53 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1526
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 851
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1536
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1537
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1537
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 851
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1595
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1387
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1408
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1546
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1562
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1567
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1568
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1569
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1575
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1582
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1586
14/03/02 14:17:54 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1601
14/03/02 14:17:55 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1603
14/03/02 14:17:55 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1508
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1155
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1508
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1408
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1465
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1627
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1342
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1653
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1155
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1465
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1635
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1640
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1641
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1645
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1647
14/03/02 14:17:56 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1650
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1661
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1387
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1537
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1663
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1666
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1667
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1668
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1669
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1537
14/03/02 14:17:57 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1675
14/03/02 14:17:58 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1682
ItemBased score is 1.0975656960610958
—————————————————————————–
14/03/02 14:17:58 INFO eval.AbstractDifferenceRecommenderEvaluator: Evaluation result: 1.0975656960610958
14/03/02 14:17:58 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation using 0.7 of FileDataModel[dataFile:/home/bianwenlong/ml-100k/ml-100k/sat.data]
14/03/02 14:17:58 INFO model.GenericDataModel: Processed 943 users
14/03/02 14:17:58 INFO slopeone.MemoryDiffStorage: Building average diffs…
14/03/02 14:18:00 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation of 942 users
14/03/02 14:18:00 INFO eval.AbstractDifferenceRecommenderEvaluator: Starting timing of 942 tasks in 4 threads
14/03/02 14:18:00 INFO eval.StatsCallable: Average time per recommendation: 11ms
14/03/02 14:18:00 INFO eval.StatsCallable: Approximate memory used: 75MB / 137MB
14/03/02 14:18:00 INFO eval.StatsCallable: Unable to recommend in 0 cases
14/03/02 14:18:01 INFO eval.AbstractDifferenceRecommenderEvaluator: Evaluation result: 0.9432216740733792
Slope  one score is 0.9432216740733792
—————————————————————————–
14/03/02 14:18:01 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation using 0.7 of FileDataModel[dataFile:/home/bianwenlong/ml-100k/ml-100k/sat.data]
14/03/02 14:18:01 INFO model.GenericDataModel: Processed 943 users
14/03/02 14:18:01 INFO svd.ALSWRFactorizer: starting to compute the factorization…
14/03/02 14:18:01 INFO svd.ALSWRFactorizer: iteration 0
14/03/02 14:18:01 INFO svd.ALSWRFactorizer: iteration 1
14/03/02 14:18:01 INFO svd.ALSWRFactorizer: iteration 2
14/03/02 14:18:02 INFO svd.ALSWRFactorizer: iteration 3
14/03/02 14:18:02 INFO svd.ALSWRFactorizer: iteration 4
14/03/02 14:18:02 INFO svd.ALSWRFactorizer: iteration 5
14/03/02 14:18:03 INFO svd.ALSWRFactorizer: iteration 6
14/03/02 14:18:03 INFO svd.ALSWRFactorizer: iteration 7
14/03/02 14:18:03 INFO svd.ALSWRFactorizer: iteration 8
14/03/02 14:18:04 INFO svd.ALSWRFactorizer: iteration 9
14/03/02 14:18:04 INFO svd.ALSWRFactorizer: finished computation of the factorization…
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Beginning evaluation of 943 users
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Starting timing of 943 tasks in 4 threads
14/03/02 14:18:04 INFO eval.StatsCallable: Average time per recommendation: 0ms
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 814
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 599
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 830
14/03/02 14:18:04 INFO eval.StatsCallable: Approximate memory used: 87MB / 137MB
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 600
14/03/02 14:18:04 INFO eval.StatsCallable: Unable to recommend in 3 cases
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1122
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1130
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1201
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1321
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1398
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1341
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1358
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1359
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1398
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1364
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1366
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1447
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1374
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1452
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1453
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1458
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1476
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1461
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1321
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1493
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1505
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1358
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1498
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1500
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1526
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1533
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1359
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1594
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1595
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1603
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1613
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1594
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1619
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1624
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1374
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1630
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1641
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1654
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1655
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1568
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1645
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1649
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1650
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1651
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1660
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1500
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1590
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1666
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1670
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1682
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 600
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1572
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1586
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1590
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Item exists in test data but not training data: 1522
14/03/02 14:18:04 INFO eval.AbstractDifferenceRecommenderEvaluator: Evaluation result: 0.9860761592925741
SVD score is 0.9860761592925741

Reposted from: 博客园-所有随笔区
