Installing and Testing Mahout on Single-Node Pseudo-Distributed Hadoop

Installing the JDK

See my earlier blog post on installing JDK 1.7:

http://blog.csdn.net/stanely_hwang/article/details/18883599

Installing Hadoop (Single-Node, Pseudo-Distributed)

See my earlier blog post on the single-node pseudo-distributed Hadoop installation:

http://blog.csdn.net/stanely_hwang/article/details/18884181

Installing and Configuring Mahout

1: Download and extract the binary distribution:

Mahout download addresses:
http://www.apache.org/dyn/closer.cgi/mahout/
or http://archive.apache.org/dist/mahout/
Once the download finishes, simply extract the archive. I downloaded Mahout to /opt/hadoop, so change into that directory and extract:
$ cd /opt/hadoop
$ tar -zxvf mahout-distribution-0.9.tar.gz
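To confirm the extraction succeeded, list the new directory; among other things you should see the bin/ directory and the mahout-examples-0.9-job.jar used later in this walkthrough:
$ ls /opt/hadoop/mahout-distribution-0.9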

2:配置环境变量:

用vim编辑/etc/profile文件, 再文件末尾添加$JHADOOP_HOME, $HADOOP_CONF,$MAHOUT_HOME 环境遍历,
详细配置信息如下所示:

JAVA_HOME=/opt/java/jdk
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/bin
JRE_HOME=/opt/java/jdk
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/bin
export JAVA_HOME
export JRE_HOME
export HADOOP_HOME=/home/andy/hadoop-2.2.0
export HADOOP_CONF_DIR=/home/andy/hadoop-2.2.0/conf
export MAHOUT_HOME=/opt/hadoop/mahout-distribution-0.9
export PATH=$HADOOP_HOME/bin:$MAHOUT_HOME/bin:$PATH
export PATH
export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/sbin
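After saving, reload the profile so the variables take effect in the current shell, then verify:
$ source /etc/profile
$ echo $MAHOUT_HOME
/opt/hadoop/mahout-distribution-0.9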

3: Start Hadoop:

Change into the sbin directory of the Hadoop installation (~/hadoop-2.2.0/sbin) and start the daemons:
$ cd ~/hadoop-2.2.0/sbin
$ ./hadoop-daemon.sh start namenode
$ ./hadoop-daemon.sh start datanode
$ ./yarn-daemon.sh start resourcemanager
$ ./yarn-daemon.sh start nodemanager
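You can confirm that all four daemons came up with jps; it should list NameNode, DataNode, ResourceManager, and NodeManager, each preceded by its process ID:
$ jps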

4: mahout --help    # check that Mahout is installed correctly by verifying that it lists the available algorithms

Change into the $MAHOUT_HOME/bin directory:
$ cd $MAHOUT_HOME/bin
$ ./mahout --help
The output is as follows:
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /home/andy/hadoop-2.2.0/bin/hadoop and HADOOP_CONF_DIR=/home/andy/hadoop-2.2.0/conf
MAHOUT-JOB: /opt/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Unknown program '--help' chosen.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  lucene2seq: : Generate Text SequenceFiles from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence
[andy@localhost bin]$ 
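The "Unknown program '--help' chosen" line is expected: --help is not a program name, so Mahout falls back to printing the list of valid programs, which is exactly what confirms the installation. Also note the first line of the output: because MAHOUT_LOCAL is unset, Mahout runs against Hadoop using HADOOP_CONF_DIR. To run Mahout standalone on the local filesystem instead, set MAHOUT_LOCAL to any non-empty value before invoking the script:
$ export MAHOUT_LOCAL=true    # run locally; unset MAHOUT_LOCAL to return to Hadoop mode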

5: Preparing to use Mahout:

  • Prepare the data:
Test data download address:
http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
After downloading, place the data file under $MAHOUT_HOME (for example with wget, as shown below).
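A minimal way to fetch it straight into $MAHOUT_HOME, assuming wget is available on the machine:
$ wget -P $MAHOUT_HOME http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data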
  • Create the test directory:
Create the test directory testdata in HDFS and import the data into it:

$ cd $HADOOP_HOME/bin/
$ hadoop fs -mkdir testdata
$ hadoop fs -put $MAHOUT_HOME/synthetic_control.data testdata
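Verify that the file landed in HDFS; the synthetic-control k-means example job reads its input from testdata by default:
$ hadoop fs -ls testdata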

  • Run the k-means algorithm:

$ hadoop jar $MAHOUT_HOME/mahout-examples-0.9-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
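The example driver chains several MapReduce jobs (roughly: vectorize the input, seed initial clusters, then iterate k-means) and writes its results to the output directory in HDFS. To follow progress while it runs, you can list the active YARN applications with the standard Hadoop 2.x CLI:
$ yarn application -list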
 
     
 
    

  • View the results:

$ hadoop fs -lsr output
$ hadoop fs -get output $MAHOUT_HOME/result
$ cd $MAHOUT_HOME/result
$ ls
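To inspect the clusters in readable form you can also try Mahout's clusterdump utility. This is a sketch: the name of the final cluster directory (clusters-10-final here) depends on how many iterations the job actually ran, so check the -lsr listing first:
$ mahout clusterdump -i output/clusters-10-final -p output/clusteredPoints -o $MAHOUT_HOME/clusteranalyze.txt
$ head $MAHOUT_HOME/clusteranalyze.txt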

If the listing shows the clustering output (directories such as clusteredPoints, clusters-0, and a final clusters-*-final), the installation was successful!


(Reposted from: http://blog.csdn.net/stanely_hwang/article/details/20044323)
