
Setting up a single-node pseudo-distributed Hadoop environment and trying out Mahout

Single-node Hadoop installation
(1) Download the Hadoop release from http://hadoop.apache.org/releases.html and extract it.
(2) Configure the environment variables
export PATH=$PATH:/home/iomssbd/user/hadoop-2.4.1/bin:/home/iomssbd/user/hadoop-2.4.1/sbin
export HADOOP_HOME=/home/iomssbd/user/hadoop-2.4.1
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
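To make these variables take effect, a common approach is to append them to the shell profile and reload it (a minimal sketch, assuming ~/.bashrc is the profile in use):
source ~/.bashrc
hadoop version    # should print the Hadoop 2.4.1 version banner if PATH is set correctly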
--------------------------------Test whether the Hadoop environment works--------------------------------
cd $HADOOP_HOME
mkdir input
cp etc/hadoop/*.xml input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
cat ./output/*
--------------------------------------------------------------------------------------------
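With the stock configuration files, the grep example above typically produces a single line of output such as "1 dfsadmin"; the exact counts may differ if the *.xml files have already been modified.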
(3) Edit the configuration files
1. Edit hadoop-env.sh under etc/hadoop in the Hadoop installation directory:
export JAVA_HOME=/home/iomssbd/user/java/jdk1.7.0_67
export HADOOP_LOG_DIR=/home/iomssbd/user/hadoop-2.4.1/logs
2. Edit core-site.xml under etc/hadoop, adding the following properties inside the <configuration> element:
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/iomssbd/user/hadoop-2.4.1/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9001</value>
    </property>
    <property>
        <name>hadoop.logfile.size</name>
        <value>1000000</value>
        <description>The max size of each log file</description>
   </property>
   <property>
        <name>hadoop.logfile.count</name>
        <value>5</value>
        <description>The max number of log files</description>
   </property>
3. Edit hdfs-site.xml under etc/hadoop, adding the following properties inside the <configuration> element:
       <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/iomssbd/user/hadoop-2.4.1/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/iomssbd/user/hadoop-2.4.1/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>localhost:50011</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>localhost:50076</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>localhost:50021</value>
    </property>
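Before starting HDFS for the first time, the NameNode has to be formatted so that the directory configured in dfs.namenode.name.dir is initialized:
cd $HADOOP_HOME
bin/hdfs namenode -format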
(4) Start the pseudo-distributed Hadoop cluster
cd $HADOOP_HOME
sbin/start-dfs.sh
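If the daemons started correctly, jps should list NameNode, DataNode and SecondaryNameNode processes, and the NameNode web UI is normally reachable at http://localhost:50070:
jps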
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Mahout installation and usage
(1) Download from http://archive.apache.org/dist/mahout/
(2) Extract:  tar -zxvf apache-mahout-distribution-0.10.1.tar.gz
(3) Configure the environment variables
export MAHOUT_HOME=/home/iomssbd/user/apache-mahout-distribution-0.10.1
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
export PATH=$MAHOUT_HOME/conf:$MAHOUT_HOME/bin:$PATH
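As with the Hadoop variables, these can be appended to ~/.bashrc and reloaded. With HADOOP_HOME and HADOOP_CONF_DIR already set, Mahout jobs are submitted to the pseudo-distributed cluster rather than run locally (leave MAHOUT_LOCAL unset for this):
source ~/.bashrc
which mahout    # should resolve to apache-mahout-distribution-0.10.1/bin/mahout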
(4) Run the mahout command to verify the installation; if it prints a list of available Mahout programs, the installation succeeded.
---------------------Test run-----------------------------------------------
(5) Download the test data: synthetic_control.data from http://kdd.ics.uci.edu/databases/synthetic_control/
(6) Create the HDFS directories
hadoop fs -mkdir hdfs://localhost:9001/user
hadoop fs -mkdir hdfs://localhost:9001/user/iomssbd
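On Hadoop 2.x the two mkdir calls can also be combined by passing -p, which creates missing parent directories in one step:
hadoop fs -mkdir -p hdfs://localhost:9001/user/iomssbd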
(7) Upload the test file
hadoop fs -put  synthetic_control.data hdfs://localhost:9001/user/iomssbd/testdata
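A quick listing confirms that the file landed in HDFS:
hadoop fs -ls hdfs://localhost:9001/user/iomssbd/testdata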
(8) Run the k-means algorithm
mahout -core  org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
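When run without arguments, this example job reads its input from the testdata directory under the current user's HDFS home directory and writes the clustering results to an output directory in the same location, which is why the paths in steps (7) and (9) are used.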
(9) View the results
hadoop fs -ls /user/iomssbd/output
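The clustered points are stored as Hadoop sequence files; one way to inspect them as plain text is Mahout's seqdumper utility (a sketch, assuming the default output layout of the example job):
mahout seqdumper -i /user/iomssbd/output/clusteredPoints/part-m-00000 | head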