Configuring a Hadoop Cluster on CentOS 6.9 (Part 2)

IV. Installing the Hadoop Cluster

(1) Upload the Hadoop tarball to master (WinSCP works well for this); this guide uses hadoop-2.5.1.tar.gz as the example:
(screenshot omitted)
(2) On master, create a new directory to hold Hadoop:

mkdir /home/hadoop

(3) Give the gznc-hadoop user ownership of (and thus write access to) the hadoop directory:
chown -R gznc-hadoop:gznc-hadoop /home/hadoop

(4) Extract the archive (in /home/hadoop): tar -zxvf hadoop-2.5.1.tar.gz

(5) Create a mydata directory under the hadoop directory: mkdir mydata
(6) In hadoop-env.sh (under hadoop-2.5.1/etc/hadoop/), set the JDK the daemons should run with:
vim hadoop-env.sh (or gedit hadoop-env.sh)

Change export JAVA_HOME=${JAVA_HOME} to
export JAVA_HOME=/usr/java/jdk1.7.0_80

(7) Set the same variable in yarn-env.sh: vim yarn-env.sh
Change #export JAVA_HOME= to
export JAVA_HOME=/usr/java/jdk1.7.0_80

(8) Configure core-site.xml
(screenshot omitted)
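The original screenshot of core-site.xml is not recoverable, so here is a minimal sketch of what a matching configuration could look like. The NameNode port (9000) is an assumption, and pointing hadoop.tmp.dir at the mydata directory created in step (5) is a guess, though it is consistent with step (13), which copies that directory to slave01.

```xml
<!-- Sketch of core-site.xml; port 9000 and the hadoop.tmp.dir value
     are assumptions, not taken from the original post. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/mydata</value>
  </property>
</configuration>
```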
(9) Configure hdfs-site.xml
(screenshot omitted)
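Likewise a hedged sketch for hdfs-site.xml: with only one DataNode (slave01), a replication factor of 1 is the natural choice, though the original screenshot may have set additional properties.

```xml
<!-- Sketch of hdfs-site.xml for a single-DataNode cluster. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```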
(10) Configure yarn-site.xml
(screenshot omitted)
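A sketch of yarn-site.xml: the mapreduce_shuffle aux-service is required to run MapReduce on YARN, and the two port values are inferred from the job log later in this post (the client connects to the ResourceManager at master:18040, and the job-tracking URL uses master:18088); they are assumptions beyond that.

```xml
<!-- Sketch of yarn-site.xml; ports inferred from the job log below. -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:18088</value>
  </property>
</configuration>
```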
(11) Configure the compute framework: copy mapred-site.xml.template to mapred-site.xml
Command: cp mapred-site.xml.template mapred-site.xml
Then edit mapred-site.xml:
(screenshot omitted)
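The screenshot is gone, but for a YARN cluster the one property mapred-site.xml must carry is mapreduce.framework.name; anything else in the original would have been optional tuning.

```xml
<!-- Minimal mapred-site.xml: run MapReduce jobs on YARN. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```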
(12) List the DataNode hosts in the slaves file
Command: vim slaves
This cluster has only one worker node; if you have several, list them all, one hostname per line.
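With the single worker used in this guide (slave01, per step (13) below), the slaves file is simply:

```text
slave01
```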

(13) Distribute hadoop-2.5.1 to the other worker (DataNode) nodes
Commands:
scp -r /home/hadoop/hadoop-2.5.1 slave01:/home/hadoop/

scp -r /home/hadoop/mydata/ slave01:/home/hadoop/
(14) On both master and slave01, configure /home/gznc-hadoop/.bash_profile

After editing, don't forget to run: source /home/gznc-hadoop/.bash_profile
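The exact contents are not shown in the original, but a .bash_profile consistent with the paths used in this guide would add something like the following (the PATH additions are an assumption):

```shell
# Hypothetical additions to /home/gznc-hadoop/.bash_profile,
# using the JDK and Hadoop paths from earlier steps.
export JAVA_HOME=/usr/java/jdk1.7.0_80
export HADOOP_HOME=/home/hadoop/hadoop-2.5.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```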

(15) Start the Hadoop cluster from master:
Commands: cd /home/hadoop/hadoop-2.5.1
./sbin/start-all.sh
Verify with jps: if everything started, master should show NameNode, SecondaryNameNode, and ResourceManager, and slave01 should show DataNode and NodeManager.
(screenshot omitted)
To stop the cluster: ./sbin/stop-all.sh

Sync the system clock on each node: ntpdate us.pool.ntp.org
If there is no NameNode process, format the NameNode on master (note that formatting erases any existing HDFS metadata). Commands:
stop-all.sh
hdfs namenode -format

Then start the cluster again and it should come up normally.

Example: computing pi
cd /home/hadoop/hadoop-2.5.1/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.5.1.jar pi 3 3
The run and its output:

[root@master Desktop]# cd /home/hadoop/hadoop-2.5.1/share/hadoop/mapreduce
[root@master mapreduce]# hadoop jar hadoop-mapreduce-examples-2.5.1.jar pi 3 3
Number of Maps  = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
18/05/12 10:35:28 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.228.132:18040
18/05/12 10:35:28 INFO input.FileInputFormat: Total input paths to process : 3
18/05/12 10:35:28 INFO mapreduce.JobSubmitter: number of splits:3
18/05/12 10:35:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1526146332767_0001
18/05/12 10:35:29 INFO impl.YarnClientImpl: Submitted application application_1526146332767_0001
18/05/12 10:35:29 INFO mapreduce.Job: The url to track the job: http://master:18088/proxy/application_1526146332767_0001/
18/05/12 10:35:29 INFO mapreduce.Job: Running job: job_1526146332767_0001
18/05/12 10:35:38 INFO mapreduce.Job: Job job_1526146332767_0001 running in uber mode : false
18/05/12 10:35:38 INFO mapreduce.Job:  map 0% reduce 0%
18/05/12 10:35:49 INFO mapreduce.Job:  map 33% reduce 0%
18/05/12 10:35:51 INFO mapreduce.Job:  map 100% reduce 0%
18/05/12 10:36:01 INFO mapreduce.Job:  map 100% reduce 100%
18/05/12 10:36:02 INFO mapreduce.Job: Job job_1526146332767_0001 completed successfully
18/05/12 10:36:02 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=72
        FILE: Number of bytes written=388477
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=783
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters 
        Launched map tasks=3
        Launched reduce tasks=1
        Data-local map tasks=3
        Total time spent by all maps in occupied slots (ms)=31163
        Total time spent by all reduces in occupied slots (ms)=8945
        Total time spent by all map tasks (ms)=31163
        Total time spent by all reduce tasks (ms)=8945
        Total vcore-seconds taken by all map tasks=31163
        Total vcore-seconds taken by all reduce tasks=8945
        Total megabyte-seconds taken by all map tasks=31910912
        Total megabyte-seconds taken by all reduce tasks=9159680
    Map-Reduce Framework
        Map input records=3
        Map output records=6
        Map output bytes=54
        Map output materialized bytes=84
        Input split bytes=429
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=84
        Reduce input records=6
        Reduce output records=0
        Spilled Records=12
        Shuffled Maps =3
        Failed Shuffles=0
        Merged Map outputs=3
        GC time elapsed (ms)=510
        CPU time spent (ms)=2800
        Physical memory (bytes) snapshot=701161472
        Virtual memory (bytes) snapshot=3372118016
        Total committed heap usage (bytes)=377171968
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=354
    File Output Format Counters 
        Bytes Written=97
Job Finished in 34.508 seconds
Estimated value of Pi is 3.55555555555555555556