Installing hadoop-2.6.0-cdh5.5.2

I. Preface

  Hadoop-2.6.0-cdh5.5.2 is installed here on Debian 8.6.0.
  CDH5 downloads: http://archive.cloudera.com/cdh5/cdh/5/

  The real first step is to synchronize the system clocks on all nodes, and to reboot after doing so; otherwise MapReduce jobs will hang, and with unsynchronized clocks they fail with the error shown below. (On Red Hat, also check whether the firewall and SELinux are disabled, since either can cause errors during installation; on Debian the firewall appears to be disabled by default and SELinux is not installed unless you add it via apt, so they can be left alone here.) A minimal clock-sync sketch follows the error log:

hadoop@h21:~$ hadoop jar xx.jar WordCount /input/he.txt /output
16/12/22 22:58:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/22 22:58:18 INFO client.RMProxy: Connecting to ResourceManager at h21/192.168.8.21:8032
16/12/22 22:58:18 INFO client.RMProxy: Connecting to ResourceManager at h21/192.168.8.21:8032
16/12/22 22:58:19 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/12/22 22:58:19 INFO mapred.FileInputFormat: Total input paths to process : 1
16/12/22 22:58:19 INFO mapreduce.JobSubmitter: number of splits:2
16/12/22 22:58:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1482465356176_0001
16/12/22 22:58:20 INFO impl.YarnClientImpl: Submitted application application_1482465356176_0001
16/12/22 22:58:20 INFO mapreduce.Job: The url to track the job: http://h21:8088/proxy/application_1482465356176_0001/
16/12/22 22:58:20 INFO mapreduce.Job: Running job: job_1482465356176_0001
16/12/22 22:58:22 INFO mapreduce.Job: Job job_1482465356176_0001 running in uber mode : false
16/12/22 22:58:22 INFO mapreduce.Job:  map 0% reduce 0%
16/12/22 22:58:22 INFO mapreduce.Job: Job job_1482465356176_0001 failed with state FAILED due to: Application application_1482465356176_0001 failed 2 times due to Error launching appattempt_1482465356176_0001_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. 
This token is expired. current time is 1482471565613 found 1482466101404
Note: System times on machines may be out of sync. Check system time and time zones.
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:123)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:251)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
. Failing the application.
16/12/22 22:58:22 INFO mapreduce.Job: Counters: 0
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:838)
        at WordCount.main(WordCount.java:70)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
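
A minimal sketch of syncing the clocks, assuming the nodes can reach a public NTP server (ntpdate and pool.ntp.org are only illustrative choices; any reachable NTP server, or setting the time by hand with date -s, works as well). Run as root on all three nodes, then reboot:

root@h11:~# apt-get install -y ntpdate
root@h11:~# ntpdate pool.ntp.org
root@h11:~# hwclock -w    (write the synced time to the hardware clock so it survives the reboot)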

II. Installation

1. vi /etc/hosts:

Edit this on all 3 nodes; you can delete the original contents and add the following:

192.168.8.11    h11
192.168.8.12    h12
192.168.8.13    h13

Note: it was later verified that the hosts files do not all need the full list; the following is sufficient:

root@h11:~# vi /etc/hosts
192.168.8.11    h11
192.168.8.12    h12
192.168.8.13    h13
root@h12:~# vi /etc/hosts
192.168.8.11    h11
192.168.8.12    h12
root@h13:~# vi /etc/hosts
192.168.8.11    h11
192.168.8.13    h13

vi /etc/hostname (on the master node; the other nodes are similar, each with its own name): h11

reboot (reboot all 3 nodes)
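
A quick optional check after the reboot that the hostnames resolve to the addresses in /etc/hosts:

root@h11:~# ping -c 1 h12
root@h11:~# ping -c 1 h13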

2. Create the hadoop user on all three machines:

adduser hadoop (password: 123456)

3. Install the JDK (on all 3 machines):
root@h11:/usr# tar -zxvf jdk-7u25-linux-i586.tar.gz
root@h11:/usr# scp -r jdk1.7.0_25/ root@h12:/usr/
root@h11:/usr# scp -r jdk1.7.0_25/ root@h13:/usr/

root@h11:/usr# vi /etc/profile (edit on all three machines; append the following)
export JAVA_HOME=/usr/jdk1.7.0_25
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

root@h11:/usr# source /etc/profile (make the environment variables take effect)
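
A quick check that the JDK is picked up (it should report java version "1.7.0_25"):

root@h11:/usr# java -version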
4. Set up SSH keys (passwordless login):
root@h11:/usr# su - hadoop (switch to the hadoop user first on all 3 nodes)
hadoop@h11:~$ ssh-keygen -t rsa
hadoop@h12:~$ ssh-keygen -t rsa
hadoop@h13:~$ ssh-keygen -t rsa

hadoop@h11:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h11
hadoop@h11:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h12
hadoop@h11:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h13

hadoop@h12:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h11
hadoop@h12:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h12
hadoop@h12:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h13

hadoop@h13:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h11
hadoop@h13:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h12
hadoop@h13:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h13

Note: it was later verified that this step can also be simplified; the three machines do not need to exchange keys pairwise. The master node dispatches tasks to the slave nodes, so the slaves never need to SSH to each other. The simplified version:

hadoop@h11:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h11
hadoop@h11:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h12
hadoop@h11:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h13

hadoop@h12:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h11

hadoop@h13:~$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h11
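
To verify the passwordless login, each of the following should print the remote hostname without prompting for a password:

hadoop@h11:~$ ssh h12 hostname
hadoop@h11:~$ ssh h13 hostname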
5. Install hadoop-2.6.0-cdh5.5.2:
hadoop@h11:~$ tar -zxvf hadoop-2.6.0-cdh5.5.2.tar.gz

hadoop@h11:~$ vi .profile (add the following; on Red Hat edit .bash_profile instead)
export JAVA_HOME=/usr/jdk1.7.0_25
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

HADOOP_HOME=/home/hadoop/hadoop-2.6.0-cdh5.5.2
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME HADOOP_CONF_DIR PATH

hadoop@h11:~$ source .profile
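
A quick check that the new environment variables took effect (it should print Hadoop 2.6.0-cdh5.5.2):

hadoop@h11:~$ hadoop version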
6. Edit core-site.xml:
hadoop@h11:~$ cd hadoop-2.6.0-cdh5.5.2/etc/hadoop
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ vi core-site.xml
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://h11:9000</value>
   <description>NameNode URI.</description>
 </property>

 <property>
   <name>io.file.buffer.size</name>
   <value>131072</value>
   <description>Size of read/write buffer used in SequenceFiles.</description>
 </property>
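
For clarity, the <property> blocks above go inside the existing <configuration> element, so the whole file ends up looking roughly like this (descriptions omitted; the same wrapping applies to the XML files edited in the following steps):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h11:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>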
7. Edit hdfs-site.xml:
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ cd /home/hadoop/hadoop-2.6.0-cdh5.5.2
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ mkdir -p dfs/name
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ mkdir -p dfs/data
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ mkdir -p dfs/namesecondary
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ cd etc/hadoop

hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ vi hdfs-site.xml
 <property>
   <name>dfs.namenode.secondary.http-address</name>
   <value>h11:50090</value>
   <description>The secondary namenode http server address and port.</description>
 </property>

 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/name</value>
   <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
 </property>

 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/data</value>
   <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
 </property>

 <property>
   <name>dfs.namenode.checkpoint.dir</name>
   <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/namesecondary</value>
   <description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
 </property>

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
8. Edit mapred-site.xml:
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ cp mapred-site.xml.template mapred-site.xml
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ vi mapred-site.xml
<property>
   <name>mapreduce.framework.name</name>
<value>yarn</value>
<description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
  </property>

  <property>
   <name>mapreduce.jobhistory.address</name>
    <value>h11:10020</value>
    <description>MapReduce JobHistoryServer IPC host:port</description>
  </property>

  <property>
   <name>mapreduce.jobhistory.webapp.address</name>
    <value>h11:19888</value>
    <description>MapReduce JobHistoryServer Web UI host:port</description>
  </property>

Note: the property "mapreduce.framework.name" selects the runtime framework used to execute MapReduce jobs. The default is local and it must be changed to "yarn" here.

9. Edit yarn-site.xml:
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ vi yarn-site.xml
<property>
   <name>yarn.resourcemanager.hostname</name>
  <value>h11</value>
  <description>The hostname of the RM.</description>
</property>

 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
   <description>Shuffle service that needs to be set for Map Reduce applications.</description>
 </property>
10. Edit hadoop-env.sh:
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ vi hadoop-env.sh
export JAVA_HOME=/usr/jdk1.7.0_25

Note: on 64-bit RedHat 6.6 the default JDK version is 1.7.0_65, so there is no need to install one yourself; in that case this line should read export JAVA_HOME=/usr

11. Edit the slaves file:
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ vi slaves
h12
h13
12. Sync to the slave nodes:
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2/etc/hadoop$ cd
hadoop@h11:~$ scp -r ./hadoop-2.6.0-cdh5.5.2/ hadoop@h12:/home/hadoop/
hadoop@h11:~$ scp -r ./hadoop-2.6.0-cdh5.5.2/ hadoop@h13:/home/hadoop/

Note: if you run into the following warning: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Fix 1: download hadoop-native-64-2.6.0.tar from http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.6.0.tar (this package is 64-bit, so it only works on 64-bit Linux systems, not 32-bit):
hadoop@h11:~$ tar -xvf hadoop-native-64-2.6.0.tar -C hadoop-2.6.0-cdh5.5.2/lib/native/

Fix 2: simply suppress the warning in the log4j configuration.
Add the following to hadoop-2.6.0-cdh5.5.2/etc/hadoop/log4j.properties:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
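
To confirm whether the native library is actually being picked up after either fix, Hadoop 2.x ships a checknative command:

hadoop@h11:~$ hadoop checknative -a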
 

III. Verification

hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ bin/hdfs namenode -format
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ sbin/start-all.sh
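
Note: start-all.sh still works but is deprecated in Hadoop 2.x; the equivalent is to start HDFS and YARN separately:

hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ sbin/start-dfs.sh
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ sbin/start-yarn.sh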

hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ jps
2260 ResourceManager
1959 NameNode
2121 SecondaryNameNode
2559 Jps

hadoop@h12:~/hadoop-2.6.0-cdh5.5.2$ jps
1889 NodeManager
2038 Jps
1788 DataNode

hadoop@h13:~/hadoop-2.6.0-cdh5.5.2$ jps
1889 NodeManager
2038 Jps
1788 DataNode

hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ bin/hadoop fs -ls /
hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ bin/hadoop fs -mkdir /aaa
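
To confirm that both DataNodes have registered with the NameNode, dfsadmin prints a cluster report:

hadoop@h11:~/hadoop-2.6.0-cdh5.5.2$ bin/hdfs dfsadmin -report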

IV. Run a wordcount

mkdir input
echo "Hello Docker" >input/file2.txt
echo "Hello Hadoop" >input/file1.txt

# create input directory on HDFS
hadoop fs -mkdir -p input

# put input files to HDFS
hdfs dfs -put ./input/* input

# run wordcount 
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-sources.jar org.apache.hadoop.examples.WordCount input output

# print the output of wordcount
hdfs dfs -cat output/part-r-00000
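
With the two input files above, the expected (tab-separated) output is:

Docker	1
Hadoop	1
Hello	2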