1. Preparation
(1) Three virtual machines
(2) The hadoop-2.6.4 installation package
2. Hadoop Installation
(1) As the root user, copy the package to /usr/apps/hadoop on hadoop01 and extract it
Upload the package with Xftp, then extract it:
tar -zxvf cenos-6.6-hadoop-2.6.4.tar.gz
(2) Add Hadoop to the environment variables
vi /etc/profile
Add or modify the export lines in the profile:
export HADOOP_HOME=/usr/apps/hadoop
Copy the profile to the other nodes:
scp -r /etc/profile root@hadoop02:/etc/
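For the hadoop command and the start-up scripts to be found, the profile typically also needs the PATH entries below (a minimal sketch; the JAVA_HOME path follows the value used in hadoop-env.sh later in this guide):
export JAVA_HOME=/usr/apps/java/jdk1.7.0_80
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile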
(3) Modify the Hadoop configuration files, and modify the hosts file on the Windows host
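So the web UIs can be reached by hostname from Windows, add hostname-to-IP mappings to C:\Windows\System32\drivers\etc\hosts. The addresses below are placeholders; substitute the actual IPs of the three virtual machines:
192.168.1.101 hadoop01
192.168.1.102 hadoop02
192.168.1.103 hadoop03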
A. Modify hadoop-env.sh
hadoop-env.sh
export JAVA_HOME=/usr/apps/java/jdk1.7.0_80
B. Modify core-site.xml
core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://jsj/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/apps/hdpdata</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
C. Modify hdfs-site.xml
hdfs-site.xml
<property>
<name>dfs.nameservices</name>
<value>jsj</value>
</property>
<property>
<name>dfs.ha.namenodes.jsj</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.jsj.nn1</name>
<value>hadoop01:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.jsj.nn2</name>
<value>hadoop02:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.jsj.nn1</name>
<value>hadoop01:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.jsj.nn2</name>
<value>hadoop02:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/jsj</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/apps/journaldata</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.jsj</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
D. Modify mapred-site.xml
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop03:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop03:19888</value>
</property>
E. Modify yarn-site.xml
yarn-site.xml
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>abc</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop02</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
F. Modify slaves
slaves
hadoop02
hadoop03
hadoop04
(5) Copy the configured Hadoop to the other nodes
Copy to each node:
scp -r hadoop hadoop@hadoop02:/usr/apps/
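The same copy presumably needs to be repeated for the remaining node:
scp -r hadoop hadoop@hadoop03:/usr/apps/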
(6) Start the ZooKeeper cluster
zkServer.sh start
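zkServer.sh start should be run on each of the ZooKeeper nodes listed in ha.zookeeper.quorum (hadoop01, hadoop02, hadoop03). The election state can then be verified on each node:
zkServer.sh status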
(7) Start the JournalNodes (run on hadoop01, hadoop02 and hadoop03 separately)
hadoop-daemon.sh start journalnode
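Whether the JournalNode came up can be checked on each node, for example:
jps    # the output should now include a JournalNode process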
(8) Format HDFS
On hadoop01:
hdfs namenode -format
Copy the generated metadata directory to hadoop02:
cd /usr/apps
scp -r hdpdata hadoop@hadoop02:/usr/apps/
Start the NameNode on hadoop01:
hadoop-daemon.sh start namenode
On hadoop02, synchronize the standby NameNode (answer Y if prompted to re-format):
hdfs namenode -bootstrapStandby
(9) Format ZKFC (run once on hadoop01)
hdfs zkfc -formatZK
(10) Start HDFS (on hadoop01)
start-dfs.sh
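One way to confirm that HA came up is to query the state of the two NameNodes defined in hdfs-site.xml; one should report active and the other standby:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2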
(11) Start YARN
start-yarn.sh
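Note that start-yarn.sh only starts the ResourceManager on the node where it is executed; with ResourceManager HA enabled, the standby ResourceManager on hadoop02 usually has to be started there separately:
yarn-daemon.sh start resourcemanager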
(12) Start the history server (on hadoop03)
mr-jobhistory-daemon.sh start historyserver
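If it started correctly, jps on hadoop03 should list a JobHistoryServer process, and the web UI should be reachable at the address configured in mapred-site.xml (http://hadoop03:19888).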