Hadoop Cluster High Availability (HA)
Set up three virtual machines:
192.168.199.161 pass1
192.168.199.162 pass2
192.168.199.163 pass3
Configure the JDK and ZooKeeper first.
Configure Hadoop
Extract the Hadoop installation package:
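For reference, a minimal sketch of the ZooKeeper ensemble configuration shared by the three nodes (the dataDir path is an assumption; the ports are the ZooKeeper defaults):
# conf/zoo.cfg, identical on pass1, pass2 and pass3
dataDir=/opt/bigdata/zookeeper/data
clientPort=2181
server.1=pass1:2888:3888
server.2=pass2:2888:3888
server.3=pass3:2888:3888
Each node additionally needs a myid file under dataDir containing its own id (1, 2 or 3).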
[root@pass1 install]# tar -zxvf ./hadoop-2.6.0-cdh5.14.2.tar.gz -C ../bigdata/
[root@pass1 install]# cd ../bigdata/
[root@pass1 bigdata]# ls
hadoop-2.6.0-cdh5.14.2 jdk180
[root@pass1 bigdata]# mv ./hadoop-2.6.0-cdh5.14.2/ hadoop260
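To make the hadoop commands available on the PATH, the environment variables can be exported, for example in /etc/profile (a sketch using the directory created above):
export HADOOP_HOME=/opt/bigdata/hadoop260
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Run source /etc/profile afterwards so the change takes effect in the current shell.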
Go into /opt/bigdata/hadoop260/etc/hadoop and configure the following files:
hadoop-env.sh
mapred-env.sh
yarn-env.sh
slaves
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
hadoop-env.sh
export JAVA_HOME=/opt/bigdata/jdk180/
mapred-env.sh
export JAVA_HOME=/opt/bigdata/jdk180/
yarn-env.sh
export JAVA_HOME=/opt/bigdata/jdk180/
slaves
pass1
pass2
pass3
core-site.xml (after configuring this file, create a hadoop2 directory under the hadoop260 directory; a command sketch follows right below)
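The working directory (together with the jn subdirectory used by the JournalNodes further down in this file) can be created in one step:
[root@pass1 hadoop260]# mkdir -p /opt/bigdata/hadoop260/hadoop2/jn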
<configuration>
<property>
<!-- HDFS NameNode address (the HA nameservice) -->
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Directory where Hadoop stores its runtime data -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/bigdata/hadoop260/hadoop2</value>
</property>
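<!-- Directory where the JournalNodes store the shared edit log for HA -->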
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/bigdata/hadoop260/hadoop2/jn</value>
</property>
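<!-- Hosts from which the root user is allowed to act as a proxy user -->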
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop