Hadoop Cluster Installation
Machines
If you only have three hosts, the cluster can be deployed according to the following plan:
Host (IP) | Services |
---|---|
hadoop1 (172.16.185.68) | zookeeper, journalnode, namenode, zkfc, resourcemanager, datanode |
hadoop2 (172.16.185.69) | zookeeper, journalnode, namenode, zkfc, resourcemanager, datanode |
hadoop3 (172.16.185.70) | zookeeper, journalnode, datanode |
Create the hadoop group and the hadoop user, then give the hadoop user ownership of the data directory:
[root@hadoop1 ~]# groupadd hadoop
[root@hadoop1 ~]# useradd -s /bin/bash -d /home/hadoop -m -g hadoop -G root hadoop
[root@hadoop1 ~]# passwd hadoop
[root@hadoop1 ~]# chown -R hadoop:hadoop /data/
Set the hostname
vi /etc/sysconfig/network
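On CentOS 6 this file should contain the node's hostname, for example on hadoop1 (use hadoop2/hadoop3 on the other machines):

NETWORKING=yes
HOSTNAME=hadoop1

On CentOS 7 and later this file is ignored; use hostnamectl set-hostname hadoop1 instead.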
Bind hostnames in /etc/hosts
[root@hadoop1 ~]# vi /etc/hosts
172.16.185.68 hadoop1
172.16.185.69 hadoop2
172.16.185.70 hadoop3
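The same three entries must exist on every node. One way to do this (a sketch, run as root from hadoop1) is to copy the file out:

scp /etc/hosts root@hadoop2:/etc/hosts
scp /etc/hosts root@hadoop3:/etc/hosts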
Passwordless SSH login
Switch to the hadoop user
[root@hadoop1 ~]# su - hadoop
[hadoop@hadoop1 ~]$
Configure passwordless login
# First, configure passwordless login from hadoop1 to hadoop2 and hadoop3
# Generate a key pair on hadoop1 (ssh-copy-id is provided by openssh-clients; install it as root first if needed: yum -y install openssh-clients)
ssh-keygen -t rsa
# Copy the public key to every node, including this one
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
# Then configure passwordless login from hadoop2 to hadoop1 and hadoop3 (run these commands on hadoop2)
ssh-keygen -t rsa
# Copy the public key to the other nodes
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
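To confirm the setup, each hop should now log in without prompting for a password, for example:

ssh hadoop2 hostname    # prints "hadoop2" with no password prompt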
ZooKeeper installation
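The original notes leave this section empty, so what follows is a minimal sketch matching the three-node quorum used later in core-site.xml; the version (3.4.6) and the /data paths are assumptions, adjust them to your download. On hadoop1:

tar -zxvf /data/software/zookeeper-3.4.6.tar.gz -C /data/
cd /data/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

Key lines in zoo.cfg (clientPort stays at the default 2181, matching the quorum configured in core-site.xml below):

dataDir=/data/zookeeper-3.4.6/data
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888

Copy the directory to hadoop2 and hadoop3, and on each node write that node's id into the myid file (1 on hadoop1, 2 on hadoop2, 3 on hadoop3):

mkdir -p /data/zookeeper-3.4.6/data
echo 1 > /data/zookeeper-3.4.6/data/myid

Finally start ZooKeeper on all three nodes and check that one is the leader and the others are followers:

/data/zookeeper-3.4.6/bin/zkServer.sh start
/data/zookeeper-3.4.6/bin/zkServer.sh status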
Hadoop 2.0 cluster setup
Install and configure the Hadoop cluster (perform these steps on hadoop1)
Extract the archive
tar -zxvf /data/software/hadoop-2.5.2.tar.gz -C /data/
Configure HDFS
All Hadoop 2.0 configuration files live under $HADOOP_HOME/etc/hadoop.
# Add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/data/java
export HADOOP_HOME=/data/hadoop-2.5.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
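Reload the profile so the variables take effect in the current shell, and optionally verify:

source /etc/profile
hadoop version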
cd /data/hadoop-2.5.2/etc/hadoop
Edit hadoop-env.sh
export JAVA_HOME=/data/java
Edit core-site.xml
<configuration>
    <!-- Set the HDFS nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop-2.5.2/tmp</value>
    </property>
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
</configuration>
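Note that ns1 is a logical name with no physical address here; in an HA deployment like the one planned in the table above (two namenodes with zkfc), it is resolved by the namenode mappings in hdfs-site.xml, which is configured next.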