Setting Up a Distributed Hadoop Cluster on Linux (CentOS 7)

1. Edit /etc/hosts (on all three machines; the format is IP address first, then hostname)
  192.168.9.1 a1  (master)
  192.168.9.2 a2  (slave1)
  192.168.9.3 a3  (slave2)
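As a sanity check of the entry format, the sketch below writes the three entries to a local demo file (`hosts.demo` is just an illustration; on the real nodes the lines go into /etc/hosts) and counts them:

```shell
# Sketch: the three entries /etc/hosts needs, IP first, then hostname.
# Written to ./hosts.demo here; on each node, append them to /etc/hosts.
cat > hosts.demo <<'EOF'
192.168.9.1 a1
192.168.9.2 a2
192.168.9.3 a3
EOF
grep -c '^192\.168\.9\.' hosts.demo   # should print 3
```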


2. Create a hadoop user on all three machines
Username: hadoop  Password: 123
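A minimal sketch of the user-creation step (the commands are printed rather than executed here, since they must run as root on each node; `passwd --stdin` is specific to RHEL/CentOS):

```shell
# Commands to run as root on each of a1, a2, a3:
cmds='useradd hadoop
echo 123 | passwd --stdin hadoop'
printf '%s\n' "$cmds"
```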


3. Install the JDK (on all three machines)
[root@a1 ~]# chmod 777 jdk-6u38-ea-bin-b04-linux-i586-31_oct_2012-rpm.bin 
[root@a1 ~]# ./jdk-6u38-ea-bin-b04-linux-i586-31_oct_2012-rpm.bin
[root@a1 ~]# cd /usr/java/jdk1.6.0_38/


[root@a1 jdk]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_38
export JAVA_BIN=/usr/java/jdk1.6.0_38/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
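After sourcing /etc/profile, you can verify the variables resolve as intended; a minimal sketch (the export lines mirror the profile entries above, so it runs even before the JDK directory exists):

```shell
# Re-create the profile exports and confirm JAVA_HOME and PATH pick them up.
export JAVA_HOME=/usr/java/jdk1.6.0_38
export PATH=$PATH:$JAVA_HOME/bin
echo "$JAVA_HOME"                                  # should print /usr/java/jdk1.6.0_38
echo "$PATH" | grep -o '/usr/java/jdk1.6.0_38/bin' # PATH now contains the JDK bin dir
```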


Reboot the system, or run source /etc/profile


[root@a1 ~]#  /usr/java/jdk1.6.0_38/bin/java -version


java version "1.6.0_38-ea"
Java(TM) SE Runtime Environment (build 1.6.0_38-ea-b04)
Java HotSpot(TM) Client VM (build 20.13-b02, mixed mode, sharing)


4. Install Hadoop (on all three machines)


[root@a1 ~]# tar zxvf hadoop-0.20.2-cdh3u5.tar.gz -C /usr/local


Edit the Hadoop configuration files
[root@a1 ~]# cd /usr/local/hadoop-0.20.2-cdh3u5/conf/
[root@a1 conf]# vi hadoop-env.sh 
Add:
export JAVA_HOME=/usr/java/jdk1.6.0_38


Set the default filesystem URI (the NameNode address and port)
[root@a1 conf]# vi core-site.xml 
Add:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://a1:9000</value>
</property>


<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-0.20.2-cdh3u5/tmp</value>
</property>
</configuration>


Create the tmp directory referenced by hadoop.tmp.dir: mkdir tmp (under the Hadoop installation directory)


Set the HDFS replication factor to 2 (one copy on each of the two DataNodes)
[root@a1 conf]# vi hdfs-site.xml
Add:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>


Set the JobTracker address and port
[root@a1 conf]# vim mapred-site.xml


<configuration>
<property>
<name>mapred.job.tracker</name>
<value>a1:9001</value>
</property>
</configuration>


[root@a1 conf]# vi masters
Change it to the master's hostname:
a1


[root@a1 conf]# vi slaves
Change it to:
a2
a3


Copy the installation to the other two nodes
[root@a1 conf]# cd /usr/local/
[root@a1 local]# scp -r ./hadoop-0.20.2-cdh3u5/ a2:/usr/local/
[root@a1 local]# scp -r ./hadoop-0.20.2-cdh3u5/ a3:/usr/local/


On every node, change the owner and group of /usr/local/hadoop-0.20.2-cdh3u5 to hadoop, then switch to that user
[root@a1 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a2 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a3 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R


[root@a1 ~]# su - hadoop
[root@a2 ~]# su - hadoop
[root@a3 ~]# su - hadoop


Generate an SSH key pair on every node, then copy each node's public key to all three nodes
[hadoop@a1 ~]$ ssh-keygen -t rsa
[hadoop@a2 ~]$ ssh-keygen -t rsa
[hadoop@a3 ~]$ ssh-keygen -t rsa


[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3


[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3


[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
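Once the keys are distributed, every node should be able to reach every other node without a password prompt. A sketch of the verification loop (the commands are printed rather than executed here; on the real cluster, run each printed command as the hadoop user):

```shell
# Build the list of passwordless-login checks to run from each node.
checks=$(for host in a1 a2 a3; do
  echo "ssh $host hostname"
done)
printf '%s\n' "$checks"   # each command should print the hostname without asking for a password
```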




Format the NameNode
[hadoop@a1 ~]$ cd /usr/local/hadoop-0.20.2-cdh3u5/
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop namenode -format


Start the cluster
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/start-all.sh 


Verify startup by checking the running processes on every node
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ jps
8602 JobTracker
8364 NameNode
8527 SecondaryNameNode
8673 Jps


[hadoop@a2 hadoop-0.20.2-cdh3u5]$ jps
10806 Jps
10719 TaskTracker
10610 DataNode


[hadoop@a3 hadoop-0.20.2-cdh3u5]$ jps
7605 Jps
7515 TaskTracker
7405 DataNode


[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop dfsadmin -report