Hadoop deployment, personally tested and working

1. Prepare the environment

    Four CentOS 7 hosts, with IPs 192.168.1.62, 192.168.1.65, 192.168.1.70, and 192.168.1.71.

    The hostnames are master, slave1, slave2, and slave3.

    Add all four hosts to /etc/hosts on each machine:

 

    192.168.1.62   master

    192.168.1.65   slave1

    192.168.1.70   slave2

    192.168.1.71   slave3

2. Pre-deployment preparation

    On the master host, comment out the line 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 in /etc/hosts, so that the name master resolves to 192.168.1.62 rather than the loopback address.

    Stop the firewall on all four hosts (on CentOS 7 firewalld is managed through systemd):

    systemctl stop firewalld
    systemctl disable firewalld

 

3. Create the hadoop3 user on all four hosts and set a password
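
    A minimal sketch, run as root on each host (choose your own password at the prompt):

    useradd hadoop3
    passwd hadoop3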

    

 

4. On the master host, grant the new user sudo privileges by adding it to /etc/sudoers

    ## Allows members of the 'sys' group to run networking, software,
    ## service management apps and more.
    # %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

    %sudo  ALL=(ALL:ALL) ALL
    hadoop3 ALL=(ALL) ALL

    Place the downloaded JDK and Hadoop packages under /usr/local/java and /usr/local/hadoop:

    mkdir /usr/local/java /usr/local/hadoop
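
    Then unpack the tarballs into those directories; the archive file names below are assumptions inferred from the version numbers that appear in .bashrc further down:

    tar -xzf jdk-8u171-linux-x64.tar.gz -C /usr/local/java
    tar -xzf hadoop-3.0.0.tar.gz -C /usr/local/hadoop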

 

 

    As the hadoop3 user, add the environment variables to ~/.bashrc on all four hosts:

    export JAVA_HOME=/usr/local/java/jdk1.8.0_171
    export HADOOP_HOME=/usr/local/hadoop/hadoop-3.0.0
    export JRE_HOME=${JAVA_HOME}/jre
    export SCALA_HOME=~/usr/local/scala/scala-2.10.5
    export SPARK_HOME=~/usr/local/spark/spark-2.0.1-bin-hadoop2.7
    export SQOOP_HOME=~/usr/local/sqoop/sqoop-1.4.6
    export HIVE_HOME=~/usr/local/hive/hive-1.2.1
    export HBASE_HOME=~/usr/local/hbase/hbase-1.0.1.1
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:${HIVE_HOME}/lib
    export PATH=${SPARK_HOME}/bin:${SCALA_HOME}/bin:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SQOOP_HOME}/bin:${HADOOP_HOME}/lib:${HIVE_HOME}/bin:${HBASE_HOME}/bin:$PATH

    (The Scala, Spark, Sqoop, Hive, and HBase entries are only needed if those components are installed; note that HIVE_HOME must be exported before CLASSPATH references it.)

 

    source ~/.bashrc

 

5. Verify: running java and hadoop should print their usage output; only then is the environment configured correctly.
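
    For example:

    java -version
    hadoop version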

    

     

6. Set up passwordless SSH trust among the four hosts

    [hadoop3@master java]$ ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/hadoop3/.ssh/id_rsa):
    Created directory '/home/hadoop3/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/hadoop3/.ssh/id_rsa.
    Your public key has been saved in /home/hadoop3/.ssh/id_rsa.pub.
    The key fingerprint is:
    ff:8c:f1:ea:65:d6:db:09:bf:fb:20:dd:0d:6c:88:a0 hadoop3@master
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |       .         |
    |       . .. o    |
    |      E S. . +   |
    |        .   + o. |
    |         o =.+ o |
    |          X .o+. |
    |         .+.+.=* |
    +-----------------+

    cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

7. Copy the id_rsa.pub file to the other three slave hosts

[hadoop3@master java]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop3@slave1
The authenticity of host 'slave1 (192.168.1.65)' can't be established.
ECDSA key fingerprint is b1:0d:74:00:d0:3c:89:1f:48:26:e7:89:c7:e4:c2:3e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop3@slave1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop3@slave1'"
and check to make sure that only the key(s) you wanted were added.

 

Repeat the above step for the other two slaves (slave2 and slave3).

Verification:

Run ssh slave1; if it logs into slave1 without asking for a password, the trust is configured correctly.

[hadoop3@master java]$ ssh slave1

[hadoop3@slave1 ~]$

 

8. Hadoop configuration

Hadoop 3.0 requires editing the following files, all under $HADOOP_HOME/etc/hadoop: core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, hadoop-env.sh, workers, and masters.

 

 

Edit the core-site.xml configuration file:
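
A minimal sketch; the RPC port (9000) and the hadoop.tmp.dir path are placeholder choices, not values from the original post:

<configuration>
    <property>
        <!-- URI that every client and daemon uses to reach HDFS -->
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <!-- base directory for Hadoop working files (assumed path) -->
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>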

Edit hdfs-site.xml:
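
A minimal sketch; the replication factor and storage paths are placeholder choices:

<configuration>
    <property>
        <!-- copies of each block; must not exceed the number of DataNodes -->
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <!-- where the NameNode keeps its metadata (assumed path) -->
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <!-- where each DataNode stores block data (assumed path) -->
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
    </property>
</configuration>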

 

 

Edit the mapred-site.xml configuration:
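
A minimal sketch; the essential setting is running MapReduce on YARN:

<configuration>
    <property>
        <!-- submit MapReduce jobs to YARN instead of running them locally -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

On Hadoop 3.x, MapReduce jobs may additionally need HADOOP_MAPRED_HOME exposed via the yarn.app.mapreduce.am.env, mapreduce.map.env, and mapreduce.reduce.env properties.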

Edit yarn-site.xml:
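
A minimal sketch:

<configuration>
    <property>
        <!-- host running the ResourceManager -->
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <!-- shuffle service MapReduce needs on every NodeManager -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

In hadoop-env.sh, also set JAVA_HOME explicitly (export JAVA_HOME=/usr/local/java/jdk1.8.0_171), since daemons launched over SSH do not reliably inherit it from .bashrc.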

 

Edit workers (every host listed here runs a DataNode and NodeManager; including master makes it double as a worker):

master

slave1

slave2

slave3

 

Create a masters file:

master

 

 

 

After completing the above configuration, copy the entire /usr/local/java and /usr/local/hadoop directories to the other three machines and set up the same environment variables there.
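
A sketch of the copy, assuming hadoop3 can write to /usr/local on the slaves (otherwise copy to a temporary directory and move it into place with sudo):

for host in slave1 slave2 slave3; do
    scp -r /usr/local/java   hadoop3@$host:/usr/local/
    scp -r /usr/local/hadoop hadoop3@$host:/usr/local/
    scp ~/.bashrc hadoop3@$host:~/    # reuse the same environment variables
done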

 

9. Start the Hadoop cluster

On the master server, run hdfs namenode -format to format the NameNode and initialize the HDFS metadata for the cluster (only needed once, before the first start).

 

If the command finishes without errors, the format succeeded.

 

Run start-all.sh to start all services (it is on the PATH via $HADOOP_HOME/sbin).
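
To confirm the daemons came up, run jps on each node. With the workers file above, master should show NameNode, SecondaryNameNode, ResourceManager, DataNode, and NodeManager; each slave should show DataNode and NodeManager:

[hadoop3@master ~]$ jps
[hadoop3@master ~]$ ssh slave1 jps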

 

 

 


