Hadoop 3.1.2 Fully Distributed Cluster Deployment, Step by Step (CentOS 7.6.1810)

Prerequisites

JDK version

[root@localhost hadoop]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)

Hosts configuration (add these entries to /etc/hosts on all three machines)

172.16.131.21 master
172.16.131.22 node1
172.16.131.23 node2
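A heredoc makes appending these mappings repeatable. The snippet below writes to a scratch file so it can be tried safely first; on the real nodes, point `HOSTS` at /etc/hosts and run as root:

```shell
# Demo target; on the real nodes use HOSTS=/etc/hosts (as root).
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
172.16.131.21 master
172.16.131.22 node1
172.16.131.23 node2
EOF
cat "$HOSTS"
```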

Set the hostnames

Set the hostname on each of the three machines (run the matching command on its own node):

hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
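`hostnamectl` applies the static name immediately, without a reboot. A quick check on each node:

```shell
# Print the active hostname; it should match the name set above.
hostname
```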

Passwordless SSH

1. Run the following on each of the three machines:
ssh-keygen -t rsa

[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:DGvT+sqwkGNR8DVbIe/vd+hzfXCHFh/7SJw7Rgg9utY root@node2
The key's randomart image is:
+---[RSA 3072]----+
|  .   + o.       |
|   o . *         |
|    o o .  .     |
|   .   *  . o .. |
|  .   + S  o + =o|
|   o . o .. . O.+|
|  = . .   .o = *o|
| . o + . .o E B +|
|    . o....o.= ..|
+----[SHA256]-----+
[root@localhost ~]# cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[root@localhost ~]# chmod 0600  .ssh/authorized_keys
[root@localhost ~]# ssh localhost

2. On each machine, push the public key to the other nodes
Skip the machine's own hostname when running these:

ssh-copy-id -i .ssh/id_rsa.pub master
ssh-copy-id -i .ssh/id_rsa.pub node1
ssh-copy-id -i .ssh/id_rsa.pub node2
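After distributing the keys, it is worth confirming that every hop really is passwordless before starting the cluster. `BatchMode=yes` makes ssh fail instead of prompting, so a broken node shows up as FAILED rather than hanging (hostnames per the /etc/hosts entries above):

```shell
# Probe each node; BatchMode=yes disables password prompts so failures are immediate.
result=$(for h in master node1 node2; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
    echo "OK: $h"
  else
    echo "FAILED: $h"
  fi
done)
echo "$result"
```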

Configure Hadoop

  1. Hadoop installation directory
    /opt/hadoop

  2. Hadoop configuration file directory
    /opt/hadoop/etc/hadoop

  3. Configuration files to edit

    1. hadoop-env.sh
    2. core-site.xml
    3. hdfs-site.xml
    4. yarn-site.xml
    5. mapred-site.xml
    6. workers
  4. Changes

    1. hadoop-env.sh
      Check JAVA_HOME and add the following lines:
      export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64/
      export HDFS_NAMENODE_USER="root"
      export HDFS_DATANODE_USER="root"
      export HDFS_SECONDARYNAMENODE_USER="root"
      export YARN_RESOURCEMANAGER_USER="root"
      export YARN_NODEMANAGER_USER="root"
[root@localhost hadoop]# echo $JAVA_HOME
/usr/java/jdk1.8.0_221-amd64
[root@localhost hadoop]# vim hadoop-env.sh
 2. core-site.xml
<configuration>
  <property>
      <name>fs.defaultFS</name>
      <value>hdfs://master:9000</value>
  </property>
  
  <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp/data/</value>
  </property>

  <property>
      <name>io.file.buffer.size</name>
      <value>65536</value>
  </property>
</configuration>

 3. hdfs-site.xml
<configuration>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop/hdfs/name/</value>
</property>

<property>
  <name>dfs.namenode.handler.count</name>
  <value>100</value>
</property>

<!-- Configurations for DataNode: -->

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/hadoop/hdfs/data/</value>
</property>

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
</configuration>

 4. yarn-site.xml
<configuration>
  <property>
          <name>yarn.resourcemanager.hostname</name>
          <value>master</value>
  </property>
  <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
  </property>
</configuration>
 5. mapred-site.xml
<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
</configuration>

 6. workers
node1
node2
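The workers file must contain exactly one hostname per line with no stray tabs or spaces, or the start scripts will try to resolve garbage names. One way to write it cleanly (the path below is a demo target; the real file is /opt/hadoop/etc/hadoop/workers):

```shell
# Demo path; on the cluster write to /opt/hadoop/etc/hadoop/workers instead.
WORKERS=$(mktemp)
printf '%s\n' node1 node2 > "$WORKERS"
cat "$WORKERS"
```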
 7. Copy the configured Hadoop directory to the other nodes
scp -r hadoop node1:/opt/
scp -r hadoop node2:/opt/
 8. Set environment variables on all three machines

vim /etc/profile
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
9. Format the filesystem (run on master)

[root@master bin]# cd /opt/hadoop/bin/
[root@master bin]# ./hdfs namenode -format

 10. Start HDFS and YARN (on master)

/opt/hadoop/sbin/start-dfs.sh
/opt/hadoop/sbin/start-yarn.sh
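A rough sanity check after startup: look for the expected daemons in `jps` output on master (assumes `jps` is on the PATH; on the worker nodes the expected list would be DataNode and NodeManager instead):

```shell
# Check master-side daemons; prints one status line per expected process.
status=$(for d in NameNode SecondaryNameNode ResourceManager; do
  if jps 2>/dev/null | grep -q "$d"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
  fi
done)
echo "$status"
```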

Verify the installation

All three of my machines also run Tomcat, so a **Bootstrap** process shows up in the jps output; normally that process is not there.
Run jps on master
(screenshot)
Run jps on node1
(screenshot)
Run jps on node2
(screenshot)
Open the HDFS web UI: http://172.16.131.21:9870/
(screenshot)
Open the YARN web UI: http://172.16.131.21:8088/cluster
(screenshot)
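The two web UIs can also be probed from the command line. `curl -sf` returns non-zero when a page is unreachable, so the helper below reports each URL's status without aborting the shell (IPs as in this cluster, default Hadoop 3.x ports):

```shell
# Probe a URL and report status; -f makes curl treat HTTP errors as failures.
check_ui() {
  if curl -sf -o /dev/null --max-time 5 "$1"; then
    echo "OK: $1"
  else
    echo "UNREACHABLE: $1"
  fi
}
check_ui http://172.16.131.21:9870/
check_ui http://172.16.131.21:8088/cluster
```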
