I. Fully distributed configuration
Configure the cluster according to the table below:
     | hadoop100          | hadoop101                    | hadoop102
HDFS | NameNode, DataNode | DataNode                     | SecondaryNameNode, DataNode
YARN | NodeManager        | ResourceManager, NodeManager | NodeManager
1. Configure hadoop-env.sh
Get the JDK install path: echo $JAVA_HOME
/opt/module/jdk1.8.0_212
Set JAVA_HOME to that path in hadoop-env.sh:
export JAVA_HOME=/opt/module/jdk1.8.0_212
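A minimal sketch of doing this edit non-interactively with sed instead of vi (the path is the one from this guide; verify it on your machine). It is demonstrated here on a temporary copy so it is safe to run; point env_file at the real /opt/module/hadoop-3.1.3/etc/hadoop/hadoop-env.sh instead.

```shell
# Demonstrate on a throwaway copy; use the real hadoop-env.sh in practice.
env_file=$(mktemp)
printf '# export JAVA_HOME=\n' > "$env_file"
# Replace the (possibly commented-out) JAVA_HOME line with the real path.
sed -i 's|^#\{0,1\} *export JAVA_HOME=.*|export JAVA_HOME=/opt/module/jdk1.8.0_212|' "$env_file"
grep '^export JAVA_HOME=' "$env_file"
```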
2. Configure core-site.xml
Change to the config directory:
cd /opt/module/hadoop-3.1.3/etc/hadoop/
Edit the file:
vi core-site.xml
File contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop100:9820</value>
</property>
<!-- hadoop.data.dir is a custom property; the config files below reference it -->
<property>
<name>hadoop.data.dir</name>
<value>/opt/module/hadoop-3.1.3/data</value>
</property>
</configuration>
3. Configure hdfs-site.xml
vi hdfs-site.xml
File contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- NameNode data directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file://${hadoop.data.dir}/name</value>
</property>
<!-- DataNode data directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file://${hadoop.data.dir}/data</value>
</property>
<!-- Secondary NameNode data directory -->
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file://${hadoop.data.dir}/namesecondary</value>
</property>
<!-- DataNode restart timeout of 30s; works around a compatibility issue (optional) -->
<property>
<name>dfs.client.datanode-restart.timeout</name>
<value>30</value>
</property>
<!-- Web UI address of the NameNode -->
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop100:9870</value>
</property>
<!-- Web UI address of the Secondary NameNode -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop102:9868</value>
</property>
</configuration>
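How the ${hadoop.data.dir} placeholder resolves: at load time Hadoop substitutes the value defined in core-site.xml into the hdfs-site.xml values. A tiny sketch of that expansion for the NameNode directory, using the values from the two files above:

```shell
# Mimic Hadoop's property substitution for dfs.namenode.name.dir.
hadoop_data_dir=/opt/module/hadoop-3.1.3/data      # value from core-site.xml
raw='file://${hadoop.data.dir}/name'               # value from hdfs-site.xml
resolved=$(printf '%s' "$raw" | sed "s|\${hadoop.data.dir}|$hadoop_data_dir|")
echo "$resolved"   # file:///opt/module/hadoop-3.1.3/data/name
```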
4. Configure yarn-site.xml
vi yarn-site.xml
File contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop101</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
5. Configure mapred-site.xml
vi mapred-site.xml
File contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
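Before distributing, it can save a restart cycle to check that each edited file is still well-formed XML. A sketch, demonstrated on an inline sample for safety; in practice loop over core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml in etc/hadoop:

```shell
# Validate well-formedness with Python's stdlib XML parser (assumes python3).
sample=$(mktemp)
cat > "$sample" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$sample" \
  && check=ok || check=broken
echo "$sample: $check"
```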
II. Distribute the configuration
1. Copy the etc/hadoop/ directory to hadoop101:
[root@hadoop100 ~]# cd /opt/module/hadoop-3.1.3/etc
[root@hadoop100 etc]# scp -r hadoop/ root@hadoop101:/opt/module/hadoop-3.1.3/etc/
2. Copy the etc/hadoop/ directory to hadoop102:
[root@hadoop100 etc]# scp -r hadoop/ root@hadoop102:/opt/module/hadoop-3.1.3/etc/
3. Copy /etc/profile from hadoop102 to hadoop100 and hadoop101:
[root@hadoop102 opt]# rsync -av /etc/profile hadoop101:/etc
[root@hadoop102 opt]# rsync -av /etc/profile hadoop100:/etc
4. Run source /etc/profile on both hadoop100 and hadoop101:
[root@hadoop100 opt]# source /etc/profile
[root@hadoop101 opt]# source /etc/profile
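The per-host copies above can also be driven by one loop. A sketch, printed as a dry run for safety (pipe the lines to sh, or drop the echo inside, to actually copy); hostnames and paths are the ones used in this guide:

```shell
# Build the list of scp commands for every non-local node.
conf_parent=/opt/module/hadoop-3.1.3/etc
plan=$(for host in hadoop101 hadoop102; do
  echo "scp -r $conf_parent/hadoop root@${host}:$conf_parent/"
done)
printf '%s\n' "$plan"
```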
III. Format the cluster
The cluster must be formatted before its first start.
Before formatting, delete the data and logs directories under the Hadoop install directory on all three servers, e.g. on hadoop101:
[root@hadoop101 opt]# cd /opt/module/hadoop-3.1.3
[root@hadoop101 hadoop-3.1.3]# rm -rf data
[root@hadoop101 hadoop-3.1.3]# rm -rf logs
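The same cleanup can be done on all three nodes in one pass over ssh (this assumes passwordless ssh as root, which the scp/rsync steps above also rely on). Shown as a dry run via echo; remove the echo only after double-checking the paths, since rm -rf is unforgiving:

```shell
# Print the cleanup command for each node without executing it.
hadoop_home=/opt/module/hadoop-3.1.3
cleanup_plan=$(for host in hadoop100 hadoop101 hadoop102; do
  echo "ssh root@${host} rm -rf ${hadoop_home}/data ${hadoop_home}/logs"
done)
printf '%s\n' "$cleanup_plan"
```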
Run the format on the server designated to run the NameNode
(per the table above, the NameNode runs on hadoop100):
[root@hadoop100 hadoop-3.1.3]# hdfs namenode -format
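Why the data directories must be cleared first: hdfs namenode -format writes a fresh clusterID into current/VERSION under the NameNode directory, and DataNodes that already stored a different clusterID will refuse to join. A sketch of that layout on a mock directory (the real path here would be /opt/module/hadoop-3.1.3/data/name; the ID below is a stand-in value):

```shell
# Mock the NameNode metadata layout produced by formatting.
mock=$(mktemp -d)
mkdir -p "$mock/name/current"
echo 'clusterID=CID-demo-1234' > "$mock/name/current/VERSION"   # stand-in ID
cluster_id=$(grep '^clusterID=' "$mock/name/current/VERSION")
echo "$cluster_id"
```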
IV. Start the cluster daemon by daemon
1. hadoop100:
hdfs --daemon start namenode
hdfs --daemon start datanode
yarn --daemon start nodemanager
2. hadoop101:
yarn --daemon start resourcemanager
hdfs --daemon start datanode
yarn --daemon start nodemanager
3. hadoop102:
hdfs --daemon start secondarynamenode
hdfs --daemon start datanode
yarn --daemon start nodemanager
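The per-host commands above can be expressed as one script that prints the right start sequence for whichever host it is given (normally $(hostname)). Every node runs a DataNode and a NodeManager; the master daemons follow the role table at the top of this guide:

```shell
# Print the daemon start commands for a given host, per the role table.
start_plan() {
  case "$1" in
    hadoop100) echo "hdfs --daemon start namenode" ;;
    hadoop101) echo "yarn --daemon start resourcemanager" ;;
    hadoop102) echo "hdfs --daemon start secondarynamenode" ;;
  esac
  echo "hdfs --daemon start datanode"
  echo "yarn --daemon start nodemanager"
}
start_plan hadoop100
```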