With the recent growth of the company's business, the main technical goal of a round of refactoring has been the construction of dual middle platforms, which is what prompted this article.
# Data Middle Platform
## 1. Environment Preparation
### 1.1. Prerequisites
#### Linux wget
##### Installing via rpm
Download the wget RPM package:
http://mirrors.163.com/centos/6.8/os/x86_64/Packages/wget-1.12-8.el6.x86_64.rpm
Then run:
rpm -ivh wget-1.12-8.el6.x86_64.rpm
##### Installing via yum
yum -y install wget
#### JDK Installation
Install JDK 1.8 via wget.
1) Copy the download link from the official Oracle website.
2) Run the following command:
[root@localhost local]# wget -P jdk https://download.oracle.com/otn/java/jdk/8u301-b09/d3c52aa6bfa54d3ca74e617f18309292/jdk-8u301-linux-x64.tar.gz?AuthParam=1630898005_b5ee943a5a2ac6194eda3334c4ec7fb0
If the download returns a 403, use the following workaround:
wget -U NoSuchBrowser/1.0 <JDK download URL>
3) Extract to the target directory:
tar -zxvf jdk-8u301-linux-x64.tar.gz -C /usr/local/jdk
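Note: the -C option of tar requires the target directory to exist. On a fresh machine you may first need to create it (path taken from this guide):
mkdir -p /usr/local/jdk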
4) Configure the environment variables:
vim /etc/profile #open the environment variable configuration file; take care to get the path of this profile file right
Move the cursor to the end of the file and append the following:
#==========ADD JAVA_HOME=============
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_301
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
#press Esc to leave insert mode, then type :wq to save and quit
5) After modifying the environment variables, reload the configuration for the changes to take effect:
source /etc/profile
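As a quick sanity check that the JDK is now on the PATH (the exact version and build strings may differ on your machine):
java -version        #should report java version "1.8.0_301"
echo $JAVA_HOME      #should print /usr/local/jdk/jdk1.8.0_301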
6) Stop the firewall and disable it at boot.
> Firewall management commands
Start: systemctl start firewalld
Stop: systemctl stop firewalld
Check status: systemctl status firewalld
Disable at boot: systemctl disable firewalld
Enable at boot: systemctl enable firewalld
### 1.2. Installing a Hadoop Cluster with HDFS, MapReduce, and YARN
#### Physical Environment
|Host|IP|HDFS|YARN|
|----|----|----|----|
|hadoopMaster|192.168.11.151|NameNode<br>DataNode|NodeManager|
|hadoopSlave0|192.168.11.104|DataNode<br>SecondaryNameNode|NodeManager|
|hadoopSlave1|192.168.11.163|DataNode|NodeManager<br>ResourceManager|
###### Set the hostnames of the three machines and map the IPs to the names
hostname #check the current hostname
hostnamectl set-hostname hadoopMaster #set the hostname
reboot #reboot the virtual machine
vim /etc/hosts #configure this on every machine
192.168.11.151 hadoopMaster
192.168.11.104 hadoopSlave0
192.168.11.163 hadoopSlave1
#reboot
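As a quick check (hostnames and IPs taken from the table above), every machine should now be able to reach the others by name:
ping -c 1 hadoopMaster
ping -c 1 hadoopSlave0
ping -c 1 hadoopSlave1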
###### The hadoopMaster node must be able to SSH into each hadoopSlave node without a password
1) ssh-keygen -t rsa #generate an SSH key pair without a passphrase prompt (a non-interactive variant is sketched after this list)
2) ssh-copy-id hadoopMaster
3) ssh-copy-id hadoopSlave0
4) ssh-copy-id hadoopSlave1 #copy the key to each node
5) ssh hadoopSlave1 #verify passwordless login
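A non-interactive sketch of the same key setup, assuming the root account is used on all three nodes as elsewhere in this guide:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa   #generate a key pair without a passphrase prompt
for host in hadoopMaster hadoopSlave0 hadoopSlave1; do ssh-copy-id root@$host; done   #push the public key to every node
ssh hadoopSlave1 hostname   #should print hadoopSlave1 without asking for a password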
###### Hadoop Deployment
1) Use Hadoop 3.3.1.
2) Installation directory: /usr/local/hadoop/
3) Extract the archive into the installation directory (creating /usr/local/hadoop first if it does not exist):
tar -zxvf hadoop-3.3.1.tar.gz -C /usr/local/hadoop
4) Add the Hadoop environment variables. (For cluster mode, the configuration files under /usr/local/hadoop/hadoop-3.3.1/etc/hadoop also have to be modified, namely workers, core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml; this is done in step 7 below.)
#vim /etc/profile #edit the profile and add the Hadoop environment variables
Append the following at the end of the file:
#HADOOP_HOME
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
5) Apply the configuration immediately:
source /etc/profile
6) Test Hadoop. If the following output appears, the configuration succeeded:
[root@localhost /]# hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
or hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
where CLASSNAME is a user-provided Java class
OPTIONS is none or any of:
buildpaths attempt to add class files from build tree
--config dir Hadoop config directory
--debug turn on shell script debug mode
--help usage information
hostnames list[,of,host,names] hosts to use in slave mode
hosts filename list of hosts to use in slave mode
loglevel level set the log4j level for this command
workers turn on worker mode
SUBCOMMAND is one of:
Admin Commands:
daemonlog get/set the log level for each daemon
Client Commands:
archive create a Hadoop archive
checknative check native Hadoop and compression libraries availability
classpath prints the class path needed to get the Hadoop jar and the required libraries
conftest validate configuration XML files
credential interact with credential providers
distch distributed metadata changer
distcp copy file or directories recursively
dtutil operations related to delegation tokens
envvars display computed Hadoop environment variables
fs run a generic filesystem user client
gridmix submit a mix of synthetic job, modeling a profiled from production load
jar <jar> run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this command.
jnipath prints the java.library.path
kdiag Diagnose Kerberos Problems
kerbname show auth_to_local principal conversion
key manage keys via the KeyProvider
rumenfolder scale a rumen input trace
rumentrace convert logs into a rumen trace
s3guard manage metadata on S3
trace view and modify Hadoop tracing settings
version print the version
***7) Modify the Hadoop configuration files***
***7-1) Modify hadoop-env.sh***
#cd /usr/local/hadoop/hadoop-3.3.1/etc/hadoop
#vim hadoop-env.sh
#append at the end of the file
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_301
***7-2) Modify yarn-env.sh***
#cd /usr/local/hadoop/hadoop-3.3.1/etc/hadoop
#vim yarn-env.sh
#append at the end of the file
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_301
***7-3) Modify mapred-env.sh***
#cd /usr/local/hadoop/hadoop-3.3.1/etc/hadoop
#vim mapred-env.sh
#append at the end of the file
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_301
***7-4) Modify core-site.xml***
fs.defaultFS: the HDFS URI through which clients talk to the NameNode, given as a host plus a port
hadoop.tmp.dir: the directory where the Hadoop cluster stores temporary files while running
#vim core-site.xml
#add the following between <configuration> and </configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoopMaster:9000</value>
</property>
<!-- Directory for files generated while Hadoop is running -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/hadoop-3.3.1/data/tmp</value>
</property>
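It can be worth pre-creating the hadoop.tmp.dir directory on every node so that permission problems show up early (path taken from the configuration above); this step is optional, as Hadoop will normally create it on first use:
mkdir -p /usr/local/hadoop/hadoop-3.3.1/data/tmp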
***7-5) Modify hdfs-site.xml***
dfs.namenode.name.dir: where the NameNode stores its data, i.e. the metadata
dfs.datanode.data.dir: where DataNodes store their data, i.e. the blocks
dfs.replication: the HDFS replication factor, which defaults to 3
dfs.namenode.secondary.http-address: the node that runs the SecondaryNameNode; it should be a different node from the NameNode
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoopSlave0:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
</configuration>
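Likewise, the NameNode and DataNode directories can be created ahead of time (paths taken from the configuration above); the format step in 10) below also creates the NameNode directory, so this is optional:
mkdir -p /usr/local/hadoop/tmp/dfs/name   #on the NameNode host (hadoopMaster)
mkdir -p /usr/local/hadoop/tmp/dfs/data   #on every DataNode host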
***7-6) Modify yarn-site.xml***
#vim yarn-site.xml
#add the following between <configuration> and </configuration>
<!-- How reducers fetch data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Address of the YARN ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoopSlave1</value>
</property>
***7-7) Modify mapred-site.xml***
mapreduce.framework.name: run MapReduce on YARN
mapreduce.jobhistory.address: address and port of the JobHistory server
mapreduce.jobhistory.webapp.address: web address for viewing the records of completed MapReduce jobs; the JobHistory service must be running for this to work
#vim mapred-site.xml
#add the following between <configuration> and </configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
*An alternative, fuller configuration*
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>192.168.11.151:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>192.168.11.151:19888</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/usr/local/hadoop/hadoop-3.3.1</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/usr/local/hadoop/hadoop-3.3.1</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/usr/local/hadoop/hadoop-3.3.1</value>
</property>
</configuration>
***7-8) Modify the workers file***
This file lists the nodes that act as data nodes. It contains localhost by default; we delete that and add the worker hostnames hadoopSlave0 and hadoopSlave1. The master can also be listed so that it serves as both a name node and a data node, and in our configuration hadoopMaster is indeed added as a data node as well.
#vim workers
#replace the contents with the three hostnames
hadoopMaster
hadoopSlave0
hadoopSlave1
8) Distribute the files
After modifying the five files above, copy the Hadoop directory from the hadoopMaster node to the other nodes. Run the following commands on hadoopMaster to copy /usr/local/hadoop from the master into the /usr/local/ directory on hadoopSlave0 and hadoopSlave1:
scp -r /usr/local/hadoop/ root@hadoopSlave0:/usr/local/
scp -r /usr/local/hadoop/ root@hadoopSlave1:/usr/local/
9) Sync the /etc/profile file to the other nodes (on hadoopMaster)
#1) Remote sync: copy the profile from the hadoopMaster node to hadoopSlave0 and hadoopSlave1
rsync -rvl /etc/profile root@hadoopSlave0:/etc/profile
rsync -rvl /etc/profile root@hadoopSlave1:/etc/profile
#2) Show the tail of the modified /etc/profile on each node to check whether the sync succeeded
#tail /etc/profile
#3) Apply immediately
#source /etc/profile
#javadoc #test the JDK
#hadoop #test Hadoop
#4) Check that the hadoop workers file has the same contents on every node
#cat /usr/local/hadoop/hadoop-3.3.1/etc/hadoop/workers
10) Initialize Hadoop
HDFS must be formatted on the master node only:
cd /usr/local/hadoop/hadoop-3.3.1
./bin/hdfs namenode -format
The following output indicates that the format succeeded:
[root@hadoopmaster hadoop-3.3.1]# ./bin/hdfs namenode -format
WARNING: /usr/local/hadoop/hadoop-3.3.1/logs does not exist. Creating.
2021-09-06 23:56:20,147 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoopMaster/192.168.11.151
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.3.1
STARTUP_MSG: classpath = /usr/local/hadoop/hadoop-3.3.1/etc/hadoop:... (several hundred jar paths under /usr/local/hadoop/hadoop-3.3.1/share/hadoop omitted for brevity)
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2; compiled by 'ubuntu' on 2021-06-15T05:13Z
STARTUP_MSG: java = 1.8.0_301
************************************************************/
2021-09-06 23:56:20,156 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-09-06 23:56:20,299 INFO namenode.NameNode: createNameNode [-format]
2021-09-06 23:56:21,306 INFO namenode.NameNode: Formatting using clusterid: CID-6137e91b-b079-42d9-a1ae-2fc3e410f0d2
2021-09-06 23:56:21,340 INFO namenode.FSEditLog: Edit logging is async:true
2021-09-06 23:56:21,367 INFO namenode.FSNamesystem: KeyProvider: null
2021-09-06 23:56:21,368 INFO namenode.FSNamesystem: fsLock is fair: true
2021-09-06 23:56:21,369 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-09-06 23:56:21,402 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
2021-09-06 23:56:21,402 INFO namenode.FSNamesystem: supergroup = supergroup
2021-09-06 23:56:21,402 INFO namenode.FSNamesystem: isPermissionEnabled = true
2021-09-06 23:56:21,402 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2021-09-06 23:56:21,403 INFO namenode.FSNamesystem: HA Enabled: false
2021-09-06 23:56:21,451 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-09-06 23:56:21,462 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-09-06 23:56:21,462 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-09-06 23:56:21,466 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-09-06 23:56:21,466 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Sep 06 23:56:21
2021-09-06 23:56:21,468 INFO util.GSet: Computing capacity for map BlocksMap
2021-09-06 23:56:21,468 INFO util.GSet: VM type = 64-bit
2021-09-06 23:56:21,486 INFO util.GSet: 2.0% max memory 3.4 GB = 70.6 MB
2021-09-06 23:56:21,486 INFO util.GSet: capacity = 2^23 = 8388608 entries
2021-09-06 23:56:21,509 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2021-09-06 23:56:21,509 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-09-06 23:56:21,516 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2021-09-06 23:56:21,516 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-09-06 23:56:21,516 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-09-06 23:56:21,516 INFO blockmanagement.BlockManager: defaultReplication = 3
2021-09-06 23:56:21,517 INFO blockmanagement.BlockManager: maxReplication = 512
2021-09-06 23:56:21,517 INFO blockmanagement.BlockManager: minReplication = 1
2021-09-06 23:56:21,517 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2021-09-06 23:56:21,517 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2021-09-06 23:56:21,517 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2021-09-06 23:56:21,517 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2021-09-06 23:56:21,541 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2021-09-06 23:56:21,541 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2021-09-06 23:56:21,541 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2021-09-06 23:56:21,541 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2021-09-06 23:56:21,554 INFO util.GSet: Computing capacity for map INodeMap
2021-09-06 23:56:21,554 INFO util.GSet: VM type = 64-bit
2021-09-06 23:56:21,554 INFO util.GSet: 1.0% max memory 3.4 GB = 35.3 MB
2021-09-06 23:56:21,554 INFO util.GSet: capacity = 2^22 = 4194304 entries
2021-09-06 23:56:21,567 INFO namenode.FSDirectory: ACLs enabled? true
2021-09-06 23:56:21,567 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-09-06 23:56:21,567 INFO namenode.FSDirectory: XAttrs enabled? true
2021-09-06 23:56:21,568 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-09-06 23:56:21,573 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-09-06 23:56:21,575 INFO snapshot.SnapshotManager: SkipList is disabled
2021-09-06 23:56:21,579 INFO util.GSet: Computing capacity for map cachedBlocks
2021-09-06 23:56:21,579 INFO util.GSet: VM type = 64-bit
2021-09-06 23:56:21,580 INFO util.GSet: 0.25% max memory 3.4 GB = 8.8 MB
2021-09-06 23:56:21,580 INFO util.GSet: capacity = 2^20 = 1048576 entries
2021-09-06 23:56:21,588 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-09-06 23:56:21,588 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-09-06 23:56:21,588 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-09-06 23:56:21,592 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-09-06 23:56:21,592 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-09-06 23:56:21,594 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-09-06 23:56:21,594 INFO util.GSet: VM type = 64-bit
2021-09-06 23:56:21,594 INFO util.GSet: 0.029999999329447746% max memory 3.4 GB = 1.1 MB
2021-09-06 23:56:21,594 INFO util.GSet: capacity = 2^17 = 131072 entries
2021-09-06 23:56:21,622 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1133261968-192.168.11.151-1630986981613
2021-09-06 23:56:21,664 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
2021-09-06 23:56:21,691 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-09-06 23:56:21,802 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
2021-09-06 23:56:21,816 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-09-06 23:56:21,857 INFO namenode.FSNamesystem: Stopping services started for active state
2021-09-06 23:56:21,858 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-09-06 23:56:21,862 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-09-06 23:56:21,862 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoopMaster/192.168.11.151
************************************************************/
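Besides the "successfully formatted" line above, a simple way to double-check the result is to look at the metadata directory that the format created:
ls /usr/local/hadoop/tmp/dfs/name/current/
#should list fsimage_0000000000000000000, its .md5 file, seen_txid and VERSION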
11) Start the Hadoop cluster
Run the following commands on the hadoopMaster node:
cd /usr/local/hadoop/hadoop-3.3.1
./sbin/start-dfs.sh
./sbin/start-yarn.sh
./sbin/mr-jobhistory-daemon.sh start historyserver
The command mr-jobhistory-daemon.sh start historyserver works, but it prints two deprecation warnings; use mapred --daemon start historyserver instead.
Startup output:
[root@hadoopmaster hadoop-3.3.1]# ./sbin/start-dfs.sh
Starting namenodes on [hadoopMaster]
Last login: Tue Sep 7 01:46:22 EDT 2021 from ldap-studenti.uniparthenope.it on pts/3
Starting datanodes
Last login: Tue Sep 7 01:47:50 EDT 2021 on pts/3
hadoopSlave1: WARNING: /usr/local/hadoop/hadoop-3.3.1/logs does not exist. Creating.
hadoopSlave0: WARNING: /usr/local/hadoop/hadoop-3.3.1/logs does not exist. Creating.
Starting secondary namenodes [hadoopSlave0]
Last login: Tue Sep 7 01:47:53 EDT 2021 on pts/3
[root@hadoopmaster hadoop-3.3.1]# ./sbin/start-yarn.sh
Starting resourcemanager
Last login: Tue Sep 7 01:47:57 EDT 2021 on pts/3
Starting nodemanagers
Last login: Tue Sep 7 01:50:14 EDT 2021 on pts/3
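To verify that every daemon came up, run jps on each node. With the layout used in this guide you would expect roughly the following processes (PIDs will differ):
jps
#hadoopMaster : NameNode, DataNode, NodeManager, JobHistoryServer
#hadoopSlave0 : DataNode, SecondaryNameNode, NodeManager
#hadoopSlave1 : DataNode, NodeManager, ResourceManager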
12) What to do if the startup reports errors
Add the following at the top of sbin/start-dfs.sh and sbin/stop-dfs.sh:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Add the following at the top of sbin/start-yarn.sh and sbin/stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
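An alternative that avoids editing the start/stop scripts is to export the same user variables once in etc/hadoop/hadoop-env.sh (or in /etc/profile), for example:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root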
13) The web UIs
Visit hadoopMaster:9870 to access the Hadoop HDFS file system.
Note: in Hadoop 3.x.x the NameNode web UI port changed to 9870; it is no longer the old 50070.
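For reference, the web UIs of this deployment, with the hostnames and ports configured above:
#HDFS NameNode UI     : http://hadoopMaster:9870
#YARN ResourceManager : http://hadoopSlave1:8088/cluster/nodes
#JobHistory server    : http://192.168.11.151:19888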
14) One last pitfall
I configured YARN's ResourceManager on the hadoopSlave1 machine, so YARN should not be started on the master; it has to be started on hadoopSlave1:
/usr/local/hadoop/hadoop-3.3.1/sbin
[root@hadoopslave1 sbin]# start-yarn.sh
Starting resourcemanager
Last login: Tue Sep 7 01:56:35 EDT 2021 from ldap-studenti.uniparthenope.it on pts/1
Starting nodemanagers
Last login: Tue Sep 7 03:56:25 EDT 2021 on pts/1
hadoopSlave1: Warning: Permanently added 'hadoopslave1,192.168.11.163' (ECDSA) to the list of known hosts.
hadoopSlave0: Warning: Permanently added 'hadoopslave0,192.168.11.104' (ECDSA) to the list of known hosts.
hadoopMaster: Warning: Permanently added 'hadoopmaster,192.168.11.151' (ECDSA) to the list of known hosts.
hadoopSlave1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
hadoopSlave0: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
hadoopMaster: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[root@hadoopslave1 sbin]#
The Permission denied lines above simply mean that hadoopSlave1 also needs passwordless SSH to every node, set up the same way as for hadoopMaster; once that is in place the ResourceManager and NodeManagers start cleanly, and the cluster can finally be happily browsed at: http://192.168.11.163 (hadoopSlave1):8088/cluster/nodes
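A minimal smoke test of the running cluster; the pi example job assumes the stock examples jar shipped with Hadoop 3.3.1, and the HDFS paths are only illustrative:
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put /etc/hosts /tmp/smoke/
hdfs dfs -ls /tmp/smoke
yarn jar /usr/local/hadoop/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 2 10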
15) Starting and stopping the Hadoop cluster
#Start the cluster
#cd /usr/local/hadoop/hadoop-3.3.1/sbin/
#start-dfs.sh
Run start-dfs.sh on the hadoopMaster node first.
#cd /usr/local/hadoop/hadoop-3.3.1/sbin/
#start-yarn.sh
After start-dfs.sh has finished on the hadoopMaster node, run start-yarn.sh on the hadoopSlave1 node.
#jps #check the started processes on hadoopMaster, hadoopSlave0 and hadoopSlave1 together
#Stop the cluster
#stop-yarn.sh
Run stop-yarn.sh on the hadoopSlave1 node first.
#stop-dfs.sh
After stop-yarn.sh has finished on the hadoopSlave1 node, run stop-dfs.sh on the hadoopMaster node.
*Or, equivalently:*
cd /usr/local/hadoop/hadoop-3.3.1
./sbin/stop-yarn.sh
./sbin/stop-dfs.sh
./sbin/mr-jobhistory-daemon.sh stop historyserver
Finally: follow-up posts will cover other hands-on aspects of the data middle platform, such as real-time processing, CDC, data lakes and search engines. Corrections and suggestions from fellow practitioners are very welcome.