Hadoop Massive-Scale Distributed Storage

Lab environment:

3 virtual machines, CentOS 7.5.1804

JDK 1.8 downloaded from the Oracle site rather than installed via yum (https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)

Hadoop is version 3.2, downloaded from the Tsinghua University mirror (https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/)

Hadoop official documentation: https://hadoop.apache.org/docs/r3.2.0/

The hands-on Hadoop lab is at the bottom; everything was done by hand, with no major problems, following online tutorials.


 

I. Hadoop overview;

II. Hadoop components: HDFS + MapReduce

III. Case study: deploying a Hadoop distributed storage cluster;

 

I. Hadoop overview:

1. A quick primer on big data:

1) Big data refers to data sets that cannot be captured, managed, and processed within a reasonable time using conventional software tools. It is a high-volume, fast-growing, and diverse information asset that requires new processing models to deliver stronger decision-making power, insight, and process optimization; the goal is to extract, manage, process, and organize it, within a reasonable time, into information that actively supports business decisions;

2) In "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier, big data means analyzing all of the data rather than taking the shortcut of random sampling (surveys);

3) The 5 Vs of big data (proposed by IBM): Volume, Velocity, Variety, Value (low value density), and Veracity.

 

2. Big data, illustrated:

 

http://www.ruanyifeng.com/blog/2017/07/iaas-paas-saas.html

3. Project origins:

  Hadoop was formally introduced by the Apache Software Foundation in the fall of 2005, as part of Nutch, a sub-project of Lucene. It was inspired by Map/Reduce and the Google File System (GFS), both first developed at Google Labs. In March 2006, Map/Reduce and the Nutch Distributed File System (NDFS) were folded into the project now known as Hadoop. Hadoop is a software framework for distributed processing of large amounts of data.

  Hadoop is best known as a tool for classifying search keyword content on the Internet, but it can also solve many problems that demand extreme scalability. For example, what happens if you need to grep a giant 10 TB file? On a traditional system this takes a very long time, but Hadoop was designed with such problems in mind and uses parallel execution to greatly speed things up.
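As a concrete illustration (a sketch only; it needs the cluster built in part III below to be running), the examples jar that ships with Hadoop 3.2.0 includes a parallel grep job. The paths assume the /usr/local/hadoop layout used later in this article.

## upload a large text file into HDFS and grep it in parallel with the bundled example jar
hadoop fs -mkdir -p /grep-input
hadoop fs -put /var/log/messages /grep-input/
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar \
    grep /grep-input /grep-output 'ERROR.*'
hadoop fs -cat /grep-output/part-r-00000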

 

4. Hadoop advantages:

1) Highly reliable: good at storing very large files on clusters of inexpensive commodity machines;

2) Highly scalable: Hadoop distributes data and computation across the available clusters of machines, and those clusters can easily be grown to thousands of nodes.

3) Efficient: processing is fast.

4) Highly fault-tolerant: Hadoop automatically keeps multiple replicas of data and automatically reassigns failed tasks.

5) Low cost: Hadoop is open source, which greatly reduces the software cost of a project.

6) Hadoop's framework is written in Java, so it is ideal to run on Linux production platforms. Applications on Hadoop can also be written in other languages, such as C++.

Supplement: storage units everyone in cloud computing and big data must know (each step converts at 1024 = 2^10; 1 B = 8 b; 1 Chinese character = 2 B in common double-byte encodings such as GBK)

bit (b) -- byte (B) -- KB -- MB -- GB -- TB -- PB -- EB -- ZB -- YB -- BB -- NB -- DB
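A quick sanity check of the 1024-based steps with shell arithmetic:

echo $((2**10))   ## 1 KB in bytes = 1024
echo $((2**20))   ## 1 MB in bytes = 1048576
echo $((2**30))   ## 1 GB in bytes = 1073741824
echo $((2**40))   ## 1 TB in bytes = 1099511627776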

 

5. Hadoop limitations:

1) Low-latency data access: applications that need data access within, say, tens of milliseconds are not a good fit for HDFS. HDFS achieves very high data throughput, but at the cost of higher latency; HBase can be used where low-latency access is required;

2) Inefficient storage of large numbers of small files: masses of small files make the filesystem's directory tree and index metadata very large, and all of that metadata is held on the namenode;
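A rough back-of-the-envelope sketch of why this matters, assuming the commonly cited figure of roughly 150 bytes of NameNode heap per file/block object (the exact number varies by version):

files=10000000                     ## 10 million small files, each under one block
objects=$((files * 2))             ## roughly one inode object plus one block object per file
echo "$((objects * 150 / 1024 / 1024)) MB of NameNode heap"   ## on the order of 2.8 GB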

 

II. Hadoop components: HDFS + MapReduce

1. HDFS engine structure:

1) The Hadoop Distributed File System (HDFS) engine: consists of a namenode (namespace node) and datanodes (data nodes);

https://www.cnblogs.com/liango/p/7136448.html

Basic concepts:

1) Block: the smallest logical unit of data stored on a datanode. The default block size is 128 MB in Hadoop 2.x/3.x (64 MB in Hadoop 1.x), which simplifies management and frees files from single-disk limits; data is redundantly replicated across the datanodes' blocks, the number of replicas should not exceed the number of datanode nodes, and when one or more blocks fail, clients can simply read a replica elsewhere;

2) NameNode: manages the filesystem namespace and plays the manager role; it maintains the filesystem tree with all files and directories, records the location and replica information of each file on the DataNodes, and coordinates client access to files;

3) DataNode: handles read and write requests from filesystem clients, stores and retrieves data blocks, and periodically reports the list of blocks it stores to the NameNode; it plays the worker role, manages storage on its physical node, and stores files as blocks following the write-once, read-many principle;

4) Secondary NameNode: also called the secondary (checkpoint) NameNode, it periodically checkpoints the NameNode's metadata by merging the filesystem image with the edit log and keeps a copy of that metadata; it is not a hot standby, but its checkpoints help recover the metadata if the NameNode process fails;

 

2. MapReduce engine components:

1) The MapReduce engine: a software framework for parallel processing and computation of large data sets. It sits above HDFS (in this article) and works with Hadoop to distribute a user's job across clusters of thousands of commodity machines. The simplest MapReduce application contains at least two parts: a Map function and a Reduce function; Map splits a job into multiple sub-tasks, and Reduce aggregates the results of those sub-tasks;

JobTracker: a master process responsible for job scheduling and management; a Hadoop (MRv1) cluster has only one JobTracker;

TaskTracker: a slave service running on many nodes that executes tasks; TaskTrackers need to run on the HDFS DataNode nodes;

Drawbacks of the MapReduce (MRv1) engine: the JobTracker is a single-point bottleneck (it handles cluster heartbeats and job management), job assignment through the JobTracker has high latency, and the model lacks flexibility;
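A minimal Map/Reduce run can be tried with Hadoop Streaming once the cluster from part III is up (a sketch; the streaming jar path is an assumption based on the 3.2.0 layout). The mapper simply passes its input through and the reducer counts lines, words and bytes, mirroring the split/aggregate roles described above.

hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-3.2.0.jar \
    -input /input -output /streaming-out \
    -mapper /bin/cat -reducer /usr/bin/wc
hadoop fs -cat /streaming-out/part-00000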

2) The YARN architecture: the second generation (V2) of the MapReduce engine; it resolves the performance bottlenecks of MRv1 by separating cluster resource management from job scheduling;

ResourceManager process: the resource manager for the cluster's resources

MapReduce ApplicationMaster: manages the tasks of an individual job

3) The data warehouse tool Hive and the distributed database HBase: NoSQL databases

 

3. Notes on core Hadoop concepts:

1) HDFS divides nodes into two kinds: NameNode and DataNode. The NameNode is unique; programs talk to it and then read and write files on the DataNodes. These operations are transparent and look no different from an ordinary filesystem API.

2) MapReduce is driven by the JobTracker node, which assigns work and handles communication with user programs.

3) The HDFS and MapReduce implementations are completely decoupled; it is not the case that MapReduce cannot run without HDFS.

4) Hadoop shares a common goal with other cloud computing projects: computing over massive amounts of data. Massive computation needs a stable, secure data container, which is why the Hadoop Distributed File System (HDFS) exists.

5) 60 big data tools: http://blog.csdn.net/SunWuKong_Hadoop/article/details/53580425

6) The Hadoop ecosystem: http://blog.csdn.net/u010270403/article/details/51493191

 


III. Case study: deploying a Hadoop distributed storage cluster:

Environment:

System                  IP Address    Hostname   Role       Required Software
CentOS 7.5 64bit 1804   10.0.3.106    master     namenode   hadoop-3.2.0.tar.gz, jdk-8u201-linux-x64.tar.gz
CentOS 7.5 64bit 1804   10.0.3.107    slave1     datanode   hadoop-3.2.0.tar.gz, jdk-8u201-linux-x64.tar.gz
CentOS 7.5 64bit 1804   10.0.3.108    slave2     datanode   hadoop-3.2.0.tar.gz, jdk-8u201-linux-x64.tar.gz

 

Version requirements:

Hadoop 3.x: requires Java 8 (OpenJDK/Oracle)

Hadoop 2.7 and later 2.x releases: require Java 7 or later (OpenJDK/Oracle)

Hadoop 2.6 and earlier: require Java 6 or later (OpenJDK/Oracle)

 

Steps:

  • Configure name resolution between all nodes and create the hadoop user (all nodes are configured the same way; the master node is shown here);
  • Configure the master node to manage the slave nodes remotely;
  • Install the JDK environment on all nodes (all nodes are configured the same way; the master node is shown here);
  • Install Hadoop on all nodes and do the basic configuration (all nodes are configured the same way; the master node is shown here);
  • Configure the Hadoop services on the master node and copy the configuration files to the slave nodes;
  • Initialize and start the Hadoop processes on the master node;
  • Verify the process status on the slave nodes;
  • View the Hadoop cluster statistics in the web UI at http://master:50070 (port 9870 in Hadoop 3.x; see section VII);

Basic data management in Hadoop;

I. Configure the base environment

1.1 Set the hostname on each of the three machines

[root@localhost ~]# hostnamectl set-hostname master    ## on 10.0.3.106
[root@localhost ~]# hostnamectl set-hostname slave1    ## on 10.0.3.107
[root@localhost ~]# hostnamectl set-hostname slave2    ## on 10.0.3.108

1.2 Hostname resolution

[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.3.106  master
10.0.3.107  slave1
10.0.3.108  slave2

[root@master ~]# for i in slave{1,2};do scp /etc/hosts $i:/etc/hosts;done
root@slave1's password: 
hosts                                                                                  100%  216     2.7KB/s   00:00    
root@slave2's password: 
hosts                                                                                  100%  216    16.6KB/s   00:00  

1.3 Create the hadoop user

[root@master ~]# useradd hadoop
[root@master ~]# echo "hadoop:hadoop123" | chpasswd
[root@slave1 ~]# useradd hadoop
[root@slave1 ~]# echo "hadoop:hadoop123" | chpasswd
[root@slave2 ~]# useradd hadoop
[root@slave2 ~]# echo "hadoop:hadoop123" | chpasswd

1.4 Use SSH keys to manage the slaves

Also send a copy of the key to the local machine (master) itself; I only realized later that this is required as well. The key must exist before it can be copied, so ssh-keygen comes first:

[hadoop@master ~]$ ssh-keygen -t rsa -C "hadoop-master"
[hadoop@master ~]$ ssh-copy-id hadoop@master
[hadoop@master ~]$ for i in slave{1,2};do ssh-copy-id hadoop@$i;done

# Test
[hadoop@master ~]$ ssh slave1
[hadoop@slave1 ~]$ logout
Connection to slave1 closed.
[hadoop@master ~]$ ssh slave2
[hadoop@slave2 ~]$ logout
Connection to slave2 closed.
[hadoop@master ~]$ 

II. Install the JDK environment (required on both the master and the slave nodes)

The steps below use the downloaded tarball; installing a JDK via yum is more convenient.

Download: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Install the JDK on the master node

[root@master ~]# ls
anaconda-ks.cfg  jdk-8u201-linux-x64.tar.gz
[root@master ~]# tar xf jdk-8u201-linux-x64.tar.gz 
[root@master ~]# ls
anaconda-ks.cfg  jdk1.8.0_201  jdk-8u201-linux-x64.tar.gz
[root@master ~]# mv jdk1.8.0_201 /usr/local/java
[root@master ~]# cat <<EOF >> /etc/profile
> JAVA_HOME=/usr/local/java/
> JRE_HOME=/usr/local/java/jre
> CLASS_PATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar:\$JAVA_HOME/lib
> PATH=\$PATH:\$JAVA_HOME/bin:\$JRE_HOME/bin
> export JAVA_HOME JRE_HOME CLASS_PATH PATH
> EOF
[root@master ~]# source /etc/profile
[root@master ~]# java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

Install the JDK on the slave nodes

[root@master ~]# for i in slave{1,2};do scp jdk-8u201-linux-x64.tar.gz $i:/root/;done
root@slave1's password: 
jdk-8u201-linux-x64.tar.gz                                                             100%  183MB  36.6MB/s   00:05    
root@slave2's password: 
jdk-8u201-linux-x64.tar.gz                                                             100%  183MB  48.1MB/s   00:03 

A simple JDK install script (this script has a problem: the $ variables inside the heredoc are not escaped, so they expand while the script runs; writing the environment variables by hand is recommended. A corrected sketch appears after the manual fixes below.)

[root@master ~]# vim install_JDK.sh 
[root@master ~]# cat install_JDK.sh 
#!/bin/bash
cd
tar xf jdk-8u201-linux-x64.tar.gz
mv jdk1.8.0_201 /usr/local/java
cat <<EOF >> /etc/profile
JAVA_HOME=/usr/local/java/
JRE_HOME=/usr/local/java/jre/
#CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib
#PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASS_PATH=.:/usr/local/java//lib/dt.jar:/usr/local/java//lib/tools.jar:/usr/local/java//lib
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/bin:/bin:/usr/local/java//bin:/usr/local/java/jre//bin:/usr/local/java//bin:/usr/local/java/jre//bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
EOF
source /etc/profile
java -version
[root@master ~]# for i in slave{1,2};do scp install_JDK.sh $i:/root/;done
root@slave1's password: 
install_JDK.sh                                                                         100%  359    67.6KB/s   00:00    
root@slave2's password: 
install_JDK.sh                                                                         100%  359     4.9KB/s   00:00    

Install the JDK on slave1 and slave2

[root@master ~]# for i in slave{1,2};do ssh $i "/bin/bash /root/install_JDK.sh";done
root@slave1's password: 
/root/install_JDK.sh: line 13: java: command not found
root@slave2's password: 
/root/install_JDK.sh: line 13: java: command not found

# The environment variables were written incorrectly; fix them by hand
[root@slave1 ~]# cat <<EOF >> /etc/profile
> JAVA_HOME=/usr/local/java/
> JRE_HOME=/usr/local/java/jre/
> CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib
> PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
> export JAVA_HOME JRE_HOME CLASS_PATH PATH
> EOF
[root@slave1 ~]# vim /etc/profile
[root@slave1 ~]# source /etc/profile
[root@slave1 ~]# java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

# Fix the environment variables by hand on slave2 as well
[root@slave2 ~]# vim /etc/profile
[root@slave2 ~]# cat <<EOF >> /etc/profile
JAVA_HOME=/usr/local/java/
JRE_HOME=/usr/local/java/jre/
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
EOF
[root@slave2 ~]# source /etc/profile
[root@slave2 ~]# java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
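For reference, a corrected sketch of install_JDK.sh: quoting the heredoc delimiter ('EOF') keeps the $ variables literal when they are appended to /etc/profile, instead of expanding them (mostly to empty strings) while the script runs. Paths are the same ones used above.

#!/bin/bash
cd /root
tar xf jdk-8u201-linux-x64.tar.gz
mv jdk1.8.0_201 /usr/local/java
## the quoted 'EOF' prevents variable expansion inside the heredoc
cat <<'EOF' >> /etc/profile
JAVA_HOME=/usr/local/java
JRE_HOME=/usr/local/java/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
EOF
## /etc/profile only affects new login shells, so verify with the absolute path here
/usr/local/java/bin/java -version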

III. Install Hadoop (on all nodes)

Download: https://hadoop.apache.org/releases.html

Download (Tsinghua mirror): https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/

Install Hadoop on the master

[root@master ~]# ls
anaconda-ks.cfg  hadoop-3.2.0.tar.gz  install_JDK.sh  jdk-8u201-linux-x64.tar.gz
[root@master ~]# tar xf hadoop-3.2.0.tar.gz 
[root@master ~]# ls
anaconda-ks.cfg  hadoop-3.2.0  hadoop-3.2.0.tar.gz  install_JDK.sh  jdk-8u201-linux-x64.tar.gz
[root@master ~]# mv hadoop-3.2.0 /usr/local/hadoop
[root@master ~]# ls /usr/local/hadoop/
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
[root@master ~]# cat <<EOF >> /etc/profile
> export HADOOP_HOME=/usr/local/hadoop/
> export PATH=\$PATH:\$HADOOP_HOME/bin
> EOF
[root@master ~]# source /etc/profile

## Set the JAVA_HOME environment variable loaded by the HDFS daemons
[root@master ~]# echo "export JAVA_HOME=/usr/local/java/" >>/usr/local/hadoop/etc/hadoop/hadoop-env.sh
## Set the JAVA_HOME environment variable loaded by YARN (MapReduce V2)
[root@master ~]# echo "export JAVA_HOME=/usr/local/java/" >>/usr/local/hadoop/etc/hadoop/yarn-env.sh

## Directory for the NameNode metadata
[root@master ~]# mkdir /usr/local/hadoop/name/
## Directory for the DataNode data blocks
[root@master ~]# mkdir /usr/local/hadoop/data/
## Directory for user temporary files
[root@master ~]# mkdir /usr/local/hadoop/tmp/
## Directory for files the services change at run time
[root@master ~]# mkdir /usr/local/hadoop/var/
[root@master ~]# chown hadoop /usr/local/hadoop/ -R
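A quick check that HADOOP_HOME and the new PATH entry take effect (re-sourcing /etc/profile, or opening a new login shell, both work):

source /etc/profile
hadoop version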

Install Hadoop on the slaves

[root@master ~]# for i in slave{1,2};do scp hadoop-3.2.0.tar.gz  $i:/root/;done
root@slave1's password: 
hadoop-3.2.0.tar.gz                                                                    100%  330MB  41.3MB/s   00:07    
root@slave2's password: 
hadoop-3.2.0.tar.gz                                                                    100%  330MB  75.0MB/s   00:04

# Write a Hadoop install script and distribute it
[root@master ~]# vim install_hadoop.sh 
[root@master ~]# cat install_hadoop.sh 
#!/bin/bash

cd
tar xf hadoop-3.2.0.tar.gz
mv hadoop-3.2.0 /usr/local/hadoop
cat <<EOF >> /etc/profile
export HADOOP_HOME=/usr/local/hadoop/
export PATH=\$PATH:\$HADOOP_HOME/bin
EOF
source /etc/profile
echo "export JAVA_HOME=/usr/local/java/" >>/usr/local/hadoop/etc/hadoop/hadoop-env.sh
echo "export JAVA_HOME=/usr/local/java/" >>/usr/local/hadoop/etc/hadoop/yarn-env.sh
mkdir /usr/local/hadoop/name/
mkdir /usr/local/hadoop/data/
mkdir /usr/local/hadoop/tmp/
mkdir /usr/local/hadoop/var/
chown hadoop /usr/local/hadoop/ -R
[root@master ~]# for i in slave{1,2};do scp install_hadoop.sh   $i:/root/;done
root@slave1's password: 
install_hadoop.sh                                                                      100%  525   502.5KB/s   00:00    
root@slave2's password: 
install_hadoop.sh                                                                      100%  525   348.6KB/s   00:00    

Run the script manually on each slave

[root@slave1 ~]# bash install_hadoop.sh
[root@slave2 ~]# bash install_hadoop.sh

IV. Edit the Hadoop configuration files on the master and copy them to the slaves

4.1 Edit /usr/local/hadoop/etc/hadoop/core-site.xml

[root@master ~]# su - hadoop
## NameNode-related settings: the configuration block below specifies the temporary
## directory and the NameNode address and port (hdfs://master:9000)
[hadoop@master ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
[hadoop@master ~]$ cat /usr/local/hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
 <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
   </property>
   <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
   </property>
</configuration>
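An optional sanity check that the NameNode address was picked up; fs.default.name is the legacy key, which Hadoop maps onto fs.defaultFS:

/usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS
## expected output: hdfs://master:9000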

4.2 Edit the HDFS storage-related configuration

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

The configuration block defines:

dfs.namenode.secondary.http-address: address and port of the Secondary NameNode

dfs.name.dir: where the NameNode metadata is stored

dfs.data.dir: where each DataNode stores its data blocks

dfs.replication: how many replicas to keep

dfs.webhdfs.enabled: enable the WebHDFS (web) interface

[hadoop@master ~]$ cat /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>master:50090</value>
</property>
<property>
   <name>dfs.name.dir</name>
   <value>/usr/local/hadoop/name</value>
   <description>Path on the local filesystem where theNameNode stores the namespace and transactions logs persistently.</description>
</property>
<property>
   <name>dfs.data.dir</name>
   <value>/usr/local/hadoop/data</value>
   <description>Comma separated list of paths on the localfilesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
   <name>dfs.replication</name>
   <value>2</value>
</property>
<property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
</property>
</configuration>

4.3 Set the MapReduce-related configuration

The configuration block sets:

mapred.job.tracker (legacy MRv1 JobTracker address; not used when jobs run on YARN)

mapred.local.dir (local directory for intermediate MapReduce data)

mapreduce.framework.name (selects the execution framework; yarn here)

[hadoop@master ~]$ vim  /usr/local/hadoop/etc/hadoop/mapred-site.xml 
[hadoop@master ~]$ cat /usr/local/hadoop/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
    <name>mapred.job.tracker</name>
    <value>master:49001</value>
</property>
<property>
      <name>mapred.local.dir</name>
       <value>/usr/local/hadoop/var</value>
</property>
<property>
      <name>mapreduce.framework.name</name>
       <value>yarn</value>
</property>
</configuration>

4.4 Edit /usr/local/hadoop/etc/hadoop/slaves

(This step is wrong: in Hadoop 3.2 the file is called workers, not slaves.)

I correct this near the end.

## List the slave hostnames (they must resolve to IPs)

Note: this file explicitly lists the DataNode nodes, and the whole Hadoop cluster scales by adding or removing entries here. To add a node, first make the new node's configuration identical to the NameNode's, add the new DataNode's hostname to this file, and restart the NameNode. To even out block placement between the existing and the newly added DataNodes, run this command to rebalance the block distribution: /usr/local/hadoop/sbin/start-balancer.sh (a short sketch follows the file listing below)

[hadoop@master ~]$ vim /usr/local/hadoop/etc/hadoop/slaves
[hadoop@master ~]$ cat /usr/local/hadoop/etc/hadoop/slaves
slave1
slave2
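A sketch of the rebalancing step mentioned in the note above, run after a new DataNode has joined; -threshold is the allowed deviation, in percent, of each node's utilization from the cluster average:

/usr/local/hadoop/sbin/start-balancer.sh -threshold 10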

4.5 Edit /usr/local/hadoop/etc/hadoop/yarn-site.xml

## YARN-related settings

yarn.nodemanager.aux-services        auxiliary service run by the NodeManager (the MapReduce shuffle)

yarn.nodemanager.aux-services.mapreduce.shuffle.class         class that implements the shuffle service

yarn.resourcemanager.address

yarn.resourcemanager.scheduler.address

yarn.resourcemanager.resource-tracker.address

yarn.resourcemanager.webapp.address

[hadoop@master ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml 
[hadoop@master ~]$ cat /usr/local/hadoop/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
 <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
 <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
</property>
 <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
</property>
 <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8035</value>
</property>
 <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
</property>
 <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
</property>
</configuration>

4.6 Sync the configuration files to the slaves

[hadoop@master ~]$ for i in slave{1,2};do scp -r /usr/local/hadoop/etc/hadoop/* hadoop@$i:/usr/local/hadoop/etc/hadoop/;done

V. Initialize and start the Hadoop processes

5.1 Format the NameNode

Note: the first format run prints output like that shown below. If you ever need to format a second time, first empty the /usr/local/hadoop/name/ directory on the namenode and the /usr/local/hadoop/data/ directory on the datanodes; otherwise the namenode and datanodes end up with mismatched data version (cluster) IDs and the services fail to start;
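Only if a re-format really is needed, a sketch of the cleanup under the directory layout created above (stop the daemons first):

/usr/local/hadoop/sbin/stop-all.sh
rm -rf /usr/local/hadoop/name/*                                        ## on master: NameNode metadata
for i in slave{1,2};do ssh $i 'rm -rf /usr/local/hadoop/data/*';done   ## on the slaves: DataNode blocks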

[hadoop@master ~]$ /usr/local/hadoop/bin/hdfs namenode -format
WARNING: /usr/local/hadoop//logs does not exist. Creating.
2019-03-02 20:18:07,408 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/10.0.3.106
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop//etc/hadoop:/usr/local/hadoop//share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/local/hadoop//share/hadoop/common/lib/asm-5.0.4.jar:/usr/local/hadoop//share/hadoop/common/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop//share/hadoop/common/lib/avro-1.7.7.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-codec-1.11.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-io-2.5.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-lang3-3.7.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-net-3.6.jar:/usr/local/hadoop//share/hadoop/common/lib/commons-text-1.4.jar:/usr/local/hadoop//share/hadoop/common/lib/curator-client-2.12.0.jar:/usr/local/hadoop//share/hadoop/common/lib/curator-framework-2.12.0.jar:/usr/local/hadoop//share/hadoop/common/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop//share/hadoop/common/lib/dnsjava-2.1.7.jar:/usr/local/hadoop//share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop//share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop//share/hadoop/common/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop//share/hadoop/common/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop//share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop//share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop//share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-core-2.9.5.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop//share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop//share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop//share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop//share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop//share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop//share/hadoop/common/lib/jersey-core-1.19.jar:/usr/local/hadoop//share/hadoop/common/lib/jersey-json-1.19.jar:/usr/local/hadoop//share/hadoop/common/lib/jersey-server-1.19.jar:/usr/local/hadoop//share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/local/hadoop//share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop//
share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop//share/hadoop/common/lib/json-smart-2.3.jar:/usr/local/hadoop//share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop//share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop//share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop//share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/local/hadoop//share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop//share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop//share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop//share/hadoop/common/lib/re2j-1.1.jar:/usr/local/hadoop//share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/local/hadoop//share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/local/hadoop//share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/local/hadoop//share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/local/hadoop//share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/local/hadoop//share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop//share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop//share/hadoop/common/lib/zookeeper-3.4.13.jar:/usr/local/hadoop//share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/local/hadoop//share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/local/hadoop//share/hadoop/common/hadoop-common-3.2.0-tests.jar:/usr/local/hadoop//share/hadoop/common/hadoop-common-3.2.0.jar:/usr/local/hadoop//share/hadoop/common/hadoop-nfs-3.2.0.jar:/usr/local/hadoop//share/hadoop/common/hadoop-kms-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/local/hadoop//shar
e/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/com
mons-text-1.4.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/xz-1.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/jackson-core-2.9.5.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop//share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-3.2.0-tests.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-nfs-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-client-3.2.0-tests.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-client-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0-tests.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0-tests.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0.jar:/usr/local/hadoop//share/hadoop/hdfs/hadoop-hdfs-httpfs-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop//share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0-tests.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.2.0.jar:/usr/local/hadoop//share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn:/usr/local/hadoop//share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/local/hadoop//share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop//share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/local/hadoop//share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop//share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/local/hadoop//share/hadoop/yarn/lib/guice-4.0.jar:/usr/local/hadoop//share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/local/hadoop//share/hadoop/yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/local/hadoop//share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/local/hadoop//share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/local/hadoop//share/hadoop/yarn/lib
/java-util-1.9.0.jar:/usr/local/hadoop//share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop//share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/local/hadoop//share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/local/hadoop//share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop//share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/local/hadoop//share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/local/hadoop//share/hadoop/yarn/lib/objenesis-1.0.jar:/usr/local/hadoop//share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/local/hadoop//share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-api-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-client-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-common-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-registry-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-common-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-router-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-tests-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-services-api-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-services-core-3.2.0.jar:/usr/local/hadoop//share/hadoop/yarn/hadoop-yarn-submarine-3.2.0.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 2019-01-08T06:08Z
STARTUP_MSG:   java = 1.8.0_201
************************************************************/
2019-03-02 20:18:07,564 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-03-02 20:18:08,074 INFO namenode.NameNode: createNameNode [-format]
2019-03-02 20:18:10,195 INFO common.Util: Assuming 'file' scheme for path /usr/local/hadoop/name in configuration.
2019-03-02 20:18:10,196 INFO common.Util: Assuming 'file' scheme for path /usr/local/hadoop/name in configuration.
Formatting using clusterid: CID-18457cb3-3034-4158-9d0d-e547e2e2bc83
2019-03-02 20:18:10,262 INFO namenode.FSEditLog: Edit logging is async:true
2019-03-02 20:18:10,277 INFO namenode.FSNamesystem: KeyProvider: null
2019-03-02 20:18:10,278 INFO namenode.FSNamesystem: fsLock is fair: true
2019-03-02 20:18:10,279 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-03-02 20:18:10,340 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2019-03-02 20:18:10,347 INFO namenode.FSNamesystem: supergroup          = supergroup
2019-03-02 20:18:10,347 INFO namenode.FSNamesystem: isPermissionEnabled = true
2019-03-02 20:18:10,347 INFO namenode.FSNamesystem: HA Enabled: false
2019-03-02 20:18:10,402 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-03-02 20:18:10,416 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-03-02 20:18:10,416 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-03-02 20:18:10,451 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-03-02 20:18:10,452 INFO blockmanagement.BlockManager: The block deletion will start around 2019 三月 02 20:18:10
2019-03-02 20:18:10,453 INFO util.GSet: Computing capacity for map BlocksMap
2019-03-02 20:18:10,453 INFO util.GSet: VM type       = 64-bit
2019-03-02 20:18:10,476 INFO util.GSet: 2.0% max memory 235.9 MB = 4.7 MB
2019-03-02 20:18:10,476 INFO util.GSet: capacity      = 2^19 = 524288 entries
2019-03-02 20:18:10,492 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2019-03-02 20:18:10,492 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-03-02 20:18:10,516 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-03-02 20:18:10,516 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-03-02 20:18:10,516 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-03-02 20:18:10,516 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: defaultReplication         = 2
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: maxReplication             = 512
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: minReplication             = 1
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2019-03-02 20:18:10,517 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2019-03-02 20:18:10,629 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2019-03-02 20:18:10,629 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2019-03-02 20:18:10,629 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2019-03-02 20:18:10,629 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2019-03-02 20:18:10,640 INFO util.GSet: Computing capacity for map INodeMap
2019-03-02 20:18:10,640 INFO util.GSet: VM type       = 64-bit
2019-03-02 20:18:10,640 INFO util.GSet: 1.0% max memory 235.9 MB = 2.4 MB
2019-03-02 20:18:10,640 INFO util.GSet: capacity      = 2^18 = 262144 entries
2019-03-02 20:18:10,640 INFO namenode.FSDirectory: ACLs enabled? false
2019-03-02 20:18:10,640 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-03-02 20:18:10,640 INFO namenode.FSDirectory: XAttrs enabled? true
2019-03-02 20:18:10,640 INFO namenode.NameNode: Caching file names occurring more than 10 times
2019-03-02 20:18:10,650 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-03-02 20:18:10,652 INFO snapshot.SnapshotManager: SkipList is disabled
2019-03-02 20:18:10,656 INFO util.GSet: Computing capacity for map cachedBlocks
2019-03-02 20:18:10,656 INFO util.GSet: VM type       = 64-bit
2019-03-02 20:18:10,656 INFO util.GSet: 0.25% max memory 235.9 MB = 603.8 KB
2019-03-02 20:18:10,656 INFO util.GSet: capacity      = 2^16 = 65536 entries
2019-03-02 20:18:10,664 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-03-02 20:18:10,665 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-03-02 20:18:10,665 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-03-02 20:18:10,674 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2019-03-02 20:18:10,674 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-03-02 20:18:10,676 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2019-03-02 20:18:10,676 INFO util.GSet: VM type       = 64-bit
2019-03-02 20:18:10,677 INFO util.GSet: 0.029999999329447746% max memory 235.9 MB = 72.5 KB
2019-03-02 20:18:10,677 INFO util.GSet: capacity      = 2^13 = 8192 entries
2019-03-02 20:18:10,722 INFO namenode.FSImage: Allocated new BlockPoolId: BP-887427717-10.0.3.106-1551529090709
2019-03-02 20:18:10,786 INFO common.Storage: Storage directory /usr/local/hadoop/name has been successfully formatted.
2019-03-02 20:18:10,816 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
2019-03-02 20:18:11,045 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2019-03-02 20:18:11,057 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-03-02 20:18:11,063 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.0.3.106
************************************************************/

5.2 Start all the Hadoop processes

Note: /usr/local/hadoop/sbin/start-all.sh is equivalent to running /usr/local/hadoop/sbin/start-dfs.sh followed by /usr/local/hadoop/sbin/start-yarn.sh; the former starts the HDFS daemons and the latter starts the MapReduce/YARN scheduling daemons. Stop both with /usr/local/hadoop/sbin/stop-all.sh
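For reference, the equivalent two-step start and stop named in the note:

/usr/local/hadoop/sbin/start-dfs.sh     ## HDFS daemons
/usr/local/hadoop/sbin/start-yarn.sh    ## YARN daemons
/usr/local/hadoop/sbin/stop-yarn.sh
/usr/local/hadoop/sbin/stop-dfs.sh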

[hadoop@master ~]$ /usr/local/hadoop/sbin/start-all.sh 
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [master]
Starting resourcemanager
resourcemanager is running as process 5156.  Stop it first.
Starting nodemanagers

[hadoop@master ~]$ netstat -ltunp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 10.0.3.106:8088         0.0.0.0:*               LISTEN      5156/java           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:13562           0.0.0.0:*               LISTEN      6343/java           
tcp        0      0 127.0.0.1:40254         0.0.0.0:*               LISTEN      5834/java           
tcp        0      0 10.0.3.106:8030         0.0.0.0:*               LISTEN      5156/java           
tcp        0      0 10.0.3.106:8032         0.0.0.0:*               LISTEN      5156/java           
tcp        0      0 10.0.3.106:8033         0.0.0.0:*               LISTEN      5156/java           
tcp        0      0 10.0.3.106:8035         0.0.0.0:*               LISTEN      5156/java           
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      6343/java           
tcp        0      0 0.0.0.0:9864            0.0.0.0:*               LISTEN      5834/java           
tcp        0      0 10.0.3.106:9000         0.0.0.0:*               LISTEN      5723/java           
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      6343/java           
tcp        0      0 10.0.3.106:50090        0.0.0.0:*               LISTEN      6033/java           
tcp        0      0 0.0.0.0:9866            0.0.0.0:*               LISTEN      5834/java           
tcp        0      0 0.0.0.0:9867            0.0.0.0:*               LISTEN      5834/java           
tcp        0      0 0.0.0.0:9870            0.0.0.0:*               LISTEN      5723/java           
tcp        0      0 0.0.0.0:35055           0.0.0.0:*               LISTEN      6343/java           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp6       0      0 ::1:25                  :::*                    LISTEN      -                   
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
udp        0      0 127.0.0.1:323           0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
udp6       0      0 ::1:323                 :::*                                -            

## Check the status information of the Hadoop storage nodes

[hadoop@master ~]$ /usr/local/hadoop/bin/hdfs dfsadmin -report
Configured Capacity: 53660876800 (49.98 GB)
Present Capacity: 50506805248 (47.04 GB)
DFS Remaining: 50506801152 (47.04 GB)
DFS Used: 4096 (4 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups: 
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 10.0.3.106:9866 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 3154071552 (2.94 GB)
DFS Remaining: 50506801152 (47.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Mar 02 20:50:15 CST 2019
Last Block Report: Sat Mar 02 20:44:00 CST 2019
Num of Blocks: 0

[hadoop@master ~]$ jps
6033 SecondaryNameNode
5156 ResourceManager
6343 NodeManager
5834 DataNode
5723 NameNode
6733 Jps

VI. The storage node status report only shows the master

[hadoop@master ~]$ /usr/local/hadoop/bin/hdfs dfsadmin -report

Checking the official documentation, it appears that in version 3.2 the slaves file has been replaced by a workers file:

https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/ClusterSetup.html#Installation

So stop all the Hadoop processes with the script, rename slaves to workers, and then start all the Hadoop processes again.

[hadoop@master ~]$ /usr/local/hadoop/sbin/stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [master]
Stopping datanodes
Stopping secondary namenodes [master]
Stopping nodemanagers
Stopping resourcemanager
[hadoop@master ~]$ mv /usr/local/hadoop/etc/hadoop/slaves /usr/local/hadoop/etc/hadoop/workers
[hadoop@master ~]$ /usr/local/hadoop/sbin/start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [master]
Starting datanodes
slave1: WARNING: /usr/local/hadoop/logs does not exist. Creating.
slave2: WARNING: /usr/local/hadoop/logs does not exist. Creating.
Starting secondary namenodes [master]
Starting resourcemanager
Starting nodemanagers

Check the Hadoop storage node status again:

[hadoop@master ~]$ /usr/local/hadoop/bin/hdfs dfsadmin -report
Configured Capacity: 107321753600 (99.95 GB)
Present Capacity: 100654186496 (93.74 GB)
DFS Remaining: 100654178304 (93.74 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups: 
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.3.107:9866 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 3534544896 (3.29 GB)
DFS Remaining: 50126327808 (46.68 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.41%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Mar 02 21:29:09 CST 2019
Last Block Report: Sat Mar 02 21:28:27 CST 2019
Num of Blocks: 0


Name: 10.0.3.108:9866 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 3133022208 (2.92 GB)
DFS Remaining: 50527850496 (47.06 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.16%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Mar 02 21:29:09 CST 2019
Last Block Report: Sat Mar 02 21:28:27 CST 2019
Num of Blocks: 0

Run jps on master

[hadoop@master ~]$ jps
8370 ResourceManager
7940 NameNode
8730 Jps
8157 SecondaryNameNode

Run jps on slave1

[root@slave1 ~]# jps
4052 DataNode
4119 NodeManager
4266 Jps

Run jps on slave2

[root@slave2 ~]# jps
3959 DataNode
4023 NodeManager
4172 Jps

VII. Web interface

The tutorial I followed uses port 50070 on the master, but that port is not listening; further reading shows that in Hadoop 3.x the NameNode web UI has moved from 50070 to 9870.

https://blog.csdn.net/wu_noah/article/details/79448118
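A quick confirmation on this cluster (netstat was already used above; the HTTP check assumes curl is installed):

netstat -ltn | grep 9870
curl -sI http://master:9870/ | head -n 1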

VIII. Data management in Hadoop

8.1 Create a file

[hadoop@master ~]$ touch 1.txt
[hadoop@master ~]$ ls
1.txt

8.2 List local files (local paths must be prefixed with file://)

[hadoop@master ~]$ hadoop fs -ls file:///home/hadoop/
Found 7 items
-rw-------   1 hadoop hadoop       2312 2019-03-02 22:06 file:///home/hadoop/.bash_history
-rw-r--r--   1 hadoop hadoop         18 2018-04-11 08:53 file:///home/hadoop/.bash_logout
-rw-r--r--   1 hadoop hadoop        193 2018-04-11 08:53 file:///home/hadoop/.bash_profile
-rw-r--r--   1 hadoop hadoop        231 2018-04-11 08:53 file:///home/hadoop/.bashrc
drwx------   - hadoop hadoop         80 2019-03-02 20:42 file:///home/hadoop/.ssh
-rw-------   1 hadoop hadoop       6885 2019-03-02 21:23 file:///home/hadoop/.viminfo
-rw-rw-r--   1 hadoop hadoop          0 2019-03-02 22:37 file:///home/hadoop/1.txt

8.3 List files in HDFS; it is currently empty

[hadoop@master ~]$ hadoop fs -ls /

8.4 Upload a local file to HDFS, then list it

[hadoop@master ~]$  hadoop fs -mkdir -p /input
[hadoop@master ~]$  hadoop fs -put /home/hadoop/1.txt /input
[hadoop@master ~]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-03-02 22:47 /input
[hadoop@master ~]$ hadoop fs -ls /input
Found 1 items
-rw-r--r--   2 hadoop supergroup          0 2019-03-02 22:47 /input/1.txt

8.5 Use -help to see which commands are available

[hadoop@master ~]$ hadoop fs -help

Delete: [hadoop@master ~]$ hadoop fs -rm

Download: [hadoop@master ~]$ hadoop fs -get

Upload: [hadoop@master ~]$ hadoop fs -put

Move: [hadoop@master ~]$ hadoop fs -mv

Copy: [hadoop@master ~]$ hadoop fs -cp

View: [hadoop@master ~]$ hadoop fs -cat

Disk usage: [hadoop@master ~]$ hadoop fs -du /input/

[hadoop@master ~]$ date > date.txt
[hadoop@master ~]$ hadoop fs -put date.txt /input
[hadoop@master ~]$ hadoop fs -ls /input
Found 2 items
-rw-r--r--   2 hadoop supergroup          0 2019-03-02 22:47 /input/1.txt
-rw-r--r--   2 hadoop supergroup         43 2019-03-02 22:52 /input/date.txt
[hadoop@master ~]$ hadoop fs -cat /input/date.txt
2019年 03月 02日 星期六 22:51:43 CST
[hadoop@master ~]$ hadoop fs -get /input/date.txt /tmp/date2.txt
[hadoop@master ~]$ ll /tmp/date2.txt 
-rw-r--r-- 1 hadoop hadoop 43 3月   2 22:53 /tmp/date2.txt
[hadoop@master ~]$ cat /tmp/date2.txt 
2019年 03月 02日 星期六 22:51:43 CST
[hadoop@master ~]$ hadoop fs -du /input/
0   0   /input/1.txt
43  86  /input/date.txt
[hadoop@master ~]$ hadoop fs -du -s /input/
43  86  /input

 
