CentOS 7.4 Hadoop 2.6 Installation and Deployment Guide V1.3

 

 

 

 


 ###################################################################
 #######                                                   #########
 ####### CentOS 7.4 Hadoop 2.6 Installation Guide  V1.3    #########
 ####### author  : li_chunli                               #########
 ####### datetime: 2017.12.22-18:44                        #########
 ####### note    : added resource limits and NTP config    #########
 #######                                                   #########
 ###################################################################

=============================================
Goal:
	Build a Hadoop 2.6 environment on CentOS 7.4
	Use 1 machine as the Hadoop 2.6 HDFS NameNode
	Use 3 machines as Hadoop 2.6 HDFS DataNodes


Result:
	After the setup, HDFS can be used to create, delete, upload, and download files.


=============================================
1. Prerequisites:

1.1 Operating system version; a minimal install is sufficient
[root@NameNode ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 
[root@NameNode ~]# 

1.2 Operating system time zone: Asia/Shanghai; needed later for cluster time synchronization.


1.3 Make sure the nodes can reach the public Internet
[root@NameNode ~]# ping www.oracle.com 
[root@NameNode ~]# ping hadoop.apache.org 


1.4 Cluster nodes and roles (a quick reachability check is sketched after the list)
Hadoop NameNode  : 172.16.10.103
Hadoop DataNode1 : 172.16.10.93 
Hadoop DataNode2 : 172.16.10.94 
Hadoop DataNode3 : 172.16.10.102
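
As a quick sanity check (my addition, not part of the original steps), the loop below pings each node once and reports whether it is reachable; adjust the IP list if yours differ:
[root@NameNode ~]# for ip in 172.16.10.103 172.16.10.93 172.16.10.94 172.16.10.102; do
>   ping -c 1 -W 2 $ip > /dev/null && echo "$ip reachable" || echo "$ip NOT reachable"
> done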


1.5 NameNode disk status
[hadoop@NameNode ~]$ df -hT 
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  4.8G   46G   10% /
devtmpfs                devtmpfs  7.9G     0  7.9G    0% /dev
tmpfs                   tmpfs     7.9G     0  7.9G    0% /dev/shm
tmpfs                   tmpfs     7.9G  9.1M  7.9G    1% /run
tmpfs                   tmpfs     7.9G     0  7.9G    0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  161M  854M   16% /boot
/dev/mapper/centos-home xfs       441G   39M  441G    1% /home
tmpfs                   tmpfs     1.6G  8.0K  1.6G    1% /run/user/1000
tmpfs                   tmpfs     1.6G   12K  1.6G    1% /run/user/42
tmpfs                   tmpfs     1.6G     0  1.6G    0% /run/user/1001
[hadoop@NameNode ~]$ 

1.6 NameNode memory status
[hadoop@NameNode ~]$ free -m
              total        used        free      shared  buff/cache   available
Mem:          16047        1282       12830           9        1933       14394
Swap:          8063           0        8063
[hadoop@NameNode ~]$ 



1.7 DataNode disk status. In this environment each DataNode has a large /home partition, which is where the HDFS data will be stored; check this carefully on your own nodes, it matters.
[hadoop@DataNode1 ~]$ df -hT 
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  5.1G   45G   11% /
devtmpfs                devtmpfs   16G     0   16G    0% /dev
tmpfs                   tmpfs      16G     0   16G    0% /dev/shm
tmpfs                   tmpfs      16G  9.1M   16G    1% /run
tmpfs                   tmpfs      16G     0   16G    0% /sys/fs/cgroup
/dev/sda2               xfs      1014M  199M  816M   20% /boot
/dev/mapper/centos-home xfs       4.0T  201M  4.0T    1% /home                # hadoop will be installed here; this large partition will hold the data
tmpfs                   tmpfs     3.2G  8.0K  3.2G    1% /run/user/1000
tmpfs                   tmpfs     3.2G   12K  3.2G    1% /run/user/42
tmpfs                   tmpfs     3.2G     0  3.2G    0% /run/user/1001
[hadoop@DataNode1 ~]$ 

1.8 DataNode memory status:
[root@DataNode1 ~]# free -m 
              total        used        free      shared  buff/cache   available
Mem:          32175        1199       28790           9        2185       30513
Swap:         16127           0       16127
[root@DataNode1 ~]#




1.9 Internet access is all that is needed; no other special software has to be prepared.


=============================================
2.1 On all nodes, stop and disable firewalld/iptables.
	Get the cluster built and working first, then harden security; that way you avoid hard-to-debug connectivity problems during setup.
[root@NameNode ~]# systemctl stop    firewalld.service 
[root@NameNode ~]# systemctl disable firewalld.service
[root@NameNode ~]# systemctl stop    iptables.service                  #optional: CentOS 7 does not install iptables by default
[root@NameNode ~]# systemctl disable iptables.service                  #optional: CentOS 7 does not install iptables by default

2.2 Disable SELinux
[root@NameNode ~]# setenforce 0                                                   #take effect immediately
[root@NameNode ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   #disable SELinux permanently (edit the existing line rather than appending a second, conflicting one)
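
The following optional check (my addition) confirms on the current node that the firewall is stopped and SELinux is no longer enforcing:
[root@NameNode ~]# systemctl is-active firewalld          #expect: inactive (or unknown if not installed)
[root@NameNode ~]# getenforce                              #expect: Permissive now, Disabled after the next reboot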
 
 2.3 Repeat the steps above on every node.

=============================================
3. Install the Java environment on all nodes

3.1 Download and install the Oracle JDK
[root@NameNode ~]# yum install -y wget
[root@NameNode ~]# jdk_url='http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm'
[root@NameNode ~]# wget --no-cookies --no-check-certificate --header "Cookie:oraclelicense=accept-securebackup-cookie" $jdk_url
[root@NameNode ~]# md5sum jdk-8u151-linux-x64.rpm                     #verify file integrity
7f09893e12aadef39e0751ec657cc7d8  jdk-8u151-linux-x64.rpm
[root@NameNode ~]# yum autoremove   -y java                           #remove the bundled OpenJDK
[root@NameNode ~]# yum localinstall -y jdk-8u151-linux-x64.rpm 

3.2 Verify Java
[root@NameNode ~]# java -version                                     #verify the Java installation
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@NameNode ~]# 
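
If you are unsure where the JDK landed, the path (needed later as JAVA_HOME in section 8.4) can also be found by resolving the java symlink. This is an optional sketch of mine; the exact output depends on the JDK release installed:
[root@NameNode ~]# readlink -f $(which java)              #e.g. /usr/java/jdk1.8.0_151/jre/bin/java, so JAVA_HOME is /usr/java/jdk1.8.0_151/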

3.3 Repeat the steps above on every node.
 
=============================================

4. Create the hadoop user

4.1 Create an ordinary hadoop user on all nodes
[root@NameNode ~]# useradd hadoop && passwd hadoop                 #set the password to: hadoop

4.2 Verify the new user
[root@NameNode ~]# id hadoop 
uid=1001(hadoop) gid=1001(hadoop) groups=1001(hadoop)
[root@NameNode ~]# 


4.3 Repeat the steps above on every node.


=============================================
5. Set up the hosts mapping file to simplify hostname-to-IP lookups inside the cluster

5.1 On all nodes, add the cluster host names; Hadoop's internal communication relies on them. Append the following to /etc/hosts
[root@NameNode ~]# vim /etc/hosts                                  #append at the end of the file
172.16.10.103 NameNode
172.16.10.93  DataNode1
172.16.10.94  DataNode2
172.16.10.102 DataNode3

5.2 Verify the result (a resolution check is sketched after the output):
[root@NameNode ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.10.103 NameNode
172.16.10.93  DataNode1
172.16.10.94  DataNode2
172.16.10.102 DataNode3
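
To confirm that every name in /etc/hosts actually resolves and answers, a small loop like the sketch below (my addition) can be run on each node:
[root@NameNode ~]# for h in NameNode DataNode1 DataNode2 DataNode3; do
>   ping -c 1 -W 2 $h > /dev/null && echo "$h OK" || echo "$h FAILED"
> done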

5.3 Repeat the steps above on every node.

=============================================
6. Configure password-less SSH within the cluster, on all nodes

6.1 Generate a key pair and copy the public key to every node
[root@NameNode ~]# su - hadoop
[hadoop@NameNode ~]$ ssh-keygen -t rsa                          #press Enter at every prompt
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@NameNode
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@DataNode1
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@DataNode2
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@DataNode3
[hadoop@NameNode ~]$ chmod 0600 ~/.ssh/authorized_keys


6.2 Verify password-less login:
[hadoop@NameNode ~]$ ssh NameNode 
[hadoop@NameNode ~]$ ifconfig
[hadoop@NameNode ~]$ logout                                #leave the NameNode shell

[hadoop@NameNode ~]$ ssh DataNode1
[hadoop@NameNode ~]$ ifconfig 
[hadoop@NameNode ~]$ logout                                 #leave the DataNode1 shell

[hadoop@NameNode ~]$ ssh DataNode2
[hadoop@NameNode ~]$ ifconfig 
[hadoop@NameNode ~]$ logout                                 #leave the DataNode2 shell

[hadoop@NameNode ~]$ ssh DataNode3
[hadoop@NameNode ~]$ ifconfig 
[hadoop@NameNode ~]$ logout                                 #leave the DataNode3 shell

Leave the hadoop user shell
[hadoop@NameNode ~]$ logout
[root@NameNode ~]# 


6.3 Repeat the steps above on every node (a quick verification loop is sketched below).
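
After the keys have been distributed from every node, password-less access can be verified non-interactively with a loop like this sketch (my addition); BatchMode makes ssh fail instead of asking for a password:
[hadoop@NameNode ~]$ for h in NameNode DataNode1 DataNode2 DataNode3; do
>   ssh -o BatchMode=yes -o ConnectTimeout=5 $h 'echo "login to $(hostname) OK"'
> done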






=============================================

7. Download and unpack Hadoop on the NameNode node

7.1 Download, verify, and unpack the tarball
[root@NameNode ~]# su - hadoop
[hadoop@NameNode ~]$ wget http://mirror.rise.ph/apache/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
[hadoop@NameNode ~]$ md5sum hadoop-2.6.5.tar.gz 
967c24f3c15fcdd058f34923e92ce8ac  hadoop-2.6.5.tar.gz
[hadoop@NameNode ~]$ tar xf hadoop-2.6.5.tar.gz 
[hadoop@NameNode ~]$ mv hadoop-2.6.5 hadoop
[hadoop@NameNode ~]$                        


7.2 The other nodes do not need these steps.

=============================================
8. Configure Hadoop on the NameNode node. This part is critical: whether Hadoop runs at all depends on these files!

8.1 Configure core-site.xml as follows:
[hadoop@NameNode ~]$ cp ~/hadoop/etc/hadoop/core-site.xml ~/hadoop/etc/hadoop/core-site.xml.install #back up the original config file
[hadoop@NameNode ~]$ > ~/hadoop/etc/hadoop/core-site.xml                                            #empty the file
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<property>
	    <name>fs.default.name</name>
	    <value>hdfs://NameNode:9000/</value>       
	</property>

	<property>
		<name>hadoop.tmp.dir</name>
		<value>/home/hadoop/data/</value>
	</property>
</configuration>
[hadoop@NameNode hadoop]$ 
# Note: the host name here must be NameNode (the name of the NameNode host). This really matters, so double-check it.




8.2 Configure the HDFS configuration file as follows:
[hadoop@NameNode ~]$ cp ~/hadoop/etc/hadoop/hdfs-site.xml ~/hadoop/etc/hadoop/hdfs-site.xml.install #back up the original config file
[hadoop@NameNode ~]$ > ~/hadoop/etc/hadoop/hdfs-site.xml
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<property>
		<name>dfs.data.dir</name>
		<value>/home/hadoop/hadoop/dfs/name/data</value>
		<final>true</final>
	</property>

	<property>
		<name>dfs.name.dir</name>
		<value>file:/home/hadoop/hadoop/dfs/name</value>
		<final>true</final>
	</property>

	<property>
		<name>dfs.replication</name>
		<value>2</value>
	</property>

</configuration>
[hadoop@NameNode ~]$ 
# Replication factor: 2 copies of each block
# Data storage path: /home/hadoop/hadoop/dfs/name/data




8.3 Configure MapReduce: create the file mapred-site.xml with the following content:
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/mapred-site.xml
<configuration>
	<property>
		<name>mapred.job.tracker</name>
		<value>NameNode:9001</value>
	</property>
</configuration>
[hadoop@NameNode ~]$ 
# Note: the host name here must also be NameNode. This really matters, so double-check it.




8.4 Configure the Hadoop runtime environment
8.4.1 Find the JDK installation path
[root@NameNode hadoop]# rpm -qa | grep -i jdk                  #locate JAVA_HOME
jdk1.8-1.8.0_151-fcs.x86_64
[root@NameNode hadoop]# rpm -ql jdk1.8-1.8.0_151-fcs.x86_64  | more 
/usr/java/jdk1.8.0_151/bin                                       #so JAVA_HOME is /usr/java/jdk1.8.0_151/
/usr/java/jdk1.8.0_151/bin/java
/usr/java/jdk1.8.0_151/bin/javac

8.4.2 Write the JDK path into the Hadoop environment script
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/hadoop-env.sh 
export JAVA_HOME=/usr/java/jdk1.8.0_151/                                 
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop/   
[hadoop@NameNode ~]$                
#JAVA_HOME must be the actual JDK installation path
#HADOOP_CONF_DIR must be the actual path of the Hadoop configuration directory
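
With JAVA_HOME set, the configuration can be smoke-tested without starting any daemon. This is an optional check I'm adding, using only the paths configured above:
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop version                                 #should report Hadoop 2.6.5
[hadoop@NameNode ~]$ ~/hadoop/bin/hdfs getconf -confKey fs.default.name         #should echo back hdfs://NameNode:9000/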



8.5 The other nodes do not need these steps.



=============================================
9. Cluster membership: let the NameNode know which DataNodes it has

9.1 Configure the SecondaryNameNode to be this machine
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/master
NameNode
[hadoop@NameNode ~]$ 

9.2 List the DataNodes in the slaves file so the NameNode knows about them
[hadoop@NameNode ~]$ > ~/hadoop/etc/hadoop/slaves
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/slaves
DataNode1
DataNode2
DataNode3
[hadoop@NameNode hadoop]$ 


9.3 The other nodes do not need these steps.



=============================================
10. Push the Hadoop program and configuration from the NameNode to every DataNode node

In this environment the /home partition on each DataNode is very large, and the hadoop user's home directory lives on that partition.
Run the following to push the NameNode's Hadoop directory (program plus configuration) to every DataNode (a checksum check is sketched after the commands):
[hadoop@NameNode ~]$ scp -r ~/hadoop DataNode1:~/
[hadoop@NameNode ~]$ scp -r ~/hadoop DataNode2:~/
[hadoop@NameNode ~]$ scp -r ~/hadoop DataNode3:~/
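
To confirm the copy landed intact on every node, comparing a checksum of one of the files is enough; this is a sketch of mine, not an original step (the sums should be identical on all four nodes):
[hadoop@NameNode ~]$ for h in NameNode DataNode1 DataNode2 DataNode3; do
>   echo -n "$h: "; ssh $h 'md5sum ~/hadoop/etc/hadoop/core-site.xml'
> done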




=============================================

11. Initialize (format) the HDFS file system on the NameNode node:
[hadoop@NameNode ~]$ ~/hadoop/bin/hdfs namenode -format cluster_test
************************************************************/
17/12/21 18:32:37 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/12/21 18:32:37 INFO namenode.NameNode: createNameNode [-format, cluster_test]
Formatting using clusterid: CID-7f55fd7c-e3ac-429a-985f-c7652158a219
17/12/21 18:32:38 INFO namenode.FSNamesystem: No KeyProvider found.
17/12/21 18:32:38 INFO namenode.FSNamesystem: fsLock is fair:true
17/12/21 18:32:38 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/12/21 18:32:38 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/12/21 18:32:38 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/12/21 18:32:38 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Dec 21 18:32:38
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map BlocksMap
17/12/21 18:32:38 INFO util.GSet: VM type       = 64-bit
17/12/21 18:32:38 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/12/21 18:32:38 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/12/21 18:32:38 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/12/21 18:32:38 INFO blockmanagement.BlockManager: defaultReplication         = 2
17/12/21 18:32:38 INFO blockmanagement.BlockManager: maxReplication             = 512
17/12/21 18:32:38 INFO blockmanagement.BlockManager: minReplication             = 1
17/12/21 18:32:38 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/12/21 18:32:38 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/12/21 18:32:38 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/12/21 18:32:38 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/12/21 18:32:38 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/12/21 18:32:38 INFO namenode.FSNamesystem: supergroup          = supergroup
17/12/21 18:32:38 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/12/21 18:32:38 INFO namenode.FSNamesystem: HA Enabled: false
17/12/21 18:32:38 INFO namenode.FSNamesystem: Append Enabled: true
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map INodeMap
17/12/21 18:32:38 INFO util.GSet: VM type       = 64-bit
17/12/21 18:32:38 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/12/21 18:32:38 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/12/21 18:32:38 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map cachedBlocks
17/12/21 18:32:38 INFO util.GSet: VM type       = 64-bit
17/12/21 18:32:38 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/12/21 18:32:38 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/12/21 18:32:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/12/21 18:32:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/12/21 18:32:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/12/21 18:32:38 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/12/21 18:32:38 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/12/21 18:32:38 INFO util.GSet: VM type       = 64-bit
17/12/21 18:32:38 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/12/21 18:32:38 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/12/21 18:32:38 INFO namenode.NNConf: ACLs enabled? false
17/12/21 18:32:38 INFO namenode.NNConf: XAttrs enabled? true
17/12/21 18:32:38 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/12/21 18:32:40 INFO namenode.FSImage: Allocated new BlockPoolId: BP-900658624-127.0.0.1-1513852360055
17/12/21 18:32:40 INFO common.Storage: Storage directory /home/hadoop/hadoop/dfs/name has been successfully formatted.
17/12/21 18:32:40 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/12/21 18:32:40 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/12/21 18:32:40 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/12/21 18:32:40 INFO util.ExitUtil: Exiting with status 0
17/12/21 18:32:40 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at NameNode/127.0.0.1
************************************************************/
[hadoop@NameNode ~]$ 
# As shown above, the format finished with exit status 0 and no errors.



=============================================
12. Start Hadoop

12.1 Start Hadoop
[hadoop@NameNode ~]$ ~/hadoop/sbin/start-all.sh                                   #start hadoop 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [NameNode]
NameNode: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-NameNode.localdomain.out
DataNode3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-NameNode.localdomain.out
DataNode1: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-NameNode.out
DataNode2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-NameNode.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-NameNode.localdomain.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-NameNode.localdomain.out
DataNode1: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-NameNode.out
DataNode3: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-NameNode.localdomain.out
DataNode2: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-NameNode.localdomain.out
[hadoop@NameNode ~]$ 


[hadoop@NameNode ~]$ jps                                                               #list the running Java processes
21601 NameNode
21971 ResourceManager
21803 SecondaryNameNode
22237 Jps
[hadoop@NameNode ~]$ 
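
jps on the NameNode only shows the NameNode-side daemons. To confirm that every DataNode is running a DataNode and a NodeManager process, a quick remote check (my sketch, assuming the JDK path from section 8.4) is:
[hadoop@NameNode ~]$ for h in DataNode1 DataNode2 DataNode3; do
>   echo "== $h =="; ssh $h '/usr/java/jdk1.8.0_151/bin/jps'
> done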



12.2 Check the listening ports
[hadoop@NameNode ~]$ netstat -tnlp  | grep java
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      21601/java          
tcp        0      0 172.16.10.103:9000      0.0.0.0:*               LISTEN      21601/java   #port 9000 is listening as expected
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      21803/java          
tcp6       0      0 :::8088                 :::*                    LISTEN      21971/java          
tcp6       0      0 :::8030                 :::*                    LISTEN      21971/java          
tcp6       0      0 :::8031                 :::*                    LISTEN      21971/java          
tcp6       0      0 :::8032                 :::*                    LISTEN      21971/java          
tcp6       0      0 :::8033                 :::*                    LISTEN      21971/java          
[hadoop@NameNode ~]$ 
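
Beyond the listening ports, the cluster-wide state can be checked with dfsadmin, and the NameNode web UI (port 50070, seen listening above) answers over HTTP. Both checks below are my addition:
[hadoop@NameNode ~]$ ~/hadoop/bin/hdfs dfsadmin -report | grep -E 'Live datanodes|^Name:'     #expect 3 live DataNodes listed
[hadoop@NameNode ~]$ curl -s -o /dev/null -w '%{http_code}\n' http://NameNode:50070/          #expect 200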



12.3 Stop Hadoop
[hadoop@NameNode ~]$ ~/hadoop/sbin/stop-all.sh                                  #stop hadoop
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [NameNode]
NameNode: stopping namenode
DataNode3: stopping datanode
DataNode2: stopping datanode
DataNode1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
DataNode1: stopping nodemanager
DataNode3: stopping nodemanager
DataNode2: stopping nodemanager
DataNode1: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
DataNode3: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
DataNode2: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[hadoop@NameNode ~]$ 



[hadoop@NameNode ~]$ jps 
23187 Jps
[hadoop@NameNode ~]$



=============================================
13. Verify that HDFS is usable:

The following demonstrates uploading, downloading, viewing, and deleting files, plus creating and deleting a directory.

[hadoop@NameNode ~]$ ~/hadoop/sbin/start-all.sh                                              #start hadoop 
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -df -h                                           #check the HDFS capacity
Filesystem              Size  Used  Available  Use%
hdfs://NameNode:9000  11.8 T  12 K     11.8 T    0%
[hadoop@NameNode ~]$ 

[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -ls /                                           #list the HDFS root directory
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -mkdir /test_directory                          #create a directory in HDFS
[hadoop@NameNode ~]$ echo 'Hello HDFS!' > /tmp/test_file                                    #create a local file and upload it to HDFS 
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -put /tmp/test_file /test_directory     
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -cat /test_directory/test_file                  #view the content of the file in HDFS
Hello HDFS!
[hadoop@NameNode ~]$ rm -rf /tmp/test_file                                                  #remove the local copy, then download the file back from HDFS
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -get /test_directory/test_file /tmp/
[hadoop@NameNode ~]$ cat /tmp/test_file 
Hello HDFS!
[hadoop@NameNode ~]$  ~/hadoop/bin/hadoop fs -ls /test_directory/                            #delete a file in HDFS
Found 1 items
-rw-r--r--   1 hadoop supergroup         12 2017-12-21 15:16 /test_directory/test_file
[hadoop@NameNode ~]$  ~/hadoop/bin/hadoop fs -rm -f /test_directory/test_file
Deleted /test_directory/test_file
[hadoop@NameNode ~]$ 
[hadoop@NameNode ~]$  ~/hadoop/bin/hadoop fs -ls /test_directory/
[hadoop@NameNode ~]$ 

[hadoop@NameNode ~]$  ~/hadoop/bin/hadoop fs -ls /                                           #delete a directory in HDFS
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2017-12-21 15:20 /test_directory
[hadoop@NameNode ~]$ 
[hadoop@NameNode ~]$  ~/hadoop/bin/hadoop fs -rm -r -f /test_directory
17/12/21 15:21:34 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /test_directory
[hadoop@NameNode ~]$  ~/hadoop/bin/hadoop fs -ls /
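
If you want to repeat this check later (for example after a restart), the same round-trip can be wrapped into a small throw-away script. This is just a convenience sketch of the steps above, not part of the original document:
[hadoop@NameNode ~]$ vim hdfs_smoke_test.sh
#!/bin/bash
# Minimal HDFS round-trip: create a dir, upload, read back, download, compare, clean up
HADOOP=~/hadoop/bin/hadoop
echo 'Hello HDFS!' > /tmp/smoke_file
$HADOOP fs -mkdir -p /smoke_test
$HADOOP fs -put /tmp/smoke_file /smoke_test/
$HADOOP fs -cat /smoke_test/smoke_file
rm -f /tmp/smoke_file.back
$HADOOP fs -get /smoke_test/smoke_file /tmp/smoke_file.back
diff /tmp/smoke_file /tmp/smoke_file.back && echo "HDFS round-trip OK"
$HADOOP fs -rm -r -f /smoke_test
rm -f /tmp/smoke_file /tmp/smoke_file.back
[hadoop@NameNode ~]$ bash hdfs_smoke_test.sh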





=============================================
14. System tuning: raise the system resource limits 

14.1 Check the current resource limits on each node 
[root@NameNode ~]$ ulimit -Sn    #soft limit on open file descriptors
1024
[root@NameNode ~]$ ulimit -Hn    #hard limit on open file descriptors
4096
[root@NameNode ~]$ 


14.2 Raise the resource limits on each node (current shell) 
[root@NameNode ~]# ulimit -n 10000
[root@NameNode ~]# ulimit -Sn     #soft limit on open file descriptors 
10000
[root@NameNode ~]# ulimit -Hn    #hard limit on open file descriptors
10000
[root@NameNode ~]# 


14.3 Make the new limits persistent in /etc/security/limits.conf
[root@NameNode ~]# vim /etc/security/limits.conf   #append at the end of the file
* soft nofile 10000
* hard nofile 10000


14.4 Reboot and verify that the settings stick
[root@NameNode ~]# reboot 
[root@NameNode ~]# ulimit -Sn    #soft limit on open file descriptors
10000
[root@NameNode ~]# ulimit -Hn    #hard limit on open file descriptors
10000
[root@NameNode ~]# 


14.5 Repeat the steps above on every node (a remote one-pass sketch follows below).
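
Since the same two lines are needed in /etc/security/limits.conf on every node, they can also be pushed out from the NameNode in one pass. A sketch of mine, assuming you can SSH as root to each DataNode (it will prompt for each node's root password unless root keys are set up):
[root@NameNode ~]# for h in DataNode1 DataNode2 DataNode3; do
>   ssh root@$h 'printf "* soft nofile 10000\n* hard nofile 10000\n" >> /etc/security/limits.conf'
> done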





=============================================
15. System tuning: set up an NTP time synchronization server


15.1 On the NameNode node, install the NTP server
[root@NameNode ~]# yum install -y ntp
[root@NameNode ~]# yum install -y ntpdate
[root@NameNode ~]# ntpdate -u cn.pool.ntp.org                       #sync once against a public NTP pool server
[root@NameNode ~]# date                                             #show the current time
Fri Dec 22 10:24:07 CST 2017
[root@NameNode ~]# 


15.2 On the NameNode node, edit the NTP configuration file
[root@NameNode ~]# cp /etc/ntp.conf /etc/ntp.conf.install
[root@NameNode ~]# > /etc/ntp.conf                                 #empty the file

[root@NameNode ~]# vim /etc/ntp.conf                               #write the new content below
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1 
restrict -6 ::1

# allow machines on the internal 172.16.10.0/24 network to sync time from this server
restrict 172.16.10.0 mask 255.255.255.0 nomodify notrap

# allow the upstream time servers to adjust the local clock
restrict time1.aliyun.com  nomodify notrap noquery
restrict ntp1.aliyun.com  nomodify notrap noquery

# upstream NTP servers to use
server time1.aliyun.com
server ntp1.aliyun.com

# fall back to the local clock when the external time servers are unreachable
server  127.127.1.0    
fudge   127.127.1.0 stratum 10

includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
[root@NameNode ~]# 


15.3 On the NameNode node, start the NTP service
[root@NameNode ~]# /bin/systemctl enable  ntpd.service             #start NTP at boot
[root@NameNode ~]# /bin/systemctl restart ntpd.service             #restart the NTP service
[root@NameNode ~]# ps -ef  | grep -i ntp
root      4569     1  0 11:00 ?        00:00:00 /usr/sbin/ntpd -u ntp:ntp -g
[root@NameNode ~]# 


15.4 On the NameNode node, list the time server peers
[root@NameNode ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
 time6.aliyun.co 10.137.38.86     2 u   50   64    1   25.753   16.920   0.000
 time5.aliyun.co 10.137.38.86     2 u   49   64    1   27.693   17.074   0.000
*LOCAL(0)        .LOCL.          10 l   48   64    1    0.000    0.000   0.000
[root@NameNode ~]# 


15.5 On each of the other nodes (DataNode1/2/3), run:
[root@DataNode1 ~]# yum install -y ntpdate
[root@DataNode1 ~]# ntpdate NameNode
[root@DataNode1 ~]# date 
Fri Dec 22 11:09:13 CST 2017
[root@DataNode1 ~]# 


15.6 Verify the time synchronization

On the NameNode node, write a small script of just four commands that
prints each node's time at sub-second resolution: 
[hadoop@NameNode ~]$ vim test.sh
ssh NameNode  'date +%s.%N' &
ssh DataNode1 'date +%s.%N' &
ssh DataNode2 'date +%s.%N' &
ssh DataNode3 'date +%s.%N' &
[hadoop@NameNode ~]$ 


15.7 Run the script:
[hadoop@NameNode ~]$ bash test.sh 
[hadoop@NameNode ~]$ 1513913513.057121708
1513913513.039545463
1513913513.097342009
1513913513.093183005
[hadoop@NameNode ~]$ 


15.8 The timestamps reported by all nodes are very close even at sub-second resolution, so the cluster clocks are in sync.

=============================================











 

Reposted from: https://my.oschina.net/u/3776585/blog/1619532
