HDFS Introduction and Usage Guide


HDFS Overview

As data volumes keep growing, a single operating system can no longer hold all of the data, so the data is spread across disks managed by more machines. That makes the files inconvenient to manage and maintain, so a system for managing files across multiple machines is needed: a distributed file management system. HDFS is just one kind of distributed file management system.

HDFS (Hadoop Distributed File System) is, first of all, a file system: it stores files and locates them through a directory tree. Second, it is distributed: many servers cooperate to provide its functionality, and each server in the cluster plays its own role.

HDFS usage scenario: it suits write-once, read-many workloads and does not support modifying files in place. It is a good fit for data analytics, but not for a network-drive style application.

Advantages and Disadvantages of HDFS

Advantages

1) High fault tolerance
(1) Data is automatically saved as multiple replicas; adding replicas raises fault tolerance.
(2) If one replica is lost, it can be recovered automatically.
2) Suitable for big data
(1) Data scale: it can handle datasets at the GB, TB, or even PB level.
(2) File scale: it can handle file counts on the order of millions and beyond.
3) It can be built on inexpensive machines, relying on the multi-replica mechanism to improve reliability.

Disadvantages

1) Not suitable for low-latency data access; millisecond-level access, for example, is not achievable.
2) Cannot store large numbers of small files efficiently.
(1) Storing many small files consumes a large amount of NameNode memory for directory and block metadata, which is not viable because NameNode memory is limited.
(2) For small files, seek time exceeds read time, which runs against HDFS's design goals.
3) No support for concurrent writes or random file modification.
(1) A file can have only one writer at a time; multiple concurrent writers are not allowed.
(2) Only appends are supported; random modification of file contents is not.

HDFS Architecture

1) NameNode (nn): the Master, the manager of the cluster.
(1) Manages the HDFS namespace;
(2) Configures the replication policy;
(3) Manages the block (Block) mapping information;
(4) Handles client read and write requests.

2) DataNode: the Slave. The NameNode issues the commands; the DataNodes perform the actual operations.
(1) Store the actual data blocks;
(2) Perform block read/write operations.

3) Client: the client program.
(1) File splitting. When uploading a file to HDFS, the Client splits it into Blocks and then uploads them;
(2) Interacts with the NameNode to obtain file location information;
(3) Interacts with DataNodes to read or write data;
(4) Provides commands to administer HDFS, such as formatting the NameNode;
(5) Provides commands to access HDFS, such as create, delete, read and update operations.

[Note]: File splitting is done by the client. When I first started with HDFS I mistakenly assumed the NameNode did this work, so it is worth remembering that splitting is the client's job.

4) Secondary NameNode: not a hot standby for the NameNode. When the NameNode goes down, it cannot immediately take over and serve requests.
(1) Assists the NameNode and shares part of its workload, for example periodically merging the Fsimage and Edits files and pushing the result back to the NameNode;
(2) In an emergency, it can help recover the NameNode.

A workflow diagram taken from the official site: (figure not reproduced here)
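
To see which role is actually running on which node of a live cluster, a quick look at the Java processes on each host is enough; a minimal sketch, assuming the ops01/ops02/ops03 hosts used later in this article:

# run on each node; expect NameNode / DataNode / SecondaryNameNode
# (plus the YARN daemons) depending on the node's role
jps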

HDFS Block Size

Files in HDFS are physically stored as blocks (Block). The block size is controlled by the configuration parameter dfs.blocksize; the default is 128 MB in Hadoop 2.x and later, and 64 MB in older versions.
The default can be motivated roughly as follows:
1. Suppose the seek time, i.e. the time needed to locate the target block, is about 10 ms.
2. The sweet spot is when seek time is about 1% of transfer time.
3. Therefore transfer time = 10 ms / 0.01 = 1000 ms = 1 s.
4. Typical disk transfer rates today are around 100 MB/s.
5. So block size ≈ 1 s × 100 MB/s = 100 MB, which motivates the 128 MB default.
[Note]:
(1) If the HDFS block size is set too small, seek time increases: the program spends its time locating the start of blocks.
(2) If the block size is set too large, the time to transfer the data from disk becomes much larger than the time needed to locate the start of the block, and processing that block becomes very slow.
So: the HDFS block size is determined mainly by the disk transfer rate.
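
To confirm the block size actually in effect, the value can be read back from the live configuration, and it can also be overridden for a single upload through the generic -D option; a minimal sketch (bigfile.dat is a hypothetical local file):

# print the effective block size in bytes (134217728 bytes = 128 MB by default)
hdfs getconf -confKey dfs.blocksize

# upload one file with a 256 MB block size (268435456 bytes) just for that file
hadoop fs -D dfs.blocksize=268435456 -put ./bigfile.dat /tmp/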

Day-to-Day HDFS Operations

[Note]: hadoop fs and hdfs dfs produce the same result when working against HDFS; hadoop fs is the generic file system shell (it works with any file system Hadoop supports), while hdfs dfs is specific to HDFS. In both forms the command name is a fixed unit.

wangting@ops01:/home/wangting >hadoop fs -ls /
2021-03-19 10:10:51,749 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 4 items
drwxr-xr-x   - wangting supergroup          0 2021-03-17 11:44 /20210317
-rw-r--r--   3 wangting supergroup  338075860 2021-03-12 11:50 /hadoop-3.1.3.tar.gz
drwx------   - wangting supergroup          0 2021-03-12 14:46 /tmp
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:48 /user
wangting@ops01:/home/wangting >hdfs dfs -ls /
2021-03-19 10:11:02,217 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 4 items
drwxr-xr-x   - wangting supergroup          0 2021-03-17 11:44 /20210317
-rw-r--r--   3 wangting supergroup  338075860 2021-03-12 11:50 /hadoop-3.1.3.tar.gz
drwx------   - wangting supergroup          0 2021-03-12 14:46 /tmp
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:48 /user

(0) Start the Hadoop cluster

# hdfs
wangting@ops01:/home/wangting >cd /opt/module/hadoop-3.1.3/sbin/
wangting@ops01:/opt/module/hadoop-3.1.3/sbin >ls
distribute-exclude.sh  hadoop-daemons.sh  mr-jobhistory-daemon.sh  start-all.sh       start-dfs.sh         start-yarn.sh  stop-balancer.sh  stop-secure-dns.sh  workers.sh
FederationStateStore   httpfs.sh          refresh-namenodes.sh     start-balancer.sh  start-secure-dns.sh  stop-all.cmd   stop-dfs.cmd      stop-yarn.cmd       yarn-daemon.sh
hadoop-daemon.sh       kms.sh             start-all.cmd            start-dfs.cmd      start-yarn.cmd       stop-all.sh    stop-dfs.sh       stop-yarn.sh        yarn-daemons.sh
wangting@ops01:/opt/module/hadoop-3.1.3/sbin >./start-dfs.sh

[Note]: once the environment variables are configured, there is no need to change into the sbin directory; start-dfs.sh can be run directly from the command line.

# yarn
wangting@ops02:/home/wangting >
wangting@ops02:/home/wangting >cd /opt/module/hadoop-3.1.3/sbin/
wangting@ops02:/opt/module/hadoop-3.1.3/sbin >ls
distribute-exclude.sh  hadoop-daemons.sh  mr-jobhistory-daemon.sh  start-all.sh       start-dfs.sh         start-yarn.sh  stop-balancer.sh  stop-secure-dns.sh  workers.sh
FederationStateStore   httpfs.sh          refresh-namenodes.sh     start-balancer.sh  start-secure-dns.sh  stop-all.cmd   stop-dfs.cmd      stop-yarn.cmd       yarn-daemon.sh
hadoop-daemon.sh       kms.sh             start-all.cmd            start-dfs.cmd      start-yarn.cmd       stop-all.sh    stop-dfs.sh       stop-yarn.sh        yarn-daemons.sh
wangting@ops02:/opt/module/hadoop-3.1.3/sbin >./start-yarn.sh

(1) -help: print the help for a given command

wangting@ops01:/home/wangting >hadoop fs -help ls
-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...] :
  List the contents that match the specified file pattern. If path is not
  specified, the contents of /user/<currentUser> will be listed. For a directory a
  list of its direct children is returned (unless -d option is specified).
  
  Directory entries are of the form:
  	permissions - userId groupId sizeOfDirectory(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) directoryName
  
  and file entries are of the form:
  	permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) fileName
  
    -C  Display the paths of files and directories only.
    -d  Directories are listed as plain files.
    -h  Formats the sizes of files in a human-readable fashion
        rather than a number of bytes.
    -q  Print ? instead of non-printable characters.
    -R  Recursively list the contents of directories.
    -t  Sort files by modification time (most recent first).
    -S  Sort files by size.
    -r  Reverse the order of the sort.
    -u  Use time of last access instead of modification for
        display and sorting.
    -e  Display the erasure coding policy of files and directories.

[Note]: the argument order here differs a bit from typical shell usage: -help comes first, followed by the command you want help on.

(2) -ls: list directory contents

wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting
2021-03-19 10:23:53,130 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 2 items
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:49 /user/wangting/input
drwxr-xr-x   - wangting supergroup          0 2021-03-12 14:46 /user/wangting/output

(3) -mkdir: create a directory on HDFS

wangting@ops01:/home/wangting >hadoop fs -mkdir /user/wangting/20210319
2021-03-19 10:25:00,081 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting
2021-03-19 10:25:08,201 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 3 items
drwxr-xr-x   - wangting supergroup          0 2021-03-19 10:25 /user/wangting/20210319
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:49 /user/wangting/input
drwxr-xr-x   - wangting supergroup          0 2021-03-12 14:46 /user/wangting/output

(4) -moveFromLocal: move (cut and paste) a file from the local file system to HDFS

wangting@ops01:/home/wangting >ll
total 12
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >touch 20210319.log
wangting@ops01:/home/wangting >echo "hello;csdn" >> 20210319.log 
wangting@ops01:/home/wangting >ll
total 16
-rw-rw-r-- 1 wangting wangting   11 Mar 19 10:27 20210319.log
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >cat 20210319.log 
hello;csdn
wangting@ops01:/home/wangting >hadoop fs -moveFromLocal ./20210319.log /user/wangting/20210319
2021-03-19 10:28:29,189 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:28:30,034 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >ll
total 12
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting/20210319
2021-03-19 10:28:48,348 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 1 items
-rw-r--r--   3 wangting supergroup         11 2021-03-19 10:28 /user/wangting/20210319/20210319.log

[Note]: moveFromLocal is equivalent to cutting the local file; after it runs, the source file no longer exists locally.

(5) -cat: display a file's contents

wangting@ops01:/home/wangting >hadoop fs -ls /20210317
2021-03-19 10:36:23,063 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 1 items
-rw-r--r--   3 wangting supergroup         28 2021-03-17 11:44 /20210317/20210317.log
wangting@ops01:/home/wangting >hadoop fs -cat /20210317/20210317.log
2021-03-19 10:36:33,823 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:36:34,579 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
test for hadoop file append

(6) -appendToFile: append a local file to the end of a file that already exists on HDFS

wangting@ops01:/home/wangting >ll
total 12
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >touch 20210319_new.log
wangting@ops01:/home/wangting >echo "20210319_new.log for test append to 20210319.log" >> 20210319_new.log
wangting@ops01:/home/wangting >cat 20210319_new.log 
20210319_new.log for test append to 20210319.log
wangting@ops01:/home/wangting >hadoop fs -appendToFile ./20210319_new.log /user/wangting/20210319/20210319.log
2021-03-19 10:33:41,678 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:33:42,500 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >hadoop fs -cat /user/wangting/20210319/20210319.log
2021-03-19 10:34:38,508 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:34:39,263 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
hello;csdn
20210319_new.log for test append to 20210319.log
wangting@ops01:/home/wangting >

(7) -chgrp, -chmod, -chown: same usage as in the Linux file system; change a file's group, permission bits, or owner

wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting/20210319/20210319.log
2021-03-19 10:38:02,700 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
-rw-r--r--   3 wangting supergroup         60 2021-03-19 10:33 /user/wangting/20210319/20210319.log
wangting@ops01:/home/wangting >hadoop fs -chmod 666 /user/wangting/20210319/20210319.log
2021-03-19 10:38:32,574 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting/20210319/20210319.log
2021-03-19 10:38:36,077 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
-rw-rw-rw-   3 wangting supergroup         60 2021-03-19 10:33 /user/wangting/20210319/20210319.log
wangting@ops01:/home/wangting >hadoop fs -chown niubiplus:niubiplus /user/wangting/20210319/20210319.log
2021-03-19 10:39:48,536 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting/20210319/20210319.log
2021-03-19 10:39:53,122 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
-rw-rw-rw-   3 niubiplus niubiplus         60 2021-03-19 10:33 /user/wangting/20210319/20210319.log
wangting@ops01:/home/wangting >hadoop fs -chown wangting:wangting /user/wangting/20210319/20210319.log
2021-03-19 10:40:20,355 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting/20210319/20210319.log
2021-03-19 10:40:24,174 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
-rw-rw-rw-   3 wangting wangting         60 2021-03-19 10:33 /user/wangting/20210319/20210319.log
wangting@ops01:/home/wangting >

(8) -copyFromLocal: copy a file from the local file system to an HDFS path

-put: equivalent to copyFromLocal; upload a local file to HDFS (commonly used)

wangting@ops01:/home/wangting >touch 600916.txt
wangting@ops01:/home/wangting >
wangting@ops01:/home/wangting >echo "600916 is 中国黄金;" >> 600916.txt
wangting@ops01:/home/wangting >cat 600916.txt 
600916 is 中国黄金;
wangting@ops01:/home/wangting >hadoop fs -copyFromLocal ./600916.txt /
2021-03-19 10:44:52,441 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:44:53,228 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >hadoop fs -ls /
2021-03-19 10:45:15,554 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 5 items
drwxr-xr-x   - wangting supergroup          0 2021-03-17 11:44 /20210317
-rw-r--r--   3 wangting supergroup         24 2021-03-19 10:44 /600916.txt
-rw-r--r--   3 wangting supergroup  338075860 2021-03-12 11:50 /hadoop-3.1.3.tar.gz
drwx------   - wangting supergroup          0 2021-03-12 14:46 /tmp
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:48 /user
wangting@ops01:/home/wangting >hadoop fs -cat /600916.txt
2021-03-19 10:45:33,118 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:45:34,089 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
600916 is 中国黄金;

(9) -copyToLocal: copy a file from HDFS to the local file system

-get: equivalent to copyToLocal; download a file from HDFS to the local machine (commonly used)

wangting@ops01:/home/wangting >ll
total 20
-rw-rw-r-- 1 wangting wangting   49 Mar 19 10:32 20210319_new.log
-rw-rw-r-- 1 wangting wangting   24 Mar 19 10:43 600916.txt
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >rm -f 600916.txt 20210319_new.log
wangting@ops01:/home/wangting >ll
total 12
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >hadoop fs -copyToLocal /600916.txt
2021-03-19 10:47:52,970 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:47:53,729 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >ll
total 16
-rw-r--r-- 1 wangting wangting   24 Mar 19 10:47 600916.txt
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
#########################
wangting@ops01:/home/wangting >hadoop fs -ls /20210317
2021-03-19 10:57:42,334 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 1 items
-rw-r--r--   3 wangting supergroup         28 2021-03-17 11:44 /20210317/20210317.log
wangting@ops01:/home/wangting >hadoop fs -get /20210317/20210317.log ./
2021-03-19 10:58:00,770 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:58:01,517 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >ll
total 20
-rw-r--r-- 1 wangting wangting   28 Mar 19 10:58 20210317.log
-rw-r--r-- 1 wangting wangting   24 Mar 19 10:47 600916.txt
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync

(10) -cp: copy from one HDFS path to another HDFS path

wangting@ops01:/home/wangting >hadoop fs -ls /
2021-03-19 10:49:34,869 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 5 items
drwxr-xr-x   - wangting supergroup          0 2021-03-17 11:44 /20210317
-rw-r--r--   3 wangting supergroup         24 2021-03-19 10:44 /600916.txt
-rw-r--r--   3 wangting supergroup  338075860 2021-03-12 11:50 /hadoop-3.1.3.tar.gz
drwx------   - wangting supergroup          0 2021-03-12 14:46 /tmp
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:48 /user
wangting@ops01:/home/wangting >hadoop fs -cat /600916.txt
2021-03-19 10:49:46,904 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:49:47,658 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
600916 is 中国黄金;
wangting@ops01:/home/wangting >hadoop fs -mkdir /20210319
2021-03-19 10:50:48,007 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
wangting@ops01:/home/wangting >
wangting@ops01:/home/wangting >hadoop fs -cp /600916.txt /20210319/cp_from_600916.txt
2021-03-19 10:51:36,332 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:51:37,151 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-03-19 10:51:37,284 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >hadoop fs -ls /20210319/
2021-03-19 10:52:01,934 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 1 items
-rw-r--r--   3 wangting supergroup         24 2021-03-19 10:51 /20210319/cp_from_600916.txt
wangting@ops01:/home/wangting >hadoop fs -cat /20210319/cp_from_600916.txt
2021-03-19 10:52:16,479 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 10:52:17,225 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
600916 is 中国黄金;

(11) -mv: move files within HDFS

wangting@ops01:/home/wangting >hadoop fs -ls /
2021-03-19 10:54:03,178 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 6 items
drwxr-xr-x   - wangting supergroup          0 2021-03-17 11:44 /20210317
drwxr-xr-x   - wangting supergroup          0 2021-03-19 10:51 /20210319
-rw-r--r--   3 wangting supergroup         24 2021-03-19 10:44 /600916.txt
-rw-r--r--   3 wangting supergroup  338075860 2021-03-12 11:50 /hadoop-3.1.3.tar.gz
drwx------   - wangting supergroup          0 2021-03-12 14:46 /tmp
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:48 /user
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting
2021-03-19 10:54:07,457 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 3 items
drwxr-xr-x   - wangting supergroup          0 2021-03-19 10:28 /user/wangting/20210319
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:49 /user/wangting/input
drwxr-xr-x   - wangting supergroup          0 2021-03-12 14:46 /user/wangting/output
wangting@ops01:/home/wangting >hadoop fs -mv /600916.txt /user/wangting
2021-03-19 10:54:33,457 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
wangting@ops01:/home/wangting >hadoop fs -ls /
2021-03-19 10:54:38,931 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 5 items
drwxr-xr-x   - wangting supergroup          0 2021-03-17 11:44 /20210317
drwxr-xr-x   - wangting supergroup          0 2021-03-19 10:51 /20210319
-rw-r--r--   3 wangting supergroup  338075860 2021-03-12 11:50 /hadoop-3.1.3.tar.gz
drwx------   - wangting supergroup          0 2021-03-12 14:46 /tmp
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:48 /user
wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting
2021-03-19 10:54:44,387 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 4 items
drwxr-xr-x   - wangting supergroup          0 2021-03-19 10:28 /user/wangting/20210319
-rw-r--r--   3 wangting supergroup         24 2021-03-19 10:44 /user/wangting/600916.txt
drwxr-xr-x   - wangting supergroup          0 2021-03-12 11:49 /user/wangting/input
drwxr-xr-x   - wangting supergroup          0 2021-03-12 14:46 /user/wangting/output

(12) -getmerge: download multiple files merged into a single local file

wangting@ops01:/home/wangting >hadoop fs -ls /testgetmerge
2021-03-19 11:03:33,805 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 4 items
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/a
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/b
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/c
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/d
wangting@ops01:/home/wangting >hadoop fs -cat /testgetmerge/a
2021-03-19 11:03:40,509 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 11:03:41,244 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
a
wangting@ops01:/home/wangting >hadoop fs -cat /testgetmerge/b
2021-03-19 11:03:45,445 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 11:03:46,207 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
b
wangting@ops01:/home/wangting >hadoop fs -cat /testgetmerge/c
2021-03-19 11:03:49,414 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 11:03:50,178 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
c
wangting@ops01:/home/wangting >hadoop fs -cat /testgetmerge/d
2021-03-19 11:03:53,417 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 11:03:54,181 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
d
wangting@ops01:/home/wangting >hadoop fs -getmerge /testgetmerge/* ./abcd.txt
2021-03-19 11:04:23,320 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 11:04:24,121 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
wangting@ops01:/home/wangting >ll
total 16
-rw-r--r-- 1 wangting wangting    8 Mar 19 11:04 abcd.txt
-rwxrwxr-x 1 wangting wangting  415 Mar 12 14:17 jpsall.sh
drwxrwxr-x 2 wangting wangting 4096 Mar 12 16:59 test
-rwxrwxr-x 1 wangting wangting  622 Mar 12 11:04 xsync
wangting@ops01:/home/wangting >cat abcd.txt 
a
b
c
d

[Note]: this behaves like cat a >> abcd.txt in the shell: the contents of several files are concatenated into one file, which is then downloaded to the local machine.

(13) -tail: display the end of a file

wangting@ops01:/home/wangting >hadoop fs -ls /user/wangting/input
2021-03-19 11:09:35,492 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 1 items
-rw-r--r--   3 wangting supergroup       1366 2021-03-12 11:49 /user/wangting/input/README.txt
wangting@ops01:/home/wangting >hadoop fs -tail /user/wangting/input/README.txt
2021-03-19 11:09:51,176 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2021-03-19 11:09:51,913 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
try, of 
encryption software.  BEFORE using any encryption software, please 
check your country's laws, regulations and policies concerning the
import, possession, or use, and re-export of encryption software, to 
see if this is permitted.  See <http://www.wassenaar.org/> for more
information.

The U.S. Government Department of Commerce, Bureau of Industry and
Security (BIS), has classified this software as Export Commodity 
Control Number (ECCN) 5D002.C.1, which includes information security
software using or performing cryptographic functions with asymmetric
algorithms.  The form and manner of this Apache Software Foundation
distribution makes it eligible for export under the License Exception
ENC Technology Software Unrestricted (TSU) exception (see the BIS 
Export Administration Regulations, Section 740.13) for both object 
code and source code.

The following provides more details on the included cryptographic
software:
  Hadoop Core uses the SSL libraries from the Jetty project written 
by mortbay.org.
wangting@ops01:/home/wangting >

(14) -rm: delete files or directories

wangting@ops01:/home/wangting >hadoop fs -ls /testgetmerge
2021-03-19 11:14:09,952 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 4 items
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/a
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/b
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/c
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/d
wangting@ops01:/home/wangting >hadoop fs -rm /testgetmerge/c
2021-03-19 11:14:16,853 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Deleted /testgetmerge/c
wangting@ops01:/home/wangting >hadoop fs -rm /testgetmerge/d
2021-03-19 11:14:22,755 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Deleted /testgetmerge/d
wangting@ops01:/home/wangting >hadoop fs -ls /testgetmerge
2021-03-19 11:14:27,061 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
Found 2 items
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/a
-rw-r--r--   3 wangting supergroup          2 2021-03-19 11:01 /testgetmerge/b
wangting@ops01:/home/wangting >

(15) -du: report directory size information

wangting@ops01:/home/wangting >hadoop fs -du -s -h /
2021-03-19 11:15:51,081 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
323.0 M  969.0 M  /
wangting@ops01:/home/wangting >
wangting@ops01:/home/wangting >
wangting@ops01:/home/wangting >hadoop fs -du -s -h /user
2021-03-19 11:15:58,985 INFO Configuration.deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2.7 K  8.1 K  /user

[Note]: unlike the shell, the flags after hadoop fs -du cannot be combined; hadoop fs -du -sh /, for example, does not work, and -s -h must be given as separate flags.

HDFS Replication Factor

The configured replication factor is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines, there can be at most 3 replicas; the replication factor can only really reach 10 once the cluster grows to 10 nodes. For example, my current deployment is ops01, ops02 and ops03, so NameNode plus DataNodes total just 3 nodes. If the replication factor were set to 10, there would not truly be 10 replicas, because keeping multiple copies on the same machine is meaningless and does not satisfy the high-availability requirement.
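
The replication factor of an existing file can also be inspected and changed from the command line; a minimal sketch reusing the 20210319.log file from the examples above:

# the second column of the -ls output is the file's replication factor
hadoop fs -ls /user/wangting/20210319/20210319.log

# lower this file's replication factor to 2 and wait (-w) until it takes effect
hadoop fs -setrep -w 2 /user/wangting/20210319/20210319.log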

HDFS Write Data Flow

(write pipeline diagram not reproduced here)
1) The client asks the NameNode, through the DistributedFileSystem module, to upload a file; the NameNode checks whether the target file already exists and whether the parent directory exists.
2) The NameNode replies whether the upload may proceed.
3) The client asks which DataNode servers the first Block should be uploaded to.
4) The NameNode returns 3 DataNodes, say dn1, dn2 and dn3.
5) The client asks dn1, through the FSDataOutputStream module, to receive the data; dn1 forwards the request to dn2, and dn2 to dn3, completing the transfer pipeline.
6) dn1, dn2 and dn3 acknowledge back along the pipeline to the client.
7) The client starts uploading the first Block to dn1 (first reading the data from disk into a local in-memory buffer), one Packet at a time; as dn1 receives each Packet it passes it on to dn2, and dn2 to dn3. For every packet it sends, dn1 places the packet in an ack queue to wait for acknowledgement.
8) Once one Block finishes transferring, the client again asks the NameNode for the servers to host the next Block (steps 3-7 repeat).

[Note]: when the client uploads a file, it interacts with only one DataNode; the remaining replicas are propagated between the DataNodes themselves. The client does not stream the file to several DataNodes at the same time.
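
Once an upload has finished, the block and replica placement produced by this pipeline can be inspected with fsck; a minimal sketch using the hadoop-3.1.3.tar.gz file that appears in the root-directory listings above (at a 128 MB block size, this roughly 322 MB file should map to 3 blocks):

# list the file's blocks and the DataNodes holding each replica
hdfs fsck /hadoop-3.1.3.tar.gz -files -blocks -locations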

Adding a New DataNode to HDFS

  1. Copy the hadoop-3.1.3 directory from an existing node to the new host.

  2. After the copy, delete the files left over from the previous HDFS file system on the new host (the data and logs folders under the hadoop-3.1.3 directory).

  3. Configure /etc/profile on the new host by referring to an existing node, then source it.

  4. Start the DataNode directly and it will join the cluster (a quick verification is sketched after this list).

    bin/hdfs --daemon start datanode

    bin/yarn --daemon start nodemanager
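
To verify that the new DataNode has registered with the NameNode, the cluster report can be checked from any node; a minimal sketch:

# the new host should appear in the list of live DataNodes
hdfs dfsadmin -report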

