HDFS Shell Operations

This article covers HDFS shell operations, including the basic syntax, the full command listing, and hands-on examples: `-ls` lists directory contents, `-mkdir` creates directories, `-put` copies local files to HDFS, `-rm` deletes files, `-setrep` sets a file's replication factor, and so on. These commands are essential for managing and operating the Hadoop Distributed File System.
1. Basic Syntax

bin/hadoop fs <command>  OR  bin/hdfs dfs <command>
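
Either entry point works: `hadoop fs` operates on any file system Hadoop supports, while `hdfs dfs` is specific to HDFS. For example, the following two invocations are equivalent and both list the HDFS root directory:

[test@hadoop151 ~]$ hadoop fs -ls /
[test@hadoop151 ~]$ hdfs dfs -ls /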

2. Full Command Listing
[test@hadoop151 ~]$ hadoop fs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
	[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] [-h] <path> ...]
	[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] <path> ...]
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-d] [-h] [-R] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-truncate [-w] <length> <path> ...]
	[-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
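
As a brief illustration of the generic options, `-D` overrides a configuration property for a single command. A minimal sketch (the local file name data.txt is illustrative), uploading a file with a replication factor of 2 instead of the configured default:

[test@hadoop151 ~]$ hadoop fs -D dfs.replication=2 -put ./data.txt /
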
3. Hands-on Examples

(1) -help: print usage information for a command

[test@hadoop151 ~]$ hadoop fs -help ls
-ls [-d] [-h] [-R] [<path> ...] :
  List the contents that match the specified file pattern. If path is not
  specified, the contents of /user/<currentUser> will be listed. Directory entries
  are of the form:
  	permissions - userId groupId sizeOfDirectory(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) directoryName
  
  and file entries are of the form:
  	permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) fileName
                                                                                 
  -d  Directories are listed as plain files.                                     
  -h  Formats the sizes of files in a human-readable fashion rather than a number
      of bytes.                                                                  
  -R  Recursively list the contents of directories.   

(2) -ls: list directory contents

[test@hadoop151 ~]$ hadoop fs -ls /
Found 4 items
-rw-r--r--   3 test supergroup  197657687 2020-01-30 20:50 /hadoop-2.7.2.tar.gz
drwxr-xr-x   - test supergroup          0 2020-01-30 21:04 /input
drwxr-xr-x   - test supergroup          0 2020-01-30 21:09 /output
drwx------   - test supergroup          0 2020-01-30 21:08 /tmp

(3) -mkdir: create a directory on HDFS

[test@hadoop151 ~]$ hadoop fs -mkdir -p /sanguo/shuguo
[test@hadoop151 ~]$ hadoop fs -ls /sanguo
Found 1 items
drwxr-xr-x   - test supergroup          0 2020-02-01 09:24 /sanguo/shuguo

(4) -moveFromLocal: move (cut and paste) a file from local to HDFS; the local copy is removed

[test@hadoop151 ~]$ touch a.txt
[test@hadoop151 ~]$ hadoop fs  -moveFromLocal  ./a.txt  /sanguo/shuguo
[test@hadoop151 ~]$ ls
aaa  bbb  bin  公共的  模板  视频  图片  文档  下载  音乐  桌面
[test@hadoop151 ~]$ hadoop fs -ls /sanguo/shuguo/
Found 1 items
-rw-r--r--   3 test supergroup          0 2020-02-01 09:32 /sanguo/shuguo/a.txt

(5) -appendToFile: append a local file to the end of an existing HDFS file

[test@hadoop151 ~]$ vim b
[test@hadoop151 ~]$ cat b
aaaaaaaaaaaaaa
[test@hadoop151 ~]$ hadoop fs -appendToFile b /sanguo/shuguo/a
[test@hadoop151 ~]$ hadoop fs -cat /sanguo/shuguo/a
aaaaaaaaaaaaaa

(6) -cat: display file contents

[test@hadoop151 ~]$ hadoop fs -cat /sanguo/shuguo/a
aaaaaaaaaaaaaa

(7) -chgrp, -chmod, -chown: same usage as on a Linux file system; change file ownership and permissions

-chgrp: change the group a file belongs to
-chmod: change file permissions
-chown: change the owner and group

[test@hadoop151 ~]$  hadoop fs -chgrp hyr /sanguo/shuguo/a
[test@hadoop151 ~]$ hadoop fs -chmod 666 /sanguo/shuguo/a
[test@hadoop151 ~]$ hadoop fs -chown test:test /sanguo/shuguo/a

(8) -copyFromLocal: copy a file from the local file system to an HDFS path

[test@hadoop151 /]$ hadoop fs -copyFromLocal /opt/software/jdk-8u144-linux-x64.tar.gz /
[test@hadoop151 /]$ hadoop fs -ls /
Found 6 items
-rw-r--r--   3 test supergroup  197657687 2020-01-30 20:50 /hadoop-2.7.2.tar.gz
drwxr-xr-x   - test supergroup          0 2020-01-30 21:04 /input
-rw-r--r--   3 test supergroup  185515842 2020-02-01 12:58 /jdk-8u144-linux-x64.tar.gz
drwxr-xr-x   - test supergroup          0 2020-01-30 21:09 /output
drwxr-xr-x   - test supergroup          0 2020-02-01 09:24 /sanguo
drwx------   - test supergroup          0 2020-01-30 21:08 /tmp

(9) -copyToLocal: copy from HDFS to the local file system

[test@hadoop151 /]$ hadoop fs -copyToLocal /output /opt/module
[test@hadoop151 /]$ ls /opt/module/
hadoop-2.7.2  jdk1.8.0_144  output

(10) -cp: copy from one HDFS path to another HDFS path

[test@hadoop151 /]$ hadoop fs -cp /sanguo /shuihu
[test@hadoop151 /]$ hadoop fs -ls /
Found 7 items
-rw-r--r--   3 test supergroup  197657687 2020-01-30 20:50 /hadoop-2.7.2.tar.gz
drwxr-xr-x   - test supergroup          0 2020-01-30 21:04 /input
-rw-r--r--   3 test supergroup  185515842 2020-02-01 12:58 /jdk-8u144-linux-x64.tar.gz
drwxr-xr-x   - test supergroup          0 2020-01-30 21:09 /output
drwxr-xr-x   - test supergroup          0 2020-02-01 09:24 /sanguo
drwxr-xr-x   - test supergroup          0 2020-02-01 13:06 /shuihu
drwx------   - test supergroup          0 2020-01-30 21:08 /tmp

(11) -mv: move files within HDFS

[test@hadoop151 /]$ hadoop fs -mv /output /sanguo
[test@hadoop151 /]$ hadoop fs -ls /output/sanguo
[test@hadoop151 /]$  hadoop fs -ls /sanguo
Found 2 items
drwxr-xr-x   - test supergroup          0 2020-01-30 21:09 /sanguo/output
drwxr-xr-x   - test supergroup          0 2020-02-01 09:38 /sanguo/shuguo

(12) -get: equivalent to copyToLocal; downloads a file from HDFS to the local file system
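
A minimal usage sketch (downloading the a.txt uploaded earlier; the local destination name is illustrative):

[test@hadoop151 module]$ hadoop fs -get /sanguo/shuguo/a.txt ./a.txt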

(13) -getmerge: merge multiple HDFS files and download them as one local file

[test@hadoop151 module]$ hadoop fs -getmerge /A/* ./text
[test@hadoop151 module]$ cat text 
ssssssssssss
sssssssssssssssssssss

(14) -put: equivalent to copyFromLocal; copies a local file to HDFS
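
A minimal usage sketch (assuming a local file b.txt exists; the name is illustrative):

[test@hadoop151 module]$ hadoop fs -put ./b.txt /sanguo/shuguo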

(15) -tail: display the end of a file

[test@hadoop151 module]$ hadoop fs -tail /A/2.txt
sssssssssssssssssssss

(16) -rm: delete files or directories

[test@hadoop151 module]$ hadoop fs -rm /A/2.txt
[test@hadoop151 module]$ hadoop fs -ls /A
Found 1 items
-rw-r--r--   3 test supergroup         13 2020-02-01 13:39 /A/.txt
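
Deleting a directory and everything under it requires the -r flag; a sketch that removes the /shuihu tree copied earlier (any trash-related output depends on configuration):

[test@hadoop151 module]$ hadoop fs -rm -r /shuihu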

(17) -rmdir: delete an empty directory

[test@hadoop151 module]$ hadoop fs -rmdir /abc

(18) -du: show size statistics for files and directories

[test@hadoop151 module]$ hadoop fs -du -h /
13       /A
188.5 M  /hadoop-2.7.2.tar.gz
53       /input
176.9 M  /jdk-8u144-linux-x64.tar.gz
47       /sanguo
15       /shuihu
146.3 K  /tmp
[test@hadoop151 module]$ hadoop fs -du -h -s /
365.6 M  /

(19) -setrep: set the replication factor of files in HDFS

[test@hadoop151 module]$ hadoop fs -setrep 10 /
Replication 10 set: /A/.txt
Replication 10 set: /hadoop-2.7.2.tar.gz
Replication 10 set: /input/word.txt
Replication 10 set: /jdk-8u144-linux-x64.tar.gz
Replication 10 set: /sanguo/output/_SUCCESS
Replication 10 set: /sanguo/output/part-r-00000
Replication 10 set: /sanguo/shuguo/a
Replication 10 set: /sanguo/shuguo/a.txt
Replication 10 set: /shuihu/shuguo/a
Replication 10 set: /shuihu/shuguo/a.txt
Replication 10 set: /tmp/hadoop-yarn/staging/history/done_intermediate/test/job_1580388306206_0001-1580389711205-test-word+count-1580389761005-1-1-SUCCEEDED-default-1580389731556.jhist
Replication 10 set: /tmp/hadoop-yarn/staging/history/done_intermediate/test/job_1580388306206_0001.summary
Replication 10 set: /tmp/hadoop-yarn/staging/history/done_intermediate/test/job_1580388306206_0001_conf.xml

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since the cluster currently has only 3 machines, there can be at most 3 replicas; only when the number of nodes grows to 10 can the replication factor actually reach 10.
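
To verify, the second column of -ls output shows the recorded replication factor, and hdfs fsck reports how many replicas actually exist; a sketch (output omitted):

[test@hadoop151 module]$ hadoop fs -ls /hadoop-2.7.2.tar.gz
[test@hadoop151 module]$ hdfs fsck /hadoop-2.7.2.tar.gz -files -blocks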
