1. List every command supported by the HDFS shell
[root@localhost hadoop-2.5.0]# bin/hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] <path> ...]
        [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
2. List directory and file information
hadoop fs -ls
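For example, listing a directory; the path and the listing below are illustrative, not actual output:
hadoop fs -ls /user/sunlightcs
Found 1 items
-rw-r--r--   3 root supergroup         26 2014-09-01 10:32 /user/sunlightcs/test.txt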
3. Recursively list directories, subdirectories and file information (-lsr is deprecated in Hadoop 2.x; use -ls -R instead)
hadoop fs -ls -R
4. Copy test.txt from the local file system to the /user/sunlightcs directory on HDFS
hadoop fs -put test.txt /user/sunlightcs
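-put fails if the destination file already exists; the -f flag listed in the usage above overwrites it. An illustrative overwrite, with the same assumed path:
hadoop fs -put -f test.txt /user/sunlightcs/test.txt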
5. Copy test.txt from HDFS to the local file system; the opposite of -put (when no local destination is given, the file is copied into the current directory)
hadoop fs -get /user/sunlightcs/test.txt
6. View the contents of test.txt on HDFS
hadoop fs -cat /user/sunlightcs/test.txt
7. View the last 1KB of a file (with -f, keep streaming new data as it is appended, like the Unix tail -f)
hadoop fs -tail /user/sunlightcs/test.txt
8. Delete the file test.txt from HDFS (in Hadoop 2.x, -rm refuses directories; use -rmdir to remove an empty directory)
hadoop fs -rm /user/sunlightcs/test.txt
9. Delete the /user/sunlightcs directory and all of its subdirectories (-rmr is deprecated in Hadoop 2.x; use -rm -r instead)
hadoop fs -rm -r /user/sunlightcs
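When the trash feature is enabled (see item 19 below), deleted paths are first moved to the trash; the -skipTrash flag from the usage above bypasses it and deletes immediately. Illustrative, with an assumed path:
hadoop fs -rm -r -skipTrash /user/sunlightcs/tmp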
10. Copy a file from the local file system to HDFS; equivalent to -put
hadoop fs -copyFromLocal test.txt /user/sunlightcs/test.txt
11. Copy a file from HDFS to the local file system; equivalent to -get
hadoop fs -copyToLocal /user/sunlightcs/test.txt test.txt
12. Change the group of the /user/sunlightcs directory on HDFS; the -R option applies the change recursively, just like the Linux command
hadoop fs -chgrp [-R] GROUP /user/sunlightcs
13. Change the owner of the /user/sunlightcs directory on HDFS; the -R option applies the change recursively
hadoop fs -chown [-R] OWNER[:GROUP] /user/sunlightcs
14. Change the permissions of the /user/sunlightcs directory on HDFS; MODE can be a 3-digit octal number or of the form +/-{rwx}; the -R option applies the change recursively
hadoop fs -chmod [-R] MODE /user/sunlightcs
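Both mode forms, shown on an assumed path (only the superuser or the owner of a path may change its mode):
hadoop fs -chmod 755 /user/sunlightcs
hadoop fs -chmod -R a+r /user/sunlightcs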
15. Show, for PATH, the number of subdirectories, the number of files, the total file size, and the path name; with -q, quota information is also displayed
hadoop fs -count [-q] PATH
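Without -q the output columns are DIR_COUNT, FILE_COUNT, CONTENT_SIZE and PATHNAME; the figures below are illustrative:
hadoop fs -count /user/sunlightcs
           3            8              57261 /user/sunlightcs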
16. Copy files from SRC to DST; if multiple SRC paths are given, DST must be a directory
hadoop fs -cp SRC [SRC ...] DST
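Illustrative, copying two assumed files into an assumed existing target directory:
hadoop fs -cp /user/sunlightcs/a.txt /user/sunlightcs/b.txt /user/backup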
17. Show the size of each file or directory under PATH
hadoop fs -du PATH
18. Like -du, but when PATH is a directory it shows that directory's total size (-dus is deprecated in Hadoop 2.x; use -du -s instead)
hadoop fs -du -s PATH
19. Empty the trash. When a file is deleted it is first moved into the temporary .Trash/ directory, and only after the configured delay has elapsed is it permanently removed
hadoop fs -expunge
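Note that trash is disabled by default; it is enabled by setting fs.trash.interval (the number of minutes a trashed file is kept) in core-site.xml. A minimal sketch, with an assumed one-day retention:
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>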
20. Take all files matched by SRC, merge them into a single file, and write it to LOCALDST on the local file system; the -nl option (called addnl in Hadoop 1.x) appends a newline after each file
hadoop fs -getmerge [-nl] SRC LOCALDST
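A common use is collapsing the part files produced by a MapReduce job into one local file; the paths below are assumed:
hadoop fs -getmerge -nl /user/sunlightcs/output result.txt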
21. Create a zero-length empty file
hadoop fs -touchz PATH
22. Run type checks on PATH: -e checks that the path exists, -z that the file is zero length, and -d that the path is a directory (the 2.5.0 usage above also lists -f and -s); the command returns 0 when the check passes
hadoop fs -test -[ezd] PATH
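Since the result is reported through the exit status, -test composes naturally with shell conditionals; the path below is assumed:
hadoop fs -test -e /user/sunlightcs/test.txt && echo "exists"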
23. Output a file as text. For a plain text file this is the same as -cat; compressed files (gzip, as well as Hadoop's binary sequence file format) are decompressed first
hadoop fs -text PATH
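Illustrative, with an assumed gzipped log file; since -text writes to stdout, it pipes cleanly into local tools:
hadoop fs -text /user/sunlightcs/logs/access.log.gz | head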
24. View the help text for a given command, e.g. ls
hadoop fs -help ls