1. dfs: the bin/hdfs dfs commands
appendToFile
Usage: hdfs dfs -appendToFile <localsrc> ... <dst>
Appends one or more files from the local Linux filesystem to the specified file on HDFS. Input can also be read from stdin.
· hdfs dfs -appendToFile localfile /user/hadoop/hadoopfile
· hdfs dfs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
· hdfs dfs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
· hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
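For instance, the stdin form can be fed from a shell pipe (hypothetical file names):
· echo "one more line" | hdfs dfs -appendToFile - /user/hadoop/hadoopfile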
Exit Code:
Returns 0 on success and 1 on error.
cat: view file contents
Usage: hdfs dfs -cat URI [URI ...]
Displays the contents of the given files on stdout.
Example:
· hdfs dfs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
· hdfs dfs -cat file:///file3 /user/hadoop/file4
Exit Code:
Returns 0 on success and -1 on error.
chgrp (change group): change the group of a file or directory
Usage: hdfs dfs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files.
Options
· The -R option will make the change recursively through the directory structure.
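Hypothetical examples (group name and paths are placeholders):
· hdfs dfs -chgrp hadoop /user/hadoop/file1
· hdfs dfs -chgrp -R hadoop /user/hadoop/dir1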
chmod: change the permissions of a file or directory
Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files.
Options
· The -R option will make the change recursively through the directory structure.
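Hypothetical examples using octal and symbolic modes:
· hdfs dfs -chmod 755 /user/hadoop/file1
· hdfs dfs -chmod -R u+x /user/hadoop/dir1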
chown: change the owner of a file or directory
Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files.
Options
· The -R option will make the change recursively through the directory structure.
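Hypothetical examples; OWNER and GROUP can be set together or separately:
· hdfs dfs -chown hadoop /user/hadoop/file1
· hdfs dfs -chown -R hadoop:hadoop /user/hadoop/dir1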
copyFromLocal: copy files from the local Linux filesystem to HDFS
Usage: hdfs dfs -copyFromLocal <localsrc> URI
Example: ./hdfs dfs -copyFromLocal /usr/local/java/sparkword.txt /out/wc/
Here /usr/local/java/sparkword.txt is the file on Linux and /out/wc/ is the destination directory on HDFS.
Options:
· The -f option will overwrite the destination if it already exists.
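For instance, re-running the copy above with -f to overwrite an existing destination (hypothetical target name):
· hdfs dfs -copyFromLocal -f /usr/local/java/sparkword.txt /out/wc/sparkword.txt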
copyToLocal: copy files from HDFS to the local Linux filesystem
Usage: hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Example: ./hdfs dfs -copyToLocal /out/wc/sparkword.txt /usr/local/java/sparkword.txt
Here /out/wc/sparkword.txt is the file on HDFS and /usr/local/java/sparkword.txt is the destination file on Linux.
The -crc option copies the file together with its CRC checksum so the integrity of the download can be verified; files that fail the CRC check may still be copied with the -ignorecrc option.
count: list the number of directories, number of files, and content size
Usage: hdfs dfs -count [-q] [-h] <paths>
Counts the number of directories, files, and bytes under the given paths. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The -h option shows sizes in human readable format.
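As an illustration, a hypothetical directory holding 2 subdirectories and 10 files totalling 1048576 bytes might be reported as (DIR_COUNT includes the directory itself):
· 3 10 1048576 /user/hadoop/dir1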
Example:
· hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
· hdfs dfs -count -q hdfs://nn1.example.com/file1
· hdfs dfs -count -q -h hdfs://nn1.example.com/file1
Exit Code:
Returns 0 on success and -1 on error.
cp: copy files (or directories)
Usage: hdfs dfs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>
Copies files or directories; the destination can be overwritten, and the original attributes and permissions can be preserved.
Options:
· The -f option will overwrite the destination if it already exists.
· The -p option will preserve file attributes [topax] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no arg, then preserves timestamps, ownership, permission. If -pa is specified, then preserves permission also because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag.
Example:
· hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2
· hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
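Combining the options above, a hypothetical copy that overwrites the destination while preserving timestamps, ownership, and permission:
· hdfs dfs -cp -f -p /user/hadoop/file1 /user/hadoop/file2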
Exit Code:
Returns 0 on success and -1 on error.
du: display the size of files (or directories).
Usage: hdfs dfs -du [-s] [-h] URI [URI ...]
Options:
· The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files.
· The -h option will format file sizes in a "human-readable" fashion (e.g. 64.0m instead of 67108864)
Example:
· hdfs dfs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.
dus
Usage: hdfs dfs -dus <args>
Displays a summary of file lengths.
Note: This command is deprecated. Instead use hdfs dfs -du -s.
expunge: empty the trash.
Usage: hdfs dfs -expunge
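A minimal usage sketch, assuming trash is enabled (fs.trash.interval > 0) and a hypothetical path; -rm moves the file into the user's .Trash directory, and -expunge then purges old trash checkpoints:
· hdfs dfs -rm /user/hadoop/file1
· hdfs dfs -expunge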
get: download files from HDFS
Usage: hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Example:
· hdfs dfs -get /user/hadoop/file localfile
· hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile
Exit Code:
Returns 0 on success and -1 on error.
getfacl
Usage: hdfs dfs -getfacl [-R] <path>
Displays the permission information (ACLs) of files and directories.
Options:
· -R: List the ACLs of all files and directories recursively.
· path: File or directory to list.
Examples:
· hdfs dfs -getfacl /file
· hdfs dfs -getfacl -R /dir
Exit Code:
Returns 0 on success and non-zero on error.
getfattr
Usage: hdfs dfs -getfattr [-R] -n name | -d [-e en] <path>
Displays the extended attribute names and values (if any) for a file or directory.
Options:
· -R: Recursively list the attributes for all files and directories.
· -n name: Dump the named extended attribute value.
· -d: Dump all extended attribute values associated with pathname.
· -e encoding: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
· path: The file or directory.
Examples:
· hdfs dfs -getfattr -d /file
· hdfs dfs -getfattr -R -n user.myAttr /dir
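The -e flag can be combined with the queries above; a hypothetical example requesting base64 output:
· hdfs dfs -getfattr -n user.myAttr -e base64 /file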
Exit Code:
Returns 0 on success and non-zero on error.
getmerge
Usage: hdfs dfs -getmerge <src> <localdst> [addnl]
Concatenates the files under the HDFS source path into a single file on the local filesystem. The optional addnl argument adds a newline at the end of each merged file.
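For example, merging a job's output directory into a single local file (hypothetical paths):
· hdfs dfs -getmerge /user/hadoop/output /tmp/merged.txt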
ls
Usage: hdfs dfs -ls [-R] <args>
Options:
· The -R option will return stat recursively through the directory structure.
For a file returns stat on the file with the following format:
permissions number_of_replicas userid groupid filesize modification_date modification_time filename
For a directory it returns list of its direct children as in Unix. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
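For instance, a file entry might look like this (hypothetical values):
-rw-r--r-- 3 hadoop supergroup 1366 2014-05-07 13:02 /user/hadoop/file1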
Example:
· hdfs dfs -ls /user/hadoop/file1
Exit Code:
Returns 0 on success and -1 on error.
lsr
Usage: hdfs dfs -lsr <args>
Recursive version of ls.
Note: This command is deprecated. Instead use hdfs dfs -ls -R
mkdir: create directories
Usage: hdfs dfs -mkdir [-p] <paths>
Takes path URIs as arguments and creates directories.
Options:
· The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
Example:
· hdfs dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
· hdfs dfs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
moveFromLocal: move a file from the local filesystem to HDFS
Usage: hdfs dfs -moveFromLocal <localsrc> <dst>
Similar to the put command, except that the source localsrc is deleted after it is copied.
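A hypothetical example; the local file is removed once the copy succeeds:
· hdfs dfs -moveFromLocal localfile /user/hadoop/hadoopfile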
moveToLocal: move a file from HDFS to the local filesystem
Usage: hdfs dfs -moveToLocal [-crc] <src> <dst>
Displays a "Not implemented yet" message.
mv: move files within HDFS
Usage: hdfs dfs -mv URI [URI ...] <dest>
Moves files from source to destination. This command allows multiple sources as well in which case the destination needs to be a directory. Moving files across file systems is not permitted.
Example:
· hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2
· hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
Exit Code:
Returns 0 on success and -1 on error.
put: upload files to HDFS
Usage: hdfs dfs -put <localsrc> ... <dst>
Copy single src, or multiple srcs from local file system to the destination file system. Also reads input from stdin and writes to destination file system.
· hdfs dfs -put localfile /user/hadoop/hadoopfile
· hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
· hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
· hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
Exit Code:
Returns 0 on success and -1 on error.