Hadoop Shell Commands
FS Shell
The FileSystem (FS) shell is invoked by bin/hadoop fs <args>. All the FS shell commands take path URIs as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local filesystem the scheme is file. The scheme and authority are optional. If not specified, the default scheme specified in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenodehost/parent/child or simply as /parent/child (given that your configuration is set to point to hdfs://namenodehost). Most of the commands in FS shell behave like corresponding Unix commands. Differences are described with each of the commands. Error information is sent to stderr and the output is sent to stdout.
cat
Usage: hadoop fs -cat URI [URI …]
Copies source paths to stdout.
Example:
- hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
- hadoop fs -cat file:///file3 /user/hadoop/file4
Exit Code:
Returns 0 on success and -1 on error.
chgrp
Usage: hadoop fs -chgrp [-R] GROUP URI [URI …]
Change group association of files. With -R, make the change recursively through the directory structure. The user must be the owner of files, or else a super-user. Additional information is in the Permissions User Guide.
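Example (the group name and paths below are illustrative):
- hadoop fs -chgrp -R hadoopgroup /user/hadoop/dir1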
chmod
Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI …]
Change the permissions of files. With -R, make the change recursively through the directory structure. The user must be the owner of the file, or else a super-user. Additional information is in the Permissions User Guide.
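Example (modes and paths are illustrative):
- hadoop fs -chmod -R 755 /user/hadoop/dir1
- hadoop fs -chmod u+x /user/hadoop/file1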
chown
Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI …]
Change the owner of files. With -R, make the change recursively through the directory structure. The user must be a super-user. Additional information is in the Permissions User Guide.
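Example (the owner, group, and path are illustrative):
- hadoop fs -chown -R hduser:hadoopgroup /user/hadoop/dir1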
copyFromLocal
Usage: hadoop fs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source is restricted to a local file reference.
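Example (paths are illustrative):
- hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile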
copyToLocal
Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Similar to the get command, except that the destination is restricted to a local file reference.
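Example (paths are illustrative):
- hadoop fs -copyToLocal /user/hadoop/hadoopfile localfile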
cp
Usage: hadoop fs -cp URI [URI …] <dest>
Copy files from source to destination. This command also allows multiple sources, in which case the destination must be a directory.
Example:
- hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
- hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
du
Usage: hadoop fs -du URI [URI …]
Displays the aggregate length of files contained in the directory, or the length of a file in case it is just a file.
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code:
Returns 0 on success and -1 on error.
dus
Usage: hadoop fs -dus <args>
Displays a summary of file lengths.
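Example (the path is illustrative):
- hadoop fs -dus /user/hadoop/dir1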
expunge
Usage: hadoop fs -expunge
Empty the Trash. Refer to HDFS Design for more information on the Trash feature.
get
Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Example:
- hadoop fs -get /user/hadoop/file localfile
- hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile
Exit Code:
Returns 0 on success and -1 on error.
getmerge
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input and concatenates files in src into the destination local file. Optionally, addnl can be set to enable adding a newline character at the end of each file.
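Example (paths are illustrative; addnl appends a newline after each file):
- hadoop fs -getmerge /user/hadoop/dir1 ./merged.txt addnl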
ls
Usage: hadoop fs -ls <args>
For a file returns stat on the file with the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
For a directory it returns a list of its direct children, as in Unix. A directory is listed as:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://nn.example.com/user/hadoop/dir1 /nonexistentfile
Exit Code:
Returns 0 on success and -1 on error.
lsr
Usage: hadoop fs -lsr <args>
Recursive version of ls. Similar to Unix ls -R.
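Example (the path is illustrative):
- hadoop fs -lsr /user/hadoop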
mkdir
Usage: hadoop fs -mkdir <paths>
Takes path URIs as arguments and creates directories. The behavior is much like Unix mkdir -p, creating parent directories along the path.
Example:
- hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
- hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
moveFromLocal
Usage: hadoop fs -moveFromLocal <src> <dst>
Displays a "not implemented" message.
mv
Usage: hadoop fs -mv URI [URI …] <dest>
Moves files from source to destination. This command also allows multiple sources, in which case the destination needs to be a directory. Moving files across filesystems is not permitted.
Example:
- hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
- hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
Exit Code:
Returns 0 on success and -1 on error.
put
Usage: hadoop fs -put <localsrc> ... <dst>
Copy a single src, or multiple srcs, from the local file system to the destination filesystem. Also reads input from stdin and writes to the destination filesystem.
- hadoop fs -put localfile /user/hadoop/hadoopfile
- hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
- hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
- hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile
Reads the input from stdin.
Exit Code:
Returns 0 on success and -1 on error.
rm
Usage: hadoop fs -rm URI [URI …]
Delete files specified as args. Only deletes files and empty directories. Refer to rmr for recursive deletes.
Example:
- hadoop fs -rm hdfs://nn.example.com/file /user/hadoop/emptydir
Exit Code:
Returns 0 on success and -1 on error.
rmr
Usage: hadoop fs -rmr URI [URI …]
Recursive version of delete.
Example:
- hadoop fs -rmr /user/hadoop/dir
- hadoop fs -rmr hdfs://nn.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
setrep
Usage: hadoop fs -setrep [-R] [-w] <rep> <path>
Changes the replication factor of a file. The -R option recursively changes the replication factor of files within a directory. The -w flag requests that the command wait for the replication to complete.
Example:
- hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Exit Code:
Returns 0 on success and -1 on error.
stat
Usage: hadoop fs -stat URI [URI …]
Returns the stat information on the path.
Example:
- hadoop fs -stat path
Exit Code:
Returns 0 on success and -1 on error.
tail
Usage: hadoop fs -tail [-f] URI
Displays the last kilobyte of the file to stdout. The -f option can be used as in Unix.
Example:
- hadoop fs -tail pathname
Exit Code:
Returns 0 on success and -1 on error.
test
Usage: hadoop fs -test -[ezd] URI
Options:
-e check to see if the file exists. Return 0 if true.
-z check to see if the file is zero length. Return 0 if true.
-d check to see if the path is a directory. Return 1 if true, else return 0.
Example:
- hadoop fs -test -e filename
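The other flags follow the same pattern (filename and dirname below are illustrative):
- hadoop fs -test -z filename
- hadoop fs -test -d dirname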
text
Usage: hadoop fs -text <src>
Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
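Example (the path is illustrative):
- hadoop fs -text /user/hadoop/file1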
touchz
Usage: hadoop fs -touchz URI [URI …]
Create a file of zero length.
Example:
- hadoop fs -touchz pathname
Exit Code:
Returns 0 on success and -1 on error.
DistCp
Overview
DistCp (distributed copy) is a tool used for large inter/intra-cluster copying. It uses Map/Reduce to effect its distribution, error handling and recovery, and reporting. It expands a list of files and directories into input to map tasks, each of which will copy a partition of the files specified in the source list. Its Map/Reduce pedigree has endowed it with some quirks in both its semantics and execution. The purpose of this document is to offer guidance for common tasks and to elucidate its model.
Usage
Basic
The most common invocation of DistCp is an inter-cluster copy:
bash$ hadoop distcp hdfs://nn1:8020/foo/bar \
hdfs://nn2:8020/bar/foo
This will expand the namespace under /foo/bar on nn1 into a temporary file, partition its contents among a set of map tasks, and start a copy on each TaskTracker from nn1 to nn2. Note that DistCp expects absolute paths.
One can also specify multiple source directories on the command line:
bash$ hadoop distcp hdfs://nn1:8020/foo/a \
hdfs://nn1:8020/foo/b \
hdfs://nn2:8020/bar/foo
Or, equivalently, from a file using the -f option:
bash$ hadoop distcp -f hdfs://nn1:8020/srclist \
hdfs://nn2:8020/bar/foo
Where srclist contains
hdfs://nn1:8020/foo/a
hdfs://nn1:8020/foo/b
When copying from multiple sources, DistCp will abort the copy with an error message if two sources collide, but collisions at the destination are resolved per the options specified. By default, files already existing at the destination are skipped (i.e. not replaced by the source file). A count of skipped files is reported at the end of each job, but it may be inaccurate if a copier failed for some subset of its files, but succeeded on a later attempt (see Appendix).
It is important that each TaskTracker can reach and communicate with both the source and destination file systems. For HDFS, both the source and destination must be running the same version of the protocol or use a backwards-compatible protocol (seeCopying Between Versions).
After a copy, it is recommended that one generates and cross-checks a listing of the source and destination to verify that the copy was truly successful. Since DistCp employs both Map/Reduce and the FileSystem API, issues in or between any of the three could adversely and silently affect the copy. Some have had success running with -update enabled to perform a second pass, but users should be acquainted with its semantics before attempting this.
It's also worth noting that if another client is still writing to a source file, the copy will likely fail. Attempting to overwrite a file being written at the destination should also fail on HDFS. If a source file is (re)moved before it is copied, the copy will fail with a FileNotFoundException.
Options
Option Index
Flag | Description | Notes |
---|---|---|
-p[rbugp] | Preserve r: replication number, b: block size, u: user, g: group, p: permission | Modification times are not preserved. Also, when -update is specified, status updates will not be synchronized unless the file sizes also differ (i.e. unless the file is re-created). |
-i | Ignore failures | As explained in the Appendix, this option will keep more accurate statistics about the copy than the default case. It also preserves logs from failed copies, which can be valuable for debugging. Finally, a failing map will not cause the job to fail before all splits are attempted. |
-log <logdir> | Write logs to <logdir> | DistCp keeps logs of each file it attempts to copy as map output. If a map fails, the log output will not be retained if it is re-executed. |
-m <num_maps> | Maximum number of simultaneous copies | Specify the number of maps to copy data. Note that more maps may not necessarily improve throughput. |
-overwrite | Overwrite destination | If a map fails and -i is not specified, all the files in the split, not only those that failed, will be recopied. As discussed in the following, it also changes the semantics for generating destination paths, so users should use this carefully. |
-update | Overwrite if src size differs from dst size | As noted in the preceding, this is not a "sync" operation. The only criterion examined is the source and destination file sizes; if they differ, the source file replaces the destination file. As discussed in the following, it also changes the semantics for generating destination paths, so users should use this carefully. |
-f <urilist_uri> | Use list at <urilist_uri> as src list | This is equivalent to listing each source on the command line. The urilist_uri list should be a fully qualified URI. |
Update and Overwrite
It's worth giving some examples of -update and -overwrite. Consider a copy from /foo/a and /foo/b to /bar/foo, where the sources contain the following:
hdfs://nn1:8020/foo/a
hdfs://nn1:8020/foo/a/aa
hdfs://nn1:8020/foo/a/ab
hdfs://nn1:8020/foo/b
hdfs://nn1:8020/foo/b/ba
hdfs://nn1:8020/foo/b/ab
If either -update or -overwrite is set, then both sources will map an entry to /bar/foo/ab at the destination. For both options, the contents of each source directory are compared with the contents of the destination directory. Rather than permit this conflict, DistCp will abort.
In the default case, both /bar/foo/a and /bar/foo/b will be created and neither will collide.
Now consider a legal copy using -update:
distcp -update hdfs://nn1:8020/foo/a \
hdfs://nn1:8020/foo/b \
hdfs://nn2:8020/bar
With sources/sizes:
hdfs://nn1:8020/foo/a
hdfs://nn1:8020/foo/a/aa 32
hdfs://nn1:8020/foo/a/ab 32
hdfs://nn1:8020/foo/b
hdfs://nn1:8020/foo/b/ba 64
hdfs://nn1:8020/foo/b/bb 32
And destination/sizes:
hdfs://nn2:8020/bar
hdfs://nn2:8020/bar/aa 32
hdfs://nn2:8020/bar/ba 32
hdfs://nn2:8020/bar/bb 64
Will effect:
hdfs://nn2:8020/bar
hdfs://nn2:8020/bar/aa 32
hdfs://nn2:8020/bar/ab 32
hdfs://nn2:8020/bar/ba 64
hdfs://nn2:8020/bar/bb 32
Only aa is not overwritten on nn2. If -overwrite were specified, all elements would be overwritten.
Appendix
Map sizing
DistCp makes a faint attempt to size each map comparably so that each copies roughly the same number of bytes. Note that files are the finest level of granularity, so increasing the number of simultaneous copiers (i.e. maps) may not always increase the number of simultaneous copies nor the overall throughput.
If -m is not specified, DistCp will attempt to schedule work for min(total_bytes / bytes.per.map, 20 * num_task_trackers), where bytes.per.map defaults to 256MB.
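As an illustrative example of this formula, copying 4 TB of data on a cluster with 100 TaskTrackers, with bytes.per.map left at its default, would schedule min(4 TB / 256 MB, 20 * 100) = min(16384, 2000) = 2000 maps.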
Tuning the number of maps to the size of the source and destination clusters, the size of the copy, and the available bandwidth is recommended for long-running and regularly run jobs.
Copying between versions of HDFS
For copying between two different versions of Hadoop, one will usually use HftpFileSystem. This is a read-only FileSystem, so DistCp must be run on the destination cluster (more specifically, on TaskTrackers that can write to the destination cluster). Each source is specified as hftp://<dfs.http.address>/<path> (the default dfs.http.address is <namenode>:50070).
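For example, a copy from an older source cluster might look like the following, run on the destination cluster (the hostnames, ports, and paths are illustrative):
bash$ hadoop distcp hftp://nn1.example.com:50070/foo/bar \
                    hdfs://nn2.example.com:8020/bar/foo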
Map/Reduce and other side-effects
As has been mentioned in the preceding, should a map fail to copy one of its inputs, there will be several side-effects.
- Unless -i is specified, the logs generated by that task attempt will be replaced by the previous attempt.
- Unless -overwrite is specified, files successfully copied by a previous map on a re-execution will be marked as "skipped".
- If a map fails mapred.map.max.attempts times, the remaining map tasks will be killed (unless -i is set).
- If mapred.speculative.execution is set final and true, the result of the copy is undefined.