[Study Notes] Common HDFS Shell Commands in Hadoop

1. The hadoop command

$ hadoop
	fs                   run a generic filesystem user client
		#access the file system, equivalent to hdfs dfs
	version              print the version
	jar <jar>            run a jar file
		#run a jar on YARN
	checknative [-a|-h]  check native hadoop and compression libraries availability
		#check availability of native hadoop and compression libraries
	distcp <srcurl> <desturl> copy file or directories recursively
		#copy/back up HDFS files between clusters; mostly used for operations work
	archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
	classpath            prints the class path needed to get the
	                     Hadoop jar and the required libraries
		#the class path Hadoop loads at startup
	credential           interact with credential providers
	daemonlog            get/set the log level for each daemon
	trace                view and modify Hadoop tracing settings
	CLASSNAME            run the class named CLASSNAME
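
As a quick illustration, a few common invocations of the hadoop entry point might look like the sketch below. The jar path, hostnames, and HDFS paths are placeholders for illustration only, not taken from the original notes, and all of these require a running cluster:

```shell
# Print the Hadoop build version.
hadoop version

# Check whether native Hadoop code and compression codecs are usable.
hadoop checknative -a

# Submit a MapReduce jar to YARN; jar path and HDFS paths are placeholders.
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples.jar \
    wordcount /user/demo/input /user/demo/output

# Copy a directory from one cluster to another (common for backups);
# nn1/nn2 are placeholder NameNode hosts.
hadoop distcp hdfs://nn1:8020/data hdfs://nn2:8020/backup/data
```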

2. The hdfs command

$ hdfs
Usage: hdfs [--config confdir] COMMAND
	     where COMMAND is one of:
	dfs                  run a filesystem command on the file systems supported in Hadoop.
		#access the file system
	namenode -format     format the DFS filesystem
		#format the file system; normally used only when initializing the cluster for the first time. Avoid running it afterwards, as reformatting can leave the cluster unable to start
	secondarynamenode    run the DFS secondary namenode
	namenode             run the DFS namenode
	journalnode          run the DFS journalnode
	zkfc                 run the ZK Failover Controller daemon
	datanode             run a DFS datanode
	dfsadmin             run a DFS admin client
	haadmin              run a DFS HA admin client
	fsck                 run a DFS filesystem checking utility
		#check file system health; shows block status, including corrupt and missing blocks
		#for detailed usage, see another post: [Study Notes] Best Practices for Recovering Corrupt HDFS Blocks (with exercises) https://blog.csdn.net/eryehong/article/details/95167059
	balancer             run a cluster balancing utility
		#rebalance block distribution across the nodes of the cluster; best run when the cluster is relatively idle, otherwise it can affect file reads and writes
	jmxget               get JMX exported values from NameNode or DataNode.
	mover                run a utility to move block replicas across
	                     storage types
	oiv                  apply the offline fsimage viewer to an fsimage
	oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
	oev                  apply the offline edits viewer to an edits file
	fetchdt              fetch a delegation token from the NameNode
	getconf              get config values from configuration
		#show the currently effective values of configuration keys
	groups               get the groups which users belong to
	snapshotDiff         diff two snapshots of a directory or diff the
	                     current directory contents with a snapshot
	lsSnapshottableDir   list all snapshottable dirs owned by the current user
		#Use -help to see options
	portmap              run a portmap service
	nfs3                 run an NFS version 3 gateway
	cacheadmin           configure the HDFS cache
	crypto               configure HDFS encryption zones
	storagepolicies      list/get/set block storage policies
	version              print the version
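
A few of the administrative subcommands above can be sketched as follows. The directory path is a placeholder, and these also assume a running cluster:

```shell
# Check the health of a directory tree, showing per-file block and
# location details; /user/demo is a placeholder path.
hdfs fsck /user/demo -files -blocks -locations

# Rebalance blocks until no datanode's disk usage deviates more than
# 10 percentage points from the cluster average.
hdfs balancer -threshold 10

# Read one effective configuration value, e.g. the default file system URI.
hdfs getconf -confKey fs.defaultFS
```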

3. The hdfs dfs command

$ hdfs dfs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
		#print the contents of an HDFS file
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
		#change the group of HDFS files
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
		#change the permissions of HDFS files
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
		#change the owner of HDFS files
	[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
		#copy local files to HDFS, equivalent to -put
	[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
		#copy HDFS files to the local file system, equivalent to -get
	[-count [-q] [-h] [-v] <path> ...]
	[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
		#copy HDFS files to another HDFS location
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
		#show HDFS disk space usage
	[-du [-s] [-h] <path> ...]
		#show the space used by HDFS files and directories
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
		#copy HDFS files to the local file system
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-d] [-h] [-R] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] <localsrc> ... <dst>]
		#copy local files to HDFS
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.
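
Tying the file commands and the generic options together, a typical session might look like the sketch below. The paths, file name, and the alice:staff owner are placeholders, and everything assumes a running cluster:

```shell
# Upload a local file, list it, and read it back; paths are placeholders.
hdfs dfs -mkdir -p /user/demo
hdfs dfs -put -f localfile.txt /user/demo/
hdfs dfs -ls -h /user/demo
hdfs dfs -cat /user/demo/localfile.txt

# Tighten permissions and ownership (octal 640 = rw-r-----).
hdfs dfs -chmod 640 /user/demo/localfile.txt
hdfs dfs -chown alice:staff /user/demo/localfile.txt

# Generic -D option: override a property for this one command, e.g. upload
# with replication factor 2 instead of the configured default.
hdfs dfs -D dfs.replication=2 -put localfile.txt /user/demo/two-replicas.txt
hdfs dfs -setrep -w 3 /user/demo/two-replicas.txt   # then raise it back to 3
```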