HDFS Getting Started

hdfs --help


[bigdata@bigdata ~]$ hdfs --help
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  classpath            prints the classpath
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across
                       storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                        Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      list/get/set block storage policies
  version              print the version

Most commands print help when invoked w/o parameters.
[bigdata@bigdata ~]$ 

hadoop --help

[bigdata@bigdata ~]$ hadoop --help
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
[bigdata@bigdata ~]$ 

Hadoop archives (har)

#Before archiving
[bigdata@bigdata ~]$ hdfs dfs -ls /sqoop/brand
Found 3 items
-rw-r--r--   1 bigdata supergroup      29966 2018-08-09 19:05 /sqoop/brand/bc7c8edf-1a35-4ced-9fd0-fcdbdfaad072.txt
-rw-r--r--   1 bigdata supergroup      29966 2018-08-10 06:06 /sqoop/brand/t1.txt
-rw-r--r--   1 bigdata supergroup      29966 2018-08-10 06:06 /sqoop/brand/t2.txt
[bigdata@bigdata ~]$ 
#Create the archive
hadoop archive -archiveName test.har  -p /sqoop/brand /tmp2/
#After archiving
#Note: the point of archiving is to pack many small files into one large file so they can be stored more efficiently (fewer NameNode objects). In this example the archive actually ends up slightly larger than the originals because of the index files.
[bigdata@bigdata ~]$ hdfs dfs -ls  /tmp2/test.har
Found 4 items
-rw-r--r--   1 bigdata supergroup          0 2018-08-10 06:10 /tmp2/test.har/_SUCCESS
-rw-r--r--   5 bigdata supergroup        351 2018-08-10 06:10 /tmp2/test.har/_index
-rw-r--r--   5 bigdata supergroup         23 2018-08-10 06:10 /tmp2/test.har/_masterindex
-rw-r--r--   1 bigdata supergroup      89898 2018-08-10 06:10 /tmp2/test.har/part-0
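The listing above can be checked with a little arithmetic: part-0 is simply the three source files concatenated, and the only extra bytes are the two index files. A quick sketch in Python, using the sizes reported by `hdfs dfs -ls`:

```python
# Sizes reported by `hdfs dfs -ls` in the transcripts above.
source_sizes = [29966, 29966, 29966]   # the three .txt files in /sqoop/brand
part_0 = 89898                         # concatenated data blob inside the .har
index = 351                            # _index: maps member names to offsets
masterindex = 23                       # _masterindex: index over _index

# part-0 holds the raw bytes of all archived files, back to back.
assert sum(source_sizes) == part_0

# Total on-disk size is slightly larger than the originals, which is
# why HAR is about reducing NameNode metadata, not saving disk space.
total = part_0 + index + masterindex
print(total)  # 90272
```

This confirms the note above: the win is three filesystem objects collapsed into one data file, at the cost of a few hundred bytes of index.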
#List the archive contents through the har:// filesystem
[bigdata@bigdata ~]$ hadoop fs -ls har://hdfs-bigdata:9000/tmp2/test.har
Found 3 items
-rw-r--r--   1 bigdata supergroup      29966 2018-08-09 19:05 har://hdfs-bigdata:9000/tmp2/test.har/bc7c8edf-1a35-4ced-9fd0-fcdbdfaad072.txt
-rw-r--r--   1 bigdata supergroup      29966 2018-08-10 06:06 har://hdfs-bigdata:9000/tmp2/test.har/t1.txt
-rw-r--r--   1 bigdata supergroup      29966 2018-08-10 06:06 har://hdfs-bigdata:9000/tmp2/test.har/t2.txt
[bigdata@bigdata ~]$
#Copy a file out of the archive
[bigdata@bigdata ~]$ hdfs dfs -cp har://hdfs-bigdata:9000/tmp2/test.har/t1.txt hdfs:/sqoop/
[bigdata@bigdata ~]$ hdfs dfs -ls /sqoop
Found 2 items
drwxr-xr-x   - bigdata supergroup          0 2018-08-10 06:06 /sqoop/brand
-rw-r--r--   1 bigdata supergroup      29966 2018-08-10 06:28 /sqoop/t1.txt
[bigdata@bigdata ~]$ 
##Copying with a fully qualified URI
hdfs dfs -cp har://hdfs-bigdata:9000/tmp2/test.har/t1.txt hdfs://bigdata:9000/sqoop/
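The har:// URIs used above have a regular shape: `har://<underlying-scheme>-<host>:<port>/<path-to-archive>/<member>`. A tiny helper (illustrative only; the `har_uri` function is made up here, not part of any Hadoop API) shows how the pieces fit together:

```python
def har_uri(host: str, port: int, archive: str, member: str = "") -> str:
    """Build a har:// URI for a .har archive stored on HDFS.

    The authority fuses the underlying filesystem scheme ("hdfs")
    with the NameNode host using a hyphen, which is why the examples
    above read har://hdfs-bigdata:9000/... rather than har://bigdata:9000/...
    """
    base = f"har://hdfs-{host}:{port}{archive}"
    return f"{base}/{member}" if member else base

# Reproduces the URI from the copy command above.
print(har_uri("bigdata", 9000, "/tmp2/test.har", "t1.txt"))
# har://hdfs-bigdata:9000/tmp2/test.har/t1.txt
```

Once the URI is spelled out this way, any read-only `hdfs dfs` command (`-ls`, `-cat`, `-cp`) can address individual members of the archive without unpacking it.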