Hadoop Operation Commands (continuously updated)

This post records commands used in day-to-day Hadoop operations and will be updated continuously.


1. Check the native Hadoop library (e.g., to confirm it is a 64-bit build)
[hadoop@master ~]$ file /home/hadoop/hadoop-2.6.5/lib/native/libhadoop.so.1.0.0
/home/hadoop/hadoop-2.6.5/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=b2e7abd79d152432cf38b4e0d75d10f13022ba19, not stripped
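
The file check above only tells you about the native library (here, that libhadoop.so is 64-bit). To print the Hadoop release version itself, the dedicated command is:
[hadoop@master ~]$ hadoop version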


2. List the directories under an HDFS path (here, /home/hadoop/hadoop-2.6.5/data/)
[hadoop@master bin]$ ./hadoop fs -ls /home/hadoop/hadoop-2.6.5/data/
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2016-12-16 14:32 /home/hadoop/hadoop-2.6.5/data/fs
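
For human-readable sizes (K/M/G instead of raw byte counts), ls accepts an -h flag in Hadoop 2.x releases:
[hadoop@master bin]$ ./hadoop fs -ls -h /home/hadoop/hadoop-2.6.5/data/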


3. Recursively list all files under an HDFS directory, including files in subdirectories (here, /home/hadoop/hadoop-2.6.5/data/fs)
[hadoop@master bin]$ ./hadoop fs -ls -R /home/hadoop/hadoop-2.6.5/data/fs
drwxr-xr-x   - hadoop supergroup          0 2016-12-16 14:32 /home/hadoop/hadoop-2.6.5/data/fs
-rw-r--r--   2 hadoop supergroup         22 2016-12-16 14:32 /home/hadoop/hadoop-2.6.5/data/fs/file1.txt
-rw-r--r--   2 hadoop supergroup         24 2016-12-16 14:32 /home/hadoop/hadoop-2.6.5/data/fs/file2.txt
-rw-r--r--   2 hadoop supergroup         24 2016-12-16 14:32 /home/hadoop/hadoop-2.6.5/data/fs/file3.txt


4. Check filesystem health and list the status of every file under a path

[hadoop@master ~]$ hdfs fsck /user/hadoop/wc_input/file -files
Connecting to namenode via http://master:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.0.22 for path /user/hadoop/wc_input/file at Sat Dec 31 19:41:24 CST 2016
/user/hadoop/wc_input/file <dir>
/user/hadoop/wc_input/file/file1.txt 18 bytes, 1 block(s):  OK
/user/hadoop/wc_input/file/file2.txt 17 bytes, 1 block(s):  OK
Status: HEALTHY
 Total size:    35 B
 Total dirs:    1
 Total files:   2
 Total symlinks:               0
 Total blocks (validated):     2 (avg. block size 17 B)
 Minimally replicated blocks:  2 (100.0 %)
 Over-replicated blocks:       0 (0.0 %)
 Under-replicated blocks:      0 (0.0 %)
 Mis-replicated blocks:        0 (0.0 %)
 Default replication factor:   2
 Average block replication:    2.0
 Corrupt blocks:               0
 Missing replicas:             0 (0.0 %)
 Number of data-nodes:         3
 Number of racks:              1
FSCK ended at Sat Dec 31 19:41:24 CST 2016 in 4 milliseconds

The filesystem under path '/user/hadoop/wc_input/file' is HEALTHY
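
fsck can also report the block IDs and the DataNodes holding each replica; -blocks and -locations are standard fsck options:
[hadoop@master ~]$ hdfs fsck /user/hadoop/wc_input/file -files -blocks -locations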


5. Create a directory in HDFS

[hadoop@master ~]$ hadoop fs -mkdir /user/hadoop/wc_input
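
Like the Linux mkdir -p, the -p option creates any missing parent directories in one step (the nested path below is just an illustrative example):
[hadoop@master ~]$ hadoop fs -mkdir -p /user/hadoop/wc_input/sub1/sub2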


6. Delete the file file1.txt from the HDFS directory /user/hadoop/wc_input/file
[hadoop@master bin]$ hadoop fs -rm /user/hadoop/wc_input/file/file1.txt
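
If HDFS trash is enabled (fs.trash.interval > 0), -rm only moves the file into the trash; -skipTrash deletes it immediately:
[hadoop@master bin]$ hadoop fs -rm -skipTrash /user/hadoop/wc_input/file/file1.txt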


7. Delete the HDFS directory /user/hadoop together with everything under it
[hadoop@master bin]$ hadoop fs -rmr /user/hadoop
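
-rmr is deprecated in Hadoop 2.x; the equivalent current form is:
[hadoop@master bin]$ hadoop fs -rm -r /user/hadoop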


8. Upload the local directory /home/hadoop/file to the HDFS directory /user/hadoop/wc_input

[hadoop@master ~]$ hadoop fs -put /home/hadoop/file/ /user/hadoop/wc_input
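
-copyFromLocal behaves like -put for local sources and additionally accepts -f to overwrite an existing destination:
[hadoop@master ~]$ hadoop fs -copyFromLocal -f /home/hadoop/file/ /user/hadoop/wc_input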


9. Download file2.txt from the HDFS directory /user/hadoop/wc_input/file to the local directory /home/hadoop/newfile
[hadoop@master bin]$ hadoop fs -get /user/hadoop/wc_input/file/file2.txt /home/hadoop/newfile
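
-copyToLocal is equivalent to -get; -getmerge concatenates the files under an HDFS directory into a single local file, which is handy for collecting job output (merged.txt below is just an example name):
[hadoop@master bin]$ hadoop fs -getmerge /user/hadoop/wc_input/file /home/hadoop/newfile/merged.txt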


10. View the contents of the files under the HDFS directory /user/hadoop/wc_input/file
[hadoop@master bin]$ hadoop fs -cat /user/hadoop/wc_input/file/file*.txt
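
-cat prints entire files; for a quick look at a large file, -tail shows only the last kilobyte:
[hadoop@master bin]$ hadoop fs -tail /user/hadoop/wc_input/file/file2.txt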


11. Show a full report of the filesystem (capacity, usage, and DataNode status)

[hadoop@master newfile]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


Configured Capacity: 119101992960 (110.92 GB)
Present Capacity: 97309655040 (90.63 GB)
DFS Remaining: 97307897856 (90.63 GB)
DFS Used: 1757184 (1.68 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (3):


Name: 192.168.0.22:50010 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 39700664320 (36.97 GB)
DFS Used: 630784 (616 KB)
Non DFS Used: 8733229056 (8.13 GB)
DFS Remaining: 30966804480 (28.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 78.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Dec 31 19:54:59 CST 2016
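
(Entries for the remaining two DataNodes are omitted here.) As the warning above notes, dfsadmin should now be invoked through the hdfs script rather than the hadoop script:
[hadoop@master ~]$ hdfs dfsadmin -report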



