Common Hadoop Commands

1. List running jobs

nange@ubuntu:~$ hadoop job -list
0 jobs currently running
JobId	State	StartTime	UserName	Priority	SchedulingInfo
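The first line of `hadoop job -list` carries the running-job count, so it is easy to check from a script. A small sketch, using the sample output above as stand-in data (on a live cluster, replace the here-doc with a real `hadoop job -list` call):

```shell
# Parse the running-job count from `hadoop job -list` output.
# The here-doc mirrors the sample output shown above; on a real
# cluster use:  job_list=$(hadoop job -list)
job_list=$(cat <<'EOF'
0 jobs currently running
JobId	State	StartTime	UserName	Priority	SchedulingInfo
EOF
)
# The count is the first word of the first line.
running=$(printf '%s\n' "$job_list" | awk 'NR==1 {print $1}')
echo "running jobs: $running"
```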

 

2. Kill a job (1234 is the job ID)

nange@ubuntu:~$ hadoop job -kill 1234
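To kill several jobs at once, the job IDs can be pulled out of `hadoop job -list` and fed back into `hadoop job -kill`. A sketch only: it assumes the list layout shown above (two header lines, then one row per job with the user in column 4), and `kill_user_jobs` is a hypothetical helper name, not a Hadoop command:

```shell
# Kill every running job owned by a given user.
# Assumption: `hadoop job -list` prints two header lines, then one
# row per job whose fourth column is the user name.
kill_user_jobs() {
  hadoop job -list | awk -v u="$1" 'NR > 2 && $4 == u {print $1}' |
    while read -r jobid; do
      hadoop job -kill "$jobid"
    done
}
```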

 

3. Check HDFS block status

nange@ubuntu:~$ hadoop fsck /
FSCK started by nange from /127.0.0.1 for path / at Mon Apr 21 22:26:29 CST 2014
................................
/hdfs/test/avg/out/part-r-00000:  Under replicated blk_958972218415296316_1141. Target Replicas is 3 but found 1 replica(s).
.
/hdfs/test/hdfsoper/2.f:  Under replicated blk_1024136243155007162_1126. Target Replicas is 3 but found 1 replica(s).
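Under-replicated blocks in fsck output are easy to count with grep. The here-doc below reproduces the sample lines above; on a live cluster, pipe `hadoop fsck /` into grep instead:

```shell
# Count under-replicated blocks reported by fsck.
# Stand-in data from the sample output above; replace the here-doc
# with:  hadoop fsck / | grep -c 'Under replicated'
fsck_out=$(cat <<'EOF'
/hdfs/test/avg/out/part-r-00000:  Under replicated blk_958972218415296316_1141. Target Replicas is 3 but found 1 replica(s).
/hdfs/test/hdfsoper/2.f:  Under replicated blk_1024136243155007162_1126. Target Replicas is 3 but found 1 replica(s).
EOF
)
under=$(printf '%s\n' "$fsck_out" | grep -c 'Under replicated')
echo "under-replicated: $under"
```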

 

4. Check HDFS status and delete corrupted files (-delete removes files whose blocks are corrupt or missing)

nange@ubuntu:~$ hadoop fsck / -delete

 

5. Report DFS status and DataNode information

nange@ubuntu:~$ hadoop dfsadmin -report
Configured Capacity: 155277496320 (144.61 GB)
Present Capacity: 138432897024 (128.93 GB)
DFS Remaining: 138429386752 (128.92 GB)
DFS Used: 3510272 (3.35 MB)
DFS Used%: 0%
Under replicated blocks: 14
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 155277496320 (144.61 GB)
DFS Used: 3510272 (3.35 MB)
Non DFS Used: 16844599296 (15.69 GB)
DFS Remaining: 138429386752(128.92 GB)
DFS Used%: 0%
DFS Remaining%: 89.15%
Last contact: Mon Apr 21 22:33:15 CST 2014
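The block-health counters in the report are convenient to extract for monitoring. A sketch that parses a saved report; the here-doc carries the figures from the report above, and on a real cluster you would capture the output with `report=$(hadoop dfsadmin -report)`:

```shell
# Pull block-health counters out of a saved `hadoop dfsadmin -report`.
# Stand-in data from the sample report above.
report=$(cat <<'EOF'
Under replicated blocks: 14
Blocks with corrupt replicas: 0
Missing blocks: 0
EOF
)
missing=$(printf '%s\n' "$report" | awk -F': ' '/^Missing blocks/ {print $2}')
corrupt=$(printf '%s\n' "$report" | awk -F': ' '/^Blocks with corrupt replicas/ {print $2}')
echo "missing=$missing corrupt=$corrupt"
```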

 

6. Enter and leave safe mode

nange@ubuntu:~$ hadoop dfsadmin -safemode enter
Safe mode is ON
nange@ubuntu:~$ hadoop dfsadmin -safemode leave
Safe mode is OFF
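While the NameNode is in safe mode, writes fail, so scripts often need to wait before submitting jobs. `hadoop dfsadmin -safemode get` prints the current state ("Safe mode is ON" / "Safe mode is OFF"); a polling sketch, where `wait_safemode_off` is a hypothetical helper name:

```shell
# Block until the NameNode leaves safe mode before submitting jobs.
wait_safemode_off() {
  while hadoop dfsadmin -safemode get | grep -q 'Safe mode is ON'; do
    sleep 5
  done
  echo "safe mode is off"
}
```

Hadoop also ships `hadoop dfsadmin -safemode wait`, which blocks until safe mode is off; the loop above is mainly useful when you want your own logging or a timeout.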

 

7. Hadoop parallel copy with distcp (copy data from cluster a to cluster b)

hadoop distcp hdfs://a:9000/a hdfs://b:9000/b
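For repeated copies it helps to build the distcp invocation from variables so the paths stay in one place. -update copies only files that differ at the destination, and -m caps the number of copy map tasks; the paths follow the example above:

```shell
# Build a distcp command from variables.
# -update: skip files already identical at the destination.
# -m 10:   run at most 10 copy map tasks.
SRC="hdfs://a:9000/a"
DST="hdfs://b:9000/b"
cmd="hadoop distcp -update -m 10 $SRC $DST"
echo "$cmd"
```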

 

8. Balance data across the cluster

nange@ubuntu:~$ start-balancer.sh 
starting balancer, logging to /home/nange/programs/hadoopWS/hadoop-1.2.1/libexec/../logs/hadoop-nange-balancer-ubuntu.out
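The balancer accepts a utilization threshold in percent: a DataNode counts as balanced once its usage is within that many points of the cluster average (the default is 10; smaller values rebalance more aggressively). A sketch that assembles the command:

```shell
# Run the balancer with a tighter 5% threshold instead of the default 10%.
THRESHOLD=5
cmd="start-balancer.sh -threshold $THRESHOLD"
echo "$cmd"
```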

 
