Common Hadoop Commands

Check the CPU: cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c

Check memory: free -m

Check disk space: df -m

Processor details: dmidecode | grep -A48 'Processor Information$'
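
Taken together, these make a quick single-node hardware inventory; a minimal shell sketch (the section headers are illustrative, and dmidecode requires root):

#!/bin/bash
# Print a one-screen hardware summary for the current node.
echo "== CPU model x core count =="
cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
echo "== Memory (MB) =="
free -m
echo "== Disk usage (MB) =="
df -m
echo "== Processor details (root required) =="
sudo dmidecode | grep -A48 'Processor Information$'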

Check the size of a directory

[hadoop@masternode1 ~]$ hdfs dfs -ls /
Found 8 items
drwxr-xr-x   - hadoop supergroup          0 2016-11-09 15:38 /datatmp
drwxr-xr-x   - hadoop supergroup          0 2016-10-10 11:18 /flumeTest
drwxr-xr-x   - hauser supergroup          0 2016-11-04 16:58 /hauser_home
drwxr-xr-x   - hadoop supergroup          0 2016-11-09 17:10 /hbase
drwx-wx-wx   - hadoop supergroup          0 2016-10-08 16:31 /hive
drwxr-xr-x   - hadoop supergroup          0 2016-10-09 12:57 /spark
drwx-wx-wx   - hadoop supergroup          0 2016-10-26 14:35 /tmp
drwx------   - hadoop supergroup          0 2016-10-26 14:35 /user
[hadoop@masternode1 ~]$ hadoop fsck /datatmp
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Connecting to namenode via http://masternode2:50070/fsck?ugi=hadoop&path=%2Fdatatmp
FSCK started by hadoop (auth:SIMPLE) from /192.168.237.230 for path /datatmp at Thu Nov 10 11:38:17 CST 2016
....................................................................................................
...Status: HEALTHY
 Total size:    10800425600 B
 Total dirs:    1
 Total files:    103
 Total symlinks:        0
 Total blocks (validated):    103 (avg. block size 104858500 B)
 Minimally replicated blocks:    103 (100.0 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:    0 (0.0 %)
 Mis-replicated blocks:        0 (0.0 %)
 Default replication factor:    3
 Average block replication:    3.0
 Corrupt blocks:        0
 Missing replicas:        0 (0.0 %)
 Number of data-nodes:        7
 Number of racks:        1
FSCK ended at Thu Nov 10 11:38:17 CST 2016 in 27 milliseconds
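
The summary is internally consistent: 103 blocks at an average of 104,858,500 B each comes to roughly 10,800,425,600 B (about 10 GiB of logical data, stored three times over given the replication factor of 3). For per-file block detail, fsck also accepts extra flags:

hdfs fsck /datatmp -files -blocks -locations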

(When a subcommand is typed without its leading dash, FsShell prints a hint:)
Did you mean -du?  This command begins with a dash.
[hadoop@masternode1 ~]$ hadoop fs -du -s /datatmp
10800425600  /datatmp
[hadoop@masternode1 ~]$ hadoop fs -count -q /datatmp
        none             inf            none             inf            1          103        10800425600 /datatmp
[hadoop@masternode1 ~]$
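
In the -count -q output the eight columns are QUOTA, REM_QUOTA, SPACE_QUOTA, REM_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE and PATHNAME; none/inf means no quota has been set on /datatmp. Both commands also accept -h to print sizes in human-readable units instead of raw bytes:

hadoop fs -du -s -h /datatmp
hadoop fs -count -q -h /datatmp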


Create a directory

hadoop fs -mkdir /home

Upload a file or directory to HDFS

hadoop fs -put hello /
hadoop fs -put hellodir/ /

List a directory

hadoop fs -ls /

Create an empty file

hadoop fs -touchz /wahaha

Delete a file

hadoop fs -rm /wahaha

Delete a directory (the old -rmr form is deprecated in favor of -rm -r)

hadoop fs -rm -r /home
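
Note: if the trash feature is enabled (fs.trash.interval > 0), -rm only moves the path into the user's .Trash directory. To delete immediately, add -skipTrash:

hadoop fs -rm -r -skipTrash /home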

Rename

hadoop fs -mv /hello1 /hello2

View a file

hadoop fs -cat /hello

Merge everything under the specified directory into one file and download it to the local machine

hadoop fs -getmerge /hellodir wa

Use du to report file and directory sizes

hadoop fs -du /

Copy a directory to the local filesystem

hadoop fs -copyToLocal /home localdir

Check the state of DFS (hadoop dfsadmin is deprecated in favor of hdfs dfsadmin)

hdfs dfsadmin -report
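
The report starts with cluster-wide totals (Configured Capacity, DFS Used, DFS Remaining, under-replicated and missing block counts) followed by one section per datanode. On recent Hadoop 2.x releases the per-datanode sections can be filtered:

hdfs dfsadmin -report -live
hdfs dfsadmin -report -dead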

List the Java processes that are running

jps
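
On a master node of an HA cluster like this one, the output typically looks like the following (the PIDs are illustrative):

[hadoop@masternode1 ~]$ jps
2817 NameNode
3156 DFSZKFailoverController
3381 ResourceManager
2603 JournalNode
4210 Jps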



Determine the id of the namenode to be made active. Here namenode1 is switched to active using the command-line tool:
$HADOOP_HOME/bin/hdfs haadmin -failover --forcefence --forceactive nn2 nn1
Note: the order "nn2 nn1" means the active state is transferred from nn2 to nn1 (the order matters even though nn2 was already in standby before the transition).
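
The switch can be verified by querying both namenodes before and after the failover (nn1 and nn2 are the service ids configured under dfs.ha.namenodes.<nameservice>):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2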
Get the service state

hdfs haadmin -getServiceState mnn

Check NameNode health (heartbeat)

hdfs haadmin -checkHealth snn

Check the ResourceManager (MR) HA state

yarn rmadmin -getServiceState rm1

Switch the ResourceManager state

yarn rmadmin -transitionToStandby rm1
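
As with the namenodes, both ResourceManagers can be polled to confirm the switch (rm1 and rm2 come from yarn.resourcemanager.ha.rm-ids):

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2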


  • hadoop fs -appendToFile localfile /user/hadoop/hadoopfile
  • hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
  • hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
  • hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
  • hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
  • hadoop fs -cat file:///file3 /user/hadoop/file4
  • hadoop fs -checksum hdfs://nn1.example.com/file1
  • hadoop fs -checksum file:///etc/hosts
  • hadoop fs -chgrp [-R] GROUP URI [URI ...]
  • hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]

  • hadoop fs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
  • hadoop fs -count -q hdfs://nn1.example.com/file1
  • hadoop fs -count -q -h hdfs://nn1.example.com/file1
  • hadoop fs -count -q -h -v hdfs://nn1.example.com/file1
  • hadoop fs -count -u hdfs://nn1.example.com/file1
  • hadoop fs -count -u -h hdfs://nn1.example.com/file1
  • hadoop fs -count -u -h -v hdfs://nn1.example.com/file1
  • hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
  • hadoop fs -get /user/hadoop/file localfile
  • hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile
  • hadoop fs -df /user/hadoop/dir1
  • hadoop fs -find / -name test -print
  • hadoop fs -getfacl /file
  • hadoop fs -getfacl -R /dir
  • hadoop fs -getfattr -d /file
  • hadoop fs -getfattr -R -n user.myAttr /dir
  • hadoop fs -getmerge -nl /src /opt/output.txt
  • hadoop fs -getmerge -nl /src/file1.txt /src/file2.txt /output.txt
  • hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
  • hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
  • hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
  • hadoop fs -put localfile /user/hadoop/hadoopfile
  • hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
  • hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
  • hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
  • hadoop fs -rmdir /user/hadoop/emptydir
  • hadoop fs -setfacl -m user:hadoop:rw- /file
  • hadoop fs -setfacl -x user:hadoop /file
  • hadoop fs -setfacl -b /file
  • hadoop fs -setfacl -k /dir
  • hadoop fs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
  • hadoop fs -setfacl -R -m user:hadoop:r-x /dir
  • hadoop fs -setfacl -m default:user:hadoop:r-x /dir
  • hadoop fs -setfattr -n user.myAttr -v myValue /file
  • hadoop fs -setfattr -n user.noValue /file
  • hadoop fs -setfattr -x user.myAttr /file
  • hadoop fs -setrep -w 3 /user/hadoop/dir1
  • hadoop fs -stat "%F %u:%g %b %y %n" /file
  • hadoop fs -tail pathname
  • hadoop fs -test -[defsz] URI
  • hadoop fs -touchz pathname
  • hadoop fs -truncate 55 /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -truncate -w 127 hdfs://nn1.example.com/user/hadoop/file1
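
Since -test reports its result only through the exit code, it is mainly useful in shell scripts; a minimal sketch (the path is illustrative):

# Exit code 0 means the check passed.
# -d: is a directory, -e: exists, -f: is a regular file, -s: not empty, -z: zero length
if hadoop fs -test -e /user/hadoop/hadoopfile; then
    echo "path exists"
else
    echo "path missing"
fi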
