HDFS Commands Explained

HDFS dfsadmin command details

  1. -report: print cluster report information [this is exactly the information that the heartbeat mechanism reports]

    [hadoop@master ~]$ hdfs dfsadmin -report
    Configured Capacity: 55935541248 (52.09 GB)
    Present Capacity: 39769579520 (37.04 GB)
    DFS Remaining: 39099133952 (36.41 GB)
    DFS Used: 670445568 (639.39 MB)
    DFS Used%: 1.69%
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0


    Live datanodes (3):

    Name: 192.168.204.210:50010 (slave03)
    Hostname: slave03
    Decommission Status : Normal
    Configured Capacity: 18645180416 (17.36 GB)
    DFS Used: 223481856 (213.13 MB)
    Non DFS Used: 5392871424 (5.02 GB)
    DFS Remaining: 13028827136 (12.13 GB)
    DFS Used%: 1.20%
    DFS Remaining%: 69.88%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Sat Aug 03 21:46:59 CST 2019

    Name: 192.168.204.206:50010 (slave01)
    Hostname: slave01
    Decommission Status : Normal
    Configured Capacity: 18645180416 (17.36 GB)
    DFS Used: 223481856 (213.13 MB)
    Non DFS Used: 5386186752 (5.02 GB)
    DFS Remaining: 13035511808 (12.14 GB)
    DFS Used%: 1.20%
    DFS Remaining%: 69.91%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Sat Aug 03 21:46:59 CST 2019

    Name: 192.168.204.208:50010 (slave02)
    Hostname: slave02
    Decommission Status : Normal
    Configured Capacity: 18645180416 (17.36 GB)
    DFS Used: 223481856 (213.13 MB)
    Non DFS Used: 5386903552 (5.02 GB)
    DFS Remaining: 13034795008 (12.14 GB)
    DFS Used%: 1.20%
    DFS Remaining%: 69.91%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Sat Aug 03 21:46:59 CST 2019
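
    Newer Hadoop releases also accept filters on -report (an assumption about this cluster's version; check hdfs dfsadmin -help to confirm). A sketch:

    # Restrict the report to live or dead datanodes only
    [hadoop@master ~]$ hdfs dfsadmin -report -live
    [hadoop@master ~]$ hdfs dfsadmin -report -dead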

  2. -setQuota: set a namespace quota (file and directory count) on a directory

    [The directory the quota is set on counts as one item, and each directory uploaded into it counts as one as well]

    [Do not set a namespace quota on the root directory /: it cannot be cleared afterwards]

    [hadoop@master ~]$ hadoop fs -mkdir /mumu
    [hadoop@master ~]$ hdfs dfsadmin -setQuota 3 /mumu

    [hadoop@master ~]$ hadoop fs -put tree.txt /mumu
    [hadoop@master ~]$ hadoop fs -put Public/ /mumu
    [hadoop@master ~]$ hadoop fs -put tree.txt /mumu/trees.txt
    put: The NameSpace quota (directories and files) of directory /mumu is exceeded: quota=3 file count=4

    [hadoop@master ~]$ hdfs dfsadmin -setQuota 6 /mumu
    [hadoop@master ~]$ hadoop fs -mkdir /mumu/test
    [hadoop@master ~]$ mkdir Test
    [hadoop@master ~]$ echo "zhuyili" > Test/test.txt
    [hadoop@master ~]$ hadoop fs -put Test/ /mumu

    [hadoop@master ~]$ hadoop fs -ls -R /
    drwxr-xr-x - hadoop supergroup 0 2019-08-02 04:09 /mumu
    drwxr-xr-x - hadoop supergroup 0 2019-08-02 04:02 /mumu/Public
    drwxr-xr-x - hadoop supergroup 0 2019-08-02 04:09 /mumu/Test
    -rw-r--r-- 1 hadoop supergroup 8 2019-08-02 04:09 /mumu/Test/test.txt
    drwxr-xr-x - hadoop supergroup 0 2019-08-02 04:07 /mumu/test
    -rw-r--r-- 1 hadoop supergroup 867 2019-08-02 04:01 /mumu/tree.txt
    [hadoop@master ~]$ echo "zhuzhu" > Test/test1.txt
    [hadoop@master ~]$ hadoop fs -put Test/test1.txt /mumu
    put: The NameSpace quota (directories and files) of directory /mumu is exceeded: quota=6 file count=7

    [So a directory's namespace quota actually counts: all files in the tree + all subdirectories + the directory itself (1)]
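
    [Checked against the -ls -R listing above: /mumu itself (1) + the directories Public, Test, and test (3) + the files tree.txt and Test/test.txt (2) = 6 items, so the quota of 6 is fully used, and putting test1.txt raises the count to 7, exactly as the error message reports]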

  3. -setSpaceQuota: set a space quota on a directory

    [Once a space quota is set, the remaining space quota must be greater than (file size / block size = number of blocks) × replication factor × block size, because HDFS reserves a full block per replica for every block being written]

    [hadoop@master current]$ hdfs dfsadmin -setSpaceQuota 800M /mumu
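
    A minimal sketch of that arithmetic, assuming the default 128 MB block size (the block size is not shown in this session; the -ls -R listing above shows a replication factor of 1):

    # Each block being written reserves (block size * replication) bytes
    # against the space quota, however little of the block is actually used.
    # With 128 MB blocks and replication 1, the 800M quota above allows at
    # most floor(800/128) = 6 block reservations at a time.
    [hadoop@master ~]$ hdfs dfsadmin -setSpaceQuota 100M /mumu
    # Now even a tiny put would fail: a single block reservation
    # (128 MB * 1) already exceeds the 100 MB quota.
    [hadoop@master ~]$ hadoop fs -put tree.txt /mumu/tree2.txt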

  4. -clrQuota: clear the namespace quota on a directory

    [hadoop@master ~]$ hdfs dfsadmin -clrQuota /mumu

  5. -clrSpaceQuota: clear the space quota on a directory

    [hadoop@master ~]$ hdfs dfsadmin -clrSpaceQuota /mumu
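
    With both quotas cleared, the put that failed earlier should now go through; a quick sketch:

    [hadoop@master ~]$ hadoop fs -put Test/test1.txt /mumu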

  6. Viewing quota usage

    [hadoop@master current]$ hadoop fs -count -q -h /mumu
           6            2            none             inf            2            2            211.4 M /mumu

    [Columns: namespace quota : remaining namespace quota : space quota : remaining space quota : directory count : file count : content size : path name]

    [-q: show quota details (sizes displayed in bytes)]

    [hadoop@master current]$ hadoop fs -count -h /mumu
           2            2            211.4 M /mumu

    [-h: display sizes in human-readable units]

    [-q -h: show quota details with sizes in human-readable units (K, M, G)]
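
    For comparison, the same query without -h prints sizes as raw byte counts; a sketch:

    [hadoop@master current]$ hadoop fs -count -q /mumu
    # same columns as the -q -h output above, with the content size in bytes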

  7. -metasave: save HDFS cluster node information to a file under the ${hadoop.log.dir} directory

    [hadoop@master logs]$ hdfs dfsadmin -metasave 2019-08-04-16-16
    Created metasave file 2019-08-04-16-16 in the log directory of namenode hdfs://master:9000

    [hadoop@master logs]$ cd $HADOOP_HOME/logs

    [hadoop@master logs]$ cat 2019-08-04-16-16

    9 files and directories, 6 blocks = 15 total
    Live Datanodes: 3
    Dead Datanodes: 0
    Metasave: Blocks waiting for replication: 0
    Mis-replicated blocks that have been postponed:
    Metasave: Blocks being replicated: 0
    Metasave: Blocks 0 waiting deletion from 0 datanodes.
    Metasave: Number of datanodes: 3
    192.168.204.210:50010 IN 18645180416(17.36 GB) 436146176(415.94 MB) 2.34% 12816138240(11.94 GB) 0(0 B) 0(0 B) 100.00% 0(0 B) Sun Aug 04 16:11:19 CST 2019
    192.168.204.206:50010 IN 18645180416(17.36 GB) 436146176(415.94 MB) 2.34% 12822732800(11.94 GB) 0(0 B) 0(0 B) 100.00% 0(0 B) Sun Aug 04 16:11:20 CST 2019
    192.168.204.208:50010 IN 18645180416(17.36 GB) 436146176(415.94 MB) 2.34% 12822097920(11.94 GB) 0(0 B) 0(0 B) 100.00% 0(0 B) Sun Aug 04 16:11:19 CST 2019

  8. -fetchImage: fetch the latest fsimage file from the NameNode; here it is run on a datanode

    [hadoop@slave01 ~]$ hdfs dfsadmin -fetchImage ~

    19/08/04 16:19:28 INFO namenode.TransferFsImage: Opening connection to http://master:50070/imagetransfer?getimage=1&txid=latest
    19/08/04 16:19:28 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
    19/08/04 16:19:28 INFO namenode.TransferFsImage: Transfer took 0.16s at 0.00 KB/s
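
    The fetched fsimage is a binary file; it can be converted for inspection with Hadoop's Offline Image Viewer. A minimal sketch, using a hypothetical fsimage file name:

    # Dump the downloaded fsimage to XML for reading
    # (fsimage_0000000000000000123 is a placeholder name)
    [hadoop@slave01 ~]$ hdfs oiv -p XML -i ~/fsimage_0000000000000000123 -o ~/fsimage.xml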
    
  9. Snapshots

    [hadoop@master logs]$ hdfs dfsadmin -allowSnapshot /mumu
    Allowing snaphot on /mumu succeeded

    [hadoop@master logs]$ hadoop fs -createSnapshot /mumu mumu_snapshot
    Created snapshot /mumu/.snapshot/mumu_snapshot

    [hadoop@master logs]$ hadoop fs -deleteSnapshot /mumu mumu_snapshot
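
    While a snapshot exists, it is a read-only view under the directory's .snapshot path, so a lost file can be recovered by copying it back out. A sketch (run before the -deleteSnapshot above):

    # List snapshottable directories, then browse and restore from the snapshot
    [hadoop@master ~]$ hdfs lsSnapshottableDir
    [hadoop@master ~]$ hadoop fs -ls /mumu/.snapshot/mumu_snapshot
    [hadoop@master ~]$ hadoop fs -cp /mumu/.snapshot/mumu_snapshot/tree.txt /mumu/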

Absolute paths in HDFS

  • hadoop fs -put tree.txt hdfs://master:9000/mumu

    is simply equivalent to

    hadoop fs -put tree.txt /mumu

  • User paths

    The Linux user that set up HDFS is HDFS's default user, and a user home directory can be created in HDFS for the hadoop user:

    $> hadoop fs -mkdir /user/hadoop

    Once it exists, uploads that do not specify a destination directory land in this user directory by default.

    [If the user directory has not been created and no destination is given, the put fails because the path cannot be found]

    $> hadoop fs -put tree.txt

    [The file's path is then /user/hadoop/tree.txt]

  • If the target directory already contains a file with the same name, give the upload a different destination name in the put command.

    With /mumu/tree.txt already present:

    $> hadoop fs -put tree.txt /mumu/tree

    The local tree.txt is then stored as /mumu/tree; the existing /mumu/tree.txt is left untouched (put does not rename existing files, and putting to a name that already exists simply fails), as the quick check below illustrates.
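
    A sketch of that check (assuming the files above exist):

    [hadoop@master ~]$ hadoop fs -ls /mumu
    # both the original /mumu/tree.txt and the new /mumu/tree should appear
    [hadoop@master ~]$ hadoop fs -put tree.txt /mumu/tree.txt
    # fails: put does not overwrite an existing file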
