Changing the Replication Factor on a Hadoop 2.6.0 Cluster (Part 4)

1. Check the current replica count of files in HDFS

[hadoop@master hadoop]$ hdfs dfs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:39 /system
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 10:33 /tmp
-rw-r--r--   2 hadoop supergroup          6 2017-03-16 10:33 /tmp/a.txt   <=== current replica count is 2
[hadoop@master hadoop]$
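
The replica count is the number in the second column of the listing (2 for /tmp/a.txt). For a single file it can also be printed directly with the stat sub-command; a convenience check, assuming the %r/%n format specifiers are available in your Hadoop build:

hdfs dfs -stat "replication=%r  file=%n" /tmp/a.txt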


2. Check the hdfs-site.xml configuration file

[hadoop@master hadoop]$ cat hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
 </property>
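
The property only takes effect when it sits inside the <configuration> root element of hdfs-site.xml; the snippet above is just the relevant excerpt. To ask the local client which value it will actually use, getconf can read the key back:

hdfs getconf -confKey dfs.replication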


3. Check the file system status under /


[hadoop@master hadoop]$ hdfs fsck /
Connecting to namenode via http://master:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.123.10 for path / at Thu Mar 16 23:53:10 EDT 2017
.Status: HEALTHY
 Total size:    6 B
 Total dirs:    3
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      1 (avg. block size 6 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2       <=== the current replication factor is 2
 Average block replication:     2.0

 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Mar 16 23:53:10 EDT 2017 in 23 milliseconds

The filesystem under path '/' is HEALTHY
[hadoop@master hadoop]$
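
fsck can also be pointed at an individual file to list each block together with the DataNodes holding its replicas, which is a useful cross-check of the summary above; for example:

hdfs fsck /tmp/a.txt -files -blocks -locations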


4. Change the replication factor in hdfs-site.xml to 3

[hadoop@master hadoop]$ vi hdfs-site.xml 

<property>
  <name>dfs.replication</name>
  <value>3</value>
 </property>
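
dfs.replication is a client-side setting: it is consulted when a file is written, so strictly speaking only the node you upload from needs the new value, while the "Default replication factor" that fsck reports comes from the NameNode and is only refreshed after a restart (as the following steps show). To keep the cluster consistent it is still worth pushing the edited file to the other nodes; a sketch, assuming the default etc/hadoop config directory under the install path seen in the start-up logs:

scp hdfs-site.xml hadoop@slave1:/opt/hadoop-2.6.0/etc/hadoop/
scp hdfs-site.xml hadoop@slave2:/opt/hadoop-2.6.0/etc/hadoop/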


5. Check the file system again


[hadoop@master hadoop]$ hdfs fsck /
Connecting to namenode via http://master:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.123.10 for path / at Thu Mar 16 23:56:20 EDT 2017
.Status: HEALTHY
 Total size:    6 B
 Total dirs:    3
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      1 (avg. block size 6 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2               <==== still 2 here, but a file uploaded after the change gets 3 replicas; see the experiment below
 Average block replication:     2.0

 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Mar 16 23:56:20 EDT 2017 in 3 milliseconds


The filesystem under path '/' is HEALTHY

6. Upload a new file

[hadoop@master ~]$ hdfs dfs -put b.txt /tmp
[hadoop@master ~]$ hdfs dfs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:39 /system
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:57 /tmp
-rw-r--r--   2 hadoop supergroup          6 2017-03-16 10:33 /tmp/a.txt     <== old file from before the change, still at 2; it has to be raised to 3 manually
-rw-r--r--   3 hadoop supergroup         21 2017-03-16 23:57 /tmp/b.txt   <== newly uploaded b.txt, replica count is 3
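
Because dfs.replication is applied at write time, b.txt was created with 3 replicas even though the NameNode has not been restarted. If one particular upload needs a different replica count, it can also be passed as a generic option on the command line, without touching hdfs-site.xml; a sketch with a hypothetical c.txt:

hdfs dfs -D dfs.replication=2 -put c.txt /tmp     # this single file would be written with 2 replicas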


7. Check the file system
[hadoop@master ~]$ hdfs fsck /
Connecting to namenode via http://master:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.123.10 for path / at Thu Mar 16 23:59:09 EDT 2017
..Status: HEALTHY
 Total size:    27 B
 Total dirs:    3
 Total files:   2
 Total symlinks:                0
 Total blocks (validated):      2 (avg. block size 13 B)
 Minimally replicated blocks:   2 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2                   <=== after a restart this will change to 3
 Average block replication:     2.5            <=== this has become 2.5

 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Mar 16 23:59:09 EDT 2017 in 4 milliseconds


The filesystem under path '/' is HEALTHY
[hadoop@master ~]$
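
The 2.5 average is simply the mean over the two blocks: a.txt still has 2 replicas and b.txt has 3, so (2 + 3) / 2 = 2.5. The "Default replication factor" line still reads 2 because the NameNode has not re-read hdfs-site.xml yet; the restart in the next step takes care of that.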


8. Restart HDFS, then check the file system and the replication factor


[hadoop@master hadoop]$ stop-dfs.sh 
Stopping namenodes on [master]
master: stopping namenode
slave2: stopping datanode
slave1: stopping datanode
Stopping secondary namenodes [master]
master: stopping secondarynamenode
[hadoop@master hadoop]$ 

[hadoop@master ~]$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-namenode-master.out
hadoop04: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-hadoop04.out
slave1: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-master.out
[hadoop@master ~]$ 
[hadoop@master ~]$ jps
5552 NameNode
5743 SecondaryNameNode
5895 Jps
3757 ResourceManager
[hadoop@master ~]$ hdfs fsck /
Connecting to namenode via http://master:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.123.10 for path / at Fri Mar 17 01:51:01 EDT 2017
..Status: HEALTHY
 Total size:    27 B
 Total dirs:    3
 Total files:   2
 Total symlinks:                0
 Total blocks (validated):      2 (avg. block size 13 B)
 Minimally replicated blocks:   2 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3                <=== after the HDFS restart the replication factor is now 3
 Average block replication:     2.5

 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Fri Mar 17 01:51:01 EDT 2017 in 19 milliseconds

The filesystem under path '/' is HEALTHY
[hadoop@master ~]$
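
The jps output above only shows the daemons running on master; the DataNodes run on the slave nodes. As an extra check that is not part of the original walkthrough, the report sub-command of dfsadmin can confirm that all three DataNodes re-registered after the restart:

hdfs dfsadmin -report     # lists live/dead DataNodes and per-node capacity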


9. Manually adjust the replica count

[hadoop@master ~]$ hdfs dfs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:39 /system
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:57 /tmp
-rw-r--r--   2 hadoop supergroup          6 2017-03-16 10:33 /tmp/a.txt  <=== before the manual change
-rw-r--r--   3 hadoop supergroup         21 2017-03-16 23:57 /tmp/b.txt
[hadoop@master ~]$ hadoop fs -setrep -w 3 -R /tmp 
setrep: `-R': No such file or directory
Replication 3 set: /tmp/a.txt
Replication 3 set: /tmp/b.txt
Waiting for /tmp/a.txt .... done
Waiting for /tmp/b.txt ... done
[hadoop@master ~]$ hdfs dfs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:39 /system
drwxr-xr-x   - hadoop supergroup          0 2017-03-16 23:57 /tmp
-rw-r--r--   3 hadoop supergroup          6 2017-03-16 10:33 /tmp/a.txt  <=== after the manual change
-rw-r--r--   3 hadoop supergroup         21 2017-03-16 23:57 /tmp/b.txt
[hadoop@master ~]$
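
The setrep: `-R': No such file or directory message appears because -R was placed after the replica count, so it was parsed as a path. In Hadoop 2.x, -R belongs before the number and is only kept for backwards compatibility anyway, since setrep already works recursively on directories; the command therefore still did its job on /tmp. A cleaner form of the same call:

hdfs dfs -setrep -w 3 /tmp     # recursive for a directory; -w waits until every block reaches 3 replicas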

Notes: 1. In this experiment, changing dfs.replication to 3 only affects files uploaded afterwards; existing files keep their old replica count and have to be changed manually (a quick way to find such files is sketched after these notes).

           2. When running hadoop fs -setrep -w 3 -R /tmp, make sure /tmp is relatively idle, i.e. no jobs are actively writing to it at the time.
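
A quick way to list files that are still below the new factor, built on the same ls -R output shown earlier (the replica count is the second column for regular files); a convenience sketch:

hdfs dfs -ls -R / | awk '$1 !~ /^d/ && $2 < 3'     # print files whose replica count is under 3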
--------------------- 
Author: forever19870418 
Source: CSDN 
Original: https://blog.csdn.net/forever19870418/article/details/62890866 
Copyright notice: this is an original post by the blogger; please include a link to it when reposting.
