HDFS Read/Write Flow


1. Write Data Flow

[Figure: HDFS write data flow diagram]


1.1 Write Flow Steps:

  1. The client asks the NameNode to upload a file; the NameNode checks whether the target file already exists and whether the parent directory exists.
  2. The NameNode replies whether the upload may proceed.
  3. The client splits the file into blocks and asks the NameNode which DataNodes the first block should be written to.
  4. The NameNode returns three DataNodes: DataNode1, DataNode2, and DataNode3.
  5. The client asks DataNode1 (the closest of the three in the network topology) to receive the data (an RPC call that sets up the pipeline); DataNode1 in turn calls DataNode2, and DataNode2 calls DataNode3, completing the pipeline.
  6. DataNode1, DataNode2, and DataNode3 acknowledge back to the client level by level along the pipeline.
  7. The client starts uploading the first block to DataNode1 (reading the data from disk into a local in-memory buffer first), one packet at a time; the DataNodes verify checksums as the data is written. When DataNode1 receives a packet it forwards it to DataNode2, and DataNode2 forwards it to DataNode3; for every packet it sends, DataNode1 places it in an ack queue and waits for the acknowledgement.

When one block has finished transferring, the client asks the NameNode again for DataNodes for the next block (repeating steps 3-7).
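The pipeline setup and packet-level acknowledgements described above all happen inside the HDFS client library; application code only sees the FileSystem API. Below is a minimal write sketch in Java, assuming a NameNode at hdfs://namenode:8020 and the local/HDFS paths shown (all placeholders).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode address

        try (FileSystem fs = FileSystem.get(conf);
             InputStream in = new BufferedInputStream(new FileInputStream("/tmp/local.txt"));
             FSDataOutputStream out = fs.create(new Path("/user/demo/remote.txt"))) {
            // create() performs the NameNode checks; as bytes are copied, the
            // client library buffers them into packets and pushes them down the
            // DataNode pipeline described in steps 5-7.
            IOUtils.copyBytes(in, out, 4096, false);
        }
    }
}
```

Roughly, create() covers the NameNode checks, while the byte-copying loop drives block allocation and the packet pipeline behind the scenes.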

2. Read Data Flow

 

2.1 Read Flow Steps

  1. The client uses DistributedFileSystem to ask the NameNode for the file; the NameNode looks up its metadata and returns the addresses of the DataNodes that hold the file's blocks.
  2. The client picks a DataNode (nearest in the network topology, chosen randomly among equally near ones) and opens a socket stream to read the data.
  3. The DataNode streams the data to the client (reading it from disk into an input stream, verifying checksums packet by packet).
  4. The client receives the data packet by packet, buffers it locally, and then writes it to the target file.
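A matching read sketch, again assuming a placeholder NameNode address and file path: open() asks the NameNode for block locations, and the returned stream pulls packets from a nearby DataNode, with checksum verification handled by the client library.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode address

        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/user/demo/remote.txt"))) {
            // The stream reads block by block from the DataNodes chosen by the
            // NameNode; here the bytes are simply copied to stdout.
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
    }
}
```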

2.2 Replica Selection


To minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If an HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.
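For illustration, a client can inspect the replica locations the NameNode works from via getFileBlockLocations(); this sketch (placeholder URI and path) prints the hosts that store each block of a file.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder

        try (FileSystem fs = FileSystem.get(conf)) {
            FileStatus status = fs.getFileStatus(new Path("/user/demo/remote.txt"));
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                // Each block lists the hosts holding its replicas; the NameNode
                // prefers the replica closest to the reader in the topology.
                System.out.println("offset=" + block.getOffset()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
        }
    }
}
```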


 

The NameNode determines the rack id each DataNode belongs to via the process outlined in Hadoop Rack Awareness. A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster which makes it easy to balance load on component failure. However, this policy increases the cost of writes because a write needs to transfer blocks to multiple racks.
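The rack id resolution mentioned here is normally configured on the NameNode through a topology script. As a rough sketch, the standard property is net.topology.script.file.name; the script path below is a placeholder, and the script itself is expected to map host names or IPs to rack paths such as /rack1.

```java
import org.apache.hadoop.conf.Configuration;

public class RackAwarenessConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Points Hadoop at an external script that resolves DataNode hosts to racks.
        // The path is a placeholder; in practice this is set in core-site.xml on the NameNode.
        conf.set("net.topology.script.file.name", "/etc/hadoop/conf/topology.sh");
        System.out.println(conf.get("net.topology.script.file.name"));
    }
}
```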

For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.
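A small sketch of working with the replication factor this policy is built around: dfs.replication sets the client-side default for new files, and FileSystem.setReplication() changes it per file afterwards (URI and path are placeholders).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder
        conf.setInt("dfs.replication", 3);                // default replication for new files

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/remote.txt");
            // Replication can also be adjusted per file after it is written;
            // the NameNode then schedules extra copies or deletions as needed.
            fs.setReplication(file, (short) 2);
            short current = fs.getFileStatus(file).getReplication();
            System.out.println("replication factor: " + current);
        }
    }
}
```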

 


 
