1、Simple Coherency Model
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.
Question: why does a web crawler application fit this write-once-read-many model so well?
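The write-once-read-many model above can be sketched as a toy in-memory file: writes are accepted only until the file is closed, after which it is read-only. This is an illustration of the coherency model, not the HDFS API; the `WriteOnceFile` class is invented for this sketch.

```python
# Toy model of HDFS's write-once-read-many coherency (hypothetical class,
# not the real org.apache.hadoop.fs API).

class WriteOnceFile:
    def __init__(self):
        self._chunks = []
        self._closed = False

    def write(self, data: bytes) -> None:
        if self._closed:
            # Mirrors HDFS: once created, written, and closed,
            # a file need not (and cannot) be changed.
            raise PermissionError("file is closed; writes are rejected")
        self._chunks.append(data)

    def close(self) -> None:
        self._closed = True

    def read(self) -> bytes:
        # Reads are allowed any number of times after close.
        return b"".join(self._chunks)


f = WriteOnceFile()
f.write(b"crawled page 1\n")
f.write(b"crawled page 2\n")
f.close()
assert f.read() == f.read()  # read-many: every read sees identical data
try:
    f.write(b"update")        # write after close fails, as in HDFS
except PermissionError:
    pass
```

Because no writer can ever modify a closed file, readers never need locks or cache invalidation, which is what enables the high-throughput streaming reads mentioned above.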
2、Replica Placement: The First Baby Steps
However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.
Question 1: Is the reduction in aggregate network bandwidth used caused by the smaller number of racks to choose from?
Question 2: The text above says this reduces the aggregate network bandwidth used; why does it then say the policy does not affect read performance?
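The placement policy described above can be sketched as follows: the first replica goes on the writer's own node, the second on a node in a different rack, and the third on another node in that same remote rack, so a block ends up in exactly two unique racks. This is a simplified sketch under those stated assumptions; `place_replicas` and the topology dict are invented for illustration, not Hadoop code.

```python
import random

def place_replicas(writer, topology, rng=random):
    """Sketch of HDFS's default 3-replica rack-aware placement.

    writer:   (rack, node) of the client writing the block
    topology: dict mapping rack name -> list of node names
    """
    local_rack, local_node = writer
    # Replica 1: the writer's local node (cheap, no network hop).
    placement = [(local_rack, local_node)]
    # Replica 2: a node in a different rack (survives a rack failure).
    remote_rack = rng.choice([r for r in topology if r != local_rack])
    second = rng.choice(topology[remote_rack])
    placement.append((remote_rack, second))
    # Replica 3: a *different* node in the same remote rack, so only one
    # inter-rack transfer happens during the write pipeline.
    third = rng.choice([n for n in topology[remote_rack] if n != second])
    placement.append((remote_rack, third))
    return placement

topology = {
    "rack1": ["n1", "n2", "n3"],
    "rack2": ["n4", "n5", "n6"],
    "rack3": ["n7", "n8", "n9"],
}
placement = place_replicas(("rack1", "n1"), topology)
racks_used = {rack for rack, _ in placement}
print(len(racks_used))  # 2 — the block lives in two unique racks, not three
```

The sketch also suggests an answer to Question 2: a reader still has three replicas to pull from, and HDFS prefers the replica closest to the reader, so read performance is unaffected even though the write pipeline crosses racks only once.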
- `hadoop distcp file1 file2`: does it also parallelize the copy of a single file? If so, does it do that by splitting the file into pieces?