Big Data Notes: Reading the HDFS Official Documentation
Tags: Big Data
Introduction
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is part of the Apache Hadoop Core project. The project URL is http://hadoop.apache.org/.
In short: HDFS is highly fault-tolerant, highly reliable, and highly scalable, and can be deployed on low-cost hardware.
It provides high-throughput access to application data and is well suited to applications with large data sets.
In general, Hadoop/HDFS:
- is designed for large-scale data throughput rather than low-latency data access
- favors streaming access to data sets (write once, read many) and is not suited to frequently modified files
- is designed more for batch processing than for interactive use by users.
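The write-once/read-many pattern above is visible in everyday use of the HDFS shell. A minimal sketch with the standard `hdfs dfs` commands, assuming a running HDFS cluster with `hdfs` on the PATH; the `/user/demo` path and `access.log` file are hypothetical examples:

```shell
hdfs dfs -mkdir -p /user/demo            # create a directory in HDFS
hdfs dfs -put access.log /user/demo/     # write once: upload a local file
hdfs dfs -cat /user/demo/access.log      # read many: stream the file back
hdfs dfs -ls /user/demo                  # list directory contents
# There is no command to edit an HDFS file in place; to change data
# you rewrite the whole file (or append with -appendToFile).
```

Note that the shell offers upload, streaming read, and append, but no in-place edit, which reflects the design choice described above.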
NameNode and DataNodes
HDFS architecture and internals
HDFS has a master/slave architecture.