(1) What Is Apache Hadoop?
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
(2) The project includes these modules:
- Hadoop Common: The common utilities that support the other Hadoop modules.
- Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
- Hadoop YARN: A framework for job scheduling and cluster resource management.
- Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
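The MapReduce model named above can be sketched in a few lines of plain Python. This is a minimal, in-memory illustration of the map, shuffle, and reduce phases only, not Hadoop's actual Java API (real jobs extend Hadoop's `Mapper` and `Reducer` classes and run distributed on YARN); all function names below are invented for the sketch.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word, as a Mapper would.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: sum the counts for one word, as a Reducer would.
    return (key, sum(values))

# Each string stands in for one input split of a large data set.
documents = ["hadoop stores data", "hadoop processes data"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts["hadoop"])  # -> 2
```

The point of the model is that `map_phase` and `reduce_phase` are independent per key, so the framework can run them in parallel across thousands of machines and rerun them on another node if one fails.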
(3) Other Hadoop-related projects at Apache include:
- Ambari™: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heatmaps, and the ability to view MapReduce, Pig, and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
- Avro™: A data serialization system.
- Cassandra™: A scalable multi-master database with no single points of failure.
- Chukwa™: A data collection system for managing large distributed systems.
- HBase™: A scalable, distributed database that supports structured data storage for large tables.
- Hive™: A data warehouse infrastructure that provides data summarization and ad hoc querying.
- Mahout™: A scalable machine learning and data mining library.
- Pig™: A high-level data-flow language and execution framework for parallel computation.
- Spark™: A fast and general compute engine for Hadoop data. Spark provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation.
- Tez™: A generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use cases. Tez is being adopted by Hive™, Pig™, and other frameworks in the Hadoop ecosystem, and also by other commercial software (e.g. ETL tools), to replace Hadoop™ MapReduce as the underlying execution engine.
- ZooKeeper™: A high-performance coordination service for distributed applications.
- Apache Lucene
- Apache Nutch: an open-source web search engine
- Google's three foundational papers: MapReduce / GFS / BigTable
- Apache Hadoop