Classifying the SQL-on-Hadoop Solutions

Posted on October 2, 2013 at 10:00 am.

Almost a year and a half ago on this blog, I went on something that is probably best described as an anti-DBMS/Hadoop-connector rant. There was then (as there still is now) an incredible number of use cases that require the combination of DBMS and Hadoop technologies, and at the time, both the Hadoop vendors and the DBMS vendors were pushing a “connector” approach, where the customer buys both a Hadoop product and a DBMS product and data can be passed back and forth between the two systems. I explained the architectural wastefulness associated with this approach, and why, given the way that parallel database systems and Hadoop are designed, it is relatively easy (architecturally speaking) to combine them into a single system. At the time, only two solutions took the combined-system approach: Hive and Hadapt.

Since that post was written, it is good to see that several vendors have abandoned the connector approach and have instead launched initiatives (such as Stinger, Impala, and Drill) that, while still immature, follow (or extend) Hive and Hadapt in bringing SQL technologies directly to Hadoop clusters. In my opinion, this is absolutely the right direction for the market, and it will further Hadoop’s dominance in the data processing and analysis space.

Given the rapid entrance of these new “SQL-on-Hadoop” initiatives, now is a good time to classify them and study the similarities and differences between these approaches.

Before comparing and contrasting six approaches to SQL-on-Hadoop (Hive, Hadapt, Stinger, Impala, Polybase, and Drill), I should explain why these are the only approaches compared in this post: since the DBMS/Hadoop connector approach is so fundamentally flawed from an architectural perspective, vendors that use it remain in a different category and are not directly competitive with native approaches to SQL-on-Hadoop. Even recent attempts from Greenplum and Aster Data to retrofit their MPP databases to work on Hadoop clusters, through the HAWQ and SQL-H projects respectively, still fundamentally use the connector approach: at query time, data is extracted out of HDFS and sent over the network into their MPP execution engines for further processing. Even if the MPP execution engine sits on the same physical cluster as HDFS, if processing is not pushed down to the same nodes that store the data, the MPP database is essentially treating HDFS as a large (cheap) shared-disk storage system, and it inherits the scalability constraints and network bottlenecks associated with that design. Shared-disk architectures are fundamentally antithetical to the Google-made-famous “shared-nothing” design that Hadoop emulates, where processing is pushed as close to the data as possible. This is why these MPP+Hadoop vendors typically bundle hardware with software, so that high-end and expensive networking gear can be integrated into the cluster to hide the fundamental limitations of the shared-disk architecture.
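
To see why the shared-disk pattern hits a network wall, consider a quick back-of-envelope comparison, sketched below in Java. Every figure (table size, filter selectivity, link bandwidth, node count) is an illustrative assumption, not a measurement of any of these systems.

```java
// Back-of-envelope comparison of data movement in the two architectures.
// All figures are illustrative assumptions, not measurements.
public class DataMovementSketch {
    public static void main(String[] args) {
        double tableTb = 10.0;              // assumed table size: 10 TB
        double selectivity = 0.01;          // assumed filter keeps 1% of rows
        double gbPerSecPerNode = 10.0 / 8;  // assumed 10 Gbit/s link = 1.25 GB/s per node
        int nodes = 100;                    // assumed cluster size

        // Connector / shared-disk: the full table crosses the network
        // to the MPP engine before any predicate is applied.
        double connectorGb = tableTb * 1024;

        // Push-down / shared-nothing: each node scans its local blocks and
        // only the filtered result needs to cross the network.
        double pushDownGb = tableTb * 1024 * selectivity;

        System.out.printf("connector transfer: %.0f GB (~%.0f s at aggregate cluster bandwidth)%n",
                connectorGb, connectorGb / (gbPerSecPerNode * nodes));
        System.out.printf("push-down transfer: %.0f GB (~%.1f s)%n",
                pushDownGb, pushDownGb / (gbPerSecPerNode * nodes));
    }
}
```

Under these assumptions the connector approach moves two orders of magnitude more data over the network, which is why bundled high-end networking gear is needed to paper over the difference.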

Therefore, we are left with the above-mentioned six technologies to compare. (It’s possible that there are additional SQL-on-Hadoop solutions that I’m not aware of – if so, please add them via the comment thread below). They are best divided into three categories, with two technologies placed inside each category:

(1)   SQL translated to MapReduce jobs over a Hadoop cluster. Both Hive and Stinger (without Tez) fall into this category. A SQL query that is sent to a Hadoop cluster is translated into a series of MapReduce jobs, which are then processed by the cluster (a minimal sketch of such a translation appears after this list). A major advantage of this approach is that by integrating with Hadoop’s version of MapReduce, queries are run with Hadoop’s dynamic scheduler and are therefore highly tolerant of unexpected performance issues and other forms of heterogeneous performance across the cluster. Furthermore, they leverage MapReduce’s mid-query fault tolerance, so that nodes that fail in the middle of query processing do not cause the entire query to fail. Combined, these two properties lead to consistent and reliable execution of queries across clusters containing thousands of nodes. Disadvantages include: (a) in order to facilitate the translation of SQL into MapReduce jobs, the dialect of SQL spoken by these systems is not quite standard SQL, which complicates integration with third-party tools; (b) due to the need to automatically generate MapReduce jobs for any type of SQL clause, SQL coverage is expanding slowly; and (c) due to processing exclusively via the MapReduce framework (Stinger with Tez falls in a different category), the per-query MapReduce overhead prevents these technologies from processing queries interactively (this category is fundamentally a “batch processing” category).

(2)   SQL processed by a specialized (Google-inspired) SQL engine that sits on a Hadoop cluster. Both Impala and Drill fall into this category. Impala is inspired by Google’s F1 project and Drill by Google’s Dremel project. Both push SQL (or, in the case of Drill/Dremel, SQL-like) operators down to where the data is stored in the distributed file system (HDFS), and therefore have the advantage of collocating processing with the data (the second sketch after this list illustrates the idea). However, since both systems build their SQL query execution engines from scratch, both suffer from the same disadvantages (a) and (b) of category (1): non-standard SQL and limited SQL coverage. Furthermore, by completely eschewing MapReduce, they do not get the fault tolerance and dynamic scheduling (and therefore scalability) benefits that are inherent in MapReduce.

(3)   Processing of SQL queries is split between MapReduce and storage that natively speaks SQL. Both Hadapt and Polybase fall into this category. These systems attempt to get the best of both worlds, doing some processing in MapReduce and some processing in native SQL operators. When a SQL query is submitted to the Hadoop cluster, an optimizer analyzes the query and decides which parts should be performed via MapReduce and which via SQL operators (the third sketch after this list illustrates the decision). For queries that require interactive (sub-second) response times, MapReduce is typically avoided, and the entire query is performed via native SQL operators. But for queries that require massive scale and mid-query fault tolerance, more work is left to the MapReduce engine.
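
To make category (1) concrete, here is a minimal sketch, against Hadoop’s Java MapReduce API, of the kind of job a Hive-style planner might emit for SELECT dept, COUNT(*) FROM emp GROUP BY dept. The tab-delimited file layout and the column position of dept are assumptions for illustration; Hive’s actual generated plans use its own operator tree and SerDes.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Conceptual sketch: what "SELECT dept, COUNT(*) FROM emp GROUP BY dept"
// roughly compiles to in category (1). Tab-delimited rows with dept in
// column 1 are an assumption made purely for illustration.
public class GroupByCountSketch {

    // Map phase: project the grouping key and emit (dept, 1).
    public static class ProjectMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] cols = line.toString().split("\t");
            ctx.write(new Text(cols[1]), ONE);   // cols[1] = dept (assumed layout)
        }
    }

    // Reduce phase: the shuffle has already grouped rows by dept;
    // sum the partial counts to produce COUNT(*).
    public static class CountReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text dept, Iterable<LongWritable> ones, Context ctx)
                throws IOException, InterruptedException {
            long count = 0;
            for (LongWritable one : ones) count += one.get();
            ctx.write(dept, new LongWritable(count));
        }
    }
}
```

The per-job launch and shuffle-to-disk steps are exactly where the batch-only overhead of this category comes from, but they are also what makes mid-query recovery from a failed node possible.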
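For category (2), the following conceptual sketch shows the collocated-scan idea: a long-running engine process on each node reads the HDFS blocks it holds locally and filters them in place, with no per-query job launch. Every type here (Row, BlockReader, Coordinator) is hypothetical; Impala and Drill each have their own internal abstractions.

```java
import java.util.List;
import java.util.function.Predicate;

// Conceptual sketch of category (2): a long-running daemon on each node
// scans the HDFS blocks stored locally and applies filters in place,
// streaming qualifying rows to the query coordinator. All types here
// are hypothetical stand-ins, not real Impala or Drill interfaces.
final class LocalScanFragment {
    interface Row { String column(String name); }
    interface BlockReader { Iterable<Row> rows(); boolean isLocal(); }
    interface Coordinator { void emit(Row row); }

    // Runs inside the resident daemon, not as a scheduled MapReduce task:
    // there is no job-launch overhead, but also no mid-query fault
    // tolerance -- if this node dies, the whole query is restarted.
    void scan(List<BlockReader> blocks, Predicate<Row> filter, Coordinator out) {
        for (BlockReader block : blocks) {
            if (!block.isLocal()) continue;      // only touch collocated data
            for (Row row : block.rows()) {
                if (filter.test(row)) out.emit(row);
            }
        }
    }
}
```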
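Finally, for category (3), a sketch of the planning decision described above. The heuristic and the 1 TB cutoff are invented purely for illustration; the actual optimizers in Hadapt and Polybase are cost-based and considerably more involved.

```java
// Conceptual sketch of the engine-selection decision in category (3).
// The rule and threshold below are hypothetical illustrations only.
final class SplitPlannerSketch {
    enum Engine { NATIVE_SQL, MAPREDUCE }

    static final long FAULT_TOLERANCE_BYTES = 1L << 40; // assumed 1 TB cutoff

    // Small, latency-sensitive fragments run entirely on the native SQL
    // operators; very large fragments are routed through MapReduce so a
    // failed node does not force a full restart of the query.
    Engine choose(long estimatedInputBytes, boolean interactive) {
        if (interactive) return Engine.NATIVE_SQL;
        if (estimatedInputBytes > FAULT_TOLERANCE_BYTES) return Engine.MAPREDUCE;
        return Engine.NATIVE_SQL;
    }
}
```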

Although each of these “SQL-on-Hadoop” categories has different advantages and disadvantages, as a group they bring Hadoop significantly forward from where it was a year ago, and they greatly expand the set of use cases for which Hadoop technology can be used. As vendors continue to abandon the DBMS-connector approach, customers win through cleaner architectures, fewer data silos, and simplified systems administration.

from http://hadapt.com/blog/2013/10/02/classifying-the-sql-on-hadoop-solutions/
