Classifying the SQL-on-Hadoop Solutions

Posted on October 2, 2013 at 10:00 am.

Almost a year and a half ago on this blog, I went on something that is probably best described as an anti-DBMS/Hadoop-connector rant. There was then (as there still is now) an incredible number of use cases that require the combination of DBMS and Hadoop technologies, and at the time, both the Hadoop vendors and the DBMS vendors were pushing a "connector" approach, where the customer buys both a Hadoop product and a DBMS product and data can be passed back and forth between the two systems. I explained the architectural wastefulness associated with this approach, and why, given the way that parallel database systems and Hadoop are designed, it is relatively easy to combine them (architecturally speaking) into a single system. At the time, there were only two solutions that took the combined-system approach: Hive and Hadapt.

Since that post was written, it is good to see that several vendors have abandoned the connector approach and have instead launched initiatives (such as Stinger, Impala, and Drill) that, while still immature, are following (or extending) Hive and Hadapt, and going in the direction of bringing SQL technologies directly to Hadoop clusters. In my opinion, this is absolutely the right direction for the market, and will result in the furthering of Hadoop’s dominance in the data processing and analysis space.

Given the rapid entrance of these new “SQL-on-Hadoop” initiatives, now is a good time to classify them and study the similarities and differences between these approaches.

Before comparing and contrasting six approaches to SQL-on-Hadoop (Hive, Hadapt, Stinger, Impala, Polybase, and Drill), I should explain why these are the only approaches being compared in this post: since the DBMS/Hadoop connector approach is so fundamentally flawed from an architectural perspective, vendors that use it belong in a different category and are not directly competitive with the direct approaches to SQL-on-Hadoop. Even recent attempts from Greenplum and Aster Data to retrofit their MPP databases to work on Hadoop clusters, through the HAWQ and SQL-H projects respectively, still fundamentally use the connector approach: at query time, data is extracted out of HDFS and sent over the network into their MPP execution engines for further processing. Even if the MPP execution engine sits on the same physical cluster as HDFS, if processing is not pushed down to the same nodes that store the data, the MPP database is essentially treating HDFS as a large (cheap) shared-disk storage system, and it inherits the scalability constraints and network bottlenecks associated with that approach. Shared-disk architectures are fundamentally antithetical to the Google-made-famous "shared-nothing" design that Hadoop emulates, where processing is pushed as close to the data as possible. This is why these MPP+Hadoop vendors typically bundle hardware with software, so that high-end and expensive networking gear can be integrated into the cluster in order to hide the fundamental limitations of the shared-disk architecture.
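
To put a rough number on that network bottleneck, here is a back-of-envelope sketch (with purely illustrative assumed figures for table size, node count, disk bandwidth, and network bandwidth – not measurements of any particular product) comparing a shared-nothing scan, where each node reads only its local HDFS blocks, against a connector-style scan that pulls the full table across the network into an external engine:

```python
# Back-of-envelope comparison: shared-nothing (pushdown) scan vs.
# shared-disk / connector-style scan. All numbers are illustrative
# assumptions, not benchmarks of any real system.

TABLE_TB = 10                      # table size to scan, in terabytes (assumed)
NODES = 100                        # nodes in the cluster (assumed)
DISK_MBPS_PER_NODE = 500           # aggregate local disk bandwidth per node, MB/s (assumed)
NETWORK_GBPS_BISECTION = 40        # cluster bisection bandwidth, Gb/s (assumed)

table_mb = TABLE_TB * 1024 * 1024  # terabytes -> megabytes

# Shared-nothing: every node scans only its local slice of the table, in parallel.
local_scan_seconds = (table_mb / NODES) / DISK_MBPS_PER_NODE

# Connector-style: the full table is pulled across the network into the
# external MPP engine before (or while) it is processed.
network_mbps = NETWORK_GBPS_BISECTION * 1000 / 8   # Gb/s -> MB/s
network_transfer_seconds = table_mb / network_mbps

print(f"local parallel scan:    ~{local_scan_seconds / 60:.1f} minutes")
print(f"network-bound transfer: ~{network_transfer_seconds / 60:.1f} minutes")
```

Under these assumed numbers the network-bound path is roughly an order of magnitude slower than the parallel local scan, which is exactly the gap that bundled high-end networking gear is meant to paper over.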

Therefore, we are left with the above-mentioned six technologies to compare. (It’s possible that there are additional SQL-on-Hadoop solutions that I’m not aware of – if so, please add them via the comment thread below). They are best divided into three categories, with two technologies placed inside each category:

(1)   SQL translated to MapReduce jobs over a Hadoop cluster. Both Hive and Stinger (without Tez) fall into this category. A SQL query that is sent to a Hadoop cluster is translated into a series of MapReduce jobs, which are then processed by the cluster (a small sketch of this translation appears after this list). A major advantage of this approach is that by integrating with Hadoop's version of MapReduce, queries are run with Hadoop's dynamic scheduler and are therefore highly tolerant of unexpected performance issues and other forms of heterogeneous performance across the cluster. Furthermore, they leverage MapReduce's mid-query fault tolerance, so that nodes that fail in the middle of query processing do not cause the entire query to fail. Combined, these two properties lead to consistent and reliable execution of queries across clusters containing thousands of nodes. Disadvantages include: (a) in order to facilitate the translation of SQL into MapReduce jobs, the dialect of SQL spoken by these systems is not quite standard SQL, which complicates integration with third-party tools; (b) due to the need to automatically generate MapReduce jobs for any type of SQL clause, SQL coverage is expanding slowly; and (c) because processing happens exclusively within the MapReduce framework (Stinger with Tez falls into a different category), the per-query MapReduce overhead prevents these technologies from processing queries interactively (this category is fundamentally a "batch processing" category).

(2)   SQL processed by a specialized (Google-inspired) SQL engine that sits on a Hadoop cluster. Both Impala and Drill fall into this category. Impala is inspired by Google's F1 project and Drill by Google's Dremel project. Both push SQL (or, in the case of Drill/Dremel, SQL-like) operators down to where the data is stored in the distributed file system (HDFS), and therefore have the advantage of collocating data processing with the data itself (a toy illustration of this collocated-scan flow also appears after this list). However, since both systems are building their SQL query execution engines from scratch, both suffer from the same disadvantages (a) and (b) of category (1) – non-standard SQL and poor SQL coverage. Furthermore, by completely eschewing MapReduce, they do not get the fault tolerance and dynamic scheduling (and therefore scalability) benefits that are inherent in MapReduce.

(3)   Processing of SQL queries is split between MapReduce and storage that natively speaks SQL. Both Hadapt and Polybase fall into this category. These systems attempt to get the best of both worlds, doing some processing in MapReduce and some processing in native SQL operators. When a SQL query is submitted to the Hadoop cluster, an optimizer analyzes the query and decides which parts should be performed via MapReduce and which via SQL operators (a simplified sketch of this split-execution decision appears after this list). For queries that require interactive (sub-second) response times, MapReduce is typically avoided, and the entire query is performed via native SQL operators. But for queries that require massive scale and mid-query fault tolerance, more work is left to the MapReduce engine.
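
To make category (1) concrete, consider how a simple aggregation such as SELECT dept, SUM(sales) FROM t GROUP BY dept maps onto a single MapReduce round: the map phase emits the grouping key together with the value to aggregate, the framework's shuffle groups rows by key, and the reduce phase sums each group. The Python sketch below (written in the spirit of a Hadoop Streaming job, with the shuffle simulated by a local sort so it runs standalone) illustrates the translation strategy; it is not the code that Hive or Stinger actually generates.

```python
# Sketch of translating "SELECT dept, SUM(sales) FROM t GROUP BY dept"
# into one map/reduce round. Illustrative only -- not Hive's generated plan.
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    """Map: parse each input row and emit (group key, value to aggregate)."""
    for line in lines:
        dept, sales = line.rstrip("\n").split("\t")   # assumed tab-separated rows
        yield dept, float(sales)

def reduce_phase(pairs):
    """Reduce: after the shuffle, pairs arrive grouped by key; sum per key."""
    for dept, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield dept, sum(v for _, v in group)

if __name__ == "__main__":
    # Stand-in for HDFS input and the framework's sort/shuffle, so the
    # sketch runs without a cluster.
    rows = ["eng\t100.0", "sales\t250.0", "eng\t50.0"]
    for dept, total in reduce_phase(map_phase(rows)):
        print(f"{dept}\t{total}")
```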
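
For category (2), the essential point is collocation: each node scans only the HDFS blocks it stores locally, applies the pushed-down predicate and a partial aggregate right there, and ships only small partial results to a coordinator for the final merge. The toy simulation below illustrates that flow; the node layout and data are invented for the example, and it is in no way Impala's or Drill's actual execution engine.

```python
# Toy simulation of operator pushdown with collocated processing: every
# "node" filters and partially aggregates its own local blocks, and only
# small partial results cross the network to the coordinator.
# Illustrative sketch only -- not Impala's or Drill's implementation.
from collections import Counter

# Pretend each node locally stores a list of (region, amount) rows.
local_blocks = {
    "node1": [("us", 10.0), ("eu", 5.0), ("us", 7.5)],
    "node2": [("eu", 2.0), ("us", 1.0)],
}

def local_scan(rows):
    """Runs on each node: filter + partial SUM(amount) GROUP BY region."""
    partial = Counter()
    for region, amount in rows:
        if amount > 1.0:                 # predicate pushed down to the scan
            partial[region] += amount
    return partial                       # small result shipped to the coordinator

def coordinator(partials):
    """Merges the per-node partial aggregates into the final answer."""
    final = Counter()
    for partial in partials:
        final.update(partial)            # Counter.update adds per-key values
    return dict(final)

print(coordinator(local_scan(rows) for rows in local_blocks.values()))
```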
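
Category (3) turns on a planner-level decision: keep short, interactive queries entirely in the native SQL operators, and route long or very wide queries through MapReduce so they benefit from its mid-query fault tolerance and dynamic scheduling. The sketch below caricatures that decision with two assumed inputs and hard-coded thresholds; it is a simplification for illustration, not Hadapt's or Polybase's actual optimizer logic.

```python
# Simplified sketch of a split-execution decision: use native SQL operators
# for interactive queries, and MapReduce when the query is large enough that
# mid-query fault tolerance matters. Inputs and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class QueryEstimate:
    estimated_runtime_s: float   # optimizer's cost-based runtime estimate
    nodes_touched: int           # how many nodes participate in the query

def choose_engine(q: QueryEstimate) -> str:
    # Long-running or very wide queries: the chance that some node fails or
    # straggles mid-query is high, so pay MapReduce's per-job overhead to get
    # its fault tolerance and dynamic scheduling.
    if q.estimated_runtime_s > 300 or q.nodes_touched > 500:
        return "mapreduce"
    # Otherwise keep the whole plan in the native SQL operators, where the
    # per-query overhead is low enough for interactive response times.
    return "native-sql"

print(choose_engine(QueryEstimate(estimated_runtime_s=2, nodes_touched=20)))      # native-sql
print(choose_engine(QueryEstimate(estimated_runtime_s=3600, nodes_touched=900)))  # mapreduce
```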

Although each of these "SQL-on-Hadoop" categories has different advantages and disadvantages, as a group they bring Hadoop significantly forward from where it was a year ago and greatly expand the set of use cases for which Hadoop technology can be used. As vendors continue to abandon the DBMS-connector approach, customers win through cleaner architectures, fewer data silos, and simplified systems administration.

from http://hadapt.com/blog/2013/10/02/classifying-the-sql-on-hadoop-solutions/
