How-to: Do Statistical Analysis with Impala and R

 


 

 

http://blog.cloudera.com/blog/2013/12/how-to-do-statistical-analysis-with-impala-and-r/

The new RImpala package brings the speed and interactivity of Impala to queries from R.

Our thanks to Austin Chungath, Sachin Sudarshana, and Vikas Raguttahalli of Mu Sigma, a Decision Sciences and Big Data analytics company, for the guest post below.

As is well known, Apache Hadoop traditionally relies on the MapReduce paradigm for parallel processing, which is an excellent programming model for batch-oriented workloads. But when ad hoc, interactive querying is required, the batch model fails to meet performance expectations due to its inherent latency.

To overcome this drawback, Cloudera introduced Cloudera Impala, the open source distributed SQL query engine for Hadoop data. Impala brings the necessary speed to queries that were otherwise not interactive when executed by the batch Apache Hive engine; Hive queries that used to take minutes can be executed in a matter of seconds using Impala.

Impala is quite exciting for us at Mu Sigma because existing Hive queries can run interactively with few or no changes. Furthermore, because we do a lot of our statistical computing on R, the popular open source statistical computing language, we considered it worthwhile to bring the speed of Impala to R.

To meet that goal, we have created a new R package, RImpala, which connects Impala to R. RImpala lets you query data residing in HDFS and Apache HBase from R; the results are returned as R objects that can be processed further using standard R functions. RImpala is now available for download from the Comprehensive R Archive Network (CRAN) under the GNU General Public License (GPL3).

The RImpala architecture is simple: we used the existing Impala JDBC drivers and wrote a Java program to connect and query Impala, which we then called from R using the rJava package. We put them all together in an R package that you can use to easily query Impala from R.

Steps for Installing RImpala

Assuming that you have R and Impala already installed, installing the RImpala package is straightforward and is done in a manner similar to any other R package. There are two steps to installing RImpala and getting it working:

Step 1: Install the package from CRAN

You can install RImpala directly using the install.packages() command in R.
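If the machine has Internet access, a single command (assuming the CRAN package name RImpala, as above) is enough:

```r
# Install RImpala and its dependencies from CRAN
install.packages("RImpala")
```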

 

 

 

Alternatively, if you need to do offline installation of the package, you can download it from here and install using the R CMD INSTALL command:
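The offline install looks roughly like this; the tarball name is illustrative, so substitute the version you actually downloaded:

```shell
# Install the downloaded source package into the default R library
R CMD INSTALL RImpala_<version>.tar.gz
```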

 

 

 

Step 2: Install the Impala JDBC drivers

You need to install Cloudera’s JDBC drivers before you can use the RImpala package that we installed earlier. Cloudera provides JDBC jars on its website that you can download directly. As of this writing, this is the link to the zip file containing the JDBC jars.

There are two ways to do this:

  1. If you have Impala installed on the machine running R, then you will have the necessary JDBC jars already (probably in /usr/lib/impala/lib) and you can use them to initiate the connection to Impala.
  2. If the machine running R is different from the Impala server, then you need to download the JDBC jars from the above link and extract them to a location that can be accessed by the R user.

After you have installed the JDBC drivers you can start using the RImpala package:

  1. Load the library.

     

     

  2. Initialize the JDBC jars.

     

     

  3. Connect to Impala.

     

     

    The following is an R script showing how to connect to Impala:

     

     

     

    Location of JDBC jars = /tmp/impala/jars

    IP of the server running impalad service = 192.168.10.1

    Port where the impalad service is listening = 21050
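Putting the three steps together, using the example values above (the jar path, IP address, and port are illustrative):

```r
library(RImpala)                          # 1. Load the library
rimpala.init("/tmp/impala/jars")          # 2. Initialize the JDBC jars
rimpala.connect("192.168.10.1", "21050")  # 3. Connect to the impalad service
```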

The default parameter for the rimpala.init() function is “/usr/lib/impala/lib”, and the default parameters for the rimpala.connect() function are “localhost” and “21050” respectively.

To run a query on the impalad instance that the client has connected to, you can use the rimpala.query() function. Example:
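A minimal sketch, assuming a table named sample_table exists in Impala:

```r
# Run a SQL query against the connected impalad instance;
# the result comes back as an R data frame
result <- rimpala.query("select * from sample_table")
```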

 

 

 

All the contents of the sample_table will be stored in the result object as a data frame. This data frame can now be used for further analytical processing in R.

You can also install the RImpala package on a client machine running Microsoft Windows. Since the JDBC jars are platform independent, you can extract them into a folder on a Windows machine (such as “C:\Program Files\impala”) and then this location can be passed as a parameter to the rimpala.init() function.
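On Windows, only the jar location passed to rimpala.init() changes (the path below is illustrative):

```r
# Forward slashes also work in R paths on Windows
rimpala.init("C:/Program Files/impala")
```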

The following is a simple example that shows you how to use RImpala:
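An end-to-end session might look like this (the jar path, server address, table name, and column names are illustrative):

```r
library(RImpala)

# Initialize the JDBC jars and connect to the local impalad service
rimpala.init("/usr/lib/impala/lib")
rimpala.connect("localhost", "21050")

# Pull the query result into R as a data frame and analyze it with R functions
sales <- rimpala.query("select region, revenue from sample_table")
summary(sales$revenue)
```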

 

 

 

Conclusion

Impala is an exciting new technology that is gaining popularity and will probably grow to be an enterprise asset in the Hadoop world. We hope that RImpala will be a fruitful package for all Big Data analysts to leverage the power of Impala from R.

Impala is an ongoing and thriving effort at Cloudera and will continue to evolve with richer functionality and improved performance – and so will RImpala. We will continue to improve the package over time and incorporate new features into RImpala as and when they are made available in Impala.  

Austin Chungath is a Senior Research Analyst with Mu Sigma’s Innovation & Development Team and maintainer of the RImpala project. He does research on various tools in the Hadoop ecosystem and the possibilities that they bring for analytics. He spends his free time contributing to Open Source projects like Apache Tez or building small robots.

Sachin Sudarshana is a Research Analyst with Mu Sigma’s Innovation & Development Team. His responsibilities include researching emerging tools in the Hadoop ecosystem and how they can be leveraged in an analytics context.

Vikas Raguttahalli is a Research Lead with Mu Sigma’s Innovation & Development Team. He is responsible for working with client delivery teams and helping clients institutionalize Big Data within their organizations, as well as researching new and upcoming Big Data tools. His expertise includes R, MapReduce, Hive, Pig, Mahout and the wider Hadoop ecosystem.

Reposted from: https://www.cnblogs.com/webRobot/p/9083379.html
