Elasticsearch is an open-source, distributed, RESTful search engine built on Lucene. It is designed for cloud environments and offers near-real-time search along with stability, reliability, speed, and easy installation and use.
Hive is a data warehouse built on HDFS that lets users query the big data stored in HDFS through an SQL-like language (HiveQL). By combining Elasticsearch with Hive, we can get real-time access to the data sitting in HDFS.
The figure above shows logs flowing through a Flume collector to a sink and then into both HDFS and Elasticsearch; through the Elasticsearch API, trends such as the current number of users or the number of requests can then be charted in real time for data visualization.
The integration requires two tables in Hive: one is the source data table, and the other acts like a view built on top of it — it does not store the data itself. Below is how the author, Costin Leau, described it on the mailing list (http://elasticsearch-users.115913.n3.nabble.com/Elasticsearch-Hadoop-td4047293.html):
There is no duplication per-se in HDFS. Hive tables are just 'views' of data - one sits unindexed, in raw format in HDFS
the other one is indexed and analyzed in Elasticsearch.
You can't combine the two since they are completely different things - one is a file-system, the other one is a search
and analytics engine.
First, we need the elasticsearch-hadoop jar, which can be obtained via Maven:
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch-hadoop</artifactId>
  <version>2.0.1</version>
</dependency>
The elasticsearch-hadoop project is hosted on GitHub: https://github.com/elasticsearch/elasticsearch-hadoop#readme
At the time of writing, the latest version is 2.0.1, which supports all current Hadoop distributions.
After obtaining the jar, copy it into Hive's lib directory, then open the Hive CLI as follows:
bin/hive -hiveconf hive.aux.jars.path=/home/hadoop/hive/lib/elasticsearch-hadoop-2.0.1.jar
The same setting can also be placed in Hive's configuration file.
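To make the jar available permanently rather than passing it on every invocation, the hive.aux.jars.path property can be set in hive-site.xml — a sketch, assuming the same jar path as above:

```xml
<property>
  <name>hive.aux.jars.path</name>
  <value>/home/hadoop/hive/lib/elasticsearch-hadoop-2.0.1.jar</value>
</property>
```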
Create the view table:
CREATE EXTERNAL TABLE user (id INT, name STRING)
STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
TBLPROPERTIES('es.resource' = 'radiott/artiststt','es.index.auto.create' = 'true');
In es.resource, radiott and artiststt are the index name and the type name, respectively; these are what Elasticsearch uses when accessing the data.
Then create the source data table:
CREATE TABLE user_source (id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Sample data:
1,medcl
2,lcdem
3,tom
4,jack
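For intuition, each comma-delimited row that later flows through the ES storage handler ends up as a JSON document in the radiott/artiststt index. A minimal Python sketch of that row-to-document mapping (the parsing code here is illustrative only, not the handler's actual implementation):

```python
# Illustrative only: mimic how a delimited Hive row (id INT, name STRING)
# maps onto the JSON document that elasticsearch-hadoop indexes.
def row_to_doc(line):
    """Split a 'id,name' row into a dict matching the table schema."""
    id_str, name = line.strip().split(",")
    return {"id": int(id_str), "name": name}

rows = ["1,medcl", "2,lcdem", "3,tom", "4,jack"]
docs = [row_to_doc(r) for r in rows]
print(docs[0])  # {'id': 1, 'name': 'medcl'}
```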
Load the data into the user_source table:
LOAD DATA LOCAL INPATH '/home/hadoop/files1.txt' OVERWRITE INTO TABLE user_source;
hive> select * from user_source;
OK
1 medcl
2 lcdem
3 tom
4 jack
Time taken: 3.4 seconds, Fetched: 4 row(s)
Load the data into the user table:
hive> INSERT OVERWRITE TABLE user SELECT s.id, s.name FROM user_source s;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1412756024135_0007, Tracking URL = N/A
Kill Command = /home/hadoop/hadoop/bin/hadoop job -kill job_1412756024135_0007
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-10-08 17:44:04,121 Stage-0 map = 0%, reduce = 0%
2014-10-08 17:45:04,360 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.21 sec
2014-10-08 17:45:05,505 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.21 sec
2014-10-08 17:45:06,707 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.29 sec
2014-10-08 17:45:23,475 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.42 sec
MapReduce Tota
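Once the INSERT job finishes, the documents should be searchable directly through the Elasticsearch REST API. A hedged Python sketch that builds such a search request against the radiott/artiststt resource (the host and port are assumptions for a default local cluster; actually sending the request requires Elasticsearch to be running):

```python
import json

# Assumed default Elasticsearch address; adjust to your cluster.
ES_HOST = "http://localhost:9200"
# es.resource from the Hive table definition: index/type.
resource = "radiott/artiststt"

url = "%s/%s/_search" % (ES_HOST, resource)
body = json.dumps({"query": {"match": {"name": "tom"}}})

# The request could then be issued, e.g.:  curl -XGET '<url>' -d '<body>'
print(url)
print(body)
```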