Apache Spark Integration
This document assumes that the ES and Hadoop clusters have already been installed successfully, and follows:
https://www.elastic.co/guide/en/elasticsearch/hadoop/master/spark.html
Version information:
HDP version: HDP-2.6.1.0
ES version: 5.6.0
Spark version: 2.2.0
Maven version: 3.5
Scala version: 2.10.6
1. Write data to Elasticsearch
1.1. Method 1: step breakdown
With the RDD approach, the documents are first defined as a Map, a JavaBean, or a Scala case class; elasticsearch-hadoop then translates their content into documents and saves them to Elasticsearch.
In Scala, only the following steps are needed:
1. Spark Scala imports
2. Elasticsearch-hadoop Scala imports
3. Start Spark through its Scala API
4. Create the RDD with makeRDD
5. Index the content into Elasticsearch (data/job01 in the code below)
1.2. Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark

/**
  * Created by hand on 2018/1/9.
  */
object Es_spark {

  // Documents to be indexed, modeled as a Scala case class
  case class Job(jobName: String, jobUrl: String, companyName: String, salary: String)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setAppName("S2EExample")
    conf.setMaster("local[*]")
    // Create the target index automatically if it does not exist
    conf.set("es.index.auto.create", "true")
    // Elasticsearch node to connect to
    conf.set("es.nodes", "hdfs01.edcs.org:9200")
    val sc = new SparkContext(conf)

    val job1 = Job("C开发工程师", "http://job.c.com", "c公司", "10000")
    val job2 = Job("C++开发工程师", "http://job.c++.com", "c++公司", "10000")
    val job3 = Job("C#开发工程师", "http://job.c#.com", "c#公司", "10000")
    val job4 = Job("Java开发工程师", "http://job.java.com", "java公司", "10000")
    val job5 = Job("Scala开发工程师", "http://job.scala.com", "java公司", "10000")

    // Build an RDD of case class instances and index it under data/job01
    val rdd = sc.makeRDD(Seq(job1, job2, job3, job4, job5))
    EsSpark.saveToEs(rdd, "data/job01")
  }
}
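As mentioned in 1.1, Method 1 also works with plain Scala Map documents; in that case elasticsearch-hadoop's implicit import org.elasticsearch.spark._ adds saveToEs directly to the RDD. Below is a minimal sketch of that style, closely following the reference documentation; the object name MapToEs, the index spark/docs, and the field values are illustrative and not part of the example above.

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._ // implicit import: adds saveToEs to any RDD

object MapToEs {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("MapToEsExample")
      .setMaster("local[*]")
      .set("es.index.auto.create", "true")
      .set("es.nodes", "hdfs01.edcs.org:9200")
    val sc = new SparkContext(conf)

    // Each Map becomes one JSON document
    val numbers  = Map("one" -> 1, "two" -> 2, "three" -> 3)
    val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")

    // saveToEs is available on the RDD thanks to the implicit import above
    sc.makeRDD(Seq(numbers, airports)).saveToEs("spark/docs")
  }
}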
1.3. Method 2: step breakdown
Scala users might be tempted to use Seq and the -> notation to declare the root object (that is, the JSON document) instead of using a Map. Although similar, the former results in slightly different types that cannot be matched to a JSON document: a Seq is an ordered sequence (in other words, a list), while -> creates a Tuple, which is more or less an ordered, fixed number of elements. Consequently, a list of lists cannot be used as a document, because it cannot be mapped to a JSON object; it can, however, be used freely inside one. This is why documents are declared as Map(k -> v) rather than Seq(k -> v).
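To make the distinction concrete, here is a small sketch (the field names are illustrative): a Map maps to a JSON object and may freely contain a Seq as a field value, whereas a bare Seq of key -> value pairs is merely a list of tuples and cannot serve as the root document.

// Valid root document: a Map becomes a JSON object;
// the Seq value becomes a JSON array inside it
val doc = Map(
  "jobName" -> "Scala开发工程师",
  "tags"    -> Seq("spark", "elasticsearch")
)

// Not a document: this is just a List[(String, String)], i.e. an ordered
// list of tuples, and cannot be mapped to a JSON object
val notADoc = Seq("jobName" -> "Scala开发工程师")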
As an alternative to the implicit imports above, elasticsearch-hadoop also supports Spark's Scala users through the org.elasticsearch.spark.rdd package, whose EsSpark utility object allows the same operations to be performed as explicit method calls.
The steps are as follows:
1. Spark Scala imports
2. Elasticsearch-hadoop Scala imports
3. Define a case class named Trip
4. Create an RDD around the Trip instances
5. Index the RDD explicitly through EsSpark
To specify the document id (or other metadata such as the TTL or the timestamp), set the corresponding mapping property, namely es.mapping.id. Following the previous example, to have Elasticsearch use the field named id as the document id, update the RDD's configuration (the property can also be set globally on the SparkConf, but this is discouraged); see the sketch after the code in 1.4.
Note: the field that es.mapping.id points to must not be named "_id", since that conflicts with Elasticsearch's reserved metadata field.
1.4. Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark

/**
  * Created by hand on 2018/1/9.
  */
object Es_spark {

  // Documents to be indexed, modeled as a Scala case class
  case class Trip(jobName: String, jobUrl: String, companyName: String, salary: String)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setAppName("S2EExample")
    conf.setMaster("local[*]")
    // Create the target index automatically if it does not exist
    conf.set("es.index.auto.create", "true")
    // Elasticsearch node to connect to
    conf.set("es.nodes", "hdfs01.edcs.org:9200")
    val sc = new SparkContext(conf)

    val trip1 = Trip("C开发工程师", "http://job.c.com", "c公司", "10000")
    val trip2 = Trip("C++开发工程师", "http://job.c++.com", "c++公司", "10000")
    val trip3 = Trip("C#开发工程师", "http://job.c#.com", "c#公司", "10000")
    val trip4 = Trip("Java开发工程师", "http://job.java.com", "java公司", "10000")
    val trip5 = Trip("Scala开发工程师", "http://job.scala.com", "java公司", "10000")

    // Build an RDD of case class instances and index it explicitly through EsSpark
    val rdd = sc.makeRDD(Seq(trip1, trip2, trip3, trip4, trip5))
    EsSpark.saveToEs(rdd, "data/job01")
  }
}
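The code above lets Elasticsearch generate the document ids. Below is a minimal sketch of the es.mapping.id usage described before the code, assuming the same imports and SparkContext as above; the id field and the TripWithId case class are hypothetical additions for illustration.

// Hypothetical variant of the case class with an explicit id field
case class TripWithId(id: String, jobName: String, jobUrl: String,
                      companyName: String, salary: String)

val trips = sc.makeRDD(Seq(
  TripWithId("1", "Java开发工程师", "http://job.java.com", "java公司", "10000"),
  TripWithId("2", "Scala开发工程师", "http://job.scala.com", "java公司", "10000")
))

// Per-RDD configuration: use the 'id' field as the Elasticsearch document id
EsSpark.saveToEs(trips, "data/job01", Map("es.mapping.id" -> "id"))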
2. Read data from Elasticsearch
2.1. Steps:
1. Spark Scala imports
2. Elasticsearch-hadoop Scala imports
3. Start Spark through its Scala API
4. A dedicated RDD for Elasticsearch is created for index eee/01
5. Create an RDD streaming all the documents matching me* from index eee/01
2.2. Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark

/**
  * Created by hand on 2018/1/9.
  */
object Es_spark {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setAppName("S2EExample")
    conf.setMaster("local[*]")
    conf.set("es.index.auto.create", "true")
    // Elasticsearch node to connect to
    conf.set("es.nodes", "hdfs01.edcs.org:9200")
    val sc = new SparkContext(conf)

    // Create an RDD over the documents in index eee/01 that match the query me*;
    // the RDD is lazy, so nothing is fetched until an action is run on it
    val rdd = EsSpark.esRDD(sc, "eee/01", "?q=me*")
  }
}
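EsSpark.esRDD returns the matching documents as an RDD of (document id, field map) pairs, so the result can be consumed like any other RDD. A minimal sketch of reading the results back, assuming the rdd value from the code above (the jobName field is illustrative):

// Each element is (documentId, Map of field name -> value)
val docs: Array[(String, scala.collection.Map[String, AnyRef])] = rdd.collect()

docs.foreach { case (id, fields) =>
  println(s"id=$id, jobName=${fields.getOrElse("jobName", "")}")
}

println(s"total documents: ${rdd.count()}")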