Guide
Introduction
Elasticsearch provides Spark support through the elasticsearch-spark connector, which can load an ES index as an RDD or a DataFrame.
Official docs: https://www.elastic.co/guide/en/elasticsearch/hadoop/7.17/spark.html#spark-sql-versions
Before using the elasticsearch-spark connector, add the dependency to your project:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-30_2.12</artifactId>
    <version>7.17.5</version>
</dependency>
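If the project uses sbt rather than Maven, the equivalent dependency (assuming Scala 2.12, matching the _2.12 artifact suffix) would be:
libraryDependencies += "org.elasticsearch" %% "elasticsearch-spark-30" % "7.17.5"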
Reading from ES
Loading as an RDD
import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark._

val spark: SparkSession = SparkSession
  .builder().appName("SinkCk")
  .master("local[4]").config("spark.driver.host", "localhost")
  /**
   * When reading as an RDD, the following three options must be set
   * when the SparkSession is created; when reading as a DataFrame they
   * do not have to be set here and can be passed at read time instead.
   */
  .config("es.nodes", "node01")
  .config("es.port", "9200")
  .config("pushdown", "true")
  // .config("es.index.auto.create", true)
  .getOrCreate()
val rdd = spark.sparkContext.esJsonRDD("icourt_compliance_online",
  """
    |{
    |  "query": {
    |    "match": {"_id": "df2773d689de18192bb39eceb1a924db"}
    |  }
    |}
    |""".stripMargin)
rdd.foreach(println)
spark.close()
Spark loads the data from ES as an RDD of JSON strings, and you extract the concrete fields yourself based on the structure of the returned documents. This approach avoids type-conversion errors, date-conversion errors, and failures on array or nested structures, so it is highly recommended.
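As a minimal sketch of the extraction step (json4s ships with Spark; the title field is a hypothetical example), the JSON strings can be parsed like this:
import org.json4s._
import org.json4s.jackson.JsonMethods._

// esJsonRDD returns an RDD[(String, String)] of (documentId, jsonSource) pairs
val titles = rdd.map { case (docId, json) =>
  implicit val formats: Formats = DefaultFormats
  // "title" is a hypothetical field name; use one from your own mapping
  (docId, (parse(json) \ "title").extractOpt[String])
}
titles.foreach(println)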
Loading as a DataFrame
Method 1:
val spark: SparkSession = SparkSession
  .builder().appName("SinkCk")
  .master("local[4]").config("spark.driver.host", "localhost")
  .getOrCreate()
val options = Map("es.nodes" -> "node01",
  "es.port" -> "9200", "pushdown" -> "true",
  // add this when Spark and ES are not on the same network segment
  "es.nodes.wan.only" -> "true",
  // with concurrent updates, writes to the same document can conflict; retry on conflict
  "es.update.retry.on.conflict" -> "3",
  // controls how many partitions (and thus tasks) Spark creates for the read
  "es.input.max.docs.per.partition" -> "5000000")
val inputDF = spark.read.format("org.elasticsearch.spark.sql")
  .options(options).load("icourt_compliance_online")
  .where("cid = 'df2773d689de18192bb39eceb1a924db'")
  .select("cid", "compliance_id", "ds_source", "status", "notice_main_body", "source_url", "title")
inputDF.printSchema()
inputDF.show()
spark.close()
Method 2: no need to configure ES when the SparkSession is initialized; pass the options at read time instead:
import org.elasticsearch.spark.sql._

val inputDF = spark.esDF("icourt_compliance_online", "cid = 'df2773d689de18192bb39eceb1a924db'", options)
inputDF.printSchema()
inputDF.show()
spark.close()
Writing to ES
Writing with an RDD
def rddWrite2Es(spark: SparkSession) = {
  import org.elasticsearch.spark._
  val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
  val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
  spark.sparkContext
    .makeRDD(Seq(numbers, airports))
    .saveToEs("spark_es_demo")
}
Writing with a DataFrame
Method 1:
def dataframeWrite2Es(spark: SparkSession) = {
  import spark.implicits._
  import org.elasticsearch.spark.sql._
  val df = Seq((1, "a", 2), (1, "a", 2), (1, "b", 3))
    .toDF("id", "category", "num")
  df.saveToEs("spark_es_demo")
}
Method 2: no need to configure ES when the SparkSession is initialized; pass the options at write time instead:
import org.apache.spark.sql.SaveMode

def dataframeWrite2Es(spark: SparkSession) = {
  import spark.implicits._
  val df = Seq((1, "a", 2), (1, "a", 2), (1, "b", 3))
    .toDF("id", "category", "num")
  val options = Map("es.nodes" -> "node01", "es.port" -> "9200",
    // with concurrent updates, writes to the same document can conflict; retry on conflict
    "es.update.retry.on.conflict" -> "3")
  df.write.format("org.elasticsearch.spark.sql")
    .options(options).mode(SaveMode.Append)
    .save("spark_es_demo")
}
Writing with Structured Streaming
// df: a streaming DataFrame, e.g. obtained from spark.readStream (full sketch below)
df
  .writeStream
  .outputMode(OutputMode.Append())
  .format("es")
  .option("checkpointLocation", "hdfs://hadoop:8020/checkpoint/test01")
  .options(options)
  .start("streaming_2_es")
  .awaitTermination()
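For reference, a minimal end-to-end sketch; the built-in rate test source and the index name streaming_rate_demo are illustrative assumptions:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.OutputMode

val spark = SparkSession.builder()
  .appName("Streaming2Es").master("local[2]")
  .getOrCreate()

// the rate source emits (timestamp, value) rows, useful for testing
val df = spark.readStream.format("rate")
  .option("rowsPerSecond", "5").load()

df.writeStream
  .outputMode(OutputMode.Append())
  .format("es") // shorthand for org.elasticsearch.spark.sql
  .option("checkpointLocation", "/tmp/checkpoint/rate2es")
  .option("es.nodes", "node01")
  .option("es.port", "9200")
  .start("streaming_rate_demo")
  .awaitTermination()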
Notes
- When writing to ES, the index is created automatically. Auto-creation can be disabled when the SparkSession is created: SparkSession.builder().config("es.index.auto.create", false). If the index already exists at write time, the mappings are merged (different mappings are combined into one); alternatively, delete the old index first: curl -XDELETE "http://localhost:9200/index"
- On insert, _id is generated automatically. To use a field from the data as _id, set es.mapping.id when the SparkSession is created: SparkSession.builder().config("es.mapping.id", "id") (see the sketch after this list)
- The number of shards and the number of replicas both default to 1. The replica count can be changed at any time (e.g. via Kibana Dev Tools); the primary shard count is fixed once the index is created and changing it requires a split/shrink or reindex.
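A minimal sketch of a write using the id column as the document _id; here es.mapping.id is passed as a per-write option, which the connector also accepts, and the index name spark_es_demo is the one from the examples above:
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("MappingIdDemo").master("local[2]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "category")

df.write.format("org.elasticsearch.spark.sql")
  .option("es.nodes", "node01")
  .option("es.port", "9200")
  // use the "id" column as the ES document _id instead of an auto-generated one
  .option("es.mapping.id", "id")
  .mode(SaveMode.Append)
  .save("spark_es_demo")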
Configuration
Configuration reference: ElasticSearch-Spark-Configuration
Known bugs
Position for 'xxx.xxx' not found in row; typically this is caused by a mapping inconsistency
Cause
The es-spark connector cannot parse array-of-nested-object types, so parsing fails. Related issue: Position for 'xxx.xxx' not found in row; typically this is caused by a mapping inconsistency
Fix
- Add a configuration to exclude these fields when reading (see the sketch after this list):
"es.read.field.exclude" -> "defendant_litigant,prosecutor_litigant"
- Read as an RDD instead, and map the schema yourself
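A minimal sketch of the first fix, reusing the index and field names from the example above:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ExcludeFields").master("local[2]").getOrCreate()

val options = Map(
  "es.nodes" -> "node01", "es.port" -> "9200",
  // skip the array-of-object fields the connector cannot map
  "es.read.field.exclude" -> "defendant_litigant,prosecutor_litigant")

val df = spark.read.format("org.elasticsearch.spark.sql")
  .options(options).load("icourt_compliance_online")
df.printSchema()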
Field 'xx' is backed by an array but the associated Spark Schema does not reflect this;
Cause
An ES mapping records only a field's type, not whether the field is an array: an int array is recorded simply as int. Spark SQL first fetches the schema to define the DataFrame's structure and only then pulls data from the source, so a DataFrame column ends up typed as int while the read tries to force an int array into it.
Fix
Add es.read.field.as.array.include to the options to mark the array fields: "es.read.field.as.array.include" -> "xx,yy". For a field inside an object, write it as "objectName.arrayFieldName"; separate multiple fields with commas.
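A minimal sketch, setting the option at the session level this time; xx and yy are the placeholder field names from above:
import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark.sql._

val spark = SparkSession.builder()
  .appName("ArrayFields").master("local[2]")
  .config("es.nodes", "node01").config("es.port", "9200")
  // declare array-typed fields explicitly, since the ES mapping does not record them as arrays
  .config("es.read.field.as.array.include", "xx,yy")
  .getOrCreate()

val df = spark.esDF("icourt_compliance_online")
df.printSchema()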