Integrating Spark with Elasticsearch

Add the Maven dependencies

   <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-core_2.11</artifactId>
       <version>2.4.3</version>
   </dependency>
   <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-sql_2.11</artifactId>
       <version>2.4.3</version>
   </dependency>
   <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-streaming_2.11</artifactId>
       <version>2.4.3</version>
   </dependency>
   <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-mllib -->
   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-mllib_2.11</artifactId>
       <version>2.4.3</version>
   </dependency>
   <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
   <dependency>
       <groupId>com.alibaba</groupId>
       <artifactId>fastjson</artifactId>
       <version>1.2.58</version>
   </dependency>

   <dependency>
       <groupId>org.elasticsearch</groupId>
       <artifactId>elasticsearch-spark-20_2.11</artifactId>
       <version>7.2.0</version>
   </dependency>
Reading data from Elasticsearch with Spark

Reference: the official Elasticsearch documentation (elasticsearch-hadoop Spark support).

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

object ESToSpark {
  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setAppName("hello world").setMaster("local[*]")
    conf.set("es.index.auto.create", "true")
    conf.set("es.nodes", "127.0.0.1")
    conf.set("es.port", "9200")

    val sc = new SparkContext(conf)
    // Term query: only documents whose "name" field is exactly "鲁仲连"
    val query: String =
      """{
           "query": {
             "term": {
               "name": {
                 "value": "鲁仲连"
               }
             }
           }
      }"""
    // esRDD comes from org.elasticsearch.spark._ and returns an
    // RDD[(String, Map[String, AnyRef])] keyed by document _id
    val rdd = sc.esRDD("phonebills", query)
    rdd.collect().foreach(println)
    println(rdd.count() + " -----------")
    sc.stop()
  }
}
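If you prefer DataFrames over RDDs, the same connector also registers a Spark SQL data source. Below is a minimal sketch, assuming the same local ES node and the phonebills index used above; the index name and query value are only illustrative.

import org.apache.spark.sql.SparkSession

object ESToSparkSQL {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("es read as DataFrame")
      .master("local[*]")
      .config("es.nodes", "127.0.0.1")
      .config("es.port", "9200")
      .getOrCreate()

    // load() maps the index into a DataFrame whose schema
    // is inferred from the Elasticsearch mapping
    val df = spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.query", """{"query":{"term":{"name":{"value":"鲁仲连"}}}}""")
      .load("phonebills")

    df.show()
    spark.stop()
  }
}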
Writing data from Spark to Elasticsearch
1. First create the index in ES and define its fields.
Spark field type    ES field type
String              text / keyword
Long                long
Integer             number
String / Long       date
Double              number
Int                 number

When writing a date-type field to ES, create the index first and specify the format in the mapping; otherwise the value will be treated as text.

PUT xxx
{
  "mappings": {
    "properties": {
      "a": {
        "type": "keyword"
      },
      "b": {
        "type": "keyword"
      },
      "c": {
        "type": "long"
      },
      "time": {
        "type": "date",
        "format":["yyyy-MM-dd HH:mm:ss"]
      },
      "d": {
        "type": "text"
      },
      "e": {
        "type": "keyword"
      },
      "f": {
        "type": "text"
      },
      "g": {
        "type": "keyword"
      }
    }
  }
}
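Because the mapping above expects dates as yyyy-MM-dd HH:mm:ss strings, the Spark side has to emit timestamps in exactly that pattern. A minimal sketch (the value written to the "time" field is only illustrative):

import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

object DateFormatExample {
  // Formatter matching the "format" declared in the ES mapping above
  private val esFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

  def main(args: Array[String]): Unit = {
    // Turn the current time into the string ES expects for the "time" field
    val time: String = LocalDateTime.now().format(esFormat)
    println(time) // e.g. 2019-07-01 12:30:45
  }
}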
import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark.sql._

object SparkToES {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[8]")
      .config("es.index.auto.create", "true")
      .config("es.nodes", "127.0.0.1")
      .config("es.port", "9200")
      .appName("log")
      .getOrCreate()
    val sc = spark.sparkContext
    // Parse the source file into a case class, then build a DataFrame from it:
    // val rdd = sc.textFile(path)
    // case class xx()
    // val log = rdd.map(x => xx())
    val rlog = spark.createDataFrame(log)
    // saveToEs is added to DataFrame by org.elasticsearch.spark.sql._
    rlog.saveToEs("dblog")
    spark.stop()
  }
}
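For a self-contained, runnable version, here is a sketch that writes a couple of hand-built records into the dblog index. The DbLog case class and its fields are made up for illustration and should be replaced with your own log structure.

import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark.sql._

// Hypothetical record type; replace the fields with your own log schema
case class DbLog(a: String, b: String, c: Long, time: String)

object SparkToESExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .config("es.index.auto.create", "true")
      .config("es.nodes", "127.0.0.1")
      .config("es.port", "9200")
      .appName("log")
      .getOrCreate()
    import spark.implicits._

    // Two sample records; "time" matches the yyyy-MM-dd HH:mm:ss format in the mapping
    val rlog = Seq(
      DbLog("user1", "query", 1L, "2019-07-01 10:00:00"),
      DbLog("user2", "insert", 2L, "2019-07-01 10:05:00")
    ).toDF()

    // saveToEs comes from org.elasticsearch.spark.sql._
    rlog.saveToEs("dblog")
    spark.stop()
  }
}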