ElasticSearch Basics, Part 2

ElasticSearch Distributed Read and Write Principles

Write flow (based on _id)
  1. A write must complete on the primary shard before it can be replicated to the associated replica shards.
  2. The client sends the write request to Node1; Node1 acts as the coordinating node (the node that receives the client request).
  3. Node1 hashes the document's _id to determine which shard the document belongs to and forwards the request to that shard's primary (the routing rule is sketched just below).
  4. After the primary shard completes the write, it forwards the data to the replica shards; once they are done, the primary reports success to the coordinating node, and the coordinating node reports success to the client.

Note: the number of shards of an index cannot be changed once the index has been created.
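This is because the routing rule derives the target shard from a hash of the routing value (the _id by default) modulo the number of primary shards, so changing the shard count would invalidate the placement of every already-indexed document. A minimal sketch of that rule, using a placeholder hash (the real implementation applies Murmur3 to the _routing value):

  // Sketch of how ES routes a document to a primary shard, for illustration only.
  // Real rule: shardNum = murmur3Hash(_routing) % numberOfPrimaryShards, where _routing defaults to _id.
  def routeToShard(docId: String, numberOfPrimaryShards: Int): Int = {
    val hash = docId.hashCode // placeholder hash; ES actually uses Murmur3
    Math.floorMod(hash, numberOfPrimaryShards)
  }

  // routeToShard("1001", 3) always picks the same shard as long as the index keeps 3 primary shards.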


Read flow (based on _id)
  1. A read can retrieve the document from the primary shard or from any of its replica shards.
  2. The client sends the read request to Node1; Node1 is the coordinating node.
  3. The coordinating node uses the document's _id to determine which shard the document belongs to.
    Copies of that shard (primary and replicas) exist on all three nodes, and on each request the coordinating node round-robins across them to balance the load (see the sketch below).
  4. Suppose this time it forwards the request to Node2: Node2 returns the document to Node1, and Node1 returns it to the client.
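A minimal sketch of the round-robin choice among the copies of one shard; the node names are just the example cluster above, and the real coordinating node tracks this state per shard internally:

  // Round-robin across the copies (primary + replicas) of one shard, for illustration only.
  class ShardCopyPicker(copies: Vector[String]) {
    private var next = 0
    def pick(): String = {
      val node = copies(next)
      next = (next + 1) % copies.size
      node
    }
  }

  val picker = new ShardCopyPicker(Vector("Node1", "Node2", "Node3"))
  // Successive reads of the same shard go to Node1, Node2, Node3, Node1, ...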

Search flow (_search)
  1. A search is a two-phase process: Query Then Fetch.
  2. Query phase: the query is broadcast to every shard of the index (either the primary or a replica of each shard).
    Each shard executes the search locally and builds a priority queue of matching documents of size from + size.
  3. Each shard returns the IDs and sort values of the documents in its priority queue to the coordinating node, which merges them into its own priority queue to produce a globally sorted result list (a merge sketch follows this list).
  4. Fetch phase: the coordinating node works out which documents actually need to be returned and issues multi-GET requests to the relevant shards.
    Each shard loads those documents and returns them to the coordinating node. Once all documents have been fetched, the coordinating node returns the results to the client.
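A minimal sketch of the query-phase merge, assuming each shard has already returned its local top from + size hits as (docId, sortValue) pairs; a real coordinating node streams these through a bounded priority queue instead of a full sort:

  // Merge each shard's local top (from + size) hits into the global page, for illustration only.
  case class ShardHit(docId: String, sortValue: Double)

  def mergeTopHits(perShard: Seq[Seq[ShardHit]], from: Int, size: Int): Seq[ShardHit] =
    perShard.flatten
      .sortBy(h => -h.sortValue) // descending, e.g. by doubanScore
      .slice(from, from + size)  // only these docIds are fetched in the fetch phase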

Document updates and concurrency control
In ElasticSearch, all document data is immutable:
data is never modified in place, only superseded by new versions with ever-increasing version numbers. The main purpose is to resolve concurrency conflicts during updates (optimistic concurrency control).
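With the high-level REST client this optimistic concurrency control can be made explicit by sending the sequence number and primary term obtained from a read as preconditions of the write. A minimal sketch against the movie_test index used later in this post, with error handling elided; it reuses the `client` object created in the client section below:

  // Conditional write: succeeds only if nobody has changed the document since we read it.
  val got: GetResponse = client.get(new GetRequest("movie_test", "1001"), RequestOptions.DEFAULT)

  val conditionalIndex: IndexRequest = new IndexRequest("movie_test").id("1001")
    .source("""{"id":"1001","movie_name":"速度与激情"}""", XContentType.JSON)
    .setIfSeqNo(got.getSeqNo)             // the write is rejected with a version conflict (HTTP 409)
    .setIfPrimaryTerm(got.getPrimaryTerm) // if another writer has updated the document in between

  client.index(conditionalIndex, RequestOptions.DEFAULT)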

How deletion works
Deleting a document does not physically remove it right away; instead the document is marked as deleted (a logical delete),
and only when the index's segments are merged is it physically removed and the storage space reclaimed.


Shards exist for load balancing, so that each node's hardware is fully utilized.
But if there are too many shards, multiple shards on a single node accept requests at the same time and compete for that node's resources, creating internal overhead.

How should you plan the number of shards?
  1. By daily data volume
    If the estimated daily volume is under 10 GB, a single shard is enough; above 10 GB, use more shards, but keep any single shard under about 30 GB.
    A 200 GB daily index is therefore split into roughly 7-10 shards.
  2. By heap memory
    The officially recommended maximum heap for an Elasticsearch instance is 32 GB.
    A 10 GB shard takes roughly 30-80 MB of heap, so a node with a 32 GB heap should hold no more than about 1000 shards (a sketch of setting the shard count at creation time follows this list).
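The shard count is fixed in the index settings at creation time. A minimal sketch with the high-level REST client, reusing the `client` object created further below; the daily index name is hypothetical and the 7 shards follow the 200 GB example above:

  import org.elasticsearch.client.indices.CreateIndexRequest
  import org.elasticsearch.common.settings.Settings

  // Create an index with an explicit shard layout (hypothetical daily index name).
  def createDailyIndex(): Unit = {
    val createRequest: CreateIndexRequest = new CreateIndexRequest("movie_index_2024-01-01")
    createRequest.settings(
      Settings.builder()
        .put("index.number_of_shards", 7)    // fixed once the index is created
        .put("index.number_of_replicas", 1)  // replicas can be changed at any time
    )
    client.indices().create(createRequest, RequestOptions.DEFAULT)
  }
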
Shard optimization
  1. Archive cold data promptly.
  2. Reduce the resources each shard consumes by merging the many segments inside a shard (see the force-merge commands below).

How data is physically committed

  1. The request first writes the document into an in-memory buffer (not yet searchable) and at the same time records the operation in the in-memory translog; the translog is then synced to its disk file according to the default settings. At this point success is returned to the caller.
  2. After 1-2 seconds (the default refresh interval is 1 second), a refresh takes the data from the non-searchable buffer, organizes it into a segment, and places it in the searchable buffer (segment data).
  3. Segments stay in the searchable buffer for quite some time, until the flush conditions are met and they are written to disk (by default every 30 minutes, or when the translog reaches its default 512 MB limit).
  4. After the flush, the segments are stored on disk as files; in the background, segments are merged periodically (or manually), consolidating their data into the shard's files.
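Both refresh and flush normally run on their own schedules, but they can also be triggered explicitly through the client, which is handy in tests when a write must become searchable immediately. A minimal sketch, reusing the `client` object created further below:

  import org.elasticsearch.action.admin.indices.flush.FlushRequest
  import org.elasticsearch.action.admin.indices.refresh.RefreshRequest

  // Make buffered documents searchable right away (normally happens automatically every 1s).
  client.indices().refresh(new RefreshRequest("movie_test"), RequestOptions.DEFAULT)

  // Force the searchable segments onto disk and clear the translog
  // (normally happens every 30 minutes or when the translog reaches its size limit).
  client.indices().flush(new FlushRequest("movie_test"), RequestOptions.DEFAULT)
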
Segment optimization

With ES's asynchronous write mechanism, every refresh of in-memory data (by default once per second) creates a new segment on the corresponding shard.
Each segment is an independent Lucene index; if a shard consists of thousands of segments, performance is bound to suffer and memory consumption grows as well.

Manually trigger a segment merge

GET _cat/indices?v&s=segmentsCount:desc&h=index,segmentsCount,segmentsMemory,memoryTotal,storeSize,p,r

POST movie_index_cn/_forcemerge?max_num_segments=1
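The same force merge can be issued from the Scala client. A minimal sketch, reusing the `client` object created further below:

  import org.elasticsearch.action.admin.indices.forcemerge.ForceMergeRequest

  // Merge all segments of the index down to one segment. This is expensive, so it is best
  // run on indices that are no longer being written to (e.g. yesterday's daily index).
  val forceMergeRequest: ForceMergeRequest = new ForceMergeRequest("movie_index_cn")
  forceMergeRequest.maxNumSegments(1)
  client.indices().forcemerge(forceMergeRequest, RequestOptions.DEFAULT)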

Testing ES from client code

Create a Maven project and add the dependencies

    <dependencies>
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>7.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-high-level-client</artifactId>
            <version>7.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5.13</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.78</version>
        </dependency>
    </dependencies>
Write the test class in Scala

Create the client

  import org.apache.http.HttpHost
  import org.elasticsearch.client.{RestClient, RestClientBuilder, RestHighLevelClient}

  /** Client object */
  var client: RestHighLevelClient = create()

  /** Create the client object */
  def create(): RestHighLevelClient = {
    val restClientBuilder: RestClientBuilder = RestClient.builder(new HttpHost("hadoop101", 9200))
    val client: RestHighLevelClient = new RestHighLevelClient(restClientBuilder)
    client
  }

  /** Close the client object */
  def close(): Unit = {
    if (client != null) client.close()
  }

Query - single-document get by id

  def getById(): Unit = {
    val getRequest: GetRequest = new GetRequest("movie_test", "1001")
    val getResponse: GetResponse = client.get(getRequest, RequestOptions.DEFAULT)
    val dataStr: String = getResponse.getSourceAsString
    println(dataStr)
  }

Query - conditional search (doubanScore >= 5.0, keyword search for "red sea" with the keyword highlighted, first page, 2 results per page, sorted by doubanScore in descending order)

  def searchByFilter(): Unit = {
    /**
    GET /movie_index/_search
    {
      "query": {
        "bool": {
          "filter": [
            {
              "range": {
                "doubanScore": {
                  "gte": 5.0
                }
              }
            }
          ],
          "must": [
            {
              "match": {
                "name": "red sea"
              }
            }
          ]
        }
      },
      "highlight": {
        "fields": {
          "name": {}
        }
      },
      "from": 0,
      "size": 2,
      "sort": [
        {
          "doubanScore": {
            "order": "desc"
          }
        }
      ]
    }
     */
    val searchRequest: SearchRequest = new SearchRequest("movie_index")
    val searchSourceBuilder: SearchSourceBuilder = new SearchSourceBuilder()
    //query
    //bool
    val boolQueryBuilder: BoolQueryBuilder = QueryBuilders.boolQuery()
    //filter
    val rangeQueryBuilder: RangeQueryBuilder = QueryBuilders.rangeQuery("doubanScore").gte(5.0)
    boolQueryBuilder.filter(rangeQueryBuilder)
    //must
    val matchQueryBuilder: MatchQueryBuilder = QueryBuilders.matchQuery("name", "red sea")
    boolQueryBuilder.must(matchQueryBuilder)
    searchSourceBuilder.query(boolQueryBuilder)

    //pagination
    searchSourceBuilder.from(0)
    searchSourceBuilder.size(2)  // 2 results per page, matching the DSL above
    //sort by doubanScore, descending
    searchSourceBuilder.sort("doubanScore", SortOrder.DESC)

    //highlight
    val highlightBuilder: HighlightBuilder = new HighlightBuilder()
    highlightBuilder.field("name")
    searchSourceBuilder.highlighter(highlightBuilder)

    searchRequest.source(searchSourceBuilder)
    val searchResponse: SearchResponse = client.search(searchRequest, RequestOptions.DEFAULT)

    //total number of hits
    val totalDocs: Long = searchResponse.getHits.getTotalHits.value

    //detail hits
    val hits: Array[SearchHit] = searchResponse.getHits.getHits
    for (hit <- hits) {
      //source data
      val dataJson: String = hit.getSourceAsString
      //hit.getSourceAsMap
      //extract the highlight fields
      val highlightFields: util.Map[String, HighlightField] = hit.getHighlightFields
      val highlightField: HighlightField = highlightFields.get("name")
      val fragments: Array[Text] = highlightField.getFragments
      val highLightValue: String = fragments(0).toString

      println("明细数据: " + dataJson)
      println("高亮: " + highLightValue)
    }
  }

Query - aggregation (average doubanScore of the movies each actor has appeared in, sorted in descending order)

  def searchByAggs(): Unit ={
    /**
    GET /movie_index/_search
    {
      "aggs": {
        "groupByActorName": {
          "terms": {
            "field": "actorList.name.keyword",
            "size": 10,
            "order": {
              "doubanScoreAvg": "desc"
            }
          }
          , "aggs": {
            "doubanScoreAvg": {
              "avg": {
                "field": "doubanScore"
              }
            }
          }
        }
      },
      "size": 0
    }
     */
    val searchRequest: SearchRequest = new SearchRequest("movie_index")
    val searchSourceBuilder: SearchSourceBuilder = new SearchSourceBuilder()
    //no detail hits, aggregations only
    searchSourceBuilder.size(0)
    //group
    val termsAggregationBuilder: TermsAggregationBuilder = AggregationBuilders.terms("groupByActorName")
      .field("actorList.name.keyword").size(10).order(BucketOrder.aggregation("doubanScoreAvg", false))
    //avg
    val avgAggregationBuilder: AvgAggregationBuilder = AggregationBuilders.avg("doubanScoreAvg").field("doubanScore")
    termsAggregationBuilder.subAggregation(avgAggregationBuilder)

    searchSourceBuilder.aggregation(termsAggregationBuilder)
    searchRequest.source(searchSourceBuilder)
    val searchResponse: SearchResponse = client.search(searchRequest, RequestOptions.DEFAULT)

    val aggregations: Aggregations = searchResponse.getAggregations
    val groupByActorNameParsedTerms: ParsedTerms = aggregations.get[ParsedTerms]("groupByActorName")
    val buckets: util.List[_ <: Terms.Bucket] = groupByActorNameParsedTerms.getBuckets
    import scala.collection.JavaConverters._

    for (bucket <- buckets.asScala) {
      //actor name
      val actorName: String = bucket.getKeyAsString
      //number of movies
      val movieCount: Long = bucket.getDocCount

      //average score
      val aggregations: Aggregations = bucket.getAggregations
      val doubanScoreAvgParsedAvg: ParsedAvg = aggregations.get[ParsedAvg]("doubanScoreAvg")
      val avgScore: Double = doubanScoreAvgParsedAvg.getValue
      println(s"$actorName appeared in $movieCount movies, average score $avgScore")
    }

  }

Idempotent insert - specify the doc id (a non-idempotent write does not specify a doc id)

  def addByIdempotent(): Unit = {
    val indexRequest: IndexRequest = new IndexRequest()
    //specify the index
    indexRequest.index("movie_test")
    //specify the doc id (this is what makes the write idempotent)
    indexRequest.id("1001")
    val movie: Movie = Movie("1001", "速度与激情")
    val movieJson: String = JSON.toJSONString(movie, new SerializeConfig(true))
    indexRequest.source(movieJson, XContentType.JSON)
    client.index(indexRequest, RequestOptions.DEFAULT)
  }

Bulk writes

  def bulk(): Unit = {
    val bulkRequest: BulkRequest = new BulkRequest()
    val movies: List[Movie] = List[Movie](
      Movie("1002", "长津湖"),
      Movie("1003", "水门桥"),
      Movie("1004", "狙击手"),
      Movie("1005", "熊出没")
    )

    for (movie <- movies) {
      val indexRequest: IndexRequest = new IndexRequest("movie_test") // specify the index
      val movieJson: String = JSON.toJSONString(movie, new SerializeConfig(true))
      indexRequest.source(movieJson, XContentType.JSON)
      //idempotent write: specify the id; a non-idempotent write omits it
      indexRequest.id(movie.id)
      //add the indexRequest to the bulk request
      bulkRequest.add(indexRequest)
    }
    client.bulk(bulkRequest, RequestOptions.DEFAULT)
  }
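A bulk request can partially fail without throwing, so it is worth inspecting the returned BulkResponse. A minimal sketch of that check, reusing the bulkRequest built above:

  import org.elasticsearch.action.bulk.BulkResponse

  // A bulk call does not throw on per-item failures, so inspect the response.
  val bulkResponse: BulkResponse = client.bulk(bulkRequest, RequestOptions.DEFAULT)
  if (bulkResponse.hasFailures) {
    println("Some bulk items failed: " + bulkResponse.buildFailureMessage())
  }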

Update - single document

  def update(): Unit = {
    val updateRequest: UpdateRequest = new UpdateRequest("movie_test", "1001")
    // fields to update are passed as key, value pairs (repeat for multiple fields)
    updateRequest.doc("movie_name", "功夫")
    client.update(updateRequest, RequestOptions.DEFAULT)
  }

Update - update by query (conditional update)

  def updateByQuery(): Unit = {
    /**
    POST /movie_test/_update_by_query
    {
      "query": {
        "bool": {
          "filter": [
            {
              "term": {
                "movie_name": "功夫"
              }
            }
          ]
        }
      },
      "script": {
        "source": "ctx._source['movie_name']=params.newName",
        "params": {
          "newName": "战狼"
        },
        "lang": "painless"
      }
    }
    **/
    val updateByQueryRequest: UpdateByQueryRequest = new UpdateByQueryRequest("movie_test")
    val boolQueryBuilder: BoolQueryBuilder = QueryBuilders.boolQuery()
    val termQueryBuilder: TermQueryBuilder = QueryBuilders.termQuery("movie_name", "功夫")
    boolQueryBuilder.filter(termQueryBuilder)
    updateByQueryRequest.setQuery(boolQueryBuilder)

    val params: java.util.HashMap[String, AnyRef] = new util.HashMap[String, AnyRef]()
    params.put("newName", "战狼")
    val script: Script = new Script(
      ScriptType.INLINE,
      Script.DEFAULT_SCRIPT_LANG,
      "ctx._source['movie_name']=params.newName",
      params
    )
    updateByQueryRequest.setScript(script)

    client.updateByQuery(updateByQueryRequest, RequestOptions.DEFAULT)
  }

Delete

  def delete(): Unit = {
    val deleteRequest: DeleteRequest = new DeleteRequest("movie_test", "1001")
    client.delete(deleteRequest, RequestOptions.DEFAULT)
  }
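Conditional deletes work like the conditional update above, through a delete-by-query request. A minimal sketch against the same movie_test index:

  import org.elasticsearch.index.reindex.DeleteByQueryRequest

  // Delete every document whose movie_name matches the term (parallels updateByQuery above).
  def deleteByQuery(): Unit = {
    val deleteByQueryRequest: DeleteByQueryRequest = new DeleteByQueryRequest("movie_test")
    deleteByQueryRequest.setQuery(QueryBuilders.termQuery("movie_name", "功夫"))
    client.deleteByQuery(deleteByQueryRequest, RequestOptions.DEFAULT)
  }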

Full test file
https://gitee.com/galen.zhang/spark-realtime/blob/master/es-demo/src/main/scala/com/intmall/es/EsTest.scala
