Getting Started with Elasticsearch

Official website

1. Elasticsearch Overview

  • Elasticsearch is the core of the Elastic Stack.
  • Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.
  • Strengths:
    • Very fast.
    • Highly scalable.
    • Resilient (strong fault tolerance).
    • Flexible (supports both structured and unstructured data).
1.1 Elasticsearch Core Concepts
  1. Near real-time (NRT)
    • By default, newly indexed data becomes searchable after about 1 s, which is why Elasticsearch is called a near-real-time system.
  2. Cluster
    • One or more nodes that together store the data; all nodes provide indexing and search.
  3. Node
    • A single server in the cluster; it can store data and takes part in the cluster's indexing and search.
    • Master node: manages the cluster and propagates cluster state to the other nodes.
    • Data node: a node that stores data.
  4. Index
    • Analogous to a database/schema in a relational database.
  5. Document
    • The smallest unit that can be indexed, analogous to a single row in a relational database.
  6. Shard
    • An index may grow beyond what a single node can hold, so it is split into pieces; each piece is a shard, comparable to a block in HDFS.
  7. Replica
    • The number of backup copies of each shard. By default every index gets one replica per shard (see the example after this list).
  8. Cluster status
    • Red: some shards have lost both their primary and all replicas; data completeness is affected.
    • Yellow: some replica shards are missing; data completeness is not affected.
    • Green: healthy; no shards are missing.
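  • As referenced in the list above, here is a minimal sketch of setting shard and replica counts when creating an index (my_index is just an example name):
PUT my_index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
  • The shard count is fixed once the index is created, while the replica count can be changed later; on a single-node deployment like the one below, number_of_replicas can be set to 0 to keep the index green.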
1.2 Elasticsearch Read/Write Flow
  • Read flow:
    1. Resolve the read request to the relevant shards. Note that since most searches target one or more indices, they typically need to read from multiple shards, each holding a different subset of the data.
    2. For each relevant shard, select an active copy from its replication group. This can be the primary or a replica; by default Elasticsearch simply round-robins among the copies.
    3. Send shard-level read requests to the selected copies.
    4. Combine the results and respond. Note that for a get-by-ID lookup only one shard is relevant, so this step can be skipped.
  • Write flow (illustrated after this list):
    1. Data is first written to the index buffer (memory) and the transaction log (disk); even if the in-memory data is lost, it can still be recovered from the on-disk transaction log.
    2. A refresh (every 1 s by default) moves the index buffer contents into segments (memory); from this point the data is searchable.
    3. A flush (every 30 minutes by default) persists the segments to disk and clears the transaction log. A flush is also triggered when the transaction log fills up (512 MB by default).
    4. A merge periodically combines segments.
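  • A quick illustration of the refresh step above (a sketch; my_index is a placeholder name, and both endpoints are standard index APIs):
# Adjust the automatic refresh interval (default 1s)
PUT my_index/_settings
{
  "index": {
    "refresh_interval": "5s"
  }
}
# Force an immediate refresh so freshly indexed documents become searchable right away
POST my_index/_refresh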

2. Elasticsearch Deployment

  • The version deployed here is 7.9.3.
  • This is a single-node deployment, not cluster mode.

Download link

2.1 Deployment Prerequisites
  • JDK 1.8 must be installed (note that the 7.9.3 tarball also ships a bundled JDK, visible as the jdk directory in the listing below).
2.2 Deployment
[hadoop@bigdata ~]$ cd software/
[hadoop@bigdata software]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.3-linux-x86_64.tar.gz
[hadoop@bigdata software]$ tar -xzvf elasticsearch-7.9.3-linux-x86_64.tar.gz -C ~/app/
[hadoop@bigdata software]$ cd ~/app/
[hadoop@bigdata app]$ ln -s elasticsearch-7.9.3 elasticsearch
[hadoop@bigdata app]$ cd elasticsearch
[hadoop@bigdata elasticsearch]$ ll
total 568
drwxr-xr-x.  2 hadoop hadoop   4096 Oct 16 18:39 bin
drwxr-xr-x.  3 hadoop hadoop    199 Oct 30 15:06 config
drwxr-xr-x.  8 hadoop hadoop     96 Oct 16 18:39 jdk
drwxr-xr-x.  3 hadoop hadoop   4096 Oct 16 18:39 lib
-rw-r--r--.  1 hadoop hadoop  13675 Oct 16 18:34 LICENSE.txt
drwxr-xr-x.  2 hadoop hadoop     30 Oct 29 20:17 logs
drwxr-xr-x. 51 hadoop hadoop   4096 Oct 16 18:39 modules
-rw-r--r--.  1 hadoop hadoop 544318 Oct 16 18:38 NOTICE.txt
drwxr-xr-x.  2 hadoop hadoop      6 Oct 16 18:38 plugins
-rw-r--r--.  1 hadoop hadoop   7007 Oct 16 18:34 README.asciidoc
  • Edit the configuration files
[hadoop@bigdata elasticsearch]$ cd config/
[hadoop@bigdata config]$ vim elasticsearch.yml

node.name: bigdata
path.data: /home/hadoop/app/tmp/es/data
path.logs: /home/hadoop/app/tmp/es/logs
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["bigdata"]
# enable cross-origin requests (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

[hadoop@bigdata config]$ vim jvm.options

-Xms256M
-Xmx256M
  • Since this deployment is for learning purposes, a small JVM heap is sufficient.
  • Start Elasticsearch in the background
[hadoop@bigdata config]$ cd ../bin
[hadoop@bigdata bin]$ ./elasticsearch -d
future versions of Elasticsearch will require Java 11; your Java version from [/home/hadoop/app/jdk1.8.0_151/jre] does not meet this requirement
future versions of Elasticsearch will require Java 11; your Java version from [/home/hadoop/app/jdk1.8.0_151/jre] does not meet this requirement
[hadoop@bigdata bin]$ jps
10538 Elasticsearch
10620 Jps
  • Visit the Elasticsearch web endpoint: http://bigdata:9200/
{
  "name" : "bigdata",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "A3iEa2EyRJuccHAaA0PPOw",
  "version" : {
    "number" : "7.9.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "c4138e51121ef06a6404866cddc601906fe5c868",
    "build_date" : "2020-10-16T10:36:16.141335Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

3. Elasticsearch Monitoring Tool Deployment

  • A recommended web UI for monitoring ES: elasticsearch-head

Monitoring tool repository

3.1 Monitoring Tool Prerequisites
  • Install git
[root@bigdata ~]# yum install -y git
  • Install Node.js
[hadoop@bigdata software]$ wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
[hadoop@bigdata software]$ tar -zxvf node-v4.4.7-linux-x64.tar.gz -C ~/app/
[hadoop@bigdata software]$ cd ~/app/
[hadoop@bigdata app]$ cd node-v4.4.7-linux-x64/
[hadoop@bigdata node-v4.4.7-linux-x64]$ pwd
/home/hadoop/app/node-v4.4.7-linux-x64
  • Set environment variables
[hadoop@bigdata node-v4.4.7-linux-x64]$ cd ~
[hadoop@bigdata ~]$ vim .bashrc 

export NODE_JS_HOME=/home/hadoop/app/node-v4.4.7-linux-x64
export PATH=${NODE_JS_HOME}/bin:${PATH}

[hadoop@bigdata ~]$ source .bashrc 
3.2 Monitoring Tool Deployment
  • Install elasticsearch-head
[hadoop@bigdata ~]$ cd sourcecode/
[hadoop@bigdata sourcecode]$ git clone git://github.com/mobz/elasticsearch-head.git
[hadoop@bigdata sourcecode]$ cd elasticsearch-head
[hadoop@bigdata elasticsearch-head]$ npm install
  • You may hit the following error:
tar: Error is not recoverable: exiting now

    at ChildProcess.exithandler (child_process.js:213:12)
    at emitTwo (events.js:87:13)
    at ChildProcess.emit (events.js:172:7)
    at maybeClose (internal/child_process.js:827:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:211:5)
npm ERR! Linux 3.10.0-957.el7.x86_64
npm ERR! argv "/home/hadoop/app/node-v4.4.7-linux-x64/bin/node" "/home/hadoop/app/node-v4.4.7-linux-x64/bin/npm" "install"
npm ERR! node v4.4.7
npm ERR! npm  v2.15.8
npm ERR! code ELIFECYCLE

npm ERR! phantomjs-prebuilt@2.1.16 install: `node install.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the phantomjs-prebuilt@2.1.16 install script 'node install.js'.
npm ERR! This is most likely a problem with the phantomjs-prebuilt package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node install.js
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs phantomjs-prebuilt
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! 
npm ERR!     npm owner ls phantomjs-prebuilt
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /home/hadoop/sourcecode/elasticsearch-head/npm-debug.log
  • Fix: install bzip2 (tar needs it to unpack the bzip2-compressed phantomjs archive), then run the install again:
[root@bigdata ~]# yum -y install bzip2
[hadoop@bigdata elasticsearch-head]$ npm install
[hadoop@bigdata elasticsearch-head]$ npm run start

> elasticsearch-head@0.0.0 start /home/hadoop/sourcecode/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
  • Open the web page: http://bigdata:9100/
    [screenshot: elasticsearch-head1]

  • Connect to the ES cluster
    [screenshot: elasticsearch-head2]

  • Index contents can be browsed directly by clicking through the web UI

    [screenshot: elasticsearch-head3]

4. Basic Elasticsearch Command Usage

  • HEAD: fetch only a resource's header information; useful for checking whether an index exists (see the example below)
  • GET: retrieve a resource
  • POST: create or update a resource
  • PUT: create or update a resource
  • DELETE: delete a resource
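  • For example, HEAD can check whether an index exists without fetching a body (this assumes the bigdata index created in 4.3):
HEAD bigdata
# responds "200 - OK" if the index exists, "404 - Not Found" otherwise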
4.1 Check Cluster Health
GET _cluster/health
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 24,
  "active_shards" : 24,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 4,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 85.71428571428571
}
  • The yellow status above is expected on a single node: replica shards are never allocated to the same node as their primaries, so they remain unassigned.
4.2 List All Indices
GET _cat/indices
green  open .monitoring-kibana-7-2020.10.29 amIPMJ92To-oF8XLNejNXA 1 0  6220    0     1mb     1mb
green  open .kibana-event-log-7.9.3-000002  Pym-DS6OSl60K-G4_isj8g 1 0     1    0   5.5kb   5.5kb
green  open .kibana-event-log-7.9.3-000001  2KKKVFhxTySJqOBKaSXRUQ 1 0     4    0  21.6kb  21.6kb
green  open .monitoring-kibana-7-2021.02.06 wO7zZpboSV6ROOHcocur4w 1 0  1828    0 313.7kb 313.7kb
green  open .monitoring-kibana-7-2021.02.16 0EFnop5ZSZOAhKFHLeTrzw 1 0   126    0 120.8kb 120.8kb
yellow open book                            SBq986xTQ9i_Q0N-skDRIQ 1 1     2    0     8kb     8kb
green  open .kibana-event-log-7.9.3-000003  N7Wj6oTLTV2CcZLm9fOfHA 1 0     1    0   5.5kb   5.5kb
green  open .apm-agent-configuration        r29XmJ0jRKO1IFML9n0_bQ 1 0     0    0    208b    208b
green  open .monitoring-es-7-2020.10.29     XNGxjoh1QvGg6YcvUNcb_A 1 0 40170    0  13.5mb  13.5mb
green  open .monitoring-es-7-2020.11.29     UeDtiCjxRFK2m19gwM-E5w 1 0 66257   40  22.6mb  22.6mb
green  open .kibana_1                       K68bvHtqQQitRAcOY_Aurg 1 0    53   58  10.4mb  10.4mb
yellow open hbasetoes_index                 BXyIzFzxQP6-NNsYcTHtvQ 1 1     1    0   8.4kb   8.4kb
yellow open flink                           U6IF4U4VQYmW1vs4oSIsBw 1 1     2    0   8.7kb   8.7kb
green  open .monitoring-es-7-2020.10.30     RKvmUV5bR6Oc0eX30rkExw 1 0 80549    0  28.6mb  28.6mb
yellow open xk                              Abtab4EKRia8uBg2LYS2og 1 1     1    0  17.7kb  17.7kb
green  open .apm-custom-link                EvrXzYqMSECEfZ8HDhUvew 1 0     0    0    208b    208b
green  open .kibana_task_manager_1          b0XqeI-GTXyNoH7X1gMh8Q 1 0     6   73  57.4kb  57.4kb
green  open .monitoring-kibana-7-2020.10.30 oLGs73ceS3mPmJpC5NJ74w 1 0 10180    0   1.5mb   1.5mb
green  open .monitoring-es-7-2021.02.16     qOZ0Kp1vRYOwToitDQQVFA 1 0  6044 6298   4.2mb   4.2mb
green  open .monitoring-es-7-2021.02.06     fP6lMKO5RtuixjKZSvjSFQ 1 0 21912    0  10.3mb  10.3mb
green  open .monitoring-kibana-7-2020.11.29 XyzkauRTTzi-iady9ygviw 1 0   742    0 195.1kb 195.1kb
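  • A usage note: _cat/indices accepts extra query parameters, e.g. v prints column headers and health filters by status:
GET _cat/indices?v&health=yellow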
4.3 Create an Index
PUT bigdata
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "bigdata"
}
4.4 Insert/Update Data
  • PUT requires an explicit id, otherwise the document cannot be indexed; POST may omit the id, in which case an auto-generated id is assigned.
  • These index calls cannot update just one field: they replace the whole document, so to change one field you must resend all fields, otherwise the stored document will contain only the fields you sent (partial updates use the _update API; see the example after the responses below).
  • Insert/update with PUT (an id must be specified)
PUT bigdata/_doc/1
{
  "name": "hadoop",
  "tags": [
    "看书",
    "看电影"
  ]
}
  • Insert/update with POST (the id may be given or omitted)
POST bigdata/_doc/2
{
  "name": "spark",
  "tags": [
    "看书",
    "打游戏"
  ]
}
{
  "_index" : "bigdata",
  "_type" : "_doc",
  "_id" : "2",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 1
}
POST bigdata/_doc
{
  "name": "spark",
  "tags": [
    "看书"
  ]
}
{
  "_index" : "bigdata",
  "_type" : "_doc",
  "_id" : "K1vcqXcBEy2TS2uORWda",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 3,
  "_primary_term" : 1
}
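  • As mentioned in the note above, changing a single field without resending the whole document is done with the _update API, which merges a partial document into the stored one (a sketch; the age field here is just an example):
POST bigdata/_update/2
{
  "doc": {
    "age": 18
  }
}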
4.5 Query Data
  • Fetch a single document by id
GET bigdata/_doc/1/_source
#! Deprecation: [types removal] Specifying types in get_source and exist_sourcerequests is deprecated.
{
  "name" : "hadoop",
  "tags" : [
    "看书",
    "看电影"
  ]
}
  • Query all documents (the typeless equivalents of these URLs are shown after the response below)
GET bigdata/_doc/_search
#! Deprecation: [types removal] Specifying types in search requests is deprecated.
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 3,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "bigdata",
        "_type" : "_doc",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "hadoop",
          "tags" : [
            "看书",
            "看电影"
          ]
        }
      },
      {
        "_index" : "bigdata",
        "_type" : "_doc",
        "_id" : "2",
        "_score" : 1.0,
        "_source" : {
          "name" : "spark",
          "tags" : [
            "看书",
            "打游戏"
          ]
        }
      },
      {
        "_index" : "bigdata",
        "_type" : "_doc",
        "_id" : "K1vcqXcBEy2TS2uORWda",
        "_score" : 1.0,
        "_source" : {
          "name" : "spark",
          "tags" : [
            "看书"
          ]
        }
      }
    ]
  }
}
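  • The deprecation warnings above come from the legacy type-in-URL request forms; on 7.x the equivalent typeless requests are:
GET bigdata/_source/1
GET bigdata/_search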
4.6 Delete Data
  • Delete a single document
DELETE bigdata/_doc/1
{
  "_index" : "bigdata",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 2,
  "result" : "deleted",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 4,
  "_primary_term" : 1
}
  • Delete the entire index
DELETE bigdata
{
  "acknowledged" : true
}

5. Using the External IK Analyzer

  • Elasticsearch ships with a number of built-in analyzers; other commonly used analyzers, such as IK for Chinese, are installed as plugins.
5.1 IK Analyzer Deployment

IK analyzer download link

  • Install the plugin
[hadoop@bigdata ~]$ cd $ES_HOME/
[hadoop@bigdata elasticsearch]$ cd plugins
[hadoop@bigdata plugins]$ mkdir ik
[hadoop@bigdata plugins]$ cd ik/
[hadoop@bigdata ik]$ wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.3/elasticsearch-analysis-ik-7.9.3.zip
[hadoop@bigdata ik]$ unzip elasticsearch-analysis-ik-7.9.3.zip 
[hadoop@bigdata ik]$ mv elasticsearch-analysis-ik-7.9.3.zip ~/sourcecode/
  • Restart Elasticsearch
[hadoop@bigdata ~]$ jps
13219 Jps
10538 Elasticsearch
[hadoop@bigdata ~]$ kill -9 10538
[hadoop@bigdata ~]$ cd $ES_HOME/
[hadoop@bigdata elasticsearch]$ bin/elasticsearch -d
future versions of Elasticsearch will require Java 11; your Java version from [/home/hadoop/app/jdk1.8.0_151/jre] does not meet this requirement
future versions of Elasticsearch will require Java 11; your Java version from [/home/hadoop/app/jdk1.8.0_151/jre] does not meet this requirement
5.2 Using the IK Analyzer
  • ik_smart: coarsest-grained segmentation (fewest tokens)
  • ik_max_word: finest-grained segmentation (a mapping example using both follows the _analyze output below)
POST /_analyze
{
  "analyzer": "ik_smart",
  "text": "我是一个程序员"
}
{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "一个",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "程序员",
      "start_offset" : 4,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 3
    }
  ]
}
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "我是一个程序员"
}
{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "一个",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "一",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "TYPE_CNUM",
      "position" : 3
    },
    {
      "token" : "个",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "COUNT",
      "position" : 4
    },
    {
      "token" : "程序员",
      "start_offset" : 4,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "程序",
      "start_offset" : 4,
      "end_offset" : 6,
      "type" : "CN_WORD",
      "position" : 6
    },
    {
      "token" : "员",
      "start_offset" : 6,
      "end_offset" : 7,
      "type" : "CN_CHAR",
      "position" : 7
    }
  ]
}
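  • As referenced above, a common pattern is to index with ik_max_word and search with ik_smart. A minimal mapping sketch (the news index and content field are just examples):
PUT news
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      }
    }
  }
}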
5.3 Using a Custom IK Dictionary
  • Goal: when segmenting "我是一个程序员" ("I am a programmer") with the IK analyzer, keep "一个程序员" ("a programmer") as a single token.
[hadoop@bigdata ~]$ cd $ES_HOME/
[hadoop@bigdata elasticsearch]$ cd plugins/ik/
[hadoop@bigdata ik]$ cd config/
[hadoop@bigdata config]$ vim IKAnalyzer.cfg.xml

<entry key="ext_dict">bigdata/mydict.dic</entry>

[hadoop@bigdata config]$ mkdir bigdata
[hadoop@bigdata config]$ cd bigdata/
[hadoop@bigdata bigdata]$ vim mydict.dic
一个程序员
  • Restart Elasticsearch, then test again:
POST /_analyze
{
  "analyzer": "ik_smart",
  "text": "我是一个程序员"
}
{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "一个程序员",
      "start_offset" : 2,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 2
    }
  ]
}
  • To add dictionary entries without restarting Elasticsearch, IK can also load a remote dictionary over HTTP (the remote_ext_dict entry in IKAnalyzer.cfg.xml).

6. Common Elasticsearch API Usage

API documentation (official site)

6.1 POM Dependencies
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.9.3</version>
</dependency>
<!-- fastjson is used by the test code below (JSON.toJSONString); the version here is indicative -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.75</version>
</dependency>
6.2 Code
6.2.1 ElasticsaerchUtils Code
package com.xk.bigdata.elasticserach.basic.utils;

import org.apache.http.HttpHost;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;

public class ElasticsaerchUtils {

    public static RestHighLevelClient client = null;

    /**
     * Initialize the client
     */
    public static void initClient(String hostName, int port) {
        client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost(hostName, port, "http")));
    }

    /**
     * Close the client
     */
    public static void closeClient() throws IOException {
        if (null != client) {
            client.close();
        }
    }

    /**
     * Index (put) a document
     */
    public static IndexResponse put(IndexRequest request) throws IOException {
        return client.index(request, RequestOptions.DEFAULT);
    }

    /**
     * Get a single document
     */
    public static GetResponse get(GetRequest getRequest) throws IOException {
        return client.get(getRequest, RequestOptions.DEFAULT);
    }

    /**
     * Get multiple documents (multi-get)
     */
    public static MultiGetResponse mget(MultiGetRequest request) throws IOException {
        return client.mget(request, RequestOptions.DEFAULT);
    }

    /**
     * Check whether a document exists
     */
    public static boolean isExists(GetRequest getRequest) throws IOException {
        return client.exists(getRequest, RequestOptions.DEFAULT);
    }

    /**
     * Update a document; a single field can be updated on its own
     */
    public static UpdateResponse update(UpdateRequest request) throws IOException {
        return client.update(request, RequestOptions.DEFAULT);
    }

    /**
     * Delete a document
     */
    public static DeleteResponse delete(DeleteRequest request) throws IOException {
        return client.delete(request, RequestOptions.DEFAULT);
    }

}
6.2.2 ElasticsaerchUtilsTest Code
package com.xk.bigdata.elasticserach.basic.utils;

import com.alibaba.fastjson.JSON;
import com.xk.bigdata.elasticserach.basic.domain.Person;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.*;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class ElasticsaerchUtilsTest {

    public static final String HOSTNAME = "bigdata";

    public static final int PORT = 9200;

    private static final String INDEX = "bigdata";

    @Before
    public void stepUp() {
        ElasticsaerchUtils.initClient(HOSTNAME, PORT);
    }

    @After
    public void cleanUp() throws IOException {
        ElasticsaerchUtils.closeClient();
    }

    /**
     * Index a document from a JSON string
     */
    @Test
    public void testPut01() throws IOException {
        IndexRequest request = new IndexRequest(INDEX);
        request.id("1");
        String jsonString = "{" +
                "\"user\":\"bigdata\"," +
                "\"postDate\":\"2013-01-30\"," +
                "\"message\":\"trying out Elasticsearch\"" +
                "}";
        request.source(jsonString, XContentType.JSON);
        IndexResponse response = ElasticsaerchUtils.put(request);
        System.out.println(response);
    }

    /**
     * Index a document from a Map
     */
    @Test
    public void testPut02() throws IOException {
        Map<String, Object> jsonMap = new HashMap<>();
        jsonMap.put("name", "spark");
        jsonMap.put("age", 18);
        jsonMap.put("tags", new String[]{"爱学习", "看电影"});

        IndexRequest request = new IndexRequest(INDEX)
                .id("2").source(jsonMap);
        IndexResponse response = ElasticsaerchUtils.put(request);
        System.out.println(response);
    }

    /**
     * Index a POJO (serialized to JSON) into ES
     * @throws IOException
     */
    @Test
    public void testPut03() throws IOException {
        Person person = new Person("flink",31);

        IndexRequest request = new IndexRequest(INDEX)
                .id("3").source(JSON.toJSONString(person), Requests.INDEX_CONTENT_TYPE);
        IndexResponse response = ElasticsaerchUtils.put(request);
        System.out.println(response);
    }

    @Test
    public void testPut04() throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        {
            builder.field("name", "hadoop");
            builder.field("age", 100);
        }
        builder.endObject();

        IndexRequest request = new IndexRequest(INDEX)
                .id("4").source(builder);
        IndexResponse response = ElasticsaerchUtils.put(request);
        System.out.println(response);
    }

    @Test
    public void testGetById01() throws IOException {
        GetRequest getRequest = new GetRequest(INDEX, "1");
        GetResponse getResponse = ElasticsaerchUtils.get(getRequest);
        System.out.println(getResponse.getSourceAsString());
    }

    @Test
    public void testGetById02() throws IOException {
        GetRequest getRequest = new GetRequest(INDEX, "2");
        // fields to include in _source
        String[] includes = Strings.EMPTY_ARRAY;
        // fields to exclude from _source
        String[] excludes = new String[]{"age", "tags"};
        FetchSourceContext fetchSourceContext =
                new FetchSourceContext(true, includes, excludes);
        getRequest.fetchSourceContext(fetchSourceContext);
        GetResponse getResponse = ElasticsaerchUtils.get(getRequest);
        System.out.println(getResponse.getSourceAsString());
    }

    @Test
    public void testGetByIds() throws IOException {
        MultiGetRequest request = new MultiGetRequest();
        request.add(new MultiGetRequest.Item(INDEX, "2"));
        request.add(new MultiGetRequest.Item(INDEX, "3"));
        request.add(new MultiGetRequest.Item(INDEX, "4"));
        MultiGetResponse response = ElasticsaerchUtils.mget(request);
        MultiGetItemResponse[] responses = response.getResponses();
        for (MultiGetItemResponse item : responses) {
            System.out.println(item.getResponse().getSourceAsString());
        }
    }

    @Test
    public void testIsExists() throws IOException {
        GetRequest getRequest = new GetRequest(INDEX,"1");
        boolean result = ElasticsaerchUtils.isExists(getRequest);
        System.out.println(result);
    }

    @Test
    public void testUpdate01() throws IOException {
        Map<String, Object> jsonMap = new HashMap<>();
        jsonMap.put("sex", "男");
        UpdateRequest request = new UpdateRequest(INDEX,"4").doc(jsonMap);
        UpdateResponse updateResponse = ElasticsaerchUtils.update(request);
        System.out.println(updateResponse);
    }

    @Test
    public void testUpdate02() throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        {
            builder.field("score", 99);
        }
        builder.endObject();
        UpdateRequest request = new UpdateRequest(INDEX,"4").doc(builder);
        UpdateResponse updateResponse = ElasticsaerchUtils.update(request);
        System.out.println(updateResponse);
    }

    /**
     * Delete a single document
     */
    @Test
    public void testDelete() throws IOException {
        DeleteRequest request = new DeleteRequest(INDEX,"1");
        DeleteResponse deleteResponse = ElasticsaerchUtils.delete(request);
        System.out.println(deleteResponse);
    }

}