Elasticsearch in Action, Part 2 (basic operations and installing the IK analyzer)

1 Basic Concepts

1.1 Node and Cluster
Elasticsearch is essentially a distributed database: multiple servers can work together, and each server can run several Elasticsearch instances.

A single Elasticsearch instance is called a node; a group of nodes forms a cluster.

1.2 Index
Elasticsearch indexes every field and, after processing, writes the result into an inverted index. Queries go straight to this index.

The top-level unit of data management in Elasticsearch is therefore called an Index. It is the analog of a single database, and every index name must be lowercase.

1.3 Document

A single record inside an Index is called a Document; many Documents together make up an Index.

1.4 Type

Documents can be grouped. In a weather Index, for example, they could be grouped by city (Beijing, Shanghai) or by weather (sunny, rainy). Such a grouping is called a Type: a virtual, logical grouping used to filter Documents.

    Compared with a relational database: an index corresponds to a database, a type corresponds to a table, and a document corresponds to a row in a table. In the previous article we installed Kibana; all of the demonstrations below are run in the Kibana console.

1.5 Creating an Index

Syntax: PUT ip:port/<index>/<type>/<id>

# Create an index (by indexing a document)
PUT /es/emp/1
{
  "id":1,
  "name":"lucy",
  "hobbys":["go","eat"]
}

The index name is es, the type is emp, and the trailing 1 is the document id. If the index does not exist yet, it is created automatically.

Fetch the document:   GET /es/emp/1

{
  "_index": "es",
  "_type": "emp",
  "_id": "1",
  "_version": 1,
  "found": true,
  "_source": {
    "id": 1,
    "name": "lucy",
    "hobbys": [
      "go",
      "eat"
    ]
  }
}

The response shows the format in which Elasticsearch stores data: metadata plus the document itself.

Metadata fields:

_index: the index name

_type: the document type

_id: the document id

_source: the document body

Elasticsearch controls concurrent writes with optimistic locking, distinguishing writes by version number; that is what the _version field above is for.

Note that this id is the metadata _id: documents are retrieved by the metadata _id, not by an id field inside the document body.
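Because writes are versioned, an index request can be made conditional on the version last seen. A minimal sketch (this assumes the document is still at version 1; if a concurrent write has already bumped the version, Elasticsearch rejects the request with a version-conflict error; the extra hobby value is illustrative):

```
PUT /es/emp/1?version=1
{
  "id": 1,
  "name": "lucy",
  "hobbys": ["go", "eat", "read"]
}
```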

Using POST lets Elasticsearch generate the id automatically:

POST /es/emp/
{
  "id":2,
  "name":"johy",
  "hobbys":["swimming","eat"]
}

Response:

{
  "_index": "es",
  "_type": "emp",
  "_id": "6ke8m2YB0gh-mNfcBmiv",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1
}

Fetch the document:

GET /es/emp/6ke8m2YB0gh-mNfcBmiv

{
  "_index": "es",
  "_type": "emp",
  "_id": "6ke8m2YB0gh-mNfcBmiv",
  "_version": 1,
  "found": true,
  "_source": {
    "id": 2,
    "name": "johy",
    "hobbys": [
      "swimming",
      "eat"
    ]
  }
}

# Fetch only selected fields

GET /es/emp/6ke8m2YB0gh-mNfcBmiv?_source=id,name

{
  "_index": "es",
  "_type": "emp",
  "_id": "6ke8m2YB0gh-mNfcBmiv",
  "_version": 1,
  "found": true,
  "_source": {
    "name": "johy",
    "id": 2
  }
}

# Fetch the document body without metadata

GET /es/emp/6ke8m2YB0gh-mNfcBmiv/_source

{
  "id": 2,
  "name": "johy",
  "hobbys": [
    "swimming",
    "eat"
  ]
}

Updating a document:

First fetch the current document:

GET /es/emp/6ke8m2YB0gh-mNfcBmiv

{
  "_index": "es",
  "_type": "emp",
  "_id": "6ke8m2YB0gh-mNfcBmiv",
  "_version": 1,
  "found": true,
  "_source": {
    "id": 2,
    "name": "johy",
    "hobbys": [
      "swimming",
      "eat"
    ]
  }
}

Change name to kumi and add a sex field:

# Update (replace) the document
PUT /es/emp/6ke8m2YB0gh-mNfcBmiv/
{
  "name":"kumi",
  "sex":"man"
}

Fetch it again:

{
  "_index": "es",
  "_type": "emp",
  "_id": "6ke8m2YB0gh-mNfcBmiv",
  "_version": 2,
  "found": true,
  "_source": {
    "name": "kumi",
    "sex": "man"
  }
}

A full update (PUT) actually deletes the previous document and indexes the new data under the same _id, incrementing _version by 1. Note that fields absent from the request body (here id and hobbys) are lost.

Partial update:

# Partially update the document
POST /es/emp/6ke8m2YB0gh-mNfcBmiv/_update
{
  "doc":
  {
    "name":"kumi",
    "sex":"man",
    "age":18
  }
}
{
	"_index": "es",
	"_type": "emp",
	"_id": "6ke8m2YB0gh-mNfcBmiv",
	"_version": 4,
	"found": true,
	"_source": {
		"doc": {
			"name": "kumi",
			"sex": "man",
			"age": 18
		},
		"sex": "man",
		"name": "kumi",
		"age": 18
	}
}

# Update via a script

POST /es/emp/6ke8m2YB0gh-mNfcBmiv/_update
{
    "script" : "ctx._source.age += 5"
}
{
  "_index": "es",
  "_type": "emp",
  "_id": "6ke8m2YB0gh-mNfcBmiv",
  "_version": 5,
  "found": true,
  "_source": {
    "doc": {
      "name": "kumi",
      "sex": "man",
      "age": 18
    },
    "sex": "man",
    "name": "kumi",
    "age": 23
  }
}
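If the target document might not exist yet, _update also accepts an upsert block; when the _id is not found, the upsert document is indexed instead of running the script (a sketch; the id 3 here is hypothetical):

```
POST /es/emp/3/_update
{
  "script": "ctx._source.age += 5",
  "upsert": {
    "age": 1
  }
}
```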

 

# Delete the document

DELETE /es/emp/1

 

2 Installing the IK Analyzer

Download: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.0.0/elasticsearch-analysis-ik-6.0.0.zip

Create an ik folder under D:\devs\es\elasticsearch-6.0.0\plugins and unzip the archive into it.

Restart Elasticsearch and reopen Kibana:

POST _analyze
{
  "analyzer":"ik_smart",
  "text":"WWW是覆盖全球的客户机/服务器网络;当用互联网接入WWW时,用户的计算机就等于一台客户机;通过WWW用户能够和各种不同类型的计算机之间实现有效的通讯。"
}

The tokenization result is as follows:

{
  "tokens": [
    {
      "token": "www",
      "start_offset": 0,
      "end_offset": 3,
      "type": "ENGLISH",
      "position": 0
    },
    {
      "token": "是",
      "start_offset": 3,
      "end_offset": 4,
      "type": "CN_CHAR",
      "position": 1
    },
    {
      "token": "覆盖",
      "start_offset": 4,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 2
    },
    {
      "token": "全球",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "的",
      "start_offset": 8,
      "end_offset": 9,
      "type": "CN_CHAR",
      "position": 4
    },
    {
      "token": "客户机",
      "start_offset": 9,
      "end_offset": 12,
      "type": "CN_WORD",
      "position": 5
    },
    {
      "token": "服务器",
      "start_offset": 13,
      "end_offset": 16,
      "type": "CN_WORD",
      "position": 6
    },
    {
      "token": "网络",
      "start_offset": 16,
      "end_offset": 18,
      "type": "CN_WORD",
      "position": 7
    },
    {
      "token": "当用",
      "start_offset": 19,
      "end_offset": 21,
      "type": "CN_WORD",
      "position": 8
    },
    {
      "token": "互联网",
      "start_offset": 21,
      "end_offset": 24,
      "type": "CN_WORD",
      "position": 9
    },
    {
      "token": "接入",
      "start_offset": 24,
      "end_offset": 26,
      "type": "CN_WORD",
      "position": 10
    },
    {
      "token": "www",
      "start_offset": 26,
      "end_offset": 29,
      "type": "ENGLISH",
      "position": 11
    },
    {
      "token": "时",
      "start_offset": 29,
      "end_offset": 30,
      "type": "CN_CHAR",
      "position": 12
    },
    {
      "token": "用户",
      "start_offset": 31,
      "end_offset": 33,
      "type": "CN_WORD",
      "position": 13
    },
    {
      "token": "的",
      "start_offset": 33,
      "end_offset": 34,
      "type": "CN_CHAR",
      "position": 14
    },
    {
      "token": "计算机",
      "start_offset": 34,
      "end_offset": 37,
      "type": "CN_WORD",
      "position": 15
    },
    {
      "token": "就",
      "start_offset": 37,
      "end_offset": 38,
      "type": "CN_CHAR",
      "position": 16
    },
    {
      "token": "等于",
      "start_offset": 38,
      "end_offset": 40,
      "type": "CN_WORD",
      "position": 17
    },
    {
      "token": "一台",
      "start_offset": 40,
      "end_offset": 42,
      "type": "CN_WORD",
      "position": 18
    },
    {
      "token": "客户机",
      "start_offset": 42,
      "end_offset": 45,
      "type": "CN_WORD",
      "position": 19
    },
    {
      "token": "通过",
      "start_offset": 46,
      "end_offset": 48,
      "type": "CN_WORD",
      "position": 20
    },
    {
      "token": "www",
      "start_offset": 48,
      "end_offset": 51,
      "type": "ENGLISH",
      "position": 21
    },
    {
      "token": "用户",
      "start_offset": 51,
      "end_offset": 53,
      "type": "CN_WORD",
      "position": 22
    },
    {
      "token": "能够",
      "start_offset": 53,
      "end_offset": 55,
      "type": "CN_WORD",
      "position": 23
    },
    {
      "token": "和",
      "start_offset": 55,
      "end_offset": 56,
      "type": "CN_CHAR",
      "position": 24
    },
    {
      "token": "各种不同类型",
      "start_offset": 56,
      "end_offset": 62,
      "type": "CN_WORD",
      "position": 25
    },
    {
      "token": "的",
      "start_offset": 62,
      "end_offset": 63,
      "type": "CN_CHAR",
      "position": 26
    },
    {
      "token": "计算机",
      "start_offset": 63,
      "end_offset": 66,
      "type": "CN_WORD",
      "position": 27
    },
    {
      "token": "之间",
      "start_offset": 66,
      "end_offset": 68,
      "type": "CN_WORD",
      "position": 28
    },
    {
      "token": "实现",
      "start_offset": 68,
      "end_offset": 70,
      "type": "CN_WORD",
      "position": 29
    },
    {
      "token": "有效",
      "start_offset": 70,
      "end_offset": 72,
      "type": "CN_WORD",
      "position": 30
    },
    {
      "token": "的",
      "start_offset": 72,
      "end_offset": 73,
      "type": "CN_CHAR",
      "position": 31
    },
    {
      "token": "通讯",
      "start_offset": 73,
      "end_offset": 75,
      "type": "CN_WORD",
      "position": 32
    }
  ]
}

Note: the IK plugin provides two analyzers, ik_smart and ik_max_word.

ik_smart: performs the coarsest-grained split; for example, "中华人民共和国国歌" is split into "中华人民共和国, 国歌".

ik_max_word: performs the finest-grained split, exhausting every possible combination; for example, "中华人民共和国国歌" is split into "中华人民共和国, 中华人民, 中华, 华人, 人民共和国, 人民, 人, 民, 共和国, 共和, 和, 国国, 国歌".
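To see the difference, the same _analyze request can be pointed at ik_max_word (a sketch; the response should show the fine-grained split described above):

```
POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国国歌"
}
```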
