ElasticSearch (7.11.1) Installation and Study, Part 2
Lecture 4: CRUD Operations with POSTMAN
1. Deleting an index
1) Command
2) Deletion result
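The delete command itself is not shown above (it was a screenshot). Assuming the target is the blog index used throughout this guide, the call can be sketched as follows; the request is only built here, since sending it requires a local Elasticsearch.

```python
import urllib.request

# Sketch: a DELETE request for the index. The index name "blog" is an
# assumption taken from the rest of this guide.
req = urllib.request.Request("http://127.0.0.1:9200/blog", method="DELETE")
# urllib.request.urlopen(req)  # uncomment to actually execute the delete
print(req.get_method(), req.full_url)
```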
2. Creating document data
1) Add data to the index
2) Corresponding parameters
3) Values to submit
Method: POST
URL: http://127.0.0.1:9200/blog/hello
Request body:
{
"id":1,
"title":"123",
"content":"123456"
}
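The same POST can be expressed with Python's standard library; this is a sketch that builds the request (sending it assumes a local Elasticsearch 7.11.1 with the blog index).

```python
import json
import urllib.request

# Sketch: index a document by POSTing JSON to /index/type.
doc = {"id": 1, "title": "123", "content": "123456"}
req = urllib.request.Request(
    "http://127.0.0.1:9200/blog/hello",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send
```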
4) View the data
3. Deleting document data
1) Execution
Delete document No. 3
Request:
http://127.0.0.1:9200/blog/hello/iqY3BngB8z7RC7kch5c9
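As a sketch, the same delete written explicitly with urllib. One thing worth checking when one client fails and another succeeds is the HTTP verb: the same URL with GET fetches the document instead of deleting it.

```python
import urllib.request

# Sketch: delete a single document by _id. The verb must be DELETE.
url = "http://127.0.0.1:9200/blog/hello/iqY3BngB8z7RC7kch5c9"
req = urllib.request.Request(url, method="DELETE")
# urllib.request.urlopen(req)  # uncomment to actually execute the delete
```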
2) Caveat:
The request failed when executed from POSTMAN but succeeded from the elasticsearch-head-master tool; the exact cause is unclear and will be added in a follow-up.
Failure screenshot:
Success screenshot:
3) Data check:
This may be a difference between versions; it will be filled in after checking the official docs…
4. Updating document data
1) Document in question, primary key _id: jaY6BngB8z7RC7kcEpcU
2) Request
http://127.0.0.1:9200/blog/hello/jaY6BngB8z7RC7kcEpcU/
{
"id": 77,
"title": "777777",
"content": "77777"
}
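The HTTP method for the update is not shown above; this sketch assumes POST, as in the create step. In ES 7.x, sending a full body to the document's _id URL overwrites the whole document rather than patching individual fields.

```python
import json
import urllib.request

# Sketch: replace the document by POSTing a full new body to its _id URL.
# The POST method is an assumption; the original only shows the URL.
doc = {"id": 77, "title": "777777", "content": "77777"}
req = urllib.request.Request(
    "http://127.0.0.1:9200/blog/hello/jaY6BngB8z7RC7kcEpcU",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send
```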
3) Result after the update
5. Querying data
1) Request URL:
http://127.0.0.1:9200/blog/hello/jaY6BngB8z7RC7kcEpcU/
2) Method:
GET
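The fetch-by-id call above can be sketched the same way; with no request body, urllib defaults to GET, and the `_source` field of the JSON response holds the stored document.

```python
import urllib.request

# Sketch: fetch a document by _id. With data=None the method defaults to GET.
url = "http://127.0.0.1:9200/blog/hello/jaY6BngB8z7RC7kcEpcU"
req = urllib.request.Request(url)
# with urllib.request.urlopen(req) as resp:      # uncomment against a
#     print(resp.read().decode("utf-8"))         # running Elasticsearch
```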
3) Data display:
Lecture 5: Keyword Queries
1. term query
URL: http://127.0.0.1:9200/blog/hello/_search
Method: POST
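The term-query body itself is not shown above (it was a screenshot); the following is a hypothetical reconstruction. The field "title" and the term "种地" are assumptions inferred from the two documents the response below returns.

```python
import json

# Hypothetical reconstruction of the term-query body (field/value assumed).
body = json.dumps({"query": {"term": {"title": "种地"}}}, ensure_ascii=False)
print(body)
```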
2. Response
{
"took": 397,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 2,
"relation": "eq"
},
"max_score": 1.4618267,
"hits": [
{
"_index": "blog",
"_type": "hello",
"_id": "lKZ2BngB8z7RC7kcwpeU",
"_score": 1.4618267,
"_source": {
"id": 8,
"title": "种地了",
"content": "春天可以种地了 "
}
},
{
"_index": "blog",
"_type": "hello",
"_id": "mKZ4BngB8z7RC7kcXJeB",
"_score": 1.2714045,
"_source": {
"id": 66,
"title": "种地了22",
"content": "春天可以种地了222 "
}
}
]
}
}
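For reference, the hits in a search response can be pulled out programmatically. This sketch embeds a trimmed copy of the response above and extracts each hit's title.

```python
import json

# Trimmed copy of the search response shown above.
resp_text = """
{
  "hits": {
    "total": {"value": 2, "relation": "eq"},
    "max_score": 1.4618267,
    "hits": [
      {"_id": "lKZ2BngB8z7RC7kcwpeU", "_score": 1.4618267,
       "_source": {"id": 8, "title": "种地了"}},
      {"_id": "mKZ4BngB8z7RC7kcXJeB", "_score": 1.2714045,
       "_source": {"id": 66, "title": "种地了22"}}
    ]
  }
}
"""
resp = json.loads(resp_text)
titles = [hit["_source"]["title"] for hit in resp["hits"]["hits"]]
print(resp["hits"]["total"]["value"], titles)  # -> 2 ['种地了', '种地了22']
```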
3. Data comparison
2. query_string query
1) URL:
http://127.0.0.1:9200/blog/hello/_search
2) Body:
{
"query":{
"query_string":{
"default_field":"title",
"query":"种地了"
}
}
}
3) Query response
{
"took": 181,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 2,
"relation": "eq"
},
"max_score": 4.38548,
"hits": [
{
"_index": "blog",
"_type": "hello",
"_id": "lKZ2BngB8z7RC7kcwpeU",
"_score": 4.38548,
"_source": {
"id": 8,
"title": "种地了",
"content": "春天可以种地了 "
}
},
{
"_index": "blog",
"_type": "hello",
"_id": "mKZ4BngB8z7RC7kcXJeB",
"_score": 3.8142135,
"_source": {
"id": 66,
"title": "种地了22",
"content": "春天可以种地了222 "
}
}
]
}
}
4) Data comparison
Lecture 6: Analyzer Behavior
1. The built-in "chinese" analyzer
1) Request:
Method: GET
URL: http://127.0.0.1:9200/_analyze
{
"analyzer":"chinese",
"text":"我是阿里巴巴与四十大盗"
}
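The _analyze call above can be sketched from Python as well. Elasticsearch accepts the analyzer request body on both GET and POST; POST is used in this sketch because many HTTP clients drop the body on GET.

```python
import json
import urllib.request

# Sketch: call _analyze with an explicit analyzer and text.
body = {"analyzer": "chinese", "text": "我是阿里巴巴与四十大盗"}
req = urllib.request.Request(
    "http://127.0.0.1:9200/_analyze",
    data=json.dumps(body, ensure_ascii=False).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a running Elasticsearch
```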
2) Result
{
"tokens": [
{
"token": "我",
"start_offset": 0,
"end_offset": 1,
"type": "<IDEOGRAPHIC>",
"position": 0
},
{
"token": "是",
"start_offset": 1,
"end_offset": 2,
"type": "<IDEOGRAPHIC>",
"position": 1
},
{
"token": "阿",
"start_offset": 2,
"end_offset": 3,
"type": "<IDEOGRAPHIC>",
"position": 2
},
{
"token": "里",
"start_offset": 3,
"end_offset": 4,
"type": "<IDEOGRAPHIC>",
"position": 3
},
{
"token": "巴",
"start_offset": 4,
"end_offset": 5,
"type": "<IDEOGRAPHIC>",
"position": 4
},
{
"token": "巴",
"start_offset": 5,
"end_offset": 6,
"type": "<IDEOGRAPHIC>",
"position": 5
},
{
"token": "与",
"start_offset": 6,
"end_offset": 7,
"type": "<IDEOGRAPHIC>",
"position": 6
},
{
"token": "四",
"start_offset": 7,
"end_offset": 8,
"type": "<IDEOGRAPHIC>",
"position": 7
},
{
"token": "十",
"start_offset": 8,
"end_offset": 9,
"type": "<IDEOGRAPHIC>",
"position": 8
},
{
"token": "大",
"start_offset": 9,
"end_offset": 10,
"type": "<IDEOGRAPHIC>",
"position": 9
},
{
"token": "盗",
"start_offset": 10,
"end_offset": 11,
"type": "<IDEOGRAPHIC>",
"position": 10
}
]
}
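Note that the output above is one token per CJK character, with no real word segmentation; the same token list can be reproduced directly from the input string.

```python
# Reproduce the per-character segmentation shown above.
text = "我是阿里巴巴与四十大盗"
tokens = [
    {"token": ch, "start_offset": i, "end_offset": i + 1, "position": i}
    for i, ch in enumerate(text)
]
print(len(tokens))  # -> 11
```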
Note:
Low and high versions of ElasticSearch differ here: the old request format now returns an error and must be changed to the demo above.
Workaround reference:
https://blog.csdn.net/dfshsdr/article/details/98593794
Lecture 7: The IK Analyzer
1. IK analyzer download:
https://github.com/medcl/elasticsearch-analysis-ik/releases
2. Installation directory and location
My location:
C:\办公工具\elasticsearch-7.11.1-windows-x86_64\elasticsearch-7.11.1\plugins\analysis-ik
Note:
The directory name must match; in my test, ElasticSearch failed to start when the name did not match.
Other installation references:
https://blog.csdn.net/u012864245/article/details/114069336
3. After placing the plugin in the directory
1) Restart ElasticSearch: elasticsearch-7.11.1
2) Restart elasticsearch-head-master
4. Test results
Result of the ik_smart analyzer
1) Test preparation
Method: GET
Test command:
http://127.0.0.1:9200/_analyze
{
"analyzer":"ik_smart",
"text":"我是阿里巴巴与四十大盗"
}
2) Result
3) Response:
{
"tokens": [
{
"token": "我",
"start_offset": 0,
"end_offset": 1,
"type": "CN_CHAR",
"position": 0
},
{
"token": "是",
"start_offset": 1,
"end_offset": 2,
"type": "CN_CHAR",
"position": 1
},
{
"token": "阿里巴巴",
"start_offset": 2,
"end_offset": 6,
"type": "CN_WORD",
"position": 2
},
{
"token": "与",
"start_offset": 6,
"end_offset": 7,
"type": "CN_CHAR",
"position": 3
},
{
"token": "四十",
"start_offset": 7,
"end_offset": 9,
"type": "CN_WORD",
"position": 4
},
{
"token": "大盗",
"start_offset": 9,
"end_offset": 11,
"type": "CN_WORD",
"position": 5
}
]
}
Test case 2:
http://127.0.0.1:9200/_analyze
{
"analyzer":"ik_smart",
"text":"我是阿里巴巴的程序员"
}
Response:
{
"tokens": [
{
"token": "我",
"start_offset": 0,
"end_offset": 1,
"type": "CN_CHAR",
"position": 0
},
{
"token": "是",
"start_offset": 1,
"end_offset": 2,
"type": "CN_CHAR",
"position": 1
},
{
"token": "阿里巴巴",
"start_offset": 2,
"end_offset": 6,
"type": "CN_WORD",
"position": 2
},
{
"token": "的",
"start_offset": 6,
"end_offset": 7,
"type": "CN_CHAR",
"position": 3
},
{
"token": "程序员",
"start_offset": 7,
"end_offset": 10,
"type": "CN_WORD",
"position": 4
}
]
}
Result of the ik_max_word analyzer
1. Data preparation
Method: GET
Test command:
http://127.0.0.1:9200/_analyze
{
"analyzer":"ik_max_word",
"text":"我是阿里巴巴的程序员"
}
2. Response
3. Response format: the segmentation is noticeably more fine-grained
{
"tokens": [
{
"token": "我",
"start_offset": 0,
"end_offset": 1,
"type": "CN_CHAR",
"position": 0
},
{
"token": "是",
"start_offset": 1,
"end_offset": 2,
"type": "CN_CHAR",
"position": 1
},
{
"token": "阿里巴巴",
"start_offset": 2,
"end_offset": 6,
"type": "CN_WORD",
"position": 2
},
{
"token": "阿里",
"start_offset": 2,
"end_offset": 4,
"type": "CN_WORD",
"position": 3
},
{
"token": "巴巴",
"start_offset": 4,
"end_offset": 6,
"type": "CN_WORD",
"position": 4
},
{
"token": "的",
"start_offset": 6,
"end_offset": 7,
"type": "CN_CHAR",
"position": 5
},
{
"token": "程序员",
"start_offset": 7,
"end_offset": 10,
"type": "CN_WORD",
"position": 6
},
{
"token": "程序",
"start_offset": 7,
"end_offset": 9,
"type": "CN_WORD",
"position": 7
},
{
"token": "员",
"start_offset": 9,
"end_offset": 10,
"type": "CN_CHAR",
"position": 8
}
]
}
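Side by side, the two analyzers differ only in granularity for this sentence. The token lists below are copied from the two responses above: ik_max_word keeps every coarse token and additionally emits finer sub-word splits.

```python
# Token strings copied from the ik_smart and ik_max_word responses above
# for the same sentence "我是阿里巴巴的程序员".
ik_smart = ["我", "是", "阿里巴巴", "的", "程序员"]
ik_max_word = ["我", "是", "阿里巴巴", "阿里", "巴巴", "的", "程序员", "程序", "员"]

# The extra tokens are exactly the finer sub-word splits.
extra = [t for t in ik_max_word if t not in ik_smart]
print(len(ik_smart), len(ik_max_word), extra)  # -> 5 9 ['阿里', '巴巴', '程序', '员']
```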
Elasticsearch Reference [7.11] documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html#_index_templates