Elasticsearch analysis plugin: IK

Details: https://github.com/medcl/elasticsearch-analysis-ik

First, let's look at Elasticsearch's default tokenization:

curl -H 'Content-Type: application/json'  -XGET --user elastic:123456 '172.30.28.11:9200/_analyze?pretty' -d '{"text":"热烈 庆祝沪昆高速开通"}'
{
  "tokens" : [
    {
      "token" : "热",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "<IDEOGRAPHIC>",
      "position" : 0
    },
    {
      "token" : "烈",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "<IDEOGRAPHIC>",
      "position" : 1
    },
    {
      "token" : "庆",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "<IDEOGRAPHIC>",
      "position" : 2
    },
    {
      "token" : "祝",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "<IDEOGRAPHIC>",
      "position" : 3
    },
    {
      "token" : "沪",
      "start_offset" : 5,
      "end_offset" : 6,
      "type" : "<IDEOGRAPHIC>",
      "position" : 4
    },
    {
      "token" : "昆",
      "start_offset" : 6,
      "end_offset" : 7,
      "type" : "<IDEOGRAPHIC>",
      "position" : 5
    },
    {
      "token" : "高",
      "start_offset" : 7,
      "end_offset" : 8,
      "type" : "<IDEOGRAPHIC>",
      "position" : 6
    },
    {
      "token" : "速",
      "start_offset" : 8,
      "end_offset" : 9,
      "type" : "<IDEOGRAPHIC>",
      "position" : 7
    },
    {
      "token" : "开",
      "start_offset" : 9,
      "end_offset" : 10,
      "type" : "<IDEOGRAPHIC>",
      "position" : 8
    },
    {
      "token" : "通",
      "start_offset" : 10,
      "end_offset" : 11,
      "type" : "<IDEOGRAPHIC>",
      "position" : 9
    }
  ]
}

The default analyzer splits the Chinese text into individual characters.

IK tokenization

Installation: make sure the plugin version matches your installed Elasticsearch version, then restart Elasticsearch.

cd /opt/es/elasticsearch-7.7.0

./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.7.0/elasticsearch-analysis-ik-7.7.0.zip
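After restarting, you can confirm the plugin was installed with the stock plugin CLI, which prints the names of all installed plugins:

```shell
./bin/elasticsearch-plugin list
```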

Usage:

1> Set IK as the default analyzer. Note that since Elasticsearch 5.x, index-level settings such as index.analysis.analyzer.default.type can no longer be added to config/elasticsearch.yml (the node will refuse to start); the default analyzer must be set per index in its settings instead, and newer plugin versions register the analyzers as ik_max_word and ik_smart rather than the bare name ik.
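A sketch of setting a per-index default analyzer via the index settings API; the index name test is just an example, and the host and credentials mirror the earlier curl commands:

```shell
curl -H 'Content-Type: application/json' -XPUT --user elastic:123456 '172.30.28.11:9200/test?pretty' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "ik_max_word"
        }
      }
    }
  }
}'
```

With this in place, any text field in the index that does not specify its own analyzer will be analyzed with ik_max_word.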

2> Alternatively, enable IK analysis for specific fields through the index mapping.
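A sketch of the mapping approach, following the pattern shown in the plugin's README; the index name news and field name content are hypothetical. A common choice is to index with ik_max_word (fine-grained, better recall) and search with ik_smart (coarse-grained, better precision):

```shell
curl -H 'Content-Type: application/json' -XPUT --user elastic:123456 '172.30.28.11:9200/news?pretty' -d '{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      }
    }
  }
}'
```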

Granularity: ik_max_word splits the text at the finest granularity.

{
    "text":"今天是个好日子hello word hi",
    "analyzer":"ik_max_word"
}
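The request body above can be sent to the _analyze API the same way as the earlier default-analyzer example:

```shell
curl -H 'Content-Type: application/json' -XGET --user elastic:123456 '172.30.28.11:9200/_analyze?pretty' -d '{"text":"今天是个好日子hello word hi","analyzer":"ik_max_word"}'
```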

{
  "tokens" : [
    {
      "token" : "今天是",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "今天",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 1
    },
    {
      "token" : "是",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "个",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 3
    },
    {
      "token" : "好日子",
      "start_offset" : 4,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 4
    },
    {
      "token" : "日子",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "hello",
      "start_offset" : 7,
      "end_offset" : 12,
      "type" : "ENGLISH",
      "position" : 6
    },
    {
      "token" : "word",
      "start_offset" : 13,
      "end_offset" : 17,
      "type" : "ENGLISH",
      "position" : 7
    },
    {
      "token" : "hi",
      "start_offset" : 18,
      "end_offset" : 20,
      "type" : "ENGLISH",
      "position" : 8
    }
  ]
}

Granularity: ik_smart splits the text at the coarsest granularity.

{
    "text":"今天是个好日子hello word hi",
    "analyzer":"ik_smart"
}
{
  "tokens" : [
    {
      "token" : "今天是",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "个",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "好日子",
      "start_offset" : 4,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "hello",
      "start_offset" : 7,
      "end_offset" : 12,
      "type" : "ENGLISH",
      "position" : 3
    },
    {
      "token" : "word",
      "start_offset" : 13,
      "end_offset" : 17,
      "type" : "ENGLISH",
      "position" : 4
    },
    {
      "token" : "hi",
      "start_offset" : 18,
      "end_offset" : 20,
      "type" : "ENGLISH",
      "position" : 5
    }
  ]
}

Setting a custom dictionary

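Custom words are configured through the plugin's config file (per the IK README, typically {conf}/analysis-ik/config/IKAnalyzer.cfg.xml or the plugin's own config directory; the exact path depends on how the plugin was installed). The dictionary path custom/mydict.dic below is a hypothetical example; the .dic file is plain text with one word per line, and Elasticsearch must be restarted (or a remote hot-reload dictionary configured via remote_ext_dict) for changes to take effect:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <comment>IK Analyzer extension configuration</comment>
  <!-- local extension dictionaries, relative to this config file -->
  <entry key="ext_dict">custom/mydict.dic</entry>
  <!-- local extension stop-word dictionaries -->
  <entry key="ext_stopwords"></entry>
</properties>
```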
