Analysis and Analyzer
- Analysis - text analysis is the process of converting full text into a series of terms (tokens), also known as tokenization
- Analysis is carried out by an Analyzer
- You can use Elasticsearch's built-in analyzers or define custom analyzers as needed
- Besides converting text into terms at index time, the same analyzer must also be applied to the query string when matching Query statements
Components of an Analyzer
- An analyzer is the component dedicated to tokenization; it consists of three parts
- character filters (preprocess the raw text, e.g. strip HTML)
- tokenizer (split the text into terms according to rules)
- token filters (post-process the terms: lowercasing, removing stopwords, adding synonyms)
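The three stages above can be exercised directly through the `_analyze` API by naming each component explicitly; a minimal sketch (the sample text is only illustrative):

GET _analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase", "stop"],
  "text": "<b>The QUICK Foxes</b>"
}

Here html_strip removes the <b> tags, the standard tokenizer splits the remaining text into words, and the lowercase and stop filters should leave the terms quick and foxes.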
Elasticsearch's built-in analyzers
- standard analyzer - the default; splits text into words, lowercases
- simple analyzer - splits on non-letter characters (symbols are discarded), lowercases
- stop analyzer - lowercases and removes stopwords (the, a, is, etc.)
- whitespace analyzer - splits on whitespace, does not lowercase
- keyword analyzer - no tokenization; the input is emitted unchanged as a single term
- pattern analyzer - splits on a regular expression, \W+ (non-word characters) by default
- language - analyzers for 30+ common languages
- custom analyzer - user-defined analyzers
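A custom analyzer is declared in the index settings by combining the three kinds of components; a minimal sketch (the index and analyzer names here are just examples):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["lowercase", "stop"]
        }
      }
    }
  }
}

#Test the custom analyzer against its index
GET my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "<b>2 running Quick brown-foxes</b>"
}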
#View the effect of the different analyzers
#standard
GET _analyze
{
"analyzer": "standard",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
#simple
GET _analyze
{
"analyzer": "simple",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
#stop
GET _analyze
{
"analyzer": "stop",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
#whitespace
GET _analyze
{
"analyzer": "whitespace",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
#keyword
GET _analyze
{
"analyzer": "keyword",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
#pattern
GET _analyze
{
"analyzer": "pattern",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
#english
GET _analyze
{
"analyzer": "english",
"text": "2 running Quick brown-foxes leap over lazy dogs in the summer evening."
}
POST _analyze
{
"analyzer": "icu_analyzer",
"text": "他说的确实在理"
}
POST _analyze
{
"analyzer": "standard",
"text": "他说的确实在理"
}
POST _analyze
{
"analyzer": "icu_analyzer",
"text": "这个苹果不大好吃"
}
ICU Analyzer
- Requires installing a plugin
- elasticsearch-plugin install analysis-icu
- Provides Unicode support and better handling of Asian languages
#More Chinese analyzers
- IK
- Supports custom dictionaries and hot updates of the tokenization dictionary
- https://github.com/medcl/elasticsearch-analysis-ik
- THULAC
- THU Lexical Analyzer for Chinese, a Chinese word segmenter from the Natural Language Processing and Social Humanities Computing Lab at Tsinghua University
- https://github.com/microbun/elasticsearch-thulac-plugin
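Once the IK plugin is installed, its two analyzers can be compared with _analyze; ik_smart produces a coarse-grained segmentation while ik_max_word enumerates all plausible word combinations (this sketch assumes the plugin is present on the cluster):

#ik_smart - coarse-grained segmentation
GET _analyze
{
  "analyzer": "ik_smart",
  "text": "这个苹果不大好吃"
}

#ik_max_word - fine-grained segmentation
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "这个苹果不大好吃"
}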