
Lucene ships with analyzer packages for a great many languages, covering nearly every country and region. I (散仙) have recently been working on multilingual analysis, mainly for Spanish, Portuguese, German, French, and Italian. These languages are all quite close to English: like English, they delimit words with spaces.


First, let's consider what lemmatization and stemming mean for search. Before that, the two concepts:

Lemmatization reduces a word in any inflected form to its base form, one that carries the complete meaning; stemming extracts a word's stem or root, which does not necessarily carry the complete meaning by itself. Lemmatization and stemming are the two main approaches to word-form normalization; both effectively merge variant forms of a word, and they are related but distinct.

For a detailed introduction, please refer to this article.


In e-commerce search, stemming and singular/plural normalization matter a great deal (mainly for nouns), because they directly affect both precision and recall. What happens if our analyzer does nothing with these word forms? Consider the following example.

Sentence: i have two cats

If the analyzer does nothing at all:

Now a search for cat returns no results; only a search for cats hits the document. Yet cat and cats are the same thing, just in different word forms. Without normalization, both precision and recall suffer, which hurts the search experience, so the stemming step is crucial in some analysis scenarios.
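To make the point concrete, here is a toy, self-contained illustration (not Lucene code; the folding rule is deliberately naive and invented for this example) of why indexing a normalized form lets both cat and cats match:

```java
import java.util.HashSet;
import java.util.Set;

public class PluralFoldingDemo {
    // Hypothetical, deliberately naive singular folding, for illustration only.
    static String fold(String term) {
        if (term.length() > 3 && term.endsWith("s") && !term.endsWith("ss")) {
            return term.substring(0, term.length() - 1);
        }
        return term;
    }

    public static void main(String[] args) {
        Set<String> index = new HashSet<>();
        for (String token : "i have two cats".split(" ")) {
            index.add(fold(token));                        // index the folded form
        }
        // fold the query the same way, so "cat" and "cats" both match
        System.out.println(index.contains(fold("cat")));   // true
        System.out.println(index.contains(fold("cats")));  // true
    }
}
```

The real analyzers below do this far more carefully, but the principle is the same: normalize at index time and at query time with the same rules.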

In this post, I will walk through the source code to see how stemming is done in the German analyzer. First, the declaration of a German analyzer:



Java code:

List<String> list = new ArrayList<String>();
list.add("player"); // words in this set are excluded from stemming/lemmatization
CharArraySet ar = new CharArraySet(Version.LUCENE_43, list, true);
// the analyzer's second argument is the stop-word set; the third is the set of
// words excluded from word-form transformation and singular/plural normalization
GermanAnalyzer sa = new GermanAnalyzer(Version.LUCENE_43, null, ar);


Next, let's look at exactly which filtering stages the German analyzer applies:


Java code:

protected TokenStreamComponents createComponents(String fieldName,
    Reader reader) {
  // standard tokenizer
  final Tokenizer source = new StandardTokenizer(matchVersion, reader);
  TokenStream result = new StandardFilter(matchVersion, source);
  // lowercase filter
  result = new LowerCaseFilter(matchVersion, result);
  // stop-word filter
  result = new StopFilter(matchVersion, result, stopwords);
  // exclusion (keyword marker) filter
  result = new SetKeywordMarkerFilter(result, exclusionSet);
  if (matchVersion.onOrAfter(Version.LUCENE_36)) {
    // from Lucene 3.6 on, the following filters are used:
    // normalization: maps German special characters to their plain Latin forms
    result = new GermanNormalizationFilter(result);
    // stemming / word-form reduction
    result = new GermanLightStemFilter(result);
  } else if (matchVersion.onOrAfter(Version.LUCENE_31)) {
    // Lucene 3.1 through 3.6 use SnowballFilter
    result = new SnowballFilter(result, new German2Stemmer());
  } else {
    // before Lucene 3.1, the legacy GermanStemFilter is used
    result = new GermanStemFilter(result);
  }
  return new TokenStreamComponents(source, result);
}
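Conceptually, this chain applies per-token transforms in a fixed order: tokenize, lowercase, drop stop words, stem. The sketch below is a plain-Java stand-in for that pipeline, not Lucene's API; the stop list and the toy stemmer are invented purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;
import java.util.function.UnaryOperator;

public class AnalyzerChainSketch {
    // Each stage maps a token to a new form, or to null to drop it entirely.
    static List<String> analyze(String text, List<UnaryOperator<String>> chain) {
        List<String> out = new ArrayList<>();
        for (String token : text.split("\\s+")) {      // stand-in for StandardTokenizer
            String t = token;
            for (UnaryOperator<String> stage : chain) {
                if (t == null) break;
                t = stage.apply(t);
            }
            if (t != null) out.add(t);
        }
        return out;
    }

    static List<String> demo(String text) {
        Set<String> stop = Set.of("die", "und");        // tiny made-up stop list
        List<UnaryOperator<String>> chain = List.of(
            t -> t.toLowerCase(Locale.GERMAN),          // like LowerCaseFilter
            t -> stop.contains(t) ? null : t,           // like StopFilter
            t -> t.length() > 4 && t.endsWith("en")     // toy stemmer, NOT the real
                ? t.substring(0, t.length() - 2) : t    // GermanLightStemmer
        );
        return analyze(text, chain);
    }

    public static void main(String[] args) {
        System.out.println(demo("Die Gärten und Häuser")); // [gärt, häuser]
    }
}
```

The ordering matters: stop-word matching happens after lowercasing, and stemming only sees tokens that survive the earlier stages, exactly as in the Lucene chain above.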


OK. From the source we can see that the German analyzer in Lucene 4.x keeps both backward and forward compatibility. Here we focus on how word forms are transformed on the Lucene 4.x code path, i.e. on these two filters:

result = new GermanNormalizationFilter(result);
result = new GermanLightStemFilter(result);

Let's look at what each of these classes does:



Java code:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.StemmerUtil;

/**
 * Normalizes German characters according to the heuristics
 * of the <a href="http://snowball.tartarus.org/algorithms/german2/stemmer.html">
 * German2 snowball algorithm</a>.
 * It allows for the fact that ä, ö and ü are sometimes written as ae, oe and ue.
 *
 * <ul>
 *   <li> 'ß' is replaced by 'ss'
 *   <li> 'ä', 'ö', 'ü' are replaced by 'a', 'o', 'u', respectively.
 *   <li> 'ae' and 'oe' are replaced by 'a', and 'o', respectively.
 *   <li> 'ue' is replaced by 'u', when not following a vowel or q.
 * </ul>
 * <p>
 * This is useful if you want this normalization without using
 * the German2 stemmer, or perhaps no stemming at all.
 *
 * The javadoc above says it clearly: this filter mainly maps German
 * special letters to their plain Latin counterparts.
 */
public final class GermanNormalizationFilter extends TokenFilter {
  // FSM with 3 states:
  private static final int N = 0; /* ordinary state */
  private static final int V = 1; /* stops 'u' from entering umlaut state */
  private static final int U = 2; /* umlaut state, allows e-deletion */

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public GermanNormalizationFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      int state = N;
      char buffer[] = termAtt.buffer();
      int length = termAtt.length();
      for (int i = 0; i < length; i++) {
        final char c = buffer[i];
        switch(c) {
          case 'a':
          case 'o':
            state = U;
            break;
          case 'u':
            state = (state == N) ? U : V;
            break;
          case 'e':
            if (state == U)
              length = StemmerUtil.delete(buffer, i--, length);
            state = V;
            break;
          case 'i':
          case 'q':
          case 'y':
            state = V;
            break;
          case 'ä':
            buffer[i] = 'a';
            state = V;
            break;
          case 'ö':
            buffer[i] = 'o';
            state = V;
            break;
          case 'ü':
            buffer[i] = 'u';
            state = V;
            break;
          case 'ß':
            buffer[i++] = 's';
            buffer = termAtt.resizeBuffer(1+length);
            if (i < length)
              System.arraycopy(buffer, i, buffer, i+1, (length-i));
            buffer[i] = 's';
            length++;
            state = N;
            break;
          default:
            state = N;
        }
      }
      termAtt.setLength(length);
      return true;
    } else {
      return false;
    }
  }
}
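Since the full source is above, the FSM can be ported to a small standalone method to see what it does to concrete tokens. This port mirrors the switch logic above but manages its own char array instead of Lucene's CharTermAttribute:

```java
import java.util.Arrays;

public class GermanNormalizationDemo {
    static String normalize(String term) {
        final int N = 0, V = 1, U = 2;          // the same three FSM states
        char[] buffer = term.toCharArray();
        int length = buffer.length;
        int state = N;
        for (int i = 0; i < length; i++) {
            char c = buffer[i];
            switch (c) {
                case 'a': case 'o':
                    state = U; break;
                case 'u':
                    state = (state == N) ? U : V; break;
                case 'e':
                    if (state == U) {            // "ae"/"oe"/"ue": drop the 'e'
                        System.arraycopy(buffer, i + 1, buffer, i, length - i - 1);
                        length--;
                        i--;                     // re-examine the shifted-in char
                    }
                    state = V; break;
                case 'i': case 'q': case 'y':
                    state = V; break;
                case 'ä': buffer[i] = 'a'; state = V; break;
                case 'ö': buffer[i] = 'o'; state = V; break;
                case 'ü': buffer[i] = 'u'; state = V; break;
                case 'ß':                        // expand 'ß' to "ss"
                    buffer = Arrays.copyOf(buffer, length + 1);
                    System.arraycopy(buffer, i + 1, buffer, i + 2, length - i - 1);
                    buffer[i] = 's';
                    buffer[i + 1] = 's';
                    length++;
                    i++;                         // skip the second 's'
                    state = N; break;
                default:
                    state = N;
            }
        }
        return new String(buffer, 0, length);
    }

    public static void main(String[] args) {
        System.out.println(normalize("haeuser")); // hauser
        System.out.println(normalize("weiß"));    // weiss
        System.out.println(normalize("schön"));   // schon
    }
}
```

Note how the V state keeps "ue" intact after a vowel or q: the 'e' is only deleted when the preceding character put the machine in the umlaut state U.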


Java code:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.KeywordAttribute;

/**
 * A {@link TokenFilter} that applies {@link GermanLightStemmer} to stem German
 * words.
 * <p>
 * To prevent terms from being stemmed use an instance of
 * {@link SetKeywordMarkerFilter} or a custom {@link TokenFilter} that sets
 * the {@link KeywordAttribute} before this {@link TokenStream}.
 *
 * This class only drives the stemming; the interesting work happens in
 * GermanLightStemmer, which we will look at next.
 */
public final class GermanLightStemFilter extends TokenFilter {
  private final GermanLightStemmer stemmer = new GermanLightStemmer();
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final KeywordAttribute keywordAttr = addAttribute(KeywordAttribute.class);

  public GermanLightStemFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      if (!keywordAttr.isKeyword()) {
        final int newlen = stemmer.stem(termAtt.buffer(), termAtt.length());
        termAtt.setLength(newlen);
      }
      return true;
    } else {
      return false;
    }
  }
}
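The keyword gating in incrementToken() (skip stemming when the KeywordAttribute is set, which is what SetKeywordMarkerFilter does for our "player" exclusion earlier) can be sketched in plain Java like this; the stand-in stemmer is made up, and the point is only the bypass logic:

```java
import java.util.Set;

public class KeywordGateDemo {
    // stand-in stemmer (invented): strips a trailing "er" from longer words
    static String stem(String s) {
        return (s.length() > 4 && s.endsWith("er")) ? s.substring(0, s.length() - 2) : s;
    }

    // mirrors: if (!keywordAttr.isKeyword()) { stem(...) }
    static String process(String term, Set<String> keywords) {
        return keywords.contains(term) ? term : stem(term);
    }

    public static void main(String[] args) {
        Set<String> keywords = Set.of("player");          // cf. the exclusion set earlier
        System.out.println(process("player", keywords));  // player (protected)
        System.out.println(process("kinder", keywords));  // kind
    }
}
```

This is why "player" in the CharArraySet at the top of the post survives analysis unchanged while other tokens get stemmed.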


Now let's see how GermanLightStemmer does the stemming. The source is as follows:


Java code:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/*
 * This algorithm is updated based on code located at:
 * http://members.unine.ch/jacques.savoy/clef/
 *
 * Full copyright for that code follows:
 */

/*
 * Copyright (c) 2005, Jacques Savoy
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * Redistributions of source code must retain the above copyright notice, this
 * list of conditions and the following disclaimer. Redistributions in binary
 * form must reproduce the above copyright notice, this list of conditions and
 * the following disclaimer in the documentation and/or other materials
 * provided with the distribution. Neither the name of the author nor the names
 * of its contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/**
 * Light Stemmer for German.
 * <p>
 * This stemmer implements the "UniNE" algorithm in:
 * <i>Light Stemming Approaches for the French, Portuguese, German and Hungarian Languages</i>
 * Jacques Savoy
 */
public class GermanLightStemmer {

  // map special characters to their base vowels
  public int stem(char s[], int len) {
    for (int i = 0; i < len; i++)
      switch(s[i]) {
        case 'ä':
        case 'à':
        case 'á':
        case 'â': s[i] = 'a'; break;
        case 'ö':
        case 'ò':
        case 'ó':
        case 'ô': s[i] = 'o'; break;
        case 'ï':
        case 'ì':
        case 'í':
        case 'î': s[i] = 'i'; break;
        case 'ü':
        case 'ù':
        case 'ú':
        case 'û': s[i] = 'u'; break;
      }

    len = step1(s, len);
    return step2(s, len);
  }

  private boolean stEnding(char ch) {
    switch(ch) {
      case 'b':
      case 'd':
      case 'f':
      case 'g':
      case 'h':
      case 'k':
      case 'l':
      case 'm':
      case 'n':
      case 't': return true;
      default: return false;
    }
  }

  // step 1: strip suffixes (ern; em/en/er/es; e; consonant + s) by the rules below
  private int step1(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 'r' && s[len-1] == 'n')
      return len - 3;

    if (len > 4 && s[len-2] == 'e')
      switch(s[len-1]) {
        case 'm':
        case 'n':
        case 'r':
        case 's': return len - 2;
      }

    if (len > 3 && s[len-1] == 'e')
      return len - 1;

    if (len > 3 && s[len-1] == 's' && stEnding(s[len-2]))
      return len - 1;

    return len;
  }

  // step 2: strip est, er, en and consonant + st suffixes by the rules below
  private int step2(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 's' && s[len-1] == 't')
      return len - 3;

    if (len > 4 && s[len-2] == 'e' && (s[len-1] == 'r' || s[len-1] == 'n'))
      return len - 2;

    if (len > 4 && s[len-2] == 's' && s[len-1] == 't' && stEnding(s[len-3]))
      return len - 2;

    return len;
  }
}
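Because the stemmer is self-contained, its rules are easy to replay outside Lucene. The following is a condensed port of the code above (it only folds the umlauts ä, ö, ü rather than the full accent table) to check a few words:

```java
public class GermanLightStemDemo {
    // the consonants the stemmer accepts before a trailing "s"/"st"
    static boolean stEnding(char ch) {
        return "bdfghklmnt".indexOf(ch) >= 0;
    }

    static int step1(char[] s, int len) {
        if (len > 5 && s[len - 3] == 'e' && s[len - 2] == 'r' && s[len - 1] == 'n')
            return len - 3;                                   // -ern
        if (len > 4 && s[len - 2] == 'e'
                && (s[len - 1] == 'm' || s[len - 1] == 'n'
                 || s[len - 1] == 'r' || s[len - 1] == 's'))
            return len - 2;                                   // -em/-en/-er/-es
        if (len > 3 && s[len - 1] == 'e')
            return len - 1;                                   // -e
        if (len > 3 && s[len - 1] == 's' && stEnding(s[len - 2]))
            return len - 1;                                   // consonant + s
        return len;
    }

    static int step2(char[] s, int len) {
        if (len > 5 && s[len - 3] == 'e' && s[len - 2] == 's' && s[len - 1] == 't')
            return len - 3;                                   // -est
        if (len > 4 && s[len - 2] == 'e' && (s[len - 1] == 'r' || s[len - 1] == 'n'))
            return len - 2;                                   // -er/-en
        if (len > 4 && s[len - 2] == 's' && s[len - 1] == 't' && stEnding(s[len - 3]))
            return len - 2;                                   // consonant + st
        return len;
    }

    static String stem(String word) {
        // simplified character mapping: umlauts only, unlike the full table above
        char[] s = word.replace('ä', 'a').replace('ö', 'o').replace('ü', 'u').toCharArray();
        int len = step1(s, s.length);
        len = step2(s, len);
        return new String(s, 0, len);
    }

    public static void main(String[] args) {
        System.out.println(stem("häusern"));  // haus
        System.out.println(stem("kindes"));   // kind
        System.out.println(stem("kleinste")); // klein
    }
}
```

For example, "häusern" becomes "hausern" after umlaut folding, step1 strips "ern" leaving "haus", and step2 leaves it untouched; "kleinste" loses its final "e" in step1 and the "st" after the consonant "n" in step2.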


The analysis boils down to the following rules:


0. German special characters are first mapped to their base vowels a, o, i, u (e.g. ä, à, á, â all become a).

step1 (rules checked in order; the first match ends the step):
1. Words longer than 5 letters ending in "ern": drop the last three letters.
2. Words longer than 4 letters ending in "em", "en", "er" or "es": drop the last two letters.
3. Words longer than 3 letters ending in "e": drop the "e".
4. Words longer than 3 letters ending in "bs", "ds", "fs", "gs", "hs", "ks", "ls", "ms", "ns" or "ts": drop the trailing "s".

step2 (rules checked in order; the first match ends the step):
5. Words longer than 5 letters ending in "est": drop the last three letters.
6. Words longer than 4 letters ending in "er" or "en": drop the last two letters.
7. Words longer than 4 letters ending in "bst", "dst", "fst", "gst", "hst", "kst", "lst", "mst", "nst" or "tst": drop the trailing "st".


Finally, drawing on other material online: the rules that strip the er, en, e and s endings handle singular/plural normalization, while the remaining rules mainly perform stem extraction on non-noun words.