Solr 4.3's default tokenizer is a unigram tokenizer, designed for English, where text is typically split on whitespace. Applied to Chinese, that rule obviously produces a lot of redundant tokens: useless function words, numerals, and the like all get split out, which hurts efficiency and, more importantly, gives poor tokenization quality. Instead, you can use smartcn, the Chinese tokenizer distributed alongside Solr, to segment Chinese text. smartcn's segmentation accuracy is quite good, though it does not let you define your own dictionary. Because smartcn is released in sync with Solr, no extra download is needed; you just copy it out of the Solr distribution. The relevant paths and the steps for installing the smartcn tokenizer on Solr 4.3 are given below.
Whichever tokenizer you install, the process usually has two steps. Step 1 is to copy the jar into the Solr webapp's lib directory:
- Source: C:\桌面\solr-4.3.0\contrib\analysis-extras\lucene-libs
- Target: F:\eclipse10tomcat\webapps\solr\WEB-INF\lib
- The smartcn jar shipped with Solr: lucene-analyzers-smartcn-4.3.0.jar
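The copy step above can be sketched as a shell command. Since the original paths are Windows-specific, this sketch simulates them with temporary directories and a stand-in jar so it can be run anywhere; substitute the real source and target paths from the list above.

```shell
# Sketch of step 1: copy the smartcn jar into the webapp's WEB-INF/lib.
# SRC/DEST are stand-ins for the real paths listed above.
SRC=$(mktemp -d)/contrib/analysis-extras/lucene-libs
DEST=$(mktemp -d)/solr/WEB-INF/lib
mkdir -p "$SRC" "$DEST"
touch "$SRC/lucene-analyzers-smartcn-4.3.0.jar"   # stand-in for the real jar
cp "$SRC/lucene-analyzers-smartcn-4.3.0.jar" "$DEST/"
ls "$DEST"
```

After the copy, restart the servlet container so the webapp picks up the new jar.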
Once that is done, register the tokenizer in schema.xml:
<fieldType name="text_smart" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<!-- Configure the main tokenizer class here -->
<tokenizer class="solr.SmartChineseSentenceTokenizerFactory"/>
<!--
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
<filter class="solr.LowerCaseFilterFactory"/>
-->
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<filter class="solr.SmartChineseWordTokenFilterFactory"/>
</analyzer>
<analyzer type="query">
<!-- Same configuration as for indexing -->
<tokenizer class="solr.SmartChineseSentenceTokenizerFactory"/>
<!--
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.LowerCaseFilterFactory"/>
-->
<filter class="solr.SmartChineseWordTokenFilterFactory"/>
</analyzer>
</fieldType>
Finally, reference the field type in a field definition:
<field name="sma" type="text_smart" indexed="true" stored="true" multiValued="true"/>
Visit http://localhost:8080/solr/#/collection1 and click "Analysis" to inspect the tokenization results.
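Besides the admin UI, the analysis can also be checked over HTTP. The sketch below only builds and prints the request; it assumes the stock example solrconfig.xml, which maps the field-analysis handler at /analysis/field, and a Solr instance running on port 8080. Run the printed curl command once Solr is up.

```shell
# Build a request against Solr's field analysis handler (assumed at
# /analysis/field as in the example solrconfig.xml) for the text_smart type.
URL='http://localhost:8080/solr/collection1/analysis/field'
PARAMS='analysis.fieldtype=text_smart&analysis.fieldvalue=我是中国人'
# Print the full command; execute it against a running Solr to see the tokens.
echo "curl '${URL}?${PARAMS}'"
```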