Embedding Solr in a Web Application

Solr is now at version 4, and installation is much easier than in earlier releases. Download the zip from the official site; the docs include an example that uses Jetty as the server. To use Tomcat instead, first copy apache-solr-4.0.0.war from the dist directory into Tomcat's webapps directory, then add some configuration: create solr\collection1\conf under the Tomcat root, copy the configuration files from apache-solr-4.0.0\example\solr\collection1\conf in the Solr package into it, and start Tomcat.

In a real environment we may need to embed Solr inside our own web service. Our company's project needs to expose the Solr service over Thrift, which requires some customization. The details follow.

Customizing the Solr Service

First create a Maven webapp project. We start with web.xml:

<filter>
    <filter-name>solrRequestFilter</filter-name>
    <filter-class>xxx.xxx.searcher.web.SolrDispatchFilter</filter-class>
    <init-param>
        <param-name>path-prefix</param-name>
        <param-value>/s2</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>solrRequestFilter</filter-name>
    <url-pattern>/s2/*</url-pattern>
</filter-mapping>
Note that this is not the default org.apache.solr.servlet.SolrDispatchFilter; my filter simply extends it with a few changes:

public class SolrDispatchFilter extends org.apache.solr.servlet.SolrDispatchFilter {
    protected static final Logger LOG = LoggerFactory.getLogger(SolrDispatchFilter.class);
    private static final String DATA_DIR_PARAM = "data.dir";

    @Override
    public void init(FilterConfig config) throws ServletException {
        // Set the Solr home directory; the default is the solr directory under the Tomcat root
        System.setProperty("solr.solr.home", config.getServletContext().getRealPath("/WEB-INF/solr"));
        try {
            String dataDir = ResourceUtils.getFile(getDataDir("file:///opt/any/home/searcher/data")).getAbsolutePath();
            LOG.info("Use solr.data.dir:[{}]", dataDir);
            // Directory holding the index data; the default is data under the Solr home
            System.setProperty("solr.data.dir", dataDir);
        } catch (FileNotFoundException ignored) {
            throw new ServletException("solr data dir not found");
        }
        super.init(config); // then call the parent class's init
        /**
         * Solr runs as an embedded service; save the server in the servletContext
         * so it is easy to retrieve later
         */
        EmbeddedSolrServer solrServer = new EmbeddedSolrServer(cores, cores.getDefaultCoreName());
        config.getServletContext().setAttribute(SolrDispatchFilter.class.getName(), solrServer);

        // spring context loader
        ContextLoader loader = new ContextLoaderListener();
        loader.initWebApplicationContext(config.getServletContext());
    }
...

The code redefines solr home; alternatively it can be defined via JNDI, placed in web.xml:

<env-entry>
    <env-entry-name>solr/home</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>d:\solr</env-entry-value>
</env-entry>

The solrServer is saved in the servlet context; whenever it is needed, just fetch it from there.

Next, create a solr directory under WEB-INF containing a solr.xml file:

<?xml version="1.0" encoding="UTF-8" ?>

<solr persistent="false">
  <cores adminPath="/admin/cores" defaultCoreName="core0">
      <core name="core0" instanceDir="."/>
  </cores>
</solr>
This file creates a core named core0; multiple cores can be configured, and a core is conceptually similar to a database. Without a solr.xml there is a single default core named collection1. The persistent attribute defaults to true, which means core operations performed through the Solr admin console are written back to solr.xml, so they can be restored after a restart.

Then create schema.xml in the conf directory; it essentially defines the table fields. The example shipped with Solr is very complete, and every type is commented. Here is my schema.xml:

<?xml version="1.0" ?>

<schema name="searcherSchema" version="1.4">
    <types>
        <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
        <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>


        <fieldType name="int" class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="long" class="solr.TrieLongField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="date" class="solr.TrieDateField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>


        <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
        <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" omitNorms="true" positionIncrementGap="0"/>


        <fieldType name="text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
            <analyzer>
                <tokenizer class="org.bear.searcher.analyzer.IKTokenizerFactory"/>
                <filter class="solr.LowerCaseFilterFactory"/>
                <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
            </analyzer>
        </fieldType>


        <fieldType name="ngram_text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
            <analyzer type="query">
                <tokenizer class="org.bear.searcher.analyzer.IKTokenizerFactory"/>
                <filter class="solr.LowerCaseFilterFactory"/>
                <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
            </analyzer>
            <analyzer type="index">
                <tokenizer class="org.bear.searcher.analyzer.IKTokenizerFactory"/>
                <filter class="org.bear.searcher.analyzer.TokenJoinTokenFilterFactory" maxSize="3"/>
                <filter class="solr.LowerCaseFilterFactory"/>
                <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
            </analyzer>
        </fieldType>


        <fieldtype name="ignored" class="solr.StrField" stored="false" indexed="false"/>
    </types>


    <fields>
        <field name="_version_" type="long" indexed="true" stored="true"/>
        <field name="id" type="string" indexed="true" stored="true" required="true"/>
        <field name="scope" type="int" indexed="true" stored="false"/>
        <field name="mimeType" type="int" indexed="true" stored="true"/>
        <field name="app" type="string" indexed="true" stored="true"/>
        <field name="category" type="string" indexed="true" stored="true"/>
        <field name="owner" type="string" indexed="true" stored="false" multiValued="true"/>
        <field name="role" type="string" indexed="true" stored="false" multiValued="true"/>
        <field name="tag" type="string" indexed="true" stored="false" multiValued="true"/>
        <field name="title" type="text" indexed="true" stored="true" omitTermFreqAndPositions="true"/>
        <field name="titleAuto" type="ngram_text" indexed="true" stored="true" omitTermFreqAndPositions="true"/>
        <field name="body" type="text" indexed="true" stored="true" omitTermFreqAndPositions="true" compressed="true" compressThreshold="128"/>
        <field name="date" type="tdate" indexed="true" stored="true"/>
        <field name="public" type="boolean" indexed="false" stored="true"/>
        <field name="all" type="text" indexed="true" stored="false" multiValued="true" omitTermFreqAndPositions="true"/>
        <!--unsearchable fields-->
        <dynamicField name="*_us" type="string" indexed="false" stored="true" compressed="true" compressThreshold="128"/>
        <dynamicField name="*_ub" type="boolean" indexed="false" stored="true"/>
        <dynamicField name="*_ui" type="int" indexed="false" stored="true"/>
        <dynamicField name="*_ul" type="long" indexed="false" stored="true"/>
        <dynamicField name="*_uf" type="float" indexed="false" stored="true"/>
        <dynamicField name="*_ud" type="double" indexed="false" stored="true"/>
        <dynamicField name="*_ut" type="date" indexed="false" stored="true"/>
        <!--filter fields-->
        <dynamicField name="*_fs" type="string" indexed="true" stored="false"/>
        <dynamicField name="*_fb" type="boolean" indexed="true" stored="false"/>
        <dynamicField name="*_fi" type="tint" indexed="true" stored="true"/>
        <dynamicField name="*_fl" type="tlong" indexed="true" stored="true"/>
        <dynamicField name="*_ff" type="tfloat" indexed="true" stored="true"/>
        <dynamicField name="*_fd" type="tdouble" indexed="true" stored="true"/>
        <dynamicField name="*_ft" type="tdate" indexed="true" stored="true"/>
        <!--searchable fields-->
        <dynamicField name="*_s" type="string" indexed="true" stored="true" compressed="true" compressThreshold="128"/>
        <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
        <dynamicField name="*_i" type="tint" indexed="true" stored="true"/>
        <dynamicField name="*_l" type="tlong" indexed="true" stored="true"/>
        <dynamicField name="*_f" type="tfloat" indexed="true" stored="true"/>
        <dynamicField name="*_d" type="tdouble" indexed="true" stored="true"/>
        <dynamicField name="*_t" type="tdate" indexed="true" stored="true"/>
        <dynamicField name="*" type="ignored" multiValued="true"/>
    </fields>


    <copyField source="title" dest="all"/>
    <copyField source="title" dest="titleAuto"/>
    <copyField source="body" dest="all"/>
    <copyField source="*_s" dest="all"/>


    <uniqueKey>id</uniqueKey>
    <defaultSearchField>all</defaultSearchField>
    <solrQueryParser defaultOperator="OR"/>
</schema>

fieldType defines a type that later field definitions can reference. A precisionStep of 0 disables indexing at multiple precision levels (only the full-precision value is indexed); smaller non-zero values make range queries faster, at the cost of creating more tokens per value. The text-related types use the IK Chinese analyzer, covered in detail later.
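As a rough illustration of that token cost (my own sketch, based on Lucene's trie encoding indexing a 64-bit value at shifts 0, precisionStep, 2×precisionStep, and so on below 64):

```java
/** Illustration of how many index terms a 64-bit numeric value produces
 *  for a given precisionStep (0 is treated as full precision only). */
public class TriePrecision {
    public static int termsPerLong(int precisionStep) {
        if (precisionStep <= 0 || precisionStep >= 64) {
            return 1; // only the full-precision term is indexed
        }
        // one term per precision level: shifts 0, ps, 2*ps, ... below 64 bits
        return (64 + precisionStep - 1) / precisionStep;
    }
}
```

So the tlong type above (precisionStep=8) indexes 8 terms per value, while the plain long type (precisionStep=0) indexes just 1; the extra terms are what make range queries fast.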

field is like a table column. stored controls whether the value is stored; if a field does not need to be returned in results, leave stored false to improve performance.

copyField copies one field into another at index time, either to index the same content in separate ways or to merge several fields into one.

dynamicField is very powerful, because we usually cannot force users to stick to a fixed set of predefined fields. For example, the schema.xml above makes fields such as app, body, and id mandatory, but an application using the Solr service may define fields of its own; in a user index we might store each user's username and nickname. This is where dynamicField shines.

SolrInputDocument input = new SolrInputDocument();
...
input.addField("app", "user"); // a statically defined field
input.addField("nickname_s", "superman"); // a dynamic field
...
The nickname field is handled by the dynamicField name="*_s" rule. If several dynamic rules match, the first one that matches is used.


To query only by the user's nickname field, write SolrQuery solrQuery = new SolrQuery("nickname_s:superman"); the field is then processed according to the matching dynamicField definition.
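If field names are built in several places, the suffix convention from the schema above can be centralized in a small helper. This is just a sketch; the class and method names are my own, not part of the project:

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

/** Maps a Java value type to the searchable dynamic-field suffix from schema.xml. */
public class DynamicFieldNames {
    private static final Map<Class<?>, String> SUFFIXES = new HashMap<Class<?>, String>();
    static {
        SUFFIXES.put(String.class, "_s");   // <dynamicField name="*_s" .../>
        SUFFIXES.put(Boolean.class, "_b");
        SUFFIXES.put(Integer.class, "_i");
        SUFFIXES.put(Long.class, "_l");
        SUFFIXES.put(Float.class, "_f");
        SUFFIXES.put(Double.class, "_d");
        SUFFIXES.put(Date.class, "_t");
    }

    /** e.g. fieldName("nickname", String.class) returns "nickname_s" */
    public static String fieldName(String base, Class<?> type) {
        String suffix = SUFFIXES.get(type);
        if (suffix == null) {
            throw new IllegalArgumentException("no dynamic field suffix for " + type);
        }
        return base + suffix;
    }
}
```

Indexing and querying would then use the same name: input.addField(DynamicFieldNames.fieldName("nickname", String.class), "superman").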

A note on the difference between string and text: a string field is used for exact, case-sensitive matching, whereas a text field is analyzed for fuzzy matching and can be post-processed, as in the configuration above with tokenization, lowercasing, and duplicate removal:

<tokenizer class="org.bear.searcher.analyzer.IKTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>

Fields with stored=true are saved in the index, so their values can be retrieved from it; fields with indexed=true can be used in queries. A field used for sorting must have indexed=true.

Next is the index configuration file, solrconfig.xml. The solrconfig.xml in the official example is also well documented; configure it to suit your needs:

<config>
    <luceneMatchVersion>LUCENE_40</luceneMatchVersion>
    <dataDir>${solr.data.dir:}</dataDir>
    <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>
    <indexConfig>
        <useCompoundFile>false</useCompoundFile>
        <mergeFactor>10</mergeFactor>
        <ramBufferSizeMB>64</ramBufferSizeMB>
        <maxFieldLength>20000</maxFieldLength>
        <writeLockTimeout>1000</writeLockTimeout>
        <lockType>native</lockType>
    </indexConfig>
    <updateHandler class="solr.DirectUpdateHandler2">
        <autoCommit>
            <maxDocs>30000</maxDocs>
            <maxTime>60000</maxTime>
        </autoCommit>
        <autoSoftCommit>
            <maxTime>1000</maxTime>
        </autoSoftCommit>
        <updateLog>
          <str name="dir">${solr.data.dir:}</str>
        </updateLog>
    </updateHandler>
    <query>
        <maxBooleanClauses>1024</maxBooleanClauses>
        <filterCache class="solr.FastLRUCache" size="16384" initialSize="4096" autowarmCount="4096"/>
        <queryResultCache class="solr.FastLRUCache" size="16384" initialSize="4096" autowarmCount="1024"/>
        <documentCache class="solr.FastLRUCache" size="16384" initialSize="16384"/>
        <enableLazyFieldLoading>true</enableLazyFieldLoading>
        <queryResultWindowSize>50</queryResultWindowSize>
        <queryResultMaxDocsCached>400</queryResultMaxDocsCached>
        <HashDocSet maxSize="10000" loadFactor="0.75"/>
        <useColdSearcher>false</useColdSearcher>
        <maxWarmingSearchers>2</maxWarmingSearchers>
    </query>

    <requestDispatcher handleSelect="true">
        <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000"/>
        <httpCaching never304="true"/>
    </requestDispatcher>

    <searchComponent name="terms" class="solr.TermsComponent"/>

    <searchComponent name="spellchecker" class="solr.SpellCheckComponent">
        <lst name="spellchecker">
            <str name="name">suggest</str>
            <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
            <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
            <str name="field">titleAuto</str>
            <str name="buildOnOptimize">true</str>
        </lst>
        <lst name="spellchecker">
            <str name="name">spellchecker</str>
            <str name="classname">solr.IndexBasedSpellChecker</str>
            <str name="field">titleAuto</str>
            <str name="spellcheckIndexDir">spellchecker</str>
            <str name="buildOnOptimize">true</str>
        </lst>
    </searchComponent>

    <requestHandler name="search" class="solr.SearchHandler" default="true">
        <lst name="defaults">
            <!--<str name="echoParams">explicit</str>-->
            <str name="echoParams">all</str>
        </lst>
        <arr name="components">
            <str>query</str>
            <str>facet</str>
            <str>highlight</str>
            <str>terms</str>
            <str>spellchecker</str>
        </arr>
    </requestHandler>
    <requestHandler name="/update" class="solr.UpdateRequestHandler"/>
    <requestHandler name="/update/javabin" class="solr.UpdateRequestHandler"/>
    <requestHandler name="/analysis/field" startup="lazy" class="solr.FieldAnalysisRequestHandler"/>
    <requestHandler name="/admin/" class="solr.admin.AdminHandlers"/>

    <requestHandler name="/replication" class="solr.ReplicationHandler" startup="lazy" />
    <requestHandler name="/get" class="solr.RealTimeGetHandler">
      <lst name="defaults">
        <str name="omitHeader">true</str>
     </lst>
    </requestHandler>

    <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
        <lst name="invariants">
            <str name="q">solrpingquery</str>
        </lst>
        <lst name="defaults">
            <str name="echoParams">all</str>
        </lst>
    </requestHandler>

    <requestHandler name="/suggest" class="org.apache.solr.handler.component.SearchHandler">
        <lst name="defaults">
            <str name="spellcheck">true</str>
            <str name="spellcheck.dictionary">suggest</str>
            <str name="spellcheck.count">10</str>
        </lst>
        <arr name="components">
            <str>spellchecker</str>
        </arr>
    </requestHandler>
    <requestHandler name="/spellcheck" class="org.apache.solr.handler.component.SearchHandler">
        <lst name="defaults">
            <str name="spellcheck">true</str>
            <str name="spellcheck.dictionary">spellchecker</str>
            <str name="spellcheck.onlyMorePopular">true</str>
            <str name="spellcheck.count">10</str>
        </lst>
        <arr name="components">
            <str>spellchecker</str>
        </arr>
    </requestHandler>
</config>

Now let's look at Chinese word segmentation. A custom IKTokenizerFactory is needed, mainly because this version of IK is not compatible with Solr 4's TokenizerFactory:

public class IKTokenizerFactory extends TokenizerFactory {
    private boolean useSmart = false;

    public void init(Map<String, String> args) {
        String _arg = args.get("useSmart");
        useSmart = Boolean.parseBoolean(_arg);
    }

    public Tokenizer create(Reader reader) {
        return new IKTokenizer(reader, isUseSmart());
    }

    public void setUseSmart(boolean useSmart) {
        this.useSmart = useSmart;
    }

    public boolean isUseSmart() {
        return useSmart;
    }
}

To customize the Chinese analyzer, add an IKAnalyzer.cfg.xml to the classpath:

<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- configure your own extension dictionaries here -->
    <entry key="ext_dict">/mydict.dic;</entry>

    <!-- configure your own extension stopword dictionaries here -->
    <entry key="ext_stopwords">/ext_stopword.dic</entry>
</properties>

Next is the TokenJoinTokenFilterFactory:

public class TokenJoinTokenFilterFactory extends TokenFilterFactory {
    private int minSize;
    private int maxSize;

    @Override
    public void init(Map<String, String> args) {
        super.init(args);
        String minArg = args.get("minSize");
        minSize = (minArg != null ? Integer.parseInt(minArg) : 2);
        String maxArg = args.get("maxSize");
        maxSize = (maxArg != null ? Integer.parseInt(maxArg) : 3);
    }

    public TokenJoinTokenFilter create(TokenStream input) {
        return new TokenJoinTokenFilter(input, minSize, maxSize);
    }
}
And TokenJoinTokenFilter itself. This class joins tokens into combined tokens whose length falls within a given range, for use in autocompletion and spell checking:
public class TokenJoinTokenFilter extends TokenFilter {
    private final int minSize;
    private final int maxSize;

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);

    private LinkedList<Token> tokens = new LinkedList<Token>();
    private int point;

    protected TokenJoinTokenFilter(TokenStream input, int minSize, int maxSize) {
        super(input);
        if (minSize < 2) {
            throw new IllegalArgumentException("minSize must be at least 2");
        }

        if (minSize > maxSize) {
            throw new IllegalArgumentException("minSize must not be greater than maxSize");
        }
        this.minSize = minSize;
        this.maxSize = maxSize;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (point >= minSize) {
            if (point > maxSize) {
                tokens.removeFirst();
                point = tokens.size();
            }
            StringBuilder sb = new StringBuilder();
            ListIterator<Token> it = tokens.listIterator(tokens.size() - point);
            int beginPosition = -1;
            Token token = null;
            while (it.hasNext()) {
                token = it.next();
                sb.append(token.getBuff());
                if (beginPosition < 0) {
                    beginPosition = token.getStartOffset();
                }
            }
            if (token != null) {
                clearAttributes();
                offsetAtt.setOffset(beginPosition, token.getEndOffset());
                termAtt.append(sb.toString());
                point--;
            }
        } else {
            if (!input.incrementToken())
                return false;
            Token token = new Token(termAtt.toString(), offsetAtt.startOffset(), offsetAtt.endOffset());
            tokens.add(token);
            point = tokens.size();
        }
        return true;
    }

    @Override
    public void reset() throws IOException {
        super.reset();
        tokens.clear();
        point = 0;
    }

    class Token {
        private String buff;
        private int startOffset;
        private int endOffset;

        Token(String buff, int startOffset, int endOffset) {
            this.buff = buff;
            this.endOffset = endOffset;
            this.startOffset = startOffset;
        }

        public String getBuff() {
            return buff;
        }

        public int getStartOffset() {
            return startOffset;
        }

        public int getEndOffset() {
            return endOffset;
        }
    }
}
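To see what the filter produces without wiring up Lucene, here is a standalone sketch of the same joining logic (my own batch illustration of the output, not the streaming implementation above): each original token passes through, and in addition every run of minSize to maxSize adjacent tokens is emitted joined into one token.

```java
import java.util.ArrayList;
import java.util.List;

/** Batch illustration of TokenJoinTokenFilter's output: the original tokens
 *  plus every run of minSize..maxSize adjacent tokens joined together. */
public class TokenJoinSketch {
    public static List<String> join(List<String> tokens, int minSize, int maxSize) {
        List<String> out = new ArrayList<String>(tokens); // originals pass through
        for (int end = 0; end < tokens.size(); end++) {
            for (int len = minSize; len <= maxSize; len++) {
                int start = end - len + 1;
                if (start < 0) {
                    continue; // not enough preceding tokens for a run of this length
                }
                StringBuilder sb = new StringBuilder();
                for (int i = start; i <= end; i++) {
                    sb.append(tokens.get(i));
                }
                out.add(sb.toString());
            }
        }
        return out;
    }
}
```

For tokens [a, b, c, d] with minSize=2 and maxSize=3 this yields a, b, c, d plus ab, bc, abc, cd, bcd, which is the kind of joined-token stream that feeds the titleAuto suggestion field.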
Finally, add a solr directory to the web project and copy the Solr admin console pages from the example war into it; WEB-INF, META-INF, and favicon.ico are not needed.

The content can be adjusted as needed, but be sure to change these lines in admin.html:

app_config.solr_path = '../s2';
app_config.core_admin_path = '/admin/cores';

These match our configuration. Now the service can be started. Since Solr is embedded in our own service, its admin console can be conveniently integrated via an iframe. One wrinkle with the modified solr_path is that query tests in the console build URLs with a literal '../s2', but that does not affect much.

Exposing the Solr Service over Thrift

We expose the service over Thrift mainly because the company's entire framework is Thrift-based, which keeps things uniform and also solves the inconvenience of cross-language calls. For how to expose a service with Thrift, see http://my.oschina.net/yybear/blog/101217; the mechanism is the same. The key piece is the solrServer saved in the servletContext; with it you can do anything.

Spring Data Solr Client

Spring's Spring Data Solr framework is new, and it lets you work with Solr much as you would with a database, which is quite nice. Someone on GitHub has built spring-solr-repository-example as a demonstration; we can use it to test the Solr service we just started together with the framework. Download it from GitHub, then edit /spring-solr-repository-example/src/main/resources/org/springframework/data/solr/example/configuration.properties and set solr.host to our service address: solr.host=http://xxx.xxx.com/solrTest/s2.

Before testing, define the fields of the example's Product class in schema.xml.

Spring Data Solr is very convenient to use and closely resembles Spring's JPA support.

Common Solr Problems

1.

message:[org.apache.solr.client.solrj.SolrServerException: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/opt/any/home/searcher/data/index/write.lock: java.io.FileNotFoundException: /opt/any/home/searcher/data/index/write.lock (Permission denied)]
This happens when one client writes so much data that later clients time out waiting for the write lock; it can be addressed by raising <writeLockTimeout>1000</writeLockTimeout> in solrconfig.xml.

2.

PERFORMANCE WARNING: Overlapping onDeckSearchers=X

The official explanation:

This warning means that at least one searcher hadn't yet finished warming in the background, when a commit was issued and another searcher started warming. This can not only eat up a lot of ram (as multiple on deck searches warm caches simultaneously) but it can also create a feedback cycle, since the more searchers warming in parallel means each searcher might take longer to warm. 

Typically the way to avoid this error is to either reduce the frequency of commits, or reduce the amount of warming a searcher does while it's on deck (by reducing the work in newSearcher listeners, and/or reducing the autowarmCount on your caches)

See also the <maxWarmingSearchers/> option in SolrConfigXml.

With Solr 4, this problem can be avoided by setting openSearcher=false inside autoCommit.
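Applied to the autoCommit block from the solrconfig.xml above, that looks like this (openSearcher is a standard Solr 4 autoCommit option; the autoSoftCommit setting still keeps near-real-time search working):

```xml
<autoCommit>
    <maxDocs>30000</maxDocs>
    <maxTime>60000</maxTime>
    <!-- flush to disk, but do not open a new searcher on every hard commit -->
    <openSearcher>false</openSearcher>
</autoCommit>
```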

As a bonus, here is the Solr terminology glossary:

http://wiki.apache.org/solr/SolrTerminology

Reprinted from: https://my.oschina.net/yybear/blog/105698
