The order of Solr's index-time and query-time analysis

Reposted from: http://heweiya.iteye.com/?show_full=true

 

After reading through part of Solr's source code, I was keen to pin down the order in which Solr analyzes text at index time and at query time; the findings are below.

All of the relevant configuration lives in solr/example/solr/conf/schema.xml.
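For context, a field picks up one of these analysis chains simply by referencing the fieldType name in its definition. The field names below are hypothetical and not taken from the original schema.xml; they only show how the types defined further down would be used:

    <field name="title" type="text"      indexed="true" stored="true"/>
    <field name="sku"   type="textTight" indexed="true" stored="true"/>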

 

The fieldType definitions from schema.xml:
<!-- the "text" field type -->
<fieldType name="text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
 <!-- index-time order: 1 whitespace, 2 synonyms (commented out here), 3 stop words, 4 word delimiter, 5 lowercase, 6 keyword marker, 7 stemming -->
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <!-- Case insensitive stop word removal.
          add enablePositionIncrements=true in both the index and query
          analyzers to leave a 'gap' for more accurate phrase queries.
        -->
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
     <!-- query-time order: 1 whitespace, 2 synonyms, 3 stop words, 4 word delimiter, 5 lowercase, 6 keyword marker, 7 stemming -->
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
    </fieldType>
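As a rough, illustrative trace (token positions omitted, and assuming none of these tokens appear in stopwords.txt, synonyms.txt or protwords.txt), the value "Wi-Fi routers" would come out of the two chains like this; note that the index-time WordDelimiterFilter catenates word parts (catenateWords=1) while the query-time one does not:

    index:  WhitespaceTokenizer  ->  Wi-Fi | routers
            WordDelimiterFilter  ->  Wi | Fi | WiFi | routers     (catenateWords=1)
            LowerCaseFilter      ->  wi | fi | wifi | routers
            PorterStemFilter     ->  wi | fi | wifi | router

    query:  WhitespaceTokenizer  ->  Wi-Fi | routers
            WordDelimiterFilter  ->  Wi | Fi | routers            (catenateWords=0)
            LowerCaseFilter      ->  wi | fi | routers
            PorterStemFilter     ->  wi | fi | router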


    <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
   <!-- the "textTight" field type -->
    <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" >
 <!-- analysis order (used for both index and query, since the analyzer has no type): 1 whitespace, 2 synonyms, 3 stop words, 4 word delimiter, 5 lowercase, 6 keyword marker, 7 minimal English stemming, 8 duplicate removal -->
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.EnglishMinimalStemFilterFactory"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterFilter in conjunction with stemming. -->
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      </analyzer>
    </fieldType>
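A small illustration of the "less false matches" idea (same assumptions about the word lists as above, and relying on WordDelimiterFilterFactory's default case-change splitting): because generateWordParts/generateNumberParts are 0 and catenateWords/catenateNumbers are 1, differently written product codes collapse to a single token:

    "Power-Shot"  ->  WordDelimiterFilter  ->  PowerShot  ->  LowerCaseFilter  ->  powershot
    "PowerShot"   ->  WordDelimiterFilter  ->  PowerShot  ->  LowerCaseFilter  ->  powershot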


    <!-- A general unstemmed text field - good if one does not know the language of the field -->
 <!-- the "textgen" field type -->
    <fieldType name="textgen" class="solr.TextField" positionIncrementGap="100">
     <!-- index-time order: 1 whitespace, 2 stop words, 3 word delimiter, 4 lowercase -->
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
<!-- query-time order: 1 whitespace, 2 synonyms, 3 stop words, 4 word delimiter, 5 lowercase -->
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
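Compared with the text type, textgen leaves case transitions alone (splitOnCaseChange=0) and applies no stemming. A rough sketch under the same assumptions:

    index:  "Power-Shot"  ->  Power | Shot | PowerShot  ->  power | shot | powershot
            "PowerShot"   ->  PowerShot                 ->  powershot
    query:  "Power-Shot"  ->  Power | Shot              ->  power | shot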


    <!-- A general unstemmed text field that indexes tokens normally and also
         reversed (via ReversedWildcardFilterFactory), to enable more efficient 
	 leading wildcard queries. -->
   <!-- the "text_rev" field type -->
    <fieldType name="text_rev" class="solr.TextField" positionIncrementGap="100">
  <!-- index-time order: 1 whitespace, 2 stop words, 3 word delimiter, 4 lowercase, 5 reversed wildcard -->
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
      </analyzer>
 <!-- query-time order: 1 whitespace, 2 synonyms, 3 stop words, 4 word delimiter, 5 lowercase -->
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
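The extra ReversedWildcardFilterFactory step at index time is what makes leading-wildcard queries cheap: with withOriginal=true each token is indexed both as-is and in reversed form (the reversed form carries an internal marker so it cannot be confused with ordinary terms), so a leading wildcard can be rewritten into an ordinary prefix scan over the reversed terms. A rough sketch:

    indexed for "telephone":  telephone   and   enohpelet (reversed, internally marked)
    query  *phone   ->   rewritten to a prefix search  enohp*  against the reversed terms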

    <!-- charFilter + WhitespaceTokenizer  -->
    <!--
    <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100" >
      <analyzer>
        <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>
    -->

    <!-- This is an example of using the KeywordTokenizer along
         with various TokenFilterFactories to produce a sortable field
         that does not include some properties of the source text
      -->
    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
      <analyzer>
        <!-- KeywordTokenizer does no actual tokenizing, so the entire
             input string is preserved as a single token
          -->
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <!-- The LowerCase TokenFilter does what you expect, which can be useful
             when you want your sorting to be case insensitive
          -->
        <filter class="solr.LowerCaseFilterFactory" />
        <!-- The TrimFilter removes any leading or trailing whitespace -->
        <filter class="solr.TrimFilterFactory" />
        <!-- The PatternReplaceFilter gives you the flexibility to use
             Java Regular expression to replace any sequence of characters
             matching a pattern with an arbitrary replacement string, 
             which may include back references to portions of the original
             string matched by the pattern.
             
             See the Java Regular Expression documentation for more
             information on pattern and replacement string syntax.
             
             http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html
          -->
        <filter class="solr.PatternReplaceFilterFactory"
                pattern="([^a-z])" replacement="" replace="all"
        />
      </analyzer>
    </fieldType>
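A quick trace on a made-up value shows why this yields a clean sort key:

    "  My Product-Name 2 "
      KeywordTokenizer     ->  "  My Product-Name 2 "   (the whole value, one token)
      LowerCaseFilter      ->  "  my product-name 2 "
      TrimFilter           ->  "my product-name 2"
      PatternReplaceFilter ->  "myproductname"          (everything outside a-z removed)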
    
    <fieldtype name="phonetic" stored="false" indexed="true" class="solr.TextField" >
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
      </analyzer>
    </fieldtype>
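Here terms are matched on how they sound rather than how they are spelled; with inject=false only the phonetic code is kept in the index. For example, "Smith" and "Smyth" should reduce to the same DoubleMetaphone code, so a search for one finds the other (the exact code emitted depends on the DoubleMetaphone algorithm):

    "Smith"  ->  DoubleMetaphoneFilter(inject=false)  ->  <code X>
    "Smyth"  ->  DoubleMetaphoneFilter(inject=false)  ->  <code X>   (same code)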

    <fieldtype name="payloads" stored="false" indexed="true" class="solr.TextField" >
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!--
        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
        a token of "foo|1.4"  would be indexed as "foo" with a payload of 1.4f
        Attributes of the DelimitedPayloadTokenFilterFactory : 
         "delimiter" - a one character delimiter. Default is | (pipe)
	 "encoder" - how to encode the following value into a playload
	    float -> org.apache.lucene.analysis.payloads.FloatEncoder,
	    integer -> o.a.l.a.p.IntegerEncoder
	    identity -> o.a.l.a.p.IdentityEncoder
            Fully Qualified class name implementing PayloadEncoder, Encoder must have a no arg constructor.
         -->
        <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
      </analyzer>
    </fieldtype>
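Following the description in the comment above, a (hypothetical) field value would index plain terms with their payloads attached:

    field value:  "apple|2.5 banana|0.5"
    indexed as:   apple  (payload 2.5f),  banana  (payload 0.5f)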

    <!-- lowercases the entire field value, keeping it as a single token.  -->
    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory" />
      </analyzer>
    </fieldType>
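So a value such as "USA Today" stays a single token and is merely lowercased:

    "USA Today"  ->  KeywordTokenizer  ->  "USA Today"  ->  LowerCaseFilter  ->  "usa today"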

    The rough index-time order is:

    1. Whitespace tokenization ......... solr.WhitespaceTokenizerFactory
    2. Synonyms ........................ solr.SynonymFilterFactory
    3. Stop-word removal ............... solr.StopFilterFactory
    4. Word delimiting (splitting) ..... solr.WordDelimiterFilterFactory
    5. Lowercasing ..................... solr.LowerCaseFilterFactory
    6. Keyword marking ................. solr.KeywordMarkerFilterFactory
    7. Stemming (Porter) ............... solr.PorterStemFilterFactory

 

 The rough query-time (search) order is:

 

    1. Whitespace tokenization ......... solr.WhitespaceTokenizerFactory
    2. Synonyms ........................ solr.SynonymFilterFactory
    3. Stop-word removal ............... solr.StopFilterFactory
    4. Word delimiting (splitting) ..... solr.WordDelimiterFilterFactory
    5. Lowercasing ..................... solr.LowerCaseFilterFactory
    6. Keyword marking ................. solr.KeywordMarkerFilterFactory
    7. Minimal English stemming ........ solr.EnglishMinimalStemFilterFactory
    8. Duplicate removal ............... solr.RemoveDuplicatesTokenFilterFactory

    (Items 7 and 8 come from the textTight type; the plain text type ends with solr.PorterStemFilterFactory at query time as well.)

 

    Of course, you can re-arrange the index-time and query-time analysis order to suit your own needs.
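A minimal, hypothetical sketch of one such re-ordering (not taken from the original schema.xml): stop words are removed only after lowercasing, so the stop filter no longer needs ignoreCase, while word delimiting stays ahead of lowercasing so splitOnCaseChange still has case information to work with:

    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
              catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
    </analyzer>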

 
