[b]Premise[/b]: no sort is applied to the results.
In search, not all Documents and Fields are equal. Some use cases call for changing the weight of a particular Document or Field ([b]default: 1.0F[/b]); this is done by changing its boost factor:
setBoost(float)
@Test
public void testFieldBoost() throws Exception {
    String[] email = {"bc_bc@gmail.com", "ab_bc@gmail.com", "ab_bc_cd@gmail.com"};
    RAMDirectory rd = new RAMDirectory();
    Analyzer analyzer = new SimpleAnalyzer(); // on newer Lucene: new SimpleAnalyzer(Version.LUCENE_CURRENT)
    IndexWriter iw = new IndexWriter(rd, analyzer, MaxFieldLength.UNLIMITED);
    for (int i = 0; i < email.length; i++) {
        Document doc = new Document();
        //----------- Code segment 1 ---------------------
        /*
        if (i == 0)
            doc.setBoost(2F);       // the default boost is 1.0F
        // other boosts to try:
        //if (i == 0) doc.setBoost(0.5F);
        //if (i == 1) doc.setBoost(0.1F);
        if (i == 2)
            doc.setBoost(1.2F);
        */
        //----------- Code segment 1 end --------------------
        doc.add(new Field("email", email[i], Field.Store.YES, Field.Index.ANALYZED));
        // Optional: inspect how the field is tokenized
        /*
        TokenStream ts = analyzer.tokenStream("email", new StringReader(email[i]));
        ts.addAttribute(TermAttribute.class);
        while (ts.incrementToken()) {
            TermAttribute ta = ts.getAttribute(TermAttribute.class);
            System.out.println("{" + ta.term() + "}");
        }
        */
        //iw.optimize();
        iw.addDocument(doc);
    }
    iw.close();
    IndexSearcher is = new IndexSearcher(rd);
    Term t = new Term("email", "bc");
    Query query = new TermQuery(t);
    query.setBoost(2F);
    TopDocs td = is.search(query, 100);
    for (int i = 0; i < td.scoreDocs.length; i++) {
        int docId = td.scoreDocs[i].doc; // use the doc id from the hit, not the loop index
        System.out.println(is.doc(docId).get("email"));
        System.out.println(is.explain(query, docId));
    }
    is.close();
}
Result:
bc_bc@gmail.com
0.5036848 = (MATCH) fieldWeight(email:bc in 0), product of:
1.4142135 = tf(termFreq(email:bc)=2)
0.71231794 = idf(docFreq=3, maxDocs=3)
0.5 = fieldNorm(field=email, doc=0)
ab_bc@gmail.com
0.35615897 = (MATCH) fieldWeight(email:bc in 1), product of:
1.0 = tf(termFreq(email:bc)=1)
0.71231794 = idf(docFreq=3, maxDocs=3)
0.5 = fieldNorm(field=email, doc=1)
ab_bc_cd@gmail.com
0.3116391 = (MATCH) fieldWeight(email:bc in 2), product of:
1.0 = tf(termFreq(email:bc)=1)
0.71231794 = idf(docFreq=3, maxDocs=3)
0.4375 = fieldNorm(field=email, doc=2)
Interpreting the results:
Because bc occurs twice in the first document, its tf is sqrt(2) = 1.4142135; the second and third documents each contain bc once, so their tf is 1.0, and all boosts are at the default of 1.0F. idf is computed per term over the whole index, so it is the same for every hit. bc_bc and ab_bc tokenize to the same number of terms, so their fieldNorm values are equal, while the longer ab_bc_cd gets a smaller fieldNorm.
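The tf and idf numbers in this explain output can be checked by hand. A minimal sketch using the DefaultSimilarity formulas (the same ones listed at the end of this post):

```java
public class TfIdfCheck {
    public static void main(String[] args) {
        // tf = sqrt(term frequency): "bc" occurs twice in bc_bc@gmail.com
        System.out.println(Math.sqrt(2));                   // 1.4142135...
        // idf = ln(maxDoc / (docFreq + 1)) + 1: "bc" appears in all 3 documents
        System.out.println(Math.log(3.0 / (3 + 1)) + 1.0);  // 0.7123179...
    }
}
```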
Code Listing 2
//----------- Code segment 1 ---------------------
/*
if (i == 0)
    doc.setBoost(2F);       // the default boost is 1.0F
// other boosts to try:
//if (i == 0) doc.setBoost(0.5F);
//if (i == 1) doc.setBoost(0.1F);
if (i == 2)
    doc.setBoost(1.2F);
*/
//----------- Code segment 1 end --------------------
Uncomment this code to change the document boosts; each fieldNorm in the results then becomes the original fieldNorm multiplied by the boost you set (norms are stored in a single byte, so the product is quantized — doc 2 shows 0.5 rather than exactly 1.2 × its unboosted norm).
Result:
bc_bc@gmail.com
1.0073696 = (MATCH) fieldWeight(email:bc in 0), product of:
1.4142135 = tf(termFreq(email:bc)=2)
0.71231794 = idf(docFreq=3, maxDocs=3)
1.0 = fieldNorm(field=email, doc=0)
ab_bc@gmail.com
0.35615897 = (MATCH) fieldWeight(email:bc in 1), product of:
1.0 = tf(termFreq(email:bc)=1)
0.71231794 = idf(docFreq=3, maxDocs=3)
0.5 = fieldNorm(field=email, doc=1)
ab_bc_cd@gmail.com
0.35615897 = (MATCH) fieldWeight(email:bc in 2), product of:
1.0 = tf(termFreq(email:bc)=1)
0.71231794 = idf(docFreq=3, maxDocs=3)
0.5 = fieldNorm(field=email, doc=2)
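A side note on why the boosted fieldNorm values look quantized: Lucene stores each norm (fieldBoost / sqrt(fieldLength)) in a single byte with a 3-bit mantissa, so the product of boost and length norm is rounded to the nearest representable value. A sketch of that encoding, modeled on Lucene's SmallFloat.floatToByte315 / byte315ToFloat (an illustration, not the shipped source):

```java
public class NormEncoding {
    // Encode a float into the single-byte norm format (3 mantissa bits, zero-exponent point 15).
    static byte floatToByte315(float f) {
        int bits = Float.floatToRawIntBits(f);
        int smallfloat = bits >> (24 - 3);
        if (smallfloat < ((63 - 15) << 3)) {
            return (bits <= 0) ? (byte) 0 : (byte) 1; // underflow: 0 or smallest positive
        }
        if (smallfloat >= ((63 - 15) << 3) + 0x100) {
            return -1;                                 // overflow: largest value
        }
        return (byte) (smallfloat - ((63 - 15) << 3));
    }

    // Decode the byte back to a float.
    static float byte315ToFloat(byte b) {
        if (b == 0) return 0.0f;
        int bits = (b & 0xff) << (24 - 3);
        bits += (63 - 15) << 24;
        return Float.intBitsToFloat(bits);
    }

    public static void main(String[] args) {
        float lengthNorm = (float) (1.0 / Math.sqrt(5)); // ab_bc_cd@gmail.com: 5 tokens
        // Unboosted: 0.4472... decodes to 0.4375, the value in the first Result.
        System.out.println(byte315ToFloat(floatToByte315(lengthNorm)));
        // Boosted by 1.2: 0.5366... decodes to 0.5, the value in the second Result.
        System.out.println(byte315ToFloat(floatToByte315(1.2f * lengthNorm)));
    }
}
```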
Finally, the scoring formulas:
[b]TermQuery scoring formula
score = sqrt(freq) * idf * boost * norm
idf = ln(maxDoc / (docFreq + 1)) + 1.0
norm = fieldBoost / sqrt(fieldLength)[/b]
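Plugging the first hit into these formulas as a hand sanity check (assuming SimpleAnalyzer splits bc_bc@gmail.com into the 4 tokens bc, bc, gmail, com; the query boost set via query.setBoost(2F) is cancelled by query normalization for a single-term query, so it does not change the final score):

```java
public class ScoreCheck {
    public static void main(String[] args) {
        double freq = 2;                             // "bc" occurs twice in the field
        double idf = Math.log(3.0 / (3 + 1)) + 1.0;  // maxDoc = 3, docFreq = 3
        double boost = 1.0;                          // default document/field boost
        double norm = boost / Math.sqrt(4);          // fieldLength = 4 tokens
        double score = Math.sqrt(freq) * idf * norm; // fieldWeight, as shown by explain()
        System.out.println(score);                   // ≈ 0.5036848
    }
}
```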
Reference slides
1. [url=http://users.ir-lab.org/~huxg/LuceneRetrievalModel.ppt]view[/url]