BM25 vs Lucene Default Similarity: Comparing Precision and Recall

Reposted from: https://www.found.no/foundation/BM-vs-Lucene-default-similarity/


Introduction

In the previous article I looked into the difference between BM25 and the tf-idf similarity as they are defined. The default similarity in Lucene, and hence also in Elasticsearch, is however not a pure tf-idf implementation: it does in fact have a form of document length normalization too. Thanks to those who mentioned this! This makes it a little more difficult to compare it with the solutions described in academic papers, and the conclusion that BM25 is mostly superior to tf-idf might not apply to the tf-idf implementation in Lucene. To investigate this issue I will compare precision and recall using Wikipedia articles.
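To make the comparison concrete, here is a brief recap of the relevant length normalization components as they are commonly stated (my summary, not part of the original article): Lucene's DefaultSimilarity bakes a norm of roughly $1/\sqrt{|d|}$ into the index, while BM25 normalizes term frequency against the average document length:

$$\mathrm{lengthNorm}(d) = \frac{1}{\sqrt{|d|}}, \qquad \mathrm{score}_{BM25}(q,d) = \sum_{t \in q} \mathrm{IDF}(t)\, \frac{f(t,d)\,(k_1+1)}{f(t,d) + k_1\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)}$$

where $|d|$ is the document length in terms, avgdl is the average document length in the collection, and $k_1$ and $b$ are tunable parameters.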

Indexing the Data

Using the Wikipedia river, indexing the articles is pretty straightforward, but if you plan on indexing the entire dataset there are a few gotchas. Due to the sheer size of the dataset, just downloading the data takes a few hours and you will likely have to leave it running overnight. On my first attempt I used the default dump, but this of course was updated to a new dump midway through the download and the import failed. The solution was simple: use a specific dump. This also had the added benefit of making it easier to reproduce the exact same documents when indexing them with a different similarity model. Should you want to do something similar, I recommend you start querying right away; the river does not overload your cluster, and it is better to figure out that you should have changed the mappings or some other index setting before you have wasted all that time waiting.

The river settings:

{
    "type" : "wikipedia",
    "index" : {
        "index": "wikipedia",
        "type": "page",
        "url" : "http://dumps.wikimedia.org/enwiki/20131202/enwiki-20131202-pages-articles.xml.bz2",
        "bulk_size" : 1000,
        "flush_interval" : "1s",
        "max_concurrent_bulk" : 2
    }
}
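For completeness, a river in Elasticsearch 0.90.x is started by indexing its settings as the _meta document of the river in the special _river index. A minimal sketch in Python, assuming a local node on localhost:9200, the settings above saved to a file, and an arbitrary river name my_wikipedia_river (this step is not shown in the original article):

import json
import requests

ES = "http://localhost:9200"

# Load the river settings shown above (assumed saved as wikipedia_river.json).
with open("wikipedia_river.json") as f:
    river_settings = json.load(f)

# Registering a river means indexing its settings as _river/<name>/_meta;
# the wikipedia river plugin picks the document up and starts downloading.
resp = requests.put(ES + "/_river/my_wikipedia_river/_meta",
                    data=json.dumps(river_settings))
resp.raise_for_status()
print(resp.json())

Progress can then be followed with an ordinary count query against the wikipedia index.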

Index settings with default similarity:

{
   "settings":{
      "number_of_shards":2,
      "number_of_replicas":0
   },
   "mappings":{
      "page":{
         "properties":{
            "category":{
               "type":"string"
            },
            "disambiguation":{
               "type":"boolean"
            },
            "link":{
               "type":"string"
            },
            "redirect":{
               "type":"boolean"
            },
            "redirect_page":{
               "type":"string"
            },
            "special":{
               "type":"boolean"
            },
            "stub":{
               "type":"boolean"
            },
            "text":{
               "type":"string"
            },
            "title":{
               "type":"string"
            }
         }
      }
   }
}

Index settings with BM25 similarity:

{
   "settings":{
      "number_of_shards":2,
      "number_of_replicas":0
   },
   "mappings":{
      "page":{
         "properties":{
            "category":{
               "type":"string",
               "similarity":"BM25"
            },
            "disambiguation":{
               "type":"boolean",
               "similarity":"BM25"
            },
            "link":{
               "type":"string",
               "similarity":"BM25"
            },
            "redirect":{
               "type":"boolean"
            },
            "redirect_page":{
               "type":"string",
               "similarity":"BM25"
            },
            "special":{
               "type":"boolean"
            },
            "stub":{
               "type":"boolean"
            },
            "text":{
               "type":"string",
               "similarity":"BM25"
            },
            "title":{
               "type":"string",
               "similarity":"BM25"
            }
         }
      }
   }
}

As you can see, changing the similarity for a specific field is as simple as changing the mapping.
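Besides selecting BM25 per field, the k1 and b parameters of BM25 can be tuned by declaring a named similarity in the index settings and referencing it from the mapping. A minimal sketch, where the similarity name my_bm25, the index name wikipedia_bm25, the trimmed-down mapping and the parameter values (the usual defaults) are all illustrative choices of mine, not settings from the original article:

import json
import requests

ES = "http://localhost:9200"

index_body = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 0,
        # A named similarity with explicit BM25 parameters.
        "similarity": {
            "my_bm25": {
                "type": "BM25",
                "k1": 1.2,   # controls term frequency saturation
                "b": 0.75    # controls document length normalization
            }
        }
    },
    "mappings": {
        "page": {
            "properties": {
                "text":  {"type": "string", "similarity": "my_bm25"},
                "title": {"type": "string", "similarity": "my_bm25"}
            }
        }
    }
}

# Create the index with the custom similarity attached to the text fields.
resp = requests.put(ES + "/wikipedia_bm25", data=json.dumps(index_body))
print(resp.json())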

Versions

For this test I used Elasticsearch 0.90.7 and elasticsearch-river-wikipedia 1.3.0.

The Query Used

I started out experimenting with a basic match query, addressing all fields and comparing the results I got with what I could find in the search box on wikipedia.org. Two things became clear: the search on wikipedia.org mainly tries to match your keywords with titles, and the document collection also has many documents that simply represent alternate titles with a redirect link to another document. Considering Wikipedia's encyclopedic style, the matching of search queries with titles is a natural corollary of the strong aboutness the title has for such articles. The redirect titles are a good source of alternate titles and spellings, but the fact that the redirects appear as pages in the document collection is more of an implementation detail of MediaWiki; when indexed with Elasticsearch, they would fit better bundled with the document they point to.

The Wikipedia river actually detects the redirects and marks them with the boolean redirect field, making them easy to filter out. I decided to filter them away because for some documents the redirect document ended up with a better score, as it contained the desired terms and was that much shorter. To get to know the data better I also tried some queries on just the titles, and it was easy to find the articles I was looking for. If I were to create a new search box for Wikipedia with Elasticsearch, I would definitely give a boost to a title or redirect title match.

Now to address the real issue at hand. I needed a way to create a query set where each query has a known answer. Having witnessed the aboutness that the titles represent for their articles, I constructed my test by simply drawing 1000 random articles and using the title as the query keywords and the document text as the answer.

I then wrote a script that executed the query below, replacing <title> with the title of each chosen article.

{
    "query": {
        "filtered": {
            "query": {
                "match": {
                    "text": "<title>"
                }
            },
            "filter": {
                "bool": {
                    "must_not": [
                        {
                            "term": { "redirect": true }
                        },
                        {
                            "regexp": { "text": "redirect[^\\s]+" }
                        }
                    ]
                }
            }
        }
    },
    "explain": true
}

For this test the redirects would ideally not have been indexed at all, but the filters above handle this. The term filter uses the redirect field created by the river, and the regexp filter catches some remaining redirects that were not detected by the river.

The query is limited to the top 10 ranking hits (the default in Elasticsearch). Usually similarity models are compared by precision, the percentage of relevant documents within the result, and recall, the percentage of all relevant documents that are included in the result, but the limit of 10 hits and only one relevant document for each query makes these values a little skewed: every query will have a recall of either zero or one hundred percent and a precision of either zero or ten percent. Instead of the usual academic criteria, for this non-academic test I will use the percentage of queries that did not find the desired document as an indicator of recall, and the average rank of the desired document for the queries that did find it as an indicator of precision.
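A minimal sketch of the kind of script described above, in Python, assuming the index is named wikipedia and that the 1000 sampled (document id, title) pairs are already available; the helper names are mine, not from the original article. It computes exactly the two indicators just defined:

import json
import requests

ES = "http://localhost:9200"
INDEX = "wikipedia"

def search_text(title):
    """Run the filtered match query shown above for one title (top 10 hits)."""
    body = {
        "query": {
            "filtered": {
                "query": {"match": {"text": title}},
                "filter": {"bool": {"must_not": [
                    {"term": {"redirect": True}},
                    {"regexp": {"text": "redirect[^\\s]+"}}
                ]}}
            }
        },
        "size": 10
    }
    resp = requests.post(ES + "/" + INDEX + "/page/_search",
                         data=json.dumps(body))
    return resp.json()["hits"]["hits"]

def evaluate(samples):
    """samples: list of (doc_id, title) pairs drawn at random beforehand."""
    ranks, not_found = [], 0
    for doc_id, title in samples:
        hit_ids = [hit["_id"] for hit in search_text(title)]
        if doc_id in hit_ids:
            ranks.append(hit_ids.index(doc_id) + 1)  # ranks are 1-based
        else:
            not_found += 1
    avg_rank = sum(ranks) / float(len(ranks)) if ranks else None
    pct_not_found = 100.0 * not_found / len(samples)
    return avg_rank, pct_not_found

Pointing INDEX at the index with the BM25 mapping and rerunning gives the numbers for the second similarity.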

The result of the script is an indication of precision and recall for both similarities when matching Wikipedia document titles with document text (titles excluded).

Results and Conclusion

Similarity | Average rank | Documents not found
BM25       | 2.07         | 16.0%
Default    | 2.44         | 57.7%

Clearly BM25 performed far better than the default similarity in this case, but it is important to keep the 10-hit limit on the result size in mind when interpreting these results. If one were to retrieve more hits, it is likely that the percentage of documents not found would drop for both similarities, but the average rank would also grow.

By processing the hits of every query until the desired document is found, or the hits run out, one could calculate the actual precision and recall, but this would still only be circumstantial evidence, as the result would only concern matching titles to articles in the English version of Wikipedia. As this experiment is more in line with practical search than a formal proof, I chose to keep the 10-hit limit, as this is about as many hits as I would expect the average user to bother reading through before refining the query if the desired document was not found.

In other words, the result of this experiment is not a general proof that BM25 is always better than the default similarity, but it does suggest that there can be significant potential in using BM25 over the default similarity, at the very least for some cases. I strongly recommend taking the time to test both, with documents from your own use of Elasticsearch.


