UKBench Image Dataset

Official website of the UKBench image dataset: http://www.vis.uky.edu/~stewe/ukbench/


Revised set!

The first set that went online contained some errors, most notably one subset being included twice, along with some transposed images. Tests on the old set are invalid.

Recognition Benchmark Images

Henrik Stewénius and David Nistér

The set consists of groups of 4 images each; the full database contains 10,200 images (2,550 groups). All the images are 640x480.

If you use the dataset, please refer to:

D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2161-2168, June 2006.

Subsets

For users of subsets of the database, please note that the difficulty depends on the chosen subset. Important factors are:
  1. Difficulty of the objects themselves. CD covers are much easier than flowers. See the performance curve below.
  2. Sharpness of the images. Many of the indoor images are somewhat blurry, and this can affect some algorithms.
  3. Similar or identical objects. All the pictures were taken by CS students/faculty/staff, so keyboards and computer equipment are popular motifs. So is computer vision literature.

Download

Please note BEFORE starting your download that the file is almost 2 GB. Please save a local copy in order to save bandwidth on our server. Two downloads are available: the Zipped File and the Visual Words. We extracted visual words for each document and wrote them out one document per line; the data before ":" is a header, followed by the data. The vocabulary has 6 levels with a branching factor of 10 and was trained on unrelated data.
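
The exact layout of the visual-words file beyond this description is not given here, but as a rough sketch, assuming each line has the form "header : id id id ..." with integer word ids (the file name is a placeholder), it could be read like this:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: read_visual_words <visual_words_file>\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::string line;
    std::vector<std::vector<int>> docs;  // one vector of word ids per document
    while (std::getline(in, line)) {
        std::string::size_type colon = line.find(':');
        if (colon == std::string::npos) continue;          // skip malformed lines
        std::string header = line.substr(0, colon);        // header kept as an opaque string
        std::istringstream body(line.substr(colon + 1));   // word ids after the ":"
        std::vector<int> words;
        int w;
        while (body >> w) words.push_back(w);
        docs.push_back(words);
    }
    std::cout << "read " << docs.size() << " documents\n";
    return 0;
}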

Performance

In the paper we give results either for a subset of 6,376 images (all we had at the time) or for a smaller subset of 1,400 images. The smaller set was used when our implementation was not efficient enough to handle the larger set.

Performance Measures

Our simplest measure of performance is to count how many of the 4 images in a group are ranked in the top 4 when one image from that group is used as the query. For example, if a query returns itself and one other image from its group within the top 4, it contributes 2 to the count.

A MATLAB implementation which computes this measure is available for download.

Numbers for our measure on the full 10,200-image database, using different training sets for the quantizer (rows) and different scoring strategies (columns). The maximum possible score is 4 (see "How the score is computed" below):
                   Scoring Strategy
Quantizer    Flat        10          100         1000
cd           2.895588    2.574118    3.139706    3.161275
moving       2.828529    2.161275    3.014216    3.083824
moving+cd    2.884412    2.551078    3.139902    3.157157
flip         3.014412    2.534902    3.135098    3.188333
test         3.166373    3.070098    3.294314    3.286863
Please see the Semiprocessed Data page for explanations.

 
Performance curve: how our performance varies when taking subsets 0:n from the set. The different curves represent different choices of scoring strategy. For extremely fast applications we use flat scoring, while for better performance we use hierarchical scoring.
The feature extractor was set to use relatively few features for these experiments.
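
The scoring strategies themselves are described in the paper cited above. Purely for orientation, the sketch below shows one simple form of flat scoring: an inverted file over leaf-level visual words with TF-IDF weights. The weighting details and the omission of document-length normalization are simplifying assumptions of this sketch, not the original implementation.

#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Flat scoring sketch: each image is a bag of leaf-level visual words,
// words are weighted by inverse document frequency, and a document's score
// is accumulated over the words it shares with the query.
struct InvertedFile {
    int num_docs = 0;
    // visual word id -> postings list of (document id, term frequency);
    // document ids are assumed to run 0 .. num_docs-1
    std::map<int, std::vector<std::pair<int, int>>> postings;

    void add_document(int doc_id, const std::vector<int>& words) {
        std::map<int, int> tf;
        for (int w : words) ++tf[w];
        for (const auto& entry : tf)
            postings[entry.first].push_back({doc_id, entry.second});
        ++num_docs;
    }

    // Score every document against a query image given as a bag of words.
    std::vector<double> score(const std::vector<int>& query_words) const {
        std::map<int, int> qtf;
        for (int w : query_words) ++qtf[w];
        std::vector<double> scores(num_docs, 0.0);
        for (const auto& q : qtf) {
            auto it = postings.find(q.first);
            if (it == postings.end()) continue;
            // rarer words carry more weight
            double idf = std::log(double(num_docs) / double(it->second.size()));
            for (const auto& p : it->second)
                scores[p.first] += q.second * p.second * idf * idf;
        }
        return scores;
    }
};

Hierarchical scoring additionally lets inner nodes of the vocabulary tree contribute to the score, rather than only the leaves.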

How the score is computed

int nrblocks = nr_docs/4;
int totaltopcount = 0;
for( int block = 0; block < nrblocks; block++) {
  for( int i = 0; i < 4; i++) {
    int query = block*4 + i;                       // query image of this group
    for( int j = 0; j < 4; j++) {
      // rank of image block*4+j in the retrieval results for 'query'
      int r = find_rank_of_doc(block*4 + j, query);
      if( r < 4 )
        totaltopcount++;
    }
  }
}
double score = double(totaltopcount) / (nrblocks*4);
What we are measuring is how many of the images are found on average. Getting everything right gives a score of 4; getting nothing right gives a score of 0; getting only the identical image right gives a score of 1. A score of 3 means that we find the identical image plus 2 of the 3 other images of the set.
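
As a self-contained illustration of the same measure, the sketch below assumes the retrieval results are given as a similarity matrix sim[query][doc] (a stand-in for whatever retrieval system is being evaluated; higher means more similar) and breaks ties pessimistically, which is a choice of this sketch rather than something specified above.

#include <iostream>
#include <vector>

// Counts how many images of each group of 4 rank in the top 4 when every
// image is used as a query, and averages the count over all queries.
double ukbench_score(const std::vector<std::vector<double>>& sim) {
    const int nr_docs = static_cast<int>(sim.size());
    const int nr_blocks = nr_docs / 4;
    int total_top_count = 0;
    for (int block = 0; block < nr_blocks; ++block) {
        for (int i = 0; i < 4; ++i) {
            const int query = block * 4 + i;
            for (int j = 0; j < 4; ++j) {
                const int target = block * 4 + j;
                // rank of 'target' in the results for 'query': number of other
                // documents scoring at least as high (ties count against us)
                int rank = 0;
                for (int d = 0; d < nr_docs; ++d)
                    if (d != target && sim[query][d] >= sim[query][target]) ++rank;
                if (rank < 4) ++total_top_count;
            }
        }
    }
    return double(total_top_count) / (nr_blocks * 4);  // in [0, 4]
}

int main() {
    // Toy example: 8 documents (2 groups of 4); each query is similar only to
    // itself, so only the identical image is found and the score is 1.
    std::vector<std::vector<double>> sim(8, std::vector<double>(8, 0.0));
    for (int d = 0; d < 8; ++d) sim[d][d] = 1.0;
    std::cout << ukbench_score(sim) << "\n";  // prints 1
    return 0;
}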

Semiprocessed Data

We have computed lots of semiprocessed data, along with SIFT vectors for training; see the Semiprocessed Data page.

This page is maintained by Henrik Stewénius
