Original article: http://zybler.blogspot.com/2011/02/table-of-results-for-cifar-10-dataset.html
Table of results for CIFAR-10 dataset
1. Multi-Column Deep Neural Networks for Image Classification (CVPR 2012)
Cited 15 times. 88.79%
Supplemental material, Technical Report
2. Maxout Networks (arXiv 2013)
Cited 0 times. 87.07%
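A maxout unit replaces the usual elementwise nonlinearity with a max over several affine projections of the input. A minimal NumPy sketch of one maxout layer; the shapes and names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: output j is the max over k affine pieces.
    x: (d_in,), W: (k, d_in, d_out), b: (k, d_out)."""
    z = np.einsum('i,kij->kj', x, W) + b  # (k, d_out) pre-activations
    return z.max(axis=0)                  # elementwise max over the k pieces
```

With k = 2 pieces this can represent ReLU and absolute-value units as special cases, which is part of the paper's motivation.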
3. Practical Bayesian Optimization of Machine Learning Algorithms (NIPS 2012)
Cited 9 times. 85.02%
Additional info: With the data augmented by horizontal reflections and translations, 90.5% accuracy on the test set is achieved.
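That augmentation (horizontal reflections plus small random translations) can be sketched as below; the 4-pixel maximum shift and zero-filled borders are my assumptions, not necessarily the paper's exact settings:

```python
import numpy as np

def augment(image, max_shift=4, rng=None):
    """Randomly mirror and translate a 32x32x3 CIFAR-10 image.
    Pixels shifted in from outside the frame are zero-filled."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                      # horizontal reflection
        image = image[:, ::-1, :]
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    padded = np.pad(image, ((max_shift, max_shift),
                            (max_shift, max_shift), (0, 0)))
    h, w = image.shape[:2]
    return padded[max_shift + dy:max_shift + dy + h,
                  max_shift + dx:max_shift + dx + w, :]
```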
4. Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (2013)
Cited 1 time. 84.88%
Additional info: Stochastic Pooling (Stochastic-100)
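At training time, stochastic pooling samples one activation per pooling region with probability proportional to its magnitude; at test time the expectation (probabilistic weighting) is used instead, and "Stochastic-100" denotes averaging over 100 stochastic samples. A rough sketch of both rules for a single region, assuming non-negative (e.g. post-ReLU) activations:

```python
import numpy as np

def stochastic_pool(region, rng=None):
    """Training-time rule: sample one activation, with probability
    proportional to its (non-negative) value."""
    rng = rng or np.random.default_rng()
    total = region.sum()
    if total == 0:                 # degenerate region: all zeros
        return 0.0
    return rng.choice(region, p=region / total)

def prob_weighted_pool(region):
    """Test-time rule: the expectation of the stochastic rule."""
    total = region.sum()
    return 0.0 if total == 0 else float((region * region).sum() / total)
```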
5. Improving Neural Networks by Preventing Co-adaptation of Feature Detectors (2012)
Cited 4 times. 84.4%
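Entry 5 is the paper that introduced what is now called dropout. The training/test rule it describes can be sketched as follows (p = 0.5 and the "mean network" test-time scaling follow the paper; the function shape is my own):

```python
import numpy as np

def dropout(activations, p=0.5, train=True, rng=None):
    """Dropout: zero each unit with probability p during training;
    at test time approximate the ensemble with the 'mean network'
    by scaling activations by (1 - p) instead of dropping any."""
    if train:
        rng = rng or np.random.default_rng()
        mask = rng.random(activations.shape) >= p
        return activations * mask
    return activations * (1.0 - p)
```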
6. Discriminative Learning of Sum-Product Networks (NIPS 2012)
Cited 0 times. 83.96%
7. Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features (2012)
Cited 6 times. 83.11%
8. Learning Invariant Representations with Local Transformations (2012)
Cited 0 times. 82.2%
Additional info: TIOMP-1/T (combined, K= 4,000)
9. Learning Feature Representations with K-means (NNTOT 2012)
Cited 2 times. 82%
10. Selecting Receptive Fields in Deep Networks (NIPS 2011)
Cited 11 times. 82%
11. The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization (ICML 2011)
Cited 54 times. 81.5%
Source code: Adam Coates's web page
12. High-Performance Neural Networks for Visual Object Classification (2011)
Cited 14 times. 80.49%
13. Object Recognition with Hierarchical Kernel Descriptors (CVPR 2011)
Cited 19 times. 80%
Source code: Project web page
14. An Analysis of Single-Layer Networks in Unsupervised Feature Learning (NIPS Workshop 2010)
Cited 83 times. 79.6%
Additional info: K-means (Triangle, 4000 features)
Homepage: Link
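The "triangle" K-means encoding from this paper maps a patch x to f_k(x) = max(0, mu(z) - z_k), where z_k = ||x - c_k||_2 is the distance to centroid c_k and mu(z) is the mean distance over all centroids. A direct NumPy sketch of that formula:

```python
import numpy as np

def triangle_encode(patches, centroids):
    """Triangle activation from Coates et al.:
    f_k = max(0, mean_j(z_j) - z_k), with z_k = ||x - c_k||_2.
    patches: (n, d), centroids: (k, d) -> features: (n, k)."""
    dists = np.linalg.norm(patches[:, None, :] - centroids[None, :, :],
                           axis=2)                # (n, k) distances
    mu = dists.mean(axis=1, keepdims=True)        # mean distance per patch
    return np.maximum(0.0, mu - dists)            # sparse, non-negative codes
```

Roughly half of the outputs are zero for each patch, which gives a sparse code without any optimization at encoding time.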
15. Making a Science of Model Search (2012)
Cited 0 times. 79.1%
16. Convolutional Deep Belief Networks on CIFAR-10 (2010)
Cited 20 times. 78.9%
Additional info: 2 layers
17. Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery (2012)
Cited 2 times. 78.8%
18. Pooling-Invariant Image Feature Learning (arXiv 2012)
Cited 0 times. 78.71%
Additional info: 1600 codes, learnt using 2x PDL
19. Semiparametric Latent Variable Models for Guided Representation (2011)
Cited 2 times. 77.9%
20. Learning Separable Filters (2012)
Cited 0 times. 76%
21. Kernel Descriptors for Visual Recognition (NIPS 2010)
Cited 28 times. 76%
Additional info: KDES-A
Source code: Project web page
22. Image Descriptor Learning Using Deep Networks (2010)
Cited 0 times. 75.18%
23. Improved Local Coordinate Coding using Local Tangents (ICML 2010)
Cited 27 times. 74.5%
Additional info: Linear SVM with improved LCC
24. Tiled Convolutional Neural Networks (NIPS 2010)
Cited 20 times. 73.1%
Additional info: Deep Tiled CNNs (s=4, with finetuning)
Source code: Quoc V. Le's web page
25. Semiparametric Latent Variable Models for Guided Representation (2011)
Cited 2 times. 72.28%
Additional info: Alpha = 0.01
26. Modelling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines (CVPR 2010)
Cited 57 times. 71%
Additional info: mcRBM-DBN (11025-8192-8192), 3 layers, PCA’d images
27. On Autoencoders and Score Matching for Energy Based Models (ICML 2011)
Cited 4 times. 65.5%
28. Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images (JMLR 2010)
Cited 14 times. 65.3%
Additional info: 4,096 3-Way, 3 layer, ZCA’d images
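Several entries preprocess with ZCA whitening: decorrelate the data in PCA space, rescale each component, then rotate back so the result still looks image-like. A minimal sketch; the regularizer eps is an assumed smoothing constant, not a value from any of the papers:

```python
import numpy as np

def zca_whiten(X, eps=1e-2):
    """ZCA whitening of X (n_samples, n_features):
    apply W = U diag(1/sqrt(S + eps)) U^T to the mean-centered data,
    where cov = U diag(S) U^T is the data covariance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```

Unlike plain PCA whitening, the extra rotation back by U^T keeps each whitened dimension aligned with the original pixels, which matters when the downstream model expects image-shaped input.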
29. Learning Invariant Features through Local Space Contraction (2011)
Cited 2 times. 52.14%
Welcome to my CSDN blog: http://blog.csdn.net/anshan1984/