Table of results for CIFAR-10 dataset


Original post: http://zybler.blogspot.com/2011/02/table-of-results-for-cifar-10-dataset.html

 


This is a table documenting some of the best results papers have obtained on the CIFAR-10 dataset.

 

1. Multi-Column Deep Neural Networks for Image Classification (CVPR 2012)

Cited 15 times. 88.79%
Supplemental material, Technical Report

 

2. Maxout Networks (arXiv 2013)
Cited 0 times. 87.07%

 

3. Practical Bayesian Optimization of Machine Learning Algorithms (NIPS 2012)
Cited 9 times. 85.02%
Additional info: With data augmented with horizontal reflections and translations, 90.5% accuracy is achieved on the test set.
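The augmentation mentioned above is easy to reproduce. Below is a minimal NumPy sketch of random horizontal reflections and small translations for a 32x32 CIFAR-10 image; the ±4-pixel shift range and zero padding are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not from the paper): random horizontal reflection plus a
# small random translation of a single 32x32x3 CIFAR-10 image.
import numpy as np

def augment(image, max_shift=4, rng=np.random):
    """image: (32, 32, 3) array; returns one augmented copy."""
    out = image.copy()
    # Random horizontal reflection with probability 0.5.
    if rng.rand() < 0.5:
        out = out[:, ::-1, :]
    # Random translation: pad with zeros, then crop back to 32x32.
    dy, dx = rng.randint(-max_shift, max_shift + 1, size=2)
    padded = np.pad(out, ((max_shift, max_shift), (max_shift, max_shift), (0, 0)),
                    mode="constant")
    out = padded[max_shift + dy:max_shift + dy + 32,
                 max_shift + dx:max_shift + dx + 32, :]
    return out
```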

 

4. Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (2013)
Cited 1 time. 84.88%
Additional info: Stochastic Pooling / Stochastic-100 Pooling
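For reference, the pooling rule from this paper is simple to sketch: within each pooling window the non-negative activations are normalized into a multinomial, a location is sampled from it at training time, and a probability-weighted average is used at test time. The single-window version below is illustrative code under assumed shapes, not the authors' implementation.

```python
# Illustrative sketch of stochastic pooling over one pooling window.
import numpy as np

def stochastic_pool(region, train=True, rng=np.random):
    """region: 1-D array of non-negative activations in one pooling window."""
    a = np.asarray(region, dtype=float)
    s = a.sum()
    if s == 0:                          # all-zero window: nothing to sample
        return 0.0
    p = a / s                           # multinomial over locations
    if train:
        idx = rng.choice(len(a), p=p)   # sample one location during training
        return a[idx]
    # At test time: probabilistic weighting, sum_i p_i * a_i.
    return float((p * a).sum())
```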

 

5. Improving neural networks by preventing co-adaptation of feature detectors (2012)
Cited 4 times. 84.4%

 

6. Discriminative Learning of Sum-Product Networks (NIPS 2012)
Cited 0 times. 83.96%

 

7. Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features (2012)
Cited 6 times. 83.11%

 

8. Learning Invariant Representations with Local Transformations (2012)
Cited 0 times. 82.2%
Additional info: TIOMP-1/T (combined, K = 4,000)

 

9. Learning Feature Representations with K-means (NNTOT 2012)
Cited 2 times. 82%

 

10. Selecting Receptive Fields in Deep Networks (NIPS 2011)
Cited 11 times. 82%

 

11. The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization (ICML 2011)
Cited 54 times. 81.5%
Source code:
Adam Coates's web page

 

12. High-Performance Neural Networks for Visual Object Classification (2011)
Cited 14 times. 80.49%

 

13. Object Recognition with Hierarchical Kernel Descriptors (CVPR 2011)
Cited 19 times. 80%
Source code: 
Project web page

 

14. An Analysis of Single-Layer Networks in Unsupervised Feature Learning (NIPS Workshop 2010)
Cited 83 times. 79.6%
Additional info: K-means (Triangle, 4000 features)
Homepage:
Link
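The "triangle" K-means encoding named in entry 14 maps each whitened patch x to features f_k(x) = max(0, mean_j(z_j) - z_k), where z_k = ||x - c_k|| is the distance to centroid c_k. A minimal NumPy sketch of that encoding step (interfaces assumed for illustration, not the authors' code) is:

```python
# Sketch of the "triangle" K-means encoding from Coates et al.
import numpy as np

def triangle_encode(patches, centroids):
    """patches: (N, D) whitened patches; centroids: (K, D) K-means centroids.
    Returns an (N, K) feature matrix."""
    # Pairwise Euclidean distances z[n, k] between patches and centroids.
    z = np.sqrt(((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2))
    mu = z.mean(axis=1, keepdims=True)   # mean distance per patch
    return np.maximum(0.0, mu - z)       # "triangle" activation
```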

 

15. Making a Science of Model Search (2012)
Cited 0 times. 79.1%

 

16. Convolutional Deep Belief Networks on CIFAR-10 (2010)
Cited 20 times. 78.9%
Additional info: 2 layers

 

17. Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery (2012)
Cited 2 times. 78.8%

 

18. Pooling-Invariant Image Feature Learning (arXiv 2012)
Cited 0 times. 78.71%
Additional info: 1600 codes, learnt using 2x PDL

 

19. Semiparametric Latent Variable Models for Guided Representation (2011)
Cited 2 times. 77.9%

 

20. Learning Separable Filters (2012)
Cited 0 times. 76%

 

21. Kernel Descriptors for Visual Recognition (NIPS 2010)
Cited 28 times. 76%
Additional info: KDES-A
Source code:
Project web page

 

22. Image Descriptor Learning Using Deep Networks (2010)
Cited 0 times. 75.18%

 

23. Improved Local Coordinate Coding using Local Tangents (ICML 2010)
Cited 27 times. 74.5%
Additional info: Linear SVM with improved LCC

 

24. Tiled convolutional neural networks (NIPS 2010)
Cited 20 times. 73.1%
Additional info: Deep Tiled CNNs (s=4, with finetuning)
Source code: 
Quoc V. Le's web page

 

25. Semiparametric Latent Variable Models for Guided Representation (2011)
Cited 2 times. 72.28%
Additional info: Alpha = 0.01

 

26. Modelling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines (CVPR 2010)
Cited 57 times. 71%
Additional info: mcRBM-DBN (11025-8192-8192), 3 layers, PCA’d images

 

27. On Autoencoders and Score Matching for Energy Based Models (ICML 2011)
Cited 4 times. 65.5%

 

28. Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images (JMLR 2010)
Cited 14 times. 65.3%
Additional info: 4,096 3-Way, 3 layer, ZCA’d images
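Entries 26 and 28 both feed whitened images into their models ("PCA'd" and "ZCA'd" images respectively). As a reminder of what ZCA whitening does, here is a minimal sketch; the regularization epsilon is an assumption, not a value taken from either paper.

```python
# Minimal ZCA whitening sketch (illustrative, not from either paper).
import numpy as np

def zca_whiten(X, eps=1e-2):
    """X: (N, D) data matrix; returns ZCA-whitened data of the same shape."""
    X = X - X.mean(axis=0)                          # zero-mean each dimension
    cov = X.T @ X / X.shape[0]                      # D x D covariance
    U, S, _ = np.linalg.svd(cov)                    # eigendecomposition of covariance
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA whitening matrix
    return X @ W
```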

 

29. Learning invariant features through local space contraction (2011)
Cited 2 times. 52.14%

 


 
