Deep learning datasets and network accuracy

http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130

Classification datasets results


What is the class of this image?

Discover the current state of the art in object classification.

MNIST (50 results collected)

Units: error %

Classify handwritten digits. Some additional results are available on the original dataset page.

Result | Method | Venue
0.21% | Regularization of Neural Networks using DropConnect | ICML 2013
0.23% | Multi-column Deep Neural Networks for Image Classification | CVPR 2012
0.23% | APAC: Augmented PAttern Classification with Neural Networks | arXiv 2015
0.24% | Batch-normalized Maxout Network in Network | arXiv 2015
  Details: k = 5 maxout pieces in each maxout unit.
0.29% | Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree | AISTATS 2016
  Details: Single model without data augmentation.
0.31% | Recurrent Convolutional Neural Network for Object Recognition | CVPR 2015
0.31% | On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units | arXiv 2015
0.32% | Fractional Max-Pooling | arXiv 2015
  Details: Uses 12 passes at test time. Reaches 0.5% when using a single pass at test time.
0.33% | Competitive Multi-scale Convolution | arXiv 2015
0.35% | Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition | Neural Computation 2010
  Details: 6-layer NN 784-2500-2000-1500-1000-500-10 (on GPU), uses elastic distortions.
0.35% | C-SVDDNet: An Effective Single-Layer Network for Unsupervised Feature Learning | arXiv 2014
0.37% | Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network | arXiv 2015
  Details: No data augmentation.
0.39% | Efficient Learning of Sparse Representations with an Energy-Based Model | NIPS 2006
  Details: Large conv. net, unsupervised pretraining, uses elastic distortions.
0.39% | Convolutional Kernel Networks | arXiv 2014
  Details: No data augmentation.
0.39% | Deeply-Supervised Nets | arXiv 2014
0.4% | Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis | Document Analysis and Recognition 2003
0.40% | Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks | arXiv 2015
0.42% | Multi-Loss Regularized Deep Neural Network | CSVT 2015
  Details: Based on the NiN architecture.
0.45% | Maxout Networks | ICML 2013
  Details: Uses convolution. Does not use dataset augmentation.
0.45% | Training Very Deep Networks | NIPS 2015
  Details: Best result selected on the test set; 0.46% average over multiple trained models.
0.45% | ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks | arXiv 2015
0.46% | Deep Convolutional Neural Networks as Generic Feature Extractors | IJCNN 2015
  Details: The feature-extraction part of the convnet is trained on ImageNet (external training data); the classification part is trained on MNIST.
0.47% | Network in Network | ICLR 2014
  Details: NIN + Dropout. The code for NIN is available at https://github.com/mavenlin/cuda-convnet
0.52% | Trainable COSFIRE filters for keypoint detection and pattern recognition | PAMI 2013
  Details: Source code available.
0.53% | What is the Best Multi-Stage Architecture for Object Recognition? | ICCV 2009
  Details: Large conv. net, unsupervised pretraining, no distortions.
0.54% | Deformation Models for Image Recognition | PAMI 2007
  Details: K-NN with non-linear deformation (IDM). Preprocessing: shiftable edges.
0.54% | A trainable feature extractor for handwritten digit recognition | Journal Pattern Recognition 2007
  Details: Trainable feature extractor + SVMs, uses affine distortions.
0.56% | Training Invariant Support Vector Machines | Machine Learning 2002
  Details: Virtual SVM, deg-9 polynomial, 2-pixel jittered. Preprocessing: deskewing.
0.59% | Simple Methods for High-Performance Digit Recognition Based on Sparse Coding | TNN 2008
  Details: Unsupervised sparse features + SVM, no distortions.
0.62% | Unsupervised learning of invariant feature hierarchies with applications to object recognition | CVPR 2007
  Details: Large conv. net, unsupervised features, no distortions.
0.62% | PCANet: A Simple Deep Learning Baseline for Image Classification? | arXiv 2014
  Details: No data augmentation.
0.63% | Shape matching and object recognition using shape contexts | PAMI 2002
  Details: K-NN, shape context matching. Preprocessing: shape context feature extraction.
0.64% | Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features | CVPR 2012
0.68% | Handwritten Digit Recognition using Convolutional Neural Networks and Gabor Filters | ICCI 2003
0.69% | On Optimization Methods for Deep Learning | ICML 2011
0.71% | Deep Fried Convnets | ICCV 2015
  Details: Uses about 10x fewer parameters than the reference model, which reaches 0.87%.
0.75% | Sparse Activity and Sparse Connectivity in Supervised Learning | JMLR 2013
0.78% | Explaining and Harnessing Adversarial Examples | ICLR 2015
  Details: Uses a permutation-invariant network.
0.82% | Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations | ICML 2009
0.84% | Supervised Translation-Invariant Sparse Coding | CVPR 2010
  Details: Uses sparse coding + SVM.
0.94% | Large-Margin kNN Classification using a Deep Encoder Network | 2009
0.95% | Deep Boltzmann Machines | AISTATS 2009
1.01% | BinaryConnect: Training Deep Neural Networks with binary weights during propagations | NIPS 2015
  Details: Using 50% dropout.
1.1% | StrongNet: mostly unsupervised image recognition with strong neurons | technical report on ALGLIB website, 2014
  Details: StrongNet is a neural design with two innovations: (a) "strong neurons", highly nonlinear neurons with multiple outputs, and (b) a "mostly unsupervised architecture", a backpropagation-free design in which all layers except the last are trained in a completely unsupervised setting.
1.12% | CS81: Learning words with Deep Belief Networks | 2008
1.19% | Convolutional Neural Networks | 2003
  Details: The ConvNN is based on the paper "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis".
1.2% | Reducing the dimensionality of data with neural networks | 2006
1.40% | Convolutional Clustering for Unsupervised Learning | arXiv 2015
  Details: 2 layers + multi-dictionary.
1.5% | Deep learning via semi-supervised embedding | 2008
14.53% | Deep Representation Learning with Target Coding | AAAI 2015

CIFAR-10 (49 results collected)

Units: accuracy %

Classify 32x32 colour images.

Result | Method | Venue
96.53% | Fractional Max-Pooling | arXiv 2015
  Details: Uses 100 passes at test time. Reaches 95.5% when using a single pass at test time and 96.33% when using 12 passes. Uses data augmentation during training.
95.59% | Striving for Simplicity: The All Convolutional Net | ICLR 2015
  Details: 90.92% without data augmentation, 92.75% with small data augmentation, 95.59% when using aggressive data augmentation and a larger network.
94.16% | All you need is a good init | ICLR 2016
  Details: Only mirroring and random shifts, no extreme data augmentation. Uses a thin deep residual net with maxout activations.
94% | Lessons learned from manually classifying CIFAR-10 | unpublished 2011
  Details: Rough estimate from a single individual, over 400 training images (~1% of the training data).
93.95% | Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree | AISTATS 2016
  Details: Single model with data augmentation; 92.38% without.
93.72% | Spatially-sparse convolutional neural networks | arXiv 2014
93.63% | Scalable Bayesian Optimization Using Deep Neural Networks | ICML 2015
93.57% | Deep Residual Learning for Image Recognition | arXiv 2015
  Details: Best performance reached with 110 layers. Using 1202 layers leads to 92.07%; 56 layers leads to 93.03%.
93.45% | Fast and Accurate Deep Network Learning by Exponential Linear Units | arXiv 2015
  Details: Without data augmentation.
93.34% | Universum Prescription: Regularization using Unlabeled Data | arXiv 2015
93.25% | Batch-normalized Maxout Network in Network | arXiv 2015
  Details: k = 5 maxout pieces in each maxout unit. Reaches 92.15% without data augmentation.
93.13% | Competitive Multi-scale Convolution | arXiv 2015
92.91% | Recurrent Convolutional Neural Network for Object Recognition | CVPR 2015
  Details: Reaches 91.31% without data augmentation.
92.49% | Learning Activation Functions to Improve Deep Neural Networks | ICLR 2015
  Details: Uses an adaptive piecewise linear activation function. 92.49% accuracy with data augmentation, 90.41% without.
92.45% | cifar.torch | unpublished 2015
  Details: Code available at https://github.com/szagoruyko/cifar.torch
92.40% | Training Very Deep Networks | NIPS 2015
  Details: Best result selected on the test set; 92.31% average over multiple trained models.
92.23% | Stacked What-Where Auto-encoders | arXiv 2015
91.88% | Multi-Loss Regularized Deep Neural Network | CSVT 2015
  Details: With data augmentation; 90.45% without. Based on the NiN architecture.
91.78% | Deeply-Supervised Nets | arXiv 2014
  Details: Single model. With data augmentation: 91.78%; without: 90.22%.
91.73% | BinaryConnect: Training Deep Neural Networks with binary weights during propagations | NIPS 2015
  Details: Obtained without using any data augmentation.
91.48% | On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units | arXiv 2015
91.40% | Spectral Representations for Convolutional Neural Networks | NIPS 2015
91.2% | Network In Network | ICLR 2014
  Details: NIN + Dropout: 89.6%; NIN + Dropout + data augmentation: 91.2%. The code for NIN is available at https://github.com/mavenlin/cuda-convnet
91.19% | Speeding up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves | IJCAI 2015
  Details: Based on the "all convolutional" architecture, which reaches 90.92% by itself.
90.78% | Deep Networks with Internal Selective Attention through Feedback Connections | NIPS 2014
  Details: No data augmentation.
90.68% | Regularization of Neural Networks using DropConnect | ICML 2013
90.65% | Maxout Networks | ICML 2013
  Details: Obtained using both convolution and synthetic translations / horizontal reflections of the training data. Reaches 88.32% when using convolution but without any synthetic transformations of the training data.
90.61% | Improving Deep Neural Networks with Probabilistic Maxout Units | ICLR 2014
  Details: 88.65% without data augmentation; 90.61% when using data augmentation.
90.5% | Practical Bayesian Optimization of Machine Learning Algorithms | NIPS 2012
  Details: Reaches 85.02% without data augmentation. With data augmented with horizontal reflections and translations, 90.5% accuracy on the test set is achieved.
89.67% | APAC: Augmented PAttern Classification with Neural Networks | arXiv 2015
89.14% | Deep Convolutional Neural Networks as Generic Feature Extractors | IJCNN 2015
  Details: The feature-extraction part of the convnet is trained on ImageNet (external training data); the classification part is trained on CIFAR-10.
89% | ImageNet Classification with Deep Convolutional Neural Networks | NIPS 2012
  Details: 87% on the unaugmented data.
88.80% | Empirical Evaluation of Rectified Activations in Convolution Network | ICML workshop 2015
  Details: Using Randomized Leaky ReLU.
88.79% | Multi-Column Deep Neural Networks for Image Classification | CVPR 2012
87.65% | ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks | arXiv 2015
86.70% | An Analysis of Unsupervised Pre-training in Light of Recent Advances | ICLR 2015
  Details: Unsupervised pre-training with supervised fine-tuning. Uses dropout and data augmentation.
84.87% | Stochastic Pooling for Regularization of Deep Convolutional Neural Networks | arXiv 2013
84.4% | Improving neural networks by preventing co-adaptation of feature detectors | arXiv 2012
  Details: The so-called "dropout" method.
83.96% | Discriminative Learning of Sum-Product Networks | NIPS 2012
82.9% | Stable and Efficient Representation Learning with Nonnegativity Constraints | ICML 2014
  Details: Full data, 3 layers + multi-dictionary. Separate results are reported for 3 layers only and for a single layer.
82.2% | Learning Invariant Representations with Local Transformations | ICML 2012
  Details: K = 4,000.
82.18% | Convolutional Kernel Networks | arXiv 2014
  Details: No data augmentation.
82% | Discriminative Unsupervised Feature Learning with Convolutional Neural Networks | NIPS 2014
  Details: Unsupervised feature learning + linear SVM.
80.02% | Learning Smooth Pooling Regions for Visual Recognition | BMVC 2013
80% | Object Recognition with Hierarchical Kernel Descriptors | CVPR 2011
79.7% | Learning with Recursive Perceptual Representations | NIPS 2012
  Details: Code size 1600.
79.6% | An Analysis of Single-Layer Networks in Unsupervised Feature Learning | AISTATS 2011
  Details: 79.6% obtained using K-means over whitened patches, with triangle encoding and 4000 features (clusters).
78.67% | PCANet: A Simple Deep Learning Baseline for Image Classification? | arXiv 2014
  Details: No data augmentation. Multiple feature scales combined; 77.14% when using only a single scale.
75.86% | Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network | arXiv 2015
  Details: No data augmentation.


CIFAR-100 (31 results collected)

Units: accuracy %

Classify 32x32 colour images.

Result | Method | Venue
75.72% | Fast and Accurate Deep Network Learning by Exponential Linear Units | arXiv 2015
  Details: Without data augmentation.
75.7% | Spatially-sparse convolutional neural networks | arXiv 2014
73.61% | Fractional Max-Pooling | arXiv 2015
  Details: Uses 12 passes at test time. Reaches 68.55% when using a single pass at test time. Uses data augmentation during training.
72.60% | Scalable Bayesian Optimization Using Deep Neural Networks | ICML 2015
72.44% | Competitive Multi-scale Convolution | arXiv 2015
72.34% | All you need is a good init | ICLR 2016
  Details: Using the RMSProp optimizer.
71.14% | Batch-normalized Maxout Network in Network | arXiv 2015
  Details: k = 5 maxout pieces in each maxout unit.
70.80% | On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units | arXiv 2015
69.17% | Learning Activation Functions to Improve Deep Neural Networks | ICLR 2015
  Details: Uses a piecewise linear activation function. 69.17% accuracy with data augmentation, 65.6% without.
69.12% | Stacked What-Where Auto-encoders | arXiv 2015
68.53% | Multi-Loss Regularized Deep Neural Network | CSVT 2015
  Details: With data augmentation; 65.82% without. Based on the NiN architecture.
68.40% | Spectral Representations for Convolutional Neural Networks | NIPS 2015
68.25% | Recurrent Convolutional Neural Network for Object Recognition | CVPR 2015
67.76% | Training Very Deep Networks | NIPS 2015
  Details: Best result selected on the test set; 67.61% average over multiple trained models.
67.68% | Deep Convolutional Neural Networks as Generic Feature Extractors | IJCNN 2015
  Details: The feature-extraction part of the convnet is trained on ImageNet (external training data); the classification part is trained on CIFAR-100.
67.63% | Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree | AISTATS 2016
  Details: Single model without data augmentation.
67.38% | HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition | ICCV 2015
67.16% | Universum Prescription: Regularization using Unlabeled Data | arXiv 2015
66.29% | Striving for Simplicity: The All Convolutional Net | ICLR 2015
66.22% | Deep Networks with Internal Selective Attention through Feedback Connections | NIPS 2014
65.43% | Deeply-Supervised Nets | arXiv 2014
  Details: Single model, without data augmentation.
64.77% | Deep Representation Learning with Target Coding | AAAI 2015
64.32% | Network in Network | ICLR 2014
  Details: NIN + Dropout. The code for NIN is available at https://github.com/mavenlin/cuda-convnet
63.15% | Discriminative Transfer Learning with Tree-based Priors | NIPS 2013
  Details: The baseline "Convnet + max pooling + dropout" reaches 62.80% (without any tree prior).
61.86% | Improving Deep Neural Networks with Probabilistic Maxout Units | ICLR 2014
61.43% | Maxout Networks | ICML 2013
  Details: Uses convolution. Does not use dataset augmentation.
60.8% | Stable and Efficient Representation Learning with Nonnegativity Constraints | ICML 2014
  Details: 3 layers + multi-dictionary. Separate results are reported for 3 layers only and for a single layer.
59.75% | Empirical Evaluation of Rectified Activations in Convolution Network | ICML workshop 2015
  Details: Using Randomized Leaky ReLU.
57.49% | Stochastic Pooling for Regularization of Deep Convolutional Neural Networks | arXiv 2013
56.29% | Learning Smooth Pooling Regions for Visual Recognition | BMVC 2013
  Details: No data augmentation.
54.23% | Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features | CVPR 2012

STL-10 (18 results collected)

Units: accuracy %

Similar to CIFAR-10 but with 96x96 images. Original dataset website.

Result | Method | Venue
74.33% | Stacked What-Where Auto-encoders | arXiv 2015
74.10% | Convolutional Clustering for Unsupervised Learning | arXiv 2015
  Details: 3 layers + multi-dictionary. With 2 layers, reaches 71.4%.
73.15% | Deep Representation Learning with Target Coding | AAAI 2015
72.8% (±0.4%) | Discriminative Unsupervised Feature Learning with Convolutional Neural Networks | NIPS 2014
  Details: Unsupervised feature learning + linear SVM.
70.20% (±0.7%) | An Analysis of Unsupervised Pre-training in Light of Recent Advances | ICLR 2015
  Details: Unsupervised pre-training with supervised fine-tuning. Uses dropout and data augmentation.
70.1% (±0.6%) | Multi-Task Bayesian Optimization | NIPS 2013
  Details: Also uses CIFAR-10 training data.
68.23% (±0.5%) | C-SVDDNet: An Effective Single-Layer Network for Unsupervised Feature Learning | arXiv 2014
68% (±0.55%) | Committees of deep feedforward networks trained with few data | arXiv 2014
67.9% (±0.6%) | Stable and Efficient Representation Learning with Nonnegativity Constraints | ICML 2014
  Details: 3 layers + multi-dictionary. Separate results are reported for 3 layers only and for a single layer.
64.5% (±1%) | Unsupervised Feature Learning for RGB-D Based Object Recognition | ISER 2012
  Details: Hierarchical sparse coding using Matching Pursuit and K-SVD.
62.32% | Convolutional Kernel Networks | arXiv 2014
  Details: No data augmentation.
62.3% (±1%) | Discriminative Learning of Sum-Product Networks | NIPS 2012
61.0% (±0.58%) | No more meta-parameter tuning in unsupervised sparse feature learning | arXiv 2014
61% | Deep Learning of Invariant Features via Simulated Fixations in Video | NIPS 2012
60.1% (±1%) | Selecting Receptive Fields in Deep Networks | NIPS 2011
58.7% | Learning Invariant Representations with Local Transformations | ICML 2012
58.28% | Pooling-Invariant Image Feature Learning | arXiv 2012
  Details: 1600 codes, learnt using 2x PDL.
56.5% | Deep Learning of Invariant Features via Simulated Fixations in Video | NIPS 2012
  Details: When also trained with video (unrelated to STL-10), reaches 61%.

SVHN (17 results collected)

Units: error %

The Street View House Numbers (SVHN) Dataset.

SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirements on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but it incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images.

Result | Method | Venue
1.69% | Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree | AISTATS 2016
  Details: Single model without data augmentation.
1.76% | Competitive Multi-scale Convolution | arXiv 2015
1.77% | Recurrent Convolutional Neural Network for Object Recognition | CVPR 2015
  Details: Without data augmentation.
1.81% | Batch-normalized Maxout Network in Network | arXiv 2015
  Details: k = 5 maxout pieces in each maxout unit.
1.92% | Deeply-Supervised Nets | arXiv 2014
1.92% | Multi-Loss Regularized Deep Neural Network | CSVT 2015
  Details: Based on the NiN architecture.
1.94% | Regularization of Neural Networks using DropConnect | ICML 2013
1.97% | On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units | arXiv 2015
2% | Estimated human performance | NIPS 2011
  Details: Based on the paper that introduced the dataset, "Reading Digits in Natural Images with Unsupervised Feature Learning", section 5.
2.15% | BinaryConnect: Training Deep Neural Networks with binary weights during propagations | NIPS 2015
2.16% | Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks | ICLR 2014
  Details: For classification of individual digits with a single network, the error rate is 2.16%. For classification of the entire digit sequence (the first paper to do this), the error rate is 3.97%.
2.35% | Network in Network | ICLR 2014
  Details: NIN + Dropout. The code for NIN is available at https://github.com/mavenlin/cuda-convnet
2.38% | ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks | arXiv 2015
2.47% | Maxout Networks | ICML 2013
  Details: Obtained using convolution but without any synthetic transformations of the training data.
2.8% | Stochastic Pooling for Regularization of Deep Convolutional Neural Networks | arXiv 2013
  Details: 64-64-128 stochastic pooling.
3.96% | Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network | arXiv 2015
  Details: No data augmentation.
4.9% | Convolutional neural networks applied to house numbers digit classification | ICPR 2012
  Details: ConvNet / MS / L4 / Padded.

ILSVRC2012 task 1

Units: Error (5 guesses)

1000 categories classification challenge. With tens of thousands of training, validation and testing images.

See this interesting comparative analysis.

Results are collected on the following external web page.
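For reference, the "Error (5 guesses)" unit used above is the standard top-5 error: a prediction counts as correct if the true class appears among the model's five highest-scoring guesses. The sketch below shows one common way to compute it; the array names, shapes, and random scores are illustrative assumptions, not values from this page.

```python
# Minimal sketch of top-5 ("5 guesses") error computation.
import numpy as np

def top_k_error(scores: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Fraction of images whose true label is NOT among the k highest-scoring classes."""
    # Indices of the k largest scores per image (order within the k does not matter).
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Toy example: random scores for 4 images over 1000 classes.
rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 1000))
labels = np.array([3, 17, 999, 42])
print(f"top-5 error: {top_k_error(scores, labels):.2%}")
```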

The only MATLAB neural-network modulation-recognition code I could find online is the program that recognizes six kinds of signals. I downloaded and studied it, and found that what it actually does is train a modulation network on a single signal of your choice and then verify it with that same signal. For example: I train the network on a 2ASK signal and then use it to detect 2ASK, and the network correctly reports 2ASK; but if I feed it a 2FSK signal, it fails to detect it. So it does not implement what we actually want a neural network to do. I also inspected the various parameters after running that program and confirmed that it cannot distinguish the modulation types of different signals. I therefore wrote my own MATLAB program that applies a neural network to signal modulation recognition. It is only a first step: so far it uses only the instantaneous parameter γmax to distinguish 2FSK from 2ASK, and in my tests it does what I intended. The download is set to require no points; I hope beginners like me can make progress together, and I welcome pointers from more experienced readers.

Using MATLAB's neural network functions, I trained a network that distinguishes 2FSK from 2ASK based on the instantaneous parameter γmax. The .m files sig_2ASK and sig_2FSK generate the training data 2ASK_train and 2FSK_train and the test data 2ASK_test and 2FSK_test; network_2ASK_2FSK trains the neural network, the trained network is saved as net, and netout applies the trained network to classify the modulation of an input signal. After downloading the code, simply run netout to see the result.
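To make the idea concrete, here is a minimal Python sketch of the same approach (the original program is in MATLAB and is not reproduced here). It assumes the usual definition of γmax as the peak of the squared DFT magnitude of the centred, normalised instantaneous amplitude, divided by the number of samples; the signal parameters and the decision threshold are illustrative assumptions, not values taken from the original code.

```python
# Illustrative sketch: distinguishing 2ASK from 2FSK via the instantaneous parameter gamma_max.
# 2ASK has a strongly varying envelope (large gamma_max); 2FSK has a nearly constant
# envelope (small gamma_max). All parameters below are assumptions for the demo.
import numpy as np
from scipy.signal import hilbert

FS = 10_000          # sample rate (Hz)
SYMBOL_RATE = 100    # symbols per second
FC = 1_000           # carrier (Hz), also the first 2FSK tone
F1 = 2_000           # second 2FSK tone (Hz)

def random_bits(n_symbols: int, rng) -> np.ndarray:
    """Random 0/1 bit stream, upsampled to one value per sample."""
    samples_per_symbol = FS // SYMBOL_RATE
    return np.repeat(rng.integers(0, 2, n_symbols), samples_per_symbol)

def make_2ask(n_symbols: int, rng) -> np.ndarray:
    bits = random_bits(n_symbols, rng)
    t = np.arange(bits.size) / FS
    return bits * np.cos(2 * np.pi * FC * t)          # on-off keyed carrier

def make_2fsk(n_symbols: int, rng) -> np.ndarray:
    bits = random_bits(n_symbols, rng)
    t = np.arange(bits.size) / FS
    freq = np.where(bits == 0, FC, F1)                # switch between two tones
    return np.cos(2 * np.pi * freq * t)

def gamma_max(signal: np.ndarray) -> float:
    a = np.abs(hilbert(signal))                       # instantaneous amplitude (envelope)
    a_cn = a / a.mean() - 1.0                         # centred, normalised amplitude
    spectrum = np.abs(np.fft.fft(a_cn)) ** 2
    return spectrum.max() / a_cn.size

rng = np.random.default_rng(0)
THRESHOLD = 1.0  # illustrative decision threshold between the two classes
for name, sig in [("2ASK", make_2ask(200, rng)), ("2FSK", make_2fsk(200, rng))]:
    g = gamma_max(sig)
    guess = "2ASK" if g > THRESHOLD else "2FSK"
    print(f"{name}: gamma_max = {g:.2f} -> classified as {guess}")
```

Because the 2ASK envelope switches between two amplitude levels while the 2FSK envelope stays nearly constant, γmax separates the two classes by a wide margin, so a simple threshold, or a small neural network trained on γmax as in the MATLAB program described above, is enough.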
