ICLR 2013 (International Conference on Learning Representations): Deep Learning Papers

 

Accepted for Oral Presentation

Zero-Shot Learning Through Cross-Modal Transfer
Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani, Christopher D. Manning, Andrew Y. Ng
17 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state-of-the-art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
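
A minimal sketch of the two-stage decision described above, assuming a learned linear map from image features into the word-vector space, a distance-based outlier test against the seen-class word vectors, and a standard classifier for the seen classes. The data, class names, and threshold below are synthetic placeholders, not the authors' setup.

    # Hedged sketch of the zero-shot decision rule (synthetic stand-in, not the authors' code).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.neighbors import NearestCentroid

    rng = np.random.default_rng(0)
    d_img, d_sem, n = 50, 20, 400
    seen_vecs = {c: rng.normal(size=d_sem) for c in ["cat", "dog", "car"]}     # from a text corpus
    unseen_vecs = {c: rng.normal(size=d_sem) for c in ["truck", "horse"]}      # no training images

    # Synthetic training images for the seen classes.
    y_train = rng.choice(list(seen_vecs), size=n)
    W_true = rng.normal(size=(d_sem, d_img))
    X_train = np.stack([seen_vecs[c] @ W_true for c in y_train]) + 0.1 * rng.normal(size=(n, d_img))

    # 1) Learn a cross-modal map from image features into the semantic (word-vector) space.
    T = Ridge(alpha=1.0).fit(X_train, np.stack([seen_vecs[c] for c in y_train]))
    # 2) Standard classifier for the seen classes (nearest centroid as a simple stand-in).
    clf_seen = NearestCentroid().fit(X_train, y_train)

    def predict(x, outlier_threshold=2.0):       # threshold would be tuned in practice
        z = T.predict(x[None])[0]                # project the image into the semantic space
        d_seen = min(np.linalg.norm(z - v) for v in seen_vecs.values())
        if d_seen < outlier_threshold:           # not an outlier: use the seen-class model
            return clf_seen.predict(x[None])[0]
        # Outlier: classify among unseen classes by nearest word vector.
        return min(unseen_vecs, key=lambda c: np.linalg.norm(z - unseen_vecs[c]))

    print(predict(X_train[0]))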

 

A Nested HDP for Hierarchical Topic Models
John Paisley, Chong Wang, David Blei, Michael I. Jordan
16 Jan 2013  arXiv  4 Comments  ICLR 2013 Workshop Track
 
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.

 

Big Neural Networks Waste Capacity
Yann N. Dauphin, Yoshua Bengio
16 Jan 2013  arXiv  9 Comments  ICLR 2013 Workshop Track
 
This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggests diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails in this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may make it possible to improve the generalization error on large datasets, for which a large capacity is required.

 

Affinity Weighted Embedding
Jason Weston, Ron Weiss, Hector Yee
18 Jan 2013  arXiv  3 Comments  ICLR 2013 Workshop Track
 
Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets, they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which aim to provide improved performance while retaining many of the benefits of the existing class of embedding models. Our new approach works by iteratively learning a linear embedding model where the next iteration's features and labels are reweighted as a function of the previous iteration. We describe several variants of the family, and give some initial results.

 

Joint Training Deep Boltzmann Machines for Classification
Ian J. Goodfellow, Aaron Courville, Yoshua Bengio
16 Jan 2013  arXiv  6 Comments  ICLR 2013 Workshop Track
 
We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches.

 

Why Size Matters: Feature Coding as Nystrom Sampling
Oriol Vinyals, Yangqing Jia, Trevor Darrell
23 Jan 2013  arXiv  3 Comments  ICLR 2013 Workshop Track
 
Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explain the positive effect of the codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
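
The Nystrom view is easy to make concrete: treat a small dictionary as a subsample of the data and check how well K_nm K_mm^{-1} K_mn approximates the full kernel matrix as the dictionary grows. The sketch below is purely illustrative; the RBF kernel, sizes, and random data are assumptions.

    # Nystrom-approximation sketch: error drops with dictionary size m, then saturates.
    import numpy as np

    def rbf(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    K = rbf(X, X)                            # exact (expensive) kernel matrix

    for m in (10, 50, 200):                  # "dictionary size" in the coding analogy
        idx = rng.choice(len(X), m, replace=False)
        C = rbf(X, X[idx])                   # K_nm
        W = rbf(X[idx], X[idx])              # K_mm
        K_hat = C @ np.linalg.pinv(W) @ C.T  # Nystrom approximation of K
        err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
        print(f"m={m:4d}  relative error {err:.3f}")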

 

Deep Learning for Detecting Robotic Grasps
Ian Lenz, Honglak Lee, Ashutosh Saxena
16 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
In this work, we consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. We present a two-step cascaded structure, where we have two deep networks, with the top detections from the first one re-evaluated by the second one. The first deep network has fewer features, is therefore faster to run, but makes more mistakes. The second network has more features and therefore gives better detections. Unlike previous works that need to design these features manually, deep learning gives us flexibility in designing such multi-step cascaded detectors.

 

Accepted for Poster Presentation

Latent Relation Representations for Universal Schemas
Sebastian Riedel, Limin Yao, Andrew McCallum
20 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of pre-existing databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such a schema we present a family of matrix factorization models that predict affinity between database tuples and relations. We show that this achieves substantially higher accuracy than the traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually supporting ways. By doing so our approach outperforms state-of-the-art distant supervision systems.
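
A hedged sketch of one simple member of such a factorization family: embed tuples and relations, score affinity with a dot product, and train with a logistic loss over observed facts plus sampled negatives. The data, dimensions, and negative-sampling scheme below are placeholders, not the paper's exact models.

    # Logistic matrix factorization of a (tuple x relation) fact matrix -- illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_tuples, n_relations, k = 200, 50, 16
    facts = {(int(rng.integers(n_tuples)), int(rng.integers(n_relations))) for _ in range(1000)}

    T = 0.1 * rng.normal(size=(n_tuples, k))      # tuple (entity-pair) embeddings
    R = 0.1 * rng.normal(size=(n_relations, k))   # relation embeddings (surface forms + DB relations)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.05
    for epoch in range(20):
        for (t, r) in facts:
            # One observed fact (label 1) and one sampled negative relation (label 0).
            for rel, y in ((r, 1.0), (int(rng.integers(n_relations)), 0.0)):
                g = sigmoid(T[t] @ R[rel]) - y
                gT, gR = g * R[rel], g * T[t]
                T[t] -= lr * gT
                R[rel] -= lr * gR

    t, r = next(iter(facts))
    print("affinity for an observed fact:", sigmoid(T[t] @ R[r]))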

 

Learning Stable Group Invariant Representations with Convolutional Networks
Joan Bruna, Arthur Szlam, Yann LeCun
17 Jan 2013  arXiv  3 Comments  ICLR 2013 Workshop Track
 
Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do not account for all the variability of complex signal classes. We show that the invariance properties built by deep convolutional networks can be cast as a form of stable group invariance. The network wiring architecture determines the invariance group, while the trainable filter coefficients characterize the group action. We give explanatory examples which illustrate how the network architecture controls the resulting invariance group. We also explore the principle by which additional convolutional layers induce a group factorization enabling more abstract, powerful invariant representations.

 

Tree structured sparse coding on cubes
Arthur Szlam
16 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
A brief description of tree structured sparse coding on the binary cube.

 

Two SVDs produce more focal deep learning representations
Hinrich Schuetze, Christian Scheible
17 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality, which we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012).

 

Matrix Approximation under Local Low-Rank Assumption
Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer
15 Jan 2013  arXiv  4 Comments  ICLR 2013 Workshop Track
 
Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks.

 

Hierarchical Data Representation Model - Multi-layer NMF
Hyun Ah Song, Soo-Young Lee
28 Jan 2013  arXiv  5 Comments  ICLR 2013 Workshop Track
 
Understanding and representing the underlying structure of feature hierarchies present in complex data in an intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with a sparsity constraint. We stack a simple unit algorithm into several layers to take a step-by-step approach to learning. By utilizing NMF as the unit algorithm, our proposed network provides an intuitive understanding of the learning process: it demonstrates the hierarchical feature development process and discovers and represents the feature hierarchies present in complex data in an intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate the feature hierarchies present in these complex data. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and show that the hierarchical feature learning approach outperforms a standard shallow network. By exposing the underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationships between concepts and, at the same time, to perform better with the small number of features provided for data representation.
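
A minimal sketch of the stacking idea, using scikit-learn's NMF as the unit algorithm: factorize the data, then factorize the resulting activation matrix again at each layer. Sizes and data are placeholders, and the sparsity constraint described above is omitted for brevity.

    # Multi-layer NMF sketch: each layer factorizes the previous layer's activations.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    V = np.abs(rng.normal(size=(200, 100)))       # nonnegative data (e.g., vectorized images)

    H, dictionaries = V, []
    for r in [64, 32, 16]:                        # layer sizes, coarser features at each layer
        model = NMF(n_components=r, init="nndsvda", max_iter=500)
        W = model.fit_transform(H)                # activations passed to the next layer
        dictionaries.append(model.components_)    # this layer's learned features
        H = W

    print([D.shape for D in dictionaries])        # the resulting feature hierarchy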

 

A Semantic Matching Energy Function for Learning with Multi-relational Data
Xavier Glorot, Antoine Bordes, Jason Weston, Yoshua Bengio
16 Jan 2013  arXiv  3 Comments  ICLR 2013 Workshop Track
 
Large-scale relational learning is becoming crucial for handling the huge amounts of structured data generated daily in many application domains, ranging from computational biology and information retrieval to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature.

 

Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models
Derek Rose, Itamar Arel
17 Jan 2013  arXiv  3 Comments  ICLR 2013 Workshop Track
 
Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propose the use of supervised error signals from gradient descent to tune the established maps within the model. This technique allows us to learn what would otherwise be a design choice within the model and specialize the maps to aggregate areas of invariance for the task presented. Preliminary results show moderate potential gains in classification accuracy and highlight areas of importance within the intermediate feature representation space.

 

Regularized Discriminant Embedding for Visual Descriptor Learning
Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi
17 Jan 2013  arXiv  3 Comments  ICLR 2013 Workshop Track
 
Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various environmental conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching, but far apart pairs and (2) non-matching, but close pairs in the original feature space (e.g., SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method can better distinguish relevant images from irrelevant, but look-alike images.

 

Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums
Yoonseop Kang, Seungjin Choi
16 Jan 2013  arXiv  4 Comments  ICLR 2013 Workshop Track
 
We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views, and it learns these switch parameters during training. Numerical experiments on synthetic and real-world datasets demonstrate the useful behavior of SA-MVH compared to existing multi-view feature extraction methods.

 

Visual Objects Classification with Sliding Spatial Pyramid Matching
Hao Wooi Lim, Yong Haur Tay
19 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
We present a method for visual object classification using only a single feature, transformed color SIFT, with a variant of Spatial Pyramid Matching (SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an ensemble of linear regression (provided by LINEAR) to obtain a state-of-the-art result on Caltech-101 of 83.46%. SSPM is a version of SPM where, instead of dividing an image into K regions, a subwindow of fixed size is slid across the image with a fixed step size. For each subwindow, a histogram of visual words is generated. To obtain the visual vocabulary, instead of performing K-means clustering, we randomly pick N exemplars from the training set and encode them with a soft non-linear mapping method. We then train 15 models, each with a different visual-word size, with linear regression. All 15 models are then averaged together to form a single strong model.
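
The sliding-window histogram step is simple enough to sketch; the per-pixel visual-word map, window size, and step size below are synthetic assumptions.

    # Sliding Spatial Pyramid Matching sketch: a histogram of visual words per subwindow.
    import numpy as np

    rng = np.random.default_rng(0)
    n_words = 32
    words = rng.integers(n_words, size=(64, 64))   # visual-word index per pixel/patch (synthetic)

    def sliding_histograms(word_map, win=16, step=8):
        H, W = word_map.shape
        feats = []
        for i in range(0, H - win + 1, step):
            for j in range(0, W - win + 1, step):
                patch = word_map[i:i + win, j:j + win]
                feats.append(np.bincount(patch.ravel(), minlength=n_words))
        return np.concatenate(feats).astype(float)

    f = sliding_histograms(words)
    print(f.shape)   # one long descriptor; 15 such models with different vocabulary sizes are averaged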

 

Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng
17 Jan 2013  arXiv  2 Comments  ICLR 2013 Workshop Track
 
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%.
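
A hedged sketch of the scoring function commonly associated with neural tensor networks, g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b), with one set of parameters per relation; the dimensions and random parameters below are placeholders rather than trained values.

    # Neural tensor network scoring sketch (random, untrained parameters for illustration).
    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 25, 4                       # entity dimension, number of tensor slices
    e1, e2 = rng.normal(size=d), rng.normal(size=d)   # entity vectors (e.g., from word vectors)

    W = rng.normal(size=(k, d, d))     # bilinear tensor: one d x d slice per hidden unit
    V = rng.normal(size=(k, 2 * d))    # standard feed-forward term
    b = rng.normal(size=k)
    u = rng.normal(size=k)             # maps the k hidden units to a relation score

    def ntn_score(e1, e2):
        bilinear = np.einsum("i,kij,j->k", e1, W, e2)      # e1^T W[slice] e2 for each slice
        hidden = np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b)
        return u @ hidden                                  # higher = more plausible relationship

    print(ntn_score(e1, e2))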

 

The Manifold of Human Emotions
Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa
16 Jan 2013  arXiv  4 Comments  ICLR 2013 Workshop Track
 
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities.

 

Not selected for presentation at this time

Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.

 

A Geometric Descriptor for Cell-Division Detection
Marcelo Cicconet, Italo Lima, Davi Geiger, Kris Gunsalus
16 Jan 2013  arXiv  3 Comments
We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layer processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the two-step piecewise-constant function that best fits the sequence of centroids determines the frame where the division occurs.

 

We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels (i.e., pain intensity scores) and introduce novel dynamic features, dynamic ranks, that impose temporal ordinal constraints on the static ranks (i.e., intensity scores). Our experimental results show that the proposed approach outperforms state-of-the-art methods for sequence classification with ordinal data and other ordinal regression models. The approach performs significantly better than other models in terms of the Intra-Class Correlation measure, which is the most accepted evaluation measure in the task of facial behaviour intensity estimation.

 

This paper presents a basic enhancement to the DeSTIN deep learning architecture by replacing the explicitly calculated transition tables that are used to capture temporal features with a simpler, more scalable mechanism. This mechanism uses feedback of state information to cluster over a space comprised of both the spatial input and the current state. The resulting architecture achieves state-of-the-art results on the MNIST classification benchmark.

 

Large scale agglomerative clustering is hindered by computational burdens. We propose a novel scheme where exact inter-instance distance calculation is replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing (KLSH) hashed values. This results in a method that drastically decreases computation time. Additionally, we take advantage of certain labeled data points via distance metric learning to achieve competitive precision and recall compared to K-Means, but in much less computation time.

 

Faster and cheaper next-generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of the medical record will include sequenced genomes. A fundamental question is how to efficiently extract the genomic and epigenomic variants of clinical utility that will inform optimal wellness and intervention strategies. The traditional paradigm for identifying variants of clinical validity is to test the association of the variants. However, significantly associated genetic variants may or may not be useful for the diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce the concepts of sufficient dimension reduction (SDR) and the coordinate hypothesis, which project the original high-dimensional data onto a very low-dimensional space while preserving all information on the response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million or even more predictors, with the aid of dividing the SDR for the whole genome into a number of sub-SDR problems defined over genomic regions. The sparse SDR is in turn formulated as a sparse optimal scoring problem, but with a penalty that can remove row vectors from the basis matrix. To speed up computation, we develop a modified alternating direction method of multipliers to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulated data and the NHLBI Exome Sequencing Project dataset.

 

Accepted for Oral Presentation

Complexity of Representation and Inference in Compositional Models with Part Sharing
Alan L. Yuille, Roozbeh Mottaghi
16 Jan 2013  arXiv  8 Comments  ICLR 2013 Conference Track
 
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., "neurons") for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.

 

Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
Matthew D. Zeiler, Rob Fergus
16 Jan 2013  arXiv  6 Comments  ICLR 2013 Conference Track
 
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
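
A minimal sketch of the pooling rule for a single region: at training time sample one activation with probability proportional to its value, and at test time use the probability-weighted average. The 2x2 region and shapes are illustrative assumptions.

    # Stochastic pooling sketch for one pooling region.
    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_pool(region):
        """Training time: pick an activation with probability proportional to its value."""
        a = np.maximum(region.ravel(), 0.0)        # activations assumed nonnegative (post-ReLU)
        if a.sum() == 0:
            return 0.0
        return rng.choice(a, p=a / a.sum())        # multinomial given by the activities

    def prob_weighted_pool(region):
        """Test time: probability-weighted average instead of sampling."""
        a = np.maximum(region.ravel(), 0.0)
        return 0.0 if a.sum() == 0 else float(a @ (a / a.sum()))

    region = rng.normal(size=(2, 2))
    print(stochastic_pool(region), prob_weighted_pool(region))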

 

Indoor Semantic Segmentation using depth information
Camille Couprie, Clément Farabet, Laurent Najman, Yann LeCun
16 Jan 2013  arXiv  6 Comments  ICLR 2013 Conference Track
 
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real-time using appropriate hardware such as an FPGA.

 

Discriminative Recurrent Sparse Auto-Encoders
Jason Tyler Rolfe, Yann LeCun
16 Jan 2013  arXiv  17 Comments  ICLR 2013 Conference Track
 
We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of a deep network while substantially reducing the number of trainable parameters. From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The system spontaneously learns categorical units, whose activity builds up over time through interactions with part units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST.

 

Efficient Learning of Domain-invariant Image Representations
Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko
15 Jan 2013  arXiv  7 Comments  ICLR 2013 Conference Track
 
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.

 

Barnes-Hut-SNE
Laurens van der Maaten
16 Jan 2013  arXiv  9 Comments  ICLR 2013 Conference Track
 
The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects.
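
scikit-learn's TSNE estimator exposes the same Barnes-Hut approximation, so a usage sketch is short; the random data below is a placeholder, and the angle parameter plays the role of the Barnes-Hut accuracy/speed trade-off (theta).

    # Usage sketch of Barnes-Hut t-SNE via scikit-learn (placeholder data).
    import numpy as np
    from sklearn.manifold import TSNE

    X = np.random.default_rng(0).normal(size=(2000, 50))
    emb = TSNE(n_components=2, method="barnes_hut", angle=0.5, init="pca",
               perplexity=30).fit_transform(X)
    print(emb.shape)   # (2000, 2) embedding, computed with the O(N log N) approximation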

 

Discrete Restricted Boltzmann Machines
Guido F. Montufar, Jason Morton
16 Jan 2013  arXiv  5 Comments  ICLR 2013 Conference Track
 
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete na\"ive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code

 

Information Theoretic Learning with Infinitely Divisible Kernels
Luis G. Sanchez Giraldo, Jose C. Principe
16 Jan 2013  arXiv  8 Comments  ICLR 2013 Conference Track
 
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug-in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results.
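
A hedged sketch of an entropy-like functional of the Renyi form computed directly from the eigenvalues of a normalized Gram matrix, with no density estimation; the Gaussian kernel, bandwidth, and alpha = 2 below are illustrative choices, not the paper's full framework.

    # Matrix-based Renyi-style entropy sketch on a normalized Gram matrix.
    import numpy as np

    def gram_entropy(X, sigma=1.0, alpha=2.0):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma ** 2))           # infinitely divisible Gaussian kernel
        A = K / np.trace(K)                          # normalize so the eigenvalues sum to 1
        lam = np.clip(np.linalg.eigvalsh(A), 0, None)
        return np.log2((lam ** alpha).sum()) / (1 - alpha)

    X = np.random.default_rng(0).normal(size=(100, 5))
    print(gram_entropy(X))   # entropy-like quantity obtained without estimating a density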

 

What Regularized Auto-Encoders Learn from the Data Generating Distribution
Guillaume Alain, Yoshua Bengio
16 Jan 2013  arXiv  6 Comments  ICLR 2013 Conference Track
 
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in a way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just the encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
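
The stated relationship, (r(x) - x) / sigma^2 approximating the score d/dx log p(x), can be checked numerically in a few lines. The sketch below uses 1-D standard Gaussian data and a linear denoiser fit by least squares as a deliberately minimal stand-in for an auto-encoder.

    # Numeric check: (r(x) - x) / sigma^2 is close to the score -x of N(0, 1).
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.1
    x = rng.normal(size=200_000)                        # data ~ N(0, 1), true score(x) = -x
    x_noisy = x + sigma * rng.normal(size=x.size)

    # Fit r(x_noisy) = a * x_noisy + b to reconstruct the clean x (denoising criterion).
    A = np.stack([x_noisy, np.ones_like(x_noisy)], axis=1)
    a, b = np.linalg.lstsq(A, x, rcond=None)[0]

    x0 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    print((a * x0 + b - x0) / sigma ** 2)               # approximately -x0, the true score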

 

Feature grouping from spatially constrained multiplicative interaction
Felix Bauer, Roland Memisevic
17 Jan 2013  arXiv  4 Comments  ICLR 2013 Conference Track
 
We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation "columns" as well as topographic filter maps follow naturally from training the model on image pairs. The model also helps explain why square-pooling models yield feature groups with similar grouping properties. Experimental results on synthetic image transformations show that spatially constrained gating is an effective way to reduce the number of parameters and thereby to regularize a transformation-learning model.

 

The Neural Representation Benchmark and its Evaluation on Brain and Machine
Charles F. Cadieu, Ha Hong, Dan Yamins, Nicolas Pinto, Najib J. Majaj, James J. DiCarlo
17 Jan 2013  arXiv  6 Comments  ICLR 2013 Conference Track
 
A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines.

 

Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, Frank Seide
16 Jan 2013  arXiv  4 Comments  ICLR 2013 Conference Track
 
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.

 

Herded Gibbs Sampling
Luke Bornn, Yutian Chen, Nando de Freitas, Mareija Eskelin, Jing Fang, Max Welling
18 Jan 2013  arXiv  9 Comments  ICLR 2013 Conference Track
 
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
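
A hedged sketch of the deterministic herding step that replaces random draws, shown on a toy two-variable binary model. The weight bookkeeping is simplified here (the paper maintains weights per variable and per neighbour configuration), so this illustrates the idea rather than reproducing the paper's algorithm.

    # Herded-Gibbs-style deterministic updates on a toy 2-variable binary model.
    import numpy as np

    J, h = 0.8, 0.2                          # coupling and field; variables take values in {0, 1}
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def conditional(i, x):
        """p(x_i = 1 | x_j) for the toy model."""
        return sigmoid(h + J * x[1 - i])

    x = np.array([0, 0])
    w = np.zeros(2)                          # one herding weight per full conditional (simplified)
    counts = np.zeros(2)
    T = 10_000
    for t in range(T):
        for i in (0, 1):
            p = conditional(i, x)
            x[i] = 1 if w[i] > 0 else 0      # deterministic choice instead of a random draw
            w[i] += p - x[i]                 # herding update keeps the empirical frequency near p
        counts += x

    print(counts / T)                        # empirical marginals obtained without randomness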

 

Knowledge Matters: Importance of Prior Information for Optimization
Caglar Gulcehre, Yoshua Bengio
18 Jan 2013  arXiv  10 Comments  ICLR 2013 Conference Track
 
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in an image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, as well as traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.

 

Accepted for Poster Presentation

The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
Hugo Van hamme
16 Jan 2013  arXiv  6 Comments  ICLR 2013 Conference Track
 
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
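
For reference, the simple multiplicative updates (MU) for KL-divergence NMF that the diagonalized Newton algorithm is designed to outpace can be written in a few lines; the data, rank, and iteration count below are illustrative assumptions.

    # Baseline multiplicative updates for KL-divergence NMF (the algorithm DNA accelerates).
    import numpy as np

    rng = np.random.default_rng(0)
    V = np.abs(rng.normal(size=(100, 80))) + 1e-9       # nonnegative data
    r = 10
    W = np.abs(rng.normal(size=(100, r)))
    H = np.abs(rng.normal(size=(r, 80)))

    for it in range(200):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / W.sum(axis=0, keepdims=True).T
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / H.sum(axis=1, keepdims=True).T

    kl = (V * np.log(V / (W @ H + 1e-12)) - V + W @ H).sum()
    print(f"KL divergence after MU: {kl:.3f}")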

 

Local Component Analysis
Nicolas Le Roux, Francis Bach
11 Jan 2013  arXiv  4 Comments  ICLR 2013 Conference Track
 
Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting.

 

Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients
Tom Schaul, Yann LeCun
16 Jan 2013  arXiv  4 Comments  ICLR 2013 Conference Track
 
Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free.

 

Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio
16 Jan 2013  arXiv  4 Comments  ICLR 2013 Conference Track
 
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.

 

Cutting Recursive Autoencoder Trees
Christian Scheible, Hinrich Schuetze
20 Jan 2013  arXiv  7 Comments  ICLR 2013 Conference Track
 
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment.

 

Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals
Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine Saillet, Christian Bénar, Théodore Papadopoulo
17 Jan 2013  arXiv  5 Comments  ICLR 2013 Conference Track
 
Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignment and reveal higher-level features. In this case, however, small misalignments or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data.

 

Saturating Auto-Encoder
Rostislav Goroshin, Yann LeCun
17 Jan 2013  arXiv  8 Comments  ICLR 2013 Conference Track
 
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.

 

Block Coordinate Descent for Sparse NMF
Vamsi K. Potluru, Sergey M. Plis, Jonathan Le Roux, Barak A. Pearlmutter, Vince D. Calhoun, Thomas P. Hayes
16 Jan 2013  arXiv  9 Comments  ICLR 2013 Conference Track
 
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets.
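
The L1/L2 mixed-norm sparsity measure being constrained (Hoyer-style sparseness) is compact enough to state directly; the block coordinate descent solver itself is the paper's contribution and is not reproduced here.

    # The L1/L2 sparseness measure: 1 for a one-hot vector, 0 for a constant vector.
    import numpy as np

    def sparseness(x):
        n = x.size
        l1, l2 = np.abs(x).sum(), np.linalg.norm(x)
        return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

    print(sparseness(np.array([0.0, 0.0, 3.0, 0.0])))   # 1.0, maximally sparse
    print(sparseness(np.ones(4)))                       # 0.0, not sparse at all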

 

Training Neural Networks with Stochastic Hessian-Free Optimization
Ryan Kiros
16 Jan 2013  arXiv  9 Comments  ICLR 2013 Conference Track
 
Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates.

 

Not selected for presentation at this time

Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean
16 Jan 2013  arXiv  5 Comments  ICLR 2013 Workshop Track
 
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day on one CPU to derive high-quality 300-dimensional vectors for a one-million-word vocabulary from a 1.6-billion-word data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community.
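
A hedged usage sketch with gensim, which provides implementations of these CBOW/skip-gram architectures; parameter names assume gensim 4.x, and the toy corpus is a placeholder for a large data set.

    # Training continuous word vectors with gensim's Word2Vec (toy corpus, gensim >= 4 assumed).
    from gensim.models import Word2Vec

    sentences = [
        ["the", "king", "rules", "the", "country"],
        ["the", "queen", "rules", "the", "country"],
        ["cats", "and", "dogs", "are", "animals"],
    ] * 200

    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1, epochs=20)
    print(model.wv.most_similar("king", topn=3))        # word-similarity queries on the vectors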

 

Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
Xanadu Halkias, Sebastien Paris, Herve Glotin
20 Jan 2013  arXiv  3 Comments
Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimations of their usefulness by altering different parameters such as the group size and overlap percentage.

 

Boltzmann Machines and Denoising Autoencoders for Image Denoising
Kyunghyun Cho
16 Jan 2013  arXiv  5 Comments  ICLR 2013 Workshop Track
 
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that the performance can be improved by adding more hidden layers, especially when the level of noise is high.

 

A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than n-gram language models and their smoothed variants. This paper investigates an application area in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that an NPLM has the potential to complement the `resource-constrained' bilingual resources with potentially `huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate the application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One discovery was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of NPLM.

 

Deep Predictive Coding Networks
Rakesh Chalasani, Jose C. Principe
16 Jan 2013  arXiv  9 Comments  ICLR 2013 Workshop Track
 
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model, which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied to natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.

 

We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest reward vectors found from IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms and with direct methods that use feature statistics observed in state-action space suggest it may be superior for behavior recognition problems.

 

Audio Artist Identification by Deep Neural Network
Zhen Hu, Kun Fu, Changshui Zhang
15 Jan 2013  arXiv  4 Comments
Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), organizer of MIREX, drives researchers in the MIR field to develop more advanced systems to fulfill the tasks. One of the important tasks is the Audio Artist Identification (AAI) task. We implemented a Deep Belief Network (DBN) to identify the artist from the audio signal. For copyright reasons, IMIRSEL did not publish its data set, so we had to construct our own. On our data set we obtained an accuracy of 69.87% without carefully choosing parameters, while the best result reported on MIREX is 69.70%. We think our method is promising and we want to discuss it with others.

 

Clustering Learning for Robotic Vision
Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet
15 Jan 2013  arXiv  5 CommentsICLR 2013 Workshop Track
 
We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters in only a few minutes and with a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use on static image datasets and object tracking datasets. We show that networks trained with clustering learning can outperform large networks trained for many hours on complex datasets.
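
For readers unfamiliar with clustering-based filter learning, the sketch below shows the generic recipe of learning first-layer filters by spherical k-means on normalized image patches. The images and sizes are synthetic, and the authors' exact pipeline (whitening, layer connectivity, and so on) is not reproduced.

# Learn first-layer filters by clustering random image patches.
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 32, 32))        # stand-in grayscale images
patch, n_filters, n_patches = 7, 32, 5000

# sample random patches and normalize them
ps = []
for _ in range(n_patches):
    im = images[rng.integers(len(images))]
    r, c = rng.integers(0, 32 - patch, size=2)
    p = im[r:r + patch, c:c + patch].reshape(-1)
    ps.append((p - p.mean()) / (p.std() + 1e-8))
P = np.array(ps)

# spherical k-means: the centroids are the learned filters
F = P[rng.choice(len(P), n_filters, replace=False)]
F /= np.linalg.norm(F, axis=1, keepdims=True)
for _ in range(10):
    assign = (P @ F.T).argmax(1)               # assign each patch to its closest filter
    for j in range(n_filters):
        sel = P[assign == j]
        if len(sel):
            v = sel.sum(0)
            F[j] = v / (np.linalg.norm(v) + 1e-8)

print("filters:", F.shape)                     # (32, 49) -> reshape each row to a 7x7 filter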

 

When Does a Mixture of Products Contain a Product of Mixtures?
Guido F. Montufar, Jason Morton
10 Jan 2013  arXiv  5 CommentsICLR 2013 Workshop Track
 
We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of the possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, with an exponentially larger number of parameters, is needed to represent the probability distributions that restricted Boltzmann machines can represent. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements.
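
For context, the parenthetical characterization of RBMs above follows from the standard factorization of an RBM's unnormalized marginal over visible units v in {0,1}^n with m binary hidden units (a known identity, quoted here only as background):

p(v) \;\propto\; \sum_{h \in \{0,1\}^m} \exp\!\big(b^\top v + c^\top h + v^\top W h\big)
\;=\; e^{b^\top v} \prod_{j=1}^{m} \Big(1 + e^{\,c_j + v^\top W_{:,j}}\Big).

Distributing the e^{b^\top v} term across the factors, each factor is proportional to a mixture of two product distributions over v, which is why an RBM is a product of mixtures of pairs of product distributions.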

 

Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions.
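
As a concrete point of reference, the functional form under analysis can be sketched as follows: a tied-weight autoencoder whose hidden units are rectified linear, so the code it produces is sparse. The weights here are random, and none of the paper's analysis is reproduced.

# Tied-weight autoencoder with ReLU hidden units and a linear decoder.
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 50                                   # input dim, hidden dim
W = rng.normal(scale=0.1, size=(k, d))          # tied encoder/decoder weights
b = np.zeros(k)

def encode(x):
    return np.maximum(W @ x + b, 0.0)           # ReLU gives a sparse code

def decode(h):
    return W.T @ h                              # linear reconstruction

x = rng.normal(size=d)
h = encode(x)
print("active units:", int((h > 0).sum()), "recon error:", np.linalg.norm(x - decode(h)))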

 

One conjecture in both deep learning and the classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been established, which is important if we want to bridge neuroscience and machine learning. Recent research has emphasized the biological plausibility of the Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally plausible settings, the whole network is capable of representing any Boltzmann machine and performing a semi-stochastic Bayesian inference algorithm lying between Gibbs sampling and variational inference.
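
One endpoint of the inference spectrum mentioned above, plain Gibbs sampling in a Boltzmann machine, can be sketched as follows (the other endpoint would be fully deterministic mean-field updates). The weights are random and symmetric; this is standard material rather than the paper's semi-stochastic algorithm.

# Gibbs sampling in a small fully-connected Boltzmann machine.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)                       # no self-connections
b = np.zeros(n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

s = rng.integers(0, 2, size=n).astype(float)
for sweep in range(1000):
    for i in range(n):
        p_on = sigmoid(W[i] @ s + b[i])        # conditional P(s_i = 1 | rest)
        s[i] = float(rng.random() < p_on)

print("final state:", s)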

 

Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
Sainbayar Sukhbaatar, Takaki Makino, Kazuyuki Aihara
15 Jan 2013  arXiv  7 CommentsICLR 2013 Workshop Track
 
Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn a soft clustering of features from image sequences. It is trained to improve the temporal coherence of features while keeping the information loss to a minimum. Our method does not use spatial information, so it can also be used with non-convolutional models. Experiments on images extracted from natural videos show that our method can cluster similar features together. When trained on convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of the features.
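
A rough sketch of the kind of objective described above: a learned soft-pooling matrix whose pooled outputs should change slowly across consecutive frames (temporal coherence) while still retaining information about the input. The parameterization and the reconstruction-style information-loss proxy below are illustrative guesses, not the paper's formulation.

# Soft pooling learned from a temporal-coherence-plus-retention objective.
import numpy as np

rng = np.random.default_rng(0)
d, m = 100, 25                                  # feature dim, pooled dim
A = rng.normal(scale=0.1, size=(m, d))          # unconstrained pooling parameters

def pool_matrix(A):
    e = np.exp(A - A.max(0, keepdims=True))
    return e / e.sum(0, keepdims=True)          # each feature's assignment to pools sums to 1

def loss(A, f_t, f_next):
    P = pool_matrix(A)
    coherence = np.sum((P @ f_t - P @ f_next) ** 2)        # pooled outputs should vary slowly
    retention = np.sum((f_t - P.T @ (P @ f_t)) ** 2)       # crude information-loss proxy
    return coherence + retention

f_t, f_next = rng.normal(size=d), rng.normal(size=d)       # features of consecutive frames
print("loss:", loss(A, f_t, f_next))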

 

Unsupervised Feature Learning for low-level Local Image Descriptors
Christian Osendorfer, Justin Bayer, Patrick van der Smagt
20 Jan 2013  arXiv  6 CommentsICLR 2013 Workshop Track
 
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has not yet been quantitatively investigated how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of purely unsupervised methods on a low-level correspondence task, a problem that is central to many computer vision applications. We find that a special type of Restricted Boltzmann Machine performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
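
The binarization idea mentioned in the last sentence can be sketched in a few lines: threshold each descriptor dimension at its median over the dataset to obtain a compact binary code, then match patches by Hamming distance. The descriptors below are random stand-ins, and this simple scheme only approximates what the paper evaluates.

# Binarize descriptors and retrieve nearest neighbours by Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
desc = rng.normal(size=(1000, 128))                    # stand-in patch descriptors
codes = (desc > np.median(desc, axis=0)).astype(np.uint8)

def hamming(a, b):
    return int((a != b).sum())

query = codes[0]
dists = np.array([hamming(query, c) for c in codes])
print("nearest neighbours:", np.argsort(dists)[:5])    # index 0 is the query itself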

 

Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun
16 Jan 2013  arXiv  4 CommentsICLR 2013 Workshop Track
 
Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and to use separate shortcut connections to model the linear dependencies instead. We continue that work by, firstly, introducing a third transformation to normalize the scale of the outputs of each hidden neuron and, secondly, analyzing the connections to second-order optimization methods. We show that the transformations make simple stochastic gradient descent behave more like second-order optimization methods and thus speed up learning. This is shown both in theory and in experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.
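
As far as I understand the authors' earlier formulation, the first two transformations amount to replacing tanh by f(x) = tanh(x) - alpha*x - beta, with alpha and beta chosen so that the unit's output and slope are zero on average over the data (the removed linear part being carried by separate shortcut connections, omitted here). The sketch below is an assumption-laden illustration, not the paper's definition.

# Choose alpha and beta so the transformed nonlinearity has zero mean output
# and zero mean slope over the data seen by one hidden unit.
import numpy as np

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=10000)               # stand-in inputs to one hidden unit

alpha = np.mean(1.0 - np.tanh(pre_activations) ** 2)   # mean slope of tanh over the data
beta = np.mean(np.tanh(pre_activations) - alpha * pre_activations)

def transformed(x):
    return np.tanh(x) - alpha * x - beta

out = transformed(pre_activations)
mean_slope = np.mean(1.0 - np.tanh(pre_activations) ** 2 - alpha)
print("mean output:", out.mean(), "mean slope:", mean_slope)   # both ~0 by construction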

 

Learnable Pooling Regions for Image Classification
Mateusz Malinowski, Mario Fritz
17 Jan 2013  arXiv  11 CommentsICLR 2013 Workshop Track
 
From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping local codes, equips these methods with a certain degree of robustness to translation and deformation while preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress in fully adapting the pooling strategy to the task at hand. This paper proposes a model for learning a task-dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state of the art to 56.29% on the latter.
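
To see why hand-crafted pooling is "a particular instantiation" of a learnable scheme, consider pooling as a weighted sum of local codes over spatial cells: sum- or region-pooling corresponds to fixed 0/1 weights, while the learnable version trains the weights. The regularizers studied in the paper are not reproduced in this toy sketch, and all sizes are made up.

# Pooling as a weighted sum over spatial cells; hand-crafted schemes are fixed weights.
import numpy as np

rng = np.random.default_rng(0)
cells, dim = 16, 32                              # 4x4 spatial grid, code dimension
codes = rng.random(size=(cells, dim))            # local codes on the grid

w_learned = rng.random(size=cells)               # learnable pooling weights
w_sum_pool = np.ones(cells)                      # hand-crafted: sum over the whole grid
w_region = np.zeros(cells)
w_region[:cells // 4] = 1.0                      # hand-crafted: pool only a fixed subset of cells

def pooled(w):
    return w @ codes                             # (dim,) pooled descriptor

print(pooled(w_learned).shape, pooled(w_sum_pool).shape, pooled(w_region).shape)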

 

Natural Gradient Revisited
Razvan Pascanu, Yoshua Bengio
17 Jan 2013  arXiv  11 CommentsICLR 2013 Workshop Track
 
The aim of this paper is twofold. First, we show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Second, we re-derive Natural Gradient from basic principles, contrasting the two versions of the algorithm that appear in the literature.
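
For reference, the update being re-derived is the textbook natural gradient step, with the Fisher information matrix playing the role that the extended Gauss-Newton matrix plays in Hessian-Free optimization and Krylov Subspace Descent:

\theta_{t+1} \;=\; \theta_t \;-\; \eta\, F(\theta_t)^{-1}\, \nabla_\theta \mathcal{L}(\theta_t),
\qquad
F(\theta) \;=\; \mathbb{E}_{x \sim p_\theta}\!\left[\nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^{\top}\right].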

 

The Expressive Power of Word Embeddings
Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena
15 Jan 2013  arXiv  6 Comments
We seek to better understand the differences in quality among several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in the quality and characteristics of the semantics they capture. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results.

 

 
Factorized Topic Models
Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom
16 Jan 2013  arXiv  8 CommentsICLR 2013 Workshop Track
 
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes and variance that is private to each class through the introduction of a new prior. The approach allows for more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification.

 

 

from: http://openreview.net/venue/iclr2013
