Multi-layer Convolutional Spiking Neural Network.pdf
Spiking neural networks (SNNs) have advantages over traditional, non-spiking networks with respect to bio-realism, potential for low-power hardware implementations, and theoretical computing power. However, in practice, spiking networks with multi-layer learning have proven difficult to train. This paper explores a novel, bio-inspired spiking convolutional neural network (CNN) that is trained in a greedy, layer-wise fashion. The spiking CNN consists of a convolutional/pooling layer followed by a feature discovery layer, both of which undergo bio-inspired learning. Kernels for the convolutional layer are trained using a sparse, spiking auto-encoder representing primary visual features. The feature discovery layer uses a probabilistic spike-timing-dependent plasticity (STDP) learning rule. This layer represents complex visual features using WTA-thresholded, leaky, integrate-and-fire (LIF) neurons. The new model is evaluated on the MNIST digit dataset using clean and noisy images. Intermediate results show that the convolutional layer is stack-admissible, enabling it to support a multi-layer learning architecture. The recognition performance for clean images is above 98%. This performance is accounted for by the independent and informative visual features extracted in a hierarchy of convolutional and feature discovery layers. The performance loss for recognizing the noisy images is in the range 0.1% to 8.5%. This level of performance loss indicates that the network is robust to additive noise.
2020-07-27
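The LIF neurons this abstract mentions can be illustrated with a minimal simulation sketch. The `simulate_lif` helper and all parameter values below are illustrative assumptions, not the model or constants used in the paper:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (hypothetical parameters).

    Returns the time-step indices at which the neuron spikes.
    """
    v = v_rest
    spikes = []
    for t, i_t in enumerate(input_current):
        # Leaky integration: potential decays toward v_rest, driven by input.
        v += (dt / tau) * (-(v - v_rest) + i_t)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset membrane potential after spiking
    return spikes

# A constant suprathreshold current produces regular spiking.
spike_times = simulate_lif(np.full(100, 1.5))
```

With a constant drive above threshold, the membrane potential charges toward its steady state, crosses threshold, resets, and repeats, giving a regular spike train; the WTA thresholding described in the paper would sit on top of a layer of such units.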
Approximation Accuracy and Gradient Error Bounds for Structured Convex Optimization.pdf
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
2020-07-27
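An error bound of the kind this abstract refers to, for minimizing a structured objective $F(x) = f(x) + g(x)$ with smooth $f$ and simple convex $g$, is commonly stated in terms of the proximal-gradient residual. The following is only a generic sketch of that form; the precise constants and conditions are problem-dependent and given in the paper:

```latex
% X^* is the solution set; \kappa, \epsilon, \delta are
% problem-dependent constants (illustrative, not from the paper).
\[
  \operatorname{dist}(x, X^*) \;\le\;
  \kappa \,\bigl\| x - \operatorname{prox}_{g}\bigl(x - \nabla f(x)\bigr) \bigr\|
\]
\[
  \text{whenever } F(x) \le \epsilon
  \ \text{and}\
  \bigl\| x - \operatorname{prox}_{g}\bigl(x - \nabla f(x)\bigr) \bigr\| \le \delta .
\]
```

A bound of this shape is what drives linear convergence of first-order methods: once the residual controls the distance to the solution set, each proximal-gradient step contracts that distance by a fixed factor.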
Meta-learning Overview.pdf
Meta-learning, or learning to learn, is the science of systematically observing how different
machine learning approaches perform on a wide range of learning tasks, and then learning
from this experience, or meta-data, to learn new tasks much faster than otherwise possible.
Not only does this dramatically speed up and improve the design of machine learning
pipelines or neural architectures, but it also allows us to replace hand-engineered algorithms
with novel approaches learned in a data-driven way. In this chapter, we provide an overview
of the state of the art in this fascinating and continuously evolving field.
2020-07-27