Deep Learning Glossary

1. Affine Layer

A fully-connected layer in a Neural Network: each neuron in the previous layer is connected to each neuron in the current layer, and the layer applies an affine transformation (a linear map plus a bias) to its input. In many ways, this is the "standard" layer of a Neural Network. Affine layers are often added on top of the outputs of Convolutional Neural Networks or Recurrent Neural Networks before making a final prediction.
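
As a minimal sketch (using NumPy, with illustrative sizes and names), an affine layer simply applies a weight matrix and a bias to its input:

    import numpy as np

    x = np.random.randn(4)        # input vector from the previous layer (4 features)
    W = np.random.randn(3, 4)     # weight matrix, one row per output neuron
    b = np.random.randn(3)        # bias vector

    y = W @ x + b                 # affine transformation: every input feeds every output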


2. Attention Mechanism

Attention Mechanisms are inspired by human visual attention, the ability to focus on specific parts of an image. Attention mechanisms can be incorporated into both Language Processing and Image Recognition architectures to help the network learn what to "focus" on when making predictions.
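
Many variants exist; the sketch below shows one common form (dot-product attention over a sequence of encoder states, in NumPy, with all names and sizes illustrative):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    states = np.random.randn(5, 8)   # hypothetical encoder states: 5 positions, 8 dims
    query = np.random.randn(8)       # hypothetical decoder query

    scores = states @ query          # one relevance score per position
    weights = softmax(scores)        # attention weights sum to 1
    context = weights @ states       # weighted sum: what the network "focuses" on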


Attention and Memory in Deep Learning and NLP


3. AlexNet

AlexNet is the name of the Convolutional Neural Network architecture that won the ILSVRC 2012 competition by a large margin and was responsible for a resurgence of interest in CNNs for Image Recognition. It consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. AlexNet was introduced in the paper ImageNet Classification with Deep Convolutional Neural Networks.
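
A rough sketch of that layer pattern in PyTorch is shown below. The filter counts and sizes follow the common torchvision-style implementation rather than the exact paper configuration, and dropout is omitted for brevity:

    import torch.nn as nn

    # Five conv layers (some followed by max-pooling), then three
    # fully-connected layers ending in 1000-way class scores.
    alexnet_sketch = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),
        nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),   # logits; the softmax is applied in the loss
    )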


4. Autoencoder

An Autoencoder is a Neural Network model whose goal is to predict the input itself, typically through a "bottleneck" somewhere in the network. By introducing a bottleneck, we force the network to learn a lower-dimensional representation of the input, effectively compressing the input into a good representation. Autoencoders are related to PCA and other dimensionality reduction techniques, but can learn more complex mappings due to their nonlinear nature. A wide range of autoencoder architectures exist, including Denoising Autoencoders, Variational Autoencoders, and Sequence Autoencoders.
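
A minimal sketch of the idea in PyTorch (the layer sizes are arbitrary; the 32-unit layer plays the role of the bottleneck):

    import torch.nn as nn

    # Encoder compresses 784-dimensional inputs down to a 32-dimensional code;
    # the decoder tries to reconstruct the original input from that code.
    autoencoder = nn.Sequential(
        nn.Linear(784, 128), nn.ReLU(),
        nn.Linear(128, 32),  nn.ReLU(),   # bottleneck
        nn.Linear(32, 128),  nn.ReLU(),
        nn.Linear(128, 784),
    )
    # Training minimizes a reconstruction loss such as nn.MSELoss()(autoencoder(x), x).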


5. Average-Pooling

Average-Pooling is a pooling technique used in Convolutional Neural Networks for Image Recognition. It works by sliding a window over patches of features, such as pixels, and taking the average of all values within the window, compressing the input into a lower-dimensional representation.
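
For example, non-overlapping 2x2 average pooling on a single feature map can be sketched in NumPy as follows (the shapes are illustrative):

    import numpy as np

    x = np.arange(16, dtype=float).reshape(4, 4)   # a 4x4 feature map

    # Group the map into non-overlapping 2x2 windows and average each window,
    # producing a 2x2 output.
    pooled = x.reshape(2, 2, 2, 2).mean(axis=(1, 3))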


6. Batch Normalization

Batch Normalization is a technique that normalizes layer inputs per mini-batch. It speeds up training, allows the use of higher learning rates, and can act as a regularizer. Batch Normalization has been found to be very effective for Convolutional and Feedforward Neural Networks but hasn't been successfully applied to Recurrent Neural Networks.
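
A minimal NumPy sketch of the normalization step for one mini-batch (gamma and beta stand in for the learned scale and shift parameters; sizes are illustrative):

    import numpy as np

    def batch_norm(x, gamma, beta, eps=1e-5):
        # x has shape (batch_size, num_features); normalize each feature
        # using statistics computed over the mini-batch.
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    x = np.random.randn(32, 4)                       # mini-batch of 32 examples
    y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))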


Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift 

Batch Normalized Recurrent Neural Networks


7. Categorical Cross-Entropy Loss

The categorical cross-entropy loss is also known as the negative log likelihood. It is a popular loss function for classification problems and measures the similarity between two probability distributions, typically the true labels and the predicted labels. It is given by L = -sum(y * log(y_pred)), where y is the distribution over the true labels (typically a one-hot vector) and y_pred is the distribution over the predicted labels, often coming from a softmax.
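
In NumPy this can be sketched as follows, with a hypothetical three-class example:

    import numpy as np

    y_true = np.array([0.0, 1.0, 0.0])        # one-hot true label
    y_pred = np.array([0.1, 0.7, 0.2])        # predicted distribution, e.g. from a softmax

    loss = -np.sum(y_true * np.log(y_pred))   # categorical cross-entropy / negative log likelihood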


8. Deep Belief Network (DBN)

DBNs are a type of probabilistic graphical model that learns a hierarchical representation of the data in an unsupervised manner. DBNs consist of multiple hidden layers with connections between the neurons in each successive pair of layers.
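
The building block of a DBN is typically a Restricted Boltzmann Machine. As a minimal NumPy sketch of the layer-wise structure (weights and sizes are illustrative, not trained), each layer's hidden activations are computed from the layer below and then serve as the input to the next layer:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    v = np.random.rand(100)                          # visible input (illustrative size)
    W1, b1 = np.random.randn(64, 100), np.zeros(64)  # first layer parameters
    W2, b2 = np.random.randn(32, 64), np.zeros(32)   # second layer parameters

    h1 = sigmoid(W1 @ v + b1)    # first hidden layer activation probabilities
    h2 = sigmoid(W2 @ h1 + b2)   # second hidden layer, built on top of the first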
