
Introduction to Deep Learning

Deep Learning

Deep learning is a subset of machine learning. Technically, machine learning searches input data for valuable representations, using a feedback signal as guidance. The “deep” in deep learning isn’t a reference to any kind of deeper understanding attained by the approach; rather, it stands for the idea of successive layers of representations. The “depth” of a model is simply the number of layers it contains. By contrast, shallow learning (another approach to machine learning) focuses on learning representations from just one or two layers.


Digit classification example

Neural network for digit classification.

The figure above shows a multi-layer network predicting a handwritten input digit. Each layer in the network learns different features of the data and passes them as input to the next layer. When specifying the n-dimensional intermediate layers (hidden units), we need to be careful not to create an “information bottleneck”: within a stack of dense layers, if one layer drops any relevant information, that information cannot be recovered by subsequent layers. For example, when predicting outputs over a dataset of 10 classes (in this example, the digits 0 through 9), introducing only 6-dimensional intermediate layers creates an information bottleneck, since such compact layers can cause irreversible loss of pertinent information.

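To make the bottleneck concrete, here is a minimal NumPy sketch (not the exact network from the figure; the 784-dimensional input and ReLU activation are illustrative assumptions). A 6-dimensional hidden layer forces everything the 10-way output needs through only 6 numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """A dense layer: affine transform followed by ReLU."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 784))        # a flattened 28x28 input image

# A 6-dimensional hidden layer is narrower than the 10 output classes,
# so class-relevant information can be irreversibly squeezed out here.
w1, b1 = rng.normal(size=(784, 6)), np.zeros(6)
w2, b2 = rng.normal(size=(6, 10)), np.zeros(10)

h = dense(x, w1, b1)                 # shape (1, 6)  <- the bottleneck
logits = h @ w2 + b2                 # shape (1, 10), one score per digit
```

Whatever the second layer does, it only ever sees the 6 values in `h`; any distinction between digits that those 6 values fail to preserve is lost for good.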

Overview of how deep learning functions

Overview of how deep learning functions.

The transformation carried out by a layer is parameterized by its weights. To map the inputs to the associated labels, we need to find a suitable set of values for the weights. But how do we know which values make the best weights? This can seem intimidating at first, but it is actually straightforward.

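As a minimal sketch of what “parameterized by its weights” means (the ReLU activation and the 2-dimensional shapes are illustrative assumptions, not from the original post), the same input is mapped to different outputs purely by changing the weight values:

```python
import numpy as np

def layer(x, w, b):
    """The transformation a dense layer applies: relu(x @ w + b).
    Its behaviour is entirely determined by its weights w and b."""
    return np.maximum(0.0, x @ w + b)

x = np.array([1.0, -2.0])
b = np.zeros(2)

w_a = np.array([[2.0, 0.0], [0.0, 1.0]])
w_b = np.array([[0.0, 1.0], [1.0, 0.0]])

# Same input, different weights -> different transformations.
print(layer(x, w_a, b))   # relu([ 2., -2.]) -> [2., 0.]
print(layer(x, w_b, b))   # relu([-2.,  1.]) -> [0., 1.]
```

Finding good weights therefore amounts to searching this space of possible transformations for one that maps inputs to their labels.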

Initially, the weights are assigned random values, so the network applies random transformations to the input data and produces an output. Needless to say, this output will be far from the truth label (the label associated with the input data). This is where the loss function and the optimizer play a vital role. The loss function (aka objective function) computes the distance between the predicted label and the truth label and yields a loss score, which helps assess how well the model is performing. The optimizer then adjusts the weights to lower the loss score, using the feedback signal (the previous loss score) as guidance. This process is repeated until a minimal loss score is reached; a network with a minimal loss score produces predictions close to the truth values.

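The loop described above can be sketched in a few lines of NumPy. This is a deliberately tiny example (a one-weight linear model with mean-squared-error loss and plain gradient descent as the optimizer, all choices of this sketch rather than of the original post), but it shows the full cycle: random-ish starting weights, a loss score, and repeated weight updates driven by the feedback signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the truth labels are generated by y = 3x + 1.
x = rng.normal(size=(100, 1))
y = 3.0 * x + 1.0

w, b = 0.0, 0.0          # weights start at arbitrary values
lr = 0.1                 # step size used by the optimizer

for step in range(200):
    pred = w * x + b
    loss = np.mean((pred - y) ** 2)   # loss score: distance to the truth
    # The gradients of the loss act as the feedback signal; the optimizer
    # (plain gradient descent here) nudges the weights to lower the loss.
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)   # close to 3.0 and 1.0 once the loss is near its minimum
```

Real networks have millions of weights and the gradients are computed by backpropagation, but the training loop has exactly this shape.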

Photo by Adi Goldstein on Unsplash

I hope this blog post helped you grasp the basic concept of deep learning. Thank you for reading. Have a productive day :)


Translated from: https://medium.com/analytics-vidhya/comprehend-deep-learning-ab7d74577790
