by Emil Wallner

The History of Deep Learning — Explored Through 6 Code Snippets

In this article, we’ll explore six snippets of code that made deep learning what it is today. We’ll cover the inventors and the background to their breakthroughs. Each story includes simple code samples on FloydHub and GitHub to play around with.

If this is your first encounter with deep learning, I’d suggest reading my Deep Learning 101 for Developers.

To run the code examples on FloydHub, install the floyd command line tool. Then clone the code examples I’ve provided to your local machine.

Note: If you are new to FloydHub, you might want to first read the getting started with FloydHub section in my earlier post.

Initiate the CLI in the example project folder on your local machine. Now you can spin up the project on FloydHub with the following command:

floyd run --data emilwallner/datasets/mnist/1:mnist --tensorboard --mode jupyter

The Method of Least Squares

Deep learning started with a snippet of math.

I’ve translated it into Python:

# y = mx + b
# m is slope, b is y-intercept
def compute_error_for_line_given_points(b, m, coordinates):
    totalError = 0
    for i in range(0, len(coordinates)):
        x = coordinates[i][0]
        y = coordinates[i][1]
        totalError += (y - (m * x + b)) ** 2
    return totalError / float(len(coordinates))

# example
compute_error_for_line_given_points(1, 2, [[3,6],[6,9],[12,18]])

This was first published by Adrien-Marie Legendre in 1805. He was a Parisian mathematician who was also known for measuring the meter.

He had a particular obsession with predicting the future location of comets. He had the locations of a couple of past comets. He was relentless as he used them in his search for a method to calculate their trajectory.

It really was one of those spaghetti-on-the-wall moments. He tried several methods, then one version finally stuck with him.

Legendre’s process started by guessing the future location of a comet. Then he squared the errors he made, and finally remade his guess to reduce the sum of the squared errors. This was the seed for linear regression.

Play with the above code in the Jupyter notebook I’ve provided to get a feel for it. m is the coefficient and b is the constant for your prediction, and the coordinates are the locations of the comet. The goal is to find a combination of m and b where the error is as small as possible.

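To see what that search looks like in code, here is a minimal brute-force sketch. It is my own illustration rather than part of the original notebook: it reuses compute_error_for_line_given_points from above and tries a small grid of m and b values on the example points, keeping the combination with the smallest error.

points = [[3, 6], [6, 9], [12, 18]]

best = None
for m in [i / 10 for i in range(0, 31)]:      # try m = 0.0, 0.1, ..., 3.0
    for b in [i / 10 for i in range(0, 31)]:  # try b = 0.0, 0.1, ..., 3.0
        error = compute_error_for_line_given_points(b, m, points)
        if best is None or error < best[0]:
            best = (error, m, b)

print(best)  # the (error, m, b) combination with the smallest error found

A grid search like this quickly becomes impractical as the number of parameters grows, which is why the next idea, gradient descent, matters.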

This is the core of deep learning:

  • Take an input and a desired output
  • Then search for the correlation between the two

Gradient Descent

Legendre’s method of manually trying to reduce the error rate was time-consuming. Peter Debye was a Nobel prize winner from The Netherlands. He formalized a solution for this process a century later in 1909.

Let’s imagine that Legendre had one parameter to worry about — we’ll call it X. The Y axis represents the error value for each value of X. Legendre was searching for where X results in the lowest error.

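As a rough sketch of that search (my own illustration with a made-up one-parameter error curve, not code from the article): start with a guess for X, compute the slope of the error curve at that point, and repeatedly step in the opposite direction of the slope until the error stops shrinking.

# A made-up error curve with its lowest point at x = 3
def error(x):
    return (x - 3) ** 2

# The slope (derivative) of the error curve tells us which way is downhill
def slope(x):
    return 2 * (x - 3)

x = 0.0              # initial guess for the parameter
learning_rate = 0.1  # how far to move on each step

for _ in range(100):
    x -= learning_rate * slope(x)  # step against the slope

print(x)  # ends up close to 3, the value of x with the lowest error

This "measure the slope, step downhill" loop is the basic idea behind gradient descent.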
