《Deep Learning from Scratch》· Part 1: Basic Concepts

★Deep learning is sometimes also called "end-to-end machine learning".

"End-to-end" means going from one end straight to the other,

- - - i.e., obtaining the target result (output) directly from the raw data (input).

 

★The loss function is an indicator of how "badly" a neural network performs,

- - - i.e., to what extent the current network fails to fit the supervised (training) data, and to what extent it is inconsistent with it.

In principle, any function can serve as a loss function. The most commonly used ones are the mean squared error and the cross-entropy error.
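The two losses just mentioned can be sketched in NumPy as follows. This is a minimal sketch: the 0.5 factor in the MSE and the small delta guarding against log(0) follow common convention, and the sample vectors are made up for illustration.

```python
import numpy as np

def mean_squared_error(y, t):
    """Mean squared error: 0.5 * sum of squared differences."""
    return 0.5 * np.sum((y - t) ** 2)

def cross_entropy_error(y, t):
    """Cross-entropy error for a one-hot target t.
    A tiny delta keeps log() away from log(0) = -inf."""
    delta = 1e-7
    return -np.sum(t * np.log(y + delta))

# Made-up example: the correct class is "2" (one-hot target),
# and the network assigns it probability 0.6.
t = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0])
y = np.array([0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0])

print(mean_squared_error(y, t))   # → 0.0975
print(cross_entropy_error(y, t))  # ≈ -log(0.6) ≈ 0.51
```

Note that both losses shrink as the probability assigned to the correct class grows, which is exactly the "badness" behavior described above.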

 

★Mini-batch learning:

Select a portion of the full data set to serve as an "approximation" of the whole.

In neural network training, a batch of examples (called a mini-batch) is selected from the training data, and learning is then performed on each mini-batch.

- - - For example, with the MNIST data set: randomly pick 100 examples out of the 60,000 training examples, then learn from those 100 examples.

The role of the mini-batch: a randomly selected small batch of data is used as an approximation of the entire training set.
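The MNIST example above can be sketched like this. The arrays here are zero-filled stand-ins with MNIST's shapes (60,000 examples of 784 pixels, one-hot labels); real data would come from a dataset loader.

```python
import numpy as np

# Stand-in for the MNIST training set (shapes only; values are dummies).
train_size = 60000
x_train = np.zeros((train_size, 784))   # 60,000 flattened 28x28 images
t_train = np.zeros((train_size, 10))    # 60,000 one-hot labels

batch_size = 100
# Randomly draw 100 indices out of the 60,000 training examples.
batch_mask = np.random.choice(train_size, batch_size)

# Fancy indexing pulls out just the chosen rows: the mini-batch.
x_batch = x_train[batch_mask]           # shape (100, 784)
t_batch = t_train[batch_mask]           # shape (100, 10)
```

Each training iteration would draw a fresh mini-batch this way and compute the loss only over those 100 examples, treating it as an approximation of the loss over all 60,000.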

 

★★Why define a loss function at all?

Take digit recognition as an example.

Our goal is a neural network with the highest possible recognition accuracy. So why not use recognition accuracy itself as the metric? -★-

When training a neural network, searching for the optimal parameters means searching for the parameters that make the value of the loss function as small as possible. The updates are therefore driven by derivatives.

- - -★Why recognition accuracy cannot be used★: 1. Its derivative is 0 at almost every point, so the parameters cannot be updated. 2. It changes discontinuously, in sudden jumps, and does not respond to small changes in the parameters. The step function has the same problem.
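A toy one-parameter example (the data and classifier here are entirely made up for illustration) shows the contrast: nudging the weight slightly leaves accuracy flat, so its derivative is 0, while the loss responds smoothly and can guide the update.

```python
import numpy as np

# Toy binary classifier with a single weight w:
# predict class 1 when sigmoid(w * x) > 0.5.
x = np.array([1.0, 2.0, -1.0])   # made-up inputs
t = np.array([1, 1, 0])          # made-up labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(w):
    """Fraction of correct predictions — a step-like function of w."""
    pred = (sigmoid(w * x) > 0.5).astype(int)
    return np.mean(pred == t)

def loss(w):
    """Binary cross-entropy — a smooth function of w."""
    y = sigmoid(w * x)
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))

# A tiny nudge to w leaves accuracy unchanged: zero derivative.
print(accuracy(1.0) == accuracy(1.001))   # → True
# ...but the loss does move, so its gradient can drive learning.
print(loss(1.0) != loss(1.001))           # → True
```

The same logic applies to a full network: accuracy only changes when some prediction flips across the decision boundary, so almost everywhere its gradient is 0, whereas the loss reacts to every small parameter change.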

 

 

 
