Introduction to Deep Learning

Neural Network

deep learning: The term deep learning refers to training (typically large) neural networks.

single neuron: Takes an input x, computes a function of it, and outputs y.

For example,
Size (x) -> single neuron -> Price (y)
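The single-neuron mapping from size to price can be sketched in a few lines of Python. The weight, bias, and ReLU activation here are illustrative assumptions, not values given in the text:

```python
# A minimal sketch of a single neuron: it takes an input x (e.g. house size),
# applies a linear function, then a ReLU, and outputs y (e.g. price).
# w and b are made-up illustrative values, not learned parameters.

def single_neuron(x, w=0.5, b=-10.0):
    z = w * x + b          # linear part: weight times input plus bias
    return max(0.0, z)     # ReLU: the predicted price is never negative

print(single_neuron(100.0))  # size 100 -> price 40.0
```

The ReLU (rectified linear unit) is what bends the straight line to zero for small inputs, which is why the price curve in this example never goes negative.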

neural network: A neural network is formed by taking many of these single neurons and stacking them together.

For example,

[Figure: neural network]
Each hollow circle in the figure is a single neuron, also called a hidden unit, and the whole diagram is a neural network. Each hidden unit receives all of the input features and decides for itself which of them to use. We therefore say that the middle layer of the network is densely connected, because every input feature is connected to every one of the circles in the middle.
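The "densely connected" idea, where every input feature feeds into every hidden unit, can be sketched as a single matrix multiply. The feature values and random weights below are placeholders assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input features, e.g. size, #bedrooms, zip code, neighborhood wealth.
x = np.array([2100.0, 3.0, 94301.0, 40.0])

# Three hidden units; each row of W connects one hidden unit to ALL four
# inputs, which is exactly what "densely connected" means.
W = rng.normal(size=(3, 4))
b = np.zeros(3)

hidden = np.maximum(0.0, W @ x + b)   # ReLU activations of the hidden layer
print(hidden.shape)                   # (3,)
```

Note that the network, not the designer, decides which inputs each hidden unit relies on: the weights in W are learned during training.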

supervised learning

supervised learning: In supervised learning, you have some input x and you want to learn a function mapping it to some output y.

For example,

Input (x)           Output (y)               Application           Model
Home features       Price                    Real estate           standard neural network
Ad, user info       Click on ad? (0/1)       Online advertising    standard neural network
Image               Object (1, …, 1000)      Photo tagging         convolutional neural network
Audio               Text transcript          Speech recognition    recurrent neural network
English             Chinese                  Machine translation   recurrent neural network
Image, radar info   Position of other cars   Autonomous driving    custom/hybrid neural network

Structured Data: Structured data basically means databases of data, where each feature has a well-defined meaning.

For example,

User Age   Id      number
41         93292   3000
32         93234   2170
53         93123   4090
14         93143   1300

Unstructured Data: On the contrary, unstructured data refers to things like raw audio, images, or text, where you might want to recognize what is in the image or transcribe the audio.

Scale drives deep learning progress

With a small training set, the relative ordering of different algorithms is not well defined; performance depends largely on your skill at feature engineering and on details of how the algorithm is tuned. For example, on a small training set, an SVM with well-engineered features may outperform a neural network. So in the left-hand region of the figure, the ranking between algorithms is unclear, and final performance is driven more by hand-engineered features and algorithmic details. Neural networks only gain a decisive advantage in the regime of very large training sets, toward the right of the figure.

Question

Which of the following are true? (Check all that apply.)

[√] Increasing the training set size generally does not hurt an algorithm's performance, and it may help significantly.
[√] Increasing the size of a neural network generally does not hurt an algorithm's performance, and it may help significantly.
[ ] Decreasing the training set size generally does not hurt an algorithm's performance, and it may help significantly.
[ ] Decreasing the size of a neural network generally does not hurt an algorithm's performance, and it may help significantly.
