Deep Learning and Neural Networks Quiz (1)

       I have recently been working through Professor Andrew Ng's course "Neural Networks and Deep Learning" on NetEase Cloud Classroom (网易云课堂). The Week 1 quiz comes without answers, so I am sharing my own reference analysis here for discussion (the Coursera Honor Code does not allow publishing the official answers).

1. What does the analogy “AI is the new electricity” refer to?

A. Similar to electricity starting about 100 years ago, AI is transforming multiple industries.

B. Through the “smart grid”, AI is delivering a new wave of electricity.

C. AI runs on computers and is thus powered by electricity, but it is letting computers do things not possible before.

D. AI is powering personal devices in our homes and offices, similar to electricity.

My analysis: AI is a new kind of productive force; much like the arrival of electricity about 100 years ago, it is driving the development of many industries.

2. Which of these are reasons for Deep Learning recently taking off? (Check the three options that apply.)

A. Deep learning has resulted in significant improvements in important applications such as online advertising, speech recognition, and image recognition.

B. We have access to a lot more data.

C. We have access to a lot more computational power.

D. Neural Networks are a brand new field.

My analysis: as the instructor explains in the video, and as the slide from the third lecture's materials (shown below) illustrates, data, computational speed, and algorithms are all factors behind the rise of deep learning.

[Image: lecture slide listing data, computation, and algorithms as the drivers of deep learning's rise]

3. Recall this diagram of iterating over different ML ideas. Which of the statements below are true? (Check all that apply.)

[Image: the Idea -> Code -> Experiment iteration diagram from the lecture]

A. Being able to try out ideas quickly allows deep learning engineers to iterate more quickly.

B. Faster computation can help speed up how long a team takes to iterate to a good idea.

C. It is faster to train on a big dataset than a small dataset.

D. Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware).

My analysis: the same three drivers as in question 2, namely data, computation, and algorithms.


4. When an experienced deep learning engineer works on a new problem, they can usually use insight from previous problems to train a good model on the first try, without needing to iterate multiple times through different models. True/False?

My analysis: an experienced deep learning engineer can use insight from previous problems to pick a model that suits the task without iterating over many different models; for example, CNNs are better suited to image recognition than other models, so when a new problem such as autonomous driving comes up, a CNN can be used to process the images. Even so, the diagram in question 3 stresses that getting to a good model still requires iterating through the Idea -> Code -> Experiment loop, so prior experience speeds up iteration rather than removing the need for it.

5. Which one of these plots represents a ReLU activation function?

My analysis: ReLU (the Rectified Linear Unit) is f(x) = max(0, x), so the correct plot is the one that stays at zero for negative inputs and rises as a straight line with slope 1 for non-negative inputs.
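As a quick sanity check on that shape, here is a minimal NumPy sketch (the helper name relu is mine, not from the course) that evaluates the function on a few sample points.

```python
import numpy as np

def relu(x):
    # ReLU: 0 for negative inputs, the input itself otherwise
    return np.maximum(0, x)

# Evaluate on a few sample points spanning the origin
x = np.array([-3.0, -1.0, 0.0, 2.0, 5.0])
print(relu(x))  # -> [0. 0. 0. 2. 5.]
```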

6. Images for cat recognition is an example of “structured” data, because it is represented as a structured array in a computer. True/False?

My analysis: audio, images, and text are all unstructured data.


7. A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an example of “unstructured” data because it contains data coming from different sources. True/False?

My analysis: data with a clearly defined meaning, such as these statistics, is structured data.
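To make the structured/unstructured distinction concrete, here is a small illustrative sketch (the column names and all values are made up for illustration): the demographic table has named, typed columns with a well-defined meaning, while a cat image is just a raw array of pixel intensities.

```python
import numpy as np
import pandas as pd

# Structured data: every column is a well-defined feature
cities = pd.DataFrame({
    "city": ["A", "B", "C"],
    "population": [8_400_000, 2_700_000, 870_000],
    "gdp_per_capita": [65_000, 58_000, 43_000],
    "growth_rate": [0.021, 0.015, 0.032],
})

# Unstructured data: a 64x64 RGB image is just raw pixel intensities;
# a single pixel value has no standalone meaning as a "feature"
cat_image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

print(cities.dtypes)    # named, typed columns
print(cat_image.shape)  # (64, 64, 3): height x width x channels
```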

8. Why is an RNN (Recurrent Neural Network) used for machine translation, say translating English to French? (Check all that apply.)

A. It can be trained as a supervised learning problem.

B. It is strictly more powerful than a Convolutional Neural Network (CNN).

C. It is applicable when the input/output is a sequence (e.g., a sequence of words).

D. RNNs represent the recurrent process of Idea->Code->Experiment->Idea->....

My analysis: RNNs suit machine translation because translation can be handled as a supervised learning problem, and because RNNs are well suited to processing sequential data.
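As a hedged illustration of the "sequence" point (the dimensions, weights, and word vectors below are arbitrary placeholders, not anything from the course), a single recurrent step in plain NumPy looks like this: the same weights are reused at every position, and the hidden state carries information from earlier words forward, which is why an RNN handles variable-length inputs such as sentences.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 8, 5  # placeholder sizes

# Shared weights, reused at every time step
Wxh = 0.1 * rng.standard_normal((hidden_size, input_size))
Whh = 0.1 * rng.standard_normal((hidden_size, hidden_size))
bh = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # Combine the current word vector with the previous hidden state
    return np.tanh(Wxh @ x_t + Whh @ h_prev + bh)

# A "sentence" of 4 word vectors, processed one step at a time
sentence = [rng.standard_normal(input_size) for _ in range(4)]
h = np.zeros(hidden_size)
for x_t in sentence:
    h = rnn_step(x_t, h)

print(h.shape)  # (8,) final hidden state summarizing the whole sequence
```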

9. In this diagram which we hand-drew in lecture, what do the horizontal axis (x-axis) and vertical axis (y-axis) represent?

[Image: hand-drawn diagram from the lecture]

A. x-axis is the performance of the algorithm; y-axis (vertical axis) is the amount of data.

B. x-axis is the input to the algorithm; y-axis is outputs.

C. x-axis is the amount of data; y-axis is the size of the model you train.

D. x-axis is the amount of data; y-axis (vertical axis) is the performance of the algorithm.

My analysis: in the hand-drawn figure from the lecture, the horizontal axis is the amount of data and the vertical axis is the performance of the algorithm.

10. Assuming the trends described in the previous question's figure are accurate (and hoping you got the axis labels right), which of the following are true? (Check all that apply.)

A. Decreasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly.

B. Decreasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly.

C. Increasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly.

D. Increasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly.

My analysis: "Scale drives deep learning progress." The figure from question 9 leads to the same conclusion: increasing either the amount of data or the size of the neural network generally does not hurt the algorithm's performance, and it can sometimes help significantly.
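Purely as an illustration of how one could probe the data-scale part of this claim (the dataset is synthetic, and the model, sizes, and parameters are assumptions of mine, not from the course), the scikit-learn sketch below trains the same small network on increasingly large slices of a training set and reports validation accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification dataset, used only for illustration
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Train the same network on increasingly large slices of the training data
for n in (100, 500, 2000, len(X_train)):
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    print(f"train size {n:>5}: validation accuracy = {model.score(X_val, y_val):.3f}")
```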

Reposted from: https://my.oschina.net/u/2619218/blog/1554403
