Deep Learning Theory Notes
An introduction to the theoretical foundations of deep learning.
hello_JeremyWang
A Summary of Anomaly Detection
1. A Summary of Anomaly Detection. Anomaly detection is essentially about spotting what is different: finding the anomalous points in the data. Prof. Hung-yi Lee introduces two methods in this part, one for labeled data and one for unlabeled data. 1.1 Labeled data. For labeled data (assumed clean here, i.e. containing no anomalous points), we can train a classifier that outputs each sample's class together with a confidence score. For example, for Naruto characters we can train a classifier that sorts images into Naruto, Sasuke, Hinata, and so on. If we then feed this classifier a picture of Conan, it will still assign it to some Naruto character, but because the classification… (Original, 2022-02-22)
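The confidence-score idea in the entry above can be sketched in a few lines. Everything here is illustrative: the logits, the softmax readout, and the 0.5 threshold are assumptions, not values from the post.

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def is_anomaly(logits, threshold=0.5):
    """Flag an input as anomalous when the classifier's top-class
    confidence falls below the threshold (a hypothetical value)."""
    confidence = softmax(np.asarray(logits, dtype=float)).max()
    return confidence < threshold

# An in-distribution image yields a peaked distribution (high confidence)...
print(is_anomaly([8.0, 0.5, 0.3]))   # confident -> not an anomaly
# ...while an out-of-distribution image (e.g. Conan) yields a flat one.
print(is_anomaly([1.1, 1.0, 0.9]))   # uncertain -> anomaly
```

The point is only the mechanism: a confident classifier concentrates probability on one class, and a low maximum confidence is the anomaly signal.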
Supplementary Notes on Auto-Encoders
Supplementary notes on Auto-Encoders. (Original, 2022-02-13)
Notes for Deep Learning Lessons of Prof. Hung-yi Lee (5)
0. Introduction. In this lesson, Prof. Lee teaches us a new field (one that is completely new to me) called explainable deep learning. Here I mainly want to introduce a particular model known as LIME. 1. LIME. The main idea of LIME is t… (Original, 2021-11-14)
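The core idea behind LIME (perturb the input, query the black-box model, fit a locally weighted linear surrogate) can be sketched as follows. This is a hand-rolled sketch, not the `lime` library's implementation; the stand-in model, kernel width, and sample count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Stand-in for an opaque model (hypothetical; any predictor works)."""
    return np.sin(x[:, 0]) + x[:, 1] ** 2

def lime_explain(x0, n_samples=500, width=0.5):
    """Fit a locally weighted linear surrogate around x0."""
    # 1. Perturb the instance of interest.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = black_box(X)
    # 2. Weight samples by proximity to x0 (an RBF kernel here).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / width ** 2)
    # 3. Weighted least squares on [1, features].
    A = np.column_stack([np.ones(n_samples), X]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[1:]  # local feature importances (slopes)

# Around x0 = (0, 1) the local slopes of sin(x0) + x1^2 are about (1, 2).
print(lime_explain(np.array([0.0, 1.0])))
```

The returned coefficients approximate the black-box function's local gradient, which is exactly the kind of per-feature explanation LIME produces.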
Word Embedding
1. Word Embedding. In the previous post, Pytorch实战__LSTM做文本分类 (PyTorch in practice: text classification with LSTM), we introduced word embedding. Its core idea is to map words into a vector space while preserving as much of the relationship between words as possible; you can think of it as translating words into a form the network can understand. A famous example is the figure below: King − man + woman = Queen, which matches our intuition nicely. The most important word-embedding model is Word2vec. Below we briefly introduce its two most common algorithms: Skip-gram and C… (Original, 2021-11-07)
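The King − man + woman = Queen analogy above can be demonstrated with vector arithmetic. The 3-dimensional embeddings below are made up for illustration (real Word2vec vectors typically have 100 to 300 dimensions learned from a corpus).

```python
import numpy as np

# Toy embeddings, invented for this example.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.9, 0.4]),
}

def nearest(vec, exclude):
    """Return the word whose embedding has the highest cosine
    similarity to `vec`, skipping the query words."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# king - man + woman should land near "queen".
v = emb["king"] - emb["man"] + emb["woman"]
print(nearest(v, exclude={"king", "man", "woman"}))  # -> queen
```

The analogy works because the embedding space encodes the "gender" relationship as a roughly constant offset vector between word pairs.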
Notes for Deep Learning Lessons of Pro. Hung-yi Lee (3)
To be honest, I did not fully understand the process of developing perceptron to nerual network. Today, Pro. Lee help me solve this problem. Well, feel good. Maybe I have not gotten that point, haha. As we all know, the perceptron model is a linear model.原创 2021-10-12 20:09:12 · 96 阅读 · 0 评论 -
Fundamentals of Convolutional Neural Networks (CNN)
1. Two defining characteristics of CNNs. On the very first slide of the course, Prof. Hung-yi Lee poses a question: "Can we simplify our network by exploiting the properties of images?" For image processing this is a very worthwhile question. The images we work with are usually matrices of very high dimension, and simply feeding them into a fully connected neural network produces far too many parameters and makes training cumbersome. CNNs arose to solve this problem. Before explaining how a CNN exploits image properties during training, we first state its two defining characteristics and then elaborate on them: extract and learn an image's salient features rather than the whole image; while preserving the image's features… (Original, 2021-10-24)
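The parameter blow-up that the entry above mentions is easy to quantify. The numbers below (image size, hidden width, filter count) are illustrative assumptions, not figures from the lecture.

```python
# Parameter counts for a 100x100 grayscale image.
H, W = 100, 100

# Fully connected: every pixel connects to every hidden unit.
hidden_units = 1000
fc_params = H * W * hidden_units            # 10,000,000 weights

# Convolutional: a small 3x3 filter is shared across the whole image,
# so the count depends only on kernel size and number of filters.
kernel_size, n_filters = 3 * 3, 64
conv_params = kernel_size * n_filters       # 576 weights

print(fc_params, conv_params)               # 10000000 576
```

Weight sharing is precisely the "learn salient local features instead of the whole image" idea: one small filter scans every position, so the parameter count no longer scales with the image size.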
Notes for Deep Learning Lessons of Pro. Hung-yi Lee (2)
Today, Knowledge concerning about the optimization of deep learning is written here. What is the meaning of optimaztion? The following ppt shows us the answer. 1. SGD with Momentum (SGDM) Just as the name shows us, SGDM is invented by combining SGD with M原创 2021-10-10 20:15:52 · 119 阅读 · 0 评论 -
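The SGDM update the entry above introduces can be sketched in a few lines: a velocity term accumulates an exponentially decaying sum of past gradients, and the parameters move along that velocity. The learning rate, momentum coefficient, and test function below are illustrative, not from the lecture.

```python
def sgdm_step(theta, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-Momentum update: the velocity remembers past
    gradients, smoothing and accelerating the descent direction."""
    velocity = momentum * velocity - lr * grad
    return theta + velocity, velocity

# Minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta, v = 5.0, 0.0
for _ in range(200):
    theta, v = sgdm_step(theta, 2 * theta, v)
print(theta)  # close to the minimum at 0
```

Compared with plain SGD, the momentum term lets the iterate keep moving through flat regions and damps oscillation across steep valley walls.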
Notes for Deep Learning Lessons of Pro. Hung-yi Lee (1)
I will try to use English to write down the knowledge learned in this class, just with the aim to make sure I will not forget this important language tool. I hope I can achieve this target. Maybe I will give up one day, haha. 1. Tip1 for Gradient Descent原创 2021-10-07 17:14:06 · 151 阅读 · 0 评论 -
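The usual first tip for gradient descent concerns the learning rate, and its effect can be seen in a tiny sketch. The quadratic objective and the two rates below are chosen for illustration only (the entry's preview is cut off before its actual tip).

```python
def gradient_descent(lr, steps=50, theta=5.0):
    """Minimize f(theta) = theta^2 with plain gradient descent."""
    for _ in range(steps):
        theta -= lr * 2 * theta      # gradient of theta^2 is 2 * theta
    return theta

# A well-chosen rate converges; one that is too large diverges.
print(gradient_descent(lr=0.1))      # approaches 0
print(gradient_descent(lr=1.1))      # overshoots and blows up
```

Each step multiplies theta by (1 − 2·lr), so convergence requires that factor to have magnitude below 1; lr = 1.1 gives −1.2, whose powers grow without bound.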
Notes for Deep Learning Lessons of Pro. Hung-yi Lee (4)
1. Tips for DNN In this lesson, Pro. LEE taught us some tips for deep neural network, which contains: Adaptive Learning Rate New Activation Function Dropout Regularization Early Stopping 1.1 Adaptive Learning rate The knowledge about Adaptive Learning R原创 2021-10-21 23:56:17 · 137 阅读 · 0 评论
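One of the tips listed above, early stopping, reduces to a small piece of bookkeeping over the validation loss. This is a sketch of the idea only; the `patience` value and the loss curve are made up for illustration.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Stop when validation loss has not improved for `patience`
    consecutive epochs; return the epoch whose weights we would keep."""
    best_loss, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # validation loss keeps rising: stop training
    return best_epoch

# Loss improves until epoch 3, then overfitting sets in.
print(train_with_early_stopping([1.0, 0.8, 0.7, 0.65, 0.7, 0.8, 0.9]))  # -> 3
```

In a real training loop the same logic would also checkpoint the model at each new best epoch, so that the returned epoch's weights can be restored.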