Course2_AdvancedAlgorithm

Andrew Ng Machine Learning Course 2 — Advanced Learning Algorithms

Week 1

Why use a neural network

image-20240514115231472

The network demonstrated the ability of neural networks to handle complex decisions by dividing the decisions between multiple units.


The expression for a neuron's activation

image-20240519183658072

TensorFlow

Building a neural network to judge whether coffee is well roasted

image-20240813153202470

Increasing the training set size to reduce the number of training epochs

image-20240813154630387

Why applying a sigmoid activation in the final layer is not considered best practice

image-20240813155117208

How to determine the number of parameters in each layer

image-20240813155605144
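The counting rule can be sketched in a few lines: each unit in a dense layer has one weight per input plus one bias, so the layer holds units × (inputs + 1) parameters. The layer sizes below are illustrative, chosen to match the 2-input, 3-unit coffee-roasting example.

```python
# Number of parameters in a dense layer: each of the `units` neurons has
# one weight per input plus one bias, so params = units * (n_inputs + 1).
def dense_layer_params(n_inputs, units):
    return units * (n_inputs + 1)

# Coffee-roasting example: 2 inputs -> 3 hidden units -> 1 output unit.
layer1 = dense_layer_params(2, 3)   # 3 * (2 + 1) = 9
layer2 = dense_layer_params(3, 1)   # 1 * (3 + 1) = 4
```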

batch and epoch

image-20240813160924848

Units in the first layer

image-20240813163802186

The shaded regions in the figure show that each unit is responsible for a different "bad roast" region.

The second layer

image-20240813170117461

Vectorizing the computation over multiple neurons

image-20240813214611999
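The idea can be sketched in NumPy: a per-neuron loop and a single matrix multiply produce the same activations. The weights and inputs below are made-up numbers, loosely shaped like the 2-feature, 3-unit coffee example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loop version: compute each neuron's activation one at a time.
def dense_loop(a_in, W, b):
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        a_out[j] = sigmoid(np.dot(a_in, W[:, j]) + b[j])
    return a_out

# Vectorized version: one matrix multiply covers every neuron at once.
def dense_vectorized(a_in, W, b):
    return sigmoid(a_in @ W + b)

a_in = np.array([200.0, 17.0])           # e.g. roasting temperature, duration
W = np.array([[0.01, -0.03, 0.02],
              [-0.02, 0.01, 0.05]])      # 2 inputs x 3 units (illustrative)
b = np.array([-1.0, 1.0, 2.0])
```

Hardware can parallelize the matrix multiply, which is why the vectorized form is the one used in practice.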


Tensorflow and Keras

TensorFlow is a machine learning package developed by Google. In 2019, Google integrated Keras into TensorFlow and released TensorFlow 2.0. Keras is a framework developed independently by François Chollet that creates a simple, layer-centric interface to TensorFlow. This course will be using the Keras interface.

Element-wise operations

image-20240813221009178
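A quick NumPy sketch of the distinction: element-wise operations pair up corresponding entries, while a dot product reduces two vectors to a scalar.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Element-wise: applied entry by entry, no explicit loop needed.
s = a + b         # [5., 7., 9.]
p = a * b         # [4., 10., 18.]  (Hadamard product, not a dot product)

# Contrast: the dot product reduces to a single scalar.
d = np.dot(a, b)  # 1*4 + 2*5 + 3*6 = 32.0
```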

Week 2

Train a neural network in TensorFlow

image-20240813231849728

  • epoch: number of steps in gradient descent

code snippet: a short, self-contained piece of code
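The epoch/batch bookkeeping can be made concrete without TensorFlow. Below is a minimal NumPy logistic-regression training loop on toy data (sizes and learning rate are illustrative): one epoch is a full pass over the training set, and each mini-batch contributes one gradient-descent step.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy separable labels

w, b = np.zeros(2), 0.0
batch_size, epochs, lr = 8, 100, 0.5
steps = 0

for epoch in range(epochs):                  # one epoch = one full pass over X
    for start in range(0, len(X), batch_size):
        Xb, yb = X[start:start+batch_size], y[start:start+batch_size]
        p = 1.0 / (1.0 + np.exp(-(Xb @ w + b)))   # sigmoid predictions
        grad_w = Xb.T @ (p - yb) / len(Xb)
        grad_b = np.mean(p - yb)
        w -= lr * grad_w
        b -= lr * grad_b
        steps += 1                           # one gradient step per batch

# 32 examples / batches of 8 = 4 steps per epoch; 100 epochs -> 400 steps.
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(preds == y)
```

`model.fit(X, y, epochs=100)` in Keras performs this same outer loop, with the batching and gradient computation handled internally.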

binary cross entropy

image-20240813233333042

  • "binary" re-emphasizes that this is a binary classification problem: handwritten digit recognition (0 vs. 1)

Specifying the loss function

image-20240813233720459
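The binary cross-entropy loss that Keras's `BinaryCrossentropy` computes can be written directly in NumPy; this is a minimal sketch with made-up label/prediction vectors.

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    # Average of  -[y*log(p) + (1-y)*log(1-p)]  over all examples.
    p = np.clip(p, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y_true = np.array([1.0, 0.0, 1.0])
perfect = binary_cross_entropy(y_true, np.array([1.0, 0.0, 1.0]))    # ~0
uncertain = binary_cross_entropy(y_true, np.array([0.5, 0.5, 0.5]))  # ln 2
```

Confident correct predictions drive the loss toward 0, while a model that always outputs 0.5 pays ln 2 ≈ 0.693 per example.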

back propagation

image-20240813234238226

  • TensorFlow performs backpropagation via model.fit(X, y, epochs=100)

ReLU vs sigmoid

image-20240814000853583

  • Using the "awareness" example to motivate a new activation function: ReLU (rectified linear unit)

Three activation functions

image-20240814001230511
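The three activations covered in the course (linear, sigmoid, ReLU) are one-liners in NumPy; the sample inputs below are arbitrary.

```python
import numpy as np

def linear(z):
    return z                        # a.k.a. "no activation function"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)       # rectified linear unit: max(0, z)

z = np.array([-2.0, 0.0, 3.0])
lin = linear(z)
r = relu(z)                         # negatives are clipped to 0
s0 = sigmoid(0.0)                   # sigmoid is 0.5 at z = 0
```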

How to choose the activation function for the output layer

image-20240814145810781

How to choose the activation function for the hidden layers

image-20240814150244820

multiclass classification

image-20240814204801289

Logistic Regression vs Softmax Regression

image-20240814205808199

Compare Cost function between logistic regression and softmax regression

image-20240814211947536

legible: easy to read

more numerically accurate implementation of softmax

image-20240814214929597
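The numerical issue can be demonstrated in a short NumPy sketch: a naive softmax overflows on large logits, while subtracting the max logit first gives the same answer safely. The logit values are made up.

```python
import numpy as np

def softmax_naive(z):
    e = np.exp(z)
    return e / e.sum()

def softmax_stable(z):
    # Subtracting max(z) cancels in the ratio, so the result is unchanged
    # mathematically, but exp() no longer overflows on large logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

small = np.array([1.0, 2.0, 3.0])
large = np.array([1000.0, 1001.0, 1002.0])

same_on_small = bool(np.allclose(softmax_naive(small), softmax_stable(small)))
with np.errstate(over="ignore", invalid="ignore"):
    naive_breaks = bool(np.isnan(softmax_naive(large)).any())  # exp(1000) -> inf
stable_ok = bool(np.isclose(softmax_stable(large).sum(), 1.0))
```

This is why the course recommends a linear output layer plus `from_logits=True` on the loss: TensorFlow can then fold the softmax into the cross-entropy in a numerically stable way.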

classification with multiple outputs

image-20240814215759680

two solutions to multi-label classification

image-20240814220000621

SparseCategoricalCrossentropy vs. CategoricalCrossentropy
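The two Keras losses compute the same quantity and differ only in the label format: sparse takes a class index, categorical takes a one-hot vector. A minimal NumPy sketch with an illustrative 3-class prediction:

```python
import numpy as np

def cross_entropy_onehot(y_onehot, p):
    # CategoricalCrossentropy-style: targets are one-hot vectors.
    return -np.sum(y_onehot * np.log(p))

def cross_entropy_sparse(y_index, p):
    # SparseCategoricalCrossentropy-style: target is a class index.
    return -np.log(p[y_index])

p = np.array([0.1, 0.7, 0.2])      # softmax output over 3 classes
loss_onehot = cross_entropy_onehot(np.array([0.0, 1.0, 0.0]), p)
loss_sparse = cross_entropy_sparse(1, p)   # same class, index form
```

Both reduce to -log of the probability assigned to the true class; sparse labels simply skip the one-hot conversion.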

How softmax differs from sigmoid and ReLU

  • Recognize that unlike ReLU and sigmoid, softmax spans multiple outputs.

Adam algorithm

image-20240815124426366

image-20240815124630539
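Adam's per-parameter step-size adaptation can be sketched in NumPy. The update rule below follows the standard Adam formulas; the quadratic objective and learning rate are illustrative choices, not from the course labs.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps running averages of the gradient (m) and its square (v)
    # and adapts the effective step size per parameter.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)       # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = w^2 (gradient 2w), starting from w = 5.0.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.1)
```

In Keras this corresponds to passing `optimizer=tf.keras.optimizers.Adam(learning_rate=...)` to `model.compile`.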

convolutional layer

image-20240815130229395

What is a computation graph

image-20240815204043468

gradient descent in a computation graph

image-20240815225537010

Why backpropagation is an efficient way to compute derivatives

image-20240815230349860
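A tiny worked computation graph makes the right-to-left pass concrete. This sketches J = (wx + b − y)² broken into single-operation nodes, with the chain rule applied once per node and a finite-difference check at the end (the input values are illustrative).

```python
# Computation graph for J = (w*x + b - y)^2, split into small steps:
#   c = w*x ;  a = c + b ;  d = a - y ;  J = d^2
x, y = 2.0, 1.0
w, b = -2.0, 8.0

# Forward pass: compute each node's value left to right.
c = w * x          # -4.0
a = c + b          #  4.0
d = a - y          #  3.0
J = d ** 2         #  9.0

# Backward pass: chain rule, one local derivative per node, right to left.
# Each intermediate result is reused, which is why backprop is efficient.
dJ_dd = 2 * d          # dJ/dd = 2d
dJ_da = dJ_dd * 1.0    # d(a - y)/da = 1
dJ_db = dJ_da * 1.0    # d(c + b)/db = 1
dJ_dc = dJ_da * 1.0    # d(c + b)/dc = 1
dJ_dw = dJ_dc * x      # d(w * x)/dw = x

# Sanity check against a numerical derivative of J with respect to w.
h = 1e-6
J_plus = ((w + h) * x + b - y) ** 2
numeric_dw = (J_plus - J) / h
```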

Cost decreases as the number of epochs increases

image-20240816191634146

Week 3

What is a diagnostic

image-20240817004527158

training set and test set

image-20240817005547443

training set & cross validation set & test set

variance and bias

image-20240817192947728

The relationship between J_cv and J_train reveals bias and variance

image-20240817194202182

The relationship between lambda and variance/bias

image-20240817195052387

learning curve

image-20240817212727491

image-20240817213656476

  • In the high-bias case, increasing the training set size does not improve performance, i.e., the error does not go down

image-20240817214402194

  • In the high-variance case, increasing the training set can improve performance

Debugging a learning algorithm

more features: too much flexibility, allowing very complicated models to be fit

image-20240817220335634

neural network and bias variance

image-20240818012133700

neural network regularization

image-20240818012903891

iterative loop of ML development

image-20240818091607244

error analysis

data augmentation

data centric approach

image-20240818102712578

transfer learning

image-20240818105736671

image-20240818110739144

full cycle of the machine learning project

image-20240818114340362

deployment

image-20240818120006017

skewed dataset

precision and recall

image-20240818161145748

image-20240818165247077

F1 score

image-20240818165854284
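Precision, recall, and F1 can be computed from the confusion-matrix counts directly; the labels below are a small made-up skewed dataset (3 positives out of 10).

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

precision = tp / (tp + fp)   # of predicted positives, how many are real
recall = tp / (tp + fn)      # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
```

The harmonic mean punishes an imbalance: a classifier with precision 1.0 but recall near 0 gets an F1 near 0, unlike the ordinary average.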

Week 4

Entropy as a measure of impurity

image-20240820134742665

image-20240820135714217

It looks a bit like the logistic regression loss function

the reduction of entropy is called information gain

image-20240820142009093

image-20240820142535059

image-20240820161854385
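Entropy and information gain can be sketched in a few lines of NumPy. The example numbers follow the course's cat-classification setup: 10 animals at the root (5 cats), and a split sending 5 to the left (4 cats) and 5 to the right (1 cat).

```python
import numpy as np

def entropy(p1):
    # H(p1) for a binary node; defined as 0 when p1 is 0 or 1.
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)

def information_gain(p_root, p_left, p_right, w_left):
    # Reduction in entropy: H(root) minus the weighted average of the
    # two branch entropies (weights = fraction of examples per branch).
    w_right = 1 - w_left
    return entropy(p_root) - (w_left * entropy(p_left)
                              + w_right * entropy(p_right))

# Root: 5/10 cats. Ear-shape split: left 4/5 cats, right 1/5 cats.
gain = information_gain(0.5, 4 / 5, 1 / 5, 5 / 10)   # about 0.28 bits
```

When building the tree, the feature with the largest information gain is chosen at each node.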

The steps of building a decision tree

image-20240820143203908

recursive splitting

image-20240820143803802

How to deal with a feature that takes multiple values

image-20240820150359348

Handle it with one-hot encoding; the resulting data can then be fed as input to a neural network, linear regression, or logistic regression.
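A minimal sketch of the encoding, using the course's ear-shape feature with its three values (pointy, floppy, oval):

```python
import numpy as np

# A 3-valued feature becomes 3 binary features; any model that expects
# numeric inputs can then consume it.
categories = ["pointy", "floppy", "oval"]
values = ["pointy", "oval", "floppy", "pointy"]   # four example animals

one_hot = np.array([[1 if v == c else 0 for c in categories]
                    for v in values])
# Each row contains exactly one 1, marking that example's category.
```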

Splitting on a continuous variable

image-20240820151520193

regression tree

image-20240820161022157

Similar to an ordinary decision tree, except that information gain becomes reduction in variance.

decision tree ensemble

A single decision tree is quite sensitive to the data, so to make the algorithm less sensitive and more robust, we build a tree ensemble.

sampling with replacement

image-20240820180722652
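The sampling step that feeds each tree in the ensemble is a one-liner in NumPy; the training-set size here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                    # training-set size

# Sampling with replacement: draw n indices, duplicates allowed, so each
# bootstrap sample is a slightly different view of the same training set.
sample = rng.choice(n, size=n, replace=True)

distinct = len(set(sample.tolist()))      # typically < n because of repeats
```

Each tree in the ensemble is trained on its own bootstrap sample, which is what decorrelates the trees.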

random forest algorithm

image-20240820181511166

XGBoost (Extreme Gradient Boosting)

  • It uses a classic idea, deliberate practice: later trees focus on the examples that earlier trees got wrong.

image-20240820182722533

image-20240820183104065

when to use decision tree

image-20240820203015822

image-20240820204329765
