ML Study Notes
Study notes for Andrew Ng's course
CCrazyGuy
ML Notes: Week 5 - Neural Networks: Learning
1. Cost function for neural networks: $J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ y_k^{(i)} \log\left((h_\Theta(x^{(i)}))_k\right) + (1 - y_k^{(i)}) \log\left(1 - (h_\Theta(x^{(i)}))_k\right) \right] + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left(\Theta_{j,i}^{(l)}\right)^2$ (Original post, 2020-06-24)
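The regularized cost function above can be sketched in NumPy. This is a minimal illustration, not code from the post: the function name `nn_cost` and the weight-matrix layout (bias column first in each $\Theta^{(l)}$) are assumptions of mine.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Thetas, X, Y, lam):
    """Regularized cross-entropy cost J(Theta) for a feed-forward network.

    Thetas: list of weight matrices Theta^(l), each of shape
            (s_{l+1}, s_l + 1), with the bias column first.
    X: (m, n) inputs; Y: (m, K) one-hot labels; lam: lambda.
    """
    m = X.shape[0]
    # Forward propagation through all layers.
    A = X
    for Theta in Thetas:
        A = np.hstack([np.ones((m, 1)), A])   # prepend the bias unit a_0 = 1
        A = sigmoid(A @ Theta.T)
    H = A                                      # (m, K) outputs h_Theta(x)
    # Unregularized cross-entropy term: the double sum over i and k.
    J = -np.sum(Y * np.log(H) + (1 - Y) * np.log(1 - H)) / m
    # Regularization: sum of squared weights, excluding the bias columns.
    J += lam / (2 * m) * sum(np.sum(T[:, 1:] ** 2) for T in Thetas)
    return J
```

With all-zero weights every output is $\sigma(0) = 0.5$, so the unregularized cost reduces to $K \log 2$, a handy sanity check.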
ML Notes: Week 4 - Neural Networks: Representation
1. Model representation · 1.1 Neural network model · 1.2 Some notations in neural networks · 1.3 Forward propagation in a neural network · 2. How to compute a complex nonlinear function? · 2.1 AND · 2.2 … (Original post, 2020-06-06)
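The "2.1 AND" item refers to computing logical AND with a single sigmoid unit via forward propagation. A sketch, using the weights commonly shown in the course ($\Theta = [-30, 20, 20]$; treat the specific values as an assumption here, since the post body isn't shown):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid unit computing logical AND: bias weight -30, input weights
# +20 each, so the activation is near 1 only when x1 = x2 = 1.
THETA_AND = np.array([-30.0, 20.0, 20.0])

def and_gate(x1, x2):
    a = sigmoid(THETA_AND @ np.array([1.0, x1, x2]))  # prepend bias x_0 = 1
    return int(round(a))
```

For inputs (1, 1) the unit sees $-30 + 20 + 20 = 10$ and $\sigma(10) \approx 1$; every other input combination gives $\sigma(z)$ with $z \le -10$, which rounds to 0.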
ML Notes: Week 3 - Logistic regression
What is the logistic model: in statistics, the logistic model (or logit model) is used to model the probability of a certain class or event existing, such as pass/fail, win/lose, alive/dead, or healthy/sick. This can be extended to model several classes of events … (Original post, 2020-05-26)
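The logistic hypothesis $h_\theta(x) = g(\theta^T x)$ and its decision rule can be sketched in a few lines of NumPy. A minimal illustration, assuming the usual convention that `X` carries a bias column $x_0 = 1$ (the function names are mine, not from the post):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, X):
    """h_theta(x) = g(theta^T x): estimated probability that y = 1."""
    return sigmoid(X @ theta)

def predict(theta, X, threshold=0.5):
    """Predict y = 1 whenever h_theta(x) >= threshold."""
    return (predict_proba(theta, X) >= threshold).astype(int)
```

The 0.5 threshold corresponds to predicting $y = 1$ exactly when $\theta^T x \ge 0$, since $g(0) = 0.5$.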
ML Notes: Week 2 - Multivariate Linear Regression
The Basic Theory for Multivariate Linear Regression. Hypothesis: $h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \ldots + \theta_n x_n = \theta^T X$; Parameters: $\theta_0, \theta_1, \ldots, \theta_n$ … (Original post, 2020-05-24)
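The vectorized hypothesis $h_\theta(x) = \theta^T x$ is a single matrix product once a bias column $x_0 = 1$ is prepended. A short NumPy sketch; the house-price feature values and $\theta$ below are made-up numbers for illustration only:

```python
import numpy as np

def hypothesis(theta, X):
    """h_theta(x) = theta^T x, evaluated for every row of X at once."""
    return X @ theta

# Example: two samples with two features each (hypothetical values).
X_raw = np.array([[2104.0, 3.0],
                  [1600.0, 3.0]])
# Prepend the x_0 = 1 bias column so theta_0 acts as the intercept.
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])
theta = np.array([50.0, 0.1, 20.0])
preds = hypothesis(theta, X)   # one prediction per row
```

Each prediction is $\theta_0 + \theta_1 x_1 + \theta_2 x_2$; vectorizing it as `X @ theta` avoids an explicit loop over samples.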
ML Notes: Week 1 - Univariate Linear Regression
The Basic Theory. Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x$; Parameters: $\theta_0, \theta_1$; Cost Function: $J(\theta) = \frac{1}{2m}\sum\limits_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$ (Original post, 2020-05-24)
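The univariate hypothesis and squared-error cost from this preview translate directly to NumPy. A minimal sketch (the function name `cost` is mine, not from the post):

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """J(theta) = (1/2m) * sum((h(x_i) - y_i)^2) with h(x) = theta0 + theta1*x."""
    m = x.shape[0]
    h = theta0 + theta1 * x          # hypothesis evaluated at every x_i
    return np.sum((h - y) ** 2) / (2 * m)
```

When the line passes exactly through every point the cost is 0, which is the minimum gradient descent drives toward.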