Blog (27)
[Translated] Machine Learning Notes: Backpropagation Algorithm

Backpropagation Algorithm "Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and linear regression. Our g…
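
The post goes on to derive the layer-by-layer error terms; in the course's notation they take roughly this form (a sketch reconstructed from the material being translated, with $g'$ the sigmoid derivative and $\odot$ the element-wise product):

    \delta^{(L)} = a^{(L)} - y, \qquad \delta^{(l)} = \big((\Theta^{(l)})^T \delta^{(l+1)}\big) \odot g'(z^{(l)}), \qquad g'(z^{(l)}) = a^{(l)} \odot (1 - a^{(l)})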

2017-07-16 10:59:12 321

[Translated] Machine Learning Notes: Neural Network Model (II)

Model Representation II To re-iterate, the following is an example of a neural network: $a_1^{(2)} = g(\Theta_{10}^{(1)}x_0 + \Theta_{11}^{(1)}x_1 + \Theta_{12}^{(1)}x_2 + \Theta_{13}^{(1)}x_3)$, $a_2^{(2)} = g(\Theta_{20}^{(1)}x_0 + \Theta_{21}^{(1)}x_1 + \Theta_{22}^{(1)}x_2 + \Theta_{23}^{(1)}x_3)$, $a_3^{(2)} = g(\Theta_{30}^{(1)}x_0 + \Theta_{31}^{(1)}x_1 + \Theta_{32}^{(1)}x_2 + \Theta_{33}^{(1)}x_3)$ …
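
The same course material also gives the vectorized form that these equations instantiate element by element (with $a^{(1)} = x$, and a bias unit $a_0^{(2)} = 1$ added before the last step):

    z^{(2)} = \Theta^{(1)} a^{(1)}, \qquad a^{(2)} = g(z^{(2)}), \qquad h_\Theta(x) = a^{(3)} = g(\Theta^{(2)} a^{(2)})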

2017-07-15 15:16:33 249

[Translated] Machine Learning Notes: Neural Network Model (I)

Model Representation I Let's examine how we will represent a hypothesis function using neural networks. At a very simple level, neurons are basically computational units that take inputs (dendrites)…

2017-07-15 14:37:06 302

[Translated] Machine Learning Notes: Regularized Logistic Regression

Regularized Logistic Regression We can regularize logistic regression in a similar way that we regularize linear regression. As a result, we can avoid overfitting. The following image shows how the r…
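
For reference, the regularized cost function that post works toward (from the same course material; $\lambda$ is the regularization parameter, and $\theta_0$ is not penalized):

    J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + (1 - y^{(i)})\log\big(1 - h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2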

2017-07-14 11:01:18 746

[Translated] Machine Learning Notes: Regularized Linear Regression

Regularized Linear Regression Note: [8:43 - It is said that X is non-invertible if m ≤ n. The correct statement should be that X is non-invertible if m < n, and may be non-invertible if m = n.] We can apply regularization to both linear…
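
The corresponding gradient-descent update from the course (for $j \ge 1$; $\theta_0$ is updated without the regularization term):

    \theta_j := \theta_j\Big(1 - \alpha\frac{\lambda}{m}\Big) - \alpha\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)}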

2017-07-14 10:33:35 412

[Translated] Machine Learning Notes: The Problem of Overfitting

The Problem of Overfitting Consider the problem of predicting y from x ∈ R. The leftmost figure below shows the result of fitting a $y = \theta_0 + \theta_1 x$ to a dataset. We see that the data doesn’t really lie…

2017-07-14 08:40:35 327

[Translated] Machine Learning Notes: Advanced Optimization

Advanced Optimization Note: [7:35 - '100' should be 100 instead. The value provided should be an integer and not a character string.] "Conjugate gradient", "BFGS", and "L-BFGS" are more sophisticated…
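
The post's examples use Octave's fminunc; a rough Python analogue, assuming a plain logistic-regression cost (the toy data below is made up for illustration), might look like:

    import numpy as np
    from scipy.optimize import minimize

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost_and_grad(theta, X, y):
        # Logistic-regression cost J(theta) and its gradient, as in the notes.
        m = len(y)
        h = sigmoid(X @ theta)
        J = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
        grad = X.T @ (h - y) / m
        return J, grad

    # Toy data: X includes a leading column of ones for theta_0.
    X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]])
    y = np.array([0.0, 0.0, 1.0, 1.0])

    # BFGS plays the role of fminunc: no learning rate to pick by hand.
    res = minimize(cost_and_grad, x0=np.zeros(2), args=(X, y),
                   jac=True, method='BFGS')
    print(res.x)  # fitted theta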

2017-07-13 18:33:36 471

[Translated] Machine Learning Notes: Simplified Cost Function and Gradient Descent

Simplified Cost Function and Gradient Descent Note: [6:53 - the gradient descent equation should have a 1/m factor] We can compress our cost function's two conditional cases into one case: $\mathrm{Cost}(h_\theta(x), y)$ …
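
The compressed single-case form the excerpt is about to state (from the same course material):

    \mathrm{Cost}\big(h_\theta(x), y\big) = -y\,\log\big(h_\theta(x)\big) - (1 - y)\,\log\big(1 - h_\theta(x)\big)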

2017-07-13 17:59:24 812

[Translated] Machine Learning Notes: Decision Boundary

Decision Boundary In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows: $h_\theta(x) \ge 0.5 \rightarrow y = 1$, $h_\theta(x) < 0.5 \rightarrow y = 0$. The way our logistic f…
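
The reasoning behind the boundary, from the same material: the sigmoid satisfies $g(z) \ge 0.5$ exactly when $z \ge 0$, so

    h_\theta(x) = g(\theta^T x) \ge 0.5 \quad \text{whenever} \quad \theta^T x \ge 0,

and the set where $\theta^T x = 0$ is the decision boundary.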

2017-07-13 14:50:43 874

[Translated] Machine Learning Notes: Normal Equation Non-invertibility

Normal Equation Non-invertibility When implementing the normal equation in Octave we want to use the 'pinv' function rather than 'inv'. The 'pinv' function will give you a value of θ even if $X^TX$ is non-invertible…
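
A minimal numpy analogue of that Octave advice (the data is made up; the duplicated column makes $X^TX$ singular on purpose):

    import numpy as np

    # Duplicate columns make X^T X singular, so inv() would fail;
    # pinv() still returns a usable (minimum-norm) theta.
    X = np.array([[1.0, 2.0, 2.0],
                  [1.0, 3.0, 3.0],
                  [1.0, 4.0, 4.0]])
    y = np.array([1.0, 2.0, 3.0])

    theta = np.linalg.pinv(X.T @ X) @ X.T @ y
    print(theta)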

2017-07-10 08:39:44 897

[Translated] Machine Learning Notes: Normal Equation

Normal Equation Note: [8:00 to 8:44 - The design matrix X (in the bottom right side of the slide) given in the example should have elements x with subscript 1 and superscripts varying from 1 to m because…
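
The closed-form solution that post presents:

    \theta = (X^T X)^{-1} X^T y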

2017-07-10 08:38:18 190

[Translated] Machine Learning Notes: Learning Rate

Gradient Descent in Practice II - Learning Rate Note: [5:20 - the x-axis label in the right graph should be θ rather than No. of iterations] Debugging gradient descent. Make a plot with number of i…
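
A small sketch of the debugging technique the excerpt describes, tracking $J(\theta)$ per iteration so it can be plotted against the iteration count (names and data here are illustrative, not from the post):

    import numpy as np

    def gradient_descent(X, y, alpha=0.1, iters=500):
        # Record J(theta) at each iteration; with a well-chosen alpha,
        # the recorded cost should decrease on every step.
        m, n = X.shape
        theta = np.zeros(n)
        history = []
        for _ in range(iters):
            err = X @ theta - y
            history.append((err @ err) / (2 * m))  # J(theta) before this update
            theta -= alpha * (X.T @ err) / m
        return theta, history

Plotting history against the iteration index reproduces the plot the note discusses; a curve that rises or oscillates suggests $\alpha$ is too large, and the course suggests declaring convergence once $J$ drops by less than $10^{-3}$ in one iteration.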

2017-07-10 08:34:47 407

[Translated] Machine Learning Notes: Features and Polynomial Regression

Features and Polynomial Regression We can improve our features and the form of our hypothesis function in a couple different ways. We can combine multiple features into one. For example, we can combine…
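
For example, the material combines $x_1$ and $x_2$ into a new feature $x_3 = x_1 \cdot x_2$, and changes the form of the hypothesis with polynomial terms:

    h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^3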

2017-07-10 08:32:12 316

[Translated] Machine Learning Notes: Feature Scaling

Gradient Descent in Practice I - Feature Scaling Note: [6:20 - The average size of a house is 1000 but 100 is accidentally written instead] We can speed up gradient descent by having each of our input…
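
The mean-normalization rule the post covers ($\mu_i$ is the mean of feature $i$, and $s_i$ its range or standard deviation):

    x_i := \frac{x_i - \mu_i}{s_i}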

2017-07-10 08:30:48 271

[Translated] Machine Learning Notes: Gradient Descent for Linear Regression

Gradient Descent For Linear Regression Note: [At 6:15 "h(x) = -900 - 0.1x" should be "h(x) = 900 - 0.1x"] When specifically applied to the case of linear regression, a new form of the gradient descent…
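
The specialized update rules, repeated until convergence (from the same material):

    \theta_0 := \theta_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big), \qquad \theta_1 := \theta_1 - \alpha\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x^{(i)}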

2017-07-07 15:58:11 162

[Translated] Machine Learning Notes: Gradient Descent (II)

Gradient Descent Intuition In this video we explored the scenario where we used one parameter θ1 and plotted its cost function to implement a gradient descent. Our formula for a single parameter was…
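
That single-parameter formula, completing the cut-off sentence from the course notes:

    \theta_1 := \theta_1 - \alpha\,\frac{d}{d\theta_1} J(\theta_1)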

2017-07-07 15:37:33 215

[Reposted] Git Notes: Remote Repositories

To link a remote repository, use the command git remote add origin git@server-name:path/repo-name.git; once linked, use git push -u origin master to push everything on the master branch for the first time; after that, following each local commit you can push the latest changes with git push origin master whenever needed; one of the biggest advantages of a distributed version control system…

2017-07-07 12:17:04 131

[Reposted] Git Notes: checkout and reset

File-level operations: git add files puts the current files into the staging area. git commit takes a snapshot of the staging area and commits it. git reset -- files undoes the most recent git add files; you can also use git reset to unstage all staged files. git checkout -- files copies files from the staging area back into the working directory, discarding local modifications. The checkout command is used to, from a historical commit (…

2017-07-07 10:49:11 326

[Translated] Machine Learning Notes: Gradient Descent

Gradient Descent So we have our hypothesis function and we have a way of measuring how well it fits into the data. Now we need to estimate the parameters in the hypothesis function. That's where grad…
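
The general update rule that post introduces (all $\theta_j$ updated simultaneously):

    \theta_j := \theta_j - \alpha\,\frac{\partial}{\partial\theta_j} J(\theta_0, \theta_1)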

2017-07-07 09:57:13 173

[Translated] Machine Learning Notes: Cost Function (III)

Cost Function - Intuition II A contour plot is a graph that contains many contour lines. A contour line of a two variable function has a constant value at all points of the same line. An example of s…

2017-07-07 09:12:12 237

[Translated] Machine Learning Notes: Cost Function (II)

Cost Function - Intuition I If we try to think of it in visual terms, our training data set is scattered on the x-y plane. We are trying to make a straight line (defined by $h_\theta(x)$) which passes thro…

2017-07-07 08:18:39 207

[Translated] Machine Learning Notes: Cost Function

Cost Function We can measure the accuracy of our hypothesis function by using a cost function. This takes an average difference (actually a fancier version of an average) of all the results of the hy…
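
The squared-error cost function the excerpt refers to:

    J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)^2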

2017-07-06 21:50:55 183

[Translated] Machine Learning Notes: Model Representation

Model Representation To establish notation for future use, we’ll use $x^{(i)}$ to denote the “input” variables (living area in this example), also called input features, and $y^{(i)}$ to denote the “output…
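
For univariate linear regression, the hypothesis this notation leads to is:

    h_\theta(x) = \theta_0 + \theta_1 x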

2017-07-06 20:48:21 295

[Original] Python Study Notes (I)

The len() function counts the number of characters in a str. list: one of Python's built-in data types is the list. A list is an ordered collection, and you can add and remove elements from it at any time. With d = ['aaaa', 'bbbb', 'cccc'], d is a list and len(d) is 3, where d[0] is 'aaaa', d[-1] is 'cccc', and d[-2] is 'bbbb'. A list is a mutable ordered sequence, so you can…
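
A runnable sketch of the operations the excerpt describes, plus the mutation methods the cut-off sentence appears to be heading toward (append/insert/pop are a guess at the continuation):

    d = ['aaaa', 'bbbb', 'cccc']
    print(len(d))              # 3
    print(d[0], d[-1], d[-2])  # aaaa cccc bbbb

    # Lists are mutable ordered sequences: elements can be
    # added and removed at any time.
    d.append('dddd')    # add to the end
    d.insert(1, 'x')    # add at index 1
    d.pop()             # remove (and return) the last element
    print(d)            # ['aaaa', 'x', 'bbbb', 'cccc']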

2017-07-05 20:44:43 194

[Translated] Machine Learning Notes: Unsupervised Learning

Unsupervised Learning Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know…

2017-07-05 15:09:54 334

[Translated] Machine Learning Notes: Supervised Learning

Supervised Learning In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the o…

2017-07-05 14:03:11 162

[Translated] Machine Learning Notes

What is Machine Learning? Two definitions of Machine Learning are offered. Arthur Samuel described it as: "the field of study that gives computers the ability to learn without being explicitly programmed."…

2017-07-05 09:31:16 223
