[Notes] Week 1: Machine Learning, Supervised Learning, Unsupervised Learning, Univariate Linear Regression, Linear Algebra Review (Machine Learning)

Aside: Week 1 only has a quiz and no programming assignment, so I'm just writing up some notes as a summary. The English parts are copied from the course material; I mostly restate them in my own words. If anything is wrong, feel free to point it out in the comments.

Main content:

1. Machine Learning

Two definitions of Machine Learning are offered. Arthur Samuel described it as: "the field of study that gives computers the ability to learn without being explicitly programmed." This is an older, informal definition.

Tom Mitchell provides a more modern definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Example: playing checkers.

E = the experience of playing many games of checkers

T = the task of playing checkers.

P = the probability that the program will win the next game.

In general, any machine learning problem can be assigned to one of two broad classifications:

Supervised learning and Unsupervised learning.

Two definitions: informally, machine learning is about computers learning without being explicitly programmed; more formally, a program learns from experience E to perform a task T, as measured by a performance measure P.

Two broad classifications: supervised learning and unsupervised learning.

2. Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.

Supervised learning problems are categorized into "regression" and "classification" problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.

Example 1:

Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.

We could turn this example into a classification problem by instead making our output about whether the house "sells for more or less than the asking price." Here we are classifying the houses based on price into two discrete categories.

Example 2:

(a) Regression - Given a picture of a person, we have to predict their age on the basis of the given picture.

(b) Classification - Given a patient with a tumor, we have to predict whether the tumor is malignant or benign.

Definition: we are given a data set with known correct outputs (the "right answers"), and we want to find the relationship between the input and the output.

Two kinds of problems: regression (the output is continuous, e.g. the price in a house-price prediction problem) and classification (the output is discrete, e.g. detecting whether a breast tumor is benign or malignant).

3. Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.

We can derive this structure by clustering the data based on relationships among the variables in the data.

With unsupervised learning there is no feedback based on the prediction results.

Example:

Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.

Non-clustering: The "Cocktail Party Algorithm", allows you to find structure in a chaotic environment. (i.e. identifying individual voices and music from a mesh of sounds at a cocktail party).

Definition: unsupervised learning lets us approach problems without knowing what the results should look like; we can derive structure from the data even when we don't know the effect of the variables.

Two kinds of problems / examples: clustering (e.g. social network analysis, market segmentation) and non-clustering (e.g. the cocktail party problem, finding structure in a chaotic environment; the lecture shows the final effect, roughly separating the different speakers' voices from the party audio).
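To make the clustering idea a bit more concrete, here is a minimal sketch using k-means on toy 2-D data (k-means and scikit-learn are my own choices, not something from Week 1; this is just an illustration of grouping unlabeled points by similarity):

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two loose blobs of 2-D points (no "correct answers" are given)
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2)),
])

# k-means groups the points into 2 clusters purely from the structure of the data
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels[:5], labels[-5:])   # points from the two blobs end up in different clusters
```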

4. Linear Regression with One Variable

Since this part involves formulas, I won't copy the course text; here is a summary.

Notation: x^(i) denotes the input variables (also called input features), y^(i) denotes the output variable, and (x^(i), y^(i)) is one training example (a training set has m training examples); m is the number of training examples, X is the space of input values, Y is the space of output values, and h is the hypothesis function. For one variable, the hypothesis is h(x) = theta_0 + theta_1*x.

Cost function: J(theta) = 1/(2m) * sum_{i=1}^{m} (h(x^(i)) - y^(i))^2 (this is essentially least squares, as taught in school; theta is the column vector [theta_0; theta_1], and the sum runs over i = 1 to m of the squared errors h(x^(i)) - y^(i)).

Explanation: the cost function is used to evaluate how good a choice of the theta values is; the smaller J is, the better theta is. Moreover, for linear regression J is always a convex function, so a local minimum is necessarily the global minimum, which is why we can solve it with gradient descent.
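A minimal NumPy sketch of this cost function (the function name compute_cost and the toy data are my own; the course itself uses Octave/MATLAB, this is just Python for illustration):

```python
import numpy as np

def compute_cost(X, y, theta):
    """J(theta) = 1/(2m) * sum over i of (h(x^(i)) - y^(i))^2."""
    m = len(y)
    predictions = X @ theta        # h(x^(i)) for every training example
    errors = predictions - y
    return (errors @ errors) / (2 * m)

# Toy training set; the first column of X is the intercept term x_0 = 1
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

print(compute_cost(X, y, np.array([0.0, 1.0])))   # 0.0, since h(x) = x fits this data exactly
print(compute_cost(X, y, np.array([0.0, 0.0])))   # > 0, a worse theta gives a larger J
```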

Gradient descent: theta_j := theta_j - alpha * (partial derivative of J with respect to theta_j) (':=' means assignment; note that all thetas must be updated simultaneously, so if you update them one at a time, compute all the new values first and only then assign them; if theta is a column vector, the simultaneous update happens automatically; alpha is the learning rate).

Explanation: if alpha is too small, gradient descent is too slow; if it is too large, it may fail to converge. The intuition is standing at an arbitrary point on a hill, finding the direction of steepest descent, taking a step whose size is set by the learning rate, and repeating until reaching a local minimum (here J is convex, so that is the global minimum). This is also called batch gradient descent, because every step uses all of the training examples.
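A minimal batch gradient descent sketch under the same assumptions (NumPy instead of the course's Octave/MATLAB; alpha and the iteration count are arbitrary choices for this toy data). The vectorized update changes theta_0 and theta_1 at once, which is exactly the simultaneous update mentioned above:

```python
import numpy as np

def gradient_descent(X, y, theta, alpha=0.1, num_iters=1500):
    """Batch gradient descent: every step uses all m training examples."""
    m = len(y)
    for _ in range(num_iters):
        gradient = X.T @ (X @ theta - y) / m   # partial derivatives dJ/dtheta_j
        theta = theta - alpha * gradient       # simultaneous update of all theta_j
    return theta

# Same toy data as in the cost function sketch
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

theta = gradient_descent(X, y, np.zeros(2))
print(theta)   # approaches [0, 1], i.e. h(x) = x
```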

5. Linear Algebra Review

This part covers the definitions of matrices and vectors, emphasizes the difference between 1-indexed and 0-indexed (i.e. coordinates sometimes start from 0 and sometimes from 1; MATLAB starts from 1), and reviews matrix addition, scalar-matrix multiplication, the rules and properties of matrix-vector multiplication, and the matrix inverse and transpose.
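A small NumPy sketch of the operations reviewed here (note that NumPy is 0-indexed, while MATLAB/Octave is 1-indexed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v = np.array([5.0, 6.0])

print(A + B)              # matrix addition (elementwise)
print(3 * A)              # scalar-matrix multiplication
print(A @ v)              # matrix-vector multiplication
print(A @ B)              # matrix-matrix multiplication (in general A @ B != B @ A)
print(A.T)                # transpose
print(np.linalg.inv(A))   # inverse (square, non-singular matrices only)
print(A[0, 0])            # 0-indexed: the top-left element; MATLAB would write A(1, 1)
```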
