
# Univariate Linear Regression

## Cost function

$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2, \qquad h_\theta(x^{(i)}) = \theta_0 x_0 + \theta_1 x_1$
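As a quick numeric check of the formula, here is a NumPy sketch that evaluates $J(\theta)$ on a made-up two-example dataset (the data is purely illustrative):

```python
import numpy as np

# Toy data: m = 2 examples; x_0 is the bias column of ones
X = np.array([[1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 2.0])
theta = np.array([0.0, 0.0])

m = len(y)
h = X @ theta                       # h_theta(x) for every example at once
J = np.sum((h - y) ** 2) / (2 * m)  # the cost formula above
print(J)                            # (1 + 4) / 4 = 1.25
```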

## Gradient descent

$\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$
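One update step of this rule can be written in vectorized form, since the summation for every $\theta_j$ is just $X^{T}(h - y)$. A NumPy sketch on made-up data:

```python
import numpy as np

# Toy data: m = 2 examples; x_0 is the bias column of ones
X = np.array([[1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 2.0])
theta = np.array([0.0, 0.0])
alpha = 0.1
m = len(y)

h = X @ theta
grad = X.T @ (h - y) / m      # the summation term, one entry per theta_j
theta = theta - alpha * grad  # both parameters updated simultaneously
print(theta)                  # [0.15, 0.25]
```

Updating via one matrix expression also guarantees the "simultaneous update" that the rule requires.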

# Exercise 1

### Dataset

X represents population and y represents profits.


#### Visualizing the dataset

```matlab
% plotData.m: scatter plot of the training data
function plotData(x, y)
figure;
plot(x, y, 'rx', 'MarkerSize', 10);  % red crosses
xlabel('Population of a city in 10,000s');
ylabel('Profit in $10,000s');
end
```
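For readers working in Python, an equivalent scatter plot can be sketched with matplotlib; the population/profit pairs below are made up to stand in for the exercise's data file:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

# Made-up population/profit pairs standing in for the exercise data
x = np.array([6.1, 5.5, 8.5, 7.0])
y = np.array([17.6, 9.1, 13.7, 11.9])

fig, ax = plt.subplots()
ax.plot(x, y, "rx", markersize=10)  # red crosses, like 'rx' in Octave
ax.set_xlabel("Population of a city in 10,000s")
ax.set_ylabel("Profit in $10,000s")
fig.savefig("scatter.png")
```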



### Cost function

$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2, \qquad h_\theta(x^{(i)}) = \theta^{T}x^{(i)} = \theta_0 x_0 + \theta_1 x_1$
```matlab
% computeCost.m: cost function for linear regression
function J = computeCost(X, y, theta)
m = length(y);            % number of training examples
h = X*theta;              % vectorized hypothesis
J = sum((h-y).^2)/(2*m);  % sum of squared errors over 2m
end
```


$X = [x_0, x_1]$, $\theta = [\theta_0; \theta_1]$, so $h_\theta(x) = X\theta$.
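A NumPy port of `computeCost.m`, checked on made-up data where the answer is known (for $y = x$, $\theta = [0; 1]$ fits perfectly, so the cost is zero):

```python
import numpy as np

def compute_cost(X, y, theta):
    """NumPy port of computeCost.m: J = sum((X*theta - y)^2) / (2m)."""
    m = len(y)
    h = X @ theta
    return np.sum((h - y) ** 2) / (2 * m)

# Toy data: y = x exactly, with a bias column of ones
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(X, y, np.array([0.0, 1.0])))  # perfect fit -> 0.0
print(compute_cost(X, y, np.array([0.0, 0.0])))  # (1+4+9)/6
```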

### Gradient descent

$\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$
```matlab
% gradientDescent.m: run num_iters steps of batch gradient descent
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y);
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    h = X*theta;
    % update both parameters simultaneously
    t1 = theta(1) - alpha*(1/m)*sum(h-y);
    t2 = theta(2) - alpha*(1/m)*sum((h-y).*X(:,2));
    theta = [t1; t2];
    J_history(iter) = computeCost(X, y, theta);
end
end
```
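The same loop in NumPy, sanity-checked on a noise-free line $y = 1 + 2x$, where gradient descent should recover $\theta \approx [1; 2]$ (the data and hyperparameters here are illustrative, not from the exercise):

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """NumPy port of gradientDescent.m; returns theta and the cost history."""
    m = len(y)
    J_history = np.zeros(num_iters)
    for it in range(num_iters):
        h = X @ theta
        theta = theta - alpha * (X.T @ (h - y)) / m  # simultaneous update
        J_history[it] = np.sum((X @ theta - y) ** 2) / (2 * m)
    return theta, J_history

# Noise-free line y = 1 + 2x, with a bias column of ones
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta, J_history = gradient_descent(X, y, np.zeros(2), alpha=0.1, num_iters=2000)
print(theta)  # close to [1.0, 2.0]
```

Plotting `J_history` against the iteration number is a standard way to confirm that the learning rate `alpha` is small enough: the cost should decrease on every iteration.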


### Visualizing J
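Because $J(\theta_0, \theta_1)$ is a convex bowl, evaluating it on a grid and drawing contours makes the single minimum visible. A minimal matplotlib sketch, using the same made-up line data as above in place of the exercise's data file:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

# Toy data: y = 1 + 2x, so J is minimized near (theta_0, theta_1) = (1, 2)
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x

t0 = np.linspace(-10, 10, 100)
t1 = np.linspace(-4, 8, 100)
J_vals = np.zeros((len(t0), len(t1)))
for i, a in enumerate(t0):
    for j, b in enumerate(t1):
        h = X @ np.array([a, b])
        J_vals[i, j] = np.sum((h - y) ** 2) / (2 * len(y))

fig, ax = plt.subplots()
# transpose so t0 runs along the x-axis; log-spaced levels show the bowl
ax.contour(t0, t1, J_vals.T, levels=np.logspace(-2, 3, 20))
ax.set_xlabel("theta_0")
ax.set_ylabel("theta_1")
fig.savefig("contour.png")
```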
