Implementing Regression with NumPy

This article walks through a Python/NumPy implementation of linear regression (including the Lasso and Ridge variants) and polynomial regression, together with their training loops, focusing on how gradient descent and regularization are applied during model optimization.

First, the Elastic Net penalty, which combines the L1 and L2 terms used by Lasso and Ridge:

```python
import math
import numpy as np

class l1_l2_regularization():
    """ Regularization for Elastic Net: a weighted mix of L1 and L2 penalties """
    def __init__(self, alpha, l1_ratio=0.5):
        self.alpha = alpha
        self.l1_ratio = l1_ratio

    def __call__(self, w):
        # 1-norm here, so the penalty matches the sign-based gradient below
        l1_contr = self.l1_ratio * np.linalg.norm(w, ord=1)
        l2_contr = (1 - self.l1_ratio) * 0.5 * w.T.dot(w)
        return self.alpha * (l1_contr + l2_contr)

    def grad(self, w):
        l1_contr = self.l1_ratio * np.sign(w)
        l2_contr = (1 - self.l1_ratio) * w
        return self.alpha * (l1_contr + l2_contr)
```
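
The article also covers Lasso and Ridge, which correspond to pure L1 and pure L2 penalties exposing the same `__call__`/`grad` interface. A minimal sketch under that assumption (the class names here are illustrative, not taken from the excerpt):

```python
import numpy as np

class l1_regularization():
    """ Illustrative L1 penalty used by Lasso: alpha * ||w||_1 """
    def __init__(self, alpha):
        self.alpha = alpha

    def __call__(self, w):
        return self.alpha * np.linalg.norm(w, ord=1)

    def grad(self, w):
        return self.alpha * np.sign(w)

class l2_regularization():
    """ Illustrative L2 penalty used by Ridge: alpha * 0.5 * w^T w """
    def __init__(self, alpha):
        self.alpha = alpha

    def __call__(self, w):
        return self.alpha * 0.5 * w.T.dot(w)

    def grad(self, w):
        return self.alpha * w
```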

With the penalties in place, the base `Regression` class handles weight initialization and the gradient-descent training loop:

```python
class Regression(object):
    """ Base regression model. Models the relationship between a scalar
    dependent variable y and the independent variables X.

    Parameters:
    -----------
    n_iterations: int
        The number of training iterations the algorithm will tune the weights for.
    learning_rate: float
        The step length that will be used when updating the weights.
    """
    def __init__(self, n_iterations, learning_rate):
        self.n_iterations = n_iterations
        self.learning_rate = learning_rate

    def initialize_weights(self, n_features):
        """ Initialize weights randomly in [-1/sqrt(N), 1/sqrt(N)] """
        limit = 1 / math.sqrt(n_features)
        self.w = np.random.uniform(-limit, limit, (n_features, ))

    def fit(self, X, y):
        # Insert constant ones for bias weights
        X = np.insert(X, 0, 1, axis=1)
        self.training_errors = []
        self.initialize_weights(n_features=X.shape[1])

        # Do gradient descent for n_iterations
        for i in range(self.n_iterations):
            y_pred = X.dot(self.w)
            # Calculate l2 loss (plus the regularization term)
            mse = np.mean(0.5 * (y - y_pred)**2 + self.regularization(self.w))
            self.training_errors.append(mse)
            # Gradient of l2 loss w.r.t w
            grad_w = -(y - y_pred).dot(X) + self.regularization.grad(self.w)
            # Update the weights
            self.w -= self.learning_rate * grad_w
```
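
In equation form, each iteration evaluates the regularized squared-error objective

$$
L(w) = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}\left(y_i - \mathbf{x}_i^{\top}w\right)^2 + R(w)
$$

and steps against its gradient. Note that, as in the code above, the loss is reported as a mean while the gradient is taken as a sum over samples:

$$
\mathrm{grad}_w = -(y - \hat{y})^{\top}X + \nabla R(w), \qquad w \leftarrow w - \eta\,\mathrm{grad}_w
$$

where $R(w)$ is the penalty term (for example `l1_l2_regularization` above) and $\eta$ is the learning rate.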

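To see how the pieces fit together, here is a hypothetical plain `LinearRegression` subclass with a zero penalty, plus a short run on toy data; the subclass, its defaults, and the data are assumptions for illustration rather than code from the article:

```python
class LinearRegression(Regression):
    """ Illustrative plain linear regression: the base model with a zero penalty. """
    def __init__(self, n_iterations=300, learning_rate=0.001):
        # A zero "penalty" that satisfies the interface the base class expects
        self.regularization = lambda w: 0
        self.regularization.grad = lambda w: 0
        super(LinearRegression, self).__init__(n_iterations, learning_rate)

    def predict(self, X):
        # Same bias-column trick as in fit()
        X = np.insert(X, 0, 1, axis=1)
        return X.dot(self.w)

# Toy data: y = 2 + 3x plus noise (hypothetical demo data)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = 2 + 3 * X[:, 0] + rng.normal(0, 0.1, 100)

model = LinearRegression()
model.fit(X, y)
print(model.w)  # roughly [2, 3]: bias weight first, then the slope
```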