Implementing multivariate linear fitting, univariate polynomial fitting, and multivariate polynomial fitting in Python

Curve fitting comes up constantly in data analysis. This post shows how to implement univariate and multivariate linear fitting as well as polynomial fitting. It covers only the implementation, not the underlying theory.

The error metrics used to evaluate the fits are the root mean squared error and the coefficient of determination (plus a variant of the latter based on an RMSE ratio):

RMSE = sqrt(mean((y_pred - y)^2))
R2_1 = 1 - sum((y_pred - y)^2) / sum((y - mean(y))^2)
R2_2 = 1 - RMSE(y_pred, y) / RMSE(mean(y), y)

import numpy as np


def stdError_func(y_test, y):
    # root mean squared error between predictions y_test and targets y
    return np.sqrt(np.mean((y_test - y) ** 2))


def R2_1_func(y_test, y):
    # coefficient of determination: 1 - SSE / SST
    return 1 - ((y_test - y) ** 2).sum() / ((y.mean() - y) ** 2).sum()


def R2_2_func(y_test, y):
    # variant score: 1 - RMSE(predictions) / RMSE(mean predictor)
    y_mean = np.array(y)
    y_mean[:] = y.mean()
    return 1 - stdError_func(y_test, y) / stdError_func(y_mean, y)
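These helpers match what scikit-learn already ships: np.sqrt(mean_squared_error(...)) reproduces stdError_func, and r2_score reproduces R2_1_func. The snippet below is a minimal sanity-check sketch on hypothetical toy arrays, not part of the original workflow:

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# hypothetical toy data, only to exercise the metrics
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # same value as stdError_func(y_pred, y_true)
r2 = r2_score(y_true, y_pred)                       # same value as R2_1_func(y_pred, y_true)
print(rmse, r2)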

Univariate linear fitting

import pandas as pd
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model


filename = "E:/data.csv"
df= pd.read_csv(filename)
x = np.array(df.iloc[:,0].values)

y = np.array(df.iloc[:,5].values)

cft = linear_model.LinearRegression()
cft.fit(x[:,np.newaxis], y)  # reshape x to 2-D: LinearRegression expects input of shape [n_samples, 1]

print("model coefficients", cft.coef_)
print("model intercept", cft.intercept_)


predict_y =  cft.predict(x[:,np.newaxis])
strError = stdError_func(predict_y, y)
R2_1 = R2_1_func(predict_y, y)
R2_2 = R2_2_func(predict_y, y)
score = cft.score(x[:,np.newaxis], y)  # sklearn's built-in score; same logic as R2_1

print(' strError={:.2f}, R2_1={:.2f},  R2_2={:.2f}, clf.score={:.2f}'.format(
    strError,R2_1,R2_2,score))

The output is:
model coefficients [-31.2375]
model intercept 7.415750000000001
strError=1.11, R2_1=0.28, R2_2=0.15, clf.score=0.28

The fitted model is:
y = 7.415750000000001 + (-31.2375) * x
Judging by the RMSE and the score, the fit is poor.
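To see why the score is low, it helps to plot the data against the fitted line. This is a minimal sketch, assuming matplotlib is installed; it reuses x, y and cft from the code above:

import matplotlib.pyplot as plt

plt.scatter(x, y, label="data")                 # raw data points
x_grid = np.linspace(x.min(), x.max(), 100)     # dense grid for a smooth line
plt.plot(x_grid, cft.predict(x_grid[:, np.newaxis]), color="red", label="linear fit")
plt.legend()
plt.show()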

Multivariate linear fitting

import pandas as pd
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model




filename = "E:/data.csv"
df= pd.read_csv(filename)
x = np.array(df.iloc[:,0:4].values)

y = np.array(df.iloc[:,5].values)

cft = linear_model.LinearRegression()
print(x.shape)
cft.fit(x, y)  # x already has shape [n_samples, 4], so no reshape is needed

print("model coefficients", cft.coef_)
print("model intercept", cft.intercept_)


predict_y =  cft.predict(x)
strError = stdError_func(predict_y, y)
R2_1 = R2_1_func(predict_y, y)
R2_2 = R2_2_func(predict_y, y)
score = cft.score(x, y)  # sklearn's built-in score; same logic as R2_1

print('strError={:.2f}, R2_1={:.2f},  R2_2={:.2f}, clf.score={:.2f}'.format(
    strError,R2_1,R2_2,score))

The output is:
model coefficients [-31.2375 17.74375 44.325 5.7375 ]
model intercept 0.5051249999999978
strError=0.58, R2_1=0.80, R2_2=0.56, clf.score=0.80

The fitted model is:
y = 0.5051249999999978 + (-31.2375) * x1 + 17.74375 * x2 + 44.325 * x3 + 5.7375 * x4
Judging by the RMSE and the score, this is an improvement over the univariate fit.
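The expression above can be verified numerically: a LinearRegression prediction is just the dot product of the feature row with the coefficients, plus the intercept. A minimal check, reusing x and cft from the block above:

manual_y = cft.intercept_ + x @ cft.coef_       # intercept + x1*c1 + x2*c2 + x3*c3 + x4*c4
print(np.allclose(manual_y, cft.predict(x)))    # True, up to floating-point error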

Univariate polynomial fitting

Taking a cubic polynomial as an example:

import pandas as pd
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model

filename = "E:/data.csv"
df= pd.read_csv(filename)
x = np.array(df.iloc[:,0].values)

y = np.array(df.iloc[:,5].values)

poly_reg =PolynomialFeatures(degree=3) # cubic polynomial
X_ploy =poly_reg.fit_transform(x[:, np.newaxis])
print(X_ploy.shape)
lin_reg_2=linear_model.LinearRegression()
lin_reg_2.fit(X_ploy,y)
predict_y =  lin_reg_2.predict(X_ploy)
strError = stdError_func(predict_y, y)
R2_1 = R2_1_func(predict_y, y)
R2_2 = R2_2_func(predict_y, y)
score = lin_reg_2.score(X_ploy, y)  # sklearn's built-in score; same logic as R2_1

print("model coefficients", lin_reg_2.coef_)
print("model intercept", lin_reg_2.intercept_)
print('degree={}: strError={:.2f}, R2_1={:.2f},  R2_2={:.2f}, clf.score={:.2f}'.format(
    3, strError,R2_1,R2_2,score))

The output is:
model coefficients [ 0. 990.64583333 -11906.25 44635.41666667]
model intercept -20.724999999999117
degree=3: strError=1.08, R2_1=0.32, R2_2=0.17, clf.score=0.32

The corresponding function is -20.724999999999117 + [0, 990.64583333, -11906.25, 44635.41666667] * [1, x, x^2, x^3].T = -20.724999999999117 + 990.64583333 * x + (-11906.25) * x^2 + 44635.41666667 * x^3
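The PolynomialFeatures + LinearRegression pair can also be wrapped into a single estimator with sklearn's make_pipeline, which avoids transforming x by hand at prediction time. A minimal sketch, assuming the same x and y as above:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model

# one estimator that expands the features and fits the linear model in one step
model = make_pipeline(PolynomialFeatures(degree=3), linear_model.LinearRegression())
model.fit(x[:, np.newaxis], y)
print(model.score(x[:, np.newaxis], y))  # same R^2 as lin_reg_2.score above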

Multivariate polynomial fitting

import pandas as pd
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model

filename = "E:/data.csv"
df= pd.read_csv(filename)
x = np.array(df.iloc[:,0:4].values)

y = np.array(df.iloc[:,5].values)


poly_reg =PolynomialFeatures(degree=2) # second-degree (quadratic) polynomial
X_ploy =poly_reg.fit_transform(x)
lin_reg_2=linear_model.LinearRegression()
lin_reg_2.fit(X_ploy,y)
predict_y =  lin_reg_2.predict(X_ploy)
strError = stdError_func(predict_y, y)
R2_1 = R2_1_func(predict_y, y)
R2_2 = R2_2_func(predict_y, y)
score = lin_reg_2.score(X_ploy, y)  # sklearn's built-in score; same logic as R2_1

print("coefficients", lin_reg_2.coef_)
print("intercept", lin_reg_2.intercept_)
print('degree={}: strError={:.2f}, R2_1={:.2f},  R2_2={:.2f}, clf.score={:.2f}'.format(
    2, strError,R2_1,R2_2,score))

The output is:

coefficients [ 0. 332.28129937 -19.9240981 -9.10607925
-191.05593023 -287.93919929 -912.11402936 -1230.21922184
-207.90033986 99.03441748 190.26204994 433.25169929
273.13674555 257.66550523 344.92652936]
intercept 4.35175537840722
degree=2: strError=0.23, R2_1=0.97, R2_2=0.82, clf.score=0.97

The input here contains four explanatory variables, so the fitted coefficients form a vector of length 15, one per entry of the expanded feature vector variable_X = [1, x1, x2, x3, x4, x1*x1, x1*x2, x1*x3, x1*x4, x2*x2, x2*x3, x2*x4, x3*x3, x3*x4, x4*x4].

The corresponding equation is: intercept + coefficients * variable_X.T
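Rather than writing out the 15 terms by hand, PolynomialFeatures can report them directly; in recent scikit-learn the method is get_feature_names_out() (older releases expose get_feature_names() instead). A minimal sketch, reusing poly_reg and lin_reg_2 from above:

# map each fitted coefficient to the polynomial term it multiplies
terms = poly_reg.get_feature_names_out(["x1", "x2", "x3", "x4"])
for term, coef in zip(terms, lin_reg_2.coef_):
    print(term, coef)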

The dataset used in the code is as follows:

a,b,c,d,e
0.06,0.2,0.02,0.1,0.340 
0.1,0.28,0.02,0.14,0.370 
0.12,0.32,0.02,0.16,0.377 
0.08,0.24,0.02,0.12,0.383 
0.08,0.32,0.04,0.1,0.383 
0.12,0.28,0.03,0.1,0.393 
0.1,0.24,0.05,0.1,0.385 
0.06,0.32,0.05,0.14,0.362 
0.12,0.2,0.05,0.12,0.320 
0.06,0.28,0.04,0.12,0.393 
0.08,0.28,0.05,0.16,0.402 
0.08,0.2,0.03,0.14,0.349 
0.1,0.2,0.04,0.16,0.335 
0.1,0.32,0.03,0.12,0.387 
0.12,0.24,0.04,0.14,0.390 
0.06,0.24,0.03,0.16,0.315 
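Note that the table above has only five columns (a through e, i.e. column indices 0 to 4), while the code reads y from df.iloc[:,5]; the file actually used presumably carried one extra leading column. If E:/data.csv is not available, the same table can be loaded from an inline string so the examples stay reproducible. A minimal sketch using io.StringIO (only the first rows are shown; paste the full table):

import io
import pandas as pd

csv_text = """a,b,c,d,e
0.06,0.2,0.02,0.1,0.340
0.1,0.28,0.02,0.14,0.370
0.12,0.32,0.02,0.16,0.377
"""  # ... remaining rows as listed above

df = pd.read_csv(io.StringIO(csv_text))
x = df.iloc[:, 0:4].values   # columns a-d
y = df.iloc[:, 4].values     # column e is the target with this 5-column layout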

For fitting exponential and power functions, see:
https://blog.csdn.net/kl28978113/article/details/88818885
References:
https://blog.csdn.net/weixin_44794704/article/details/89246032
https://blog.csdn.net/bxg1065283526/article/details/80043049
https://www.cnblogs.com/Lin-Yi/p/8975638.html
