百面机器学习: Implementing Binary Logistic Regression in Python

The commented-out code at the end is a LogisticRegression class encapsulated in its own .py file, rather than a direct call to sklearn.

import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from LogisticRegression import LogisticRegression
# Load the iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Keep only classes 0 and 1 (binary problem) and the first two features
X = X[y < 2, :2]
y = y[y < 2]
y
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
# Plot the two classes
plt.scatter(X[y == 0, 0], X[y == 0, 1], color="red")
plt.scatter(X[y == 1, 0], X[y == 1, 1], color="blue")
plt.show()

[Figure: scatter plot of the two classes, class 0 in red and class 1 in blue]

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
# Use our own logistic regression implementation
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
print("final score is: %s" % log_reg.score(X_test, y_test))
print("predicted probabilities:")
print(log_reg.predict_proba(X_test))
final score is: 1.0
predicted probabilities:
[0.93292947 0.98717455 0.15541379 0.18370292 0.03909442 0.01972689
 0.05214631 0.99683149 0.98092348 0.75469962 0.0473811  0.00362352
 0.27122595 0.03909442 0.84902103 0.80627393 0.83574223 0.33477608
 0.06921637 0.21582553 0.0240109  0.1836441  0.98092348 0.98947619
 0.08342411]
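As a quick sanity check, the same split can also be fed to sklearn's built-in LogisticRegression. This is only a comparison sketch: sklearn applies L2 regularization by default (C=1.0) and uses a different solver, so its probabilities will not match the plain gradient-descent implementation exactly.

# Comparison sketch, assuming X_train, X_test, y_train, y_test from the split above
from sklearn.linear_model import LogisticRegression as SkLogisticRegression

sk_log_reg = SkLogisticRegression()
sk_log_reg.fit(X_train, y_train)
print("sklearn score:", sk_log_reg.score(X_test, y_test))
# predict_proba returns one column per class; column 1 is P(y = 1)
print("sklearn P(y=1):", sk_log_reg.predict_proba(X_test)[:, 1])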
# import numpy as np
# from sklearn.metrics import accuracy_score
# class LogisticRegression(object):

#     def __init__(self):
#         """Initialize the Logistic Regression model"""
#         self.coef = None
#         self.intercept = None
#         self._theta = None

#     def sigmoid(self, t):
#         return 1. / (1. + np.exp(-t))

#     def fit(self, X_train, y_train, eta=0.01, n_iters=1e4):
#         """Fit the Logistic Regression model using gradient descent"""
#         assert X_train.shape[0] == y_train.shape[0], \
#             "the size of X_train must be equal to the size of y_train"

#         def J(theta, X_b, y):
#             """Cross-entropy loss; returns inf if the computation fails numerically"""
#             y_hat = self.sigmoid(X_b.dot(theta))
#             try:
#                 return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)) / len(y)
#             except Exception:
#                 return float('inf')

#         def dJ(theta, X_b, y):
#             # Vectorized gradient: X_b^T (sigmoid(X_b theta) - y) / m
#             return X_b.T.dot(self.sigmoid(X_b.dot(theta)) - y) / len(y)

#         def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
#             theta = initial_theta
#             cur_iter = 0

#             while cur_iter < n_iters:
#                 gradient = dJ(theta, X_b, y)
#                 last_theta = theta
#                 theta = theta - eta * gradient
#                 if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
#                     break

#                 cur_iter += 1

#             return theta

#         X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
#         initial_theta = np.zeros(X_b.shape[1])
#         self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)

#         # intercept term
#         self.intercept = self._theta[0]
#         # coefficients of the features x_i
#         self.coef = self._theta[1:]

#         return self

#     def predict_proba(self, X_predict):
#         """Given a dataset X_predict, return the vector of predicted probabilities"""
#         assert self.intercept is not None and self.coef is not None, \
#             "must fit before predict"
#         assert X_predict.shape[1] == len(self.coef), \
#             "the feature number of X_predict must be equal to X_train"

#         X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
#         return self.sigmoid(X_b.dot(self._theta))

#     def predict(self, X_predict):
#         """Given a dataset X_predict, return the vector of predicted class labels"""
#         assert self.intercept is not None and self.coef is not None, \
#             "must fit before predict!"
#         assert X_predict.shape[1] == len(self.coef), \
#             "the feature number of X_predict must be equal to X_train"
#         prob = self.predict_proba(X_predict)
#         return np.array(prob >= 0.5, dtype='int')

#     def score(self, X_test, y_test):
#         """Return the accuracy of the model on the test set X_test, y_test"""

#         y_predict = self.predict(X_test)
#         return accuracy_score(y_test, y_predict)

#     def __repr__(self):
#         return "LogisticRegression()"
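In formulas, what the fit method above implements is plain batch gradient descent on the binary cross-entropy loss (standard notation; X_b is X_train with a leading column of ones, m = len(y), eta is the learning rate):

$$\sigma(t) = \frac{1}{1 + e^{-t}}, \qquad \hat{y} = \sigma(X_b\,\theta)$$
$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\hat{y}^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - \hat{y}^{(i)}\right)\right]$$
$$\nabla_\theta J(\theta) = \frac{1}{m}\,X_b^{T}\left(\sigma(X_b\,\theta) - y\right), \qquad \theta \leftarrow \theta - \eta\,\nabla_\theta J(\theta)$$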

Binary logistic regression is an algorithm for two-class classification problems. The basic idea is to build a logistic regression model that maps the input features to a probability, and then classify samples according to that probability.

Logistic regression passes hθ(x) through the sigmoid function because the sigmoid's output lies between 0 and 1, which turns the output of a linear model into a probability. After this transformation, samples with probability greater than or equal to 0.5 are assigned to the positive class and samples with probability below 0.5 to the negative class, which is what makes logistic regression usable for binary classification.

The derivations of the cost function and of its partial derivatives are what allow us to solve for the model parameters θ: by minimizing the cost function we obtain the optimal θ, so that the model's predictions are as close as possible to the true labels.

In regularized logistic regression, a regularization term is added to avoid overfitting. It penalizes the model parameters, pushing their values toward smaller magnitudes and thereby reducing model complexity. The intercept term θ0 is generally not regularized (the penalty runs over θ1, ..., θn only), because the intercept only shifts the model and does not act as a weight on any feature.

In summary, binary logistic regression builds a model that uses the sigmoid function to turn the output of a linear model into a probability, and solves for the parameters θ by minimizing the cost function; regularized logistic regression additionally adds a penalty term to avoid overfitting. [1][2][3]

References:
[1][3] 吴恩达机器学习逻辑回归(二分类): https://blog.csdn.net/q642634743/article/details/118831665
[2] 机器学习笔记——逻辑回归二分类: https://blog.csdn.net/dzc_go/article/details/108855689
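To make the regularization point concrete, here is a hedged sketch (not part of the class above; lam is a hypothetical penalty strength) of how the gradient in dJ would change with an L2 penalty, with the intercept excluded as described:

# Hypothetical L2-regularized gradient, modeled on the dJ function in the class above
import numpy as np

def sigmoid(t):
    return 1. / (1. + np.exp(-t))

def dJ_l2(theta, X_b, y, lam=1.0):
    # plain cross-entropy gradient, same as dJ in the class
    grad = X_b.T.dot(sigmoid(X_b.dot(theta)) - y) / len(y)
    # L2 penalty term (lam / m) * theta, with theta[0] (the intercept) left out
    reg = (lam / len(y)) * theta
    reg[0] = 0.0
    return grad + reg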
