20200210_Predicting the probability of default with logistic regression

This was the second order from that client overseas, and again it went well, but I failed to land the third one because I genuinely couldn't do it. Students studying abroad really do have money; I wouldn't mind some of that myself (grin).
In this homework, we use logistic regression to predict the probability of default using income and balance on the Default data set. We will also estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.

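The prompt asks for a random seed before any analysis. A minimal sketch of one way to do that, assuming the seed value 2020 (chosen here only to match the random_state used later; any fixed integer works):

import random
import numpy as np

SEED = 2020            # assumed seed value, matching the random_state used below
random.seed(SEED)      # seed Python's built-in RNG
np.random.seed(SEED)   # seed NumPy's global RNG, which scikit-learn falls back on
                       # whenever no explicit random_state is supplied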

  1. (a) Fit a multiple logistic regression model that uses income and balance to predict the probability of default, using all of the observations.

# Import packages
import pandas as pd
from sklearn import metrics
import warnings
warnings.filterwarnings("ignore")

# Load the Default data set and inspect the first few rows
test = pd.read_excel('Default.xlsx')
test.head()

   default student      balance        income
1       No      No   729.526495  44361.625074
2       No     Yes   817.180407  12106.134700
3       No      No  1073.549164  31767.138947
4       No      No   529.250605  35704.493935
5       No      No   785.655883  38463.495879
# Convert the categorical target variable into a numeric 0/1 variable
def fun(x):
    if 'No' in x:
        return 0
    else:
        return 1
test['default'] = test.apply(lambda x: fun(x['default']), axis=1)

# Define the feature matrix and the target
X = test[['balance', 'income']]
y = test['default']
from sklearn.linear_model import LogisticRegression

# List for accuracy scores (declared but not used below)
lr_acc = []

# Build a LogisticRegression model (default parameters are fine) and fit it on the full data set
model = LogisticRegression()
model.fit(X, y)

# Posterior probability of default for every observation
a = model.predict_proba(X)

# Threshold the probabilities at 0.5 and print the error rate;
# with average='weighted', recall_score equals overall accuracy,
# so 1 minus it is the misclassification rate
result = []
for i in range(len(a)):
    if a[i][1] > 0.5:
        result.append(1)
    else:
        result.append(0)
print('Error rate: %.4f' % (1 - metrics.recall_score(y, result, average='weighted')))
Error rate: 0.0336
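
The thresholding loop above reproduces what model.predict already does for a binary problem, so the same number can be cross-checked more directly. A minimal sketch, assuming the model, X and y defined above:

from sklearn.metrics import accuracy_score

# model.predict applies the same 0.5 cutoff to the posterior probabilities
pred = model.predict(X)
print('Error rate: %.4f' % (1 - accuracy_score(y, pred)))  # expected to match the value above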

(b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps:

i. Split the sample set into a training set and a validation set.

from sklearn.model_selection import train_test_split

# Hold out 80% of the data as the training set; the remaining 20% is then split again,
# with 90% of it becoming the validation set and 10% a small test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2020)
X_validation, X_test, y_validation, y_test = train_test_split(X_test, y_test, test_size=0.1, random_state=2020)
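
A quick sanity check of the resulting split sizes, assuming the file contains the usual 10,000-row Default data (in which case the pieces are roughly 8,000 / 1,800 / 200 observations):

# Print the shape of each piece of the split
for name, part in [('train', X_train), ('validation', X_validation), ('test', X_test)]:
    print(name, part.shape)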

ii. Fit a multiple logistic regression model using only the training observations.

from sklearn.linear_model import LogisticRegression

# Fit the logistic regression model on the training observations only
lr_acc = []
model = LogisticRegression()
model.fit(X_train, y_train)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)
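
Before predicting, it can help to glance at the fitted parameters. A minimal sketch using standard scikit-learn attributes (the printed values depend on the split and solver):

# Intercept and per-feature coefficients (order: balance, income)
print('intercept:', model.intercept_)
print('coefficients:', model.coef_)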

iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.

# Posterior probabilities of default on the validation set
a = model.predict_proba(X_validation)

# Threshold at 0.5 and compute the validation error rate
result = []
for i in range(len(a)):
    if a[i][1] > 0.5:
        result.append(1)
    else:
        result.append(0)
print('Error rate: %.4f' % (1 - metrics.recall_score(y_validation, result, average='weighted')))
Error rate: 0.0361
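
Because the Default data are heavily skewed toward non-defaulters, an overall error rate of about 3.6% says little on its own; a confusion matrix on the validation set shows where the mistakes fall. A short sketch using the result list built above:

from sklearn.metrics import confusion_matrix

# Rows are the true classes (0 = no default, 1 = default), columns the predicted classes
print(confusion_matrix(y_validation, result))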

(c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

# Note: random_state is fixed at 2020 in both splits, so every iteration re-creates
# exactly the same training/validation sets and therefore reports the same error rate
for i in range(3):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2020)
    X_validation, X_test, y_validation, y_test = train_test_split(X_test, y_test, test_size=0.1, random_state=2020)
    model = LogisticRegression()
    model.fit(X_train, y_train)
    a = model.predict_proba(X_validation)
    result = []
    for j in range(len(a)):
        if a[j][1] > 0.5:
            result.append(1)
        else:
            result.append(0)
    print('Error rate: %.4f' % (1 - metrics.recall_score(y_validation, result, average='weighted')))
Error rate: 0.0361
Error rate: 0.0361
Error rate: 0.0361
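
Since the seed never changes, the three "different" splits above are in fact identical, which is why the three error rates agree exactly. A sketch of what part (c) intends, using one train/validation split per arbitrarily chosen seed (the seeds 1, 2, 3 are an assumption, and the printed errors will typically vary a little from split to split, which is the behaviour the exercise asks us to comment on):

# Three genuinely different splits, one per seed
for seed in (1, 2, 3):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=seed)
    m = LogisticRegression()
    m.fit(X_tr, y_tr)
    err = 1 - metrics.accuracy_score(y_va, m.predict(X_va))
    print('seed %d, validation error: %.4f' % (seed, err))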

(d) Now consider a logistic regression model that predicts the probability of default using
income, balance, and a dummy variable for student. Estimate the test error for this model using
the validation set approach. Comment on whether or not including a dummy variable for student
leads to a reduction in the test error rate.

# Convert the student variable to a 0/1 dummy in the same way as default
def fun(x):
    if 'No' in x:
        return 0
    else:
        return 1
test['student'] = test.apply(lambda x: fun(x['student']), axis=1)

# The feature matrix now also includes the student dummy
X = test[['balance', 'income', 'student']]
y = test['default']

# Same splits as before (80% training, then 90%/10% of the remainder for validation/test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2020)
X_validation, X_test, y_validation, y_test = train_test_split(X_test, y_test, test_size=0.1, random_state=2020)

model = LogisticRegression()
model.fit(X_train, y_train)

# Posterior probabilities on the validation set, thresholded at 0.5
a = model.predict_proba(X_validation)
result = []
for i in range(len(a)):
    if a[i][1] > 0.5:
        result.append(1)
    else:
        result.append(0)

from sklearn import metrics
print('Error rate: %.4f' % (1 - metrics.recall_score(y_validation, result, average='weighted')))
Error rate: 0.0361

Answer: including the student dummy makes little difference; the validation error rate is 0.0361 both with and without it.
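
One compact way to back up that comment is to fit both feature sets on the identical split and print the two validation errors side by side. A sketch, assuming the numeric default and student columns created above:

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Compare the model with and without the student dummy on the same split
for cols in (['balance', 'income'], ['balance', 'income', 'student']):
    X_sub = test[cols]
    X_tr, X_va, y_tr, y_va = train_test_split(X_sub, y, test_size=0.2, random_state=2020)
    m = LogisticRegression()
    m.fit(X_tr, y_tr)
    err = 1 - accuracy_score(y_va, m.predict(X_va))
    print('%-26s validation error: %.4f' % (' + '.join(cols), err))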
