A model-tuning framework based on GridSearchCV

There are plenty of grid-search tuning examples online; I have tidied up the code and put together the following model-application framework. If it's not to your taste, please go easy on me.

import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
train_data = pd.read_csv('C://Users//holy//Desktop//train_feature.csv')
test_data = pd.read_csv('C://Users//holy//Desktop//test_feature.csv')
ztrain_data = train_data[(train_data.is_attributed == 1)]  # keep all positive samples
N = len(ztrain_data) * 2  # take twice as many negative samples as positive ones
ftrain_data = train_data[(train_data.is_attributed == 0)].sample(n=N)  # sample N negative rows
df_train = pd.concat([ztrain_data, ftrain_data])  # the new, rebalanced training data
df_test = test_data
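The downsampling step above (keep every positive row, draw twice as many negatives) can be sketched on a synthetic frame; the column name `is_attributed` is taken from the source, everything else here is made up for illustration:

```python
import numpy as np
import pandas as pd

# Toy frame mimicking a heavily imbalanced label: 10 positives, 990 negatives
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'is_attributed': [1] * 10 + [0] * 990,
    'feature': rng.random(1000),
})

pos = df[df.is_attributed == 1]                                          # all positives
neg = df[df.is_attributed == 0].sample(n=len(pos) * 2, random_state=0)   # 2x negatives
balanced = pd.concat([pos, neg])

print(len(balanced))  # 30 rows: 10 positives + 20 negatives
```

Without `random_state`, `sample` draws a different negative subset on every run, so results will vary slightly between runs.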

# Use a Pipeline to define the workflow; here it holds a single decision-tree classifier
pipeline = Pipeline([
    ('1-DTC', DecisionTreeClassifier()),
])
# Parameter space
parameters = {
    '1-DTC__max_depth': (1, 2, 3, 4, 5),
    '1-DTC__max_features': (1, 2, 3, 4),
}
# Search the parameter space with GridSearchCV
grid_search = GridSearchCV(pipeline, parameters)  # grid search over the space above
clf = grid_search  # the model being tuned is the decision tree inside the pipeline
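The call above relies on GridSearchCV's defaults for the cross-validation splitter and the scorer. A minimal sketch of making both explicit, on synthetic data rather than the post's CSV files (the step name `1-DTC` matches the pipeline above; the data is invented):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic binary label

pipe = Pipeline([('1-DTC', DecisionTreeClassifier(random_state=0))])
params = {'1-DTC__max_depth': (1, 2, 3)}

# cv and scoring made explicit instead of relying on defaults
gs = GridSearchCV(pipe, params, cv=3, scoring='accuracy')
gs.fit(X, y)
print(gs.best_params_, round(gs.best_score_, 3))
```

Because `refit=True` by default, `gs.best_estimator_` is retrained on the full data after the search, which is what the prediction step later in this post depends on.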
# Feature engineering
# 1. Split the day feature into three columns
names = df_train['day'].str.split('-', expand=True)
names.columns = ['year', 'month', 'day0']
df_train = df_train.join(names)  # date feature converted into three columns
names2 = df_test['day'].str.split('-', expand=True)
names2.columns = ['year', 'month', 'day0']
df_test = df_test.join(names2)  # date feature converted into three columns
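The split above assumes `day` holds hyphen-separated date strings like `'2017-11-06'` (an assumption, since the CSVs aren't shown); on a toy frame the behaviour looks like this:

```python
import pandas as pd

df = pd.DataFrame({'day': ['2017-11-06', '2017-11-07']})

# expand=True returns a DataFrame with one column per '-'-separated piece
parts = df['day'].str.split('-', expand=True)
parts.columns = ['year', 'month', 'day0']
df = df.join(parts)

print(df.loc[0, 'month'])  # '11' -- still a string, not an integer
```

Note the resulting columns are strings; the decision tree will still accept them only after conversion to numbers, so in practice an `astype(int)` on the new columns may be needed.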
# Convert the datasets to arrays
X_train = np.array(df_train.drop(['is_attributed', 'day'], axis=1))
X_test = np.array(df_test.drop(['is_attributed', 'day', 'click_id'], axis=1))
Y_train = np.array(df_train['is_attributed'])
Y_test = np.array(df_test['is_attributed'])
# Define an accuracy function
def pre_rate(res):
    count = 0
    for i in range(len(res)):
        if Y_test[i] == res[i]:
            count = count + 1
    print(count / len(res))
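The hand-rolled loop above computes plain accuracy; scikit-learn's `accuracy_score` (or a one-line NumPy mean) gives the same number, shown here on invented labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0])

# Both equal the fraction of positions where prediction matches truth
acc_sklearn = accuracy_score(y_true, y_pred)
acc_numpy = float(np.mean(y_true == y_pred))
print(acc_sklearn, acc_numpy)  # 0.8 0.8
```

For the imbalanced click data in this post, plain accuracy can look deceptively good; a metric such as `roc_auc_score` is often more informative, though the original sticks with accuracy.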
# Train the model
clf.fit(X_train, Y_train)
res = clf.best_estimator_.predict(X_test)
# Error analysis
pre_rate(res)
# Print the parameters the best classifier ended up using
best_parameters = clf.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
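Besides `get_params()` on the best estimator, the fitted search object also exposes `best_params_` (just the tuned keys) and `cv_results_` (per-candidate cross-validation scores). A sketch on synthetic data, since the post's CSVs aren't available:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.random((150, 4))
y = (X[:, 0] > 0.5).astype(int)

gs = GridSearchCV(DecisionTreeClassifier(random_state=0),
                  {'max_depth': (1, 2, 3)}, cv=3)
gs.fit(X, y)

# best_params_ holds only the tuned keys, unlike get_params()
print(gs.best_params_)

# cv_results_ is a dict of arrays; a DataFrame makes it easy to scan all candidates
results = pd.DataFrame(gs.cv_results_)[['param_max_depth', 'mean_test_score']]
print(results)
```

Scanning `mean_test_score` across candidates is useful for spotting when the best value sits at the edge of the grid, a sign the parameter range should be widened.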

 

