Original post: Common Algorithms in Sklearn
Notes

This article lists the commonly used algorithms in the sklearn package and how to call them; some of the more obscure ones (obscure to me, at least) are left out. If anything here is wrong, please point it out.
The reference material comes from the official scikit-learn website: http://scikit-learn.org/stable/
Broadly speaking, the functionality sklearn provides falls into the following areas:
- Classification algorithms
- Regression algorithms
- Clustering algorithms
- Dimensionality reduction algorithms
- Text mining algorithms
- Model optimization
- Data preprocessing
- Finally, algorithms that may not be supported (or that I simply did not find, although other packages provide them)
Classification Algorithms
- Linear Discriminant Analysis (LDA)

>>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
>>> lda = LinearDiscriminantAnalysis(solver="svd", store_covariance=True)
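All of the classifiers in this section share the same estimator API, so one usage sketch covers them: construct the model, fit it on training data, then predict or score. A minimal sketch using the LDA object above on the built-in iris dataset (the dataset choice is mine, just for illustration):

>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> lda.fit(X, y)        # train on features X and labels y
>>> lda.predict(X[:5])   # predicted class labels for new samples
>>> lda.score(X, y)      # mean accuracy on the given data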
- Quadratic Discriminant Analysis (QDA)

>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
>>> qda = QuadraticDiscriminantAnalysis(store_covariance=True)
- Support Vector Machine (SVM)

>>> from sklearn import svm
>>> clf = svm.SVC()
- K-Nearest Neighbors (KNN)

>>> from sklearn import neighbors
>>> clf = neighbors.KNeighborsClassifier(n_neighbors=5, weights='distance')   # example values
- Neural network (multi-layer perceptron)

>>> from sklearn.neural_network import MLPClassifier
>>> clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
...                     hidden_layer_sizes=(5, 2), random_state=1)
- Naive Bayes

>>> from sklearn.naive_bayes import GaussianNB
>>> gnb = GaussianNB()
- Decision tree

>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier()
- Ensemble methods

- Bagging

>>> from sklearn.ensemble import BaggingClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> bagging = BaggingClassifier(KNeighborsClassifier(),
...                             max_samples=0.5, max_features=0.5)
- Random Forest

>>> from sklearn.ensemble import RandomForestClassifier
>>> clf = RandomForestClassifier(n_estimators=10)
- AdaBoost

>>> from sklearn.ensemble import AdaBoostClassifier
>>> clf = AdaBoostClassifier(n_estimators=100)
- GBDT (Gradient Tree Boosting)

>>> from sklearn.ensemble import GradientBoostingClassifier
>>> # X_train, y_train: your training features and labels
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...                                  max_depth=1, random_state=0).fit(X_train, y_train)
Regression Algorithms
- Ordinary Least Squares (OLS)

>>> from sklearn import linear_model
>>> reg = linear_model.LinearRegression()
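The regressors below follow the same fit/predict pattern as the classifiers. A minimal sketch with the OLS model above (the tiny toy dataset is made up for illustration):

>>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])   # fit on a toy dataset
>>> reg.coef_                                      # fitted coefficients
>>> reg.predict([[3, 3]])                          # prediction for a new sample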
- Ridge regression

>>> from sklearn import linear_model
>>> reg = linear_model.Ridge(alpha=0.5)
- Kernel ridge regression

>>> from sklearn.kernel_ridge import KernelRidge
>>> kr = KernelRidge(kernel='rbf', alpha=0.1, gamma=10)
- Support Vector Regression (SVR)

>>> from sklearn import svm
>>> clf = svm.SVR()
- Lasso

>>> from sklearn import linear_model
>>> reg = linear_model.Lasso(alpha=0.1)
- Elastic Net

>>> from sklearn.linear_model import ElasticNet
>>> regr = ElasticNet(random_state=0)
- Bayesian regression

>>> from sklearn import linear_model
>>> reg = linear_model.BayesianRidge()
- Logistic regression

>>> from sklearn.linear_model import LogisticRegression
>>> # C is the inverse regularization strength; 1.0 is just an example value
>>> clf_l1_LR = LogisticRegression(C=1.0, penalty='l1', solver='liblinear', tol=0.01)
>>> clf_l2_LR = LogisticRegression(C=1.0, penalty='l2', tol=0.01)
- Robust regression (e.g. RANSAC)

>>> from sklearn import linear_model
>>> ransac = linear_model.RANSACRegressor()
- Polynomial regression (regression on polynomial basis features)

>>> from sklearn.preprocessing import PolynomialFeatures
>>> poly = PolynomialFeatures(degree=2)
>>> poly.fit_transform(X)   # X: the original feature matrix
- Gaussian Process Regression
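The original post gives no snippet here; a minimal sketch, assuming the standard RBF kernel is acceptable:

>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import RBF
>>> gpr = GaussianProcessRegressor(kernel=RBF())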
- Partial Least Squares (PLS)

>>> from sklearn.cross_decomposition import PLSCanonical
>>> plsca = PLSCanonical(algorithm='nipals', copy=True, max_iter=500,
...                      n_components=2, scale=True, tol=1e-06)
- Canonical Correlation Analysis (CCA)

>>> from sklearn.cross_decomposition import CCA
>>> cca = CCA(n_components=2)
Clustering Algorithms
- KNN (nearest-neighbor search)

>>> from sklearn.neighbors import NearestNeighbors
>>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)   # X: the samples to index
- K-means

>>> from sklearn.cluster import KMeans
>>> kmeans = KMeans(init='k-means++', n_clusters=10, n_init=10)   # n_clusters: desired number of clusters (example value)
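For clustering there is no label target; fit_predict returns a cluster index per sample. A minimal sketch (the toy array and the choice of two clusters are mine):

>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(init='k-means++', n_clusters=2, n_init=10)
>>> kmeans.fit_predict(X)        # cluster label for each sample
>>> kmeans.cluster_centers_      # coordinates of the cluster centres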
- Hierarchical clustering (supports several linkage/distance criteria)

>>> from sklearn.cluster import AgglomerativeClustering
>>> model = AgglomerativeClustering(linkage='ward', n_clusters=3)   # linkage can also be 'complete', 'average' or 'single'
Dimensionality Reduction Algorithms
- Principal Component Analysis (PCA)

>>> from sklearn.decomposition import PCA
>>> pca = PCA(n_components=2)
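A quick usage sketch for the PCA object above: fit_transform projects the data onto the two leading components (the toy array is mine):

>>> import numpy as np
>>> X = np.array([[1.0, 2.0, 3.0],
...               [2.0, 4.1, 6.0],
...               [3.0, 5.9, 9.1],
...               [4.0, 8.0, 12.0]])
>>> X_reduced = pca.fit_transform(X)   # shape (4, 2): data in the reduced space
>>> pca.explained_variance_ratio_      # variance explained by each component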
- Kernel PCA

>>> from sklearn.decomposition import KernelPCA
>>> kpca = KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=10)
- Factor Analysis

>>> from sklearn.decomposition import FactorAnalysis
>>> fa = FactorAnalysis()
Text Mining Algorithms
- Topic modelling (Latent Dirichlet Allocation)

>>> from sklearn.decomposition import NMF, LatentDirichletAllocation
>>> lda_topics = LatentDirichletAllocation(n_components=10)   # 10 topics, as an example
- Latent Semantic Analysis (LSA)
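sklearn has no class literally named "LSA"; the usual route is TruncatedSVD applied to a TF-IDF matrix. A minimal sketch of that combination (the pipeline wiring and component count are mine):

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from sklearn.decomposition import TruncatedSVD
>>> from sklearn.pipeline import make_pipeline
>>> lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=100))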
Model Optimization

Rather than listing specific functions, this section only describes what is provided (a minimal sketch follows the list):
- Feature selection
- Stochastic gradient methods
- Cross-validation
- Hyperparameter tuning
- Model evaluation: computing accuracy, recall, AUC and other metrics, plus plots such as ROC curves and loss curves
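A minimal sketch of a few of these capabilities; the estimator, parameter grid and dataset are illustrative choices of mine, not prescriptions:

>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import GridSearchCV, cross_val_score
>>> from sklearn.svm import SVC
>>> from sklearn.metrics import accuracy_score, recall_score, roc_auc_score   # evaluation metrics
>>> X, y = load_iris(return_X_y=True)
>>> param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
>>> search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)   # hyperparameter tuning with 5-fold CV
>>> search.best_params_
>>> cross_val_score(SVC(), X, y, cv=5)                         # plain 5-fold cross-validation scores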
Data Preprocessing

- Standardization
- Outlier handling
- Non-linear transformation
- Binarization
- One-hot encoding
- Missing-value imputation: mean, median, mode, constant-value and multiple imputation are supported (a few of these steps are sketched after this list)
- Derived-variable generation
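A minimal sketch of some of these steps; the toy matrix and parameter choices are mine:

>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler, Binarizer, OneHotEncoder
>>> from sklearn.impute import SimpleImputer
>>> X = np.array([[1.0, np.nan], [2.0, 3.0], [4.0, 6.0]])
>>> X_imputed = SimpleImputer(strategy='mean').fit_transform(X)    # mean imputation of missing values
>>> X_scaled = StandardScaler().fit_transform(X_imputed)           # standardization: zero mean, unit variance
>>> X_binary = Binarizer(threshold=0.0).fit_transform(X_scaled)    # binarization around a threshold
>>> enc = OneHotEncoder()                                          # one-hot encoding for categorical features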
Algorithms That May Not Be Supported (or that I simply did not find)

- Extreme gradient boosting (XGBoost): supported by the dedicated xgboost package.
- Deep learning algorithms such as RNN, DNN, NN and LSTM: supported by dedicated deep learning frameworks such as TensorFlow and Keras.
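For completeness, a minimal sketch of the scikit-learn-compatible wrapper that the xgboost package ships (assuming xgboost is installed; the parameter values are only illustrative):

>>> from xgboost import XGBClassifier
>>> clf = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
>>> # clf.fit(X_train, y_train) and clf.predict(X_test) then follow the usual sklearn estimator API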