Feature Selection
1. The TF-IDF Principle
TF-IDF is short for Term Frequency - Inverse Document Frequency. It consists of two parts: TF and IDF.
TF is the term frequency, i.e. how often a term occurs in a document. IDF is the inverse document frequency, which reflects how common a term is across the whole corpus: if a term appears in many documents, its IDF should be low; conversely, if it appears in only a few documents, its IDF should be high.
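A common formulation of the combined score (the log base and smoothing differ between implementations; scikit-learn, used below, applies a smoothed IDF plus 1) is:

TF-IDF(t, d) = TF(t, d) * IDF(t)
IDF(t) = log( N / (1 + df(t)) )   # N = number of documents, df(t) = number of documents containing term t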
2. Converting Text to a Matrix (with TF-IDF values as weights)
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer

# A small toy corpus of four short documents
corpus = ["I come to China to travel",
          "This is a car popular in China",
          "I love tea and Apple ",
          "The work is to write some papers in science"]

# TfidfVectorizer tokenizes the corpus and returns a sparse
# document-term matrix whose entries are TF-IDF weights
tfidf = TfidfVectorizer()
re = tfidf.fit_transform(corpus)
print(re)
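The CountVectorizer and TfidfTransformer imported above can also be chained to build the same matrix in two explicit steps; a minimal sketch of that equivalent pipeline:

# First count raw term frequencies, then rescale the counts into TF-IDF weights
vectorizer = CountVectorizer()
transformer = TfidfTransformer()
counts = vectorizer.fit_transform(corpus)
tfidf2 = transformer.fit_transform(counts)
print(tfidf2)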
3. The Principle of Mutual Information
Pointwise mutual information (PMI) measures how strongly two events (for example, two words) are associated.
PMI is derived from the concept of mutual information in information theory.
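Concretely, PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ), and mutual information is the expectation of PMI over all joint outcomes of the two variables. A minimal sketch of the PMI computation for two words, using hypothetical document counts (the numbers below are made up purely for illustration):

import math

N = 10000      # hypothetical total number of documents
n_x = 500      # documents containing word x
n_y = 400      # documents containing word y
n_xy = 60      # documents containing both x and y

p_x, p_y, p_xy = n_x / N, n_y / N, n_xy / N
# PMI > 0 means x and y co-occur more often than they would if independent
pmi = math.log(p_xy / (p_x * p_y))
print(pmi)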
4. Feature Selection with Mutual Information
from sklearn.feature_selection import VarianceThreshold, SelectKBest, chi2
from sklearn.feature_selection import mutual_info_classif  # measures mutual information between features and a discrete target
from sklearn.datasets import load_iris
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# Columns to treat as discrete; record their positions for discrete_features
feature_cat = ["A", "D"]
discrete_features = []
feature = X_df.columns.values.tolist()
for k in feature_cat:
    if k in feature:
        discrete_features.append(feature.index(k))

# Mutual information score of each feature against the class label
mu = mutual_info_classif(X_df, y, discrete_features=discrete_features,
                         n_neighbors=3, copy=True, random_state=None)

# Map feature name -> mutual information score
dict_feature = {}
for i, j in zip(X_df.columns.values, mu):
    dict_feature[i] = j

# Sort the dictionary by value (score), highest first
ls = sorted(dict_feature.items(), key=lambda item: item[1], reverse=True)

# Number of features to keep
k = 2
ls_new_feature = []
for i in range(k):
    ls_new_feature.append(ls[i][0])

# Keep only the k highest-scoring features
X_new = X_df[ls_new_feature]
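The SelectKBest class imported above can perform the same selection in one step; a sketch assuming the X_df, y, and discrete_features variables from the block above (functools.partial is used only to forward the same discrete_features setting to the scorer):

from functools import partial

# Score every column with mutual_info_classif and keep the 2 best ones
score_func = partial(mutual_info_classif, discrete_features=discrete_features)
selector = SelectKBest(score_func=score_func, k=2)
X_new_kbest = selector.fit_transform(X_df, y)
print(X_df.columns[selector.get_support()])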