Analysis of the competition task (Task 1):
1. Task: given the text of a news event, decide whether the event is real news or fake news.
2. Approach: train a binary classifier that distinguishes real news from fake news.
I. Data preprocessing:
a. Examining the sample data:
The training set contains 38,471 records, each with three fields: id, text, and label, where 1 denotes a positive example (real) and 0 a negative example (fake).
b. Data cleaning:
1. Generate the corpus for training word vectors.
2. Remove meaningless noise from the training samples, e.g. URLs, @-mentions, etc.
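A quick way to check the field layout and class balance is pandas; a minimal sketch, where the in-memory frame stands in for ./train.csv:

```python
import pandas as pd

# toy stand-in for ./train.csv: id / text / label, 1 = real, 0 = fake (rows invented)
df = pd.DataFrame({
    'id': [1, 2, 3, 4],
    'text': ['官方 通报', '震惊 转发', '记者 核实', None],
    'label': [1, 0, 1, 0],
})

print(df.shape)                    # number of records and fields
print(df['label'].value_counts())  # class balance between real (1) and fake (0)
print(df['text'].isna().sum())     # missing texts that dropna() will remove
```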
import re
import jieba
import pandas as pd

# assumed definition (the original is not shown): keep only Chinese characters and alphanumerics
pattern = re.compile(u'[\u4E00-\u9FA5A-Za-z0-9]+')

def clean_data():
    file = pd.read_csv('./train.csv')
    f = pd.DataFrame(file, columns=['text', 'label']).dropna(subset=['text', 'label'])
    print(f.head())
    i = 0
    with open('./train_vec.txt', 'w', encoding='utf-8') as ft, \
         open('./train_data.txt', 'w', encoding='utf-8') as fl:
        for text, label in zip(f['text'], f['label']):
            if text != '':
                # corpus for training the word vectors
                ft.write(' '.join(jieba.lcut(text.strip())) + '\n')
                # strip noise: hashtags, @mentions, parenthesised spans, URLs, boilerplate words
                line = re.sub(u'#.*?#|@[\u4E00-\u9FA5A-Za-z0-9_-]+|(.*?)|\(.*?\)|(https?|ftp|file)://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|]|分享|来自', '', text.strip())
                fl.write('\t'.join(jieba.lcut(''.join(pattern.findall(line)))) + '\t' + '__label__' + str(label) + '\n')
                i += 1
    print(i)
Samples after preprocessing:
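To make the cleaning step concrete, here is a condensed version of the noise-removal regex applied to a made-up sentence (the input text is invented for illustration):

```python
import re

# condensed noise pattern: hashtags, @mentions, URLs, boilerplate words ("share", "from")
noise = re.compile(u'#.*?#|@[\u4E00-\u9FA5A-Za-z0-9_-]+'
                   u'|(https?|ftp|file)://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|]'
                   u'|分享|来自')

raw = '#热点# @某用户 分享 http://t.cn/abc 某地发生地震'
clean = noise.sub('', raw).strip()
print(clean)  # only the actual news text survives
```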
II. Feature engineering:
1. Training word vectors (using fastText):
import fasttext

def fasttext_vec():
    path1 = './train_vec.txt'   # input: space-separated tokenised text
    path2 = './model_vec_300'   # output: model path prefix
    print('Start training word vectors!')
    # fasttext.cbow is the old (<=0.8.x) fasttext package API;
    # recent versions expose fasttext.train_unsupervised(path1, model='cbow', ...) instead
    model = fasttext.cbow(path1, path2, min_count=2, ws=5, dim=300,
                          word_ngrams=3, bucket=2000000, encoding='utf-8')
    print('Word vector training finished!')
2. Splitting the dataset and building term-frequency and TF-IDF features:
from sklearn.model_selection import train_test_split

data_label = {'0': 0, '1': 1}  # label-string to int mapping (assumed; not shown in the original)

def split_data():
    x, y = [], []
    i = 0
    with open('./train_data.txt', 'r', encoding='utf-8') as f:
        for line in f:
            # the last tab-separated field is the __label__ tag; everything before it is the text
            line_list = line.strip().rsplit('\t', 1)
            if len(line_list) == 2:
                try:
                    x.append(line_list[0].strip())
                    y.append(data_label[line_list[1].replace('__label__', '').strip()])
                    i += 1
                except KeyError:
                    continue
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=100)
    print('Data split finished, %d records in total.' % i)
    return x_train, x_test, y_train, y_test
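A quick sanity check of the line format produced by the cleaning step and consumed here (the sample line is invented):

```python
# a made-up line in the train_data.txt format: tab-joined tokens + __label__ tag
line = '某地\t发生\t地震\t__label__1\n'

text, tag = line.strip().rsplit('\t', 1)  # split off only the last field
label = int(tag.replace('__label__', ''))
print(repr(text))  # tab-joined tokens
print(label)
```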
import joblib
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

def cvANDidf():
    x_train, x_test, y_train, y_test = split_data()
    print('Start feature transformation...')
    vectorizer = CountVectorizer(min_df=2, max_features=300)
    x_train_cv = vectorizer.fit_transform(x_train).todense()
    x_test_cv = vectorizer.transform(x_test).todense()
    joblib.dump(vectorizer, './model/cv_300.pkl')
    print(len(x_train_cv), np.array(x_train_cv).shape)
    print('Term-frequency features done!')
    transformer = TfidfTransformer()  # computes the tf-idf weight of every term
    x_train_idf = transformer.fit_transform(x_train_cv).todense()
    x_test_idf = transformer.transform(x_test_cv).todense()
    joblib.dump(transformer, './model/tfidf_300.pkl')
    print(len(x_train_idf), np.array(x_train_idf).shape)
    print('tf-idf done!')
    return x_train_cv, x_test_cv, x_train_idf, x_test_idf, y_train, y_test
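The two transforms above, run on a toy corpus (the corpus is invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = ['地震 发生 某地', '某地 谣言 转发 转发', '记者 核实 地震']

cv = CountVectorizer()             # min_df/max_features left at defaults for the toy corpus
counts = cv.fit_transform(corpus)  # term-frequency matrix, one row per document
tfidf = TfidfTransformer().fit_transform(counts)

print(counts.shape)  # (3 documents, vocabulary size)
print(tfidf.shape)   # same shape, tf-idf weighted
```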
III. Feature vector generation
The word vectors, term-frequency vectors, and TF-IDF vectors are concatenated to form the model input.
def get_fasttext_vec(file, model):  # load the word vectors and convert each line into one vector
    x = []
    print('File length: %d' % len(file))
    for line in file:
        new_list = []  # reset per line so vectors do not accumulate across documents
        for word in line.strip().split():
            try:
                new_list.append(model[word.strip()])
            except KeyError:
                continue
        line_vec = np.array(new_list).sum(axis=0)
        # L2-normalise the summed word vectors
        x.append(list(line_vec / np.sqrt((line_vec ** 2).sum())))
    return x
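The per-line vector construction above, shown on toy 3-dimensional embeddings (the embedding dict is invented):

```python
import numpy as np

# toy word embeddings standing in for the fastText model
emb = {'某地': np.array([1.0, 0.0, 0.0]),
       '地震': np.array([0.0, 2.0, 0.0])}

vecs = [emb[w] for w in '某地 地震'.split() if w in emb]
summed = np.array(vecs).sum(axis=0)               # element-wise sum: [1, 2, 0]
line_vec = summed / np.sqrt((summed ** 2).sum())  # scale to unit L2 norm

print(line_vec)
print(np.sqrt((line_vec ** 2).sum()))  # length is 1 after normalisation
```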
def w2v(x):  # feature fusion: build the final input feature matrix
    model = fasttext.load_model('./model_vec_300.bin', encoding='utf-8')
    cv = joblib.load('./model/cv_300.pkl')
    tfidf = joblib.load('./model/tfidf_300.pkl')
    vec_feature = get_fasttext_vec(x, model)
    cv_feature = cv.transform(x).todense()
    tfidf_feature = tfidf.transform(cv_feature).todense()
    new_feature = np.concatenate((cv_feature, tfidf_feature, vec_feature), axis=1)
    return new_feature
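The fusion step is a plain column-wise concatenation; with three 300-dimensional blocks it yields 900 features per sample:

```python
import numpy as np

n = 5  # toy number of samples
cv_feature = np.zeros((n, 300))     # term-frequency block
tfidf_feature = np.zeros((n, 300))  # tf-idf block
vec_feature = np.zeros((n, 300))    # fastText sentence-vector block

fused = np.concatenate((cv_feature, tfidf_feature, vec_feature), axis=1)
print(fused.shape)  # (5, 900)
```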
IV. Model training
As a first attempt I chose plain logistic regression, mainly because it is fast to train.
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, accuracy_score, f1_score

def model(x_train, x_test, y_train, y_test):
    lr = LogisticRegression(C=200, class_weight='balanced')
    print('Start training!')
    lr.fit(x_train, y_train)
    y_pred = lr.predict(x_test)
    print(classification_report(y_test, y_pred))
    print('Testing accuracy %s' % accuracy_score(y_test, y_pred))
    print('Testing F1 score: {}'.format(f1_score(y_test, y_pred, average='weighted')))
    joblib.dump(lr, './model/lr.model')
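A self-contained run of the same classifier settings on toy, well-separated data (the data is invented; it only demonstrates the training and scoring calls):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.RandomState(0)
# two separable clusters: class 0 around -2, class 1 around +2
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

lr = LogisticRegression(C=200, class_weight='balanced')
lr.fit(X, y)
y_pred = lr.predict(X)

print(accuracy_score(y, y_pred))                 # accuracy on this easy data
print(f1_score(y, y_pred, average='weighted'))
```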
V. Model evaluation
The F1 score on the local test split is 0.97, but the online test score is only 0.81; the first-place team's online F1 reached 0.93.
VI. Improvements
1. Models: I tried other models, such as fastText, RF (random forest), and BERT, which has made a big name for itself in recent years; they all performed similarly, around 0.8 on the online test, which is probably the ceiling for a single model, so model fusion (ensembling) is a natural next step.
2. Data: the training samples contain many noisy records, e.g. a) 发表文章 邵逸夫 邵逸夫 __label__1 and b) 邢台 威县 邢台 威县 __label__0; they carry no real meaning and cannot be judged true or false, so they only add noise.
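The model-fusion idea in point 1 can be sketched with sklearn's soft-voting ensemble (the data here is invented; in practice the base estimators would be the tuned LR, RF, etc.):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-1, 1, (50, 5)), rng.normal(1, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

ensemble = VotingClassifier(
    estimators=[('lr', LogisticRegression(class_weight='balanced')),
                ('rf', RandomForestClassifier(n_estimators=50, random_state=0))],
    voting='soft')  # average the predicted probabilities across models
ensemble.fit(X, y)

print(ensemble.score(X, y))  # training accuracy of the fused model
```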