Classifying Text Data with LR and SVM

This post looks at how to classify text data effectively with a linear support vector machine (svm.LinearSVC) and logistic regression (LogisticRegression), using a worked example to show how the two algorithms perform and differ on text features.

import numpy as np
import pandas as pd

# Load the Daguan competition training set; the columns used below are
# "word_seg" (the segmented text fed to the vectorizer) and "class" (the label).
training = pd.read_csv("D:/ML/competition/daguan/new_data/train_set.csv")
# print(training.head())
# print(training.shape)
# print(training.columns)
print(training.info())

import time
t_start = time.time()

from sklearn.feature_extraction.text import TfidfVectorizer
# Word/bigram TF-IDF features: terms in fewer than 3 documents or in more than
# 90% of documents are dropped, and term frequency is log-scaled (sublinear_tf).
vec = TfidfVectorizer(ngram_range=(1, 2), min_df=3, max_df=0.9,
                      use_idf=True, smooth_idf=True, sublinear_tf=True)
tfidfX_train = vec.fit_transform(training["word_seg"])
# print(tfidfX_train)

from sklearn.model_selection import train_test_split  # to carve out a validation set
# X_train = training.drop("class", axis=1)
y_train = training["class"]
X_training, X_valid, y_training, y_valid = train_test_split(tfidfX_train, y_train, test_size=0.2, random_state=0)


from sklearn import svm
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
clf = svm.LinearSVC(C=5, dual=False)
# clf = LogisticRegression(C=120, dual=True)  # swap in to reproduce the LogisticRegression result below

clf.fit(X_training, y_training)
y_prediction = clf.predict(X_valid)
f1 = f1_score(y_valid, y_prediction, average='micro')

print(f1)

t_end = time.time()
print("elapsed: {} min".format((t_end - t_start) / 60))

Results with svm.LinearSVC:

0.790672663277278
elapsed: 14.886688208580017 min

Results with LogisticRegression:

0.7914059444661713
elapsed: 14.259357845783233 min
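
Both models land at roughly 0.79 micro-F1 on the held-out 20% in about 15 minutes, so the choice between them here is more about convenience than accuracy. For reusing the final model on new raw text, the vectorizer and classifier can be packaged into a single sklearn Pipeline; a minimal sketch, assuming the same training DataFrame and hyperparameters as above:

from sklearn import svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

# One object that turns raw word_seg strings into TF-IDF features and
# classifies them, so new documents can be scored with a single call.
text_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=3, max_df=0.9,
                              use_idf=True, smooth_idf=True, sublinear_tf=True)),
    ("clf", svm.LinearSVC(C=5, dual=False)),
])
text_clf.fit(training["word_seg"], training["class"])
# predicted = text_clf.predict(new_word_seg_series)  # hypothetical new data

Note that fitting on the full training set this way gives up the validation split, so this form is for producing a final model rather than for comparing classifiers.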