Multi-class classification of traffic accident texts: being a happy library caller

jieba + word2vec + GBDT + OneVsRestClassifier

1. Background

After two days of data processing, I now have roughly 1k+ records in an "accident description, accident cause" format. My teammates are still working overtime on the rest of the unprocessed data, so rather than waste time, today I'll just put together a skeleton pipeline and see how the preliminary results look.

2. Data preprocessing

As in the previous two articles, the preprocessing consists of stop word removal, word segmentation, sentence vectorization, and so on.

# Imports
import numpy as np
import pandas as pd
import sys
import os
import gensim
from gensim.models import word2vec
from gensim.models.word2vec import LineSentence
# Read the data
data = pd.read_excel('../data/data.xlsx')
data

The data looks like this (it is confidential, so I had to blur the screenshot; sorry):

# Keep only the columns we need: x (accident description) and y (cause label)
data = data[['y','x']]
data.to_csv('../data/data.csv', index = False)

The data now looks like this:

# Check whether a string contains at least one Chinese character
def check_contain_chinese(check_str):
    for ch in check_str:
        if u'\u4e00' <= ch <= u'\u9fff':
            return True
    return False

This defines a function that checks whether a string contains Chinese characters, which makes it easy to drop unnecessary numbers later.
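For example, given the definition above the helper behaves like this:

check_contain_chinese('追尾')   # True  -- Chinese token, kept
check_contain_chinese('2018')   # False -- pure digits, dropped later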

# Word segmentation
import jieba
jieba.load_userdict('../dic.txt')
stop = [line.strip() for line in open('../stopwords.txt').readlines()]
# Remove stop words, non-Chinese tokens and place-name suffixes
for index in range(len(data)):
    ct = jieba.cut(data.loc[index, 'x'])
    out = ''
    for word in ct:
        if word in stop:
            continue
        if not check_contain_chinese(word):
            continue
        # Drop tokens ending in 省/市/县/镇 (province / city / county / town)
        if word.endswith(("省", "市", "县", "镇")):
            continue
        out += word + " "
    data.loc[index, 'split'] = out

The stop word list also includes the names and abbreviations of every province and city in the country, to keep them from confusing the model.
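The stop word file was assembled by hand, but the place names could also be appended programmatically. A minimal sketch (the province list below is truncated and purely illustrative):

# Append province/city names to the stop word list (illustrative, truncated list)
provinces = ['北京', '天津', '河北', '山西', '内蒙古', '辽宁', '吉林']  # ...and so on
with open('../stopwords.txt', 'a', encoding='utf-8') as f:
    for name in provinces:
        f.write(name + '\n')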

# Build the corpus for word2vec from the segmented column
text = data['split']
sentences = []
for item in text:
    sentence = str(item).split(' ')
    sentences.append(sentence)
# Train a 50-dimensional word2vec model (with gensim >= 4.0 the argument is vector_size instead of size)
model = word2vec.Word2Vec(sentences, size = 50)
model.save('jk.model')
def buildWordVector(imdb_w2v, text, size):
    """Average the word vectors of all words in a sentence."""
    vec = np.zeros(size).reshape((1, size))
    count = 0.
    for word in text.split():
        try:
            vec += imdb_w2v[word].reshape((1, size))
            count += 1.
        except KeyError:
            # Word not in the vocabulary: print it and skip
            print(word)
            continue
    if count != 0:
        vec /= count
    return vec
# Load the model back and stack one 50-dimensional vector per accident description
model = word2vec.Word2Vec.load('./jk.model')
result = buildWordVector(model, data.loc[0]['split'], 50)
for i in range(1, len(data)):
    result = np.concatenate((result, buildWordVector(model, data.loc[i]['split'], 50)), axis = 0)

# Turn the matrix into a DataFrame with 50 named feature columns
vectors = pd.DataFrame(result, columns = ["x" + str(i) for i in range(1, 51)])
vectors
# Concatenate the feature columns onto the original DataFrame
data = pd.concat([data, vectors], axis = 1)
data

After vectorization is done:
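Since the screenshot is redacted, a quick way to sanity-check the merged frame (my own addition):

print(data.shape)   # 1k+ rows, original columns plus the 50 feature columns
data.head()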

3. Modeling and training

import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
%matplotlib inline
x = data[["x" + str(i) for i in range(1, 51)]]
y = data.y
print(y.shape)
print(y.value_counts())
# Binarize the labels for the one-vs-rest multi-class work below
y = label_binarize(y, classes=[1, 2, 3, 4, 5])
print(y[:3])

# Number of classes
n_classes = y.shape[1]

# Train the model and predict
random_state = np.random.RandomState(0)
n_samples, n_features = x.shape
# Shuffle the data and split it into training and test sets
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

4. Results

model = OneVsRestClassifier(GradientBoostingClassifier(learning_rate=0.4,n_estimators=1000,max_depth=7))
clt = model.fit(X_train, y_train)

After a first pass of model selection and parameter tuning, I have a preliminary result. All right, the lab is about to be locked up... shutting down the computer and heading off to pick up my girlfriend.
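The evaluation itself isn't shown here, but a minimal sketch of how the fitted one-vs-rest model could be scored on the held-out test set, reusing the roc_curve, auc and cycle imports from above (my addition, assuming the variables defined earlier):

# Sketch: per-class ROC on the test set (assumes clt, X_test, y_test, n_classes from above)
y_score = clt.predict_proba(X_test)              # shape (n_samples, n_classes)
fpr, tpr, roc_auc = {}, {}, {}
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
    print('class %d AUC: %.3f' % (i + 1, roc_auc[i]))

# Plot one ROC curve per class
colors = cycle(['aqua', 'darkorange', 'cornflowerblue', 'green', 'red'])
plt.figure()
for i, color in zip(range(n_classes), colors):
    plt.plot(fpr[i], tpr[i], color=color,
             label='class %d (AUC = %0.2f)' % (i + 1, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('One-vs-rest ROC curves')
plt.legend(loc='lower right')
plt.show()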

Once my teammates finish the stop word list and the rest of the data tomorrow, I'll do a proper job as a parameter tuner.

(The data is confidential and cannot be shared. Sorry!)
