Huabei Question Matching (Part 1: Data Preprocessing)

As of 2021-06-16, MatchZoo, a very capable text-matching library, has not been updated to support TF 2.4 or later, so it cannot take advantage of my RTX 3090 for acceleration. I therefore worked backwards from its source code and reimplemented these models directly in TF 2.4.

"""
作者英俊
QQ 2227495940
所有权:西安建筑科技大学草堂校区 信控楼704实验室
"""
"暂定只能扣13个模型出来"
'暂定只能扣13个模型出来'

Importing third-party libraries

# Import TensorFlow and its bundled Keras; the reimplemented models depend on these two libraries
import tensorflow as tf                   # TensorFlow itself
from tensorflow import keras              # the Keras API
from tensorflow.keras import backend as K # the Keras backend

print(tf.__version__)
2.4.0
import matchzoo as mz
# Check which models are available; given my current skill level, I should be able to extract and accelerate only 16 of them
mz.models.list_available()
Using TensorFlow backend.
[matchzoo.models.naive.Naive,
 matchzoo.models.dssm.DSSM,
 matchzoo.models.cdssm.CDSSM,
 matchzoo.models.dense_baseline.DenseBaseline,
 matchzoo.models.arci.ArcI,
 matchzoo.models.arcii.ArcII,
 matchzoo.models.match_pyramid.MatchPyramid,
 matchzoo.models.knrm.KNRM,
 matchzoo.models.duet.DUET,
 matchzoo.models.drmmtks.DRMMTKS,
 matchzoo.models.drmm.DRMM,
 matchzoo.models.anmm.ANMM,
 matchzoo.models.mvlstm.MVLSTM,
 matchzoo.contrib.models.match_lstm.MatchLSTM,
 matchzoo.contrib.models.match_srnn.MatchSRNN,
 matchzoo.contrib.models.hbmp.HBMP,
 matchzoo.contrib.models.esim.ESIM,
 matchzoo.contrib.models.bimpm.BiMPM,
 matchzoo.contrib.models.diin.DIIN,
 matchzoo.models.conv_knrm.ConvKNRM]

Reading the data

import pandas as pd
# Load the dataset and inspect it
data_df = pd.read_csv("data/atec_nlp_sim_train_all.csv", sep="\t", header=None, 
                      encoding="utf-8-sig", names=["sent1", "sent2", "label"])
# Look at the first and last rows
data_df.head(10).append(data_df.tail(5))
index | sent1 | sent2 | label
1 | 怎么更改花呗手机号码 | 我的花呗是以前的手机号码,怎么更改成现在的支付宝的号码手机号 | 1
2 | 也开不了花呗,就这样了?完事了 | 真的嘛?就是花呗付款 | 0
3 | 花呗冻结以后还能开通吗 | 我的条件可以开通花呗借款吗 | 0
4 | 如何得知关闭借呗 | 想永久关闭借呗 | 0
5 | 花呗扫码付钱 | 二维码扫描可以用花呗吗 | 0
6 | 花呗逾期后不能分期吗 | 我这个 逾期后还完了 最低还款 后 能分期吗 | 0
7 | 花呗分期清空 | 花呗分期查询 | 0
8 | 借呗逾期短信通知 | 如何购买花呗短信通知 | 0
9 | 借呗即将到期要还的账单还能分期吗 | 借呗要分期还,是吗 | 0
10 | 花呗为什么不能支付手机交易 | 花呗透支了为什么不可以继续用了 | 0
102473 | 花呗分期还一期后能用吗 | 分期是还花呗吗 | 0
102474 | 我的支付宝手机号和花呗手机号不一样怎么办 | 支付宝上的手机号,怎么和花呗上的不一样 | 1
102475 | 借呗这个月的分期晚几天还可以吗 | 借呗分期后可以更改分期时间吗 | 0
102476 | 我怎么没有花呗临时额度了 | 花呗有零时额度吗 | 0
102477 | 怎么授权芝麻信用给花呗 | 花呗授权联系人怎么授权 | 0
data_df.shape # check the dataset size
(102477, 3)
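The `read_csv` call relies on `sep="\t"` and explicit column names because the file has no header row. A self-contained sketch of the same parse on a hypothetical two-row sample:

```python
import io
import pandas as pd

# Hypothetical two-row sample in the same tab-separated, headerless
# layout as atec_nlp_sim_train_all.csv (sent1 \t sent2 \t label).
raw = "怎么更改花呗手机号码\t花呗手机号怎么改\t1\n花呗分期清空\t花呗分期查询\t0\n"
df = pd.read_csv(io.StringIO(raw), sep="\t", header=None,
                 names=["sent1", "sent2", "label"])
print(df.shape)           # (2, 3)
print(df.label.tolist())  # [1, 0]
```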

Processing the data

import sklearn  # scikit-learn
from sklearn.model_selection import train_test_split  # can split a dataset into train/test/validation sets
# To keep runtimes manageable, use only the first 3,501 samples (2,501 for train/dev, 1,000 held out)
sent1 = data_df.sent1.values[:3501]
sent2 = data_df.sent2.values[:3501]
label = data_df.label.values[:3501]
# Training portion
sent1_ = sent1[:2501]
sent2_ = sent2[:2501]
label_ = label[:2501]
# Held-out portion (used as the test set below)
_sent1 = sent1[2501:]
_sent2 = sent2[2501:]
_label = label[2501:]

# Reshape the training portion into the DataFrame layout MatchZoo expects
train_dev_data=pd.DataFrame()
train_dev_data['id_left']=range(2501)
train_dev_data['text_left']=sent1_
train_dev_data['id_right']=range(2501)
train_dev_data['text_right']=sent2_
train_dev_data['label']=label_

# Reshape the held-out portion into the same MatchZoo-style layout
test_data=pd.DataFrame()
test_data['id_left']=range(1000)
test_data['text_left']=_sent1
test_data['id_right']=range(1000)
test_data['text_right']=_sent2
# test_data['label'] = _label
# This program does not actually use MatchZoo; the layout is kept identical so it can be compared against the MatchZoo version
# Build the character vocabulary
from collections import Counter

c = Counter()
sent_data = data_df["sent1"].values + data_df["sent2"].values  # element-wise string concatenation, so characters from both columns are counted
for d in sent_data:
    c.update(d)
word_counts = sorted(dict(c).items(), key=lambda x: x[1], reverse=True)

print(word_counts[:10])

# Build the mappings between characters and indices
vocab_words = ["<PAD>", "<UNK>"]
for w, _ in word_counts:
    vocab_words.append(w)

vocab2id = {w: i for i, w in enumerate(vocab_words)}
id2vocab = {i: w for i, w in enumerate(vocab_words)}

print("vocab size: ", len(vocab2id))
print(list(vocab2id.items())[:5])
print(list(id2vocab.items())[:5])

# Save the vocabulary (one character per line; the line number doubles as the id)
with open("vocab.txt", "w", encoding="utf8") as f:
    for w, i in vocab2id.items():
        f.write(w+"\n")
        
# Convert a text into a sequence of character indices (out-of-vocabulary characters map to <UNK>)
def sent2index(vocab2id, words):
    return [vocab2id.get(w, vocab2id["<UNK>"]) for w in words]
# Convert the training set into index form
train_dev_data["text_left"] = train_dev_data["text_left"].apply(lambda x: sent2index(vocab2id, x))
train_dev_data["text_right"] = train_dev_data["text_right"].apply(lambda x: sent2index(vocab2id, x))


# Convert the test set into index form
test_data["text_left"] = test_data["text_left"].apply(lambda x: sent2index(vocab2id, x))
test_data["text_right"] = test_data["text_right"].apply(lambda x: sent2index(vocab2id, x))

[('呗', 211063), ('花', 151025), ('么', 83328), ('还', 80050), ('借', 69825), ('我', 67036), ('款', 62302), ('的', 61108), ('了', 56689), ('用', 52685)]
vocab size:  2175
[('<PAD>', 0), ('<UNK>', 1), ('呗', 2), ('花', 3), ('么', 4)]
[(0, '<PAD>'), (1, '<UNK>'), (2, '呗'), (3, '花'), (4, '么')]
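The vocabulary recipe above (reserve ids 0 and 1 for `<PAD>`/`<UNK>`, rank the remaining characters by frequency, then map texts to index sequences) can be sanity-checked on a tiny hypothetical corpus:

```python
from collections import Counter

# Tiny hypothetical corpus, same recipe as the real one above
corpus = ["花呗分期", "花呗查询"]
c = Counter()
for s in corpus:
    c.update(s)
# Most frequent characters get the smallest ids after the two reserved slots
vocab_words = ["<PAD>", "<UNK>"] + [w for w, _ in sorted(c.items(), key=lambda x: x[1], reverse=True)]
vocab2id = {w: i for i, w in enumerate(vocab_words)}
id2vocab = {i: w for i, w in enumerate(vocab_words)}

def sent2index(vocab2id, words):
    return [vocab2id.get(w, vocab2id["<UNK>"]) for w in words]

ids = sent2index(vocab2id, "花呗还款")  # 还/款 are out-of-vocabulary
print(ids)                              # [2, 3, 1, 1]
print("".join(id2vocab[i] for i in ids))  # 花呗<UNK><UNK>
```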
train_dev_data.head(10).append(train_dev_data.tail(10))
index | id_left | text_left | id_right | text_right | label
0 | 0 | [1515, 15, 4, 187, 129, 3, 2, 57, 73, 43, 60] | 0 | [7, 9, 3, 2, 18, 23, 52, 9, 57, 73, 43, 60, 14... | 1
1 | 1 | [160, 31, 13, 10, 3, 2, 14, 95, 66, 89, 10, 20... | 1 | [564, 9, 179, 200, 95, 18, 3, 2, 25, 8] | 0
2 | 2 | [3, 2, 155, 132, 23, 51, 5, 21, 31, 36, 16] | 2 | [7, 9, 243, 213, 22, 23, 31, 36, 3, 2, 6, 8, 16] | 0
3 | 3 | [76, 85, 260, 227, 69, 96, 6, 2] | 3 | [92, 459, 142, 69, 96, 6, 2] | 0
4 | 4 | [3, 2, 231, 60, 25, 34] | 4 | [180, 271, 60, 231, 679, 22, 23, 11, 3, 2, 16] | 0
5 | 5 | [3, 2, 71, 19, 51, 13, 21, 29, 19, 16] | 5 | [7, 66, 37, 53, 71, 19, 51, 5, 124, 10, 53, 93... | 0
6 | 6 | [3, 2, 29, 19, 94, 680] | 6 | [3, 2, 29, 19, 131, 261] | 0
7 | 7 | [6, 2, 71, 19, 236, 58, 36, 227] | 7 | [76, 85, 175, 86, 3, 2, 236, 58, 36, 227] | 0
8 | 8 | [6, 2, 569, 464, 41, 19, 48, 5, 9, 46, 78, 5, ... | 8 | [6, 2, 48, 29, 19, 5, 14, 18, 16] | 0
9 | 9 | [3, 2, 26, 17, 4, 13, 21, 33, 25, 57, 73, 144,... | 9 | [3, 2, 401, 33, 10, 26, 17, 4, 13, 22, 23, 383... | 0
2491 | 2491 | [3, 2, 5, 8, 107, 20, 97, 7, 9, 46, 78, 107, 2... | 2491 | [54, 11, 3, 2, 87, 49, 9, 107, 20, 97, 88, 44,... | 1
2492 | 2492 | [3, 2, 33, 25, 366, 374] | 2492 | [27, 28, 6, 2, 366, 374] | 0
2493 | 2493 | [3, 2, 29, 19, 10, 21, 40, 52, 5, 8, 16] | 2493 | [3, 2, 29, 19, 15, 4, 174, 20, 5, 8] | 0
2494 | 2494 | [3, 2, 22, 23, 115, 104, 350, 369, 16] | 2494 | [3, 2, 121, 42, 20, 30, 22, 23, 38, 350, 369, ... | 0
2495 | 2495 | [57, 73, 70, 74, 63, 7, 9, 3, 2, 339, 505, 365... | 2495 | [7, 9, 3, 2, 70, 167, 14, 26, 17, 4, 7, 9, 33,... | 0
2496 | 2496 | [3, 2, 75, 98, 29, 19, 10, 35, 213, 242, 240, ... | 2496 | [3, 2, 46, 78, 29, 19, 10, 35, 84, 32, 5, 94, ... | 0
2497 | 2497 | [7, 18, 3, 2, 59, 108, 6] | 2497 | [3, 2, 22, 23, 356, 6, 61, 107, 20, 30, 16] | 0
2498 | 2498 | [3, 2, 78, 13, 379, 10, 147, 102, 59, 194] | 2498 | [3, 2, 38, 102, 194, 253, 343] | 0
2499 | 2499 | [3, 2, 71, 19, 111, 82, 56, 35, 183, 287, 16] | 2499 | [3, 2, 238, 12, 12, 12, 14, 71, 19, 12, 12, 12... | 0
2500 | 2500 | [11, 3, 2, 59, 9, 34, 86, 145, 151, 50, 109, 9... | 2500 | [3, 2, 32, 238, 34, 14, 50, 9, 8, 147, 102, 59... | 0
test_data.head(10).append(test_data.tail(10))
index | id_left | text_left | id_right | text_right
0 | 0 | [27, 28, 6, 2, 128, 43, 5, 8] | 0 | [27, 28, 6, 2, 5, 8, 103, 19, 10]
1 | 1 | [7, 31, 36, 3, 2, 51, 53, 20, 30, 5, 18, 12, 1... | 1 | [7, 32, 24, 31, 36, 3, 2, 16]
2 | 2 | [6, 2, 18, 195, 44, 5, 16] | 2 | [6, 2, 417, 424, 141, 37, 44, 105, 48, 5, 16, ...
3 | 3 | [3, 2, 29, 19, 14, 128, 19, 199, 82] | 3 | [3, 2, 22, 23, 29, 12, 12, 12, 19, 9, 16]
4 | 4 | [682, 840, 36, 68, 122, 13, 21, 31, 36, 3, 2, ... | 4 | [18, 13, 18, 682, 840, 36, 68, 1395, 32, 24, 3...
5 | 5 | [7, 126, 52, 9, 46, 43, 18, 22, 23, 11, 3, 2, 9] | 5 | [7, 9, 3, 2, 61, 38, 22, 23, 11, 16]
6 | 6 | [26, 17, 4, 6, 2, 48, 124, 503, 362, 482] | 6 | [76, 85, 124, 503, 6, 2, 40, 20, 362, 482]
7 | 7 | [3, 2, 417, 424, 45, 65, 34, 99, 22, 23, 91, 2... | 7 | [3, 2, 29, 12, 12, 12, 19, 111, 82, 45, 65]
8 | 8 | [26, 17, 4, 7, 9, 162, 35, 37, 46, 43, 75, 98,... | 8 | [14, 7, 66, 37, 33, 25, 39, 26, 17, 4, 11, 13,...
9 | 9 | [3, 2, 144, 290, 69, 96, 10, 15, 4, 5, 48, 5, 8] | 9 | [3, 2, 144, 290, 69, 96, 17, 4, 184, 201]
990 | 990 | [27, 28, 6, 2, 155, 132, 505, 4, 202, 155] | 990 | [6, 2, 155, 132, 45, 142, 21, 132, 155]
991 | 991 | [66, 228, 34, 7, 3, 2, 32, 24, 253, 46] | 991 | [50, 8, 41, 3, 2, 34, 15, 4, 32, 41, 46, 176, ...
992 | 992 | [3, 2, 22, 160, 38, 86, 57, 73, 33, 25, 16, 20... | 992 | [3, 2, 33, 25, 24, 57, 854, 49, 16]
993 | 993 | [7, 125, 296, 296, 629, 213, 172, 59, 108, 74,... | 993 | [21, 11, 3, 2, 53, 113, 18, 38, 7, 9, 368, 108...
994 | 994 | [7, 92, 31, 36, 58, 11, 55, 3, 2, 33, 25] | 994 | [3, 2, 58, 11, 55, 47, 34, 31, 36, 10, 15, 4, 11]
995 | 995 | [7, 9, 47, 8, 60, 13, 21, 31, 36, 3, 2, 97, 58... | 995 | [47, 34, 60, 21, 11, 3, 2, 25, 8]
996 | 996 | [3, 2, 13, 304, 72, 5, 8, 56, 79, 72, 5, 8, 16] | 996 | [3, 2, 5, 8, 51, 217, 79, 72, 45, 62, 10]
997 | 997 | [61, 38, 32, 24, 27, 28, 6, 2, 10, 16] | 997 | [6, 2, 160, 32, 24]
998 | 998 | [70, 37, 44, 3, 2, 11, 10, 12, 12, 12, 14, 66,... | 998 | [3, 2, 20, 30, 12, 12, 12, 14, 118, 21, 29, 19...
999 | 999 | [76, 181, 403, 437, 257, 32, 24, 330, 272, 79,... | 999 | [3, 2, 40, 20, 403, 437, 237, 122, 32, 24]
# Maximum sequence length
max_len = 15
# Vocabulary size
vocab_size = len(vocab2id)
# Embedding dimension
embedding_size = 128
# Pad text_left and text_right to the same fixed length
from tensorflow.keras.preprocessing.sequence import pad_sequences
## Pad the training set
sent1_datas = train_dev_data.text_left.values.tolist()
sent2_datas = train_dev_data.text_right.values.tolist()
labels = train_dev_data.label.values.tolist()
train_sent1=pad_sequences(sent1_datas, maxlen=max_len)
train_sent2 = pad_sequences(sent2_datas, maxlen=max_len)
## Pad the test set
test_sent1_datas = test_data.text_left.values.tolist()
test_sent2_datas = test_data.text_right.values.tolist()
test_sent1=pad_sequences(test_sent1_datas, maxlen=max_len)
test_sent2 = pad_sequences(test_sent2_datas, maxlen=max_len)
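`pad_sequences` pads and truncates at the front of each sequence by default (`padding='pre'`, `truncating='pre'`). A minimal pure-Python stand-in for that default behavior (not the Keras function itself), so it can be checked without TensorFlow:

```python
def pad_pre(seqs, maxlen, value=0):
    """Left-pad each sequence with `value` to length `maxlen`,
    keeping only the last `maxlen` tokens when a sequence is longer."""
    out = []
    for s in seqs:
        s = s[-maxlen:]                               # pre-truncate
        out.append([value] * (maxlen - len(s)) + s)   # pre-pad
    return out

print(pad_pre([[3, 2, 5], [1] * 20], maxlen=5))
# [[0, 0, 3, 2, 5], [1, 1, 1, 1, 1]]
```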

# Split into training and validation sets
count = len(labels)
# idx1, idx2 = int(count*0.8), int(count*0.9)
idx1= int(count*0.8)
sent1_train, sent2_train = train_sent1[:idx1], train_sent2[:idx1]
sent1_val, sent2_val = train_sent1[idx1:], train_sent2[idx1:]
# sent1_test, sent2_test = sent1_datas[idx2:], sent2_datas[idx2:]

train_labels, val_labels= labels[:idx1], labels[idx1:]

print("train data: ", len(sent1_train), len(sent2_train), len(train_labels))
print("val data: ", len(sent1_val), len(sent2_val), len(val_labels))
# print("test data: ", len(sent1_test), len(sent2_test), len(test_labels))

import numpy as np  # convert the lists to arrays
train_labels=np.array(train_labels)
val_labels=np.array(val_labels)
train data:  2000 2000 2000
val data:  501 501 501
train_labels
array([1, 0, 0, ..., 0, 1, 0])
# Preprocessing is now complete
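Since part 2 will reuse these arrays, one possible follow-up (hypothetical, not in the original) is to persist them with `np.savez` so preprocessing need not be rerun. Sketched here with toy arrays of the same shapes:

```python
import os
import tempfile
import numpy as np

# Toy stand-ins for the real sent1_train / train_labels arrays
sent1_train = np.zeros((4, 15), dtype=np.int32)
train_labels = np.array([1, 0, 0, 1])

# Save both arrays into a single .npz archive
path = os.path.join(tempfile.mkdtemp(), "processed.npz")
np.savez(path, sent1_train=sent1_train, train_labels=train_labels)

# Reload and verify
loaded = np.load(path)
print(loaded["sent1_train"].shape, loaded["train_labels"].tolist())
# (4, 15) [1, 0, 0, 1]
```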