达观杯 (DataGrand Cup) Risk-Event Experiment Log

Official baseline code

#!/usr/bin/env python
# coding: utf-8

import pandas as pd
from sklearn.model_selection import train_test_split

import sys
sys.path.append("./")


# ### Load the dataset and split train/dev


# Load the data
df_train = pd.read_csv("./datasets/phase_1/splits/fold_0/train.txt")
df_train.columns = ["id", "text", "label"]
df_val = pd.read_csv("./datasets/phase_1/splits/fold_0/dev.txt")
df_val.columns = ["id", "text", "label"]
df_test = pd.read_csv("./datasets/phase_1/splits/fold_0/test.txt")
df_test.columns = ["id", "text", ]

# Build the character vocabulary
charset = set()
for text in df_train['text']:
    for char in text.split(" "):
        charset.add(char)
id2char = ['OOV', ',', '。', '!', '?'] + list(charset)
char2id = {id2char[i]: i for i in range(len(id2char))}

# Label set
id2label = list(df_train['label'].unique())
label2id = {id2label[i]: i for i in range(len(id2label))}


# ### Define the model

from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM, Flatten, Dense
from tensorflow.keras.models import Model
MAX_LEN = 128
input_layer = Input(shape=(MAX_LEN,))
layer = Embedding(input_dim=len(id2char), output_dim=256)(input_layer)
layer = Bidirectional(LSTM(256, return_sequences=True))(layer)
layer = Flatten()(layer)
output_layer = Dense(len(id2label), activation='softmax')(layer)
model = Model(inputs=input_layer, outputs=output_layer)
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

from tensorflow.keras.preprocessing.sequence import pad_sequences
import numpy as np

X_train, X_val, X_test = [], [], []
y_train = np.zeros((len(df_train), len(id2label)), dtype=np.int8)
y_val = np.zeros((len(df_val), len(id2label)), dtype=np.int8)

# Map characters to ids; characters outside the vocabulary fall back to index 0 ('OOV')
for i in range(len(df_train)):
    X_train.append([char2id.get(char, 0) for char in df_train.loc[i, 'text'].split(" ")])
    y_train[i][label2id[df_train.loc[i, 'label']]] = 1
for i in range(len(df_val)):
    X_val.append([char2id.get(char, 0) for char in df_val.loc[i, 'text'].split(" ")])
    y_val[i][label2id[df_val.loc[i, 'label']]] = 1
for i in range(len(df_test)):
    X_test.append([char2id.get(char, 0) for char in df_test.loc[i, 'text'].split(" ")])

X_train = pad_sequences(X_train, maxlen=MAX_LEN, padding='post', truncating='post')
X_val = pad_sequences(X_val, maxlen=MAX_LEN, padding='post', truncating='post')
X_test = pad_sequences(X_test, maxlen=MAX_LEN, padding='post', truncating='post')


# ### Train the model


model.fit(x=X_train, y=y_train, validation_data=(X_val, y_val), epochs=5, batch_size=32)




y_val_pred = model.predict(X_val).argmax(axis=-1)
print(y_val_pred[: 20])
y_val_true = [int(label2id[df_val.loc[i, 'label']]) for i in range(len(df_val))]
print(y_val_true[: 20])

from sklearn.metrics import classification_report
results = {}
# classification_report expects (y_true, y_pred), in that order
classification_report_dict = classification_report(y_val_true, y_val_pred, output_dict=True)
for key0, val0 in classification_report_dict.items():
    if isinstance(val0, dict):
        for key1, val1 in val0.items():
            results[key0 + "__" + key1] = val1

    else:
        results[key0] = val0

import json
print(json.dumps(results, indent=2, ensure_ascii=False))

y_pred = model.predict(X_test).argmax(axis=-1)
pred_labels = [id2label[i] for i in y_pred]
pd.DataFrame({"id": df_test['id'], "label": pred_labels}).to_csv("submission.csv", index=False)

Baseline f1-score: 0.36730954652

StratifiedKFold cross-validation
20210901

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| random word2vec bilstm max-pool | 0.5020 | 0.5305 | 0.4871 | 0.4871 | 0.4881 | 0.5014 |
| random word2vec textcnn max-pool | 0.4396 | 0.4648 | 0.4426 | 0.4457 | 0.4341 | 0.4454 |
| random word2vec bigru max-pool | 0.4834 | 0.5054 | 0.4933 | 0.5201 | 0.4956 | 0.4996 |

Submission: random word2vec bilstm max-pool -> 0.56353646
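The fold_0 … fold_4 splits used throughout can be produced with scikit-learn's `StratifiedKFold`, which keeps each dev split's label distribution close to the full training set. A minimal sketch on toy data (the frame shape and output paths are stand-ins for the real `./datasets/phase_1/splits/` layout):

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for the training frame; sizes and labels here are made up.
df = pd.DataFrame({
    "text": [f"t{i}" for i in range(20)],
    "label": [i % 4 for i in range(20)],
})

# 5 stratified folds: each dev split keeps the overall label distribution,
# and every sample lands in exactly one dev split (sampling is without replacement).
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, dev_idx) in enumerate(skf.split(df["text"], df["label"])):
    train_df, dev_df = df.iloc[train_idx], df.iloc[dev_idx]
    # e.g. write out as fold_{fold}/train.txt and fold_{fold}/dev.txt
    print(fold, len(train_df), len(dev_df))
```

Note that `StratifiedKFold` partitions the data: the five dev splits are disjoint and together cover every training sample once.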

20210902

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| word2vec 128dim bilstm max-pool | 0.4709 | 0.4785 | 0.4949 | 0.4745 | 0.4921 | 0.4822 |
| word2vec 256dim bilstm max-pool | 0.5000 | 0.5075 | 0.4890 | 0.4814 | 0.5012 | 0.4958 |

20210902

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| word2vec bilstm dr_pool | 0.4955 | 0.4959 | 0.4976 | 0.4902 | 0.5364 | 0.5031 |
| word2vec bilstm slf_attn_pool | 0.5387 | 0.5349 | 0.5344 | 0.5297 | 0.5088 | 0.5293 |
| word2vec bilstm avg_pool | 0.5000 | 0.5075 | 0.4890 | 0.4814 | 0.5012 | 0.4958 |
| word2vec bilstm max_pool | 0.5378 | 0.5430 | 0.5397 | 0.5374 | 0.5364 | 0.5389 |

The model overfits somewhat.
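The pooling heads compared above each replace the baseline's `Flatten()` and map the BiLSTM sequence output (batch, T, H) to a fixed-size vector (batch, H). A sketch of plausible implementations in the same Keras style; the self-attention head is one common single-query formulation, not necessarily the exact variant used, and `dr_pool` is omitted because its definition is not given:

```python
from tensorflow.keras import layers, Model

def max_pool(seq):
    return layers.GlobalMaxPooling1D()(seq)

def avg_pool(seq):
    return layers.GlobalAveragePooling1D()(seq)

def slf_attn_pool(seq):
    # Single-query additive attention: score every timestep, softmax over
    # time, then take the weighted sum of the hidden states.
    scores = layers.Dense(1)(seq)                 # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)      # attention over timesteps
    context = layers.Dot(axes=1)([weights, seq])  # (batch, 1, H)
    return layers.Flatten()(context)              # (batch, H)

# Same skeleton as the baseline, with the pooling head swapped in.
MAX_LEN, VOCAB, N_CLASSES = 128, 1000, 35  # toy sizes, not the real ones
inp = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(input_dim=VOCAB, output_dim=64)(inp)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
out = layers.Dense(N_CLASSES, activation='softmax')(slf_attn_pool(x))
model = Model(inp, out)
```

Using only built-in layers (`Softmax`, `Dot`, `Flatten`) keeps the attention head portable across Keras versions.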

20210907

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| bert-base-chinese + random | 0.5087 | 0.4765 | 0.4942 | 0.5102 | 0.4936 | 0.49664 |
| bert-base-chinese + w2v | 0.5152 | 0.4865 | 0.5079 | 0.5037 | 0.508 | 0.50426 |
| chinese-bert-wwm-ext + random | 0.5281 | 0.4854 | 0.512 | 0.5086 | 0.5159 | 0.51 |
| chinese-bert-wwm-ext + w2v | 0.51 | 0.4869 | 0.5064 | 0.505 | 0.5054 | 0.50274 |
| chinese-roberta-wwm-ext + random | 0.505 | 0.4799 | 0.5029 | 0.5073 | 0.4954 | 0.4981 |
| chinese-roberta-wwm-ext + w2v | 0.5057 | 0.4786 | 0.4993 | 0.4917 | 0.4903 | 0.49312 |

20210908

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| chinese-roberta-wwm-ext + counts | 0.5329 | 0.5101 | 0.5141 | 0.5412 | 0.5259 | 0.52484 |
| chinese-roberta-wwm-ext + dict_vocab2freq | 0.5246 | 0.5072 | 0.5091 | 0.5139 | 0.5103 | 0.51302 |
| chinese-roberta-wwm-ext + dict_vocab2freq_wiki_zh | 0.5155 | 0.5049 | 0.5075 | 0.5293 | 0.5091 | 0.51326 |
| chinese-bert-wwm-ext + counts | 0.5352 | 0.514 | n/a | 0.5304 | 0.5028 | 0.5206 |
| chinese-bert-wwm-ext + dict_vocab2freq_0819 | 0.5254 | n/a | n/a | 0.5151 | 0.5186 | 0.5197 |
| chinese-bert-wwm-ext + dict_vocab2freq_wiki_zh | 0.5217 | 0.5105 | n/a | 0.5274 | 0.5256 | 0.5213 |

(n/a = fold not run; avg is taken over the completed folds.)
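The `counts` / `dict_vocab2freq*` runs above map the desensitized IDs onto a real vocabulary so that a pretrained BERT can be reused. One common trick for desensitized corpora, sketched here on toy data, is frequency-rank alignment: the i-th most frequent masked ID is paired with the i-th most frequent real token. Both the corpus and the reference list below are hypothetical:

```python
from collections import Counter

# Count token frequencies over the desensitized corpus (same space-separated
# format as the competition text column).
corpus = ["3 7 3 12", "7 3 9", "12 9 7 3"]
masked_counts = Counter(tok for line in corpus for tok in line.split())

# Hypothetical reference vocabulary, already sorted by descending corpus
# frequency (in practice derived from e.g. a wiki_zh frequency dict).
ref_tokens_by_freq = ["的", "一", "是", "在"]

# Align by frequency rank: most frequent masked ID -> most frequent real token.
masked_by_freq = [tok for tok, _ in masked_counts.most_common()]
masked2real = dict(zip(masked_by_freq, ref_tokens_by_freq))
print(masked2real)  # the most frequent masked ID "3" maps to "的"
```

The resulting `masked2real` table rewrites the corpus into BERT's vocabulary before fine-tuning; ties in frequency make the mapping for rare IDs somewhat arbitrary.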


20210910

NEZHA, desensitized IDs + counts

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| NEZHA-Base + counts | 0.5191 | 0.5078 | 0.5178 | 0.5169 | 0.5469 | 0.5217 |
| NEZHA-Base-WWM + counts | 0.5494 | 0.5258 | 0.5091 | 0.5325 | 0.5253 | 0.52948 |


RoBERTa, desensitized IDs + continued pretraining + counts

| dev macro-F1 | fold 0 | fold 1 | fold 2 | fold 3 | fold 4 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| chinese-roberta-wwm-ext + pretrain + counts | 0.5427 | 0.5203 | 0.5237 | 0.527 | 0.5286 | 0.52846 |
