SpellGCN

---
layout: post
title: GCN
subtitle:
date: 2020-6-27
author: RJ
header-img:
catalog: true
tags:
    - project
---

GCN

Related papers

SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS

ABSTRACT

We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs.

We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions.

Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.

Our contributions are two-fold.

  • Firstly, we introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs, and show how it can be motivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011); the rule is reproduced after this list.
  • Secondly, we demonstrate how this form of a graph-based neural network model can be used for fast and scalable semi-supervised classification of nodes in a graph. Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.
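
For reference, the layer-wise propagation rule from the paper is

$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)}\right)$$

where $\tilde{A} = A + I_N$ is the adjacency matrix with added self-connections, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ is its degree matrix, $W^{(l)}$ is a layer-specific trainable weight matrix, $H^{(l)}$ is the matrix of activations (with $H^{(0)} = X$), and $\sigma(\cdot)$ is an activation function such as ReLU.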

SpellGCN

Basic concepts

Two ways to extract spatial features from a topological graph

The essential purpose of GCN is to extract spatial features from topological graphs. Is graph convolution the only way to achieve this? Of course not: the vertex domain (spatial domain) and the spectral domain are the two most mainstream routes to this goal.

(1) The vertex domain (spatial domain) approach is very intuitive. As the name suggests, to extract spatial features on a topological graph, find the neighbors adjacent to each vertex.

This raises two scientific questions: (a) by what criterion do we select the neighbors of a central vertex, i.e., how do we determine the receptive field? (b) once the receptive field is fixed, how do we process the features of varying numbers of neighbors? Designing an algorithm that answers (a) and (b) achieves the goal; a toy sketch follows below.
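
As an illustration of (a) and (b), here is a minimal sketch that fixes the receptive field to at most k one-hop neighbors and pools their features by a zero-padded mean; all names are hypothetical and this is not taken from any particular paper:

```python
import numpy as np

def spatial_features(adj, feats, k=4):
    """Toy vertex-domain feature extractor.

    adj:   [N, N] 0/1 adjacency matrix.
    feats: [N, d] node feature matrix.
    k:     receptive-field size; each neighbor list is truncated or
           implicitly zero-padded to k entries, so every vertex is
           processed uniformly (question b).
    """
    n, d = feats.shape
    out = np.zeros((n, d))
    for v in range(n):
        # Question (a): take 1-hop neighbors, capped at k.
        neighbors = np.flatnonzero(adj[v])[:k]
        if neighbors.size:
            # Question (b): fixed-size mean pooling (zero-padded, hence /k).
            out[v] = feats[neighbors].sum(axis=0) / k
    return out
```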

Recommended reading: Learning Convolutional Neural Networks for Graphs.

(Figure: illustration of vertex-domain spatial feature extraction.) The main drawbacks of this approach: (c) each vertex extracts a different set of neighbors, so the computation must be tailored to each vertex; (d) the extracted features may not work as well as convolution's. Readers who like this line of thinking can of course keep searching the literature; the charm of academia lies in letting a hundred schools of thought contend!

(2) The spectral domain is the theoretical foundation of GCN. The idea here is to use spectral graph theory to realize the convolution operation on topological graphs.

Viewed along the research timeline: first, scholars of GSP (graph signal processing) defined the Fourier transform on graphs, then defined convolution on graphs, and finally combined this with deep learning to propose the Graph Convolutional Network.

Spectral graph theory, briefly summarized, studies the properties of a graph via the eigenvalues and eigenvectors of its Laplacian matrix; the standard definitions are given below.
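
For a graph with adjacency matrix $A$ and degree matrix $D$ (with $D_{ii} = \sum_j A_{ij}$), the standard definitions are

$$L = D - A, \qquad L_{\mathrm{sym}} = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}, \qquad L = U \Lambda U^{\top}$$

where the columns of $U$ are the orthonormal eigenvectors of $L$ and $\Lambda$ is the diagonal matrix of its eigenvalues.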

[The relationship between the Laplacian matrix and the Laplace operator](https://zhuanlan.zhihu.com/p/85287578):

The Laplace operator computes the difference in gradients between a central point and its surroundings. When f(x,y) is perturbed, it may become one of its neighbors f(x+1,y), f(x-1,y), f(x,y+1), f(x,y-1); what the Laplace operator yields is the total gain (or total change) obtainable from a small perturbation of that point.
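
Written out, this is the standard discrete five-point Laplacian:

$$\Delta f(x,y) \approx f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)$$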

[从CNN到GCN的联系与区别——GCN从入门到精(fang)通(qi)](https://www.zhihu.com/search?type=content&q=GCN)

How do we carry the classical Fourier transform and convolution over, by analogy, to the Fourier transform and convolution on graphs?

Migrating the classical Fourier transform and convolution onto graphs comes down to one core step: replacing the eigenfunctions of the Laplace operator, $e^{-i\omega t}$, with the eigenvectors of the graph's Laplacian matrix.
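
Concretely, with $L = U\Lambda U^{\top}$, the graph Fourier transform of a signal $f \in \mathbb{R}^N$ and its inverse are

$$\hat{f}(\lambda_l) = \sum_{i=1}^{N} f(i)\, u_l(i) \;\;\Longleftrightarrow\;\; \hat{f} = U^{\top} f, \qquad f = U \hat{f}$$

and, by the convolution theorem, convolution with a filter $g$ on the graph becomes

$$f \ast_G g = U\left( (U^{\top} g) \odot (U^{\top} f) \right)$$

where $\odot$ denotes the element-wise product.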

SpellGCN

Unlike plain BERT: after BERT fine-tuning, the embedding vectors of the 4,755 frequently misspelled characters are modified by a graph convolutional network over pronunciation-similarity and shape-similarity graphs, adding the GCN output vector onto the original embedding; characters outside this set of 4,755 keep their original embedding parameters. Prediction then proceeds exactly as in BERT: these vectors are used to score each character in the vocabulary, and the character with the highest probability is taken as the final prediction.

Concretely, gcnLayer in run_spellgcn combines the BERT fine-tuned embedding matrix with the vectors produced by the GCN: the embeddings of the frequently misspelled characters have the GCN's pronunciation- and shape-similarity features added in, while all other characters keep their original embeddings. A sketch of this combination step follows.
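
A minimal sketch of the combination, assuming a hypothetical confusion_ids tensor (the vocabulary indices of the 4,755 characters) and a gcn_output tensor of shape [4755, hidden]; this illustrates the idea rather than reproducing the repository's exact gcnLayer:

```python
import tensorflow as tf  # TensorFlow 1.x, as in the rest of the post

def combine_embeddings(bert_embedding, gcn_output, confusion_ids):
    """Add GCN features onto the rows of the frequently misspelled characters.

    bert_embedding: [vocab_size, hidden] fine-tuned BERT embedding table.
    gcn_output:     [num_confusion, hidden] vectors from the similarity GCN.
    confusion_ids:  [num_confusion] vocabulary indices of those characters.
    """
    # Scatter the GCN vectors into a vocab-sized tensor of zeros, so rows
    # outside the confusion set receive no update.
    delta = tf.scatter_nd(
        indices=tf.expand_dims(confusion_ids, -1),
        updates=gcn_output,
        shape=tf.shape(bert_embedding))
    # Confusion-set rows: original embedding + GCN vector; others unchanged.
    return bert_embedding + delta
```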

Once this embedding is obtained, get_masked_lm_output is executed:

```python
get_masked_lm_output(
    bert_config, model.get_sequence_output(), gcn_embedding,
    masked_lm_positions, masked_lm_ids, masked_lm_weights)
```

This function takes as input the vectors obtained by passing the input sentence through the multi-layer Transformer encoder: model.get_sequence_output() returns the output of the final encoder layer.

input_tensor = gather_indexes(input_tensor, positions) extracts the representation vectors at the masked positions.

```python
input_tensor = tf.layers.dense(
    input_tensor,
    units=bert_config.hidden_size,
    activation=modeling.get_activation(bert_config.hidden_act),
    kernel_initializer=modeling.create_initializer(
        bert_config.initializer_range))
input_tensor = modeling.layer_norm(input_tensor)
```

The vectors at the masked positions are fed through a dense layer whose input and output are both 768-dimensional, which amounts to a feature recombination.

```python
logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
```

Here input_tensor holds the 768-dimensional feature vector of each masked character from the model's final layer; multiplying it by the embedding matrix (output_weights) yields the logits, one score per character in the vocabulary, from which the prediction probabilities follow.

So the full path is: the initial embedding (token embedding plus position embedding) is transformed by the multi-layer encoder, and the resulting vector is finally multiplied with embedding_table to obtain the logits, as made explicit below.
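
To spell out the last step (log_probs and predictions are illustrative names, not necessarily those used in run_spellgcn):

```python
# Normalize the logits over the vocabulary and take the best character.
log_probs = tf.nn.log_softmax(logits, axis=-1)  # [num_masked, vocab_size]
predictions = tf.argmax(log_probs, axis=-1)     # predicted character ids
```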

Optimization

Modify the relation files under gcn_graph.ty_xj; here we handle only pronunciation similarity, mapping each relation to a graph id as follows:

  • 同音同调 (same pronunciation, same tone) → 0
  • 同音异调 (same pronunciation, different tone) → 0
  • 近音同调 (similar pronunciation, same tone) → 1
  • 近音异调 (similar pronunciation, different tone) → 1

```
{'同音同调': '0', '同音异调': '0', '形近': '1'}
dict_keys(['形近', '同音同调', '同音异调', '近音异调', '近音同调', '同部首同笔画'])
1 4753 112687
0 4738 115561
```

Use FASPell's fine-tuned BERT model.

Open questions:

  • How can the final output be restricted to a small candidate vocabulary?
  • How can the embedding computation be reduced from 32-bit to 16-bit floats?
  • Can multi-head attention be pruned and fused to reduce computation?
  • What pre-training and fine-tuning strategy should be used?

The masking ratio should differ between ordinary training and error correction: correction sentences are harder, so the percentage of masked tokens in a sentence should be raised; anywhere from 25% to 35% should be feasible (see the sketch below).
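
In the terms of BERT's create_pretraining_data.py, this would mean raising masked_lm_prob above its usual default of 0.15; a hypothetical setting:

```python
# Hypothetical: a higher masking ratio for correction-style training.
masked_lm_prob = 0.30  # vs. 0.15 in standard BERT pre-training
max_predictions_per_seq = int(max_seq_length * masked_lm_prob)
```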

References

How to understand Graph Convolutional Network (GCN)? (Zhihu)

百度纠错 (Baidu text correction)

Experimental results:

```
correction:
  char_p = 730/1677 = 0.4353
  char_r = 730/1217 = 0.5998
  sent_p = 381/826  = 0.4613
  sent_r = 381/967  = 0.3940
  sent_a = 401/998  = 0.4018
detection:
  char_p = 874/1677 = 0.5212
  char_r = 874/1217 = 0.7182
  sent_p = 467/826  = 0.5654
  sent_r = 467/967  = 0.4829
  sent_a = 487/998  = 0.4880
```

```sh
#!/usr/bin/env bash
if [ $# -lt 6 ]; then
  echo 'Usage: sh run_spellgcn.sh <DATA_DIR> <JOB_NAME> <BERT_PATH> <SIGHAN13_PATH> <SIGHAN14_PATH> <SIGHAN15_PATH>'
  exit 0
fi

task_name=CSC
timestamp=$(date "+%Y-%m-%d-%H-%M-%S")
lr=5e-5
batch_size=32
num_epochs=10
max_seq_length=180
do_lower_case=true
graph_dir="../data/gcn_graph.ty_xj/"

mkdir -p log/
```

TRAIN

```sh
for i in $(seq 0 0)
do

output_dir=log/${2}_sighan13_${task_name}_${i}
log_dir=log/${2}_sighan13_${task_name}_${i}

if [ ! -d "${output_dir}/src" ]; then
  mkdir -p ${output_dir}/src
  cp $0 ${output_dir}/src
  cp ../*py ${output_dir}/src
fi

#sleep $i
echo "Start running ${task_name} task-${i}, log to ${output_dir}.log"
CUDA_VISIBLE_DEVICES=$i python ../run_spellgcn.py \
  --job_name=$2 \
  --task_name=${task_name} \
  --do_train=True \
  --do_eval=True \
  --do_predict=True \
  --data_dir=$1 \
  --vocab_file=$3/vocab.txt \
  --bert_config_file=$3/bert_config.json \
  --max_seq_length=${max_seq_length} \
  --max_predictions_per_seq=${max_seq_length} \
  --train_batch_size=${batch_size} \
  --learning_rate=${lr} \
  --num_train_epochs=${num_epochs} \
  --keep_checkpoint_max=10 \
  --random_seed=${i}000 \
  --init_checkpoint=$3/bert_model.ckpt \
  --graph_dir=${graph_dir} \
  --output_dir=${output_dir} > ${log_dir}.log 2>&1 &
done
wait
```

PREDICT & TEST

```sh
for i in $(seq 0 0)
do

output_dir=log/${2}_sighan13_${task_name}_${i}
log_dir=log/${2}_sighan13_${task_name}_${i}

CUDA_VISIBLE_DEVICES=$i python ../run_spellgcn.py \
  --job_name=$2 \
  --task_name=${task_name} \
  --do_train=False \
  --do_eval=False \
  --do_predict=True \
  --data_dir=$4 \
  --vocab_file=$3/vocab.txt \
  --bert_config_file=$3/bert_config.json \
  --max_seq_length=${max_seq_length} \
  --max_predictions_per_seq=${max_seq_length} \
  --train_batch_size=${batch_size} \
  --learning_rate=${lr} \
  --num_train_epochs=${num_epochs} \
  --keep_checkpoint_max=10 \
  --random_seed=${i}000 \
  --init_checkpoint=${output_dir} \
  --graph_dir=${graph_dir} \
  --output_dir=${output_dir} >> ${log_dir}.log 2>&1 &
done
wait
```

PREDICT & TEST

```sh
for i in $(seq 0 0)
do

output_dir=log/${2}_sighan13_${task_name}_${i}
log_dir=log/${2}_sighan14_${task_name}_${i}

CUDA_VISIBLE_DEVICES=$i python ../run_spellgcn.py \
  --job_name=$2 \
  --task_name=${task_name} \
  --do_train=False \
  --do_eval=False \
  --do_predict=True \
  --data_dir=$5 \
  --vocab_file=$3/vocab.txt \
  --bert_config_file=$3/bert_config.json \
  --max_seq_length=${max_seq_length} \
  --max_predictions_per_seq=${max_seq_length} \
  --train_batch_size=${batch_size} \
  --learning_rate=${lr} \
  --num_train_epochs=${num_epochs} \
  --keep_checkpoint_max=10 \
  --random_seed=${i}000 \
  --init_checkpoint=${output_dir} \
  --graph_dir=${graph_dir} \
  --output_dir=${output_dir} >> ${log_dir}.log 2>&1 &
done
wait
```

PREDICT & TEST

```sh
for i in $(seq 0 0)
do

output_dir=log/${2}_sighan13_${task_name}_${i}
log_dir=log/${2}_sighan15_${task_name}_${i}

CUDA_VISIBLE_DEVICES=$i python ../run_spellgcn.py \
  --job_name=$2 \
  --task_name=${task_name} \
  --do_train=False \
  --do_eval=False \
  --do_predict=True \
  --data_dir=$6 \
  --vocab_file=$3/vocab.txt \
  --bert_config_file=$3/bert_config.json \
  --max_seq_length=${max_seq_length} \
  --max_predictions_per_seq=${max_seq_length} \
  --train_batch_size=${batch_size} \
  --learning_rate=${lr} \
  --num_train_epochs=${num_epochs} \
  --keep_checkpoint_max=10 \
  --random_seed=${i}000 \
  --init_checkpoint=${output_dir} \
  --graph_dir=${graph_dir} \
  --output_dir=${output_dir} >> ${log_dir}.log 2>&1 &
done
```
