RSAN Source Code Walkthrough and How to Adapt It to a Chinese Dataset

This post walks through the RSAN source code, covering data preprocessing, data loading, training, and configuration. To use a Chinese dataset, the data preprocessing stage must be modified, for example by replacing the English tokenizer with the Chinese word segmenter LAC. Preprocessing converts the raw data into the model's input format, builds the ground-truth label sequences, and performs negative sampling over relations. For training, parameters in config.py are adjusted to fit the Chinese dataset.

Source code on GitHub:
https://github.com/Anery/RSAN
The datasets it uses, NYT and WebNLG, are both in English; handling a Chinese dataset requires some modifications.

data_prepare.py

This script performs the data preprocessing:
it converts the raw data into the model's input format and saves it as pkl files.

  1. Build the ground-truth label sequences
    A separate label sequence is built for each relation, and the head and tail entities of each triple are aligned to their positions in the original sentence.
  2. Negative sampling over relations
    Negative sampling over relations is used to improve efficiency: the relations the current sentence actually expresses are treated as positive samples and all other relations as negative. For each sentence a number of negative relations is sampled at random (the neg_num parameter in config.py); their ground-truth label sequences contain only O and X, as illustrated in the sketch below.
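A minimal sketch of this sampling scheme, using hypothetical helper names (sample_negative_relations, none_label_sequence) that are not in the repo:

import random

def sample_negative_relations(positive_rel_ids, num_relations, neg_num):
    # relations not expressed in the sentence are the negative candidates
    candidates = [r for r in range(num_relations) if r not in positive_rel_ids]
    return random.sample(candidates, min(neg_num, len(candidates)))

def none_label_sequence(sent_len, max_len):
    # negative samples are labeled 'O' on real tokens and 'X' on padding
    return ['O'] * sent_len + ['X'] * (max_len - sent_len)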
Imports:

import config
import json
import nltk  # English tokenizer; replaced by a Chinese segmenter such as LAC for Chinese data
import os
import numpy as np
import six
from six.moves import cPickle

def pickle_load(f):

def pickle_load(f):
    """
    Load a local pickle file and restore the Python object.
    :param f: the file object
    :return: the restored Python object
    """
    return cPickle.load(f)  # minimal plausible body; the repo's version may differ

def pickle_dump(obj, f):

def pickle_dump(obj, f):
    """
    Serialize a Python object and save it to a local file.
    :param obj: the object to store
    :param f: the file object
    :return: None
    """
    cPickle.dump(obj, f)  # minimal plausible body; the repo's version may differ

class DataPrepare(object):

def __init__(self, opt):

        self.opt = opt
        vocab = np.load(opt.input_vocab)
        # word -> id
        self.word2id = {j: i for i, j in enumerate(vocab)}
        # id -> word
        self.id2word = {i: j for i, j in enumerate(vocab)}
        # relation -> id
        self.rel2id = json.load(open(opt.input_rel2id, 'r'))
        # label -> id
        self.label2id = json.load(open(opt.input_label2id, 'r'))
        # POS tag -> id
        self.pos2id = json.load(open(opt.input_pos2id, 'r'))
        # character -> id
        self.char2id = json.load(open(opt.input_char2id, 'r'))
        self.train_data = self.read_json(opt.input_train)
        self.test_data = self.read_json(opt.input_test)
        self.dev_data = self.read_json(opt.input_dev)

def prepare(self):

This function converts the raw data (training, test, and dev sets) into the format the model needs and saves it (a sketch of this step follows the list below).
For the training set, see the process_train function, which returns:

  1. positive-sample features (word ids of the text, relation id, label ids, POS ids, and the character ids of each word)
  2. positive-sample lengths
  3. negative-sample features (word ids of the negative-sample text, relation id, label ids, POS ids, and the character ids of each word)
  4. negative-sample lengths

For the dev and test sets, see the process_dev_test function, which returns:

  1. sample features (word ids of the text, relation id, label ids, POS ids, and the character ids of each word)
  2. sample lengths
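A rough sketch of what prepare does, with illustrative output paths (the real file names come from config.py):

def prepare(self):
    # convert each split, then serialize the results with pickle_dump
    train_pos, train_pos_len, train_neg, train_neg_len = self.process_train(self.train_data)
    dev_features, dev_len = self.process_dev_test(self.dev_data)
    test_features, test_len = self.process_dev_test(self.test_data)
    with open('train.pkl', 'wb') as f:   # illustrative path
        pickle_dump((train_pos, train_pos_len, train_neg, train_neg_len), f)
    with open('dev.pkl', 'wb') as f:     # illustrative path
        pickle_dump((dev_features, dev_len), f)
    with open('test.pkl', 'wb') as f:    # illustrative path
        pickle_dump((test_features, test_len), f)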

def read_json(self, filename):

Reads a JSON file and returns a list.
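Assuming the NYT-style input files store one JSON object per line, a minimal version could look like:

def read_json(self, filename):
    data = []
    with open(filename, 'r', encoding='utf-8') as f:
        for line in f:
            data.append(json.loads(line))
    return data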

def find_pos(self, sent_list, word_list):

Example: with sent_list = ['Massachusetts', 'ASTON', 'MAGNA', 'Great', 'Barrington', ';', 'also', 'at', 'Bard', 'College', ',', 'Annandale-on-Hudson', ',', 'N.Y.', ',', 'July', '1-Aug', '.']
and word_list = ['Annandale-on-Hudson'], it returns the position of word_list within sent_list.
l: the length of word_list.
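A straightforward implementation of this span search (illustrative; the repo's version may differ in details):

def find_pos(self, sent_list, word_list):
    l = len(word_list)
    # slide a window of length l over the sentence and compare token-wise
    for i in range(len(sent_list) - l + 1):
        if sent_list[i:i + l] == word_list:
            return i, i + l
    return -1, -1

For the example above this returns (11, 12), since 'Annandale-on-Hudson' is token 11.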

def process_dev_test(self, dataset):

Converts the dev (and test) set; the procedure mirrors process_train.
Returns features ([sent_ids, pos_ids, sent_chars, triples]) and the corresponding sentence lengths:

features (abridged; trailing zero padding truncated):
[[[80985, 41049, 73686, 62039, 81447, 45322, 73686, 32328, 27182, 28460, 30751, 73686, 83254, 70948, 45210, 64462, 73686, 38584, 72477, 66546, 70948, 35382, 4440, 32356, 12605, 841, 0, 0, ...],  # sent_ids, padded to the maximum sentence length
[6, 14, 38, 14, 14, 14, 38, 6, 3, 14, 12, 38, 27, 3, 12, 12, 38, 1, 12, 27, 3, 12, 12, 6, 14, 39, 0, 0, ...],  # pos_ids
[[1, 49, 0, 0, 0, 0, 0, 0, 0, 0], [1, 53, 21, 21, 49, 19, 0, 0, 0, 0], [54, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...],  # sent_chars: per-word character ids, padded to 10 characters per word
[(('H', 24, 25), ('T', 1, 2), 10), (('H', 1, 2), ('T', 24, 25), 21)]]]  # triples
tail_words = nltk.word_tokenize(tail + ',')[:-1]

This is the tokenization call that must be changed for Chinese, as sketched below.
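A hedged sketch of the swap, assuming a lac = LAC(mode='seg') instance as shown further below; LAC returns a word list directly, so the "+ ','" workaround nltk needs for trailing punctuation is unnecessary:

tail_words = lac.run(tail)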

def process_train(self, dataset):

dataset: a list of annotated sentences, each with its entity mentions and triples (see the fields below).
sent_text is one line of the dataset: 'Massachusetts ASTON MAGNA Great Barrington ; also at Bard College , Annandale-on-Hudson , N.Y. , July 1-Aug .'

sent_words, sent_ids, pos_ids, sent_chars, cur_len = self.process_sentence(sent_text)

The process_sentence(sent_text) function produces the corresponding id sequences; see the explanation further below.
entities_ holds the entity mentions annotated in the text: [{'start': 1, 'label': 'ORGANIZATION', 'text': 'Bard College'}, {'start': 2, 'label': 'LOCATION', 'text': 'Annandale-on-Hudson'}]
entities holds the entities in the sentence: ['Bard College', 'Annandale-on-Hudson']
raw_triples_ holds all triples in the sentence: [{'em1Text': 'Annandale-on-Hudson', 'em2Text': 'Bard College', 'label': '/location/location/contains'}]
triples_list: all triples in the sentence: [('Annandale-on-Hudson', 'Bard College', '/location/location/contains')]
triples_: a single triple
triples holds the head-entity span, tail-entity span, and relation id: [(('H', 11, 12), ('T', 8, 10), 21)]
cur_relations_list: the ids of the relations in the sentence
head: a head entity: 'Annandale-on-Hudson'
relation: a relation: '/location/location/contains'
tail: a tail entity: 'Bard College'
head_words: the tokenized head entity: ['Annandale-on-Hudson']
For Chinese segmentation, LAC can be used here, for example:


# Load the segmentation model (requires Baidu's LAC package: pip install lac)
from LAC import LAC
lac = LAC(mode='seg')

# Single-sample input: a Unicode string
text = "LAC是个优秀的分词工具"
seg_result = lac.run(text)  # -> ['LAC', '是', '个', '优秀', '的', '分词', '工具']

head_pos: the position of head (token 11): range(11, 12)
tail_words: the tokenized tail entity: ['Bard', 'College']
tail_pos: the position of the tail entity: range(8, 10)
h_chunk: the head-entity tag with its [start, end) span, t_chunk analogously: ('H', 11, 12)
none_label_ids (length 120): the id sequence of none_label, i.e. the id of 'O' (8) for the 18 sentence positions followed by the id of 'X' (9) for the padding: [8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, ...]
none_label (length 120): 'O' for the first 18 positions (the sentence length) and 'X' for the rest: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'X', 'X', 'X', ...]
cur_label_ids: the ids of the BIOES labels: [8, 8, 8, 8, 8, 8, 8, 8, 3, 7, 8, 0, 8, 8, 8, 8, 8, 8, 9, 9, 9, ...]
all_labels (each sequence of length 120): the label sequence for each relation id in the sentence: {21: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-T', 'E-T', 'O', 'S-H', 'O', 'O', 'O', 'O', 'O', 'O', 'X', 'X', 'X', ...]}
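The BIOES tagging of an entity span can be sketched with an illustrative helper (not the repo's exact code):

def tag_span(labels, start, end, suffix):
    # suffix is 'H' for the head entity, 'T' for the tail entity
    if end - start == 1:
        labels[start] = 'S-' + suffix        # single-token entity
    else:
        labels[start] = 'B-' + suffix        # beginning of the span
        for i in range(start + 1, end - 1):
            labels[i] = 'I-' + suffix        # inside the span
        labels[end - 1] = 'E-' + suffix      # end of the span

Applied to this example, tag_span(labels, 8, 10, 'T') writes 'B-T' and 'E-T' at positions 8 and 9, and tag_span(labels, 11, 12, 'H') writes 'S-H' at position 11, matching all_labels above.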
rel_id: the id of the relation: 21
positive_feature: the positive-sample feature ([sent_ids, r_id, cur_label_ids, pos_ids, sent_chars]), shown abridged below with the zero padding truncated:

[[[53751, 55927, 58554, 70633, 20905, 30264, 3204, 57238, 56198, 118, 73686, 69757, 73686, 68525, 73686, 49182, 21276, 841, 0, 0, ...],  # sent_ids
 21,  # r_id
 [8, 8, 8, 8, 8, 8, 8, 8, 3, 7, 8, 0, 8, 8, 8, 8, 8, 8, 9, 9, ...],  # BIOES label ids
 [14, 14, 14, 14, 14, 1, 20, 6, 14, 14, 38, 14, 38, 14, 38, 14, 2, 39, 0, 0, ...],  # pos_ids
 [[1, 33, 19, 19, 33, 23, 51, 53, 19, 21], [1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 0, 0, 0, 0, 0], ...]]  # sent_chars
The negative-sample feature has the same layout but uses none_label_ids (only 'O'/'X') in place of the BIOES label ids.
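Putting the pieces together, the per-sentence feature assembly can be sketched as follows (a hypothetical synthesis, not the repo's exact code; sample_negative_relations is the helper sketched earlier):

def build_features(sent_ids, pos_ids, sent_chars, all_label_ids, none_label_ids,
                   num_relations, neg_num):
    # all_label_ids maps each relation id present in the sentence to its BIOES label ids
    positive, negative = [], []
    for rel_id, label_ids in all_label_ids.items():
        positive.append([sent_ids, rel_id, label_ids, pos_ids, sent_chars])
    for rel_id in sample_negative_relations(set(all_label_ids), num_relations, neg_num):
        negative.append([sent_ids, rel_id, none_label_ids, pos_ids, sent_chars])
    return positive, negative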