Building a text dataset (tokenize, vocab)


Notes taken from Mu Li's (李沐) course.

Building a text dataset

A text dataset can be viewed as a sequence of words or a sequence of characters.
Building one generally involves the following steps.

  1. Clean the text (e.g., remove garbled characters and punctuation; though in many cases punctuation is kept).
  2. Load the text into memory.
  3. Split the text into words or characters.
  4. Build a vocabulary and the corresponding indices.

Text cleaning and reading

Here we use The Time Machine, the novel mentioned in the course.
Baidu Netdisk link
Extraction code (提取码): pypt

  1. Read the text, keeping only letters and converting everything to lowercase.
import re

text_path = './timemachine.txt'


def read_time_machine():
    """Read the novel into a list of cleaned, lowercased lines."""
    with open(text_path, 'r') as f:
        lines = f.readlines()
    # Replace every run of non-letter characters with a single space
    return [re.sub(r'[^a-zA-Z]+', ' ', line).strip().lower() for line in lines]


lines = read_time_machine()
print('len=', len(lines), '\n', lines[0], '\n', lines[9])

The output:

len= 3221 
the time machine by h g wells 
was expounding a recondite matter to us his grey eyes shone and
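To see exactly what the cleaning regex does, here is a quick check on a made-up input string (chosen only for illustration): every run of non-letter characters collapses into a single space before stripping and lowercasing.

import re

# "It's 1895!" -> "It s " -> "it s" after strip() and lower()
print(re.sub(r'[^a-zA-Z]+', ' ', "It's 1895!").strip().lower())  # it s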

tokenize

Split the text into words or characters.

def tokenize(lines, token='word'):
    """Split each line into word tokens or character tokens."""
    if token == 'word':
        return [line.split() for line in lines]
    elif token == 'char':
        return [list(line) for line in lines]
    else:
        print('error: unknown token type ' + token)


tokens = tokenize(lines)
print(tokens[0])

The output:

['the', 'time', 'machine', 'by', 'h', 'g', 'wells']
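Passing token='char' instead yields character-level tokens (note that spaces are kept as characters); for example, the first nine characters of the first line:

char_tokens = tokenize(lines, token='char')
print(char_tokens[0][:9])
# ['t', 'h', 'e', ' ', 't', 'i', 'm', 'e', ' ']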

Defining a counting function

Count the number of occurrences of each token.

import collections


def count_corpus(tokens):
    """Count token frequencies; accepts a flat token list or a list of lines."""
    if len(tokens) == 0:
        tokens = []
    elif isinstance(tokens[0], list):
        # Flatten a 2D list of token lines into one flat token list
        tokens = [token for line in tokens for token in line]
    return collections.Counter(tokens)
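A quick sanity check (exact counts depend on your copy of the file, so treat the output as indicative): the most frequent word tokens are common stop words.

counter = count_corpus(tokens)
# Top entries are stop words such as 'the', 'i', 'and', ...
print(counter.most_common(5))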

Defining the Vocab class

It builds the idx_to_token and token_to_idx mappings.

class Vocab:
    def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
        if not tokens:
            tokens = []
        if not reserved_tokens:
            reserved_tokens = []
        counter = count_corpus(tokens)
        self.token_freq = sorted(counter.items(), key=lambda x: x[1], reverse=True)
        # Index 0 is reserved for the unknown token 'UNK'
        self.unk, uniq_tokens = 0, ['UNK'] + reserved_tokens
        uniq_tokens += [token for token, freq in self.token_freq
                        if freq > min_freq and token not in uniq_tokens]
        self.idx_to_token, self.token_to_idx = [], {}
        for token in uniq_tokens:
            self.idx_to_token.append(token)
            self.token_to_idx[token] = len(self.idx_to_token) - 1

    def __getitem__(self, tokens):
        # Map a token (or a list of tokens) to indices; unknown tokens map to unk
        if not isinstance(tokens, (list, tuple)):
            return self.token_to_idx.get(tokens, self.unk)
        return [self.__getitem__(token) for token in tokens]
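A minimal usage sketch (indices beyond 0 depend on token frequencies in your copy of the text, so the printed values are only indicative):

vocab = Vocab(tokens)
# 'UNK' always maps to index 0; frequent words get small indices
print(list(vocab.token_to_idx.items())[:5])
# Convert one line of tokens into a list of indices
print(tokens[0], '->', vocab[tokens[0]])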
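Putting the pieces together: the sketch below is my own wrapper, modeled on the load_corpus_time_machine helper from the course (the name load_corpus is hypothetical). It cleans, tokenizes, and indexes the text in one call.

def load_corpus(token='char'):
    """Return the text as one flat list of token indices plus its Vocab."""
    lines = read_time_machine()
    tokens = tokenize(lines, token)
    vocab = Vocab(tokens)
    # Flatten all lines into a single index sequence
    corpus = [vocab[tok] for line in tokens for tok in line]
    return corpus, vocab


corpus, vocab = load_corpus()
print(len(corpus), len(vocab.idx_to_token))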
