Text preprocessing with the Keras Tokenizer in TensorFlow 2.0

  • tokenizer = Tokenizer(num_words=max_words) # only keep the max_words most frequent words

  • tokenizer.fit_on_texts(texts) # build the token dictionary from a list of documents; texts is a list whose elements are documents

  • sequences = tokenizer.texts_to_sequences(texts) # convert each document into a list of word indices; the result is a list of len(texts) lists, one per document, each as long as that document

  • word_index = tokenizer.word_index # a dict mapping every word to its integer index, starting from 1; print('Found %s unique tokens.' % len(word_index))

  • data = pad_sequences(sequences, maxlen=maxlen) # returns a 2D array in which every row has length maxlen; longer documents are truncated from the front by default, so only their last maxlen words are kept; labels = np.asarray(labels)

  • print('shape of labels: ', labels.shape) # for the corpus used here, both the training set and the test set contain 25,000 samples (see the end-to-end sketch after this list)
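Putting these calls together, here is a minimal end-to-end sketch of the workflow summarized above. The toy texts, labels, max_words, and maxlen values are placeholders chosen purely for illustration, not part of the original example.

import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Placeholder corpus and labels, just to make each step runnable.
texts = ['the movie was great', 'the movie was terrible']
labels = [1, 0]
max_words = 10000   # keep only the 10,000 most frequent words
maxlen = 20         # pad/truncate every sequence to 20 tokens

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)                    # build the vocabulary
sequences = tokenizer.texts_to_sequences(texts)  # documents -> lists of word indices
word_index = tokenizer.word_index                # word -> index, starting at 1
print('Found %s unique tokens.' % len(word_index))

data = pad_sequences(sequences, maxlen=maxlen)   # 2D array of shape (num_docs, maxlen)
labels = np.asarray(labels)
print('shape of data: ', data.shape)
print('shape of labels: ', labels.shape)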

1. Use Tokenizer to tokenize the words in a text

Tokenizer strips punctuation from sentences, ignores case, and assigns every word an integer index, which together form the text's vocabulary. num_words means that only the num_words most frequent words are kept when texts are later converted to sequences.

from tensorflow.keras.preprocessing.text import Tokenizer

sentences = [
    'i love my dog',
    'I, love my cat',
    'You love my dog!'
]

tokenizer = Tokenizer(num_words = 100)
# learn the vocabulary from the texts
tokenizer.fit_on_texts(sentences)
# inspect the dict that maps each word to its index
word_index = tokenizer.word_index
print(word_index)

{'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}
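A detail worth knowing: word_index always contains every word seen by fit_on_texts; num_words only takes effect when texts are converted to sequences. The following sketch uses a deliberately tiny num_words (a value chosen here just to show the effect, not part of the original example):

small_tokenizer = Tokenizer(num_words=3)
small_tokenizer.fit_on_texts(sentences)

# word_index still lists all six words, regardless of num_words.
print(small_tokenizer.word_index)
# {'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}

# texts_to_sequences keeps only indices below num_words (here 1 and 2).
print(small_tokenizer.texts_to_sequences(sentences))
# [[1, 2], [1, 2], [1, 2]]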

2. Convert the sentences to sequences of word indices

sequences = tokenizer.texts_to_sequences(sentences)
print(sequences)

[[3, 1, 2, 4], [3, 1, 2, 5], [6, 1, 2, 4]]
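If you need to go the other way, the tokenizer can also map index sequences back to text (lower-cased and with punctuation stripped, since that information is lost during tokenization); a quick sketch:

# Reverse mapping: index sequences back to space-joined words.
print(tokenizer.sequences_to_texts(sequences))
# ['i love my dog', 'i love my cat', 'you love my dog']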

3. Converting new text drops any word that is not in the vocabulary

test_data=[
  'i really love my dog',
  'my dog loves my manatee'
]
test_seq = tokenizer.texts_to_sequences(test_data)
print(test_seq)

[[3, 1, 2, 4], [2, 4, 2]]
This happens because really, loves, and manatee never appeared in the texts used to build the vocabulary.
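If you want to see in advance which words will be dropped, you can check membership in word_index yourself; a minimal sketch (the simple lower()/split() tokenization below is an approximation that ignores punctuation handling):

# Words absent from word_index are silently dropped by texts_to_sequences.
for sentence in test_data:
    missing = [w for w in sentence.lower().split() if w not in tokenizer.word_index]
    print(sentence, '-> missing:', missing)
# i really love my dog -> missing: ['really']
# my dog loves my manatee -> missing: ['loves', 'manatee']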

4. Mark out-of-vocabulary words with oov_token

from tensorflow.keras.preprocessing.text import Tokenizer

sentences = [
    'i love my dog',
    'I, love my cat',
    'You love my dog!'
]

tokenizer = Tokenizer(num_words = 100, oov_token="<>")
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
print(word_index)


test_data=[
  'i really love my dog',
  'my dog loves my manatee'
]
test_seq = tokenizer.texts_to_sequences(test_data)
print(test_seq)

{'<>': 1, 'love': 2, 'my': 3, 'i': 4, 'dog': 5, 'cat': 6, 'you': 7}

[[4, 1, 2, 3, 5], [3, 5, 1, 3, 1]]
The words that were previously dropped are now replaced with 1, the index assigned to the oov_token.
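To confirm that those 1s really are the OOV marker, the sequences can be decoded with index_word, the reverse of word_index; a quick sketch:

# index_word maps index -> word, so 1 -> '<>' here.
decoded = [' '.join(tokenizer.index_word[i] for i in seq) for seq in test_seq]
print(decoded)
# ['i <> love my dog', 'my dog <> my <>']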

5. To feed the network sequences of equal length, pad them with pad_sequences

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = [
    'I love my dog',
    'I love my cat',
    'You love my dog!',
    'Do you think my dog is amazing?'
]

tokenizer = Tokenizer(num_words = 100, oov_token="<>")
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index

sequences = tokenizer.texts_to_sequences(sentences)

padded = pad_sequences(sequences, maxlen=5)
print("\nWord Index = " , word_index)
print("\nSequences = " , sequences)
print("\nPadded Sequences:")
print(padded)

Word Index = {'<>': 1, 'my': 2, 'love': 3, 'dog': 4, 'i': 5, 'you': 6, 'cat': 7, 'do': 8, 'think': 9, 'is': 10, 'amazing': 11}

Sequences = [[5, 3, 2, 4], [5, 3, 2, 7], [6, 3, 2, 4], [8, 6, 9, 2, 4, 10, 11]]

Padded Sequences:
[[ 0 5 3 2 4]
[ 0 5 3 2 7]
[ 0 6 3 2 4]
[ 9 2 4 10 11]]
Shorter sentences are padded with 0 at the front (padding='pre' is the default) so that every sequence has length maxlen=5; the last sentence, which is longer than maxlen, is truncated instead.
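If maxlen is omitted, pad_sequences instead pads every sequence to the length of the longest one; a quick comparison using the same sequences:

# Without maxlen, rows are padded (with zeros at the front by default) to the
# length of the longest sentence, which has 7 tokens here.
padded_full = pad_sequences(sequences)
print(padded_full.shape)   # (4, 7)
print(padded_full[0])      # [0 0 0 5 3 2 4]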

6. Setting padding='post' pads zeros at the end, and maxlen truncates longer sequences

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = [
    'I love my dog',
    'I love my cat',
    'You love my dog!',
    'Do you think my dog is amazing?'
]

tokenizer = Tokenizer(num_words = 100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)

padded = pad_sequences(sequences, padding='post',maxlen=6)
print("\nSequences = " , sequences)
print("\nPadded Sequences:")
print(padded)

Sequences = [[5, 3, 2, 4], [5, 3, 2, 7], [6, 3, 2, 4], [8, 6, 9, 2, 4, 10, 11]]

Padded Sequences:
[[ 5 3 2 4 0 0]
[ 5 3 2 7 0 0]
[ 6 3 2 4 0 0]
[ 6 9 2 4 10 11]]

The first three sentences are padded with 0 at the end; because maxlen=6, the last sentence is truncated and loses its first word, Do (truncation still happens at the front by default).
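The truncation direction is also configurable: truncating='post' drops words from the end instead of the beginning; a quick sketch using the same sequences:

# truncating='post' keeps the first maxlen words and drops the rest.
padded_post = pad_sequences(sequences, padding='post', truncating='post', maxlen=6)
print(padded_post[3])   # [ 8  6  9  2  4 10] -- 'amazing' is dropped instead of 'do'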

7. Example: processing JSON data with TensorFlow and serializing the text

The data format is as follows: each record in sarcasm.json contains an article_link, a headline, and an is_sarcastic label.

!wget --no-check-certificate \
    https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
    -O /tmp/sarcasm.json
  
import json

# Load the JSON file into a list of dicts.
with open("/tmp/sarcasm.json", 'r') as f:
    datastore = json.load(f)

# Collect the headline text, the sarcasm label, and the article URL from each record.
sentences = []
labels = []
urls = []
for item in datastore:
    sentences.append(item['headline'])
    labels.append(item['is_sarcastic'])
    urls.append(item['article_link'])



from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(oov_token="<>")
tokenizer.fit_on_texts(sentences)

word_index = tokenizer.word_index
print(len(word_index))
print(word_index)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, padding='post')
print(padded[0])
print(padded.shape)
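As in the summary at the top, the label list can be turned into a NumPy array alongside the padded sequences so that both are ready to feed to a model; a minimal sketch (the exact shapes depend on the downloaded dataset):

import numpy as np

# Convert the label list to an array that lines up row-for-row with padded.
labels = np.asarray(labels)
print('shape of data: ', padded.shape)    # (number of headlines, longest headline length)
print('shape of labels: ', labels.shape)  # (number of headlines,)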

