nlpaug: the most complete collection of text augmentation methods (with runnable examples!)

Table of Contents

Environment Setup

Character Augmenter

OCR Augmenter

Substitute characters with predefined OCR errors

Keyboard Augmenter

Substitute characters by keyboard distance

Random Augmenter

Insert characters randomly

Substitute characters randomly

Swap characters randomly

Delete characters randomly

Word Augmenter

Spelling Augmenter

Substitute words with a dictionary of common misspellings

Word Embeddings Augmenter

Insert words randomly by word-embedding similarity

Substitute words by word-embedding similarity

TF-IDF Augmenter

Insert words by TF-IDF

Substitute words by TF-IDF

Contextual Word Embeddings Augmenter

Insert words by contextual word embeddings (BERT, DistilBERT, RoBERTa, or XLNet)

Substitute words by contextual word embeddings (BERT, DistilBERT, RoBERTa, or XLNet)

Synonym Augmenter

Substitute words with WordNet synonyms

Antonym Augmenter

Substitute words with antonyms

Random Word Augmenter

Swap words randomly

Delete words randomly

Delete a random contiguous span of words (crop)

Split Augmenter

Split a word into two tokens randomly

Back Translation Augmenter

Reserved Word Augmenter

Sentence Augmenter

Insert sentences by contextual word embeddings (GPT2 or XLNet)

Abstractive Summarization Augmenter


Environment Setup

Set the model path:

import os
os.environ["MODEL_DIR"] = '../model'

Import the libraries and run a quick test:

import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
import nlpaug.flow as nafc

from nlpaug.util import Action

text = 'The quick brown fox jumps over the lazy dog .'
print(text)

# Output: The quick brown fox jumps over the lazy dog .
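The nlpaug.flow module imported above (as nafc) can chain several augmenters into one pipeline via nafc.Sequential. Since running nlpaug requires downloading models, here is a dependency-free sketch of the same sequential-pipeline pattern; the two toy augmenters below are invented for illustration and are not part of nlpaug:

```python
import random

def swap_chars(text, rng):
    """Toy augmenter: swap two adjacent characters inside one random word."""
    words = text.split()
    i = rng.randrange(len(words))
    w = words[i]
    if len(w) > 3:
        j = rng.randrange(1, len(w) - 2)
        words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return ' '.join(words)

def delete_word(text, rng):
    """Toy augmenter: delete one random word."""
    words = text.split()
    del words[rng.randrange(len(words))]
    return ' '.join(words)

def sequential(augmenters, text, seed=0):
    """Apply augmenters one after another, like nlpaug.flow.Sequential."""
    rng = random.Random(seed)
    for aug in augmenters:
        text = aug(text, rng)
    return text

print(sequential([swap_chars, delete_word], 'The quick brown fox jumps over the lazy dog'))
```

With the real library, the equivalent is roughly nafc.Sequential([nac.RandomCharAug(action="swap"), naw.RandomWordAug()]) followed by .augment(text).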

The following model names can be passed as model_path:

'bert-base-uncased', 'bert-base-cased', 'distilbert-base-uncased', 'roberta-base', 'distilroberta-base', 'facebook/bart-base', 'squeezebert/squeezebert-uncased'.

Character Augmenter

Augment data at the character level. Typical use cases are image-to-text and chatbots. When recognizing text in images, an optical character recognition (OCR) model is used, and OCR introduces errors such as confusing "o" with "0"; the OCR augmenter (OcrAug) simulates these errors for augmentation. For chatbots, typos still slip through even though most applications ship with word correction, so the keyboard augmenter (KeyboardAug) was introduced to simulate that kind of error.
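To make the OCR scenario concrete, the kind of corruption OcrAug simulates can be sketched without nlpaug as a simple character-confusion map; the mapping below is an illustrative subset invented for this sketch, not nlpaug's actual table:

```python
import random

# A few visually confusable pairs an OCR system might mix up
# (illustrative only; nlpaug ships its own, larger mapping).
OCR_CONFUSIONS = {'o': '0', '0': 'o', 'l': '1', 'i': '1', 's': '5', 'b': '6', 'z': '2'}

def ocr_noise(text, p=0.3, seed=42):
    """Replace confusable characters with OCR look-alikes with probability p."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in OCR_CONFUSIONS and rng.random() < p:
            out.append(OCR_CONFUSIONS[ch.lower()])
        else:
            out.append(ch)
    return ''.join(out)

print(ocr_noise('The quick brown fox jumps over the lazy dog'))
```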

OCR Augmenter

Substitute characters with predefined OCR errors

aug = nac.OcrAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['The quick bkown fox jumps ovek the lazy dog .', 'The quick 6rown fox jumps ovek the lazy dog .', 'The quick brown f0x jomps over the la2y dog .']
'''

Keyboard Augmenter

Substitute characters by keyboard distance

aug = nac.KeyboardAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown Gox juJps ocer the lazy dog .
'''

Random Augmenter

Insert characters randomly

aug = nac.RandomCharAug(action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
T3he quicNk @brown fEox juamps $over th6e la1zy d*og
'''

Substitute characters randomly

aug = nac.RandomCharAug(action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
ThN qDick brow0 foB jumks oveE t+e laz6 dBg
'''

Swap characters randomly

aug = nac.RandomCharAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
Hte quikc borwn fxo jupms ovre teh lzay dgo
'''

Delete characters randomly

aug = nac.RandomCharAug(action="delete")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
Te quic rown fx jump ver he laz og
'''

Word Augmenter

Besides character-level augmentation, the word level matters just as much. Similar words can be inserted or substituted using word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), fastText (Joulin et al., 2016), BERT (Devlin et al., 2018), and WordNet. The word2vec, GloVe, and fastText augmenters use word embeddings to find the words most similar to the original and substitute them; the BERT augmenter instead uses a language model to predict plausible target words; the WordNet augmenter looks up similar words in the WordNet lexical database.
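The embedding-based substitution described above boils down to a nearest-neighbour search in vector space. A self-contained sketch with tiny hand-made vectors (the numbers and vocabulary are invented for illustration; real augmenters use word2vec/GloVe/fastText vectors):

```python
import math

# Toy embeddings, invented for illustration.
EMB = {
    'quick': [0.9, 0.1, 0.0],
    'fast':  [0.85, 0.15, 0.05],
    'slow':  [-0.8, 0.2, 0.1],
    'dog':   [0.1, 0.9, 0.2],
    'puppy': [0.15, 0.85, 0.25],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def most_similar(word):
    """Return the nearest neighbour of `word` in the toy embedding space."""
    return max((w for w in EMB if w != word), key=lambda w: cosine(EMB[word], EMB[w]))

def substitute(text):
    """Replace each word that has an embedding with its nearest neighbour."""
    return ' '.join(most_similar(w) if w in EMB else w for w in text.split())

print(substitute('the quick dog'))  # → 'the fast puppy'
```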

Spelling Augmenter

Substitute words with a dictionary of common misspellings

aug = naw.SpellingAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['Tha qchick brown fox jumps ower the lazy dog.', 'Their quick borwn fox jumps over tge lazy dog.', 'The qchick brown fox jumps ower the lazy dod.']
Augmented Texts (another run):
['They quick browb fox jumps over se lazy dog.', 'The quikly brown fox jumps over tge lazy dod.', 'Tha quick brown fox jumps ower their lazy dog.']

Unlike the OCR augmenter, which corrupts individual characters, the spelling
augmenter replaces whole words with common human misspellings drawn from a
spelling-error dictionary.
'''

Word Embeddings Augmenter

Insert words randomly by word-embedding similarity

# model_type: word2vec, glove or fasttext
model_dir = os.environ["MODEL_DIR"] + '/'
aug = naw.WordEmbsAug(
    model_type='word2vec', model_path=model_dir + 'GoogleNews-vectors-negative300.bin',
    action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The quick brown fox jumps Alzeari over the lazy Superintendents dog
'''

Substitute words by word-embedding similarity

# model_type: word2vec, glove or fasttext
model_dir = os.environ["MODEL_DIR"] + '/'
aug = naw.WordEmbsAug(
    model_type='word2vec', model_path=model_dir + 'GoogleNews-vectors-negative300.bin',
    action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The easy brown fox jumps around the lazy dog
'''

TF-IDF Augmenter

Insert words by TF-IDF

aug = naw.TfIdfAug(
    model_path=os.environ.get("MODEL_DIR"),
    action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
sinks The quick brown fox jumps over the lazy Sidney dog
'''

Substitute words by TF-IDF

aug = naw.TfIdfAug(
    model_path=os.environ.get("MODEL_DIR"),
    action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The quick brown fox Baked over the polygraphy dog
'''
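Under the hood, TfIdfAug scores words by TF-IDF so that, in the spirit of the UDA paper it follows, low-information words are the preferred targets for change. A dependency-free sketch of that scoring on a toy corpus (the corpus and helper are invented for illustration; nlpaug instead loads a pre-trained TF-IDF model from model_path):

```python
import math

# Toy corpus, invented for illustration.
docs = [
    'the quick brown fox jumps over the lazy dog',
    'the lazy dog sleeps all day',
    'a quick fix for a slow day',
]

def tf_idf(term, doc, corpus):
    """Plain TF-IDF: term frequency in doc times inverse document frequency."""
    words = doc.split()
    tf = words.count(term) / len(words)
    df = sum(1 for d in corpus if term in d.split())
    return tf * math.log(len(corpus) / df)

doc = docs[0]
scores = {w: tf_idf(w, doc, docs) for w in set(doc.split())}
# The three lowest-scoring (least informative) words are the likeliest targets.
targets = sorted(scores, key=scores.get)[:3]
print(sorted(targets))  # → ['dog', 'lazy', 'quick']
```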

Contextual Word Embeddings Augmenter

Insert words by contextual word embeddings (BERT, DistilBERT, RoBERTa, or XLNet)

aug = naw.ContextualWordEmbsAug(
    model_path='bert-base-uncased', action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
even the quick brown fox usually jumps over the lazy dog
'''

Substitute words by contextual word embeddings (BERT, DistilBERT, RoBERTa, or XLNet)

aug = naw.ContextualWordEmbsAug(
    model_path='bert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)


'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
little quick brown fox jumps over the lazy dog
'''

Synonym Augmenter

Substitute words with WordNet synonyms

aug = naw.SynonymAug(aug_src='wordnet')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The speedy brown fox jumps complete the lazy dog .
'''

Antonym Augmenter

Substitute words with antonyms

aug = naw.AntonymAug()
_text = 'Good boy'
augmented_text = aug.augment(_text)
print("Original:")
print(_text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
Good boy
Augmented Text:
Good daughter
'''

Random Word Augmenter

Swap words randomly

aug = naw.RandomWordAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
Quick the brown fox jumps over the lazy dog .
'''

Delete words randomly

aug = naw.RandomWordAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The brown jumps over the lazy dog
'''

Delete a random contiguous span of words (crop)

aug = naw.RandomWordAug(action='crop')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps dog .
'''

Split Augmenter

Split a word into two tokens randomly

aug = naw.SplitAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The q uick b rown fox jumps o ver the lazy dog .
'''

Back Translation Augmenter

import nlpaug.augmenter.word as naw

text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
    from_model_name='facebook/wmt19-en-de', 
    to_model_name='facebook/wmt19-de-en'
)
back_translation_aug.augment(text)

# Sample output:
'The speedy brown fox jumped over the lazy dog'

############
# Load models from local path
import nlpaug.augmenter.word as naw

from_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.en-de')
to_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.de-en')

text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
    from_model_name=from_model_dir, from_model_checkpt='model1.pt',
    to_model_name=to_model_dir, to_model_checkpt='model1.pt', 
    is_load_from_github=False)
back_translation_aug.augment(text)

# Sample output:
'The speedy brown fox jumped over the lazy dog'

Reserved Word Augmenter

import nlpaug.augmenter.word as naw

text = 'Fwd: Mail for solution'
reserved_tokens = [
    ['FW', 'Fwd', 'F/W', 'Forward'],
]
reserved_aug = naw.ReservedAug(reserved_tokens=reserved_tokens)
augmented_text = reserved_aug.augment(text)

print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)

Sentence Augmenter

Contextual word embeddings for sentences

Insert sentences by contextual word embeddings (GPT2 or XLNet)

# model_path: xlnet-base-cased or gpt2
aug = nas.ContextualWordEmbsForSentenceAug(model_path='xlnet-base-cased')  # or model_path='gpt2'
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)

'''
Sample output:
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['The quick brown fox jumps over the lazy dog . A terrible , messy split second presents itself to the heart - which is we lose our heart.', 'The quick brown fox jumps over the lazy dog . Cast from the heart - the above flash is insight to the heart.', 'The quick brown fox jumps over the lazy dog . Give two mom s time to share some affection over this heart shaped version of Scott.']
'''

Abstractive Summarization Augmenter

article = """
The history of natural language processing (NLP) generally started in the 1950s, although work can be 
found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and 
Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. 
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian 
sentences into English. The authors claimed that within three or five years, machine translation would
be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, 
which found that ten-year-long research had failed to fulfill the expectations, funding for machine 
translation was dramatically reduced. Little further research in machine translation was conducted 
until the late 1980s when the first statistical machine translation systems were developed.
"""

aug = nas.AbstSummAug(model_path='t5-base', num_beam=3)
augmented_text = aug.augment(article)
print("Original:")
print(article)
print("Augmented Text:")
print(augmented_text)
'''
Sample output:
Original:

The history of natural language processing (NLP) generally started in the 1950s, although work can be 
found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and 
Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. 
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian 
sentences into English. The authors claimed that within three or five years, machine translation would
be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, 
which found that ten-year-long research had failed to fulfill the expectations, funding for machine 
translation was dramatically reduced. Little further research in machine translation was conducted 
until the late 1980s when the first statistical machine translation systems were developed.

Augmented Text:
the history of natural language processing (NLP) generally started in the 1950s. work can be found from earlier periods, such as the Georgetown experiment in 1954. little further research in machine translation was conducted until the late 1980s
'''
