[Project Training 6] Installing and Using NLTK

Installing NLTK

1. Install Python

First, make sure Python is installed. You can download and install the latest version from the official Python website.

2. Install NLTK with pip

Open a command line or terminal and run the following command to install NLTK:

pip install nltk
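
To confirm that the installation succeeded, you can print the installed version (the exact version number will vary):

import nltk
print(nltk.__version__)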

Using NLTK

1. Import NLTK

In your Python script or interactive interpreter, import NLTK:

import nltk

2. Download NLTK data

NLTK relies on many downloadable data resources, such as corpora and lexicons. You can open the NLTK data downloader with the following command:

nltk.download()

This opens a graphical interface in which you can select the resources to download. You can also download specific resources from the command line, for example:

nltk.download('punkt') 
nltk.download('averaged_perceptron_tagger') 
nltk.download('wordnet')
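
For convenience, the short script below fetches every resource used by the examples in this section in one go. Note that the named-entity example additionally needs the maxent_ne_chunker and words resources:

import nltk

# Resources used by the examples below; quiet=True suppresses
# the per-package progress output.
for resource in ['punkt', 'averaged_perceptron_tagger', 'wordnet',
                 'maxent_ne_chunker', 'words']:
    nltk.download(resource, quiet=True)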

3. Basic usage examples

1. Word tokenization

Split a sentence into individual words or tokens (tokenization):

from nltk.tokenize import word_tokenize

text = "NLTK is a leading platform for building Python programs to work with human language data."
tokens = word_tokenize(text)
print(tokens)
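
With the punkt data installed, this prints each word and the final period as separate tokens: ['NLTK', 'is', 'a', 'leading', 'platform', 'for', 'building', 'Python', 'programs', 'to', 'work', 'with', 'human', 'language', 'data', '.']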

2. Sentence tokenization

Split text into individual sentences:

from nltk.tokenize import sent_tokenize

text = "NLTK is a leading platform. It provides easy-to-use interfaces."
sentences = sent_tokenize(text)
print(sentences)
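
sent_tokenize uses the pre-trained punkt sentence splitter, so this prints ['NLTK is a leading platform.', 'It provides easy-to-use interfaces.'].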

3. Part-of-speech tagging

Tag each word with its part of speech (part-of-speech tagging):

from nltk import pos_tag
from nltk.tokenize import word_tokenize

tokens = word_tokenize("NLTK is a leading platform for building Python programs.")
tagged = pos_tag(tokens)
print(tagged)
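
The output is a list of (word, tag) pairs using the Penn Treebank tagset, e.g. ('NLTK', 'NNP') for a proper noun, ('is', 'VBZ') for a third-person singular present verb, and ('a', 'DT') for a determiner.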

4. Named entity recognition

Identify named entities (Named Entity Recognition, NER):

from nltk import pos_tag
from nltk.chunk import ne_chunk
from nltk.tokenize import word_tokenize

tokens = word_tokenize("Barack Obama was born in Hawaii.")
tagged = pos_tag(tokens)
entities = ne_chunk(tagged)
print(entities)
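
ne_chunk returns an nltk.Tree in which recognized entities appear as labeled subtrees. A minimal sketch, continuing from the variables above, for flattening the tree into (entity text, label) pairs:

# Named entities are subtrees carrying a label such as PERSON or GPE;
# ordinary tokens remain plain (word, tag) tuples.
for subtree in entities:
    if hasattr(subtree, 'label'):
        name = ' '.join(word for word, tag in subtree.leaves())
        print(name, subtree.label())  # e.g. "Barack Obama PERSON", "Hawaii GPE"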

5. Stemming

Extract word stems (stemming):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["running", "ran", "runs"]
stems = [stemmer.stem(word) for word in words]
print(stems)
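
The Porter stemmer applies heuristic suffix-stripping rules, so this prints ['run', 'ran', 'run']; the irregular past tense "ran" is left unchanged, which is where lemmatization (next) does better.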

6. Lemmatization

Reduce words to their dictionary base form (lemmatization):

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
words = ["running", "ran", "runs"]
lemmas = [lemmatizer.lemmatize(word, pos='v') for word in words]
print(lemmas)
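
Because pos='v' tells the lemmatizer to treat each word as a verb, all three forms map to the WordNet base form and this prints ['run', 'run', 'run'].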

Following the steps above, you can install NLTK and start using it for a wide range of natural language processing tasks.

Official NLTK documentation: https://www.nltk.org/

After extracting entities and relations with a BERT model, we also tried a GPT-based approach to entity and relation extraction.

However, since most publicly available GPT models are trained largely on English data, the extraction results contained many English words, so we decided to use NLTK to help translate the relations into Chinese.

Before processing: (screenshot omitted)

After processing: (screenshot omitted)
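
The full script is shown below. It expects an input file relations.jsonl in which each line is a JSON object with a 'relation' field holding either a string or a list of strings; any other fields are passed through unchanged. A hypothetical input line (field names other than 'relation' are illustrative) might look like {"head": "Party A", "relation": "own", "tail": "house"}.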

import json
import nltk
from nltk.corpus import wordnet as wn
from translate import Translator as TranslateLib
import time

# Download the required NLTK data
nltk.download('wordnet')

# Seed synonym dictionary; it can be extended and adjusted manually
synonym_dict = {
    "legalbasis": "法律依据",
    "own": "拥有",
    "possess": "拥有",
    "agreement": "合约",
    "contract": "合约"
}

# Initialize the translator
translator = TranslateLib(to_lang="zh")

# Cache for translation results
translation_cache = {}


def get_synonyms(word):
    # Collect the lemma names of every WordNet synset of `word`
    synonyms = set()
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            synonyms.add(lemma.name())
    return synonyms


def generate_synonym_dict(words):
    # Map every WordNet synonym back to the original relation word
    synonym_dict = {}
    for word in words:
        if word in synonym_dict:
            continue
        synonyms = get_synonyms(word)
        for synonym in synonyms:
            synonym_dict[synonym] = word
    return synonym_dict


def translate_to_chinese(word, retries=3):
    # Reuse a cached translation if one exists
    if word in translation_cache:
        return translation_cache[word]
    for _ in range(retries):
        try:
            # If the word is already Chinese, keep it as-is
            if any('\u4e00' <= char <= '\u9fff' for char in word):
                translation_cache[word] = word
                return word

            # Attempt the translation
            translation = translator.translate(word)
            translation_cache[word] = translation
            return translation
        except Exception as e:
            print(f"Error translating word '{word}': {e}")
            time.sleep(1)  # Wait one second before retrying
    # Fall back to the original word after exhausting the retries
    translation_cache[word] = word
    return word


def standardize_relation(relation, synonym_dict):
    # Map a relation to its canonical form; translate words not in the dictionary
    def lookup(rel):
        if rel in synonym_dict:
            return synonym_dict[rel]
        return translate_to_chinese(rel)

    if isinstance(relation, list):
        return [lookup(rel) for rel in relation]
    return lookup(relation)


# Collect all relation words from the input file
input_file = 'relations.jsonl'
words = set()

with open(input_file, 'r', encoding='utf-8') as infile:
    for line in infile:
        data = json.loads(line.strip())
        if 'relation' in data:
            if isinstance(data['relation'], list):
                words.update(data['relation'])
            else:
                words.add(data['relation'])

# Auto-generate a synonym dictionary from WordNet
auto_synonym_dict = generate_synonym_dict(words)

# Merge the auto-generated dictionary into the seed synonym dictionary
synonym_dict.update(auto_synonym_dict)

# Translate the remaining words in bulk and cache the results
words_to_translate = [word for word in words if word not in synonym_dict and word not in translation_cache]
for word in words_to_translate:
    translate_to_chinese(word)

# Process the JSONL file and standardize the relation terms
output_file = 'standardized_relations.jsonl'

with open(input_file, 'r', encoding='utf-8') as infile, open(output_file, 'w', encoding='utf-8') as outfile:
    for line in infile:
        data = json.loads(line.strip())
        if 'relation' in data:
            data['relation'] = standardize_relation(data['relation'], synonym_dict)
        outfile.write(json.dumps(data, ensure_ascii=False) + '\n')

print(f"处理完成,结果保存在 {output_file} 中。")

# Save the generated synonym dictionary and the translation cache
with open('synonym_dict.json', 'w', encoding='utf-8') as f:
    json.dump(synonym_dict, f, ensure_ascii=False, indent=4)

with open('translation_cache.json', 'w', encoding='utf-8') as f:
    json.dump(translation_cache, f, ensure_ascii=False, indent=4)
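
For example, given the seed dictionary above, an input line such as {"relation": ["own", "contract"]} would be rewritten to {"relation": ["拥有", "合约"]}, while any relation covered by neither the seed dictionary nor the WordNet-derived entries falls through to the translation service.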

On the surface the results look good, but a problem remains: the same word can carry multiple senses, so relations are not always normalized consistently. For instance, across multiple records the relation 上诉 ("appeal") may also end up as 上诉于 ("appeal to"), which makes the resulting knowledge graph look cluttered.
