Currently, I'm using the code below to lemmatize some text data with spaCy and compute TF-IDF values:

lemma = []
for doc in nlp.pipe(df['col'].astype('unicode').values, batch_size=9844,
                    n_threads=3):
    if doc.is_parsed:
        lemma.append([n.lemma_ for n in doc if not n.lemma_.is_punct | n.lemma_ != "-PRON-"])
    else:
        lemma.append(None)
df['lemma_col'] = lemma

vect = sklearn.feature_extraction.text.TfidfVectorizer()
lemmas = df['lemma_col'].apply(lambda x: ' '.join(x))
features = vect.fit_transform(lemmas)
feature_names = vect.get_feature_names()
dense = features.todense()
denselist = dense.tolist()
df = pd.DataFrame(denselist, columns=feature_names)
lemmas = pd.concat([lemmas, df])
df = pd.concat([df, lemmas])
I need to remove proper nouns, punctuation, and stop words, but I'm having some trouble doing that in my current code. I've read some documentation and other resources, but now I'm running into this error:

---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in ()
7 if doc.is_parsed:
8 tokens.append([n.text for n in doc])
----> 9 lemma.append([n.lemma_ for n in doc if not n.lemma_.is_punct or n.lemma_ != "-PRON-"])
10 pos.append([n.pos_ for n in doc])
11 else:
in (.0)
7 if doc.is_parsed:
8 tokens.append([n.text for n in doc])
----> 9 lemma.append([n.lemma_ for n in doc if not n.lemma_.is_punct or n.lemma_ != "-PRON-"])
10 pos.append([n.pos_ for n in doc])
11 else:
AttributeError: 'str' object has no attribute 'is_punct'
Is there an easier way to strip these things out of the text without drastically changing my approach?
Full code is available here.