IMDB Dataset: Sentiment Analysis of Movie Reviews with a Recurrent Neural Network (Word2Vec & BiLSTM)


I. Background and Motivation

Natural language processing (NLP) is currently one of the hottest research and product areas in artificial intelligence. Several mature products already belong to this field, for example iFLYTEK's multilingual translation, Apple's Siri, Microsoft's Cortana, Xiaomi's XiaoAI assistant, and Baidu's XiaoDu. This project combines an LSTM (Long Short-Term Memory) network with word-vector preprocessing in the spirit of Word2Vec to build a text sentiment-classification model for IMDB movie reviews and output its predictions.

II. Data Preparation

Project page:

IDBM_Movie | Kaggle (www.kaggle.com)

Download the three data files used below: labeledTrainData.tsv, imdb_master.csv, and testData.tsv.


III. Data Analysis and Modeling

1. Import modules and read the data

import re
import pandas as pd
import matplotlib.pyplot as plt

Read the data:

df1 = pd.read_csv('labeledTrainData.tsv', delimiter="\t")
df1 = df1.drop(['id'], axis=1)
df1.head()

df2 = pd.read_csv('imdb_master.csv',encoding="latin-1")
df2.head()


2. Data preprocessing

df2 = df2.drop(['Unnamed: 0','type','file'],axis=1)
df2.columns = ["review","sentiment"]
df2.head()

df2 = df2[df2.sentiment != 'unsup']
df2['sentiment'] = df2['sentiment'].map({'pos': 1, 'neg': 0})
df2.head()

# Merge the two datasets
df = pd.concat([df1, df2]).reset_index(drop=True)
df.head()

# Inspect the combined data
df.info()
------------------------------------
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 75000 entries, 0 to 74999
Data columns (total 2 columns):
review       75000 non-null object
sentiment    75000 non-null int64
dtypes: int64(1), object(1)
memory usage: 1.1+ MB


3. Data visualization

# Visualize the class balance of the sentiment label
plt.hist(df[df.sentiment == 1].sentiment,
         bins=2, color='green', label='Positive')
plt.hist(df[df.sentiment == 0].sentiment,
         bins=2, color='blue', label='Negative')
plt.title('Classes distribution in the train data', fontsize=14)
plt.xticks([])
plt.xlim(-0.5, 2)
plt.legend()
plt.show()


4. Model building

4.1 Text preprocessing: cleaning, stop-word removal, and lemmatization

import nltk  # NLTK supplies the stop-word list and lemmatizer used below
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
# nltk.download('stopwords'); nltk.download('wordnet')  # run once if the corpora are missing

stop_words = set(stopwords.words("english"))  # English stop-word list
lemmatizer = WordNetLemmatizer()  # reduces each word to its base form


def clean_text(text):
    # Keep only word characters and whitespace; flags must be passed by
    # keyword, since re.sub's fourth positional argument is count
    text = re.sub(r'[^\w\s]', '', text, flags=re.UNICODE)

    # Lowercase the text, then split it into a list of tokens
    text = text.lower()

    # Lemmatize each token: first with the default POS (noun), then as a verb
    text = [lemmatizer.lemmatize(token) for token in text.split(" ")]
    text = [lemmatizer.lemmatize(token, "v") for token in text]
    # Drop stop words and join the tokens back into a single string
    text = [word for word in text if word not in stop_words]
    text = " ".join(text)
    return text
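
A quick sanity check of what clean_text produces (an illustrative example; the exact output depends on the installed NLTK corpora):

sample = "The actors played their roles wonderfully in this amazing movie!"
print(clean_text(sample))
# -> roughly: "actor play role wonderfully amazing movie"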

'''
An alternative implementation of the same preprocessing, kept for reference
(it assumes word_tokenize from nltk.tokenize and a global counter):

def lemmatize(tokens: list) -> list:
    # 1. Lemmatize: strip affixes, e.g. "cars" -> "car", "ate" -> "eat"
    tokens = list(map(lemmatizer.lemmatize, tokens))
    lemmatized_tokens = list(map(lambda x: lemmatizer.lemmatize(x, "v"), tokens))
    # 2. Remove stop words
    meaningful_words = list(filter(lambda x: x not in stop_words, lemmatized_tokens))
    return meaningful_words


def preprocess(review: str, total: int, show_progress: bool = True) -> list:
    if show_progress:
        global counter
        counter += 1
        print('Processing... %6i/%6i' % (counter, total), end='\r')
    # 1. Clean the text (clean_text defined above)
    review = clean_text(review)
    # 2. Split into individual words
    tokens = word_tokenize(review)
    # 3. Lemmatize and return the token list
    lemmas = lemmatize(tokens)
    return lemmas
'''


4.2 Apply the clean_text function to the review column

df['Processed_Reviews'] = df.review.apply(lambda x: clean_text(x))
df.head()

df.Processed_Reviews.apply(lambda x: len(x.split(" "))).mean()


128.51009333333334
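
The cleaned reviews average about 128.5 tokens, which motivates the padding length maxlen = 130 chosen below. A quick illustrative check of how many reviews actually fit within that length:

# Fraction of cleaned reviews no longer than 130 tokens
lengths = df.Processed_Reviews.apply(lambda x: len(x.split(" ")))
print((lengths <= 130).mean())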

4.3 Building and training the BiLSTM model

In:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense , Input , LSTM , Embedding, Dropout , Activation, GRU, Flatten
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model, Sequential
from keras.layers import Convolution1D
from keras import initializers, regularizers, constraints, optimizers, layers

max_features = 6000
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(df['Processed_Reviews'])
list_tokenized_train = tokenizer.texts_to_sequences(df['Processed_Reviews'])

maxlen = 130
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
y = df['sentiment']

embed_size = 128
model = Sequential()
model.add(Embedding(max_features, embed_size))
model.add(Bidirectional(LSTM(32, return_sequences = True)))
model.add(GlobalMaxPool1D())
model.add(Dense(20, activation="relu"))
model.add(Dropout(0.05))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

batch_size = 100
epochs = 3
model.fit(X_t,y, batch_size=batch_size, epochs=epochs, validation_split=0.2)

Out:

Training results: per-epoch loss and accuracy on the training data and the 20% validation split (3 epochs).
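
Note that the Embedding layer above is trained from scratch together with the network. The Word2Vec vectors mentioned in the title can instead be used to initialize it; the following is a minimal sketch assuming gensim is installed (the parameter is vector_size in gensim 4.x, size in 3.x), not part of the original training run:

import numpy as np
from gensim.models import Word2Vec

# Train Word2Vec on the cleaned reviews
sentences = [t.split() for t in df['Processed_Reviews']]
w2v = Word2Vec(sentences, vector_size=embed_size, window=5, min_count=2)

# Copy the learned vectors into a matrix aligned with the Keras tokenizer ids
embedding_matrix = np.zeros((max_features, embed_size))
for word, idx in tokenizer.word_index.items():
    if idx < max_features and word in w2v.wv:
        embedding_matrix[idx] = w2v.wv[word]

# The matrix can then seed the Embedding layer, e.g.:
# Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False)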

IV. Evaluation and Prediction

df_test = pd.read_csv("testData.tsv", header=0, delimiter="\t", quoting=3)
df_test.head()
df_test["review"] = df_test.review.apply(lambda x: clean_text(x))
# Test ids look like "12311_10": the number after "_" is the IMDB rating,
# so a rating >= 5 is labeled positive (1), otherwise negative (0)
df_test["sentiment"] = df_test["id"].map(lambda x: 1 if int(x.strip('"').split("_")[1]) >= 5 else 0)
y_test = df_test["sentiment"]
list_sentences_test = df_test["review"]
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)
X_te = pad_sequences(list_tokenized_test, maxlen=maxlen)
prediction = model.predict(X_te)
y_pred = (prediction > 0.5)
from sklearn.metrics import f1_score, confusion_matrix
print('F1-score: {0}'.format(f1_score(y_test, y_pred)))
print('Confusion matrix:')
confusion_matrix(y_test, y_pred)
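
Beyond a single F1 number, scikit-learn can also print a per-class breakdown (a small illustrative addition):

from sklearn.metrics import classification_report
# Precision, recall and F1 for each class
print(classification_report(y_test, y_pred, target_names=['negative', 'positive']))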

Evaluation result: F1-score = 0.952 (the F1-score, the harmonic mean of precision and recall, is a standard metric for classification problems).


Generate the submission file:

y_pred = (model.predict(X_te) > 0.5).astype(int)  # threshold probabilities into 0/1 labels

def submit(predictions):
    df_test['sentiment'] = predictions
    df_test.to_csv('submission.csv', index=False, columns=['id', 'sentiment'])

submit(y_pred)

V. Summary

In this project we used a large corpus of labeled, high-quality movie reviews to build a text sentiment-classification model with a bidirectional LSTM network. The trained model performs well (F1-score of about 0.95 on the test set), solving the IMDB movie-review sentiment-classification task and bringing us a step closer to practical NLP expertise.

To push the score further, hyperparameters can be tuned with GridSearchCV (see the sketch below); building on techniques such as knowledge graphs and graph databases can also speed up learning and improve results.
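
A minimal sketch of what that tuning could look like, assuming the classic keras.wrappers.scikit_learn wrapper is available (newer environments would use the scikeras package instead); the parameter grid is purely illustrative:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(lstm_units=32, dropout_rate=0.05):
    # Same architecture as above, parameterized for the search
    m = Sequential()
    m.add(Embedding(max_features, embed_size))
    m.add(Bidirectional(LSTM(lstm_units, return_sequences=True)))
    m.add(GlobalMaxPool1D())
    m.add(Dense(20, activation="relu"))
    m.add(Dropout(dropout_rate))
    m.add(Dense(1, activation="sigmoid"))
    m.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return m

clf = KerasClassifier(build_fn=build_model, epochs=3, batch_size=100, verbose=0)
param_grid = {'lstm_units': [32, 64], 'dropout_rate': [0.05, 0.2]}
grid = GridSearchCV(clf, param_grid, cv=3, scoring='f1')
grid_result = grid.fit(X_t, y)
print(grid_result.best_score_, grid_result.best_params_)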
