(Code) Text Classification with Pre-trained Word Vectors

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
from keras.layers import Embedding, LSTM, GRU, Dropout, Dense, Input
from keras.models import Model, Sequential, load_model
from keras.preprocessing import sequence
from keras.datasets import imdb
import gensim
from gensim.models.word2vec import Word2Vec


'''
Take an LSTM as the example: the LSTM unrolls for MAX_SEQ_LEN steps, and each cell
receives one word, represented as a one-hot vector.
The word-vector matrix is embedMatrix; it stores one vector per word in the
vocabulary, and a word's idx is the row number of its vector in embedMatrix.
Multiplying "the word's one-hot vector" by "embedMatrix" yields "the word's
vector representation".

For example: the vocabulary has 5 words plus a stop-word slot at idx 0, i.e.
word2idx = {_stopWord: 0, love: 1, I: 2, you: 3, friend: 4, my: 5}; each word
maps to 2 dimensions.
Input sentence: "I love my pen"   # pen is a stop word, so its idx is set to 0

[0, 0, 1, 0, 0, 0]   [ 0.0,  0.0]     [-0.4, -0.5]
[0, 1, 0, 0, 0, 0] · [ 0.3,  0.1]  =  [ 0.3,  0.1]
[0, 0, 0, 0, 0, 1]   [-0.4, -0.5]     [-0.3, -0.8]
[1, 0, 0, 0, 0, 0]   [ 0.5,  0.2]     [ 0.0,  0.0]
                     [-0.7,  0.6]
                     [-0.3, -0.8]
'''
MAX_SEQ_LEN = 250
inPath = '../data/'
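
A minimal numpy sketch of the lookup described in the comment above; the values mirror the worked example and are not part of the original pipeline:

```python
import numpy as np

# embedMatrix: one row per vocabulary idx (row 0 = stop word)
embedMatrix = np.array([[ 0.0,  0.0],   # 0: _stopWord
                        [ 0.3,  0.1],   # 1: love
                        [-0.4, -0.5],   # 2: I
                        [ 0.5,  0.2],   # 3: you
                        [-0.7,  0.6],   # 4: friend
                        [-0.3, -0.8]])  # 5: my

idxSeq = [2, 1, 5, 0]                   # "I love my pen", pen -> 0
oneHot = np.eye(6)[idxSeq]              # 4 x 6 one-hot matrix
print(oneHot @ embedMatrix)             # same result as embedMatrix[idxSeq]
```

In practice the Embedding layer performs exactly this row lookup, so the one-hot matrices are never materialized.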


def train_W2V(sentenList, embedSize=300, epoch_num=1):
    # sentenList: list of tokenized sentences, each a list of word strings
    # hs=0, negative=5 -> negative sampling instead of hierarchical softmax
    # Note: gensim >= 4.0 renames iter -> epochs and size -> vector_size
    w2vModel = Word2Vec(sentences=sentenList, hs=0, negative=5, min_count=5, window=5, iter=epoch_num, size=embedSize)
    w2vModel.save(inPath + 'w2vModel')
    return w2vModel
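

To connect the trained model to the comment block above, you still need word2idx and embedMatrix. A sketch of one way to build them (build_embedMatrix is a hypothetical helper, not from the original post; row 0 is reserved for stop words / out-of-vocabulary words, as in the worked example):

```python
def build_embedMatrix(w2vModel):
    # Hypothetical helper: row 0 stays all-zero for stop words / OOV.
    # (gensim >= 4.0: use w2vModel.wv.index_to_key instead of index2word)
    word2idx = {word: idx + 1 for idx, word in enumerate(w2vModel.wv.index2word)}
    embedMatrix = np.zeros((len(word2idx) + 1, w2vModel.vector_size))
    for word, idx in word2idx.items():
        embedMatrix[idx] = w2vModel.wv[word]  # row idx <- vector of `word`
    return word2idx, embedMatrix
```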

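With embedMatrix in hand, the LSTM classifier described in the comment block can be assembled. A minimal sketch, assuming binary labels and the build_embedMatrix helper above (layer sizes are illustrative, not from the original post):

```python
def build_LSTM_model(embedMatrix):
    # Hypothetical sketch: frozen pre-trained embeddings + one LSTM layer.
    inputs = Input(shape=(MAX_SEQ_LEN,))
    x = Embedding(input_dim=embedMatrix.shape[0],
                  output_dim=embedMatrix.shape[1],
                  weights=[embedMatrix],
                  trainable=False,          # keep the word2vec vectors fixed
                  input_length=MAX_SEQ_LEN)(inputs)
    x = LSTM(128)(x)                        # illustrative hidden size
    x = Dropout(0.5)(x)
    outputs = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Inputs are idx sequences padded/truncated to MAX_SEQ_LEN, e.g.:
# X = sequence.pad_sequences(idxSeqList, maxlen=MAX_SEQ_LEN)
```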

Here is an example of building a textCNN model for binary text classification with pre-trained word2vec embeddings in Keras, written with the functional API so the parallel convolution branches can be concatenated:

```python
from keras.layers import Conv1D, MaxPooling1D, Embedding, Flatten, Dense, Input, concatenate
from keras.models import Model
from gensim.models import KeyedVectors

# Load pre-trained word2vec embeddings
word2vec_path = 'path/to/word2vec.bin'
word_vectors = KeyedVectors.load_word2vec_format(word2vec_path, binary=True)

# Set the embedding dimension from the pre-trained embeddings
embedding_dim = word_vectors.vector_size

# Define the model architecture
inputs = Input(shape=(max_sequence_length,))

# Embedding layer initialized with the pre-trained word2vec weights
embedded = Embedding(input_dim=vocab_size,
                     output_dim=embedding_dim,
                     weights=[word_vectors.vectors],
                     trainable=False,
                     input_length=max_sequence_length)(inputs)

# Parallel convolutional branches with different filter sizes
conv_blocks = []
for filter_size in [3, 4, 5]:
    conv = Conv1D(filters=128, kernel_size=filter_size, activation='relu')(embedded)
    pool = MaxPooling1D(pool_size=2)(conv)
    conv_blocks.append(Flatten()(pool))

# Concatenate the outputs of the conv branches
concat = concatenate(conv_blocks, axis=-1)

# Fully connected layers
dense = Dense(128, activation='relu')(concat)
outputs = Dense(1, activation='sigmoid')(dense)

model = Model(inputs=inputs, outputs=outputs)

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Print the model summary
model.summary()
```

Make sure to replace `vocab_size`, `word2vec_path`, and `max_sequence_length` with appropriate values for your dataset, and provide the correct path to your pre-trained word2vec embeddings file.
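
One caveat: `weights=[word_vectors.vectors]` assumes your integer token indices follow word2vec's internal word order (index i must correspond to `word_vectors.index2word[i]`, and `vocab_size` must equal the number of rows in `word_vectors.vectors`). If you use your own word2idx mapping, build a reordered embedding matrix as in `build_embedMatrix` above instead.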
