Keras sentence vectors from word vectors: a Keras word-embedding error

This post describes an error encountered while using Keras to process text data. While building a neural network that turns sentences into vectors, the author hits an error: the model consists of an Embedding layer, Dense layers, and a pooling layer, but training fails with an InvalidArgumentError reporting that indices are out of range. The error occurs in the embedding_lookup operation, which complains that index 2087 is not in [0, 100).
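For context, the error class described in the summary is easy to reproduce in isolation: Keras's Embedding layer only defines lookup rows for indices in [0, input_dim), so any token id at or above input_dim fails inside embedding_lookup. A minimal sketch follows; the output_dim value is illustrative, and the index 2087 simply mirrors the one in the traceback further down:

import numpy as np
import keras

# Embedding(input_dim=100, ...) only defines rows 0..99 of the lookup table.
m = keras.models.Sequential()
m.add(keras.layers.Embedding(input_dim=100, output_dim=8))

ok  = np.array([[1, 5, 99]])    # every index < 100: lookup succeeds
bad = np.array([[1, 5, 2087]])  # 2087 >= 100: out-of-range lookup

m.predict(ok)      # fine
# m.predict(bad)   # would raise InvalidArgumentError: indices[0,2] = 2087 is not in [0, 100)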

I'm new to Keras. I built a neural network and it keeps throwing an error; I've been trying for several days and still can't find the cause. Please advise.

import pandas as pd
import numpy as np
import keras
import sklearn
from sklearn import metrics
from sklearn.model_selection import train_test_split

data=pd.read_csv(r"E:\Kaggle\Quora_Insincere_Questions_Classification\train.csv")
x=data["question_text"]
y=data["target"]
train_x,text_x,train_y,text_y=train_test_split(x,y,test_size=0.1,random_state=2019)

tokenizer=keras.preprocessing.text.Tokenizer(num_words=5000000)
tokenizer.fit_on_texts(list(train_x))
train_x=tokenizer.texts_to_sequences(train_x)
text_x=tokenizer.texts_to_sequences(text_x)
train_x=keras.preprocessing.sequence.pad_sequences(train_x,maxlen=100,padding="post")
text_x=keras.preprocessing.sequence.pad_sequences(text_x,maxlen=100,padding="post")

xor=keras.models.Sequential()
xor.add(keras.layers.embeddings.Embedding(input_dim=100,output_dim=1,mask_zero=False))
xor.add(keras.layers.Dense(100000,input_dim=1))
xor.add(keras.layers.pooling.GlobalMaxPool1D())
xor.add(keras.layers.Activation("relu"))
xor.add(keras.layers.Dense(2,activation="sigmoid"))
xor.add(keras.layers.Dense(2))
xor.add(keras.layers.Dense(1))
xor.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])

print(xor.summary())

history=xor.fit(train_x,train_y,epochs=100,verbose=0)
score=xor.evaluate(train_x,train_y)
print(score)

The error output:

C:\Users\hasee\AppData\Local\Programs\Python\Python36\python.exe E:/pg/Kaggle/Quora/text10.py

Using TensorFlow backend.

_________________________________________________________________
Total params: 400,111
Trainable params: 400,111
Non-trainable params: 0
_________________________________________________________________
None

2019-01-04 14:37:15.280404: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

Traceback (most recent call last):
  File "E:/pg/Kaggle/Quora/text10.py", line 32, in <module>
    history=xor.fit(train_x,train_y,epochs=100,verbose=0)
  File "C:\Users\hasee\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 1039, in fit
    validation_steps=validation_steps)
  File "C:\Users\hasee\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training_arrays.py", line 199, in fit_loop
    outs = f(ins_batch)
  File "C:\Users\hasee\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "C:\Users\hasee\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "C:\Users\hasee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
    run_metadata_ptr)
  File "C:\Users\hasee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,2] = 2087 is not in [0, 100)
    [[{{node embedding_1/embedding_lookup}} = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@training/Adam/Assign_2"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast, training/Adam/gradients/embedding_1/embedding_lookup_grad/concat/axis)]]

Process finished with exit code 1
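Reading the script against the traceback, the likely cause is that the Tokenizer is fitted with num_words=5000000, so texts_to_sequences emits token ids well above 100, while the Embedding layer is declared with input_dim=100, which is the padded sequence length rather than the vocabulary size. Below is a hedged sketch of a model definition that keeps the indices in range; the vocabulary cap, embedding width, and single sigmoid output are illustrative choices, not the original author's settings:

import keras

max_features = 50000   # assumed vocabulary cap; input_dim must exceed the largest token id
maxlen = 100           # padded sequence length, as in the question

# train_x here is the raw question_text series from the script above
tokenizer = keras.preprocessing.text.Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(train_x))
train_seq = keras.preprocessing.sequence.pad_sequences(
    tokenizer.texts_to_sequences(train_x), maxlen=maxlen, padding="post")

model = keras.models.Sequential()
# input_dim is the vocabulary size (largest token id + 1), not the sequence length
model.add(keras.layers.Embedding(input_dim=max_features, output_dim=64, input_length=maxlen))
model.add(keras.layers.GlobalMaxPool1D())
model.add(keras.layers.Dense(64, activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))  # one probability per question for binary_crossentropy
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

Because Tokenizer(num_words=max_features) only keeps the most frequent words when converting text, every id produced by texts_to_sequences stays below max_features, so embedding_lookup never goes out of range; alternatively, input_dim can be set to len(tokenizer.word_index) + 1 to cover the full vocabulary.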
