TensorFlow 2.0 Tutorial 21: Word Embeddings

  1. Load the data

  import tensorflow as tf
  from tensorflow import keras
  from tensorflow.keras import layers

  vocab_size = 10000

  # Keep only the 10,000 most frequent words
  (train_x, train_y), (test_x, test_y) = keras.datasets.imdb.load_data(num_words=vocab_size)

  print(train_x[0])

  print(train_x[1])

  [1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

  [1, 194, 1153, 194, 8255, 78, 228, 5, 6, 1463, 4369, 5012, 134, 26, 4, 715, 8, 118, 1634, 14, 394, 20, 13, 119, 954, 189, 102, 5, 207, 110, 3103, 21, 14, 69, 188, 8, 30, 23, 7, 4, 249, 126, 93, 4, 114, 9, 2300, 1523, 5, 647, 4, 116, 9, 35, 8163, 4, 229, 9, 340, 1322, 4, 118, 9, 4, 130, 4901, 19, 4, 1002, 5, 89, 29, 952, 46, 37, 4, 455, 9, 45, 43, 38, 1543, 1905, 398, 4, 1649, 26, 6853, 5, 163, 11, 3215, 2, 4, 1153, 9, 194, 775, 7, 8255, 2, 349, 2637, 148, 605, 2, 8003, 15, 123, 125, 68, 2, 6853, 15, 349, 165, 4362, 98, 5, 4, 228, 9, 43, 2, 1157, 15, 299, 120, 5, 120, 174, 11, 220, 175, 136, 50, 9, 4373, 228, 8255, 5, 2, 656, 245, 2350, 5, 4, 9837, 131, 152, 491, 18, 2, 32, 7464, 1212, 14, 9, 6, 371, 78, 22, 625, 64, 1382, 9, 8, 168, 145, 23, 4, 1690, 15, 16, 4, 1355, 5, 28, 6, 52, 154, 462, 33, 89, 78, 285, 16, 145, 95]

  word_index = keras.datasets.imdb.get_word_index()

  # Shift every index by 3 to make room for the special tokens below
  word_index = {k: (v + 3) for k, v in word_index.items()}

  word_index['<PAD>'] = 0
  word_index['<START>'] = 1
  word_index['<UNK>'] = 2
  word_index['<UNUSED>'] = 3

  reverse_word_index = {v:k for k, v in word_index.items()}

  def decode_review(text):
      return ' '.join([reverse_word_index.get(i, '?') for i in text])

  print(decode_review(train_x[0]))

  this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert is an amazing actor and now the same being director father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also to the two little boy's that played the of norman and paul they were just brilliant children are often left out of the list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all
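  Each review is a list of word indices with its own length, so the sequences cannot be stacked into a single tensor yet. A quick length check (a small sketch, not in the original post) makes the need for padding concrete:

  # The first two training reviews have different lengths (e.g. 218 and 189 tokens)
  print(len(train_x[0]), len(train_x[1]))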

  maxlen = 500

  # Pad (or truncate) every review to exactly 500 tokens
  train_x = keras.preprocessing.sequence.pad_sequences(train_x, value=word_index['<PAD>'],
                                                       padding='post', maxlen=maxlen)

  test_x = keras.preprocessing.sequence.pad_sequences(test_x, value=word_index['<PAD>'],
                                                      padding='post', maxlen=maxlen)
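  After padding, every review is exactly 500 tokens long. A quick shape check (a hypothetical snippet, not in the original) confirms the data is ready for the model:

  print(train_x.shape)   # expected: (25000, 500) -- 25,000 training reviews, 500 tokens each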

  2. Build the model

  embedding_dim = 16

  model = keras.Sequential([
      layers.Embedding(vocab_size, embedding_dim, input_length=maxlen),
      layers.GlobalAveragePooling1D(),
      layers.Dense(16, activation='relu'),
      layers.Dense(1, activation='sigmoid')
  ])

  model.summary()

  Model: "sequential"
  _________________________________________________________________
  Layer (type)                 Output Shape              Param #
  =================================================================
  embedding (Embedding)        (None, 500, 16)           160000
  _________________________________________________________________
  global_average_pooling1d (Gl (None, 16)                0
  _________________________________________________________________
  dense (Dense)                (None, 16)                272
  _________________________________________________________________
  dense_1 (Dense)              (None, 1)                 17
  =================================================================
  Total params: 160,289
  Trainable params: 160,289
  Non-trainable params: 0
  _________________________________________________________________
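  The parameter counts follow directly from the layer sizes: the Embedding layer stores 10,000 × 16 = 160,000 weights, the hidden Dense layer has 16 × 16 + 16 = 272 parameters (weights plus biases), and the output layer has 16 × 1 + 1 = 17, giving the 160,289 total reported above.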

  model.compile(optimizer=keras.optimizers.Adam(),
                loss=keras.losses.BinaryCrossentropy(),
                metrics=['accuracy'])

  history = model.fit(train_x, train_y, epochs=30, batch_size=512, validation_split=0.1)

  Train on 22500 samples, validate on 2500 samples

  Epoch 30/30

  22500/22500 [==============================] - 1s 66us/sample - loss: 0.1290 - accuracy: 0.9596 - val_loss: 0.3055 - val_accuracy: 0.8948
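  Training accuracy (about 0.96) is noticeably higher than validation accuracy (about 0.89), so the model is overfitting somewhat. The post stops after training, but evaluating on the held-out test split loaded earlier is a natural next step (a minimal sketch, not in the original):

  # Evaluate the trained model on the IMDB test split
  test_loss, test_acc = model.evaluate(test_x, test_y, verbose=0)
  print('test accuracy:', test_acc)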

  import matplotlib.pyplot as plt

  acc = history.history['accuracy']
  val_acc = history.history['val_accuracy']

  epochs = range(1, len(acc) + 1)

  plt.figure(figsize=(16, 9))  # create the figure before plotting, not after
  plt.plot(epochs, acc, 'bo', label='Training acc')
  plt.plot(epochs, val_acc, 'b', label='Validation acc')
  plt.title('Training and validation accuracy')
  plt.xlabel('Epochs')
  plt.ylabel('Accuracy')
  plt.legend(loc='lower right')
  plt.show()

  

  (Figure: training and validation accuracy over epochs)
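  The history object also records the loss curves, which often show overfitting more clearly than accuracy. A complementary plot (an optional sketch, not part of the original post) can be drawn the same way:

  plt.figure(figsize=(16, 9))
  plt.plot(epochs, history.history['loss'], 'bo', label='Training loss')
  plt.plot(epochs, history.history['val_loss'], 'b', label='Validation loss')
  plt.title('Training and validation loss')
  plt.xlabel('Epochs')
  plt.ylabel('Loss')
  plt.legend(loc='upper right')
  plt.show()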

 

  e = model.layers[0]
  weights = e.get_weights()[0]  # the learned embedding matrix: one 16-dimensional vector per word

  print(weights.shape)

  (10000, 16)

  out_v = open('vecs.tsv', 'w')
  out_m = open('meta.tsv', 'w')

  for word_num in range(vocab_size):
      word = reverse_word_index[word_num]
      embeddings = weights[word_num]
      out_m.write(word + "\n")
      out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")

  out_v.close()
  out_m.close()

  Upload vecs.tsv and meta.tsv to the Embedding Projector (http://projector.tensorflow.org/) to visualize the learned embeddings.
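  The learned embeddings can also be inspected directly in code. As an illustration (a hypothetical helper, not part of the original post), cosine similarity against the embedding matrix finds the words whose vectors lie closest to a query word:

  import numpy as np

  def nearest_words(query, k=5):
      # Rank all vocabulary words by cosine similarity to the query word's vector
      idx = word_index.get(query)
      if idx is None or idx >= vocab_size:
          return []
      norms = np.linalg.norm(weights, axis=1) + 1e-9
      sims = weights @ weights[idx] / (norms * norms[idx])
      best = np.argsort(-sims)[1:k + 1]   # index 0 is the query word itself
      return [reverse_word_index.get(i, '?') for i in best]

  print(nearest_words('great'))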

Reposted from: https://www.cnblogs.com/gnz49/p/11480770.html
