A quick word2vec test

Bag-of-words Model

Previous state-of-the-art document representations were based on the bag-of-words model, which represents each input document as a fixed-length vector. For example, borrowing from the Wikipedia article, the two documents
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
are used to construct the following list of 10 distinct words
["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games"]
so that each document can be represented as a fixed-length vector whose elements are the frequencies of the corresponding words in the list:
(1) [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
(2) [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
Bag-of-words models are surprisingly effective but lose all information about word order. Bag-of-n-grams models represent documents as fixed-length vectors over word phrases of length n, which captures local word order but suffers from data sparsity and high dimensionality.
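
A minimal sketch of the counting above, in plain Python. The punctuation stripping and the hard-coded vocabulary are simplifications for illustration; in practice a tool such as scikit-learn's CountVectorizer would build the vocabulary for you:

docs = [
    "John likes to watch movies. Mary likes movies too.",
    "John also likes to watch football games.",
]

vocab = ["John", "likes", "to", "watch", "movies", "Mary",
         "too", "also", "football", "games"]

def bag_of_words(doc, vocab):
    # strip periods, split on whitespace, then count each vocabulary word
    tokens = doc.replace(".", " ").split()
    return [tokens.count(word) for word in vocab]

for doc in docs:
    print(bag_of_words(doc, vocab))
# [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
# [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]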



Reference: word2vec 中的数学原理详解 (a detailed walkthrough of the mathematics behind word2vec)

Reference: 自己动手写word2vec (一) (writing word2vec from scratch, part 1): main concepts and workflow

1. Sparse vectors, also known as the one-hot representation.
2. Dense vectors, also known as the distributed representation (a minimal illustration of the two follows below).
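
A minimal sketch of the two representations; the toy vocabulary and the 4-dimensional dense values are made up purely for illustration (word2vec learns such dense values from data):

import numpy as np

vocab = ["king", "queen", "man", "woman"]   # toy vocabulary (illustrative)

# Sparse / one-hot: a |V|-dimensional vector with a single 1
one_hot_king = np.zeros(len(vocab))
one_hot_king[vocab.index("king")] = 1
print(one_hot_king)                         # [1. 0. 0. 0.]

# Dense / distributed: a low-dimensional real-valued vector
dense_king = np.array([0.21, -0.47, 0.83, 0.05])   # values are made up
print(dense_king)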
What word2vec does is actually quite simple: roughly speaking, it builds a shallow neural network, extracts the corresponding inputs and outputs from the given text, keeps adjusting the network's parameters during training, and finally obtains the word vectors.
word2vec builds on the n-gram assumption: a word is taken to depend only on the n words around it and to be independent of the rest of the text. Such models are simple and direct to construct, and various smoothing methods have been developed for them.
CBOW model: the input is the sum of the word vectors of the n words surrounding a word A, and the output is the word vector of A itself;
skip-gram model: the input is word A itself, and the output is the word vectors of the n words surrounding A (predicted one at a time, looping n times). The sketch below shows how the two models slice a sentence into training pairs.
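
A minimal sketch of the (input, output) training pairs the two models generate, assuming a symmetric window of n = 2 words on each side (the sentence is made up for illustration):

sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2  # n words on each side

cbow_pairs = []       # (context words, target word)
skipgram_pairs = []   # (target word, one context word)

for i, target in enumerate(sentence):
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
    cbow_pairs.append((context, target))   # CBOW: context -> target
    for c in context:                      # skip-gram: target -> each context word
        skipgram_pairs.append((target, c))

print(cbow_pairs[3])       # (['quick', 'brown', 'jumps', 'over'], 'fox')
print(skipgram_pairs[:4])  # first few (target, context) pairs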



Reference: tensorflow笔记:使用tf来实现word2vec (TensorFlow notes: implementing word2vec with tf)

Reference: 利用Python实现中文情感极性分析 (Chinese sentiment polarity analysis with Python)




# -*- coding: utf-8 -*-
from sklearn.datasets import fetch_20newsgroups
from bs4 import BeautifulSoup
import nltk
import re
from gensim.models import word2vec

news = fetch_20newsgroups(subset='all')
X, y = news.data, news.target

def news_to_sentences(news):
    news_text = BeautifulSoup(news, 'html.parser').get_text()
    # split the article into sentences
    tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    raw_sentences = tokenizer.tokenize(news_text)
    # split each sentence into lowercase words, keeping letters only
    sentences = []
    for sent in raw_sentences:
        sentences.append(re.sub('[^a-zA-Z]', ' ', sent.lower().strip()).split())
    return sentences

sentences = []
for x in X:
    sentences += news_to_sentences(x)

# Set values for various parameters
num_features = 300    # Word vector dimensionality
min_word_count = 20   # Minimum word count
num_workers = 2       # Number of threads to run in parallel
context = 5           # Context window size
downsampling = 1e-3   # Downsample setting for frequent words

model = word2vec.Word2Vec(sentences, workers=num_workers,
                          size=num_features, min_count=min_word_count,
                          window=context, sample=downsampling)

model.init_sims(replace=True)
print(model.most_similar('morning'))
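
The script above targets the older gensim 3.x API. Under gensim 4.x a few names changed; a sketch of the equivalent calls, assuming the same sentences list (size became vector_size, similarity queries moved to model.wv, and init_sims is no longer needed):

model = word2vec.Word2Vec(sentences, workers=num_workers,
                          vector_size=num_features, min_count=min_word_count,
                          window=context, sample=downsampling)
print(model.wv.most_similar('morning'))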

Reposted from: https://www.cnblogs.com/pengwang52/p/7683291.html
