Bag of Words (Concept + Code Implementation)

Bag of words

The bag-of-words (BOW) model is a representation that turns arbitrary text into fixed-length vectors by counting how many times each word appears. This process is often referred to as vectorization.

Let’s take an example to understand this concept in depth.

- “It was the best of times” 
- “It was the worst of times”
-  “It was the age of wisdom”
-  “It was the age of foolishness”

We treat each sentence as a separate document and make a list of all the unique words across the four documents (a plain-Python sketch of this step follows the vocabulary list). We get:

- ‘It’, ‘was’, ‘the’, ‘best’, ‘of’, ‘times’, ‘worst’, ‘age’, ‘wisdom’, ‘foolishness’
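As a minimal sketch of this step in plain Python (no libraries; lowercasing and whitespace splitting are assumptions, since the original does not specify tokenization), the vocabulary can be collected like this:

docs = [
    'It was the best of times',
    'It was the worst of times',
    'It was the age of wisdom',
    'It was the age of foolishness'
]

# Collect unique words in order of first appearance.
vocab = []
for doc in docs:
    for word in doc.lower().split():
        if word not in vocab:
            vocab.append(word)

print(vocab)
# ['it', 'was', 'the', 'best', 'of', 'times', 'worst', 'age', 'wisdom', 'foolishness']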

The next step is to create the vectors. A vector turns the text into a form that a machine learning algorithm can use. We take the first document, “It was the best of times”, and count how many times each of the 10 unique words appears in it:

- “it” = 1
- “was” = 1
- “the” = 1
- “best” = 1
- “of” = 1
- “times” = 1
- “worst” = 0
- “age” = 0
- “wisdom” = 0
- “foolishness” = 0

Doing the same for all four documents gives the following vectors:

- “It was the best of times” = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
- “It was the worst of times” = [1, 1, 1, 0, 1, 1, 1, 0, 0, 0]
- “It was the age of wisdom” = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0]
- “It was the age of foolishness” = [1, 1, 1, 0, 1, 0, 0, 1, 0, 1]
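The same counting step can be hand-rolled with the docs and vocab lists from the sketch above (illustration only, not the implementation used later):

# Count how many times each vocabulary word appears in each document.
vectors = []
for doc in docs:
    words = doc.lower().split()
    vectors.append([words.count(term) for term in vocab])

for doc, vec in zip(docs, vectors):
    print(doc, '=', vec)
# It was the best of times = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# It was the worst of times = [1, 1, 1, 0, 1, 1, 1, 0, 0, 0]
# It was the age of wisdom = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0]
# It was the age of foolishness = [1, 1, 1, 0, 1, 0, 0, 1, 0, 1]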

In this approach, each word or token is called a “gram”. A model whose vocabulary is built from pairs of adjacent words is called a bigram model.
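For illustration only (the original example uses single words), a bigram vocabulary for the same four documents could be built by pairing adjacent words:

# Build a vocabulary of adjacent word pairs (bigrams).
bigram_vocab = []
for doc in docs:
    words = doc.lower().split()
    for first, second in zip(words, words[1:]):
        bigram = first + ' ' + second
        if bigram not in bigram_vocab:
            bigram_vocab.append(bigram)

print(bigram_vocab)
# ['it was', 'was the', 'the best', 'best of', 'of times', 'the worst',
#  'worst of', 'the age', 'age of', 'of wisdom', 'of foolishness']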

The process of converting text into numbers is called vectorization in ML. Different ways to convert text into vectors include:

- Counting the number of times each word appears in a document.
- Calculating the frequency with which each word appears in a document, out of all the words in that document (see the sketch after this list).
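The first option is the raw count already shown above. The second option, term frequency, divides each count by the number of words in the document (Keras's texts_to_matrix also supports mode='freq' for this). A sketch reusing the docs and vocab lists from above:

# Term frequency: each word count divided by the document length.
freq_vectors = []
for doc in docs:
    words = doc.lower().split()
    freq_vectors.append([words.count(term) / len(words) for term in vocab])

print([round(x, 3) for x in freq_vectors[0]])
# [0.167, 0.167, 0.167, 0.167, 0.167, 0.167, 0.0, 0.0, 0.0, 0.0]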

Implementing BOW in Python

We use Keras’s Tokenizer class:
from keras.preprocessing.text import Tokenizer

docs = [
    'It was the best of times',
    'It was the worst of times',
    'It was the age of wisdom',
    'It was the age of foolishness'
]

## Step 1: Determine the Vocabulary
tokenizer = Tokenizer()
tokenizer.fit_on_texts(docs)
print(f'Vocabulary: {list(tokenizer.word_index.keys())}')

## Step 2: Count
vectors = tokenizer.texts_to_matrix(docs, mode='count')
print(vectors)
Running that code gives us:
Vocabulary: ['it', 'was', 'the', 'of', 'times', 'age', 'best', 'worst', 'wisdom', 'foolishness']
[[0. 1. 1. 1. 1. 1. 0. 1. 0. 0. 0.]
 [0. 1. 1. 1. 1. 1. 0. 0. 1. 0. 0.]
 [0. 1. 1. 1. 1. 0. 1. 0. 0. 1. 0.]
 [0. 1. 1. 1. 1. 0. 1. 0. 0. 0. 1.]]
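Note that the matrix has 11 columns for 10 words: Keras’s Tokenizer reserves index 0, so the first column is always zero, and its vocabulary is ordered by word frequency rather than by first appearance. As a point of comparison (not from the original post), the same counts can be obtained with scikit-learn’s CountVectorizer, which orders its vocabulary alphabetically:

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer()       # lowercases and tokenizes by default
X = vectorizer.fit_transform(docs)   # sparse document-term count matrix

print(vectorizer.get_feature_names_out())
# ['age' 'best' 'foolishness' 'it' 'of' 'the' 'times' 'was' 'wisdom' 'worst']
print(X.toarray())
# [[0 1 0 1 1 1 1 1 0 0]
#  [0 0 0 1 1 1 1 1 0 1]
#  [1 0 0 1 1 1 0 1 1 0]
#  [1 0 1 1 1 1 0 1 0 0]]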
