Using various posts on Stack Overflow, among other sources, I am trying to implement my own
PHP classifier that classifies tweets into the classes positive, neutral, and negative. Before coding, I need to get the process straight. My train of thought and an example follow:
Bayes' theorem:

                     p(class) * p(words|class)
    p(class|words) = -------------------------
                             p(words)

with the
assumption that p(words) is the same for every class leads to calculating
arg max p(class) * p(words|class) with
p(words|class) = p(word1|class) * p(word2|class) * ... and
p(class) = #words in class / #words in total and
                    p(word, class)                        1
    p(word|class) = -------------- = p(word, class) * --------
                       p(class)                       p(class)

                    #times word occurs in class   #words in total   #times word occurs in class
                  = --------------------------- * --------------- = ---------------------------
                         #words in total          #words in class        #words in class
Example:
------+----------------+-----------------+
class | words | #words in class |
------+----------------+-----------------+
pos | happy win nice | 3 |
neu | neutral middle | 2 |
neg | sad loose bad | 3 |
------+----------------+-----------------+
p(pos) = 3/8
p(neu) = 2/8
p(neg) = 3/8
Calculate: argmax(sad loose)
p(sad loose|pos) = p(sad|pos) * p(loose|pos) = (0+1)/3 * (0+1)/3 = 1/9
p(sad loose|neu) = p(sad|neu) * p(loose|neu) = (0+1)/2 * (0+1)/2 = 1/4
p(sad loose|neg) = p(sad|neg) * p(loose|neg) = 1/3 * 1/3 = 1/9
p(pos) * p(sad loose|pos) = 3/8 * 1/9 = 0.0416666667
p(neu) * p(sad loose|neu) = 2/8 * 1/4 = 0.0625
p(neg) * p(sad loose|neg) = 3/8 * 1/9 = 0.0416666667
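The calculation above can be sketched in a few lines. This is an illustrative Python sketch of the scheme exactly as described in the post (adding 1 only to zero counts); the final implementation is intended to be in PHP:

```python
from collections import Counter

# Toy training data from the example above.
training = {
    "pos": "happy win nice".split(),
    "neu": "neutral middle".split(),
    "neg": "sad loose bad".split(),
}

total_words = sum(len(words) for words in training.values())
counts = {cls: Counter(words) for cls, words in training.items()}

def score(cls, tweet_words):
    """p(class) * product of p(word|class), with (0+1) smoothing
    applied only to unseen words, as in the post."""
    p = len(training[cls]) / total_words  # p(class) = #words in class / #words in total
    for word in tweet_words:
        n = counts[cls][word]
        p *= (n if n > 0 else 1) / len(training[cls])
    return p

scores = {cls: score(cls, "sad loose".split()) for cls in training}
print(scores)  # pos and neg come out equal, as in the example
```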
As you can see, I have "trained" the classifier with a positive ("happy win nice"), a neutral ("neutral middle"), and a negative ("sad loose bad") tweet. To prevent zero probabilities caused by a word missing from a class, I use Laplace (or "add one") smoothing, see the "(0+1)" terms.
I basically have two questions:
> Is this a correct blueprint for the implementation? Is there room for improvement?
> When classifying the tweet "sad loose", I would expect 100% for class "neg", since it contains only negative words. Laplace smoothing, however, complicates things: classes pos and neg end up with the same probability. Is there a workaround for this?
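For comparison, the textbook form of add-one smoothing adds 1 to every count (not only the zero counts) and adds the vocabulary size |V| to the denominator, so the smoothed probabilities still sum to 1. A sketch under that assumption, using the 8-word vocabulary of the example:

```python
from collections import Counter

training = {
    "pos": "happy win nice".split(),
    "neu": "neutral middle".split(),
    "neg": "sad loose bad".split(),
}
vocab = {word for words in training.values() for word in words}  # |V| = 8
total_words = sum(len(words) for words in training.values())
counts = {cls: Counter(words) for cls, words in training.items()}

def score(cls, tweet_words):
    p = len(training[cls]) / total_words
    for word in tweet_words:
        # Textbook add-one smoothing: +1 to every count, +|V| to the denominator.
        p *= (counts[cls][word] + 1) / (len(training[cls]) + len(vocab))
    return p

scores = {cls: score(cls, "sad loose".split()) for cls in training}
print(max(scores, key=scores.get))  # with uniform smoothing, neg wins
```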