- Bayes’ theorem
Wiki Page: https://en.wikipedia.org/wiki/Bayes'_theorem
P(A|B) = P(B|A) P(A) / P(B)
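As a quick sanity check, here is the formula with made-up numbers (all three inputs below are assumptions chosen only to exercise the arithmetic):

```csharp
using System;

class BayesCheck
{
    static void Main()
    {
        // Hypothetical numbers, purely for illustration:
        double pSpam = 0.5;          // P(A):   prior probability that a mail is spam
        double pBuyGivenSpam = 0.2;  // P(B|A): probability that "buy" appears in a spam mail
        double pBuy = 0.12;          // P(B):   probability that "buy" appears in any mail

        // Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
        double pSpamGivenBuy = pBuyGivenSpam * pSpam / pBuy;
        Console.WriteLine(pSpamGivenBuy); // ≈ 0.8333
    }
}
```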
- Naive Bayes Classifier
Wiki Page: https://en.wikipedia.org/wiki/Naive_Bayes_classifier
- Use the Naive Bayes classifier to solve a sample problem
3.1 The problem we need to solve
We have the following training samples:
a. Get up to 10% pay off on Under Armour, you've still got time, but not much. (Spam)
b. Wonderful news, Cold Gear under $50. Right here, Right Now! (Spam)
c. Please help with the development work asap, we need to meet the deadline. (Ham)
d. JavaScript developer, Full Stack developer, good opportunity in Shanghai. (Ham)
With these 4 training samples, we need to classify whether the mails below are spam or ham:
a. We provide 20% pay off, buy it!
b. Re: Applying for JavaScript developer…
3.2 Analysis using Naive Bayes Classifier
We need to compute:
P(spam | mail): the probability that the given mail is spam
P(ham | mail): the probability that the given mail is ham
=>
P(spam | mail) = P(spam) * P(mail | spam) / P(mail)
P(ham | mail) = P(ham) * P(mail | ham) / P(mail)
We will just compare P(spam | mail) with P(ham | mail) to decide if the mail is spam or ham.
Since P(mail) is the same in both formulas, we can ignore it and just compare:
P(spam) * P(mail | spam)  vs.  P(ham) * P(mail | ham)
P(spam) = number of spam mails / total number of mails
P(ham) = number of ham mails / total number of mails
In our training set, 2 of the 4 mails are spam and 2 are ham, so both priors are 2/4 = 0.5.
P(mail | spam) = P(word1 | spam) * P(word2 | spam) * … (this is the "naive" assumption: the words are treated as independent given the class)
P(mail | ham) = P(word1 | ham) * P(word2 | ham) * …
P(word1 | spam) = count of word1 in all the spam mails / count of all the words in spam mails
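For example, with made-up counts (not computed from the actual training mails above): if "pay" appeared 2 times among 24 total words in the spam mails, then
P(pay | spam) = 2 / 24 ≈ 0.083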
Things to note: if the testing set contains a new word that never appears in the training set, P(newWord | spam) would be 0, and the whole multiplication chain would collapse to 0, making the comparison meaningless. Multiplying many small probabilities can also underflow floating-point precision.
So we apply the log function to both sides before comparing; log is monotonically increasing, so the comparison result is unchanged:
if log(P(spam | mail)) > log(P(ham | mail)), return spam
else return ham
log(P(spam | mail)) = log(P(spam)) + log(P(mail | spam)) - log(P(mail))
log(P(ham | mail)) = log(P(ham)) + log(P(mail | ham)) - log(P(mail))
Since log(P(mail)) is again the same on both sides, we only need to compare:
log(P(spam)) + log(P(word1 | spam)) + log(P(word2 | spam)) + …
log(P(ham)) + log(P(word1 | ham)) + log(P(word2 | ham)) + …
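To see why the log form is worth the trouble, here is a small self-contained demo of the underflow problem (the probability 0.01 and the count 1000 are arbitrary):

```csharp
using System;

class UnderflowDemo
{
    static void Main()
    {
        // 1000 word probabilities of 0.01 each: the raw product underflows to 0,
        // while the sum of logs remains a perfectly usable score.
        double product = 1.0, logSum = 0.0;
        for (int i = 0; i < 1000; i++)
        {
            product *= 0.01;
            logSum += Math.Log(0.01);
        }
        Console.WriteLine(product); // 0 (underflowed)
        Console.WriteLine(logSum);  // ≈ -4605.17
    }
}
```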
But the zero-probability problem is still not resolved: if we meet a new word, P(newWord | spam) would be 0, and log(0) is undefined.
Let's use Laplace smoothing:
P(word1 | spam) = (count of word1 in spam mails + 1) / (total count of words in spam mails + number of distinct words in the training set)
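Putting the log trick and Laplace smoothing together, a minimal scoring sketch might look like this; the word-count tables, the vocabulary size, and all numbers are hypothetical, not the real training statistics:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class NaiveBayesScore
{
    // Laplace-smoothed log P(word | class):
    // (count of word in class + 1) / (total words in class + distinct words in training set)
    static double LogWordProb(string word, Dictionary<string, int> counts, int totalWords, int vocabSize)
    {
        counts.TryGetValue(word, out int count); // count stays 0 for unseen words
        return Math.Log((count + 1.0) / (totalWords + vocabSize));
    }

    // log P(class) + sum over the mail's words of log P(word | class)
    static double Score(IEnumerable<string> words, double prior, Dictionary<string, int> counts, int vocabSize)
    {
        int totalWords = counts.Values.Sum();
        return Math.Log(prior) + words.Sum(w => LogWordProb(w, counts, totalWords, vocabSize));
    }

    static void Main()
    {
        // Hypothetical per-class word counts from some training pass:
        var spamCounts = new Dictionary<string, int> { ["pay"] = 2, ["off"] = 2, ["buy"] = 1 };
        var hamCounts  = new Dictionary<string, int> { ["developer"] = 3, ["work"] = 1 };
        int vocabSize  = 5; // distinct words across the whole (hypothetical) training set

        var mail = new[] { "pay", "off", "buy" };
        double spamScore = Score(mail, 0.5, spamCounts, vocabSize);
        double hamScore  = Score(mail, 0.5, hamCounts, vocabSize);
        Console.WriteLine(spamScore > hamScore ? "Spam" : "Ham"); // prints "Spam" here
    }
}
```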
3.3 Use C# to do the coding
a. Tokenize the mail content (subject): ignore the numbers and symbols to get a plain list of words, and remove some stop words as well.
b. Train on the data sets. I do this all in memory, without a database, just for the experiment.
WordFrequency is a model class that stores the training data.
Sample of some saved training data:
c. Classify the testing data:
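Below is a minimal end-to-end sketch covering the three steps above: tokenize, train in memory, classify. The class shapes and member names (Tokenize, Train, Classify, the WordFrequency fields) are assumptions for illustration, not the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Sketch of the WordFrequency model: per-class word counts kept in memory.
class WordFrequency
{
    public Dictionary<string, int> Counts { get; } = new();
    public int TotalWords { get; private set; }
    public int MailCount { get; set; }

    public void Add(string word)
    {
        Counts.TryGetValue(word, out int c);
        Counts[word] = c + 1;
        TotalWords++;
    }
}

class SpamClassifier
{
    static readonly HashSet<string> StopWords = new() { "a", "an", "the", "for", "with", "we", "in" };

    readonly WordFrequency spam = new();
    readonly WordFrequency ham = new();
    readonly HashSet<string> vocabulary = new();

    // a. Tokenize: lowercase, keep letter runs only (numbers and symbols dropped), remove stop words.
    public static IEnumerable<string> Tokenize(string text) =>
        Regex.Matches(text.ToLowerInvariant(), "[a-z]+")
             .Cast<Match>()
             .Select(m => m.Value)
             .Where(w => !StopWords.Contains(w));

    // b. Train: count word frequencies per class, all in memory.
    public void Train(string mail, bool isSpam)
    {
        var model = isSpam ? spam : ham;
        model.MailCount++;
        foreach (var word in Tokenize(mail))
        {
            model.Add(word);
            vocabulary.Add(word);
        }
    }

    // c. Classify: compare the Laplace-smoothed log-scores of the two classes.
    public string Classify(string mail) =>
        Score(mail, spam) > Score(mail, ham) ? "Spam" : "Ham";

    double Score(string mail, WordFrequency model)
    {
        int totalMails = spam.MailCount + ham.MailCount;
        double score = Math.Log((double)model.MailCount / totalMails); // log P(class)
        foreach (var word in Tokenize(mail))
        {
            model.Counts.TryGetValue(word, out int c);
            score += Math.Log((c + 1.0) / (model.TotalWords + vocabulary.Count)); // Laplace smoothing
        }
        return score;
    }
}

class Program
{
    static void Main()
    {
        var nb = new SpamClassifier();
        nb.Train("Get up to 10% pay off on Under Armour, you've still got time, but not much.", true);
        nb.Train("Wonderful news, Cold Gear under $50. Right here, Right Now!", true);
        nb.Train("Please help with the development work asap, we need to meet the deadline", false);
        nb.Train("JavaScript developer, Full Stack developer, good opportunity in Shanghai.", false);

        Console.WriteLine(nb.Classify("We provide 20% pay off, buy it!"));       // Spam
        Console.WriteLine(nb.Classify("Re: Applying for JavaScript developer")); // Ham
    }
}
```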