by Haicang
Nonparametric Methods
Histogram
A general formula
For a certain region $R$, the probability that a point falls in it is $P$, and the expected number of points in the region is $K = NP$. The probability density is then estimated as follows:

$$p(\mathbf{x}) \simeq \frac{K}{NV} \tag{5.1}$$
Here, $K$ is the number of points falling in the region, $N$ is the total number of points, and $V$ is the volume of the region.
So there are two ways to estimate $p(\mathbf{x})$. One is to fix $V$, the volume, and determine $K$ from the data (kernel methods); the other is the reverse, namely to fix $K$, the number of points, and vary the volume (nearest neighbours). As $N \to \infty$, the density estimates from both methods converge to the true distribution.
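As a quick sanity check of (5.1), here is a minimal sketch. It is not from the book; the 1-D standard Gaussian and the interval width are arbitrary choices for illustration.

```python
import numpy as np

# A tiny numerical check of p(x) ~= K / (N * V):
# sample N points from a 1-D standard Gaussian, count how many fall into a
# small interval around x = 0, and compare with the true density there.
rng = np.random.default_rng(0)
N = 100_000
samples = rng.standard_normal(N)

x, half_width = 0.0, 0.1        # region R = [x - 0.1, x + 0.1]
V = 2 * half_width              # "volume" of R in 1-D
K = np.sum(np.abs(samples - x) < half_width)

print("estimate:", K / (N * V))
print("true density at 0:", 1 / np.sqrt(2 * np.pi))   # ~0.399
```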
Kernel density estimators
Use $k(u)$ to denote a kernel function. It must satisfy some constraints:
$k(u)$ behaves like a pdf (probability density function): $k(u) \geq 0$ and $\int k(u)\,\mathrm{d}u = 1$.
Then $K$ in (5.1) is as follows:

$$K = \sum_{n=1}^{N} k\!\left(\frac{\mathbf{x} - \mathbf{x}_n}{h}\right)$$
Substituting this expression into (5.1), with $V = h^D$ for a $D$-dimensional input, we get

$$p(\mathbf{x}) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{h^D}\, k\!\left(\frac{\mathbf{x} - \mathbf{x}_n}{h}\right)$$
But there is still one detail I need to make sure of.
$h$ is a smoothing parameter. There is a trade-off between sensitivity to noise at small $h$ and over-smoothing at large $h$.
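Here is a minimal 1-D sketch of the estimator above. The Gaussian kernel, the toy data, and the particular $h$ values are my own choices to illustrate the smoothing trade-off.

```python
import numpy as np

def kernel_density(x_query, data, h):
    """p(x) = (1/N) * sum_n (1/h) * k((x - x_n) / h) in 1-D,
    with a Gaussian kernel as one possible choice of k(u)."""
    u = (x_query[:, None] - data[None, :]) / h       # shape (n_query, N)
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel values
    return k.mean(axis=1) / h

rng = np.random.default_rng(1)
data = rng.standard_normal(500)        # toy data drawn from N(0, 1)
xs = np.linspace(-3.0, 3.0, 7)

for h in (0.05, 0.3, 1.5):             # small h -> noisy, large h -> over-smoothed
    print(h, np.round(kernel_density(xs, data, h), 3))
```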
Nearest-neighbor methods
Consider a small sphere centred on the point $\mathbf{x}$ at which we want to estimate the density $p(\mathbf{x})$. We then allow the sphere to grow until it contains exactly $K$ data points. $K$ is again a smoothing parameter with a trade-off: a large $K$ causes over-smoothing, while a small $K$ suffers from noise. The model produced by KNN is not a true density model because its integral over all space diverges.
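A minimal 1-D sketch of this idea, assuming the "sphere" is just an interval around the query point; the data and the choice $K = 20$ are arbitrary:

```python
import numpy as np

def knn_density(x_query, data, K):
    """p(x) = K / (N * V), where V is the smallest interval around x
    (a 1-D "sphere") that contains K data points."""
    N = len(data)
    dists = np.sort(np.abs(x_query[:, None] - data[None, :]), axis=1)
    r_K = dists[:, K - 1]              # distance to the K-th nearest neighbour
    V = 2 * r_K                        # length of the interval [x - r_K, x + r_K]
    return K / (N * V)

rng = np.random.default_rng(2)
data = rng.standard_normal(500)
xs = np.linspace(-3.0, 3.0, 7)
print(np.round(knn_density(xs, data, K=20), 3))
```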
KNN classifier
For a point $\mathbf{x}$, its label will be the most frequent label among its $K$ nearest neighbours. As before, $K$ is also a smoothing parameter. This can be derived from the class-conditional version of (5.1), which leads to the intuitive result

$$p(C_k \mid \mathbf{x}) = \frac{K_k}{K}$$
Here, $K$ is the total number of points in the region, while $K_k$ is the number of points with label $C_k$ in the same region.
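To spell out the derivation (the standard argument: apply (5.1) separately to each class and to the whole data set, with $N_k$ points in class $C_k$):

$$p(\mathbf{x} \mid C_k) = \frac{K_k}{N_k V}, \qquad p(\mathbf{x}) = \frac{K}{NV}, \qquad p(C_k) = \frac{N_k}{N},$$

so by Bayes' theorem

$$p(C_k \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid C_k)\, p(C_k)}{p(\mathbf{x})} = \frac{K_k}{K}.$$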
KNN is very quick to train (some say it is not really learning at all), while prediction is very slow compared with other methods. But an interesting property is that, for $K = 1$ and $N \to \infty$, the error rate is never more than twice the Bayes error, which is the lower bound on the error achievable by any classifier.
The scikit-learn documentation for KNN can be found here.
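A small usage sketch of scikit-learn's `KNeighborsClassifier`; the iris data set and $K = 5$ are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)      # "training" just stores the data points
print("test accuracy:", clf.score(X_test, y_test))
```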
[1] Bishop. Pattern Recognition and Machine Learning