K-Nearest Neighbors - Iris Classification

For a test sample awaiting classification, KNN looks up the K labeled samples closest to it in feature space and uses them as the reference for the classification decision. Different choices of K give different classification results. K is not a parameter that the model learns from the training data, so it has to be fixed in advance when the model is initialized.
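
To make the decision rule concrete, here is a minimal NumPy sketch of the idea (an illustration only, not the sklearn implementation used below; the helper name knn_predict is made up for this example):

import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Classify one sample x by majority vote among its k nearest training samples."""
    # Euclidean distance from x to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # indices of the k closest training samples
    nearest = np.argsort(dists)[:k]
    # majority vote over the neighbors' labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]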

Below we use the KNN algorithm to classify biological species, on the most famous dataset of them all: the Iris dataset. The data was used by Fisher in a classic paper and is bundled as a textbook-style sample dataset in the sklearn toolkit.

Python source code:

#coding=utf-8
from sklearn.datasets import load_iris
#-------------
from sklearn.model_selection import train_test_split
#-------------
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
#-------------
from sklearn.metrics import classification_report


#-------------load data
iris=load_iris()
print('data shape', iris.data.shape)
#show data description
print(iris.DESCR)
#-------------split data
#75% training set,25% testing set
X_train,X_test,y_train,y_test=train_test_split(iris.data,iris.target,test_size=0.25,random_state=33)
#-------------classify
ss=StandardScaler()
X_train=ss.fit_transform(X_train)
#apply the scaling fitted on the training set to the test set
X_test=ss.transform(X_test)
#initialize
knc=KNeighborsClassifier()
#training model
knc.fit(X_train,y_train)
#run on test data
y_predict=knc.predict(X_test)
#-------------performance
print('The Accuracy is', knc.score(X_test, y_test))
print(classification_report(y_test, y_predict, target_names=iris.target_names))
Result:

data shape (150, 4)
Iris Plants Database
====================


Notes
-----
Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:


    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20  0.76     0.9565  (high!)
    ============== ==== ==== ======= ===== ====================


    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988


This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris


The famous Iris database, first used by Sir R.A Fisher


This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.


References
----------
   - Fisher,R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments".  IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...


The Accuracy is 0.710526315789
             precision    recall  f1-score   support

     setosa       1.00      1.00      1.00         8
 versicolor       0.50      1.00      0.67        11
  virginica       1.00      0.42      0.59        19

avg / total       0.86      0.71      0.70        38
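
As a quick check of how the choice of K affects this result, the classifier can be rerun with different n_neighbors values. This is a sketch that assumes the X_train, X_test, y_train, y_test variables from the script above; the exact accuracies will depend on the split and the scaling.

from sklearn.neighbors import KNeighborsClassifier

# assumes X_train, X_test, y_train, y_test from the script above
for k in (1, 3, 5, 10, 15):
    knc_k = KNeighborsClassifier(n_neighbors=k)
    knc_k.fit(X_train, y_train)
    print('k =', k, 'accuracy =', knc_k.score(X_test, y_test))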

The Iris dataset contains 150 iris samples, evenly distributed across 3 different subspecies; each sample is described by 4 sepal/petal shape features. The samples are stored in order of class, so the train/test split must be a genuinely random sample.
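
Since the rows are stored in class order, one way to make the random split explicit is to let train_test_split both shuffle (its default) and stratify, so each split keeps the 1:1:1 class ratio. A sketch using the stratify argument from sklearn.model_selection:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
# shuffle the class-ordered rows and keep the 1:1:1 class ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=33, stratify=iris.target)
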
Algorithm characteristics
What most distinguishes KNN from other models is that it has no parameter-training phase: it does not analyze the training data with any learning algorithm, but instead makes the classification decision directly from where the test sample falls within the distribution of the training data. KNN is therefore one of the simplest nonparametric models. This decision procedure is also what makes the computational and memory cost so high: for every test sample, the model must traverse all training samples preloaded in memory, compute a similarity to each one, sort them, and take the labels of the K nearest training samples before it can decide. The cost is roughly quadratic (every test sample is compared against every training sample), so once the data grows even moderately large, the extra computation time becomes a real trade-off.
In addition, data structures such as the KD-tree trade space for time to cut down the query cost; a brief sketch follows, and this will be discussed in more detail later.
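
As a preview, sklearn's KNeighborsClassifier already exposes this choice through its algorithm parameter; the predictions are the same either way, only the query cost differs. A sketch:

from sklearn.neighbors import KNeighborsClassifier

# brute force: every query scans all training samples
knc_brute = KNeighborsClassifier(n_neighbors=5, algorithm='brute')
# KD-tree: pay memory and build time up front to answer neighbor queries faster
# on low-dimensional data such as the 4 Iris features
knc_tree = KNeighborsClassifier(n_neighbors=5, algorithm='kd_tree')
# both are fit and used exactly like the default ('auto') classifier above, e.g.
# knc_tree.fit(X_train, y_train); knc_tree.predict(X_test)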

