Python Data Mining (2): A Simple Classification Problem

Next we will use the well-known Iris plant classification dataset. The dataset contains 150 plant records, and each record has four features: sepal length, sepal width, petal length and petal width (the length and width of the sepal and the petal), all measured in cm. It is one of the classic datasets in data mining. The dataset covers three classes: Iris Setosa, Iris Versicolour and Iris Virginica. Our goal here is to predict a plant's species from its features.

# Prepare the dataset
from sklearn.datasets import load_iris
import numpy as np

dataset = load_iris()
X = dataset.data
y = dataset.target
# print(dataset.DESCR)  # view the dataset description

n_samples, n_features = X.shape
print("Number of samples: {0}".format(n_samples))
print("Number of features: {0}".format(n_features))

# Discretize each feature at its mean (explained below)
attribute_means = X.mean(axis=0)
assert attribute_means.shape == (n_features,)
X_d = np.array(X >= attribute_means, dtype='int')

The feature values in this dataset are continuous, which means they can take any of an infinite number of possible values. That is what measured data looks like: a measurement might be 1, 1.2, 1.25 and so on. Another property of continuous values is that two values that are close together indicate high similarity: a plant with a sepal length of 1.2 cm is very similar to one with a sepal length of 1.25 cm.

Categories, by contrast, are discrete. Although categories are often encoded as numbers, those numbers cannot be compared by magnitude to judge similarity. The Iris dataset uses different numbers for different classes: 0, 1 and 2 stand for Iris Setosa, Iris Versicolour and Iris Virginica respectively. This does not mean that the first two species are more alike than the first and the third, even though the class numbers, taken at face value, would suggest so. Here the numbers only label the classes; they can only tell us whether two plants belong to the same species.

The features in the dataset are continuous, but the algorithm we are about to use expects categorical feature values, so we need to convert the continuous values into categories. This process is called discretization.

The simplest discretization method is to pick a threshold: feature values below the threshold become 0, and values at or above it become 1. We set each feature's threshold to the mean of that feature's values. The per-feature means are computed as follows.

attribute_means = X.mean(axis=0)
X_d = np.array(X >= attribute_means, dtype='int')
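To see the effect of the thresholding, we can compare one sample before and after discretization (illustrative only):

# The first sample: raw measurements, per-feature thresholds, and binary version
print("raw:        ", X[0])
print("thresholds: ", attribute_means)
print("discretized:", X_d[0])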

The OneR algorithm

The idea behind OneR is very simple: for a given feature value, it predicts the class that individuals with that same feature value most often belong to in the existing data. OneR is short for One Rule, meaning that out of the four features we select only the one that classifies best and use it alone as the basis for classification.
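Before looking at the implementation, it helps to see the shape of the model OneR ends up producing; the numbers below are purely hypothetical and only illustrate the structure:

# Illustrative only: a OneR model is one chosen feature plus a mapping from each
# of its (discretized) values to the most frequent class for that value.
example_model = {'variable': 2,              # index of the chosen feature (hypothetical)
                 'predictor': {0: 0, 1: 2}}  # feature value -> predicted class (hypothetical)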
Training and testing data

from sklearn.model_selection import train_test_split

# Set the random state to the same number to get the same results as in the book
random_state = 14

X_train, X_test, y_train, y_test = train_test_split(X_d, y, random_state=random_state)
print("There are {} training samples".format(X_train.shape))
print("There are {} testing samples".format(X_test.shape))

Implementation

from collections import defaultdict
from operator import itemgetter


def train(X, y_true, feature):

    # Check that variable is a valid number
    n_samples, n_features = X.shape
    assert 0 <= feature < n_features
    # Get all of the unique values that this variable has
    values = set(X[:,feature])
    # Stores the predictors array that is returned
    predictors = dict()
    errors = []
    for current_value in values:
        most_frequent_class, error = train_feature_value(X, y_true, feature, current_value)
        predictors[current_value] = most_frequent_class
        errors.append(error)
    # Compute the total error of using this feature to classify on
    total_error = sum(errors)
    return predictors, total_error


def train_feature_value(X, y_true, feature, value):
    # Create a simple dictionary to count how often each class appears for samples with this feature value
    class_counts = defaultdict(int)
    # Iterate through each sample and count the frequency of each class/value pair
    for sample, y in zip(X, y_true):
        if sample[feature] == value:
            class_counts[y] += 1
    # Now get the best one by sorting (highest first) and choosing the first item
    sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
    most_frequent_class = sorted_class_counts[0][0]
    # The error is the number of samples that do not classify as the most frequent class
    # *and* have the feature value.
    error = sum([class_count for class_value, class_count in class_counts.items()
                 if class_value != most_frequent_class])
    return most_frequent_class, error
# Compute all of the predictors
all_predictors = {variable: train(X_train, y_train, variable) for variable in range(X_train.shape[1])}
errors = {variable: error for variable, (mapping, error) in all_predictors.items()}
# Now choose the best and save that as "model"
# Sort by error
best_variable, best_error = sorted(errors.items(), key=itemgetter(1))[0]
print("The best model is based on variable {0} and has error {1:.2f}".format(best_variable, best_error))

# Choose the best model
model = {'variable': best_variable,
         'predictor': all_predictors[best_variable][0]}
print(model)
def predict(X_test, model):
    variable = model['variable']
    predictor = model['predictor']
    y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
    return y_predicted
y_predicted = predict(X_test, model)
print(y_predicted)
# Compute the accuracy by taking the mean of the amounts that y_predicted is equal to y_test
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))

from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted))