Shandong University Machine Learning: Decision Tree Lab

Decision Tree Lab

Lab Task

Implement an ID3 decision tree and evaluate it with 5-fold cross-validation on the given dataset. Observe the accuracy of the trained tree on the training set and on the test set to judge whether the tree overfits. On that basis, implement pre-pruning and post-pruning, and compare the accuracy of the pre-pruned and post-pruned trees on the training and test sets. The dataset is the Iris flower dataset.

Approach

Decision tree without pruning

A tree is represented as a nested list with three elements: the first is the splitting attribute together with its split point, the second is the left subtree, and the third is the right subtree. The subtrees use the same list structure, so the representation is recursive. Training on the training set means partitioning it, which is also done recursively: first find the attribute with the largest information gain and its corresponding split point, then use them to divide the dataset into a left part and a right part, which forms the root node. The two parts are then turned into subtrees by recursive calls on each of them. The recursion terminates when every sample in a partition belongs to the same class; that partition then becomes a leaf node.
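
For example, a trained tree in this representation might look like the following nested list (the concrete numbers here are illustrative, not taken from a real run):

    [[2, 2.45],       # root: split on attribute 2 at 2.45
     [0],             # left subtree: a leaf predicting class 0
     [[3, 1.75],      # right subtree: another internal node
      [1],            # leaf predicting class 1
      [2]]]           # leaf predicting class 2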

Decision tree with pre-pruning

The basic idea of pre-pruning is to decide, at the moment an optimal attribute is about to split a dataset, whether the split is worth making: compute the test-set accuracy of the tree that keeps the node as a leaf, and the test-set accuracy of the tree that performs this one split, and keep whichever is higher. This only requires an extra check inside the recursive splitting method. Because the method must evaluate the whole tree while it works on one node, the tree is stored as a class attribute (self.preTree); and because each split must be visible through that attribute, the method edits its argument in place with clear() and append() instead of building a new list (a usage sketch appears in the key-code section below).

Decision tree with post-pruning

First, a decision tree is built from the training set, but each leaf keeps the subset of data that reached it rather than a class label. The tree is then pruned from the bottom up: for each internal node, compare the test-set accuracy of the tree in which the node is collapsed into a leaf (i.e. its data is not split on its attribute) against the current accuracy, and keep whichever tree is more accurate. Because nodes are examined bottom-up, this maps naturally onto the backtracking phase of a recursion: the method first recurses into the left and right subtrees to prune them, and only afterwards examines the current node, which is considered for collapsing only when both of its children are leaves. The recursion stops when it reaches a leaf, i.e. a raw dataset.

Key Code Walkthrough

Computing the information entropy of a dataset, Ent(D) = -Σ p_k · log2(p_k), where p_k is the proportion of class k:


    def Entropy(self, dataset):
        labelNum = np.zeros(3)              # sample count per class
        for row in dataset:
            labelNum[row[-1]] += 1
        en = 0.0
        length = len(dataset)
        for i in range(3):
            if labelNum[i] != 0:            # skip empty classes: log2(0) is undefined
                en -= (labelNum[i] / length) * math.log2(labelNum[i] / length)
        return en
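
As a quick sanity check on a hypothetical subset (only the last element of each row, the class label, matters to Entropy): with 8 samples of class 0 and 4 of class 1, the entropy is -(2/3)·log2(2/3) - (1/3)·log2(1/3) ≈ 0.918:

    dt = DecisionTree()
    subset = [[0, 0, 0, 0, 0]] * 8 + [[0, 0, 0, 0, 1]] * 4
    print(dt.Entropy(subset))   # ≈ 0.9183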

Computing the information gain obtained by splitting the training set on attribute feature at split point point:

    def Gain(self, dataset, feature, point):
        En_Sum = self.Entropy(dataset)
        sub_left = []                       # samples with value <= point
        sub_right = []                      # samples with value >  point
        for i in dataset:
            if i[feature] <= point:
                sub_left.append(i)
            else:
                sub_right.append(i)
        gain = (En_Sum
                - self.Entropy(sub_left) * len(sub_left) / len(dataset)
                - self.Entropy(sub_right) * len(sub_right) / len(dataset))
        return gain
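
Written out, this computes Gain(D, a, t) = Ent(D) - |D_left|/|D| · Ent(D_left) - |D_right|/|D| · Ent(D_right). A minimal check on a hypothetical four-sample set: splitting attribute 0 at 2.5 separates the two classes perfectly, so the gain equals the full entropy:

    dt = DecisionTree()
    toy = [[1.0, 0, 0, 0, 0], [2.0, 0, 0, 0, 0],
           [3.0, 0, 0, 0, 1], [4.0, 0, 0, 0, 1]]
    print(dt.Gain(toy, 0, 2.5))   # 1.0: a perfect split removes all entropy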

Enumerating every candidate split point of each attribute in the training set; the returned structure is {0: [split points of attribute 0], 1: [split points of attribute 1], 2: [split points of attribute 2], 3: [split points of attribute 3]}:

    def devidePoints(self, dataset):
        value = {0: [], 1: [], 2: [], 3: []}
        for data in dataset:
            for i in range(4):
                value[i].append(data[i])
        for i in range(4):
            value[i] = sorted(set(value[i]))    # distinct attribute values, ascending
        points = {0: [], 1: [], 2: [], 3: []}
        for i in range(4):
            for j in range(len(value[i]) - 1):
                # candidate split point: midpoint of two adjacent distinct values
                points[i].append((value[i][j] + value[i][j + 1]) / 2)
        return points
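
For instance, if an attribute takes the distinct values 1.4, 1.5 and 1.7 in the training set (hypothetical values), its candidate split points are the midpoints of adjacent values:

    dt = DecisionTree()
    toy = [[1.4, 0, 0, 0, 0], [1.5, 0, 0, 0, 0], [1.7, 0, 0, 0, 1]]
    print(dt.devidePoints(toy)[0])   # [1.45, 1.6], up to float rounding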

Using all the candidate points obtained above, finding the attribute with the largest information gain and its corresponding split point:

    def feature_point(self, dataset, points):
        gains = {0: {}, 1: {}, 2: {}, 3: {}}
        for i in range(4):
            for j in points[i]:
                gains[i][j] = self.Gain(dataset, i, j)
        feature = 0
        point = 0.0
        best_gain = 0.0
        for i in range(4):
            for j in gains[i].keys():
                if gains[i][j] > best_gain:
                    best_gain = gains[i][j]     # track the running maximum
                    feature = i
                    point = j
        return feature, point
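
Continuing the hypothetical four-sample set from the Gain example above, the method finds the perfect split on attribute 0 at 2.5:

    points = dt.devidePoints(toy)          # toy as in the Gain example
    print(dt.feature_point(toy, points))   # (0, 2.5)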

Splitting for the unpruned decision tree:

    def divide(self, tree):
        if self.isLabel(tree):
            # all samples share one class: create a leaf holding that label
            return [tree[0][-1]]
        else:
            points = self.devidePoints(tree)
            feature, point = self.feature_point(tree, points)
            sub_left = []
            sub_right = []
            for i in tree:
                if i[feature] <= point:
                    sub_left.append(i)
                else:
                    sub_right.append(i)
            # node structure: [[feature, point], left subtree, right subtree]
            return [[feature, point], self.divide(sub_left), self.divide(sub_right)]
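
Building a full tree from the Iris data is then a single call (this sketch assumes iris.data is in the working directory):

    dt = DecisionTree()
    dataset = dt.load_data()
    tree = dt.divide(dataset)
    print(tree[0])   # the root split, e.g. [2, 2.45]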

Splitting for the decision tree with pre-pruning:

    def pre_devide(self, tree):
        if self.isLabel(tree):
            # pure partition: turn it into a leaf in place, so self.preTree sees it
            l = tree[0][-1]
            tree.clear()
            tree.append(l)
        else:
            tree_save = copy.deepcopy(tree)
            # accuracy if this node stays a leaf with the majority label
            label = self.max_label(tree)
            tree.clear()
            tree.append(label)
            rate1 = self.rate(self.preTree, self.testData)
            # accuracy if this node is split once, with both children as
            # majority-label leaves
            points = self.devidePoints(tree_save)
            feature, point = self.feature_point(tree_save, points)
            fp = [feature, point]
            sub_left = []
            sub_right = []
            for i in tree_save:
                if i[feature] <= point:
                    sub_left.append(i)
                else:
                    sub_right.append(i)
            tree.clear()
            tree.append(fp)
            tree.append([self.max_label(sub_left)])
            tree.append([self.max_label(sub_right)])
            rate2 = self.rate(self.preTree, self.testData)
            if rate1 > rate2:
                # splitting does not help: keep the leaf
                tree.clear()
                tree.append(label)
            else:
                # splitting helps (ties favor splitting): keep it and recurse
                tree.clear()
                tree.append(fp)
                tree.append(sub_left)
                tree.append(sub_right)
                self.pre_devide(tree[1])
                self.pre_devide(tree[2])
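
Because pre_devide scores self.preTree on self.testData while mutating its argument in place, both attributes must be set beforehand, and the argument must be the very list object stored in self.preTree. A usage sketch, with train_set and test_set standing for one hypothetical cross-validation pair (this mirrors how AveragePrecision below drives the method):

    dt.preTree = copy.deepcopy(train_set)
    dt.testData = test_set
    dt.pre_devide(dt.preTree)   # self.preTree now holds the pre-pruned tree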

Generating a decision tree whose leaf nodes are datasets, in preparation for post-pruning:

    def pre_post_divide(self, tree):
        if self.isLabel(tree):
            return tree                      # leaf: keep the data subset itself
        else:
            points = self.devidePoints(tree)
            feature, point = self.feature_point(tree, points)
            sub_left = []
            sub_right = []
            for i in tree:
                if i[feature] <= point:
                    sub_left.append(i)
                else:
                    sub_right.append(i)
            return [[feature, point],
                    self.pre_post_divide(sub_left),
                    self.pre_post_divide(sub_right)]

Post-pruning the decision tree generated by the method above:

    def post_divide(self, tree):
        if len(tree[0]) == 5:
            return                           # leaf: tree[0] is a 5-element data row
        # prune both subtrees first (bottom-up, via post-order recursion)
        self.post_divide(tree[1])
        self.post_divide(tree[2])
        if len(tree[1][0]) == 5 and len(tree[2][0]) == 5:
            # both children are leaves: try collapsing this node into one leaf
            rate1 = self.rate(self.postTree, self.testData)
            tree_save = copy.deepcopy(tree)
            tree.clear()
            tree.extend(tree_save[1])        # merge the two children's datasets
            tree.extend(tree_save[2])
            rate2 = self.rate(self.postTree, self.testData)
            if rate1 > rate2:
                # collapsing hurt accuracy: restore the split
                tree.clear()
                tree.extend(tree_save)
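
Post-pruning follows the same calling convention, again with train_set and test_set standing for one hypothetical fold (and again mirroring AveragePrecision below):

    dt.testData = test_set
    dt.postTree = dt.pre_post_divide(copy.deepcopy(train_set))
    dt.post_divide(dt.postTree)   # self.postTree is pruned in place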

Given a trained tree, determining the class of a sample data:

    def Label(self, tree, data):
        if len(tree) == 1 and not isinstance(tree[0], list):
            return tree[0]                   # leaf storing a class label
        if len(tree[0]) == 5:
            return self.max_label(tree)      # leaf storing a dataset (rows have 5 values)
        else:
            if data[tree[0][0]] <= tree[0][1]:
                return self.Label(tree[1], data)    # value <= split point: go left
            else:
                return self.Label(tree[2], data)    # otherwise: go right
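
For example, classifying the first Iris-setosa sample with a tree trained as in the sketch above:

    sample = [5.1, 3.5, 1.4, 0.2, 0]
    print(dt.Label(tree, sample))   # 0, i.e. Iris-setosa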

Computing, under 5-fold cross-validation, the accuracy of the three decision trees on the training and test sets:

    def AveragePrecision(self, sum_data):
        sum_data_save = copy.deepcopy(sum_data)
        train_rate = np.zeros(5, float)
        test_rate = np.zeros(5, float)

        # unpruned trees
        j = 0
        for i in sum_data:
            tree = self.divide(i[0])
            train_rate[j] = self.rate(tree, sum_data_save[j][0])
            test_rate[j] = self.rate(tree, sum_data_save[j][1])
            j += 1
        print("Unpruned decision tree")
        print("Training set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(train_rate[i]), end='')
        print("\n")
        print("Test set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(test_rate[i]), end='')
        print("\n")

        # pre-pruned trees
        sum_data2 = copy.deepcopy(sum_data_save)
        j = 0
        for i in sum_data2:
            self.preTree = i[0]
            self.testData = i[1]
            self.pre_devide(self.preTree)
            train_rate[j] = self.rate(self.preTree, sum_data_save[j][0])
            test_rate[j] = self.rate(self.preTree, sum_data_save[j][1])
            j += 1
        print("Pre-pruned decision tree")
        print("Training set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(train_rate[i]), end='')
        print("\n")
        print("Test set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(test_rate[i]), end='')
        print("\n")

        # post-pruned trees
        sum_data3 = copy.deepcopy(sum_data_save)
        j = 0
        for i in sum_data3:
            self.testData = i[1]
            self.postTree = self.pre_post_divide(i[0])
            self.post_divide(self.postTree)
            train_rate[j] = self.rate(self.postTree, sum_data_save[j][0])
            test_rate[j] = self.rate(self.postTree, sum_data_save[j][1])
            j += 1
        print("Post-pruned decision tree")
        print("Training set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(train_rate[i]), end='')
        print("\n")
        print("Test set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(test_rate[i]), end='')
Results and Analysis

D:\anaconda\python.exe D:/pycharmproject/decisiontree/DecisionTreeID3.py
Unpruned decision tree
Training set: 100.00%  100.00%  100.00%  100.00%  100.00%  

Test set: 96.67%  93.33%  96.67%  100.00%  93.33%  

Pre-pruned decision tree
Training set: 99.17%  100.00%  99.17%  100.00%  100.00%  

Test set: 96.67%  93.33%  96.67%  100.00%  93.33%  

Post-pruned decision tree
Training set: 95.83%  90.83%  95.00%  94.17%  96.67%  

Test set: 96.67%  90.00%  96.67%  93.33%  93.33%  
Process finished with exit code 0

Each row shows the accuracy of one kind of tree on the 5 training/test pairs. The unpruned tree keeps splitting the training set until every leaf is pure, so its training accuracy is always 100%. On two of the test folds, however, accuracy drops to 93.33%, a noticeable gap from the corresponding training accuracy. The training set contains some atypical samples; the fully grown tree carves out regions that fit these samples exactly, which hurts its predictions on ordinary samples in the test set. This gap between training and test accuracy is the signature of overfitting. The pre-pruned tree brings the training and test accuracies closer together, and the post-pruned tree brings them closer still, so pruning does mitigate the overfitting.

Full Source Code

# -*-coding:utf-8-*-
import random
import numpy as np
import math
import copy


class DecisionTree:
    def __init__(self):
        self.label = {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2}
        self.preTree = []    # tree being pre-pruned (scored as a whole during pruning)
        self.postTree = []   # tree being post-pruned
        self.testData = []   # held-out fold used to score pruning decisions

    # load the dataset into a list of rows, [[...], [...], ...], one inner list per
    # sample; class names are mapped to integers for easier handling
    def load_data(self):
        dataset = []
        with open('iris.data', 'r') as f:
            for line in f:
                line = line.strip()          # drop leading/trailing whitespace
                data = line.split(',')
                if data[-1] != '':           # skip blank lines
                    for i in range(4):
                        data[i] = float(data[i])
                    data[-1] = self.label[data[-1]]
                    dataset.append(data)
        return dataset

    # shuffle the data and split it into 5 folds of 30, then build 5 pairs; the
    # final structure is [[training set, test set], [training set, test set], ...]
    def k_cross(self, dataset):
        dataIndex = list(range(150))
        random.shuffle(dataIndex)
        k_data = []
        count = 0
        part = []
        for i in dataIndex:
            if count == 30:                  # the current fold is full: start a new one
                k_data.append(part)
                count = 0
                part = []
            part.append(dataset[i])
            count += 1
        k_data.append(part)
        sum_data = []
        for i in range(5):
            train_set = []
            test_set = k_data[i]             # fold i is the test set
            for j in range(5):
                if j != i:                   # the other four folds form the training set
                    train_set.extend(k_data[j])
            sum_data.append([train_set, test_set])
        return sum_data

    # information entropy of a dataset: Ent(D) = -sum_k p_k * log2(p_k)
    def Entropy(self, dataset):
        labelNum = np.zeros(3)              # sample count per class
        for row in dataset:
            labelNum[row[-1]] += 1
        en = 0.0
        length = len(dataset)
        for i in range(3):
            if labelNum[i] != 0:            # skip empty classes: log2(0) is undefined
                en -= (labelNum[i] / length) * math.log2(labelNum[i] / length)
        return en

    # information gain of splitting the training set on attribute feature at point
    def Gain(self, dataset, feature, point):
        En_Sum = self.Entropy(dataset)
        sub_left = []                       # samples with value <= point
        sub_right = []                      # samples with value >  point
        for i in dataset:
            if i[feature] <= point:
                sub_left.append(i)
            else:
                sub_right.append(i)
        gain = (En_Sum
                - self.Entropy(sub_left) * len(sub_left) / len(dataset)
                - self.Entropy(sub_right) * len(sub_right) / len(dataset))
        return gain

    # all candidate split points of each attribute; the result is
    # {0: [points of attribute 0], ..., 3: [points of attribute 3]}
    def devidePoints(self, dataset):
        value = {0: [], 1: [], 2: [], 3: []}
        for data in dataset:
            for i in range(4):
                value[i].append(data[i])
        for i in range(4):
            value[i] = sorted(set(value[i]))    # distinct attribute values, ascending
        points = {0: [], 1: [], 2: [], 3: []}
        for i in range(4):
            for j in range(len(value[i]) - 1):
                # candidate split point: midpoint of two adjacent distinct values
                points[i].append((value[i][j] + value[i][j + 1]) / 2)
        return points

    # among the candidate points, find the attribute and split point with the
    # largest information gain
    def feature_point(self, dataset, points):
        gains = {0: {}, 1: {}, 2: {}, 3: {}}
        for i in range(4):
            for j in points[i]:
                gains[i][j] = self.Gain(dataset, i, j)
        feature = 0
        point = 0.0
        best_gain = 0.0
        for i in range(4):
            for j in gains[i].keys():
                if gains[i][j] > best_gain:
                    best_gain = gains[i][j]     # track the running maximum
                    feature = i
                    point = j
        return feature, point

    # check whether all samples in the dataset belong to the same class
    def isLabel(self, dataset):
        labels = set()
        for i in dataset:
            labels.add(i[-1])
        return len(labels) == 1

    # recursively split the training set into an unpruned tree
    def divide(self, tree):
        if self.isLabel(tree):
            # all samples share one class: create a leaf holding that label
            return [tree[0][-1]]
        else:
            points = self.devidePoints(tree)
            feature, point = self.feature_point(tree, points)
            sub_left = []
            sub_right = []
            for i in tree:
                if i[feature] <= point:
                    sub_left.append(i)
                else:
                    sub_right.append(i)
            # node structure: [[feature, point], left subtree, right subtree]
            return [[feature, point], self.divide(sub_left), self.divide(sub_right)]

    # classify the sample data with a trained tree
    def Label(self, tree, data):
        if len(tree) == 1 and not isinstance(tree[0], list):
            return tree[0]                   # leaf storing a class label
        if len(tree[0]) == 5:
            return self.max_label(tree)      # leaf storing a dataset (rows have 5 values)
        else:
            if data[tree[0][0]] <= tree[0][1]:
                return self.Label(tree[1], data)    # value <= split point: go left
            else:
                return self.Label(tree[2], data)    # otherwise: go right

    # accuracy of tree on test_set
    def rate(self, tree, test_set):
        right = 0
        for i in test_set:
            if self.Label(tree, i) == i[-1]:
                right += 1
        return right / len(test_set)

    # most frequent class in a dataset
    def max_label(self, dataset):
        labelNum = np.zeros(3)              # sample count per class
        for row in dataset:
            labelNum[row[-1]] += 1
        return int(np.argmax(labelNum))     # index of the most frequent class

    # split with pre-pruning: keep a split only if it does not hurt test accuracy
    def pre_devide(self, tree):
        if self.isLabel(tree):
            # pure partition: turn it into a leaf in place, so self.preTree sees it
            l = tree[0][-1]
            tree.clear()
            tree.append(l)
        else:
            tree_save = copy.deepcopy(tree)
            # accuracy if this node stays a leaf with the majority label
            label = self.max_label(tree)
            tree.clear()
            tree.append(label)
            rate1 = self.rate(self.preTree, self.testData)
            # accuracy if this node is split once, with both children as
            # majority-label leaves
            points = self.devidePoints(tree_save)
            feature, point = self.feature_point(tree_save, points)
            fp = [feature, point]
            sub_left = []
            sub_right = []
            for i in tree_save:
                if i[feature] <= point:
                    sub_left.append(i)
                else:
                    sub_right.append(i)
            tree.clear()
            tree.append(fp)
            tree.append([self.max_label(sub_left)])
            tree.append([self.max_label(sub_right)])
            rate2 = self.rate(self.preTree, self.testData)
            if rate1 > rate2:
                # splitting does not help: keep the leaf
                tree.clear()
                tree.append(label)
            else:
                # splitting helps (ties favor splitting): keep it and recurse
                tree.clear()
                tree.append(fp)
                tree.append(sub_left)
                tree.append(sub_right)
                self.pre_devide(tree[1])
                self.pre_devide(tree[2])

    # build a tree whose leaves keep their data subsets, preparing for post-pruning
    def pre_post_divide(self, tree):
        if self.isLabel(tree):
            return tree                      # leaf: keep the data subset itself
        else:
            points = self.devidePoints(tree)
            feature, point = self.feature_point(tree, points)
            sub_left = []
            sub_right = []
            for i in tree:
                if i[feature] <= point:
                    sub_left.append(i)
                else:
                    sub_right.append(i)
            return [[feature, point],
                    self.pre_post_divide(sub_left),
                    self.pre_post_divide(sub_right)]

    # post-prune the tree built by pre_post_divide, from the bottom up
    def post_divide(self, tree):
        if len(tree[0]) == 5:
            return                           # leaf: tree[0] is a 5-element data row
        # prune both subtrees first (bottom-up, via post-order recursion)
        self.post_divide(tree[1])
        self.post_divide(tree[2])
        if len(tree[1][0]) == 5 and len(tree[2][0]) == 5:
            # both children are leaves: try collapsing this node into one leaf
            rate1 = self.rate(self.postTree, self.testData)
            tree_save = copy.deepcopy(tree)
            tree.clear()
            tree.extend(tree_save[1])        # merge the two children's datasets
            tree.extend(tree_save[2])
            rate2 = self.rate(self.postTree, self.testData)
            if rate1 > rate2:
                # collapsing hurt accuracy: restore the split
                tree.clear()
                tree.extend(tree_save)

    # training- and test-set accuracy of the three trees under 5-fold cross-validation
    def AveragePrecision(self, sum_data):
        sum_data_save = copy.deepcopy(sum_data)
        train_rate = np.zeros(5, float)
        test_rate = np.zeros(5, float)

        # unpruned trees
        j = 0
        for i in sum_data:
            tree = self.divide(i[0])
            train_rate[j] = self.rate(tree, sum_data_save[j][0])
            test_rate[j] = self.rate(tree, sum_data_save[j][1])
            j += 1
        print("Unpruned decision tree")
        print("Training set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(train_rate[i]), end='')
        print("\n")
        print("Test set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(test_rate[i]), end='')
        print("\n")

        # pre-pruned trees
        sum_data2 = copy.deepcopy(sum_data_save)
        j = 0
        for i in sum_data2:
            self.preTree = i[0]
            self.testData = i[1]
            self.pre_devide(self.preTree)
            train_rate[j] = self.rate(self.preTree, sum_data_save[j][0])
            test_rate[j] = self.rate(self.preTree, sum_data_save[j][1])
            j += 1
        print("Pre-pruned decision tree")
        print("Training set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(train_rate[i]), end='')
        print("\n")
        print("Test set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(test_rate[i]), end='')
        print("\n")

        # post-pruned trees
        sum_data3 = copy.deepcopy(sum_data_save)
        j = 0
        for i in sum_data3:
            self.testData = i[1]
            self.postTree = self.pre_post_divide(i[0])
            self.post_divide(self.postTree)
            train_rate[j] = self.rate(self.postTree, sum_data_save[j][0])
            test_rate[j] = self.rate(self.postTree, sum_data_save[j][1])
            j += 1
        print("Post-pruned decision tree")
        print("Training set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(train_rate[i]), end='')
        print("\n")
        print("Test set: ", end='')
        for i in range(5):
            print("{:.2%}  ".format(test_rate[i]), end='')


if __name__ == '__main__':
    decisionTree = DecisionTree()
    dataset = decisionTree.load_data()
    sum_data = decisionTree.k_cross(dataset)
    decisionTree.AveragePrecision(sum_data)

The Iris Dataset

Description: the Iris dataset records the sepal and petal length and width of iris plants, four attributes in total, and the plants fall into three classes. Each sample therefore has 4 feature variables and 1 class variable, and there are 150 samples in all. The three subspecies, which are also the three target classes, are Iris-setosa, Iris-versicolor and Iris-virginica, so the task is three-class classification.
For example, in the sample
5.1, 3.5, 1.4, 0.2, Iris-setosa
the values 5.1, 3.5, 1.4, 0.2 are the four attribute values and Iris-setosa is the class.

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.4,3.7,1.5,0.2,Iris-setosa
4.8,3.4,1.6,0.2,Iris-setosa
4.8,3.0,1.4,0.1,Iris-setosa
4.3,3.0,1.1,0.1,Iris-setosa
5.8,4.0,1.2,0.2,Iris-setosa
5.7,4.4,1.5,0.4,Iris-setosa
5.4,3.9,1.3,0.4,Iris-setosa
5.1,3.5,1.4,0.3,Iris-setosa
5.7,3.8,1.7,0.3,Iris-setosa
5.1,3.8,1.5,0.3,Iris-setosa
5.4,3.4,1.7,0.2,Iris-setosa
5.1,3.7,1.5,0.4,Iris-setosa
4.6,3.6,1.0,0.2,Iris-setosa
5.1,3.3,1.7,0.5,Iris-setosa
4.8,3.4,1.9,0.2,Iris-setosa
5.0,3.0,1.6,0.2,Iris-setosa
5.0,3.4,1.6,0.4,Iris-setosa
5.2,3.5,1.5,0.2,Iris-setosa
5.2,3.4,1.4,0.2,Iris-setosa
4.7,3.2,1.6,0.2,Iris-setosa
4.8,3.1,1.6,0.2,Iris-setosa
5.4,3.4,1.5,0.4,Iris-setosa
5.2,4.1,1.5,0.1,Iris-setosa
5.5,4.2,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.0,3.2,1.2,0.2,Iris-setosa
5.5,3.5,1.3,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
4.4,3.0,1.3,0.2,Iris-setosa
5.1,3.4,1.5,0.2,Iris-setosa
5.0,3.5,1.3,0.3,Iris-setosa
4.5,2.3,1.3,0.3,Iris-setosa
4.4,3.2,1.3,0.2,Iris-setosa
5.0,3.5,1.6,0.6,Iris-setosa
5.1,3.8,1.9,0.4,Iris-setosa
4.8,3.0,1.4,0.3,Iris-setosa
5.1,3.8,1.6,0.2,Iris-setosa
4.6,3.2,1.4,0.2,Iris-setosa
5.3,3.7,1.5,0.2,Iris-setosa
5.0,3.3,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.9,3.1,4.9,1.5,Iris-versicolor
5.5,2.3,4.0,1.3,Iris-versicolor
6.5,2.8,4.6,1.5,Iris-versicolor
5.7,2.8,4.5,1.3,Iris-versicolor
6.3,3.3,4.7,1.6,Iris-versicolor
4.9,2.4,3.3,1.0,Iris-versicolor
6.6,2.9,4.6,1.3,Iris-versicolor
5.2,2.7,3.9,1.4,Iris-versicolor
5.0,2.0,3.5,1.0,Iris-versicolor
5.9,3.0,4.2,1.5,Iris-versicolor
6.0,2.2,4.0,1.0,Iris-versicolor
6.1,2.9,4.7,1.4,Iris-versicolor
5.6,2.9,3.6,1.3,Iris-versicolor
6.7,3.1,4.4,1.4,Iris-versicolor
5.6,3.0,4.5,1.5,Iris-versicolor
5.8,2.7,4.1,1.0,Iris-versicolor
6.2,2.2,4.5,1.5,Iris-versicolor
5.6,2.5,3.9,1.1,Iris-versicolor
5.9,3.2,4.8,1.8,Iris-versicolor
6.1,2.8,4.0,1.3,Iris-versicolor
6.3,2.5,4.9,1.5,Iris-versicolor
6.1,2.8,4.7,1.2,Iris-versicolor
6.4,2.9,4.3,1.3,Iris-versicolor
6.6,3.0,4.4,1.4,Iris-versicolor
6.8,2.8,4.8,1.4,Iris-versicolor
6.7,3.0,5.0,1.7,Iris-versicolor
6.0,2.9,4.5,1.5,Iris-versicolor
5.7,2.6,3.5,1.0,Iris-versicolor
5.5,2.4,3.8,1.1,Iris-versicolor
5.5,2.4,3.7,1.0,Iris-versicolor
5.8,2.7,3.9,1.2,Iris-versicolor
6.0,2.7,5.1,1.6,Iris-versicolor
5.4,3.0,4.5,1.5,Iris-versicolor
6.0,3.4,4.5,1.6,Iris-versicolor
6.7,3.1,4.7,1.5,Iris-versicolor
6.3,2.3,4.4,1.3,Iris-versicolor
5.6,3.0,4.1,1.3,Iris-versicolor
5.5,2.5,4.0,1.3,Iris-versicolor
5.5,2.6,4.4,1.2,Iris-versicolor
6.1,3.0,4.6,1.4,Iris-versicolor
5.8,2.6,4.0,1.2,Iris-versicolor
5.0,2.3,3.3,1.0,Iris-versicolor
5.6,2.7,4.2,1.3,Iris-versicolor
5.7,3.0,4.2,1.2,Iris-versicolor
5.7,2.9,4.2,1.3,Iris-versicolor
6.2,2.9,4.3,1.3,Iris-versicolor
5.1,2.5,3.0,1.1,Iris-versicolor
5.7,2.8,4.1,1.3,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
7.1,3.0,5.9,2.1,Iris-virginica
6.3,2.9,5.6,1.8,Iris-virginica
6.5,3.0,5.8,2.2,Iris-virginica
7.6,3.0,6.6,2.1,Iris-virginica
4.9,2.5,4.5,1.7,Iris-virginica
7.3,2.9,6.3,1.8,Iris-virginica
6.7,2.5,5.8,1.8,Iris-virginica
7.2,3.6,6.1,2.5,Iris-virginica
6.5,3.2,5.1,2.0,Iris-virginica
6.4,2.7,5.3,1.9,Iris-virginica
6.8,3.0,5.5,2.1,Iris-virginica
5.7,2.5,5.0,2.0,Iris-virginica
5.8,2.8,5.1,2.4,Iris-virginica
6.4,3.2,5.3,2.3,Iris-virginica
6.5,3.0,5.5,1.8,Iris-virginica
7.7,3.8,6.7,2.2,Iris-virginica
7.7,2.6,6.9,2.3,Iris-virginica
6.0,2.2,5.0,1.5,Iris-virginica
6.9,3.2,5.7,2.3,Iris-virginica
5.6,2.8,4.9,2.0,Iris-virginica
7.7,2.8,6.7,2.0,Iris-virginica
6.3,2.7,4.9,1.8,Iris-virginica
6.7,3.3,5.7,2.1,Iris-virginica
7.2,3.2,6.0,1.8,Iris-virginica
6.2,2.8,4.8,1.8,Iris-virginica
6.1,3.0,4.9,1.8,Iris-virginica
6.4,2.8,5.6,2.1,Iris-virginica
7.2,3.0,5.8,1.6,Iris-virginica
7.4,2.8,6.1,1.9,Iris-virginica
7.9,3.8,6.4,2.0,Iris-virginica
6.4,2.8,5.6,2.2,Iris-virginica
6.3,2.8,5.1,1.5,Iris-virginica
6.1,2.6,5.6,1.4,Iris-virginica
7.7,3.0,6.1,2.3,Iris-virginica
6.3,3.4,5.6,2.4,Iris-virginica
6.4,3.1,5.5,1.8,Iris-virginica
6.0,3.0,4.8,1.8,Iris-virginica
6.9,3.1,5.4,2.1,Iris-virginica
6.7,3.1,5.6,2.4,Iris-virginica
6.9,3.1,5.1,2.3,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
6.8,3.2,5.9,2.3,Iris-virginica
6.7,3.3,5.7,2.5,Iris-virginica
6.7,3.0,5.2,2.3,Iris-virginica
6.3,2.5,5.0,1.9,Iris-virginica
6.5,3.0,5.2,2.0,Iris-virginica
6.2,3.4,5.4,2.3,Iris-virginica
5.9,3.0,5.1,1.8,Iris-virginica