Introduction to the Decision Tree Interface in OpenCV 3.3 and Its Usage


OpenCV 3.3 provides an implementation of the Decision Trees algorithm, the cv::ml::DTrees class. The class is declared in include/opencv2/ml.hpp and implemented in modules/ml/src/tree.cpp. Its main members are listed below, followed by a minimal usage sketch:

(1) The cv::ml::DTrees class: derives from cv::ml::StatModel, which in turn derives from cv::Algorithm;

(2) create: a static function that creates a DTrees object (internally it news a DTreesImpl object);

(3) setMaxCategories/getMaxCategories: set/get the maximum number of categories; the default value is 10;

(4) setMaxDepth/getMaxDepth: set/get the maximum depth of the tree; the default value is INT_MAX;

(5) setMinSampleCount/getMinSampleCount: set/get the minimum number of training samples per node; the default value is 10;

(6) setCVFolds/getCVFolds: set/get the number of cross-validation folds; the default value is 10; if the value is greater than 1, the built tree is pruned using cross-validation;

(7) setUseSurrogates/getUseSurrogates: set/get whether surrogate splits are used; the default value is false;

(8) setUse1SERule/getUse1SERule: set/get whether the 1-SE rule is used; the default value is true;

(9) setTruncatePrunedTree/getTruncatePrunedTree: set/get whether pruned branches are physically removed from the tree; the default value is true;

(10) setRegressionAccuracy/getRegressionAccuracy: set/get the termination criterion for regression trees; the default value is 0.01;

(11) setPriors/getPriors: set/get the prior class probabilities used to bias the decision tree; the default value is an empty Mat;

(12) getRoots: get the indices of the root nodes;

(13) getNodes: get all the nodes of the trees;

(14) getSplits: get all the splits;

(15) getSubsets: get all the bitsets for categorical splits;

(16) load: load a serialized model file.
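
To make the interface above concrete, the following is a minimal sketch (not part of the original test code) that configures a DTrees classifier with the setters listed above, trains it on a tiny made-up 2-D dataset, inspects the trained tree via getRoots/getNodes/getSplits, and predicts one sample; the data and parameter values are illustrative only:

#include <cstdio>
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

int main()
{
	// tiny synthetic 2-D dataset: class 0 near the origin, class 1 near (5, 5)
	float samples[8][2] = { { 0.1f, 0.2f }, { 0.3f, 0.1f }, { 0.2f, 0.3f }, { 0.1f, 0.1f },
				{ 5.0f, 5.2f }, { 5.3f, 4.9f }, { 4.8f, 5.1f }, { 5.1f, 5.3f } };
	int labels[8] = { 0, 0, 0, 0, 1, 1, 1, 1 };
	cv::Mat train_data(8, 2, CV_32FC1, samples);
	cv::Mat train_labels(8, 1, CV_32SC1, labels); // integer responses -> treated as classification

	cv::Ptr<cv::ml::DTrees> dtree = cv::ml::DTrees::create();
	dtree->setMaxCategories(2);      // two classes
	dtree->setMaxDepth(5);           // limit the depth of the tree
	dtree->setMinSampleCount(1);     // allow splits on very few samples
	dtree->setCVFolds(0);            // 0: no cross-validation pruning
	dtree->setUseSurrogates(false);
	dtree->setUse1SERule(false);
	dtree->setTruncatePrunedTree(false);
	dtree->setRegressionAccuracy(0.f);
	dtree->setPriors(cv::Mat());     // empty Mat: no prior class probabilities

	dtree->train(train_data, cv::ml::ROW_SAMPLE, train_labels);

	// inspect the structure of the trained tree
	fprintf(stdout, "roots: %d, nodes: %d, splits: %d\n",
		(int)dtree->getRoots().size(), (int)dtree->getNodes().size(), (int)dtree->getSplits().size());

	// predict a single new sample
	cv::Mat sample = (cv::Mat_<float>(1, 2) << 4.9f, 5.0f);
	fprintf(stdout, "predicted label: %f\n", dtree->predict(sample));

	return 0;
}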

For an introduction to the decision tree algorithm, see: http://blog.csdn.net/fengbingchun/article/details/78880934

Below are the 80 images extracted from the MNIST dataset, 20 for each of the four classes 0, 1, 2, and 3. For each class, the first 10 come from the MNIST training set and are used for training, and the last 10 come from the MNIST test set and are used for testing:


For an introduction to MNIST, see: http://blog.csdn.net/fengbingchun/article/details/49611549

The test code is as follows:

#include "opencv.hpp"
#include <string>
#include <vector>
#include <memory>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include "common.hpp"

// Decision Tree
int test_opencv_decision_tree_train()
{
	const std::string image_path{ "E:/GitCode/NN_Test/data/images/digit/handwriting_0_and_1/" };
	cv::Mat tmp = cv::imread(image_path + "0_1.jpg", 0);
	CHECK(tmp.data != nullptr);
	const int train_samples_number{ 40 };
	const int every_class_number{ 10 };

	cv::Mat train_data(train_samples_number, tmp.rows * tmp.cols, CV_32FC1); // one flattened image per row
	cv::Mat train_labels(train_samples_number, 1, CV_32FC1);
	float* p = (float*)train_labels.data;
	for (int i = 0; i < 4; ++i) { // label the 10 consecutive samples of class i with the value i
		std::for_each(p + i * every_class_number, p + (i + 1)*every_class_number, [i](float& v){v = (float)i; });
	}

	// train data
	for (int i = 0; i < 4; ++i) {
		static const std::vector<std::string> digit{ "0_", "1_", "2_", "3_" };
		static const std::string suffix{ ".jpg" };

		for (int j = 1; j <= every_class_number; ++j) {
			std::string image_name = image_path + digit[i] + std::to_string(j) + suffix;
			cv::Mat image = cv::imread(image_name, 0);
			CHECK(!image.empty() && image.isContinuous());
			image.convertTo(image, CV_32FC1);

			image = image.reshape(0, 1); // flatten the image into a single row
			tmp = train_data.rowRange(i * every_class_number + j - 1, i * every_class_number + j);
			image.copyTo(tmp); // copy the flattened image into its row of train_data
		}
	}

	cv::Ptr<cv::ml::DTrees> dtree = cv::ml::DTrees::create();
	dtree->setMaxCategories(4); // four classes: digits 0, 1, 2, 3
	dtree->setMaxDepth(10);
	dtree->setMinSampleCount(10);
	dtree->setCVFolds(0); // 0: do not prune with cross-validation
	dtree->setUseSurrogates(false);
	dtree->setUse1SERule(false);
	dtree->setTruncatePrunedTree(false);
	dtree->setRegressionAccuracy(0);
	dtree->setPriors(cv::Mat()); // empty Mat: no prior class probabilities

	dtree->train(train_data, cv::ml::ROW_SAMPLE, train_labels);

	const std::string save_file{ "E:/GitCode/NN_Test/data/decision_tree_model.xml" }; // .xml, .yaml, .json
	dtree->save(save_file);

	return 0;
}

int test_opencv_decision_tree_predict()
{
	const std::string image_path{ "E:/GitCode/NN_Test/data/images/digit/handwriting_0_and_1/" };
	const std::string load_file{ "E:/GitCode/NN_Test/data/decision_tree_model.xml" }; // .xml, .yaml, .json
	const int predict_samples_number{ 40 };
	const int every_class_number{ 10 };

	cv::Mat tmp = cv::imread(image_path + "0_1.jpg", 0);
	CHECK(tmp.data != nullptr);

	// predict data
	cv::Mat predict_data(predict_samples_number, tmp.rows * tmp.cols, CV_32FC1);
	for (int i = 0; i < 4; ++i) {
		static const std::vector<std::string> digit{ "0_", "1_", "2_", "3_" };
		static const std::string suffix{ ".jpg" };

		for (int j = 11; j <= every_class_number + 10; ++j) {
			std::string image_name = image_path + digit[i] + std::to_string(j) + suffix;
			cv::Mat image = cv::imread(image_name, 0);
			CHECK(!image.empty() && image.isContinuous());
			image.convertTo(image, CV_32FC1);

			image = image.reshape(0, 1);
			tmp = predict_data.rowRange(i * every_class_number + j - 10 - 1, i * every_class_number + j - 10);
			image.copyTo(tmp);
		}
	}

	cv::Mat result;
	cv::Ptr<cv::ml::DTrees> dtrees = cv::ml::DTrees::load(load_file);

	dtrees->predict(predict_data, result);
	CHECK(result.rows == predict_samples_number);

	cv::Mat predict_labels(predict_samples_number, 1, CV_32FC1);
	float* p = (float*)predict_labels.data;
	for (int i = 0; i < 4; ++i) {
		std::for_each(p + i * every_class_number, p + (i + 1)*every_class_number, [i](float& v){v = (float)i; });
	}

	int count{ 0 };
	for (int i = 0; i < predict_samples_number; ++i) {
		float value1 = ((float*)predict_labels.data)[i];
		float value2 = ((float*)result.data)[i];
		fprintf(stdout, "expected value: %f, actual value: %f\n", value1, value2);

		if (int(value1) == int(value2)) ++count;
	}
	fprintf(stdout, "accuracy: %f\n", count * 1.f / predict_samples_number);

	return 0;
}
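
The two test functions above rely on headers from the author's NN_Test project ("opencv.hpp" and "common.hpp", which presumably provide the CHECK macro and the function declarations). Assuming those headers are available and the image and model paths exist, a simple driver (not shown in the original post) could just call the two tests in order:

int main()
{
	test_opencv_decision_tree_train();   // train on the 40 training images and save the model
	test_opencv_decision_tree_predict(); // load the saved model and evaluate on the 40 test images
	return 0;
}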
The execution results are as follows: because there are only a few training samples, the recognition rate is only 72.5%. To improve the recognition rate, the number of training samples can be increased.


GitHub: https://github.com/fengbingchun/NN_Test
