CMAR Code Study 1: Generating the FP-tree and Mining Rules

The test dataset used is tennis.txt:

play outlook temp humid wind
no sunny hot high weak
no sunny hot high strong
yes overcast hot high weak
yes rain mild high weak
yes rain cool normal weak
no rain cool normal strong
yes overcast mild high strong
no sunny mild high weak
yes sunny cool normal weak
yes rain mild normal weak
yes sunny mild normal strong
yes overcast hot normal weak
yes overcast cool normal strong
no rain mild high strong

  • Configure the algorithm to run. Because this file belongs to a project that integrates many algorithms besides CMAR, the run must explicitly specify that the CMAR algorithm is being used.
  • MainTestCMAR_batch_kfold.java
System.out.println("==== Step 2: Training:  Apply the algorithm to build a model (a set of rules) ===");
		// Parameters of the algorithm
		double minSup = 0.1;
		double minConf = 0.5;
		int delta = 2;

		// Create the algorithm
		ClassificationAlgorithm algorithmCMAR = new AlgoCMAR(minSup, minConf, delta);
		ClassificationAlgorithm[] algorithms = new ClassificationAlgorithm[] { algorithmCMAR };

		// We create an object Evaluator to run the experiment using k-fold cross
		// validation
		Evaluator experiment1 = new Evaluator();

		// We will test 3 folds
		int kFoldCount = 3;

		// We run the experiment
		OverallResults allResults = experiment1.trainAndRunClassifiersKFold(algorithms, dataset, kFoldCount);
  • Evaluator.java
  • Split the data according to the parameters to obtain a training set and a test set
  • Train the model by calling algorithm.trainAndCalculateStats(training)
  • Predict on the training set with the trained model: runOnInstancesAnUpdateResults(training, classifier, resultsOnTraining);
  • Predict on the test set with the trained model: runOnInstancesAnUpdateResults(testing, classifier, resultsOnTesting);
// Split the dataset in two parts
			Dataset[] datasets = VirtualDataset.splitDatasetForKFold(dataset, posStart, posEnd);
			Dataset training = datasets[0];
			Dataset testing = datasets[1];

			if (DEBUGMODE) {
				System.out.println("===== KFOLD " + i + " =====");
				System.out.println(" k = " + k);
				System.out.println("  - Original dataset: " + dataset.getInstances().size() + " records.");
				System.out.println("  - Training part: " + training.getInstances().size() + " records.");
				System.out.println("  - Testing part: " + testing.getInstances().size() + " records.");
				System.out.println("===== RUNNING =====");
			}

			// for each classifier
			for (ClassificationAlgorithm algorithm : algorithms) {
				if (DEBUGMODE) {
					System.out.println("Running algorithm ... " + algorithm.getName());
//						System.out.println(datasets[0].getMapClassToFrequency());
//						System.out.println(datasets[1].getMapClassToFrequency());
				}
				// Train the classifier
				Classifier classifier = algorithm.trainAndCalculateStats(training);
				TrainingResults trainResults = new TrainingResults();
				trainResults.memory += algorithm.getTrainingMaxMemory();
				trainResults.runtime += algorithm.getTrainingTime();
				if (classifier instanceof RuleClassifier) {
					trainResults.avgRuleCount += ((RuleClassifier) classifier).getNumberRules() / (double) k;
				}

				// Run on training set
				ClassificationResults resultsOnTraining = new ClassificationResults();
				runOnInstancesAnUpdateResults(training, classifier, resultsOnTraining);

				// Run on testing set
				ClassificationResults resultsOnTesting = new ClassificationResults();
				runOnInstancesAnUpdateResults(testing, classifier, resultsOnTesting);

				/** Save results for this classifier for this dataset */
				allResults.addResults(resultsOnTraining, resultsOnTesting, trainResults);
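
For reference, posStart and posEnd in the snippet above mark the boundaries of the test fold. A minimal sketch of how they are presumably computed for fold i out of k over n records (an assumption for illustration; the library's exact arithmetic may differ):

// Hypothetical illustration, not the library's actual code
int n = dataset.getInstances().size();
int foldSize = n / k;
int posStart = i * foldSize;                          // first record of the test fold
int posEnd = (i == k - 1) ? n : posStart + foldSize;  // last fold absorbs any remainder
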
  • Inside the call to algorithm.trainAndCalculateStats(training):
/**
     * Main method used to create the classifier
     * 
     * @param training Dataset used to train the classifier
     * @return associative classifier
     * @throws Exception
     */
	public Classifier trainAndCalculateStats(Dataset training) throws Exception{
		// Initialize statistics
    	MemoryLogger.getInstance().reset();
    	trainingTime = System.currentTimeMillis();
    	
    	// Do the training
    	// train() returns a classifier (ClassifierCMAR) that holds the set of mined rules
		classifier = train(training);
		
		// Finish calculating statistics
    	MemoryLogger.getInstance().checkMemory();
        trainingTime = System.currentTimeMillis() - trainingTime;
        trainingMaxMemory = MemoryLogger.getInstance().getMaxMemory();
		return classifier;
	}
  • Training calls the overriding train function of the abstract method protected abstract Classifier train(Dataset training);
/**
     * Train a classifier
     * @param dataset a training dataset
     * @return a rule classifier
     * @throws Exception if an error occurs
     */
    @Override
    public ClassifierCMAR train(Dataset dataset){
    	// Apply a modified FPGrowth algorithm to obtain the rules
        FPGrowthForCMAR fpgrowth = new FPGrowthForCMAR(dataset, minSup, minConf);
        List<Rule> rules = fpgrowth.run();
        
        // Return a classifier that is created using these rules
        return new ClassifierCMAR(rules, dataset, delta);
    }

FPGrowthForCMAR(dataset, minSup, minConf); // pass the parameters

fpgrowth.run(); // build the tree and generate the rules

return new ClassifierCMAR(rules, dataset, delta); // prune the generated rules

  • The next part of the code explains how the FP-tree is generated.
	/**
	 * Run the algorithm to generate class association rules from the training
	 * dataset
	 * 
	 * @return A list of class association rules
	 */
	public List<Rule> run() {

		// Find the support of single items (attribute values)
		calculateSingletons();

		// Initialize the list to store class association rules
		rules = new ArrayList<Rule>();

		// Create the initial FP-tree
		FPTree tree = new FPTree();

		// For each instance (record) in the dataset
		for (Instance currentInstance : dataset.getInstances()) {

			// Create a list to store a revised version of this instance
			// that will contain only frequent items (attribute values)
			List<Short> revisedInstance = new ArrayList<Short>();

			// Get the class value of the current instance.
			short klass = currentInstance.getKlass();

			// For each item (attribute value) of the current instance
			for (int j = 0; j < dataset.getAttributes().size(); j++) {
				Short item = currentInstance.getItems()[j];

				// If the support is more than the minimum support threshold,
				// add this value to the revised instance
				if (mapSupport.get(item) >= minSupportRelative) {
					revisedInstance.add(item);
				}
			}

			// sort item in the revised instance by descending order of support
			Collections.sort(revisedInstance, new Comparator<Short>() {
				public int compare(Short item1, Short item2) {
					int compare = mapSupport.get(item2).compareTo(mapSupport.get(item1));

					if (compare == 0) {
						// When support is equal, lexical order is used
						return (item1 - item2);
					}
					return compare;
				}
			});

			// Insert the revised instance into the initial FP-Tree, one instance at a time
			tree.addInstance(revisedInstance, klass);
		}

		// Create the header table of the initial FP-tree
		// mapSupport holds the occurrence count of every attribute value; the header table is sorted by descending support
		tree.createHeaderList(mapSupport);

		// If the tree contains at least some frequent items
		if (tree.headerList.size() > 0) {

			// Two buffers are initialized
			short[] antecedentBuffer = new short[MAX_SIZE_ANTECEDENT];
			fpNodeSingleBuffer = new FPNode[MAX_SIZE_ANTECEDENT];

			// Then, start to recursively mine rules in the FP-tree
			fpgrowth(tree, antecedentBuffer, 0, dataset.getInstances().size(), dataset.getMapClassToFrequency(),
					mapSupport, mapSupportByKlass);
		}

		// Return the class association rules that have been found
		
		return rules;
	}
  • calculateSingletons() scans the dataset and counts all items; the results are shown below
  • mapSupport: the number of occurrences of each single item

{1=3, 2=3, 3=3, 4=3, 5=1, 6=5, 7=3, 8=6, 9=5, 10=4}

  • mapSupportByKlass: the class distribution of each item (here 11 and 12 are the two class values)

{1={12=3}, 2={11=2, 12=1}, 3={11=1, 12=2}, 4={11=1, 12=2}, 5={12=1}, 6={11=2, 12=3}, 7={11=2, 12=1}, 8={11=1, 12=5}, 9={11=2, 12=3}, 10={11=1, 12=3}}

/**
	 * Scans the training dataset to calculate the support of single items (called
	 * singletons)
	 */
	private void calculateSingletons() {
		// Initialize the maps to count the supports of attribute values and class
		// values
		mapSupport = new HashMap<Short, Long>();
		mapSupportByKlass = new HashMap<Short, Map<Short, Long>>();

		// For each instance (record)
		List<Instance> instances = dataset.getInstances();
		for (Instance instance : instances) {
			// Get the class value of this instance
			Short klass = instance.getKlass();

			// For each attribute value (item)
			for (int j = 0; j < dataset.getAttributes().size(); j++) {
				Short item = instance.getItems()[j];

				// Get the current support count in the map
				Long count = mapSupport.getOrDefault(item, 0L);

				// and increase it by one
				mapSupport.put(item, ++count);

				// Get the map to store the support of the current class
				// for this item
				Map<Short, Long> byKlass = mapSupportByKlass.get(item);

				// If that class was not seen before for this item
				if (byKlass == null) {
					// Set the support of that class for this item to 1
					mapSupportByKlass.put(item, new HashMap<Short, Long>());
					mapSupportByKlass.get(item).put(klass, 1L);
				} else {
					// Otherwise, increase the value by 1
					Long counter = byKlass.getOrDefault(klass, 0L);
					byKlass.put(klass, counter + 1);
				}
			}
		}
	}
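
With these counts available, run() keeps only the items whose support reaches minSupportRelative. A one-line sketch of how this absolute threshold is presumably derived from the relative minSup (an assumption; the library's exact rounding may differ):

// Assumption: the relative threshold is converted into an absolute count
long minSupportRelative = (long) Math.ceil(minSup * dataset.getInstances().size());
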
  • Filter out the items of each instance that do not meet the threshold, and sort the remaining items of the instance in descending order of support
  • Then add them to the tree in that order by calling tree.addInstance(revisedInstance, klass);

A node of the tree has the following attributes:

  • item: the stored value
  • parent: the parent node
  • supportByklass: the class distribution at this node
  • childs: the child nodes

updateHeaderTable(item, newNode) // updates/creates the header table entry for this item
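
A minimal sketch of the node structure just described (field names follow the walkthrough; the real library class has more members, and the nodeLink field chained by updateHeaderTable is an assumption here):

import java.util.*;

// Illustrative only; mirrors the fields listed above
class FPNode {
    short item = -1;                          // attribute value stored in this node (-1 = root)
    long support = 1;                         // number of instances passing through this node
    FPNode parent = null;                     // parent node
    List<FPNode> childs = new ArrayList<>();  // child nodes
    Map<Short, Long> supportByklass;          // class distribution at this node
    FPNode nodeLink = null;                   // assumed: next node holding the same item (header table chain)

    FPNode getChildByItem(short item) {
        for (FPNode child : childs) {
            if (child.item == item) return child;
        }
        return null;
    }
}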

/**
     * Adds a transaction to the FP-Tree being created
     * 
     * @param transaction a list of transactions to be added in the FP-Tree
     * @param klass       the class value of the current transaction
     */
    public void addInstance(List<Short> transaction, Short klass) {
        FPNode currentNode = root;

        // For each item in the transaction
        for (Short item : transaction) {
            // Check if it is part of the current tree
            FPNode child = currentNode.getChildByItem(item);

            // If not
            if (child == null) {
                // Create a new node and add it to the tree
                FPNode newNode = new FPNode();
                newNode.item = item;
                newNode.parent = currentNode;
                newNode.supportByklass = new HashMap<Short, Long>();
                // record the class value of this first transaction through the node
                newNode.supportByklass.put(klass, 1L);
                currentNode.childs.add(newNode);
                // move currentNode down to the newly created node
                currentNode = newNode;

                // update header table
                updateHeaderTable(item, newNode);
            } else {
            	// Otherwise, update the support of the current node
            	// (count one more occurrence of this item)
                child.support++;
                // getOrDefault: returns the value if the key exists, otherwise the default value
                Long counterByKlass = child.supportByklass.getOrDefault(klass, 0L);
                // and increase that value by one
                child.supportByklass.put(klass, counterByKlass + 1);

                currentNode = child;
            }
        }
    }
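
Illustrative usage of addInstance (a hypothetical toy example, assuming the library classes are on the classpath): two instances sharing the prefix [8, 6] end up on one branch, and the shared nodes accumulate both support and per-class counts.

// Hypothetical demo, not from the original post
FPTree tree = new FPTree();
tree.addInstance(Arrays.asList((short) 8, (short) 6, (short) 9),  (short) 12);
tree.addInstance(Arrays.asList((short) 8, (short) 6, (short) 10), (short) 11);
// Resulting shape: root -> 8 -> 6 -> {9, 10}; the nodes for 8 and 6
// now have support 2 and supportByklass {11=1, 12=1}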

The resulting tree is shown as a figure in the original post.

tree.createHeaderList(mapSupport); // sort the header table in descending order of support

[8, 6, 9, 10, 1, 2, 3, 4, 7, 5]
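
A sketch of what createHeaderList presumably does: collect the items present in the tree and sort them by descending support, breaking ties by item id, which reproduces the order printed above (an assumption; the library code may differ in details):

// Illustrative only; mapItemNodes maps each item to its first node in the tree
List<Short> headerList = new ArrayList<Short>(mapItemNodes.keySet());
Collections.sort(headerList, new Comparator<Short>() {
    public int compare(Short item1, Short item2) {
        int compare = mapSupport.get(item2).compareTo(mapSupport.get(item1));
        return (compare == 0) ? (item1 - item2) : compare;
    }
});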

In FPGrowthForCMAR.java:

fpgrowth(tree, antecedentBuffer, 0, dataset.getInstances().size(), dataset.getMapClassToFrequency(),
                    mapSupport, mapSupportByKlass); // recursively mine the rules

The overall process has two branches:

  1. Check whether the tree is a single path; if so, call saveAllCombinationsOfPrefixPath(fpNodeSingleBuffer, numberSingleItems, prefix, prefixLength) (a sketch of this single-path test follows this list)
  2. Otherwise, generate rules from the prefix with generateRules(prefix, prefixLength + 1, betaSupport, supportByKlass); and build a new tree for the recursion with treeBeta.addPrefixPath(prefixPath, mapSupportBeta, minSupportRelative);
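
The single-path test presumably amounts to checking that no node has more than one child. A hypothetical helper using the FPNode sketch from earlier (not the library's actual code):

// Illustrative only: true if the tree below root is one chain of nodes
private boolean isSinglePath(FPNode root) {
    FPNode node = root;
    while (node.childs.size() == 1) {
        node = node.childs.get(0);   // walk down while there is exactly one child
    }
    return node.childs.isEmpty();    // a single path must end in a leaf
}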

For the tree in the figure above, here is its rule-generation process:

  1. Starting from the root, check whether the tree is a single path; here it is not
  2. Take the node with the smallest support from the header table: tree.headerList.get(i);
  3. Get its occurrence count: 1
  4. Put it into prefix[prefixLength], so prefix = [5, ...]
  5. Use the current node's support as the support of the rule, because the current node's support never exceeds the support of the prefix
  6. Get the class distribution: supportByKlass = {12=1}
  7. Call generateRules to generate the rule: [5] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

A supplement to points 4 and 5, showing the debug output:

===================
prefix contents
[5, 0, 0, 0, 0, 0, 0, 0, 0, 0]
item5
prefixSupport9
support1
prefixLength1
===================

// [5, 8, 10, 1] is the single path that was found

// The blocks below starting with 7 come from taking the item with the second-smallest support from the header table
===================
prefix contents
[7, 8, 10, 1, 0, 0, 0, 0, 0, 0]
item7
prefixSupport9
support3
prefixLength1
===================

// The newly built tree has its own header table; take its smallest-support item, 10

// At this point, only the part of the figure ending in 7 is relevant
===================
prefix contents
[7, 10, 10, 1, 0, 0, 0, 0, 0, 0]
item10
prefixSupport3
support1
prefixLength2
===================
===================
prefix contents
[7, 3, 6, 1, 0, 0, 0, 0, 0, 0]
item3
prefixSupport3
support1
prefixLength2
===================
===================
prefix contents
[7, 2, 6, 10, 0, 0, 0, 0, 0, 0]
item2
prefixSupport3
support1
prefixLength2
===================
===================
prefix contents
[7, 1, 6, 9, 0, 0, 0, 0, 0, 0]
item1
prefixSupport3
support1
prefixLength2
===================
===================
prefix contents
[7, 9, 6, 9, 0, 0, 0, 0, 0, 0]
item9
prefixSupport3
support2
prefixLength2
===================
===================
prefix contents
[7, 6, 6, 9, 0, 0, 0, 0, 0, 0]
item6
prefixSupport3
support3
prefixLength2
===================

	/**
	 * Generate rules from an antecedent
	 * 
	 * @param antecedent       a rule antecedent
	 * @param antecedentLength number of items in the rule antecedent
	 * @param support          support of the rule
	 * @param counterByKlass   support for each class
	 */
	protected void generateRules(short[] antecedent, int antecedentLength, long support,
			Map<Short, Long> counterByKlass) {
		// Copy the antecedent into a buffer
		short[] itemsetOutputBuffer = new short[antecedentLength];
		System.arraycopy(antecedent, 0, itemsetOutputBuffer, 0, antecedentLength);
		// sort in ascending order
		Arrays.sort(itemsetOutputBuffer, 0, antecedentLength);

		// For each class value
		for (Entry<Short, Long> entry : counterByKlass.entrySet()) {
			// Create a rule by combining it with the antecedent
			RuleCMAR rule = new RuleCMAR(itemsetOutputBuffer, entry.getKey());
			rule.setSupportAntecedent(support);
			rule.setSupportRule(entry.getValue());
			// how many records in the whole dataset have this class
			rule.setSupportKlass(dataset.getMapClassToFrequency().get(rule.getKlass()));

			// If the rule is frequent and has a high confidence
			// rules are saved here
			if (rule.getSupportRule() >= this.minSupportRelative && rule.getConfidence() >= this.minConf)
				// Save the rule
				rules.add(rule);
		}
	}
  1. This function generates a rule, using the Rule class to set its parameters
  2. It then checks the thresholds; if they are met, the rule is added to the rules list of FPGrowthForCMAR: if (rule.getSupportRule() >= this.minSupportRelative && rule.getConfidence() >= this.minConf)
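
The confidence used in this check presumably comes from the rule's two support counts, which matches every output shown in this post (a sketch of getConfidence(), not necessarily the library's exact code):

// Assumption: confidence = rule support / antecedent support.
// E.g. [7] -> 11 with supportRule = 2 and supportAntecedent = 3 gives 2/3 ≈ 0.667.
public double getConfidence() {
    return (double) supportRule / (double) supportAntecedent;
}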

Having found that [5, 8, 10, 1] is a single path, with current prefix 5, rules are generated by combining the prefix with every non-empty subset of the remaining 3 items:

itemsetOutputBuffer[5, 8]
itemsetOutputBuffer[5, 10]
itemsetOutputBuffer[5, 8, 10]
itemsetOutputBuffer[5, 1]
itemsetOutputBuffer[5, 8, 1]
itemsetOutputBuffer[5, 10, 1]
itemsetOutputBuffer[5, 8, 10, 1]

RuleCMAR.java

/**
	 * Constructor
	 * 
	 * @param antecedent the antecedent of this rule
	 * @param klass the class value of this rule
	 */
	public RuleCMAR(short[] antecedent, short klass) {
		super(antecedent, klass);
	}

Rule.java

/**
	 * Constructor
	 * 
	 * @param antecedent antecedent of the rule
	 * @param klass      consequent of the rule
	 */
	public Rule(short[] antecedent, short klass) {
		this(klass);
		add(antecedent);
	}

/**
	 * Constructor
	 * 
	 * @param klass consequent of the rule
	 */
	public Rule(short klass) {
		this();
		this.klass = klass;
	}

/**
	 * Main constructor
	 */
	public Rule() {
		this.antecedent = new ArrayList<Short>();
		this.supportRule = 0;
		this.supportAntecedent = 0;
		this.supportKlass = 0;
	}

Fetch the current item's node with tree.mapItemNodes.get(item), then follow the header table's linked-list chain to find every path related to this node; those paths are used to build a new tree.

  1. The prefix path obtained is [5 1 10 8]
  2. Build the new tree, shown as a figure in the original post
  3. Recursively call fpgrowth(treeBeta, prefix, prefixLength + 1, betaSupport, supportByKlass, mapSupportBeta, mapSupportByKlassBeta);
  4. This tree is a single path, so the singlePath test succeeds and saveAllCombinationsOfPrefixPath(fpNodeSingleBuffer, numberSingleItems, prefix, prefixLength); is called to generate the rules (a sketch of the prefix-path collection follows this list)
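
A sketch of how the prefix paths are presumably collected for an item, reusing the FPNode sketch from earlier (the nodeLink chain is an assumption about how updateHeaderTable links nodes; java.util imports assumed):

// Illustrative only: walk the assumed header chain, climbing parent pointers
List<List<FPNode>> collectPrefixPaths(FPNode firstNode) {
    List<List<FPNode>> prefixPaths = new ArrayList<>();
    for (FPNode node = firstNode; node != null; node = node.nodeLink) {
        List<FPNode> path = new ArrayList<>();
        path.add(node);                 // the item's own node comes first
        for (FPNode up = node.parent; up != null && up.item != -1; up = up.parent) {
            path.add(up);               // then its ancestors, up to (excluding) the root
        }
        prefixPaths.add(path);
    }
    return prefixPaths;
}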

About assigning the rule's support value: when the current node is appended to the existing prefix, the current node's own support can be used directly, because it never exceeds the support of the prefix set.

For the single path [5 1 10 8], saveAllCombinationsOfPrefixPath() loops over its subsets to generate the rules.

The subsets are enumerated with bit-shift operations:

// position: the number of nodes in the single path, excluding the prefix
// prefixLength: the number of items in the prefix

private void saveAllCombinationsOfPrefixPath(FPNode[] fpNodeTempBuffer, int position, short[] prefix,
			int prefixLength) {

		// Create a variable to count the overall support
		long support = 0;
		// Create a map to count the support for each class value
		Map<Short, Long> supportByKlass = null;
		
		// Generates all subsets of the current prefixPath except the empty set.
		// For each itemset that can be formed using this prefix path:
		loop1: for (long i = 1, max = 1 << position; i < max; i++) {
			int newPrefixLength = prefixLength;

			// Create the antecedent
			for (int j = 0; j < position; j++) {
				int isSet = (int) i & (1 << j);

				// if yes, add the bit position as an item to the new subset
				if (isSet > 0) {
					if (newPrefixLength == MAX_SIZE_ANTECEDENT) {
						continue loop1;
					}

					prefix[newPrefixLength++] = fpNodeTempBuffer[j].item;
					support = fpNodeTempBuffer[j].support;
					supportByKlass = fpNodeTempBuffer[j].supportByklass;
				}
			}
//			System.out.println(1);
			// Then, generate rules using the current antecedent
			
			generateRules(prefix, newPrefixLength, support, supportByKlass);
		}
	}
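
As a standalone illustration of the bit trick above: each value of i from 1 to 2^position - 1 selects one non-empty subset of the single path, with bit j deciding whether the j-th node is included. A runnable toy version over the path [8, 10, 1] (hypothetical demo, not from the original post):

public class SubsetDemo {
    public static void main(String[] args) {
        short[] path = {8, 10, 1};
        for (int i = 1; i < (1 << path.length); i++) {
            StringBuilder subset = new StringBuilder("[");
            for (int j = 0; j < path.length; j++) {
                if ((i & (1 << j)) != 0) {   // bit j set: include path[j]
                    subset.append(path[j]).append(' ');
                }
            }
            System.out.println(subset.append(']'));
        }
    }
}

Applied to the full path with prefix 5, this enumeration yields the rules below: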

[5, 8] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[5, 10] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[5, 8, 10] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[1, 5] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[1, 5, 8] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[1, 5, 10] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[1, 5, 8, 10] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

At this point, all rules with prefix 5 have been generated. The second recursion takes the item with the second-smallest support, 7, and repeats the steps described above to generate rules with prefix 7. rule.getConfidence() computes the confidence: the first rule's confidence is 2/3 and the second rule's is 1/3.

[7] -> 11 #SUP: 2 #CONF: 0.6666666666666666 #CHISQUARE: NaN

[7] -> 12 #SUP: 1 #CONF: 0.3333333333333333 #CHISQUARE: NaN

The prefix paths found are [7 1 9 6], [7 3 10 6], and [7 2 9 6]; the tree built from them has headerList [6, 9, 1, 2, 3, 10].

  1. Check whether it is a single path; it is not, so take the smallest-support node from this tree's headerList: 10
  2. Combine it with the prefix 7 to form the new prefix [7, 10]
  3. Pass the prefix in and check the thresholds; if they are met, generate a new rule

[7, 10] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN 

Find the prefix path of the current node 10, which is [10 6], giving the tree [6]

This is a single path, so saveAllCombinationsOfPrefixPath(fpNodeSingleBuffer, numberSingleItems, prefix, prefixLength); is called, generating rules from the new tree [6] and the prefix [7, 10]:

[6, 7, 10] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Rule generation with prefix [7, 10] is now finished. From headerList [6, 9, 1, 2, 3, 10], take the node with the second-smallest support, 3, form the new prefix [3, 7], and generate rules from it:

[3, 7] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Find the prefix path of the current node 3, [3, 10, 6]; build the new tree [10, 6] and use it with the prefix [3, 7] to generate new rules:

[3, 6, 7] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[3, 7, 10] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[3, 6, 7, 10] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Rule generation with prefix [3, 7] is finished. From headerList [6, 9, 1, 2, 3, 10], take the node with the third-smallest support, 2, form the new prefix [2, 7], and generate rules from it:

[2, 7] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Find the prefix path of the current node 2, [2, 9, 6]; build the new tree [9, 6] and use it with the prefix [2, 7] to generate new rules:

[2, 6, 7] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[2, 7, 9] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[2, 6, 7, 9] -> 11 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Rule generation with prefix [2, 7] is finished. From headerList [6, 9, 1, 2, 3, 10], take the node with the fourth-smallest support, 1, form the new prefix [1, 7], and generate rules from it:

[1, 7] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Find the prefix path of node 1, [1, 9, 6]; build the new tree [9, 6] and use it with the prefix [1, 7] to generate new rules:

[1, 6, 7] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[1, 7, 9] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN
[1, 6, 7, 9] -> 12 #SUP: 1 #CONF: 1.0 #CHISQUARE: NaN

Rule generation with prefix [1, 7] is finished. From headerList [6, 9, 1, 2, 3, 10], take the node with the fifth-smallest support, 9, form the new prefix [9, 7], and generate rules from it; rule.getConfidence() computes the confidence:

[7, 9] -> 12 #SUP: 1 #CONF: 0.5 #CHISQUARE: NaN
[7, 9] -> 11 #SUP: 1 #CONF: 0.5 #CHISQUARE: NaN

Find the prefix path of node 9, [9, 6]; build the new tree [6] and use it with the prefix [7, 9] to generate new rules:

[6, 7, 9] -> 12 #SUP: 1 #CONF: 0.5 #CHISQUARE: NaN
[6, 7, 9] -> 11 #SUP: 1 #CONF: 0.5 #CHISQUARE: NaN

Rule generation with prefix [9, 7] is finished. From headerList [6, 9, 1, 2, 3, 10], take the largest node, 6, form the new prefix [6, 7], and generate rules from it:

[6, 7] -> 12 #SUP: 1 #CONF: 0.3333333333333333 #CHISQUARE: NaN
[6, 7] -> 11 #SUP: 2 #CONF: 0.6666666666666666 #CHISQUARE: NaN

At this point, all rules with prefix 7 have been generated. The algorithm then returns to the full tree's header list [8, 6, 9, 10, 1, 2, 3, 4, 7, 5], selects the node with the third-smallest support, 4, and recursively generates its rules following the same process as above.
