【PAT】A1151 LCA in a Binary Tree【LCA】

The lowest common ancestor (LCA) of two nodes U and V in a tree is the deepest node that has both U and V as descendants.

Given any two nodes in a binary tree, you are supposed to find their LCA.

Input Specification:

Each input file contains one test case. For each case, the first line gives two positive integers: M (≤ 1,000), the number of pairs of nodes to be tested; and N (≤ 10,000), the number of keys in the binary tree, respectively. In each of the following two lines, N distinct integers are given as the inorder and preorder traversal sequences of the binary tree, respectively. It is guaranteed that the binary tree can be uniquely determined by the input sequences. Then M lines follow, each contains a pair of integer keys U and V. All the keys are in the range of int.

Output Specification:

For each given pair of U and V, print in a line LCA of U and V is A. if the LCA is found and A is the key. But if A is one of U and V, print X is an ancestor of Y. where X is A and Y is the other node. If U or V is not found in the binary tree, print in a line ERROR: U is not found. or ERROR: V is not found. or ERROR: U and V are not found.

Sample Input:

6 8
7 2 3 4 6 5 1 8
5 3 7 2 6 4 8 1
2 6
8 1
7 9
12 -3
0 8
99 99

Sample Output:

LCA of 2 and 6 is 3.
8 is an ancestor of 1.
ERROR: 9 is not found.
ERROR: 12 and -3 are not found.
ERROR: 0 is not found.
ERROR: 99 and 99 are not found.

Problem

Find the LCA of the nodes holding two given values (assuming both values exist in the tree).

Approach

Build the tree from the given preorder and inorder sequences, then run a DFS. Note that the DFS results bubble up from the deepest level via the return values, so the first (deepest) node at which both values have been found is exactly their LCA.

Code

#include <cstdio>   // scanf / printf
#define MAX_N 10005
struct Node{
    Node *left, *right;
    int value, height;
    Node(int value, int height) : left(NULL), right(NULL), value(value), height(height) {}
}*root;
int pre[MAX_N], in[MAX_N];
int N, k = 0;
// Recursively build the subtree whose inorder range is [inStart, inEnd);
// k walks through the preorder sequence
Node* build(int inStart, int inEnd, int height){
    if(inStart >= inEnd) return NULL;
    int rootVal = pre[k++], i = inStart;
    Node* x = new Node(rootVal, height);
    while(i < inEnd && in[i] != rootVal) i++;   // locate the root within the inorder range
    x->left = build(inStart, i, height + 1);
    x->right = build(i + 1, inEnd, height + 1);
    return x;
}
// Convenience wrapper for the outermost call
Node* build(int inStart, int inEnd){
    return build(inStart, inEnd, 0);
}

int LCA = -1;
// flag1 and flag2 record whether u and v, respectively, have been found
int search(Node*x, int u, int v, bool& flag1, bool& flag2){
    if(x == NULL) return 0;
    // cnt counts how many of u, v occur in the subtree rooted at x
    int cnt = search(x->left, u, v, flag1, flag2) + search(x->right, u, v, flag1, flag2);
    if(x->value == u){
        flag1 = true;
        cnt++;
    }
    if(x->value == v){
        flag2 = true;
        cnt++;
    }
    if(LCA == -1 && cnt == 2){ // both targets found; the deepest such node is the LCA, and higher levels won't overwrite it (LCA is no longer -1)
        LCA = x->value;
    }
    return cnt;
}
int main() {
    int M;
    scanf("%d %d", &M, &N);
    for(int i = 0; i < N; i++){
        scanf("%d", &in[i]);
    }
    for(int i = 0; i < N; i++){
        scanf("%d", &pre[i]);
    }
    root = build(0, N);

    bool flag1, flag2;
    for(int i = 0, u, v; i < M; i++){
        scanf("%d %d", &u, &v);
        // don't forget to reset the per-query state
        LCA = -1;
        flag1 = false;
        flag2 = false;
        
        // recursive search
        search(root, u, v, flag1, flag2);
        if(!flag1 && !flag2){
            printf("ERROR: %d and %d are not found.\n", u, v);
            continue;
        }else if(!flag1){
            printf("ERROR: %d is not found.\n", u);
            continue;
        }else if(!flag2){
            printf("ERROR: %d is not found.\n", v);
            continue;
        }
        
        if(LCA == u){
            printf("%d is an ancestor of %d.\n", u, v);
        }else if(LCA == v){
            printf("%d is an ancestor of %d.\n", v, u);
        }else{
            printf("LCA of %d and %d is %d.\n", u, v, LCA);
        }
    }
    
    return 0;
}
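Building the tree is not strictly necessary for this problem. A commonly used alternative, sketched below (names like lcaByIndex, preSeq, and makePos are illustrative, not from the code above): every subtree occupies a contiguous interval of the inorder sequence, and preorder visits each ancestor before its descendants, so the first preorder key whose inorder index lies between those of U and V (inclusive) is their LCA.

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>

// Sample tree from the problem statement.
static const int n = 8;
static const int preSeq[n] = {5, 3, 7, 2, 6, 4, 8, 1};
static const int inSeq[n]  = {7, 2, 3, 4, 6, 5, 1, 8};

// pos[key] = index of key in the inorder sequence
static std::unordered_map<int, int> makePos() {
    std::unordered_map<int, int> pos;
    for (int i = 0; i < n; i++) pos[inSeq[i]] = i;
    return pos;
}

// The first preorder key whose inorder index falls between those of
// u and v (inclusive) is their LCA. Assumes both u and v are present.
int lcaByIndex(int u, int v) {
    static const std::unordered_map<int, int> pos = makePos();
    int pu = pos.at(u), pv = pos.at(v);
    if (pu > pv) std::swap(pu, pv);
    for (int i = 0; i < n; i++) {
        int p = pos.at(preSeq[i]);      // inorder index of this preorder key
        if (pu <= p && p <= pv) return preSeq[i];
    }
    return -1; // unreachable when u and v both exist
}
```

Each query is still O(N), like the DFS above, but no tree needs to be allocated; for the sample, lcaByIndex(2, 6) yields 3 and lcaByIndex(8, 1) yields 8, matching the expected output.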
