Tutorial 6: Ports remapping

Translated from: https://www.behaviortree.dev

Remapping ports between Trees and SubTrees

In the CrossDoor example we saw that a SubTree looks like a single leaf Node from the point of view of its parent (MainTree in the example).

Furthermore, to avoid name clashes in very large trees, every tree and subtree uses a different instance of the Blackboard.

For this reason, we need to explicitly connect the ports of a tree to those of its subtrees.

Once again, you won't need to modify your C++ implementation, since this remapping is done entirely in the XML definition.
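To make the blackboard isolation concrete, here is a toy sketch. It uses a plain std::map as a stand-in, not the real BT::Blackboard class; it only illustrates that each (sub)tree owns its own store, so an entry written by the parent is invisible to the subtree until a connection is declared:

```cpp
#include <map>
#include <string>

// Toy stand-in for a blackboard. This is NOT the real BT::Blackboard
// class; each (sub)tree owns its own independent instance.
using Blackboard = std::map<std::string, std::string>;

// Returns true if `key` exists in this tree's own blackboard.
bool has_entry(const Blackboard& bb, const std::string& key)
{
    return bb.find(key) != bb.end();
}
```

With this model, if MainTree's blackboard contains move_goal, checking the MoveRobot blackboard for the same key still yields false: the subtree only gains access through an explicit remapping.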

Example

Let’s consider this Behavior Tree.

<root main_tree_to_execute = "MainTree">

    <BehaviorTree ID="MainTree">

        <Sequence name="main_sequence">
            <SetBlackboard output_key="move_goal" value="1;2;3" />
            <SubTree ID="MoveRobot" target="move_goal" output="move_result" />
            <SaySomething message="{move_result}"/>
        </Sequence>

    </BehaviorTree>

    <BehaviorTree ID="MoveRobot">
        <Fallback name="move_robot_main">
            <SequenceStar>
                <MoveBase       goal="{target}"/>
                <SetBlackboard output_key="output" value="mission accomplished" />
            </SequenceStar>
            <ForceFailure>
                <SetBlackboard output_key="output" value="mission failed" />
            </ForceFailure>
        </Fallback>
    </BehaviorTree>

</root>

You may notice that:

  • We have a MainTree that includes a subtree called MoveRobot.
  • We want to “connect” (i.e. “remap”) ports inside the MoveRobot subtree with other ports in the MainTree.
  • This is done in the XML via the attributes of the SubTree tag: each attribute name refers to an internal port of the subtree, while its value refers to an external entry in the parent tree’s blackboard.

The following image shows the remapping between these two trees.

Note that this diagram represents the dataflow and the entries in the respective blackboards, not the relationship between the Behavior Trees themselves.

[Figure: port remapping between the MainTree and MoveRobot blackboards]
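The dataflow just described can be modeled, very roughly, as a lookup table that translates a subtree's internal port name into an entry of the parent blackboard. The sketch below is a conceptual stand-in built on std::map, not the real BT::Blackboard API:

```cpp
#include <map>
#include <string>

// Toy model of the remapping dataflow, NOT the real BT::Blackboard API.
using Blackboard = std::map<std::string, std::string>;

// Maps an internal port name (subtree side) to an external entry (parent side).
using Remapping = std::map<std::string, std::string>;

// Writing through a remapped port stores the value in the parent blackboard.
void subtree_write(Blackboard& parent_bb, const Remapping& remap,
                   const std::string& internal_key, const std::string& value)
{
    parent_bb[remap.at(internal_key)] = value;
}

// Reading through a remapped port fetches the value from the parent blackboard.
std::string subtree_read(const Blackboard& parent_bb, const Remapping& remap,
                         const std::string& internal_key)
{
    return parent_bb.at(remap.at(internal_key));
}
```

For the trees above, the remapping would be {{"target", "move_goal"}, {"output", "move_result"}}: when MoveRobot writes its output port, the string lands in move_result of the MainTree blackboard, where SaySomething reads it.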

In terms of C++, we don’t need to do much. For debugging purposes, we can inspect the current state of a blackboard with the method debugMessage().

#include "behaviortree_cpp_v3/bt_factory.h"

using namespace BT;

int main()
{
    BT::BehaviorTreeFactory factory;

    // SaySomething and MoveBaseAction are the custom nodes defined
    // in the previous tutorials.
    factory.registerNodeType<SaySomething>("SaySomething");
    factory.registerNodeType<MoveBaseAction>("MoveBase");

    auto tree = factory.createTreeFromText(xml_text);

    NodeStatus status = NodeStatus::RUNNING;
    // Keep on ticking until you get either a SUCCESS or FAILURE state
    while( status == NodeStatus::RUNNING)
    {
        status = tree.tickRoot();
        // IMPORTANT: add sleep to avoid busy loops.
        // You should use Tree::sleep(). Don't be afraid to run this at 1 KHz.
        tree.sleep( std::chrono::milliseconds(1) );
    }

    // let's visualize some information about the current state of the blackboards.
    std::cout << "--------------" << std::endl;
    tree.blackboard_stack[0]->debugMessage();
    std::cout << "--------------" << std::endl;
    tree.blackboard_stack[1]->debugMessage();
    std::cout << "--------------" << std::endl;

    return 0;
}

/* Expected output:

    [ MoveBase: STARTED ]. goal: x=1 y=2.0 theta=3.00
    [ MoveBase: FINISHED ]
    Robot says: mission accomplished
    --------------
    move_result (std::string) -> full
    move_goal (Pose2D) -> full
    --------------
    output (std::string) -> remapped to parent [move_result]
    target (Pose2D) -> remapped to parent [move_goal]
    --------------
*/