⚠️ 请注意:译文中,加粗文字为译者认为的重点部分,加粗斜体文字为译者觉得难以翻译/翻译不准的部分。
0 摘要/Abstract
根据DARPA SyNAPSE路线图,IBM推出了面向TrueNorth认知计算系统的三部曲式创新,其灵感来自大脑的功能和效率。冯·诺依曼架构的顺序编程范式完全不适合TrueNorth。因此,作为我们的主要贡献,我们开发了一种新的编程范式,允许构建复杂的认知算法和应用程序,同时对TrueNorth高效,并能有效提升程序员的生产力。该编程范式包括:(a)一个名为Corelet的TrueNorth程序抽象,用于表示一个神经突触核网络,该抽象封装了除外部输入和输出之外的所有细节;(b)用于创建、组合和分解corelet的面向对象的Corelet语言;(c)Corelet库,一个不断增长的可重用corelet库,程序员可以从中组合新的corelet;(d)端到端的Corelet实验室,这是一个与TrueNorth架构仿真器Compass集成的编程环境,支持从设计、开发、调试到部署的编程周期的所有方面。新的范式从少量的突触和神经元无缝扩展到规模和复杂性逐渐增加的神经突触核网络。我们在很短的时间内为TrueNorth设计并实现了100多个corelet形式的算法,这一事实凸显了新编程范式的实用性。
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain’s function and efficiency. The sequential programming paradigm of the von Neumann architecture is wholly unsuited for TrueNorth. Therefore, as our main contribution, we develop a new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity. The programming paradigm consists of (a) an abstraction for a TrueNorth program, named Corelet, for representing a network of neurosynaptic cores that encapsulates all details except external inputs and outputs; (b) an object-oriented Corelet Language for creating, composing, and decomposing corelets; (c) a Corelet Library that acts as an ever-growing repository of reusable corelets from which programmers compose new corelets; and (d) an end-to-end Corelet Laboratory that is a programming environment which integrates with the TrueNorth architectural simulator, Compass, to support all aspects of the programming cycle from design, through development, debugging, and up to deployment. The new paradigm seamlessly scales from a handful of synapses and neurons to networks of neurosynaptic cores of progressively increasing size and complexity. The utility of the new programming paradigm is underscored by the fact that we have designed and implemented more than 100 algorithms as corelets for TrueNorth in a very short time span.
I 简介/Introduction
A. 背景/Context
为了迎来认知计算的新时代[1],我们正在开发TrueNorth(图1),这是一种非冯·诺依曼的、模块化、并行、分布式、事件驱动、可扩展的架构,其灵感来自有机大脑的功能、低功耗与紧凑的体积。TrueNorth是一款多功能基板,用于集成面向多模态、亚符号、传感器-执行器系统的时空实时认知算法。TrueNorth由可配置神经突触核组成的可伸缩网络构成。每个核将内存(“突触”)、处理器(“神经元”)和通信(“轴突”)紧密地结合在一起,其中核间通信由全或无的脉冲事件承载,通过消息传递网络发送。
To usher in a new era of cognitive computing [1], we are developing TrueNorth (Fig. 1), a non-von Neumann, modular, parallel, distributed, event-driven, scalable architecture inspired by the function, low power, and compact volume of the organic brain. TrueNorth is a versatile substrate for integrating spatio-temporal, real-time cognitive algorithms for multi-modal, sub-symbolic, sensor-actuator systems. TrueNorth comprises a scalable network of configurable neurosynaptic cores. Each core brings memory (“synapses”), processors (“neurons”), and communication (“axons”) into close proximity, wherein inter-core communication is carried by all-or-none spike events, sent over a message-passing network.
🖼️ 图1 TrueNorth是一种受大脑启发的芯片架构,由轻量级神经突触核[2],[3]组成的互联网络构建而成。TrueNorth通过核内交叉存储器实现“灰质”短程连接,并通过核间基于脉冲的消息传递网络实现“白质”远程连接。TrueNorth在芯片的“生理学”和“解剖学”两方面都是完全可编程的,即神经元参数、突触交叉条和核间神经元-轴突连接允许广泛的结构、动力学和行为。小图:TrueNorth神经突触核有256个轴突、256×256的突触交叉条与256个神经元。信息从轴突流向神经元,由二进制突触门控;每个轴突并行地扇出到所有神经元,与点对点方法相比,通信量减少为原来的1/256。以下是核操作的概念性描述。为了支持多值突触,轴突被赋予类型,该类型为每个神经元索引一个突触权重。网络运行由离散时间步控制。在一个时间步中,如果特定轴突-神经元对的突触值非零且该轴突处于活动状态,则神经元按与该轴突类型对应的突触权重更新其状态。接下来,每个神经元施加泄漏,任何状态超过其阈值的神经元都会发放脉冲。在核内,伪随机数发生器(PRNG)可以向脉冲阈值添加噪声,并随机地门控突触与泄漏更新以进行概率计算;缓冲区保存输入脉冲以延迟传递;网络将脉冲从神经元发送到轴突。
Fig. 1. TrueNorth is a brain-inspired chip architecture built from an interconnected network of lightweight neurosynaptic cores [2], [3]. TrueNorth implements “gray matter” short-range connections with an intra-core crossbar memory and “white matter” long-range connections through an inter-core spike-based message-passing network. TrueNorth is fully programmable in terms of both the “physiology” and “anatomy” of the chip, that is, neuron parameters, synaptic crossbar, and inter-core neuron-axon connectivity allow for a wide range of structures, dynamics, and behaviors. Inset: The TrueNorth neurosynaptic core has 256 axons, a 256×256 synapse crossbar, and 256 neurons. Information flows from axons to neurons gated by binary synapses, where each axon fans out, in parallel, to all neurons thus achieving a 256-fold reduction in communication volume compared to a point-to-point approach. A conceptual description of the core’s operation follows. To support multivalued synapses, axons are assigned types which index a synaptic weight for each neuron. Network operation is governed by a discrete time step. In a time step, if the synapse value for a particular axon-neuron pair is non-zero and the axon is active, then the neuron updates its state by the synaptic weight corresponding to the axon type. Next, each neuron applies a leak, and any neuron whose state exceeds its threshold fires a spike. Within a core, PRNG (pseudorandom number generator) can add noise to the spike thresholds and stochastically gate synaptic and leak updates for probabilistic computation; Buffer holds incoming spikes for delayed delivery; and Network sends spikes from neurons to axons.
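As a concrete illustration of the time-step dynamics described in the caption, the following is a minimal Python sketch. It is our own illustrative assumption, not IBM's implementation: the function and array names are invented, the PRNG-based stochastic gating and the delayed-delivery buffer are omitted, and the full neuron model is specified in the companion paper [9].

```python
import numpy as np

N = 256  # axons and neurons per core

def core_time_step(active_axons, axon_types, crossbar, weights,
                   state, leak, threshold, reset):
    """One discrete time step of a single neurosynaptic core (sketch).

    active_axons: (N,) bool, spikes arriving on each axon this tick
    axon_types:   (N,) int, type assigned to each axon
    crossbar:     (N, N) binary synapse matrix (axon x neuron)
    weights:      (n_types, N) synaptic weight per (axon type, neuron)
    state:        (N,) membrane potentials, updated in place
    Returns an (N,) bool array marking neurons that fired."""
    # Integrate: each active axon contributes, through the binary
    # crossbar, the weight indexed by its type to every connected neuron.
    contrib = weights[axon_types, :] * crossbar        # (N, N)
    state += contrib[active_axons, :].sum(axis=0)
    # Leak, then threshold-and-reset: neurons above threshold spike.
    state -= leak
    fired = state > threshold
    state[fired] = reset
    return fired
```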
最近,我们取得了一系列里程碑式的成就:首先,在45nm工艺硅片中展示了具有256个神经元、64k/256k突触的神经突触核[2],[4],并于2011年12月登上了《科学美国人》的封面;其次,演示了多个实时应用[5];第三,开发了TrueNorth架构的仿真器Compass,它模拟了超过20亿个神经突触核、超过10^14个突触[3],[6];第四,将猕猴大脑(Macaque brain)[7]的远程连接可视化并映射到TrueNorth架构,登上了《科学》[8]和《ACM通讯》[1]的封面。
Recently, we have achieved a number of milestones: first, a demonstration of 256-neuron, 64k/256k-synapse neurosynaptic cores in 45nm silicon [2], [4] that were featured on the cover of Scientific American in December 2011; second, a demonstration of multiple real-time applications [5]; third, Compass, a simulator of the TrueNorth architecture, which simulated over 2 billion neurosynaptic cores exceeding 10^14 synapses [3], [6]; and, fourth, a visualization of the long-distance connectivity of the Macaque brain [7], mapped to the TrueNorth architecture, that was featured on the covers of Science [8] and Communications of the ACM [1].
我们在一组三篇论文中揭示了一系列相互关联的创新。在本文中,我们提出了一种分层组合和配置认知系统的编程范式,它对程序员有效,对TrueNorth体系结构有效。在两篇配套论文[9][10]中,我们介绍了一种通用且高效的数字脉冲神经元模型,该模型是TrueNorth架构的构建块,以及一组算法和应用程序,它展示了TrueNorth架构的潜力和编程范式的价值。
We unveil a series of interlocking innovations in a set of three papers. In this paper, we present a programming paradigm for hierarchically composing and configuring cognitive systems that is effective for the programmer and efficient for the TrueNorth architecture. In two companion papers [9][10] we introduce a versatile and efficient digital spiking neuron model that is a building block of the TrueNorth architecture, as well as a set of algorithms and applications that demonstrate the potential of the TrueNorth architecture and value of the programming paradigm.
B. 动机/Motivation
图灵完备性界定了被编程系统的计算表达能力。ENIAC(约1946年),第一台电子式、数字式、可编程且图灵完备的计算机,是冯·诺依曼架构[11]的灵感来源。在机器效率和程序员效率这两个经常相互冲突的目标的驱动下,编程范式已经从机器代码发展到汇编语言,再到高级语言。今天流行的高级语言可以追溯到FORTRAN[12]。正如Backus等人[13]所指出的,“程序的基本单元是基本块;基本块是程序中只有一个入口点和一个出口点的一段。”这意味着冯·诺依曼架构的程序从根本上是线性或顺序的构造。
Turing-completeness bounds the computational expressiveness of programmed systems. ENIAC (circa 1946), the first electronic digital, programmable, Turing-complete machine, was the inspiration behind the formulation of the von Neumann architecture [11]. Driven by dual and often conflicting objectives of machine and programmer efficiency, the programming paradigm has evolved from machine code, to assembly language, to high-level languages. High-level languages prevalent today trace their genesis to FORTRAN [12]. As noted by Backus et al. [13], “The fundamental unit of a program is the basic block; a basic block is a stretch of program which has a single entry point and a single exit point.” This implies that a program for a von Neumann architecture is fundamentally a linear or sequential construct.
像冯·诺依曼机器一样,TrueNorth是图灵完备的[9]。然而,两者是互补的,因为它们各自对不同类型的计算是高效的。TrueNorth程序是神经突触核网络及该网络所有外部输入和输出的完整规范,包括生理特性(神经元参数、突触权重)和解剖结构(核间和核内连通性)的规范。TrueNorth程序员的工作是将所需的计算转换为能在TrueNorth上高效执行的规范,即一个完全指定的神经突触核网络及其输入和输出。在这种情况下,冯·诺依曼架构的线性编程范式对于TrueNorth程序来说并不理想。因此,我们着手开发一种全新的编程范式,它允许构建复杂的认知算法和应用程序,同时对TrueNorth高效,并能有效提升程序员的生产力。
Like von Neumann machines, TrueNorth is Turing-complete [9]. However, they are complementary in that each is efficient for different classes of computation. A TrueNorth program is a complete specification of a network of neurosynaptic cores, and all external inputs and outputs to the network, including the specification of the physiological properties (neuron parameters, synaptic weights) and the anatomy (inter- and intra-core connectivity). The job of a TrueNorth programmer is to translate a desired computation into a specification that efficiently executes on TrueNorth, namely, a completely specified network of neurosynaptic cores, its inputs, and its outputs. In this context, the linear programming paradigm of the von Neumann architecture is not ideal for TrueNorth programs. Therefore, we set out to develop an entirely new programming paradigm that can permit construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity.
C. 贡献/Contributions
如前所述,TrueNorth程序是神经突触核心网络及其外部输入和输出的完整规范。随着网络规模的增加,要完全指定这样一个网络,同时又与TrueNorth体系结构保持一致,对于程序员来说变得越来越困难。为了帮助解决复杂性,我们提出了一种分而治之的方法,即通过相互连接一组较小的神经突触核心网络来构建一个大型神经突触核心网络,其中每个较小的网络又可以通过相互连接一组更小的网络来构建,依此类推,直到我们达到一个由单个神经突触核心组成的网络,这是基本的、不可分割的构建块。
As stated earlier, a TrueNorth program is a complete specification of a network of neurosynaptic cores, along with its external inputs and outputs. As the size of the network increases, to completely specify such a network while being consistent with TrueNorth architecture becomes increasingly difficult for the programmer. To help combat the complexity, we propose a divide-and-conquer approach whereby a large network of neurosynaptic cores is constructed by interconnecting a set of smaller networks of neurosynaptic cores, where each of the smaller networks, in turn, could be constructed by interconnecting a set of even smaller networks, and so on, until we reach a network consisting of a single neurosynaptic core, which is the fundamental, non-divisible building block.
为此,作为我们的基本贡献,我们开发了一种新的编程范式,包括:(a)corelet,即代表TrueNorth程序的抽象,它只公开外部输入和输出,同时封装神经突触核心网络的所有其他细节;(b)用于创建、组合和分解Corelet的面向对象的Corelet语言;(c) Corelet Library,作为一个不断增长的可重复使用的Corelet库,从中组合新的corelet;(d)端到端的Corelet实验室,这是一个与TrueNorth架构仿真器集成的编程环境,称为Compass[3],并支持从设计、开发、调试到部署的编程周期的所有方面。
To this end, as our fundamental contribution, we develop a new programming paradigm that consists of (a) a corelet, namely an abstraction that represents a TrueNorth program that only exposes external inputs and outputs while encapsulating all other details of the network of neurosynaptic cores; (b) an object-oriented Corelet Language for creating, composing, and decomposing corelets; (c) a Corelet Library that acts as an ever-growing repository of reusable corelets from which to compose new corelets; and (d) an end-to-end Corelet Laboratory that is a programming environment that integrates with the TrueNorth architectural simulator, called Compass [3], and supports all aspects of the programming cycle from design, through development, debugging, and into deployment.
Corelet、组合与分解(第二节):corelet(图2)是神经突触核网络的抽象,它封装了所有网络内连接与所有核内生理机能,仅向外暴露网络的外部输入与外部输出。我们将输入和输出分组到连接器中。corelet用户只能访问输入和输出连接器。
Corelets, Composition, and Decomposition (Sec. II): A corelet (Fig. 2) is an abstraction of a network of neurosynaptic cores that encapsulates all intra-network connectivity and all intra-core physiology and only exposes external inputs to and external outputs from the network. We group inputs and outputs into connectors. A corelet user has access only to input and output connectors.
🖼️ 图2 图(a)、(b)和(c)说明了种子corelet的构造,而图(d)、(e)和(f)说明了通过组合两个子corelet来构造一个corelet。(a)通过将核上的一组神经元与同一核上的一组轴突连接起来,创建循环连接。配置突触交叉条以连接轴突和神经元。神经元是脉冲的源,轴突是脉冲的目的地。(b)接收来自核外部脉冲的未连接轴突,被分组到称为输入连接器的枚举列表中;向核外部发送脉冲的未连接神经元,被分组到称为输出连接器的枚举列表中。(c)种子corelet封装了核内神经元-轴突连接、突触交叉条和神经元参数,同时暴露输入和输出连接器。corelet开发人员可以看到corelet的所有内部信息,但corelet用户只能看到公开的外部接口。(d)通过将一组输出连接器引脚连接到一组输入连接器引脚,在子corelet之间建立连接。(e)接收来自组合corelet外部脉冲的未连接输入连接器引脚,被分组到一个新的输入连接器中;向组合corelet外部发送脉冲的未连接输出连接器引脚,被分组到一个新的输出连接器中。(f)组合corelet封装了子corelet以及子corelet之间的内部连接,同时暴露新的输入和输出连接器。组合corelet的开发人员可以看到组合corelet的所有内部内容,但由于封装,看不到子corelet的内部内容。而组合corelet的用户只能看到公开的外部接口。
Fig. 2. Panels (a), (b), and (c) illustrate construction of a seed corelet while panels (d), (e), and (f) illustrate construction of a corelet via composition of two sub-corelets. (a) Create recurrent connections, by connecting a set of neurons on the core with a set of axons on the core. Configure the synaptic crossbar to connect axons to neurons. A neuron is a source of spikes and an axon is a destination of spikes. (b) Unconnected axons, that receive spikes from outside the core, are grouped into an enumerated list known as an input connector. Unconnected neurons, that send spikes outside the core, are grouped into an enumerated list known as an output connector. (c) The seed corelet encapsulates the intra-core neuron-axon connectivity, synaptic crossbar, and neuron parameters while exposing the input and output connectors. The corelet developer sees all corelet internals, but a corelet user only sees the exposed external interfaces. (d) Create connections between sub-corelets by interconnecting a set of output connector pins to a set of input connector pins. (e) Unconnected input connector pins, that receive spikes from outside the composed corelet, are grouped into a new input connector. Unconnected output connector pins, that send spikes outside of the composed corelet, are grouped into a new output connector. (f) The composed corelet encapsulates the sub-corelets and the internal connectivity between sub-corelets, while exposing the new input and output connectors. The developer of the composed corelet sees all the internals of the composed corelet but not the internals of the sub-corelets due to encapsulation. However, a user of the composed corelet only sees the exposed external interfaces.
给定一组corelet,组合就是创建一个新的corelet的操作。为了便于说明,我们将组成部分的corelet称为子corelet。图2(d)-(f)说明了组合的三个关键步骤:(a)将子corelet的一些输出连接到子corelet的一些输入;(b)封装所有corelet内连通性;并且(c)只从corelet暴露外部输入和外部输出。组合的过程可以分层重复,以逐步创建更复杂的corelet。因此,可以将任何corelet视为子corelet的树,其中树的叶子构成单独的神经突触核。
Given a set of corelets, composition is an operation for creating a new corelet. For ease of exposition, we refer to the constituent corelets as sub-corelets. Fig. 2(d)-(f) illustrates three key steps of composition: (a) interconnect some of the outputs of the sub-corelets to some of the inputs of the sub-corelets; (b) encapsulate all intra-corelet connectivity; and (c) expose only external inputs to and external outputs from the corelet. The process of composition can be hierarchically repeated to create progressively more complex corelets. Therefore, it is possible to think of any corelet as a tree of sub-corelets, where the leaves of the tree constitute individual neurosynaptic cores.
corelet抽象是为了提高程序员的工作效率而设计的,但不能直接在TrueNorth上实现。给定一个corelet,分解(图3)是组合在逻辑上的逆操作,也就是说,它是一种移除嵌套树结构、移除所有封装层以生成神经突触核网络的操作,该网络可以在TrueNorth架构上实现,无论是在仿真中还是在硬件中。
The corelet abstraction is designed to boost programmer productivity, but cannot be directly implemented on TrueNorth. Given a corelet, decomposition (Fig. 3) is the logical inverse of composition, that is, it is an operation for removing the nested tree structure and for removing all layers of encapsulation to produce a network of neurosynaptic cores that can be implemented on the TrueNorth architecture, either in simulation or hardware.
🖼️ 图3 corelet分解的例子。假设图2(d)中的“Corelet A”和“Corelet B”都是图2(c)中“Corelet”的实例,则图2(f)所示的组合corelet通过逐步移除所有封装层而被分解,从而产生神经突触核网络及其外部输入和输出,即一个TrueNorth程序。该程序可以在TrueNorth硬件上执行,也可以使用Compass[3]进行模拟。
Fig. 3. Example of corelet decomposition. Assuming that “Corelet A” and “Corelet B” in panel (d) of Fig. 2 are both instances of the “Corelet” in panel (c) of Fig. 2, the composed corelet shown in panel (f) of Fig. 2 is decomposed by progressively removing all layers of encapsulation to produce a network of neurosynaptic cores along with its external inputs and outputs, resulting in a TrueNorth program. The program can be executed on TrueNorth hardware as well as simulated using Compass [3].
Corelet语言(第三节):该语言的基本符号是神经元、神经突触核与corelet。连接器构成了将这些符号组合成TrueNorth程序的语法。总之,这些符号和语法对于表达任何TrueNorth程序既是必要的也是充分的。我们以面向对象方法实现这些原语。
Corelet Language (Sec. III): The fundamental symbols of the language are the neuron, neurosynaptic core, and corelet. The connectors constitute the grammar for composing these symbols into TrueNorth programs. Together, the symbols and the grammar are both necessary and sufficient for expressing any TrueNorth program. We implement these primitives in object-oriented methodology.
Corelet库(第四节):该库是一致的、经过验证的、参数化的、可伸缩的与可组合的功能原语的存储库。为了提高程序员的工作效率,我们在不到一年的时间里设计并实现了一个包含100多个corelet的存储库。每当编写一个新的corelet时,无论是从零开始还是通过组合,都可以将其添加回库中,库因此以一种自我强化的方式不断增长。此外,由于可组合性,库的表达能力随其规模的某个大于1的幂次增长。
Corelet Library (Sec. IV): The library is a repository of consistent, verified, parameterized, scalable and composable functional primitives. To boost programmer productivity, we have designed and implemented a repository of more than 100 corelets in less than one year. Every time a new corelet is written, either from scratch or by composition, it can be added back to the library, which keeps growing in a self-reinforcing way. Further, by virtue of composability, the expressive capability of the library grows exponentially as some power, >1, of its size.
Corelet实验室(第五节):最终,corelet必须被分解并在TrueNorth上实现,无论是硬件还是仿真,并通过与之相连的事件驱动的脉冲源(例如传感器)和目的地(例如执行器)与环境交互。为了促进这一过程,Corelet实验室提供了一个完整的端到端框架。
Corelet Laboratory (Sec. V): Eventually, a corelet must be decomposed and implemented on TrueNorth, either in hardware or simulation, and interact with the environment via event-driven, spiking sources (for example, sensors) and destinations (for example, actuators) that connect to it. To facilitate this process, the Corelet Laboratory provides a complete end-to-end framework.
新范式对程序员的价值在于:从低级硬件原语的思考中解放出来;可以使用功能级设计工具;能够在分别创建和验证各个模块的过程中使用分而治之的策略;一种新的思维方式,即用简单的模块化构建块及其层次化组合来思考,而不是直接处理难以管理的大型神经突触核网络;保证在TrueNorth上的可实现性;具备验证正确性、一致性和完整性的能力;能够重用代码和组件;易于大规模协作;每行代码、每单位时间能够配置更多的神经突触核;可以使用用于创建、编译、执行和调试的端到端环境;以及能够在从少量突触和神经元到规模与复杂性逐渐增加的神经突触核网络的各个功能块上使用相同的概念隐喻。
The value of the new paradigm to the programmer is: freedom from thinking in terms of low-level hardware primitives; availability of tools to design at the functional level; ability to use a divide-and-conquer strategy in the process of creating and verifying individual modules separately; a new way of thinking in terms of simple modular blocks and their hierarchical composition, rather than having to deal with an unmanageably large network of neurosynaptic cores directly; guaranteed implementability on TrueNorth; ability to verify correctness, consistency, and completeness; ability to reuse code and components; ease of large-scale collaboration; ability to configure more neurosynaptic cores per line of code and unit of time; access to an end-to-end environment for creating, compiling, executing, and debugging; and the ability to use the same conceptual metaphor across functional blocks that range from a handful of synapses and neurons to networks of neurosynaptic cores with progressively increasing size and complexity.
II Corelet定义/Corelet Definition
A. 种子Corelet/Seed Corelet
种子corelet是一个由单个神经突触核组成的TrueNorth程序,它只暴露核的输入和输出,同时封装所有其他细节,包括神经元参数、突触权重和核内连接。corelet程序员指定内部细节和外部接口,而corelet用户只使用外部接口。图2的(a)、(b)和(c)展示了种子corelet构造的示例。
A seed corelet is a TrueNorth program consisting of a single neurosynaptic core that exposes only inputs and outputs to the core while encapsulating all other details, including neuron parameters, synaptic weights, and intra-core connectivity. The corelet programmer specifies both the internal details and external interfaces, while the corelet user uses only the external interfaces. Panels (a), (b), and (c) of Fig. 2 show an example of seed corelet construction.
神经元是脉冲的来源,轴突是脉冲的目的地。对于接收来自同一种子corelet内神经元输入的轴突,corelet程序员可以在corelet开发过程中明确地指定并将这些轴突/神经元配对在一起。对于接收来自种子corelet外部输入的轴突,其源神经元在开发时对corelet程序员是未知的。这些轴突被分组到一个称为输入连接器的枚举列表中。这些轴突的源将在稍后用户实例化并连接这个corelet时才被指定。
A neuron is a source of spikes, and an axon is a spike destination. For axons that receive inputs from neurons within the same seed corelet, the corelet programmer can unambiguously specify and pair these axons/neurons together during corelet development. For axons that receive inputs from outside their seed corelet, the source neuron is unknown to the corelet programmer at development time. These axons are grouped into an enumerated list known as an input connector. The sources of these axons will be specified only later, when a user instantiates and connects this corelet.
类似地,对于将输出发送到同一种子corelet内轴突的神经元,corelet程序员可以在corelet开发期间明确地指定并将这些轴突/神经元配对在一起。对于向种子corelet外部发送输出的神经元,其目的地在开发时对corelet程序员是未知的。这些神经元被分组到一个称为输出连接器的枚举列表中。这些神经元的目的地仅在稍后用户实例化并连接这个corelet时才被指定。
Similarly, for neurons that send outputs to axons within the same seed corelet, the corelet programmer can unambiguously specify and pair these axons/neurons together during corelet development. For neurons that send output outside this seed corelet, the destination is unknown to the corelet programmer at development time. These neurons are grouped into an enumerated list known as an output connector. The destinations of these neurons will be specified only later, when a user instantiates and connects this corelet.
B. Corelet
如前所述,我们可以将一组corelet组合成一个新的corelet。通过这个过程,原来的corelet成为新组合corelet的子corelet。对于所有子corelet,它们的输出连接器是脉冲源,它们的输入连接器是脉冲的目的地。
As mentioned earlier, we can compose a set of corelets into a new corelet. Through this process, the original corelets become the sub-corelets of the newly composed corelet. For all sub-corelets, their output connectors are spike sources and their input connectors are spike destinations.
图2的(d)、(e)和(f)说明了组合过程。首先,我们在子corelet之间创建连接,将一些源引脚与目标引脚相连。其次,将子corelet中未连接的部分输入引脚分组到组合corelet的输入连接器中,将未连接的部分输出引脚分组到组合corelet的输出连接器中。新corelet的连接器是公开的,而子corelet及其本地连接则被组合corelet封装。一般来说,corelet可以由神经突触核与子corelet共同组成。
Composition is illustrated in panels (d), (e), and (f) of Fig. 2. First, we create connections between sub-corelets, connecting some of the source and destination pins. Second, some of the sub-corelet’s input pins that were not connected are grouped into the composed corelet’s input connector, and some of the sub-corelet’s output pins that were not connected are grouped into the composed corelet’s output connector. The new corelet’s connectors are exposed, but the sub-corelets and their local connectivity are encapsulated by the composed corelet. In general, corelets can be composed from both neurosynaptic cores and sub-corelets.
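The grouping step just described can be sketched in a few lines of Python. This is a hypothetical illustration of the bookkeeping, not the Corelet Language API: pins wired between sub-corelets are internal, and the remaining unconnected pins become the composed corelet's new connectors.

```python
def compose_connectors(sub_input_pins, sub_output_pins, internal_links):
    """Group the sub-corelets' unconnected pins into the composed
    corelet's new input and output connectors.

    internal_links: set of (output_pin, input_pin) pairs that were
    wired together inside the composed corelet."""
    linked_outputs = {out for out, _ in internal_links}
    linked_inputs = {inp for _, inp in internal_links}
    # Pins not consumed by an internal link are exposed externally.
    new_inputs = [p for p in sub_input_pins if p not in linked_inputs]
    new_outputs = [p for p in sub_output_pins if p not in linked_outputs]
    return new_inputs, new_outputs
```

For example, internally linking a hypothetical pin "A.out0" to "B.in0" leaves the remaining pins, ["A.in0", "A.in1"] and ["B.out0"], exposed as the composed corelet's connectors.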
分解是组合在逻辑上的逆操作。它对一个组合corelet进行操作,逐层移除层次结构,直到获得一个扁平的核网络,然后可以将其表示为一个TrueNorth程序并写入模型文件。组合和分解将在后面的第III-E节中详细描述。
Decomposition is the logical inverse of composition. It operates on a composed corelet, removing the hierarchical structure layer by layer until a flat network of cores is obtained, which in turn can be expressed as a TrueNorth Program and written into a model file. Composition and decomposition are described in detail later in Section III-E.
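Since any corelet is a tree whose leaves are individual neurosynaptic cores, decomposition amounts to flattening that tree. Below is a minimal Python sketch of the idea; the class names are our assumptions (the actual implementation is MATLAB OOP, Sec. III).

```python
class NeurosynapticCore:
    """Leaf of the corelet tree: a single, non-divisible core."""
    pass

class Corelet:
    def __init__(self, sub_corelets):
        # Handles to sub-corelets, which are cores or other corelets.
        self.sub_corelets = list(sub_corelets)

    def decompose(self):
        """Remove encapsulation layer by layer, returning the flat
        list of cores that constitutes the TrueNorth program."""
        cores = []
        for sub in self.sub_corelets:
            if isinstance(sub, NeurosynapticCore):
                cores.append(sub)
            else:
                cores.extend(sub.decompose())
        return cores
```

Composing two corelets holding one and two cores respectively, then decomposing the result, yields a flat list of three cores.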
III Corelet语言/Corelet Language
为何是面向对象的方法?/Why Object-Oriented Methodology?
从语言设计者的角度来看,面向对象编程(OOP)是实现corelet的理想方法,这至少有三个原因。
From the perspective of a language designer, object-oriented programming (OOP) is the ideal method for implementing corelets for at least three reasons.
首先,根据定义,corelet封装了TrueNorth程序的所有细节,除了外部输入和输出。封装也是OOP的一个基本特性。因此,可以通过定义一个corelet类,然后将corelet实例化为这个类的对象来保证corelet封装。
First, by definition, a corelet encapsulates all the details of a TrueNorth program except for external inputs and outputs. Encapsulation is also a fundamental feature of OOP. Therefore, corelet encapsulation can be guaranteed by defining a corelet class and then instantiating corelets as objects from this class.
其次,所有的corelet必须使用类似的数据结构和操作,并且用户必须以类似的方式访问。这种相似性可以通过OOP的另一个基本特性实现,即继承,它允许为抽象类定义一次底层数据结构和操作,然后传递给从抽象类派生的抽象子类,以及该类的对象实例。
Second, all corelets must use similar data structures and operations, and must be accessed by users in similar ways. This similarity can be achieved by another fundamental feature of OOP, inheritance, which allows the underlying data structures and operations to be defined once for an abstract class and then passed down to abstract subclasses derived from it, as well as to object instances of the class.
第三,我们需要调用诸如“解耦”(将它们转换为TrueNorth程序)和“验证”(确保它们与TrueNorth是正确的和一致的)等操作。每个操作都是跨多个corelet进行同构命名的,但是可以为不同的corelet进行异构定义。这些操作可以通过多态性来实现——这是OOP的另一个基本特性。
Third, we need to invoke operations such as “decompose” (to translate corelets into a TrueNorth program) and “verify” (to ensure that they are correct and consistent with respect to TrueNorth) on all corelets. Each operation is named homogeneously across multiple corelets, but can be heterogeneously defined for different corelets. These operations can be implemented via polymorphism, another fundamental feature of OOP.
因此,将corelet定义为面向对象框架中的类,可以为我们提供封装、继承和多态性;并且极大地改进了代码的设计、结构、模块化、正确性、一致性、紧凑性和可重用性。从一个基本corelet类开始,不同的corelet可以被写成子类,而不同的具体corelet将是这些类的对象实例1。我们已经使用MATLAB OOP实现了Corelet语言,它还有一个额外的优点,就是它是一种用于矩阵与向量表达式和计算的紧凑语言。
Therefore, defining a corelet as a class in an OOP framework grants us encapsulation, inheritance, and polymorphism; and dramatically improves the design, structure, modularity, correctness, consistency, compactness, and reusability of code. Starting from a base corelet class, different corelets can be written as sub-classes, and different concrete corelets would be object instances of these classes¹. We have implemented the Corelet Language using MATLAB OOP, which has the additional advantage of being a compact language for matrix and vector expressions and computations.
At this point, it is noteworthy to make a distinction between two different trees. The hierarchical composition of corelets from sub-corelets must not be confused with the hierarchy of corelet classes. The former is a tree of corelet objects, formed in computer memory when the corelet code is executed. Each corelet object keeps handles (references, like pointers) to its sub-corelets, hence forming a tree. The latter is a tree of code, with a child class inheriting properties and methods from its parent class, which comprises the Corelet Library.
该语言由四个主要类及其相关方法组成。这些类是神经元、神经突触核心、连接器与corelet。虽然实现使用了许多创新的数据结构与优化,但为了简洁与便于阐述,下面我们只介绍最基本的思想。
The language is composed of four main classes and their associated methods. The classes are the neuron, neurosynaptic core, connector, and corelet. While the implementation uses a number of innovative data structures and optimizations, in what follows we present only the most essential ideas for the sake of brevity and for ease of exposition.
A. 神经元类/The Neuron Class
根据[9]中的神经元模型,神经元类包含TrueNorth神经元模型的所有属性,例如初始膜电位、阈值、泄漏和重置模式。神经元的 get() 与 set() 方法用于设置和检索这些属性,并确保所有值都与TrueNorth兼容。为了方便起见,我们为一组常用的神经元行为提供了默认值。
Per the neuron model in [9], the Neuron class contains all of the properties of the TrueNorth neuron model, such as initial membrane potential, thresholds, leaks, and reset modes. The neuron's get() and set() methods, used for setting and retrieving these properties, ensure that all values are compatible with TrueNorth. For convenience, we provide default values for a set of commonly used neuron behaviors.
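To make the validating get()/set() idea concrete, here is a small Python analogue of such a class. The property names, defaults, and reset modes below are illustrative assumptions of ours, not the actual TrueNorth parameter set, which is defined in [9].

```python
class Neuron:
    """Sketch of a neuron-parameter class with validating get()/set()."""

    # Hypothetical defaults for a commonly used neuron behavior.
    DEFAULTS = {"membrane_potential": 0, "threshold": 1,
                "leak": 0, "reset_mode": "reset_to_zero"}
    RESET_MODES = {"reset_to_zero", "linear", "non_reset"}

    def __init__(self, **params):
        self._props = dict(self.DEFAULTS)   # start from the defaults
        for name, value in params.items():
            self.set(name, value)

    def set(self, name, value):
        # Reject properties and values that are not TrueNorth-compatible.
        if name not in self._props:
            raise KeyError("unknown neuron property: " + name)
        if name == "reset_mode" and value not in self.RESET_MODES:
            raise ValueError("incompatible reset mode: " + str(value))
        self._props[name] = value

    def get(self, name):
        return self._props[name]
```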
B. 核类/The Core Class
核类对TrueNorth神经突触核建模。核类的基本属性包括:存储轴突类型的256维轴突向量;一个由256个神经元对象组成的向量(从神经元类实例化);一个由256个轴突目标组成的向量(每个神经元一个);以及一个256×256的矩阵,表示突触交叉条中的二进制连接。除了通常的 get() 与 set() 方法之外,我们还提供了确保所有值与TrueNorth兼容的方法,以及分配神经元-轴突对之间连通性的方法。
The Core class models the TrueNorth neurosynaptic core. The essential properties of the Core class include: a vector of 256 axons that stores the axon types; a vector of 256 neuron objects, instantiated from the Neuron class; a vector of 256 target axonal destinations, one per neuron; and a 256×256 matrix representing the binary connections in the synaptic crossbar. In addition to the usual get() and set() methods, we provide methods to ensure that all values are compatible with TrueNorth. We also provide methods for assigning connectivity between neuron-axon pairs.
C. 连接器类/The Connector Class
每个corelet为其组成的每个神经突触核和子corelet分配一个唯一的局部标识符。核上的每个轴突和每个神经元都被分配一个相对于该核的唯一标识符。类似地,corelet的每个外部输出和每个外部输入都被分配一个相对于该corelet的唯一标识符。
Each corelet assigns a unique local identifier to each of its constituent neurosynaptic cores and sub-corelets. Each axon and each neuron on a core is assigned a unique identifier with respect to the core. Similarly, each external output of a corelet and each external input to a corelet is assigned a unique identifier with respect to the corelet.
我们用 [a, core] 来标识名为 core 的核上的第 a 个轴突,作为目的地址;用 [n, core] 来标识该核上的第 n 个神经元,作为源地址。类似地,我们将corelet C 上第 a 个外部输入的目的地址写成 [a, C],将corelet C 上第 n 个外部输出的源地址写成 [n, C]。
We identify the a-th axon on a core, named core, as a destination address by writing [a, core], and we identify the n-th neuron on core as a source address by writing [n, core]. Similarly, we write the destination address of the a-th external input on corelet C as [a, C] and the source address of the n-th external output on corelet C as [n, C].
假设我们有一组神经突触核与corelet。引脚可以是一个核或corelet的外部输入,也可以是其外部输出;它包含一个源地址和一个目的地址,以及每个地址对应的 true/false 状态,以指示其值是否已指定。在组合过程之前,引脚的源地址和目的地址是未知的。因此,我们将对应于核或corelet C 的第 p 个输入或输出的未初始化引脚标记为 ([p, C], [p, C]),并将引脚的状态设置为 (false, false)。从这个未初始化的状态开始,经过组合过程,每个引脚达到一个最终状态,使其源地址和目的地址被完全指定,例如 ([n, C1], [n, C2]),其状态为 (true, true)。
Suppose we have a set of neurosynaptic cores and corelets. A pin is either an external input or an external output of a core or a corelet; it holds a source address and a destination address, as well as a true or false state for each address that indicates whether its value has been specified. Before the composition process, the pin's source and destination addresses are not known. Therefore, we mark an uninitialized pin that corresponds to the p-th input or output of a core or corelet C as ([p, C], [p, C]), and set the pin's state to (false, false). Starting from this uninitialized state, via the composition process, each pin attains a final state such that its source and destination addresses are completely specified, for example ([n, C1], [n, C2]), and its state is (true, true).
为了将对应于核或corelet C 的第 p 个外部输入的未初始化引脚连接到核或corelet C2 的第 a 个输入,我们将该引脚的值更新为 ([p, C], [a, C2]),其状态更新为 (false, true);具体来说,我们更新了引脚的目的地址和目的状态。类似地,为了将对应于核或corelet C 的第 p 个外部输出的未初始化引脚连接到核或corelet C1 的第 n 个输出,我们将其值更新为 ([n, C1], [p, C]),其状态更新为 (true, false);具体来说,我们更新了引脚的源地址和源状态。
To connect an uninitialized pin that corresponds to the p-th external input of a core or a corelet C to the a-th input of core or corelet C2, we update the pin's value to ([p, C], [a, C2]) and its state to (false, true). Specifically, we update the pin's destination address and destination state. Similarly, to connect an uninitialized pin that corresponds to the p-th external output of a core or a corelet C to the n-th output of core or corelet C1, we update its value to ([n, C1], [p, C]) and its state to (true, false). Specifically, we update the pin's source address and source state.
给定一个形式为 ([n, C1], [p, C2])、状态为 (true, false) 的输出引脚 p(即其目的地址未知),以及一个形式为 ([q, C3], [a, C4])、状态为 (false, true) 的输入引脚 q(即其源地址未知),将二者配对的过程按如下方式更新状态:我们将引脚 p 的值更新为 ([n, C1], [q, C3]),将引脚 q 的值更新为 ([p, C2], [a, C4]),然后将两个引脚的状态都更新为 (true, true)。
Given an output pin $p$ of the form $([n, C_1], [p, C_2])$ with state (true, false), meaning that its destination address is unknown, and an input pin $q$ of the form $([q, C_3], [a, C_4])$ with state (false, true), meaning that its source address is unknown, the process of pairing them involves updating the states as follows: we update the value of pin $p$ to $([n, C_1], [q, C_3])$ and the value of pin $q$ to $([p, C_2], [a, C_4])$. Then we update the states of both pins to (true, true).
总之,只要一个引脚连接到一个目的地,它的目的地地址和状态就会更新。类似地,只要一个引脚连接到一个源,它的源地址和状态就会更新。当一个引脚连接到另一个引脚时,这些操作在两个引脚上串联发生。
To summarize, whenever a pin is connected to a destination, its destination address and state are updated. Similarly, whenever a pin is connected to a source, its source address and state are updated. When a pin connects to another pin, these operations happen in tandem on both pins.
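A minimal sketch of this pin state machine (in Python for illustration; the actual Corelet Language is MATLAB-based, and the names `Pin`, `set_source`, `set_destination`, and `pair` are hypothetical):

```python
class Pin:
    """A pin holds a (source, destination) address pair, plus a boolean state
    per address indicating whether that address has been specified."""

    def __init__(self, p, owner):
        # Uninitialized pin for the p-th input/output of core or corelet
        # `owner`: both addresses are self-referential placeholders.
        self.source = (p, owner)
        self.destination = (p, owner)
        self.state = (False, False)

    def set_source(self, n, c):
        # Connect the pin's source side, e.g. to the n-th output of C1.
        self.source = (n, c)
        self.state = (True, self.state[1])

    def set_destination(self, a, c):
        # Connect the pin's destination side, e.g. to the a-th input of C2.
        self.destination = (a, c)
        self.state = (self.state[0], True)


def pair(out_pin, in_pin):
    """Pair an output pin (destination unknown) with an input pin (source
    unknown): each adopts the other's placeholder address, and both pins'
    states become (True, True) -- updated in tandem, as described above."""
    out_pin.destination, in_pin.source = in_pin.source, out_pin.destination
    out_pin.state = (out_pin.state[0], True)
    in_pin.state = (True, in_pin.state[1])
```

The `pair` function swaps the two placeholder addresses simultaneously, mirroring the pairing update of pins $p$ and $q$ in the text.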
连接器类的属性由一个有序的引脚列表组成,其中每个引脚有两个地址的元组和一个如上定义的状态。连接器类提供了几种连接方法,用于将核心连接到连接器,以及将连接器连接到其他连接器,如图4所示。
The properties of the connector class consist of an ordered list of pins, where each pin has a tuple of two addresses and a state as defined above. The connector class offers several connectivity methods for connecting cores to connectors as well as for connecting connectors to other connectors, as shown in Fig. 4.
🖼️ 图4 说明了连接器和核心之间的三种连接模式,箭头表示脉冲流向。当这三种连接模式结合在一起时,可以创建任意的TrueNorth神经突触核心网络。(a)将两个连接器中的一些引脚的目的端连接到一个核心上的一些输入轴突。(b)将两个连接器中的一些引脚的源端连接到一个核上的一些输出神经元。(c)将连接器 $C_1$ 的目的端连接到连接器 $C_2$ 的源端(用Corelet语言写为`C1.busTo(C2,P)`,其中当置换向量`P`等于单位置换时可以省略)。
Fig. 4. The three connectivity patterns between connectors and cores are illustrated, with arrows indicating the flow of spikes. These three connectivity patterns, when combined, allow the creation of any network of TrueNorth neurosynaptic cores. (a) Connecting the destination side of some pins in two connectors to some input axons on a core. (b) Connecting the source side of some pins in two connectors to some output neurons on a core. (c) Connecting the destination side of connector $C_1$ to the source side of connector $C_2$ (written in Corelet Language as `C1.busTo(C2,P)`, where the permutation vector `P` may be omitted when equal to the identity permutation).
神经突触核之间的相互连通性可以被认为是神经元和轴突之间的二部图。对于 $N$ 个神经元和轴突,有 $N!$ 种可能的连接模式。这种复杂的神经元-轴突连接的直接规范是不规模化的,而且很容易出错。在这个上下文中,连接器是一个非常强大的原语,它提供了连接的语义表示。
The inter-connectivity between neurosynaptic cores can be thought of as a bipartite graph between neurons and axons. With $N$ neurons and axons, there are $N!$ possible connectivity patterns. Direct specification of such complex neuron-axon wiring does not scale and is highly error-prone. In this context, the connector is an extremely powerful primitive that provides a semantic representation of connectivity.
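How a connector might expose this semantic representation can be sketched as follows (a Python illustration under assumed names; the real `busTo` is a MATLAB method of the connector class):

```python
class Connector:
    """An ordered list of pins; bus_to pairs this connector's destination
    side with another connector's source side, optionally via a permutation,
    mirroring C1.busTo(C2, P) from the text."""

    def __init__(self, pins):
        self.pins = list(pins)

    def bus_to(self, other, P=None):
        # P[i] gives the index of the pin in `other` that pin i of `self`
        # connects to; omitted P defaults to the identity permutation.
        if P is None:
            P = range(len(self.pins))
        return [(self.pins[i], other.pins[j]) for i, j in enumerate(P)]
```

With the identity permutation, pin $i$ of `C1` pairs with pin $i$ of `C2`; a permutation vector `P` reroutes the pairing without enumerating individual neuron-axon wires.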
D. Corelet类/The Corelet Class
corelet类的属性包括子corelet、神经元、核心、输入连接器和输出连接器。虽然corelet的所有属性都是私有的,但其输入连接器的源地址和输出连接器的目标地址是公共的。为了方便组合,corelet类提供了以下方法:(a)递归地创建组成子corelet对象;(b)创建神经元类型;(c)创建组成核心对象;(d)创建输入输出连接器;以及(e)互连corelet内的连接器、核心和子corelet。由于corelet可以按层次结构组合,因此一些corelet方法被构建为递归遍历corelet树并对其进行操作。最后,corelet类提供了一种解耦方法,该方法递归地遍历corelet对象树以删除所有抽象层并生成TrueNorth程序。Corelet的组合与解耦如图6所示。如第五节后面所述,我们使用解耦方法以JSON标记语言输出模型文件。然后使用该文件在Compass仿真器上运行TrueNorth程序。
The properties of the corelet class comprise sub-corelets, neurons, cores, input connectors, and output connectors. While all properties of a corelet are private, the source address of its input connector and the destination addresses of its output connector are public. To facilitate composition, the corelet class offers methods for: (a) recursively creating constituent sub-corelet objects; (b) creating neuron types; (c) creating constituent core objects; (d) creating input and output connectors; and (e) interconnecting the connectors, cores, and sub-corelets within the corelet. Since corelets can be composed hierarchically, several of the corelet methods are built to recursively traverse corelet trees and operate on them. Finally, the corelet class offers a method for decomposition that recursively traverses the corelet object tree to remove all layers of abstraction and produce a TrueNorth program. Corelet composition and decomposition are illustrated in Fig. 6. As described later in Sec. V, we use the decomposition method to output a model file in a JSON mark-up language. The file is then used to run the TrueNorth program on the Compass simulator.
🖼️ 图6 详细说明了Corelet MCR构造的TrueNorth程序中将目标地址 $[a_2, core_2]$ 分配给神经元 $[n_1, core_1]$ 的组合与解耦过程。组合发生在递归构造corelet期间。Corelet MCR首先创建Corelet LSM,后者再创建Corelet L和S(为清晰起见,未显示S)。Corelet L创建时,将输出神经元 $[n_1, core_1]$ 连接到引脚 $[p_1, C_1]$。接下来,Corelet LSM将连接器 $C_1$ 连接到 $C_2$,从而连接 $[p_1, C_1]$ 和 $[p_2, C_2]$。类似地,Corelet C在构造时,将 $[p_4, C_4]$ 连接到输入轴突 $[a_2, core_2]$。接下来,Corelet SC将 $[p_4, C_4]$ 与 $[p_3, C_3]$ 连接。最后,在Corelet LSM和SC构造并组成后,Corelet MCR将连接器 $C_2$ 与连接器 $C_3$ 连接起来,从而将 $[p_2, C_2]$ 与 $[p_3, C_3]$ 连接起来,完成神经元 $[n_1, core_1]$ 与轴突 $[a_2, core_2]$ 之间的双向链接路径,该路径由沿路径的四个连接器中的四个引脚捕获。随后的递归解耦过程以相同的顺序遍历corelet。在每个corelet上,它通过直接连接其源地址引用的引脚和目的地址引用的引脚,将自己的引脚从路径中删除,从而将路径长度减少1。当解耦过程完成时,所有连接器引脚都已从路径上移除,源神经元 $[n_1, core_1]$ 直接连接到其目标轴突 $[a_2, core_2]$,作为TrueNorth程序的一部分。
Fig. 6. The processes of composition and decomposition are illustrated in detail for the assignment of the destination address $[a_2, core_2]$ to neuron $[n_1, core_1]$ in the TrueNorth program constructed by the Corelet MCR. Composition takes place during the recursive corelet construction. Corelet MCR starts by creating Corelet LSM, which in turn creates Corelets L and S (S not shown for clarity). Corelet L, when created, connects output neuron $[n_1, core_1]$ to pin $[p_1, C_1]$. Next, Corelet LSM connects connector $C_1$ to $C_2$, thereby connecting $[p_1, C_1]$ and $[p_2, C_2]$. Similarly, Corelet C, when constructed, connects $[p_4, C_4]$ to input axon $[a_2, core_2]$. Next, Corelet SC connects $[p_4, C_4]$ with $[p_3, C_3]$. Finally, after both Corelets LSM and SC are constructed and composed, Corelet MCR connects connector $C_2$ with connector $C_3$, thereby connecting $[p_2, C_2]$ with $[p_3, C_3]$ and completing the doubly linked path between neuron $[n_1, core_1]$ and axon $[a_2, core_2]$, captured by the four pins in the four connectors along the path. The subsequent recursive decomposition process traverses the corelets in the same order. At each corelet, it removes its own pin from the path by directly connecting the pin referenced by its source address with the pin referenced by its destination address, hence reducing the path length by one. When the decomposition process completes, all connector pins have been removed from the path and source neuron $[n_1, core_1]$ is connected directly to its destination axon $[a_2, core_2]$, as part of the TrueNorth program.
关键的设计标准是确保所有的TrueNorth程序都可以用corelet语言表达,相反,该语言允许的任何corelet都与TrueNorth兼容。换句话说,语言应该是完整的,并且与TrueNorth体系结构保持一致。为此,我们确定了一组不变量:(a) corelet具有所有组成核心、神经元和突触交叉条的完全指定与配置;(b)corelet的所有核心、子corelet与连接器完全连接;(c)一个corelet中的每个神经元要么被分配到轴突的目标地址,要么被定义为断开连接,要么连接到输出连接器中的引脚;(d)在corelet内的每个轴突要么连接到一个源神经元,被定义为断开,或连接到一个输入连接器的引脚。考虑到corelet的递归性质,所有这些不变量都是递归地、分层地断言和执行的。
The key design criterion is to ensure that all TrueNorth programs are expressible in the corelet language and, conversely, that any corelet the language allows is TrueNorth compatible. In other words, the language should be complete and consistent with respect to the TrueNorth architecture. To this end, we have identified a set of invariants: (a) a corelet has all its constituent cores, neurons, and synaptic crossbars fully specified and configured; (b) a corelet has all its cores, sub-corelets, and connectors fully connected; (c) each neuron in a corelet is either assigned the destination address of an axon, defined as disconnected, or connected to a pin in an output connector; (d) each axon in a corelet is either connected to a source neuron, defined as disconnected, or connected to a pin in an input connector. Given the recursive nature of a corelet, all these invariants are recursively and hierarchically asserted and enforced throughout.
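A sketch of how invariants (c) and (d) might be asserted recursively (Python, with a hypothetical dictionary representation of a corelet; the real verification is a method of the corelet class):

```python
def verify(corelet):
    """Recursively assert invariants (c) and (d): every neuron and every axon
    is assigned, explicitly disconnected, or wired to a connector pin."""
    for neuron in corelet["neurons"]:
        # invariant (c): destination axon, disconnected, or output-connector pin
        assert neuron["destination"] in ("axon", "disconnected", "output_pin"), \
            "invariant (c) violated"
    for axon in corelet["axons"]:
        # invariant (d): source neuron, disconnected, or input-connector pin
        assert axon["source"] in ("neuron", "disconnected", "input_pin"), \
            "invariant (d) violated"
    for sub in corelet.get("subcorelets", []):
        verify(sub)   # invariants are asserted hierarchically
    return True
```

A corelet that passes such checks is guaranteed to flatten into a well-formed network of neurosynaptic cores.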
E. 组合与解耦/Composition and Decomposition
核心及其子corelet之间的连通性是通过组合和解耦两个阶段构建的。组合过程在系统中的每个源神经元与其相应的目标轴突之间创建了一个双重链接路径,同时遵循并保留corelet封装。解耦的过程(逻辑上与组合相反)将路径上的所有间接层逐一移除,最终用源神经元和目标轴突之间的直接链接取代整个路径。这一过程创建了一个TrueNorth程序,该程序以直接连接的神经突触核心网络表示,没有任何封装。这两个过程如图6所示,使用我们将在第五节中全面开发的一个例子。
The connectivity within the corelet and its sub-corelets is constructed in two phases by the process of composition and decomposition. The process of composition creates a doubly linked path between each source neuron in the system and its corresponding destination axon — while following and preserving corelet encapsulation. The process of decomposition (a logical inverse of composition) removes all layers of indirection along the path one by one and eventually replaces the entire path with a direct link between the source neuron and its target axon. This process creates a TrueNorth program, expressed in terms of a directly connected network of neurosynaptic cores without any encapsulation. The two processes are illustrated in Fig. 6, using an example that we will fully develop in Sec. V.
组合的递归过程发生在递归构造corelet的过程中。如图所示,它在源神经元 $[n_1, core_1]$ 和目标轴突 $[a_2, core_2]$ 之间创建了一条双向链接路径,可以用路径上的引脚列表来描述,即 $[n_1, core_1]\rightleftarrows[p_1, C_1]\rightleftarrows[p_2, C_2]\rightleftarrows[p_3, C_3]\rightleftarrows[p_4, C_4]\rightleftarrows[a_2, core_2]$。请注意,列表中的每个引脚都链接到它左边和右边的引脚,因此在遍历多个corelet并保持封装的同时,创建了一条双重链接路径。可以看到,在一个corelet中,由于封装,只有它的子corelet可以彼此连接。在组合过程结束时,所有源神经元都通过路径连接到相应的目标轴突。
The recursive process of composition takes place during the recursive corelet construction process. As seen in the figure, it creates a doubly linked path between the source neuron $[n_1, core_1]$ and the destination axon $[a_2, core_2]$, which can be described by the list of pins along the path, namely $[n_1, core_1]\rightleftarrows[p_1, C_1]\rightleftarrows[p_2, C_2]\rightleftarrows[p_3, C_3]\rightleftarrows[p_4, C_4]\rightleftarrows[a_2, core_2]$. Note that each pin along the list is linked to the pin on its left and the pin on its right, therefore creating a doubly linked path, while traversing through multiple corelets and preserving encapsulation. It can be seen that within a corelet, due to encapsulation, only its sub-corelets can connect with one another. At the end of the composition process, all source neurons are connected via paths to their corresponding destination axons.
解耦的递归过程遍历已组合的corelet,从下往上处理(深度优先顺序),每次从路径中删除一个引脚。它首先在Corelet L中操作,通过将 $[n_1, core_1]$ 连接到 $[p_2, C_2]$ 并将其核传递给其父Corelet LSM,从路径中删除了 $[p_1, C_1]$。结果,路径变为 $[n_1, core_1]\rightleftarrows[p_2, C_2]\rightleftarrows[p_3, C_3]\rightleftarrows[p_4, C_4]\rightleftarrows[a_2, core_2]$,Corelet L的封装被移除。这个过程一直持续到所有的核都传递到Corelet MCR,剩下的路径是直接连接 $[n_1, core_1]\rightleftarrows[a_2, core_2]$。
The recursive process of decomposition traverses the composed corelets, processing from the bottom up (depth-first order), removing one pin from the path at a time. Starting in Corelet L, it removes $[p_1, C_1]$ from the path by connecting $[n_1, core_1]$ to $[p_2, C_2]$ and passing its cores to its parent, Corelet LSM. As a result, the path becomes $[n_1, core_1]\rightleftarrows[p_2, C_2]\rightleftarrows[p_3, C_3]\rightleftarrows[p_4, C_4]\rightleftarrows[a_2, core_2]$, and the encapsulation of Corelet L is removed. This process continues until all the cores are passed to Corelet MCR and the remaining path is the direct connection $[n_1, core_1]\rightleftarrows[a_2, core_2]$.
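The pin-removal step behaves like splicing interior nodes out of a doubly linked list. A small sketch (Python; the list-of-strings path representation is only illustrative):

```python
def decompose(path):
    """Given the doubly linked path [neuron, pin_1, ..., pin_k, axon],
    remove one connector pin at a time (bottom-up) and record each
    intermediate path, ending with the direct neuron-to-axon link."""
    path = list(path)
    steps = [list(path)]
    while len(path) > 2:
        path.pop(1)              # splice the bottom-most pin out of the path
        steps.append(list(path)) # path length shrinks by one per step
    return steps
```

Applied to the six-element path in the text, the first step reproduces the intermediate path with $[p_1, C_1]$ removed, and the final step leaves the direct neuron-to-axon link.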
IV Corelet库/Corelet Library
Corelet库是组成新系统和新corelet的基础(图7)。它包含了所有可用的corelet,并随着更多corelet的开发而不断增长。库中的任何corelet都可以用于构建新系统,然后新系统本身可以以新的corelet的形式添加回库中,从而以自底向上的方式增长和丰富库。在我们开始使用Corelet语言不到一年的时间里,团队已经开发了100多个corelet。
The Corelet Library is the foundation from which new systems and new corelets are composed (Fig. 7). It contains all the corelets that are available, and continuously grows as more corelets are developed. Any corelet from the library can be used for building a new system, and the new system itself can then be added back into the library in the form of a new corelet, thereby growing and enriching the library in a bottom-up fashion. In less than a year since we started to use the Corelet Language, the team has developed more than 100 corelets.
🖼️ 图7 Corelet实验室。我们描述了使用所有工具的完整开发周期。Corelet库是由开发人员创建的所有corelet的分层库。corelet开发过程通常包括创建和继承Corelet库,将子corelet组合为新的corelet,并在验证后将新的corelet提交到库中。然后,新的corelet就可以供其他应用程序使用,从而允许组合。具体应用程序是通过从创建的corelet类实例化对象来创建的。然后将实例化的对象解耦为表示TrueNorth程序的网络模型文件。这反过来又用于设置Compass仿真器。外部输入和输出连接器分别用于创建输入和输出映射文件,以显示应该将感官输入定位在哪里以及从哪里接收输出。在执行过程中,仿真器接收传感器转导的输入脉冲,并生成可用于解释分类、驱动执行器或创建可视化的输出脉冲。
Fig. 7. The Corelet Laboratory. We depict the complete development cycle with all the tools. The Corelet Library is a hierarchical library of all corelets created by developers. The Corelet development process typically involves creating and inheriting from the Corelet Library, composing sub-corelets together into new corelets, and submitting the new corelet into the library after verification. The new corelet then becomes available for use by other applications, thus allowing composition. A concrete application is created by instantiating objects from the created corelet class. The instantiated object is then decomposed into a network model file that represents a TrueNorth program. This, in turn, is used to set up the Compass simulator. The external input and output connectors are used, respectively, to create input and output map files that show where sensory input should be targeted and from where the output is to be received. During execution, the simulator receives sensor-transduced input spikes and generates output spikes that can be used to interpret classification, drive actuators, or create visualizations.
所有的corelet都是corelet基类的子类,所以corelet库也是一个子类树。一个通用corelet,如线性滤波器corelet,可以有诸如Gabor滤波器corelet、非线性滤波器和递归IIR滤波器等子类。在我们的配套论文中,我们详细描述了使用这种层次结构方法构建功能系统[10]的几个应用程序。
All corelets are sub-classes of the corelet base class, so the Corelet Library is also a tree of sub-classes. A general-purpose corelet, such as a linear filter corelet, can then have sub-classes such as a Gabor filter corelet, non-linear filters, and recursive IIR filters. In our companion paper, we describe in detail several applications that use this hierarchical approach to build functional systems [10].
目前在Corelet库中的corelet包括标量函数、代数、逻辑和时间函数、分配器、聚合器、多路复用器、线性滤波器、核卷积(1D、2D和3D数据)、有限状态机、非线性滤波器、递归时空滤波器、运动检测、光流、显著性检测器和注意力电路、颜色分割、离散傅里叶变换、线性和非线性分类器、受限玻尔兹曼机、液态机器等。corelet抽象和统一的接口使开发人员能够轻松地选择不同实现以替换库中的corelet,而不会干扰系统的其余部分。
The corelets currently in the Corelet Library include scalar functions, algebraic, logical, and temporal functions, splitters, aggregators, multiplexers, linear filters, kernel convolution (1D, 2D and 3D data), finite-state machines, non-linear filters, recursive spatio-temporal filters, motion detection, optical flow, saliency detectors and attention circuits, color segmentation, a Discrete Fourier Transform, linear and non-linear classifiers, a Restricted Boltzmann Machine, a Liquid State Machine, and more. The corelet abstraction and unified interfaces enable developers to easily replace a library corelet with an alternative implementation without disrupting the rest of the system.
V Corelet实验室/Corelet Laboratory
现在,我们把所有的部分都放在一起,我们展示了一个端到端的Corelet实验室,这个编程环境集成了TrueNorth架构仿真器Compass[3],并支持从设计、开发、调试到部署的corelet编程周期的所有方面,如图7所示。
Now that we have all the pieces together, we present an end-to-end Corelet Laboratory, the programming environment that integrates with the TrueNorth architectural simulator, Compass [3], and supports all aspects of the corelet programming cycle from design, through development, debugging, and into deployment, as shown in Fig. 7.
A. 示例应用程序:音乐作曲家识别/Sample Application: Music Composer Recognition
我们现在用一个具体的应用来说明Corelet实验室。给定巴赫或贝多芬的乐谱,考虑确定乐曲作曲家的问题。我们采用液体状态机[14]与分级分类器结合使用的方法。在本文中,我们的重点不是应用程序本身,而是用于创建应用程序的编程范式。对该应用程序的更多细节感兴趣的读者可以参考配套论文[10]。
We now illustrate the Corelet Laboratory using a concrete application. Given a musical score by Bach or Beethoven, consider the problem of identifying the music piece’s composer. We adopt the approach of using a liquid state machine [14] in conjunction with a hierarchical classifier. In this paper, our focus is not so much the application itself but the programming paradigm used to create it. The reader interested in more details about the application can consult the companion paper [10].
为该应用开发的corelet名为corelet MCR,如图8所示。该图说明了corelet的层次结构、corelet组合、封装、模块化和代码可重用性在创建复杂的认知系统时是如何有用的。
The corelet developed for the application, named Corelet MCR, is illustrated in Fig. 8. The figure illustrates how the hierarchical nature of corelets, corelet composition, encapsulation, modularity, and code reusability are useful in creating complex cognitive systems.
🖼️ 图8 音乐作曲家识别系统的corelet图,这里提供了一个Corelet语言“Hello World”示例。应用程序被编写为corelet类:MCR。它在层次上由两个子集组成;一个参数化液体状态机(LSM)核心和一个可堆叠分类器(SC)核心。Corelet LSM依次有两个子corelet:Corelets Liquid与Splitter。同样,Corelet SC由一个分层分类器组成。
Fig. 8. A corelet diagram of the Music Composer Recognition system, provided here as a Corelet Language "Hello World" example. The application is written as a corelet class: MCR. It is hierarchically composed of two sub-corelets: a parametric Liquid State Machine (LSM) corelet and a Stackable Classifier (SC) corelet. The Corelet LSM has, in turn, two sub-corelets: Corelets Liquid and Splitter. Likewise, the Corelet SC is composed of a hierarchical classifier.
B. Corelet组合与解耦/Corelet Composition and Decomposition
构建Corelet MCR的代码如清单1所示。注意,它的子corelet的创建使用了两个递归的corelet实例化调用,参数以自顶向下的顺序从父corelet传递给子corelet。在这些调用完成后,以自底向上的顺序处理连接性(首先是子代码集,然后是父代码集)。递归corelet构造器调用异构地应用于所有corelet类。Corelet MCR构造函数不创建任何核心或神经元,而是由它的子corelet创建。可以观察到Corelet MCR的开发人员不需要担心其组成部分Corelet LSM和Corelet SC之间的结构或连通性。
The code for constructing the Corelet MCR is illustrated in Listing 1. Notice that the creation of its sub-corelets uses two recursive corelet instantiation calls, with parameters being passed to the sub-corelets in a top-down order from the parent corelet to the children. Connectivity is handled after these calls are completed, in a bottom-up order (first the sub-corelets are interconnected, then the parent). Recursive corelet constructor calls apply heterogeneously to all corelet classes. The Corelet MCR constructor does not create any cores or neurons; rather, these are created by its sub-corelets. Observe that the developer of the Corelet MCR does not need to worry about either the construction or the connectivity within its constituent Corelet LSM and Corelet SC.
📊 清单1。用于音乐作曲家识别的Corelet MCR的构造函数,这是图8中描述的示例应用程序。`MCR`类派生自corelet类(第1行)。构造函数是一个MATLAB函数,接收四个参数并返回一个指向所构造corelet对象`obj`的句柄(引用)。它通过使用适当的参数递归调用相应的corelet构造函数来创建它的两个子corelet `lsm`和`sc`。每个子corelet调用都返回它所创建的corelet对象的句柄。在第13行,两个子corelet句柄存储在`obj.subcorelets`中,使其成为1×2的corelet数组。接下来,Corelet MCR创建自己的输入和输出连接器(第15-16行)。最后,在第18-20行中,它通过创建图8中Corelet MCR内部的三个小箭头所标记的连接来组合它的子corelet。使用连接器的`busTo()`方法,它将自己的输入连接器连接到`lsm`的输入;`lsm`的输出连接到`sc`的输入;`sc`的输出连接到它自己的输出。组合与解耦的过程如图6所示。
Listing 1. The constructor of the Corelet MCR for Music Composer Recognition, the sample application described in Fig. 8. The `MCR` class is derived from the corelet class (Line 1). The constructor, a MATLAB function, receives four parameters and returns a handle (reference) to the constructed corelet object, `obj`. It creates its two sub-corelets, `lsm` and `sc`, by recursively invoking their corresponding corelet constructors with the appropriate parameters. Each sub-corelet call returns a handle to the corelet object it created. In Line 13 the two sub-corelet handles are stored in `obj.subcorelets`, making it a 1×2 array of corelets. Next, the Corelet MCR creates its own input and output connectors (Lines 15-16). Finally, in Lines 18-20 it composes its sub-corelets by creating the connections which are marked by the three little arrows inside the Corelet MCR in Fig. 8. Using the connector's `busTo()` method, it connects its own input connector to the input of `lsm`; the output of `lsm` to the input of `sc`; and the output of `sc` to its own output. The processes of composition and decomposition are further described in Fig. 6.
C. TrueNorth程序的创建/Creation of TrueNorth Program
Corelet实验室提供了使用TrueNorth仿真器Compass实例化、组合和执行TrueNorth程序的功能。清单2演示了将Corelet MCR(如图8所示)实例化为`app`并运行它所需的伪代码。首先,通过调用它的构造函数(第2行)来创建corelet。`app`的外部输入和输出接口在第3-4行中声明。对corelet进行验证(第5行),以确保它的模型对于TrueNorth体系结构是完整且有效的。验证阶段成功后,使用modelGen方法通过解耦创建网络模型文件(第6行),生成JSON格式的模型文件,描述神经突触核的网络,即TrueNorth程序。
The Corelet Laboratory provides functionality to instantiate, compose, and execute TrueNorth programs with the TrueNorth simulator, Compass. The pseudocode required to instantiate the Corelet MCR (shown in Fig. 8) as `app` and run it is illustrated in Listing 2. First, the corelet is created by invoking its constructor (Line 2). The external input and output interfaces of `app` are declared in Lines 3-4. The corelet is verified (Line 5) to ensure that its model is complete and valid for the TrueNorth architecture. When the verification stage is successful, the network model file is created by the modelGen method via decomposition (Line 6), generating a model file in JSON format that describes the network of neurosynaptic cores, that is, a TrueNorth program.
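What the modelGen step produces is a JSON description of the flattened core network. A hypothetical illustration (the field names below are invented; the actual Compass model-file schema is not specified here):

```python
import json

def model_gen(cores):
    """Serialize a decomposed (flat) network of cores to JSON. After
    decomposition, every neuron carries a direct destination address,
    so no connector or corelet structure remains in the output."""
    model = {"cores": []}
    for core in cores:
        model["cores"].append({
            "id": core["id"],
            "neurons": [{"index": n["index"], "destination": n["destination"]}
                        for n in core["neurons"]],
        })
    return json.dumps(model)
```

The resulting string can be written to a `.json` file and handed to the simulator together with a configuration file and an input spike file.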
📊 清单2。corelet生成并运行脚本,从示例corelet MCR的实例化开始(如图8所示),验证,生成网络模型文件,I/O映射文件和脉冲输入文件,使用输入脉冲运行网络,并读取和可视化输出脉冲。
Listing 2. The corelet generation and running script, starting from instantiation of the sample Corelet MCR (described in Fig. 8), verification, generation of a network model file, I/O map files and spikes input files, running the network with the input spikes, and reading and visualizing the output spikes.
D. 输入与输出映射文件/Input and Output Map Files
输入和输出映射通过提供与外部输入和输出连接器相连的目标轴突和源神经元的查找表,定义了TrueNorth程序的外部接口。外部输入轴突可以由感官输入提供,来自神经元的外部输出可以向执行器或其他设备发送脉冲。清单2中的第7-8行展示了Corelet实验室如何为`app`生成输入和输出映射文件。当构建`app`时,它的输入和输出作为系统的外部接口。这些连接器被设置为外部连接器,并保存为网络的输入和输出映射文件。
Input and output maps define the external interfaces of the TrueNorth program by providing look-up tables of the destination axons and the source neurons connected to external input and output connectors. External input axons can be fed by sensory input, and external output from neurons can send spikes to actuators or other devices. Lines 7-8 in Listing 2 show how the Corelet Laboratory enables generation of input and output map files for `app`. When `app` is built, its input and output serve as the external interfaces to the system. These connectors are set as external connectors and are saved as the input and output map files of the network.
E. 转导与仿真/Transduction and Simulation
当corelet执行时,将调用功能函数`videoToSpikes`(第9行),从指定的视频文件生成输入脉冲文件。输入视频中各个像素的灰度被转换为脉冲。像素的脉冲被映射到一个核心-轴突元组,对应于输入映射文件中定义的输入连接器的特定引脚(第9行)。将数据转换为脉冲的过程称为转导。之后,生成的脉冲存储在输入脉冲文件中。典型脉冲文件的光栅图如图9所示。
When the corelet is executed, the utility function `videoToSpikes` (Line 9) is called to generate an input spike file from an indicated video file. The individual pixel gray levels from the input video are converted to spikes. The spikes of a pixel are mapped to a core-axon tuple, corresponding to a specific pin of the input connector defined in the input map file (Line 9). This process of converting data to spikes is called transduction. The generated spikes are then stored in an input spike file. A raster of a typical spike file is shown in Fig. 9.
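One simple transduction scheme consistent with this description is rate coding, where brighter pixels emit spikes more often. The encoding below is an assumption for illustration; the text does not specify which scheme `videoToSpikes` uses:

```python
def transduce(gray_levels, ticks):
    """Convert per-pixel gray levels in [0, 1] into (tick, pixel) spike events
    via rate coding: a pixel spikes whenever its accumulated intensity
    crosses a unit threshold. Each pixel index would later be mapped to a
    (core, axon) tuple through the input map file."""
    spikes = []
    acc = [0.0] * len(gray_levels)
    for t in range(ticks):
        for i, g in enumerate(gray_levels):
            acc[i] += g
            if acc[i] >= 1.0:
                acc[i] -= 1.0
                spikes.append((t, i))
    return spikes
```

A fully bright pixel (1.0) spikes every tick, a half-bright pixel every other tick, and a black pixel never, so the spike rate tracks intensity.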
🖼️ 图9 五层TrueNorth测试程序的脉冲光栅,每层由一个种子corelet组成。每个神经突触核心的256个神经元的输出脉冲沿y轴堆叠。脉冲时间沿x轴(1 tick = 1 ms)。
Fig. 9. A spike raster of a five-layer TrueNorth test program, with each layer composed of one seed corelet. Output spikes from the 256 neurons in each neurosynaptic core are stacked along the y-axis. Spike time is along the x-axis (1 tick = 1 ms).
使用Compass[3]仿真器在MATLAB环境之外仿真模型文件及其输入脉冲。仿真器需要三个参数:一个配置文件(app.conf),一个模型文件(app.json)与一个输入脉冲文件(v1.sfbi)。它用输入脉冲仿真模型操作,并生成一个输出脉冲文件。
Simulation of the model file with its input spikes occurs outside of the MATLAB environment using the Compass [3] simulator. The simulator expects three arguments: a configuration file (app.conf), a model file (app.json), and an input spike file (v1.sfbi). It simulates the model operation with the input spikes and produces an output spike file.
仿真完成后,一个功能函数将生成的脉冲加载到一个数组`s`中(第11行),在那里可以绘制或分析它们。`read_spike_file`函数使用输出映射来翻译脉冲地址。最后,`s`中的脉冲被可视化为视频。
After the simulation completes, a utility function loads the generated spikes into an array `s` (Line 11), where they can be plotted or analyzed. The `read_spike_file` function uses the output map to interpret the spike addresses. Finally, the spikes in `s` are visualized as a video.
VI 相关工作/Related Work
标记语言(NeuroML[15])、模拟器独立语言(PyNN[16],[17])和特定领域语言(OptiML[18][19])已被开发用于软件中的神经建模和仿真,独立于任何神经突触硬件。相比之下,corelet软件与底层TrueNorth硬件架构相关联。在这种情况下,corelet编程范型在硬件兼容性、并发性、封装和组合方面与硬件描述语言(VHDL和Verilog[20])有相似之处。
Mark-up languages (NeuroML [15]), simulator-independent languages (PyNN [16], [17]), and domain-specific languages (OptiML [18], [19]) have been developed for neural modeling and simulation in software, independent of any neurosynaptic hardware. In contrast, the corelet software is tied to the underlying TrueNorth hardware architecture. In this context, the corelet programming paradigm has similarities with Hardware Description Languages (VHDL and Verilog [20]) in terms of hardware compatibility, concurrency, encapsulation, and composition.
排队Petri网建模环境[21]是一个使用Petri网建模、仿真和分析过程的系统。它包括一个用于编辑和组合层次模型的图形用户界面。然而,图形编辑器对于典型TrueNorth程序的规模和连接复杂性的网络可能是无效的。最近的连接集代数是一种高级语言,通过使用集合代数和矩阵操作来创建连接列表和邻接矩阵来指定神经元组[22]之间的连接模式。与此类似,corelet编程范例为远距离“白质”连接使用连接器和排列,为短距离“灰质”连接使用突触交叉矩阵,并利用MATLAB丰富的代数与矩阵运算符。
Queueing Petri Net Modeling Environment[21] is a system for modeling, simulation and analysis of processes using Petri nets. It includes a graphical user interface for editing and composition of hierarchical models. However, a graphical editor might be ineffective for networks of the size and connectivity complexity of typical TrueNorth programs. The recent Connection-Set Algebra is a high level language for specifying connectivity patterns between groups of neurons [22] by using set algebra and matrix operations to create connectivity lists and adjacency matrices. In a similar vein, the corelet programming paradigm employs connectors and permutations for long-distance “white-matter” connectivity and synaptic crossbar matrices for short-distance “gray-matter” connectivity, and leverages MATLAB’s rich algebraic and matrix operators.
在[23]中提出了一种表达神经网络固有的大规模并行性的语言。网络被转换成一个抽象的内部表示,然后映射到多个冯·诺伊曼处理器上,这与corelet编程范式不同,在corelet编程范式中,网络被映射到固有的并行TrueNorth架构。最后,C*[24]在1988年被开发出来,作为C语言的扩展,用于构建非冯·诺依曼架构的连接机[25]的应用程序。
A language for expressing the inherent massive parallelism of neural networks was presented in [23]. The network is converted into an abstract internal representation and then mapped onto multiple von Neumann processors, which differs from the corelet programming paradigm, where the network is mapped to the inherently parallel TrueNorth architecture. Finally, C* [24] was developed in 1988, as an extension to C, to build applications for the Connection Machine [25], a non-von Neumann architecture.
VII 结论/Conclusion
为冯·诺依曼开发的线性顺序编程范式完全不适合TrueNorth并行认知架构。有效的思维工具将认知上复杂的任务转化为人类能够擅长的简单任务。这个过程包括选择合适的隐喻来支撑思维,并利用语言的表达能力来有效地将所需的计算转换为高效的可执行程序。
The linear sequential programming paradigm developed for von Neumann is wholly unsuited for the TrueNorth parallel cognitive architecture. Effective tools for thinking transform cognitively complex tasks into simpler ones at which humans can excel [26]. This process involves choosing the appropriate metaphors to scaffold thinking and harnessing the expressive capacity of the language to effectively translate a desired computation into an efficient, executable program.
因此,本文从第一准则出发,定义了TrueNorth程序的一个新的隐喻,即corelets,如图10所示。利用这一概念,我们开发了一种全新的编程范式,可以构建复杂的认知算法和应用程序,同时对TrueNorth和程序员的生产力有效。该范例由一种语言组成,用于表达corelet;corelet库;以及corelet实验室,用来进行corelet实验。
Therefore, starting from first principles, in this paper we have defined a novel metaphor for a TrueNorth program, namely, corelets; see Fig. 10. Leveraging this notion, we have developed an entirely new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity. The paradigm consists of a language for expressing corelets; a library of corelets; and a laboratory for experimenting with corelets.
🖼️ 图10 Corelet是TrueNorth程序的抽象,由子corelet(其他corelet)分层组成,同时确保与TrueNorth体系结构相关的正确性、一致性和完整性。
Fig. 10. A corelet is an abstraction of a TrueNorth program and is composed hierarchically from sub-corelets (other corelets) while ensuring correctness, consistency, and completeness with respect to the TrueNorth architecture.
从哲学上讲,corelet编程范式与Gottlob Frege的组合原则[27]相似:
Philosophically, the corelet programming paradigm has affinity with Gottlob Frege’s Principle of Compositionality [27]:
复杂表达式的意义是其组成部分的意义及其组合方式的函数。
The meaning of a complex expression is a function of the meanings of its constituents and the way they are combined.
Corelet库是一个积累知识和智慧的仓库,可以反复使用。新的corelet被编写并添加到库中,库以一种自我强化的方式不断增长。基于corelet的组合性和我们关于Corelet库组合增长的经验,我们为TrueNorth的软件能力 $C_S$ 设定了一个经验法则:
The Corelet Library is a store of accumulated knowledge and wisdom that can be repeatedly re-used. New corelets are written and added to the library, which keeps continually growing in a self-reinforcing way. Based on the compositionality of corelets and on our experience regarding the combinatorial growth of the Corelet Library, we posit an empirical law for TrueNorth's software capability $C_S$:
$$C_S \propto L^{\lambda}$$
其中 $L$ 是库的大小,$\lambda > 1$ 是常数。
where $L$ is the size of the library and $\lambda > 1$ is a constant.
在这里,我们重点介绍了编程范式下的基本概念,并以最简单的形式介绍了它们,以帮助理解。然而,Corelet语言支持强大的原语,例如参数化corelet可以在运行时从单个corelet类实例化丰富的corelet对象,而元corelet可以在其他corelet上操作,从而紧凑地创建大规模且强大的TrueNorth程序。为了实现大规模的TrueNorth程序,我们目前正在使用MATLAB的并行计算工具箱扩展编程范例。
Here, we have focused on essential concepts underlying the programming paradigm and presented them in their simplest form to aid understandability. However, the Corelet Language supports powerful primitives, such as parametric corelets that can instantiate a rich variety of corelet objects at run-time from a single corelet class, and meta-corelets that operate on other corelets to compactly create extremely large and powerful TrueNorth programs. With a view towards large-scale TrueNorth programs, we are currently extending the programming paradigm using MATLAB's Parallel Computing Toolbox.
当提到冯·诺依曼瓶颈时,约翰·巴克斯在1972年的图灵讲座[28]中说:“这是一个智力瓶颈,它使我们被逐字思考所束缚,而不是鼓励我们从手头任务的更大的概念单位来思考”。我们现在生活在一个仪器化的世界,被来自传感器的数据海啸所淹没。大多数数据本质上是并行的,非常适合TrueNorth进行并行处理。然而,要发明有效的算法和应用程序,我们需要从长期的、顺序的冯·诺依曼思维转向短期的、并行的思维。我们相信,corelet编程范式是捕捉复杂的并行思维和组成复杂认知系统的正确范式。
When referring to the von Neumann bottleneck, John Backus, in his 1972 Turing Lecture [28] said: “it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand.” We now live in an instrumented world that is inundated with a tsunami of data from sensors. Most of this data is parallel in nature and is ideal for parallel processing by TrueNorth. However, to invent effective algorithms and applications, we need to move away from long, sequential von Neumann thinking to short, parallel thinking. We believe that the corelet programming paradigm is the right paradigm for capturing complex, parallel thinking and for composing complex cognitive systems.
在这一点上,值得区分两种不同的树。由子corelet组成的corelet层次结构不能与corelet类的层次结构相混淆。前者是corelet对象的树,在执行corelet代码时在计算机内存中形成。每个corelet对象都保留其子corelet的句柄(引用,如指针),因此形成了树。后者是一个类的树,子类从父类继承属性和方法,这些类共同构成了Corelet库。