Paper Title
Accelerated Analog Neuromorphic Computing
Paper Authors
Paper Abstract
This paper presents the concepts behind the BrainScaleS (BSS) accelerated analog neuromorphic computing architecture. It describes the second-generation BrainScaleS-2 (BSS-2) version and its most recent silicon realization, the HICANN-X Application-Specific Integrated Circuit (ASIC), as developed within the neuromorphic computing activities of the European Human Brain Project (HBP). While the first generation was implemented in a 180 nm process, the second generation uses 65 nm technology. This allows the integration of a digital plasticity processing unit, a highly parallel microprocessor built specifically for the computational needs of learning in an accelerated analog neuromorphic system. The presented architecture is based on a continuous-time, analog, physical-model implementation of neurons and synapses, resembling an analog neuromorphic accelerator attached to built-in digital compute cores. While the analog part emulates the spike-based dynamics of the neural network in continuous time, the digital cores simulate biological processes that happen on a slower time scale, such as structural and parameter changes. Compared to biological time scales, the emulation is highly accelerated, i.e., all time constants are several orders of magnitude smaller than in biology. Programmable ion-channel emulation and inter-compartmental conductances allow the modeling of nonlinear dendrites, back-propagating action potentials, and NMDA and calcium plateau potentials. To extend the usability of the analog accelerator, it also supports vector-matrix multiplication. BSS-2 thereby supports inference of deep convolutional networks as well as local learning with complex ensembles of spiking neurons within the same substrate.
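To illustrate two points from the abstract, accelerated emulation and the vector-matrix multiplication mode, the following Python sketch is a minimal, hypothetical model. It assumes a nominal acceleration factor of 1e3 (the abstract only states "several orders of magnitude") and small signed integer weights as a stand-in for the analog synapse array; it does not use any BrainScaleS software interface.

# Minimal illustrative sketch (not BrainScaleS software).
import numpy as np

# 1) Accelerated emulation: hardware time constants correspond to biological
#    ones divided by an assumed acceleration factor (1e3 here, purely illustrative).
ACCELERATION = 1.0e3
tau_mem_bio = 20e-3                       # biological membrane time constant, 20 ms
tau_mem_hw = tau_mem_bio / ACCELERATION   # corresponding hardware time constant, 20 us

# 2) Vector-matrix multiplication view: an input vector is multiplied by a
#    matrix of small integer weights (illustrative signed 6-bit-like range).
rng = np.random.default_rng(seed=0)
weights = rng.integers(-63, 64, size=(128, 256))  # assumed weight resolution
inputs = rng.random(256)                          # activations sent as input pulses
outputs = weights @ inputs                        # accumulated per output neuron

print(f"hardware tau_mem ~ {tau_mem_hw * 1e6:.0f} us for {tau_mem_bio * 1e3:.0f} ms in biology")
print("output vector shape:", outputs.shape)

The sketch only mirrors the description in the abstract: time constants shrink by the acceleration factor, and the same synapse array that carries spiking dynamics can be read as a weight matrix applied to an input vector.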