Paper Title

A Neuromorphic Proto-Object Based Dynamic Visual Saliency Model with an FPGA Implementation

Paper Authors

Jamal Lottier Molin, Chetan Singh Thakur, Ralph Etienne-Cummings, Ernst Niebur

Abstract

The ability to attend to salient regions of a visual scene is an innate and necessary preprocessing step for both biological and engineered systems performing high-level visual tasks (e.g., object detection, tracking, and classification). Computational efficiency, in regard to processing bandwidth and speed, is improved by devoting computational resources only to salient regions of the visual stimuli. In this paper, we first present a neuromorphic, bottom-up, dynamic visual saliency model based on the notion of proto-objects. This is achieved by incorporating the temporal characteristics of the visual stimulus into the model, similar to the manner in which the early stages of the human visual system extract temporal information. This neuromorphic model outperforms state-of-the-art dynamic visual saliency models in predicting human eye fixations on a commonly used video dataset with associated eye-tracking data. Second, for this model to have practical applications, it must be capable of performing its computations in real time under low-power, small-size, and lightweight constraints. To address this, we introduce a Field-Programmable Gate Array implementation of the model on an Opal Kelly 7350 Kintex-7 board. This novel hardware implementation allows for processing of up to 23.35 frames per second running on a 100 MHz clock, a more than 26x speedup over the software implementation.
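
The throughput figures quoted above can be sanity-checked with simple arithmetic. The sketch below is a minimal back-of-the-envelope calculation: only the 100 MHz clock, the 23.35 fps frame rate, and the 26x speedup factor come from the abstract; the per-frame cycle budget and the implied software frame rate are derived estimates, not results reported in the paper.

```python
# Back-of-the-envelope check of the quoted FPGA throughput figures.
# Only CLOCK_HZ, FPGA_FPS, and SPEEDUP are taken from the abstract;
# the derived values below are illustrative estimates.

CLOCK_HZ = 100e6   # 100 MHz FPGA clock (from the abstract)
FPGA_FPS = 23.35   # frames per second on the FPGA (from the abstract)
SPEEDUP = 26       # reported >26x speedup over the software implementation

cycles_per_frame = CLOCK_HZ / FPGA_FPS   # ~4.28 million clock cycles per frame
implied_sw_fps = FPGA_FPS / SPEEDUP      # software baseline of roughly 0.9 fps

print(f"Cycle budget per frame: {cycles_per_frame:,.0f} cycles")
print(f"Implied software throughput: about {implied_sw_fps:.2f} fps or less")
```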
