Streaming Batch Eigenupdates for Hardware Neural Networks

Published: August 06, 2019

Author(s)

Brian D. Hoskins, Matthew W. Daniels, Siyuan Huang, Advait Madhavan, Gina C. Adam, Nikolai B. Zhitenev, Jabez J. McClelland, Mark D. Stiles

Abstract

Neuromorphic networks based on nanodevices, such as metal oxide memristors, phase change memories, and flash memory cells, have generated considerable interest for their increased energy efficiency and density in comparison to graphics processing units (GPUs) and central processing units (CPUs). Although immense acceleration of the training process can be achieved by leveraging the fact that the time complexity of training does not scale with the network size, this acceleration is limited by the space complexity of stochastic gradient descent, which grows quadratically with network size. The main objective of this work is to reduce this space complexity by using low-rank approximations of stochastic gradient descent. This low spatial complexity, combined with streaming methods, allows for significant reductions in memory and compute overhead, opening the door to improvements in the area, time, and energy efficiency of training. We refer to this algorithm, and the architecture that implements it, as the streaming batch eigenupdate (SBE) approach.
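The quadratic space complexity arises because a batch gradient for a weight matrix is a sum of rank-1 outer products, one per sample. The sketch below (a minimal NumPy illustration, not the paper's implementation; the retained rank `r` and all variable names are assumptions) shows how truncating that batch update to its top-r singular components shrinks its storage from O(n·m) to O(r·(n+m)):

```python
import numpy as np

# Illustrative sketch of a low-rank batch gradient update.
# Each training sample contributes a rank-1 outer product delta_i x_i^T,
# so a batch of B samples yields an update of rank at most B.
rng = np.random.default_rng(0)
n_out, n_in, batch = 32, 64, 16
deltas = rng.standard_normal((n_out, batch))  # per-sample error vectors
xs = rng.standard_normal((n_in, batch))       # per-sample input vectors

# Full batch update: requires n_out * n_in numbers to store.
full_update = deltas @ xs.T

# Keep only the top-r singular components (r is illustrative).
r = 4
u, s, vt = np.linalg.svd(full_update, full_matrices=False)
low_rank = (u[:, :r] * s[:r]) @ vt[:r]  # best rank-r approximation

# The factored form needs only r * (n_out + n_in) numbers,
# a large saving when r << min(n_out, n_in).
rel_err = np.linalg.norm(full_update - low_rank) / np.linalg.norm(full_update)
```

In a streaming setting, the factors would be updated incrementally as samples arrive rather than by a full SVD per batch; the SVD here only makes the rank-truncation idea concrete.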
Citation: Frontiers in Neuroscience
Volume: 13
Pub Type: Journals

Keywords

neuromorphic, memristor, network training