Low-Rank Gradient Descent for Memory-Efficient Training of Deep In-Memory Arrays
Siyuan Huang, Brian Hoskins, Matthew Daniels, Mark Stiles, Gina C. Adam
The movement of large quantities of data during the training of a deep neural network presents immense challenges for machine learning workloads. To minimize this overhead, especially in the movement and calculation of gradient information, we introduce streaming batch principal component analysis as an update algorithm. Streaming batch principal component analysis uses stochastic power iterations to generate a stochastic rank-k approximation of the network gradient. We demonstrate that the low-rank updates produced by streaming batch principal component analysis can effectively train convolutional neural networks on a variety of common datasets, with performance comparable to standard mini-batch gradient descent. These results can lead to improvements both in the design of application-specific integrated circuits for deep learning and in the speed of synchronization of machine learning models trained with data parallelism.
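The core idea described above, maintaining a rank-k subspace of the gradient via stochastic power iterations and applying only the projected update, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification for illustration, not the authors' implementation; the function names, the simulated gradient stream, and the choice of dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 32, 4  # assumed gradient shape (m x n) and target rank k

def streaming_rank_k_update(U, G):
    """One stochastic power-iteration step (hypothetical sketch of
    streaming batch PCA): pull the orthonormal basis U toward the
    top-k left singular subspace of the gradient stream."""
    U = G @ (G.T @ U)        # power-iteration step using the current gradient
    U, _ = np.linalg.qr(U)   # re-orthonormalize to keep a valid basis
    return U

def low_rank_gradient(U, G):
    """Project the full gradient onto span(U), giving the rank-k
    update that would actually be applied to the weights."""
    return U @ (U.T @ G)

# Simulated gradient stream with a dominant rank-k component plus noise
# (an assumption standing in for real network gradients).
A = rng.standard_normal((m, k))
U = np.linalg.qr(rng.standard_normal((m, k)))[0]  # random initial basis
for _ in range(50):
    G = A @ rng.standard_normal((k, n)) + 0.01 * rng.standard_normal((m, n))
    U = streaming_rank_k_update(U, G)

G_hat = low_rank_gradient(U, G)
rel_err = np.linalg.norm(G - G_hat) / np.linalg.norm(G)
```

Because only the k basis vectors and the k-by-n projected coefficients need to be stored and moved, the memory and communication cost per update scales with k rather than with the full gradient size, which is the property the abstract targets for in-memory and data-parallel training.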
ACM Journal on Emerging Technologies in Computing Systems
Huang, S., Hoskins, B., Daniels, M., Stiles, M. and Adam, G., Low-Rank Gradient Descent for Memory-Efficient Training of Deep In-Memory Arrays, ACM Journal on Emerging Technologies in Computing Systems, [online], https://doi.org/10.1145/3577214 (Accessed December 3, 2023)