Author(s)
Bakhrom Oripov, Andrew Dienstfrey, Adam McCaughan, Sonia Buckley
Abstract
In this work, we explore the capabilities of Multiplexed Gradient Descent (MGD), a scalable and efficient perturbative zeroth-order method for estimating the gradient of a loss function directly in hardware and training networks via stochastic gradient descent. We extend the framework to include separate perturbation and weight parameters, and show how this applies to the specific example of node perturbation. We investigate the time to train networks using MGD as a function of network size and task complexity. Previous research on perturbative gradient estimation has raised concerns that this approach does not scale to large problems. Our findings reveal that while the time required to estimate the gradient scales linearly with the number of perturbation parameters, the time to reach a target accuracy on a large-scale machine learning problem does not follow a simple scaling with network size. In practice, target accuracy can be reached significantly faster than the gradient estimate converges. Furthermore, we demonstrate that MGD can be used as a drop-in replacement for the stochastic gradient estimate, so optimization accelerators such as momentum can be used alongside MGD, ensuring compatibility with existing machine learning practices. Our results indicate that MGD can efficiently train large networks on hardware, achieving accuracy comparable to backpropagation, thus presenting a practical solution for future neuromorphic computing systems.
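To make the abstract's core mechanism concrete, below is a minimal NumPy sketch of a perturbative zeroth-order update of the kind MGD describes: apply small random-sign perturbations, correlate the resulting loss change with each perturbation to estimate the gradient, then take an SGD-with-momentum step. The function name `mgd_step` and its parameters are illustrative assumptions for a software setting, not the authors' hardware implementation.

```python
import numpy as np

def mgd_step(params, loss_fn, velocity, lr=1e-2, eps=1e-3,
             n_perturb=10, momentum=0.9):
    """One perturbative zeroth-order update (illustrative sketch).

    Averages n_perturb simultaneous random-sign perturbations to form a
    noisy gradient estimate, then applies an SGD-with-momentum step.
    """
    base_loss = loss_fn(params)
    grad_est = np.zeros_like(params)
    for _ in range(n_perturb):
        # Random +/-1 perturbation applied to every parameter at once.
        delta = np.random.choice([-1.0, 1.0], size=params.shape)
        # Forward-difference loss change caused by the perturbation.
        dL = loss_fn(params + eps * delta) - base_loss
        # Correlating the loss change with the perturbation yields an
        # unbiased (but noisy) gradient estimate; since delta is +/-1,
        # multiplying by delta equals dividing by it.
        grad_est += (dL / eps) * delta
    grad_est /= n_perturb
    # Standard momentum accelerator applied to the zeroth-order estimate.
    velocity = momentum * velocity - lr * grad_est
    return params + velocity, velocity
```

Each call costs n_perturb + 1 loss evaluations, which reflects the linear scaling of gradient-estimation time with the number of perturbation samples noted in the abstract; training can nonetheless proceed with few samples per step, since the noisy estimate feeds directly into the stochastic gradient update.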
Keywords
MGD, hardware for AI, neuromorphic hardware, machine learning, online learning
Citation
Oripov, B., Dienstfrey, A., McCaughan, A. and Buckley, S. (2025), Scaling of hardware-compatible perturbative training algorithms, Applied Physics Letters, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=959140 (Accessed May 9, 2026)