Layer ensemble averaging for fault tolerance in memristive neural networks
Published
Author(s)
Osama Yousuf, Brian Hoskins, Karthick Ramu, Mitchell Fream, William Borders, Advait Madhavan, Matthew Daniels, Andrew Dienstfrey, Jabez McClelland, Martin Lueker-Boden, Gina Adam
Abstract
Advancements in continual learning with artificial neural networks have been fueled in large part by scaling network dimensionalities. As this scaling continues, conventional computing systems are becoming increasingly inefficient due to the von Neumann bottleneck, and thus alternative in-memory computation architectures have become an important area of research. Emerging memory technologies such as memristors are promising candidates to circumvent this problem, but device non-idealities hinder the performance of neural networks based on memristive crossbars compared to their software counterparts, especially in the context of continual learning inference at the edge. This work proposes and experimentally demonstrates layer ensemble averaging – a technique to map pre-trained neural network solutions from software to defective hardware crossbars of emerging memory devices. The approach is investigated in the context of a continual learning problem with a custom 20,000-device hardware prototyping platform, and its effectiveness is studied in simulation as well as in experiment using a defective resistive random-access memory (ReRAM) crossbar chip. Results highlight that by trading off the number of devices required for layer mapping, layer ensemble averaging can reliably boost defective memristive network performance up to the software baseline. For the investigated continual learning problem, the average multi-task classification accuracy improves from 61% (below that of a linear solver) to 72% (within 1% of the software baseline) with hardware layer ensembles of size 3.
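The core idea described above – programming the same pre-trained layer onto several defective crossbars and averaging their outputs – can be illustrated with a minimal simulation. This sketch is not the authors' implementation: the stuck-at-zero defect model, the layer dimensions, and the defect rate are all illustrative assumptions chosen only to show why averaging over an ensemble of independently defective copies pulls the output back toward the ideal software result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal pre-trained layer weights (the "software baseline").
W = rng.normal(size=(16, 8))
x = rng.normal(size=8)
y_ideal = W @ x

def defective_copy(W, stuck_fraction=0.2):
    """Simulate mapping W onto a crossbar where a random fraction of
    devices is stuck at zero conductance (a toy defect model; real
    ReRAM defects also include stuck-on devices and programming noise)."""
    mask = rng.random(W.shape) < stuck_fraction
    W_hw = W.copy()
    W_hw[mask] = 0.0
    return W_hw

def ensemble_output(W, x, ensemble_size):
    """Layer ensemble averaging: program the same layer onto several
    independently defective crossbars and average their outputs."""
    outputs = [defective_copy(W) @ x for _ in range(ensemble_size)]
    return np.mean(outputs, axis=0)

# Compare a single defective crossbar against an ensemble of size 3.
for n in (1, 3):
    err = np.linalg.norm(ensemble_output(W, x, n) - y_ideal)
    print(f"ensemble size {n}: output error {err:.3f}")
```

Because the defect patterns of the copies are independent, averaging reduces the variance of the hardware output around its mean; any systematic bias of the defect model (here, the uniform conductance loss from stuck-at-zero devices) is not removed by averaging alone, which is one reason the paper studies the technique experimentally rather than relying on this idealized picture.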
Yousuf, O., Hoskins, B., Ramu, K., Fream, M., Borders, W., Madhavan, A., Daniels, M., Dienstfrey, A., McClelland, J., Lueker-Boden, M. and Adam, G. (2025), Layer ensemble averaging for fault tolerance in memristive neural networks, Nature Communications, [online], https://doi.org/10.1038/s41467-025-56319-6, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=957710 (Accessed October 17, 2025)