NIST Researchers Demonstrate that Superconducting Neural Networks Can Learn on Their Own

Using detailed simulations, researchers at the National Institute of Standards and Technology (NIST) and their collaborators have demonstrated that a class of neural networks – electronic circuits inspired by the human brain – can be programmed to learn new tasks on their own. After initial training, the NIST superconducting neural networks were 100 times faster at learning new tasks than previous neural networks.

Neural networks make decisions by mimicking the way neurons in the human brain work together. By adjusting the flow of information among neurons, the brain identifies new phenomena, learns new skills, and weighs different options in making decisions.

The NIST scientists focused their efforts on superconducting neural networks, which transmit information at high speed – in part because they allow current to flow without resistance. Once cooled to just 4 degrees above absolute zero, they consume much less energy than other networks, including neurons in the human brain.

Figure: neural network diagram (panels a–c described below)
(a) Biological neuron vs. NIST superconducting neuron design. In the NIST design, incoming pulses are split into two branches (green and red), each with an adjustable superconducting quantum interference device (SQUID) inductively coupled into a summation loop (soma); the soma combines the branch outputs and, when a threshold is crossed, launches an output pulse on a transmission line (axon) to the next neuron. A superconducting loop (blue) between the branches stores the neuron's weight as a persistent quantized current that can be varied.

(b) Weighting the neuron. Increasing the positive current in the weight loop strengthens the coupling of the positive branch to the soma and weakens the negative branch, yielding a larger positive net pulse after integration. Conversely, increasing the negative current strengthens the negative branch and weakens the positive branch, producing a larger negative net pulse. The synapse stores and applies the weight of the pulse passing between neurons.

(c) Example responses. With zero current in the weight loop, an input pulse produces no net signal in the soma. Adjusting the current positively or negatively produces, respectively, a positive or negative response in the soma to the same input.
Credit: S. Kelley/NIST
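
To make the figure's weighting scheme concrete, here is a minimal numerical sketch of the two-branch neuron in Python. The linear branch coupling, arbitrary units, and threshold value are illustrative assumptions, not the SQUID circuit equations from the paper:

```python
# Toy model of the two-branch superconducting neuron in the figure.
# The linear coupling and threshold are assumptions for illustration only.

def soma_response(input_pulse, weight_current, coupling=1.0, threshold=0.5):
    """Net soma signal for one input pulse (panels b and c).

    The weight loop strengthens one branch while weakening the other,
    so the two branch outputs cancel at zero weight current and
    otherwise combine into a net signal whose sign follows the stored
    current.
    """
    positive_branch = input_pulse * (1.0 + coupling * weight_current)
    negative_branch = -input_pulse * (1.0 - coupling * weight_current)
    net = positive_branch + negative_branch  # zero when weight_current == 0
    fires = abs(net) >= threshold            # soma launches a pulse past threshold
    return net, fires

for i_w in (-0.4, 0.0, 0.4):  # stored weight-loop current, arbitrary units
    net, fires = soma_response(input_pulse=1.0, weight_current=i_w)
    print(f"weight current {i_w:+.1f} -> net soma signal {net:+.2f}, fires: {fires}")
```

As in panel (c), zero stored current yields no net soma signal, while positive or negative stored current yields a positive or negative response to the same input pulse.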

In their new design, NIST scientist Michael Schneider and his colleagues found a way to manipulate the building blocks of a superconducting neural network so that it could perform a type of self-learning known as reinforcement learning. Neural networks use reinforcement learning to learn new tasks such as acquiring a new language.

A key element in any neural network is how preferences, or weights, are assigned to pathways in the electrical circuitry, akin to the way the brain assigns weights to different neural pathways. Weights are adjusted in a neural network using the proverbial carrot-and-stick method – strengthening the weighting of pathways that provide the correct answer and weakening those that lead to incorrect answers. In the NIST system, self-learning arises because the hardware that makes up the circuitry itself determines the size and direction of these weight changes, requiring no external control or additional computation to learn new tasks.
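
As a software analogy for that carrot-and-stick adjustment, the sketch below shows a minimal reward-modulated update rule. The thresholded sum, reward signal, learning rate, and toy task are all illustrative assumptions; in the NIST design, the equivalent adjustment is carried out by the superconducting circuitry itself:

```python
import random

def train_step(weights, inputs, target, lr=0.1):
    # Forward pass: a thresholded weighted sum stands in for the
    # spiking neuron's fire / don't-fire decision.
    activation = sum(w * x for w, x in zip(weights, inputs))
    output = 1 if activation > 0 else -1

    # Reward signal: carrot (+1) for a correct answer, stick (-1) otherwise.
    reward = 1 if output == target else -1

    # Strengthen pathways that were active in a rewarded answer,
    # weaken them after a punished one.
    return [w + lr * reward * output * x for w, x in zip(weights, inputs)]

# Toy task (invented for illustration): report the sign of the first input.
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
for _ in range(200):
    inputs = [random.choice([-1, 1]) for _ in range(3)]
    weights = train_step(weights, inputs, target=inputs[0])
print("learned weights:", [round(w, 2) for w in weights])
```

Because the update depends only on locally available quantities – the inputs, the output, and a scalar reward – it is the kind of rule that can plausibly be evaluated by the circuit hardware rather than by an external trainer.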

The researchers reported their findings online March 4 in npj Unconventional Computing.

The NIST design offers two advantages. First, it enables the network to learn continually as new data becomes available. Without that capability, the entire network would have to be retrained from scratch each time researchers add data or alter the desired outcome.

Second, the design automatically adjusts the weighting of different pathways in the network to accommodate the slight variations in the size and electrical properties of hardware components that can arise during fabrication. That flexibility is a huge benefit for training neural networks, which ordinarily require precise programming of weight values.
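
Both advantages can be illustrated by extending the toy rule above: keep updating as the task changes (continual learning), and give each synapse a random gain standing in for fabrication spread. The gains, tasks, and "stick-only" variant of the update are invented for illustration; the real circuit adjusts persistent quantized currents, not floating-point weights:

```python
import random

N = 3
gain = [random.uniform(0.7, 1.3) for _ in range(N)]      # hypothetical device-to-device spread
weights = [random.uniform(-0.5, 0.5) for _ in range(N)]

def step(weights, target_index, lr=0.1):
    inputs = [random.choice([-1, 1]) for _ in range(N)]
    # Each pathway's contribution is scaled by its (unknown) fabrication gain.
    activation = sum(g * w * x for g, w, x in zip(gain, weights, inputs))
    output = 1 if activation > 0 else -1
    if output != inputs[target_index]:
        # Stick only: weaken the pathway pattern that produced the wrong answer.
        weights = [w - lr * output * x for w, x in zip(weights, inputs)]
    return weights

for _ in range(200):        # task A: track the sign of input 0
    weights = step(weights, target_index=0)
for _ in range(400):        # new objective arrives: task B, track input 1
    weights = step(weights, target_index=1)   # no retraining from scratch
print("weights after continual update:", [round(w, 2) for w in weights])
```

The weights settle at whatever values compensate for each synapse's gain, and the same rule unlearns task A as errors on task B accumulate – no retraining from scratch is needed.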

The design also has the potential to dramatically speed up the training of neural networks and to use significantly less energy than training designs based on semiconductors or software, Schneider said. With simulations demonstrating the feasibility of the hardware approach, Schneider and his colleagues now plan to build a small-scale self-learning superconducting neural network.


Paper:

Schneider, M.L., Jué, E.M., Pufall, M.R., Segall, K. and Anderson, C.W. A self-training spiking superconducting neuromorphic architecture. npj Unconventional Computing 2, 5 (2025). Published online March 4, 2025. https://doi.org/10.1038/s44335-025-00021-9

Released August 18, 2025, Updated August 20, 2025