Researchers Pioneer Energy-Efficient Thermodynamic Computing Method

A team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) has made significant strides in a novel field called thermodynamic computing. This approach aims to harness thermal noise as a power source, rather than treating it as an obstacle to overcome. Their findings, published in Nature Communications, outline a design and training framework for a thermodynamic computer that operates like a neural network, with the potential to dramatically reduce the energy consumed by computing.

Modern computing systems are notorious for their high energy demands. For instance, a single Google search consumes enough energy to power a six-watt LED for three minutes. Part of this cost traces back to thermal noise, which arises from the vibrations of charge carriers, primarily electrons, in conductive materials. To keep bits from being scrambled by these fluctuations, traditional computers must operate at signal energies far above the thermal scale, which drives up power requirements and creates inefficiency.

In contrast, thermodynamic computing turns this paradigm on its head. Rather than attempting to minimize thermal noise, it embraces these fluctuations as a power source. This approach allows for operations at room temperature, offering a compelling alternative to quantum computers, which often require extreme cooling.

Stephen Whitelam, a staff scientist at the Molecular Foundry, explained, “The premise of thermodynamic computing is that if you take a physical device with an energy scale comparable to that of thermal energy and leave it alone, it will change state over time, driven by thermal fluctuations.” This method effectively reduces the external energy needed for computations and presents a potential solution to the growing energy crisis in computing.
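To make that premise concrete, here is a minimal sketch using overdamped Langevin dynamics, the textbook model of a degree of freedom buffeted by thermal noise. The double-well potential, temperature, and time step below are illustrative assumptions, not values from the paper: a state left alone at finite temperature wanders, and occasionally hops, between wells with no external drive.

```python
import numpy as np

# Overdamped Langevin dynamics: one degree of freedom x in the double-well
# potential U(x) = (x^2 - 1)^2, driven between wells by thermal noise alone.
# Every parameter here is an illustrative assumption, not a paper value.
rng = np.random.default_rng(0)

def force(x):
    return -4.0 * x * (x**2 - 1.0)   # F = -dU/dx

dt, kT, steps = 1e-3, 0.3, 100_000   # time step, thermal energy, run length
x, prev_side, hops = 1.0, 1.0, 0     # start in the right-hand well
for _ in range(steps):
    # Euler-Maruyama update: deterministic drift plus a thermal kick
    x += force(x) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    side = np.sign(x)
    if side != prev_side:
        hops += 1                    # a thermally driven change of state
        prev_side = side

print(f"final state {x:+.2f}, well-to-well hops: {hops}")
```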

Despite its promise, thermodynamic computing faces significant challenges. Current systems are limited to computations performed at thermodynamic equilibrium, which requires waiting for devices to settle into their lowest-energy states before a calculation can be read out. How long that settling takes is unpredictable, making the approach impractical for everyday use. Moreover, existing thermodynamic computers have only been able to solve linear algebra problems, limiting their usefulness for more complex tasks.
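The waiting problem can be seen in the same toy model: start the noisy state at the top of the barrier and time how long it takes to first settle near a minimum. The spread of these first-passage times across runs, under the assumed parameters below, illustrates why "wait for equilibrium" is an unpredictable prescription.

```python
import numpy as np

# First-passage time in the double well U(x) = (x^2 - 1)^2: how long does
# a thermally driven state take to first come within `tol` of a minimum?
# All parameters are illustrative assumptions.
rng = np.random.default_rng(1)

def settle_time(dt=1e-3, kT=0.2, tol=0.05, max_steps=1_000_000):
    x, step = 0.0, 0                          # start on the barrier top
    while abs(abs(x) - 1.0) > tol and step < max_steps:
        drift = -4.0 * x * (x**2 - 1.0)       # -dU/dx
        x += drift * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        step += 1
    return step * dt

times = [settle_time() for _ in range(20)]
print(f"settling times range from {min(times):.2f} to {max(times):.2f}")
```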

In their recent paper, Whitelam and colleague Corneel Casert address these limitations. They demonstrate through digital simulations that nonlinear computations, such as those performed by neural networks, are feasible with thermodynamic computers operating out of equilibrium. This means such computers can behave much like conventional ones, performing computations without waiting for the device to equilibrate.

Whitelam elaborated, “A nonlinear thermodynamic circuit can behave like a neuron in a neural network. What we reasoned is that if you build these thermodynamic neurons into a connected structure, then that structure should have the expressive power to mimic a neural network and so be able to do machine learning.”
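One hedged way to picture such a thermodynamic neuron, a sketch rather than the authors' actual circuit, is a noisy state whose input tilts a double-well potential: the time-averaged state then responds to the input nonlinearly, saturating much like an activation function. The potential, temperature, and function names below are illustrative assumptions.

```python
import numpy as np

# A toy "thermodynamic neuron": the input h tilts the double-well potential
# U(x) = (x^2 - 1)^2 - h*x, and the time average of the fluctuating state x
# acts as a nonlinear, saturating activation of h. Parameters are assumptions.
rng = np.random.default_rng(2)

def neuron_output(h, dt=1e-3, kT=0.5, steps=100_000):
    x, total = 0.0, 0.0
    for _ in range(steps):
        drift = -4.0 * x * (x**2 - 1.0) + h          # -dU/dx
        x += drift * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        total += x
    return total / steps                             # time-averaged state

for h in (-1.0, -0.3, 0.0, 0.3, 1.0):
    print(f"input {h:+.1f} -> mean output {neuron_output(h):+.2f}")
```

Coupling many such elements together, with the couplings as trainable parameters, is what would give the structure the expressive power Whitelam describes.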

The next hurdle lies in training these thermodynamic systems. Unlike traditional neural networks, thermodynamic computers are stochastic: the same network produces different outputs on different runs, so the gradient-based methods used to train digital networks do not apply directly. To tackle this, Casert developed a large-scale computational framework utilizing 96 GPUs in parallel on the Perlmutter supercomputer at NERSC. This setup allowed for massive evolutionary simulations, evaluating billions of noisy trajectories per generation to identify the most effective network parameters.
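The statistical flavor of that evaluation can be sketched in miniature: because each run differs, a candidate network is scored by its loss averaged over many noisy trajectories. The toy network, task, and trajectory count below are assumptions for illustration, not the team's framework.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_forward(params, x, n_traj, noise_scale=0.1):
    """Run a toy stochastic 'network' n_traj times on input x. Each
    trajectory injects independent noise, so the output varies run to
    run -- a stand-in for simulating a thermodynamic network."""
    w, b = params
    noise = noise_scale * rng.standard_normal(n_traj)
    return np.tanh(w * x + b + noise)        # one output per trajectory

def fitness(params, inputs, targets, n_traj=1_000):
    """Score a parameter set by loss averaged over noisy trajectories."""
    loss = 0.0
    for x, t in zip(inputs, targets):
        outs = stochastic_forward(params, x, n_traj)
        loss += np.mean((outs - t) ** 2)
    return -loss / len(inputs)               # higher fitness = lower loss

inputs = np.array([-1.0, 0.0, 1.0])
targets = np.tanh(2.0 * inputs)              # toy task: imitate tanh(2x)
print(f"fitness of (w=1, b=0): {fitness((1.0, 0.0), inputs, targets):.4f}")
```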

Using a technique known as a genetic algorithm, Casert began with a population of candidate thermodynamic neural networks, assessed their performance, and iteratively improved them by copying the best performers, perturbing the copies with random noise, and selecting again. This extensive training involved more than a trillion simulations, ultimately yielding a thermodynamic computer design that, once trained, can operate on minimal energy.
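In outline, a genetic algorithm of this kind works like the hedged sketch below: keep a population of parameter sets, score each against a noisy objective, keep the best, and breed the next generation by adding random noise to copies of them. The objective, population size, and mutation scale here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy stand-in objective: the best parameters are TARGET, but every
# evaluation is corrupted by noise, just as every run of a stochastic
# network gives a different output. (Illustrative assumption.)
TARGET = np.array([0.7, -0.2])

def noisy_fitness(params, n_eval=200):
    base = -np.sum((params - TARGET) ** 2)
    return (base + 0.05 * rng.standard_normal(n_eval)).mean()

def next_generation(population, keep=8, noise=0.05):
    """One genetic-algorithm step: score all, keep the best, mutate copies."""
    scores = np.array([noisy_fitness(p) for p in population])
    elite = population[np.argsort(scores)[-keep:]]            # top performers
    children = np.repeat(elite, len(population) // keep, axis=0)
    return children + noise * rng.standard_normal(children.shape), scores.max()

pop = rng.standard_normal((32, 2))            # random initial population
for gen in range(51):
    pop, best = next_generation(pop)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: best fitness {best:+.4f}")
```

Because every score is itself noisy, selection only becomes reliable when each candidate is averaged over many runs, which is why the real training evaluated billions of trajectories per generation.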

“It’s a very different way of optimizing a neural network,” Casert explained. “Training a thermodynamic neural network by simulating it digitally is expensive, but once trained and built as physical hardware, we can perform inference on that hardware for a very low energy cost.”

The implications of these findings extend beyond theory. The researchers are actively seeking partners to help turn their designs into practical hardware and software implementations. They also acknowledge the need for new algorithms tailored to systems operating out of equilibrium, particularly for the nonlinear computations that would let such machines match the capabilities of digital neural networks.

Whitelam concluded, “It’s an exciting field. We’re looking for more efficient ways of computing, and thermodynamic computing is definitely one of them.” As this field continues to evolve, the potential for energy-efficient computing could reshape the technological landscape, addressing growing concerns about energy consumption in the digital age.

The research conducted at Berkeley Lab highlights the institution’s commitment to pioneering scientific advancements that focus on energy solutions. Established in 1931, Berkeley Lab has become a cornerstone of innovation in various scientific domains, including materials, chemistry, and computing, consistently drawing researchers worldwide to its state-of-the-art facilities.