It’s notoriously difficult to make sense of quantum mechanics, and it’s equally difficult to calculate the behavior of many quantum systems. That’s due in part to the mathematical description of a quantum system, called its *wavefunction.* The wavefunction for most single objects is complicated enough on its own, and adding a second object makes predictions even harder, since the wavefunction for the entire system entangles the two individual ones. The more objects you add, the harder the calculations become.

As a result, many-body calculations are usually done with methods that produce an approximation. These typically involve either sampling potential solutions at random or finding some way to compress the problem down to something solvable. Now, though, two researchers at ETH Zurich, Giuseppe Carleo and Matthias Troyer, have provided a third option: set a neural network loose on quantum mechanics.

## Getting spooky

This additional method could be useful, because there are a lot of cases where the existing methods fail. Random sampling is used in a variety of fields (it’s technically called Monte Carlo sampling, after the games of chance played in Monte Carlo’s famous casino). But random sampling is only effective if the number of likely solutions isn’t too large; if it is, you’re unlikely to randomly hit on the relevant ones. The alternative, compression, relies on cases where it’s possible to represent the wavefunction in a computationally efficient form. Not every quantum system is amenable to that approach.
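To get a sense of how Monte Carlo sampling works outside of quantum mechanics, here’s a classic toy example: estimating π by counting how often random points land inside a quarter circle. It works because the relevant samples (points inside the circle) are common; when the relevant configurations are rare, this kind of approach stalls.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159; improves with more samples
```

The estimate’s error shrinks only as the square root of the number of samples, which is why random sampling becomes impractical when the space of likely solutions is enormous.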

This means there are a number of quantum systems we can’t easily understand via computation. “Examples of systems in which existing approaches fail are numerous,” Carleo and Troyer say in their paper. So the researchers decided to see whether a neural network could help us out. Their reasoning is that neural networks are very good at things like reducing information to its most relevant components and extracting key features from those components. It follows that there’s a chance they’d be good at identifying the most relevant features of a wavefunction.

An intriguing part of their approach is that it relies on an architecture that mimics an idea about quantum mechanics we’re pretty sure is wrong: hidden variables. Quantum mechanics allows what Einstein derided as “spooky action at a distance,” where things done to one particle can influence an entangled partner no matter the distance between the pair. One proposed explanation was that the particles have properties we can’t currently measure—the so-called hidden variables—that account for this behavior. But hidden variables, at least local ones, have largely been ruled out by experiment.

They make a reappearance in Carleo and Troyer’s neural network, at least in terms of architecture. For this work, the test cases all focused on particle spin; in a multi-particle quantum system, these spins interact in complicated ways (for more about these interactions, see the section on Ising models in our look at D-Wave’s quantum computer). The neural network used here had one collection of nodes that simply represented the spins of the particles in the simulation.

## Hidden layers

But backing that collection of nodes is something the authors call a “hidden layer.” Each of the visible spin nodes was backed by a number of hidden ones, which helped extract features from known wavefunctions. And each of these hidden nodes could influence the state of multiple visible ones. In the work described here, Carleo and Troyer set it up so that each visible node has an average of four hidden ones backing it. But they point out that, if computational resources become less of an issue, it would easily be possible to scale the architecture simply by adding more hidden nodes.
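The visible-plus-hidden arrangement described here matches a restricted Boltzmann machine, in which the hidden nodes can be summed out analytically: each one contributes a cosh factor to the wavefunction amplitude for a given spin configuration. A minimal sketch of that general form, with random (untrained) weights and illustrative variable names of our choosing:

```python
import numpy as np

def rbm_amplitude(spins, a, b, W):
    """Unnormalized wavefunction amplitude psi(S) for a spin configuration S
    (+1/-1 values). Summing over all hidden-node states reduces to a
    product of 2*cosh terms, one per hidden node."""
    theta = b + W @ spins                  # effective input to each hidden node
    return np.exp(a @ spins) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 24                # ~4 hidden nodes per visible one
a = 0.01 * rng.standard_normal(n_visible)  # biases on visible (spin) nodes
b = 0.01 * rng.standard_normal(n_hidden)   # biases on hidden nodes
W = 0.01 * rng.standard_normal((n_hidden, n_visible))  # couplings

spins = rng.choice([-1, 1], size=n_visible)
print(rbm_amplitude(spins, a, b, W))       # one (untrained) amplitude
```

Scaling the network, as the authors suggest, just means adding rows to `W` and entries to `b`; the visible layer stays fixed by the physical system.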

The other nice thing about this architecture is that it’s inherently non-local. Individual hidden nodes can have connections with nodes representing the spin of particles that are physically separated. Since quantum mechanics is also non-local, this may help the neural network represent the underlying physics.

Neural networks need to be trained, which poses a bit of a challenge in that we don’t have any exact solutions to a many-body wavefunction against which to train the networks. Still, there are instances where the other two computational approaches described above work well, so they were used to provide reinforcement learning. The results were quite promising. For two different classes of problems, the neural network learned patterns related to the underlying physics. And, in each case, it was able to match or exceed the performance of existing computational methods.
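The sampling at the heart of such a training scheme can be sketched as a Metropolis walk that visits spin configurations in proportion to |ψ|². This toy uses a hand-written trial wavefunction that favors aligned neighbors, not the paper’s neural network:

```python
import math
import random

def psi(spins, beta=0.5):
    """Toy trial wavefunction favoring aligned neighboring spins
    (illustrative stand-in for a trained network's amplitude)."""
    return math.exp(beta * sum(spins[i] * spins[i + 1]
                               for i in range(len(spins) - 1)))

def metropolis_sample(n_spins, n_steps, seed=0):
    """Sample spin configurations with probability proportional to |psi|^2
    via single-spin-flip Metropolis updates."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n_spins)]
    samples = []
    for _ in range(n_steps):
        i = rng.randrange(n_spins)
        old = psi(spins)
        spins[i] = -spins[i]               # propose flipping one spin
        new = psi(spins)
        if rng.random() >= (new / old) ** 2:
            spins[i] = -spins[i]           # reject: undo the flip
        samples.append(list(spins))
    return samples

samples = metropolis_sample(n_spins=8, n_steps=2000)
# After burn-in, aligned configurations dominate the sampled set.
```

Averages over such samples (of the energy, say) are what get fed back to adjust the network’s weights, which is the reinforcement-style loop the authors describe.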

Carleo and Troyer are optimistic about this approach. More recent neural network architectures, like deep networks, are showing some impressive performance, and it should be possible to apply them to quantum mechanics. It should also be straightforward to extend the approach to quantum behavior beyond particle spins. The real test, though, will be to apply the neural network approach to problems where the other two methods come up short in at least some cases (they can’t fail everywhere, or we’d have nothing to train the network with). Figuring out whether it produces physically plausible results in those cases can be left as a challenge for the experimentalists.

*Science*, 2017. DOI: 10.1126/science.aag2302
