LETTERS
PUBLISHED ONLINE: 13 FEBRUARY 2017 | DOI: 10.1038/NPHYS4037
Learning phase transitions by confusion
Evert P. L. van Nieuwenburg*, Ye-Hua Liu and Sebastian D. Huber
Classifying phases of matter is key to our understanding of many problems in physics. For quantum-mechanical systems in particular, the task can be daunting due to the exponentially large Hilbert space. With modern computing power and access to ever-larger data sets, classification problems are now routinely solved using machine-learning techniques1. Here, we propose a neural-network approach to finding phase transitions, based on the performance of a neural network after it is trained with data that are deliberately labelled incorrectly. We demonstrate the success of this method on the topological phase transition in the Kitaev chain2, the thermal phase transition in the classical Ising model3, and the many-body-localization transition in a disordered quantum spin chain4. Our method does not depend on order parameters, knowledge of the topological content of the phases, or any other specifics of the transition at hand. It therefore paves the way to the development of a generic tool for identifying unexplored phase transitions.
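To make the confusion idea concrete, the sketch below applies it to hypothetical toy data. Everything in it is our assumption rather than the authors' code: the data generator, the true critical point g_c = 1.0, the parameter values, and the logistic-regression classifier standing in for the paper's neural network. For each trial critical point g', samples are labelled by which side of g' they fall on, the classifier is retrained from scratch, and the resulting accuracy traces the characteristic W shape whose middle peak sits at the true transition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a tuning parameter g in [0, 2] with an assumed
# true transition at g_c = 1.0; deep in each phase samples look different.
def sample(g, n_features=20):
    # The feature mean switches across the (assumed) transition; noise blurs it.
    mean = 1.0 if g > 1.0 else -1.0
    return mean * np.ones(n_features) + rng.normal(0, 2.0, n_features)

gs = np.linspace(0, 2, 41)
X = np.array([sample(g) for g in gs for _ in range(50)])
g_all = np.repeat(gs, 50)

def train_accuracy(X, y, epochs=200, lr=0.1):
    """Logistic regression trained by gradient descent; returns the final
    training accuracy (a simple stand-in for the NN used in the paper)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - y                          # gradient w.r.t. the logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return (((X @ w + b) > 0) == y).mean()

# Confusion scheme: propose trial critical points g', label the data on
# either side of g' as the two phases, retrain, and record the accuracy.
# Near the interval edges the labels are almost uniform, so accuracy is
# trivially high; the interior maximum marks the true transition.
trial_points = np.linspace(0.1, 1.9, 19)
accuracy = [train_accuracy(X, (g_all > gp).astype(float)) for gp in trial_points]
print(trial_points[np.argmax(accuracy)])  # ~1.0, the assumed true g_c
```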
Machine learning as a tool for analysing data is becoming increasingly prevalent across a growing number of fields. This is due to the combination of the availability of large data sets and advances in hardware and computational power, the latter most notably through the use of graphical processing units.
Machine-learning methods fall into two broad classes: unsupervised and supervised. In the former, the machine receives no input other than the data and is asked, for example, to extract features or to cluster the samples. Such an unsupervised approach was applied to identify phase transitions and order parameters from images of classical configurations of Ising models5. In supervised learning, the data have to be supplemented by a set of labels. A typical example is classification, where each sample is assigned a class label. The machine is trained to recognize samples and predict their associated label, demonstrating that it has learned by generalizing to samples it has not encountered before. This approach, too, has been demonstrated on Ising models6.
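The unsupervised route can be illustrated with a minimal sketch, using principal-component analysis as one such unsupervised tool. Every detail below is a toy assumption of ours: the configurations come from a crude biased-spin generator rather than a real Monte Carlo simulation of the Ising model, and the transition temperature is put in by hand. The point is only that the first principal component of the raw configurations behaves like a magnetization, with no labels supplied.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for Ising snapshots (hypothetical, not a Monte Carlo
# sampler): below the assumed transition the spins are biased toward
# alignment in a random direction; above it they are uncorrelated.
def snapshot(T, n_spins=100):
    m = 0.9 if T < 2.27 else 0.0        # crude magnetization proxy
    sign = rng.choice([-1, 1])          # either ordered direction
    p_up = 0.5 * (1 + sign * m)
    return rng.choice([1, -1], n_spins, p=[p_up, 1 - p_up])

temps = np.repeat(np.linspace(1.0, 3.5, 26), 40)
X = np.array([snapshot(T) for T in temps], dtype=float)

# Unsupervised analysis: PCA on the raw configurations via SVD.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                        # projection onto first component

# |PC1| is large in the ordered phase and collapses in the disordered
# one, so the phase boundary can be read off without any labels.
for T in np.unique(temps):
    print(f"T={T:.2f}  <|PC1|>={np.abs(pc1[temps == T]).mean():.2f}")
```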
Concepts from physics have also found their way into the field of machine learning. Examples of this are the relations between neural networks (NNs) and...