On August 25, Vitaly Vanchurin, Professor of Physics at the University of Minnesota, gave a technical Zoom webinar on “Thermodynamics of Machine Learning.”

Vitaly described his talk as follows:

*“In this talk, I apply the methods of statistical physics to study the behavior of neural networks in the limit of a large number of neurons.*

*I derive the first and second laws of learning: during learning the total entropy must decrease until the system reaches an equilibrium (i.e. the second law), and the increment in the loss function must be proportional to the increment in the thermodynamic entropy plus the increment in the complexity (i.e. the first law).*
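The first law as stated in the quote can be written schematically; the notation below is illustrative (not necessarily the speaker's own symbols), with $L$ the loss function, $S$ the thermodynamic entropy, $C$ the complexity, and $\alpha$ a proportionality constant:

```latex
dL = \alpha \,(dS + dC)
```

That is, over a learning step the change in loss tracks the combined change in entropy and complexity.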

*I calculate the entropy destruction to show that the efficiency of learning is given by the Laplacian of the free energy which is to be maximized in an optimal neural architecture, and explain why the optimization condition is better satisfied in a deep network with a large number of hidden layers.*

*The key properties of the model are verified numerically by training a neural network using the method of stochastic gradient descent. Additionally, I discuss a possibility that the entire universe on its most fundamental level is a neural network.”*

The full presentation in English is available here (no audio).
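As a flavor of the kind of numerical check mentioned in the abstract, here is a minimal sketch (not the speaker's code, and making no claim about his model): a one-hidden-layer network trained with plain stochastic gradient descent, recording the loss trajectory, the quantity whose increments the stated first law relates to entropy and complexity changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# Network parameters: 1 -> 16 -> 1 with tanh hidden units
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr, losses = 0.05, []
for step in range(2000):
    i = rng.integers(0, len(X), size=32)     # random minibatch
    xb, yb = X[i], y[i]
    h = np.tanh(xb @ W1 + b1)                # forward pass
    pred = h @ W2 + b2
    err = pred - yb
    losses.append(float(np.mean(err ** 2)))  # mean squared loss
    # Backpropagation of the loss gradient
    g_pred = 2 * err / len(xb)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    g_W1 = xb.T @ g_h
    g_b1 = g_h.sum(axis=0)
    # Stochastic gradient descent update
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

print(losses[0], losses[-1])  # the recorded loss trajectory decreases overall
```

Tracking `losses` over training is the simplest observable one could compare against thermodynamic quantities such as entropy or free energy computed from the same trajectory.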