Mutual Information, Neural Networks and the Renormalization Group
Abstract
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at low energies. These universal properties, which largely determine the physical characteristics of a system, are revealed by the powerful renormalization group (RG) procedure, which systematically retains "slow" degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine learning (ML) algorithm capable of identifying the relevant degrees of freedom without any prior knowledge about the system. We introduce an artificial neural network, based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in two dimensions.
I Introduction
Machine learning has been captivating public attention lately due to groundbreaking advances in automated translation, image and speech recognition LeCun et al. (2015), game-playing Silver and et al. (2016), and achieving super-human performance in tasks in which humans excelled while more traditional algorithmic approaches struggled Hershey et al. (2010). The applications of ML techniques in physics are very recent, initially leveraging the trademark prowess of ML in classification and pattern recognition and applying it to classify phases of matter Carrasquilla and Melko (2017); Torlai and Melko (2016); van Nieuwenburg et al. (2016); Wang (2016); Ohtsuki and Ohtsuki (2017), or exploiting the neural networks' potential as efficient non-linear approximators of arbitrary functions Hinton and Salakhutdinov (2006); Lin and Tegmark (2016) to introduce a new numerical simulation method for quantum systems Carleo and Troyer (2017). However, the exciting possibility of employing machine learning not as a numerical simulator, or a hypothesis tester, but as an integral part of the physical reasoning process is still largely unexplored and, given the staggering pace of progress in the field of artificial intelligence, of fundamental importance and promise.
The renormalization group (RG) approach has been one of the conceptually most profound tools of theoretical physics since its inception. It underlies the seminal work on critical phenomena Wilson (1975), the discovery of asymptotic freedom in quantum chromodynamics Politzer (1973), and of the Kosterlitz-Thouless phase transition Berezinskii (1971); Kosterlitz and Thouless (1973). The RG is not a monolith, but rather a conceptual framework comprising different techniques: real-space RG Kadanoff (1966), functional RG Wetterich (1993), density matrix renormalization group (DMRG) White (1992), among others. While all those schemes differ quite substantially in details, style and applicability, there is an underlying physical intuition which encompasses all of them – the essence of RG lies in identifying the "relevant" degrees of freedom and integrating out the "irrelevant" ones iteratively, thereby arriving at a universal, low-energy effective theory. However potent the RG idea, those relevant degrees of freedom need to be identified first Ma et al. (1979); Corboz and Mila (2013). This is often a challenging conceptual step, particularly for strongly interacting systems, and may involve a sequence of mathematical mappings to models whose behaviour is better understood Capponi et al. (2013); Auerbach (1994).
Here we introduce an artificial neural network algorithm identifying the physically relevant degrees of freedom in a spatial region. The input data are samples of the probability distribution of the system configurations; no further knowledge about the microscopic details of the system is provided. The internal parameters of the network, which ultimately encode the degrees of freedom of interest, are optimized ('learned', in neural network parlance) by a training algorithm based on evaluating the real-space mutual information (RSMI) between spatially separated regions. We validate our approach by studying the Ising and dimer models of classical statistical physics in two dimensions; the robustness of the RSMI algorithm to physically irrelevant noise is demonstrated.
The correct distillation of the important degrees of freedom, in the spirit of a real-space RG procedure Kadanoff (1966), is not only a crucial technical step – it allows one to gain insight into the correct way of thinking about the problem at hand.
II The Real Space Mutual Information algorithm
Before going into more detail, let us provide a bird's eye view of our method and results. We begin by phrasing the problem in probabilistic/information-theoretic terms, an approach also investigated in Refs. Gaite and O'Connor (1996); Preskill (2000); Apenko (2012); Machta et al. (2013); Beny and Osborne (2015). To this end, we consider a small "visible" spatial area V, which together with its environment E forms the system X, and we define a particular conditional probability distribution P(H|V), which describes how the relevant degrees of freedom H (dubbed "hiddens") depend on the configuration of V. We then show that the sought-after conditional probability distribution is found by an algorithm maximizing an information-theoretic quantity, the mutual information (MI), and that this algorithm lends itself to a natural implementation using artificial neural networks (ANNs). Finally, we provide a verification of our claims by considering two paradigmatic models of statistical physics: the Ising model – for which the RG procedure yields the famous Kadanoff block spins – and the dimer model, whose RG is much less trivial.
Consider then a classical system of local degrees of freedom x = {x_i}, defined by a Hamiltonian energy function H(x) and associated statistical probabilities P(x) ∝ e^{−βH(x)}, where β is the inverse temperature. Alternatively (and sufficiently for our purposes), the system is given by Monte Carlo samples of the equilibrium distribution P(x). We denote a small spatial region of interest by V and the remainder of the system by E, so that X = (V, E). We adopt a probabilistic point of view and treat X, V, E etc. as random variables. Our goal is to extract the relevant degrees of freedom H from V.
"Relevance" is understood here in the following way: the degrees of freedom RG captures govern the long-distance behaviour of the theory, and therefore the experimentally measurable physical properties; they carry the most information about the system at large, as opposed to local fluctuations. We thus formally define the random variable H as a composite function of the degrees of freedom in V maximizing the mutual information (MI) Stephan et al. (2014) between H and the environment E.
Mutual information, denoted by I(H:E), measures the total amount of information about one random variable contained in the other (thus it is more general than correlation coefficients, which measure only monotonic relations between variables). It is given in our setting by:
I_Λ(H:E) = Σ_{h,e} P_Λ(h,e) log [ P_Λ(h,e) / (P_Λ(h) P(e)) ]    (1)
The unknown distribution P_Λ(h,e) and its marginalization P_Λ(h), which depend on a set of parameters Λ (kept generic at this point), are functions of the data distribution P(x) and of the conditional distribution P_Λ(h|v), which is the central object of interest. In the supplementary materials we discuss the relation of this approach to RG to the more standard procedures.
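For a small discrete system Eq. (1) can be evaluated directly from a table of joint probabilities. The following minimal sketch (the function name and example values are ours, purely for illustration) computes the MI of a joint distribution given as a 2D array:

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information I(H:E) = sum_{h,e} p(h,e) * log(p(h,e) / (p(h)p(e)))
    for a joint distribution given as a 2D array indexed by (h, e)."""
    p_h = p_joint.sum(axis=1, keepdims=True)   # marginal P(h)
    p_e = p_joint.sum(axis=0, keepdims=True)   # marginal P(e)
    mask = p_joint > 0                         # skip zero-probability states
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_h * p_e)[mask])))

# Perfectly correlated pair of binary variables: I = log(2) nats
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(mutual_information(p))  # ≈ 0.6931
```

Two independent variables (a uniform joint table) give I = 0, the opposite extreme.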
Finding the P_Λ(h|v) which maximizes I_Λ(H:E) under certain constraints is a well-posed mathematical question and has a formal solution Tishby et al. (2000). Since, however, the space of probability distributions grows exponentially with the number of local degrees of freedom, it is in practice impossible to use without further assumptions for any but the smallest physical systems. Our approach is to exploit the remarkable dimensionality-reduction properties of artificial neural networks (ANNs) Hinton and Salakhutdinov (2006). We use restricted Boltzmann machines (RBMs), a class of probabilistic ANNs well adapted to approximating arbitrary data probability distributions. An RBM is composed of two layers of nodes: the "visible" layer, corresponding to local degrees of freedom in our setting, and a "hidden" layer. The interactions between the layers are defined by an energy function E_λ(v,h), such that the joint probability distribution for a particular configuration of visible and hidden degrees of freedom is given by a Boltzmann weight:
P_λ(v,h) = e^{−E_λ(v,h)} / Z    (2)
with Z the normalization. The goal of training an ANN is to find the parameters λ ("weights" or "filters") optimizing a chosen objective function.
Three distinct RBMs are used: two are trained as efficient approximators of the probability distributions P(v) and P(v,e), using the celebrated contrastive divergence (CD) algorithm G.E. (2002). Their trained parameters are used by the third network [see Fig. 1(B)], which has a different objective: to find the P_Λ(h|v) maximizing I_Λ(H:E), we introduce the real-space mutual information (RSMI) network, whose architecture is shown in Fig. 1(A). The hidden units of the RSMI network correspond to the coarse-grained variables H.
The parameters Λ of the RSMI network are trained by an iterative procedure. At each iteration a Monte Carlo estimate of the function I_Λ and its gradients is performed for the current values of the parameters Λ. The gradients are then used to improve the values of the weights in the next step, using a stochastic gradient descent procedure.
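The logic of this loop can be illustrated on a toy problem. The sketch below is emphatically not the RSMI implementation: it replaces the Monte Carlo estimate by exact enumeration and the analytic gradients by numerical ones, and all names and the toy generative model (one visible spin copying a binary environment through a noisy channel, one pure-noise spin) are our own. Gradient ascent on the mutual information then drives the weight on the informative spin up while the weight on the noise spin stays near zero:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy system: environment e = ±1; visible spin v0 copies e (with 10% flip
# probability), visible spin v1 is pure noise. P(v, e) enumerated exactly.
def p_ve(v, e):
    p_flip = 0.1
    p0 = (1 - p_flip) if v[0] == e else p_flip   # informative spin
    return 0.5 * p0 * 0.5                        # P(e) = 1/2, P(v1) = 1/2

def p_h_given_v(h, v, lam):
    a = lam @ np.array(v)                        # RBM-like coupling
    return np.exp(h * a) / (2 * np.cosh(a))

def mutual_info(lam):
    # I(H:E) via exact enumeration of h, e and the four visible states
    joint = np.zeros((2, 2))                     # indices: h in {-1,+1}, e in {-1,+1}
    for (hi, h), (ei, e) in product(enumerate((-1, 1)), enumerate((-1, 1))):
        joint[hi, ei] = sum(p_ve(v, e) * p_h_given_v(h, v, lam)
                            for v in product((-1, 1), repeat=2))
    ph = joint.sum(1, keepdims=True); pe = joint.sum(0, keepdims=True)
    return float(np.sum(joint * np.log(joint / (ph * pe))))

# Gradient ascent on I with central-difference gradients
# (a stand-in for the stochastic gradient step of the RSMI training)
lam = rng.normal(0, 0.1, size=2)
for _ in range(300):
    grad = np.array([(mutual_info(lam + eps) - mutual_info(lam - eps)) / 2e-4
                     for eps in np.eye(2) * 1e-4])
    lam += 1.0 * grad
print(lam)  # the weight on the informative spin grows; the noise weight stays near 0
```

The converged MI approaches its channel-capacity bound log 2 − H_b(0.1) ≈ 0.37 nats, set entirely by the informative spin.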
III Results
To validate our approach we consider two important classical models of statistical physics: the (critical) Ising model, whose RG is simpler, since the coarse-grained degrees of freedom resemble the original ones, and the fully-packed dimer model, where they are entirely different.
The Ising Hamiltonian on a two-dimensional square lattice is:
H(s) = −J Σ_{⟨ij⟩} s_i s_j    (3)
with s_i = ±1 and the summation over nearest neighbours ⟨ij⟩. Real-space RG of the Ising model proceeds by the block-spin construction Kadanoff (1966), whereby each block of spins is coarse-grained into a single effective spin, whose orientation is decided by a "majority rule".
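The majority-rule blocking itself is straightforward to implement. A minimal sketch for a square lattice of ±1 spins (the random tie-breaking for even-sized blocks is one common convention, chosen here for concreteness):

```python
import numpy as np

def block_spin(s, b=2):
    """Kadanoff block-spin coarse-graining of a ±1 spin lattice:
    each b×b block is replaced by the sign of its total magnetisation
    (ties broken at random, a usual convention for even-sized blocks)."""
    L = s.shape[0]
    assert L % b == 0
    blocks = s.reshape(L // b, b, L // b, b).sum(axis=(1, 3))  # block magnetisations
    ties = blocks == 0
    blocks[ties] = np.random.default_rng(0).choice([-1, 1], size=ties.sum())
    return np.sign(blocks).astype(int)

s = np.ones((8, 8), dtype=int)   # fully polarized configuration
print(block_spin(s))             # 4×4 lattice of +1
```

Iterating `block_spin` implements successive steps of this simple real-space RG.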
The results of the RSMI algorithm trained on Ising model samples are shown in Fig. 2. We vary the number of both the hidden neurons and the visible units, which are arranged in a 2D area [see Fig. 1(A)]. For a small spin area the network indeed rediscovers the famous Kadanoff block spin: Fig. 2(A) shows a single hidden unit coupling uniformly to the visible spins, i.e. the orientation of the hidden unit is decided by the average magnetisation in the area. Fig. 2(B) is a trivial but important sanity check: given as many hidden units as there are spins in the area, the network couples each hidden unit to a different spin, as expected.
We also compare the weights for areas of different size, which provide generalizations of the Kadanoff procedure to larger blocks. We find the network couples to the boundaries of the area to maximize MI with the rest of the system [see Figs. 2(C,D,E)]. This physical insight provided by the RSMI can in fact be shown to hold exactly for a standard real-space RG in the limit where the number of coarse-grained variables equals the size of the boundary (see supplementary materials).
We next study the dimer model, given by an entropy-only partition function, which counts the number of dimer coverings of the lattice, i.e. subsets of edges such that every vertex is the endpoint of exactly one edge. Fig. 3(A) shows sample dimer configurations (and additional spin degrees of freedom added to generate noise). This deceptively simple description hides nontrivial physics Fisher and Stephenson (1963) and correspondingly, the RG procedure for the dimer model is more subtle, since – contrary to the Ising case – the correct degrees of freedom to perform RG on are not dimers, but rather look like effective local electric fields. This is revealed by a mathematical mapping to a "height field" h [see Figs. 3(A,B) and Ref. Fradkin (2013)], whose gradients behave like electric fields. The continuum limit of the dimer model is given by the following action:
S[h] = (K/2) ∫ d²x (∇h)²    (4)
and therefore the coarse-grained degrees of freedom are the low-momentum (Fourier) components of the electric fields E_x and E_y in the x and y directions. They correspond to "staggered" dimer configurations shown in Fig. 3(A).
Remarkably, the RSMI algorithm extracts the local electric fields from the dimer model samples without any knowledge of those mappings. In Fig. 4 the weights for different numbers of hidden neurons, for an area similar to Fig. 3(A), are shown: the pattern of large negative (blue) weights couples strongly to a dimer pattern corresponding to a local uniform field [see left panels of Figs. 3(A,B)]. The large positive (yellow) weights select an identical pattern, translated by one link. The remaining neurons extract linear superpositions of the E_x and E_y fields.
To demonstrate the robustness of the RSMI, we added physically irrelevant noise which nevertheless forms a pronounced pattern: additional spin degrees of freedom, strongly coupled (ferromagnetically) in pairs [wiggly lines in Fig. 3(A)]. Decoupled from the dimers, and from other pairs, they form a trivial system, whose fluctuations are short-range noise on top of the dimer model. Vanishing weights [green in Figs. 4(A,B)] on the sites where the pairs of spins reside show that RSMI discards their fluctuations as irrelevant for the long-range physics, despite their regular pattern.
IV Outlook
Artificial neural networks based on real-space mutual information optimization have proven capable of extracting complex information about physically relevant degrees of freedom. This approach is an example of a new paradigm in applying ML in physics: the internal data representations discovered by suitably designed ML systems are not just technical means to an end, but instead are a clear reflection of the underlying structure of the physical system (see also Schoenholz et al. (2016)). In spite of its "black box" reputation, the innards of an ML architecture may teach us fundamental lessons. This raises the prospect of employing machine learning in science in a collaborative fashion, exploiting the machines' power to distill subtle information from vast data, and human creativity and background knowledge Jordan and Mitchell (2015).
Numerous further research directions can be pursued. Most directly, equilibrium systems with less understood relevant degrees of freedom – e.g. disordered and glassy systems – can be investigated. Furthermore, though we applied our algorithm to classical systems, the extension to the quantum domain is possible via the quantum-to-classical mapping of the Euclidean path-integral formalism. We envisage extending RSMI to a full RG scheme, i.e. using additional ANNs to extract the effective Hamiltonian of the coarse-grained degrees of freedom and possibly reconstruct the RG flow. To this end a formal analysis of the mutual-information-based RG procedure may prove fruitful, also from a theory perspective. Finally, applications of RSMI beyond physics are possible, since it offers an ANN implementation of a variant of the Information Bottleneck method Tishby et al. (2000), successful in compression and clustering analyses Slonim and Tishby (2000).
References
- LeCun et al. (2015) Y. LeCun, Y. Bengio, and Hinton G.E., “Deep learning,” Nature 521, 436–444 (2015).
- Silver and et al. (2016) D. Silver and et al., “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 584–589 (2016).
- Hershey et al. (2010) John R. Hershey, Steven J. Rennie, Peder A. Olsen, and Trausti T. Kristjansson, “Super-human multi-talker speech recognition: A graphical modeling approach,” Comput. Speech Lang. 24, 45–66 (2010).
- Carrasquilla and Melko (2017) J. Carrasquilla and R. G. Melko, “Machine learning phases of matter,” Nature Physics (2017).
- Torlai and Melko (2016) G. Torlai and R. G. Melko, “Learning thermodynamics with Boltzmann machines,” Phys. Rev. B 94, 165134 (2016).
- van Nieuwenburg et al. (2016) E. P. L. van Nieuwenburg, Y.-H. Liu, and S. D. Huber, “Learning phase transitions by confusion,” Nature Physics (2016).
- Wang (2016) L. Wang, “Discovering phase transitions with unsupervised learning,” Phys. Rev. B 94, 195105 (2016).
- Ohtsuki and Ohtsuki (2017) T. Ohtsuki and T. Ohtsuki, “Deep Learning the Quantum Phase Transitions in Random Electron Systems: Applications to Three Dimensions,” Journal of the Physical Society of Japan 86, 044708 (2017).
- Hinton and Salakhutdinov (2006) G.E. Hinton and R.R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science 313, 504–507 (2006).
- Lin and Tegmark (2016) H. W. Lin and M. Tegmark, “Why does deep and cheap learning work so well?” ArXiv e-prints abs/1608.08225 (2016).
- Carleo and Troyer (2017) G. Carleo and M. Troyer, “Solving the Quantum Many-Body Problem with Artificial Neural Networks,” Science 355, 602–606 (2017).
- Wilson (1975) Kenneth G. Wilson, “The renormalization group: Critical phenomena and the kondo problem,” Rev. Mod. Phys. 47, 773–840 (1975).
- Politzer (1973) H. David Politzer, “Reliable perturbative results for strong interactions?” Phys. Rev. Lett. 30, 1346–1349 (1973).
- Berezinskii (1971) V. L. Berezinskii, “Destruction of Long-range Order in One-dimensional and Two-dimensional Systems having a Continuous Symmetry Group I. Classical Systems,” Soviet Journal of Experimental and Theoretical Physics 32, 493 (1971).
- Kosterlitz and Thouless (1973) J.M. Kosterlitz and D. Thouless, “Ordering, metastability and phase transitions in two-dimensional systems,” Journal of Physics C: Solid State Physics 6, 1181 (1973).
- Kadanoff (1966) L. P. Kadanoff, “Scaling laws for Ising models near T(c),” Physics 2, 263–272 (1966).
- Wetterich (1993) Christof Wetterich, “Exact evolution equation for the effective potential,” Physics Letters B 301, 90 – 94 (1993).
- White (1992) Steven R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett. 69, 2863–2866 (1992).
- Ma et al. (1979) Shang-keng Ma, Chandan Dasgupta, and Chin-kun Hu, “Random antiferromagnetic chain,” Phys. Rev. Lett. 43, 1434–1437 (1979).
- Corboz and Mila (2013) Philippe Corboz and Frederic Mila, “Tensor network study of the shastry-sutherland model in zero magnetic field,” Phys. Rev. B 87, 115144 (2013).
- Capponi et al. (2013) Sylvain Capponi, V. Ravi Chandra, Assa Auerbach, and Marvin Weinstein, “ chiral resonating valence bonds in the kagome antiferromagnet,” Phys. Rev. B 87, 161118 (2013).
- Auerbach (1994) A. Auerbach, Interacting electrons and quantum magnetism (Springer, 1994).
- Gaite and O’Connor (1996) Jose Gaite and Denjoe O’Connor, “Field theory entropy, the theorem, and the renormalization group,” Phys. Rev. D 54, 5163–5173 (1996).
- Preskill (2000) J. Preskill, “Quantum information and physics: some future directions,” J. Mod. Opt. 47, 127–137 (2000).
- Apenko (2012) S.M. Apenko, “Information theory and renormalization group flows,” Physica A 391, 62–77 (2012).
- Machta et al. (2013) B.B. Machta, R. Chachra, M.K. Transtrum, and J.P. Sethna, “Parameter space compression underlies emergent theories and predictive models,” Science 342, 604–607 (2013).
- Beny and Osborne (2015) Cedric Beny and Tobias J Osborne, “The renormalization group via statistical inference,” New Journal of Physics 17 (2015).
- Stephan et al. (2014) Jean-Marie Stephan, Stephen Inglis, Paul Fendley, and Roger G. Melko, “Geometric mutual information at classical critical points,” Phys. Rev. Lett. 112, 127204 (2014).
- Tishby et al. (2000) N. Tishby, F. C. Pereira, and W. Bialek, “The information bottleneck method,” ArXiv Physics e-prints physics/0004057 (2000).
- G.E. (2002) Hinton G.E., “Training Products of Experts by Minimizing Contrastive Divergence,” Neural Computation 14, 1771–1800 (2002).
- Fradkin (2013) E. Fradkin, Field theories of Condensed Matter Physics (Cambridge University Press, 2013).
- Fisher and Stephenson (1963) Michael E. Fisher and John Stephenson, “Statistical mechanics of dimers on a plane lattice. ii. dimer correlations and monomers,” Phys. Rev. 132, 1411–1431 (1963).
- Schoenholz et al. (2016) S.S. Schoenholz, E.D. Cubuk, D.M. Sussman, E. Kaxiras, and A.J. Liu, “A structural approach to relaxation in glassy liquids,” Nature Physics 12, 469–471 (2016).
- Jordan and Mitchell (2015) M.I. Jordan and T.M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” Science 349, 255–260 (2015).
- Slonim and Tishby (2000) Noam Slonim and Naftali Tishby, “Document clustering using word clusters via the information bottleneck method,” in Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’00 (ACM, 2000) pp. 208–215.
- Haykin (2009) S. Haykin, Neural Networks and Learning Machines (Pearson, 2009).
- Theano Development Team (2016) Theano Development Team, “Theano: A Python framework for fast computation of mathematical expressions,” arXiv e-prints abs/1605.02688 (2016).
- Mehta and Schwab (2014) P. Mehta and D. J. Schwab, “An exact mapping between the Variational Renormalization Group and Deep Learning,” ArXiv e-prints abs/1410.3831 (2014).
Supplemental materials: Methods
Estimating Mutual Information
Here we gather the notations and define formally the quantities appearing in the text. We derive in detail the expression for the approximate mutual information measure, which is evaluated numerically by the RSMI algorithm. This measure is given in terms of a number of probability distributions, accessible via Monte Carlo samples and approximated by contrastive divergence (CD) trained RBMs, or directly defined by (different) RBMs Haykin (2009).
We consider a statistical system of classical Ising variables x_i = ±1, which can equally well describe the presence or absence of a dimer. The system X is divided into a small "visible" area V, an "environment" E, and a "buffer" B separating the degrees of freedom in V and E spatially (for clarity of exposition we assumed an empty buffer in the main text). We additionally consider a set of "hidden" Ising variables h_j. For the main RSMI network described in the text, H has the interpretation of the coarse-grained degrees of freedom extracted from V.
We assume the data distribution P(x) – formally a Boltzmann equilibrium distribution defined by a Hamiltonian – is given to us indirectly via random (Monte Carlo) samples of X. The distributions P(v) and P(v,e) are defined as marginalizations of P(x). Performing the marginalizations explicitly is computationally costly and therefore it is much more efficient to approximate P(v) and P(v,e) using two RBMs of the type defined in and above Eq. (2), trained using the CD algorithm G.E. (2002) on the restrictions of the samples to the appropriate subsets of degrees of freedom. The trained networks, with parameters θ_V and θ_E (which we refer to as θ-RBMs), define probability distributions P_{θ_V}(v) and P_{θ_E}(v,e), respectively [see also Fig. 1(B)]. From a mathematical standpoint, contrastive divergence is based on minimizing a proxy to the Kullback-Leibler divergence between P_{θ_V}(v) and P_{θ_E}(v,e) and the data probability distributions P(v) and P(v,e), respectively, i.e. the training produces RBMs which model the data well G.E. (2002).
The conditional probability distribution P_Λ(h|v) is defined by another RBM, denoted henceforth the Λ-RBM, with tunable parameters Λ = {λ_ij}:
P_Λ(h|v) = e^{−E_Λ(v,h)} / Σ_{h'} e^{−E_Λ(v,h')},  with E_Λ(v,h) = −Σ_{i,j} v_i λ_{ij} h_j    (5)
In contrast to the θ-RBMs, it will not be trained using the CD algorithm, since its objective is not to approximate the data probability distribution. Instead, the parameters Λ will be chosen so as to maximize a measure of mutual information between H and E. The reason for the exclusion of a buffer B, generally of linear extent comparable to V, is that otherwise the MI would take into account correlations of V with its immediate vicinity, which are equivalent to short-ranged correlations within V itself. We now derive the MI expression explicitly.
Using P(v,e) and P_Λ(h|v) we can define the joint probability distribution P_Λ(h,e) = Σ_v P_Λ(h|v) P(v,e) and marginalize over e to obtain P_Λ(h). We can then define the mutual information (MI) between H and E in the standard fashion:
I_Λ(H:E) = Σ_{h,e} P_Λ(h,e) log [ P_Λ(h,e) / (P_Λ(h) P(e)) ]    (6)
The main task is to find the set of parameters Λ which maximizes I_Λ(H:E) given the samples of X. Since the entropy of the environment, −Σ_e P(e) log P(e), is not a function of Λ, one can equivalently optimize a simpler quantity A_Λ:
A_Λ = Σ_{h,e} P_Λ(h,e) log [ P_Λ(h,e) / P_Λ(h) ]    (7)
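That the objectives of Eqs. (6) and (7) differ only by the environment entropy, I_Λ = A_Λ + H(E), holds for any joint distribution and can be checked numerically; a quick sanity check (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random joint distribution P(h, e) over binary h and e
p = rng.random((2, 2))
p /= p.sum()
ph = p.sum(1, keepdims=True)              # marginal P(h)
pe = p.sum(0, keepdims=True)              # marginal P(e)

I = np.sum(p * np.log(p / (ph * pe)))     # mutual information, as in Eq. (6)
A = np.sum(p * np.log(p / ph))            # simpler objective, as in Eq. (7)
H_E = -np.sum(pe * np.log(pe))            # entropy of the environment

print(I - A - H_E)  # ≈ 0: the two objectives differ by a Λ-independent constant
```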
Using the θ-RBM approximations of the data probability distributions, as well as the definition of the Λ-RBM, one can further rewrite this as:
A_Λ = Σ_{v,e,h} P(v,e) P_Λ(h|v) log [ Σ_{v'} P_Λ(h|v') P_{θ_E}(v',e) / Σ_{v'} P_Λ(h|v') P_{θ_V}(v') ]    (8)
The daunting-looking argument of the logarithm can in fact be cast in a simple form, using the fact that all the probability distributions involved either are of Boltzmann form, or are marginalizations thereof over the hidden variables, which can be performed explicitly:
Σ_{v'} P_Λ(h|v') P_{θ_E}(v',e) / Σ_{v'} P_Λ(h|v') P_{θ_V}(v') = P_{θ_E}(e) ⟨ e^{−E_Λ(v,h) − K_Λ(v)} ⟩_{E,e} / ⟨ e^{−E_Λ(v,h) − K_Λ(v)} ⟩_V    (9)
where
K_Λ(v) = log Σ_{h'} e^{−E_Λ(v,h')}    (10)
and where E_{θ_V} and E_{θ_E} are the effective ("free") energies obtained by explicitly tracing out the hidden layers of the trained θ-RBMs, defined by the parameter sets θ_V and θ_E:
P_{θ_V}(v) = e^{−E_{θ_V}(v)} / Z_{θ_V},  P_{θ_E}(v,e) = e^{−E_{θ_E}(v,e)} / Z_{θ_E}    (11)
Note that, since in P_Λ(h|v) the dependence on any parameters coupling only to v (e.g. visible bias terms) cancels out between the numerator and the denominator [and consequently also in e^{−E_Λ(v,h) − K_Λ(v)}], the quantity A_Λ does not depend on them. Hence, without loss of generality, we put the visible biases to zero in our numerical simulations, i.e. the Λ-RBM is specified by the set of parameters {λ_ij} only.
The quantity A_Λ is an average over the distribution P(v,e)P_Λ(h|v) [see Eq. (5)] of a logarithmic expression, whose argument is a ratio of statistical expectation values for systems with energies E_{θ_E}(v,e) and E_{θ_V}(v), with the variables h (and e) held fixed:
⟨ e^{−E_Λ(v,h) − K_Λ(v)} ⟩_{E,e} = Σ_v e^{−E_Λ(v,h) − K_Λ(v)} e^{−E_{θ_E}(v,e)} / Σ_v e^{−E_{θ_E}(v,e)}    (12)
⟨ e^{−E_Λ(v,h) − K_Λ(v)} ⟩_V = Σ_v e^{−E_Λ(v,h) − K_Λ(v)} e^{−E_{θ_V}(v)} / Σ_v e^{−E_{θ_V}(v)}    (13)
with the normalizations given by the corresponding partition sums. Thus finally, dropping the Λ-independent constant due to P_{θ_E}(e), we arrive at a simple expression for A_Λ:
A_Λ = Σ_{v,e} P(v,e) Σ_h P_Λ(h|v) log [ ⟨ e^{−E_Λ(v,h) − K_Λ(v)} ⟩_{E,e} / ⟨ e^{−E_Λ(v,h) − K_Λ(v)} ⟩_V ]    (14)
This expression can be numerically evaluated: using the fact that the samples are drawn from P(v,e), we replace the sums over v and e with a Monte Carlo (MC) average over the samples (v_n, e_n). Furthermore, given a Λ-RBM (at the current stage of training) and a sample v_n, one can easily draw a sample h_n according to the probability distribution P_Λ(h|v_n). Hence we have a MC estimate:
A_Λ ≈ (1/N) Σ_{n=1}^{N} log [ ⟨ e^{−E_Λ(v,h_n) − K_Λ(v)} ⟩_{E,e_n} / ⟨ e^{−E_Λ(v,h_n) − K_Λ(v)} ⟩_V ]    (15)
The expectation values in the summand are themselves also evaluated by MC averaging, this time with respect to the Boltzmann probability distributions with energies E_{θ_E}(v,e_n) and E_{θ_V}(v).
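Drawing the sample h_n given v_n is straightforward because, for an energy of the form in Eq. (5), P_Λ(h|v) factorizes over the hidden units. A minimal sketch for ±1 hidden units (function and variable names ours):

```python
import numpy as np

def sample_hiddens(v, lam, rng):
    """Sample h ~ P_Λ(h|v) for an RBM-type energy E_Λ(v,h) = -Σ_ij v_i λ_ij h_j
    with h_j = ±1. The conditional factorizes over the hidden units:
    P(h_j = +1 | v) = σ(2 Σ_i v_i λ_ij), with σ the logistic sigmoid."""
    act = v @ lam                                  # activations Σ_i v_i λ_ij
    p_up = 1.0 / (1.0 + np.exp(-2.0 * act))        # per-unit probability of h_j = +1
    return np.where(rng.random(act.shape) < p_up, 1, -1)

rng = np.random.default_rng(1)
lam = np.array([[5.0], [5.0]])        # one hidden unit coupling strongly to both spins
v = np.array([1, 1])
print(sample_hiddens(v, lam, rng))    # almost surely [1]
```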
Maximizing A_Λ with respect to the parameters Λ is most easily achieved using a conventional stochastic gradient descent procedure Haykin (2009). To this end we estimate the derivatives ∂A_Λ/∂λ_ij over the samples. More accurately, we divide the samples into mini-batches, obtain an average estimate of ∂A_Λ/∂λ_ij, and use it to update the parameters of the Λ-RBM: λ_ij → λ_ij + α ∂A_Λ/∂λ_ij, with a learning rate α. This is then repeated for the next mini-batch. A run through all the mini-batches constitutes one epoch of training. In Fig. 1 of the Supplementary Materials we show the convergence of the estimate of A_Λ and the development of the weight matrices for the case of the Ising system.
A practical remark: the gradients are best computed explicitly (a simple, if tedious, computation) prior to numerical evaluation; one should not use the automated gradient computation capability provided by e.g. the Theano package Theano Development Team (2016). The reason is that some of the dependence on the parameters is stochastic (i.e. they define the probability distribution which the MC averaging approximates), and this dependence may not be captured correctly by automated gradient computing procedures.
Relation of Mutual Information RG procedure to conventional RG schemes
Here we provide an intuitive theoretical argument elucidating the connection of our mutual-information-based approach to more standard treatments of real-space RG. Information-theoretic approaches were also advocated or investigated in Refs. Gaite and O'Connor (1996); Preskill (2000); Apenko (2012); Machta et al. (2013); Beny and Osborne (2015). Before defining our explicit criteria for identifying relevant degrees of freedom, let us first briefly rephrase the conventional RG procedure in probabilistic terms.
Consider then a physical system, represented by a set of local degrees of freedom (or random variables) x = {x_i} and governed by a Hamiltonian energy function H(x). The equilibrium probability distribution is of Boltzmann form: P(x) = e^{−βH(x)}/Z, with β the inverse temperature. Next we consider a new and smaller set of degrees of freedom h = {h_j}, i.e. the coarse-grained variables, whose dependence on x is given by a conditional probability P_Λ(h|x), where Λ are variational internal parameters to be specified and each h_j depends on some localized set of the x_i. The RG transformation in this language consists of finding the effective Hamiltonian H'(h) of the coarse-grained degrees of freedom by marginalizing over (or integrating out, in physical terms) the x degrees of freedom in the joint probability distribution of h and x:
e^{−βH'(h)} ∝ Σ_x P_Λ(h|x) e^{−βH(x)}    (16)
The new effective energy function H'(h) contains all the information required to evaluate expectation values of the h variables in an exact fashion.
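As a concrete check of this prescription, consider the textbook example of decimating every other spin of a one-dimensional Ising ring, i.e. choosing P_Λ(h|x) to deterministically retain the even sites. Performing the marginalization of Eq. (16) numerically reproduces the known effective nearest-neighbour coupling K' = arctanh(tanh² K); the code and variable names below are ours:

```python
import numpy as np
from itertools import product

def boltzmann(config, K):
    """Unnormalized Boltzmann weight of a 1D Ising ring with coupling K = βJ."""
    s = np.array(config)
    return np.exp(K * np.sum(s * np.roll(s, 1)))

K = 0.7
N = 8  # ring of 8 spins; decimate the 4 odd sites

# Marginalize over odd sites: P'(s_even) ∝ Σ_{s_odd} P(s), as in Eq. (16)
# with the deterministic choice h = even-site spins.
marg = {}
for even in product((-1, 1), repeat=N // 2):
    w = 0.0
    for odd in product((-1, 1), repeat=N // 2):
        config = [x for pair in zip(even, odd) for x in pair]  # interleave sites
        w += boltzmann(config, K)
    marg[even] = w

# The effective Hamiltonian should be nearest-neighbour Ising with
# K' = arctanh(tanh(K)^2); check the weight ratios it predicts.
K_eff = np.arctanh(np.tanh(K) ** 2)
up = marg[(1, 1, 1, 1)]
one_flip = marg[(1, 1, 1, -1)]
# Flipping one of 4 coarse spins on a ring changes 2 bonds: log ratio = 4 * K_eff
print(np.log(up / one_flip), 4 * K_eff)  # the two numbers agree
```

Here the effective Hamiltonian is exactly short-ranged; in higher dimensions longer-range terms are generated, which is precisely what a good choice of P_Λ must suppress.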
The usefulness (and practicality) of the RG procedure depends on choosing P_Λ (or, equivalently, the relevant degrees of freedom) such that the effective Hamiltonian H' remains as short-ranged as possible and (if h is continuous) the fluctuations of h are as small as possible, so that high powers of h are not needed. More formally, we demand that the Taylor expansion of H' in h:
H'(h) = Σ_j A_j h_j + Σ_{j,k} A_{jk} h_j h_k + Σ_{j,k,l} A_{jkl} h_j h_k h_l + …    (17)
contains only short-ranged and few-body terms, i.e. the coefficients A decay exponentially with distance. Since the coefficients can be computed as the cumulants, or connected n-point correlation functions, of the degrees of freedom in this theory, one can also rephrase the requirements as follows: integrating out some subset of the x must, via the coupling induced by the parameters Λ, generate strong mass terms for the remaining x, which gap out all their long-range fluctuations. The correlation functions of the x then decay exponentially and thus H' is short-ranged. If the above requirements are satisfied, all the terms in H' beyond some finite distance can be removed while making only minor changes to the statistical properties of the system. The procedure can then be repeated recursively, granting access to increasingly long-ranged features without keeping track of all the degrees of freedom.
What constitutes a good RG scheme in the above language is intuitively clear, but hard to formalise, especially with an algorithmic goal in mind. The mutual information prescription, on the other hand, has a very precise formulation, and lends itself naturally to computational implementation. How are they related? For simplicity consider a one-dimensional system X, divided into three parts A, B, and C. Denote the hiddens in these regions by h_A, h_B, and h_C, respectively. Assume first that we have enough hiddens in these regions to maximize the mutual information with their complements, i.e. that adding any additional hiddens results in no further gains to the MI. A term of the type h_A h_C in the effective Hamiltonian implies that the probability distribution of h_A depends directly on h_C, and not just on h_B. Since the underlying distribution is local, such a dependence can only be mediated by degrees of freedom in B which are correlated with the degrees of freedom in A and C. However, by our assumption of maximized mutual information, the hiddens h_B already couple to all such degrees of freedom in B. Consequently, such a term cannot exist when the mutual information is maximal. Thus, as the MI grows, the coefficients of the longer-ranged terms in H' should decay.
The argument above justifies why the MI-based RG procedure should give results similar to more conventional RG schemes. This is explicitly confirmed by our numerical results for the dimer and Ising models. A formal, analytical investigation of this relation is nevertheless desirable, and will be pursued in a separate work.
Uniform vs. Boundary centered filters for Ising model
We comment briefly on the numerical results obtained for the Ising model, namely the boundary-coupling behaviour observed in the weights for increasing sizes of the visible area V [see Fig. 2(E) in the main text]. As mentioned in the main text, this behaviour, generalizing the simple Kadanoff blocking seen for small areas, can in fact be justified on the grounds of conventional (in the sense defined in the previous section) RG scheme behaviour, further validating the correctness of our numerical procedure.
Let us then state a concrete question: should the parameters Λ defining the slow degrees of freedom in a real-space RG scheme (as described in the previous section of the Supplemental Materials), in the limit of a large area V, have a uniform modulation (pattern) over the area, or should they more strongly couple to the boundary of V? We argue that for the Ising case it is the latter, and therefore the results obtained by our MI-based procedure are consistent with the behaviour expected of an RG scheme. For the sake of argument we assume the simplest scenario, in which the system has only short-range interactions and the number of "hiddens" coupling to V equals the number of degrees of freedom on its boundary (defined as the set of degrees of freedom in V with neighbours outside V). It is easy to see that in this case the optimal solution is to couple each hidden unit, with infinite strength, to one degree of freedom on the boundary, effectively enforcing h_j = x_j there. In the spirit of the discussion in the previous section of the Supplemental Materials, integrating out the x would then generate large mass terms for the x on the boundary, removing all correlations between the inside and outside of V, and consequently the effective Hamiltonian for the h would be short-ranged. For a number of hidden units smaller than the size of the boundary, the filters would keep favouring the boundary over the bulk of the cell, to keep track of the degrees of freedom most directly coupled to the environment – an intuition which is qualitatively confirmed by our numerical results for the Ising model.
For the case of the dimer system the weight matrices appear uniformly textured even for a large visible area [see Fig. 4(A) in the main text], which may seem puzzling in light of the above argument. This is a consequence of dimers being constrained degrees of freedom (or, in the continuum formulation, of the height field obeying a conservation equation) Fradkin (2013), as opposed to Ising spins. The height field is not a microscopically accessible quantity in the dimer model; only its gradient is. The weights we find for the dimer model enforce a uniform coupling pattern of the hidden degrees of freedom to the gradient of the height field over the many unit cells contained in the visible area, i.e. the hiddens couple to an average of the electric field over the area. This integral of the gradient over the area is, however, equivalent to a difference of boundary terms for the height field itself! Thus there is no contradiction: the direct coupling of the hiddens to the height field on the boundary, which is physically not possible for the dimer model, is achieved by a uniform coupling to its gradient.
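The telescoping identity invoked here — a uniform sum of the discrete gradient reduces to a pure boundary difference — can be illustrated in one dimension (the array h below is a hypothetical stand-in for the height field, not data from our simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(10)   # stand-in "height field" sampled on a 1D strip
grad = np.diff(h)             # its discrete gradient: the microscopically accessible quantity

# a uniform coupling to the gradient over the whole strip telescopes
# to a difference of boundary terms of the height field itself
bulk_sum = grad.sum()
boundary_term = h[-1] - h[0]
print(np.isclose(bulk_sum, boundary_term))  # prints True
```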
In principle, the scaling behaviour of the weights with the size of the visible area may thus allow one to draw conclusions as to whether the degrees of freedom are constrained or not.
Comparison with contrastive-divergence-trained RBMs
The relation between unsupervised machine learning (in particular in deep NNs) and RG has recently been a subject of some controversy. Ref. Mehta and Schwab (2014) claims an “exact” mapping between the two, while Tegmark and Lin assert, to the contrary, that RG is entirely unrelated to unsupervised learning Lin and Tegmark (2016). Our theoretical arguments, as well as explicit numerical results on the dimer model, disprove the very general claims of Ref. Mehta and Schwab (2014); however, we cannot entirely agree with Tegmark and Lin either.
The crucial point is that a neural network is not fully specified without its cost function. Both Ref. Mehta and Schwab (2014) (explicitly) and Ref. Lin and Tegmark (2016) (implicitly) assume that the networks are trained using the Kullback-Leibler divergence, or some related measure, resulting in a network which approximates the input data probability distribution well. Ref. Lin and Tegmark (2016) is correct in pointing out that this will not generally result in an RG transformation. We are not limited, however, to such cost functions; in point of fact, we are free to choose a cost function with an entirely different objective. As we have shown, choosing to maximize the mutual information with the system at large results in a network identifying the physically relevant degrees of freedom.
To show explicitly that a generic NN trained to recognize patterns does not perform a physically meaningful coarse-graining (i.e., potentially, RG), we examine the weight matrices of a CD-trained network (CD-RBM) on the dimer model with additional spin “noise”, the same system we considered in the main text. In Fig. 6(A) we show examples of the weights obtained for a small number of hidden units: the network couples strongly to individual pairs of spins fluctuating in sync, even though they are irrelevant from the point of view of physics. This is correct behaviour when patterns are to be discerned (since there are many entirely different dimer textures, but the same fluctuating pattern of spin pairs in every configuration), but it does not make any sense physically. If, given a few hidden units, we were to extract new degrees of freedom on which to continue the RG procedure, we would discard all of the dimer model in the first step, ending up with a trivial system of decoupled spin pairs. Only given enough hidden neurons to capture all the spin pairs does the CD-trained network learn to discern extensive dimer textures, and even then these are not the staggered configurations carrying the electric fields important from the point of view of RG, but rather configurations more similar to the high-entropy columnar ones (which map to vanishing electric fields).
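For concreteness, a minimal sketch of the CD-1 weight update used to train such an RBM (biases omitted for brevity; the toy data mimics the synchronized “noise” spin pairs, and the sizes and learning rate are illustrative assumptions, not the parameters of our actual runs):

```python
import numpy as np

rng = np.random.default_rng(0)

def cd1_step(v0, W, lr=0.05):
    """One contrastive-divergence (CD-1) update of the RBM weights W
    for a batch of binary visible vectors v0 (shape: batch x n_visible)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # positive phase: hidden activations driven by the data
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # negative phase: one step of Gibbs sampling back to the visibles
    v1_prob = sigmoid(h0 @ W.T)
    v1 = (rng.random(v1_prob.shape) < v1_prob).astype(float)
    h1_prob = sigmoid(v1 @ W)
    # CD-1 approximation to the log-likelihood gradient
    return W + lr * (v0.T @ h0_prob - v1.T @ h1_prob) / v0.shape[0]

# toy data: two pairs of visible units that always fluctuate in sync,
# mimicking the "noise" spin pairs added on top of the dimer model
batch = rng.integers(0, 2, size=(64, 2)).repeat(2, axis=1).astype(float)
W = 0.01 * rng.standard_normal((4, 3))  # 4 visible units, 3 hidden units
for _ in range(200):
    W = cd1_step(batch, W)
```

The objective approximated here is the data log-likelihood, which is precisely why such a network latches onto the perfectly predictable spin pairs rather than onto the physically relevant dimer degrees of freedom.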