Kac–Ward Solution of the 2D Classical and 1D Quantum Ising Models

We rely on the extension of the Kac–Ward identity to “faithful projections” of non-planar graphs, which was proposed by Cimasoni [4] and used in [1, 14]. In order to accommodate negative weights, we need two faithful projections of the graph \((\mathcal{V}_{L,M}, \mathcal{E}_{L,M})\) with edges between nearest neighbours. The graphs \(G_1\) and \(G_2\) are illustrated in Fig. 4. Here is a full description of the left graph:

The vertices are (i, j) with \(1 \le i \le L\) and \(1 \le j \le M\).

There are edges represented by straight lines between (i, j) and \((i+1,j)\) for \(1 \le i \le L-1\), \(1 \le j \le M\); between (i, j) and \((i,j+1)\) for \(1 \le i \le L\), \(1 \le j \le M-1\); and between (i, j) and \((i+1,j+1)\) for \(1 \le i \le L-1\), \(1 \le j \le M-1\).

There are edges represented by “handles” (continuous curves with winding number \(-1\)) between (L, j) and (1, j) for \(1 \le j \le M\); between (L, j) and \((1,j+1)\) for \(1 \le j \le M-1\); between (i, M) and (i, 1) for \(1 \le i \le L\); and between (i, M) and \((i+1,1)\) for \(1 \le i \le L-1\).

And there is a self-crossing handle between (L, M) and (1, 1) whose winding number is \(-2\).

The handles are drawn so that handles starting at (i, M) only cross the handles starting at (L, j) (and they cross them exactly once); the self-crossing handle belongs to both groups.

The second graph is similar, except that the oblique handle no longer self-crosses but the other horizontal handles all self-cross.

Fig. 4

Two faithful projections of the graph \((\mathcal{V}_{L,M}, \mathcal{E}_{L,M})\). The handles cross at non-vertex locations; some handles cross themselves. The matrix \(K^{(1)}\) is defined on the left graph; the matrix \(K^{(2)}\) is defined on the right graph

The Kac–Ward identity involves matrices indexed by directed edges. We denote by \(\vec{\mathcal{E}}_{L,M}\) the edges of \(\mathcal{E}_{L,M}\) with direction. The coupling constants defined in Eq. (2.2) can be extended to directed edges by assigning the same value \(J_e\) to both directions of the same edge; then, we let \(W\) be the diagonal matrix whose element \(W_{e,e}\) is equal to \(\tanh J_e\). We now introduce the Kac–Ward matrix \(K^{(1)}\) by

$$\begin{aligned} K^{(1)}_{e,e'} = 1_{e \, \triangleright \, e'} \, \textrm{e}^{\frac{\textrm{i}}{2} \measuredangle _1(e,e') + \frac{\textrm{i}}{2} \measuredangle _1(e)}\,, \quad e,e' \in \vec{\mathcal{E}}_{L,M}. \end{aligned}$$

(3.1)

Here, \(e \, \triangleright \, e'\) means that the endpoint of \(e\) is equal to the starting point of \(e'\) and that \(e'\) is not the reverse of \(e\) (the matrix is “non-backtracking”). \(\measuredangle _1(e,e') \in (-\pi ,\pi ]\) is the turning angle between the end of \(e\) and the start of \(e'\) on the faithful projection \(G_1\); \(\measuredangle _1(e) \in \mathbb{R}\) is the integrated angle along the planar curve that represents the edge \(e\).
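Before turning to the torus, it may help to see the Kac–Ward matrix at work in the simplest planar situation, where there are no handles and all integrated angles \(\measuredangle (e)\) vanish. The following Python sketch (our own illustration, not part of the argument) builds the matrix for a single square cycle and checks that \(\sqrt{\det (1-KW)}\) equals the generating function of even subgraphs, which for a cycle is \(1 + \prod _e \tanh J_e\):

```python
import numpy as np

# Unit-square cycle: 4 vertices, 4 edges, planar, so all integrated angles vanish
verts = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = [(i, (i + 1) % 4) for i in range(4)]
t = np.array([0.3, 0.5, 0.2, 0.4])          # tanh J_e, one value per edge

# Directed edges: (tail, head, edge index)
dedges = [(a, b, i) for i, (a, b) in enumerate(edges)]
dedges += [(b, a, i) for i, (a, b) in enumerate(edges)]

def direction(d):
    (ax, ay), (bx, by) = verts[d[0]], verts[d[1]]
    return np.arctan2(by - ay, bx - ax)

n = len(dedges)
K = np.zeros((n, n), complex)
W = np.zeros((n, n))
for p, e in enumerate(dedges):
    W[p, p] = t[e[2]]
    for q, f in enumerate(dedges):
        # e > f: head of e is tail of f, and f is not the reverse of e
        if e[1] == f[0] and f != (e[1], e[0], e[2]):
            turn = (direction(f) - direction(e) + np.pi) % (2 * np.pi) - np.pi
            K[p, q] = np.exp(0.5j * turn)   # e^{(i/2) * turning angle}

lhs = np.sqrt(np.linalg.det(np.eye(n) - K @ W).real)
rhs = 1 + t.prod()   # even subgraphs of a cycle: empty set and the full cycle
assert abs(lhs - rhs) < 1e-10
```

Each of the two directed tours of the square accumulates a total turning of \(\pm 2\pi \), hence a factor \(-1\), which is what makes the determinant factorise as \((1+\prod _e \tanh J_e)^2\).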

Following [1], we define an average over even subgraphs: If f is a function on graphs, let

$$\begin{aligned} \langle f \rangle _{L,M} = \frac{1}{\mathcal{Z}_{L,M}} \sum _{\Gamma \subset \mathcal{E}_{L,M}: \partial \Gamma = \emptyset } f(\Gamma ) w(\Gamma ) \end{aligned}$$

(3.2)

where the normalisation is \(\mathcal{Z}_{L,M} = \sum _{\Gamma \subset \mathcal{E}_{L,M}: \partial \Gamma = \emptyset } w(\Gamma )\). The weight is defined as \(w(\Gamma ) = \prod _{e \in \Gamma } \tanh J_e\). The boundary \(\partial \Gamma \) of a graph is the set of vertices whose incidence number is odd; the sum on the right-hand side is over even subgraphs. Notice that \(\mathcal{Z}_{L,M}\) is always positive, as can be seen from its relation to the Ising partition function, see (3.4).

With these definitions, we have the remarkable Kac–Ward identity [1, Theorem 5.1]:

$$\begin{aligned} \sqrt{\det (1-K^{(1)}W)} = \mathcal{Z}_{L,M} \, \big \langle (-1)^{n^{(1)}_0(\Gamma )} \big \rangle _{L,M}. \end{aligned}$$

(3.3)

Here, \(n^{(1)}_0(\Gamma )\) is the total number of crossings between all edges of \(\Gamma \) when the graph is projected on \(G_1\).

It is worth noting that the right side of (3.3) is a multinomial in the weights \((W_{e,e})\), something that is not apparent on the left side: there are remarkable cancellations indeed. This allows [1] to prove the identity for small \((W_{e,e})\); the extension to larger values is automatic, since the determinant cannot become negative and the sign of the square root cannot change.

We define the matrix \(K^{(2)}\) as in (3.1), except that \(\measuredangle _2(e,e')\) and \(\measuredangle _2(e)\) are the corresponding angles on the faithful projection \(G_2\). Analogously, we define \(n^{(2)}_0(\Gamma )\) for this projection.

The connection with the Ising model is through the high-temperature expansion, see, for example, [7, Section 3.7.3]. The partition function (2.4) is equal to

$$\begin{aligned} Z_{L,M}(J_1,J_2,J_3)&= 2^{LM} \biggl ( \prod _{e \in \mathcal{E}_{L,M}} \cosh J_e \biggr ) \sum _{\Gamma \subset \mathcal{E}_{L,M}: \partial \Gamma = \emptyset } w(\Gamma ) \\&= 2^{LM} \biggl ( \prod _{e \in \mathcal{E}_{L,M}} \cosh J_e \biggr ) \mathcal{Z}_{L,M}. \end{aligned}$$

(3.4)
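The identity (3.4) can be verified by brute force on any small graph. The following Python sketch (our own illustration; the triangle graph and the coupling values, one of them negative, are ours) compares the direct sum over spin configurations with the high-temperature expansion:

```python
import numpy as np
from itertools import product

J = [0.4, -0.7, 0.25]           # couplings, one per edge; negative values allowed
edges = [(0, 1), (1, 2), (2, 0)]  # a triangle: 3 vertices, 3 edges

# Direct partition function: sum over all 2^3 spin configurations
Z = sum(np.exp(sum(J[k] * s[a] * s[b] for k, (a, b) in enumerate(edges)))
        for s in product([-1, 1], repeat=3))

# High-temperature expansion: Z = 2^N (prod cosh J_e) * sum over even subgraphs
t = [np.tanh(j) for j in J]
Zht = 0.0
for sub in product([0, 1], repeat=3):
    deg = [0, 0, 0]
    for k, (a, b) in enumerate(edges):
        if sub[k]:
            deg[a] += 1; deg[b] += 1
    if all(d % 2 == 0 for d in deg):   # even subgraph: every vertex has even degree
        Zht += np.prod([t[k] for k in range(3) if sub[k]] + [1.0])
Zht *= 2**3 * np.prod([np.cosh(j) for j in J])

assert abs(Z - Zht) < 1e-10
```

For the triangle, the only even subgraphs are the empty set and the full triangle, so the expansion reduces to \(2^3 \cosh J_1 \cosh J_2 \cosh J_3 \, (1 + t_1 t_2 t_3)\).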

The strategy of Aizenman and Warzel [1] is to prove that \(\langle (-1)^{n^{(1)}_0(\Gamma )} \rangle _{L,M} \rightarrow 1\) as \(L,M \rightarrow \infty \). This can be done when the coupling constants are positive and small enough that the temperature is higher than the 2D critical temperature. (Then, duality is used to get the formula for low temperatures.) The presence of negative coupling constants necessitates a different approach. We first show in Lemma 3.1 that a combination of the two faithful projections gives the partition function, up to a correction. We then show in Lemma 3.2 that this correction vanishes in the limit \(L\rightarrow \infty \), for fixed M. Denote by \(n_{\textrm{hor}}(\Gamma )\) the number of horizontal handles of the subgraph \(\Gamma \), that is, the number of handles in \(\Gamma \) that connect sites of the form (L, i) with sites (1, j). Note that the total number of horizontal handles of \(\mathcal{E}_{L,M}\) is 2M.

Lemma 3.1

We have

$$\begin{aligned} \sqrt{\det (1-K^{(1)}W)} + \sqrt{\det (1-K^{(2)}W)} = 2 \mathcal{Z}_{L,M} \Bigl ( 1 - \big \langle 1_{n_{\textrm{hor}}(\Gamma ) \, \textrm{odd}} \big \rangle _{L,M} \Bigr ). \end{aligned}$$

Proof

From Eq. (3.3), we have

$$\begin{aligned} \sqrt{\det (1-K^{(1)}W)} + \sqrt{\det (1-K^{(2)}W)} = \mathcal{Z}_{L,M} \big \langle (-1)^{n^{(1)}_0(\Gamma )}+(-1)^{n^{(2)}_0(\Gamma )} \big \rangle _{L,M}. \end{aligned}$$

(3.5)

Let \(n_{\textrm{vert}}(\Gamma )\) be the number of handles in \(\Gamma \) that connect sites of the form (i, M) with sites (j, 1) (excluding the handle between (L, M) and (1, 1)) and let \(n_{\textrm{obl}}(\Gamma ) \in \{0,1\}\) be the indicator of whether the handle between (L, M) and (1, 1) is present. (Notice the asymmetric definitions of \(n_{\textrm{hor}}\) and \(n_{\textrm{vert}}\): the oblique handle is included in \(n_{\textrm{hor}}\) but not in \(n_{\textrm{vert}}\).) We have

$$\begin{aligned} 1_{n^{(1)}_0(\Gamma ) \, \textrm{odd}}&= 1_{n_{\textrm{hor}}(\Gamma ) \, \textrm{odd}} \, \bigl ( 1_{n_{\textrm{obl}}(\Gamma )=0} \; 1_{n_{\textrm{vert}}(\Gamma ) \, \textrm{odd}} + 1_{n_{\textrm{obl}}(\Gamma )=1} \; 1_{n_{\textrm{vert}}(\Gamma ) \, \textrm{even}} \bigr ); \\ 1_{n^{(2)}_0(\Gamma ) \, \textrm{odd}}&= 1_{n_{\textrm{hor}}(\Gamma ) \, \textrm{odd}} \, \bigl ( 1_{n_{\textrm{obl}}(\Gamma )=0} \; 1_{n_{\textrm{vert}}(\Gamma ) \, \textrm{even}} + 1_{n_{\textrm{obl}}(\Gamma )=1} \; 1_{n_{\textrm{vert}}(\Gamma ) \, \textrm{odd}} \bigr ). \end{aligned}$$

(3.6)

It follows that

$$\begin{aligned} 1_{n^{(1)}_0(\Gamma ) \, \textrm{odd}} + 1_{n^{(2)}_0(\Gamma ) \, \textrm{odd}} = 1_{n_{\textrm{hor}}(\Gamma ) \, \textrm{odd}}. \end{aligned}$$

(3.7)

By combining the above relation with (3.5), using \((-1)^{n(\Gamma )} = 1 - 2 \cdot 1_{n(\Gamma ) \, \textrm{odd}}\), the lemma follows. \(\square \)

Lemma 3.2

For any \(J_1, J_2, J_3 \in \mathbb{R}\) and any \(M \in \mathbb{N}\), we have

$$\begin{aligned} \lim _{L \rightarrow \infty } \big \langle 1_{n_{\textrm{hor}}(\Gamma ) \, \textrm{odd}} \big \rangle _{L,M} = 0. \end{aligned}$$

Proof

We condition on the horizontal handles (possibly including the self-crossing ones). We denote by \(\mathcal{H}\) the set of handles that connect sites in the leftmost and rightmost columns:

$$\begin{aligned} \mathcal{H} = \bigl \{ \{(L,j_1'),(1,j_1)\}, \, \dots , \, \{(L,j_k'),(1,j_k)\} \bigr \}. \end{aligned}$$

(3.8)

Then, we define the support \(\textrm{supp}_1 \mathcal{H}\), resp. \(\textrm{supp}_L \mathcal{H}\), to be the set of vertices of the form \((1,j_i)\), resp. \((L,j_i')\), that appear an odd number of times in \(\mathcal{H}\). Let \(\textrm{supp} \, \mathcal{H} = \textrm{supp}_1 \mathcal{H} \cup \textrm{supp}_L \mathcal{H}\). We let \(\mathcal{E}^{\textrm{cyl}}_{L,M}\) be the set of edges of the cylinder (not the torus) \(\{1,\dots ,L\} \times \mathbb{Z}_M\). With \(1_{\mathcal{H}} = 1_{\mathcal{H}}(\Gamma )\) the indicator function that the random graph \(\Gamma \) has set of handles \(\mathcal{H}\), we have

$$\begin{aligned} \big \langle 1_{n_{\textrm{hor}}(\Gamma ) \, \textrm{odd}} \big \rangle _{L,M}&= \sum _{\mathcal{H}: |\mathcal{H}| \; \textrm{odd}} \langle 1_{\mathcal{H}} \rangle _{L,M} \\&= \sum _{\mathcal{H}: |\mathcal{H}| \; \textrm{odd}} \biggl ( \prod _{i=1}^k \tanh J_{e_i} \biggr ) \frac{1}{\mathcal{Z}_{L,M}} \sum _{\Gamma \subset \mathcal{E}^{\textrm{cyl}}_{L,M}: \partial \Gamma = \textrm{supp} \, \mathcal{H}} w(\Gamma ). \end{aligned}$$

(3.9)

We now consider an Ising model on the cylinder \(\{1,\dots ,L\} \times \mathbb{Z}_M\), with partition function \(Z^{\textrm{cyl}}_{L,M}\). We have

$$\begin{aligned} \frac{1}{\mathcal{Z}^{\textrm{cyl}}_{L,M}} \sum _{\Gamma \subset \mathcal{E}^{\textrm{cyl}}_{L,M}: \partial \Gamma = \textrm{supp} \, \mathcal{H}} w(\Gamma ) = \left\langle \prod _{x \in \textrm{supp} \, \mathcal{H}} \sigma _x \right\rangle _{L,M}^{\textrm{cyl}}. \end{aligned}$$

(3.10)

Notice that the partition function \(\mathcal{Z}^{\textrm{cyl}}_{L,M}\) is almost equal to \(\mathcal{Z}_{L,M}\); either ratio is bounded uniformly in L, since the two graphs differ only in the 2M handle edges. Next, we introduce the transfer matrix \(T_{\eta ,\eta '}\) between column configurations \(\eta ,\eta ' \in \{-1,1\}^M\):

$$\begin{aligned} T_{\eta ,\eta '} = \exp \biggl \{ \sum _{i=1}^M \Bigl ( J_1 \eta _i \eta _i' + J_2 \eta _i \eta _{i+1} + J_3 \eta _i \eta _{i+1}' \Bigr ) \biggr \}. \end{aligned}$$

(3.11)

Here, we defined \(\eta _{M+1} \equiv \eta _1\). The transfer matrix allows us to write the Ising correlations above as

$$\begin{aligned} \left\langle \prod _{x \in \textrm{supp} \, \mathcal{H}} \sigma _x \right\rangle _{L,M}^{\textrm{cyl}} = \frac{1}{Z^{\textrm{cyl}}_{L,M}} \sum _{\eta ,\eta '} \langle \eta | T^{L-1} | \eta ' \rangle \biggl ( \prod _{x \in \textrm{supp}_1 \mathcal{H}} \eta _x \biggr ) \biggl ( \prod _{y \in \textrm{supp}_L \mathcal{H}} \eta _y' \biggr ) \, \textrm{e}^{J_2 \sum _{i=1}^M \eta _i' \eta _{i+1}'}\,. \end{aligned}$$

(3.12)

The matrix elements of T are positive; by the Perron–Frobenius theorem, there exist vectors \(|v\rangle , |w\rangle \) with positive entries such that

$$\begin{aligned} \lim _{L \rightarrow \infty } \lambda _{\textrm{max}}^{-L} \, T^{L} = |v\rangle \langle w|. \end{aligned}$$

(3.13)

Here, \(\lambda _{\textrm{max}}>0\) is the largest eigenvalue of T. (It depends on M.) The vectors \(|v\rangle , |w\rangle \) can be decomposed in the basis \(\{|\eta \rangle \}\) of column configurations, and their coefficients have the spin-flip symmetry. Taking the limit \(L\rightarrow \infty \) in (3.12), one gets 0. Indeed, the sum over \(\eta \) is

$$\begin{aligned} \sum _\eta \biggl ( \prod _{x \in \textrm{supp}_1 \mathcal{H}} \eta _x \biggr ) \langle \eta | v \rangle \end{aligned}$$

(3.14)

which is zero since \(\textrm{supp}_1 \mathcal{H}\) contains an odd number of vertices; the sum over \(\eta '\) also gives zero. \(\square \)
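The mechanism that makes (3.14) vanish, namely the spin-flip symmetry of the Perron–Frobenius eigenvector paired with an odd product of spins, can be checked numerically. The following Python sketch (our own illustration; the value of M and the couplings, one negative, are arbitrary) builds the transfer matrix (3.11) and verifies both properties:

```python
import numpy as np
from itertools import product

M = 3
J1, J2, J3 = 0.5, -0.3, 0.2    # arbitrary couplings; negative values are fine here
configs = list(product([-1, 1], repeat=M))
idx = {c: i for i, c in enumerate(configs)}

def T_elem(e, ep):
    # one column-to-column step of (3.11), with eta_{M+1} = eta_1
    s = sum(J1*e[i]*ep[i] + J2*e[i]*e[(i+1) % M] + J3*e[i]*ep[(i+1) % M]
            for i in range(M))
    return np.exp(s)

T = np.array([[T_elem(e, ep) for ep in configs] for e in configs])

# Perron-Frobenius: T has positive entries, so the leading right eigenvector
# can be chosen strictly positive; since T_{-eta,-eta'} = T_{eta,eta'}, it is
# invariant under the global spin flip.
w, V = np.linalg.eig(T)
k = np.argmax(w.real)
v = V[:, k].real
v = v / v[np.argmax(np.abs(v))]   # normalise so all entries are positive

flip = np.array([idx[tuple(-x for x in c)] for c in configs])
assert np.allclose(v, v[flip])    # spin-flip symmetry of the eigenvector

# An odd product of spins summed against v gives zero, as in (3.14)
s1 = sum(c[0] * v[i] for i, c in enumerate(configs))
assert abs(s1) < 1e-8
```

Pairing each \(\eta \) with \(-\eta \) makes the cancellation explicit: the eigenvector coefficients coincide while the odd product of spins changes sign.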

Next, we seek to calculate the determinants of \(1-K^{(1)}W\) and \(1-K^{(2)}W\). For this, we first make the matrices translation-invariant so we can use the Fourier transform. Let us define \(\widetilde{K}^{(i)}\), \(i=1,2\), to be as \(K^{(i)}\), \(i=1,2\), but omitting the respective integrated angle of the handles:

$$\begin{aligned} \widetilde{K}^{(i)}_{e,e'} = 1_{e \, \triangleright \, e'} \, \textrm{e}^{\frac{\textrm{i}}{2} \measuredangle _i(e,e')}\,, \qquad i=1,2. \end{aligned}$$

(3.15)

Actually, \(\widetilde{K}^{(1)}=\widetilde{K}^{(2)}\), and we shall write \(\widetilde{K}\) for either of them. Then, we define modified diagonal matrices \(\widetilde{W}^{(1)}\) and \(\widetilde{W}^{(2)}\), whose matrix elements now depend on the direction of e:

$$\begin{aligned} \widetilde{W}^{(1)}_{e,e} = {\left\{ \begin{array}{ll} W_{e,e} \, \textrm{e}^{\textrm{i}\pi / L} & \text{if } e = \rightarrow , \\ W_{e,e} \, \textrm{e}^{-\textrm{i}\pi / L} & \text{if } e = \leftarrow , \\ W_{e,e} \, \textrm{e}^{\textrm{i}\pi / M} & \text{if } e = \uparrow , \\ W_{e,e} \, \textrm{e}^{-\textrm{i}\pi / M} & \text{if } e = \downarrow , \\ W_{e,e} \, \textrm{e}^{\textrm{i}\pi (\frac{1}{L}+\frac{1}{M})} & \text{if } e = \nearrow , \\ W_{e,e} \, \textrm{e}^{-\textrm{i}\pi (\frac{1}{L}+\frac{1}{M})} & \text{if } e = \swarrow , \end{array}\right. } \qquad \widetilde{W}^{(2)}_{e,e} = {\left\{ \begin{array}{ll} W_{e,e} & \text{if } e = \rightarrow \text{ or } \leftarrow , \\ W_{e,e} \, \textrm{e}^{\textrm{i}\pi / M} & \text{if } e = \uparrow \text{ or } \nearrow , \\ W_{e,e} \, \textrm{e}^{-\textrm{i}\pi / M} & \text{if } e = \downarrow \text{ or } \swarrow . \end{array}\right. } \end{aligned}$$

(3.16)

Lemma 3.3

We have

$$\begin{aligned} \det (1-K^{(1)}W) = \det (1-\widetilde{K} \widetilde{W}^{(1)}), \qquad \det (1-K^{(2)}W) = \det (1-\widetilde{K} \widetilde{W}^{(2)}). \end{aligned}$$

Proof

One can expand the determinants as products of directed loops as in [1, Theorem 3.2]. Let \(\gamma = (e_1,\dots ,e_k)\) be a directed loop with \(\ell \) handles. (The self-crossing handle between (L, M) and (1, 1) is counted twice.) We have, with \(e_{k+1} \equiv e_1\),

$$\begin{aligned} \prod _{j=1}^k K^{(1)}_{e_j,e_{j+1}} = (-1)^\ell \, \prod _{j=1}^k \widetilde{K}_{e_j,e_{j+1}}, \qquad \prod _{j=1}^k \widetilde{W}^{(1)}_{e_j,e_j} = (-1)^\ell \, \prod _{j=1}^k W_{e_j,e_j}. \end{aligned}$$

(3.17)

Then, each loop gives the same contribution in \(\det (1-K^{(1)}W)\) and in \(\det (1-\widetilde{K} \widetilde{W}^{(1)})\). The argument for \(\det (1-K^{(2)}W)\) is the same, counting only the vertical and oblique handles between sites (i, M) and (j, 1), \(1\le i, j \le L\). \(\square \)

Lemma 3.4

Let \(\mathbb{Z}_L^* = \frac{2\pi }{L} \mathbb{Z}_L\) and recall that \(\widetilde{\mathbb{Z}}_L^* = \mathbb{Z}_L^* + \frac{\pi }{L}\).

(a)

With \(k_3 = k_1 + k_2\) and the indices of \(J_{i+1}, J_{i+2}\) understood modulo 3, we have

$$\begin{aligned} \det (1-\widetilde{K} \widetilde{W}^{(1)})&= \prod _{k_1 \in \widetilde{\mathbb{Z}}_L^*} \prod _{k_2 \in \widetilde{\mathbb{Z}}_M^*} \bigg [ \prod _{i=1}^3 \big ( 1+\tanh ^2 J_i \big )+8\prod _{i=1}^3\tanh J_i \\&\quad -2\sum _{i=1}^{3}\tanh J_i \big (1-\tanh ^2 J_{i+1} \big ) \big (1-\tanh ^2 J_{i+2} \big ) \cos k_i \bigg ]. \end{aligned}$$

(b)

Again with \(k_3 = k_1 + k_2\), we have

$$\begin{aligned} \det (1-\widetilde{K} \widetilde{W}^{(2)})&= \prod _{k_1 \in \mathbb{Z}_L^*} \prod _{k_2 \in \widetilde{\mathbb{Z}}_M^*} \bigg [ \prod _{i=1}^3 \big (1+\tanh ^2 J_i \big )+8\prod _{i=1}^3\tanh J_i \\&\quad - 2\sum _{i=1}^{3} \tanh J_i \big (1-\tanh ^2 J_{i+1} \big )\big (1-\tanh ^2 J_{i+2} \big ) \cos k_i \bigg ]. \end{aligned}$$

Proof

For (a), we label the set of directed edges as \((x,\alpha )\), where \(x \in \mathcal{V}_{L,M}\) is the starting vertex and \(\alpha \in A\) the direction, with

$$\begin{aligned} A = \bigl \{ \rightarrow , \leftarrow , \uparrow , \downarrow , \nearrow , \swarrow \bigr \}. \end{aligned}$$

(3.18)

The Fourier coefficients are \((k,\alpha )\) with \(k \in \mathbb{Z}_{L,M}^* = \mathbb{Z}_L^* \times \mathbb{Z}_M^*\). The Fourier transform is represented by the unitary matrix U:

$$\begin{aligned} U_{(k,\alpha ),(x,\beta )} = \frac{1}{\sqrt{LM}} \, \textrm{e}^{-\textrm{i} k \cdot x}\, \delta _{\alpha ,\beta }, \end{aligned}$$

(3.19)

for \(x \in \mathcal{V}_{L,M}\), \(k \in \mathbb{Z}_{L,M}^*\), and \(\alpha , \beta \in A\). Since \(\widetilde{W}^{(i)}_{e,e}\) depends only on the direction \(\alpha \in A\), we have

$$\begin{aligned} (U \widetilde{W}^{(i)} U^\dagger )_{(k,\alpha ),(k',\beta )} = \widetilde{W}^{(i)}_\alpha \, \delta _{k,k'} \, \delta _{\alpha ,\beta }. \end{aligned}$$

(3.20)

Further, straightforward Fourier calculations give

$$\begin{aligned} (U \widetilde{K} U^\dagger )_{(k,\alpha ),(k',\beta )} = \delta _{k,k'} \sum _{x \in \mathcal{V}_{L,M}} \textrm{e}^{\textrm{i} k \cdot x}\, \widetilde{K}_{(x,\alpha ),(0,\beta )} \equiv \delta _{k,k'} \, \widehat{K}_{\alpha ,\beta }(k), \end{aligned}$$

(3.21)

with the matrix \(\widehat{K}(k)\) given by

$$\begin{aligned} \widehat{K}(k)&= \left( \begin{matrix} \textrm{e}^{\textrm{i} k_1} & 0 & 0 & 0 & 0 & 0 \\ 0 & \textrm{e}^{-\textrm{i} k_1} & 0 & 0 & 0 & 0 \\ 0 & 0 & \textrm{e}^{\textrm{i} k_2} & 0 & 0 & 0 \\ 0 & 0 & 0 & \textrm{e}^{-\textrm{i} k_2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \textrm{e}^{\textrm{i} (k_1+k_2)} & 0 \\ 0 & 0 & 0 & 0 & 0 & \textrm{e}^{-\textrm{i} (k_1+k_2)} \end{matrix} \right) \\&\quad \times \left( \begin{matrix} 1 & 0 & \textrm{e}^{\frac{\textrm{i}\pi }{4}} & \textrm{e}^{-\frac{\textrm{i}\pi }{4}} & \textrm{e}^{\frac{\textrm{i}\pi }{8}} & \textrm{e}^{-\frac{3\textrm{i}\pi }{8}} \\ 0 & 1 & \textrm{e}^{-\frac{\textrm{i}\pi }{4}} & \textrm{e}^{\frac{\textrm{i}\pi }{4}} & \textrm{e}^{-\frac{3\textrm{i}\pi }{8}} & \textrm{e}^{\frac{\textrm{i}\pi }{8}} \\ \textrm{e}^{-\frac{\textrm{i}\pi }{4}} & \textrm{e}^{\frac{\textrm{i}\pi }{4}} & 1 & 0 & \textrm{e}^{-\frac{\textrm{i}\pi }{8}} & \textrm{e}^{\frac{3\textrm{i}\pi }{8}} \\ \textrm{e}^{\frac{\textrm{i}\pi }{4}} & \textrm{e}^{-\frac{\textrm{i}\pi }{4}} & 0 & 1 & \textrm{e}^{\frac{3\textrm{i}\pi }{8}} & \textrm{e}^{-\frac{\textrm{i}\pi }{8}} \\ \textrm{e}^{-\frac{\textrm{i}\pi }{8}} & \textrm{e}^{\frac{3\textrm{i}\pi }{8}} & \textrm{e}^{\frac{\textrm{i}\pi }{8}} & \textrm{e}^{-\frac{3\textrm{i}\pi }{8}} & 1 & 0 \\ \textrm{e}^{\frac{3\textrm{i}\pi }{8}} & \textrm{e}^{-\frac{\textrm{i}\pi }{8}} & \textrm{e}^{-\frac{3\textrm{i}\pi }{8}} & \textrm{e}^{\frac{\textrm{i}\pi }{8}} & 0 & 1 \end{matrix} \right) . \end{aligned}$$

(3.22)

Let us define

$$\begin{aligned} \widehat{W}^{(1)}(k):= \left( \begin{matrix} t_1 \textrm{e}^{\textrm{i} k_1} & 0 & 0 & 0 & 0 & 0 \\ 0 & t_1 \textrm{e}^{-\textrm{i} k_1} & 0 & 0 & 0 & 0 \\ 0 & 0 & t_2 \textrm{e}^{\textrm{i} k_2} & 0 & 0 & 0 \\ 0 & 0 & 0 & t_2 \textrm{e}^{-\textrm{i} k_2} & 0 & 0 \\ 0 & 0 & 0 & 0 & t_3 \textrm{e}^{\textrm{i} (k_1+k_2)} & 0 \\ 0 & 0 & 0 & 0 & 0 & t_3 \textrm{e}^{-\textrm{i} (k_1+k_2)} \end{matrix} \right) \end{aligned}$$

(3.23)

where \(t_i=\tanh J_i\), \(i=1,2,3\). Then,

$$\begin{aligned} \det (1-\widetilde{K} \widetilde{W}^{(1)})&= \det (1-\widetilde{W}^{(1)} \widetilde{K}) = \prod _{k \in \mathbb{Z}_{L,M}^*} \det \Bigl [ 1-\widehat{W}^{(1)}\bigl (k+(\tfrac{\pi }{L},\tfrac{\pi }{M})\bigr )\widehat{K}(0) \Bigr ] \\&= \prod _{k_1 \in \widetilde{\mathbb{Z}}_L^*} \prod _{k_2 \in \widetilde{\mathbb{Z}}_M^*} \det \Bigl [ 1 - \widehat{W}^{(1)}\bigl ((k_1,k_2) \bigr )\widehat{K}(0) \Bigr ]. \end{aligned}$$

(3.24)

The first identity follows from a loop expansion, see [1, Theorem 3.2]. A calculation of the determinant, grouping the terms according to \(k_1+k_2, k_1, k_2\), yields

$$\begin{aligned} \det \bigl [ 1 - \widehat{W}^{(1)} \bigl ((k_1,k_2) \bigr )\widehat{K}(0) \bigr ]&= \prod _{i=1}^3 \big ( 1 + \tanh ^2 J_i \big )+8\prod _{i=1}^3 \tanh J_i \\&\quad - 2\sum _{i=1}^{3} \tanh J_i \big (1-\tanh ^2 J_{i+1} \big ) \big ( 1 - \tanh ^2 J_{i+2} \big ) \cos k_i \end{aligned}$$

(3.25)

where \(k_3=k_1+k_2\). This gives (a).

The proof of (b) is similar. \(\square \)
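The computation leading to (3.25) can be checked numerically: one builds the \(6\times 6\) matrices \(\widehat{K}(0)\) and \(\widehat{W}^{(1)}(k)\) and compares the determinant with the closed-form expression. The following Python sketch is our own illustration; the direction ordering \((\rightarrow ,\leftarrow ,\uparrow ,\downarrow ,\nearrow ,\swarrow )\) follows (3.18), and the random couplings include negative values:

```python
import numpy as np

# Directions in the order (right, left, up, down, NE, SW) with their angles
angles = np.array([0, np.pi, np.pi/2, -np.pi/2, np.pi/4, -3*np.pi/4])
reverse = [1, 0, 3, 2, 5, 4]   # index of the opposite direction

def Khat0():
    """Angle part of the Fourier-transformed Kac-Ward matrix, K-hat(0)."""
    K = np.zeros((6, 6), complex)
    for a in range(6):
        for b in range(6):
            if b == reverse[a]:
                continue                  # no backtracking
            turn = (angles[b] - angles[a] + np.pi) % (2*np.pi) - np.pi
            K[a, b] = np.exp(0.5j * turn)
    return K

def What1(k1, k2, t):
    """W-hat^{(1)}(k): weights with momentum phases, one per direction, as in (3.23)."""
    t1, t2, t3 = t
    return np.diag([t1*np.exp(1j*k1), t1*np.exp(-1j*k1),
                    t2*np.exp(1j*k2), t2*np.exp(-1j*k2),
                    t3*np.exp(1j*(k1+k2)), t3*np.exp(-1j*(k1+k2))])

def rhs(k1, k2, t):
    """Closed-form expression (3.25), with k3 = k1 + k2 and indices mod 3."""
    k = [k1, k2, k1 + k2]
    out = np.prod([1 + ti**2 for ti in t]) + 8*np.prod(t)
    for i in range(3):
        out -= 2*t[i]*(1 - t[(i+1) % 3]**2)*(1 - t[(i+2) % 3]**2)*np.cos(k[i])
    return out

rng = np.random.default_rng(0)
for _ in range(20):
    t = np.tanh(rng.uniform(-1, 1, size=3))   # tanh J_i, arbitrary signs
    k1, k2 = rng.uniform(-np.pi, np.pi, size=2)
    lhs = np.linalg.det(np.eye(6) - What1(k1, k2, t) @ Khat0()).real
    assert abs(lhs - rhs(k1, k2, t)) < 1e-10
```

Setting \(t_3 = 0\) in this check recovers the familiar square-lattice expression \((1+t_1^2)(1+t_2^2) - 2t_1(1-t_2^2)\cos k_1 - 2t_2(1-t_1^2)\cos k_2\).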

Corollary 3.5 (a)

The determinants are nonnegative, \(\det (1-\widetilde{K} \widetilde{W}^{(1)}) \ge 0\) and \(\det (1-\widetilde{K} \widetilde{W}^{(2)}) \ge 0\).

(b)

Taking the logarithms and dividing by L, we have, as \(L\rightarrow \infty \),

$$\begin{aligned} \lim _{L \rightarrow \infty } \frac{1}{L} \log \det (1 - \widetilde{K} \widetilde{W}^{(1)})&= \lim _{L \rightarrow \infty } \frac{1}{L} \log \det (1-\widetilde{K} \widetilde{W}^{(2)}) \\&= \frac{1}{2\pi } \int _{-\pi }^{\pi } \textrm{d}k_1 \sum _{k_2 \in \widetilde{\mathbb{Z}}_M^*} \log \biggl [ \prod _{i=1}^3\big (1+\tanh ^2 J_i \big ) + 8\prod _{i=1}^3\tanh J_i \\&\qquad -2\sum _{i=1}^{3}\tanh J_i \big (1-\tanh ^2 J_{i+1} \big )\big (1-\tanh ^2 J_{i+2} \big )\cos k_i \biggr ]. \end{aligned}$$

Proof

(a) By Eq. (3.3) and Lemma 3.3, both square roots of the above determinants are real. (b) This is a consequence of Lemma 3.4; taking the logarithm, we obtain Riemann sums. \(\square \)

Proof of Theorem 2.1

(a) From the high-temperature expansion (3.4), we observe that the finite-volume free energy with periodic boundary conditions satisfies

$$\begin{aligned} -f_{L,M}(J_1,J_2,J_3)= \log 2 + \log \left[ \prod _{i=1}^3\cosh J_i \right] +\frac{1}{LM}\log \Big [ \mathcal{Z}_{L,M} \Big ]. \end{aligned}$$

(3.26)

Using Lemmas 3.1, 3.2 and 3.3, we see that the free energy on the infinite cylinder is

$$\begin{aligned} -f_M(J_1,J_2,J_3)&=\log 2 + \log \left[ \prod _{i=1}^3\cosh J_i \right] \\&\quad +\lim _{L \rightarrow \infty }\frac{1}{LM}\log \Biggl [\sqrt{\det (1-\widetilde{K} \widetilde{W}^{(1)})}+\sqrt{\det (1-\widetilde{K} \widetilde{W}^{(2)})}\Biggr ]. \end{aligned}$$

(3.27)

By Corollary 3.5(a), we have

$$\begin{aligned} \log \sqrt{\det (1-\widetilde{K} \widetilde{W}^{(1)})}&\le \log \Biggl [ \sqrt{\det (1-\widetilde{K} \widetilde{W}^{(1)})} + \sqrt{\det (1-\widetilde{K} \widetilde{W}^{(2)})} \Biggr ] \\&\le \max _{i=1,2} \log \sqrt{\det (1-\widetilde{K} \widetilde{W}^{(i)})} + \log 2. \end{aligned}$$

(3.28)

Dividing by LM, all three terms above converge to the same limit as \(L\rightarrow \infty \), by Corollary 3.5(b). We get

$$\begin{aligned} -f_M(J_1,J_2,J_3)&=\log 2 + \log \left[ \prod _{i=1}^3\cosh J_i \right] \\&\quad +\frac{1}{4\pi M}\int _{-\pi }^{\pi } \textrm{d}k_1\sum _{k_2 \in \widetilde{\mathbb{Z}}_M^*}\log \bigg [\prod _{i=1}^3\big (1 + \tanh ^2 J_i \big ) +8\prod _{i=1}^3 \tanh J_i \\&\qquad - 2 \sum _{i=1}^{3} \tanh J_i \big (1-\tanh ^2 J_{i+1} \big )\big (1-\tanh ^2 J_{i+2} \big ) \cos k_i \bigg ]. \end{aligned}$$

(3.29)

In order to get the expression of Theorem 2.1, one should use the hyperbolic identities \(1+\tanh ^2 x = \frac{\cosh 2x}{\cosh ^2 x}\) and \(\tanh x = \frac{\sinh 2x}{2\cosh ^2 x}\) and extract a factor \(\bigl ( \prod _i \cosh J_i \bigr )^{-2}\). \(\square \)
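As an independent check of the cylinder free energy, one can compare (3.29) with the transfer-matrix expression \(-f_M = \frac{1}{M} \log \lambda _{\textrm{max}}\), which follows from \(Z_{L,M} = \textrm{Tr}\, T^L\) and (3.26). The following Python sketch is our own illustration, with arbitrary couplings (one negative):

```python
import numpy as np
from itertools import product

M = 3
J1, J2, J3 = 0.4, -0.3, 0.2
t = np.tanh([J1, J2, J3])

# Transfer-matrix free energy of the infinite cylinder: -f_M = (1/M) log lambda_max
configs = list(product([-1, 1], repeat=M))
T = np.array([[np.exp(sum(J1*e[i]*ep[i] + J2*e[i]*e[(i+1) % M] + J3*e[i]*ep[(i+1) % M]
                          for i in range(M)))
               for ep in configs] for e in configs])
f_transfer = np.log(max(np.linalg.eigvals(T).real)) / M

# Kac-Ward formula (3.29): integral over k1, sum over the shifted momenta k2
def D(k1, k2):
    k = [k1, k2, k1 + k2]
    return (np.prod(1 + t**2) + 8*np.prod(t)
            - sum(2*t[i]*(1 - t[(i+1) % 3]**2)*(1 - t[(i+2) % 3]**2)*np.cos(k[i])
                  for i in range(3)))

k1s = np.linspace(-np.pi, np.pi, 4096, endpoint=False)  # periodic: trapezoid rule
k2s = (2*np.arange(M) + 1) * np.pi / M                  # shifted lattice
integral = sum(np.log(D(k1s, k2)).mean() for k2 in k2s)
f_formula = (np.log(2) + np.log(np.prod(np.cosh([J1, J2, J3])))
             + integral / (2*M))

assert abs(f_transfer - f_formula) < 1e-8
```

The trapezoid rule converges spectrally fast for smooth periodic integrands, so the two evaluations of \(-f_M\) agree to near machine precision.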
