The key step in our argument is to show that the full asymptotic expansion of the spacetime metric g at the horizon is determined geometrically by the data \((\mathcal{H}, \sigma, V)\), assuming that the Ricci curvature vanishes to infinite order at the horizon:
Assumption 2.1 Assume that M is a smooth spacetime of dimension \(n+1 \ge 2\), with a smooth Killing horizon \(\mathcal{H} \subset M\), with horizon Killing vector field W, such that
$$\begin{aligned} \nabla^k \mathrm{Ric}|_{\mathcal{H}} = 0 \end{aligned}$$
(7)
for all \(k \in \mathbb{N}_0\).
We assume throughout this section that Assumption 2.1 is satisfied.
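As an illustration (this example is ours and not part of the argument above): every Ricci flat spacetime satisfies (7) trivially, so the Schwarzschild horizon is a model case for Assumption 2.1. A minimal sketch, in the usual ingoing Eddington-Finkelstein coordinates:

```latex
% Sketch: a model case for Assumption 2.1 (our example).
% Schwarzschild in ingoing Eddington-Finkelstein coordinates (v, r):
g = -\Big(1 - \frac{2M}{r}\Big)\, \mathrm{d}v^2 + 2\, \mathrm{d}v\, \mathrm{d}r + r^2 g_{S^2},
\qquad M > 0,
% with smooth Killing horizon and horizon Killing vector field
\mathcal{H} = \{r = 2M\}, \qquad W = \partial_v .
% The metric is Ricci flat, so in particular
\nabla^k \mathrm{Ric}|_{\mathcal{H}} = 0 \quad \text{for all } k \in \mathbb{N}_0,
% i.e., Assumption 2.1 holds with n + 1 = 4.
```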
2.1 A Geometric Gauge
Let \((\mathcal{H}, \sigma, V)\) denote the induced data (in the sense of Definition 1.16). Let
$$\begin{aligned} V^\perp \subset T\mathcal{H} \end{aligned}$$
denote the smooth vector bundle of tangent vectors which are orthogonal to V with respect to \(\sigma\). For notational convenience, we do not write out the embedding \(\iota\) in this section and instead simply write
$$\begin{aligned} \mathcal{H} \subset M, \quad V = W|_{\mathcal{H}}, \end{aligned}$$
etc.
Definition 2.2 The unique lightlike smooth vector field L along \(\mathcal{H}\) which satisfies
$$\begin{aligned} g(L, V) = 1, \quad g(L, X) = 0, \end{aligned}$$
for all \(X \in V^\perp\) is called the canonical transversal vector field of \(\mathcal{H}\).
Note that L is indeed not tangent to \(\mathcal{H}\): every vector tangent to \(\mathcal{H}\) is orthogonal to the null normal W, whereas
$$\begin{aligned} 0 \ne g(L, V) = g(L, W|_{\mathcal{H}}). \end{aligned}$$
We think of L as a natural replacement for the unit normal along a spacelike or timelike hypersurface.
Proposition 2.3 There is a unique nowhere vanishing lightlike smooth (real analytic, if the metric is real analytic) vector field \(\partial_t\) on an open neighborhood \(\mathcal{U} \supset \mathcal{H}\) satisfying
\(\nabla_{\partial_t} \partial_t = 0\),
\(g(\partial_t|_{\mathcal{H}}, V) = 1\),
\(g(\partial_t|_{\mathcal{H}}, X) = 0\) for all \(X \in V^\perp\),
all integral curves of \(\partial_t\) intersect \(\mathcal{H}\) precisely once.
Moreover, it follows that
$$\begin{aligned} {[}W, \partial_t] = 0. \end{aligned}$$
(8)
Proof We construct \(\partial_t\) as the unique solution to
$$\begin{aligned} \nabla_{\partial_t} \partial_t&= 0, \\ \partial_t|_{\mathcal{H}}&= L, \end{aligned}$$
on a neighborhood \(\mathcal{U} \supset \mathcal{H}\). It follows that
$$\begin{aligned} \partial_t g(\partial_t, \partial_t) = 0, \end{aligned}$$
which together with
$$\begin{aligned} g(\partial_t, \partial_t)|_{\mathcal{H}} = g(L, L) = 0 \end{aligned}$$
implies that \(\partial_t\) is lightlike. It remains to check that \([W, \partial_t] = 0\). For this, we first show that \([W, \partial_t]|_{\mathcal{H}} = 0\). Note first that
$$\begin{aligned} g([W, \partial_t]|_{\mathcal{H}}, L)&= g(\nabla_W \partial_t|_{\mathcal{H}}, L) - g(\nabla_{\partial_t} W|_{\mathcal{H}}, L) \\&= \frac{1}{2} W g(L, L) - \frac{1}{2} \mathcal{L}_W g(L, L) \\&= 0. \end{aligned}$$
Note now that \(g(\partial_t|_{\mathcal{H}}, X) = g(L, X) = \frac{\omega(X)}{\kappa}\), for all \(X \in T\mathcal{H}\), since \(g(L, X) = 0\) for all \(X \perp_\sigma V\), i.e., for all \(X \in \ker(\omega)\), and \(g(L, V) = 1 = \frac{\omega(V)}{\kappa}\). Using (4), we compute
$$\begin{aligned} g([W, \partial_t]|_{\mathcal{H}}, X)&= Wg(\partial_t, X)|_{\mathcal{H}} - \mathcal{L}_W g(\partial_t, X)|_{\mathcal{H}} - g(\partial_t, [W, X])|_{\mathcal{H}} \\&= Wg(\partial_t, X)|_{\mathcal{H}} - g(\partial_t, [W, X])|_{\mathcal{H}} \\&= \frac{W\omega(X)}{\kappa} - \frac{\omega([W, X])}{\kappa} \\&= \frac{\mathcal{L}_W\omega(X)}{\kappa} \\&= 0. \end{aligned}$$
It follows that \([W, \partial_t]|_{\mathcal{H}} = 0\). Now, since \(\mathcal{L}_W g = 0\), we note that
$$\begin{aligned} 0 = \mathcal{L}_W\left( \nabla_{\partial_t} \partial_t\right) = \nabla_{[W, \partial_t]} \partial_t + \nabla_{\partial_t} [W, \partial_t], \end{aligned}$$
which is a linear first order ODE for \([W, \partial_t]\) along the integral curves of \(\partial_t\), with vanishing initial data on \(\mathcal{H}\). It thus follows that \([W, \partial_t] = 0\) as claimed. Shrinking \(\mathcal{U}\) if necessary, we can ensure that each integral curve of \(\partial_t\) intersects \(\mathcal{H}\) precisely once. \(\square\)
Proposition 2.4 (The null time function). Let Assumption 2.1 be satisfied. There is a unique smooth (real analytic, if the metric is real analytic) function
$$\begin{aligned} t: \mathcal{U} \rightarrow \mathbb{R}, \end{aligned}$$
such that
$$\begin{aligned} \textrm{d}t(\partial_t)&= 1, \\ t^{-1}(0)&= \mathcal{U} \cap \mathcal{H}. \end{aligned}$$
In particular, \(\textrm{d}t \ne 0\) everywhere on \(\mathcal{U}\).
Proof Construct the function t as the parameter time along the integral curves of \(\partial_t\), starting at \(\mathcal{H}\). Then t is smooth and satisfies the assertion. \(\square\)
Definition 2.5 We call \(t: \mathcal{U} \rightarrow \mathbb{R}\) the null time function.
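As a hedged worked example (ours, not from the text): in the Schwarzschild model case, the canonical transversal vector field, \(\partial_t\) and the null time function can be computed explicitly.

```latex
% Sketch: \partial_t and t for Schwarzschild in ingoing
% Eddington-Finkelstein coordinates (our computation for this model case):
% g = -(1 - 2M/r)\, \mathrm{d}v^2 + 2\, \mathrm{d}v\, \mathrm{d}r + r^2 g_{S^2},
% \mathcal{H} = \{r = 2M\}, \quad V = W|_{\mathcal{H}} = \partial_v.
% The field \partial_r is lightlike (g_{rr} = 0) and satisfies
g(\partial_r, \partial_v) = 1, \qquad g(\partial_r, X) = 0
\quad \text{for all } X \text{ tangent to } S^2,
% so L = \partial_r along \mathcal{H}. It is also geodesic, since
\Gamma^\mu_{rr} = \tfrac{1}{2} g^{\mu\nu}\big(2\,\partial_r g_{r\nu} - \partial_\nu g_{rr}\big) = 0
% (g_{rv} = 1 is constant and all other g_{r\nu} vanish). Hence
\partial_t = \partial_r, \qquad t = r - 2M,
% and the null time function is the radial coordinate centered at the horizon.
```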
2.2 The Expansion
Since the null time function t is constructed geometrically, it is very natural to study the asymptotic expansion of the metric g in terms of t, i.e., we formally write
$$\begin{aligned} g \sim \sum_{m=0}^\infty \mathcal{L}_t^m g|_{\mathcal{H}} \frac{t^m}{m!}, \end{aligned}$$
and iteratively compute \(\mathcal{L}_t^m g|_{\mathcal{H}} := \underbrace{\mathcal{L}_{\partial_t} \cdots \mathcal{L}_{\partial_t}}_{m \text{ times}} g|_{\mathcal{H}}\) in terms of the data \((\sigma, V)\). We will also use the notation
$$\begin{aligned} \nabla_t := \nabla_{\partial_t}. \end{aligned}$$
Remark 2.6 Proposition 2.3 implies that
$$\begin{aligned} g(\partial_t, V)|_{\mathcal{H}}&= 1, \\ g(X, Y)|_{\mathcal{H}}&= \sigma(X, Y), \\ g(\partial_t, \partial_t)|_{\mathcal{H}}&= g(\partial_t, X)|_{\mathcal{H}} = g(V, V)|_{\mathcal{H}} = g(X, V)|_{\mathcal{H}} = 0, \end{aligned}$$
for all \(X, Y \in V^\perp\). Consequently, since \(\partial_t|_{\mathcal{H}}\) is determined by the data \((\sigma, V)\) and the embedding \(\iota\) (which is suppressed here), we conclude that \(g|_{\mathcal{H}}\) is determined by \((\sigma, V)\).
We note that many components of \(\mathcal{L}_t^m g\) automatically vanish:
Lemma 2.7 By construction of \(\partial_t\), we have
$$\begin{aligned} \mathcal{L}_t^m g(\partial_t, \cdot)&= 0, \end{aligned}$$
for all \(m \in \mathbb{N}\).
Proof For \(m = 1\), we note that for any X with \([\partial_t, X] = 0\), we have
$$\begin{aligned} \mathcal{L}_t g(\partial_t, X) = \partial_t g(\partial_t, X) = g(\nabla_t \partial_t, X) + g(\partial_t, \nabla_t X) = g(\partial_t, \nabla_X \partial_t) = \frac{1}{2} X g(\partial_t, \partial_t) = 0. \end{aligned}$$
The general statement then follows by induction, by noting that
$$\begin{aligned} \mathcal{L}_t^m g(\partial_t, X) = \partial_t \mathcal{L}_t^{m-1} g(\partial_t, X). \end{aligned}$$
\(\square\)
For the remaining components of \(\mathcal{L}_t^m g|_{\mathcal{H}}\), we will prove the following theorem:
Theorem 2.8 Let Assumption 2.1 be satisfied. Then there are unique (nonlinear) differential operators \(Q_m\) on \(\mathcal{H}\) for \(m \in \mathbb{N}\), such that
$$\begin{aligned} \mathcal{L}_t^m g(X, Y)|_{\mathcal{H}} = Q_m(\sigma, V)(X, Y), \end{aligned}$$
for all \(m \in \mathbb{N}\) and all \(X, Y \in T\mathcal{H}\). Moreover, we have
$$\begin{aligned} Q_m(\phi^*\sigma, \phi^*V) = \phi^* Q_m(\sigma, V), \end{aligned}$$
(9)
for all diffeomorphisms \(\phi: \mathcal{H} \rightarrow \mathcal{H}\), and the \(Q_m(\sigma, V)\)'s are real analytic if \(\sigma\) and V are real analytic.
In Subsect. 2.5, we show that Theorem 2.8 implies Theorem 1.17. The condition (9) is very natural: it says that the differential operators \(Q_m\) are diffeomorphism invariant. Theorem 2.8 would obviously not be true without assuming (7), i.e., that the Ricci curvature vanishes to infinite order at the horizon. The rest of this section is devoted to proving Theorem 2.8, i.e., to studying the remaining components of \(\mathcal{L}_t^m g|_{\mathcal{H}}\).
2.3 The First Derivative
We start by computing \(Q_1(\sigma, V)\) in Theorem 2.8. It turns out that its components are given by simple explicit formulas. Let \(\nabla^\sigma\) denote the Levi-Civita connection with respect to \(\sigma\) and let \(\mathrm{R}^\sigma\) and \(\mathrm{Ric}^\sigma\) denote the curvature tensor and the Ricci curvature of \(\sigma\), respectively.
Proposition 2.9 In terms of the null time function t, we have
$$\begin{aligned} \mathcal{L}_t g(V, V)|_{\mathcal{H}}&= -2\kappa, \\ \mathcal{L}_t g(V, X)|_{\mathcal{H}}&= 0, \\ \mathcal{L}_t g(X, Y)|_{\mathcal{H}}&= \frac{1}{\kappa}\left( \mathrm{Ric}^\sigma(X, Y) + \frac{1}{\kappa^2} \sigma(\nabla^\sigma_X V, \nabla^\sigma_Y V)\right) \end{aligned}$$
for all \(X, Y \in V^\perp\), where \(\kappa\) is the surface gravity with respect to W, defined in (2). Moreover, we have
$$\begin{aligned} \nabla_t W|_{\mathcal{H}}&= -\kappa \partial_t|_{\mathcal{H}}. \end{aligned}$$
(10)
Remark 2.10 We conclude that
$$\begin{aligned} Q_1(\sigma, V)(V, V)&= -2\kappa, \\ Q_1(\sigma, V)(V, X)&= 0, \\ Q_1(\sigma, V)(X, Y)&= \frac{1}{\kappa}\left( \mathrm{Ric}^\sigma(X, Y) + \frac{1}{\kappa^2} \sigma(\nabla^\sigma_X V, \nabla^\sigma_Y V)\right), \end{aligned}$$
which is a diffeomorphism invariant differential operator in \((\sigma, V)\). Since we use the convention that \(\kappa > 0\), equation (5) implies that
$$\begin{aligned} \kappa = \omega(V) = \sqrt{\sigma(V, V)}. \end{aligned}$$
This proves the assertion in Theorem 2.8 for \(m = 1\).
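As a consistency check (our example; we assume here that the induced data of the Schwarzschild horizon is the product metric written below, which is consistent with \(\sigma(V, V) = \kappa^2\)), one can verify \(Q_1\) directly in the model case with \(\kappa = \frac{1}{4M}\):

```latex
% Sketch: checking Q_1 for Schwarzschild (our computation).
% Assumed induced data on \mathcal{H} \cong \mathbb{R} \times S^2:
\sigma = \kappa^2\, \mathrm{d}v^2 + (2M)^2 g_{S^2}, \qquad V = \partial_v,
\qquad \kappa = \tfrac{1}{4M}.
% Since V is parallel for this product metric, \nabla^\sigma V = 0, the
% \sigma(\nabla^\sigma V, \nabla^\sigma V) term drops out, and for X, Y
% tangent to the sphere factor:
Q_1(\sigma, V)(X, Y) = \frac{1}{\kappa}\, \mathrm{Ric}^\sigma(X, Y)
= 4M \cdot \frac{1}{(2M)^2}\, \sigma(X, Y) = 4M\, g_{S^2}(X, Y).
% Direct computation with the null time function t = r - 2M agrees:
\mathcal{L}_t g(X, Y)|_{\mathcal{H}} = \partial_r\big(r^2\big)\big|_{r=2M}\, g_{S^2}(X, Y)
= 4M\, g_{S^2}(X, Y),
\qquad
\mathcal{L}_t g(V, V)|_{\mathcal{H}} = \partial_r\Big({-}1 + \frac{2M}{r}\Big)\Big|_{r=2M}
= -\frac{1}{2M} = -2\kappa .
```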
In order to prove Proposition 2.9, we begin with the following lemma:
Lemma 2.11 We have
$$\begin{aligned} \sum_{i,j=2}^n g^{ij}\, \mathrm{R}(X, e_i, e_j, Y)|_{\mathcal{H}} = \mathrm{Ric}^\sigma(X, Y) + \frac{1}{\kappa^2} \sigma(\nabla^\sigma_X V, \nabla^\sigma_Y V), \end{aligned}$$
for any \(X, Y \in V^\perp\), where \(\{e_2, \ldots, e_n\}\) is a basis for \(V^\perp \subset T\mathcal{H}\) and \(g^{ij}\) denotes the inverse of \(g_{ij} := g(e_i, e_j)\).
Proof We need to compare \(\nabla\) and \(\nabla^\sigma\). Since
$$\begin{aligned} \sigma(X, Y) = g(X, Y)|_{\mathcal{H}}, \end{aligned}$$
for all \(X \in V^\perp\) and \(Y \in T\mathcal{H}\), the Koszul formula implies for all \(X, Y, Z \in V^\perp\) that
$$\begin{aligned} 2\sigma(\nabla^\sigma_X Y, Z)&= X\sigma(Y, Z) + Y\sigma(X, Z) - Z\sigma(X, Y) \\&\qquad + \sigma([X, Y], Z) - \sigma([X, Z], Y) - \sigma([Y, Z], X) \\&= 2g(\nabla_X Y, Z)|_{\mathcal{H}}. \end{aligned}$$
This implies for all \(X, Y \in V^\perp\) that
$$\begin{aligned} \nabla^\sigma_X Y&= \sum_{i,j=2}^n \sigma(\nabla^\sigma_X Y, e_i)\sigma^{ij} e_j + \frac{1}{\kappa^2}\sigma(\nabla^\sigma_X Y, V) V \\&= \sum_{i,j=2}^n g(\nabla_X Y, e_i) g^{ij} e_j|_{\mathcal{H}} + \frac{1}{\kappa^2}\sigma(\nabla^\sigma_X Y, V) V \\&= \nabla_X Y|_{\mathcal{H}} - g(\nabla_X Y, \partial_t)|_{\mathcal{H}}\, V + \frac{1}{\kappa^2}\sigma(\nabla^\sigma_X Y, V) V, \end{aligned}$$
where we have used that \(g(\nabla_X Y, V)|_{\mathcal{H}} = -g(Y, \nabla_X V)|_{\mathcal{H}} = -\frac{1}{2}\mathcal{L}_W g(X, Y)|_{\mathcal{H}} = 0\). The Koszul formula further implies for \(X, Y \in V^\perp\) that
$$\begin{aligned} 2\sigma(\nabla^\sigma_V X, Y)&= V\sigma(X, Y) + X\sigma(V, Y) - Y\sigma(V, X) \\&\qquad + \sigma([V, X], Y) - \sigma([V, Y], X) - \sigma([X, Y], V) \\&= 2g(\nabla_V X, Y)|_{\mathcal{H}} - \sigma([X, Y], V). \end{aligned}$$
We may now compute for all \(X, Y \in V^\perp\) that
$$\begin{aligned} \mathrm{R}&(X, Y, Y, X)|_{\mathcal{H}} \\&= g(\nabla_X \nabla_Y Y - \nabla_Y \nabla_X Y - \nabla_{[X,Y]} Y, X)|_{\mathcal{H}} \\&= g\big(\nabla_X\big( \nabla^\sigma_Y Y + g(\nabla_Y Y, \partial_t) V - \tfrac{1}{\kappa^2}\sigma(\nabla^\sigma_Y Y, V) V\big), X\big)\big|_{\mathcal{H}} \\&\qquad - g\big(\nabla_Y\big( \nabla^\sigma_X Y + g(\nabla_X Y, \partial_t) V - \tfrac{1}{\kappa^2}\sigma(\nabla^\sigma_X Y, V) V\big), X\big)\big|_{\mathcal{H}} \\&\qquad - \sum_{i,j=2}^n \sigma([X, Y], e_i)\sigma^{ij} g(\nabla_{e_j} Y, X)|_{\mathcal{H}} - \frac{1}{\kappa^2}\sigma([X, Y], V) g(\nabla_V Y, X)|_{\mathcal{H}} \\&= g(\nabla_X \nabla^\sigma_Y Y, X)|_{\mathcal{H}} - g(\nabla_Y \nabla^\sigma_X Y, X)|_{\mathcal{H}} \\&\qquad - \sum_{i,j=2}^n \sigma([X, Y], e_i)\sigma^{ij} g(\nabla^\sigma_{e_j} Y, X)|_{\mathcal{H}} \\&\qquad - \frac{1}{\kappa^2}\sigma([X, Y], V) g(\nabla_V Y, X)|_{\mathcal{H}} \\&= \sigma(\nabla^\sigma_X \nabla^\sigma_Y Y, X) - \sigma(\nabla^\sigma_Y \nabla^\sigma_X Y, X) - \sigma(\nabla^\sigma_{[X,Y]} Y, X) \\&\qquad + \frac{1}{\kappa^2}\sigma([X, Y], V) \sigma(\nabla^\sigma_V Y, X) - \frac{1}{\kappa^2}\sigma([X, Y], V) g(\nabla_V Y, X)|_{\mathcal{H}} \\&= \mathrm{R}^\sigma(X, Y, Y, X) + \frac{1}{2\kappa^2}\sigma([X, Y], V)^2 \\&= \mathrm{R}^\sigma(X, Y, Y, X) + \frac{1}{2\kappa^2}\left( \sigma(\nabla^\sigma_Y V, X) - \sigma(\nabla^\sigma_X V, Y)\right)^2 \\&= \mathrm{R}^\sigma(X, Y, Y, X) + \frac{2}{\kappa^2}\sigma(\nabla^\sigma_X V, Y)^2, \end{aligned}$$
where we have used that \(\mathcal{L}_V \sigma = 0\). It follows that
$$\begin{aligned} \sum_{i,j=2}^n g^{ij}\, \mathrm{R}(X, e_i, e_j, X)|_{\mathcal{H}}&= \sum_{i,j=2}^n \sigma^{ij}\, \mathrm{R}^\sigma(X, e_i, e_j, X) + \frac{2}{\kappa^2}\sigma(\nabla^\sigma_X V, \nabla^\sigma_X V) \\&= \mathrm{Ric}^\sigma(X, X) - \frac{1}{\kappa^2}\mathrm{R}^\sigma(X, V, V, X) + \frac{2}{\kappa^2}\sigma(\nabla^\sigma_X V, \nabla^\sigma_X V). \end{aligned}$$
In order to compute \(\mathrm{R}^\sigma(X, V, V, X)\), we first note that for all \(X \in T\mathcal{H}\), we have
$$\begin{aligned} \sigma(\nabla^\sigma_V V, X)&= \mathcal{L}_V \sigma(V, X) - \sigma(\nabla^\sigma_X V, V) \\&= -\frac{1}{2} X \sigma(V, V) \\&= 0, \end{aligned}$$
from which we conclude that
$$\begin{aligned} \nabla^\sigma_V V = 0. \end{aligned}$$
We may thus compute for all \(X \in V^\perp\) that
$$\begin{aligned} \mathrm{R}^\sigma(X, V, V, X)&= \sigma(\nabla^\sigma_X \nabla^\sigma_V V - \nabla^\sigma_V \nabla^\sigma_X V - \nabla^\sigma_{[X,V]} V, X) \\&= -V\sigma(\nabla^\sigma_X V, X) + \sigma(\nabla^\sigma_X V, \nabla^\sigma_V X) \\&\qquad - \mathcal{L}_V\sigma([X, V], X) + \sigma(\nabla^\sigma_X V, [X, V]) \\&= -\frac{1}{2} V \mathcal{L}_V \sigma(X, X) + \sigma(\nabla^\sigma_X V, \nabla^\sigma_X V) \\&= \sigma(\nabla^\sigma_X V, \nabla^\sigma_X V), \end{aligned}$$
which completes the proof. \(\square\)
Proof of Proposition 2.9 Using that \(g(\partial_t, W)|_{\mathcal{H}} = 1\) and \(\nabla_W W|_{\mathcal{H}} = \kappa W|_{\mathcal{H}}\), we get
$$\begin{aligned} \mathcal{L}_t g(W, W)|_{\mathcal{H}} = 2g(\nabla_W \partial_t, W)|_{\mathcal{H}} = -2g(\partial_t, \nabla_W W)|_{\mathcal{H}} = -2g(\partial_t, \kappa W)|_{\mathcal{H}} = -2\kappa. \end{aligned}$$
Let X be a smooth vector field such that \(X|_{\mathcal{H}} \in C^\infty(V^\perp)\) and \([X, \partial_t] = 0\). We compute
$$\begin{aligned} \mathcal{L}_t g(W, X)|_{\mathcal{H}}&= g(\nabla_W \partial_t, X)|_{\mathcal{H}} + g(W, \nabla_X \partial_t)|_{\mathcal{H}} \\&= -g(\partial_t, \nabla_W X)|_{\mathcal{H}} - g(\nabla_X W, \partial_t)|_{\mathcal{H}} \\&= -g([W, X], \partial_t)|_{\mathcal{H}} - 2g(\partial_t, \nabla_X W)|_{\mathcal{H}} \\&= -\frac{\omega([W, X]|_{\mathcal{H}})}{\kappa}\, g(V, \partial_t)|_{\mathcal{H}} - 2\omega(X|_{\mathcal{H}})\, g(\partial_t, W)|_{\mathcal{H}} \\&= -\frac{\omega([W, X]|_{\mathcal{H}})}{\kappa} \\&= -\frac{1}{\kappa}\left( W|_{\mathcal{H}}\omega(X) - \mathcal{L}_{W|_{\mathcal{H}}}\omega(X|_{\mathcal{H}})\right) \\&= 0, \end{aligned}$$
where in the last step we used (4). Let us now prove (10), using Lemma 2.7. For any \(X \in C^\infty(V^\perp)\), we compute
$$\begin{aligned} 0&= \mathcal{L}_W g(X, \partial_t)|_{\mathcal{H}} = g(\nabla_X W, \partial_t)|_{\mathcal{H}} + g(X, \nabla_t W)|_{\mathcal{H}} = g(X, \nabla_t W)|_{\mathcal{H}}, \\ 0&= \mathcal{L}_W g(\partial_t, \partial_t)|_{\mathcal{H}} = 2g(\nabla_t W, \partial_t)|_{\mathcal{H}}, \\ 0&= \mathcal{L}_W g(W, \partial_t)|_{\mathcal{H}} = g(\nabla_W W, \partial_t)|_{\mathcal{H}} + g(W, \nabla_t W)|_{\mathcal{H}} = \kappa + g(W, \nabla_t W)|_{\mathcal{H}}. \end{aligned}$$
By Remark 2.6, we conclude that \(\nabla_t W|_{\mathcal{H}} = -\kappa \partial_t|_{\mathcal{H}}\). We use this in the final computation, namely that of \(\mathcal{L}_t g(X, Y)|_{\mathcal{H}}\) for \(X, Y \in C^\infty(V^\perp)\). Since \([W, \partial_t] = 0\), we note that
$$\begin{aligned} \mathcal{L}_W(\mathcal{L}_t g) = \mathcal{L}_t(\mathcal{L}_W g) + \mathcal{L}_{[W, \partial_t]} g = 0. \end{aligned}$$
We compute
$$\begin{aligned} 0&= \mathcal{L}_W(\mathcal{L}_t g)(X, Y) \\&= W\left( \mathcal{L}_t g(X, Y)\right) - \mathcal{L}_t g([W, X], Y) - \mathcal{L}_t g(X, [W, Y]) \\&= W\left( g(\nabla_X \partial_t, Y) + g(X, \nabla_Y \partial_t)\right) - g(\nabla_{[W,X]} \partial_t, Y) \\&\qquad - g([W, X], \nabla_Y \partial_t) - g(\nabla_{[W,Y]} \partial_t, X) - g([W, Y], \nabla_X \partial_t) \\&= g(\nabla_W \nabla_X \partial_t - \nabla_{[W,X]} \partial_t, Y) + g(X, \nabla_W \nabla_Y \partial_t - \nabla_{[W,Y]} \partial_t) \\&\qquad + g(\nabla_X \partial_t, \nabla_W Y) + g(\nabla_W X, \nabla_Y \partial_t) \\&\qquad - g([W, X], \nabla_Y \partial_t) - g([W, Y], \nabla_X \partial_t) \\&= \mathrm{R}(W, X, \partial_t, Y) + g(\nabla_X \nabla_W \partial_t, Y) + \mathrm{R}(W, Y, \partial_t, X) + g(\nabla_Y \nabla_W \partial_t, X) \\&\qquad + g(\nabla_X \partial_t, \nabla_Y W) + g(\nabla_Y \partial_t, \nabla_X W). \end{aligned}$$
Evaluating this at \(\mathcal{H}\), with \(X, Y \in C^\infty(V^\perp)\), using (10), we get
$$\begin{aligned} 0&= \mathrm{R}(W, X, \partial_t, Y)|_{\mathcal{H}} - \kappa g(\nabla_X \partial_t, Y)|_{\mathcal{H}} \\&\qquad + \mathrm{R}(W, Y, \partial_t, X)|_{\mathcal{H}} - \kappa g(\nabla_Y \partial_t, X)|_{\mathcal{H}} \\&= \mathrm{R}(W, X, \partial_t, Y)|_{\mathcal{H}} + \mathrm{R}(W, Y, \partial_t, X)|_{\mathcal{H}} - \kappa \mathcal{L}_t g(X, Y)|_{\mathcal{H}} \\&= -\mathrm{Ric}(X, Y)|_{\mathcal{H}} - \sum_{i,j=2}^n g^{ij}\mathrm{R}(e_i, X, e_j, Y)|_{\mathcal{H}} - \kappa \mathcal{L}_t g(X, Y)|_{\mathcal{H}}, \end{aligned}$$
where \(e_2, \ldots, e_n\) is a basis for \(V^\perp\). The proof is now completed by applying Lemma 2.11 and recalling that \(\mathrm{Ric}|_{\mathcal{H}} = 0\). \(\square\)
The following corollary will be useful when computing the higher derivatives:
Corollary 2.12 For all \(X, Y \in C^\infty(V^\perp)\), we have
$$\begin{aligned} g(\nabla_X \partial_t, Y)|_{\mathcal{H}} = \frac{1}{2\kappa}\left( \mathrm{Ric}^\sigma(X, Y) + \frac{1}{\kappa^2}\sigma(\nabla^\sigma_X V, \nabla^\sigma_Y V) + \textrm{d}\omega(X, Y)\right). \end{aligned}$$
Proof We compute
$$\begin{aligned} \textrm{d}\omega(X, Y)&= X\omega(Y) - Y\omega(X) - \omega([X, Y]) \\&= -\omega([X, Y]) \\&= -g([X, Y], \partial_t)|_{\mathcal{H}}\,\omega(V) \\&= -\kappa g([X, Y], \partial_t)|_{\mathcal{H}}. \end{aligned}$$
Combining this with
$$\begin{aligned} g(\nabla_X \partial_t, Y)|_{\mathcal{H}}&= \frac{1}{2}\mathcal{L}_t g(X, Y)|_{\mathcal{H}} + \frac{1}{2}\left( g(\nabla_X \partial_t, Y) - g(\nabla_Y \partial_t, X)\right)|_{\mathcal{H}} \\&= \frac{1}{2}\mathcal{L}_t g(X, Y)|_{\mathcal{H}} - \frac{1}{2} g(\partial_t, [X, Y])|_{\mathcal{H}} \end{aligned}$$
and Proposition 2.9 yields the desired result. \(\square\)
2.4 The Higher Derivatives
The proof of Theorem 2.8 is an iterative construction of the \(Q_m\)'s. The proof is constructive, meaning that it is in principle possible to compute the \(Q_m\)'s explicitly, just as we did for \(Q_1\) in the previous subsection. Remark 2.10 implies that there is a unique \(Q_1\) such that
$$\begin{aligned} \mathcal{L}_t g(X, Y)|_{\mathcal{H}} = Q_1(\sigma, V)(X, Y) \end{aligned}$$
for all \(X, Y \in T\mathcal{H}\). Let us therefore make the following induction assumption:
Induction Assumption 1 Fix an \(m \in \mathbb{N}\). We assume that there are unique (nonlinear) diffeomorphism invariant differential operators \(Q_1, \ldots, Q_m\) on \(\mathcal{H}\) such that
$$\begin{aligned} \mathcal{L}_t^k g(X, Y)|_{\mathcal{H}} = Q_k(\sigma, V)(X, Y) \end{aligned}$$
for all \(X, Y \in T\mathcal{H}\) and all \(k = 1, \ldots, m\).
Remark 2.13 Indeed, by Remark 2.10 we have proven that Induction Assumption 1 is satisfied for \(m = 1\).
Given Induction Assumption 1, the goal of this section is to show the existence of a unique \(Q_{m+1}\) such that
$$\begin{aligned} \mathcal{L}_t^{m+1} g(X, Y)|_{\mathcal{H}} = Q_{m+1}(\sigma, V)(X, Y) \end{aligned}$$
for all \(X, Y \in T\mathcal{H}\), which by induction would prove Theorem 2.8. Recall that we already know that
$$\begin{aligned} \mathcal{L}_t^m g(\partial_t, \cdot) = 0, \end{aligned}$$
for all \(m \in \mathbb{N}\).
2.4.1 Notation and First Identities
It will be convenient to use the notation
$$\begin{aligned} \Theta(X) = \nabla_X \partial_t, \quad A(X, Y) := g(\Theta(X), Y). \end{aligned}$$
We also define the square of a (0, 2)-tensor T as
$$\begin{aligned} T^2(X, Y) := \sum_{\alpha,\beta=0}^n g^{\alpha\beta}\, T(X, e_\alpha) T(e_\beta, Y). \end{aligned}$$
Lemma 2.14 We have the identities
$$\begin{aligned} \nabla_t A(X, Y)&= -A^2(X, Y) + \mathrm{R}(\partial_t, X, \partial_t, Y), \\ \nabla_t \mathcal{L}_t g(X, Y)&= -A^2(X, Y) - A^2(Y, X) + 2\mathrm{R}(\partial_t, X, \partial_t, Y), \\ \nabla_t A(X, Y)&= \frac{1}{2}\nabla_t \mathcal{L}_t g(X, Y) - \frac{1}{2}\left( A^2(X, Y) - A^2(Y, X)\right). \end{aligned}$$
Proof Since \(\nabla_t \partial_t = 0\), we have
$$\begin{aligned} \nabla_t A(X, Y)&= g(\nabla^2_{\partial_t, X}\partial_t, Y) \\&= g(\nabla^2_{X, \partial_t}\partial_t, Y) + \mathrm{R}(\partial_t, X, \partial_t, Y) \\&= -g(\nabla_{\nabla_X \partial_t}\partial_t, Y) + \mathrm{R}(\partial_t, X, \partial_t, Y) \\&= -g(\Theta(\Theta(X)), Y) + \mathrm{R}(\partial_t, X, \partial_t, Y) \\&= -A^2(X, Y) + \mathrm{R}(\partial_t, X, \partial_t, Y). \end{aligned}$$
The second and the third identities follow from the identity
$$\begin{aligned} \mathcal{L}_t g(X, Y) = A(X, Y) + A(Y, X). \end{aligned}$$
\(\square\)
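For the reader's convenience, the symmetrization step behind the second identity can be spelled out (a short expansion of the proof above, in our words):

```latex
% Sketch: deriving the second identity of Lemma 2.14 from the first.
% Using \mathcal{L}_t g(X, Y) = A(X, Y) + A(Y, X) and the first identity twice:
\nabla_t \mathcal{L}_t g(X, Y)
= \nabla_t A(X, Y) + \nabla_t A(Y, X)
= -A^2(X, Y) - A^2(Y, X)
  + \mathrm{R}(\partial_t, X, \partial_t, Y) + \mathrm{R}(\partial_t, Y, \partial_t, X).
% The pair symmetry of the curvature tensor gives
\mathrm{R}(\partial_t, Y, \partial_t, X) = \mathrm{R}(\partial_t, X, \partial_t, Y),
% hence the factor 2 in the second identity; taking the difference instead of
% the sum of the two applications yields the third identity.
```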
2.4.2 The Curvature Components
The strategy in the proof of Theorem 2.8 is to show that \(\mathcal{L}_t^{m+1} g|_{\mathcal{H}}\) is given uniquely by the lower order terms \(\mathcal{L}_t^k g|_{\mathcal{H}}\) for \(k = 0, \ldots, m\), using that derivatives of the Ricci curvature vanish at the horizon. The following lemma will be crucial for that purpose:
Lemma 2.15 Let \(X, Y, Z, W \in C^\infty(T\mathcal{H})\) and fix a \(k \in \mathbb{N}\). Then there is a unique way to express
(a) \(\nabla_t^j A|_{\mathcal{H}}\), for all \(j = 0, \ldots, k-1\),
(b) \(\left( \nabla_t^j \mathcal{L}_t g - \mathcal{L}_t^{j+1} g\right)|_{\mathcal{H}}\), for all \(j = 0, \ldots, k\),
(c) \(\nabla_t^j \mathrm{R}(\partial_t, X, \partial_t, Y)|_{\mathcal{H}}\), for all \(j = 0, \ldots, k-2\),
(d) \(\nabla_t^j \mathrm{R}(X, Y, \partial_t, Z)|_{\mathcal{H}}\), for all \(j = 0, \ldots, k-1\),
(e) \(\nabla_t^j \mathrm{R}(X, Y, Z, W)|_{\mathcal{H}}\), for all \(j = 0, \ldots, k\),
in terms of V and
$$\begin{aligned} g|_{\mathcal{H}}, \ldots, \mathcal{L}_t^k g|_{\mathcal{H}}. \end{aligned}$$
(11)
In the proof of this lemma and in further computations, we will use the following schematic notation:
Notation 2.16 Given two tensors \(B_1\) and \(B_2\), the notation
$$\begin{aligned} B_1 * B_2 \end{aligned}$$
denotes a tensor which is given by linear combinations of contractions with respect to the metric g (not derivatives of g). In particular, we may write
$$\begin{aligned} \nabla^k(B_1 * B_2) = \sum_{i+j=k} \nabla^i B_1 * \nabla^j B_2. \end{aligned}$$
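A minimal concrete instance of this schematic calculus (our illustration):

```latex
% Sketch: the case k = 1 of the schematic Leibniz rule.
% Since \nabla g = 0, g-contractions commute with \nabla, so
\nabla(B_1 * B_2) = \nabla B_1 * B_2 + B_1 * \nabla B_2 ,
% e.g., with B_1 = B_2 = A:
\nabla(A * A) = \nabla A * A + A * \nabla A ,
% where the two occurrences of * on the right may denote different
% g-contractions arising from the same contraction on the left.
```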
The proof of Lemma 2.15 will use the following simple observation:
Lemma 2.17 Let S be a smooth tensor field on M. For any \(k \in \mathbb{N}\), the commutator
$$\begin{aligned} {[}\nabla_t^k, \nabla] S|_{\mathcal{H}} \end{aligned}$$
is determined uniquely in terms of \(g|_{\mathcal{H}}\) and
$$\begin{aligned} \nabla_t^j \mathrm{R}|_{\mathcal{H}}, \quad \nabla^j A|_{\mathcal{H}}, \end{aligned}$$
for \(j = 0, \ldots, k-1\) and
$$\begin{aligned} \nabla^j S|_{\mathcal{H}} \end{aligned}$$
for \(j = 0, \ldots, k\).
Proof of Lemma 2.17 Note that for any tensor field S, we have
$$\begin{aligned} {[}\nabla_t, \nabla] S(X)&= \nabla^2_{\partial_t, X} S - \nabla_X(\nabla_t S) \\&= \mathrm{R}(\partial_t, X) S - \nabla_{\Theta(X)} S. \end{aligned}$$
Using this and \(\nabla_t \partial_t = 0\), we may schematically compute the higher commutators as
$$\begin{aligned} {[}\nabla_t^k, \nabla] S&= [(\nabla_t)^k, \nabla] S \\&= \sum_{i+j=k-1} (\nabla_t)^i [\nabla_t, \nabla] (\nabla_t)^j S \\&= \sum_{i+j=k-1} (\nabla_t)^i \mathrm{R}(\partial_t, \cdot) \nabla_t^j S - (\nabla_t)^i \nabla_{\Theta(\cdot)} \nabla_t^j S \\&= \sum_{i+j=k-1} \sum_{a+b=i} (\nabla^a \mathrm{R}) * (\nabla^{b+j} S) - (\nabla^a \Theta) * (\nabla^{b+j+1} S). \end{aligned}$$
This completes the proof, since \(\nabla^k A = g(\nabla^k \Theta(\cdot), \cdot)\). \(\square\)
Proof of Lemma 2.15 We start by proving the statement for \(k = 1\). By Corollary 2.12, we know that
$$\begin{aligned} A|_{\mathcal{H}}(X, Y), \end{aligned}$$
for \(X, Y \perp V\) is given in terms of V, \(g|_{\mathcal{H}}\) and \(\mathcal{L}_t g|_{\mathcal{H}}\). The other components of \(A|_{\mathcal{H}}\) are computed using that \(\nabla_t \partial_t = 0\), by construction, and that \(\nabla_V \partial_t|_{\mathcal{H}} = [W, \partial_t]|_{\mathcal{H}} + \nabla_{\partial_t} W|_{\mathcal{H}} = -\kappa \partial_t|_{\mathcal{H}}\), by (10). This proves claim (a) for \(k = 1\). Claim (b) then follows for \(k = 1\) immediately by noting that
$$\begin{aligned} \nabla_t S = \mathcal{L}_t S + A * S, \end{aligned}$$
(12)
for any covariant tensor S, from which we get
$$\begin{aligned} \nabla_t \mathcal{L}_t g|_{\mathcal{H}} = \mathcal{L}_t^2 g|_{\mathcal{H}} + A * \mathcal{L}_t g|_{\mathcal{H}}. \end{aligned}$$
Further, we compute
$$\begin{aligned} \mathrm{R}(X, Y, \partial_t, Z)|_{\mathcal{H}}&= Xg(\nabla_Y \partial_t, Z)|_{\mathcal{H}} - g(\nabla_Y \partial_t, \nabla_X Z)|_{\mathcal{H}} - Yg(\nabla_X \partial_t, Z)|_{\mathcal{H}} \\&\qquad + g(\nabla_X \partial_t, \nabla_Y Z)|_{\mathcal{H}} - g(\nabla_{[X,Y]} \partial_t, Z)|_{\mathcal{H}} \\&= \nabla_X A(Y, Z)|_{\mathcal{H}} - \nabla_Y A(X, Z)|_{\mathcal{H}}, \end{aligned}$$
for all \(X, Y, Z \in T\mathcal{H}\). Clearly, the tangential derivatives \(\nabla_X A|_{\mathcal{H}}\) and \(\nabla_Y A|_{\mathcal{H}}\) can be expressed in terms of \(g|_{\mathcal{H}}\) and \(\mathcal{L}_t g|_{\mathcal{H}}\), since X and Y are tangent to \(\mathcal{H}\); we conclude claim (d) for \(k = 1\). The curvature component \(\mathrm{R}(X, Y, Z, W)|_{\mathcal{H}}\) can locally be given in terms of Christoffel symbols and derivatives thereof tangent to \(\mathcal{H}\), which are all given by \(g|_{\mathcal{H}}\) and \(\mathcal{L}_t g|_{\mathcal{H}}\), proving claim (e) for \(j = 0\).
In order to complete the case \(k = 1\), we still need to prove claim (e) for \(j = 1\). Using the second Bianchi identity, we compute the derivatives of the curvature components in (e):
$$\begin{aligned} \nabla_t^j&\mathrm{R}(X, Y, Z, W)|_{\mathcal{H}} \nonumber \\&= \nabla_t^{j-1}(\nabla \mathrm{R})(\partial_t, X, Y, Z, W)|_{\mathcal{H}} \nonumber \\&= -\nabla_t^{j-1}(\nabla \mathrm{R})(X, Y, \partial_t, Z, W)|_{\mathcal{H}} - \nabla_t^{j-1}(\nabla \mathrm{R})(Y, \partial_t, X, Z, W)|_{\mathcal{H}} \nonumber \\&= -[\nabla_t^{j-1}, \nabla] \mathrm{R}(X, Y, \partial_t, Z, W)|_{\mathcal{H}} - \nabla_X \nabla_t^{j-1}\mathrm{R}(Y, \partial_t, Z, W)|_{\mathcal{H}} \nonumber \\&\qquad - [\nabla_t^{j-1}, \nabla] \mathrm{R}(Y, \partial_t, X, Z, W)|_{\mathcal{H}} - \nabla_Y \nabla_t^{j-1}\mathrm{R}(\partial_t, X, Z, W)|_{\mathcal{H}}, \end{aligned}$$
(13)
for \(X, Y, Z, W \in T\mathcal{H}\). Using this with \(j = 1\) and the fact that we have proven claims (a)–(d) for \(k = 1\), we conclude claim (e) for \(j = 1\). We have thus proven Lemma 2.15 with \(k = 1\), which is the initial step in our induction on k.
Assume therefore that the assertion is proven for a \(k \in \mathbb{N}\); we then want to prove it for \(k+1\). Lemma 2.14 implies that
$$\begin{aligned} \nabla_t^k A|_{\mathcal{H}}&= \frac{1}{2}\nabla_t^k \mathcal{L}_t g|_{\mathcal{H}} + \sum_{a+b=k-1} \nabla_t^a A * \nabla_t^b A|_{\mathcal{H}} \\&= \frac{1}{2}\left( \nabla_t^k \mathcal{L}_t g - \mathcal{L}_t^{k+1} g\right)|_{\mathcal{H}} + \frac{1}{2}\mathcal{L}_t^{k+1} g|_{\mathcal{H}} + \sum_{a+b=k-1} \nabla_t^a A * \nabla_t^b A|_{\mathcal{H}}, \end{aligned}$$
which by the induction assumption is given in terms of V and
$$\begin{aligned} g|_{\mathcal{H}}, \ldots, \mathcal{L}_t^{k+1} g|_{\mathcal{H}}, \end{aligned}$$
proving claim (a) for \(k+1\). By applying (12), we note that
$$\begin{aligned} \left( \nabla_t^{k+1} \mathcal{L}_t g - \mathcal{L}_t^{k+2} g\right)|_{\mathcal{H}}&= \sum_{a+b=k} \nabla_t^a A * \nabla_t^b \mathcal{L}_t g|_{\mathcal{H}}. \end{aligned}$$
By claim (a) for \(k+1\), this proves claim (b) for \(k+1\). By Lemma 2.14, we deduce that
$$\begin{aligned} \nabla_t^{k-1}\mathrm{R}(\partial_t, X, \partial_t, Y)|_{\mathcal{H}} = \nabla_t^k A(X, Y)|_{\mathcal{H}} + \sum_{j=0}^{k-1} \nabla_t^j A * \nabla_t^{k-1-j} A(X, Y)|_{\mathcal{H}}, \end{aligned}$$
proving claim (c) for \(k+1\). To prove claim (d) for \(k+1\), we note that the second Bianchi identity implies, similarly to (13), that
$$\begin{aligned} \nabla_t^k&\mathrm{R}(X, Y, \partial_t, Z)|_{\mathcal{H}} \\&= -[\nabla_t^{k-1}, \nabla] \mathrm{R}(X, Y, \partial_t, \partial_t, Z)|_{\mathcal{H}} - \nabla_X \nabla_t^{k-1}\mathrm{R}(Y, \partial_t, \partial_t, Z)|_{\mathcal{H}} \\&\qquad - [\nabla_t^{k-1}, \nabla] \mathrm{R}(Y, \partial_t, X, \partial_t, Z)|_{\mathcal{H}} - \nabla_Y \nabla_t^{k-1}\mathrm{R}(\partial_t, X, \partial_t, Z)|_{\mathcal{H}}, \end{aligned}$$
for any \(X, Y, Z \in T\mathcal{H}\). By Lemma 2.17 and claims (a)–(c) for \(k+1\), the right-hand side is expressed in terms of V and
$$\begin{aligned} g|_{\mathcal{H}}, \ldots, \mathcal{L}_t^{k+1} g|_{\mathcal{H}}, \end{aligned}$$
proving claim (d) for \(k+1\). Finally, in order to prove claim (e) for \(k+1\), we consider equation (13) with \(j = k+1\) and get
$$\begin{aligned} \nabla_t^{k+1}&\mathrm{R}(X, Y, Z, W)|_{\mathcal{H}} \\&= -[\nabla_t^k, \nabla] \mathrm{R}(X, Y, \partial_t, Z, W)|_{\mathcal{H}} - \nabla_X \nabla_t^k \mathrm{R}(Y, \partial_t, Z, W)|_{\mathcal{H}} \\&\qquad - [\nabla_t^k, \nabla] \mathrm{R}(Y, \partial_t, X, Z, W)|_{\mathcal{H}} - \nabla_Y \nabla_t^k \mathrm{R}(\partial_t, X, Z, W)|_{\mathcal{H}}. \end{aligned}$$
Again, by Lemma 2.17 and claims (a)–(d) for \(k+1\), the right-hand side is expressed in terms of V and
$$\begin{aligned} g|_{\mathcal{H}}, \ldots, \mathcal{L}_t^{k+1} g|_{\mathcal{H}}, \end{aligned}$$
proving claim (e) for \(k+1\).
We have thus proven the assertion in Lemma 2.15 for \(k+1\), which completes the induction argument. \(\square \)
2.4.3 Expressions for the V-Components
With the preparations in the previous subsection, we are now able to study the expression
$$\begin{aligned} \mathcal{L}_t^{m+1} g(\cdot, V)|_{\mathcal{H}}. \end{aligned}$$
Proposition 2.18 Let Assumption 2.1 and Induction Assumption 1 be satisfied for an \(m \in \mathbb{N}\). Then, for any \(X \in C^\infty(T\mathcal{H})\), the expression
$$\begin{aligned} \mathcal{L}_t^{m+1} g(X, V)|_{\mathcal{H}} \end{aligned}$$
is given uniquely in terms of
$$\begin{aligned} g|_{\mathcal{H}}, \ldots, \mathcal{L}_t^m g|_{\mathcal{H}}. \end{aligned}$$
(14)
Proof By Lemma 2.14, we have
$$\begin{aligned} \nabla_t^m \mathcal{L}_t g(X, V)|_{\mathcal{H}} = \sum_{i+j=m-1} \nabla_t^i A * \nabla_t^j A(X, V)|_{\mathcal{H}} + 2\nabla_t^{m-1}\mathrm{R}(\partial_t, X, \partial_t, V)|_{\mathcal{H}} \end{aligned}$$
for all \(X \in T\mathcal{H}\). The first term on the right-hand side is dealt with in Lemma 2.15. The second term is computed as follows:
$$\begin{aligned} 0&= \nabla_t^{m-1} \mathrm{Ric}(X, \partial_t)|_{\mathcal{H}} \\&= \nabla_t^{m-1} \textrm{tr}_g\left( \mathrm{R}(\cdot, X, \partial_t, \cdot)\right)|_{\mathcal{H}} \\&= \textrm{tr}_g\left( (\nabla_t^{m-1} \mathrm{R})(\cdot, X, \partial_t, \cdot)\right)|_{\mathcal{H}} \\&= \nabla_t^{m-1}\mathrm{R}(\partial_t, X, \partial_t, V)|_{\mathcal{H}} + \sum_{i,j=2}^n g^{ij}\nabla_t^{m-1}\mathrm{R}(e_i, X, \partial_t, e_j)|_{\mathcal{H}}. \end{aligned}$$
By Lemma 2.15, the sum on the right-hand side is given by (14). The proof is completed by applying Lemma 2.15, claim (b). \(\square\)
2.4.4 Computation of the Commutator
In this section, we will compute the following commutator, which is essential for the remaining part of the proof of Theorem 2.8:
$$\begin{aligned} {[}\nabla_t^{m-1}, \Box]\mathcal{L}_t g. \end{aligned}$$
Proposition 2.19 Let Assumption 2.1 and Induction Assumption 1 be satisfied for an \(m \in \mathbb{N}\). The expression
$$\begin{aligned} {[}\nabla_t^{m-1}, \Box]\mathcal{L}_t g|_{\mathcal{H}} + 2(m-1)\kappa\, \mathcal{L}_t^{m+1} g|_{\mathcal{H}} \end{aligned}$$
is given uniquely in terms of
$$\begin{aligned} g|_{\mathcal{H}}, \ldots, \mathcal{L}_t^m g|_{\mathcal{H}}. \end{aligned}$$
(15)
We begin with the following lemma:
Lemma 2.20 For any tensor field S, we schematically have
$$\begin{aligned} {[}\nabla_t, \nabla] S(X)&= \mathrm{R}(\partial_t, X) S - \nabla_{\Theta(X)} S, \\ {[}\nabla_t, \nabla^2] S&= \nabla \mathrm{R} * S + \Theta * \mathrm{R} * S + \mathrm{R} * \nabla S + \nabla \Theta * \nabla S + \Theta * \nabla^2 S, \\ {[}\nabla_t, \Box] S&= g(\mathcal{L}_t g, \nabla^2) S + \nabla \mathrm{R}(\partial_t, \cdot) * S + \mathrm{R} * \nabla S + \nabla \Theta * \nabla S. \end{aligned}$$
Here, we have used the notation
$$\begin{aligned} g(T, \nabla^2) S := \sum_{\alpha,\beta,\gamma,\delta=0}^n g^{\alpha\gamma} g^{\beta\delta}\, T(e_\alpha, e_\beta) \nabla^2_{e_\gamma, e_\delta} S. \end{aligned}$$
Proof Note first that
$$\begin{aligned} {[}\nabla_t, \nabla] S(X)&= \nabla^2_{\partial_t, X} S - \nabla_X(\nabla_t S) \\&= \mathrm{R}(\partial_t, X) S - \nabla_{\Theta(X)} S, \end{aligned}$$
which proves the first formula. For the second formula, write
$$\begin{aligned} ([\nabla_t, \nabla^2] S)(X, Y) = ([\nabla_t, \nabla]\nabla S)(X, Y) + ((\nabla[\nabla_t, \nabla]) S)(X, Y). \end{aligned}$$
Inserting the above yields
$$\begin{aligned} {[}\nabla_t, \nabla]\nabla S(X, Y)&= \mathrm{R}(\partial_t, X)\nabla_Y S - \nabla_{\mathrm{R}(\partial_t, X)Y} S - \nabla^2_{\Theta(X), Y} S, \\ \nabla[\nabla_t, \nabla] S(X, Y)&= (\nabla_X \mathrm{R})(\partial_t, Y) S + \mathrm{R}(\Theta(X), Y) S + \mathrm{R}(\partial_t, Y)\nabla_X S \\&\qquad - \nabla^2_{X, \Theta(Y)} S - \nabla_{(\nabla_X \Theta)(Y)} S, \end{aligned}$$
proving in particular the second formula. We may now contract over X and Y to get
$$\begin{aligned} {[}\nabla_t, \Box] S&= [\nabla_t, -\textrm{tr}_g(\nabla^2)] S \\&= -\textrm{tr}_g([\nabla_t, \nabla^2]) S \\&= \textrm{tr}_g\left( (\nabla_\cdot \mathrm{R})(\partial_t, \cdot)\right) S + \textrm{tr}_g\left( \nabla^2_{\Theta(\cdot), \cdot} + \nabla^2_{\cdot, \Theta(\cdot)}\right) S \\&\qquad + \Theta * \mathrm{R} * S + \mathrm{R} * \nabla S + \nabla \Theta * \nabla S. \end{aligned}$$
We therefore consider the endomorphism
$$\begin{aligned} \textrm{tr}_g\left( (\nabla_\cdot \mathrm{R})(\partial_t, \cdot)\right) \end{aligned}$$
by using the second Bianchi identity to write
$$\begin{aligned}&g\left( \textrm{tr}_g\left( (\nabla_\cdot \mathrm{R})(\partial_t, \cdot)\right) X, Y\right) \\&\quad = \sum_{\alpha,\beta=0}^n g^{\alpha\beta}\, g(\nabla_{e_\alpha} \mathrm{R}(\partial_t, e_\beta) X, Y) \\&\quad = \sum_{\alpha,\beta=0}^n g^{\alpha\beta}\, \nabla_{e_\alpha} \mathrm{R}(X, Y, \partial_t, e_\beta) \\&\quad = -\sum_{\alpha,\beta=0}^n g^{\alpha\beta}\, \nabla_X \mathrm{R}(Y, e_\alpha, \partial_t, e_\beta) - \sum_{\alpha,\beta=0}^n g^{\alpha\beta}\, \nabla_Y \mathrm{R}(e_\alpha, X, \partial_t, e_\beta) \\&\quad = \nabla_X \mathrm{Ric}(\partial_t, Y) - \nabla_Y \mathrm{Ric}(\partial_t, X). \end{aligned}$$
Finally, we use the formula
$$\begin{aligned} \mathcal{L}_t g(X, Y) = g(\Theta(X), Y) + g(X, \Theta(Y)) \end{aligned}$$
to see that
$$\begin{aligned} \textrm{tr}_g\left( \nabla^2_{\Theta(\cdot), \cdot} + \nabla^2_{\cdot, \Theta(\cdot)}\right) S = g(\mathcal{L}_t g, \nabla^2) S. \end{aligned}$$
This completes the proof. \(\square\)
Proof of Proposition 2.19 We first note that
$$\begin{aligned} {[}\nabla_t^{m-1}, \Box] \mathcal{L}_t g|_{\mathcal{H}}&= \sum_{i+j=m-2} \nabla_t^i [\nabla_t, \Box] \nabla_t^j \mathcal{L}_t g|_{\mathcal{H}}. \end{aligned}$$
By applying Lemma 2.15, Lemma 2.20 and Assumption 2.1, we note that the only term in this sum which is not determined by (15) is
$$\begin{aligned} \sum_{i+j=m-2} \nabla_t^i g(\mathcal{L}_t g, \nabla^2) \nabla_t^j \mathcal{L}_t g|_{\mathcal{H}} = \sum_{i+j=m-2} \sum_{a+b=i} c(a, b)\, g(\nabla_t^a \mathcal{L}_t g, \nabla_t^b \nabla^2) \nabla_t^j \mathcal{L}_t g|_{\mathcal{H}}, \end{aligned}$$
for some combinatorial numbers c(a, b). The only term in this sum which is not determined by (15) is
$$\begin{aligned} \sum_{i+j=m-2} g(\mathcal{L}_t g, \nabla_t^i \nabla^2) \nabla_t^j \mathcal{L}_t g|_{\mathcal{H}}&= \sum_{i+j=m-2} g(\mathcal{L}_t g, [\nabla_t^i, \nabla^2]) \nabla_t^j \mathcal{L}_t g|_{\mathcal{H}} \\&\qquad + \sum_{i+j=m-2} g(\mathcal{L}_t g, \nabla^2) \nabla_t^{i+j} \mathcal{L}_t g|_{\mathcal{H}} \\&= \sum_{i+j=m-2} \sum_{a+b=i-1} g(\mathcal{L}_t g, \nabla_t^a [\nabla_t, \nabla^2]) \nabla_t^{b+j} \mathcal{L}_t g|_{\mathcal{H}} \\&\qquad + (m-1)\, g(\mathcal{L}_t g, \nabla^2) \nabla_t^{m-2} \mathcal{L}_t g|_{\mathcal{H}}. \end{aligned}$$
Again, Lemma 2.20 implies that the only term which is not determined by (15) is
$$\begin{aligned} (m-1)\, g(\mathcal{L}_t g, \nabla^2) \nabla_t^{m-2} \mathcal{L}_t g|_{\mathcal{H}} = (m-1)\, g^{\alpha\gamma} g^{\beta\delta} (\mathcal{L}_t g)_{\alpha\beta} \nabla^2_{e_\gamma, e_\delta} \nabla_t^{m-2} \mathcal{L}_t g|_{\mathcal{H}}. \end{aligned}$$
By Proposition 2.9 and Lemma 2.20, the only term in this expression which is not given by (15) is the term
$$\begin{aligned} -2(m-1)\kappa\, \nabla_t^m \mathcal{L}_t g|_{\mathcal{H}}. \end{aligned}$$
Applying Lemma 2.15, claim (b), to replace \(\nabla_t^m \mathcal{L}_t g|_{\mathcal{H}}\) by \(\mathcal{L}_t^{m+1} g|_{\mathcal{H}}\), modulo terms determined by (15), completes the proof. \(\square\)