Genetic association mapping leveraging Gaussian processes

Gaussian Process (GP)

A Gaussian process (GP) is a type of stochastic process whose application in the machine learning field enables us to infer a nonlinear function f(x) over a continuous domain x (e.g., time and space). Precisely, f(x) is a draw from a GP if it follows an N-dimensional multivariate normal distribution for the N input data points \(\{x_i\}_{i=1}^{N}\). Let us denote \(X=(x_1,\ldots,x_N)^\top\) and \(f=(f(x_1),\ldots,f(x_N))^\top\); a GP is formally written as

$$f \sim \mathcal{N}(m(X),k(X,X)),$$

where m(⋅) denotes the mean function and k(⋅,⋅) denotes the kernel function [11]. The simplest kernel function would be the linear kernel, such that \(k(X,X)=\sigma^2XX^\top\), while the automatic relevance determination squared exponential (ARD-SE) kernel is defined as

$$k(x_j,x_k)=\sigma^2\exp\left[-\sum_{q=1}^{Q}\frac{(x_{qj}-x_{qk})^2}{2\rho_q^2}\right]$$

for the (j, k) element of k(X, X), where \(x_j,x_k\in\mathbb{R}^Q\) are Q-dimensional input vectors. Here σ² is the kernel variance parameter and \(\rho=(\rho_1,\ldots,\rho_Q)^\top\) is the vector of characteristic length scales, whose inverse determines the relevance of each element of the input vector. Typically, the mean function is defined as m(X) = 0.
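As an illustration, the following minimal numpy sketch evaluates the ARD-SE kernel and draws one GP sample on a one-dimensional grid; the function name and all parameter values are our own illustrative choices, not part of the original model.

```python
import numpy as np

def ard_se_kernel(X1, X2, sigma2=1.0, rho=None):
    """ARD-SE kernel: k(x_j, x_k) = sigma^2 exp[-sum_q (x_qj - x_qk)^2 / (2 rho_q^2)].

    X1: (N1, Q) and X2: (N2, Q) arrays of input points; rho: length scales (Q,).
    """
    X1, X2 = np.atleast_2d(X1), np.atleast_2d(X2)
    if rho is None:
        rho = np.ones(X1.shape[1])
    # Scale each input dimension by its length scale, then sum squared distances.
    D = (X1[:, None, :] / rho - X2[None, :, :] / rho) ** 2
    return sigma2 * np.exp(-0.5 * D.sum(axis=2))

# Example: draw one sample f ~ N(0, k(X, X)) on a 1-D grid.
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 100)[:, None]
K = ard_se_kernel(X, X, sigma2=1.0, rho=np.array([0.2]))
f = rng.multivariate_normal(np.zeros(100), K + 1e-8 * np.eye(100))  # jitter for stability
```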

GP regression and its use in genetic association mapping

Because the GP yielding f(x) has various useful properties inherited from the normal distribution, a GP can be used to estimate a nonlinear function f(X) from output data \(y=(y_1,\ldots,y_N)^\top\) along the continuous factor X. The extended linear model y = f(X) + ε is referred to as GP regression and is widely used in the machine learning framework [12]. This model can be used to map dynamic genetic associations for normalized gene expression or other common complex quantitative traits (e.g., human height) along the continuous factor x (e.g., cellular states or donor's age). Let us denote the genotype vector by \(g=(g_1,\ldots,g_N)^\top\) and the kinship matrix among N individuals by R; the mapping model, as proposed by us and others [8, 10], can be expressed as follows:

$$y=\alpha +\beta \odot g+\gamma +\varepsilon ,$$

(1)

where

$$\alpha \sim \mathcal{N}(0,K),\quad \beta \sim \mathcal{N}(0,\delta_gK),\quad \gamma \sim \mathcal{N}(0,\delta_dK\odot R)$$

are all GPs with similar covariance matrices. Here ⊙ denotes the element-wise product between two vectors or matrices with the same dimensions, K = k(X, X) denotes the covariance matrix given by a kernel function, and ε denotes the residuals. Intuitively, α models the average baseline change of y in relation to x, while β represents the dynamic genetic effect along x. The effect size is multiplied by the genotype vector g, indicating that the output \(y_i\) varies between the different genotype groups (\(g_i\in\{0,1,2\}\)). In fact, the effect size β(x_i) is additive to the baseline α(x_i) at each x_i, as in standard association mapping. Statistical hypothesis testing is performed under the null hypothesis δg = 0, as the strength of the genetic association is determined by δg.

It is important to note that model (1) includes a correction term γ that accounts for the between-donor variation of dynamic changes along x, particularly when multiple data points are measured on the same donor or samples are taken from related donors. This term is essential for the statistical calibration of the genetic effect β, because other genetic associations scattered over the genome (trans effects) can confound the target genotype effect. Therefore, to adjust for this confounding, we include the extra GP γ, drawn from a normal distribution whose covariance matrix is the element-wise product of K and the kinship matrix R.

Here, the kinship matrix is estimated by \(\hat{R}=\sum\nolimits_{l=1}^{L}\tilde{g}_l\tilde{g}_l^\top/L\) using genome-wide variants \(g_l\ (l=1,\ldots,L)\), where \(\tilde{g}_l\) is the standardized genotype vector (centered and scaled) based on the allele frequency at genetic variant l, and L denotes the total number of variants across the genome [6]. The matrix is initially a dense N × N matrix, but it can be simplified if donors are (sufficiently) unrelated. Let us introduce a design matrix of donor configuration, \(Z\in\{0,1\}^{N\times N_d}\), for the N_d donors (i.e., \(z_{ij}=1\) if sample i is taken from donor j; otherwise \(z_{ij}=0\)); the kinship matrix can then be approximated as R = ZZ⊤. Thus, γ can be expressed as a linear combination of N_d independent GPs \(\{\gamma_j \sim \mathcal{N}(0,\delta_dK);\ j=1,\ldots,N_d\}\), such that \(\gamma=\sum\nolimits_{j=1}^{N_d}z_j\odot\gamma_j\), where \(z_j\) denotes the jth column vector of Z. This approximation is particularly useful for parameter estimation with large N_d (as discussed in Section 2.4).
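The following short numpy sketch illustrates this construction; the donor labels are made up for the example.

```python
import numpy as np

# Donor design matrix Z and the kinship approximation R = Z Z^T for unrelated
# donors. The donor labels below are purely illustrative.
donor_ids = np.array([0, 0, 1, 1, 1, 2])      # N = 6 samples from N_d = 3 donors
N, Nd = donor_ids.size, donor_ids.max() + 1
Z = np.zeros((N, Nd))
Z[np.arange(N), donor_ids] = 1.0              # z_ij = 1 iff sample i is from donor j
R = Z @ Z.T                                   # block-diagonal kinship approximation

# The covariance of gamma = sum_j z_j * gamma_j is then delta_d * (K * R)
# (elementwise product), matching the model above.
```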

Sparse GP and Titsias bound

When the sample size N is large, an ordinary GP faces a severe scalability issue because the dense matrix K has dimension N × N, resulting in a total computational cost of \(\mathcal{O}(N^3)\). This hinders the application of GPs in the GWAS field, as sample sizes often reach a million these days. However, there are several approximations of the full GP model, including the Nyström approximation (low-rank approximation), the Projected Process approximation [13], the Sparse Pseudo-inputs GP [14], the Fully Independent Training Conditional approximation and the Variational Free Energy approximation [15]. In this section, we introduce the sparse GP approximation proposed in [16].

The sparse GP is a scalable model using the technique of inducing points [14]. Since the computational cost of the sparse GP is \(\mathcal{O}(NM^2)\) with M inducing points, we can greatly reduce the computational cost, which becomes essentially linear in N under the assumption M ≪ N. Let us denote the M inducing points by \(T=(t_1,\ldots,t_M)^\top\) and the corresponding GP values by \(u=(u(t_1),\ldots,u(t_M))^\top\); the joint distribution of f and u is then a multivariate normal distribution. Therefore, a lower bound of the log conditional distribution log p(y∣u) can be written as

$$\log p(y|u)=\log\int p(y|f)p(f|u)df\ \ge\ \int\left[\log p(y|f)\right]p(f|u)df=\log\mathcal{N}(y|\bar{f},\sigma^2I)-\frac{1}{2\sigma^2}\mathrm{tr}\{\tilde{K}_{NN}\}\equiv\mathcal{L}_1,$$

where

$$\bar{f}=K_{NM}K_{MM}^{-1}u,\quad \tilde{K}_{NN}=K_{NN}-K_{NM}K_{MM}^{-1}K_{MN},$$

and

$$K_{NN}=k(X,X),\quad K_{NM}=k(X,T),\quad K_{MM}=k(T,T).$$

Therefore, the marginal distribution of the output y is approximated by

$$p(y)=\int p(y|u)p(u)du\ \ge\ \int\exp\{\mathcal{L}_1\}p(u)du=\mathcal{N}(y|0,V)\exp\left\{-\frac{1}{2\sigma^2}\mathrm{tr}\{\tilde{K}_{NN}\}\right\}\equiv\exp\{\mathcal{L}_2\},$$

where \(V=\sigma^2I+K_{NM}K_{MM}^{-1}K_{MN}\). The lower bound \(\mathcal{L}_2\) is referred to as the Titsias bound and can be used for parameter estimation as well as statistical hypothesis testing.
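A compact numpy/scipy sketch of this bound, under the assumption that `kernel` is a callable such as the `ard_se_kernel` above (a small jitter is added to \(K_{MM}\) for numerical stability):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.stats import multivariate_normal

def titsias_bound(y, X, T, kernel, sigma2):
    """L2 = log N(y | 0, V) - tr(K_tilde_NN) / (2 sigma^2),
    with V = sigma^2 I + K_NM K_MM^{-1} K_MN (a sketch, not a reference implementation)."""
    K_NM = kernel(X, T)
    K_MM = kernel(T, T) + 1e-8 * np.eye(len(T))        # jitter for numerical stability
    Q_NN = K_NM @ cho_solve(cho_factor(K_MM), K_NM.T)  # K_NM K_MM^{-1} K_MN
    V = sigma2 * np.eye(len(y)) + Q_NN
    trace_term = (np.trace(kernel(X, X)) - np.trace(Q_NN)) / (2.0 * sigma2)
    return multivariate_normal(cov=V).logpdf(y) - trace_term
```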

Selecting the number of inducing points M and their coordinates is crucial for accurately approximating a GP. Although a larger M provides a better approximation, increasing M is not feasible when N reaches hundreds of thousands, as in large-scale genetic association studies. Additionally, the accuracy of the approximation is influenced by the complexity of the nonlinearity of y and the dimension Q of the input points x. There are a few approaches for inferring an optimal value of M from data [17], but the example used in that study is too small (48 genes × 437 samples) to carry over to real-world data. However, it is worth noting that the optimal coordinates of the inducing points for a fixed M can easily be learned from data, as described in the next section.

Parameter estimation

Genetic association mapping involves performing tens of millions of hypothesis tests. Therefore, it is almost impossible to estimate the parameters of GPs for each pair of trait and variant across the genome, even with the sparse approximation mentioned in the last subsection. Furthermore, both the baseline α and the correction term γ share the characteristic length scales \(\rho=(\rho_1,\ldots,\rho_Q)^\top\) and the inducing points T, which can lead to unstable optimization and prolonged parameter estimation. To address this issue, we have previously proposed a three-step parameter estimation strategy for performing the statistical hypothesis testing [10]. In particular, optimizing ρ and T under the baseline model alone using a quasi-Newton approach (such as the BFGS method) is sufficient in the first step, because the variance explained by γ is typically much smaller than that explained by α. The three steps are:

1. y = α + ε (baseline model: H0) to estimate ρ and T.

2. y = α + γ + ε (baseline model: H1) to estimate the variance parameters δd and σ². Here \(\hat{\rho}\) and \(\hat{T}\) estimated in H0 are plugged into H1.

3. y = α + β ⊙ g + γ + ε (full model: H2) to test whether δg = 0. Here \(\{\hat{\rho},\hat{T},\hat{\delta}_d,\hat{\sigma}^2\}\) estimated in H0 and H1 are used.

Here the Titsias bounds for these models are given by

$$\mathcal{L}_2^h=\left\{\begin{array}{ll}\log\mathcal{N}(y|0,V)-\frac{1}{2\sigma^2}\mathrm{tr}\{\tilde{K}_{NN}\},&h=H_0,\\ \log\mathcal{N}(y|0,V_d)-\frac{1}{2\sigma^2}\mathrm{tr}\{(1+\delta_d)\tilde{K}_{NN}\},&h=H_1,\\ \log\mathcal{N}(y|0,V_g)-\frac{1}{2\sigma^2}\mathrm{tr}\{(1+\delta_d)\tilde{K}_{NN}+\delta_gG\tilde{K}_{NN}G\},&h=H_2,\end{array}\right.$$

where

$$V_d=V+\delta_d(K_{NM}K_{MM}^{-1}K_{MN})\odot R,\quad V_g=V_d+\delta_gGK_{NM}K_{MM}^{-1}K_{MN}G,$$

and G = diag(g) denotes the diagonal matrix whose diagonal elements are given by the elements of g. The estimators \(\hat{\rho}\) and \(\hat{T}\) are obtained by maximizing \(\mathcal{L}_2^{H_0}\) with respect to ρ and T, and \(\hat{\delta}_d\) and \(\hat{\sigma}^2\) are obtained by maximizing \(\mathcal{L}_2^{H_1}\) with respect to δd and σ² given \(\hat{\rho}\) and \(\hat{T}\).
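As an illustration of the second step, the sketch below maximizes \(\mathcal{L}_2^{H_1}\) over (δd, σ²) with ρ and T held fixed. It assumes the `ard_se_kernel` helper above with kernel variance kept at 1; the log-parameterization is our own choice to keep both variances positive.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def fit_h1(y, X, T, R, rho, kernel):
    """Step 2: maximize L2^{H1} over (delta_d, sigma2) given rho, T from step 1."""
    K_NM = kernel(X, T, rho=rho)
    K_MM = kernel(T, T, rho=rho) + 1e-8 * np.eye(len(T))
    Q_NN = K_NM @ cho_solve(cho_factor(K_MM), K_NM.T)
    tr_Ktilde = np.trace(kernel(X, X, rho=rho)) - np.trace(Q_NN)

    def neg_bound(theta):
        delta_d, sigma2 = np.exp(theta)            # log scale keeps both positive
        V_d = sigma2 * np.eye(len(y)) + Q_NN + delta_d * (Q_NN * R)
        ll = multivariate_normal(cov=V_d).logpdf(y)
        return -(ll - (1.0 + delta_d) * tr_Ktilde / (2.0 * sigma2))

    res = minimize(neg_bound, x0=np.zeros(2), method="L-BFGS-B")
    return np.exp(res.x)                           # (delta_d_hat, sigma2_hat)
```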

It is worth noting that, when the kinship matrix can be expressed as R = ZZ⊤ with a lower-rank matrix \(Z=(z_1,\ldots,z_{N_d})\) with N_d < N, V_d can be rewritten as

$$V_d=V+\delta_d\left(K_{NM}K_{MM}^{-1}K_{MN}\right)\odot(ZZ^\top)=\sigma^2I+ABA^\top,$$

where

$$A=(C,\mathrm{diag}(z_1)C,\ldots,\mathrm{diag}(z_{N_d})C),\quad B=\mathrm{blkdiag}(K_{MM},\delta_dK_{MM},\ldots,\delta_dK_{MM}),$$

and \(C=K_{NM}K_{MM}^{-1}\), so that B becomes an M(N_d + 1) × M(N_d + 1) block diagonal matrix. Since the computational complexity of H1 or H2 is then \(\mathcal{O}(N_d^2M^2N)\), for large N_d such that MN_d > N, the total complexity exceeds \(\mathcal{O}(N^3)\) and we again face the scalability issue.

However, if the donors in the data are unrelated, we can significantly reduce the memory usage and the computational burden, to \(\mathcal{O}(N_dM^2N)\). This is because the matrix A becomes sparse, with \(\mathrm{diag}(z_j)\mathrm{diag}(z_{j'})=0\) for \(j\ne j'\), resulting in NM(N_d − 1) out of NMN_d elements being 0. Additionally, the non-zero elements of A are repeated and identical to the elements of C, and the diagonal blocks of B are essentially \(\delta_dK_{MM}\).
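Below is a minimal sketch of a Woodbury-based solve with this block structure (assembled densely for clarity; with unrelated donors, A⊤A and B are block-structured and need not be formed from a dense A):

```python
import numpy as np
from scipy.linalg import block_diag, cho_factor, cho_solve

def solve_Vd(y, C, K_MM, donor_ids, delta_d, sigma2):
    """Woodbury solve of V_d^{-1} y with V_d = sigma2*I + A B A^T, where
    A = (C, diag(z_1)C, ..., diag(z_Nd)C) and B = blkdiag(K_MM, delta_d*K_MM, ...)."""
    Nd = donor_ids.max() + 1
    # Column blocks of A: C itself, plus one donor-masked copy of C per donor.
    A = np.hstack([C] + [C * (donor_ids == j)[:, None] for j in range(Nd)])
    B = block_diag(K_MM, *([delta_d * K_MM] * Nd))
    inner = np.linalg.inv(B) + (A.T @ A) / sigma2          # M(Nd+1) x M(Nd+1)
    correction = A @ cho_solve(cho_factor(inner), A.T @ y)
    return y / sigma2 - correction / sigma2**2
```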

Score statistic and Bayes factor for genetic mapping

To perform GWAS with GPs, it is crucial to reduce the computational time required to map the genetic association for each variant. The Score statistic for testing δg = 0 can be computed from the first derivative of \(\mathcal{L}_2^{H_2}\) with respect to δg, and the variance parameters \(\{\hat{\sigma}^2,\hat{\delta}_d\}\) of V_d are estimated from \(\mathcal{L}_2^{H_1}\) only once for all variants to be tested. Therefore, it is well suited to testing tens of millions of variants independently. Using the fact that the first derivative of \(V_g^{-1}\) at δg = 0 depends only on V_d, such that

$$\left.\frac{\partial V_g^{-1}}{\partial\delta_g}\right|_{\delta_g=0}=-V_d^{-1}GK_{NM}K_{MM}^{-1}K_{MN}GV_d^{-1},$$

the Score statistic can be explicitly written as

$$S=y^\top\hat{V}_d^{-1}GK_{NM}K_{MM}^{-1}K_{MN}G\hat{V}_d^{-1}y,$$

(2)

whose distribution is the generalized χ² distribution, that is, the distribution of a weighted sum of M independent χ² statistics, \(\sum\nolimits_{m=1}^{M}\lambda_m\chi^2_{1,m}\) [8, 10]. It is known that the weights λm (m = 1, …, M) are given by the non-negative eigenvalues of

$$\hat{V}_d^{-1/2}GK_{NM}K_{MM}^{-1}K_{MN}G\hat{V}_d^{-\top/2},$$

where \(\hat{V}_d^{-1/2}\) can be computed using the Cholesky decomposition \(\hat{V}_d=\hat{V}_d^{1/2}\hat{V}_d^{\top/2}\).

To compute the p-value from S, we can use Davies' exact method, implemented in the CompQuadForm package in R. Note that, if we use a linear kernel, S can be simplified as described in [8]. Although the Score-based approach is an easy and quick solution for genome-wide mapping, we should check the asymptotic behavior and statistical calibration of the Score statistic: a QQ-plot can be used to verify that the p-values obtained from multiple variants follow the uniform distribution under the null hypothesis.
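The sketch below computes S and the eigenvalue weights in numpy; the p-value itself would then be obtained with Davies' method (e.g., CompQuadForm::davies in R), which we do not reimplement here.

```python
import numpy as np

def score_test(y, g, Q_NN, Vd):
    """Score statistic (2), S = y^T Vd^{-1} G Q_NN G Vd^{-1} y with
    Q_NN = K_NM K_MM^{-1} K_MN, plus the generalized chi-square weights,
    i.e., the non-negative eigenvalues of Vd^{-1/2} G Q_NN G Vd^{-T/2}."""
    G = np.diag(g.astype(float))
    Vinv_y = np.linalg.solve(Vd, y)
    S = Vinv_y @ (G @ Q_NN @ G) @ Vinv_y
    L = np.linalg.cholesky(Vd)                  # Vd = L L^T
    Mhalf = np.linalg.solve(L, G @ Q_NN @ G)    # L^{-1} (G Q_NN G)
    lam = np.linalg.eigvalsh(np.linalg.solve(L, Mhalf.T))
    return S, lam[lam > 1e-10]
```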

If a colocalisation analysis [18] or a Bayesian hierarchical model [19] is considered as a downstream analysis of the test statistics, a Bayes factor can also be computed using the Titsias bounds, such as

$$\log(BF)=\mathcal{L}_2^{H_2}-\mathcal{L}_2^{H_1}.$$

Here we would average the Bayes factor over several empirical values of δg, instead of integrating δg out of \(\mathcal{L}_2^{H_2}\) [20].
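A small sketch of this averaging on the log scale; the grid values and function handles are placeholders, not values from the original work.

```python
import numpy as np
from scipy.special import logsumexp

# Average the Bayes factor over a grid of empirical delta_g values (placeholder
# grid) rather than integrating delta_g out. `bound_h2` maps delta_g to the
# fitted Titsias bound L2^{H2}; `bound_h1` is the scalar L2^{H1} bound.
def log_bf_averaged(bound_h2, bound_h1, grid=(0.1, 0.2, 0.4)):
    log_bf = np.array([bound_h2(d) for d in grid]) - bound_h1
    return logsumexp(log_bf) - np.log(len(grid))    # log of the mean BF
```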

Decomposing static and dynamic genetic associations

In real genetic association mapping, most genetic associations are in fact static and ubiquitous along the factor x. To capture such static associations, we can consider the following model

$$y=\alpha_01_N+\alpha+\beta_0g+\beta\odot g+\gamma_0+\gamma+\varepsilon,$$

where α0 denotes the intercept, 1N denotes the N-dimensional vector of all 1's, β0 denotes the effect size of the static genetic association, and \(\gamma_0 \sim \mathcal{N}(0,\sigma^2\delta_dR)\) denotes the donor variation which confounds β0. For instance, in [8], the static genetic association β0 is modeled as a fixed effect, and the dynamic effect is tested using the Score statistic. On the other hand, in [10], the authors modeled both the static and dynamic associations as random effects to be tested via a Bayes factor. In this case, the covariance matrix K can be rewritten as

$$K=\sigma^2\rho_01_N1_N^\top+k(X,X)$$

to estimate the model parameters in (1); the null hypothesis δg = 0 for β is then tested.

Note here that the kernel parameter ρ0 is not necessarily common and shared across α, β and γ. Indeed, in [10], the authors estimated \(\hat{\rho}_0^\alpha\) and \(\hat{\rho}_0^\gamma\) independently in \(\mathcal{L}_2^{H_1}\). To compute the Score statistic, the authors assumed that \(\hat{\rho}_0^\beta=\hat{\rho}_0^\gamma\) for β and γ, because the ratio of the static effect to the dynamic effect can be assumed to be the same for cis and trans genetic effects.
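Under the constant-plus-SE form of K sketched above (our reconstruction of the decomposition), the combined kernel can be written as below; it assumes the `ard_se_kernel` helper from the first snippet.

```python
import numpy as np

# Static + dynamic kernel: a constant component weighted by rho0 added to the
# ARD-SE component, so rho0 controls the static-to-dynamic variance ratio.
# Assumes the `ard_se_kernel` helper defined earlier in this section.
def static_dynamic_kernel(X1, X2, sigma2=1.0, rho0=0.5, rho=None):
    static = sigma2 * rho0 * np.ones((len(X1), len(X2)))   # rho0 * 1_N 1_N^T term
    return static + ard_se_kernel(X1, X2, sigma2=sigma2, rho=rho)
```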

Latent variable X

In longitudinal studies, the factor x is typically observed explicitly (e.g., the donor's age or the physical locations where samples were taken). This makes it straightforward to perform genetic association mapping along x using the Score statistics or Bayes factors described above. However, this is often not the case in molecular studies, and we therefore need to estimate the underlying biological state from the data.

In single-cell biology, the hidden cellular state x is often referred to as “pseudotime”, and principal component analysis is commonly used to estimate it as part of dimension reduction [21]. The Gaussian process latent variable model (GPLVM) is a strong alternative for extracting pseudotime when the molecular phenotype changes gradually along pseudotime x in a nonlinear fashion [22, 23].

We have also proposed a GPLVM that uses the baseline model H0 to estimate the latent variable X from single-cell RNA-seq data (see Section 3 for more details). Let Y = (y1, …, yJ) be the gene expression matrix of J genes, whose jth column is the expression vector of gene j; the Titsias lower bound of the GPLVM based on the baseline model H0 can be written as

$$\log p(Y|X)\ \ge\ \log\mathcal{MN}(Y|0,I+K_{NM}K_{MM}^{-1}K_{MN},\Sigma)-\frac{J}{2}\mathrm{tr}\{\tilde{K}_{NN}\}\equiv\mathcal{L}_3.$$

To obtain the optimal cellular state \(\hat{X}\), this lower bound can be maximized with respect to X [10, 24]. Here \(\Sigma=\mathrm{diag}(\sigma_j^2;j=1,\ldots,J)\) denotes the residual variance parameters of the J genes, and \(\mathcal{MN}(\cdot)\) denotes the matrix normal distribution. To ensure the uniqueness of the model parameters, the variance parameter in the kernel function is set to σ² = 1. In addition, to maintain the uniqueness of the latent variable estimation, a prior probability on X is required. It is quite common to assume independent standard normal distributions for each element of X, that is, \(X \sim \mathcal{MN}(0,I,I)\) [24], although there are multiple alternatives depending on the nature of the modeled data [10, 23].

For parameter estimation, the limited-memory BFGS method can be used to fit the GPLVM for large N. In addition, the stochastic variational Bayes approach can be used to fit the GPLVM to even larger data sets while reducing the fitting time [25,26,27].
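A minimal scaffold for this optimization, assuming a placeholder `gplvm_bound` callable that evaluates \(\mathcal{L}_3\) for a candidate X (one latent dimension here):

```python
import numpy as np
from scipy.optimize import minimize

# L-BFGS estimation of a one-dimensional pseudotime X under a standard normal
# prior. `gplvm_bound(X, Y)` is a placeholder for the Titsias bound L3 above;
# only the optimization scaffold is shown.
def fit_pseudotime(Y, gplvm_bound):
    N = Y.shape[0]
    def neg_obj(x):
        X = x.reshape(N, 1)
        return -(gplvm_bound(X, Y) - 0.5 * np.sum(x ** 2))  # bound + log prior
    x0 = 0.1 * np.random.default_rng(1).standard_normal(N)
    res = minimize(neg_obj, x0, method="L-BFGS-B")
    return res.x.reshape(N, 1)
```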

Non-Gaussian output y

For a non-Gaussian output y, the Titsias bound \(\mathcal{L}_2\) is not analytically available. However, in the Poisson case, a lower bound of the conditional probability p(y∣u) can be computed as follows:

$$\mathcal{L}_1=\sum\limits_{i=1}^{N}\left[-\log(y_i!)+y_i\bar{f}_i-\exp\left(\bar{f}_i+\frac{\tilde{k}_{ii}}{2}\right)\right],$$

where \(\bar{f}_i\) denotes the ith element of \(\bar{f}\) and \(\tilde{k}_{ii}\) denotes the ith diagonal element of \(\tilde{K}_{NN}\). Let νi and wi be the working response and the iterative weight of the GLM for the ith sample, such that

$$\nu_i=\bar{f}_i+(y_i-w_i)/w_i\quad\mathrm{and}\quad w_i=\exp\left(\bar{f}_i+\frac{\tilde{k}_{ii}}{2}\right)$$

for i = 1, …, N; the optimal \(\hat{u}\) which maximizes \(\exp\{\mathcal{L}_1\}p(u)\) satisfies

$$\left(K_{MM}^{-1}+K_{MM}^{-1}K_{MN}WK_{NM}K_{MM}^{-1}\right)u=K_{MM}^{-1}K_{MN}W\nu,$$

(3)

where W = diag(wi; i = 1, …, N), which suggests

$$\nu|u \sim \mathcal{N}(\bar{f},W^{-1})$$

as described elsewhere [28]. Therefore, we can maximize

$$\mathcal{L}_2=\log\mathcal{N}(\nu|0,W^{-1}+K_{NM}K_{MM}^{-1}K_{MN})$$

with respect to the kernel parameters, where \(u=\hat{u}\) is iteratively updated as in (3). Thus, to obtain the Score statistic for non-Gaussian y, we replace y with ν and \(\hat{V}_d\) with \(W^{-1}+ABA^\top\) in (2).
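A sketch of one such update, assuming \(C=K_{NM}K_{MM}^{-1}\), its companion \(K_{MM}^{-1}\), and the diagonal of \(\tilde{K}_{NN}\) are precomputed:

```python
import numpy as np

def irls_update(y, u, C, K_MM_inv, k_tilde_diag):
    """One iteration of (3) for the Poisson case: refresh the working response
    nu and weights w at the current u, then solve for the new u."""
    f_bar = C @ u
    w = np.exp(f_bar + 0.5 * k_tilde_diag)        # iterative weights w_i
    nu = f_bar + (y - w) / w                      # working responses nu_i
    lhs = K_MM_inv + C.T @ (w[:, None] * C)       # K_MM^{-1} + C^T W C
    rhs = C.T @ (w * nu)                          # C^T W nu
    return np.linalg.solve(lhs, rhs), nu, w
```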

For a binary output y, the situation is more complicated than in the Poisson case, because it is not even possible to analytically compute the \(\mathcal{L}_1\) bound with a logit or probit link function. For the logit link function, several useful alternatives to the \(\mathcal{L}_1\) bound have been proposed [29]. For the probit link function, an approximation of \(\mathcal{L}_1\) using Gauss-Hermite quadrature has been proposed [30]. However, in both cases, the computational cost is much higher than in the Poisson case, and it is rather impractical to conduct large genome-wide association mapping at this moment.
