How plasticity shapes the formation of neuronal assemblies driven by oscillatory and stochastic inputs

Appendix A.   Derivation of self-consistent equations for synaptic weights

Here we derive a self-consistent set of equations for the synaptic weights \(w_{12}\) and \(w_{21}\) by assuming a separation of time-scales between the rate dynamics and the synaptic plasticity. We first note that the evolution equations for the synaptic weights, Eq. (2), can be rewritten for the case of stationary rate dynamics by noting that \(\langle r_{1}(t)r_{2}(t-T)\rangle _{t} = \langle r_{1}(t+T)r_{2}(t)\rangle _{t}\), which allows for a change of variables in the second integral, leading to Eq. (3). Strictly speaking this correspondence only holds when the dynamics has been averaged over the fast time. As we are only interested in the slow-time dynamics, we write Eq. (3) as if the correspondence were exact, cognizant that this is a slight abuse of notation.

Now, given real-valued, time-varying inputs to the two neurons, the equations are

$$\begin{aligned} \tau \dot{r}_{1}&= -r_{1}+w_{12}r_{2}+I_{1}(t),\\ \tau \dot{r}_{2}&= -r_{2}+w_{21}r_{1}+I_{2}(t),\\ \dot{w}_{12}&= \int _{-\infty }^{\infty }dT\,A(T)\,r_{1}(t)r_{2}(t+T),\\ \dot{w}_{21}&= \int _{-\infty }^{\infty }dT\,A(T)\,r_{2}(t)r_{1}(t+T), \end{aligned}$$

(A1)

where A(T) is the plasticity rule. There is no general analytical solution to these equations given the quadratic nonlinearities. However, if we assume that each synaptic weight change is small, then the synaptic weights will evolve much more slowly than the rates and we can formally separate the time scales in a multi-scale analysis. To do this we introduce the small parameter \(\epsilon \ll 1\) such that \(A(T) = \epsilon \tilde{A}(T)\). We also introduce the slow time \(t_{s} = \epsilon t\) and allow for the rates and weights to evolve on both fast and slow time scales, i.e. they are functions of t and \(t_{s}\), and these two times are taken to be independent variables.

Then we can write

$$\begin{aligned} r_{1}&= r_{1}^{(0)}(t,t_{s})+\epsilon r_{1}^{(1)}(t,t_{s})+\mathcal {O}(\epsilon ^{2}),\\ r_{2}&= r_{2}^{(0)}(t,t_{s})+\epsilon r_{2}^{(1)}(t,t_{s})+\mathcal {O}(\epsilon ^{2}),\\ w_{12}&= w_{12}^{(0)}(t,t_{s})+\epsilon w_{12}^{(1)}(t,t_{s})+\mathcal {O}(\epsilon ^{2}),\\ w_{21}&= w_{21}^{(0)}(t,t_{s})+\epsilon w_{21}^{(1)}(t,t_{s})+\mathcal {O}(\epsilon ^{2}). \end{aligned}$$

(A2)

Plugging these into Eq. (A1) and collecting terms order-by-order gives, at order \(\mathcal {O}(1)\),

$$\begin{aligned} \tau \partial _{t}r_{1}^{(0)}&= -r_{1}^{(0)}+w_{12}^{(0)}r_{2}^{(0)}+I_{1}(t),\\ \tau \partial _{t}r_{2}^{(0)}&= -r_{2}^{(0)}+w_{21}^{(0)}r_{1}^{(0)}+I_{2}(t),\\ \partial _{t}w_{12}^{(0)}&= 0,\\ \partial _{t}w_{21}^{(0)}&= 0. \end{aligned}$$

(A3)

These last two equations show that the leading-order weights only depend on the slow time, namely \(w_{12}^{(0)}(t,t_{s}) = w_{12}^{(0)}(t_{s})\) and \(w_{21}^{(0)}(t,t_{s}) = w_{21}^{(0)}(t_{s})\). Therefore, they can be treated as constants in the rate equations, which evolve on the fast time-scale.

At order \(\mathcal {O}(\epsilon )\) we have

$$\begin{aligned} \tau \partial _{t_{s}}r_{1}^{(0)}+\tau \partial _{t}r_{1}^{(1)}&= -r_{1}^{(1)}+w_{12}^{(0)}r_{2}^{(1)}+w_{12}^{(1)}r_{2}^{(0)},\\ \tau \partial _{t_{s}}r_{2}^{(0)}+\tau \partial _{t}r_{2}^{(1)}&= -r_{2}^{(1)}+w_{21}^{(0)}r_{1}^{(1)}+w_{21}^{(1)}r_{1}^{(0)},\\ \partial _{t_{s}}w_{12}^{(0)}+\partial _{t}w_{12}^{(1)}&= \int \limits _{-\infty }^{\infty }dT\,\tilde{A}(T)\,r_{1}^{(0)}(t)r_{2}^{(0)}(t+T),\\ \partial _{t_{s}}w_{21}^{(0)}+\partial _{t}w_{21}^{(1)}&= \int \limits _{-\infty }^{\infty }dT\,\tilde{A}(T)\,r_{2}^{(0)}(t)r_{1}^{(0)}(t+T). \end{aligned}$$

(A4)

The first two equations give a correction to the leading-order solution of the firing rates, which we will not use here. The weight equations at first glance do not seem solvable, since we are expected to solve for both the leading-order solution of the synaptic weights and the next-order correction in the same set of equations. However, we know that the leading-order weights are independent of the fast time t, which allows us to solve for both. Specifically, to avoid secular growth of the corrections \(w_{12}^{(1)}\) and \(w_{21}^{(1)}\) in the fast time, the evolution of the leading-order weights must be given by only those terms from the integral which are independent of the fast time. This leads to Eq. (A5). For simplicity of notation, in what follows we will drop the superscripts and tildes and write, for the leading-order solution, simply

$$\begin{aligned} \tau \partial _{t}r_{1}&= -r_{1}+w_{12}r_{2}+I_{1}(t),\\ \tau \partial _{t}r_{2}&= -r_{2}+w_{21}r_{1}+I_{2}(t),\\ \partial _{t_{s}}w_{12}&= \int dt\int _{-\infty }^{\infty }dT\,A(T)\,r_{1}(t)r_{2}(t+T),\\ \partial _{t_{s}}w_{21}&= \int dt\int _{-\infty }^{\infty }dT\,A(T)\,r_{2}(t)r_{1}(t+T). \end{aligned}$$

(A5)

We will consider the specific cases of oscillatory and noisy drive below.
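The self-consistent approach can be checked numerically. The sketch below (in Python; all parameter values and the exponential STDP window are illustrative assumptions, not values from the paper) integrates the fast rate dynamics of Eq. (A1) with the weights frozen at their leading-order values, and then estimates the slow drift of Eq. (A5) by contracting the empirical rates with the plasticity window:

```python
# Sketch: numerical check of the separation of time-scales. With the weights
# frozen, integrate the fast rate dynamics of Eq. (A1) by forward Euler, then
# estimate the slow drift of Eq. (A5) as a discretized double integral.
import numpy as np

tau, dt, T_total = 10.0, 0.1, 5000.0
tau_p = tau_m = 20.0                       # assumed exponential STDP window widths

def A(T):
    # balanced Hebbian window: potentiation for T > 0, depression for T < 0
    return np.where(T > 0, np.exp(-T / tau_p), -np.exp(T / tau_m))

steps = int(T_total / dt)
t = dt * np.arange(steps)
I1 = 1.0 + 0.5 * np.cos(0.2 * t)           # oscillatory drive...
I2 = 1.0 + 0.5 * np.cos(0.2 * t + 0.5)     # ...with phase difference 0.5 rad
w12, w21 = 0.1, 0.1                        # frozen leading-order weights

r1 = np.zeros(steps); r2 = np.zeros(steps)
for k in range(1, steps):
    r1[k] = r1[k-1] + dt / tau * (-r1[k-1] + w12 * r2[k-1] + I1[k-1])
    r2[k] = r2[k-1] + dt / tau * (-r2[k-1] + w21 * r1[k-1] + I2[k-1])

n_lag = 500                                # truncate plasticity integral at +/- n_lag*dt
kernel = A(dt * np.arange(-n_lag, n_lag + 1)) * dt
burn = steps // 5                          # discard the initial transient
drift12 = np.mean([r1[k] * np.dot(kernel, r2[k-n_lag:k+n_lag+1])
                   for k in range(burn, steps - n_lag)])
drift21 = np.mean([r2[k] * np.dot(kernel, r1[k-n_lag:k+n_lag+1])
                   for k in range(burn, steps - n_lag)])
print(f"slow drift: dw12/dts = {drift12:.3e}, dw21/dts = {drift21:.3e}")
```

Repeating this for a grid of frozen weights traces out the slow flow field described by Eq. (A5).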

Appendix B.   Oscillatory drive

Here we study the case where the neurons are driven sinusoidally with a frequency \(\omega \) and with a phase difference of \(\phi \), i.e. \(I_{1} = I_{0}+Ie^{i(\omega t+\phi _{1})}\) and \(I_{2} = I_{0}+Ie^{i(\omega t+\phi _{2})}\). The (complex) rates can be written \((r_{1},r_{2}) = (R_{01}(t_{s}),R_{02}(t_{s}))+(R_{1}(t_{s}),R_{2}(t_{s}))e^{i\omega t}\). We find that

$$\begin{aligned} R_{01}&= I_{0}\frac{1+w_{12}}{1-w_{12}w_{21}},\nonumber \\ R_{02}&= I_{0}\frac{1+w_{21}}{1-w_{12}w_{21}},\nonumber \\ R_{1}&= R\Big ((1+i\tau \omega )e^{i\phi _{1}}+w_{12}e^{i\phi _{2}}\Big ),\nonumber \\ R_{2}&= R\Big (w_{21}e^{i\phi _{1}}+(1+i\tau \omega )e^{i\phi _{2}}\Big ),\nonumber \\ R&= \frac{I}{(1+i\tau \omega )^{2}-w_{12}w_{21}}. \end{aligned}$$

(B6)

Because we only consider balanced plasticity rules here, the constant baseline rates will not affect the evolution of the synaptic weights. Nonetheless, when conducting numerical simulations it is important to take a large enough constant drive \(I_{0}\) to ensure positive rates.
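As a concrete illustration, the amplitudes of Eq. (B6) can be evaluated directly; a minimal sketch, with arbitrary parameter values assumed for illustration:

```python
# Sketch: evaluate the complex oscillatory amplitudes of Eq. (B6).
import numpy as np

tau, omega, I = 10.0, 0.2, 0.5
phi1, phi2 = 0.0, 0.5
w12, w21 = 0.2, 0.1

R = I / ((1 + 1j * tau * omega)**2 - w12 * w21)
R1 = R * ((1 + 1j * tau * omega) * np.exp(1j * phi1) + w12 * np.exp(1j * phi2))
R2 = R * (w21 * np.exp(1j * phi1) + (1 + 1j * tau * omega) * np.exp(1j * phi2))

# fixed-point check: (1 + i*tau*omega) R1 = w12 R2 + I exp(i phi1)
assert np.allclose((1 + 1j * tau * omega) * R1, w12 * R2 + I * np.exp(1j * phi1))
print(abs(R1), np.angle(R1), abs(R2), np.angle(R2))
```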

In order to calculate the equations for the synaptic weights we must use the real part of the complex rates. As an illustration we consider the equation for the weight \(w_{12}\), which is

$$\begin{aligned} \partial _{t_{s}}w_{12} = \int dt\int _{-\infty }^{\infty }dT\,A(T)\,\textrm{Re}(r_{1}(t))\,\textrm{Re}(r_{2}(t+T)). \end{aligned}$$

The quadratic term expands as

$$\begin{aligned} \textrm{Re}(r_{1}(t))\,\textrm{Re}(r_{2}(t+T))&= \frac{1}{4}\Big (R_{1}e^{i\omega t}+\bar{R}_{1}e^{-i\omega t}\Big )\Big (R_{2}e^{i\omega (t+T)}+\bar{R}_{2}e^{-i\omega (t+T)}\Big )\\&= \frac{1}{4}\Big (R_{1}\bar{R}_{2}e^{-i\omega T}+\bar{R}_{1}R_{2}e^{i\omega T}\Big )+\frac{1}{4}\Big (R_{1}R_{2}e^{i\omega (2t+T)}+\bar{R}_{1}\bar{R}_{2}e^{-i\omega (2t+T)}\Big ). \end{aligned}$$

Note that the first two terms are independent of the fast time t, while the second two terms oscillate on the fast timescale with a frequency \(2\omega \). Integrating over the fast timescale therefore eliminates the latter terms. Performing the second integral and then doing the analogous calculation for the other weight leads to Eq. (5), where \(\phi = \phi _{2}-\phi _{1}\).
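A quick numerical check of this averaging step (with arbitrary complex amplitudes assumed for illustration) confirms that only the cross term survives:

```python
# Sketch: averaging Re(r1(t)) Re(r2(t+T)) over one fast period leaves only
# the cross term (1/2) Re(conj(R1) R2 exp(i omega T)).
import numpy as np

omega, T = 0.2, 3.0
R1, R2 = 0.8 * np.exp(0.3j), 0.5 * np.exp(-0.7j)   # arbitrary complex amplitudes
t = np.linspace(0.0, 2 * np.pi / omega, 20000, endpoint=False)

prod = np.real(R1 * np.exp(1j * omega * t)) * np.real(R2 * np.exp(1j * omega * (t + T)))
print(prod.mean())                                  # fast-time average
print(0.5 * np.real(np.conj(R1) * R2 * np.exp(1j * omega * T)))
```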

Growth rate of synaptic weights. While the final state of the synaptic weights depends on the phase difference, the rate at which plasticity occurs is strongly influenced by the frequency of forcing. This can be most easily seen for the case of in-phase forcing \(\phi = 0\), for which we expect both weights to potentiate (or depress for an anti-Hebbian rule). Assuming \(w_{12} = w_{21}\) leads to a right-hand side (growth rate) of Eq. (5) which is simply proportional to \(\tilde{A}_{+}(\omega )-\tilde{A}_{-}(\omega )\), the difference of the cosine Fourier transforms of the potentiation and depression lobes of the plasticity rule. This difference is zero for \(\omega = 0\) and as \(\omega \rightarrow \infty \), while it has a maximum for \(\omega = 1/\sqrt{\tau _{+}\tau _{-}}\).
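To see this non-monotonic frequency dependence concretely, the sketch below assumes a balanced exponential STDP window, for which the lobe transforms are \(\tilde{A}_{\pm }(\omega ) = A_{\pm }\tau _{\pm }/(1+\omega ^{2}\tau _{\pm }^{2})\) (the specific rule and time constants are assumptions for illustration), and locates the maximum numerically:

```python
# Sketch: in-phase growth-rate factor A~+(w) - A~-(w) for a balanced
# exponential STDP rule, with A~(w) = A*tau/(1 + (w*tau)^2) per lobe.
# The peak is predicted at omega = 1/sqrt(tau_p * tau_m).
import numpy as np

tau_p, tau_m = 17.0, 34.0            # assumed potentiation/depression widths
Ap = 1.0
Am = Ap * tau_p / tau_m              # balanced rule: Ap*tau_p = Am*tau_m

omega = np.logspace(-4, 1, 200000)
growth = Ap * tau_p / (1 + (omega * tau_p)**2) - Am * tau_m / (1 + (omega * tau_m)**2)
print("numerical argmax:", omega[np.argmax(growth)])
print("predicted 1/sqrt(tau_p*tau_m):", 1 / np.sqrt(tau_p * tau_m))
```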

B.1 Theory for networks

For the case of n coupled neurons, the rate equation for the ith neuron is

$$\begin{aligned} \tau \dot{r}_{i} = -r_{i}+\frac{1}{n}\sum _{j=1}^{n}w_{ij}r_{j}+I_{i}, \end{aligned}$$

(B7)

where \(I_{i} = I_{0}+Ie^{i(\omega t+\phi _{i})}\), while the evolution equation for the synaptic weight from neuron j to neuron i is still described by Eq. (3). We can once again apply the separation of timescales formally by defining \(A(T) = \epsilon \tilde{A}(T)\) where \(\epsilon \ll 1\) and defining the slow time \(t_{s} = \epsilon t\). The rates can be written in vector form as \(\textbf{r}(t,t_{s}) = \textbf{r}_{0}(t_{s})+\textbf{r}_{\omega }(t_{s},\omega )e^{i\omega t}\) where

$$\begin{aligned} \textbf{r}_{0}&= I_{0}\Big (\textbf{1}-\frac{\textbf{W}}{n}\Big )^{-1}\textbf{e},\nonumber \\ \textbf{r}_{\omega }&= I\Big ((i\tau \omega +1)\textbf{1}-\frac{\textbf{W}}{n}\Big )^{-1}\textbf{v}, \end{aligned}$$

(B8)

where \(\textbf{1}\), \(\textbf{W}\) are the identity matrix and weight matrix respectively, \(\textbf{e}\) is a vector of ones, while the jth element of the vector \(\textbf{v}\) is \(e^{i\phi _{j}}\).
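Equation (B8) amounts to two linear solves. A minimal sketch, assuming the \(1/n\) weight scaling of Eq. (B7), with random weights and phases for illustration:

```python
# Sketch: leading-order network amplitudes from Eq. (B8).
import numpy as np

rng = np.random.default_rng(0)
n, tau, omega, I0, I = 50, 10.0, 0.2, 1.0, 0.5
W = rng.uniform(0.0, 0.5, (n, n)); np.fill_diagonal(W, 0.0)
v = np.exp(1j * rng.uniform(0.0, 2 * np.pi, n))     # j-th element is exp(i phi_j)

r0 = I0 * np.linalg.solve(np.eye(n) - W / n, np.ones(n))
r_omega = I * np.linalg.solve((1j * tau * omega + 1) * np.eye(n) - W / n, v)
print(r0[:3], np.abs(r_omega[:3]))
```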

Applicability of pairwise theory to network simulations. If we consider the rate equations for a pair of neurons j and k in the network, Eq. (B7), we find, applying the separation of time-scales approach detailed in Appendix A, that the oscillatory components obey

$$\begin{aligned} \tau \dot{R}_{j}&= -(1+i\tau \omega )R_{j}+\frac{1}{n}w_{jk}^{(0)}R_{k}+Ie^{i\phi _{j}}+\xi _{j},\nonumber \\ \tau \dot{R}_{k}&= -(1+i\tau \omega )R_{k}+\frac{1}{n}w_{kj}^{(0)}R_{j}+Ie^{i\phi _{k}}+\xi _{k}, \end{aligned}$$

(B9)

where \(\xi _{j} = \frac{1}{n}\sum _{l\ne j,k}w_{jl}^{(0)}R_{l}\). From this we find the complex amplitudes

$$\begin{aligned} R_{j}&= R\Big ((1+i\tau \omega )(Ie^{i\phi _{j}}+\xi _{j})+\frac{1}{n}w_{jk}^{(0)}(Ie^{i\phi _{k}}+\xi _{k})\Big ),\nonumber \\ R_{k}&= R\Big (\frac{1}{n}w_{kj}^{(0)}(Ie^{i\phi _{j}}+\xi _{j})+(1+i\tau \omega )(Ie^{i\phi _{k}}+\xi _{k})\Big ), \end{aligned}$$

(B10)

where \(R = 1/\big ((1+i\tau \omega )^{2}-w_{jk}^{(0)}w_{kj}^{(0)}/n^{2}\big )\). Note that these equations are identical to those for the complex amplitudes in the pairwise case (with renormalized weights), Eq. (B6), with the exception of the mean-field terms \(\xi _{j}\) and \(\xi _{k}\). The slow dynamics of the synaptic weight \(w_{jk}\) is then given by

$$\begin{aligned} \partial _{t_{s}}w_{jk} = \int dt\int _{-\infty }^{\infty }dT\,A(T)\,\textrm{Re}(r_{j}(t))\,\textrm{Re}(r_{k}(t+T)). \end{aligned}$$

Note that in principle the rates \(r_{j}\) and \(r_{k}\) still depend on the mean-field terms and hence this equation is not self-consistent as in the pairwise case. The influence of these mean-field terms depends strongly on the distribution of phases of the complex amplitudes. In one of the two limiting cases, if all of the phases are aligned then the moduli of the terms simply add; this is equivalent to summing vectors which all have the same angle. In the other limiting case, if the phases are uniformly distributed, then the resultant modulus will be close to zero because we are summing many vectors with distinct phases (as long as the moduli and phases are only weakly correlated or uncorrelated). Hence in this limit the influence of the mean-field vanishes and only the pairwise interactions matter. This latter case is the relevant one for Fig. 4C and explains why the pairwise theory correctly predicts the network structure after learning.
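The phase-cancellation argument can be illustrated in a few lines: the modulus of a sum of n unit phasors is n when the phases are aligned but only of order \(\sqrt{n}\) when the phases are uniform:

```python
# Sketch: the mean-field term is a sum of phasors; aligned phases give a
# modulus ~ n, uniformly distributed phases give a modulus ~ sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
aligned = abs(np.sum(np.exp(1j * np.zeros(n))))
uniform = abs(np.sum(np.exp(1j * rng.uniform(0.0, 2 * np.pi, n))))
print(f"aligned: {aligned:.1f} (= n)   uniform: {uniform:.1f} (~ sqrt(n) = {np.sqrt(n):.1f})")
```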

B.2 Network simulations

For the simulations shown in Fig. 3, the following nonlinear rate equations were used

$$\begin{aligned} \tau \dot{\textbf{r}} = -\textbf{r}+\alpha \,\Theta \Big (\frac{1}{n}\textbf{W}\textbf{r}-I_{0}+\textbf{I}\Big ), \end{aligned}$$

where \(\Theta \) is the Heaviside function, applied element-wise, and \(\textbf{I}\) is the vector of external drives. A spike from a neuron i in a timestep \(\Delta t\) occurs with probability \(r_{i}\Delta t\). Given the spike trains from neurons i and j, a weight \(w_{ij}\) undergoes updates from all spike pairs according to the STDP rule; see Pfister and Gerstner (2006) for the numerical scheme. For the simulations in Fig. 4 linear rate equations are used.
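A sketch of this simulation loop is given below. The trace-based pair update is one common online implementation of the referenced scheme, and all parameter values, as well as the oscillatory drive, are illustrative assumptions:

```python
# Sketch: Heaviside rate dynamics, spikes drawn with probability r_i*dt, and
# pair-based STDP applied online via exponential traces.
import numpy as np

rng = np.random.default_rng(2)
n, tau, dt, steps = 20, 10.0, 0.5, 20000
alpha, I0, I, omega = 0.05, 0.2, 0.4, 0.2
Ap, Am, tau_p, tau_m = 1e-3, 1e-3, 20.0, 20.0
W = rng.uniform(0.0, 0.2, (n, n)); np.fill_diagonal(W, 0.0)
phi = rng.uniform(0.0, 2 * np.pi, n)

r = np.zeros(n)
xp = np.zeros(n); xm = np.zeros(n)                 # presynaptic / postsynaptic traces
for k in range(steps):
    drive = W @ r / n - I0 + I * np.cos(omega * k * dt + phi)
    r += dt / tau * (-r + alpha * (drive > 0))     # Heaviside nonlinearity
    spikes = rng.random(n) < r * dt                # spike with probability r_i*dt
    xp *= np.exp(-dt / tau_p); xm *= np.exp(-dt / tau_m)
    if spikes.any():
        W[spikes, :] += Ap * xp[np.newaxis, :]     # post spike: LTP ~ pre trace
        W[:, spikes] -= Am * xm[:, np.newaxis]     # pre spike: LTD ~ post trace
        np.fill_diagonal(W, 0.0)
    xp[spikes] += 1.0; xm[spikes] += 1.0

print("mean weight after learning:", W.mean())
```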

Appendix C.   Noisy drive

Here we consider an external drive of the form

$$\begin{aligned} I_{1}(t)&= \sqrt{1-|c|}\,\xi _{1}(t)+\textrm{sgn}(c)\sqrt{|c|}\,\xi _{c}(t),\\ I_{2}(t)&= \sqrt{1-|c|}\,\xi _{2}(t)+\sqrt{|c|}\,\xi _{c}(t-D), \end{aligned}$$

(C11)

where \(\xi _{a}(t)\) is a Gaussian white noise process, i.e. \(\langle \xi _{a}(t)\rangle = 0\) and \(\langle \xi _{a}(t)\xi _{b}(t^{\prime })\rangle = \sigma ^{2}\delta _{ab}\delta (t-t^{\prime })\). Therefore, the noisy drive to the two neurons has correlation c. The correlated input is delayed to neuron 2 with respect to neuron 1 by a time D.
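A discrete-time realization of this drive is straightforward; the step size and parameters below are assumptions for illustration, and the measured lagged covariance recovers \(c\sigma ^{2}\):

```python
# Sketch: discrete-time realization of the noisy drive of Eq. (C11).
import numpy as np

rng = np.random.default_rng(3)
dt, steps, sigma, c, D = 0.1, 200000, 1.0, 0.4, 5.0
nD = int(D / dt)                             # delay in steps
amp = sigma / np.sqrt(dt)                    # white-noise standard deviation per step

xi1, xi2, xic = (amp * rng.standard_normal(steps) for _ in range(3))
I1 = np.sqrt(1 - abs(c)) * xi1 + np.sign(c) * np.sqrt(abs(c)) * xic
I2 = np.sqrt(1 - abs(c)) * xi2 + np.sqrt(abs(c)) * np.roll(xic, nD)   # xi_c(t-D)

# lagged covariance E[I1(t) I2(t+D)] recovers c*sigma^2 (x 1/dt discretely)
print("measured c:", np.mean(I1[:steps - nD] * I2[nD:]) * dt / sigma**2)
```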

To solve the system of rate equations we rewrite it in vector form as

$$\begin{aligned} \tau \dot{\textbf{r}} = \textbf{L}\textbf{r}+\textbf{I}, \end{aligned}$$

(C12)

where \(\textbf{r} = (r_{1},r_{2})\), \(\textbf{I} = (I_{1},I_{2})\) and

$$\begin{aligned} \textbf{L} = \left( \begin{array}{cc} -1 & w_{12}\\ w_{21} & -1 \end{array}\right) . \end{aligned}$$

(C13)

We diagonalize the connectivity matrix \(\textbf{L} = \textbf{E}\varvec{\Lambda }\textbf{E}^{-1}\) and obtain the system of independent equations

$$\begin{aligned} \tau \dot{\textbf{u}} = \varvec{\Lambda }\textbf{u}+\textbf{E}^{-1}\textbf{I}, \end{aligned}$$

(C14)

where \(\textbf{u} = \textbf{E}^{-1}\textbf{r}\). The matrices resulting from the diagonalization are

$$\begin{aligned} \textbf{E}&= \left( \begin{array}{cc} -\sqrt{w_{12}/w_{21}} & \sqrt{w_{12}/w_{21}}\\ 1 & 1 \end{array}\right) ,\nonumber \\ \varvec{\Lambda }&= \left( \begin{array}{cc} -1-\sqrt{w_{12}w_{21}} & 0\\ 0 & -1+\sqrt{w_{12}w_{21}} \end{array}\right) ,\nonumber \\ \textbf{E}^{-1}&= \frac{1}{2}\left( \begin{array}{cc} -\sqrt{w_{21}/w_{12}} & 1\\ \sqrt{w_{21}/w_{12}} & 1 \end{array}\right) . \end{aligned}$$

(C15)
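These matrices, and the time constants they imply (see Eq. (C19) below), are easy to verify numerically; a minimal sketch, with assumed weights:

```python
# Sketch: numerical check of the diagonalization in Eq. (C15).
import numpy as np

w12, w21 = 0.3, 0.6
s, a = np.sqrt(w12 * w21), np.sqrt(w12 / w21)

L = np.array([[-1.0, w12], [w21, -1.0]])
E = np.array([[-a, a], [1.0, 1.0]])
Lam = np.diag([-1.0 - s, -1.0 + s])
Einv = 0.5 * np.array([[-1.0 / a, 1.0], [1.0 / a, 1.0]])

assert np.allclose(E @ Lam @ Einv, L)
assert np.allclose(Einv @ E, np.eye(2))

tau = 10.0
tau_f, tau_s = tau / (1 + s), tau / (1 - s)   # Eq. (C19); tau_s diverges as w12*w21 -> 1
print("eigenvalues:", np.diag(Lam), " tau_f, tau_s:", tau_f, tau_s)
```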

The equations for the transformed variables \(\textbf{u}\) are

$$\begin{aligned} \tau \dot{u}_{1}&= -(1+\sqrt{w_{12}w_{21}})u_{1}+\frac{1}{2}\Big (-\sqrt{\frac{w_{21}}{w_{12}}}I_{1}(t)+I_{2}(t)\Big ), \end{aligned}$$

(C16)

$$\begin{aligned} \tau \dot{u}_{2}&= -(1-\sqrt{w_{12}w_{21}})u_{2}+\frac{1}{2}\Big (\sqrt{\frac{w_{21}}{w_{12}}}I_{1}(t)+I_{2}(t)\Big ). \end{aligned}$$

(C17)

These equations can be solved formally as

$$\begin{aligned} u_{1}(t)&= u_{1}(0)e^{-t/\tau _{f}}-\sqrt{1-|c|}\frac{1}{2\tau }\sqrt{\frac{w_{21}}{w_{12}}}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{1}(t^{\prime })\\&\quad +\sqrt{1-|c|}\frac{1}{2\tau }\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{2}(t^{\prime })\\&\quad -\frac{1}{2\tau }\sqrt{\frac{w_{21}}{w_{12}}}\,\textrm{sgn}(c)\sqrt{|c|}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{c}(t^{\prime })\\&\quad +\frac{1}{2\tau }\sqrt{|c|}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{c}(t^{\prime }-D),\\ u_{2}(t)&= u_{2}(0)e^{-t/\tau _{s}}+\sqrt{1-|c|}\frac{1}{2\tau }\sqrt{\frac{w_{21}}{w_{12}}}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{1}(t^{\prime })\\&\quad +\sqrt{1-|c|}\frac{1}{2\tau }\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{2}(t^{\prime })\\&\quad +\frac{1}{2\tau }\sqrt{\frac{w_{21}}{w_{12}}}\,\textrm{sgn}(c)\sqrt{|c|}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{c}(t^{\prime })\\&\quad +\frac{1}{2\tau }\sqrt{|c|}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{c}(t^{\prime }-D), \end{aligned}$$

(C18)

where \(dW_{1}\), \(dW_{2}\) and \(dW_{c}\) are the stochastic differentials corresponding to the Gaussian processes \(\xi _{1}\), \(\xi _{2}\) and \(\xi _{c}\) respectively. Also, we have defined the fast and slow time constants

$$\begin{aligned} \tau _{f}&= \frac{\tau }{1+\sqrt{w_{12}w_{21}}},\nonumber \\ \tau _{s}&= \frac{\tau }{1-\sqrt{w_{12}w_{21}}}, \end{aligned}$$

(C19)

from which it is clear that there is an instability for \(w_{12}w_{21}>1\). The original firing rates are linear combinations of these variables. Specifically,

$$\begin{aligned} r_{1}&= \sqrt{\frac{w_{12}}{w_{21}}}(-u_{1}+u_{2}),\nonumber \\ r_{2}&= u_{1}+u_{2}. \end{aligned}$$

(C20)

Finally, ignoring the dependence on the initial condition, we have

$$\begin{aligned} r_{1}(t)&= \sqrt{1-|c|}\frac{1}{2\tau }\Bigg [\int \limits _{0}^{t}\Big (e^{-(t-t^{\prime })/\tau _{f}}+e^{-(t-t^{\prime })/\tau _{s}}\Big )dW_{1}(t^{\prime })\\&\quad +\sqrt{\frac{w_{12}}{w_{21}}}\int \limits _{0}^{t}\Big (e^{-(t-t^{\prime })/\tau _{s}}-e^{-(t-t^{\prime })/\tau _{f}}\Big )dW_{2}(t^{\prime })\Bigg ]\\&\quad +\sqrt{|c|}\frac{1}{2\tau }\Bigg [\textrm{sgn}(c)\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{c}(t^{\prime })+\textrm{sgn}(c)\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{c}(t^{\prime })\\&\quad -\sqrt{\frac{w_{12}}{w_{21}}}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{c}(t^{\prime }-D)+\sqrt{\frac{w_{12}}{w_{21}}}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{c}(t^{\prime }-D)\Bigg ],\\ r_{2}(t)&= \sqrt{1-|c|}\frac{1}{2\tau }\sqrt{\frac{w_{21}}{w_{12}}}\int \limits _{0}^{t}\Big (e^{-(t-t^{\prime })/\tau _{s}}-e^{-(t-t^{\prime })/\tau _{f}}\Big )dW_{1}(t^{\prime })\\&\quad +\sqrt{1-|c|}\frac{1}{2\tau }\int \limits _{0}^{t}\Big (e^{-(t-t^{\prime })/\tau _{f}}+e^{-(t-t^{\prime })/\tau _{s}}\Big )dW_{2}(t^{\prime })\\&\quad +\sqrt{|c|}\frac{1}{2\tau }\Bigg [-\textrm{sgn}(c)\sqrt{\frac{w_{21}}{w_{12}}}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{c}(t^{\prime })\\&\quad +\textrm{sgn}(c)\sqrt{\frac{w_{21}}{w_{12}}}\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{c}(t^{\prime })\\&\quad +\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{c}(t^{\prime }-D)+\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{s}}dW_{c}(t^{\prime }-D)\Bigg ]. \end{aligned}$$

(C21)

The slow dynamics of the synaptic weights, which is calculated self-consistently through the rates, is therefore also stochastic. In this case the integral over the fast time in Eq. (A5) yields the expected value of the product of rates. Namely,

$$\begin{aligned} \partial _{t_{s}}w_{12} = \int \limits _{-\infty }^{\infty }dT\,A(T)\,E\big (r_{1}(t)r_{2}(t+T)\big ), \end{aligned}$$

and similarly for \(w_{21}\). Evaluating this expectation requires products of stochastic integrals. For integrals of independent processes this expectation is always zero, while for integrals of the same process the product can be expressed as a standard integral through the so-called Itô isometry. For example,

$$\begin{aligned}&E\Bigg (\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{1}\int \limits _{0}^{t+T}e^{-(t+T-t^{\prime })/\tau _{f}}dW_{1}\Bigg )\nonumber \\&\quad = e^{-T/\tau _{f}}E\Bigg (\int \limits _{0}^{t}e^{-(t-t^{\prime })/\tau _{f}}dW_{1}\int \limits _{0}^{t+T}e^{-(t-t^{\prime })/\tau _{f}}dW_{1}\Bigg )\nonumber \\&\quad = e^{-T/\tau _{f}}\sigma ^{2}\int \limits _{0}^{\min (t,t+T)}e^{-2(t-u)/\tau _{f}}du,\nonumber \\&\quad = {\left\{ \begin{array}{ll} e^{-T/\tau _{f}}\frac{\sigma ^{2}\tau _{f}}{2}\Big (1-e^{-2t/\tau _{f}}\Big ) & T>0\\ e^{T/\tau _{f}}\frac{\sigma ^{2}\tau _{f}}{2}\Big (1-e^{-2(t+T)/\tau _{f}}\Big ) & T\le 0 \end{array}\right. }\nonumber \\&\quad = \frac{\sigma ^{2}\tau _{f}}{2}\Big (e^{-|T|/\tau _{f}}-e^{-(2t+T)/\tau _{f}}\Big ), \end{aligned}$$

(C22)

which is independent of t at long times. Performing these integrals yields the evolution equations, Eq. (7).
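The long-time limit of Eq. (C22) is the stationary autocorrelation of an Ornstein-Uhlenbeck process, which can be checked by Monte Carlo; a minimal sketch with an Euler-Maruyama integrator and illustrative parameters:

```python
# Sketch: Monte Carlo check of the long-time limit of Eq. (C22),
# E(...) -> (sigma^2 tau_f / 2) exp(-|T|/tau_f), using Euler-Maruyama for
# x(t) = int_0^t exp(-(t-s)/tau_f) dW(s).
import numpy as np

rng = np.random.default_rng(4)
tau_f, sigma, dt, T = 5.0, 1.0, 0.01, 1.5
steps, trials = 4000, 20000
nT = int(T / dt)

x = np.zeros(trials)
snapshot = np.zeros(trials)
for k in range(steps + nT):
    if k == steps:
        snapshot = x.copy()               # record x(t) at a late time t
    x += -x * dt / tau_f + sigma * np.sqrt(dt) * rng.standard_normal(trials)
print("Monte Carlo:", np.mean(snapshot * x))
print("theory:     ", sigma**2 * tau_f / 2 * np.exp(-T / tau_f))
```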

C.1 Evolution equations for small weights

If \(w_{12}\ll 1\) and \(w_{21}\ll 1\) then, writing the plasticity rule in the standard exponential form \(A(T) = A_{+}e^{-T/\tau _{+}}\) for \(T>0\) and \(A(T) = -A_{-}e^{T/\tau _{-}}\) for \(T\le 0\), and taking \(D\ge 0\),

$$\begin{aligned} \dot{w}_{12}&= \frac{c\sigma ^{2}}{2\tau }\Bigg [A_{+}\tau _{+}\tau \Bigg (\frac{e^{-D/\tau _{+}}-e^{-D/\tau }}{\tau _{+}-\tau }+\frac{e^{-D/\tau _{+}}}{\tau _{+}+\tau }\Bigg )-A_{-}\frac{\tau _{-}\tau }{\tau _{-}+\tau }e^{-D/\tau }\Bigg ]\\&\quad +\frac{\sigma ^{2}}{4}\Bigg (\frac{A_{+}\tau _{+}}{\tau +\tau _{+}}-\frac{A_{-}\tau _{-}}{\tau +\tau _{-}}\Bigg )(w_{12}+w_{21})-\frac{A_{-}\sigma ^{2}\tau _{-}^{2}}{2(\tau +\tau _{-})^{2}}w_{12}+\frac{A_{+}\sigma ^{2}\tau _{+}^{2}}{2(\tau +\tau _{+})^{2}}w_{21},\\ \dot{w}_{21}&= \frac{c\sigma ^{2}}{2\tau }\Bigg [A_{+}\frac{\tau _{+}\tau }{\tau _{+}+\tau }e^{-D/\tau }-A_{-}\tau _{-}\tau \Bigg (\frac{e^{-D/\tau }-e^{-D/\tau _{-}}}{\tau -\tau _{-}}+\frac{e^{-D/\tau _{-}}}{\tau _{-}+\tau }\Bigg )\Bigg ]\\&\quad +\frac{\sigma ^{2}}{4}\Bigg (\frac{A_{+}\tau _{+}}{\tau +\tau _{+}}-\frac{A_{-}\tau _{-}}{\tau +\tau _{-}}\Bigg )(w_{12}+w_{21})+\frac{A_{+}\sigma ^{2}\tau _{+}^{2}}{2(\tau +\tau _{+})^{2}}w_{12}-\frac{A_{-}\sigma ^{2}\tau _{-}^{2}}{2(\tau +\tau _{-})^{2}}w_{21}. \end{aligned}$$

If \(w_{21}\ll 1\) and \(w_{12}\) can be order one, then

$$\begin{aligned} \dot{w}_{12}&= \frac{c\sigma ^{2}}{2\tau }\Bigg [A_{+}\tau _{+}\tau \Bigg (\frac{e^{-D/\tau _{+}}-e^{-D/\tau }}{\tau _{+}-\tau }+\frac{e^{-D/\tau _{+}}}{\tau _{+}+\tau }\Bigg )-A_{-}\frac{\tau _{-}\tau }{\tau _{-}+\tau }e^{-D/\tau }\Bigg ]\\&\quad +w_{12}\Bigg [\frac{\sigma ^{2}}{4}\Bigg (\frac{A_{+}\tau _{+}}{\tau +\tau _{+}}-\frac{A_{-}\tau _{-}}{\tau +\tau _{-}}\Bigg )-\frac{A_{-}\sigma ^{2}\tau _{-}^{2}}{2(\tau +\tau _{-})^{2}}\Bigg ],\\ \dot{w}_{21}&= \frac{c\sigma ^{2}}{2\tau }\Bigg [A_{+}\frac{\tau _{+}\tau }{\tau _{+}+\tau }e^{-D/\tau }-A_{-}\tau _{-}\tau \Bigg (\frac{e^{-D/\tau }-e^{-D/\tau _{-}}}{\tau -\tau _{-}}+\frac{e^{-D/\tau _{-}}}{\tau _{-}+\tau }\Bigg )\Bigg ]\\&\quad +w_{12}\Bigg [\frac{\sigma ^{2}}{4}\Bigg (\frac{A_{+}\tau _{+}}{\tau +\tau _{+}}-\frac{A_{-}\tau _{-}}{\tau +\tau _{-}}\Bigg )+\frac{A_{+}\sigma ^{2}\tau _{+}^{2}}{2(\tau +\tau _{+})^{2}}\Bigg ]. \end{aligned}$$

If \(w_{12}\ll 1\) and \(w_{21}\) can be order one, then

$$\begin{aligned} \dot{w}_{12}&= \frac{c\sigma ^{2}}{2\tau }\Bigg [A_{+}\tau _{+}\tau \Bigg (\frac{e^{-D/\tau _{+}}-e^{-D/\tau }}{\tau _{+}-\tau }+\frac{e^{-D/\tau _{+}}}{\tau _{+}+\tau }\Bigg )-A_{-}\frac{\tau _{-}\tau }{\tau _{-}+\tau }e^{-D/\tau }\Bigg ]\\&\quad +w_{21}\Bigg [\frac{\sigma ^{2}}{4}\Bigg (\frac{A_{+}\tau _{+}}{\tau +\tau _{+}}-\frac{A_{-}\tau _{-}}{\tau +\tau _{-}}\Bigg )+\frac{A_{+}\sigma ^{2}\tau _{+}^{2}}{2(\tau +\tau _{+})^{2}}\Bigg ],\\ \dot{w}_{21}&= \frac{c\sigma ^{2}}{2\tau }\Bigg [A_{+}\frac{\tau _{+}\tau }{\tau _{+}+\tau }e^{-D/\tau }-A_{-}\tau _{-}\tau \Bigg (\frac{e^{-D/\tau }-e^{-D/\tau _{-}}}{\tau -\tau _{-}}+\frac{e^{-D/\tau _{-}}}{\tau _{-}+\tau }\Bigg )\Bigg ]\\&\quad +w_{21}\Bigg [\frac{\sigma ^{2}}{4}\Bigg (\frac{A_{+}\tau _{+}}{\tau +\tau _{+}}-\frac{A_{-}\tau _{-}}{\tau +\tau _{-}}\Bigg )-\frac{A_{-}\sigma ^{2}\tau _{-}^{2}}{2(\tau +\tau _{-})^{2}}\Bigg ]. \end{aligned}$$
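These drift expressions can be checked against a direct Monte Carlo estimate: simulate the rate dynamics with small frozen weights under the drive of Eq. (C11), estimate the lagged cross-correlation, and contract it with the STDP window. The sketch below does this for \(\dot{w}_{12}\); the exponential window and all parameter values are illustrative assumptions:

```python
# Sketch: direct Monte Carlo estimate of the slow drift dw12/dts for the
# noisy drive, via the empirical lagged cross-correlation E(r1(t) r2(t+T)).
import numpy as np

rng = np.random.default_rng(5)
tau, dt, steps = 10.0, 0.1, 200000
sigma, c, D = 1.0, 0.5, 2.0
w12, w21 = 0.05, 0.05
Ap, Am, tau_p, tau_m = 1.0, 0.5, 17.0, 34.0      # balanced: Ap*tau_p = Am*tau_m

nD, amp = int(D / dt), sigma / np.sqrt(dt)
xi1, xi2, xic = (amp * rng.standard_normal(steps) for _ in range(3))
I1 = np.sqrt(1 - abs(c)) * xi1 + np.sign(c) * np.sqrt(abs(c)) * xic
I2 = np.sqrt(1 - abs(c)) * xi2 + np.sqrt(abs(c)) * np.roll(xic, nD)

r1 = np.zeros(steps); r2 = np.zeros(steps)
for k in range(1, steps):
    r1[k] = r1[k-1] + dt / tau * (-r1[k-1] + w12 * r2[k-1] + I1[k-1])
    r2[k] = r2[k-1] + dt / tau * (-r2[k-1] + w21 * r1[k-1] + I2[k-1])

n_lag = 1000                                     # lags out to +/- 100 time units
lags = dt * np.arange(-n_lag, n_lag + 1)
A = np.where(lags > 0, Ap * np.exp(-lags / tau_p), -Am * np.exp(lags / tau_m)) * dt

base = r1[n_lag:steps - n_lag]
drift = 0.0
for i, lag in enumerate(range(-n_lag, n_lag + 1)):
    drift += A[i] * np.mean(base * r2[n_lag + lag: steps - n_lag + lag])
print("estimated dw12/dts:", drift)
```

For small weights this estimate should agree with the leading-order expressions above.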
