Transfer function approach to understanding periodic forcing of signal transduction networks

Signal transduction networks are systems of biomolecules that deliver signals from one point in space to another, such as from the cell surface to the nucleus. This process facilitates biochemical changes in the cell that dictate phenotypic changes, such as cell growth, death, differentiation and migration. Different input patterns on the same network can yield different phenotypic outcomes. As such, there has been much focus on understanding the dynamics of networks in response to different inputs: in particular, pulses and oscillations, which occur physiologically for some hormones. Studying networks under periodic forcing is one way to mimic these systems, but it is also a general approach to understanding system dynamics in any network. In the literature, this is often called the frequency-domain approach (as opposed to the time-domain approach). Under periodic forcing of a linear system, the output is at the same frequency as the input but is attenuated in amplitude and shifted in phase. A compact mathematical representation of the ensuing dynamics is called the transfer function. The transfer function method from conventional control theory is a popular and highly useful tool for studying systems in the frequency domain. In this tutorial we espouse the use of a transfer function approach to understanding signal transduction networks. The paper is organised as follows. In sections 1.1 and 1.2 we give experimental and theoretical motivations for why studying biochemical networks in cells under periodic forcing is useful. In section 2 we introduce the reader to the mathematical background. In section 3, we illustrate the mathematical approaches covered in section 2 and apply them to some simple network motifs, yielding quantitative results. Section 4 covers a qualitative approach to gleaning the essential features of the transfer function given a network architecture. In section 5, we discuss some of the limitations of the transfer function approach and give an example where the transfer function fails. In section 6 we conclude with some final remarks.

1.1. Experimental motivations

Most cell biology experiments involve using a constant concentration of perturbant (effectively a step function in time). Yet, in physiological systems, levels of hormones are not constant but vary in time, often exhibiting rich dynamics (pulsing, fast and slow transients). One way of investigating these dynamics systematically is through pulses or periodic forcing. There are other reasons for using periodic forcing. The number of input parameters is increased (frequency in addition to amplitude). Periodic forcing has in some cases been reported to entrain an otherwise heterogeneously responding system, and the degree to which a system can be entrained by external periodic forcing can itself be investigated [1].

Especially with advancements in technologies like microfluidics and optogenetics, performing these periodic forcing experiments has become increasingly viable [2, 3].

Studying system dynamics in the frequency domain framework also presents a variety of advantages over its time domain counterpart.

The first area concerns signal processing and biological computing. Frequency domain methods allow extra information to be encapsulated into signals. Non-oscillatory signals lack the 'degrees of freedom' that sinusoidal ones have, such as periods and modulations, into which information can be encoded [4]. The periodic repetition of signals also allows for the removal of unwanted information like transient effects and noise. Related to this information encoding is the ability to measure the information carrying capacity, or bandwidth [5]. Bandwidth is obtained by measuring pathway activity in response to inputs at varying frequencies. It is a property often neglected in the time domain. Considering signal transduction networks as machinery with bandwidth opens doors towards a computational picture of biology. Just as two networks can differ in information carrying capacity, they can also respond differently to the same signal. The window of optimal response activity is called the dynamic range. Bandwidth and dynamic range are closely related quantities, and understanding one helps with understanding the other. This is especially evident in signal transduction pathways that consist of multiple components operating on different timescales [6]. With these specific windows of information capacity and dynamic response, we can see how using the right frequency of input signals allows us to observe special functionalities of complex biological systems and probe for the existence of system components [3, 7–13].
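As a minimal sketch of the bandwidth read-out just described, the snippet below extracts a cutoff frequency from a frequency sweep. The response values and the $-3\,$dB convention are assumptions for illustration, not data from any particular pathway.

```python
import numpy as np

# Hypothetical frequency sweep: measured output amplitude of a pathway
# driven at each forcing frequency (numbers are made up for illustration).
omegas = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # rad/s
gains  = np.array([4.00, 3.98, 3.85, 3.10, 1.75, 0.64, 0.20])

# A common convention: bandwidth is where the response falls below
# 1/sqrt(2) of its low-frequency value (the -3 dB point).
cutoff = gains[0] / np.sqrt(2)
idx = int(np.argmax(gains < cutoff))   # first frequency past the -3 dB point
print(f"bandwidth ~ {omegas[idx]:.2f} rad/s")   # -> 1.00 rad/s here
```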

The second area concerns biological engineering and medicine. Understanding dynamic ranges also presents avenues into controlling and engineering biological systems. This is especially useful in medicine and drug development, or 'frequency domain medicine'. In embryonic development, initial vertebrae formation is controlled by oscillating levels of gene expression that travel between the two ends of the embryo along a tissue called the presomitic mesoderm. There is a specific frequency, which depends on the species, required at this stage for successful vertebrae formation. Frequencies too high or too low cause abnormalities in vertebral development. This frequency is determined by the activity of the Notch, Wnt and fibroblast growth factor (FGF) pathways. Controlling these pathways allows us to control embryonic development [14–16]. Another example is the extracellular signal-regulated kinase (ERK)/mitogen-activated protein kinase (MAPK) pathway, which is responsible for cell proliferation. ERK pulses can be frequency modulated, and different modulations yield different proliferation rates. Knowing when and how to control frequency modulation in the ERK/MAPK pathway can allow us to control cell growth [7].

1.2. Theoretical motivations

Describing input signals in terms of periodically varying functions is fully general: any input signal can be decomposed into sinusoids via Fourier analysis, so characterising the response to sinusoids characterises the response to arbitrary inputs. The transfer function approach enables a compact description of network dynamics. As we shall show below, the transfer function approach simplifies the solution of differential equations into a solution of algebraic equations. Transfer functions are modular and can be combined following simple mathematical rules to represent large signal transduction networks.
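A minimal sketch of this Fourier reasoning: for a linear system, scaling each Fourier component of a periodic input by the transfer function and inverting gives the output. The first-order low-pass transfer function used here is a hypothetical stand-in, not any particular network.

```python
import numpy as np

# Hypothetical first-order low-pass transfer function H(w) = 1 / (1 + i*w*tau).
tau = 1.0
H = lambda w: 1.0 / (1.0 + 1j * w * tau)

t = np.linspace(0.0, 20.0, 2000, endpoint=False)
u = np.where((t % 10.0) < 5.0, 1.0, 0.0)       # square-wave input (period 10)

U = np.fft.rfft(u)                              # Fourier components of the input
w = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
y = np.fft.irfft(U * H(w), n=t.size)            # scale each component, invert
# y is the periodic response assembled component by component.
```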

The transfer function approach makes one very basic assumption: that our input–output system can be described by a set of dynamical equations in continuous time. Given this criterion, a sequence of logical consequences follows, giving rise to our transfer function.

Dynamical equations in continuous time take the form of differential equations. All differential equations have a linearised form. Linear differential equations in time have an equivalent form in frequency. These frequency-domain equations give us our transfer function.

In this section, we will walk through this logic together, covering the mathematical background along the way. We draw upon the formalisms presented in [17–19].

2.1. Dynamical equations

Physical processes can often be modelled using equations that evolve in time. We call these dynamical equations. For continuous time, dynamical equations take the form of differential equations.

To illustrate this, consider a system $S$ that evolves in continuous time $t$. In the case at hand, $S$ is our signal transduction network. Suppose that $S$ can be fully described by a set of variables

$$\{x_1(t),\, x_2(t),\, \ldots,\, x_n(t)\}. \tag{1}$$

That is, at any point in time $t = t^*$, the state of $S$ is defined by the set of values

$$\{x_1(t^*),\, x_2(t^*),\, \ldots,\, x_n(t^*)\}. \tag{2}$$

For signal transduction networks, each $x_i(t)$ might represent the quantity of a different biochemical species present in the network. These $x_i(t)$ terms are called state variables. The column vector of all state variables forms the state vector

$$\mathbf{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}. \tag{3}$$

Let us further suppose that we can interact with the system, such as by pipetting in and out species. That is, $S$ accepts inputs from and returns outputs to the external environment. Denote the set of inputs as

$$\{u_1(t),\, u_2(t),\, \ldots,\, u_m(t)\} \tag{4}$$

and outputs as

$$\{y_1(t),\, y_2(t),\, \ldots,\, y_o(t)\}. \tag{5}$$

The respective input and output vectors are

$$\mathbf{u}(t) = \begin{bmatrix} u_1(t) \\ \vdots \\ u_m(t) \end{bmatrix}, \qquad \mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ \vdots \\ y_o(t) \end{bmatrix}. \tag{6}$$

Since the system is fully described by its state variables, the time evolution of $S$ is equivalent to the time evolution of $\{x_i(t)\}$.

If each $x_i(t)$ can be described by an ordinary differential equation relating its first time derivative to a function of the set of state and input variables, namely

$$\dot{x}_i(t) = F_i\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right), \tag{7}$$

then we have all the dynamical information about the system required to describe its time evolution packaged into a set of $n$ equations. That is, the time evolution of $S$ is captured in the set of ordinary differential equations

$$\begin{aligned} \dot{x}_1(t) &= F_1\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right) \\ \dot{x}_2(t) &= F_2\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right) \\ &\;\;\vdots \\ \dot{x}_n(t) &= F_n\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right). \end{aligned} \tag{8}$$

This can be expressed in vector form as

$$\dot{\mathbf{x}}(t) = \mathbf{F}\left(\mathbf{x}(t), \mathbf{u}(t)\right), \tag{9}$$

where

$$\mathbf{F} = \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_n \end{bmatrix}. \tag{10}$$

Similarly, if each output variable can be expressed in terms of the state and input variables, then the dynamical information for the outputs is packaged into the following set of $o$ equations

$$\begin{aligned} y_1(t) &= G_1\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right) \\ y_2(t) &= G_2\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right) \\ &\;\;\vdots \\ y_o(t) &= G_o\left(x_1(t), \ldots, x_n(t),\, u_1(t), \ldots, u_m(t)\right), \end{aligned} \tag{11}$$

or alternatively in vector form

$$\mathbf{y}(t) = \mathbf{G}\left(\mathbf{x}(t), \mathbf{u}(t)\right), \tag{12}$$

where

$$\mathbf{G} = \begin{bmatrix} G_1 \\ G_2 \\ \vdots \\ G_o \end{bmatrix}. \tag{13}$$

Together equations (8) and (11), or equivalently (9) and (12), make up what is known as the state space representation of a dynamical system. They are also individually referred to as the state equation and the state observation equation, respectively. With the right choice of elements in the sets $\{x_i(t)\}$ and $\{y_j(t)\}$, one can always describe a continuous time dynamical system by equations (8) and (11); or equivalently (9) and (12). As a simple example, consider the dynamical equation

Equation (14)

and output equation

Equation (15)

With the following choice of state variables

Equation (16)

and, consequently, output variable

Equation (17)

we can express our original equations in forms (8) and (11) as follows:

Equation (18)

Equation (19)

2.2. Equilibria

A system may have one or more special combinations of values of state and input variables such that its state remains stationary in time. That is, there may exist specific sets of values $\{x_i\}$ and $\{u_j\}$ which, once assumed by the system, prevent $\{x_i(t)\}$ from changing at future times. Such special sets of values, when substituted into equation (8), reduce each row to zero:

$$F_i\left(\mathbf{x}_e, \mathbf{u}_e\right) = 0, \qquad i = 1, \ldots, n, \tag{20}$$

telling us that the rate of change of each state variable is zero.

Each of these special sets of values defines a special coordinate

$$\left(\mathbf{x}_e, \mathbf{u}_e\right) = \left(x_{1,e}, \ldots, x_{n,e},\, u_{1,e}, \ldots, u_{m,e}\right) \tag{21}$$

in parameter space. These special coordinates are termed equilibrium coordinates, or simply equilibria.
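To make this concrete, here is a minimal sketch of locating an equilibrium numerically. The two-species cascade, its rate constants and the use of scipy.optimize.fsolve are all hypothetical choices for illustration, not a system from the text.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical two-species cascade (purely illustrative):
#   dx1/dt = F1 = k1*u - d1*x1   (input u produces x1)
#   dx2/dt = F2 = k2*x1 - d2*x2  (x1 produces x2)
k1, d1, k2, d2 = 1.0, 0.5, 2.0, 1.0

def F(x, u):
    """Right-hand side of the state equation, as in equation (9)."""
    return np.array([k1 * u - d1 * x[0],
                     k2 * x[0] - d2 * x[1]])

# Hold the input constant and solve F(x_e, u_e) = 0, i.e. equation (20).
u_e = 1.0
x_e = fsolve(lambda x: F(x, u_e), x0=np.array([0.0, 0.0]))
print(x_e)   # [2. 4.], i.e. x1_e = k1*u_e/d1 and x2_e = k2*x1_e/d2
```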

2.3. Linearised dynamical equations

We wish to arrive at the transfer function from our state space formulation. However, there is a problem: equations (8) and (11) can be nonlinear. Nonlinear equations do not (strictly) have transfer functions; only linear equations do. A nonlinear equation may, however, have local regions where it behaves approximately linearly. In these regions, an approximate transfer function can be obtained. In this sense, the closest equivalent to the transfer function of a nonlinear equation is the transfer function of its local linear equations.

To obtain these local linear equations, we must linearise their original nonlinear forms about their equilibria. Linearisation involves approximating a function with its first order Taylor polynomial about a point of interest. The first order Taylor approximation of a function $f(\mathbf{z})$ about the coordinate $\mathbf{z} = \mathbf{z}^*$ is:

$$f(\mathbf{z}) \approx f(\mathbf{z}^*) + \nabla f(\mathbf{z}^*) \cdot (\mathbf{z} - \mathbf{z}^*). \tag{22a}$$

Equivalently, we can recentre the function by defining $\Delta f(\mathbf{z}) = f(\mathbf{z}) - f(\mathbf{z}^*)$ and say:

$$\Delta f(\mathbf{z}) \approx \nabla f(\mathbf{z}^*) \cdot (\mathbf{z} - \mathbf{z}^*). \tag{22b}$$

If we choose a coordinate $\mathbf{z} = \mathbf{z}^*$ such that $f(\mathbf{z}^*) = 0$, then equation (22b) reduces to

$$f(\mathbf{z}) \approx \nabla f(\mathbf{z}^*) \cdot (\mathbf{z} - \mathbf{z}^*). \tag{22c}$$

That is, for these special coordinates, the recentred function $\Delta f(\mathbf{z})$ is equal to the original function $f(\mathbf{z})$.
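As a quick numerical check of equation (22a), the sketch below linearises a hypothetical saturating (Michaelis–Menten-type) rate law about a chosen point; the function and the expansion point are assumptions for illustration.

```python
import numpy as np

# Hypothetical saturating rate law f(z) = Vmax * z / (K + z).
Vmax, K = 1.0, 0.5
f  = lambda z: Vmax * z / (K + z)
df = lambda z: Vmax * K / (K + z) ** 2     # analytic first derivative

z_star = 1.0                                # expansion point
f_lin = lambda z: f(z_star) + df(z_star) * (z - z_star)   # equation (22a)

for z in (0.9, 1.0, 1.1, 2.0):
    print(f"z={z:.1f}  f={f(z):.4f}  linearised={f_lin(z):.4f}")
# Agreement is excellent near z_star and degrades further away.
```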

If the function we are trying to linearise were any one of the dynamical equations in equation (8), that is $f(\mathbf{z}) = F_i(\mathbf{x}, \mathbf{u})$, then these special $\mathbf{z}^*$ coordinates would just be the equilibria that we mentioned in section 2.2. We can see this more explicitly if we try to Taylor approximate $F_i(\mathbf{x}, \mathbf{u})$ about an arbitrary coordinate $(\mathbf{x}, \mathbf{u}) = (\mathbf{x}^*, \mathbf{u}^*)$:

$$F_i(\mathbf{x}, \mathbf{u}) \approx F_i(\mathbf{x}^*, \mathbf{u}^*) + \sum_{k=1}^{n} \left.\frac{\partial F_i}{\partial x_k}\right|_{(\mathbf{x}^*, \mathbf{u}^*)} (x_k - x_k^*) + \sum_{l=1}^{m} \left.\frac{\partial F_i}{\partial u_l}\right|_{(\mathbf{x}^*, \mathbf{u}^*)} (u_l - u_l^*). \tag{23}$$

Now, if $(\mathbf{x}^*, \mathbf{u}^*) = (\mathbf{x}_e, \mathbf{u}_e)$, then

$$F_i(\mathbf{x}, \mathbf{u}) \approx F_i(\mathbf{x}_e, \mathbf{u}_e) + \sum_{k=1}^{n} \left.\frac{\partial F_i}{\partial x_k}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} (x_k - x_{k,e}) + \sum_{l=1}^{m} \left.\frac{\partial F_i}{\partial u_l}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} (u_l - u_{l,e}). \tag{24}$$

But, as we have established in equation (20), $F_i(\mathbf{x}_e, \mathbf{u}_e) = 0$. Therefore, equation (24) reduces to the form of equation (22c):

$$F_i(\mathbf{x}, \mathbf{u}) \approx \sum_{k=1}^{n} \left.\frac{\partial F_i}{\partial x_k}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} (x_k - x_{k,e}) + \sum_{l=1}^{m} \left.\frac{\partial F_i}{\partial u_l}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} (u_l - u_{l,e}). \tag{25}$$

Approximating the dynamical equations in (8) and (11) by their first order Taylor polynomials about arbitrary equilibrium coordinate

$$\left(\mathbf{x}_e, \mathbf{u}_e\right) \tag{26}$$

gives

$$\Delta\dot{x}_i(t) \approx \sum_{k=1}^{n} \left.\frac{\partial F_i}{\partial x_k}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} \Delta x_k(t) + \sum_{l=1}^{m} \left.\frac{\partial F_i}{\partial u_l}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} \Delta u_l(t) \tag{27}$$

and

$$\Delta y_j(t) \approx \sum_{k=1}^{n} \left.\frac{\partial G_j}{\partial x_k}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} \Delta x_k(t) + \sum_{l=1}^{m} \left.\frac{\partial G_j}{\partial u_l}\right|_{(\mathbf{x}_e, \mathbf{u}_e)} \Delta u_l(t), \tag{28}$$

respectively, where $\Delta x_k = x_k - x_{k,e}$ and $\Delta u_l = u_l - u_{l,e}$ (and, analogously, $\Delta y_j = y_j - G_j(\mathbf{x}_e, \mathbf{u}_e)$ for the outputs). The corresponding matrix forms of these two linearised systems of equations are

$$\Delta\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\Delta\mathbf{x}(t) + \mathbf{B}(t)\,\Delta\mathbf{u}(t) \tag{29}$$

and

$$\Delta\mathbf{y}(t) = \mathbf{C}(t)\,\Delta\mathbf{x}(t) + \mathbf{D}(t)\,\Delta\mathbf{u}(t), \tag{30}$$

where $\mathbf{A}(t)$, $\mathbf{B}(t)$, $\mathbf{C}(t)$ and $\mathbf{D}(t)$ are derivative matrices defined by their entries at row $r$ and column $c$:

$$A_{rc} = \left.\frac{\partial F_r}{\partial x_c}\right|_{(\mathbf{x}_e, \mathbf{u}_e)}, \quad B_{rc} = \left.\frac{\partial F_r}{\partial u_c}\right|_{(\mathbf{x}_e, \mathbf{u}_e)}, \quad C_{rc} = \left.\frac{\partial G_r}{\partial x_c}\right|_{(\mathbf{x}_e, \mathbf{u}_e)}, \quad D_{rc} = \left.\frac{\partial G_r}{\partial u_c}\right|_{(\mathbf{x}_e, \mathbf{u}_e)}. \tag{31}$$

So, given a system near equilibrium, we can approximate its dynamics from equations (9) and (12) with the simpler versions in equations (29) and (30).

If the entries in these derivative matrices are time independent, then equations (29) and (30) reduce to

$$\Delta\dot{\mathbf{x}}(t) = \mathbf{A}\,\Delta\mathbf{x}(t) + \mathbf{B}\,\Delta\mathbf{u}(t) \tag{32}$$

and

$$\Delta\mathbf{y}(t) = \mathbf{C}\,\Delta\mathbf{x}(t) + \mathbf{D}\,\Delta\mathbf{u}(t). \tag{33}$$

One might imagine the time-dependent matrices $\mathbf{A}(t)$, $\mathbf{B}(t)$, $\mathbf{C}(t)$ and $\mathbf{D}(t)$ as representing time varying rate constants of a signal transduction network, and their time-independent counterparts $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ and $\mathbf{D}$ as representing fixed rate constants.
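Continuing the hypothetical cascade from the equilibrium sketch above, the derivative matrices of equation (31) can be estimated numerically. The finite-difference helper below is an illustrative choice, not the only way to obtain them.

```python
import numpy as np

# Same hypothetical cascade as before:
#   dx1/dt = k1*u - d1*x1,  dx2/dt = k2*x1 - d2*x2,  output y = x2.
k1, d1, k2, d2 = 1.0, 0.5, 2.0, 1.0
F = lambda x, u: np.array([k1 * u - d1 * x[0], k2 * x[0] - d2 * x[1]])
x_e, u_e = np.array([2.0, 4.0]), 1.0       # equilibrium found earlier

def jacobian(fun, p0, eps=1e-6):
    """Forward-difference Jacobian of fun evaluated at p0, one column per variable."""
    p0 = np.asarray(p0, dtype=float)
    f0 = np.asarray(fun(p0))
    J = np.zeros((f0.size, p0.size))
    for c in range(p0.size):
        dp = np.zeros_like(p0)
        dp[c] = eps
        J[:, c] = (np.asarray(fun(p0 + dp)) - f0) / eps
    return J

A = jacobian(lambda x: F(x, u_e), x_e)         # entries dF_r/dx_c, equation (31)
B = jacobian(lambda u: F(x_e, u[0]), [u_e])    # entries dF_r/du_c
C = np.array([[0.0, 1.0]])                     # y = x2, so dG/dx = [0, 1]
D = np.array([[0.0]])                          # no direct input-to-output term
print(A)   # [[-0.5  0. ]
           #  [ 2.  -1. ]]
```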

2.4. The transfer function

The transfer function is a representation equivalent to the state space representation in equations (32) and (33). It is to the frequency domain what the state space representation is to the time domain. The transfer function describes how a system amplitude modulates and phase shifts incoming sinusoids as a function of frequency.

In terms of cell signalling networks, transfer functions tell us how sinusoidal biochemical inputs are amplified and delayed in the process of turning into outputs (see figure 1).

Figure 1. The system receives a green oscillatory input and returns a red oscillatory output (top). The output is modulated in amplitude by factor $M$ and shifted in time by phase angle $\phi$ (bottom). Both waves have identical frequency $\omega$.
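As a preview of where this section is heading, a standard result from linear systems theory states that the transfer function of the system in equations (32) and (33) is $H(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}$; evaluating it at $s = i\omega$ gives the amplitude modulation $M = |H(i\omega)|$ and phase shift $\phi = \arg H(i\omega)$ of figure 1. A minimal sketch, using the matrices computed for the hypothetical cascade above:

```python
import numpy as np

# Matrices for the hypothetical cascade computed earlier.
A = np.array([[-0.5, 0.0], [2.0, -1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])
I = np.eye(2)

# Standard linear-systems result: H(s) = C (sI - A)^(-1) B + D.
def H(omega):
    return (C @ np.linalg.inv(1j * omega * I - A) @ B + D)[0, 0]

for omega in (0.01, 0.1, 1.0, 10.0):
    h = H(omega)
    print(f"omega={omega:5.2f}  M={abs(h):6.3f}  phi={np.angle(h, deg=True):7.2f} deg")
# Low frequencies pass with the full steady-state gain (M -> 4 as omega -> 0,
# matching x2_e/u_e); high frequencies are attenuated and increasingly lagged.
```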


For each pair of inputs and outputs in a system, there is an associated transfer function, which describes the contribution of that input to its associated output. For a system with one input and one output, a single transfer function exists; see figure 2.

Figure 2. A single-input single-output system has one transfer function.


For a system of one input and two outputs, two transfer functions exist: one for the contribution of input 1 to output 1, and one for the contribution of input 1 to output 2.
