A thermodynamical model of non-deterministic computation in cortical neural networks

The total entropy of system A and system B, prior to compression, is defined as:

$S(\rho_{\mathrm{A}}) + S(\rho_{\mathrm{B}}) = -\mathrm{Tr}\,(\rho_{\mathrm{A}} \ln \rho_{\mathrm{A}}) - \mathrm{Tr}\,(\rho_{\mathrm{B}} \ln \rho_{\mathrm{B}})$ (23)

The net entropy of system A and system B, after compression, is defined as:

$S(\rho_{\mathrm{AB}}) = -\mathrm{Tr}\,(\rho_{\mathrm{AB}} \ln \rho_{\mathrm{AB}})$ (24)

The change in entropy during compression is equivalent to the quantity of correlations identified between system A and system B:

$I(\mathrm{A{:}B}) = S(\rho_{\mathrm{A}}) + S(\rho_{\mathrm{B}}) - S(\rho_{\mathrm{AB}})$ (25)

The quantity of entropy is maximized when the system state is completely random and all non-zero eigenvalues have equal probability $p_x$. The randomness of information is minimized if a more predictable value is identified. In general, the von Neumann entropy of the system is less than maximal when some system states are more likely than others. $S(\rho_{\mathrm{AB}})$ and $S(\rho_{\mathrm{A}}) + S(\rho_{\mathrm{B}})$ reach equality only if no correlations are identified at all. During the time evolution, entropy is reduced, such that:

$S(\rho_{\mathrm{AB}}) \leqslant S(\rho_{\mathrm{A}}) + S(\rho_{\mathrm{B}})$ (26)
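To make these quantities concrete, the following minimal numerical sketch evaluates the terms of equations (23)–(26) for a hypothetical classically correlated two-qubit state; the state, the natural-log convention, and all variable names are illustrative assumptions, not taken from the neuronal model itself.

    # A minimal numerical sketch of equations (23)-(26), assuming the
    # natural-log convention S(rho) = -Tr(rho ln rho). The two-qubit state
    # below is a hypothetical, classically correlated example chosen for
    # illustration only.
    import numpy as np

    def von_neumann_entropy(rho):
        # S(rho) = -sum_x p_x ln p_x, over the non-zero eigenvalues p_x
        p = np.linalg.eigvalsh(rho)
        p = p[p > 1e-12]
        return float(-np.sum(p * np.log(p)))

    def reduced_state(rho_ab, keep):
        # Trace out one qubit of a 4x4 joint density matrix; keep = 'A' or 'B'
        r = rho_ab.reshape(2, 2, 2, 2)
        return np.trace(r, axis1=1, axis2=3) if keep == 'A' else np.trace(r, axis1=0, axis2=2)

    # 50/50 mixture of |00><00| and |11><11|: A and B perfectly correlated
    rho_ab = np.zeros((4, 4))
    rho_ab[0, 0] = 0.5
    rho_ab[3, 3] = 0.5

    S_a  = von_neumann_entropy(reduced_state(rho_ab, 'A'))
    S_b  = von_neumann_entropy(reduced_state(rho_ab, 'B'))
    S_ab = von_neumann_entropy(rho_ab)                # equation (24)
    I_ab = S_a + S_b - S_ab                           # equation (25)

    print(S_a + S_b, S_ab, I_ab)   # 1.386..., 0.693..., 0.693...
    assert S_ab <= S_a + S_b       # equation (26): subadditivity

Here the identified correlations ($\ln 2$ nats) exactly account for the entropy removed by compression, as equation (25) states.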

The energetic account must always be balanced, with the net amount of energy acquired by the encoding system distributed toward either free energy or entropy:

$\Delta E = \Delta G + T\Delta S$ (27)

In order to balance the account, any loss of entropy during compression must be paired with a release of free energy, with $\Delta G = G_t - G_0$:

$T\Delta S = -\Delta G$ (28)

Since the total energy of the system does not change over the time evolution, these two values are equivalent:

$T(S_0 - S_t) = G_t - G_0$ (29)

The compression of information entropy is paired with free energy release. This conservation law is known as the Landauer principle [31–34]. This free energy is available to do work, allowing the system to attain a more ordered state. This work involves shifting the resting membrane potential to optimally encode the state of the surrounding environment. This increase in Gibbs free energy, upon information compression, is paired with a decrease in the neuronal membrane potential:

$\Delta G \propto -\Delta V_{\mathrm{m}}$ (30)

Information compression locally increases free energy and locally decreases the membrane potential. Neurons which have gained certainty during the computation will therefore gain free energy and move further away from action potential threshold, restoring their resting potential. By contrast, neurons which have gained uncertainty during the computation will lose free energy and move toward action potential threshold, increasing the likelihood of firing a signal.
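For a sense of scale, the Landauer bound cited above can be evaluated directly. The sketch below assumes a brain temperature of roughly 310 K, which is an illustrative assumption rather than a figure quoted in this section.

    # A back-of-envelope evaluation of the Landauer bound: erasing one bit
    # of information releases at least k_B * T * ln(2) of free energy.
    # T = 310 K is an assumed approximate brain temperature.
    import math

    k_B = 1.380649e-23              # Boltzmann constant, J/K
    T = 310.0                       # K, assumed brain temperature

    dG_per_bit = k_B * T * math.log(2)
    print(f"{dG_per_bit:.3e} J per bit")   # ~2.97e-21 J per erased bit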

In this model, information compression both decreases the distribution of possible system states and increases the amount of thermodynamic free energy which is available to encode the optimal system state, resulting in the synchronous firing of an ensemble of neurons across the cortex (figure 1). During the computation, the trial Hamiltonian is resolved, energy is redistributed, and the system physically instantiates the solution to a computational problem by identifying the most likely state of the surrounding environment.

Figure 1. A prospective schematic relating visual stimuli with the computational entropy of a hypothetical single neuron, expected signaling outcomes, and expected network dynamics. (A) Greater 'certainty' in the visual stimulus is characterized by higher correlations between neighboring pixels in one orientation and higher contrast between neighboring pixels in another orientation. Sample images are used from a study of the neural correlates of objectively uncertain visual stimuli [35]. (A') The reduction of uncertainty is equivalent to entropy compression, or a reduction of the probability distribution representing possible internal states. Shown here is a hypothetical entropy profile for a neuron whose receptive field does not match the stimulus location. (A'') Entropy compression is paired with a partial recovery of the thermodynamic (Gibbs) free energy that was distributed to generate that probability distribution. The recovery of Gibbs free energy prompts the neuron to lower its voltage and return to resting potential. (A''') A small ensemble of neurons across the cortex fires synchronously to encode this distinguishable stimulus. (B) Greater 'uncertainty' in the visual stimulus is characterized by few correlations between neighboring pixels, obscuring recognizability of edges and orientations within the image. (B') Shown here is a hypothetical entropy profile for a neuron with the same receptive field as above. (B'') The probability distribution remains intact, and so at the completion of the computational cycle, thermodynamic free energy remains distributed toward entropy, and insufficient free energy is available to maintain the resting potential, so the neuron crosses threshold and fires an action potential. (B''') During the up-state, cortical neurons retain sensitivity to random electrical noise, allowing this computational process to have discrete effects on the membrane voltage of each computational unit. A large ensemble of neurons across the cortex (including this one) fires synchronously to encode this highly uncertain stimulus. Neurons represented as black circles will not fire, encoding the certainty that their receptive field is not activated; neurons represented as red stars will fire, encoding uncertainty whether their receptive field is activated; neurons represented as pink stars will also fire, upon receiving a suprathreshold stimulus. Reproduced from [35]. CC BY 4.0.


If the system gains internal heat over time t, then this energy can be distributed toward increasing entropy or increasing the amount of free energy available to do work. Minimizing the amount of entropy therefore leaves more free energy available to do work within the system. Since energy is always conserved, the account must always be balanced as the trial Hamiltonian is resolved. If the brain does engage in such non-deterministic computation, with a trial Hamiltonian maximizing free energy availability, then the brain should exhibit better-than-classical energy efficiency. The inefficiency of the system, or the net entropy production, can be calculated using empirical measures. The amount of temperature-entropy $T\Delta S$ produced over time t is equal to the net change in enthalpy $\Delta E$ less the change in available free energy $\Delta G$:

$T\Delta S = \Delta E - \Delta G$ (31)

The enthalpy E is the total heat content of the system. If work is done on a system, $\Delta E$ is equal to the net change in internal energy of the system $\Delta U$ plus the net work done on the system $\Delta W$. If work is done by the system, $\Delta E$ is given by the net change in internal energy of the system $\Delta U$ minus the net work done by the system $\Delta W$. In the case of reversible changes to the quantities of either volume V or pressure p (non-electrical work), $\Delta W = - V \Delta p$. For a cortical neural network—a far-from-equilibrium thermodynamical system of electrochemical cells, which actively traps energy to accomplish work—the quantity of $\Delta E$ is given by:

$\Delta E = \Delta U - \Delta W = (\Delta U_{\mathrm{in}} - \Delta U_{\mathrm{out}}) - \Delta W$ (32)

The change in the total heat content of the system $\Delta E$ is given by the net energy gained by the system $\Delta U$ minus the amount of work completed by the system $\Delta W$ over some period of time t. The net energy $\Delta U$ is given by the quantity of thermal energy entering the system $\Delta U_{\mathrm{in}}$ minus the quantity of thermal energy exiting the system $\Delta U_{\mathrm{out}}$ over time t. Combining equations (31) and (32) yields:

$T\Delta S = (\Delta U_{\mathrm{in}} - \Delta U_{\mathrm{out}}) - \Delta W - \Delta G$ (33)

The net inefficiency of the system, given the temperature-entropy $T \Delta S$, is equal to the net change in internal energy of the system $\Delta U$, less the work done by the system $\Delta W$, less the change in free energy $\Delta G$.

Now that all energy in the system is accounted for, we can calculate values for $\Delta U_{\mathrm{in}}$, $\Delta U_{\mathrm{out}}$, $\Delta W$, $\Delta G$, and T, to estimate the quantity of $\Delta S$ generated by the human brain. The net change in internal energy of the system $\Delta U$ is equivalent to the caloric value supplied by the bloodstream to serve the neural network (the energy input $\Delta U_{\mathrm{in}}$) minus the amount of excess heat produced during that period of time (the energy output $\Delta U_{\mathrm{out}}$). To make a calculation based on neuronal signaling activity only, the change in free energy $\Delta G$ can be estimated as the quantity of energy used to set up the resting potential, while the amount of work $\Delta W$ can be estimated as the quantity of adenosine triphosphate (ATP) expended on setting up the electrochemical resting potential. Since these quantities are redundant over these timescales, only the work done to accomplish these tasks, $\Delta W$, is accounted for here. The quantity of entropy $\Delta S$ produced in the human brain over time t can then be estimated by accounting for the constant overall temperature of the system, T:

$\Delta S = \dfrac{(\Delta U_{\mathrm{in}} - \Delta U_{\mathrm{out}}) - \Delta W - \Delta G}{T}$ (34)

The energy consumed by the human brain over the course of a day is approximately 20$\%$ of oxygen intake and 20$\%$ of calories consumed by the body, a value that remains relatively constant regardless of variation in mental tasks or amount of motor activity [36, 37]. In adult males, this energetic usage is approximately 400 kilocalories or 1673.6 kJ per day. This estimated rate of $\Delta U_{\mathrm{in}}$ is equal to 19.37 J s−1, or 19.37 W.
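The rate quoted above follows directly from the daily caloric figure; as a unit-conversion check, assuming 1 kcal = 4.184 kJ and averaging over a 24-hour day:

    # Unit-conversion check for the Delta-U_in estimate, assuming
    # 1 kcal = 4.184 kJ and a 24-hour averaging window.
    KCAL_TO_J = 4184.0
    SECONDS_PER_DAY = 24 * 60 * 60       # 86,400 s

    dU_in = 400 * KCAL_TO_J / SECONDS_PER_DAY
    print(f"dU_in = {dU_in:.2f} W")      # 19.37 W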

At rest, the human brain has an estimated metabolic rate of 3.5 ml O2/100 g min−1, with venous blood flow removing heat [38]. This rate yields a sustained jugular venous-to-arterial temperature difference of 0.3 °C [39, 40]. This value corresponds to an estimated heat production of approximately 6 J kg−1 min−1 [41, 42]. The rate of $\Delta U_{\mathrm{out}}$ is therefore estimated to be 0.14 J s−1, or 0.14 W.

The amount of energy expended on work $\Delta W$ can be estimated by quantifying ATP turnover in the human brain. The quantity of ATP used on signaling processes in rat neocortex has been estimated at 21 µmol g−1 min−1 [43], with experimental measurements of total ATP use approximating 30–50 µmol g−1 min−1 [44–47], although estimates vary in both directions [48, 49]. Limiting the estimate to signaling processes only, the ATP turnover in neocortical grey matter (GM) is 0.35 µmol g−1 s−1.

Assuming the human brain has a similar rate of ATP turnover to other mammals, the quantity of ATP used on signaling processes in the human brain can be calculated by estimating the total amount of neocortical GM. The size of the human brain by volume is rather variable, with a measured range of 1053–1499 cm3 in adult men and 975–1398 cm3 in adult women [50, 51]. The quantity of GM is 49.4$\%$–58.5$\%$ in adult men and 52.1$\%$–59.6$\%$ in adult women, both averaging 55$\%$ [52]. Since the average adult male brain is 1.4 kg [53], the approximate quantity of GM is 770 g, and so the estimated ATP turnover rate in this energetically expensive tissue is 0.27 mmol s−1. In living cells, the hydrolysis of ATP releases approximately 57 kJ mol−1 of free energy. Given these values, the GM of the average adult male brain expends ATP on neuronal signaling processes at a rate of 15.36 J s−1, or 15.36 W. This is the estimated value of $\Delta W$.
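The $\Delta U_{\mathrm{out}}$ and $\Delta W$ figures can be reproduced from the quantities quoted above; only the arithmetic is added in this sketch, with the input values taken from the text:

    # Reproducing the Delta-U_out and Delta-W estimates from the figures
    # quoted in the text.
    BRAIN_MASS_KG = 1.4         # average adult male brain
    HEAT_RATE = 6.0             # J kg^-1 min^-1, resting heat production
    GM_MASS_G = 770.0           # ~55% of a 1.4 kg brain
    ATP_RATE = 0.35e-6          # mol g^-1 s^-1, signaling ATP turnover
    ATP_ENERGY = 57e3           # J mol^-1, ATP hydrolysis free energy

    dU_out = BRAIN_MASS_KG * HEAT_RATE / 60          # -> 0.14 J/s
    dW = GM_MASS_G * ATP_RATE * ATP_ENERGY           # -> 15.36 J/s
    print(f"dU_out = {dU_out:.2f} W, dW = {dW:.2f} W")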

Here, $\Delta U_{\mathrm{in}}$ is the amount of incoming caloric energy (19.37 J s−1), $\Delta U_{\mathrm{out}}$ is the heat loss from the system (0.14 J s−1), $\Delta W$ is the amount of energy used to set up the electrochemical resting potential (15.36 J s−1), and $\Delta G$ is the amount of free energy stored in the neuronal membrane that is released during the action potential. [The latter quantity is negligible at these second-long timescales, because all free energy is eventually distributed toward either entropy or work.] Each parameter is provided as a rate of energy turnover in Joules per second, or Watts. Equation (34) can then be used to calculate the quantity of entropy produced by the human brain over time t.
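Putting the four estimates together, equation (34) yields an entropy production rate; in the sketch below, $\Delta G$ is taken as zero over second-long timescales, as stated above, and T = 310 K is an assumed overall brain temperature rather than a figure quoted in this section.

    # Plugging the estimates above into equation (34). Delta-G is set to
    # zero over second-long timescales, as stated in the text; T = 310 K
    # is an assumed overall brain temperature.
    dU_in, dU_out, dW, dG = 19.37, 0.14, 15.36, 0.0   # J/s (W)
    T = 310.0                                          # K

    dS_rate = (dU_in - dU_out - dW - dG) / T
    print(f"dS = {dS_rate * 1000:.1f} mJ K^-1 s^-1")   # ~12.5 mJ/(K s)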
