32nd Annual Computational Neuroscience Meeting: CNS*2023

1School of Computing, Faculty of Engineering and Physical Sciences, University of Leeds

*Email: n.cohen@leeds.ac.uk

How are brains organized, how does brain organization support its multitude of functions, and what general guiding principles can be gleaned from small, invertebrate brains? I will discuss these questions for a microscopic nematode worm called C. elegans, the only animal to date for which multiple complete connectomes are available. I will present a methodology for integrating topological and morphological characteristics across scales to create a meaningful brain map of this animal, one that sheds light on the modularity of the circuitry, on information-processing pathways, and on coordination across the different functional modules. The approach is data-driven, with the principles validated by population models along the way. I will argue that some of the anatomical and functional features are likely pervasive across the animal kingdom, and therefore that fundamental principles and methodologies can be learned from small, relatively simple brains. To conclude, I will present opportunities to integrate whole-brain imaging, behaviour, and connectomic data to create individual and population-based whole-brain simulation models.

K2 Using non-invasive brain stimulation to modulate cognitive functions

1Cognitive and Biological Psychology, Wilhelm Wundt Institute, Leipzig University & Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) Leipzig, Germany

*Email: hartwigsen@cbs.mpg.de

The human brain is flexible. Neural networks adapt to changing cognitive demands by flexibly recruiting different regions and connections. I will discuss recent insight into functional network interactions and short-term plasticity based on combined neurostimulation and neuroimaging approaches. I will argue that short-term plasticity enables flexible adaptation to challenging conditions. My key hypothesis is that disruption of higher-level cognitive functions can be compensated for by the recruitment of domain-general networks in our brain. Example studies illustrate how non-invasive brain stimulation can be used to temporarily interfere with normal processing, probing short-term network plasticity at the systems level. These results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity. I will also discuss recent attempts to improve targeting and dosing of non-invasive brain stimulation with electrical field modelling. A better understanding of the interaction between the induced electrical field in our brain and the current cognitive brain state helps to increase stimulation efficiency and may address the observed large variability in response to non-invasive brain stimulation.

K3 Mean-field approach to biophysical models of spiking networks

1NeuroPSI Institute – UMR9197, CNRS Paris-Saclay University, Campus CEA Saclay, Saclay, France

*Email: Alain.Destexhe@CNRS.fr

I will speak about how to derive mean-field models for biophysical models of neurons (such as HH, AdEx), and how this approach allows us to evaluate the emerging consequences (at large scales) of changing biophysical parameters (at microscopic scale). I will illustrate how changes in specific receptor types (GABA-A or NMDA) can lead to the emergence of slow-wave activity at the whole-brain level, which may explain how anesthetics switch the brain to slow-wave activity.

K4 Function follows Dynamics: controlling system states to modify information processing

1CNRS and Institute for Systems Neuroscience (INS), Aix-Marseille University, France & USIAS Fellow, Laboratory of Cognitive and Adaptative Neuroscience (LNCA), University of Strasbourg and CNRS

*Email: demian.battaglia@univ-amu.fr

The architect Louis Sullivan used to say that "Form follows Function". Here, we will claim instead that "Function follows Dynamics", as both theory and data analyses suggest that self-organized collective dynamics shape information processing.

The dynamics of neural circuits indeed give rise to diverse and shifting patterns of coordinated activity, from assemblies of cells with correlated firing up to multi-regional networks of functional connectivity. Information theory and modelling can be used to track how the formation and reconfiguration of patterns of collective activity translate into different ways of propagating and integrating information through space and time. While the structure of the considered circuits constrains the types of dynamical states they can generate, it does not fully determine them, so that a multiplicity of alternative information-processing modes can stem from circuits with a fixed connectome. These different processing modes can thus be selected by biasing the system to adopt the appropriate dynamical state.

We will focus specifically on oscillatory dynamics, considering models at different scales as well as analyses of different datasets. We will show that the haphazard variability of oscillatory patterns (ubiquitous both in silico and in vivo) does convey information about the presented stimuli or the task being performed. Despite their stochastic-like features, oscillatory bursts can coordinate with one another, revealing the existence of alternative system states, modulated by behavior and by cognitive processes such as working memory or attention. We will also find that the oscillatory patterns most predictive of the task being performed are not the ones with the strongest power or coherence, which dominate classic analyses based on extensive averaging over time and trials. On the contrary, the most informative patterns are the most "entangled", i.e., the ones exhibiting the largest degree of coordination between their components' fluctuations.

Such findings call for adopting "dynamics-aware" approaches to interrogating recordings of neural activity at different scales, since transient fluctuations, even the weakest ones, may be a computational resource rather than a disturbance, manifesting system-level complexity rather than noise.

F1 Critical scaling of whole-brain activity

Adrián Ponce-Alvarez*1, Morten L. Kringelbach2, Gustavo Deco3

1Universitat Politecnica de Catalunya, Department of Mathematics, Barcelona, Spain

2University of Oxford, Centre for Eudaimonia and Human Flourishing, Linacre College; Department of Psychiatry, Oxford, United Kingdom

3Universitat Pompeu Fabra, Barcelona, Spain

*Email: adrian.ponce@upc.edu

Scale-invariant neural activity has been reported in the brain activity of diverse species, using diverse recording techniques at different scales. The observation of scale-invariant statistics in spontaneous brain activity has lent support to the idea that neural systems operate close to a phase transition, or critical point, a regime that maximizes information capacity and transmission. Here, we investigated the relation between scale-invariant brain dynamics and the topology of structural connectivity.

For this, we analysed human resting-state (rs-) fMRI binarized signals, together with diffusion MRI (dMRI) connectivity and its approximation as an exponentially decaying function of the distance between brain regions. We analysed the rs-fMRI dynamics using a recently proposed phenomenological renormalization group (PRG) method that tracks the change of collective activity after successive coarse-grainings of correlated variables. We extended this method to incorporate information from structural connections. We found that collective brain activity reliably displays power-law scaling as a function of PRG coarse-graining, making it possible to summarize the fMRI collective activity by means of a set of power exponents.
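A minimal sketch of one variant of such a PRG analysis, greedily pairing the most correlated signals and tracking how the variance of the coarse-grained variables grows with cluster size (the pairing rule and the synthetic data here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def prg_step(x):
    """One phenomenological-RG step: greedily pair the most correlated
    variables and sum each pair. x: (n_vars, n_time); n_vars assumed even."""
    c = np.corrcoef(x)
    np.fill_diagonal(c, -np.inf)
    paired, coarse = set(), []
    # visit variables in order of their strongest available correlation
    for i in np.argsort(-c.max(axis=1)):
        if i in paired:
            continue
        partners = [j for j in np.argsort(-c[i]) if j not in paired and j != i]
        if not partners:
            break
        j = partners[0]
        paired.update((i, j))
        coarse.append(x[i] + x[j])
    return np.array(coarse)

# synthetic binarized activity (64 "regions", 2000 time points)
rng = np.random.default_rng(0)
x = (rng.random((64, 2000)) < 0.2).astype(float)

sizes, variances = [], []
k = 1
while x.shape[0] >= 2:
    sizes.append(k)
    variances.append(x.var(axis=1).mean())
    x = prg_step(x)
    k *= 2

# power-law scaling shows up as a straight line in log-log coordinates;
# the fitted slope is one of the scaling exponents
slope = np.polyfit(np.log(sizes), np.log(variances), 1)[0]
```

For independent variables the variance grows linearly with cluster size (slope near 1); a critical system deviates systematically from this, which is what the set of fitted exponents summarizes.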

Furthermore, to test whether the observed scaling of activity is a signature of criticality, we built a spin model based on large-scale connectivity. The model presents a second-order phase transition between ordered and disordered phases when a single parameter, which determines the global coupling strength, is varied. We next applied the PRG method to the model activity. Notably, we found that the scaling exponents observed in the data were strikingly close to those predicted by a critical system of spins interacting through exponentially decaying connectivity. Thus, our results suggest that criticality is the link between the connectome’s structure and scale-invariant brain dynamics.

Overall, our study shows that the combination of PRG, connectomes, and models can be useful to distinguish the working regime of the observed neural system. Since different behavioural and pathological brain states deviate from critical dynamics, extending these analyses to different brain states could provide new insights on phase transitions in neural systems and could serve to define brain state biomarkers in clinical and fundamental research.

A.P.-A. was supported by a Ramón y Cajal fellowship (RYC2020-029117-I) from FSE/Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation. A.P.-A. and G.D. were supported by the Human Brain Project SGA3 (945539). G.D. was supported by the Spanish Research Project AWAKENING (PID2019-105772GB-I00/AEI/https://doi.org/10.13039/501100011033), financed by the Spanish Ministry of Science, Innovation and Universities (MCIU), State Research Agency (AEI). M.L.K. is supported by the Centre for Eudaimonia and Human Flourishing (funded by the Pettit and Carlsberg Foundations) and Center for Music in the Brain (funded by the Danish National Research Foundation, DNRF117).

F2 An E-I Firing Rate model for PING and ING oscillations

Yiqing Lu*1, John Rinzel1

1New York University, Center for Neural Science, Courant Institute of Mathematical Sciences, New York, United States of America

Two mechanisms for gamma oscillations, Pyramidal-Interneuronal Network Gamma (PING) and Interneuronal Network Gamma (ING) [1], have been studied primarily with spiking network models [2], which are mathematically analyzable only in limiting conditions and computationally demanding for large-scale modeling. Further, ING is typically studied only in Inhibitory-Inhibitory (I-I) networks. Rate models describing the mean-field activities of neuronal ensembles can be used effectively to study network function and dynamics, including synchronization and rhythmicity. However, traditional Wilson-Cowan-like models [3], although amenable to mathematical analysis, are unable to capture some dynamics such as ING. Previous studies showed that adding an explicit delay can circumvent the problem [4], yet the resulting system is still too challenging to analyze.

We resolve this issue for a rate model by introducing a mean-voltage variable (v) that considers the subthreshold integration of inputs and works as an effective delay in the negative feedback loop between firing rate (r) and synaptic gating of inhibition (s). We first describe an r-s-v firing rate model for I-I networks, which is biophysically interpretable and capable of generating ING-like oscillations. Linear stability analysis, numerical branch-tracking and simulations show that the rate model captures many of the common features of spiking network models for ING [5].
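A minimal simulation of such an r-s-v loop for an I-I population, with the mean-voltage variable acting as an effective delay in the rate-inhibition feedback loop. The specific equations, nonlinearity, and constants below are illustrative assumptions chosen to put the loop past its oscillatory instability, not the authors' exact model:

```python
import numpy as np

# illustrative parameters (ms); the loop gain must exceed the Hopf
# threshold set by the three time constants for oscillations to emerge
tau_v, tau_r, tau_s = 5.0, 1.0, 8.0
w_ii, I_ext = 10.0, 8.0                                 # inhibitory weight, drive
f = lambda v: 1.0 / (1.0 + np.exp(-(v - 1.0) / 0.1))    # rate nonlinearity

dt, T = 0.05, 500.0
n = int(T / dt)
v = r = s = 0.0
r_trace = np.empty(n)
for k in range(n):
    # mean voltage integrates external drive minus recurrent inhibition,
    # acting as the effective delay in the negative feedback loop
    v += dt / tau_v * (-v + I_ext - w_ii * s)
    # population rate relaxes toward the voltage nonlinearity
    r += dt / tau_r * (-r + f(v))
    # synaptic gating of inhibition is driven by the population rate
    s += dt / tau_s * (-s + r)
    r_trace[k] = r

# count mean-crossings after the transient: sustained ING-like oscillation
late = r_trace[n // 2:]
crossings = int(np.sum(np.diff(np.sign(late - late.mean())) != 0))
```

Linearizing around the fixed point gives the Hopf condition directly from the three time constants, which is the kind of stability analysis the abstract refers to.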

Next, we extend the framework to E-I networks. With our 6-variable r-s-v model, we describe, for the first time in firing rate models, the transition from PING to ING obtained by increasing the external drive to the inhibitory population without adjusting synaptic weights (Fig. 1). Having PING and ING available in a single network, without invoking synaptic blockers, is a plausible and natural explanation of the emergence of, and transition between, two different types of gamma oscillations. With the biophysically interpretable mean-voltage variable, our analyzable model can easily be extended to a rate model for multi-compartment neuron models, or used as a building block of multi-regional models to help understand the functional roles of circuits at different levels.


1. Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH. Inhibition-based rhythms: experimental and mathematical observations on network dynamics. International Journal of Psychophysiology. 2000 Dec 1;38(3):315-36.

2. Börgers C, Kopell N. Effects of noisy drive on rhythms in networks of excitatory and inhibitory neurons. Neural Computation. 2005 Mar 1;17(3):557-608.

3. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal. 1972 Jan 1;12(1):1-24.

4. Keeley S, Fenton AA, Rinzel J. Modeling fast and slow gamma oscillations with interneurons of different subtype. Journal of Neurophysiology. 2017 Mar 1;117(3):950-65.

5. Wang XJ, Buzsáki G. Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. Journal of Neuroscience. 1996 Oct 15;16(20):6402-13.


Fig. 1. The r-s-v rate model for E-I network reproduces the switch from PING to ING in the spiking network model by increasing the drive to I population. A, B. Raster plots for PING, ING in a spiking network model. C, D. Time courses of the rate model for PING, ING. E. Bifurcation plot for the rate model. F. rE-rI phase plane. The steady-state relations for E and I populations appear as nullcline-like

F3 A brain-constrained deep neural-network model that can account for the readiness potential in self-initiated volitional action

Agnese Ušacka1, Aaron Schurger2, Max Garagnani*3

1Riga Stradiņš University, Faculty of Communication, Riga, Latvia

2Chapman University, Institute for Interdisciplinary Brain and Behavioral Sciences, Orange, USA

3Goldsmiths, University of London, Computing, London, United Kingdom

*Email: m.garagnani@gold.ac.uk

The readiness potential (RP) is a gradual buildup of negative electrical potential over the motor cortices prior to the onset of a self-initiated movement. It is typically interpreted as having a goal-directed nature, whereby it signals movement planning and preparation. However, a similar buildup can also be obtained by averaging continuous random neural fluctuations aligned to crests in their time series [1]. An alternative account of the RP is therefore that it reflects ongoing background neuronal noise that has at least a small influence on the precise time of movement onset [2]. While computational modelling studies have previously been used to adjudicate between these accounts, they did not employ a fully neuroanatomically and neurobiologically realistic architecture, and hence fell short of providing a cortical-level mechanistic validation of either theory.

Here, we investigated the stochastic origin of the RP by applying a fully brain-constrained deep neural-network model that reproduces real cortical neuron dynamics and the structure and connectivity of the relevant primary sensorimotor, secondary, and association areas of the frontal and temporal lobes. This model has previously been used to account for the neuromechanistic origins and cortical topography of volitional decisions to speak and act [3]. We used an emergent feature of this neural architecture – its ability to exhibit noise-driven periodic spontaneous ignitions of previously learnt internal representations (cell assemblies, CAs: circuits of strongly and reciprocally connected cells distributed across the entire network) – to mimic spontaneous decisions to act as observed in the classical Libet experiment. Specifically, we recorded the network’s activity for 2,000 trials, each beginning with a network reset and lasting until one of the CAs spontaneously ignited, and used the interval between trial start and spontaneous CA ignition as a model correlate of waiting times.
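The noise-driven ignition mechanism echoes the leaky stochastic accumulator of [1]; a toy version (all parameters are illustrative, not those of the network model) reproduces the characteristic right-skewed distribution of waiting times:

```python
import numpy as np

# leaky stochastic accumulator: dx = (drive - k*x) dt + noise dW,
# "ignition" when x first crosses the threshold
rng = np.random.default_rng(3)
k_leak, drive, noise, thresh, dt = 0.5, 0.35, 0.3, 1.0, 0.01

def waiting_time(max_t=60.0):
    """Time (s) until the accumulator first crosses threshold."""
    x, t = 0.0, 0.0
    while t < max_t:
        x += dt * (drive - k_leak * x) + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= thresh:
            return t
    return max_t  # censored trial

times = np.array([waiting_time() for _ in range(500)])
```

Because crossings are driven by fluctuations around a subthreshold mean, the resulting waiting-time distribution has a long right tail (median well below the mean), the shape that the network's spontaneous-ignition intervals are compared against.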

We found that the model data accounted well for the experimental waiting-time distribution. Furthermore, in line with the stochastic interpretation of the RP, appropriate calibration of the model parameters resulted in subthreshold reverberation of activity within CA circuits, and averaging across cell assemblies’ ignition episodes produced a curve that closely matched the gradual buildup of activity observed in the experimental RP and its onset time.

Neural activity gives rise to various neurophysiological sources of ongoing noise. Some of this noise may accumulate and reverberate within previously acquired perception-action circuits and hence produce spontaneous action. The present simulation results, obtained with a fully brain-constrained neural architecture, provide further support for this alternative view, placing the classical explanation of the RP under further scrutiny.


1. Schurger A, Sitt JD, Dehaene S. An accumulator model for spontaneous neural activity prior to self-initiated movement. Proceedings of the National Academy of Sciences. 2012 Oct 16;109(42):E2904-13.

2. Schurger A, Mylopoulos M, Rosenthal D. Neural antecedents of spontaneous voluntary movement: a new perspective. Trends in Cognitive Sciences. 2016 Feb 1;20(2):77-9.

3. Garagnani M, Pulvermüller F. Neuronal correlates of decisions to speak and act: Spontaneous emergence and dynamic topographies in a computational model of frontal and temporal areas. Brain and Language. 2013 Oct 1;127(1):75-85.

F4 Uncertainty-modulated prediction errors in cortical microcircuits

Katharina Anna Wilmes*1, Constanze Raltchev1, Sergej Kasavica1, Mihai A. Petrovici1, Shankar Babu Sachidhanandam1, Walter Senn1

1University of Bern and Heidelberg University, Department of Physiology, Bern, Switzerland

*Email: katharina.wilmes@unibe.ch

The brain learns an internal model of the world by making predictions about upcoming inputs and comparing its predictions with actual incoming sensory information [1]. Prediction errors are essential for learning an internal representation of the world, because they indicate how the internal representation needs to be improved. Promisingly, a neural correlate for prediction errors has been found in the activity of pyramidal neurons in layer 2/3 of diverse cortical areas [2-4]. To make appropriate predictions in a stochastic environment, the brain needs to take uncertainty into account. How uncertainty modulates prediction errors and hence learning is, however, unclear. Here, we use a normative approach to derive how prediction errors should be modulated by uncertainty and postulate that such uncertainty-modulated prediction errors (UPE) are represented by layer 2/3 pyramidal neurons. We then implement the calculation of the UPEs in a microcircuit model of layer 2/3 consisting of positive and negative prediction error neurons (Fig. 1).

In particular, in our theory, the UPE reflects how the prediction should be changed to maximise the likelihood of the incoming sensory inputs given the prediction. The UPE is defined by the absolute error, the difference between the predicted mean and the sensory input, scaled inversely by the uncertainty. The calculation of the UPE can be implemented by subtractive and divisive inhibition, which correspond to the experimentally observed effects of somatostatin (SST) and parvalbumin (PV) interneurons, respectively [5]. We show that with local activity-dependent plasticity rules, (1) SSTs can learn to represent the mean (Fig. 1E, F), (2) PVs can learn to represent the uncertainty (Fig. 1E, D), with inhibitory connections from SSTs to PVs being essential for estimating the uncertainty, and (3) the resulting UPE enables an adaptive learning rate (Fig. 1G-I). The model makes the following experimentally testable predictions (Fig. 1J, K): (1) PV rates scale with the uncertainty. (2) SST rates in positive prediction error circuits scale with the predicted mean. (3) SST rates in negative prediction error circuits scale with the stimulus strength. (4) Both positive and negative prediction error neuron activity is inversely proportional to the expected uncertainty.
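The core UPE computation can be sketched in a few lines, assuming Gaussian stimuli: the absolute error (the mean subtracted, SST-like) divided by the estimated variance (divisive, PV-like) yields an uncertainty-scaled update. The learning rules and rates below are illustrative stand-ins, not the microcircuit model itself:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma_true = 5.0, 2.0   # environment: stimuli ~ N(mu, sigma^2)

mu_est, var_est = 0.0, 1.0       # predicted mean (SST-like) and variance (PV-like)
eta = 0.01                       # base learning rate
for _ in range(20000):
    s = rng.normal(mu_true, sigma_true)     # incoming sensory sample
    err = s - mu_est                        # subtractive inhibition supplies the mean
    upe = abs(err) / var_est                # divisive inhibition scales by uncertainty
    mu_est += eta * np.sign(err) * upe      # uncertainty-modulated update of the mean
    var_est += eta * (err**2 - var_est)     # variance estimate tracks squared errors
```

High estimated uncertainty shrinks the effective learning rate, so single surprising samples move the prediction less in noisy contexts, which is the adaptive-learning-rate property the abstract describes.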

This work was funded by the European Union 7th Framework Programme under grant agreement 604102 (HBP), the Horizon 2020 Framework Programme under grant agreements 720270, 785907 and 945539 (HBP).


1. Kveraga K, Ghuman AS, Bar M. Top-down predictions in the cognitive brain. Brain and Cognition. 2007 Nov 1;65(2):145-68.

2. Attinger A, Wang B, Keller GB. Visuomotor coupling shapes the functional development of mouse visual cortex. Cell. 2017 Jun 15;169(7):1291-302.

3. Ayaz A, Stäuble A, Hamada M, Wulf MA, Saleem AB, Helmchen F. Layer-specific integration of locomotion and sensory information in mouse barrel cortex. Nature Communications. 2019 Jun 13;10(1):2585.

4. Gillon CJ, Pina JE, Lecoq JA, Ahmed R, Billeh YN, Caldejon S, Groblewski P, Henley TM, Kato I, Lee E, Luviano J. Learning from unexpected events in the neocortical microcircuit. BioRxiv. 2021 Jan 16:2021-01.

5. Wilson NR, Runyan CA, Wang FL, Sur M. Division and subtraction by distinct cortical inhibitory networks in vivo. Nature. 2012;488(7411):343-8. https://doi.org/10.1038/nature11347


Fig. 1. Mouse learns association between a sound and a variable whisker stimulus (A), drawn from a Gaussian (B). C, E: Positive prediction error circuit. D: PVs learn the variance. F: SSTs learn the mean. G, H, I: Internal representation neuron learns mean based on UPEs, which enable an adaptive learning rate. J, K: Cell-class-specific experimental predictions for positive (J) and negative (K) mismatch

O1 Early subcortical response at the fundamental frequency of continuous speech measured with MEG

Alina Schüller*1, Achim Schilling2, Patrick Krauss2, Tobias Reichenbach1

1Friedrich-Alexander-University Erlangen-Nürnberg, Artificial Intelligence in Biomedical Engineering, Sensory Neuroengineering, Erlangen, Germany

2University Hospital Erlangen and Friedrich-Alexander-University Erlangen-Nürnberg, Neuroscience Laboratory and Department Computer Science, Pattern Recognition Lab, Erlangen, Germany

*Email: alina.schueller@fau.de

The neural response to speech is shaped not only by a comparatively slow tracking of the rhythm of words and syllables, but also by a fast tracking of the fundamental frequency of the voiced parts. EEG measurements of this frequency-following response to natural continuous speech (speech-FFR) showed the presence of early subcortical contributions at a latency of about 8 ms that were modulated by selective attention [1]. In addition, MEG studies found cortical contributions at the fundamental frequency of short speech tokens [2], as well as at the fundamental frequency of continuous speech at a latency of about 40 ms [3]. However, the latency of the early subcortical response in MEG measurements has not yet been established, mainly due to the difficulty of detecting this response to continuous speech with MEG.

In the present study, we detected and source-analyzed early MEG responses to continuous speech, employing long recording times. MEG data were recorded from 15 healthy, normal-hearing subjects listening to continuous speech stimuli with a total duration of 40 min [4]. The data were subsequently analyzed in the range of the male speaker’s fundamental frequency, yielding 17 min of analyzable speech stimulus. The neural responses to the fundamental frequency were investigated using temporal response functions (TRFs), as well as neural source estimation followed by the computation of source-level TRFs.
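TRF estimation of this kind is commonly posed as regularized lagged regression of the neural signal on the stimulus feature; a minimal sketch (the ridge formulation and the synthetic kernel below are illustrative, not the study's exact pipeline):

```python
import numpy as np

def trf_ridge(stimulus, response, n_lags, alpha=1.0):
    """Estimate a temporal response function by ridge regression:
    response[t] ~ sum_k trf[k] * stimulus[t - k]."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))          # lagged design matrix
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # closed-form ridge solution (X'X + alpha*I)^{-1} X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# synthetic check: recover a known kernel peaking at lag 3
rng = np.random.default_rng(2)
stim = rng.standard_normal(5000)
true_trf = np.array([0.0, 0.2, 0.6, 1.0, 0.5, 0.1])
resp = np.convolve(stim, true_trf)[:5000] + 0.1 * rng.standard_normal(5000)
trf = trf_ridge(stim, resp, n_lags=6)
```

The lag (in samples) of a TRF peak, converted to time, gives latencies such as the 9 ms subcortical and 33-38 ms cortical delays reported below.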

The measured neural response to the fundamental frequency of the male speaker showed a right-lateralized cortical origin at a delay of 38 ms (see Fig. 1). A second response, to the modulation of the envelope of the higher harmonics, had an even larger magnitude, also of cortical origin, at a latency of 33 ms and likewise with a right-hemisphere bias. Both measured signals were verified to be of cortical origin through the source-level TRF, which was calculated from the source-reconstructed raw MEG data. Although MEG is relatively insensitive to deeper subcortical structures, we were moreover able to identify a subcortical contribution to the neural response to the envelope modulation in continuous speech. This response emerged only in the subcortical TRF, at a time delay of 9 ms, confirming previous EEG studies on early subcortical contributions.

1. Forte AE, Etard O, Reichenbach T. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention. eLife. 2017 Oct;6:e27203.

2. Coffey EB, Herholz SC, Chepesiuk AM, Baillet S, Zatorre RJ. Cortical contributions to the auditory frequency-following response revealed by MEG. Nature Communications. 2016 Jan;7(1):1-11.

3. Kulasingham JP, Brodbeck C, Presacco A, Kuchinsky SE, Anderson S, Simon JZ. High gamma cortical processing of continuous speech in younger and older listeners. NeuroImage. 2020 Dec;222:117291.

4. Schüller A, Schilling A, Krauss P, Reichenbach T. Early subcortical response at the fundamental frequency of continuous speech measured with MEG. In submission, 2022.


Fig. 1. (a) The amplitude of the source-level TRF for the envelope modulation, averaged across subjects and vertices in the cortical ROI. (b) The highest magnitudes of the voxel TRFs at the latency of 33 ms. (c) The amplitude of the source-level TRF for the envelope modulation, averaged across subjects and vertices in the subcortical ROI. (d) The highest magnitudes of the voxel TRFs at latencies 9 ms and 27 ms

O2 Recruiting native representations for electrically induced perception in blind humans

Karolina Korvasova*1, Fabrizio Grani2, Rocío Lopez Peco2, David Berling1, Mikel Val Calvo2, Alfonso Rodil Doblado2, Tibor Rózsa1, Cristina Soto Sánchez2, Xing Chen3, Eduardo Fernandez2, Jan Antolik1

1Charles University Prague, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Prague, Czechia

2Miguel Hernandez University, Biomedical Neuroengineering Group, Elche, Spain

3Netherlands Institute for Neuroscience, Department of Vision & Cognition, Amsterdam, Netherlands

*Email: karolina.korvasova@mff.cuni.cz

The possibility of recruiting existing functional representations by external stimulation of the visual cortex could greatly advance the field of visual prosthetics. However, in blind humans, the functional properties of neurons cannot be tested directly by measuring responses to visual input. A possible solution is based on the idea that functionally similar neurons tend to be more correlated also in the resting condition [1-2]. We present a method to infer the orientation preference map from spontaneous activity recorded with a Utah array from the primary visual cortex of non-human primates. We applied this method to recordings from blind humans implanted with a cortical visual prosthesis and found that both the spatial and the functional properties of the set of stimulated electrodes affect perception. In particular, discriminating between two stimuli becomes easier the more spatially and functionally separated the two sets of stimulated sites are.


1. Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A. Spontaneously emerging cortical representations of visual attributes. Nature. 2003 Oct 30;425(6961):954-956.

2. Omer DB, Fekete T, Ulchin Y, Hildesheim R, Grinvald A. Dynamic patterns of spontaneous ongoing activity in the visual cortex of anesthetized and awake monkeys are different. Cerebral Cortex. 2019 Mar 1;29(3):1291-1304.

O3 Multimodal units extract co-modulated information

Marcus Ghosh*1, Gabriel Béna2, Nicolas Perez-Nieves2, Volker Bormuth1, Dan F.M. Goodman2

1Sorbonne Université, Laboratoire Jean Perrin, Paris, France

2Imperial College London, Department of Electrical and Electronic Engineering, London, United Kingdom

We continuously detect sensory data, like sights and sounds, and use this information to guide our behaviour. However, rather than relying on single sensory channels, which are noisy and can be ambiguous alone, we merge information across our senses and leverage this combined signal. In biological networks, this process (multisensory integration) is implemented by multimodal neurons which are thought to receive the information accumulated by unimodal areas, and to fuse this across channels; an algorithm we term accumulate-then-fuse (Fig. 1D). However, is implementing this algorithm their main function? Here, we explore this question by developing novel multimodal tasks and deploying probabilistic, artificial and spiking neural network models (Fig. 1A). Using these models, we demonstrate that multimodal units are not necessary for accuracy or balancing speed/accuracy in classical multimodal tasks but are critical in a novel set of tasks in which we co-modulate signals across channels (Fig. 1B-C). We show that these co-modulation tasks require multimodal units to implement an alternative fuse-then-accumulate algorithm (Fig. 1E), and we demonstrate that this algorithm excels in naturalistic settings like predator-prey interactions (Fig. 1F). Ultimately our work suggests that multimodal neurons are critical for extracting comodulated information and provides novel tasks and models for exploring this in biological systems.


Fig. 1. We compare the performance of multi- and unimodal networks (A) in both classical and comodulation tasks (B). We show that multimodal units are not necessary for the former but are critical for the latter (C). We relate this result to the algorithms which these architectures implement (D-E) and show that the multimodal fuse-then-accumulate algorithm excels in a naturalistic detection task (F)

O4 Neural correlates of perceptual constancy in the auditory cortex: an effect of behavioural training

Carla Griffiths*1, Jules Lebert1, Joseph Sollini2, Jennifer Bizley3

1University College London, Ear Institute, London, United Kingdom

2University of Nottingham, Faculty of Medicine & Health Sciences, Nottingham, United Kingdom

*Email: carla.griffiths.16@ucl.ac.uk

While considered an essential element of judgment-making and rule-learning for agents in novel environments, the neural mechanisms underlying perceptual constancy are poorly understood. The ability to maintain perceptual constancy – that is, to generalize over identity-preserving variation in sensory input – is critical for sensory systems, including within the auditory system.

We trained four ferrets in a Go/No-Go task to examine the neural basis of perceptual constancy. In this task, ferrets identified a target word ("instruments") from a stream of words drawn from a set of 54 distractor words. Ferrets were trained on two voices (one male, one female); once trained, we varied the mean fundamental frequency (F0) within and across trials. Ferrets could perform the task across both pitch manipulations, suggesting that they display perceptual constancy in this task.

We then recorded single- and multi-unit activity in auditory cortex (AC) using 32-electrode arrays that spanned primary and secondary cortical fields in the right hemisphere of two ferrets. We also applied the same spike-extraction process to units recorded in three naïve animals passively listening to the same stimuli while receiving water. We computed a decoding score for pairwise discrimination of the target word from the probe words, which included five distractors (train/test split = 0.33 with threefold cross-validation). Accuracy was calculated as the percentage of target or distractor labels that the LSTM model correctly predicted from the held-out neural test data, referenced against the actual test-set labels.
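The cross-validated pairwise decoding procedure can be sketched as follows, using a nearest-centroid classifier on synthetic population responses as a simple stand-in for the LSTM decoder (all names and data here are illustrative):

```python
import numpy as np

def pairwise_decoding_score(x_target, x_distractor, n_folds=3, seed=4):
    """Pairwise target-vs-distractor decoding accuracy with k-fold CV,
    using a nearest-centroid classifier. Rows are trials, columns units."""
    x = np.vstack([x_target, x_distractor])
    y = np.r_[np.ones(len(x_target)), np.zeros(len(x_distractor))]
    idx = np.random.default_rng(seed).permutation(len(y))
    accs = []
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        c1 = x[train][y[train] == 1].mean(axis=0)   # target centroid
        c0 = x[train][y[train] == 0].mean(axis=0)   # distractor centroid
        d1 = np.linalg.norm(x[fold] - c1, axis=1)
        d0 = np.linalg.norm(x[fold] - c0, axis=1)
        pred = (d1 < d0).astype(float)              # closer centroid wins
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# synthetic "population responses": two classes with offset means
rng = np.random.default_rng(5)
target = rng.standard_normal((60, 32)) + 0.8
distractor = rng.standard_normal((60, 32))
score = pairwise_decoding_score(target, distractor)
```

Comparing such scores across stimulus conditions (control vs roved F0) and animal groups (trained vs naïve) is the basis for the statistical contrasts reported below.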

Decoder performance remained above chance when classifying between F0-manipulated distractor and target stimuli for both F0 manipulations, within and across trials, for both the trained and the naïve ferrets. Neural decoding scores were higher in the trained animals than in the naïve animals for control-F0 trials (Mann-Whitney U = 1136.5, p = 7.41e-05) and roved-F0 trials (Mann-Whitney U = 10784.5, p = 0.0020, alternative hypothesis: trained > naïve). The distribution of (control F0 − roved F0)/(control F0 + roved F0) decoding scores was also lower in the naïve animals than in the trained animals, suggesting that trained animals are more invariant to pitch shifts (Mann-Whitney U = 3997.5, p = 2.58e-07, alternative hypothesis: naïve < trained). Last, a two-sample Kolmogorov-Smirnov test found no significant difference between the distributions of the roved- and control-F0 decoding scores in the trained animals (K-S statistic = 0.16667, p = 0.084, alternative hypothesis: roved ≠ control), but did find a significant difference in the naïve animals (K-S statistic = 0.25841, p = 0.00012, alternative hypothesis: roved ≠ control), illustrating a form of perceptual constancy acquired through training (see Fig. 1).

Our results suggest that auditory cortex forms pitch-invariant representations of spectrotemporally complex sounds and that training animals so that these sounds are behaviorally relevant enhances pitch-invariance. Ongoing work will include repeating this analysis in a trained animal with a Neuropixels probe to increase single-unit yield and elucidate the manifold activity that explains this invariant behavior on a low-dimensional level.


Fig. 1. A, control F0 neural decoding scores for the trained (purple) and naïve (turquoise) animals. B, same as A but for roved F0 trials. C, control vs. intra-trial roved F0 decoding data for single units (SU) and multi-units (MU) (F = female, M = male). D, (control − roved F0)/(control + roved F0) scores. E, roved and control F0 scores for the trained animal units. F, same as E but for the naïve animals.

O5 Local excitation and lateral inhibition enable the simultaneous processing of multiple signals in recurrent neural networks

Emmanouil Giannakakis*1, Oleg Vinogradov2, Victor Buendia1, Sina Khajehabdollahi1, Anna Levina2

1Eberhard Karls University of Tübingen, Computer Science, Tübingen, Germany

2University of Tübingen; Max Planck Institute for Biological Cybernetics, Computer Science, Tübingen, Germany

*Email: giannakakismanos@gmail.com

Biological neural networks exhibit complex, detailed connectivity patterns between different neuron subtypes. These topological features of brain circuits are commonly hypothesized to have distinct functional roles, but the details of their function remain poorly understood. One of the most consistently observed features of brain connectivity is the presence of functional clusters or assemblies of neurons characterized by strong connectivity and functional similarity [1]. Although such assemblies are observed across different neuronal subtypes and brain areas, their exact contribution to brain computations has not been fully characterized. We hypothesize that the neural dynamics associated with synapse-type-specific clusters of E/I neurons can play an important role in the simultaneous processing of multiple signals by recurrent neural networks.

First, we studied whether structured recurrent connectivity can enable the formation of E/I co-tuning and input selectivity in upstream areas. We show that established mechanisms of synaptic plasticity, known to produce E/I co-tuning in feedforward, low-noise networks (Fig. 1a), fail to do so when receiving noisy inputs from lower areas with random recurrent connectivity. We examine whether non-trivial connectivity can reverse these effects by optimizing the level of clustering between neurons that receive the same signal using simulation-based inference. We find that strong excitatory connectivity between neurons receiving the same inputs, combined with less specific lateral inhibitory connectivity (Fig. 1e), can fully restore the ability of synaptic plasticity to produce co-tuning and input selectivity in an upstream neuron [2].
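As a toy illustration of how a homeostatic inhibitory plasticity rule can produce E/I co-tuning, the following rate-based, Vogels-style sketch balances fixed excitatory tuning with plastic inhibition. The parameters and the one-hot input structure are assumptions for illustration, not the spiking model of the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, T = 4, 4000
w_e = rng.uniform(0.5, 1.5, n_groups)  # fixed excitatory feedforward tuning
w_i = np.full(n_groups, 0.1)           # plastic inhibitory weights
eta, rho0 = 0.05, 0.2                  # learning rate and target rate (assumed)

for _ in range(T):
    g = rng.integers(n_groups)         # one input group active per step
    x = np.zeros(n_groups)
    x[g] = 1.0
    r = max(w_e @ x - w_i @ x, 0.0)    # rectified postsynaptic rate
    # homeostatic inhibitory plasticity: potentiate inhibition when the
    # rate exceeds the target, depress it when below
    w_i += eta * x * (r - rho0)
    w_i = np.clip(w_i, 0.0, None)

# at convergence inhibition mirrors excitation: w_i ~ w_e - rho0 (co-tuning)
```

The fixed point of the rule sets each inhibitory weight just below its excitatory counterpart, which is the co-tuned configuration the plasticity mechanisms in the abstract aim to recover.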

We then study the effects of structured connectivity in a more challenging computational task, the simultaneous processing of two chaotic time series. We use a balanced recurrent network as a reservoir in a classical task of predicting the trajectory of a chaotic attractor [3], with the modification of simultaneously predicting two attractors. For this, we split the network into two clusters, each responsible for processing one attractor (Fig. 1f). We investigate whether synaptic connections between the two clusters can benefit the reservoir's performance. In particular, we hypothesize that different probabilities of connection within and between clusters for different types of synapses could improve the network's ability to process complex dynamics. Thus, we use simulation-based inference to estimate the distribution of synapse-type-specific clustering levels that lead to optimal network performance. We find that localized excitation, combined with more spread-out inhibition (the same overall pattern that was observed in the plastic network), significantly boosts the reservoir's performance.
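A minimal version of the reservoir setup might look like this. It is a sketch under strong simplifications: a single sine-wave target stands in for the two chaotic attractors, the within/between connection probabilities are hypothetical, and the readout is a plain least-squares fit rather than the optimized networks of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_weights(n, p_within, p_between, scale=0.1):
    """Random recurrent weights with two clusters: dense within, sparse between."""
    cluster = np.arange(n) < n // 2
    same = cluster[:, None] == cluster[None, :]
    p = np.where(same, p_within, p_between)
    W = np.where(rng.random((n, n)) < p, rng.normal(0.0, scale, (n, n)), 0.0)
    np.fill_diagonal(W, 0.0)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state property
    return W

n, T = 100, 600
W = clustered_weights(n, p_within=0.3, p_between=0.05)  # hypothetical values
W_in = rng.normal(0.0, 0.5, size=(n, 1))

u = np.sin(0.1 * np.arange(T))[:, None]  # stand-in for a chaotic time series
x = np.zeros(n)
states = np.zeros((T, n))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])     # leaky-free tanh reservoir update
    states[t] = x

# train a linear readout for one-step-ahead prediction of the input
washout = 100
X, y = states[washout:-1], u[washout + 1:, 0]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]
mse = np.mean((X @ w_out - y) ** 2)
```

Sweeping `p_within` and `p_between` separately for excitatory and inhibitory populations would be the natural way to probe the clustering effect the abstract describes.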

Our findings suggest that a connectivity pattern that is commonly observed in cortical networks and is associated with distinct dynamics [4] can have functional implications for the simultaneous processing of different signals in recurrent networks. The fact that the same pattern is observed for different tasks indicates the possibility of a general principle linking E/I network topology and the dynamics it creates with the ability of recurrent networks to perform specific computations.


1. Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol. 2023 Sep [Online ahead of print]. https://doi.org/10.1113/JP282750

2. Giannakakis E, Vinogradov O, Buendía V, Levina A. Recurrent connectivity structure controls the emergence of co-tuned excitation and inhibition. bioRxiv. 2023 Feb 27;2023.02.27.530253. https://doi.org/10.1101/2023.02.27.530253

3. Kim JZ, Lu Z, Nozari E, et al. Teaching recurrent neural networks to infer global temporal structure from local examples. Nat Mach Intell. 2021 Sep;3:316-323. https://doi.org/10.1038/s42256-021-00321-2

4. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012 Nov;15(11):1498-505. https://doi.org/10.1038/nn.3220


Fig. 1. (a) An outline of the plastic network. Different input groups are connected to a single postsynaptic neuron via plastic connections. If the network's activity has the appropriate statistical structure, the plasticity converges to distinct weights for each group (b), with matching E and I feedforward connectivity (c). The distinct weights allow the postsynaptic neuron to discriminate between

O6 Connecting the structure of individual networks to the dynamics in neural systems

Roberto Budzinski*1, Ján Mináč1, Lyle Muller1

1Western University, Department of Applied Mathematics, London, Canada

New developments in connectomic reconstruction are rapidly expanding our ability to map connection patterns in neural systems, from the microscopic synaptic connections between individual neurons to the macroscopic connectivity between cortical areas. The pace of these developments is accelerating with experimental support from programs such as the BRAIN Initiative. With these technological advances, however, a new challenge arises: even if we knew the complete connectivity diagram for a single model organism, how could we understand anything about the resulting nonlinear dynamics? Here, we present a new mathematical approach that goes from the connectivity of an individual network—for example, the realization of a random graph on an individual trial, or the precise connection patterns in an experimental reconstruction—to the spatiotemporal pattern of oscillations in a neural system. By introducing a complex-valued matrix formulation of the Kuramoto model, we develop an analytical approach for studying the transient dynamics arising from the precise topology of a single network. This approach allows us to analytically study and predict the collective behavior of oscillator networks, and it offers a new, geometric perspective on the spatiotemporal patterns that emerge in the system in terms of the spectrum of the network's adjacency matrix. We then use this approach to study two important dynamical phenomena in neural systems: (1) the emergence of synchronous oscillations and complex spatiotemporal patterns such as chimera states, and (2) patterns of neural activity (waves) due to time delays in the coupling of whole-brain networks. Importantly, because our approach remains finite, we can apply it directly to empirical networks such as brain networks represented by connectomes.
Our mathematical framework directly connects the network structure of neural systems to the specific spatiotemporal patterns that result, providing analytical insight into difficult problems such as how heterogeneous time delays create specific traveling wave patterns in empirical recordings of neural systems. These results provide new theoretical insight into how sophisticated spatiotemporal dynamics arise from specific connection patterns in neural systems.
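A minimal numerical illustration of this setting simulates the Kuramoto model on one realization of a random graph and inspects the adjacency spectrum that the framework uses. The graph density, coupling strength, and identical natural frequencies are assumed values for illustration; the analytical machinery of the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# one realization of an Erdos-Renyi graph: the "individual network"
upper = np.triu((rng.random((n, n)) < 0.5).astype(float), k=1)
A = upper + upper.T

theta = rng.uniform(0.0, 2.0 * np.pi, n)  # random initial phases
k, dt = 1.0, 0.01
for _ in range(10000):  # Euler integration of the Kuramoto model
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (k / n) * coupling

# order parameter |<exp(i*theta)>| approaches 1 as the network synchronizes
r = abs(np.exp(1j * theta).mean())

# the spectrum of A, which the complex-valued formulation relates to the
# transient dynamics on the way to this synchronized state
eigvals = np.linalg.eigvalsh(A)
```

For identical oscillators on a dense graph the simulation settles into near-complete synchrony; the point of the analytical framework is to predict such outcomes, and the preceding transients, directly from `eigvals` without simulation.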

O7 Neuronal activation sequences in primate prefrontal cortex encode working memory episodes

Alexandra Busch*1, Megan Roussy2, Rogelio Luna2, Matthew Leavitt3, Maryam Mofrad1, Roberto Gulli4, Benjamin Corrigan2, Ján Mináč1, Adam Sachs5, Lena Palaniyappan6, Lyle Muller1, Julio Martinez-Trujillo2

1Western University, Department of Applied Mathematics, London, Canada

2Western University, Physiology and Pharmacology, London, Canada

3MosaicML, San Francisco, United States of America

4Columbia University, Zuckerman Mind Brain Behavior Institute, New York, United States of America

5Ottawa University, The Ottawa Hospital, Ottawa, Canada

6Western University, Department of Psychiatry, London, Canada

Working memory (WM) is the ability to briefly maintain and manipulate information ‘in mind’ to achieve a current goal [1]. Brain circuits supporting WM differ from those supporting sensory processing in that they must represent information in the absence of sensory inputs and without necessarily triggering motor behaviours. WM also differs from long-term memory in that the information is only maintained for short time intervals without necessarily undergoing long-term storage. Experimental psychologists have elaborated a theoretical framework that emphasizes four components of WM systems: an attentional control system, a visuospatial sketchpad, a phonological loop, and an episodic buffer that binds information across objects, space and time [2]. Systems neurophysiologists working with non-human primates (NHPs) have isolated neural correlates of the first three components and proposed candidate neural mechanisms. However, the neural correlates and mechanisms of the working memory episodic buffer have not yet been identified. Here, we demonstrate a sequence code in primate lateral prefrontal cortex (LPFC) that encodes episodes in working memory.

We simultaneously recorded the activity of hundreds of neurons in the LPFC of macaque monkeys during a visuospatial WM task set in a virtual environment. During task trials, animals encoded the location of a sensory cue, remembered that location after the cue disappeared (memory period), and finally navigated toward the location using a joystick. We report a new type of neuron that signals the boundaries between the different trial periods. These time-boundary cells parcellate the WM episode by providing a signal for the beginning and end of the episode. During the WM episode (memory period), sequences of single neuron activations encoded the path on the screen to the target location held in WM — as viewed from the subject’s own visual perspective. At first, we considered distances between targets in the virtual environment from a “top-down view”, as would be seen by an overhead observer. When we instead consider distances in the virtual environment from the visual perspective of the subject, as they would be seen by NHPs performing the task, we find that sequences have an even stronger relationship with targets held in WM. This result provides fundamental insight into how NHPs perceive the virtual navigation task. Further, this relationship is specific to WM: when subjects performed an identical task in which the target remained on screen for the whole trial (and therefore working memory was not required for navigation), the relationship between sequences and task behaviour was significantly reduced. Sequences were not found during WM tasks that lacked the varying spatiotemporal nature of memory episodes. Finally, we conducted a causal manipulation using low doses of ketamine to selectively disrupt WM performance. Ketamine also distorted sequences and their relationship to behaviour in the task. 
Together, our results show that neuronal ensembles in the primate lateral prefrontal cortex dynamically and flexibly encode the spatiotemporal structure of working memory episodes using sequences of spiking activation.


1. Baddeley A. Working memory. Clarendon Press/Oxford University Press; 1986.

2. Baddeley A. The episodic buffer: a new component of working memory? Trends Cogn Sci. 2000 Nov;4(11):417-23.

O8 The molecular communication between synapses influences synaptic plasticity

Shirin Shafiee Kamalabad*1, Christian Tetzlaff2
