My Work in Neuroscience

Here I discuss some work that I published in the field of Computational Neuroscience. The topics considered in the following sections are:

1) the study of neural network dynamics by means of Bifurcation Theory;

2) a computer simulation of the mouse brain that generates biologically realistic fMRI activity;

3) a network model that connects multiple scales of cortical organization, from neural microcircuits to macroscopic cortical areas.

Bifurcation Theory

In this section, I discuss my results on the study of neural networks as dynamical systems. These results have been published in a series of papers [1-3], which introduce new analytical and computational techniques for investigating network dynamics.

Purpose

The purpose of this work is to shed new light on how neural networks process afferent stimuli (e.g., the sensory input from the thalamus), and on how their processing capability is affected by the network architecture. This is key to understanding how neural networks generate behavior when humans and animals interact with the environment they live in.

Methods

Bifurcation theory is the study of sudden, qualitative changes in the behavior of a non-linear mathematical model, which occur when the model parameters are varied gradually around some critical value. These phenomena are known as bifurcations. For example, when one increases a model parameter slowly across a bifurcation point, the system can undergo an abrupt transition from one stationary state to another (a tipping point, see also this link), or a transition from a stationary state to oscillatory motion (see the next section).

The bifurcation diagram of the model shows all the bifurcations for each combination of parameter values, and therefore provides a complete description of the dynamic behavior of the model. It follows that bifurcation theory is an ideal tool for investigating the relationship between afferent stimuli (i.e., the model parameters we are interested in) and network dynamics.
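
As a concrete toy example (my own sketch, not one of the methods of [1-3]), the following Python snippet sweeps the parameter r of the one-dimensional system dx/dt = r + x² and reports where the number of stable stationary states changes; that change marks a saddle-node bifurcation, i.e. a tipping point, at r = 0.

```python
import numpy as np

def stable_fixed_points(r, grid=np.linspace(-3.0, 3.0, 6001)):
    """Count stable equilibria of dx/dt = r + x**2 by locating zero crossings
    of the vector field with negative slope (the stability condition)."""
    f = r + grid**2
    count = 0
    for i in range(len(grid) - 1):
        if f[i] > 0 and f[i + 1] < 0:   # f crosses zero going downward -> stable point
            count += 1
    return count

# Sweep the parameter and report where the number of stable states changes.
r_values = np.linspace(-1.0, 1.0, 201)
counts = [stable_fixed_points(r) for r in r_values]
for r_prev, r_next, c_prev, c_next in zip(r_values, r_values[1:], counts, counts[1:]):
    if c_prev != c_next:
        print(f"Bifurcation detected between r = {r_prev:.3f} and r = {r_next:.3f}")
```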

In [1-3] I developed several techniques for studying bifurcations, which vary considerably depending on the neural network model under investigation. Specifically, in [1] I developed analytical methods for studying bifurcations in fully-connected networks with graded firing rates, while in [2] I introduced a new brute-force algorithm that detects bifurcations in sparse networks with arbitrary fixed architecture and binary firing rates.
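
To illustrate the brute-force idea (a minimal sketch, not the optimized algorithm of [2]), one can exhaustively enumerate the 2^N configurations of a small binary network with an arbitrary fixed connectivity matrix, keep the stationary ones, and track how their number changes while sweeping an external stimulus:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # small network: 2**N states can be enumerated
J = rng.normal(0.0, 1.0, size=(N, N))   # arbitrary fixed synaptic matrix (illustrative)
np.fill_diagonal(J, 0.0)

def stationary_states(J, I):
    """Enumerate all binary (0/1) configurations and keep those that are
    fixed points of the deterministic update s_i = H(sum_j J_ij s_j + I)."""
    states = []
    for s in itertools.product([0, 1], repeat=J.shape[0]):
        s = np.array(s)
        s_next = (J @ s + I > 0).astype(int)
        if np.array_equal(s, s_next):
            states.append(tuple(s))
    return set(states)

# Brute-force sweep of the external stimulus: a change in the number of
# stationary states signals a bifurcation of the network dynamics.
prev = None
for I in np.linspace(-2.0, 2.0, 81):
    current = len(stationary_states(J, I))
    if prev is not None and current != prev:
        print(f"Change in the number of stationary states near I = {I:.2f}: {prev} -> {current}")
    prev = current
```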

Then, in [3] I extended the latter approach to study the average bifurcation diagram of randomly connected binary-rate networks across network realizations. This work is based on Extreme Value Theory (see this link for further information), and it provides a powerful tool for summarizing the bifurcation diagram of hundreds to thousands of neural networks. In other words, the technique developed in [3] averages out variations in the bifurcation diagram that result from the randomness of the synaptic connections between neurons. For this reason, this approach can be used to derive the typical bifurcation diagram of neural circuits that respect biological constraints (e.g., the lognormal distribution of the synaptic weights [4]), while not having a fixed connectivity architecture across brain areas or across subjects.
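
The following toy sketch is not the calculation performed in [3]; it only illustrates the Extreme Value Theory ingredient, namely summarizing an extreme statistic across many random network realizations (here, hypothetically, the largest total synaptic input received by any neuron) with a Generalized Extreme Value fit:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
N = 50
n_realizations = 2000

# For each random realization of the synaptic matrix, record an extreme
# statistic of the connectivity: the largest total synaptic input of any neuron.
maxima = []
for _ in range(n_realizations):
    J = rng.lognormal(mean=-1.0, sigma=1.0, size=(N, N))  # lognormal weights, as in [4]
    maxima.append(J.sum(axis=1).max())

# Fit a Generalized Extreme Value distribution to summarize the statistic
# across realizations, rather than studying each network individually.
shape, loc, scale = genextreme.fit(maxima)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```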

An example of bifurcation

The following picture shows an example of bifurcation in the simple case of a sphere rolling on a potential landscape.

A bifurcation from a stationary state to oscillatory dynamics

The landscape may represent the energy function of a neural network in the space of the network configurations, while the coordinates of the sphere represent the current network state.

We suppose that the shape of the landscape depends on some parameter, so that when the parameter value is small, the landscape looks as shown in panel A. By descending the gradient of the potential landscape (black vectors), the sphere reaches the point of minimum energy. This point represents a stable stationary state, where the sphere stays until an external force is applied, or until the state is made unstable by increasing the landscape parameter (see panel B).

In panel B, the sphere rolls away from the now-unstable state and reaches the newly formed circular valley by descending the gradient of the landscape. In the valley, the dynamics of the sphere is affected by a second force, known as the curl probability flux (red vectors), which leads the sphere to exhibit spiral motion (see [5] for more details). This new state represents a stable oscillation (i.e., a periodic sequence of network states), whose amplitude increases with the landscape parameter (see panel C).
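
The behavior described in panels A-C can be reproduced with a toy two-dimensional system (my own minimal sketch, not the model of [5]): gradient descent on a "Mexican hat" potential plus a divergence-free rotational force standing in for the curl probability flux.

```python
import numpy as np

def simulate(a, omega=1.0, dt=0.01, steps=5000, x0=(1.5, 0.0)):
    """Gradient descent on the potential U = (x^2 + y^2 - a)^2 / 4 plus a
    rotational (curl) force tangent to circles around the origin.
    For a < 0 the trajectory settles in the single minimum at the origin;
    for a > 0 it spirals into the circular valley and keeps rotating there."""
    x, y = x0
    traj = []
    for _ in range(steps):
        rho2 = x * x + y * y
        gx, gy = (rho2 - a) * x, (rho2 - a) * y     # gradient of the potential U
        cx, cy = -omega * y, omega * x              # curl flux: divergence-free rotation
        x += dt * (-gx + cx)
        y += dt * (-gy + cy)
        traj.append((x, y))
    return np.array(traj)

before = simulate(a=-0.5)   # stationary state (panel A)
after = simulate(a=1.0)     # stable oscillation along the circular valley (panels B-C)
print("final radius, a=-0.5:", np.hypot(*before[-1]))   # ~0: the sphere rests at the minimum
print("final radius, a= 1.0:", np.hypot(*after[-1]))    # ~1: the sphere rotates in the valley
```

In this toy system the valley has radius √a, so the oscillation amplitude grows with the parameter, consistent with panel C.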

Results

Small circuits composed of a few tens of neurons (neural masses) show unexpectedly complex bifurcation diagrams. Moreover, near the bifurcation points the neurons tend to synchronize their firing rates through a phenomenon called critical slowing down (see this link for further information). Therefore, at microscopic and mesoscopic scales, neurons can take advantage of a dense network of bifurcations to modulate their information processing capability.
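
Critical slowing down can be illustrated with the same one-dimensional toy system used above (a single-variable sketch of the divergence of the relaxation time, not the network analysis of [1]): the closer the parameter is to the bifurcation, the longer a small perturbation takes to decay.

```python
import numpy as np

def relaxation_time(r, eps=0.01, dt=1e-3, tol_factor=np.e):
    """Time for a small perturbation around the stable state of dx/dt = r + x**2
    to shrink by a factor e. The closer r is to the bifurcation at r = 0,
    the slower the relaxation back to the stationary state."""
    x_star = -np.sqrt(-r)            # stable fixed point for r < 0
    x = x_star + eps
    t = 0.0
    while abs(x - x_star) > eps / tol_factor:
        x += dt * (r + x * x)
        t += dt
    return t

for r in [-1.0, -0.1, -0.01, -0.001]:
    print(f"r = {r:7.3f}  ->  relaxation time ~ {relaxation_time(r):.2f}")
```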

Applications

The effect of some drugs on neural circuits can be modeled as a variation in the strength of the synaptic connections between neurons. Therefore, by calculating the bifurcation diagram of a network while globally rescaling its synaptic strengths, it is possible to quantify how drugs affect the information processing capability of the network. An example is reported in [1], where we showed that the bifurcation diagram of a model of a cortical column is strongly affected by variations in the inhibitory weights.
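
As a rough illustration of this procedure (a toy sketch, not the cortical-column model of [1]), one can rescale the inhibitory weights of a small binary network by a common factor g, standing in for the drug effect, and track how the number of stationary states changes:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 8
n_inh = 2                                         # the last two neurons are inhibitory
J = np.abs(rng.normal(0.0, 1.0, size=(N, N)))
J[:, -n_inh:] *= -1.0                             # inhibitory columns carry negative weights
np.fill_diagonal(J, 0.0)

def n_stationary_states(J, I=0.2):
    """Count fixed points of s_i = H(sum_j J_ij s_j + I) by exhaustive enumeration."""
    count = 0
    for s in itertools.product([0, 1], repeat=J.shape[0]):
        s = np.array(s)
        if np.array_equal(s, (J @ s + I > 0).astype(int)):
            count += 1
    return count

# Rescale all inhibitory weights by a common factor g (a toy proxy for a drug
# acting on inhibitory synapses) and track how the stationary states change.
for g in [0.5, 1.0, 1.5, 2.0]:
    J_drug = J.copy()
    J_drug[:, -n_inh:] *= g
    print(f"inhibitory scaling g = {g:.1f}: {n_stationary_states(J_drug)} stationary states")
```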

In Silico Mouse Brain

In what follows, I provide an intuitive explanation of the techniques and results published in our paper [6].

Purpose

The purpose of the paper is twofold:

1) To replicate in silico, namely through computer simulations of a network model, the patterns of cortical activity observed in experiments with real mouse brains, see, e.g., the video below by Prof. Michael Crair at Yale University.

2) To infer principles of brain function from the simulations, i.e., to understand why the cortical activity of real mice looks the way it does. This is possible because computer simulations are typically much simpler than the neural processes that occur in the brain. Note that our simulations attempt to simplify the mouse brain as much as possible, without neglecting the properties that give rise to the most interesting neural phenomena.

Our Approach

Our approach is described in the following picture.

Picture of the mouse brain by Elizabeth Atkinson, Washington University in St. Louis. Tract tracing and brain anatomy pictures by the Allen Institute.

Our computer simulations are constrained by multimodal data extracted experimentally from real mouse brains. The network model is composed of 34 areas or ROIs (17 for each hemisphere). In turn, each area is composed of one excitatory and one inhibitory population. The excitatory populations are recurrently connected by a 34 × 34 anatomical connectivity matrix, which was estimated from real mice by tract tracing [7].

The temporal evolution of the firing rate 𝐴(𝑡) in each population of the network (i.e., the ROI dynamics, see the picture above) is calculated through a non-linear mathematical model with a set of free parameters (see [6] for more details). Note that 𝐴(𝑡) represents a mathematical description of the spiking activity of the neural populations, and that, at the current state of technology, the whole-cortex spiking activity cannot be measured directly in real mouse brains.
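
To make the setup concrete, here is a minimal Python sketch of this kind of architecture: a generic Wilson-Cowan-style rate model with one excitatory and one inhibitory population per area and long-range coupling through a random stand-in connectivity matrix. The equations and parameter values are illustrative assumptions, not the specific model of [6].

```python
import numpy as np

rng = np.random.default_rng(3)
n_areas = 34

# Stand-in for the 34 x 34 anatomical connectivity of [7] (in [6] it is
# estimated from tract-tracing data); here it is a sparse random matrix.
C = rng.random((n_areas, n_areas)) * (rng.random((n_areas, n_areas)) < 0.2)
np.fill_diagonal(C, 0.0)

def simulate_rates(C, T=20.0, dt=1e-3, g=0.5, w_ee=1.0, w_ei=1.0, w_ie=1.0,
                   tau_e=0.02, tau_i=0.01, noise=0.01):
    """One excitatory (E) and one inhibitory (I) rate variable per area, with
    long-range coupling between the E populations through C."""
    n = C.shape[0]
    E = np.zeros(n)
    I = np.zeros(n)
    steps = int(T / dt)
    rates = np.empty((steps, n))
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))        # saturating transfer function
    for t in range(steps):
        inp_e = w_ee * E - w_ie * I + g * (C @ E)       # local recurrent + long-range input
        inp_i = w_ei * E                                # local excitation onto I
        E = E + dt / tau_e * (-E + sigmoid(inp_e)) + np.sqrt(dt) * noise * rng.standard_normal(n)
        I = I + dt / tau_i * (-I + sigmoid(inp_i))
        rates[t] = E
    return rates

A = simulate_rates(C)                       # firing rates A(t) of the 34 excitatory populations
print("simulated rate array:", A.shape)     # (time steps, areas)
```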

However, indirect measures of brain activity (such as calcium imaging, see the video above, and fMRI) can monitor the activity of macroscopic brain areas. In [6] we modeled the resting-state fMRI (rsfMRI) activity in each ROI as a function of 𝐴(𝑡); we then fitted the free model parameters to maximize the similarity between the rsfMRI signals generated by the model and the rsfMRI activity of real mouse brains [8], recorded at the Functional Neuroimaging Laboratory of the Italian Institute of Technology.
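
A common (and here purely illustrative) way to carry out this step is to map the firing rates to a BOLD-like signal by convolution with a hemodynamic response function, and to score the model by the similarity between simulated and empirical functional connectivity. The sketch below uses random stand-in data and is not the exact rsfMRI mapping or objective function of [6].

```python
import numpy as np
from scipy.stats import gamma

def bold_from_rates(A, dt=0.01):
    """Toy BOLD proxy: convolve each area's firing rate with a canonical
    double-gamma hemodynamic response function."""
    t = np.arange(0.0, 30.0, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    hrf /= hrf.sum()
    return np.column_stack([np.convolve(A[:, i], hrf)[:A.shape[0]]
                            for i in range(A.shape[1])])

def fc_similarity(bold_sim, bold_emp):
    """Fitting objective: correlation between the upper triangles of the
    simulated and empirical functional-connectivity matrices."""
    iu = np.triu_indices(bold_sim.shape[1], k=1)
    fc_sim = np.corrcoef(bold_sim.T)[iu]
    fc_emp = np.corrcoef(bold_emp.T)[iu]
    return np.corrcoef(fc_sim, fc_emp)[0, 1]

# Toy usage with random stand-ins (in [6], the empirical side is real mouse rsfMRI):
rng = np.random.default_rng(0)
A_sim = rng.random((2000, 34))                 # simulated firing rates A(t)
bold_emp = rng.standard_normal((2000, 34))     # empirical rsfMRI time series
print("FC similarity:", round(fc_similarity(bold_from_rates(A_sim), bold_emp), 3))
# The free parameters (e.g. the global coupling) would then be adjusted,
# for instance by grid search, to maximize this similarity.
```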

As discussed in the next section, the resulting mathematical model exhibits high biological plausibility, in that it can also reproduce aspects of the rsfMRI data that it was not fitted to replicate. We then applied advanced mathematical and computational techniques (see [6] for more details) to unveil the main ingredients of the network model that constrain the neural activity patterns, and to shed new light on the mechanisms of whole-cortex dynamics resulting from the concerted interaction of brain areas.

Results

Our network model can reproduce aspects of empirical rsfMRI activity it was optimized to replicate, such as the distribution of the time-averaged rsfMRI activity across the 34 cortical areas of the mouse cortex, and the static functional connectivity between areas. Moreover, the model can also replicate rsfMRI statistics it was not fitted to mimic, such as the functional connectivity after global signal regression, the time the network spends in specific regions of its state space (known as basins of attraction), and the emergence of co-activation patterns previously reported in [8].
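
For reference, global signal regression can be computed as follows (a generic sketch with random stand-in data; the actual comparisons in [6] are performed on the fitted model and the mouse recordings of [8]):

```python
import numpy as np

def global_signal_regression(bold):
    """Regress the global signal (mean across areas) out of each area's time
    course; functional connectivity is then recomputed on the residuals."""
    g = bold.mean(axis=1, keepdims=True)
    beta = (g * bold).sum(axis=0) / (g * g).sum()   # per-area least-squares fit
    return bold - g * beta

# Toy usage: a shared component inflates the raw functional connectivity,
# and global signal regression removes it (random stand-in data).
rng = np.random.default_rng(4)
bold = rng.standard_normal((1000, 34)) + 0.5 * rng.standard_normal((1000, 1))
iu = np.triu_indices(34, k=1)
fc_raw = np.corrcoef(bold.T)[iu].mean()
fc_gsr = np.corrcoef(global_signal_regression(bold).T)[iu].mean()
print(f"mean FC before GSR: {fc_raw:.3f}, after GSR: {fc_gsr:.3f}")
```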

The following video shows a few seconds of simulation of our network model, and its ability to replicate some of the most fundamental features of the empirical rsfMRI activity.

The Python script is available at this link, while a compressed folder containing the script and all the supporting files can be downloaded from this link.

Our analysis of the network model revealed many foundational principles and mechanisms of the collective neuronal dynamics in the mouse brain. Among them, the model suggests that:

a) the network activity evolves over time by switching between basins of attraction (one generic way to quantify such switching is sketched after this list),

b) the architecture of the mouse brain maximizes the complexity of collective behavior of the cortical areas, as well as the inter-hemispheric asymmetry in moment-to-moment activity (with this mechanism, the mouse brain may efficiently partition the computational load between its hemispheres, so that one can work independently from the other, while allowing at the same time inter-hemispheric information transfer),

c) complexity is highly sensitive to the presence of the weaker connections (which are also on average the longer-distance ones), thereby suggesting a possible role of those pathways in enhancing information processing in the brain,

d) the mouse brain at rest keeps the excitatory and inhibitory currents in each area balanced, and this balance is optimal for the encoding of external stimuli.
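
Regarding point (a), the sketch below shows one generic way to quantify switching between recurring activity states and the corresponding dwell times, here by clustering surrogate activity frames with k-means. It is only an illustration of the dwell-time computation, not the attractor analysis of [6].

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster the momentary activity patterns into a few recurring states and
# measure how long the network dwells in each before switching.
rng = np.random.default_rng(5)
activity = rng.standard_normal((2000, 34))            # stand-in for A(t) frames
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(activity)

dwell_times = []
run_start = 0
for t in range(1, len(labels) + 1):
    if t == len(labels) or labels[t] != labels[run_start]:
        dwell_times.append(t - run_start)             # length of the completed run
        run_start = t
print("mean dwell time (in frames):", round(float(np.mean(dwell_times)), 2))
```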

Applications

This model may be employed in future studies to make empirically testable predictions about how alterations of the anatomical connections (resulting, e.g., from injury or neurodegenerative disorders) may alter brain dynamics and its ability to process/encode information.

Cortical Scales

In this section, I briefly describe my work [9] on a new multiscale model of cortical activity (article in preparation).

Purpose

Neural networks in the human cortex appear organized at multiple spatial scales. At the coarsest (i.e., macroscopic) scale, the cortex is structured into wide, interacting areas, such as the left/right hemispheres and the cortical lobes, see panel A in the following picture.

Scales of cortical organization. The 3D model of the cortex is from the BodyParts3D/Anatomography database, while the cortical columns picture is from the Blue Brain Project.

At a finer, intermediate (i.e., mesoscopic) scale, the neurons are organized into modules containing hundreds to thousands of cells. These mesoscopic structures are called cortical columns, in that they look like cylindrical formations oriented perpendicular to the cortical surface. At the mesoscopic scale, the cortex can be studied as an interconnected network of such columns, see panel B in the picture above. Finally, at the finest (i.e., microscopic) scale, the cortical tissue is modeled as a recurrent network of neurons interconnected through axons and dendrites, see panel C.

The purpose of my work is to develop a biologically realistic network model that explains how cortical activity in humans is coordinated across these scales.

Applications

The model can be used to explain the functional role of brain size and architecture. Specifically, the model makes predictions about the function of the cortical columns, as well as of brain structures such as the anatomical connections between hemispheres (e.g., the corpus callosum). These predictions can be tested in future research and provide a basis for a deeper understanding of the foundational principles of brain activity.

 

Bibliography

[1] D. Fasoli, A. Cattani and S. Panzeri, The complexity of dynamics in small neural circuits, PLoS Computational Biology, 12(8):e1004992, 2016 (URL)

[2] D. Fasoli and S. Panzeri, Optimized brute-force algorithms for the bifurcation analysis of a binary neural network model, Physical Review E, 99(1): 012316, 2019 (URL)

[3] D. Fasoli and S. Panzeri, Stationary-state statistics of a binary neural network model with quenched disorder, Entropy, 21(7):630, 2019 (URL)

[4] Y. Ikegaya, T. Sasaki, D. Ishikawa, N. Honma, K. Tao, N. Takahashi, G. Minamisawa, S. Ujita and N. Matsuki, Interpyramid spike transmission stabilizes the sparseness of recurrent network activity, Cerebral Cortex, 23(2):293-304, 2013 (URL)

[5] H. Yan, L. Zhao, L. Hu, X. Wang, E. Wang and J. Wang, Nonequilibrium landscape theory of neural networks, Proceedings of the National Academy of Sciences, 110(45):E4185-E4194, 2013 (URL)

[6] D. Fasoli, L. Coletta, D. Gutierrez-Barragan, A. Gozzi and S. Panzeri, A model of the mouse cortex with attractor dynamics explains the structure and emergence of rsfMRI co-activation patterns, Submitted, 2022 (URL)

[7] J. E. Knox, K. D. Harris, N. Graddis, J. D. Whitesell, H. Zeng, J. A. Harris, E. Shea-Brown and S. Mihalas, High-resolution data-driven model of the mouse connectome, Network Neuroscience, 3(1):217-236, 2019 (URL)

[8] D. Gutierrez-Barragan, M. A. Basson, S. Panzeri and A. Gozzi, Infraslow state fluctuations govern spontaneous fMRI network dynamics, Current Biology, 29:2295-2306, 2019 (URL)

[9] D. Fasoli and S. Panzeri, The emergence of complexity in multiscale hierarchical networks: How biophysics, structure and size shape the dynamic repertoire of the brain, in preparation
