CNS 2013 Paris: Tutorials Program

The tutorials will be held on July 13th at the Université Paris Descartes, in the Curie and Grignard spaces (see below). Rooms will be indicated on door signs.

List of tutorials
T1: Neural mass and neural field models
Time: 9:00–12:20, 13:30–16:50
Axel Hutt (INRIA Nancy, France)

T2: Theory of correlation transfer and correlation structure in recurrent networks
Time: 9:00–12:20, 13:30–16:50
Ruben Moreno-Bote (Foundation Sant Joan de Déu, Barcelona, Spain)

T3: Modeling and interpretation of extracellular potentials
Time: 9:00–12:20, 13:30–16:50
Gaute T. Einevoll (Norwegian University of Life Sciences, Ås, Norway)

T4: Probabilistic inference as a neural-computing paradigm
Time: 13:30–16:50
Dejan Pecevski (Graz University of Technology, Graz, Austria)

T5: Brain Activity at Rest: Dynamics and Structure of the Brain in Health and Disease
Time: 9:00–12:20
Gustavo Deco (Universitat Pompeu Fabra, Spain)

T6: Developing neuron and synapse models for NEST
Time: 9:00–12:20, 13:30–16:50
Abigail Morrison (Research Center Jülich, Germany)

T7: Advanced modelling of spiking neural networks with BRIAN
Time: 9:00–12:20, 13:30–16:50
Romain Brette (École Normale Supérieure, Paris, France) (see schedule)

T8: Managing complex workflows in neural simulation and data analysis
Time: 9:00–12:20, 13:30–16:50
Andrew P. Davison (UNIC, CNRS, Gif sur Yvette)

T9: Massively Parallel Time Encoding and Channel Identification Machines
Time: 9:00–12:20, 13:30–16:50
Aurel A. Lazar (Columbia University, New York, US)

Tutorial abstracts

T1: Neural mass and neural field models
Axel Hutt (INRIA Nancy, France)

The brain exhibits dynamical processes on different spatial and temporal scales. Single neurons have a size of tens of micrometers and fire for a few milliseconds, whereas macroscopic brain activity, such as encephalographic data or the BOLD response in functional Magnetic Resonance Imaging, evolves on a millimeter or centimeter scale over tens of milliseconds. To understand the relation between these two scales, it is helpful to consider the mesoscopic scale of neural populations that lies between them. Moreover, it has been found experimentally that neural populations encode and decode cognitive functions. The tutorial presents a specific type of rate-coding model that is both mathematically tractable and experimentally verifiable. It starts with a physiological motivation of the model, followed by mathematical analysis techniques applied to explain experimental data and by applications to epilepsy and robotics. In detail:

Fundamentals of Neural Mass and Neural Field Models: physiological motivation and mathematical descriptions (9:00–10:30, Axel Hutt)
The talk will give a physiological motivation for the mathematical description of neural populations based on microscopic neural properties. Several different mathematical models will be discussed, explained and compared. The major aim of this presentation is to explain the standard models found in the literature in terms of their mathematical elements and physiological assumptions.

Pattern formation in neural network systems and applications to movement dynamics (10:45–12:20, Viktor Jirsa)
We will introduce and systematically evaluate the mechanisms underlying pattern formation in neuronal networks. A particular focus will be given to the contribution of network connectivity. We will derive the conditions that aid the emergence of low-dimensional sub-spaces in which nonlinear dynamic flows are constrained to manifolds. Applications of these concepts and experimental examples will be provided from the field of human movement sciences.

Brain Networks in Epilepsy: Fusing Models and Clinical Data (13:30–15:00, John Terry)
We will explore how both physiological and phenomenological mathematical models can be developed to understand the mechanisms that underpin transitions in clinically recorded EEG between activity corresponding to normal function and the pathological activity associated with epilepsy. We present evidence that the bifurcation sequences of physiologically inspired models give rise to dynamical sequences that precisely map onto those observed in both focal and generalised seizures. We further demonstrate that seizure frequency can be understood through the relationship between the dynamics of brain regions and the network structures that connect them. Our mathematical models help to explain previously counterintuitive experimental studies demonstrating that loss of connectivity paradoxically makes generalised seizures more likely.

The dynamic neural field approach to cognitive robotics (15:15–16:45, Wolfram Erlhagen)
In recent years, there has been increased interest on the part of the robotics community in using the theoretical framework of dynamic neural fields to develop neuro-inspired control architectures for autonomous agents. The formation of self-sustained activity patterns in neural populations, explained by the theory, offers a systematic way to endow robots with cognitive functions such as working memory, decision making, prediction and anticipation.
In the tutorial, I will present a Dynamic Neural Field architecture for natural human-robot collaboration that is heavily inspired by the neuro-cognitive mechanisms supporting joint action in humans and other primates. The model is formalized as a large-scale network of reciprocally connected neural populations, each governed by classical field dynamics of the Amari type.
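For orientation, the Amari-type field dynamics referred to above has the form tau*du(x,t)/dt = -u + ∫ w(x - x') f(u(x', t)) dx' + I(x, t). The following is a minimal numerical sketch of such a 1-D field; the kernel, rate function and all parameter values are illustrative choices, not the ones used in the tutorial.

```python
import numpy as np

# Minimal 1-D Amari-type neural field:
#   tau * du(x,t)/dt = -u + \int w(x - x') f(u(x', t)) dx' + I(x, t)
# Kernel, rate function and parameters are illustrative only.
L, N, dt, tau = 10.0, 256, 0.05, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

def w(d):                                   # "Mexican hat": local excitation, broader inhibition
    return 1.5 * np.exp(-d**2 / 2.0) - 0.75 * np.exp(-d**2 / 8.0)

def f(u):                                   # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))

W = w(np.abs(x[:, None] - x[None, :]))      # discretized connectivity w(x - x')
u = 0.05 * np.random.rand(N)                # initial field
I = np.where(np.abs(x) < 1.0, 1.0, 0.0)     # transient localized input

for step in range(4000):
    recurrent = W @ f(u) * dx               # \int w(x - x') f(u(x', t)) dx'
    u += dt / tau * (-u + recurrent + (I if step < 800 else 0.0))

# After the input is switched off, a self-sustained activity bump may persist,
# the mechanism behind the working-memory function mentioned in the abstract.
print(u.max(), u.min())
```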
T2: Theory of correlation transfer and correlation structure in recurrent networks
Ruben Moreno-Bote (Foundation Sant Joan de Déu, Barcelona, Spain)
In the first part, we will study correlations arising from pairs of neurons sharing common fluctuations and/or inputs. Using integrate-and-fire neurons, we will show how to compute the firing rate, auto-correlation and cross-correlation functions of the output spike trains. The transfer function from input correlations to output correlations will be discussed. We will show that the output correlations are generally weaker than the input correlations [Moreno-Bote and Parga, 2006], that the shape of the cross-correlation functions depends on the working regime of the neuron [Ostojic et al., 2009; Helias et al., 2013], and that the output correlations strongly depend on the output firing rate of the neurons [de la Rocha et al., 2007]. We will also study generalizations of these results to the case where the pair of neurons is reciprocally connected.
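As a deliberately simplified illustration of this setup, the sketch below drives two leaky integrate-and-fire neurons with a shared fluctuating input plus independent noise and measures the spike-count correlation of their outputs. All parameter values are ad hoc and chosen only to make the qualitative effect (output correlations weaker than input correlations) visible.

```python
import numpy as np

# Toy correlation-transfer experiment: two LIF neurons share a fraction c of
# their input fluctuations; we compare the input correlation c with the
# spike-count correlation of the outputs. All parameters are illustrative.
def lif_pair(c, T=200_000, dt=0.1, tau=20.0, v_th=1.0, v_r=0.0, mu=0.8, sigma=0.5):
    rng = np.random.default_rng(0)
    v = np.zeros(2)
    spikes = np.zeros((2, T))
    for t in range(T):
        shared = rng.standard_normal()
        private = rng.standard_normal(2)
        xi = np.sqrt(c) * shared + np.sqrt(1.0 - c) * private   # correlated input noise
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * xi
        fired = v >= v_th
        spikes[:, t] = fired
        v[fired] = v_r
    return spikes

for c in (0.1, 0.3, 0.5):
    counts = lif_pair(c).reshape(2, 40, 5000).sum(axis=2)   # spike counts in 500 ms windows
    rho = np.corrcoef(counts)[0, 1]
    print(f"input correlation {c:.1f} -> output spike-count correlation {rho:.2f}")
```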
In the second part, we will consider correlations in recurrent random networks. Using a binary neuron model [Ginzburg & Sompolinsky, 1994], we explain how mean-field theory determines the stationary state and how network-generated noise linearizes the single-neuron response. The resulting linear equation for the fluctuations in recurrent networks is then solved to obtain the correlation structure in balanced random networks. We discuss two different points of view on the recently reported active suppression of correlations in balanced networks: by fast tracking [Renart et al., 2010] and by negative feedback [Tetzlaff et al., 2012]. Finally, we consider extensions of the theory of correlations of linear Poisson spiking models [Hawkes, 1971] to the leaky integrate-and-fire model and present a unifying view of linearized theories of correlations [Helias et al., 2011].
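One closed-form result of the linear (Hawkes-type) theories mentioned here, written in assumed notation (K(ω) for the Fourier-transformed matrix of interaction kernels, r for the vector of stationary rates), is the cross-spectral matrix of the network activity:

```latex
% Frequency-domain correlation (cross-spectral) matrix of a linearly
% interacting point-process network, in the spirit of Hawkes [1971]:
C(\omega) = \left[\mathbf{1} - K(\omega)\right]^{-1}
            \operatorname{diag}(r)\,
            \left[\mathbf{1} - K(\omega)^{\dagger}\right]^{-1}
```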
Lastly, we will revisit the important question of how correlations affect information and vice versa [Zohary et al., 1994] in neuronal circuits, presenting novel results on the information content of recurrent networks of integrate-and-fire neurons [Moreno-Bote and Pouget, Cosyne abstracts, 2011].
T3: Modeling and interpretation of extracellular potentials
Gaute T. Einevoll (Norwegian University of Life Sciences, Ås, Norway)
In the second part, the participants will get demonstrations and hands-on experience with
T4: Probabilistic inference as a neural-computing paradigm
Dejan Pecevski (Graz University of Technology, Graz, Austria)

Probabilistic inference has proven to be a very suitable framework for explaining many of the computations that the brain performs in the face of the great amount of uncertainty present in its sensory inputs and its internal representations of the world [Rao et al., 2002; Fiser et al., 2010; Tenenbaum et al., 2011; Kording et al., 2004]. However, it remains an open question how these probabilistic inference computations are implemented in the neural circuits of the brain. In this tutorial we will present recent results that give new perspectives on how probabilistic inference and learning could be carried out by networks of spiking neurons.
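To make the notion of inference by stochastic network dynamics concrete, here is a toy network of stochastic binary units performing Gibbs sampling from a Boltzmann distribution. This is only one illustrative instance of sampling-based probabilistic inference, not necessarily the specific spiking-network mechanisms presented in the tutorial, and all weights and biases are made up.

```python
import numpy as np

# Tiny network of stochastic binary units whose update rule performs Gibbs
# sampling from p(z) ∝ exp(z'Wz/2 + b'z); the long-run statistics of the
# network state approximate the marginals of this distribution.
rng = np.random.default_rng(1)
W = np.array([[0.0,  1.2, -0.8],
              [1.2,  0.0,  0.5],
              [-0.8, 0.5,  0.0]])           # symmetric, zero diagonal
b = np.array([-0.2, 0.1, 0.3])

z = rng.integers(0, 2, size=3).astype(float)
samples = []
for step in range(20_000):
    i = step % 3                                     # update one unit at a time
    p_on = 1.0 / (1.0 + np.exp(-(W[i] @ z + b[i])))  # p(z_i = 1 | rest of network)
    z[i] = float(rng.random() < p_on)
    if step > 2_000:                                 # discard burn-in
        samples.append(z.copy())

print("sampled marginals p(z_i = 1):", np.mean(samples, axis=0))
```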
T5: Brain Activity at Rest: Dynamics and Structure of the Brain in Health and Disease
Gustavo Deco (Universitat Pompeu Fabra, Spain)
Perceptions, memories, emotions, and everything that makes us human demand the flexible integration of information represented and computed in a distributed manner. The human brain is structured into a large number of areas in which information and computation are highly segregated. Normal brain function requires the integration of functionally specialized but widely distributed brain areas. We contend that the functional and encoding roles of diverse neuronal populations across areas are subject to intra- and inter-cortical dynamics. In this tutorial, we try to elucidate precisely the interplay and mutual entrainment between local brain-area dynamics and global network dynamics, in order to understand how segregated, distributed information and processing are integrated. We can deepen our understanding of the mechanisms underlying brain functions by complementing structural and activation-based analyses with dynamics. In particular, a large body of fMRI, MEG, EEG, and optical imaging experiments reveals that the ongoing activity of the brain at rest is not trivial but highly structured in very specific spatio-temporal patterns known as Resting State Networks (RSNs). Indeed, the Functional Connectivity (FC) at rest, i.e. the spatial correlation matrix between the temporal signals reflecting spontaneous brain activity at different positions, is topologically very well structured according to the underlying RSNs. A profound understanding of these operations will help to elucidate the computational principles underlying higher brain functions and their breakdown in brain diseases. Thus, we will discuss the effects of lesions and of different types of damage in neuropsychiatric disorders on the resting state.
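As a pointer to what the functional connectivity computation amounts to in practice, the sketch below builds an FC matrix as the region-by-region correlation matrix of BOLD time series. The data here are random placeholders, and the region and time-point counts are arbitrary.

```python
import numpy as np

# Sketch of a resting-state functional connectivity (FC) matrix: the pairwise
# Pearson correlation between regional BOLD time series. Random data stand in
# for a (regions x time points) array of resting-state signals.
n_regions, n_timepoints = 66, 1200
bold = np.random.default_rng(0).standard_normal((n_regions, n_timepoints))

fc = np.corrcoef(bold)     # fc[i, j] = correlation between regions i and j
print(fc.shape)            # (66, 66): symmetric, ones on the diagonal
```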
T6: Developing neuron and synapse models for NEST
Abigail Morrison (Research Center Jülich, Germany)

The neural simulation tool NEST [1] is a simulator for heterogeneous networks of point neurons, or neurons with a small number of electrical compartments, aimed at simulations of large neural systems. It is implemented in C++ and runs on a large range of architectures, from single-processor desktop computers to large clusters with thousands of processor cores. This tutorial is an extension course for anybody who is already working with the neural simulation tool NEST or has other experience with the simulation of networks of point neuron models. Some programming background in C++ is helpful but not required. We will start with a refresher on setting up networks of neurons, focussing on customising the neuronal and synaptic parameters before, during and after creation of the network elements. In the second part we will provide a hands-on demonstration of how to develop a new neuron or synapse model for NEST. We will start from a skeleton and follow the process through to using the new model in a network simulation.
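For readers new to the workflow the first part revisits, a minimal PyNEST sketch of setting neuronal parameters at creation time and synaptic parameters at connection time might look like the following. The model names are standard NEST models, but exact call signatures vary between NEST versions, so treat this as a sketch rather than version-accurate code.

```python
import nest

nest.ResetKernel()

# Neuronal parameters can be customised at creation time ...
exc = nest.Create("iaf_psc_alpha", 80, params={"V_th": -55.0, "tau_m": 10.0})
inh = nest.Create("iaf_psc_alpha", 20, params={"V_th": -55.0})
noise = nest.Create("poisson_generator", params={"rate": 8000.0})

# ... and synaptic parameters at connection time.
nest.Connect(noise, exc, syn_spec={"weight": 50.0, "delay": 1.5})
nest.Connect(noise, inh, syn_spec={"weight": 50.0, "delay": 1.5})
nest.Connect(exc, inh,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 30.0, "delay": 1.5})

# Parameters can also be changed after creation.
nest.SetStatus(inh, {"tau_m": 8.0})

nest.Simulate(1000.0)
```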
T7: Advanced modelling of spiking neural networks with BRIAN
Romain Brette (École Normale Supérieure, Paris, France) (see schedule)
Brian [1,2] is a simulator for spiking neural networks, written in the Python programming language. It focuses on making the writing of simulation code as quick as possible and on flexibility: new and non-standard models can be readily defined using mathematical notation. This tutorial will present the current state of development of Brian and will enable participants to adapt and extend Brian to their needs. It will also cover existing Brian extensions (Brian Hears [3], the model fitting toolbox [4], compartmental modelling) and introduce "best practices" for complex simulations. Furthermore, it will present strategies for improving the speed of simulations by using Brian's C code generation mechanism [5].
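To give a flavour of the equation-based model definition Brian is built around, here is a minimal example in current Brian 2 syntax (the tutorial itself targeted the Brian version of the time, so details differ); the model and parameter values are arbitrary.

```python
from brian2 import *

# A leaky integrate-and-fire population defined directly from its equations;
# non-standard models are obtained simply by editing the equation string.
tau = 10*ms
eqs = 'dv/dt = (1.2 - v) / tau : 1'

G = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
G.v = 'rand()'                      # random initial conditions

M = SpikeMonitor(G)
run(100*ms)
print(M.num_spikes, "spikes")
```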
T8: Managing complex workflows in neural simulation and data analysis
Andrew P. Davison (UNIC, CNRS, Gif sur Yvette)

In our attempts to uncover the mechanisms that govern brain processing on the level of interacting neurons, neuroscientists have taken on the challenge of tackling the sheer complexity exhibited by neuronal networks. Neuronal simulations are nowadays performed with a high degree of detail, covering large, heterogeneous networks. Experimentally, electrophysiologists can simultaneously record from hundreds of neurons in complicated behavioral paradigms. The data streams of simulation and experiment are thus highly complex; moreover, their analysis becomes most interesting when considering their intricate correlative structure.
T9: Massively Parallel Time Encoding and Channel Identification Machines
Aurel A. Lazar (Columbia University, New York, US)
This two-part tutorial focuses on Time Encoding Machines (Part I) and Channel Identification Machines (Part II). The tutorial will give an overview of (i) nonlinear decoding of stimuli encoded with neural circuits built from biophysical neuron models, (ii) functional identification of biophysical neural circuits, and (iii) the duality between the two. Scaling to massively parallel neural circuits, for both encoding and functional identification, will be discussed throughout. The tutorial will provide numerous examples of neural decoding and functional identification using the Time Encoding Machines Toolbox and the Channel Identification Machines Toolbox. Tutorial material, programming code and demonstrations will be provided.
Part I: Time Encoding Machines
The nature of the neural code is fundamental to theoretical and systems neuroscience [1]. Can information about the sensory world be faithfully represented by a population of sensory neurons? What features of the stimulus are encoded by a multidimensional spike train? How can these features be decoded? Why does the cochlear nerve carry some 30,000 fibers and the optic nerve some 1,000,000? We will discuss these questions using a class of neural encoding circuits called Time Encoding Machines (TEMs) [2]. TEMs model the encoding of stimuli in early sensory systems with neural circuits with arbitrary connectivity and feedback. These circuits are realized with temporal, spectro-temporal and/or spatio-temporal receptive fields, and biophysical neuron models with stochastic conductances (Hodgkin-Huxley, Morris-Lecar, etc.) [3, 4, 5, 6]. The tutorial will review key theoretical results and provide numerous examples of massively parallel neural encoding circuits and stimulus decoding algorithms with the Time Encoding Machines Toolbox.

Part II: Channel Identification Machines
Parameter estimation is at the core of functional identification of neural circuits. How are estimates of model parameters affected by the stimuli employed in neurophysiology? What is a suitable metric to assess the faithfulness of identified parameters and the goodness of model performance? These are key open questions that are of relevance to both theoretical and experimental neuroscientists. We will discuss these questions using a class of algorithms called Channel Identification Machines (CIMs) [7, 8, 9] and give an overview of the functional identification of massively parallel neural circuit models of sensory systems arising in olfaction, audition and vision. These circuits are built with temporal, spectro-temporal and non-separable spatio-temporal receptive fields, and biophysical spiking neuron models. The tutorial will demonstrate how CIMs achieve the efficient identification of neural circuit models and will provide numerous examples of functional identification using the Channel Identification Machines Toolbox.
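As one concrete instance of time encoding, the sketch below implements an ideal integrate-and-fire time encoder: the stimulus u(t) plus a bias b is integrated against an integration constant κ until a threshold δ is reached, a spike time is recorded, and the integrator resets. The function name, parameter values and test stimulus are our own illustrative choices, not taken from the toolboxes mentioned above.

```python
import numpy as np

def iaf_encode(u, dt, b=1.0, kappa=1.0, delta=0.02):
    """Ideal integrate-and-fire time encoder (one concrete TEM).

    Integrates (b + u(t)) / kappa and records a spike time whenever the
    running integral reaches delta, then subtracts delta (reset)."""
    spike_times, y = [], 0.0
    for i, ui in enumerate(u):
        y += dt * (b + ui) / kappa
        if y >= delta:
            spike_times.append(i * dt)
            y -= delta
    return np.array(spike_times)

# Encode a toy band-limited stimulus; the spike times alone carry the stimulus
# information, which is the basis of the decoding results discussed in Part I.
dt = 1e-4
t = np.arange(0.0, 0.2, dt)
u = 0.4 * np.sin(2 * np.pi * 30 * t) + 0.2 * np.sin(2 * np.pi * 70 * t)
spikes = iaf_encode(u, dt)
print(len(spikes), "spikes in", t[-1], "seconds")
```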