CNS 2014 Québec City: Tutorials

The tutorials will be held on July 26th at the Québec City Conference Center.
List of Tutorials

The following list will be updated as more tutorials are confirmed and as abstracts become available.

T1: The Neural Engineering Framework (NEF): A General Purpose Method for Building Spiking Neuron Models - Chris Eliasmith and Terrence Stewart, University of Waterloo, Canada
T2: Themes in Computational Neuroendocrinology - Joel Tabak, Florida State University, USA
T3: Theory of correlation transfer and correlation structure in recurrent networks - Ruben Moreno-Bote, Foundation Sant Joan de Deu, Barcelona, Spain
T4: Modeling and analysis of extracellular potentials - Gaute Einevoll (Norwegian University of Life Sciences, Ås, Norway) and others
T5: NEURON Simulation Software - Bill Lytton (SUNY Downstate Medical Center, USA) and others
T6: Constructing biologically realistic neuron and network models with GENESIS - Hugo Cornelis, University of Texas Health Science Center, San Antonio, USA
T7: Modeling of Spiking Neural Networks with BRIAN - Romain Brette (Institut de la Vision, Paris, France) and others
T8: Simulating large-scale spiking neuronal networks with NEST - Jochen M. Eppler & Jannis Schücker (Research Center Jülich, Germany)
T9: Neuronal Model Parameter Search Techniques

Tutorial Abstracts

T1: The Neural Engineering Framework (NEF): A General Purpose Method for Building Spiking Neuron Models
Chris Eliasmith and Terrence Stewart, University of Waterloo, Canada

We have recently created the world's largest biologically plausible brain model capable of performing several perceptual, motor, and cognitive tasks (Eliasmith et al., 2012). The model uses 2.5 million spiking neurons, takes visual input from a 28x28-pixel visual field, and controls a physically modelled arm. It has been shown to match a wide variety of neurophysiological and behavioral measures from animals and humans performing the same tasks. This tutorial introduces the software toolkit (Nengo) and the theoretical background (NEF) so that other researchers can use the same methods to explore a wide variety of brain functions.
We will focus on the underlying theory of the Neural Engineering Framework (NEF; Eliasmith and Anderson, 2003), a general method for implementing large-scale, nonlinear dynamics with spiking neurons. Our emphasis will be on building such models through the GUI and scripting interface of our open-source toolkit Nengo. We will help participants construct networks that perform linear and nonlinear computations in high-dimensional state spaces, including arbitrary attractor networks (point, line, cyclic, chaotic), controlled oscillators and filters, and winner-take-all networks. We will discuss how such networks can either be learned online with a spike-based learning rule or constructed more efficiently offline. If time permits, the tutorial will introduce our Semantic Pointer Architecture (Eliasmith, 2013), encapsulated in a Python module for Nengo that can be used to rapidly implement large-scale cognitive models including (basic) visual processing, motor control, working memory, associative memory, and cognitive control.

Audience: All participants are encouraged to bring a laptop for installing and running Nengo (Linux, OS X, and Windows versions are provided), allowing for hands-on interaction with the models discussed.
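The NEF's first two principles, nonlinear encoding into spiking (here, rate) activity and weighted linear decoding, can be illustrated without Nengo in a few lines of NumPy. The sketch below is a toy illustration with made-up tuning parameters, not Nengo's API: a population of rectified-linear neurons encodes a scalar, and least-squares decoders recover it from the population activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of 50 rate neurons encoding a scalar x in [-1, 1].
# Encoders (preferred directions), gains, and biases are random, as in the NEF.
n = 50
encoders = rng.choice([-1.0, 1.0], size=n)
gains = rng.uniform(0.5, 2.0, size=n)
biases = rng.uniform(-1.0, 1.0, size=n)

def rates(x):
    """Rectified-linear tuning curves: a_i(x) = max(0, gain_i * e_i * x + bias_i)."""
    return np.maximum(0.0, gains * encoders * x + biases)

# NEF decoding: solve for linear decoders d minimizing ||A d - x||^2
# over sample points, so that x_hat = sum_i a_i(x) * d_i.
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])          # (200, n) activity matrix
d, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = rates(0.3) @ d                        # decoded estimate of x = 0.3
```

Nonlinear functions of x are decoded the same way by replacing the target `xs` with `f(xs)` in the least-squares solve; connection weights between populations are then outer products of decoders and encoders.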
T2: Themes in Computational Neuroendocrinology
Joel Tabak, Florida State University, USA

Computational neuroendocrinology brings together the various efforts, at different levels of organization, to better understand neuroendocrine regulation using computational models. Neuroendocrine systems are organized in "endocrine axes". Each axis includes neuronal populations in the hypothalamus, cells in the pituitary gland that release one or multiple hormones, and the target organ of this particular set of hormones. Computational models describe the activity of hypothalamic neurons and how these neuroendocrine cells regulate the activity of pituitary cells that secrete hormones such as growth hormone, prolactin, and luteinizing hormone. They may also describe how hormones released by target organs in response to pituitary hormones, such as steroids, feed back and affect hypothalamo-pituitary regulation. One recurring theme is to understand how these regulatory loops can produce pulsatile patterns of hormone secretion, and how target cells interpret these pulsatile patterns.

In this tutorial we will present examples of models that illustrate important themes in computational neuroendocrinology. These models range from the single-cell level to the network level and, further, to the multi-organ level. They emphasize some key features of neuroendocrine systems: endocrine cells have broad action potentials and bursts that rely more on voltage-dependent Ca2+ than Na+ channels; the main transmitters of neuroendocrine regulation bind not to receptor channels but to G-protein-coupled receptors that trigger second-messenger cascades, leading to protein phosphorylation or gene expression; as a result, neuroendocrine regulation operates not at the millisecond time scale but at much slower time scales, from seconds to days.
T3: Theory of correlation transfer and correlation structure in recurrent networks
Ruben Moreno-Bote, Foundation Sant Joan de Deu, Barcelona, Spain

In the first part, we will study correlations arising from pairs of neurons sharing common fluctuations and/or inputs. Using integrate-and-fire neurons, we will show how to compute the firing rate, auto-correlation, and cross-correlation functions of the output spike trains. The transfer function from input to output correlations will be discussed. We will show that the output correlations are generally weaker than the input correlations [Moreno-Bote and Parga, 2006], that the shape of the cross-correlation functions depends on the working regime of the neuron [Ostojic et al., 2009; Helias et al., 2013], and that the output correlations strongly depend on the output firing rate of the neurons [de la Rocha et al., 2007]. We will study generalizations of these results to the case where the pair of neurons is reciprocally connected.

In the second part, we will consider correlations in recurrent random networks. Using a binary neuron model [Ginzburg & Sompolinsky, 1994], we explain how mean-field theory determines the stationary state and how network-generated noise linearizes the single-neuron response. The resulting linear equation for the fluctuations in recurrent networks is then solved to obtain the correlation structure of balanced random networks. We discuss two different points of view on the recently reported active suppression of correlations in balanced networks: fast tracking [Renart et al., 2010] and negative feedback [Tetzlaff et al., 2012]. Finally, we consider extensions of the theory of correlations of linear Poisson spiking models [Hawkes, 1971] to the leaky integrate-and-fire model and present a unifying view of linearized theories of correlations [Helias et al., 2011].
Lastly, we will revisit the important question of how correlations affect information content and vice versa [Zohary et al., 1994] in neuronal circuits, showing novel results on the information content of recurrent networks of integrate-and-fire neurons [Moreno-Bote and Pouget, Cosyne abstracts, 2011].
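The correlation-transfer setting of the first part can be sketched numerically: two leaky integrate-and-fire neurons receive partially shared Gaussian input with correlation c, and the correlation of their output spike counts is then measured. All parameter values below are illustrative, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two LIF neurons driven by partially shared white noise:
# input_i = sqrt(c) * xi_common + sqrt(1 - c) * xi_private_i,
# so the inputs have correlation coefficient c.
c = 0.5
dt, T = 0.1, 200_000                 # time step (ms) and number of steps (20 s)
tau, v_th, v_reset = 20.0, 1.0, 0.0  # membrane time constant, threshold, reset
mu, sigma = 0.8, 0.6                 # subthreshold mean drive, noise amplitude
scale = sigma * np.sqrt(dt / tau)

common = rng.standard_normal(T)
spikes = np.zeros((2, T), dtype=bool)
for i in range(2):
    private = rng.standard_normal(T)
    noise = np.sqrt(c) * common + np.sqrt(1 - c) * private
    v = 0.0
    for t in range(T):
        v += dt / tau * (mu - v) + scale * noise[t]
        if v >= v_th:
            spikes[i, t] = True
            v = v_reset

# Output correlation: correlate the two spike counts in 50 ms windows.
win = int(50 / dt)
counts = spikes[:, : T // win * win].reshape(2, -1, win).sum(axis=2)
rho_out = np.corrcoef(counts)[0, 1]
```

With settings like these, the measured output correlation `rho_out` typically comes out positive but well below the input correlation c, in line with the correlation suppression discussed in [Moreno-Bote and Parga, 2006].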
T4: Modeling and analysis of extracellular potentials
Gaute Einevoll (Norwegian University of Life Sciences, Ås, Norway), Szymon Łęski (Nencki Institute of Experimental Biology, Warsaw, Poland), Espen Hagen (Norwegian University of Life Sciences, Ås, Norway)

While extracellular electrical recordings have been the main workhorse of electrophysiology, the interpretation of such recordings is not trivial [1,2,3]. The recorded extracellular potentials in general stem from a complicated sum of contributions from all transmembrane currents of the neurons in the vicinity of the electrode contact. The duration of spikes, the extracellular signatures of neuronal action potentials, is so short that the high-frequency part of the recorded signal, the multi-unit activity (MUA), can often be sorted into spiking contributions from the individual neurons surrounding the electrode [4]. No such simplifying feature aids us in the interpretation of the low-frequency part, the local field potential (LFP). To take full advantage of the new generation of silicon-based multielectrodes recording from tens, hundreds, or thousands of positions simultaneously, we thus need to develop new data analysis methods grounded in the underlying biophysics [1,3,4]. This is the topic of the present tutorial. In the first part of this tutorial we will go through
In the second part, the participants will get demonstrations and, if desired, hands-on experience with
Further, new results from applying the biophysical forward-modelling scheme to predict LFPs from comprehensive structured network models, in particular
will be presented.

References
[1] KH Pettersen et al, "Extracellular spikes and CSD", in Handbook of Neural Activity Measurement, Cambridge (2012)
[2] G Buzsaki et al, Nature Reviews Neuroscience 13:407 (2012)
[3] GT Einevoll et al, Nature Reviews Neuroscience 14:770 (2013)
[4] GT Einevoll et al, Curr Opin Neurobiol 22:11 (2012)
[5] G Holt, C Koch, J Comp Neurosci 6:169 (1999)
[6] J Gold et al, J Neurophysiol 95:3113 (2006)
[7] KH Pettersen and GT Einevoll, Biophys J 94:784 (2008)
[8] KH Pettersen et al, J Comp Neurosci 24:291 (2008)
[9] H Lindén et al, J Comp Neurosci 29:423 (2010)
[10] H Lindén et al, Neuron 72:859 (2011)
[11] S Łęski et al, PLoS Comp Biol 9:e1003137 (2013)
[12] KH Pettersen et al, J Neurosci Meth 154:116 (2006)
[13] S Łęski et al, Neuroinform 5:207 (2007)
[14] S Łęski et al, Neuroinform 9:401 (2011)
[15] J Potworowski et al, Neural Comp 24:541 (2012)
[16] GT Einevoll et al, J Neurophysiol 97:2174 (2007)
[17] P Blomquist et al, PLoS Comp Biol 5:e1000328 (2009)
[18] SL Gratiy et al, Front Neuroinf 5:32 (2011)
[19] H Lindén et al, Front Neuroinf 7:41 (2014)
[20] ML Hines et al, Front Neuroinf 3:1 (2009)
[21] R Traub et al, J Neurophysiol 93:2194 (2005)
[22] TC Potjans and M Diesmann, Cereb Cort 24:785 (2014)
[23] E Hagen et al, BMC Neuroscience 14(Suppl 1):P119 (2013)
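The core of the biophysical forward-modelling scheme is volume-conductor theory: in its simplest, point-source form (cf. [5]), the extracellular potential is the sum over transmembrane currents, phi(r) = sum_n I_n / (4 * pi * sigma * r_n). The sketch below uses an illustrative conductivity and a toy two-compartment sink/source pair; the numbers are not from the tutorial material.

```python
import numpy as np

sigma = 0.3  # extracellular conductivity in S/m (a typical cortical value)

def lfp_point_source(currents, source_pos, electrode_pos):
    """Point-source approximation of volume-conductor theory:
    phi = sum_n I_n / (4 * pi * sigma * r_n)."""
    r = np.linalg.norm(source_pos - electrode_pos, axis=1)  # distances in m
    return np.sum(currents / (4 * np.pi * sigma * r))

# Toy two-compartment neuron: current conservation forces the
# transmembrane currents to sum to zero (a current sink and a source).
sources = np.array([[0.0, 0.0, 0.0],        # sink (e.g. synaptic input site)
                    [0.0, 0.0, 200e-6]])    # return current 200 um away
currents = np.array([-1e-9, 1e-9])          # A
electrode = np.array([50e-6, 0.0, 0.0])     # electrode 50 um from the sink

phi = lfp_point_source(currents, sources, electrode)  # potential in volts
```

Because the electrode sits closer to the sink than to the source, `phi` is negative and of microvolt order, the magnitude typical of measured LFPs; summing such contributions over all compartments of all neurons in a network model is exactly what dedicated forward-modelling tools automate.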
T5: NEURON Simulation Software
Bill Lytton (SUNY Downstate Medical Center, USA) and others (half day)

This half-day tutorial will focus on several features that have recently been added to the NEURON simulation environment, as well as highlighting older features that have had recent upgrades. Questions are encouraged during each talk and during the time set aside at the end of each talk. Presentations will include the following:
T6: Constructing biologically realistic neuron and network models with GENESIS
Hugo Cornelis, University of Texas Health Science Center, San Antonio, USA
T7: Modeling of Spiking Neural Networks with BRIAN
Romain Brette, Marcel Stimberg, Pierre Yger (Institut de la Vision, Paris, France), Dan Goodman (Harvard Medical School, Boston, USA), and Bertrand Fontaine (KU Leuven, Belgium)

Brian [1,2] is a simulator for spiking neural networks written in the Python programming language. It focuses on flexibility and on making the writing of simulation code as quick as possible: new and non-standard models can be readily defined using mathematical notation [3]. This tutorial will be based on Brian 2, the Brian version currently under development. In the morning, we will give an introduction to Brian and an overview of the existing Brian extensions (Brian Hears [4], the model fitting toolbox [5], compartmental modelling). In the afternoon, more advanced topics will be covered: extending Brian; code generation [6], including the generation of "standalone code"; and contributing to Brian. More details of the agenda for the tutorial, along with teaching material, will be posted here: http://briansimulator.org/brian-tutorial-at-cns-2014/.

References:
[1] http://briansimulator.org
[2] Goodman DFM and Brette R (2009). The Brian simulator. Front Neurosci. doi:10.3389/neuro.01.026.2009
[3] Stimberg M, Goodman DFM, Benichoux V, and Brette R (2014). Equation-oriented specification of neural models for simulations. Frontiers in Neuroinformatics 8. doi:10.3389/fninf.2014.00006
[4] Fontaine B, Goodman DFM, Benichoux V, and Brette R (2011). Brian Hears: online auditory processing using vectorisation over channels. Frontiers in Neuroinformatics 5:9. doi:10.3389/fninf.2011.00009
[5] Rossant C, Goodman DFM, Platkiewicz J, and Brette R (2010). Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics. doi:10.3389/neuro.11.002.2010
[6] Goodman DFM (2010). Code generation: a strategy for neural network simulators. Neuroinformatics. doi:10.1007/s12021-010-9082-x

T8: Simulating large-scale spiking neuronal networks with NEST
Jochen M. Eppler & Jannis Schücker (Research Center Jülich, Germany)

The neural simulation tool NEST [1, www.nest-simulator.org] is a simulator for heterogeneous networks of point neurons, or neurons with a small number of electrical compartments, aimed at simulations of large neural systems. It is implemented in C++ and runs on a wide range of architectures, from single-processor desktop computers to large clusters and supercomputers with thousands of processor cores.
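What simulators such as NEST do at scale can be caricatured in a few lines of NumPy. The sketch below is an illustrative toy network of leaky integrate-and-fire point neurons with random sparse connectivity, not NEST code and not its API (NEST itself is driven through its own simulation language or a Python interface); every parameter value here is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse random network of excitatory and inhibitory LIF point neurons,
# the kind of model NEST simulates at vastly larger scale.
n_exc, n_inh = 200, 50
n = n_exc + n_inh
p = 0.1                                   # connection probability
J = 0.2                                   # excitatory synaptic jump
g = 5.0                                   # relative strength of inhibition
W = (rng.random((n, n)) < p) * J          # random sparse weight matrix
W[:, n_exc:] *= -g                        # inhibitory columns are negative
np.fill_diagonal(W, 0.0)                  # no self-connections

dt, tau, v_th = 0.1, 20.0, 1.0            # ms step, membrane tau, threshold
ext = 1.2                                 # suprathreshold external drive
v = rng.random(n)                         # random initial membrane potentials
total_spikes = 0
for _ in range(5000):                     # simulate 500 ms
    v += dt / tau * (ext - v)             # leaky integration of external drive
    fired = v >= v_th
    total_spikes += int(fired.sum())
    v[fired] = 0.0                        # reset spiking neurons
    v += W @ fired                        # deliver spikes (no synaptic delay)

rate = total_spikes / n / 0.5             # mean firing rate in spikes/s
```

A real simulator adds what this toy omits: exact spike timing, synaptic delays, rich neuron and synapse models, and distribution of neurons and synapses over many processes, which is precisely why tools like NEST exist.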
T9: Neuronal Model Parameter Search Techniques
Cengiz Günay, Anca Doloc-Mihu (Emory University, USA), Vladislav Sekulić (University of Toronto, Canada), Tomasz G. Smolinski (Delaware State University, USA)

References
[2] Cengiz Günay, Jeremy R. Edgerton, and Dieter Jaeger. Channel density distributions explain spiking variability in the globus pallidus: A combined physiology and computer simulation database approach. J Neurosci, 28(30):7476–91, July 2008.
[3] Pablo Achard and Erik De Schutter. Complex parameter landscape for a complex neuron model. PLoS Comput Biol, 2(7):794–804, Jul 2006.
[4] Tomasz G. Smolinski and Astrid A. Prinz. Computational intelligence in modeling of biological neurons: A case study of an invertebrate pacemaker neuron. In Proceedings of the International Joint Conference on Neural Networks, pages 2964–2970, Atlanta, GA, 2009.
[5] Tomasz G. Smolinski and Astrid A. Prinz. Multi-objective evolutionary algorithms for model neuron parameter value selection matching biological behavior under different simulation scenarios. BMC Neuroscience, 10(Suppl 1):P260, 2009.
[6] Damon G. Lamb and Ronald L. Calabrese. Correlated conductance parameters in leech heart motor neurons contribute to motor pattern formation. PLoS One, 8(11):e79267, 2013.
[7] Cengiz Günay, Jeremy R. Edgerton, Su Li, Thomas Sangrey, Astrid A. Prinz, and Dieter Jaeger. Database analysis of simulated and recorded electrophysiological datasets with PANDORA's Toolbox. Neuroinformatics, 7(2):93–111, 2009.
[8] Cengiz Günay and Astrid A. Prinz. Model calcium sensors for network homeostasis: Sensor and readout parameter analysis from a database of model neuronal networks. J Neurosci, 30:1686–1698, 2010.
[9] Anca Doloc-Mihu and Ronald L. Calabrese. A database of computational models of a half-center oscillator for analyzing how neuronal parameters influence network activity. J Biol Phys, 37(3):263–283, Jun 2011.
[10] Emlyne Forren, Myles Johnson-Gray, Parth Patel, and Tomasz G. Smolinski. Nervolver: a computational intelligence-based system for automated construction, tuning, and analysis of neuronal models. BMC Neuroscience, 13(Suppl 1):P36, 2012.
[M2] Lobster stomatogastric ganglion pyloric network model (http://senselab.med.yale.edu/ModelDB/showmodel.asp?model=144387)
[M3] Half-center oscillator database of leech heart interneuron model (http://senselab.med.yale.edu/ModelDB/ShowModel.asp?model=144518)
[S1] PANDORA Matlab Toolbox (http://software.incf.org/software/pandora)
[S2] Parallel parameter search scripts for simulating neuron models (https://github.com/cengique/param-search-neuro)
[S3] Half-Center Oscillator model database (HCO-db) (http://www.biology.emory.edu/research/Calabrese/hco-db/hcoDB_Main.html)
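The brute-force, database-building flavour of parameter search discussed in this tutorial (cf. [2], [7]) can be sketched as a grid search over the parameters of a toy model neuron: simulate every parameter combination, store a measured feature per combination, then query the resulting database for parameter sets matching a target behavior. The model and every value below are invented for illustration and are unrelated to PANDORA or HCO-db.

```python
import itertools

# Toy adapting leaky neuron: membrane v with leak conductance g_leak and a
# spike-triggered adaptation variable a scaled by g_adapt.
dt, T, tau = 0.1, 10_000, 20.0     # 0.1 ms steps for 1 s, 20 ms membrane tau
I_ext, v_th = 1.5, 1.0             # external drive and spike threshold

def firing_rate(g_leak, g_adapt):
    """Simulate 1 s and return the firing rate in spikes/s."""
    v = a = 0.0
    n_spikes = 0
    for _ in range(T):
        v += dt / tau * (I_ext - g_leak * v - g_adapt * a)
        a += dt * (-a / 100.0)     # adaptation decays with tau = 100 ms
        if v >= v_th:
            n_spikes += 1
            v = 0.0                # reset after a spike
            a += 1.0               # each spike increments adaptation
    return n_spikes / 1.0

# Build the model "database": one measured feature per parameter combination.
database = {
    (gl, ga): firing_rate(gl, ga)
    for gl, ga in itertools.product([0.5, 1.0, 1.5], [0.0, 0.1, 0.2])
}

# Query it: which parameter sets produce a firing rate in a 20-60 Hz target?
matches = [params for params, r in database.items() if 20 <= r <= 60]
```

Real parameter-search studies replace this 3x3 grid with millions of simulations of conductance-based models, run in parallel, and store many electrophysiological measures per model; the evolutionary-algorithm approaches in [4,5] replace the exhaustive grid with guided search, but the database-and-query structure is the same.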