International (Online) Workshop on

Reservoir Computing & Neural Networks


Confirmed speakers




Dr. Aditi Kathpalia
Postdoctoral Researcher
Department of Complex Systems
Institute of Computer Science
Czech Academy of Sciences
Czech Republic

Causality and machine learning

Abstract
Despite the recent success and widespread application of machine learning (ML) algorithms for classification and prediction in a variety of fields, they face difficulties with interpretability, trustworthiness and generalization. One of the main reasons is that these algorithms build black-box models by learning statistical associations between the given ‘input’ and its ‘output’. Decisions based solely on such ‘associational learning’ are insufficient to provide explanations and are hence difficult to employ in real-world tasks requiring transparency and reliability. To overcome these limitations of ML algorithms, researchers are moving towards ‘causal machine learning’, aiding ML decision-making with causal reasoning and understanding. We will discuss ‘the science of causality’, why ML needs it, and possible means of integrating the two. We will also compare different ML algorithms on their performance in learning temporal order/structure in single time series, as well as on their ability to classify coupled pairs of time series based on their cause-effect (or driver-driven) relationship.
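
As a toy illustration of the driver-driven setting mentioned above (a linear Granger-style test, not the speaker's own causality measures), the sketch below generates a coupled pair of autoregressive time series in which X drives Y, then checks by least squares whether each series' past helps predict the other. All coefficients and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t-1] + rng.normal(scale=0.5)                  # X evolves autonomously
    y[t] = 0.6 * y[t-1] + 0.8 * x[t-1] + rng.normal(scale=0.5)   # X drives Y

def residual_var(target, predictors):
    """Least-squares fit of target[t] on predictors[t-1]; returns residual variance."""
    A = np.column_stack([p[:-1] for p in predictors])
    b = target[1:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

# Does adding the other series' past improve one-step prediction?
ratio_x_to_y = residual_var(y, [y]) / residual_var(y, [y, x])
ratio_y_to_x = residual_var(x, [x]) / residual_var(x, [x, y])
print(f"variance ratio X -> Y: {ratio_x_to_y:.2f}")   # much greater than 1
print(f"variance ratio Y -> X: {ratio_y_to_x:.2f}")   # close to 1
```

A clear asymmetry between the two variance ratios recovers the X-to-Y coupling direction; the causality measures discussed in the talk go well beyond this linear baseline.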

Speaker Bio
Aditi Kathpalia is currently a postdoctoral researcher at the Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences, Czech Republic. Her research interests include causal inference and causal machine learning, complex systems, information theory and computational neuroscience. She has co-authored 8 international journal publications and presented her work at 16 international conferences and workshops. She completed her PhD in 2021 with the dissertation ‘Theoretical and Experimental Investigations into Causality, its Measures and Applications’ at the National Institute of Advanced Studies (NIAS), IISc Campus, Bengaluru, India. She graduated as a gold medallist with a dual bachelor's and master's degree in Biomedical Engineering from the Indian Institute of Technology (BHU), Varanasi, India, in 2015.

Video of talk

Prof Erik M Bollt
Professor of Mathematics
Director, Clarkson Center for Complex Systems Science
Clarkson University
Potsdam, NY, USA

On explaining the surprising success of reservoir computing forecaster of chaos? 

Abstract
Machine learning has become a widely popular and successful paradigm, especially in data-driven science and engineering. A major application problem is data-driven forecasting of future states from a complex dynamical system. Artificial neural networks have evolved as a clear leader among many machine learning approaches, and recurrent neural networks are considered to be particularly well suited for forecasting dynamical systems. In this setting, the echo-state networks or reservoir computers (RCs) have emerged for their simplicity and computational complexity advantages. Instead of a fully trained network, an RC trains only readout weights by a simple, efficient least squares method. What is perhaps quite surprising is that nonetheless, an RC succeeds in making high quality forecasts, competitively with more intensively trained methods, even if not the leader. There remains an unanswered question as to why and how an RC works at all despite randomly selected weights. To this end, this work analyzes a further simplified RC, where the internal activation function is an identity function. Our simplification is not presented for the sake of tuning or improving an RC, but rather for the sake of analysis of what we take to be the surprise being not that it does not work better, but that such random methods work at all. We explicitly connect the RC with linear activation and linear readout to well developed time-series literature on vector autoregressive (VAR) averages that includes theorems on representability through the Wold theorem, which already performs reasonably for short-term forecasts. In the case of a linear activation and now popular quadratic readout RC, we explicitly connect to a nonlinear VAR, which performs quite well. Furthermore, we associate this paradigm to the now widely popular dynamic mode decomposition; thus, these three are in a sense different faces of the same thing. We illustrate our observations in terms of popular benchmark examples including Mackey–Glass differential delay equations and the Lorenz63 system.
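
To make the simplified model concrete, here is a minimal sketch of an RC with identity activation and a least-squares readout; the reservoir size, spectral radius, and a logistic-map input standing in for a chaotic benchmark are illustrative assumptions, not the paper's settings. With linear activation, the reservoir state is just a linear combination of past inputs, which is what ties the model to VAR.

```python
import numpy as np

rng = np.random.default_rng(1)

# Logistic map standing in for a scalar chaotic benchmark series.
T = 3000
u = np.empty(T)
u[0] = 0.4
for t in range(T - 1):
    u[t+1] = 3.9 * u[t] * (1.0 - u[t])

N = 100                                   # reservoir size (illustrative)
W_in = rng.normal(size=N)                 # random, untrained input weights
W = rng.normal(size=(N, N))               # random, untrained recurrent weights
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

# Identity activation: r[t+1] = W r[t] + W_in u[t], so r[t] is a linear
# combination of past inputs -- the same memory a VAR model uses.
r = np.zeros((T, N))
for t in range(T - 1):
    r[t+1] = W @ r[t] + W_in * u[t]

# Only the readout is trained, by ordinary least squares, to predict u[t+1].
washout = 100
X, target = r[washout:-1], u[washout+1:]
W_out, *_ = np.linalg.lstsq(X, target, rcond=None)
print("one-step training RMSE:", np.sqrt(np.mean((X @ W_out - target) ** 2)))
```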


Speaker Bio
Prof Erik Bollt is the W. Jon Harrington Professor of Mathematics at Clarkson University and Director of the Clarkson Center for Complex Systems Science. He received his PhD from the University of Colorado, where he worked with Prof. Jim Meiss. His research interests lie in data-driven analysis of complex systems and dynamical systems, machine learning and data science methods, and network science.

Video of talk

Dr Jaideep Pathak
NVIDIA Research
California, USA

FourCastNet and Data Driven Earth Science

Abstract
In this talk I will describe our efforts at NVIDIA to build FourCastNet, a neural-network-based dynamical surrogate model of the Earth’s atmosphere. FourCastNet, short for Fourier Forecasting Neural Network, is a global, data-driven machine learning weather forecasting model based on an autoregressive transformer architecture, which provides accurate short- to medium-range global predictions at 0.25° resolution. FourCastNet generates a week-long global forecast in seconds with predictive skill approaching that of operational NWP models. This paves the way for similar ML methods to serve climate and weather projections at low latency. The speed of FourCastNet also enables inexpensive large-ensemble forecasts comprising thousands of ensemble members for improved probabilistic forecasting. I will describe some salient aspects of our data-driven model, such as extreme event forecasting and calibrated probabilistic forecasting, and present future directions of research.
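
The autoregressive rollout the abstract refers to can be sketched generically: the model maps the current atmospheric state to the state one step ahead, and its own predictions are fed back in to extend the forecast. In the sketch below, `rollout` and `toy_step` are hypothetical stand-ins, not FourCastNet's actual interface.

```python
import numpy as np

def rollout(model_step, state0, n_steps):
    """Autoregressive rollout: each prediction is fed back as the next input.

    model_step -- callable mapping one atmospheric state to the state one
                  model time step later (a stand-in for the trained network).
    """
    states = [state0]
    for _ in range(n_steps):
        states.append(model_step(states[-1]))
    return np.stack(states)

# Hypothetical usage: a 6-hour-step model rolled out 28 times spans one week.
toy_step = lambda s: 0.99 * s            # placeholder dynamics, not a real model
forecast = rollout(toy_step, np.ones(16), n_steps=28)
print(forecast.shape)                     # (29, 16): initial state + 28 steps
```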

Speaker Bio
Jaideep Pathak is a research scientist at NVIDIA working on applications of machine learning to earth science. He obtained his PhD in Physics at the University of Maryland, College Park, followed by a postdoctoral research fellowship at Lawrence Berkeley National Laboratory.

Video of talk

Prof Manish Dev Shrimali
Professor in Physics
Central University of Rajasthan
India

Reservoir computing with a single driven pendulum

Abstract
The study of the natural information-processing capacity of dynamical systems has attracted many researchers over the last few decades, and Reservoir Computing (RC) provides a computational framework to exploit it. Various complex machine learning tasks may be performed using dynamical systems as the main computational substrate in RC. There are several examples of dynamical systems successfully used as reservoirs, chosen mainly on the basis of a few usual criteria: high dimensionality, rich non-linearity and fading memory. In this talk, we will discuss the performance of a low-dimensional dynamical system as a reservoir, namely a single driven pendulum. In conventional neural network models, too, there is a notion of a single neuron being enough to perform complex tasks. Our objective is to exploit a strikingly simple system like a single pendulum to solve intelligent computational tasks. We will also discuss the remarkable results of a proof-of-principle experimental realization of the scheme.
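
A minimal numerical sketch of the idea, assuming a damped pendulum whose external torque is modulated by the input and whose intra-interval trajectory is sampled as time-multiplexed reservoir features; the parameters and the short-term-memory readout task are illustrative choices, not the speakers' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, gamma, omega0 = 0.01, 0.5, 1.0       # step size, damping, natural frequency

# Random input sequence; each value drives the pendulum for `hold` steps.
T, hold = 1500, 20
u = rng.uniform(-1, 1, T)

theta, vel = 0.0, 0.0
features = np.zeros((T, 2 * hold))       # time-multiplexed (theta, vel) snapshots
for t in range(T):
    snaps = []
    for k in range(hold):
        # Damped pendulum whose external torque is modulated by the input.
        acc = -gamma * vel - omega0**2 * np.sin(theta) + 2.0 * u[t]
        vel += dt * acc
        theta += dt * vel
        snaps += [theta, vel]
    features[t] = snaps

# Linear readout trained by least squares to recall the input two steps back,
# a standard short-term-memory task for reservoirs.
delay, washout = 2, 50
X, y = features[washout:], np.roll(u, delay)[washout:]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print("memory-task training RMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)))
```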

Speaker Bio
Manish Shrimali is a professor in the Department of Physics, Central University of Rajasthan. He obtained his M.Sc. (Physics) and Ph.D. from JNU, New Delhi, and was a postdoctoral fellow at the Institute of Industrial Science, The University of Tokyo. His research areas include synchronization, multistability and chimera states in complex dynamical systems, and reservoir computing.

Video of talk

Prof Nithin Nagaraj
Associate Professor
Consciousness Studies Program 
National Institute of Advanced Studies
IISc Campus, Bengaluru

Can chaos & noise help machine learning?

Abstract
Deterministic chaos is universal - from patterns in the decimal expansions of numbers, to models for weather prediction, to complex dynamics in biological systems such as the heart and the brain. While 'Chaos' is purely deterministic (yet unpredictable), 'Noise' is stochastic, random, unpredictable and unwanted - and also found in all natural and human-engineered systems. What happens when Chaos meets Noise? Can a new type of machine learning emerge at the intersection of Chaos and Noise?

Speaker Bio
Dr. Nithin Nagaraj holds a Bachelor's degree in Electrical and Electronics Engineering from the National Institute of Technology Karnataka (NITK, Surathkal, 1999), a Master's degree in Electrical Engineering from Rensselaer Polytechnic Institute (RPI), Troy, New York, USA (2001), and a PhD from the National Institute of Advanced Studies, Bengaluru (NIAS, 2010). He has previously held positions at GE Global Research, IISER Pune and Amrita University. He is currently Associate Professor at NIAS, IISc Campus, Bengaluru. His research areas include brain-inspired machine learning, chaos and information theory, and complexity theories of causality and consciousness. He has co-authored 30+ international peer-reviewed journal publications with 1200+ citations (h-index 17) and over 60 national and international conference presentations, and has delivered more than 100 invited talks at national and international forums. He is the co-inventor on 8 US patent applications (2 granted) and 1 Indian patent application. He is a Senior Member of the IEEE and an invited member of the Advisory Council of METI (Messaging Extra Terrestrial Intelligence) International, USA.

Video of talk

Dr. Sarthak Chandra
Postdoctoral Researcher
The Fiete Lab
MIT, USA

Reservoir computing in noisy real-world systems: network inference and dynamical noise

Abstract
Network link inference from time-series data measured at dynamically interacting network nodes is an important problem with wide-ranging applications. However, previous work on this problem, using reservoir computers (RCs) and otherwise, has largely considered relatively simplistic settings in which most inference techniques yield a clear separation between true and false network edges. Here I will talk about the challenges that arise when applying RCs to neural data obtained from C. elegans, and how they might be overcome. In particular, I will describe a novel surrogate-data construction method that allows accurate link assessment even when the data do not give clear indications of connectivity.

When working with such data, an important issue that arises is the presence of dynamical noise, i.e., continual stochastic perturbations to system dynamics. I will discuss how RCs can be used as effective tools to filter out dynamical noise, allowing for the reconstruction of an underlying deterministic dynamical system that governs the dynamics of the data. We demonstrate that RCs can perform effective filtering without any access to the clean signal and by training solely on the stochastically perturbed dynamical trajectories, even when the dynamical noise causes significant distortions to the system attractor.
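
A minimal sketch of this filtering effect, assuming a logistic map with dynamical noise and a generic echo-state network (not the speaker's architecture): the readout is trained only on the noisy series, yet the learned one-step map tends to land closer to the underlying deterministic dynamics than to the noisy targets.

```python
import numpy as np

rng = np.random.default_rng(3)

# Logistic map with dynamical noise: the perturbation enters the dynamics itself.
T = 5000
x = np.empty(T)
x[0] = 0.3
for t in range(T - 1):
    x[t+1] = np.clip(3.9 * x[t] * (1 - x[t]) + rng.normal(scale=0.02), 0.0, 1.0)

# Generic echo-state network driven by the noisy series.
N = 200
W_in = rng.uniform(-1, 1, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
r = np.zeros((T, N))
for t in range(T - 1):
    r[t+1] = np.tanh(W @ r[t] + W_in * x[t])

# The readout sees only noisy targets; least squares averages the noise away,
# so the learned one-step map tends toward the deterministic part of the dynamics.
washout = 200
X, y = r[washout:-1], x[washout+1:]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ W_out
clean = 3.9 * x[washout:-1] * (1 - x[washout:-1])   # deterministic image of each state
print("RMSE to noisy targets:       ", np.sqrt(np.mean((pred - y) ** 2)))
print("RMSE to deterministic update:", np.sqrt(np.mean((pred - clean) ** 2)))
```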

Speaker Bio
Sarthak Chandra joined the Physics department at IIT Kanpur after achieving AIR 86 in JEE 2011. He graduated at the top of his department and received the Director's Gold Medal for best academic and all-round performance. Thereafter, he joined the Physics department at the University of Maryland for his PhD, where he was supervised by Prof. Edward Ott and Prof. Michelle Girvan, working on problems at the intersection of complex networks and nonlinear dynamics. After finishing his PhD in 2020, Sarthak joined Prof. Ila Fiete's lab at MIT in the Brain and Cognitive Sciences department as a postdoctoral associate. He is currently working on problems related to the emergence and advantages of modularity in neural circuits, and dynamical systems perspectives on the growth and development of neural networks in the brain.

Video of talk

Prof V Srinivasa Chakravarthy
Professor in Department of Biotechnology
Computational Neuroscience Laboratory
Indian Institute of Technology Madras

Computing with rhythms: The search for deep oscillatory neural networks

Abstract
In recent years there has been a growing demand to achieve a marriage between AI and neuroscience. Oscillatory activity is ubiquitous in the brain, a feature that is conspicuous by its absence in deep learning models. Although there are oscillator-based models of brain dynamics, they do not seem to enjoy the universal computational properties of rate-coded and spiking neural network models. Use of oscillator-based models is often limited to special phenomena like locomotor rhythms and oscillatory attractor-based memories. If neuronal ensembles are taken to be the basic functional units of brain dynamics, it is desirable to develop oscillator-based models that can explain a wide variety of neural phenomena. To this end, we aim to develop a general theory of oscillatory neural networks. Specifically, we propose a novel neural network architecture consisting of Hopf oscillators described in the complex domain. The oscillators can adapt their intrinsic frequencies by tracking the frequency components of the input signals. The oscillators are also laterally connected with each other through a special form of coupling we label “power coupling”, which allows two oscillators with arbitrarily different intrinsic frequencies to interact at a constant normalized phase difference. The network is operated in two phases. In the encoding phase, the oscillators comprising the network perform a Fourier-like decomposition of the input signal(s). In the reconstruction phase, the outputs of the trained oscillators are combined to reconstruct the training signals. We show that the network can be trained to reconstruct high-dimensional electroencephalogram (EEG) and fMRI signals, paving the way to an exciting class of large-scale models of brain dynamics. A general theory of trainable oscillatory deep networks is expected to bring AI and deep networks a step closer to brain models.
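
For the flavor of a frequency-adapting Hopf unit, here is a single-oscillator sketch in the spirit of standard adaptive-frequency Hopf oscillators (Righetti-style adaptation). It is an illustration under stated assumptions, not the authors' network: it omits power coupling, the lateral connections, and the encoding/reconstruction stages.

```python
import numpy as np

# Single Hopf oscillator in the complex plane with adaptive intrinsic frequency.
dt, mu, eps = 1e-3, 1.0, 0.9      # step size, limit-cycle parameter, adaptation gain
omega_in = 4.0                     # frequency of the teaching signal
z, omega = 1.0 + 0.0j, 1.0         # start far from the input frequency

for k in range(int(200 / dt)):
    t = k * dt
    F = np.cos(omega_in * t)                               # input signal
    dz = (mu - abs(z) ** 2 + 1j * omega) * z + eps * F     # forced Hopf dynamics
    domega = -eps * F * z.imag / abs(z)                    # frequency adaptation rule
    z += dt * dz
    omega += dt * domega

# omega drifts toward the dominant frequency component of the input.
print(f"adapted frequency: {omega:.2f} (input frequency {omega_in})")
```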

Speaker Bio
V. Srinivasa Chakravarthy is a professor in the Department of Biotechnology, IIT Madras. He obtained his BTech from IIT Madras and his MS and PhD from the University of Texas at Austin, and received postdoctoral training in the neuroscience department at Baylor College of Medicine, Houston. The Computational Neuroscience Lab (CNS Lab) that he heads works on models of neural oscillations, the basal ganglia, spatial navigation, stroke rehabilitation and neurovascular coupling. He is the author of two books in neuroscience and the inventor of Bharati, a novel unified script for Indian languages.

Video of talk


Dr. Xavier Hinaut
Researcher
INRIA Bordeaux

Reservoir SMILES: Towards SensoriMotor Interaction of Language and Embodiment of Symbols with reservoir architectures

Abstract
Language involves several hierarchical levels of abstraction. Most models focus on a particular level of abstraction, making them unable to model bottom-up and top-down processes. Moreover, we do not know how the brain grounds symbols in perceptions and how these symbols emerge throughout development. Experimental evidence suggests that perception and action shape one another (e.g. motor areas are activated during speech perception), but the precise mechanisms involved in this action-perception shaping at various levels of abstraction are still largely unknown.

My previous and current work includes the modelling of language comprehension, language acquisition from a robotic perspective, sensorimotor models, and extended Reservoir Computing models of working memory and hierarchical processing. I propose to create a new generation of neural-based computational models of language processing and production; to use biologically plausible learning mechanisms relying on recurrent neural networks; to create novel sensorimotor mechanisms to account for action-perception shaping; to build hierarchical models from the sensorimotor level to the sentence level; and to embody such models in robots.

Speaker Bio
Xavier Hinaut has been a Research Scientist in Computational Neuroscience at Inria, Bordeaux, France since 2016. He received an MSc in Computer Science in 2008 (UTC, Compiègne, France), an MSc in Cognitive Science and AI in 2009 (EPHE, Paris, France), and a PhD in Computational Neuroscience from the University of Lyon in 2013. His work is at the frontier of neuroscience, machine learning, robotics, birdsong research and linguistics: from the modelling of human sentence processing to the analysis of birdsongs and their neural correlates. He manages the DeepPool ANR project on human sentence modelling with reservoirs, and leads the development of ReservoirPy, a Python library for Reservoir Computing: https://github.com/reservoirpy/reservoirpy.

Video of talk

Registration

Registration is free but compulsory.
Registered participants will be sent the joining link via email.


Schedule


Inauguration
November 23, 2022, 16:45-17:00 IST (UTC+5:30)


Day 1: November 23, 2022

17:00-18:00 IST   V S Chakravarthy: Computing with rhythms: The search for deep oscillatory neural networks
18:00-19:00 IST   Xavier Hinaut: Reservoir SMILES: Towards SensoriMotor Interaction of Language and Embodiment of Symbols with reservoir architectures
19:00-20:00 IST   Erik Bollt: On explaining the surprising success of reservoir computing forecaster of chaos?
20:00-21:00 IST   Jaideep Pathak: FourCastNet and Data Driven Earth Science


Day 2: November 24, 2022

16:00-17:00 IST   Nithin Nagaraj: Can chaos & noise help machine learning?
17:00-18:00 IST   Manish Shrimali: Reservoir computing with a single driven pendulum
18:00-19:00 IST   Aditi Kathpalia: Causality and machine learning
19:00-20:00 IST   Sarthak Chandra: Reservoir computing in noisy real-world systems: network inference and dynamical noise

All times are IST (UTC+5:30).

Complex Systems & Dynamics

Indian Institute of Technology Madras

Contacts

https://web.iitm.ac.in/ccsd
Complex Systems & Dynamics
IIT Madras
Chennai 600036 India
ccsdiitm@gmail.com
