Speaker: Mauro Maggioni
Speaker Affiliation: Johns Hopkins University
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT
Date: 2019-12-17
Time: 3:00 pm (subject to change)
Location: DIBRIS - room 705, VII floor, via Dodecaneso 35, Genova, IT.
Abstract
Interacting agent-based systems are ubiquitous in science, from models of particles in physics, to prey-predator and colony models in biology, to opinion dynamics in economics and the social sciences. Oftentimes the laws of interaction between the agents are quite simple: for example, they may depend only on pairwise interactions, and only on the pairwise distance in each interaction. We consider the following inference problem for a system of interacting particles or agents: given only observed trajectories of the agents, can we learn the laws of interaction? We would like to do this without assuming any particular form for the interaction laws, i.e. they might be “any” function of pairwise distances. We consider this problem both in the mean-field limit (i.e. as the number of particles goes to infinity) and in the case of a finite number of agents with an increasing number of observations, although in this talk we will mostly focus on the latter case. We cast this as an inverse problem and study it in the case where the interaction is governed by an (unknown) function of pairwise distances. We discuss when this problem is well-posed, and we construct estimators for the interaction kernels with provably good statistical and computational properties. We measure their performance on various examples, which include extensions to agent systems with different types of agents, second-order systems, and families of systems with parametric interaction kernels. We also conduct numerical experiments to test the large-time behavior of these systems, especially in cases where they exhibit emergent behavior. This is joint work with F. Lu, J. Miller, S. Tang and M. Zhong.
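To make the inference problem concrete, here is a minimal sketch (not the speaker's actual estimator) of the kind of setup described above: a first-order system dx_i/dt = (1/N) Σ_j φ(|x_j − x_i|)(x_j − x_i), with the unknown kernel φ expanded in a piecewise-constant basis over pairwise distances and its coefficients recovered by least squares from finite-difference velocities. The model form, the basis, and the toy kernel `true_phi` are all illustrative assumptions.

```python
import numpy as np

# Sketch: learn an interaction kernel phi from observed trajectories of a
# first-order system  dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|) (x_j - x_i).
# phi is represented as piecewise constant in the pairwise distance, and its
# coefficients are fit by least squares against finite-difference velocities.

def simulate(phi, x0, dt, steps):
    """Forward-Euler simulation of the first-order interacting-agent system."""
    x = x0.copy()
    traj = [x0.copy()]
    N = x0.shape[0]
    for _ in range(steps):
        diff = x[None, :, :] - x[:, None, :]        # diff[i, j] = x_j - x_i
        dist = np.linalg.norm(diff, axis=-1)        # pairwise distances
        w = phi(dist)
        np.fill_diagonal(w, 0.0)                    # no self-interaction
        x = x + dt * (w[:, :, None] * diff).sum(axis=1) / N
        traj.append(x.copy())
    return np.array(traj)                           # shape (steps+1, N, d)

def estimate_kernel(traj, dt, n_bins=30):
    """Least-squares estimate of phi on a piecewise-constant basis in distance."""
    steps, N, d = traj.shape[0] - 1, traj.shape[1], traj.shape[2]
    vel = (traj[1:] - traj[:-1]) / dt               # finite-difference velocities
    # bin edges from the maximum observed pairwise distance
    r_max = max(np.linalg.norm(traj[t][None] - traj[t][:, None], axis=-1).max()
                for t in range(steps))
    edges = np.linspace(0.0, r_max + 1e-12, n_bins + 1)
    rows, rhs = [], []
    for t in range(steps):
        diff = traj[t][None, :, :] - traj[t][:, None, :]
        dist = np.linalg.norm(diff, axis=-1)
        bins = np.clip(np.digitize(dist, edges) - 1, 0, n_bins - 1)
        for i in range(N):
            row = np.zeros((d, n_bins))
            for j in range(N):
                if i != j:
                    row[:, bins[i, j]] += diff[i, j] / N
            rows.append(row)
            rhs.append(vel[t, i])
    A, b = np.vstack(rows), np.concatenate(rhs)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_phi = lambda r: 1.0 / (1.0 + r**2)         # hypothetical ground-truth kernel
    x0 = rng.normal(size=(20, 2))
    traj = simulate(true_phi, x0, dt=0.01, steps=200)
    centers, est = estimate_kernel(traj, dt=0.01)
    print(np.max(np.abs(est - true_phi(centers))))  # rough pointwise error of the estimate
```

The sketch only covers the simplest case (first-order dynamics, a single type of agent, noiseless trajectories); the talk's estimators and their statistical guarantees go well beyond it.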
Bio
Dr. Mauro Maggioni is a Bloomberg Distinguished Professor of Mathematics and of Applied Mathematics and Statistics at Johns Hopkins University. He works at the intersection of harmonic analysis, approximation theory, high-dimensional probability, statistical and machine learning, model reduction, stochastic dynamical systems, spectral graph theory, and statistical signal processing. He received his B.Sc. in Mathematics summa cum laude from the Università degli Studi di Milano in 1999 and his Ph.D. in Mathematics from Washington University in St. Louis in 2002. He was a Gibbs Assistant Professor in Mathematics at Yale University until 2006, when he moved to Duke University, becoming Professor of Mathematics, Electrical and Computer Engineering, and Computer Science in 2013. He received the Popov Prize in Approximation Theory in 2007, an NSF CAREER award and a Sloan Fellowship in 2008, and was named an inaugural Fellow of the American Mathematical Society in 2013.