
MaLGa Seminar Series

We are involved in the organization of the MaLGa Seminar Series, in particular the seminars on Statistical Learning and Optimization. The MaLGa seminars are divided into four main threads: Statistical Learning and Optimization, Analysis and Learning, Machine Learning and Vision, and Machine Learning for Data Science.

An up-to-date list of ongoing seminars is available on the MaLGa webpage.

Seminars will be streamed on our YouTube channel.

Monotonic Gaussian process flow

Speaker: Carl Henrik Ek
Speaker Affiliation: University of Bristol
Host: Annalisa Barla
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2020-01-16
Time: 10:00 am (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
Gaussian processes are stochastic processes that allow for a Bayesian treatment over the space of functions. In this talk I will briefly introduce Bayesian non-parametrics and Gaussian processes in particular. I will then describe recent work using Gaussian processes in the formulation of stochastic differential equations, focusing on how we can construct distributions over monotonic functions. I will show how these structures can be used to learn hierarchically decomposed uncertainties in composite models. Time and interest permitting, I will also go through how we can perform tractable inference by formulating a variational lower bound on the marginal log-likelihood.
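As generic background (not specific to the speaker's monotonic-flow construction), a zero-mean Gaussian process prior can be sampled by drawing from a multivariate normal whose covariance is given by a kernel; the squared-exponential kernel and its hyperparameters below are illustrative choices, sketched in numpy:

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=0.5, variance=1.0):
    # Squared-exponential covariance: k(x, y) = s^2 * exp(-(x - y)^2 / (2 l^2))
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)
K = rbf_kernel(xs, xs)
# Add a small jitter for numerical stability, then draw three
# random functions from the GP prior via a Cholesky factor.
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(xs)))
samples = L @ rng.standard_normal((len(xs), 3))
```

Each column of `samples` is one smooth random function evaluated on the grid; a monotonic construction as in the talk would constrain such draws further.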

Bio
Dr. Carl Henrik Ek is a senior lecturer at the University of Bristol. His research focuses on developing computational models that allow machines to learn from data. In particular, he is interested in Bayesian non-parametric models, which allow for principled quantification of uncertainty, easy interpretability and adaptable complexity. He has worked extensively on models for representation learning with applications in automatic control, robotics and bioinformatics.

Learning interaction laws in particle- and agent-based systems

Speaker: Mauro Maggioni
Speaker Affiliation: Johns Hopkins University
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-12-17
Time: 3:00 pm (subject to variability)
Location: DIBRIS - room 705, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
Interacting agent-based systems are ubiquitous in science, from models of particles in physics to prey-predator and colony models in biology, to opinion dynamics in economics and the social sciences. Oftentimes the laws of interaction between the agents are quite simple, for example depending only on pairwise interactions, and only on pairwise distances within each interaction. We consider the following inference problem for a system of interacting particles or agents: given only observed trajectories of the agents, can we learn the laws of interaction? We would like to do this without assuming any particular form for the interaction laws, i.e. they might be "any" function of pairwise distances. We consider this problem both in the mean-field limit (i.e. with the number of particles going to infinity) and in the case of a finite number of agents with an increasing number of observations, although in this talk we will mostly focus on the latter case. We cast this as an inverse problem and study it in the case where the interaction is governed by an (unknown) function of pairwise distances. We discuss when this problem is well-posed, and we construct estimators for the interaction kernels with provably good statistical and computational properties. We measure their performance on various examples, including extensions to agent systems with different types of agents, second-order systems, and families of systems with parametric interaction kernels. We also conduct numerical experiments to test the large-time behavior of these systems, especially in the cases where they exhibit emergent behavior. This is joint work with F. Lu, J. Miller, S. Tang and M. Zhong.
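As a toy illustration of this type of inference problem (a hypothetical sketch, not the estimators of the talk), one can simulate a first-order one-dimensional system with pairwise distance-based interactions and recover the interaction kernel by least squares in a polynomial basis; the kernel, basis and sizes below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20                        # number of (1-D) particles
phi = lambda r: np.exp(-r)    # "unknown" interaction kernel to be recovered

def velocities(x):
    # First-order model: dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|) (x_j - x_i)
    d = x[None, :] - x[:, None]           # d[i, j] = x_j - x_i
    return (phi(np.abs(d)) * d).mean(axis=1)

# Observe many random configurations together with their exact velocities
X = [rng.uniform(0.0, 5.0, N) for _ in range(200)]
V = [velocities(x) for x in X]

def features(x, degree=4):
    # Design matrix: if phi(r) = sum_k c_k r^k, the velocity of
    # particle i is linear in the coefficients c_k.
    d = x[None, :] - x[:, None]
    r = np.abs(d)
    return np.stack([(r ** k * d).mean(axis=1) for k in range(degree)], axis=1)

A = np.vstack([features(x) for x in X])    # shape (200 * N, degree)
b = np.concatenate(V)
c, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares estimate of phi
```

The key structural point, as in the abstract, is that the observed velocities are linear in the unknown kernel, which makes the inverse problem amenable to regression once a basis is chosen.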

Bio
Dr. Mauro Maggioni is a Bloomberg Distinguished Professor of Mathematics, and Applied Mathematics and Statistics at Johns Hopkins University. He works at the intersection of harmonic analysis, approximation theory, high-dimensional probability, statistical and machine learning, model reduction, stochastic dynamical systems, spectral graph theory, and statistical signal processing. He received his B.Sc. in Mathematics summa cum laude from the Università degli Studi di Milano in 1999 and his Ph.D. in Mathematics from Washington University in St. Louis in 2002. He was a Gibbs Assistant Professor in Mathematics at Yale University until 2006, when he moved to Duke University, becoming Professor in Mathematics, Electrical and Computer Engineering, and Computer Science in 2013. He received the Popov Prize in Approximation Theory in 2007, an N.S.F. CAREER award and a Sloan Fellowship in 2008, and was named an inaugural Fellow of the American Mathematical Society in 2013.

Uniform estimation of nonlinear statistics

Speaker: Andreas Maurer
Speaker Affiliation:
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-12-03
Time: 3:00 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
For nearly two decades the method of Rademacher and Gaussian complexities has been used to prove generalization bounds in learning theory, typically by showing that the sample mean is a good estimate of the true mean uniformly over some loss class, provided the complexity of the class is not too large. Many powerful tricks to bound Rademacher or Gaussian complexities have been developed along this line of work. My talk is about an extension of this method to cases where the sample mean is replaced by a nonlinear statistic satisfying certain first- and second-order Lipschitz conditions. I will explain these conditions, sketch a proof and discuss some applications, such as the generalization of recently proposed algorithms optimizing the partial AUC.
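For context, the classical uniform bound that this line of work produces, for a loss class $\mathcal F$ with values in $[0,1]$ and Rademacher complexity $\mathfrak R_n(\mathcal F)$, has the standard form (the talk replaces the sample mean on the right-hand side by a nonlinear statistic):

```latex
\Pr\Big[\ \forall f \in \mathcal F:\ \mathbb E[f(Z)]
   \le \frac{1}{n}\sum_{i=1}^n f(Z_i)
   + 2\,\mathfrak R_n(\mathcal F)
   + \sqrt{\frac{\ln(1/\delta)}{2n}}\ \Big] \ \ge\ 1-\delta
```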

Bio
Andreas Maurer has worked in machine vision, image processing and machine learning since 1983. He is an active and independent researcher in probability theory, machine learning and statistics.

Adaptive backtracking and acceleration of a forward-backward algorithm for strongly convex optimisation: convergence results and imaging applications

Speaker: Luca Calatroni
Speaker Affiliation: I3S Laboratory, CNRS, Sophia Antipolis, France
Host: Silvia Villa
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-10-29
Time: 3:30 pm (subject to variability)
Location: DIMA - room 704, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
We propose an extension of the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA) for non-smooth strongly convex composite optimisation problems, combined with an adaptive backtracking strategy. Differently from classical monotone line-search rules, the proposed strategy allows the descent step size both to increase and to decrease locally along the iterations, and it enjoys linear convergence rates defined in terms of quantities averaging local Lipschitz constant estimates and local condition numbers. We report numerical experiments showing that the algorithm outperforms standard ones on some imaging problems, and we discuss the use of restarting strategies to address situations where the strong convexity parameters are unknown. This is joint work with A. Chambolle.
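For reference, classical FISTA with a fixed step size is the baseline that the adaptive backtracking strategy improves upon; a minimal sketch for the LASSO problem (problem sizes and regularization weight below are arbitrary, and the fixed step 1/L is what the talk's strategy would replace):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=300):
    # Classical FISTA for min_x 0.5 ||A x - b||^2 + lam ||x||_1,
    # with fixed step 1/L where L = ||A||_2^2 is the gradient Lipschitz constant.
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum (inertial) step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:5] = 1.0
b = A @ x_true
x_hat = fista_lasso(A, b, lam=0.1)
```

The adaptive scheme of the talk would additionally let the step size grow or shrink along the iterations based on local curvature estimates, rather than committing to the global constant L.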

Bio
Luca Calatroni completed his Ph.D. in Applied Mathematics in 2015 as part of the Cambridge Image Analysis research group (UK). After that, he carried out his post-doctoral research at the University of Genova (Italy) within a Marie Skłodowska-Curie ITN, and later at the École Polytechnique (France) as a Lecteur Hadamard post-doctoral research fellow funded by the FMJH. Since October 2019, he has been a full-time CNRS researcher at the I3S laboratory in Sophia Antipolis, France. His research focuses on variational methods and non-smooth optimisation algorithms for imaging, with applications to biology, cultural heritage and computational neuroscience.

Optimal data approximation with group invariances

Speaker: Davide Barbieri
Speaker Affiliation: Universidad Autónoma de Madrid
Host: Ernesto De Vito
Host Affiliation: Machine Learning Genoa Center

Date: 2019-10-14
Time: 2:30 pm (subject to variability)
Location: DIMA - room 704, VII floor, via Dodecaneso 35, Genova, IT

Abstract
Suppose we are given a finite, typically large, dataset of L^2 functions whose domain is the Euclidean space or any LCA group, and a semidirect product group G of discrete translations and automorphisms/linear maps acting on that domain. We consider the problem of approximating the dataset by its projection onto the subspace spanned by the action of G on a finite, ideally small, set of functions, called generators. In this seminar, we will first discuss a constructive proof that provides the generators of the optimal subspace for the approximation, and then see the results of this construction on common datasets of natural images. This is joint work with C. Cabrelli, E. Hernández and U. Molter.

Bio
Davide Barbieri is an Assistant Professor at Universidad Autónoma de Madrid. He obtained his PhD at Università di Bologna and Université de Cergy Pontoise, and he was a Marie Curie Research Fellow at Universidad Autónoma de Madrid.

Curvelet frame and photoacoustic reconstruction

Speaker: Marta Betcke
Speaker Affiliation: University College London
Host: Nicoletta Noceti
Host Affiliation: Machine Learning Genoa Center

Date: 2019-10-08
Time: 3:00 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
In photoacoustic tomography (PAT), the acoustic propagation time across the specimen constitutes the ultimate limit on sequential sampling frequency, and state-of-the-art PAT systems are still far from realising even this limit. Hence, for high-resolution imaging problems, the acquisition of a complete set of data can be impractical or even impossible, e.g. when the underlying dynamics causes the object to evolve faster than measurements can be acquired. To mitigate this problem we resort to parallel data acquisition along with subsampling/compressed sensing techniques. Motivated by two results on the near-optimal sparsity of image representation and wave-field propagation in the curvelet frame, we consider methods for photoacoustic reconstruction under such sparsity assumptions in both the image and the data domain, and discuss the relations between the two.

Bio
Marta Betcke is an associate professor in the Department of Computer Science, the Centre for Medical Image Computing (CMIC) and the Centre for Inverse Problems (CIP) at UCL. The hallmark of Betcke's research is efficient tomographic reconstruction methods that combine analysis of the forward operator with state-of-the-art optimisation and, more recently, data-driven techniques to tackle high-dimensional incomplete-data problems, such as joint dual-contrast CT, T1/T2 MRI, PAT/US reconstruction and dynamic imaging, by exploiting coherences.

Statistical Machine Learning and Optimisation Challenges for Brain Imaging at a Millisecond Timescale

Speaker: Alexandre Gramfort
Speaker Affiliation: INRIA Saclay Research Center and CEA Neurospin
Host: Annalisa Barla

Date: 2019-09-16
Time: 3:00 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
Understanding how the brain works in healthy and pathological conditions is considered one of the major challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. By offering unique, noninvasive insights into the living brain, imaging has revolutionized both clinical and cognitive neuroscience over the last twenty years. After pioneering breakthroughs in physics and engineering, the field of neuroscience now faces major computational and statistical challenges. The size of the datasets produced by publicly funded population studies (the Human Connectome Project in the USA, UK Biobank or Cam-CAN in the UK, etc.) keeps increasing, with hundreds of terabytes of data now available for basic and translational research. New high-density neural electrode grids record signals over hundreds of sensors at thousands of Hz, producing large datasets of time series that are extremely complex to model and analyze: non-stationarity, high noise levels, heterogeneity of sensors, strong variability between individuals, and the lack of accurate models for the signals. In this talk I will present some recent statistical machine learning contributions applied to electrophysiological data, and illustrate how optimization, statistics and advanced signal processing are used today to get the best out of such challenging, and sometimes massive, data.

Bio
Alexandre Gramfort has been a senior researcher in the Parietal Team at the INRIA Saclay Research Center and CEA Neurospin since 2017. He was formerly Assistant Professor at Telecom ParisTech, Université Paris-Saclay, in the image and signal processing department. His fields of expertise are statistical machine learning, signal processing and scientific computing, applied primarily to functional brain imaging data (EEG, MEG, fMRI). His work is strongly interdisciplinary, at the interface of statistics, computer science, software engineering and neuroscience. He is known for his work on the scikit-learn open source software, to which he has contributed since 2010 at Inria, as well as the MNE-Python software, which he started while at Harvard in 2011. In 2015, he was awarded a Starting Grant by the European Research Council (ERC).

Machine Learning for Image Processing

Speaker: Dong Hye Ye
Speaker Affiliation: Marquette University
Host: Francesca Odone

Date: 2019-07-22
Time: 2:30 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
In recent years, it has become increasingly easy to gather large quantities of images. Processing these large image databases is key to unlocking a wealth of potentially useful information. However, both interpreting such big data and connecting it to downstream image processing remain challenging. To tackle this challenge, I unlock the valuable prior knowledge in large image databases via machine learning techniques and use it to improve image processing. In this talk, I will present how machine learning can help image processing tasks such as CT metal artifact reduction/reconstruction, microscopic imaging, and UAV detection/tracking.

Bio
Dr. Dong Hye Ye is an Assistant Professor in Electrical and Computer Engineering at Marquette University. His research interests are in advancing image processing via machine learning. His publications have been awarded Best Paper at MICCAI-MedIA 2010, Best Paper Runner-Up at ICIP 2015, and Best Paper at EI-IMAWM 2018. During his PhD, Dong Hye conducted research at the Section of Biomedical Image Analysis (SBIA) at the Hospital of the University of Pennsylvania (HUP) and at Microsoft Research Cambridge (MSRC). He received his Bachelor's degree from Seoul National University in 2007 and his Master's degree from the Georgia Institute of Technology in 2008.

Stay positive! The importance of better models in stochastic optimization

Speaker: John Duchi
Speaker Affiliation: Stanford University
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-07-18
Time: 3:30 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
Standard stochastic optimization methods are brittle: they are sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives. To address these challenges, we investigate models for stochastic minimization and learning problems that exhibit better robustness to problem families and algorithmic parameters. With appropriately accurate models, which we call the aProx family, stochastic methods can be made stable, provably convergent and asymptotically optimal; even modeling the fact that the objective is nonnegative is sufficient for this stability. We extend these results beyond convexity to weakly convex objectives, which include compositions of convex losses with smooth functions common in modern machine learning applications. We highlight the importance of robustness and accurate modeling with a careful experimental evaluation of convergence time and algorithm sensitivity.
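One way to exploit nonnegativity of the objective, in the spirit described above (a sketch of a "truncated" linear model step, not necessarily the exact update of the talk, and with a toy objective of my choosing), is to minimize the linear model of f but never below zero, which gives the closed form x+ = x - min(alpha, f(x)/||g||^2) * g:

```python
import numpy as np

def truncated_model_step(x, f_x, grad, stepsize):
    # Minimize the linearization of f around x, truncated at 0
    # (valid when f is known to be nonnegative). Closed form:
    #   x+ = x - min(stepsize, f(x) / ||g||^2) * g
    g2 = grad @ grad
    if g2 == 0.0:
        return x
    return x - min(stepsize, f_x / g2) * grad

# Toy nonnegative objective f(x) = 0.5 ||x||^2, grad f(x) = x.
# The truncated step stays stable even for a wildly large stepsize,
# where plain gradient descent (x+ = x - 100 x) would diverge.
x = np.array([10.0, -4.0])
for _ in range(50):
    f_x = 0.5 * x @ x
    x = truncated_model_step(x, f_x, x, stepsize=100.0)
```

Here the effective step is capped by the model: since f(x)/||g||^2 = 0.5 for this objective, each iteration halves x regardless of the nominal stepsize, illustrating the robustness to stepsize choice emphasized in the abstract.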

Bio
John Duchi is an assistant professor of Statistics and Electrical Engineering at Stanford University. He completed his PhD in computer science at Berkeley in 2014. His research interests are a bit eclectic, spanning computation, statistics, optimization, and machine learning. At Berkeley, he worked in the Statistical Artificial Intelligence Lab (SAIL) under the joint supervision of Michael Jordan and Martin Wainwright.

The Principle of Least Cognitive Action

Speaker: Marco Gori
Speaker Affiliation: University of Siena
Host: Alessandro Verri

Date: 2019-07-11
Time: 3:00 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
In this talk we introduce the principle of Least Cognitive Action with the purpose of understanding perceptual learning processes. The principle closely parallels related approaches in physics, and suggests regarding neural networks as systems whose weights are Lagrangian variables, namely functions depending on time. Interestingly, neural networks "conquer their own life": there is no neat distinction between learning and test, and their behavior is characterized by the stationarity of the cognitive action, an appropriate functional which contains a potential and a kinetic term. While the potential term is somewhat related to the loss function used in supervised and unsupervised learning, the kinetic term represents the energy connected with the velocity of weight change. Unlike traditional gradient descent, the stationarity of the cognitive action yields differential equations in the connection weights, and gives rise to a dissipative process which is needed to yield ordered configurations. We give conditions under which this learning process reduces to stochastic gradient descent and to Backpropagation. We give examples on supervised and unsupervised learning, and briefly discuss the application to deep convolutional neural networks, where an appropriate Lagrangian term is used to enforce motion invariance in visual feature extraction.

Bio
Marco Gori received the Ph.D. degree in 1990 from the Università di Bologna, Italy, working partly at the School of Computer Science (McGill University, Montreal). In 1992, he became an Associate Professor of Computer Science at the Università di Firenze and, in November 1995, he joined the Università di Siena, where he is currently full professor of computer science. His main interests are in machine learning with applications to pattern recognition, Web mining, and game playing. He is especially interested in bridging logic and learning and in the connections between symbolic and sub-symbolic representations of information. He was the leader of the WebCrow project for the automatic solving of crosswords, which outperformed human competitors in an official competition held during the ECAI-06 conference. As a follow-up to this grand challenge he founded QuestIt, a spin-off company of the University of Siena working in the field of question answering. He is co-author of "Web Dragons: Inside the Myths of Search Engine Technology" (Morgan Kaufmann, Elsevier, 2006) and "Machine Learning: A Constraint-Based Approach" (Morgan Kaufmann, Elsevier, 2018). Dr. Gori serves (or has served) as an Associate Editor of a number of technical journals related to his areas of expertise, has received best paper awards, and has been a keynote speaker at a number of international conferences. He was the Chairman of the Italian Chapter of the IEEE Computational Intelligence Society and the President of the Italian Association for Artificial Intelligence. He is a fellow of the IEEE, ECCAI, and IAPR.

Resolution of Sobolev wavefront set and sparse representation of singular integral operator using shearlets

Speaker: Swaraj Paul
Speaker Affiliation: Indian Institute of Technology Indore
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning

Date: 2019-06-21
Time: 2:30 pm (subject to variability)
Location: DIMA - room 705, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
The main ingredients of time-frequency analysis, such as wavelets and Gabor frames, have been used successfully to represent most classes of pseudo-differential operators and singular integral operators (SIOs). The location and geometry of the set of singularities of a distribution can be obtained by means of a continuous transform, which connects to microlocal analysis. Microlocal analysis can be employed to study how singularities propagate under certain classes of operators, e.g. Fourier integral operators and pseudo-differential operators, as well as many integral operators arising in integral geometry. The wavefront set of a distribution is an essential concept in microlocal analysis, which is particularly useful in inverse problems, where the goal is to recover the wavefront set of a function or distribution from the solution of an operator equation. In this talk, our aim is to characterize the Sobolev wavefront set using shearlets, and to discuss its connection with Hölder regularity. Later we show that shearlets provide very efficient representations for a large class of SIOs. Shearlets are particularly useful in representing anisotropic functions, being an affine-like system of well-localized waveforms at various scales, locations, and orientations. This is joint work with Dr. Niraj K. Shukla.

The relation between the Cahn-Hilliard equation and CMC surfaces

Speaker: Matteo Rizzi
Speaker Affiliation: Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-06-17
Time: 3:00 pm (subject to variability)
Location: DIMA - room 714, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
In the talk I will show the links between the Cahn-Hilliard equation -εΔu = ε^(-1)(u - u^3) - l_ε, with l_ε ∈ R, and constant mean curvature (CMC) surfaces. In particular, I will present a paper in which I constructed a family of entire solutions in R^3 whose zero level set approaches, as ε → 0, a given complete, embedded, k-ended constant mean curvature surface. This is joint work with Michal Kowalczyk. Moreover, I will discuss some classification results for the same equation.

Optimization in inverse problems via inertial iterative regularization

Speaker: Guillaume Garrigos
Speaker Affiliation: Université de Paris
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-05-22
Time: 2:30 pm
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
In the context of linear inverse problems, we propose and study a general iterative regularization method that allows for large classes of regularizers and data-fit terms. We are particularly motivated by non-smooth data-fit terms, such as a Kullback-Leibler divergence or an L1 distance. We treat these problems by studying both a continuous (ODE) and a discrete (algorithmic) dynamics, based on a primal-dual diagonal inertial method designed to solve hierarchical optimization problems efficiently. The key point of our approach is that, in the presence of noise, the number of iterations of our algorithm acts as a regularization parameter. In practice this means that the algorithm must be stopped after a certain number of iterations. This is what is called regularization by early stopping, an approach which has gained popularity in statistical learning. Our main result establishes convergence and optimal stability of our algorithm, in the sense that for additive data-fit terms we achieve the same rates as the Tikhonov regularization method for linear problems.
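Early stopping as regularization can be illustrated with the simplest iterative scheme, the classical Landweber iteration (not the primal-dual inertial method of the talk); the ill-conditioned problem below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# Ill-conditioned linear inverse problem: A has a rapidly decaying spectrum
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ U.T
x_true = rng.standard_normal(n)
y = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

# Landweber iteration: x_{k+1} = x_k + tau * A^T (y - A x_k).
# The iteration count k plays the role of the regularization parameter:
# one stops early (in practice via a discrepancy rule) rather than
# iterating to full convergence, which would amplify the noise.
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
errors = []
for k in range(2000):
    x = x + tau * A.T @ (y - A @ x)
    errors.append(np.linalg.norm(x - x_true))
best_k = int(np.argmin(errors))
```

Tracking `errors` over the iterations shows why the stopping index acts like a regularization parameter: the well-conditioned components of the solution are recovered first, while noise-dominated components are fitted only later.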

Bio
Guillaume Garrigos studied Applied Mathematics in Montpellier (France). In 2015 he obtained a Franco-Chilean Ph.D. in Applied Mathematics from both the Université de Montpellier and the Universidad Técnica Federico Santa María, under the direction of Hedy Attouch and Juan Peypouquet. He then did a postdoc within the Laboratory for Computational and Statistical Learning, a joint lab between IIT and MIT, working in collaboration with Lorenzo Rosasco and Silvia Villa. After that, he joined Gabriel Peyré's team at the École Normale Supérieure de Paris for a second postdoc. Since 2018, he has been Maître de Conférences (Associate Professor) at the Université de Paris (formerly Paris-Diderot). His current research interests focus on the interplay between optimization and regularization for inverse problems arising in machine learning or in signal and image processing.

Learning with discrete MAP-inference models for stereo and motion

Speaker: Thomas Pock
Speaker Affiliation: Graz University of Technology
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-05-16
Time: 3:00 pm (subject to variability)
Location: DIMA - Room 704, VII floor, via Dodecaneso 35, Genova, IT

Abstract
MAP inference models (also known as MRF or CRF models) are simple yet powerful discrete optimization models which can be used to solve a number of computer vision tasks. Recently, those models have been outperformed by black-box learning methods based on convolutional neural networks. In this talk, we interpret MAP inference models as an additional inference layer in the network, giving us the ability to impose a well-controlled smoothness prior on the solution. In order to make the MAP inference layer efficient as well, we propose a highly parallel dual coordinate descent algorithm based on dynamic programming. For learning we make use of a technique similar to the structured output support vector machine, which allows us to perform end-to-end learning. We show applications of our learned models to stereo and motion estimation. Joint work with A. Shekhovtsov, P. Knöbelreiter, G. Munda and C. Reinbacher.

Bio
Thomas Pock received his MSc (1998-2004) and his PhD (2005-2008) in Computer Engineering (Telematik) from Graz University of Technology. After a post-doc position at the University of Bonn, he moved back to Graz University of Technology, where he was an Assistant Professor at the Institute for Computer Graphics and Vision. In 2013 he received the START Prize of the Austrian Science Fund (FWF) and the German Pattern Recognition Award of the German Association for Pattern Recognition (DAGM), and in 2014 he received a Starting Grant from the European Research Council (ERC). Since 2014, Thomas Pock has been a Professor of Computer Science at Graz University of Technology and a principal scientist at the Center for Vision, Automation & Control at the Austrian Institute of Technology (AIT). The focus of his research is the development of mathematical models for computer vision and image processing, as well as the development of efficient convex and non-smooth optimization algorithms.

Learning to Adapt: Digging Deeper into Domain Adaptation for Visual Recognition in Real-world and Dynamic Environments

Speaker: Elisa Ricci
Speaker Affiliation: University of Trento and Fondazione Bruno Kessler, Italy
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-05-10
Time: 3:00 pm (subject to variability)
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
Deep networks have significantly improved the state of the art for several tasks in computer vision. Unfortunately, the impressive performance gains have come at the price of massive amounts of labeled data. As the cost of collecting and annotating data is often prohibitive, given a target task where few or no training samples are available, it would be desirable to build effective learners that can leverage information from labeled data of a different but related source domain. However, a major obstacle in adapting models to the target task is the shift in data distributions across domains. This problem, typically referred to as domain shift, has motivated research into Domain Adaptation (DA). Traditional DA algorithms assume the presence of a single source and a single target domain. However, in real-world applications different situations may arise. For instance, in some cases multiple datasets from diverse source domains may be available, while in other settings target samples may not be given at the training stage or may arise from temporal data streams. Alternatively, in some applications knowledge about different domains may only be provided in the form of side information (e.g. metadata, captions), which should be effectively exploited to guide the adaptation process. In this talk I will provide an overview of the problem of DA, focusing on visual recognition tasks, and describe recent works on adaptation in dynamic, real-world settings.

Bio
Elisa Ricci is an Associate Professor at the University of Trento and a Researcher at Fondazione Bruno Kessler, Italy. She received her PhD degree in Electrical Engineering from the University of Perugia in 2008. Her main research interests center on developing deep learning algorithms for human behaviour analysis from visual and multi-modal data. She received the ACM Multimedia 2015 Best Paper Award, the IBM Best Student Paper Award at ICPR 2014 and the Intel Best Paper Award at ICPR 2016. She is associate editor of IEEE Transactions on Multimedia and ACM Transactions on Multimedia Computing, Communications, and Applications. She is or has been Area Chair of ACM MM 2016-2019, ECCV 2016, ICCV 2017 and BMVC 2018-2019, and Program Chair of ACM MM 2020 and ICIAP 2019.

Designing non-parametric activation functions: recent advances

Speaker: Simone Scardapane
Speaker Affiliation: Sapienza University of Rome
Host: Lorenzo Rosasco and Raffaello Camoriano
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-04-17
Time: 3:00 pm
Location: DIBRIS - Conference Hall, III floor, via Dodecaneso 35, Genova, IT.

Abstract
Recently, the design of flexible nonlinearities has become an important line of research in the deep learning community. In the first part of the talk we will review how to tackle this problem, both in the context of simple parameterizations of known functions (e.g., the parametric ReLU) and with the definition of more advanced, non-parametric models (e.g., the Maxout network). The second part of the talk will focus on a recent proposal, the kernel activation function, which is based on a kernel expansion of its input. We will show its core idea and some recent extensions, involving its use in the context of other types of nonlinearities, such as gates (as in LSTMs) and attention models. The talk concludes with some open challenges and possible lines of research.
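A minimal sketch of the core idea as I understand it: the activation is expanded over a fixed kernel dictionary with learnable mixing weights (the dictionary size, kernel width and the ReLU-mimicking initialization below are illustrative choices, not prescribed values):

```python
import numpy as np

# Kernel activation function (KAF): a nonlinearity parameterized as a
# kernel expansion g(s) = sum_i alpha_i * exp(-gamma * (s - d_i)^2)
# over a fixed dictionary d_i; only the mixing weights alpha are learned.
D = np.linspace(-3.0, 3.0, 20)   # fixed dictionary of centers
gamma = 1.0                      # kernel width (hyperparameter)

def kaf(s, alpha):
    # s: array of pre-activations; returns elementwise activation values
    K = np.exp(-gamma * (s[:, None] - D[None, :]) ** 2)
    return K @ alpha

# Example initialization: fit alpha so the KAF mimics a ReLU,
# via least squares on a dense grid of inputs.
s_grid = np.linspace(-3.0, 3.0, 200)
K_grid = np.exp(-gamma * (s_grid[:, None] - D[None, :]) ** 2)
alpha0, *_ = np.linalg.lstsq(K_grid, np.maximum(s_grid, 0.0), rcond=None)
```

During training, `alpha` would be updated by backpropagation along with the network weights, letting each neuron learn its own nonlinearity.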

Bio
Simone Scardapane is an assistant professor at Sapienza University of Rome, where he was previously a post-doctoral fellow, with a focus on deep learning. Previously, he was a research fellow at Stirling University (UK) and a visiting student at La Trobe University in Melbourne. He also has a strong interest in promoting machine learning in Italy: he is a co-founder and chairman of the Italian Association for Machine Learning, co-organizer of the Rome Machine Learning and Data Science Meetup, and a current Google Developer Expert for Machine Learning.

Electrical impedance tomography and Calderon's inverse problem: a review

Speaker: Matteo Santacesaria
Speaker Affiliation: University of Genoa
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-04-16
Time: 2:30 pm (subject to variability)
Location: DIMA- Room 705, VII floor, via Dodecaneso 35, Genova, IT

Abstract
Calderon's inverse conductivity problem consists in determining the electrical conductivity distribution inside a body from current and voltage measurements on its boundary. Applications include medical imaging, nondestructive testing and geophysical prospecting. Since its formulation in 1980 it has stimulated a huge amount of research in both pure and applied mathematics. On the theoretical side, the main issue has been to prove uniqueness results, meaning the injectivity of the measurement or forward map, under appropriate assumptions on the regularity of the unknown conductivity, the amount of measurements and the geometry of the domain. Concerning applications, electrical impedance tomography (EIT) has been developed as the main imaging modality modeled by Calderon's problem. EIT faces great numerical hurdles, since errors in the data propagate exponentially to the reconstruction; in order to mitigate this instability (ill-posedness), strategies ranging from regularization methods to compressed sensing and machine learning have been employed. In this talk I will review the main results obtained for this problem and point out some theoretical and numerical challenges that are still open.
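Schematically, the problem the abstract refers to can be stated as follows (a standard textbook formulation, not taken from the talk):

```latex
% Conductivity equation with Dirichlet boundary data f on a bounded domain \Omega:
\begin{equation*}
  \begin{cases}
    \nabla \cdot (\sigma \nabla u) = 0 & \text{in } \Omega, \\
    u = f & \text{on } \partial\Omega .
  \end{cases}
\end{equation*}
% The forward (Dirichlet-to-Neumann) map sends boundary voltages to currents:
\begin{equation*}
  \Lambda_\sigma : f \mapsto \sigma \, \partial_\nu u \big|_{\partial\Omega} ,
\end{equation*}
% and a uniqueness result is the injectivity of \sigma \mapsto \Lambda_\sigma:
\begin{equation*}
  \Lambda_{\sigma_1} = \Lambda_{\sigma_2} \implies \sigma_1 = \sigma_2 .
\end{equation*}
```

EIT is then the numerical inversion of \(\Lambda_\sigma\), whose severe ill-posedness is what motivates the regularization strategies mentioned above.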

Bio
Matteo Santacesaria obtained his PhD in applied mathematics at École Polytechnique (France) in 2012. He has held post-doctoral positions at Université Joseph Fourier, the University of Helsinki and Politecnico di Milano. He is currently an assistant professor (RTD A) at DIMA, University of Genoa.

Pair-matching and sequential learning of communities

Speaker: Christophe Giraud
Speaker Affiliation: Paris-Sud University
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-04-09
Time: 3:00 pm (subject to variability)
Location: DIMA - Room 705, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
We will discuss the problem of sequentially learning successful matchings between individuals. We will consider the simplest settings and exhibit some phase transition phenomena. The analysis relies on recent results on community recovery in stochastic block models, which we will introduce starting from the most basic settings.
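As background, the stochastic block model mentioned in the abstract can be defined as follows in its simplest symmetric two-community form (a standard definition, not taken from the talk):

```latex
% Symmetric stochastic block model on n nodes: each node i carries a hidden
% community label z_i \in \{1, 2\}, and edges are drawn independently with
\begin{equation*}
  \mathbb{P}\big( i \sim j \big) =
  \begin{cases}
    p & \text{if } z_i = z_j \quad \text{(within a community)}, \\
    q & \text{if } z_i \neq z_j \quad \text{(across communities)},
  \end{cases}
  \qquad p > q .
\end{equation*}
```

Community recovery asks to estimate the hidden labels \(z_i\) from the observed graph, and the gap between \(p\) and \(q\) governs the phase transitions between impossible, partial and exact recovery.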

Bio
Christophe Giraud received a Ph.D. in probability theory from University Paris 6. He was an assistant professor at the University of Nice from 2002 to 2008, has been an associate professor at École Polytechnique since 2008, and has been a Professor at Paris-Sud University (Orsay) since 2012. His recent work focuses on the understanding of some fundamental problems in statistics, the analysis of popular machine learning algorithms, and the design of new ones.

On the shape of hypersurfaces with almost constant mean curvature

Speaker: Giulio Ciraolo
Speaker Affiliation: Palermo University
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-04-02
Time: 3:00 pm (subject to variability)
Location: DIMA- Room 705, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
Alexandrov’s theorem asserts that spheres are the only closed embedded hypersurfaces with constant mean curvature in the Euclidean space. In this talk we will discuss some quantitative versions of Alexandrov’s theorem. In particular, we will consider a hypersurface with mean curvature close to a constant and quantitatively describe its proximity to a sphere or to a collection of tangent spheres of equal radii in terms of the oscillation of the mean curvature. We will also discuss these issues for the nonlocal mean curvature, by showing a remarkable rigidity property of the nonlocal problem which prevents bubbling phenomena and proving proximity to a single sphere.

Multiscale decompositions in imaging and inverse problems

Speaker: Luca Rondi
Speaker Affiliation: University of Milan
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT

Date: 2019-03-26
Time: 2:30 pm (subject to variability)
Location: DIMA- Room 704, VII floor, via Dodecaneso 35, Genova, IT.

Abstract
We extend the hierarchical decomposition of an image as a sum of constituents of different scales, introduced by Tadmor, Nezzar and Vese in 2004, to a general setting. We develop a theory for multiscale decompositions which, besides extending the one of Tadmor, Nezzar and Vese to arbitrary L^2 functions, is applicable to a wide range of other imaging problems, such as image registration, or strictly related ones, such as nonlinear inverse problems. This is a joint work with Klas Modin and Adrian Nachman.
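For intuition only, here is a toy sketch of the hierarchical scheme on a 1-D signal. The total-variation (ROF) minimisation used by Tadmor, Nezzar and Vese is replaced by a plain moving-average low-pass filter, and all parameter choices below are illustrative assumptions, not details from the talk:

```python
import numpy as np

def smooth(v, width):
    # Crude low-pass step standing in for the ROF minimisation:
    # a moving average with window `width`.
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

def hierarchical_decomposition(f, num_scales=4, width0=32):
    """Sketch of a multiscale decomposition: extract coarse-to-fine
    constituents u_k from the running residual, halving the smoothing
    scale at each step (mimicking the geometric scale sequence)."""
    parts, residual = [], f.astype(float)
    width = width0
    for _ in range(num_scales):
        u = smooth(residual, width)   # constituent at the current scale
        parts.append(u)
        residual = residual - u       # finer detail left to explain
        width = max(1, width // 2)    # move to the next, finer scale
    return parts, residual

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.normal(size=256)
parts, residual = hierarchical_decomposition(signal)
recon = sum(parts) + residual         # telescoping sum recovers the input
```

The telescoping construction guarantees that the constituents plus the final residual add back up to the original signal, which is the structural property the hierarchical decomposition is built on.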

Date  Speaker  Title  Location

Jan 16, 2020 Carl Henrik Ek Monotonic Gaussian process flow Genova
Dec 17, 2019 Mauro Maggioni Learning Interaction laws in particle- and agent-based systems Genova
Dec 3, 2019 Andreas Maurer Uniform estimation of nonlinear statistics Genova
Oct 29, 2019 Luca Calatroni Adaptive backtracking and acceleration of a forward-backward algorithm for strongly convex optimisation: convergence results and imaging applications Genova
Oct 14, 2019 Davide Barbieri Optimal data approximation with group invariances Genova
Oct 8, 2019 Marta Betcke Curvelet frame and photoacoustic reconstruction Genova
Sep 16, 2019 Alexandre Gramfort Statistical Machine Learning and Optimisation Challenges for Brain Imaging at a Millisecond Timescale Genova
Jul 22, 2019 Dong Hye Ye Machine Learning for Image Processing Genova
Jul 18, 2019 John Duchi Stay positive! The importance of better models in stochastic optimization Genova
Jul 11, 2019 Marco Gori The Principle of Least Cognitive Action Genova
Jun 21, 2019 Swaraj Paul Resolution of Sobolev wavefront set and sparse representation of singular integral operator using shearlets Genova
Jun 17, 2019 Matteo Rizzi The relation between the Cahn-Hilliard equation and CMC surfaces Genova
May 22, 2019 Guillaume Garrigos Optimization in inverse problems via inertial iterative regularization Genova
May 16, 2019 Thomas Pock Learning with discrete MAP-inference models for stereo and motion Genova
May 10, 2019 Elisa Ricci Learning to Adapt: Digging Deeper into Domain Adaptation for Visual Recognition in Real-world and Dynamic Environments Genova
Apr 17, 2019 Simone Scardapane Designing non-parametric activation functions: recent advances Genova
Apr 16, 2019 Matteo Santacesaria Electrical impedance tomography and Calderon's inverse problem: a review Genova
Apr 9, 2019 Christophe Giraud Pair-matching and sequential learning of communities Genova
Apr 2, 2019 Giulio Ciraolo On the shape of hypersurfaces with almost constant mean curvature Genova
Mar 26, 2019 Luca Rondi Multiscale decompositions in imaging and inverse problems Genova
