Genova Machine Learning & Robotics Seminar Series
The Genova Machine Learning & Robotics Seminar Series takes place in
Genova, Italy. The calendar and the list of organizers appear below.
For updated information and announcements, please subscribe to the
ML&R Seminar Series mailing list.
Seminar:
TALK Multiclass Learning: Simplex Coding and Regularization
Speaker: Youssef Mroueh
Speaker Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT
Host: Lorenzo Rosasco
Host Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT
Date: January 17th, 2013.
Time: 3:00 PM - 4:00 PM
Location:
RBCS-Meeting room, IV floor.
Via Morego 30, Genova, IT.
In this talk we discuss a novel framework for
multiclass learning, defined by a suitable coding/decoding
strategy, namely the simplex coding. This new approach
allows one to generalize to multiple classes a relaxation approach
commonly used in binary classification. In this framework, a
relaxation error analysis can be developed while avoiding
cumbersome constraints on the hypothesis class under consideration.
Computational and algorithmic aspects are discussed in the
context of both kernel and boosting methods.
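The simplex coding maps K classes to the K vertices of a regular simplex in R^(K-1): unit-norm code vectors with pairwise inner product -1/(K-1). A minimal sketch of one standard construction (the talk's precise algorithm and decoding rule may differ):

```python
import numpy as np

def simplex_coding(K):
    """K vertices of a regular simplex in R^(K-1): unit norm,
    pairwise inner products -1/(K-1), summing to zero."""
    E = np.eye(K) - np.ones((K, K)) / K          # centered one-hot codes
    _, _, Vt = np.linalg.svd(E)
    C = E @ Vt[:K - 1].T                         # project onto (K-1)-dim span
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    return C

C = simplex_coding(4)   # shape (4, 3)
```

Decoding a prediction f(x) in R^(K-1) then amounts to taking the class whose code vector has the largest inner product with f(x).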
Youssef Mroueh is a PhD candidate in the Department of Electrical
Engineering and Computer Science at the Massachusetts Institute of
Technology as well as a Fellow at the Istituto Italiano di Tecnologia. He
works under the supervision of Prof. Tomaso Poggio and Prof. Lorenzo
Rosasco. He did his undergraduate studies and Master of Science at École
Polytechnique (Paris, France) and École des Mines ParisTech. His research
interests include machine learning, signal processing and compressive
sensing, and discrete and computational geometry.
Seminar:
TALK Machine Learning for Motor Skills in Robotics
Speaker: Jan Peters
Speaker Affiliation: Technische Universität Darmstadt and Max Planck Institute for Intelligent Systems
Host: Lorenzo Rosasco
Host Affiliation: DIBRIS, Universita' di Genova; Laboratory for Computational and Statistical Learning, MIT-IIT
Date: April 4th, 2013.
Time: 3:00 PM - 4:00 PM
Location:
DIBRIS- Conference Hall, III floor, via Dodecaneso 35, Genova, IT.
Intelligent autonomous robots that can assist humans in situations of
daily life have been a long standing vision of robotics, artificial
intelligence, and cognitive sciences. An elementary step towards this
goal is to create robots that can learn tasks triggered by environmental
context or higher-level instruction. However, learning techniques have
yet to live up to this promise, as only a few methods manage to scale to
high-dimensional manipulators or humanoid robots. In this talk, we
investigate a general framework suitable for learning motor skills in
robotics which is based on the principles behind many analytical
robotics approaches. It involves generating a representation of motor
skills by parameterized motor primitive policies acting as building
blocks of movement generation, and a learned task execution module that
transforms these movements into motor commands. We discuss learning at
three levels of abstraction: learning accurate control to execute
movements, learning motor primitives to acquire simple movements, and
learning the task-dependent "hyperparameters" of these motor primitives
to master complex tasks. We discuss task-appropriate learning approaches for imitation
learning, model learning and reinforcement learning for robots with many
degrees of freedom. Empirical evaluations on several robot systems
illustrate the effectiveness of the framework and its applicability to
learning control on an anthropomorphic robot arm. A large number of
real-robot examples will be demonstrated, ranging from ball paddling,
ball-in-a-cup, darts, and table tennis to grasping.
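A widely used instance of the parameterized motor primitives mentioned above is the dynamic movement primitive: a stable point attractor modulated by a learnable forcing term. The toy sketch below integrates a one-dimensional primitive; the gains, basis functions, and exact formulation here are illustrative and may differ from those in the talk:

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths, tau=1.0, dt=0.001, T=1.0):
    """Integrate a 1-D dynamic movement primitive from y0 towards goal g.
    weights parameterize the forcing term over Gaussian basis functions."""
    alpha, beta, alpha_s = 25.0, 25.0 / 4.0, 3.0   # critically damped gains
    y, z, s = y0, 0.0, 1.0                          # state and phase variable
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)  # basis activations
        f = s * (g - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)
        y += dt / tau * z
        s += dt / tau * (-alpha_s * s)              # phase decays to 0
        traj.append(y)
    return np.array(traj)
```

Because the forcing term vanishes as the phase decays, the primitive converges to the goal regardless of the learned weights; imitation or reinforcement learning only shapes the transient.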
Jan Peters is an electrical and mechanical engineer (Dipl.-Ing., TU München;
MSc, USC) and computer scientist (Dipl.-Inform., FernUni Hagen; MSc and PhD,
USC) who was educated and performed research at TU München,
the DLR Robotics Center in Germany, ATR in Japan, and USC in California.
Between 2007 and 2010, he was a senior research scientist and group
leader at the Max Planck Institute for Biological Cybernetics. Since 2011,
he has been a senior research scientist and group leader at the Max Planck
Institute for Intelligent Systems and a full professor at Technische
Universität Darmstadt.
Seminar:
TALK Inverse Density as an Inverse Problem via Fredholm Machines
Speaker: Mikhail Belkin
Speaker Affiliation: Department of Computer Science and Engineering, Ohio State University
Host: Lorenzo Rosasco
Host Affiliation: DIBRIS, Universita' di Genova; Laboratory for Computational and Statistical Learning, MIT-IIT
Date: May 2nd, 2013.
Time: 3:00 PM - 4:00 PM
Location:
RBCS-Meeting room, IV floor.
Via Morego 30, Genova, IT.
In this talk I will discuss the problem of estimating the ratio q(x)/p(x), where p and q are density functions given by sampled data.
This ratio appears in a number of different settings, including the classical importance sampling in statistics and, more recently, in transfer learning, where an inference procedure learned in one domain needs to be generalized to other tasks.
Our method is based on posing ratio estimation as an inverse problem expressed by a Fredholm
integral equation. This allows us to apply classical regularization techniques and to obtain simple
and easily implementable algorithms within the kernel methods framework.
We provide detailed theoretical analysis for the case of the Gaussian kernel and show very competitive experimental comparisons in several settings.
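The idea can be sketched numerically: given samples from p and q, write the empirical Fredholm equation in a Gaussian RKHS and solve a regularized least-squares problem for the expansion coefficients of the ratio. This is a simplified variant under assumed parameter choices; the estimator analyzed in the talk, its tuning, and its variants may differ:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # pairwise Gaussian kernel matrix between rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fredholm_ratio(Xp, Xq, sigma=0.5, lam=1e-3):
    """Estimate q/p from samples Xp ~ p, Xq ~ q by regularized
    least squares on the discretized Fredholm equation (sketch)."""
    n, m = len(Xp), len(Xq)
    Kpp = gaussian_kernel(Xp, Xp, sigma)
    Kpq = gaussian_kernel(Xp, Xq, sigma)
    # Tikhonov-regularized solve for the coefficients on the p-samples
    v = np.linalg.solve(Kpp @ Kpp / n ** 2 + lam * np.eye(n),
                        Kpq.sum(axis=1) / (n * m))
    return lambda X: gaussian_kernel(X, Xp, sigma) @ v
```

For instance, with p = N(0, 1) and q = N(1, 1), the true ratio exp(x - 1/2) is increasing, and the estimate should be visibly larger at x = 1 than at x = -1.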
Mikhail Belkin is an Associate Professor in the Department of Computer Science and Engineering and the Department of Statistics at the Ohio State University. His research focuses on the applications and theory of machine and human learning. He received his Ph.D. from the Department of Mathematics at the University of Chicago in 2003. He received the U.S. National Science Foundation (NSF) CAREER Award in 2007 and the Lumley Research Award from the College of Engineering at OSU in 2011. He is on the editorial boards of the Journal of Machine Learning Research (JMLR) and IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). He is currently on sabbatical at the Institute of Science and Technology Austria (ISTA).
Seminar:
TALK Nonparametric prediction of stationary time series
Speaker: László Györfi
Speaker Affiliation: Department of Computer Science and Information Theory, Budapest University of Technology and Economics
Host: Lorenzo Rosasco
Host Affiliation: DIBRIS, Universita' di Genova; Laboratory for Computational and Statistical Learning, MIT-IIT
Date: May 10th, 2013.
Time: 3:00 PM - 4:00 PM
Location:
DIBRIS- Conference Hall, III floor, via Dodecaneso 35, Genova, IT.
Given past values of sequences of observation vectors
and target variables, the problem is to predict the target variables
so that the asymptotic average mean squared loss is as small
as possible. If one requires this consistency without any conditions on
the underlying time series, then such universal consistency can be
achieved using the principles of nonparametric statistics and machine
learning.
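A common ingredient of such universal prediction schemes is to run a family of simple nonparametric experts in parallel and combine them by exponentially weighted averaging of their past losses. The toy sketch below uses moving-average experts; the experts and weighting scheme in the talk are more refined, and the parameters here are illustrative:

```python
import numpy as np

def aggregate_predict(y, windows=(1, 2, 4, 8), eta=0.5):
    """Sequentially predict y[t] from y[:t] by exponentially weighted
    averaging of moving-average experts (toy sketch)."""
    loss = np.zeros(len(windows))   # cumulative squared loss per expert
    preds = np.zeros(len(y))        # preds[0] defaults to 0 (no history)
    for t in range(1, len(y)):
        # each expert predicts the mean of its recent window
        f = np.array([y[max(0, t - w):t].mean() for w in windows])
        w = np.exp(-eta * loss)
        w /= w.sum()
        preds[t] = w @ f            # aggregated prediction
        loss += (f - y[t]) ** 2     # update expert losses after observing y[t]
    return preds
```

By convexity of the squared loss, the aggregate's per-step loss never exceeds the weighted average of the experts' losses, and the exponential weights concentrate on the experts that have predicted well so far.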
Seminar:
TALK Unsupervised learning of probability measures.
Speaker: Guille D. Canas
Speaker Affiliation: Laboratory for Computational and Statistical Learning, MIT-IIT
Host: Lorenzo Rosasco
Host Affiliation: DIBRIS, Universita' di Genova; Laboratory for Computational and Statistical Learning, MIT-IIT
Date: June 26th, 2013.
Time: 11:30 AM - 12:30 PM
Location:
DIBRIS- Conference Hall, III floor, via Dodecaneso 35, Genova, IT.
Despite their widespread use, classical unsupervised
learning algorithms, such as k-means, PCA, or sparse coding, lack a
common, non-trivial problem definition for which they provide a
solution under different assumptions. We show that unsupervised
learning algorithms that attempt to reconstruct the input data can be
naturally extended to approximate the data-generating measure, and
hence show that this non-trivial, measure-learning problem definition
encompasses a large class of existing algorithms.
We provide learning rates for the measure-learning extension of
k-means and, in the process, prove explicit rates for the convergence
of empirical to population measures, in the 2-Wasserstein sense. While
previous learning rates typically hold for Gaussian or log-concave
measures, our results hold for general measures under a very mild
bounded moment condition. Our proofs use well-established techniques
from Optimal Quantization and empirical process theory.
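The measure-learning view of k-means can be made concrete: the algorithm's output defines a discrete measure with atoms at the centroids, weighted by cluster proportions, and transporting each data point to its nearest centroid shows that the root mean squared quantization error upper-bounds the 2-Wasserstein distance between the empirical and quantized measures. A toy sketch using Lloyd's algorithm (parameters are illustrative):

```python
import numpy as np

def kmeans_measure(X, k, iters=50, seed=0):
    """Lloyd's k-means returned as a discrete measure: centroid atoms,
    cluster-proportion weights, and the RMS quantization error (an
    upper bound on W2 between empirical and quantized measures)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)   # Lloyd update
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    weights = np.bincount(d2.argmin(axis=1), minlength=k) / len(X)
    rms = np.sqrt(d2.min(axis=1).mean())
    return C, weights, rms
```

Increasing k refines the quantized measure, so the Wasserstein bound shrinks, mirroring the learning rates discussed in the abstract.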
Guille D. Canas received an M.E. (EE) from U.P. Madrid, an M.S. (CS)
from U.C. Berkeley, and a Ph.D. (CS) from Harvard University.
He has been with the Laboratory for Computational and Statistical
Learning (MIT and Istituto Italiano di Tecnologia) from 2011 to 2013.
His interests are in Machine Learning, Approximation Theory,
Computational Geometry, and Computer Vision.
Seminar:
TALK Learning representations for learning like humans do
Speaker: Tomaso Poggio
Speaker Affiliation: Department of Brain and Cognitive Sciences, MIT
Host: Lorenzo Rosasco
Host Affiliation: DIBRIS, Universita' di Genova; Laboratory for Computational and Statistical Learning, MIT-IIT
Date: July 4th, 2013.
Time: 3:30 PM - 4:30 PM
Location:
IIT - Sala Montalcini, 0th floor.
Via Morego 30, Genova, IT.
Today’s AI technologies, such as Watson and Siri, are impressive yet still confined to a single domain or task. Imagine how truly intelligent systems, ones that actually understand their world, could change our world. A successful research plan for understanding intelligence includes two key domains: the domain of the physical world and the domain of human agents and their interactions. First, understanding intelligence requires scene, object and action recognition; second, it requires non-verbal social perception (NVSP).
As an example of research in the first domain, I will describe work at the joint IIT-MIT Laboratory for Computational and Statistical Learning over the last two years developing a theory of visual cortex and of deep learning architectures of the convolutional type. I will describe the theoretical consequences of a simple assumption: the main computational goal of the feedforward path in the ventral stream, from V1 through V2 and V4 to IT, is to discount image transformations after learning them during development. A basic neural operation consists of dot products between input vectors and synaptic weights, which can be modified by learning. I will outline theorems showing that a multi-layer hierarchical architecture of dot-product modules can learn in an unsupervised way geometric transformations of images and then achieve the dual goals of invariance to global affine transformations and of robustness to deformations. These architectures can achieve invariance to transformations of a new object: from the point of view of machine learning, they show how to learn in an unsupervised way representations that may considerably reduce the sample complexity of supervised learning.
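The invariance mechanism can be illustrated with a toy transformation group, cyclic shifts: pool the dot products of an input with every shifted copy of a stored template. Because shifting the input only permutes the set of dot products, any pooling function of that set (here, its moments) is exactly shift-invariant. This is a minimal sketch; the theory in the talk covers general transformations and hierarchies:

```python
import numpy as np

def shift_signature(x, template, moments=(1, 2, 3)):
    """Shift-invariant signature of x: moments of the dot products
    of x with all cyclic shifts of a template (toy sketch)."""
    dots = np.array([x @ np.roll(template, g) for g in range(len(x))])
    # pooling over the group: moments of the dot-product distribution
    return np.array([(dots ** p).mean() for p in moments])
```

Stacking such template-and-pooling modules gives a crude analogue of the hierarchical dot-product architecture described in the abstract.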
Tomaso Poggio is the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, Co-Director of the Center for Biological and Computational Learning, and a member of the Computer Science and Artificial Intelligence Laboratory at MIT.
He is one of the founders of computational neuroscience. He pioneered models of the fly’s visual system and of human stereovision, introduced regularization theory to computational vision, made key contributions to the biophysics of computation and to learning theory, and developed an influential model of recognition in the visual cortex.
-
Calendar
May:
Date | Speaker | Affiliation | Info | Hosted by
2 | Mikhail Belkin | Department of Computer Science and Engineering, Ohio State University | Seminar info | Lorenzo Rosasco
10 | László Györfi | Department of Computer Science and Information Theory, Budapest University of Technology and Economics | Seminar info | Lorenzo Rosasco
-
Organizers
The seminar series is organized by: