July 8-10, 2014, Saint-Etienne (France)

Invited Speakers

 

We are pleased to welcome the following invited speakers at CAp'2014:

Francis Bach, Research Director at INRIA (LIENS, ENS Paris, France).


Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data: there are many observations ("large n") and each of them is high-dimensional ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and in practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm achieves a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems at an iteration cost similar to that of stochastic gradient descent. (Joint work with Nicolas Le Roux, Eric Moulines and Mark Schmidt.)
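The contrast between plain stochastic gradient descent and a finite-data method that keeps a memory of per-example gradients can be sketched in a few lines of NumPy. This is our own toy illustration of the averaged-gradient idea on a least-squares problem, not the speakers' code; the problem setup, the helper grad_i, and the step sizes are all choices made for the demo.

```python
import numpy as np

# Toy least-squares problem: minimize (1/n) * sum_i 0.5 * (x_i . w - y_i)^2.
rng = np.random.default_rng(0)
n, p = 1000, 20
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def grad_i(w, i):
    # Gradient of the i-th squared loss 0.5 * (x_i . w - y_i)^2.
    return (X[i] @ w - y[i]) * X[i]

L = np.max(np.sum(X ** 2, axis=1))  # max per-example smoothness constant
n_iters = 20 * n                    # roughly 20 passes over the data

# Plain SGD: one gradient per step, decaying step size.
w_sgd = np.zeros(p)
for t in range(n_iters):
    i = rng.integers(n)
    w_sgd -= grad_i(w_sgd, i) / (L * np.sqrt(t + 1.0))

# Averaged-gradient update: remember the last gradient seen for each
# example and step along their average; the cost per iteration is still
# a single gradient evaluation, as for SGD, but a constant step size
# can be used and linear convergence is possible on strongly convex problems.
w_avg = np.zeros(p)
mem = np.zeros((n, p))   # last gradient computed for each data point
avg = np.zeros(p)        # running average of the memorized gradients
for t in range(n_iters):
    i = rng.integers(n)
    g = grad_i(w_avg, i)
    avg += (g - mem[i]) / n  # swap example i's contribution in the average
    mem[i] = g
    w_avg -= avg / (16.0 * L)

print("SGD parameter error:     ", np.linalg.norm(w_sgd - w_true))
print("Avg-gradient param error:", np.linalg.norm(w_avg - w_true))
```

The trade-off visible here is the one the abstract alludes to: the memory table costs O(n p) storage, but buys a constant step size and much faster convergence in the finite-data regime.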

---------------------------------------------------------------------------

Hendrik Blockeel, Professor (KU Leuven, Belgium).

With increasing amounts of ever more complex digital data becoming available, the methods for analyzing these data have also become more diverse and sophisticated. With this comes an increased risk of using these methods incorrectly, and a greater burden on the user to be knowledgeable about their assumptions. It is well known, for instance, that statistical methods are often used incorrectly, and there is little reason to believe that the situation is much better for data mining methods in general.
 
The idea behind declarative data analysis is that the burden of choosing the right statistical methodology for answering a research question should no longer lie with the user, but with the system. The user should be able to simply describe the problem, formulate a question, and let the system take it from there. To achieve this, we need to answer questions such as: what languages are suitable for formulating these questions, and what execution mechanisms can we develop for them? In this talk, I will discuss recent and ongoing research in this direction. The talk will touch upon query languages for data mining and for statistical inference, declarative modeling for data mining, meta-learning, constraint-based data mining, discovery of causal models, and more. What connects these research threads is that they all strive to put intelligence about data analysis into the system, instead of assuming it resides in the user.
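A minimal sketch of the declarative idea, assuming nothing about the systems discussed in the talk: the user asks whether two samples differ, and the system, rather than the user, checks the assumptions and picks the methodology. The helper compare_means and its dispatch rule are hypothetical, chosen here purely for illustration.

```python
import numpy as np
from scipy import stats

def compare_means(a, b, alpha=0.05):
    """Answer 'do samples a and b differ in central tendency?' by letting
    the system choose the test from the data instead of asking the user."""
    a, b = np.asarray(a), np.asarray(b)
    # Check the normality assumption behind the t-test on both samples.
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        name, res = "Welch t-test", stats.ttest_ind(a, b, equal_var=False)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b)
    return name, res.pvalue

rng = np.random.default_rng(1)
name, pvalue = compare_means(rng.normal(0.0, 1.0, 50),
                             rng.normal(0.5, 1.0, 50))
print(f"chosen test: {name}, p-value: {pvalue:.4f}")
```

In a full declarative system this dispatch logic would of course be far richer, but the division of labor is the point: the user states the question, the system supplies the statistical expertise.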