› Graph abstraction for closed pattern mining in attributed networks - Santini, Soldano
13:45-14:15 (30min)
› Modeling Uniform Color Distances by Learning Local Metrics - Habrard, Muselet, Perrot, Sebban
14:15-14:20 (05min)
› Semi-supervised feature selection in a multi-label context - Alalga, Benabdeslem
14:20-14:25 (05min)
› Taking Context into Account to Constrain Deep Networks: Application to Scene Labeling - Kekec, Emonet, Fromont, Trémeau, Wolf
14:25-14:30 (05min)
› Majority vote domain adaptation via non-iterative self-labeling - Emilie Morvant
14:30-15:00 (30min)
› Unsupervised one class identification by selecting and combining ranking functions - Cornuéjols, Martin
15:00-15:05 (05min)
› Multi Agent Learning of Relational Action Models - Rodrigues, Soldano, Bourgne, Rouveirol
15:05-15:10 (05min)
(Chair: Marc Sebban) Francis Bach: Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations (large n) and each of them is high-dimensional (large p). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and in practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors.
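As a point of reference for the single-pass online setting the abstract contrasts with batch methods, here is a minimal sketch of stochastic gradient descent with Polyak-Ruppert averaging on a synthetic least-squares problem. The problem sizes, step-size schedule, and noise level are illustrative assumptions, not details from the talk (which concerns refinements beyond plain SGD):

```python
import numpy as np

# Synthetic least-squares data: n observations in dimension p
# (sizes chosen for illustration only).
rng = np.random.default_rng(0)
n, p = 20000, 5
w_true = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(p)       # current SGD iterate
w_avg = np.zeros(p)   # Polyak-Ruppert running average of iterates
for i in range(n):    # a single pass over the data, one observation at a time
    grad = (X[i] @ w - y[i]) * X[i]    # gradient of one squared loss
    w -= 0.1 / np.sqrt(i + 1) * grad   # decaying step size (illustrative)
    w_avg += (w - w_avg) / (i + 1)     # online update of the average

err = np.linalg.norm(w_avg - w_true)   # estimation error after one pass
print(err)
```

Each observation is touched exactly once, which is what makes such methods attractive when n is large; averaging the iterates is a standard device for improving the convergence of the final estimate without strong convexity.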
Invited talk (Hendrik Blockeel - KU Leuven): Declarative data analysis
Amphithéâtre J21
(Chair: Elisa Fromont) Hendrik Blockeel: With increasing amounts of ever more complex forms of digital data becoming available, the methods for analyzing these data have also become more diverse and sophisticated. With this comes an increased risk of incorrect use of these methods, and a greater burden on the user to be knowledgeable about their assumptions. For instance, it is well known that statistical methods are often used incorrectly, and there seems to be no reason to believe that the situation is much better for data mining methods in general. The idea behind declarative data analysis is that the burden of choosing the right statistical methodology for answering a research question should no longer lie with the user, but with the system. The user should be able to simply describe the problem, formulate a question, and let the system take it from there.