Wednesday, 9 July 2014
8:30 (30min)
9:00 (1h30), Amphithéâtre J21
11:00 (1h), Amphithéâtre J21
Invited talk (Francis Bach, INRIA - ENS Paris): Beyond stochastic gradient descent for large-scale machine learning. (Chair: Marc Sebban)
Abstract: Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data: there are many observations (large n), and each of them is large (large p). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and in practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors.
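
For readers unfamiliar with the single-pass online setting the abstract contrasts with batch methods, the following is a minimal illustrative sketch (not the algorithm presented in the talk) of constant-step-size averaged stochastic gradient descent on a smooth least-squares objective. The synthetic data, step-size heuristic, and all variable names are assumptions made for illustration; averaging the iterates is the ingredient that, for smooth losses, yields the improved O(1/n)-type behavior without strong convexity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: n observations of dimension p (the "large n, large p" regime).
    n, p = 10_000, 50
    w_true = rng.normal(size=p)
    X = rng.normal(size=(n, p))
    y = X @ w_true + 0.1 * rng.normal(size=n)

    w = np.zeros(p)        # current SGD iterate
    w_avg = np.zeros(p)    # running (Polyak-Ruppert) average of the iterates
    # Constant step size on the order of 1/(2 R^2), with R^2 estimated
    # from the average squared norm of the observations (an assumption).
    step = 0.5 / np.mean(np.sum(X**2, axis=1))

    for i in range(n):  # a single pass over the data: each observation seen once
        x_i, y_i = X[i], y[i]
        grad = (x_i @ w - y_i) * x_i      # stochastic gradient of 0.5 * (x_i'w - y_i)^2
        w -= step * grad
        w_avg += (w - w_avg) / (i + 1)    # online update of the iterate average

    print("error of last iterate:    ", np.linalg.norm(w - w_true))
    print("error of averaged iterate:", np.linalg.norm(w_avg - w_true))

Running this, the averaged iterate is typically much closer to w_true than the last iterate, which is the practical benefit of averaging that the smoothness-based analysis explains.
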
12:00 (1h45)
13:45 (2h)
15:45 (45min)
16:30 (2h)
19:30 (3h30)
Legend: Session | Talk | Logistics | Break | Outing