NTR organizes and hosts scientific webinars on neural networks and invites speakers from all over the world to present their recent work at the webinars.
On May 11, Yegor Klochkov, University of Cambridge, Cambridge, United Kingdom, led a technical Zoom webinar on Excess Risk Bounds Via Stability Generalization.

About the webinar:
The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman and Vondrák, NeurIPS 2018 and COLT 2019; Bousquet, Klochkov, and Zhivotovskiy, COLT 2020) contain a generally inevitable sampling error term of order Θ(1/√n). When applied to excess risk bounds, this leads to suboptimal results in several standard stochastic convex optimization problems.
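For orientation, the bound from the second of these works has, up to constants, the following shape for a γ-uniformly stable algorithm with losses bounded in [0, 1] (a schematic restatement, not the exact theorem): with probability at least 1 − δ,

\[
R(A_S) - \widehat{R}_S(A_S) \;\lesssim\; \gamma \log n \,\log(1/\delta) \;+\; \sqrt{\frac{\log(1/\delta)}{n}},
\]

where R is the population risk, \widehat{R}_S the empirical risk on the sample S of size n, and the second term is the Θ(1/√n) sampling error mentioned above.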
We showed that the Θ(1/√n) term can be avoided: if the so-called Bernstein condition is satisfied, high probability excess risk bounds of order up to O(1/n) are possible via uniform stability.
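A standard formulation of the Bernstein condition (stated here for orientation; the exact assumption used in the talk may differ in constants) asks that, for some B > 0 and a risk minimizer w^*,

\[
\mathbb{E}\big(\ell(w, z) - \ell(w^*, z)\big)^2 \;\le\; B\,\big(R(w) - R(w^*)\big) \quad \text{for all admissible } w.
\]

For a λ-strongly convex and L-Lipschitz loss this holds with B = 2L^2/λ, since |\ell(w,z) - \ell(w^*,z)| \le L\|w - w^*\| while R(w) - R(w^*) \ge (\lambda/2)\|w - w^*\|^2.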
Using this result, we showed a high probability excess risk bound with the rate O(log n/n) for strongly convex and Lipschitz losses, valid for any empirical risk minimization method. This resolves a question of Shalev-Shwartz, Shamir, Srebro, and Sridharan (COLT 2009).
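Schematically, and with constants omitted, the statement for a λ-strongly convex, L-Lipschitz loss reads: for any empirical risk minimizer \hat{w}, with probability at least 1 − δ,

\[
R(\hat{w}) - \min_{w} R(w) \;\lesssim\; \frac{L^2}{\lambda} \cdot \frac{\log n \,\log(1/\delta)}{n}.
\]

The dependence on L, λ, and δ shown here is indicative only; see the first article listed below for the precise statement.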
We discussed how O(log n/n) high probability excess risk bounds are possible for projected gradient descent in the case of strongly convex and Lipschitz losses without the usual smoothness assumption.
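As an illustration of the kind of method covered by this result (a minimal sketch, not the speaker's code), the following projected subgradient descent minimizes a strongly convex, Lipschitz, non-smooth objective, here a λ-regularized hinge loss over an Euclidean ball, using the standard 1/(λt) step size for strongly convex objectives; all names and parameters are illustrative.

import numpy as np

def projected_subgradient_descent(X, y, lam=0.1, radius=10.0, n_steps=1000):
    """Projected subgradient descent for the lam-strongly convex, Lipschitz,
    non-smooth objective
        F(w) = (1/n) * sum_i max(0, 1 - y_i * <w, x_i>) + (lam / 2) * ||w||^2,
    with iterates kept inside the Euclidean ball of the given radius."""
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, n_steps + 1):
        margins = y * (X @ w)
        active = margins < 1.0                 # examples with a non-zero hinge subgradient
        g = -(X[active].T @ y[active]) / n + lam * w
        eta = 1.0 / (lam * t)                  # standard step size for strongly convex objectives
        w = w - eta * g
        norm = np.linalg.norm(w)               # project back onto the ball {w : ||w|| <= radius}
        if norm > radius:
            w *= radius / norm
        w_avg += (w - w_avg) / t               # running average of the iterates
    return w_avg

# Example usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X @ rng.normal(size=5) > 0, 1.0, -1.0)
w_hat = projected_subgradient_descent(X, y)

Averaging the iterates, as done above, is one common way to read off a single predictor from the optimization trajectory.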
Articles:
Stability and Deviation Optimal Risk Bounds with Convergence Rate O(1/n)
Sharper Bounds for Uniformly Stable Algorithms
Moderator and contact: NTR CEO Nick Mikhailovsky: nickm@ntrlab.com.