We will show how error bounds can be turned into effective tools for deriving complexity results for first-order methods in convex minimization.
This led us to study the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. We showed that the two concepts are equivalent for functions with Hölderian growth.
In a second stage, we show how KL inequalities can be used to compute new complexity bounds for a wealth of descent methods. Our method is completely original and makes use of a one-dimensional worst-case proximal sequence, in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods.
Our approach inaugurates a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify the essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated on the iterative shrinkage-thresholding algorithm (ISTA), for which we provide a linear complexity bound.
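For readers unfamiliar with the algorithm, a minimal sketch of ISTA applied to the standard lasso objective 0.5*||Ax - b||^2 + lam*||x||_1 is given below. The problem instance and the fixed step size 1/L are illustrative choices, not taken from the talk; the complexity analysis itself is the subject of the work and is not reproduced here.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    # ISTA: a forward gradient step on the smooth part 0.5*||Ax - b||^2,
    # followed by the proximal step (soft-thresholding) on lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each iteration is a descent step for the composite objective, which is what places ISTA inside the abstract descent scheme mentioned above.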
(This talk is based on joint work with T.P. Nguyen, J. Peypouquet and B. Suter. Title: From error-bounds to the complexity of first-order descent methods for convex functions.)