It can be observed in many applications that algorithms converge unexpectedly well, despite the absence of any theory to explain their success. Analysis of algorithm complexity in terms of function values is widespread and can be applied readily in many practical instances. However, this type of analysis cannot explain the observed convergence of iterates to stationary points in many cases, nor can it yield error estimates on the distance to locally optimal solutions as opposed to locally optimal values.
We propose a framework for quantifiable local convergence analysis of iterations of expansive fixed point mappings. The key to this analysis is the well-established notion of metric subregularity of the mapping at fixed points, which we simply call metric regularity on a semi-pointwise set on the graph of the mapping. Within this framework we show that a relaxed version of metric regularity, together with a type of calmness of the fixed point mapping, is necessary for local linear convergence. To demonstrate the theory, we prove for the first time a number of results showing local linear convergence of cyclic projections for (possibly) inconsistent feasibility problems, local linear convergence of the forward-backward algorithm for structured optimization without convexity (strong or otherwise), and local linear convergence of a nonconvex application of the Douglas–Rachford algorithm for minimization. Known results, convex and nonconvex, are recovered within this framework.