
Axis 6: Optimization algorithms: deterministic and stochastic methods

In information geometry, statistical data take values in sets equipped with a Riemannian structure, of finite or infinite dimension. Estimators of quantities relating to these data are means, medians, or more generally p-means of the data. Stochastic algorithms that compute these p-means are very useful in the practical applications developed in the publications listed below, notably the processing of stationary radar signals. A very important issue is to handle non-stationary signals as well. To this end, we need to work on path spaces in Riemannian manifolds and develop suitable notions of metric, distance, and mean for these paths.
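As an illustration of such stochastic schemes (a minimal sketch, not taken from the publications mentioned above), the following computes the Riemannian mean (the 2-mean) of points on the circle by stochastic gradient descent, where the circle stands in for a general Riemannian manifold; the step sizes and the exponential/logarithm maps of the circle are the only ingredients:

```python
import math
import random

def log_map(p, q):
    # Tangent vector at p pointing toward q on the circle S^1
    # (angles in radians), i.e. the angle difference wrapped to (-pi, pi].
    return (q - p + math.pi) % (2 * math.pi) - math.pi

def stochastic_karcher_mean(samples, steps=2000, seed=0):
    # Stochastic gradient descent for the Riemannian mean of points on
    # the circle: at each step, move toward one randomly drawn sample
    # along the geodesic, with decreasing step sizes gamma_k = 1/(k+2).
    rng = random.Random(seed)
    p = samples[0]
    for k in range(steps):
        x = rng.choice(samples)
        gamma = 1.0 / (k + 2)
        p = (p + gamma * log_map(p, x)) % (2 * math.pi)
    return p
```

On a general manifold, the wrapped angle difference would be replaced by the Riemannian logarithm map and the additive update by the exponential map; the structure of the iteration is unchanged.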

New stochastic algorithms of Robbins-Monro type have also been proposed to estimate efficiently the unknown parameters of deformation models. These estimation procedures have been applied to real ECG data to detect cardiac arrhythmias.
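The Robbins-Monro scheme itself can be sketched in a few lines. The example below is purely illustrative (it is not the deformation-model estimator, and the gain constant is an arbitrary choice): it finds a root of a function known only through noisy evaluations, here the median of a Gaussian distribution:

```python
import random

def robbins_monro(noisy_h, theta0, steps=20000, gain=5.0, seed=0):
    # Robbins-Monro stochastic approximation: solve E[H(theta, X)] = 0
    # using only noisy evaluations of H, with decreasing step sizes
    # gamma_k = gain / k (summable squares, non-summable sum).
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, steps + 1):
        theta -= (gain / k) * noisy_h(theta, rng)
    return theta

# Illustrative target: the median of N(1, 2^2), i.e. the root of
# E[ 1{X <= theta} - 1/2 ] = 0, whose exact value is theta* = 1.
def h(theta, rng):
    x = rng.gauss(1.0, 2.0)
    return (1.0 if x <= theta else 0.0) - 0.5
```

The same template applies whenever the estimating equation is an expectation that can only be sampled, which is the situation of the deformation-model parameters mentioned above.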

Proximal methods have been extremely successful in image processing, providing efficient algorithms to compute solutions of the problems considered. A major theme of the team is the study of the convergence of such algorithms, their speed, and their robustness to errors.
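The basic building block of these methods is the proximity operator. For the \ell^1 norm it has a closed form, the soft-thresholding operator, which appears inside ISTA and FISTA; a minimal implementation:

```python
import numpy as np

def soft_threshold(v, t):
    # prox_{t||.||_1}(v) = argmin_x t||x||_1 + 0.5||x - v||_2^2,
    # computed componentwise: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Entries smaller than the threshold t are set exactly to zero, which is why proximal iterations built on this operator produce sparse solutions.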

Convergence of FISTA

In 2015, Charles Dossal, together with Antonin Chambolle, proved the convergence of the iterates of the FISTA algorithm (Fast Iterative Shrinkage-Thresholding Algorithm), which has been extremely popular since its introduction by Beck and Teboulle in 2009 because it is extremely efficient in practice. The convergence of the iterates had remained an open problem in the optimization, compressed sensing, image processing, and machine learning communities until Charles Dossal's work.
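For reference, a minimal sketch of FISTA applied to an \ell^1-regularized least-squares problem, using the original Beck-Teboulle momentum sequence (the Chambolle-Dossal convergence result concerns a variant of this momentum with over-relaxation parameter a > 2; the version below is the standard one):

```python
import numpy as np

def fista(A, y, lam, steps=500):
    # FISTA for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1.
    # Gradient step of size 1/L (L = Lipschitz constant of the gradient),
    # soft-thresholding, then extrapolation with the momentum sequence t_k.
    L = np.linalg.norm(A, 2) ** 2          # spectral norm squared
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(steps):
        grad = A.T @ (A @ z - y)
        v = z - grad / L
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The extrapolation step is what accelerates the objective decay from O(1/k) for plain proximal gradient descent to O(1/k^2), and it is precisely this momentum that made the convergence of the iterates delicate to establish.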

Relaxations of non-convex functionals

We study different relaxations of non-convex functionals arising in image processing. Some problems, such as image segmentation, can indeed be written as the minimization of a functional, whose minimizer represents the segmentation.

Different methods have been proposed to find local or global minima of the non-convex functional of the two-phase piecewise-constant Mumford-Shah model. With a convex relaxation of this model, a global minimum of the non-convex functional can be found. We present and compare some of these methods, and we propose a new narrow-band model that finds local minima while relying on robust convex optimization algorithms. We then build a convex relaxation of a two-phase segmentation model that compares two given histograms with those of the two segmented regions.

We also study relaxations of high-dimensional multi-label problems such as optical flow computation. We propose a convex relaxation together with a new iterative algorithm with exact projections, and a new algorithm for a relaxation that is convex in each variable but not globally convex. We study how to construct a solution of the original non-convex problem from a solution of the relaxed one, and we compare existing methods with new ones.

Study of sparse methods for tomography

In tomography with a limited number of angles, the resulting inverse problem is ill-conditioned, and the classic analytic and algebraic algorithms produce many artifacts. Optimization methods address this problem through regularization. In my work, I assume that the images are sparse in a given representation; a sparse image is an image with few non-zero components. To promote this sparsity, I regularize the problem with an \ell^1 norm and solve one of the two equivalent problems below.

(1)   \begin{equation*} \underset{x\in \mathbb{R}^n}{\text{min}}\ \|x\|_1 \text{ s.t. } \|\mathbf{A}x-y\|_2\le \delta \end{equation*}

(2)   \begin{equation*} \underset{x\in \mathbb{R}^n}{\text{min}}\ \frac{1}{2}\|\mathbf{A}x-y\|_2^2 + \lambda \|x\|_1 \end{equation*}
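Problem (2) can be solved by the forward-backward (ISTA) iteration: a gradient step on the quadratic data-fidelity term followed by soft-thresholding for the \ell^1 term. The sketch below is illustrative only (the measurement matrix, sparsity level, and \lambda are arbitrary choices, not the tomography setting):

```python
import numpy as np

def ista(A, y, lam, steps=5000):
    # Forward-backward splitting for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1:
    # explicit gradient step on the smooth term, implicit (proximal)
    # soft-thresholding step on the l1 term.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        v = x - A.T @ (A @ x - y) / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    return x

# Toy underdetermined system: a 2-sparse vector x0 observed through
# a random Gaussian matrix A (20 measurements, 40 unknowns).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x0 = np.zeros(40)
x0[[3, 17]] = [2.0, -1.5]
y = A @ x0
x_hat = ista(A, y, lam=0.1)
```

On this noiseless toy problem the minimizer of (2) recovers the support of x0 exactly, with a small bias on the non-zero entries due to the \ell^1 penalty.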

My work assesses the theoretical limits of these problems while staying as close as possible to realistic settings. Recent studies [3] have highlighted sufficient conditions ensuring the reconstruction of the image by \ell^1 minimization, together with an error bound for this reconstruction [1]. These conditions require the existence of a particular vector called a dual certificate. Depending on the application, it may be possible to compute a candidate that is a dual certificate under suitable conditions [2,4].

This work primarily covers the construction of such candidates. I propose a greedy approach to compute a candidate that is a dual certificate for almost all recoverable images and that minimizes the theoretical reconstruction error.
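A classic candidate of the kind considered in [2] is the least-squares construction below; this is a generic sketch (the matrix, support, and sign pattern are illustrative), not the greedy construction proposed in this work:

```python
import numpy as np

def fuchs_candidate(A, support, signs):
    # Least-squares candidate eta = A^T A_I (A_I^T A_I)^{-1} s, where
    # A_I gathers the columns of A on the support and s is the sign
    # pattern of the sparse vector there. By construction eta equals s
    # on the support; if |eta_j| < 1 for every j off the support, eta is
    # a dual certificate and exact l1 recovery is guaranteed (Fuchs [2]).
    AI = A[:, support]
    u = AI @ np.linalg.solve(AI.T @ AI, signs)
    return A.T @ u

# Illustrative instance: random Gaussian measurements, support {0, 1}.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
eta = fuchs_candidate(A, [0, 1], np.array([1.0, -1.0]))
```

Checking the off-support entries of eta against 1 is then a concrete, computable test of the recovery conditions discussed above.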

[1] Charles Dossal and Remi Tesson. Consistency of l1 recovery from noisy deterministic measurements. Applied and Computational Harmonic Analysis, 2013.

[2] Jean-Jacques Fuchs. Recovery of exact sparse representations in the presence of bounded noise. IEEE Transactions on Information Theory, 51(10):3601–3608, 2005.

[3] Markus Grasmair, Otmar Scherzer, and Markus Haltmeier. Necessary and sufficient conditions for linear convergence of l1-regularization. Communications on Pure and Applied Mathematics, 64(2):161–182, 2011.

[4] Samuel Vaiter, Gabriel Peyré, Charles Dossal, and Jalal Fadili. Robust sparse analysis regularization. 2011.