Convex Iteration

From Wikimization

Convex iteration is a method for constraining rank or cardinality in an otherwise convex optimization problem. The rank or cardinality constraint is replaced by a weighted linear regularization term added to the objective, and two convex problems are then iterated until convergence where, ideally, a solution to the original problem is found.
==constraining cardinality of signed variable==
Now consider a feasibility problem equivalent to the classical problem from linear algebra <math>Ax=b</math>, but with an upper bound <math>k</math> on cardinality <math>\|x\|_0</math>, for vector <math>b\in\mathcal{R}(A)</math>:
<math>\begin{array}{rl}\text{find}&x\in\mathbb{R}^n\\
\text{subject to}&Ax=b\\
&\|x\|_0\leq k
\end{array}~~~~~~~~~~(1)</math>
where <math>\|x\|_0\leq k</math> means vector <math>x</math> has at most <math>k</math> nonzero entries;
such a vector is presumed to exist in the feasible set. <br>Convex iteration works with a nonnegative variable;
absolute value <math>|x|</math> is therefore needed.
We propose that problem (1) can be equivalently written
<math>\begin{array}{cl}\text{minimize}_{x\in\mathbb{R}^n,~y\in\mathbb{R}^n}&\langle|x|\,,\,y\rangle\\
\text{subject to}&Ax=b\\
&0\preceq y\preceq\mathbf{1}\\
&y^T\mathbf{1}=n-k
\end{array}</math>
which moves the cardinality constraint to the objective.
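One way to see the equivalence: for fixed <math>x</math>, the minimum of the objective over feasible <math>y</math> is attained at a binary vertex having exactly <math>n-k</math> ones, placed on the smallest entries of <math>|x|</math>; hence

<math>\min_{0\preceq y\preceq\mathbf{1},~y^T\mathbf{1}=n-k}\langle|x|\,,\,y\rangle~=\sum_{i=k+1}^n|x|_{[i]}</math>

where <math>|x|_{[i]}</math> denotes the <math>i^{\rm th}</math> largest entry of <math>|x|</math>. This sum of the <math>n-k</math> smallest entries vanishes exactly when <math>\|x\|_0\leq k</math>.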
To express this nonconvex problem as a convex iteration, we separate it into halves:
for <math>\varepsilon</math> a relatively small positive constant,
<math>\begin{array}{rl}\text{minimize}_{x\in\mathbb{R}^n,~t\in\mathbb{R}^n}&\langle t\,,\,y+\varepsilon\mathbf{1}\rangle\\
\text{subject to}&Ax=b\\
&-t\preceq x\preceq t
\end{array}~~~~~~~~~~~~~~(2)</math>
by iterating with
<math>\begin{array}{cl}\text{minimize}_{y\in\mathbb{R}^n}&\langle t^\star,\,y+\varepsilon\mathbf{1}\rangle\\
\text{subject to}&0\preceq y\preceq\mathbf{1}\\
&y^T\mathbf{1}=n-k
\end{array}~~~~~~~~~~~~~~~~~~~(3)</math>
to find direction vector <math>y</math>.
The term <math>\langle t\,,\varepsilon\mathbf{1}\rangle</math> in (2) is necessary to determine absolute value
<math>|x|=t^\star</math> because vector <math>y</math> can take zero values in its entries:
with <math>\varepsilon=0</math>, any entry <math>t_i</math> receiving zero weight would be left unconstrained from above by minimization;
a good estimate of <math>|x|</math> is required for (3).
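Problem (3) is a linear program whose solution can be written down directly: the <math>\varepsilon</math> term is constant in <math>y</math>, so an optimal <math>y</math> places ones on the <math>n-k</math> smallest entries of <math>t^\star</math> and zeros elsewhere. A minimal sketch, assuming NumPy (the function name is illustrative):

<pre>
import numpy as np

def direction_vector(t_star, k):
    # Optimal y for problem (3), up to ties: ones on the
    # n-k smallest entries of t_star, zeros elsewhere.
    # The <t*, eps*1> term is constant in y and is ignored.
    n = t_star.size
    y = np.zeros(n)
    y[np.argsort(t_star)[:n - k]] = 1.0
    return y
</pre>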
By initializing direction vector <math>y</math> to <math>(1-\varepsilon)\mathbf{1}</math>,
so that <math>y+\varepsilon\mathbf{1}=\mathbf{1}</math>,
the first iteration of problem (2) is a 1-norm problem; ''i.e.'',
<math>\begin{array}{ccc}
\begin{array}{rl}\text{minimize}_{x\in\mathbb{R}^n,~t\in\mathbb{R}^n}&\langle t\,,\mathbf{1}\rangle\\
\text{subject to}&Ax=b\\
&-t\preceq x\preceq t
\end{array}
&\equiv&~
\begin{array}{cl}\text{minimize}_{x\in\mathbb{R}^n}&\|x\|_1\\
\text{subject to}&Ax=b
\end{array}
\end{array}~~~~~~~~~~(4)</math>
Subsequent iterations of problem (2), engaging cardinality term <math>\langle t\,,\,y\rangle</math>,
can be interpreted as corrections to this 1-norm problem leading to a 0-norm solution.
Iterating (2) and (3) always converges to a locally optimal solution by virtue of
a monotonically nonincreasing objective sequence.
There can be no proof of global optimality.
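
The complete iteration is straightforward to prototype. Below is a minimal sketch, assuming NumPy and the CVXPY modeling package to solve (2), with the closed-form direction vector for (3); the function name, iteration cap, and tolerance are illustrative choices, not part of the original:

<pre>
import numpy as np
import cvxpy as cp

def convex_iteration(A, b, k, eps=1e-3, max_iters=100, tol=1e-9):
    # Seek x with A x = b and at most k nonzero entries, per problem (1).
    n = A.shape[1]
    y = (1 - eps) * np.ones(n)   # first pass of (2) is then the 1-norm problem (4)
    x = cp.Variable(n)
    t = cp.Variable(n)
    for _ in range(max_iters):
        # problem (2): weighted 1-norm minimization over {x : A x = b}
        prob = cp.Problem(cp.Minimize(t @ (y + eps)),
                          [A @ x == b, -t <= x, x <= t])
        prob.solve()
        t_star = t.value         # t* = |x| at optimality
        # problem (3): direction vector, ones on the n-k smallest entries of t*
        y = np.zeros(n)
        y[np.argsort(t_star)[:n - k]] = 1.0
        # <t*, y> is the sum of the n-k smallest |x_i|;
        # it vanishes when the cardinality constraint is met
        if float(t_star @ y) < tol:
            break
    return x.value
</pre>

Per the preceding paragraph, the objective sequence <math>\langle t^\star,y\rangle</math> is monotonically nonincreasing, but a zero final value, hence a solution of (1), is not guaranteed.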
