Linear matrix inequality

From Wikimization


In convex optimization, a linear matrix inequality (LMI) is an expression of the form

LaTeX: LMI(y):=A_0+y_1A_1+y_2A_2+\dots+y_m A_m\succeq0\,

where

  • LaTeX: y=[y_i\,,~i\!=\!1\dots m] is a real vector,
  • LaTeX: A_0\,, A_1\,, A_2\,,\dots\,A_m are symmetric matrices in LaTeX: \mathbb{S}^n, the subspace of LaTeX: n\times n symmetric matrices,
  • LaTeX: B\succeq0 is a generalized inequality meaning that LaTeX: B is a positive semidefinite matrix, i.e., a member of the positive semidefinite cone LaTeX: \mathbb{S}_+ in the subspace of symmetric matrices LaTeX: \mathbb{S}.

This linear matrix inequality specifies a convex constraint on y.
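
As a minimal numerical illustration (not part of the original article), the following NumPy sketch evaluates LaTeX: LMI(y) for small made-up symmetric matrices and tests positive semidefiniteness via the smallest eigenvalue; the matrices, the vector y, and the tolerance are all assumptions chosen for illustration.

<pre>
# Minimal sketch: build LMI(y) = A_0 + y_1 A_1 + ... + y_m A_m from hypothetical
# symmetric matrices and test whether it is positive semidefinite.
import numpy as np

def lmi(y, A):
    """A[0] + y[0]*A[1] + ... + y[m-1]*A[m] for symmetric matrices A[j]."""
    return A[0] + sum(yj * Aj for yj, Aj in zip(y, A[1:]))

def is_psd(B, tol=1e-9):
    """Numerical test for B >= 0: smallest eigenvalue nonnegative up to tol."""
    return np.linalg.eigvalsh((B + B.T) / 2).min() >= -tol

# Example data, assumed only for illustration: n = 2, m = 2.
A = [np.eye(2),                                 # A_0
     np.array([[1.0, 0.0], [0.0, -1.0]]),       # A_1
     np.array([[0.0, 1.0], [1.0, 0.0]])]        # A_2
y = np.array([0.5, 0.2])

print(is_psd(lmi(y, A)))                        # True: this y satisfies LMI(y) >= 0
</pre>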

Convexity of the LMI constraint

LaTeX: LMI(y)\succeq 0 is a convex constraint on y because it is equivalent to membership of y in a dual (convex) cone, as we now explain (Dattorro, Example 2.13.5.1.1: http://meboo.convexoptimization.com/BOOK/convexgeometry.pdf).

Consider a peculiar vertex-description for a closed convex cone defined over the positive semidefinite cone (instead of the more common nonnegative orthant, LaTeX: x\succeq0): for LaTeX: X\in\mathbb{S}^n, given LaTeX: A_j\in\mathbb{S}^n, LaTeX: j=1\ldots m,

LaTeX: \begin{array}{ll}\mathcal{K}\!\!&=\left\{\left[\begin{array}{c}\langle A_1\,,\,X\rangle\\\vdots\\\langle A_m\,,\,X\rangle\end{array}\right]~|~X\succeq0\right\}\subseteq\mathbb{R}^m\\\\&=\left\{\left[\begin{array}{c}\textrm{svec}(A_1)^T\\\vdots\\\textrm{svec}(A_m)^T\end{array}\right]\textrm{svec}X~|~X\succeq0\right\}\\\\&:=\{A\,\textrm{svec}X~|~X\succeq0\}\end{array}

where

  • LaTeX: A\in\mathbb{R}^{m\times n(n+1)/2},
  • symmetric vectorization svec is a stacking of columns defined in (Dattorro, ch.2.2.2.1: http://meboo.convexoptimization.com/BOOK/convexgeometry.pdf),
  • LaTeX: A_0=\mathbf{0} is assumed without loss of generality.

LaTeX: \mathcal{K} is a convex cone because

LaTeX: A\,\textrm{svec}X_{\rm p_1},~A\,\textrm{svec}X_{\rm p_2}\in\mathcal{K}~\Rightarrow~A(\zeta\,\textrm{svec}X_{\rm p_1}+\xi\,\textrm{svec}X_{\rm p_2})\in\mathcal{K}~~\textrm{for all}~\zeta,\xi\geq0

since a nonnegatively weighted sum of positive semidefinite matrices must be positive semidefinite.

Now consider the (closed convex) dual cone

LaTeX: \begin{array}{rl}\mathcal{K}^*\!\!&=\left\{y~|~\langle z\,,\,y\rangle\geq0~~\textrm{for all}~z\in\mathcal{K}\right\}\subseteq\mathbb{R}^m\\&=\left\{y~|~\langle z\,,\,y\rangle\geq0~~\textrm{for all}~z=A\,\textrm{svec}X\,,~X\succeq0\right\}\\&=\left\{y~|~\langle A\,\textrm{svec}X\,,\,y\rangle\geq0~~\textrm{for all}~X\succeq0\right\}\\&=\left\{y~|~\langle\textrm{svec}X\,,\,A^Ty\rangle\geq0~~\textrm{for all}~X\succeq0\right\}\\&=\left\{y~|~\textrm{svec}^{-1}(A^Ty)\succeq0\right\}\end{array}

where the last equality follows from Fejer's dual generalized inequalities for the positive semidefinite cone:

  • LaTeX: Y\succeq0~\Leftrightarrow~\langle Y\,,\,X\rangle\geq0~~\textrm{for all}~X\succeq0

This leads directly to an equally peculiar halfspace-description

LaTeX: \mathcal{K}^*=\{y\in\mathbb{R}^m~|~\sum\limits_{j=1}^m y_jA_j\succeq0\}

The summation inequality with respect to the positive semidefinite cone is known as a linear matrix inequality.
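
To make the preceding derivation concrete, here is a small NumPy sketch, not from the article, that builds svec under one assumed convention (off-diagonal entries scaled by LaTeX: \sqrt{2} so inner products are preserved; the entry ordering is immaterial here and may differ from the definition cited above), stacks LaTeX: \textrm{svec}(A_j)^T into the matrix LaTeX: A, and verifies numerically that LaTeX: \textrm{svec}^{-1}(A^Ty)=\sum_j y_jA_j, so membership of y in LaTeX: \mathcal{K}^* reduces to a positive semidefiniteness test; the matrices and y are made up.

<pre>
# Sketch under an assumed svec convention: off-diagonal entries scaled by
# sqrt(2) so that <X, Y> = svec(X).T @ svec(Y); the entry ordering is one
# fixed (but arbitrary) ordering of the lower triangle.
import numpy as np

def svec(X):
    n = X.shape[0]
    idx = np.tril_indices(n)
    scale = np.where(idx[0] == idx[1], 1.0, np.sqrt(2.0))
    return scale * X[idx]

def svec_inv(v, n):
    """Inverse of svec, returning an n x n symmetric matrix."""
    X = np.zeros((n, n))
    idx = np.tril_indices(n)
    X[idx] = np.where(idx[0] == idx[1], 1.0, 1.0 / np.sqrt(2.0)) * v
    return X + np.tril(X, -1).T

# Hypothetical data: two symmetric 2x2 matrices and a candidate y.
A_list = [np.array([[2.0, 1.0], [1.0, 2.0]]),
          np.array([[1.0, 0.0], [0.0, 3.0]])]
A = np.vstack([svec(Aj) for Aj in A_list])       # A in R^{m x n(n+1)/2}
y = np.array([1.0, 0.5])

lhs = svec_inv(A.T @ y, 2)                       # svec^{-1}(A^T y)
rhs = sum(yj * Aj for yj, Aj in zip(y, A_list))  # sum_j y_j A_j
print(np.allclose(lhs, rhs))                     # True: the two agree
print(np.linalg.eigvalsh(rhs).min() >= -1e-9)    # True: y is a member of K*
</pre>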

LMI Geometry

Although matrix LaTeX: \,A\, is finite-dimensional, LaTeX: \mathcal{K} is generally not a polyhedral cone (unless LaTeX: \,m\, equals 1 or 2) simply because LaTeX: \,X\!\in\mathbb{S}_+^n\,.

Provided the LaTeX: A_j matrices are linearly independent, the relative interior of LaTeX: \mathcal{K} coincides with its interior,

LaTeX: \textrm{rel\,int}\mathcal{K}=\textrm{int}\mathcal{K}

meaning the cone interior is nonempty, which in turn implies that the dual cone is pointed (Dattorro, ch.2).

If, on the other hand, matrix LaTeX: \,A\, has no nullspace, then LaTeX: \,A\,\textrm{svec}X\, is an isomorphism in LaTeX: \,X\, between the positive semidefinite cone LaTeX: \mathbb{S}_+^n and the range LaTeX: \,\mathcal{R}(A)\, of matrix LaTeX: \,A.

In that case, convex cone LaTeX: \,\mathcal{K}\, has relative interior

LaTeX: \textrm{rel\,int}\mathcal{K}=\{A\,\textrm{svec}X~|~X\!\succ_{\!}0_{}\}

and boundary

LaTeX: \textrm{rel}\,\partial^{}\mathcal{K}=\{A\,\textrm{svec}X~|~X\!\succeq_{\!}0\,,~X\!\nsucc_{\!}0_{}\}
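
The no-nullspace condition can be tested directly on the stacked matrix LaTeX: A. The sketch below, with made-up matrices and the same assumed svec convention as the previous sketch, checks that LaTeX: A has full column rank LaTeX: n(n+1)/2, so that LaTeX: X\mapsto A\,\textrm{svec}X is injective on LaTeX: \mathbb{S}^n.

<pre>
# Sketch with assumed data: A has no nullspace exactly when its columns are
# linearly independent, i.e. rank(A) = n(n+1)/2 (which requires m >= n(n+1)/2).
import numpy as np

def svec(X):
    idx = np.tril_indices(X.shape[0])
    return np.where(idx[0] == idx[1], 1.0, np.sqrt(2.0)) * X[idx]

A_list = [np.eye(2),                              # hypothetical A_1
          np.array([[0.0, 1.0], [1.0, 0.0]]),     # hypothetical A_2
          np.array([[1.0, 0.0], [0.0, -1.0]])]    # hypothetical A_3
A = np.vstack([svec(Aj) for Aj in A_list])        # here m = 3 and n(n+1)/2 = 3

print(np.linalg.matrix_rank(A) == A.shape[1])     # True: A has no nullspace
</pre>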


When the LaTeX: A_j matrices are linearly independent, the function LaTeX: \,g(y):=\sum y_jA_j\, on LaTeX: \mathbb{R}^m is a linear bijection onto its range.

The inverse image of the positive semidefinite cone under LaTeX: \,g\, must therefore have dimension LaTeX: m.

In that circumstance, the dual cone interior is nonempty

LaTeX: \textrm{int}\mathcal{K}^*=\{y\!\in_{}\!\mathbb{R}^m~|\,\sum\limits_{j=1}^my_jA_j\succ_{}0_{}\}

having boundary

LaTeX: \partial^{}\mathcal{K}^*=\{y\!\in_{}\!\mathbb{R}^m~|\,\sum\limits_{j=1}^my_jA_j\succeq_{}0\,,~\sum\limits_{j=1}^my_jA_j\nsucc_{}0_{}\}
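
The interior and boundary descriptions above can be checked numerically on a small example. This sketch uses made-up 2x2 matrices: linear independence of the LaTeX: A_j is verified through the rank of their vectorizations, and two candidate vectors y are classified as lying in the interior of LaTeX: \mathcal{K}^* (the sum is positive definite) or on its boundary (positive semidefinite but singular).

<pre>
# Sketch with assumed 2x2 data: verify that the A_j are linearly independent,
# then classify points of K* as interior or boundary via the eigenvalues of
# sum_j y_j A_j.
import numpy as np

A_list = [np.eye(2),                              # hypothetical A_1
          np.array([[1.0, 0.0], [0.0, -1.0]])]    # hypothetical A_2
m = len(A_list)

V = np.vstack([Aj.reshape(-1) for Aj in A_list])
print(np.linalg.matrix_rank(V) == m)              # True: A_j linearly independent

def min_eig(y):
    """Smallest eigenvalue of sum_j y_j A_j."""
    return np.linalg.eigvalsh(sum(yj * Aj for yj, Aj in zip(y, A_list))).min()

print(min_eig([1.0, 0.25]) > 0)                   # True: y in int K*
print(np.isclose(min_eig([1.0, 1.0]), 0.0))       # True: y on the boundary of K*
</pre>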

Applications

There are efficient numerical methods to determine whether an LMI is feasible (i.e., whether there exists a vector LaTeX: y such that LaTeX: LMI(y)\succeq0 ) and to solve convex optimization problems with LMI constraints. Many optimization problems in control theory, system identification, and signal processing can be formulated using LMIs. The prototypical primal and dual semidefinite programs minimize a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
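
As one concrete, hedged illustration of such a computation, the sketch below assumes the CVXPY modeling package, which is not mentioned in the article; it minimizes a linear function of y subject to an LMI constraint, with the matrices and cost vector invented purely for illustration.

<pre>
# Hedged sketch of a small semidefinite program with an LMI constraint,
# assuming CVXPY (and an SDP-capable solver) is installed; all data is made up.
import cvxpy as cp
import numpy as np

A0 = np.array([[2.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
c = np.array([1.0, 1.0])

y = cp.Variable(2)
lmi = A0 + y[0] * A1 + y[1] * A2          # LMI(y) as a matrix expression
problem = cp.Problem(cp.Minimize(c @ y), [lmi >> 0])
problem.solve()                           # reports infeasibility if no such y exists

print(problem.status, y.value)
</pre>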
