Linear matrix inequality

From Wikimization

Current revision (15:08, 21 September 2016)
In convex optimization, a '''linear matrix inequality (LMI)''' is an expression of the form

: <math>LMI(y):=A_0+y_1A_1+y_2A_2+\ldots+y_m A_m\succeq0\,</math>

where

* <math>y=[y_i\,,~i\!=\!1\ldots m]</math> is a real vector,
* <math>A_0\,, A_1\,, A_2\,,\ldots\,A_m</math> are symmetric matrices in the subspace of <math>n\times n</math> symmetric matrices <math>\mathbb{S}^n</math>,
* <math>B\succeq0</math> is a generalized inequality meaning <math>B</math> is a positive semidefinite matrix belonging to the positive semidefinite cone <math>\mathbb{S}_+</math> in the subspace of symmetric matrices <math>\mathbb{S}</math>.

This linear matrix inequality specifies a convex constraint on ''y''.
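As a quick numerical sketch (a toy illustration in Python; the matrices below are examples chosen here, not taken from the article): testing whether a given <math>y</math> satisfies the LMI reduces to a positive-semidefiniteness check, and the feasible set is convex.

```python
# Illustration: the LMI constraint A0 + y1*A1 + y2*A2 >= 0 (PSD) for 2x2 symmetric
# matrices, checked via the 2x2 criterion: trace >= 0 and determinant >= 0.
# The matrices A0, A1, A2 are arbitrary examples for demonstration only.

def is_psd_2x2(M, tol=1e-12):
    """A symmetric 2x2 matrix is PSD iff its trace and determinant are nonnegative."""
    (a, b), (_, c) = M
    return a + c >= -tol and a * c - b * b >= -tol

def lmi(y, A0, As):
    """Evaluate LMI(y) = A0 + sum_j y_j * A_j entrywise."""
    M = [row[:] for row in A0]
    for yj, Aj in zip(y, As):
        for i in range(2):
            for k in range(2):
                M[i][k] += yj * Aj[i][k]
    return M

A0 = [[1.0, 0.0], [0.0, 1.0]]          # identity
As = [[[1.0, 0.0], [0.0, -1.0]],       # example A1
      [[0.0, 1.0], [1.0, 0.0]]]        # example A2

y_feas = [0.5, 0.0]     # LMI(y) = diag(1.5, 0.5), PSD
y_infeas = [2.0, 0.0]   # LMI(y) = diag(3, -1), not PSD

print(is_psd_2x2(lmi(y_feas, A0, As)))    # True
print(is_psd_2x2(lmi(y_infeas, A0, As)))  # False

# Convexity: the midpoint of two feasible points stays feasible.
y_mid = [0.5 * (u + v) for u, v in zip(y_feas, [-0.5, 0.0])]
print(is_psd_2x2(lmi(y_mid, A0, As)))     # True
```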


== Convexity of the LMI constraint ==

<math>LMI(y)\succeq 0</math> is a convex constraint on ''y'', which means membership to a dual (convex) cone, as we now explain ([http://meboo.convexoptimization.com/Meboo.html Dattorro, Example 2.13.5.1.1]).

Consider a peculiar vertex-description for a convex cone defined over the positive semidefinite cone
(instead of the more common nonnegative orthant, <math>x\succeq0</math>):
for <math>X\!\in\mathbb{S}^n</math> given <math>\,A_j\!\in\mathbb{S}^n,~j\!=\!1\ldots m</math>

: <math>\begin{array}{ll}\mathcal{K}
\!\!&=\left\{\left[\begin{array}{c}\langle A_1\,,\,X\rangle\\\vdots\\\langle A_m\,,\,X\rangle\end{array}\right]~|~X\succeq0\right\}\subseteq\mathbb{R}^m\\\\
&=\left\{\left[\begin{array}{c}\textrm{svec}(A_1)^T\\\vdots\\\textrm{svec}(A_m)^T\end{array}\right]\textrm{svec}\,X~|~X\succeq0\right\}\\\\
&:=\{A\,\textrm{svec}\,X~|~X\succeq0\}
\end{array}</math>

where

* <math>A\in\mathbb{R}^{m\times n(n+1)/2}</math>,
* symmetric vectorization svec is a stacking of columns defined in ([http://meboo.convexoptimization.com/Meboo.html Dattorro, ch.2.2.2.1]),
* <math>A_0=\mathbf{0}</math> is assumed without loss of generality.
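A small sketch of symmetric vectorization for the 2x2 case. It assumes the common convention that off-diagonal entries are scaled by <math>\sqrt{2}</math>, which makes svec inner-product preserving; Dattorro's precise definition is in ch.2.2.2.1, so treat this as an illustration rather than his exact construction.

```python
import math

# svec for a 2x2 symmetric matrix: stack the lower-triangular columns,
# scaling off-diagonal entries by sqrt(2) so that <A, X> = svec(A) . svec(X).
# (Assumed convention for illustration; see Dattorro ch.2.2.2.1 for the definition.)

def svec(M):
    (a, b), (_, c) = M
    return [a, math.sqrt(2.0) * b, c]

def inner(A, X):
    """Standard matrix inner product <A, X> = trace(A^T X)."""
    return sum(A[i][j] * X[i][j] for i in range(2) for j in range(2))

A = [[1.0, 2.0], [2.0, -3.0]]   # example symmetric matrices
X = [[4.0, 0.5], [0.5, 1.0]]

lhs = inner(A, X)
rhs = sum(u * v for u, v in zip(svec(A), svec(X)))
print(abs(lhs - rhs) < 1e-12)   # True: svec preserves the inner product
```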

<math>\mathcal{K}</math> is a convex cone because

: <math>A\,\textrm{svec}\,X_{p_1}\,,~A\,\textrm{svec}\,X_{p_2}\in\mathcal{K}~\Rightarrow~A(\zeta\,\textrm{svec}\,X_{p_1}+\xi\,\textrm{svec}\,X_{p_2})\in\mathcal{K}~~\textrm{for\,all}~\zeta,\xi\geq0</math>

since a nonnegatively weighted sum of positive semidefinite matrices must be positive semidefinite.
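This closing claim is easy to check numerically (a toy check in Python with example matrices chosen here):

```python
# Check that nonnegatively weighted sums of PSD matrices remain PSD, using the
# 2x2 criterion (trace >= 0 and determinant >= 0). X1, X2 are example PSD matrices.

def is_psd_2x2(M, tol=1e-12):
    (a, b), (_, c) = M
    return a + c >= -tol and a * c - b * b >= -tol

def comb(zeta, X1, xi, X2):
    """Nonnegatively weighted sum zeta*X1 + xi*X2."""
    return [[zeta * X1[i][j] + xi * X2[i][j] for j in range(2)] for i in range(2)]

X1 = [[2.0, 1.0], [1.0, 1.0]]    # PSD: det = 1
X2 = [[1.0, -1.0], [-1.0, 2.0]]  # PSD: det = 1
for zeta, xi in [(0.0, 0.0), (1.0, 0.0), (0.3, 2.5), (5.0, 5.0)]:
    assert is_psd_2x2(comb(zeta, X1, xi, X2))
print("all nonnegative combinations remained PSD")
```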

Now consider the (closed convex) dual cone:

: <math>\begin{array}{rl}\mathcal{K}^*
&=\left\{y~|~\langle z\,,\,y\rangle\geq0~~\textrm{for\,all}~z\in\mathcal{K}\right\}\subseteq\mathbb{R}^m\\
&=\left\{y~|~\langle z\,,\,y\rangle\geq0~~\textrm{for\,all}~z=A\,\textrm{svec}\,X\,,~X\succeq0\right\}\\
&=\left\{y~|~\langle A\,\textrm{svec}\,X\,,\,y\rangle\geq0~~\textrm{for\,all}~X\succeq0\right\}\\
&=\left\{y~|~\langle\textrm{svec}\,X\,,\,A^Ty\rangle\geq0~~\textrm{for\,all}~X\succeq0\right\}\\
&=\left\{y~|~\textrm{svec}^{-1}(A^Ty)\succeq0\right\}
\end{array}</math>

that follows from Fejér's dual generalized inequalities for the positive semidefinite cone:

* <math>Y\succeq0~\Leftrightarrow~\langle Y\,,\,X\rangle\geq0~~\textrm{for\,all}~X\succeq0</math>

This leads directly to an equally peculiar halfspace-description

: <math>\mathcal{K}^*=\{y\in\mathbb{R}^m~|~\sum\limits_{j=1}^m y_jA_j\succeq0\}</math>

The summation inequality with respect to the positive semidefinite cone is known as a linear matrix inequality.
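The halfspace-description can be probed numerically (a sketch with example matrices chosen here): when <math>\sum_j y_jA_j</math> is PSD, every <math>z=[\langle A_1,X\rangle,\ldots,\langle A_m,X\rangle]</math> with <math>X\succeq0</math> should satisfy <math>\langle z,y\rangle\geq0</math>, placing <math>y</math> in <math>\mathcal{K}^*</math>.

```python
import random

# Check the halfspace description of K*: choose y so that sum_j y_j*A_j is PSD,
# then verify <z, y> >= 0 for many z = [<A_1,X>, <A_2,X>] built from random PSD X.
# A_1, A_2, and y are examples for illustration.

def inner(A, X):
    return sum(A[i][j] * X[i][j] for i in range(2) for j in range(2))

def random_psd_2x2(rng):
    """B^T B is always PSD."""
    b = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    return [[sum(b[k][i] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

As = [[[1.0, 0.0], [0.0, 0.0]],
      [[0.0, 0.0], [0.0, 1.0]]]
y = [1.0, 2.0]          # y1*A1 + y2*A2 = diag(1, 2) is PSD, so y should lie in K*

rng = random.Random(0)
ok = all(sum(yj * inner(Aj, X) for yj, Aj in zip(y, As)) >= -1e-12
         for X in (random_psd_2x2(rng) for _ in range(1000)))
print(ok)   # True
```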

== LMI Geometry ==

Although matrix <math>\,A\,</math> is finite-dimensional, <math>\mathcal{K}</math> is generally not a polyhedral cone (unless <math>\,m\,</math> equals 1 or 2) simply because <math>\,X\!\in\mathbb{S}_+^n</math>.

The relative interior of <math>\mathcal{K}</math> may always be expressed

: <math>\textrm{rel\,int}\,\mathcal{K}=\{A\,\textrm{svec}\,X~|~X\succ0\}.</math>

Provided the <math>\,A_j</math> matrices are linearly independent, then

: <math>\textrm{rel\,int}\,\mathcal{K}=\textrm{int}\,\mathcal{K}</math>

meaning the interior of cone <math>\mathcal{K}</math> is nonempty, implying the dual cone <math>\mathcal{K}^*</math> is pointed ([http://meboo.convexoptimization.com/Meboo.html Dattorro, ch.2]).

If matrix <math>\,A\,</math> has no nullspace, then <math>\,A\,\textrm{svec}\,X\,</math> is an isomorphism in <math>\,X\,</math> between the positive semidefinite cone <math>\mathbb{S}_+^n</math> and range <math>\,\mathcal{R}(A)\,</math> of matrix <math>\,A</math>.

That is sufficient for convex cone <math>\,\mathcal{K}\,</math> to be closed, and necessary for it to have relative boundary

: <math>\textrm{rel}\,\partial\mathcal{K}=\{A\,\textrm{svec}\,X~|~X\succeq0\,,~X\!\not\succ0\}.</math>


The relative interior of the dual cone may always be expressed

: <math>\textrm{rel\,int}\,\mathcal{K}^*=\{y\in\mathbb{R}^m~|~\sum\limits_{j=1}^m y_jA_j\succ0\}.</math>

When the <math>A_j</math> matrices are linearly independent, function <math>\,g(y):=\sum y_jA_j\,</math> is a linear bijection on <math>\mathbb{R}^m</math>.

The inverse image of the positive semidefinite cone under <math>\,g(y)\,</math> must therefore have dimension equal to <math>\dim\!\left(\mathcal{R}(A^T)\cap\textrm{svec}\,\mathbb{S}_+^n\right)</math>

and relative boundary

: <math>\textrm{rel}\,\partial\mathcal{K}^*=\{y\in\mathbb{R}^m~|~\sum\limits_{j=1}^m y_jA_j\succeq0\,,~\sum\limits_{j=1}^m y_jA_j\not\succ0\}.</math>

When this dimension equals <math>\,m\,</math>, the dual cone interior is nonempty

: <math>\textrm{rel\,int}\,\mathcal{K}^*=\textrm{int}\,\mathcal{K}^*</math>

and the closure of convex cone <math>\mathcal{K}</math> is pointed.

== Applications ==

There are efficient numerical methods to determine whether an LMI is feasible (''i.e.'', whether there exists a vector <math>y</math> such that <math>LMI(y)\succeq0</math>), or to solve a convex optimization problem with LMI constraints. Many optimization problems in control theory, system identification, and signal processing can be formulated using LMIs. The prototypical primal and dual semidefinite programs are minimizations of a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
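As a toy illustration only (practical SDP solvers use interior-point methods, not a grid search): feasibility of a one-parameter LMI can be exhibited by scanning <math>y</math> and testing positive semidefiniteness at each point. The matrices below are examples chosen here.

```python
# Brute-force feasibility scan for LMI(y) = A0 + y*A1 >= 0 with one scalar
# parameter, testing PSD-ness of each 2x2 matrix on a grid. This is a teaching
# sketch; real solvers use interior-point methods (Nesterov-Nemirovskii).

def is_psd_2x2(M, tol=1e-12):
    (a, b), (_, c) = M
    return a + c >= -tol and a * c - b * b >= -tol

def lmi(y, A0, A1):
    return [[A0[i][j] + y * A1[i][j] for j in range(2)] for i in range(2)]

A0 = [[1.0, 0.0], [0.0, 1.0]]
A1 = [[0.0, 1.0], [1.0, 0.0]]   # LMI(y) = [[1, y], [y, 1]], PSD iff |y| <= 1

feasible = [k / 10.0 for k in range(-20, 21) if is_psd_2x2(lmi(k / 10.0, A0, A1))]
print(min(feasible), max(feasible))   # -1.0 1.0  (a convex feasible set: an interval)
```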

== External links ==

* S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, [http://www.stanford.edu/~boyd/lmibook Linear Matrix Inequalities in System and Control Theory]

* C. Scherer and S. Weiland, [http://w3.ele.tue.nl/nl/cs/education/courses/hyconlmi Course on Linear Matrix Inequalities in Control], Dutch Institute of Systems and Control (DISC).