Linear matrix inequality
From Wikimization
Revision as of 01:24, 9 November 2007
In convex optimization, a linear matrix inequality (LMI) is an expression of the form
: <math>LMI(y):=A_0+y_1A_1+y_2A_2+\dots+y_m A_m\succeq0\,</math>
where
- <math>y=[y_i\,,~i\!=\!1\dots m]</math> is a real vector,
- <math>A_0\,,A_1\,,A_2\,,\dots\,,A_m</math> are symmetric matrices in the subspace of <math>n\times n</math> symmetric matrices <math>\mathbb{S}^n</math>,
- <math>B\succeq0</math> is a generalized inequality meaning <math>B</math> is a positive semidefinite matrix belonging to the positive semidefinite cone <math>\mathbb{S}_+</math> in the subspace of symmetric matrices <math>\mathbb{S}</math>.
This linear matrix inequality specifies a convex constraint on y.
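As a quick numerical illustration (a sketch of our own, not part of the original article; NumPy and the example matrices are assumptions), checking whether a given y satisfies the LMI amounts to testing the smallest eigenvalue of the symmetric matrix LMI(y):

```python
import numpy as np

def lmi(y, A):
    """Evaluate LMI(y) = A[0] + y[0]*A[1] + ... + y[m-1]*A[m]."""
    return A[0] + sum(yj * Aj for yj, Aj in zip(y, A[1:]))

def is_psd(B, tol=1e-9):
    """B is positive semidefinite iff its smallest eigenvalue is nonnegative."""
    return bool(np.linalg.eigvalsh(B).min() >= -tol)

# Hypothetical example data: A0 = I, A1 = diag(1, -1), so
# LMI(y) = diag(1 + y1, 1 - y1) >= 0 exactly when |y1| <= 1.
A = [np.eye(2), np.diag([1.0, -1.0])]
print(is_psd(lmi([0.5], A)))   # True:  diag(1.5, 0.5)
print(is_psd(lmi([2.0], A)))   # False: diag(3.0, -1.0)
```

Note that the feasible set here is the interval [-1, 1] in y, a convex set, consistent with the LMI being a convex constraint.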
Convexity of the LMI constraint
<math>LMI(y)\succeq0</math> is a convex constraint on y, which means membership to a dual (convex) cone, as we now explain (Dattorro, Example 2.13.5.1.1):
Consider a peculiar vertex-description for a closed convex cone <math>\mathcal{K}</math> defined over the positive semidefinite cone <math>\mathbb{S}_+^n</math>
(instead of the more common nonnegative orthant, <math>\mathbb{R}_+^n</math>):
for <math>X\in\mathbb{S}^n</math>, given <math>A_j\in\mathbb{S}^n\,,~j\!=\!1\dots m</math>,
: <math>\mathcal{K}=\left\{[\langle A_1\,,X\rangle~\cdots~\langle A_m\,,X\rangle]^T \mid X\succeq0\right\}=\left\{A\,\mbox{svec}\,X \mid X\succeq0\right\}\subseteq\mathbb{R}^m</math>
where
- <math>A=[\mbox{svec}\,A_1~~\mbox{svec}\,A_2~\cdots~\mbox{svec}\,A_m]^T\in\mathbb{R}^{m\times n(n+1)/2}</math>,
- symmetric vectorization svec is a stacking of columns defined in (Dattorro ch.2),
- <math>A_0=0</math> is assumed without loss of generality.
<math>\mathcal{K}</math> is a convex cone because
: <math>A\,\mbox{svec}\,X_1\,,~A\,\mbox{svec}\,X_2\in\mathcal{K}~\Rightarrow~\zeta\,A\,\mbox{svec}\,X_1+\xi\,A\,\mbox{svec}\,X_2\in\mathcal{K}~~\forall\,\zeta\,,\xi\geq0</math>
since a nonnegatively weighted sum of positive semidefinite matrices must be positive semidefinite.
Now consider the (closed convex) dual cone:
: <math>\mathcal{K}^*=\{y \mid \langle z\,,y\rangle\geq0~~\forall\,z\in\mathcal{K}\}=\{y \mid \langle A\,\mbox{svec}\,X\,,y\rangle\geq0~~\forall\,X\succeq0\}=\{y \mid \langle\mbox{svec}\,X\,,A^Ty\rangle\geq0~~\forall\,X\succeq0\}=\{y \mid \mbox{svec}^{-1}(A^Ty)\succeq0\}</math>
that follows from Fejér's dual generalized inequalities for the positive semidefinite cone:
: <math>\langle Y,X\rangle\geq0~~\forall\,X\succeq0~~\Leftrightarrow~~Y\succeq0</math>
This leads directly to an equally peculiar halfspace-description
: <math>\mathcal{K}^*=\Big\{y\in\mathbb{R}^m ~\Big|~ \sum_{j=1}^m y_jA_j\succeq0\Big\}</math>
The summation inequality <math>\sum_{j=1}^m y_jA_j\succeq0</math> with respect to the positive semidefinite cone is known as a linear matrix inequality.
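The svec machinery above can be checked numerically. The following sketch (the helper implementations and random data are our own; the scaling convention is the standard one making svec preserve the trace inner product) verifies that <X, Y> = svec(X)ᵀ svec(Y) and that svec⁻¹(Aᵀy) reproduces the LMI matrix Σⱼ yⱼAⱼ:

```python
import numpy as np

def svec(X):
    """Symmetric vectorization: stack the lower-triangular part column-wise,
    scaling off-diagonal entries by sqrt(2) so trace(X @ Y) == svec(X) @ svec(Y)."""
    n = X.shape[0]
    return np.array([X[i, j] if i == j else np.sqrt(2.0) * X[i, j]
                     for j in range(n) for i in range(j, n)])

def svec_inv(v, n):
    """Inverse of svec, rebuilding the n-by-n symmetric matrix."""
    X = np.zeros((n, n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            X[i, j] = X[j, i] = v[k] if i == j else v[k] / np.sqrt(2.0)
            k += 1
    return X

rng = np.random.default_rng(0)
n, m = 3, 2
A_j = []
for _ in range(m):
    M = rng.standard_normal((n, n))
    A_j.append((M + M.T) / 2.0)          # random symmetric matrices
A = np.vstack([svec(Aj) for Aj in A_j])  # rows of A are svec(A_j)
y = rng.standard_normal(m)

# svec preserves the trace inner product: <X, Y> = svec(X).T svec(Y)
print(np.isclose(np.trace(A_j[0] @ A_j[1]), svec(A_j[0]) @ svec(A_j[1])))  # True

# svec^{-1}(A^T y) recovers the LMI matrix sum_j y_j A_j
lhs = svec_inv(A.T @ y, n)
rhs = y[0] * A_j[0] + y[1] * A_j[1]
print(np.allclose(lhs, rhs))  # True
```

The second check is exactly the identity used in the dual-cone derivation: since svec is linear, Aᵀy = Σⱼ yⱼ svec(Aⱼ), so its inverse image under svec is the LMI matrix.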
LMI Geometry
Although matrix <math>A\in\mathbb{R}^{m\times n(n+1)/2}</math> is finite-dimensional, <math>\mathcal{K}</math> is generally not a polyhedral cone (unless <math>m</math> equals 1 or 2) simply because <math>X\in\mathbb{S}_+^n</math>.
Provided the <math>A_j</math> matrices are linearly independent, then
: <math>\mbox{rel int}\,\mathcal{K}=\mbox{int}\,\mathcal{K}</math>
meaning the cone interior is nonempty, which implies the dual cone <math>\mathcal{K}^*</math> is pointed (Dattorro ch.2).
If matrix <math>A</math> has no nullspace, on the other hand, then the map <math>X\mapsto A\,\mbox{svec}\,X</math> is an isomorphism between the positive semidefinite cone <math>\mathbb{S}_+^n</math> and the range of matrix <math>A</math>.
In that case, convex cone <math>\mathcal{K}</math> has relative interior
: <math>\mbox{rel int}\,\mathcal{K}=\{A\,\mbox{svec}\,X \mid X\succ0\}</math>
and boundary
: <math>\partial\mathcal{K}=\{A\,\mbox{svec}\,X \mid X\succeq0\,,~X\not\succ0\}</math>
When the <math>A_j</math> matrices are linearly independent, the function <math>g(y):=\sum_{j=1}^m y_jA_j</math> on <math>\mathbb{R}^m</math> is a linear bijection onto its range.
The inverse image of the positive semidefinite cone under <math>g</math> must therefore have dimension <math>m</math>.
In that circumstance, the dual cone interior is nonempty
: <math>\mbox{int}\,\mathcal{K}^*=\Big\{y\in\mathbb{R}^m ~\Big|~ \sum_{j=1}^m y_jA_j\succ0\Big\}</math>
having boundary
: <math>\partial\mathcal{K}^*=\Big\{y\in\mathbb{R}^m ~\Big|~ \sum_{j=1}^m y_jA_j\succeq0\,,~\sum_{j=1}^m y_jA_j\not\succ0\Big\}</math>
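For intuition, the interior/boundary classification of the dual cone can be tested on a tiny hypothetical example (our own choice of matrices, not from the article): y lies in the interior when Σⱼ yⱼAⱼ is positive definite, and on the boundary when the sum is positive semidefinite but singular.

```python
import numpy as np

# Hypothetical data: A1 = I and A2 = diag(1, -1) are linearly independent,
# and y1*A1 + y2*A2 = diag(y1 + y2, y1 - y2).
A1 = np.eye(2)
A2 = np.diag([1.0, -1.0])

def classify(y, tol=1e-9):
    """Locate y relative to the dual cone via eigenvalues of sum_j y_j A_j."""
    lam_min = np.linalg.eigvalsh(y[0] * A1 + y[1] * A2).min()
    if lam_min > tol:
        return "interior"    # the sum is positive definite
    if lam_min >= -tol:
        return "boundary"    # PSD but singular
    return "outside"         # has a negative eigenvalue

print(classify([1.0, 0.0]))  # interior: the identity is positive definite
print(classify([1.0, 1.0]))  # boundary: diag(2, 0) is PSD but singular
print(classify([0.0, 1.0]))  # outside:  diag(1, -1) is indefinite
```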
Applications
There are efficient numerical methods to determine whether an LMI is feasible (i.e., whether there exists a vector <math>y</math> such that <math>LMI(y)\succeq0</math>), or to solve a convex optimization problem with LMI constraints. Many optimization problems in control theory, system identification, and signal processing can be formulated using LMIs. The prototypical primal and dual semidefinite programs minimize a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
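As a toy sketch of what an LMI feasibility test decides (a naive derivative-free search of our own, not one of the efficient methods the text refers to; the example matrices are hypothetical), note that the LMI is feasible exactly when the supremum over y of the smallest eigenvalue of LMI(y) is nonnegative:

```python
import numpy as np
from scipy.optimize import minimize

def lmi_feasible(A0, As, y0=None, tol=1e-6):
    """Naive feasibility test: A0 + sum_j y_j*As[j] >= 0 is feasible iff
    sup_y lambda_min(LMI(y)) >= 0, so maximize the smallest eigenvalue
    with a derivative-free search (real solvers use interior-point methods)."""
    y0 = np.zeros(len(As)) if y0 is None else np.asarray(y0, dtype=float)
    neg_min_eig = lambda y: -np.linalg.eigvalsh(
        A0 + sum(yj * Aj for yj, Aj in zip(y, As))).min()
    res = minimize(neg_min_eig, y0, method="Nelder-Mead")
    return bool(-res.fun >= -tol)

A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
# [[1, y], [y, 1]] is PSD for |y| <= 1, so this LMI is feasible:
print(lmi_feasible(np.eye(2), [A1], y0=[0.5]))       # True
# [[1, y], [y, -1]] has determinant -1 - y^2 < 0 for every y, so infeasible:
print(lmi_feasible(np.diag([1.0, -1.0]), [A1]))      # False
```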
Solving LMIs
A major breakthrough in convex optimization lies in the introduction of interior-point methods. These methods were developed in a series of papers and became of true interest in the context of LMI problems in the work of Yurii Nesterov and Arkadii Nemirovskii.
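To give a flavor of the barrier idea behind interior-point methods (a toy sketch, not Nesterov and Nemirovskii's actual algorithm; the step size and example data are our own assumptions), one can run gradient descent on the log-determinant barrier φ(y) = -log det LMI(y), whose minimizer over the strictly feasible set is its analytic center:

```python
import numpy as np

def analytic_center(A0, As, y0, step=0.1, iters=500):
    """Gradient descent on the barrier phi(y) = -log det(F(y)),
    where F(y) = A0 + sum_j y_j*As[j] and dphi/dy_j = -trace(F^{-1} A_j)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(iters):
        F = A0 + sum(yj * Aj for yj, Aj in zip(y, As))
        Finv = np.linalg.inv(F)
        grad = np.array([-np.trace(Finv @ Aj) for Aj in As])
        y = y - step * grad
    return y

A0 = np.eye(2)
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])  # F(y) = [[1, y], [y, 1]], PD for |y| < 1
y_star = analytic_center(A0, [A1], y0=[0.5])
print(np.round(y_star, 6))               # converges to the analytic center y = 0
```

The barrier blows up as y approaches the boundary of the feasible set, which is what keeps interior-point iterates strictly inside the cone.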
References
- Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1994.
- J. Dattorro, Convex Optimization & Euclidean Distance Geometry, Meboo Publishing, 2005. Chapter 2 explains cones and their duals.
External links
- S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory (book in pdf)
- C. Scherer and S. Weiland, Course on Linear Matrix Inequalities in Control, Dutch Institute of Systems and Control (DISC).