Proximity Problems


Abstract

The aim of this short paper is to give an algebraic result that relates two criteria in multidimensional scaling...

Keywords
Euclidean distance, multidimensional scaling, strain, sstress, comparing criteria.

Introduction

We consider an <math>\,n\times n\,</math> predistance matrix <math>\,D=(d_{ij})\,</math> defined as a real symmetric matrix where <math>\,d_{ii}=0\,</math> for <math>\,i=1\ldots n\,</math> and <math>\,d_{ij}\geqslant 0\,</math> for all <math>\,i,j\,</math>.

<math>\,D\,</math> is said to be a Euclidean distance matrix of dimension <math>\,p\,</math> if there exist points <math>\,z_1\ldots z_n\,</math> in <math>\,\Bbb R^p\,</math> <math>\,(p\leqslant n-1)\,</math> such that

<math>d_{ij}=\|z_i-z_j\|^2 \quad\text{for all } i,j=1\ldots n</math>

where <math>\,\|~\|\,</math> denotes the Euclidean norm. Denote by <math>\,\Bbb{EDM}^p\,</math> the set of Euclidean distance matrices of dimension <math>\,p\,</math>.

A problem common to various sciences is to find the Euclidean distance matrix <math>\,D\in\Bbb{EDM}^p\,</math> closest, in some sense, to a given predistance matrix <math>\,\Delta=[\delta_{ij}]\,</math>. There are three statements of the closest-EDM problem prevalent in the literature, the multiplicity due primarily to choice of projection on the EDM versus positive semidefinite (PSD) cone and vacillation between the distance-square variable <math>\,d_{ij}\,</math> versus absolute distance <math>\,\sqrt{d_{ij}}\,</math>.

During the past two decades a large amount of work has been devoted to Euclidean distance matrices and to the approximation of predistance matrices by an <math>\,\Bbb{EDM}^p\,</math>, in a series of works including Gower[6-8], Mathar..., Critchley..., Hayden et al..., etc.

Mathematical preliminaries

It is well known that <math>\,D\in\Bbb{EDM}^p\,</math> if and only if the symmetric <math>\,n\times n\,</math> matrix

<math>W_s(D)=-\frac{1}{2}(I-es^t)D(I-se^t)\qquad(1)</math>

is positive semidefinite with <math>\,\text{rank}(W_s)\leqslant p\,</math>, where <math>\,e\,</math> is a vector of ones and <math>\,s\,</math> is any vector such that <math>\,s^te=1\,</math>.

This result was proved by Gower... as a generalization of an earlier result of Schoenberg... Later Gower considered the particular choices <math>\,s=\frac{1}{n}e\,</math> and <math>\,s=e_i\,</math> where <math>\,e_i\,</math> is the <math>\,i^\text{th}\,</math> vector from the standard basis. In what follows, when <math>\,s=\frac{1}{n}e\,</math>, the matrix <math>\,W_s(D)\,</math> will be denoted by <math>\,W(D)\,</math>:

<math>W(D)=-\frac{1}{2}(I-\frac{1}{n}ee^t)D(I-\frac{1}{n}ee^t)\qquad(2)</math>
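As a small numerical sketch (ours, not code from the paper; the function name gram_like and all variable names are our own inventions), the following Python fragment builds <math>\,W_s(D)\,</math> from equation (1) for a matrix of squared distances generated from random points, and checks that it is positive semidefinite with rank at most <math>\,p\,</math>, as the characterization above requires for a Euclidean <math>\,D\,</math>:

<pre>
import numpy as np

def gram_like(D, s):
    # W_s(D) = -1/2 (I - e s^t) D (I - s e^t), equation (1)
    n = D.shape[0]
    e = np.ones((n, 1))
    s = np.asarray(s, dtype=float).reshape(n, 1)
    J_s = np.eye(n) - e @ s.T
    return -0.5 * J_s @ D @ J_s.T

# a genuinely Euclidean D built from n points in R^p
n, p = 6, 2
rng = np.random.default_rng(0)
Z = rng.standard_normal((n, p))
D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)   # squared distances

s = np.full(n, 1.0 / n)                  # the classical choice s = e/n, giving W(D)
Ws = gram_like(D, s)
eigs = np.linalg.eigvalsh(Ws)
print(np.all(eigs >= -1e-10))            # positive semidefinite (up to round-off)
print(np.sum(eigs > 1e-10) <= p)         # rank(W_s) <= p
</pre>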

We see no compelling reason to prefer one particular <math>\,s\,</math> over another. Each has its own coherent interpretation. Neither can we say any particular problem formulation produces generally better results than another. Dattorro...

The aim of this short paper is to clarify that point...

We shall also use the notation

<math>\begin{array}{rcl}
J_s &=& I-es^t\qquad(3)\\
J   &=& I-\frac{1}{n}ee^t\qquad(4)
\end{array}</math>

so equations (1) and (2) can be written:

<math>\begin{array}{rcl}
W_s(D) &=& -\frac{1}{2}J_sDJ_s^t \\
W(D)   &=& -\frac{1}{2}JDJ
\end{array}</math>

It is easy to verify the following properties:

<math>\begin{array}{c}
J^t=J,\;J^2=J,\;Je=0\\
J_s^2=J_s,\;J_se=0,\;s^tJ_s=0\\
JJ_s=J,\;J_sJ=J_s\\
W=JW_sJ\\
W_s=J_sWJ_s^t
\end{array}</math>
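These identities are easy to confirm numerically; the short sketch below (ours, purely for illustration; variable names are our own) checks them for an arbitrary admissible <math>\,s\,</math>:

<pre>
import numpy as np

n = 5
rng = np.random.default_rng(1)
Delta = rng.random((n, n)); Delta = Delta + Delta.T; np.fill_diagonal(Delta, 0)
s = rng.random((n, 1)); s /= s.sum()      # any s with s^t e = 1
e = np.ones((n, 1))

J  = np.eye(n) - e @ e.T / n
Js = np.eye(n) - e @ s.T
W  = -0.5 * J  @ Delta @ J
Ws = -0.5 * Js @ Delta @ Js.T

checks = [(J.T, J), (J @ J, J), (Js @ Js, Js),            # J^t = J, J^2 = J, J_s^2 = J_s
          (J @ Js, J), (Js @ J, Js),                      # J J_s = J, J_s J = J_s
          (J @ Ws @ J, W), (Js @ W @ Js.T, Ws)]           # W = J W_s J, W_s = J_s W J_s^t
print(all(np.allclose(a, b) for a, b in checks))
print(np.allclose(J @ e, 0), np.allclose(Js @ e, 0), np.allclose(s.T @ Js, 0))
</pre>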

Classical MDS

Given <math>\,p\leqslant n\,</math>, let <math>\,\mathbf{D}_n(p)\,</math> denote the set of Euclidean distance matrices of dimension <math>\,p\,</math> and <math>\,\Omega_n(p)\,</math> denote the closed set of symmetric <math>\,n\times n\,</math> matrices that are positive semidefinite and have rank no greater than <math>\,p\,</math>.

Let <math>\,\|~\|_F\,</math> denote the Frobenius norm and <math>\,\Delta\,</math> a given symmetric <math>\,n\times n\,</math> matrix of squared dissimilarities. Let <math>\,W=W(\Delta)\,</math> and <math>\,W_s=W_s(\Delta)\,</math>.

Classical MDS can be defined by the optimization problem (P):

<math>\begin{array}{rl}
\text{minimize}&\|W-B\|^2\\
\text{subject to}&B\in\Omega_n(p)
\end{array}</math>

Problem (P) can be viewed as a particular case of a more general optimization problem (P_s):

<math>\begin{array}{rl}
\text{minimize}&\|W_s-B\|^2\\
\text{subject to}&B\in\Omega_n(p)
\end{array}</math>

The following explicit solution to problem (P) (respectively problem (P_s)) is well known: let <math>\,\lambda_1\geqslant\ldots\geqslant\lambda_n\,</math> denote the eigenvalues of <math>\,W\,</math> (respectively of <math>\,W_s\,</math>) and <math>\,v_1\ldots v_n\,</math> denote the corresponding eigenvectors.

Assume that the <math>\,p\,</math> largest eigenvalues are positive. Then

<math>B^*=\sum_{i=1}^p\lambda_iv_iv_i^t</math>

is a global minimizer of problem (P) (respectively of problem (P_s)). Furthermore, the minimum value for problem (P) is

<math>f=\sum_{i=p+1}^n\lambda_i^2(W)</math>

and for problem (P_s)

<math>f_s=\sum_{i=p+1}^n\lambda_i^2(W_s)</math>
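As a hedged numerical sketch (ours, not from the paper; the helper name closest_psd_rank_p is invented for illustration), the explicit solution and the minimum value can be read off from a single eigendecomposition:

<pre>
import numpy as np

def closest_psd_rank_p(M, p):
    # Minimizer B* = sum_{i<=p} lambda_i v_i v_i^t of ||M - B||_F^2 over Omega_n(p),
    # together with the minimum value sum_{i>p} lambda_i^2.
    # (As in the text, the p largest eigenvalues are assumed positive.)
    lam, V = np.linalg.eigh(M)            # ascending order
    lam, V = lam[::-1], V[:, ::-1]        # re-sort in decreasing order
    B_star = (V[:, :p] * lam[:p]) @ V[:, :p].T
    return B_star, np.sum(lam[p:] ** 2)

# With M = W(Delta) this returns the classical MDS fit and the value f;
# with M = W_s(Delta) it returns the minimizer of (P_s) and the value f_s.
</pre>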

In the Main result section below, we will prove that for any squared dissimilarity matrix <math>\,\Delta\,</math> we have

<math>f\leqslant f_s</math>

that is, at the minimum, the strain criterion always gives a value no larger than criterion (P_s). In order to show this result we shall use the following lemma.

Lemma

Let <math>\,\lambda_i(C)\,</math>, <math>\,i=1\ldots n\,</math>, denote the eigenvalues of any symmetric <math>\,n\times n\,</math> matrix <math>\,C\,</math> in decreasing order.

  • For all symmetric <math>\,A,B\,</math>:

<math>\lambda_i(A+B) \leqslant \lambda_i(A)+\lambda_1(B)</math>

  • For all positive semidefinite <math>\,A,B\,</math>:

<math>\lambda_i(A\,B)\leqslant \lambda_i(A)\lambda_1(B)</math>


Proof. See, for instance, Wilkinson...
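Both inequalities are classical (the first is Weyl's inequality). A throwaway numerical check, purely for illustration (the helper eigs_desc and all variable names are ours):

<pre>
import numpy as np

def eigs_desc(M):
    return np.sort(np.linalg.eigvalsh(M))[::-1]

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2        # symmetric
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
print(np.all(eigs_desc(A + B) <= eigs_desc(A) + eigs_desc(B)[0] + 1e-12))

P = rng.standard_normal((n, n)); P = P @ P.T              # positive semidefinite
Q = rng.standard_normal((n, n)); Q = Q @ Q.T
lam_PQ = np.sort(np.linalg.eigvals(P @ Q).real)[::-1]     # PQ has real nonnegative eigenvalues
print(np.all(lam_PQ <= eigs_desc(P) * eigs_desc(Q)[0] + 1e-9))
</pre>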

Comparing strain and sstress

In this section we recall a result (see [2]) that relates the strain and sstress criteria. The sstress criterion is given by:

<math>\begin{array}{rl}
\text{minimize}&S(D)=\|\Delta-D\|^2\\
\text{subject to}&D\in\mathbf{D}_n(p)
\end{array}</math>

Result. The following inequality holds: given <math>\,p\leqslant n-1\,</math>, for any <math>\,B\in\Omega_n(p)\,</math>, let <math>\,D=\text{diag}(B)e^t+e\,\text{diag}(B)^t-2B\,</math>, where <math>\,\text{diag}(B)\,</math> denotes the vector of diagonal entries of <math>\,B\,</math>. Then

<math>\|\Delta-D\|^2 \geqslant 4\|W-B\|^2</math>


Proof. Let <math>\,B\in\Omega_n(p)\,</math>; we have

<math>\begin{array}{rcl}
\delta_{ij} &=& w_{ii}+w_{jj}-2w_{ij}\\
d_{ij} &=& b_{ii}+b_{jj}-2b_{ij}
\end{array}</math>

Writing <math>\,a_{ij}=w_{ij}-b_{ij}\,</math> we get

<math>\sum_i\sum_j (\delta_{ij}-d_{ij})^2=2n\,\sum_ia_{ii}^2+4\sum_i\sum_j a_{ij}^2\geqslant 4\sum_i\sum_ja_{ij}^2\qquad\square</math>
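A quick numerical illustration of this bound (our own sketch, not from the paper; here <math>\,B\,</math> is taken centered, <math>\,Be=0\,</math>, as is the case for the classical MDS minimizer <math>\,B^*\,</math>):

<pre>
import numpy as np

rng = np.random.default_rng(3)
n, p = 7, 3
Delta = rng.random((n, n)); Delta = Delta + Delta.T; np.fill_diagonal(Delta, 0)

e = np.ones((n, 1))
J = np.eye(n) - e @ e.T / n
W = -0.5 * J @ Delta @ J

Y = rng.standard_normal((n, p))
B = J @ Y @ Y.T @ J                      # centered element of Omega_n(p)
d = np.diag(B).reshape(-1, 1)
D = d @ e.T + e @ d.T - 2 * B            # D = diag(B) e^t + e diag(B)^t - 2B

sstress_term = np.linalg.norm(Delta - D, 'fro') ** 2
strain_term  = np.linalg.norm(W - B, 'fro') ** 2
print(sstress_term >= 4 * strain_term)   # True, as the Result asserts
</pre>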


Main result

In this section we show an inequality involving the criteria <math>\,f\,</math> and <math>\,f_s\,</math> defined in the Classical MDS section above.

Theorem.

For any <math>\,s\in\Bbb R^n\,</math> such that <math>\,s^te=1\,</math> and for any <math>\,p\,</math> we have

<math>f \leqslant f_s</math>


Proof. We show that for all <math>\,i\,</math>, <math>\,|\lambda_i(W)|\leqslant|\lambda_i(W_s)|\,</math>. Toward this end, we consider two cases:

  • If <math>\,W\,</math> is PSD then <math>\,W_s\,</math> is PSD and the inequality becomes <math>\,\lambda_i(W)\leqslant \lambda_i(W_s)\,</math>. But

<math>\lambda_i(W)=\lambda_i(JW_sJ)\leqslant \lambda_i(W_s)\lambda_1(J)=\lambda_i(W_s)</math>

because <math>\,\lambda_1(J)=1\,</math>.

  • If <math>\,W\,</math> is not PSD, then using the definition of <math>\,J\,</math>:

<math>\begin{array}{rcl}
\lambda_i(W^2) &=& \lambda_i(W_sJW_s-\frac{1}{n}ee^tW_sJW_s) \\
&\leqslant& \lambda_i(W_sJW_s)+\lambda_1(-\frac{1}{n}ee^tW_sJW_s)
\end{array}</math>

But

<math>\lambda_1(-\frac{1}{n}ee^tW_sJW_s)= 0</math>

since this matrix has rank one and its only possibly nonzero eigenvalue, <math>\,-\frac{1}{n}(W_se)^tJ(W_se)\,</math>, is nonpositive. Because <math>\,J\,</math> and <math>\,W_s^2\,</math> are PSD, we have

<math>\lambda_i(W_sJW_s)\leqslant\lambda_i(W_s^2)\lambda_1(J)=\lambda_i(W_s^2)\qquad\square</math>
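The theorem is easy to probe numerically. The following sketch (ours, for illustration only; the helper tail_sum_of_squares is an invented name) compares <math>\,f\,</math> and <math>\,f_s\,</math> for a random squared-dissimilarity matrix and a random admissible <math>\,s\,</math>:

<pre>
import numpy as np

def tail_sum_of_squares(M, p):
    # sum_{i=p+1}^n lambda_i(M)^2, eigenvalues in decreasing order
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]
    return np.sum(lam[p:] ** 2)

rng = np.random.default_rng(4)
n, p = 8, 3
Delta = rng.random((n, n)); Delta = Delta + Delta.T; np.fill_diagonal(Delta, 0)

e = np.ones((n, 1))
s = rng.random((n, 1)); s /= s.sum()                      # s^t e = 1

J  = np.eye(n) - e @ e.T / n
Js = np.eye(n) - e @ s.T
W  = -0.5 * J  @ Delta @ J
Ws = -0.5 * Js @ Delta @ Js.T

f, f_s = tail_sum_of_squares(W, p), tail_sum_of_squares(Ws, p)
print(f <= f_s + 1e-12)                                   # f <= f_s
</pre>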


Modified Gower problem

In this section we consider the following problem: given a non-Euclidean matrix, can we find an <math>\,s\,</math> that maximizes the total squared real distances '''from the points to the centroid given by''' <math>\,s\,</math> in the fitted configuration? What is this choice of <math>\,s\,</math>?

This problem can be written as an optimization problem in the following manner. First, remark that if <math>\,\Delta\,</math> is not Euclidean, the number of negative eigenvalues of <math>\,W_s=W_s(\Delta)\,</math> does not depend on <math>\,s\,</math>. Call this number <math>\,p\,</math>.

The total squared real distances '''from the points to the centroid given by''' <math>\,s\,</math> in the fitted configuration can be written as

<math>\sum_{i=1}^p\lambda_i(W_s)</math>

where <math>\,\lambda_i(W_s)\,</math> denotes the <math>\,i^\text{th}\,</math> eigenvalue of <math>\,W_s\,</math>. But by a well-known result we have

<math>\sum_{i=1}^p\lambda_i(W_s)=\max_{X^tX=\mathbf{I_p}}\;\text{trace}(X^tW_sX)</math>

where <math>\,X\,</math> is an <math>\,n\times p\,</math> matrix and <math>\,\mathbf{I_p}\,</math> is the <math>\,p\times p\,</math> identity matrix. So the final optimization problem can be written as

<math>\max_{s^te=1,\,s\geqslant 0}\;\max_{X^tX=\mathbf{I_p}}\;\text{trace}(X^tW_sX)</math>

where

<math>X^tW_sX=X^tWX-X^tWse^tX-X^tes^tWX+X^tes^tWse^tX</math>

Question: Is it true that, at the optimum, the problem above is equivalent to the problem

<math>\max_{X^tX=\mathbf{I_p}}\;\max_{s^te=1,\,s\geqslant 0}\;\text{trace}(X^tW_sX)\;?</math>
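The inner maximization over <math>\,X\,</math> has the closed form recalled above (the sum of the <math>\,p\,</math> largest eigenvalues of <math>\,W_s\,</math>), so candidate vectors <math>\,s\,</math> can be compared directly. A small exploratory sketch (ours, not from the paper, and not an answer to the question; the name centroid_objective is invented):

<pre>
import numpy as np

def centroid_objective(Delta, s, p):
    # max over X^t X = I_p of trace(X^t W_s X) = sum of the p largest eigenvalues of W_s
    n = Delta.shape[0]
    e = np.ones((n, 1)); s = np.asarray(s, dtype=float).reshape(n, 1)
    Js = np.eye(n) - e @ s.T
    Ws = -0.5 * Js @ Delta @ Js.T
    lam = np.sort(np.linalg.eigvalsh(Ws))[::-1]
    return lam[:p].sum()

rng = np.random.default_rng(5)
n, p = 8, 3
Delta = rng.random((n, n)); Delta = Delta + Delta.T; np.fill_diagonal(Delta, 0)

# compare a few feasible s (s^t e = 1, s >= 0): the centroid e/n and each vertex e_i
candidates = [np.full(n, 1.0 / n)] + [np.eye(n)[i] for i in range(n)]
for s in candidates:
    print(round(centroid_objective(Delta, s, p), 4))
</pre>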

References

[1] Critchley, F., 1986. On certain linear mappings between inner-product and squared-distance matrices. Linear Algebra Appl. 105, 91-107.

[2] De Leeuw, J., Heiser, W., 1982. Theory of multidimensional scaling. In: Krishnaiah, P.R., Kanal, I.N. (Eds.), Handbook of Statistics, vol. 2. North-Holland, Amsterdam, pp. 285-316 (Chapter 13).

[3] Gower, J.C., 1966. Some distance properties of latent root and vector methods in multivariate analysis. Biometrika 53, 315-328.

[4] Gower, J.C., 1982. Euclidean distance geometry. Math. Scientist 7, 1-14.

[5] Schoenberg, I.J., 1935. Remarks to Maurice Fréchet's article "Sur la définition axiomatique d'une classe d'espaces distanciés vectoriellement applicable sur l'espace de Hilbert". Ann. of Math. 38, 724-738.

[6] Torgerson, W.S., 1952. Multidimensional scaling: I. Theory and method. Psychometrika 17, 401-419.

[7] Trosset, M.W., 1997. Numerical algorithms for multidimensional scaling. In: Klar, R., Opitz, P. (Eds.), Classification and Knowledge Organization. Springer, Berlin, pp. 80-92.
