Proximity Problems


Abstract

The aim of this short paper is to give an algebraic result that relates two criteria in multidimensional scaling...

Key-words
Euclidean distance, multidimensional scaling, strain, sstress, comparing criteria.

\section{Introduction}

We consider an $n\times n$ pre-distance matrix $D=(d_{ij})$, defined as a real symmetric matrix with $d_{ii}=0$ for $i=1,\ldots,n$ and $d_{ij}\geqslant 0$ for all $i,j$. $D$ is said to be a Euclidean distance matrix of dimension $p$ if there exist points $z_1,\ldots,z_n$ in $\Bbb R^p$ ($p\leqslant n-1$) such that $$ d_{ij}=\|z_i-z_j\|^2 \hspace{1cm}\text{for all } i,j=1,\ldots,n $$ where $\|\cdot\|$ denotes the Euclidean norm. We denote by $\text{EDM}^p$ the set of Euclidean distance matrices of dimension $p$.
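As a minimal numerical illustration of this definition, consider the following sketch (it assumes \texttt{numpy}; all names are ours, not part of the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
Z = rng.standard_normal((n, p))        # points z_1,...,z_n in R^p

# d_ij = ||z_i - z_j||^2 (squared Euclidean distances)
D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)

assert np.allclose(D, D.T)             # symmetric
assert np.allclose(np.diag(D), 0)      # zero diagonal: D is a pre-distance matrix
\end{verbatim}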

A problem common to various sciences is to find the Euclidean distance matrix $D\in\text{EDM}^p$ closest in some sense to a given pre-distance matrix $\Delta=(\delta_{ij})$. There are three statements of the closest-EDM problem prevalent in the literature, the multiplicity due primarily to the choice of projection on the EDM cone versus the positive semidefinite (PSD) cone, and to vacillation between the squared-distance variable $d_{ij}$ and the absolute distance $\sqrt{d_{ij}}$.

During the past two decades a large amount of work has been devoted to Euclidean distance matrices and to the approximation of pre-distance matrices by an element of $\text{EDM}^p$, in a series of works including, among others, Gower [3,4], Mathar [?], Critchley [1], Hayden et al. [?], etc.

\section{Mathematical preliminaries}

It is well known that $D\in\text{EDM}^p$ if and only if the symmetric $n\times n$ matrix \begin{equation} W_s(D)=-\frac{1}{2}(I-es^t)D(I-se^t) \end{equation} is positive semidefinite (PSD) with $\text{rank}(W_s)\leqslant p$, where $I$ is the identity matrix of order $n$, $e$ is the vector of ones, and $s$ is any vector such that $s^te=1$.

This result was proved by Gower as a generalization of an earlier result of Schoenberg. The latter considered the particular choices $s=\frac{1}{n}e$ and $s=e_i$, where $e_i$ is the $i$th vector of the canonical basis. In the remainder of the paper, when $s=\frac{1}{n}e$, the matrix $W_s(D)$ will be denoted by $W(D)$ and is given by \begin{equation} W(D)=-\frac{1}{2}(I-\frac{1}{n}ee^t)D(I-\frac{1}{n}ee^t) \end{equation}
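The criterion is easy to test numerically. The following sketch (assuming \texttt{numpy} and reusing $D$, $n$, $p$ from the previous snippet) checks that $W_s(D)$ is PSD with rank at most $p$ for two admissible choices of $s$:
\begin{verbatim}
e = np.ones((n, 1))

def W_of(D, s):
    J_s = np.eye(n) - e @ s.T          # J_s = I - e s^t
    return -0.5 * J_s @ D @ J_s.T      # W_s(D), symmetric since D is

for s in (e / n, np.eye(n)[:, [0]]):   # s = e/n and s = e_1
    eigs = np.linalg.eigvalsh(W_of(D, s))
    assert eigs.min() > -1e-10         # positive semidefinite
    assert (eigs > 1e-10).sum() <= p   # rank(W_s) <= p
\end{verbatim}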

Dattorro (with our notation) writes: ``we see no compelling reason to prefer one particular $s$ over another. Each has its own coherent interpretation. Neither can we say any particular problem formulation produces generally better results than another.'' The aim of this short paper is to clarify this point ...

We shall also use the notation \begin{eqnarray} J_s &=& I-es^t \\ J &=& I-\frac{1}{n}ee^t \end{eqnarray} so that equations $(1)$ and $(2)$ can be written as \begin{eqnarray} W_s(D) &=& -\frac{1}{2}J_sDJ_s^t \\ W(D) &=& -\frac{1}{2}JDJ \end{eqnarray} It is easy to verify the following properties: \begin{eqnarray} J^t=J,\;J^2=J,\;Je=0\\ J_s^2=J_s,\;J_se=0,\;s^tJ_s=0\\ JJ_s=J,\;J_sJ=J_s\\ W=JW_sJ\\ W_s=J_sWJ_s^t \end{eqnarray}
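These identities can be spot-checked numerically; a sketch under the same assumptions as the snippets above (reusing \texttt{numpy} as \texttt{np}, and $e$, $D$, $n$, \texttt{rng}):
\begin{verbatim}
s = np.full((n, 1), 1.0 / n) + 0.01 * rng.standard_normal((n, 1))
s /= s.sum()                              # any s with s^t e = 1
J   = np.eye(n) - np.ones((n, n)) / n
J_s = np.eye(n) - e @ s.T

assert np.allclose(J @ J, J) and np.allclose(J @ e, 0)
assert np.allclose(J_s @ J_s, J_s) and np.allclose(s.T @ J_s, 0)
assert np.allclose(J @ J_s, J) and np.allclose(J_s @ J, J_s)

W, W_s = -0.5 * J @ D @ J, -0.5 * J_s @ D @ J_s.T
assert np.allclose(W, J @ W_s @ J)        # W = J W_s J
assert np.allclose(W_s, J_s @ W @ J_s.T)  # W_s = J_s W J_s^t
\end{verbatim}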
\section{Classical MDS}

Given $p\leqslant n$, let $\mathbf{D}_n(p)$ denote the set of Euclidean distance matrices of dimension $p$ and let $\Omega_n(p)$ denote the closed set of symmetric $n\times n$ matrices that are positive semidefinite and of rank no greater than $p$.
Let $\|\cdot\|_F$ denote the Frobenius norm and let $\Delta$ be a given symmetric $n\times n$ matrix of squared dissimilarities. Let $W=W(\Delta)$ and $W_s=W_s(\Delta)$. Classical MDS can be defined by the optimization problem $(P)$: \begin{eqnarray*} \text{minimize }\; \|W-B\|^2\\ \text{subject to }\; B\in\Omega_n(p) \end{eqnarray*} Problem $(P)$ can be viewed as a particular case of the more general optimization problem $(P_s)$: \begin{eqnarray*} \text{minimize }\; \|W_s-B\|^2\\ \text{subject to }\; B\in\Omega_n(p) \end{eqnarray*} The following explicit solution to problem $(P)$ (resp. problem $(P_s)$) is well known: let $\lambda_1\geqslant\ldots\geqslant\lambda_n$ denote the eigenvalues of $W$ (resp. of $W_s$) and let $v_1,\ldots,v_n$ denote the corresponding eigenvectors. Assume that the $p$ largest eigenvalues are positive. Then $$ B^*=\sum_{i=1}^p\lambda_iv_iv_i^t $$ is a global minimum of problem $(P)$ (resp. of problem $(P_s)$). Furthermore, the minimum value for problem $(P)$ is \begin{equation} f=\sum_{i=p+1}^n\lambda_i^2(W) \end{equation} and for problem $(P_s)$ it is \begin{equation} f_s=\sum_{i=p+1}^n\lambda_i^2(W_s) \end{equation} In Section 5 we will prove that for any squared-dissimilarity matrix $\Delta$ we have $$ f\leqslant f_s $$ that is, at the minimum, the strain criterion always gives a smaller value than criterion $(P_s)$.
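As an illustration, here is a minimal implementation of this explicit solution (a sketch assuming \texttt{numpy}; the function name is ours):
\begin{verbatim}
def classical_mds(Delta, p):
    n = Delta.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # J = I - (1/n) e e^t
    W = -0.5 * J @ Delta @ J                 # W(Delta)
    lam, V = np.linalg.eigh(W)
    lam, V = lam[::-1], V[:, ::-1]           # eigenvalues in decreasing order
    # B* = sum_{i<=p} lam_i v_i v_i^t (assumes the p largest eigenvalues are positive)
    B_star = (V[:, :p] * lam[:p]) @ V[:, :p].T
    f = np.sum(lam[p:] ** 2)                 # minimum value f in (12)
    return B_star, f
\end{verbatim}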
In order to show this result we shall use the following lemma. \begin{Lemma} Let $\lambda_i(C)$, $i=1,\ldots,n$, denote the eigenvalues of any symmetric $n\times n$ matrix $C$, ranked in decreasing order. \begin{enumerate} \item For all symmetric $A,B$: \begin{equation} \lambda_i(A+B) \leqslant \lambda_i(A)+\lambda_1(B) \end{equation} \item For all positive semidefinite $A,B$: \begin{equation} \lambda_i(A\,B)\leqslant \lambda_i(A)\lambda_1(B) \end{equation} \end{enumerate} \end{Lemma}
\vspace{2mm}\noindent{\bf Proof.} See, for instance, Wilkinson [?].

\section{Comparing strain and sstress}

In this section we recall a result (see [2]) that relates the strain and sstress criteria. The sstress criterion is given by \begin{eqnarray*} \text{minimize}\; S(D)=\|\Delta-D\|^2\\ \text{subject to }D\in \mathbf{D}_n(p) \end{eqnarray*}

\vspace{2mm}\noindent{\bf Result.} Given $p\leqslant n-1$, for any $B\in\Omega_n(p)$, let $D=\text{diag}(B)e^t+e\,\text{diag}(B)^t-2B$. Then the following inequality holds: \begin{equation} \|\Delta-D\|^2 \geqslant 4\|W-B\|^2 \end{equation}

\vspace{2mm}\noindent{\bf Proof.} Let $B\in\Omega_n(p)$ and assume $B$ is centered ($Be=0$); we have \begin{eqnarray*} \delta_{ij} &=& w_{ii}+w_{jj}-2w_{ij} \\ d_{ij} &=& b_{ii}+b_{jj}-2b_{ij} \end{eqnarray*} Writing $a_{ij}=w_{ij}-b_{ij}$, and noting that $A=(a_{ij})$ satisfies $Ae=0$ (since $We=0$ and $Be=0$) so that the cross terms $\sum_i\sum_j a_{ii}a_{ij}$ vanish, we get \begin{equation} \sum_i\sum_j (\delta_{ij}-d_{ij})^2=2n\,\sum_ia_{ii}^2+2\Big(\sum_i a_{ii}\Big)^2+4\sum_i\sum_j a_{ij}^2\geqslant 4\sum_i\sum_ja_{ij}^2 \end{equation} \begin{flushright}$\square$\end{flushright}
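A numeric spot-check of this inequality (a sketch assuming \texttt{numpy}, continuing the imports of the earlier snippets; all names are ours):
\begin{verbatim}
rng2 = np.random.default_rng(1)
n, p = 7, 3
Delta = rng2.random((n, n)); Delta = Delta + Delta.T
np.fill_diagonal(Delta, 0)                 # a pre-distance matrix

J = np.eye(n) - np.ones((n, n)) / n
W = -0.5 * J @ Delta @ J

X = rng2.standard_normal((n, p))
X -= X.mean(axis=0)                        # center the configuration, so Be = 0
B = X @ X.T                                # PSD, rank <= p, centered
D = np.diag(B)[:, None] + np.diag(B)[None, :] - 2 * B

sstress, strain = np.sum((Delta - D) ** 2), np.sum((W - B) ** 2)
assert sstress >= 4 * strain - 1e-9        # the Result
\end{verbatim}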
\section{Main result}

In this section we show an inequality involving the criteria $f$ in $(12)$ and $f_s$ in $(13)$.
\noindent{\bf Theorem.} For any $s\in\Bbb R^n$ such that $s^te=1$ and for any $p$, we have \begin{equation} f \leqslant f_s \end{equation}

\vspace{2mm}\noindent{\bf Proof.} We will show that for all $i$, $|\lambda_i(W)|\leqslant |\lambda_i(W_s)|$. To this end, we consider two cases: \begin{enumerate} \item If $W$ is PSD then $W_s$ is PSD and the inequality becomes $\lambda_i(W)\leqslant \lambda_i(W_s)$. But $$ \lambda_i(W)=\lambda_i(JW_sJ)\leqslant \lambda_i(W_s)\lambda_1(J)=\lambda_i(W_s) $$ because $\lambda_1(J)=1$. \item If $W$ is not PSD, then using the definition of $J$: \begin{eqnarray*} \lambda_i(W^2) &=& \lambda_i(W_sJW_s-\frac{1}{n}ee^tW_sJW_s) \\ &\leqslant& \lambda_i(W_sJW_s)+\lambda_1(-\frac{1}{n}ee^tW_sJW_s) \end{eqnarray*} But $$ \lambda_1(-\frac{1}{n}ee^tW_sJW_s)= 0 $$ and, because $J$ and $W_s^2$ are PSD, we have $$ \lambda_i(W_sJW_s)\leqslant\lambda_i(W_s^2)\lambda_1(J)=\lambda_i(W_s^2) \hspace{2cm}\square $$ \end{enumerate}
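A numeric illustration of the Theorem (a sketch assuming \texttt{numpy}; names are ours), computing $f$ and $f_s$ as in $(12)$ and $(13)$ and comparing them over random admissible $s$:
\begin{verbatim}
rng3 = np.random.default_rng(2)
n, p = 8, 3
Delta = rng3.random((n, n)); Delta = Delta + Delta.T
np.fill_diagonal(Delta, 0)
e = np.ones((n, 1))

def f_of(s):                               # f_s in (13) for a given s
    J_s = np.eye(n) - e @ s.T
    W_s = -0.5 * J_s @ Delta @ J_s.T
    lam = np.sort(np.linalg.eigvalsh(W_s))[::-1]
    return np.sum(lam[p:] ** 2)

f = f_of(e / n)                            # the strain value f in (12)
for _ in range(100):
    s = rng3.random((n, 1)); s /= s.sum()  # random s with s^t e = 1
    assert f <= f_of(s) + 1e-9             # f <= f_s, as the Theorem asserts
\end{verbatim}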
\section{Modified Gower problem}

In this section we consider the following problem: given a non-Euclidean matrix, can we find an $s$ that maximizes the total squared real distances {\bf from the points to the centroid given by} $s$ in the fitted configuration? What is this choice of $s$?
This problem can be written as an optimization problem in the following manner. First, remark that if $\Delta$ is not Euclidean, the signature of $W_s=W_s(\Delta)$, and in particular its number of positive eigenvalues, does not depend on $s$. Call the number of positive eigenvalues $p$.

The total squared real distances {\bf from the points to the centroid given by} $s$ in the fitted configuration can be written as \begin{equation} \sum_{i=1}^p\lambda_i(W_s) \end{equation} where $\lambda_i(W_s)$ denotes the $i$th largest eigenvalue of $W_s$. But by a well-known result (Ky Fan's theorem) we have $$ \sum_{i=1}^p\lambda_i(W_s)=\max_{X^tX=\mathbf{I_p}}\;\text{trace}(X^tW_sX) $$ where $X$ is an $n\times p$ matrix and $\mathbf{I_p}$ is the $p\times p$ identity matrix. So the final optimization problem can be written as \begin{equation} \max_{s^te=1,s\geqslant 0}\max_{X^tX=\mathbf{I_p}}\;\text{trace}(X^tW_sX) \end{equation} where $$ X^tW_sX=X^tWX-X^tWse^tX-X^tes^tWX+X^tes^tWse^tX $$
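For a fixed $s$, the inner maximum is attained at the top $p$ eigenvectors of $W_s$; a sketch (assuming \texttt{numpy}; the function name is ours) of evaluating the objective in $(20)$:
\begin{verbatim}
def inner_max(W_s, p):
    lam, V = np.linalg.eigh(W_s)
    lam, V = lam[::-1], V[:, ::-1]          # decreasing order
    X = V[:, :p]                            # optimal X, columns orthonormal
    # Ky Fan: trace(X^t W_s X) equals the sum of the p largest eigenvalues
    assert np.isclose(np.trace(X.T @ W_s @ X), lam[:p].sum())
    return lam[:p].sum(), X
\end{verbatim}
The outer maximization over $\{s: s^te=1,\; s\geqslant 0\}$ remains the open part of the problem.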

\noindent{\bf Question:} Is it true that, at the optimum, problem $(20)$ is equivalent to the problem \begin{equation} \max_{X^tX=\mathbf{I_p}}\max_{s^te=1,s\geqslant 0}\;\text{trace}(X^tW_sX) \end{equation}

\begin{flushleft}{\Large\bf References} \end{flushleft}

\vspace{3mm}\noindent [1] Critchley, F., 1986. On certain linear mappings between inner-product and squared-distance matrices. Linear Algebra Appl. 105, 91-107.

\vspace{3mm}\noindent [2] De Leeuw, J., Heiser, W., 1982. Theory of multidimensional scaling. In: Krishnaiah, P.R., Kanal, L.N. (Eds.), Handbook of Statistics, vol. 2. North-Holland, Amsterdam, pp. 285-316 (Chapter 13).

\vspace{3mm}\noindent [3] Gower, J.C., 1966. Some distance properties of latent root and vector methods in multivariate analysis. Biometrika 53, 315-328.

\vspace{3mm}\noindent [4] Gower, J.C., 1982. Euclidean distance geometry. Math. Scientist 7, 1-14.

\vspace{3mm}\noindent [5] Schoenberg, I.J., 1935. Remarks to Maurice Fr\'echet's article Sur la d\'efinition axiomatique d'une classe d'espaces distanci\'es vectoriellement applicable sur l'espace de Hilbert. Ann. of Math. 38, 724-738.

\vspace{3mm}\noindent [6] Torgerson, W.S., 1952. Multidimensional scaling: I. Theory and method. Psychometrika 17, 401-419.

\vspace{3mm}\noindent [7] Trosset, M.W., 1997. Numerical algorithms for multidimensional scaling. In: Klar, R., Opitz, P. (Eds.), Classification and Knowledge Organization. Springer, Berlin, pp. 80-92.
