
Say we have a matrix $A = L + \beta^{2} M$, where $\beta$ is a real scalar, $L$ is symmetric positive semi-definite, and $M$ is symmetric positive definite. I am interested in obtaining a low-rank approximation of $A^{-1}$, which can be obtained by solving the GHEP given by $L U = MU\Lambda$, where $U$ is the eigenvector matrix (normalized so that $U^{\mathrm{T}} M U = I$) and $\Lambda$ is the diagonal matrix of the corresponding eigenvalues. Using this approach, $A^{-1} = U(\Lambda+\beta^2I)^{-1}U^{\mathrm{T}}$.
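
For concreteness, a minimal sketch of this construction (Python/SciPy with small placeholder test matrices; `n`, `k`, and `beta` are arbitrary test values, not those of my actual problem) might look like this:

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder problem size, truncation rank, and regularization parameter
rng = np.random.default_rng(0)
n, k, beta = 200, 100, 0.5

# Placeholder SPSD L and SPD M (stand-ins for the actual matrices)
B = rng.standard_normal((n, n))
L = B @ B.T                      # symmetric positive semi-definite
C = rng.standard_normal((n, n))
M = C @ C.T + n * np.eye(n)      # symmetric positive definite

# GHEP  L U = M U Lambda, truncated to the k smallest eigenpairs.
# For the generalized problem, eigh returns M-orthonormal eigenvectors (U^T M U = I).
lam, U = eigh(L, M, subset_by_index=[0, k - 1])

# Rank-k approximation  A^{-1} ~= U_k (Lambda_k + beta^2 I)^{-1} U_k^T
Ainv_k = U @ np.diag(1.0 / (lam + beta**2)) @ U.T

# Truncation error of the rank-k approximation, measured against a dense solve
x = rng.standard_normal(n)
A = L + beta**2 * M
print(np.linalg.norm(Ainv_k @ x - np.linalg.solve(A, x)))
```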

Now consider the case $\beta = 0$. From the equation above, $A^{-1} = U\Lambda^{-1}U^{\mathrm{T}}$. However, since $\beta = 0$ we have $A = L$, so I can also compute $A^{-1}$ by solving the HEP given by $LV = V\Omega$, with $V^{\mathrm{T}}V = I$, and forming $L^{-1} = V\Omega^{-1}V^{\mathrm{T}}$.
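
Schematically, the comparison looks like the following sketch (again placeholder matrices; here $L$ is taken strictly positive definite so that $L^{-1}$ exists, and both routes keep the $k$ smallest eigenpairs):

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder test problem for beta = 0; L is made strictly SPD so L^{-1} exists
rng = np.random.default_rng(1)
n, k = 200, 100
B = rng.standard_normal((n, n))
L = B @ B.T + np.eye(n)          # symmetric positive definite
C = rng.standard_normal((n, n))
M = C @ C.T + n * np.eye(n)      # symmetric positive definite, M != I
x = rng.standard_normal(n)

# GHEP route: L U = M U Lambda with U^T M U = I, k smallest eigenpairs
lam, U = eigh(L, M, subset_by_index=[0, k - 1])
Ainv_x = U @ ((U.T @ x) / lam)        # A^{-1} x with beta = 0

# HEP route: L V = V Omega with V^T V = I, k smallest eigenpairs
omega, V = eigh(L, subset_by_index=[0, k - 1])
Linv_x = V @ ((V.T @ x) / omega)      # L^{-1} x

# The two truncated results generally differ when M != I
print(np.linalg.norm(Ainv_x - Linv_x))
```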

I have the following questions about $A^{-1}$ and $L^{-1}$ in this case ($\beta = 0$).

  1. I computed $A^{-1}$ and $L^{-1}$ by solving the corresponding eigenvalue problems with the first 100 eigenpairs, and I find that $A^{-1}x$ and $L^{-1}x$ (for some vector $x$) are quite different. Is this expected? If so, why?
  2. Does $A^{-1}x$ approach $L^{-1}x$ as the number of eigenpairs used to compute $A^{-1}$ is increased?
  3. Will $A^{-1}x$ ever exactly match $L^{-1}x$ for matrices of finite size if $M \neq I$?
