
Consider a complex vector $\mathbf{s} \in \mathbb{C}^{N \times 1}$ that is multiplied by a known unitary matrix $\mathbf{G} \in \mathbb{C}^{N \times N}$, yielding $\mathbf{y} = \mathbf{G}\mathbf{s}$. Suppose the entries of $\mathbf{s}$ at indices $\{1, 5, 9, 13, \ldots, N\}$ are known, while all other entries of $\mathbf{s}$ are unknown. Conversely, the vector $\mathbf{y}$ is fully known except for its entries at those same indices $\{1, 5, 9, 13, \ldots, N\}$. The goal is to recover the unknown portion of $\mathbf{y}$ (call it $\mathbf{y}_{o}$) using the known portion of $\mathbf{s}$ (denoted $\mathbf{s}_{k}$), given that $\mathbf{y}$ is corrupted by additive white Gaussian noise (AWGN). A direct approach uses linear algebra, where one might write:

$$ \mathbf{y}_{o} \;=\; \mathbf{G}_{k,k}^{-1}\,\mathbf{s}_{k} \;-\; \mathbf{G}_{k,k}^{-1}\,\mathbf{G}_{d,k}\,\mathbf{y}_{k},$$

with $\mathbf{G}_{k,k}$ being the square submatrix of $\mathbf{G}$ that multiplies the known entries $\mathbf{s}_{k}$, and $\mathbf{G}_{d,k}$ the submatrix coupling the known part of $\mathbf{y}$. Although this formula is correct in the ideal noise-free setting, any noise in $\mathbf{y}$ gets dramatically amplified by $\mathbf{G}_{k,k}^{-1}$ when $\mathbf{G}_{k,k}$ is ill-conditioned. (In the setting at hand, the entries of $\mathbf{G}_{k,k}$ are very small, nearly zero, so the inversion is severely ill-conditioned.)
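To make the reduced system concrete, here is a minimal numerical sketch. Everything in it is my own toy construction, not from the question: $N = 16$, known indices $1,5,9,13$ (0-based $0,4,8,12$), and a random unitary $\mathbf{G}$ built via QR. Since $\mathbf{G}$ is unitary, $\mathbf{s} = \mathbf{G}^{H}\mathbf{y}$, so the submatrix playing the role of $\mathbf{G}_{k,k}$ below is $(\mathbf{G}^{H})_{K,K}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (assumed for illustration, not from the question):
# N = 16, known-s indices 1,5,9,13 -> 0-based K = {0,4,8,12}.
N = 16
K = np.arange(0, N, 4)                # s known, y unknown here
Kc = np.setdiff1d(np.arange(N), K)    # y known, s unknown here

# Random unitary G via QR of a complex Gaussian matrix.
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
G, _ = np.linalg.qr(M)

s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = G @ s                             # noise-free here

# Unitarity gives s = G^H y; restricting to the rows in K:
#   s_k = (G^H)[K,K] y_o + (G^H)[K,Kc] y_k,
# so the direct approach solves with (G^H)[K,K]:
Gh = G.conj().T
y_o = np.linalg.solve(Gh[np.ix_(K, K)], s[K] - Gh[np.ix_(K, Kc)] @ y[Kc])

print(np.allclose(y_o, y[K]))         # exact recovery without noise
```

With noise added to $\mathbf{y}$, the reconstruction error of this direct solve scales with the condition number of $(\mathbf{G}^{H})_{K,K}$, which is exactly the amplification problem described above.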

How can we recover $\mathbf{y}_{o}$ without the severe noise amplification caused by directly inverting $\mathbf{G}_{k,k}$? Is there an alternative method that avoids this amplification?


NB: $N = 2^{l}$ for some integer $l$.
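One common remedy (my suggestion, not from the question) is Tikhonov regularization: instead of $\mathbf{G}_{k,k}^{-1}\mathbf{b}$, compute $(\mathbf{G}_{k,k}^{H}\mathbf{G}_{k,k} + \lambda\mathbf{I})^{-1}\mathbf{G}_{k,k}^{H}\mathbf{b}$, which damps the directions with tiny singular values rather than inverting them. A minimal sketch under the same toy assumptions as before (random unitary $\mathbf{G}$, $N=16$, indices $0,4,8,12$ in 0-based terms, small AWGN on $\mathbf{y}$, and an illustrative $\lambda = 10^{-2}$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance (assumed): random unitary G, N = 16, K = {0,4,8,12}.
N = 16
K = np.arange(0, N, 4)
Kc = np.setdiff1d(np.arange(N), K)

M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
G, _ = np.linalg.qr(M)
Gh = G.conj().T

s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y_clean = G @ s
y = y_clean + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Reduced system G_kk y_o ~ b, with G_kk = (G^H)[K,K] (since s = G^H y).
Gkk = Gh[np.ix_(K, K)]
b = s[K] - Gh[np.ix_(K, Kc)] @ y[Kc]

# Tikhonov: solve (G_kk^H G_kk + lam I) y_o = G_kk^H b instead of inverting G_kk.
lam = 1e-2                            # tune to the noise variance
y_o_reg = np.linalg.solve(Gkk.conj().T @ Gkk + lam * np.eye(len(K)),
                          Gkk.conj().T @ b)
```

The price is a small bias: as $\lambda \to 0$ the estimate tends to the direct (noise-amplifying) solution, and as $\lambda$ grows the noise is suppressed at the cost of shrinking the signal components.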

  • What is the motivation for this question? Commented Feb 14 at 13:39
  • @RodrigodeAzevedo I need this for signal processing: a stable, noise-robust reconstruction of missing data in a linear system under partial information. Commented Feb 14 at 13:50
  • You have the same number of knowns and unknowns, right? Put the $N$ unknowns on the LHS and the $N$ knowns on the RHS, and then try least squares instead of matrix inversion. Commented Feb 14 at 18:24
  • @RodrigodeAzevedo The issue is that the submatrix $\mathbf{G}_{k,k}$ is too small (almost zero), so could you please explain what you mean mathematically in an answer? Commented Feb 14 at 18:49
  • If I write an answer, I deprive you of the pleasure of figuring it out on your own. Just write the other equation, since you would like to estimate the unknown part of the vector $\mathbf{s}$ too. You then have two coupled linear systems that can be written in a more compact form using block matrices. Eventually, you should get something like $\mathbf{A}\boldsymbol{\eta} = \mathbf{b}$, where all the unknowns (both from $\mathbf{s}$ and $\mathbf{y}$) are in $\boldsymbol{\eta}$ and all the knowns are in $\mathbf{b}$. Least squares merely minimizes the squared Euclidean norm of the error $\mathbf{A}\boldsymbol{\eta} - \mathbf{b}$, which can be done numerically using, say, CVXPY. Commented Feb 14 at 18:58
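The block formulation sketched in the last comment can be written out as follows. This is my own construction of the system (the commenter gave only the outline): stack the unknown entries $\mathbf{s}_{u}$ and $\mathbf{y}_{o}$ into one vector $\boldsymbol{\eta}$, move all knowns to the right-hand side, and solve by least squares. Using `np.linalg.lstsq` with an `rcond` cutoff truncates tiny singular values instead of inverting them, which is what limits the noise amplification when $\mathbf{G}_{k,k}$ is nearly singular. Toy assumptions as before (random unitary $\mathbf{G}$, $N=16$, 0-based indices $0,4,8,12$, noise-free $\mathbf{y}$ so the recovery can be checked exactly):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy instance (assumed): random unitary G, N = 16, K = {0,4,8,12}.
N = 16
K = np.arange(0, N, 4)                # s known, y unknown here
Kc = np.setdiff1d(np.arange(N), K)    # y known, s unknown here

M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
G, _ = np.linalg.qr(M)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = G @ s                             # noise-free, for an exact sanity check

# G s = y  <=>  G[:,Kc] s_u - I[:,K] y_o = I[:,Kc] y_k - G[:,K] s_k
I = np.eye(N)
A = np.hstack([G[:, Kc], -I[:, K]])   # multiplies eta = [s_u; y_o]
b = I[:, Kc] @ y[Kc] - G[:, K] @ s[K]

# lstsq discards singular values below rcond * sigma_max rather than
# inverting them, avoiding blow-up for nearly singular blocks.
eta, *_ = np.linalg.lstsq(A, b, rcond=1e-10)
s_u_hat, y_o_hat = eta[:len(Kc)], eta[len(Kc):]

print(np.allclose(y_o_hat, y[K]), np.allclose(s_u_hat, s[Kc]))
```

Note that this jointly estimates the unknown part of $\mathbf{s}$ as well; with noisy $\mathbf{y}$, the same `lstsq` call (or a CVXPY least-squares problem, as the comment suggests) returns the minimum-norm least-squares estimate instead of an exact solution.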
