
The following is an easy corollary of the noncommutative Khintchine inequality (see, e.g., Vershynin's high-dimensional probability book, Theorem 6.5.1).

Let $A$ be an $n\times n$ symmetric random matrix whose entries on and above the diagonal are independent, mean zero random variables. Then $$ \mathbb{E}\|A\| \lesssim \sqrt{\log n}\ \mathbb{E}\max_i \| A_i\|_2 $$ where $\|A\|$ denotes the operator norm of $A$ and $A_i$ denotes the $i$-th row of $A$.
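For what it's worth, a quick numerical sanity check (my own illustration; Rademacher entries are just one convenient choice of independent mean-zero entries) already shows how much room there is in this bound:

```python
import numpy as np

# Compare E||A|| with sqrt(log n) * E max_i ||A_i||_2 for a symmetric matrix
# whose entries on and above the diagonal are independent Rademacher signs.
rng = np.random.default_rng(0)
n, trials = 500, 20
ratios = []
for _ in range(trials):
    U = rng.choice([-1.0, 1.0], size=(n, n))
    A = np.triu(U) + np.triu(U, 1).T              # symmetric, independent above the diagonal
    op_norm = np.linalg.norm(A, 2)                # operator (spectral) norm ||A||
    max_row = np.linalg.norm(A, axis=1).max()     # max_i ||A_i||_2 (here = sqrt(n) exactly)
    ratios.append(op_norm / (np.sqrt(np.log(n)) * max_row))
print(np.mean(ratios))  # roughly 2 / sqrt(log n): below 1 and shrinking with n
```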

Question: Is $\sqrt{\log n}$ necessary in the bound above?

Vershynin's book claims that it is necessary (Exercise 6.5.4), but I am unable to find an example. The bound actually seems quite loose to me: it is already loose for diagonal matrices and for Wigner matrices. Looking at the literature, for Gaussian entries (with possibly different variances) the bound above is definitely loose, since it is known (due to van Handel and coauthors) that when $A_{ij}\sim N(0,b_{ij}^2)$ we have $$ \mathbb{E}\|A\| \lesssim \max_i\sqrt{\sum_j b_{ij}^2} + (\max_{i,j} b_{ij})\sqrt{\log n}. $$ So a tight example, if one exists, should not have Gaussian entries, and I don't have a candidate. In tightness examples, I would usually expect the $\sqrt{\log n}$ factor to come from the maximum of $n$ Gaussians.
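To spell out the looseness in the Wigner case (standard facts, stated just to fix ideas): if $A_{ij}\sim N(0,1)$ for all $i,j$, then each row norm concentrates around $\sqrt{n}$, so $$ \mathbb{E}\|A\| \asymp \sqrt{n}, \qquad \sqrt{\log n}\,\mathbb{E}\max_i\|A_i\|_2 \asymp \sqrt{n\log n}, $$ and the Khintchine-type bound is off by a factor $\sqrt{\log n}$, whereas the Gaussian bound above gives $\sqrt{n}+\sqrt{\log n}\asymp\sqrt{n}$, which is the correct order.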


1 Answer


This seems to be a mistake in Vershynin's book. Without any further assumption on the distribution, the optimal constant $C$ in the inequality $$ \mathbb{E}\|A\| \le C\,\mathbb{E}\max_i\|A_i\|_2 $$ is of order $\log^{1/4}n$. By symmetrization, it is enough to consider the case $A_{ij}=a_{ij}\varepsilon_{ij}$, where the $\varepsilon_{ij}$ are independent Rademacher variables and the $a_{ij}$ are scalars. For this case, the inequality with $C\lesssim \log^{1/4}n$, as well as an example showing this is best possible, is due to Seginer; see Section 3 of this paper. (A simpler proof of the Seginer bound appears in Section 4.2 of this paper.)

When the entries are Gaussian or heavier-tailed than Gaussian, no dimension dependence is needed at all, and one always has $$ \mathbb{E}\|A\| \asymp \mathbb{E}\max_i\|A_i\|_2, $$ see this paper. In this case the operator norm is completely understood up to a universal constant. Another situation where the constant is dimension-free is when the distribution is arbitrary with mean zero but the entries are i.i.d. (not just independent), which was shown by Seginer in the earlier part of his paper.
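As a rough numerical illustration of the Gaussian case (the variance profile below is an arbitrary heterogeneous choice made only for the demo, not taken from the paper):

```python
import numpy as np

# Ratio E||A|| / E max_i ||A_i||_2 for a symmetric matrix with independent
# Gaussian entries and a heterogeneous variance profile b_ij = 1/(1+|i-j|).
rng = np.random.default_rng(1)
n, trials = 500, 20
i, j = np.indices((n, n))
b = 1.0 / (1.0 + np.abs(i - j))                   # standard deviations, decaying off the diagonal
ratios = []
for _ in range(trials):
    G = rng.normal(size=(n, n)) * b
    A = np.triu(G) + np.triu(G, 1).T              # symmetric, independent Gaussian entries
    ratios.append(np.linalg.norm(A, 2) / np.linalg.norm(A, axis=1).max())
print(np.mean(ratios))  # stays of order 1, with no sqrt(log n) factor
```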

• Thank you Ramon. Your paper is very informative on this. So this implies that the spectral norm of a random matrix with i.i.d. Gaussian entries of standard deviation $\sigma$ would be upper bounded by $\sigma(\sqrt{n} + \sqrt{\log n})$? Commented Aug 12, 2023 at 21:48
