
Before going through row space and column space: first of all, what are the "spaces" of a matrix? What does "space" refer to? What is the use of the row space and column space? And finally, what do we get or identify when we find the row space and the column space?

I have gone through many online tutorials, so please don't refer me to any online reference. Whatever you understand of these concepts yourself is enough, and it would be much appreciated.

  • Do you know that a matrix represents a linear transformation between two vector spaces? Commented Apr 2, 2017 at 13:03
  • @EmilioNovati Does it? Do the two vector spaces mean row-wise or column-wise? Commented Apr 2, 2017 at 13:10
  • No. In the usual representation, a matrix acts on a column vector and gives a column vector. Commented Apr 2, 2017 at 13:14
  • I know you're not looking for online tutorials, but here is one more you might find useful: I think the videos from 3blue1brown for chapter 3 and chapter 6 will be helpful. Each video is about 10 minutes long. Commented Apr 2, 2017 at 14:34

1 Answer

  • I guess here "space" refers to a vector space, which is abstractly defined by a set of rules (axioms) that a collection of elements must satisfy, together with the abstract concept of a field over which the vector space is defined. It is that set of rules that distinguishes a mere set (or subset) from a vector space (or subspace), and it has been said that "virtually all algorithms and all applications of linear algebra are understood by moving to subspaces."

  • A matrix $A$ can represent a linear transformation between two finite-dimensional vector spaces, $V$ and $W$, given that bases have been chosen for them. There are four subspaces associated with $A$: the row space and the column space are two of them, and the other two are the nullspace and the left nullspace, as indicated in the 2nd link (also in the Wikipedia article "Fundamental_theorem_of_linear_algebra"). The short computational sketch after this list computes all four for a small example.

  • What can we get from those subspaces? I think the 2nd link above gives a good answer (together with intuitive pictures). And I guess it helps to keep in mind that one of the main topics addressed by linear algebra at the beginning of its development was solving linear equations $Ax=b$. In other words, those subspaces are at least of value for reasoning about questions related to solving $Ax=b$.

  • I feel that intuitive understandings of the column space and the row space are helpful, particularly in the abstract atmosphere cast by the definition of a vector space.

  • The idea of the column space is a bit easier to grasp, as soon as one sees that the left side of $Ax=b$ is a linear combination of the columns of $A$ producing the right side $b$, provided the concepts of "linear combination" and "span" have been introduced (see the small worked example after this list).

  • To me, the row space is harder to build an intuition for, even though I know it IS a subspace. But so what? The 2nd link hints at two ways to interpret the row space: as the column space of $A^T$, and as the orthogonal complement of the nullspace of $A$. I personally find the latter more intuitive, probably because the nullspace is also intuitively easy to grasp. But in any case, I still feel that the concept of the row space is intuitively vaguer than that of the column space.

  • Then I thought about this a bit. I don't have a satisfactory answer yet, but allow me to describe what has occurred to me so far.

    • First let's return from $Ax=b$ to its original form, i.e., a system of linear equations $f_i(x)=b_i$.
    • Then an intuition can be built: each equation $f_i(x)=b_i$ defines a hyperplane in $V$ (a subspace when $b_i=0$); if $b_i=0$, the hyperplane passes through the origin; if $b_i\neq 0$, the hyperplane does not pass through the origin but is parallel to the one with $b_i=0$; the coefficient vector of the left side (i.e., the row vector of that equation) is perpendicular to these hyperplanes, because the equation $f_i(x)=0$ says that the inner product of this normal vector with any vector on the hyperplane (the one passing through the origin) is zero.
    • The row space, literally, is the span (all linear combinations) of those normal vectors. Since each differently scaled normal vector can define a hyperplane (i.e., how far it is translated from the one passing through the origin), a combination of the normal vectors implies a combination of hyperplanes, in particular the intersection points of those hyperplanes. So the row space can be seen as the set of intersection points of all possible combinations of hyperplanes.
    • If any hyperplane moves away from the origin, the corresponding $b_i\neq 0$. In this sense, the combination of normal vectors maps to $b$ in the column space.
    • The nullspace is the set of intersection points when all hyperplanes stay passing through the origin (all $b_i=0$). It's easy to conclude that all points (vectors) in the nullspace are perpendicular to all normal vectors, which corresponds to the theorem that the nullspace and the row space are orthogonal; the worked example below illustrates this.
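
To make the last few points concrete, here is a tiny worked example of my own (the matrix is just an illustration I chose, not taken from the question or the links): take

$$A=\begin{pmatrix}1 & 2\\ 3 & 6\end{pmatrix},\qquad Ax=x_1\begin{pmatrix}1\\3\end{pmatrix}+x_2\begin{pmatrix}2\\6\end{pmatrix}.$$

Both columns are multiples of $(1,3)^T$, so the column space is the line spanned by $(1,3)^T$, and $Ax=b$ is solvable exactly when $b$ lies on that line. Both rows are multiples of $(1,2)$, so the row space is the line spanned by $(1,2)^T$. The nullspace consists of all $x$ with $x_1+2x_2=0$, i.e., the line spanned by $(2,-1)^T$, and indeed $(1,2)\cdot(2,-1)=0$: the nullspace is exactly the orthogonal complement of the row space.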

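If one wants to see all four subspaces computed at once, here is a minimal computational sketch (assuming SymPy is available; the matrix `A` is again just an arbitrary small example I made up):

```python
from sympy import Matrix

# An arbitrary small rank-1 example matrix, chosen only for illustration.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

print("column space basis:", A.columnspace())  # subspace of R^2 spanned by the columns
print("row space basis:   ", A.rowspace())     # subspace of R^3 spanned by the rows
print("nullspace basis:   ", A.nullspace())    # all x with A*x = 0
print("left nullspace:    ", A.T.nullspace())  # all y with A^T*y = 0

# Check the orthogonality discussed above: every nullspace vector is
# perpendicular to every row space basis vector.
for n in A.nullspace():
    for r in A.rowspace():
        assert r.dot(n) == 0
```

The rank (here 1) is both the number of basis vectors of the column space and of the row space, which is one way the two spaces are tied together.
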
By the way, I am a beginner and I don't know much either, but I hope this helps.
