<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[The ryg blog]]></provider_name><provider_url><![CDATA[https://fgiesen.wordpress.com]]></provider_url><author_name><![CDATA[fgiesen]]></author_name><author_url><![CDATA[https://fgiesen.wordpress.com/author/fgiesen/]]></author_url><title><![CDATA[Linear Algebra Toolbox&nbsp;2]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>In the <a href="https://fgiesen.wordpress.com/2012/06/03/linear-algebra-toolbox-1/">previous part</a> I covered a bunch of basics. Now let&#8217;s continue with stuff that&#8217;s a bit more fun. Small disclaimer: In this series, I&#8217;ll be mostly talking about finite-dimensional, real vector spaces, and even more specifically <img src="https://s0.wp.com/latex.php?latex=%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;mathbb{R}^n" class="latex" /> for some n. So assume that&#8217;s the setting unless explicitly stated otherwise; I don&#8217;t want to bog the text down with too many technicalities.</p>
<h3>(Almost) every product can be written as a matrix product</h3>
<p>In general, most of the functions we call &#8220;products&#8221; share some common properties: they&#8217;re examples of &#8220;bilinear maps&#8221;, that is vector-valued functions of two vector-valued arguments which are linear in both of them. The latter means that if you hold either of the two arguments constant, the function behaves like a linear function of the other argument. Now we know that any linear function <img src="https://s0.wp.com/latex.php?latex=f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="f" class="latex" /> can be written as a matrix product <img src="https://s0.wp.com/latex.php?latex=f%28x%29%3DMx&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=f%28x%29%3DMx&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=f%28x%29%3DMx&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="f(x)=Mx" class="latex" /> for some matrix M, provided we&#8217;re willing to choose a basis.</p>
<p>Okay, now take one such product-like operation between vector spaces, let&#8217;s call it <img src="https://s0.wp.com/latex.php?latex=%2A&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%2A&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%2A&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="*" class="latex" />. What the above sentence means is that for any <img src="https://s0.wp.com/latex.php?latex=a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a" class="latex" />, there is a corresponding matrix <img src="https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="M_a" class="latex" /> such that <img src="https://s0.wp.com/latex.php?latex=a%2Ab+%3D+M_a+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a%2Ab+%3D+M_a+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a%2Ab+%3D+M_a+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a*b = M_a b" class="latex" /> (and also a <img src="https://s0.wp.com/latex.php?latex=M%27_b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=M%27_b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=M%27_b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="M&#039;_b" class="latex" /> such that <img 
src="https://s0.wp.com/latex.php?latex=a%2Ab+%3D+M%27_b+a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a%2Ab+%3D+M%27_b+a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a%2Ab+%3D+M%27_b+a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a*b = M&#039;_b a" class="latex" />, but let&#8217;s ignore that for a minute). Furthermore, since a product is linear in <em>both</em> arguments, <img src="https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="M_a" class="latex" /> itself (respectively <img src="https://s0.wp.com/latex.php?latex=M%27_b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=M%27_b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=M%27_b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="M&#039;_b" class="latex" />) is a linear function of a (respectively b) too.</p>
<p>This is all fairly abstract. Let&#8217;s give an example: the standard dot product. The dot product of two vectors a and b is the number <img src="https://s0.wp.com/latex.php?latex=a+%5Ccdot+b+%3D+%5Csum_%7Bi%3D1%7D%5En+a_i+b_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a+%5Ccdot+b+%3D+%5Csum_%7Bi%3D1%7D%5En+a_i+b_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a+%5Ccdot+b+%3D+%5Csum_%7Bi%3D1%7D%5En+a_i+b_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a &#92;cdot b = &#92;sum_{i=1}^n a_i b_i" class="latex" />. This should be well known. Now let&#8217;s say we want to find the matrix <img src="https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=M_a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="M_a" class="latex" /> for some a. First, we have to figure out the correct dimensions. For fixed a, <img src="https://s0.wp.com/latex.php?latex=a+%5Ccdot+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a+%5Ccdot+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a+%5Ccdot+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a &#92;cdot b" class="latex" /> is a scalar-valued function of two vectors; so the matrix that represents &#8220;a-dot&#8221; maps a 3-vector to a scalar (1-vector); in other words, it&#8217;s a 1&#215;3 matrix. 
In fact, as you can verify easily, the matrix representing &#8220;a-dot&#8221; is just &#8220;a&#8221; written as a row vector &#8211; or written as a matrix expression, <img src="https://s0.wp.com/latex.php?latex=M_a+%3D+a%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=M_a+%3D+a%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=M_a+%3D+a%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="M_a = a^T" class="latex" />. For the full dot product expression, we thus get <img src="https://s0.wp.com/latex.php?latex=a+%5Ccdot+b+%3D+a%5ET+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a+%5Ccdot+b+%3D+a%5ET+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a+%5Ccdot+b+%3D+a%5ET+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a &#92;cdot b = a^T b" class="latex" /> = <img src="https://s0.wp.com/latex.php?latex=b%5ET+a+%3D+b+%5Ccdot+a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=b%5ET+a+%3D+b+%5Ccdot+a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=b%5ET+a+%3D+b+%5Ccdot+a&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="b^T a = b &#92;cdot a" class="latex" /> (because the dot product is symmetric, we can swap the positions of the two arguments). This works for any dimension of the vectors involved, provided they match of course. More importantly, it works the other way round too &#8211; a 1-row matrix represents a scalar-valued linear function (more concisely called a &#8220;linear functional&#8221;), and in case of the finite-dimensional spaces we&#8217;re dealing with, all such functions can be written as a dot product with a fixed vector.</p>
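<p>To make the &#8220;a-dot as a 1&#215;3 matrix&#8221; idea concrete, here&#8217;s a quick NumPy sketch (my own illustration, not from the text; the name <code>M_a</code> follows the notation above):</p>

```python
import numpy as np

# The dot product a . b written as a matrix product: M_a = a^T is a 1x3
# matrix (a row vector), and applying it to b yields the scalar a . b.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

M_a = a.reshape(1, 3)            # "a-dot" as a 1x3 matrix
result = M_a @ b                 # a 1-vector holding a . b

print(result[0])                 # 32.0
print(np.dot(b, a))              # 32.0 -- the dot product is symmetric
```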
<p>The same technique works for any given bilinear map, especially if you already know a form that works on coordinate vectors: in that case, you can instantly write down the matrix (same as in part 1, just check what happens to your basis vectors). To give a second example, take the cross product <img src="https://s0.wp.com/latex.php?latex=a+%5Ctimes+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a+%5Ctimes+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a+%5Ctimes+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a &#92;times b" class="latex" /> in three dimensions. The corresponding matrix looks like this:</p>
<p><img src="https://s0.wp.com/latex.php?latex=a+%5Ctimes+b+%3D+%5Ba%5D_%5Ctimes+b+%3D+%5Cbegin%7Bpmatrix%7D+0+%26+-a_3+%26+a_2+%5C%5C+a_3+%26+0+%26+-a_1+%5C%5C+-a_2+%26+a_1+%26+0+%5Cend%7Bpmatrix%7D+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=a+%5Ctimes+b+%3D+%5Ba%5D_%5Ctimes+b+%3D+%5Cbegin%7Bpmatrix%7D+0+%26+-a_3+%26+a_2+%5C%5C+a_3+%26+0+%26+-a_1+%5C%5C+-a_2+%26+a_1+%26+0+%5Cend%7Bpmatrix%7D+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=a+%5Ctimes+b+%3D+%5Ba%5D_%5Ctimes+b+%3D+%5Cbegin%7Bpmatrix%7D+0+%26+-a_3+%26+a_2+%5C%5C+a_3+%26+0+%26+-a_1+%5C%5C+-a_2+%26+a_1+%26+0+%5Cend%7Bpmatrix%7D+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="a &#92;times b = [a]_&#92;times b = &#92;begin{pmatrix} 0 &amp; -a_3 &amp; a_2 &#92;&#92; a_3 &amp; 0 &amp; -a_1 &#92;&#92; -a_2 &amp; a_1 &amp; 0 &#92;end{pmatrix} b" class="latex" />.</p>
<p>The <img src="https://s0.wp.com/latex.php?latex=%5Ba%5D_%5Ctimes+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Ba%5D_%5Ctimes+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Ba%5D_%5Ctimes+b&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="[a]_&#92;times b" class="latex" /> is standard notation for this construction. Note that in this case, because the cross product is vector-valued, we have a full 3&#215;3 matrix &#8211; and not just any matrix: it&#8217;s a skew-symmetric matrix, i.e. <img src="https://s0.wp.com/latex.php?latex=%5Ba%5D_%5Ctimes+%3D+-%5Ba%5D_%5Ctimes%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Ba%5D_%5Ctimes+%3D+-%5Ba%5D_%5Ctimes%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Ba%5D_%5Ctimes+%3D+-%5Ba%5D_%5Ctimes%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="[a]_&#92;times = -[a]_&#92;times^T" class="latex" />. I might come back to those later.</p>
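<p>A small NumPy sketch of this construction (again my own; <code>cross_matrix</code> is just an illustrative helper name), checking both that it reproduces the cross product and that it&#8217;s skew-symmetric:</p>

```python
import numpy as np

def cross_matrix(a):
    """Skew-symmetric matrix [a]_x such that [a]_x b = a x b."""
    return np.array([[    0.0, -a[2],  a[1]],
                     [  a[2],    0.0, -a[0]],
                     [ -a[1],   a[0],   0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# [a]_x b agrees with the built-in cross product...
assert np.allclose(cross_matrix(a) @ b, np.cross(a, b))
# ...and [a]_x = -[a]_x^T (skew symmetry)
assert np.allclose(cross_matrix(a), -cross_matrix(a).T)
```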
<p>So what we have now is a systematic way to write any &#8220;product-like&#8221; function of a and b as a matrix product (with a matrix depending on one of the two arguments). This might seem like a needless complication, but there&#8217;s a purpose to it: writing everything in a common notation (namely, as a matrix expression) has two advantages. First, it allows us to manipulate fairly complex expressions using uniform rules (namely, the rules for matrix multiplication), and second, it allows us to go the other way &#8211; take a complicated-looking matrix expression and break it down into components that have obvious geometric meaning. And that turns out to be a fairly powerful tool.</p>
<h3>Projections and reflections</h3>
<p>Let&#8217;s take a simple example: assume you have a unit vector <img src="https://s0.wp.com/latex.php?latex=v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="v" class="latex" />, and a second, arbitrary vector <img src="https://s0.wp.com/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x" class="latex" />. Then, as you hopefully know, the dot product <img src="https://s0.wp.com/latex.php?latex=v+%5Ccdot+x+%3D+v%5ET+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=v+%5Ccdot+x+%3D+v%5ET+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=v+%5Ccdot+x+%3D+v%5ET+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="v &#92;cdot x = v^T x" class="latex" /> is a scalar representing the length of the projection of x onto v. Take that scalar and multiply it by v again, and you get a vector that represents the component of x that is parallel to v:</p>
<p><img src="https://s0.wp.com/latex.php?latex=x_%5Cparallel+%3D+v%28v+%5Ccdot+x%29+%3D+v+%28v%5ET+x%29+%3D+%28v+v%5ET%29%5C%2C+x+%3D%3A+P_v%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x_%5Cparallel+%3D+v%28v+%5Ccdot+x%29+%3D+v+%28v%5ET+x%29+%3D+%28v+v%5ET%29%5C%2C+x+%3D%3A+P_v%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x_%5Cparallel+%3D+v%28v+%5Ccdot+x%29+%3D+v+%28v%5ET+x%29+%3D+%28v+v%5ET%29%5C%2C+x+%3D%3A+P_v%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x_&#92;parallel = v(v &#92;cdot x) = v (v^T x) = (v v^T)&#92;, x =: P_v&#92;, x" class="latex" />.</p>
<p>See what happened there? Since it&#8217;s all just matrix multiplication, which is associative (we can place parentheses however we want), we can instantly get the matrix <img src="https://s0.wp.com/latex.php?latex=P_v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=P_v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=P_v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="P_v" class="latex" /> that represents parallel projection onto v. Similarly, we can get the matrix for the corresponding orthogonal component:</p>
<p><img src="https://s0.wp.com/latex.php?latex=x_%5Cperp+%3D+x+-+x_%5Cparallel+%3D+x+-+%28v+v%5ET%29+x+%3D+Ix+-+%28v+v%5ET%29+x+%3D+%28I+-+v+v%5ET%29+x+%3D%3A+O_v%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x_%5Cperp+%3D+x+-+x_%5Cparallel+%3D+x+-+%28v+v%5ET%29+x+%3D+Ix+-+%28v+v%5ET%29+x+%3D+%28I+-+v+v%5ET%29+x+%3D%3A+O_v%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x_%5Cperp+%3D+x+-+x_%5Cparallel+%3D+x+-+%28v+v%5ET%29+x+%3D+Ix+-+%28v+v%5ET%29+x+%3D+%28I+-+v+v%5ET%29+x+%3D%3A+O_v%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x_&#92;perp = x - x_&#92;parallel = x - (v v^T) x = Ix - (v v^T) x = (I - v v^T) x =: O_v&#92;, x" class="latex" />.</p>
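<p>Both projection matrices are cheap to verify numerically. A quick NumPy sketch (my own; the names <code>P_v</code> and <code>O_v</code> follow the notation above):</p>

```python
import numpy as np

v = np.array([0.0, 0.0, 1.0])    # a unit vector
x = np.array([3.0, 4.0, 5.0])    # an arbitrary vector

P_v = np.outer(v, v)             # v v^T: parallel projection onto v
O_v = np.eye(3) - P_v            # I - v v^T: orthogonal complement

print(P_v @ x)                   # [0. 0. 5.] -- component of x along v
print(O_v @ x)                   # [3. 4. 0.] -- component perpendicular to v

# the two components recombine to give back x
assert np.allclose(P_v @ x + O_v @ x, x)
```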
<p>All it takes is the standard algebra trick of multiplying by 1 (or in this case, an identity matrix); after that, we just use linearity of matrix multiplication. You&#8217;re probably more used to exploiting it when working with vectors (stuff like <img src="https://s0.wp.com/latex.php?latex=Ax+%2B+Ay+%3D+A+%28x%2By%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=Ax+%2B+Ay+%3D+A+%28x%2By%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=Ax+%2B+Ay+%3D+A+%28x%2By%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="Ax + Ay = A (x+y)" class="latex" />), but it works in both directions and with arbitrary matrices: <img src="https://s0.wp.com/latex.php?latex=AB+%2B+AC+%3D+A+%28B%2BC%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=AB+%2B+AC+%3D+A+%28B%2BC%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=AB+%2B+AC+%3D+A+%28B%2BC%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="AB + AC = A (B+C)" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=AB+%2B+CB+%3D+%28A+%2B+C%29B&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=AB+%2B+CB+%3D+%28A+%2B+C%29B&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=AB+%2B+CB+%3D+%28A+%2B+C%29B&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="AB + CB = (A + C)B" class="latex" /> &#8211; matrix multiplication is another bilinear map.</p>
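<p>For instance, the distributivity rules just quoted can be checked directly on random matrices (a throwaway NumPy sketch of my own):</p>

```python
import numpy as np

# Matrix multiplication is linear in both factors, so we can factor out
# a common left factor A or a common right factor B, just as with vectors.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.allclose(A @ B + A @ C, A @ (B + C))   # factor out on the left
assert np.allclose(A @ B + C @ B, (A + C) @ B)   # factor out on the right
```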
<p>Anyway, with the two examples above, we get a third one for free: We&#8217;ve just separated <img src="https://s0.wp.com/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x" class="latex" /> into two components, <img src="https://s0.wp.com/latex.php?latex=x+%3D+x_%5Cperp+%2B+x_%5Cparallel&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x+%3D+x_%5Cperp+%2B+x_%5Cparallel&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x+%3D+x_%5Cperp+%2B+x_%5Cparallel&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x = x_&#92;perp + x_&#92;parallel" class="latex" />. If we keep the orthogonal part but flip the parallel component, we get a reflection about the plane through the origin with normal <img src="https://s0.wp.com/latex.php?latex=v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="v" class="latex" />. 
This is just <img src="https://s0.wp.com/latex.php?latex=x_%5Cperp+-+x_%5Cparallel&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x_%5Cperp+-+x_%5Cparallel&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x_%5Cperp+-+x_%5Cparallel&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x_&#92;perp - x_&#92;parallel" class="latex" />, which is again linear in x, and we can get the matrix <img src="https://s0.wp.com/latex.php?latex=R_v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=R_v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=R_v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="R_v" class="latex" /> for the whole thing by subtracting the two other matrices:</p>
<p><img src="https://s0.wp.com/latex.php?latex=x_%5Cperp+-+x_%5Cparallel+%3D+O_v%5C%2C+x+-+P_v%5C%2C+x+%3D+%28O_v+-+P_v%29%5C%2C+x+%3D+%28I+-+2+v+v%5ET%29+%5C%2C+x+%3D%3A+R_v+%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=x_%5Cperp+-+x_%5Cparallel+%3D+O_v%5C%2C+x+-+P_v%5C%2C+x+%3D+%28O_v+-+P_v%29%5C%2C+x+%3D+%28I+-+2+v+v%5ET%29+%5C%2C+x+%3D%3A+R_v+%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=x_%5Cperp+-+x_%5Cparallel+%3D+O_v%5C%2C+x+-+P_v%5C%2C+x+%3D+%28O_v+-+P_v%29%5C%2C+x+%3D+%28I+-+2+v+v%5ET%29+%5C%2C+x+%3D%3A+R_v+%5C%2C+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="x_&#92;perp - x_&#92;parallel = O_v&#92;, x - P_v&#92;, x = (O_v - P_v)&#92;, x = (I - 2 v v^T) &#92;, x =: R_v &#92;, x" class="latex" />.</p>
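<p>And the reflection matrix in NumPy form (my own sketch; <code>R_v</code> as above &#8211; this is what&#8217;s known elsewhere as a Householder matrix):</p>

```python
import numpy as np

v = np.array([0.0, 0.0, 1.0])            # unit plane normal
x = np.array([3.0, 4.0, 5.0])

R_v = np.eye(3) - 2.0 * np.outer(v, v)   # I - 2 v v^T

print(R_v @ x)                           # [ 3.  4. -5.] -- parallel part flipped

# reflecting twice gives back the original vector: R_v is its own inverse
assert np.allclose(R_v @ (R_v @ x), x)
```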
<p>None of this is particularly fancy (and most of it you should know already), so why am I going through this? Two reasons. First off, it&#8217;s worth knowing, since all three special types of matrices tend to show up in a lot of different places. And second, they&#8217;re good examples of transforms constructed by adding something to (or subtracting from) the identity map, a construction you&#8217;ll keep running into. In the general case, it&#8217;s hard to mentally visualize what the sum (or difference) of two transforms does, but orthogonal complements and reflections come with a nice geometric interpretation.</p>
<p>I&#8217;ll end this part here. See you next time!</p>
]]></html></oembed>