<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[The ryg blog]]></provider_name><provider_url><![CDATA[https://fgiesen.wordpress.com]]></provider_url><author_name><![CDATA[fgiesen]]></author_name><author_url><![CDATA[https://fgiesen.wordpress.com/author/fgiesen/]]></author_url><title><![CDATA[What happens when iterating&nbsp;filters?]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>Casey Muratori posted on his blog about <a href="https://caseymuratori.com/blog_0035">half-pixel interpolation filters</a> this week, where he ends up focusing on a particular criterion: whether the filter in question is stable under repeated application or not.</p>
<p>There are many things about filters that are more an art than a science, especially where perceptual factors are concerned, but this particular question is both free of tricky perceptual evaluations and firmly in the realm of things we have excellent theory for, albeit one that will require me to start with a linear algebra infodump. So let&#8217;s get into it!</p>
<h3>Analyzing iterated linear maps</h3>
<p>Take any vector space V over some field <img src="https://s0.wp.com/latex.php?latex=%5Cmathbb%7BF%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cmathbb%7BF%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cmathbb%7BF%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;mathbb{F}" class="latex" /> and any linear map <img src="https://s0.wp.com/latex.php?latex=T+%3A+V+%5Crightarrow+V&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T+%3A+V+%5Crightarrow+V&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T+%3A+V+%5Crightarrow+V&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T : V &#92;rightarrow V" class="latex" /> from that space to itself. An <em>eigenvector</em> v of T is a nonzero element of V such that <img src="https://s0.wp.com/latex.php?latex=T%28v%29+%3D+Tv+%3D+%5Clambda+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T%28v%29+%3D+Tv+%3D+%5Clambda+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T%28v%29+%3D+Tv+%3D+%5Clambda+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T(v) = Tv = &#92;lambda v" class="latex" /> for some <img src="https://s0.wp.com/latex.php?latex=%5Clambda+%5Cin+%5Cmathbb%7BF%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Clambda+%5Cin+%5Cmathbb%7BF%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Clambda+%5Cin+%5Cmathbb%7BF%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;lambda &#92;in &#92;mathbb{F}" class="latex" /> &#8211; that is, the result of applying the map T to v is a scaled version of v itself. 
The scale factor λ is the corresponding <em>eigenvalue</em>.</p>
<p>Now when we&#8217;re iterating the map T multiple times, eigenvectors of T behave in a very simple way under the iterated map: we know that applying T to v gives back a scaled version of v, and then linearity of T allows us to conclude that:<br />
<img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T%5E2%28v%29+%3D+T%28T%28v%29%29+%3D+T%28Tv%29+%3D+T%28%5Clambda+v%29+%3D+%5Clambda+T%28v%29+%3D+%5Clambda%5E2+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T%5E2%28v%29+%3D+T%28T%28v%29%29+%3D+T%28Tv%29+%3D+T%28%5Clambda+v%29+%3D+%5Clambda+T%28v%29+%3D+%5Clambda%5E2+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T%5E2%28v%29+%3D+T%28T%28v%29%29+%3D+T%28Tv%29+%3D+T%28%5Clambda+v%29+%3D+%5Clambda+T%28v%29+%3D+%5Clambda%5E2+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle T^2(v) = T(T(v)) = T(Tv) = T(&#92;lambda v) = &#92;lambda T(v) = &#92;lambda^2 v" class="latex" /><br />
and more generally <img src="https://s0.wp.com/latex.php?latex=T%5Ek%28v%29+%3D+%5Clambda%5Ek+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T%5Ek%28v%29+%3D+%5Clambda%5Ek+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T%5Ek%28v%29+%3D+%5Clambda%5Ek+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T^k(v) = &#92;lambda^k v" class="latex" /> for any <img src="https://s0.wp.com/latex.php?latex=k+%5Cin+%5Cmathbb%7BN%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=k+%5Cin+%5Cmathbb%7BN%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=k+%5Cin+%5Cmathbb%7BN%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="k &#92;in &#92;mathbb{N}" class="latex" />.</p>
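<p>As a quick numerical sanity check (my own toy example, not from the post), here is the <img src="https://s0.wp.com/latex.php?latex=T%5Ek%28v%29+%3D+%5Clambda%5Ek+v&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T^k(v) = &#92;lambda^k v" class="latex" /> behaviour in plain Python, iterating a small symmetric 2&times;2 map on one of its eigenvectors:</p>

```python
# Sketch: iterate a linear map on one of its eigenvectors and watch
# the eigenvalue get raised to the k-th power. Toy example of my own.

def apply(T, v):
    """Apply a 2x2 matrix T (given as a list of rows) to a 2-vector v."""
    return [T[0][0]*v[0] + T[0][1]*v[1],
            T[1][0]*v[0] + T[1][1]*v[1]]

# A symmetric map; its eigenvectors are (1, 1) and (1, -1).
T = [[0.75, 0.25],
     [0.25, 0.75]]

v = [1.0, -1.0]   # eigenvector with eigenvalue 0.5
lam = 0.5

x = v
for k in range(1, 6):
    x = apply(T, x)
    # T^k(v) = lambda^k * v, component by component
    assert all(abs(x[i] - lam**k * v[i]) < 1e-12 for i in range(2))
```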
<p>The best possible case is that we find lots of eigenvectors &#8211; enough to fully characterize the map purely by what it does on its eigenvectors. For example, if V is a finite-dimensional vector space with <img src="https://s0.wp.com/latex.php?latex=%5Cmathrm%7Bdim%7D%28V%29%3Dn&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cmathrm%7Bdim%7D%28V%29%3Dn&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cmathrm%7Bdim%7D%28V%29%3Dn&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;mathrm{dim}(V)=n" class="latex" />, then if we can find n linearly independent eigenvectors, we&#8217;re golden: we can select a basis entirely made of eigenvectors, and then written in that basis, T will have a very simple form: we will have <img src="https://s0.wp.com/latex.php?latex=T+%3D+Q+%5CLambda+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T+%3D+Q+%5CLambda+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T+%3D+Q+%5CLambda+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T = Q &#92;Lambda Q^{-1}" class="latex" /> where <img src="https://s0.wp.com/latex.php?latex=%5CLambda+%3D+%5Cmathrm%7Bdiag%7D%28%5Clambda_1%2C+%5Cdots%2C+%5Clambda_n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5CLambda+%3D+%5Cmathrm%7Bdiag%7D%28%5Clambda_1%2C+%5Cdots%2C+%5Clambda_n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5CLambda+%3D+%5Cmathrm%7Bdiag%7D%28%5Clambda_1%2C+%5Cdots%2C+%5Clambda_n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;Lambda = &#92;mathrm{diag}(&#92;lambda_1, &#92;dots, &#92;lambda_n)" class="latex" /> for some Q (whose columns contain n linearly independent eigenvectors of 
T).</p>
<p>That is, in the right basis (made of eigenvectors), T is just a diagonal matrix, which is to say, a (non-uniform) scale. This makes analysis of repeated applications of T easy, since:<br />
<img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T%5E2+%3D+Q+%5CLambda+Q%5E%7B-1%7D+Q+%5CLambda+Q%5E%7B-1%7D+%3D+Q+%5CLambda%5E2+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T%5E2+%3D+Q+%5CLambda+Q%5E%7B-1%7D+Q+%5CLambda+Q%5E%7B-1%7D+%3D+Q+%5CLambda%5E2+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T%5E2+%3D+Q+%5CLambda+Q%5E%7B-1%7D+Q+%5CLambda+Q%5E%7B-1%7D+%3D+Q+%5CLambda%5E2+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle T^2 = Q &#92;Lambda Q^{-1} Q &#92;Lambda Q^{-1} = Q &#92;Lambda^2 Q^{-1}" class="latex" /><br />
and in general<br />
<img src="https://s0.wp.com/latex.php?latex=T%5Ek+%3D+Q+%5CLambda%5Ek+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T%5Ek+%3D+Q+%5CLambda%5Ek+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T%5Ek+%3D+Q+%5CLambda%5Ek+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T^k = Q &#92;Lambda^k Q^{-1}" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=%5CLambda%5Ek+%3D+%5Cmathrm%7Bdiag%7D%28%5Clambda_1%5Ek%2C+%5Cdots%2C+%5Clambda_n%5Ek%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5CLambda%5Ek+%3D+%5Cmathrm%7Bdiag%7D%28%5Clambda_1%5Ek%2C+%5Cdots%2C+%5Clambda_n%5Ek%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5CLambda%5Ek+%3D+%5Cmathrm%7Bdiag%7D%28%5Clambda_1%5Ek%2C+%5Cdots%2C+%5Clambda_n%5Ek%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;Lambda^k = &#92;mathrm{diag}(&#92;lambda_1^k, &#92;dots, &#92;lambda_n^k)" class="latex" />: viewed in the basis made of eigenvectors, repeated application of T is just repeated scaling, and behaviour over lots of iterations ultimately just hinges on what the eigenvalues are.</p>
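<p>The same toy 2&times;2 example (again mine, purely illustrative) can be used to check the <img src="https://s0.wp.com/latex.php?latex=T%5Ek+%3D+Q+%5CLambda%5Ek+Q%5E%7B-1%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T^k = Q &#92;Lambda^k Q^{-1}" class="latex" /> identity numerically, computing the k-th power both by repeated multiplication and through the eigendecomposition:</p>

```python
# Sketch: compute T^k two ways, by repeated multiplication and via the
# eigendecomposition T = Q Lambda Q^{-1}. Toy 2x2 example of my own.
import math

def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[0.75, 0.25],
     [0.25, 0.75]]

s = 1.0 / math.sqrt(2.0)
Q    = [[s,  s], [s, -s]]   # orthonormal eigenvectors as columns
Qinv = [[s,  s], [s, -s]]   # Q is orthogonal and symmetric here, so Q^{-1} = Q^T = Q
lams = [1.0, 0.5]           # the corresponding eigenvalues

k = 10

# T^k by repeated multiplication:
Tk = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(k):
    Tk = matmul(Tk, T)

# T^k via the eigendecomposition: Q diag(lambda_i^k) Q^{-1}
Lk  = [[lams[0]**k, 0.0], [0.0, lams[1]**k]]
Tk2 = matmul(matmul(Q, Lk), Qinv)

assert all(abs(Tk[i][j] - Tk2[i][j]) < 1e-12 for i in range(2) for j in range(2))
```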
<p>Not every matrix can be written that way; ones that can are called <em>diagonalizable</em>. But there is a very important class of transforms (and now we allow infinite-dimensional spaces again) that is guaranteed to be diagonalizable: so called self-adjoint transforms. In the finite-dimensional real case, these correspond to symmetric matrices (matrices A such that <img src="https://s0.wp.com/latex.php?latex=A+%3D+A%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=A+%3D+A%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=A+%3D+A%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="A = A^T" class="latex" />). Such transforms are guaranteed to be diagonalizable, and even better, their eigenvectors are guaranteed to be pairwise orthogonal to each other, meaning the transform Q is an orthogonal matrix (a rotation or reflection), which among other things makes the whole process numerically quite well-behaved.</p>
<p>As a small aside, if you&#8217;ve ever wondered why iterative solvers for linear systems usually require symmetric (or, in the complex case, Hermitian) matrices: this is why. If a matrix is symmetric, it is diagonalizable, which allows us to build an iterative process to solve linear equations that we can analyze easily and <em>know</em> will converge (if we do it right). It&#8217;s not that we can&#8217;t possibly do anything iterative on non-symmetric linear systems; it just becomes a lot trickier to make any guarantees, especially if we allow arbitrary matrices. (Which could be quite pathological.)</p>
<p>Anyway, that&#8217;s a bit of background on eigendecompositions of linear maps. But what does any of this have to do with filtering?</p>
<h3>Enter convolution</h3>
<p>Convolution itself is a <em>bilinear map</em>, meaning it&#8217;s linear in both arguments. That means that if we fix either of the arguments, we get a linear map. Suppose we have a FIR filter f given by its coefficients <img src="https://s0.wp.com/latex.php?latex=%28f_0%2C+f_1%2C+%5Cdots%2C+f_%7Bm-1%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%28f_0%2C+f_1%2C+%5Cdots%2C+f_%7Bm-1%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%28f_0%2C+f_1%2C+%5Cdots%2C+f_%7Bm-1%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="(f_0, f_1, &#92;dots, f_{m-1})" class="latex" />. Then we can define an associated linear map <img src="https://s0.wp.com/latex.php?latex=T_f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T_f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T_f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T_f" class="latex" /> on a suitable space, say something like <img src="https://s0.wp.com/latex.php?latex=T_f+%3A+%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29+%5Crightarrow+%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T_f+%3A+%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29+%5Crightarrow+%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T_f+%3A+%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29+%5Crightarrow+%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T_f : &#92;ell^&#92;infty(&#92;mathbb{C}) &#92;rightarrow &#92;ell^&#92;infty(&#92;mathbb{C})" class="latex" /> (writing <img src="https://s0.wp.com/latex.php?latex=%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" 
srcset="https://s0.wp.com/latex.php?latex=%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cell%5E%5Cinfty%28%5Cmathbb%7BC%7D%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;ell^&#92;infty(&#92;mathbb{C})" class="latex" /> for the set of bounded sequences of complex numbers) by setting<br />
<img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T_f%28x%29+%3D+T_f+x+%3A%3D+f+%2A+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T_f%28x%29+%3D+T_f+x+%3A%3D+f+%2A+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+T_f%28x%29+%3D+T_f+x+%3A%3D+f+%2A+x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle T_f(x) = T_f x := f * x" class="latex" />.</p>
<p>If this is all a bit dense on notation for you, all I&#8217;m doing here is holding one of the two arguments to the convolution operator constant, and trying to at least specify what set our map is working on (in this case, bounded sequences of complex numbers).</p>
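<p>In code, fixing the filter argument of the convolution looks like this (a sketch of my own, working on finite lists with zero padding rather than genuinely infinite sequences), and we can verify that the resulting map really is linear in the signal:</p>

```python
# Sketch: a FIR filter as a linear map on sequences. For simplicity this
# works on finite lists, treating the signal as zero outside its range.

def convolve(f, x):
    """(f * x)[n] = sum_k f[k] * x[n-k], with x zero-padded at the ends."""
    m = len(f)
    return [sum(f[k] * x[n-k] for k in range(m) if 0 <= n-k < len(x))
            for n in range(len(x) + m - 1)]

f = [0.25, 0.5, 0.25]      # a simple smoothing filter
x = [0.0, 1.0, 0.0, 0.0]   # unit impulse at n = 1
y = [0.0, 0.0, 1.0, 0.0]   # unit impulse at n = 2

# Linearity in the signal argument: T_f(2x + 3y) = 2 T_f(x) + 3 T_f(y)
lhs = convolve(f, [2*a + 3*b for a, b in zip(x, y)])
rhs = [2*a + 3*b for a, b in zip(convolve(f, x), convolve(f, y))]
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```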
<p>And now we&#8217;re just about ready for the punchline: we now have a linear map from a set to itself, although in this case we&#8217;re dealing with infinite sequences, not finite ones. Luckily the notions of eigenvectors (eigensequences in this case) and eigenvalues generalize just fine. What&#8217;s even better is that for <em>all</em> discrete convolutions, we get a full complement of eigensequences, and we know exactly what they are. Namely, define the family of sequences <img src="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="e_&#92;omega" class="latex" /> by:</p>
<p><img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+e_%5Comega%5Bn%5D+%3D+%5Cexp%28i+%5Comega+n%29+%3D+%5Ccos%28%5Comega+n%29+%2B+i+%5Csin%28%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+e_%5Comega%5Bn%5D+%3D+%5Cexp%28i+%5Comega+n%29+%3D+%5Ccos%28%5Comega+n%29+%2B+i+%5Csin%28%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+e_%5Comega%5Bn%5D+%3D+%5Cexp%28i+%5Comega+n%29+%3D+%5Ccos%28%5Comega+n%29+%2B+i+%5Csin%28%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle e_&#92;omega[n] = &#92;exp(i &#92;omega n) = &#92;cos(&#92;omega n) + i &#92;sin(&#92;omega n)" class="latex" /></p>
<p>That&#8217;s a cosine wave with frequency &omega; in the real part and the corresponding sine wave in the imaginary part, if you are so inclined, although I much prefer to stick with the complex exponentials, especially when doing algebra (it makes things easier). Anyway, if we apply our FIR filter f to that signal, we get (this is just expanding out the definition of discrete convolution for our filter and input signal, using the convention that unqualified summation is over all values of k where the sum is well-defined)</p>
<p><img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%28T_f+e_%5Comega%29%5Bn%5D+%3D+%5Csum_k+f_k+%5Cexp%28i+%5Comega+%28n-k%29%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%28T_f+e_%5Comega%29%5Bn%5D+%3D+%5Csum_k+f_k+%5Cexp%28i+%5Comega+%28n-k%29%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%28T_f+e_%5Comega%29%5Bn%5D+%3D+%5Csum_k+f_k+%5Cexp%28i+%5Comega+%28n-k%29%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle (T_f e_&#92;omega)[n] = &#92;sum_k f_k &#92;exp(i &#92;omega (n-k))" class="latex" /></p>
<p><img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%3D+%5Cexp%28i+%5Comega+n%29+%5Cunderbrace%7B%5Csum_k+f_k+%5Cexp%28-i+%5Comega+k%29%7D_%7B%3D%3A%5Chat%7Bf%7D%28%5Comega%29%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%3D+%5Cexp%28i+%5Comega+n%29+%5Cunderbrace%7B%5Csum_k+f_k+%5Cexp%28-i+%5Comega+k%29%7D_%7B%3D%3A%5Chat%7Bf%7D%28%5Comega%29%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%3D+%5Cexp%28i+%5Comega+n%29+%5Cunderbrace%7B%5Csum_k+f_k+%5Cexp%28-i+%5Comega+k%29%7D_%7B%3D%3A%5Chat%7Bf%7D%28%5Comega%29%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle = &#92;exp(i &#92;omega n) &#92;underbrace{&#92;sum_k f_k &#92;exp(-i &#92;omega k)}_{=:&#92;hat{f}(&#92;omega)}" class="latex" /><br />
<img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%3D+%5Chat%7Bf%7D%28%5Comega%29+%5Cexp%28i+%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%3D+%5Chat%7Bf%7D%28%5Comega%29+%5Cexp%28i+%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%3D+%5Chat%7Bf%7D%28%5Comega%29+%5Cexp%28i+%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle = &#92;hat{f}(&#92;omega) &#92;exp(i &#92;omega n)" class="latex" /></p>
<p>There&#8217;s very little that happens here. The first line is just expanding the definition; then in the second line we use the properties of the exponential function (and the linearity of sums) to pull out the constant factor of <img src="https://s0.wp.com/latex.php?latex=%5Cexp%28i+%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cexp%28i+%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cexp%28i+%5Comega+n%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;exp(i &#92;omega n)" class="latex" />. And it turns out the entire rest of the formula doesn&#8217;t depend on n at all, so it turns into a constant factor for the whole sequence. It does depend on f and &omega;, so we label it <img src="https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;hat{f}(&#92;omega)" class="latex" />. 
The final line states exactly what we wanted, namely that the result of applying <img src="https://s0.wp.com/latex.php?latex=T_f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=T_f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=T_f&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="T_f" class="latex" /> to <img src="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="e_&#92;omega" class="latex" /> is just a scaled copy of <img src="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="e_&#92;omega" class="latex" /> itself&mdash;we have an eigensequence (with eigenvalue <img src="https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;hat{f}(&#92;omega)" class="latex" />).</p>
<p>Also note that the formula for the eigenvalue isn&#8217;t particularly scary either in our case, since we&#8217;re dealing with a FIR filter f, meaning it&#8217;s a regular finite sum:</p>
<p><img src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%5Chat%7Bf%7D%28%5Comega%29+%3D+%5Csum_%7Bk%3D0%7D%5E%7Bm-1%7D+f_k+%5Cexp%28-i+%5Comega+k%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%5Chat%7Bf%7D%28%5Comega%29+%3D+%5Csum_%7Bk%3D0%7D%5E%7Bm-1%7D+f_k+%5Cexp%28-i+%5Comega+k%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%5Cdisplaystyle+%5Chat%7Bf%7D%28%5Comega%29+%3D+%5Csum_%7Bk%3D0%7D%5E%7Bm-1%7D+f_k+%5Cexp%28-i+%5Comega+k%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="&#92;displaystyle &#92;hat{f}(&#92;omega) = &#92;sum_{k=0}^{m-1} f_k &#92;exp(-i &#92;omega k)" class="latex" /></p>
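<p>This is easy to check numerically (my own sketch, using the standard library&#8217;s <code>cmath</code>): filter a window of <img src="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="e_&#92;omega" class="latex" /> and, away from the edges, every output sample comes out as exactly <img src="https://s0.wp.com/latex.php?latex=%5Chat%7Bf%7D%28%5Comega%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{f}(&#92;omega)" class="latex" /> times the input sample:</p>

```python
# Sketch: the DTFT value f_hat(omega) is the eigenvalue of the filter on the
# complex exponential e_omega, up to edge effects on a finite window.
import cmath

def dtft(f, w):
    """f_hat(omega) = sum_k f[k] * exp(-i omega k); a finite sum for FIR f."""
    return sum(fk * cmath.exp(-1j * w * k) for k, fk in enumerate(f))

f = [0.25, 0.5, 0.25]
w = 0.7                                          # some arbitrary frequency
N = 32
e = [cmath.exp(1j * w * n) for n in range(N)]    # finite window of e_omega

# Filter the window; only interior samples n >= len(f)-1 see the full filter.
out = [sum(f[k] * e[n-k] for k in range(len(f))) for n in range(len(f)-1, N)]
lam = dtft(f, w)

# Each interior output sample equals f_hat(omega) * e_omega[n]:
assert all(abs(out[i] - lam * e[n]) < 1e-12
           for i, n in enumerate(range(len(f)-1, N)))
```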
<p>Oh, and there&#8217;s one more minor detail I&#8217;ve neglected to mention so far: that&#8217;s just the <a href="https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform">discrete-time Fourier transform</a> (DTFT, not to be confused with the DFT, although they&#8217;re related) of f. Yup, we started out with a digital FIR filter, asked what happens when we iterate it a bunch, did a brief detour into linear algebra, and ended up in Fourier theory.</p>
<p>Long story short, if you want to know whether a linear digital filter is stable under repeated application, you want to look at its eigenvalues, which in turn are just given by its frequency response. In particular, for any given frequency &omega;, we have exactly three options:</p>
<ul>
<li><img src="https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3D+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3D+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3D+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="|&#92;hat{f}(&#92;omega)| = 1" class="latex" />. In this case, the amplitude at that frequency is preserved exactly under repeated application.</li>
<li><img src="https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3C+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3C+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3C+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="|&#92;hat{f}(&#92;omega)| &lt; 1" class="latex" />. If the filter dampens a given frequency, no matter how little, then the amplitude of the signal at that frequency will eventually be driven to zero. This is stable but causes the signal to degrade. Typical interpolation filters tend to do this for the higher frequencies, which is why signals tend to lose such frequencies (in visual terms, get blurrier) over time.</li>
<li><img src="https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3E+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3E+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%7C%5Chat%7Bf%7D%28%5Comega%29%7C+%3E+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="|&#92;hat{f}(&#92;omega)| &gt; 1" class="latex" />. If a filter amplifies any frequency by more than 1, even by just a tiny bit, then any signal containing a nonzero amount of that frequency will eventually blow up.</li>
</ul>
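<p>The three cases can be demonstrated with a couple of hypothetical example filters (these are my own picks for illustration, not the filters from Casey&#8217;s post):</p>

```python
# Sketch of the three cases, using made-up example filters: a 2-tap box
# filter and a mild sharpening filter invented for this demo.
import cmath

def mag_response(f, w):
    """|f_hat(omega)|, the magnitude of the filter's frequency response."""
    return abs(sum(fk * cmath.exp(-1j * w * k) for k, fk in enumerate(f)))

box     = [0.5, 0.5]            # averaging filter
sharpen = [-0.25, 1.5, -0.25]   # mild sharpening filter

# Case |f_hat(omega)| = 1: both filters pass DC (omega = 0) unchanged, forever.
assert abs(mag_response(box, 0.0) - 1.0) < 1e-12
assert abs(mag_response(sharpen, 0.0) - 1.0) < 1e-12

# Case |f_hat(omega)| < 1: the box filter damps omega = pi/2, so after k
# applications the amplitude there, |f_hat(omega)|^k, decays toward zero.
g = mag_response(box, cmath.pi / 2)
assert g < 1.0 and g**100 < 1e-15

# Case |f_hat(omega)| > 1: the sharpening filter amplifies omega = pi,
# so that frequency blows up under iteration.
h = mag_response(sharpen, cmath.pi)
assert h > 1.0 and h**100 > 1e10
```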
<p>The proof for all three cases is simply observing that k-fold application of the filter f to the signal <img src="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="e_&#92;omega" class="latex" /> will result in the signal <img src="https://s0.wp.com/latex.php?latex=%28%5Chat%7Bf%7D%28%5Comega%29%29%5Ek+e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" srcset="https://s0.wp.com/latex.php?latex=%28%5Chat%7Bf%7D%28%5Comega%29%29%5Ek+e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002 1x, https://s0.wp.com/latex.php?latex=%28%5Chat%7Bf%7D%28%5Comega%29%29%5Ek+e_%5Comega&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002&#038;zoom=4.5 4x" alt="(&#92;hat{f}(&#92;omega))^k e_&#92;omega" class="latex" />. To generalize this to a wider class of signals (not just complex exponentials) we would need to represent said signals as a sum of complex exponentials, which is exactly what Fourier series are all about; so it can be done, but I won&#8217;t bother with the details here, since they&#8217;re beyond the scope of this post.</p>
<p>Therefore, all you need to know about the stability of a given filter under repeated application is contained in its Fourier transform. I&#8217;ll try to do another post soon that shows the Fourier transforms of the filters Casey mentioned (or their magnitude response anyway, which is what we care about) and touches on other aspects such as the effect of rounding and quantization, but we&#8217;re at a good stopping point right now, so I&#8217;ll end this post here.</p>
]]></html></oembed>