<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[Azimuth]]></provider_name><provider_url><![CDATA[https://johncarlosbaez.wordpress.com]]></provider_url><author_name><![CDATA[John Baez]]></author_name><author_url><![CDATA[https://johncarlosbaez.wordpress.com/author/johncarlosbaez/]]></author_url><title><![CDATA[Network Theory (Part&nbsp;20)]]></title><type><![CDATA[link]]></type><html><![CDATA[<p><i>guest post by <b><a href="http://www.azimuthproject.org/azimuth/show/Jacob+Biamonte">Jacob Biamonte</a></b></i></p>
<p>We&#8217;re in the middle of a battle: in addition to our typical man versus equation scenario, it&#8217;s a battle between two theories.  For those good patrons following the <a href="http://math.ucr.edu/home/baez/networks/">network theory series</a>, you know the two opposing forces well.  It&#8217;s our old friends, at it again:</p>
<div align="center">
<i> Stochastic Mechanics vs Quantum Mechanics! </i>
</div>
<p>Today we&#8217;re reporting live from a crossroads, and we&#8217;re facing a skirmish that gives rise to what some might consider a paradox.  Let me sketch the main thesis before we get our hands dirty with the gory details.   </p>
<p>First I need to tell you that the battle takes place at the intersection of stochastic and quantum mechanics.  We recall from <a href="https://johncarlosbaez.wordpress.com/2011/11/04/network-theory-part-16/">Part 16</a> that there is a class of operators called &#8216;Dirichlet operators&#8217; that are valid Hamiltonians for both stochastic and quantum mechanics.  In other words, you can use them to generate time evolution both for old-fashioned random processes and for quantum processes! </p>
<p>Staying inside this class allows the theories to fight it out on the same turf.  We will be considering a special subclass of Dirichlet operators, which we call &#8216;irreducible Dirichlet operators&#8217;.  These are the ones where, starting in any state in our favorite basis of states, we have a nonzero chance of winding up in any other.  When considering this subclass, we found something interesting: </p>
<p><b>Thesis.</b> Let <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> be an irreducible Dirichlet operator with <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> eigenstates.  In stochastic mechanics, there is only one valid state that is an eigenvector of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' />: the unique so-called &#8216;Perron&#8211;Frobenius state&#8217;.  The other <img src='https://s0.wp.com/latex.php?latex=n-1&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n-1' title='n-1' class='latex' /> eigenvectors are forbidden states of a stochastic system: the stochastic system is either in the Perron&#8211;Frobenius state, or in a superposition of at least two eigenvectors.  In quantum mechanics, all <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> eigenstates of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> are valid states.  </p>
<p>This might sound like a riddle, but as we&#8217;ll prove today, riddle or not, it&#8217;s a fact.  Whether it makes sense, well, that&#8217;s another issue.  As John might have said, it&#8217;s like a bone kicked down from the gods up above: we can either choose to chew on it, or let it be.  Today we are going to do a bit of chewing.  </p>
<p>One of the many problems with this post is that John had <i>a nut loose on his keyboard</i>.  It was not broken!  I&#8217;m saying he wrote enough blog posts on this stuff to turn them into a book.  I&#8217;m supposed to be compiling the blog articles into a massive LaTeX file, but I wrote this instead.  </p>
<p>Another problem is that this post somehow seems to use just about everything said before, so I&#8217;m going to have to do my best to make things self-contained.  Please bear with me as I try to recap what&#8217;s been done.  For those of you familiar with the series, a good portion of the background for what we&#8217;ll cover today can be found in <a href="https://johncarlosbaez.wordpress.com/2011/10/09/network-theory-part-12/">Part 12</a> and <a href="https://johncarlosbaez.wordpress.com/2011/11/04/network-theory-part-16/">Part 16</a>.</p>
<h3> At the intersection of two theories </h3>
<p>As John has mentioned in his <a href="http://math.ucr.edu/home/baez/prob/">recent talks</a>, the <i>typical</i> view of how quantum mechanics and probability theory come into contact looks like this:</p>
<div align="center">
<img width="400" src="https://i1.wp.com/math.ucr.edu/home/baez/networks/quantum-vs-probability-theory-I.png" />
</div>
<p>The idea is that quantum theory generalizes classical probability theory by considering observables that don&#8217;t commute.</p>
<p>That&#8217;s perfectly valid, but we&#8217;ve been exploring an alternative view in this series.  Here quantum theory doesn&#8217;t subsume probability theory, but they intersect:</p>
<div align="center">
<img width="400" src="https://i0.wp.com/math.ucr.edu/home/baez/networks/quantum-vs-probability-theory-III.png" />
</div>
<p>What goes in the middle you might ask?  As odd as it might sound at first, John showed in <a href="https://johncarlosbaez.wordpress.com/2011/11/04/network-theory-part-16/">Part 16</a> that electrical circuits made of resistors constitute the intersection!    </p>
<div align="center">
<img width="400" src="https://i1.wp.com/math.ucr.edu/home/baez/networks/quantum-vs-probability-theory-IV.png" />
</div>
<p>For example, a circuit like this:</p>
<div align="center">
<img width="400" src="https://i0.wp.com/math.ucr.edu/home/baez/networks/resistor-graph-5.png" />
</div>
<p>gives rise to a Hamiltonian <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> that&#8217;s good both for stochastic mechanics and quantum mechanics.   Indeed, he found that the power dissipated by a circuit made of resistors is related to the familiar quantum theory concept known as the expectation value of the Hamiltonian!</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Ctextrm%7Bpower%7D+%3D+-2+%5Clangle+%5Cpsi%2C+H+%5Cpsi+%5Crangle+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;textrm{power} = -2 &#92;langle &#92;psi, H &#92;psi &#92;rangle ' title='&#92;textrm{power} = -2 &#92;langle &#92;psi, H &#92;psi &#92;rangle ' class='latex' /> </p>
<p>Oh&#8212;and you might think we made a mistake and wrote our Ω (ohm) symbols upside down.  We didn&#8217;t.  It happens that ℧ is the symbol for a ‘mho’&#8212;a unit of conductance that’s the reciprocal of an ohm. Check out <a href="https://johncarlosbaez.wordpress.com/2011/11/04/network-theory-part-16/">Part 16</a> for the details.  </p>
<h3> Stochastic mechanics versus quantum mechanics </h3>
<p>Let&#8217;s recall how states, time evolution, symmetries and observables work in the two theories.     Today we&#8217;ll fix a basis for our vector space of states, and we&#8217;ll assume it&#8217;s finite-dimensional so that all vectors have <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> components over either the complex numbers <img src='https://s0.wp.com/latex.php?latex=%5Cmathbb%7BC%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;mathbb{C}' title='&#92;mathbb{C}' class='latex' /> or the reals <img src='https://s0.wp.com/latex.php?latex=%5Cmathbb%7BR%7D.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;mathbb{R}.' title='&#92;mathbb{R}.' class='latex' />   In other words, we&#8217;ll treat our space as either <img src='https://s0.wp.com/latex.php?latex=%5Cmathbb%7BC%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;mathbb{C}^n' title='&#92;mathbb{C}^n' class='latex' /> or <img src='https://s0.wp.com/latex.php?latex=%5Cmathbb%7BR%7D%5En.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;mathbb{R}^n.' title='&#92;mathbb{R}^n.' class='latex' />  In this fashion, linear operators that map such spaces to themselves will be represented as square matrices.  </p>
<p>Vectors will be written as <img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i' title='&#92;psi_i' class='latex' /> where the index <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' /> runs from 1 to <img src='https://s0.wp.com/latex.php?latex=n%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n,' title='n,' class='latex' /> and we think of each choice of the index as a <b>state</b> of our system&#8212;but since we&#8217;ll be using that word in other ways too, let&#8217;s call it a <b>configuration</b>.  It&#8217;s just a basic way our system can be.</p>
<h4> States </h4>
<p>Besides the configurations <img src='https://s0.wp.com/latex.php?latex=i+%3D+1%2C%5Cdots%2C+n%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i = 1,&#92;dots, n,' title='i = 1,&#92;dots, n,' class='latex' /> we have more general states that tell us the probability or amplitude of finding our system in one of these configurations:</p>
<p>&bull; <b>Stochastic states</b> are <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' />-tuples of nonnegative real numbers:  </p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i+%5Cin+%5Cmathbb%7BR%7D%5E%2B+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i &#92;in &#92;mathbb{R}^+ ' title='&#92;psi_i &#92;in &#92;mathbb{R}^+ ' class='latex' /></p>
<p>The probability of finding the system in the <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' />th configuration is defined to be <img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i.' title='&#92;psi_i.' class='latex' />  For these probabilities to sum to one, <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> needs to be normalized like this: </p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+%5Cpsi_i+%3D+1+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i &#92;psi_i = 1 } ' title='&#92;displaystyle{ &#92;sum_i &#92;psi_i = 1 } ' class='latex' /></p>
<p>or in the notation we&#8217;re using in these articles:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Clangle+%5Cpsi+%5Crangle+%3D+1+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;langle &#92;psi &#92;rangle = 1 } ' title='&#92;displaystyle{ &#92;langle &#92;psi &#92;rangle = 1 } ' class='latex' /></p>
<p>where we define</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Clangle+%5Cpsi+%5Crangle+%3D+%5Csum_i+%5Cpsi_i+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;langle &#92;psi &#92;rangle = &#92;sum_i &#92;psi_i } ' title='&#92;displaystyle{ &#92;langle &#92;psi &#92;rangle = &#92;sum_i &#92;psi_i } ' class='latex' /></p>
<p>&bull; <b>Quantum states</b> are <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' />-tuples of complex numbers:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i+%5Cin+%5Cmathbb%7BC%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i &#92;in &#92;mathbb{C} ' title='&#92;psi_i &#92;in &#92;mathbb{C} ' class='latex' /></p>
<p>The probability of finding the system in the <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' />th configuration is defined to be <img src='https://s0.wp.com/latex.php?latex=%7C%5Cpsi_i%7C%5E2.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|&#92;psi_i|^2.' title='|&#92;psi_i|^2.' class='latex' />  For these probabilities to sum to one, <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> needs to be normalized like this:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+%7C%5Cpsi_i%7C%5E2+%3D+1+%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i |&#92;psi_i|^2 = 1 }' title='&#92;displaystyle{ &#92;sum_i |&#92;psi_i|^2 = 1 }' class='latex' /></p>
<p>or in other words </p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Clangle+%5Cpsi%2C++%5Cpsi+%5Crangle+%3D+1+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;langle &#92;psi,  &#92;psi &#92;rangle = 1 ' title='&#92;langle &#92;psi,  &#92;psi &#92;rangle = 1 ' class='latex' /></p>
<p>where the <b>inner product</b> of two vectors <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> and <img src='https://s0.wp.com/latex.php?latex=%5Cphi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;phi' title='&#92;phi' class='latex' /> is defined by</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Clangle+%5Cpsi%2C+%5Cphi+%5Crangle+%3D+%5Csum_i+%5Coverline%7B%5Cpsi%7D_i+%5Cphi_i+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;langle &#92;psi, &#92;phi &#92;rangle = &#92;sum_i &#92;overline{&#92;psi}_i &#92;phi_i } ' title='&#92;displaystyle{ &#92;langle &#92;psi, &#92;phi &#92;rangle = &#92;sum_i &#92;overline{&#92;psi}_i &#92;phi_i } ' class='latex' /></p>
<p>Now, the usual way to turn a quantum state <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> into a stochastic state is to take the absolute value of each number <img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i' title='&#92;psi_i' class='latex' /> and then square it.  However, if the numbers <img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i' title='&#92;psi_i' class='latex' /> happen to be nonnegative, we can also turn <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> into a stochastic state simply by multiplying it by a number to ensure <img src='https://s0.wp.com/latex.php?latex=%5Clangle+%5Cpsi+%5Crangle+%3D+1.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;langle &#92;psi &#92;rangle = 1.' title='&#92;langle &#92;psi &#92;rangle = 1.' class='latex' /></p>
<p>This is very unorthodox, but it lets us evolve the same vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> either stochastically or quantum-mechanically, using the recipes I&#8217;ll describe next.  In physics jargon these correspond to evolution in &#8216;real time&#8217; and &#8216;imaginary time&#8217;.  But don&#8217;t ask me which is which: from a quantum viewpoint stochastic mechanics uses imaginary time, but from a stochastic viewpoint it&#8217;s the other way around! </p>
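<p>Here&#8217;s a small sketch of these two normalizations in Python with NumPy (the code is our own illustration, not anything from the series): the same nonnegative vector becomes a stochastic state when we divide by its sum, and a quantum state when we divide by its length.</p>

```python
import numpy as np

# A vector with nonnegative entries: it can be normalized either way.
psi = np.array([1.0, 2.0, 1.0])

# Stochastic state: divide by the sum, so <psi> = sum_i psi_i = 1.
stochastic = psi / psi.sum()

# Quantum state: divide by the l^2 norm, so <psi, psi> = sum_i |psi_i|^2 = 1.
quantum = psi / np.linalg.norm(psi)

print(stochastic)  # entries sum to 1
print(quantum)     # squares of entries sum to 1
```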
<h4> Time evolution </h4>
<p>Time evolution works similarly in stochastic and quantum mechanics, but with a few big differences:</p>
<p>&bull; In stochastic mechanics the state changes in time according to the <b>master equation</b>:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Cfrac%7Bd%7D%7Bd+t%7D+%5Cpsi%28t%29+%3D+H+%5Cpsi%28t%29+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;frac{d}{d t} &#92;psi(t) = H &#92;psi(t) } ' title='&#92;displaystyle{ &#92;frac{d}{d t} &#92;psi(t) = H &#92;psi(t) } ' class='latex' /></p>
<p>which has the solution</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cpsi%28t%29+%3D+%5Cexp%28t+H%29+%5Cpsi%280%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi(t) = &#92;exp(t H) &#92;psi(0) ' title='&#92;psi(t) = &#92;exp(t H) &#92;psi(0) ' class='latex' /></p>
<p>&bull;  In quantum mechanics the state changes in time according to <b>Schr&ouml;dinger&#8217;s equation</b>:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Cfrac%7Bd%7D%7Bd+t%7D+%5Cpsi%28t%29+%3D+-i+H+%5Cpsi%28t%29+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;frac{d}{d t} &#92;psi(t) = -i H &#92;psi(t) } ' title='&#92;displaystyle{ &#92;frac{d}{d t} &#92;psi(t) = -i H &#92;psi(t) } ' class='latex' /></p>
<p>which has the solution</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cpsi%28t%29+%3D+%5Cexp%28-i+t+H%29+%5Cpsi%280%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi(t) = &#92;exp(-i t H) &#92;psi(0) ' title='&#92;psi(t) = &#92;exp(-i t H) &#92;psi(0) ' class='latex' /></p>
<p>The operator <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is called the <b>Hamiltonian</b>.  The properties it must have depend on whether we&#8217;re doing stochastic mechanics or quantum mechanics:</p>
<p>&bull;  We need <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> to be <b>infinitesimal stochastic</b> for time evolution given by <img src='https://s0.wp.com/latex.php?latex=%5Cexp%28t+H%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;exp(t H)' title='&#92;exp(t H)' class='latex' /> to send stochastic states to stochastic states.  In other words, we need that (i) its columns sum to zero and (ii) its off-diagonal entries are real and nonnegative:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+H_%7Bi+j%7D%3D0+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i H_{i j}=0 } ' title='&#92;displaystyle{ &#92;sum_i H_{i j}=0 } ' class='latex' /></p>
<p><img src='https://s0.wp.com/latex.php?latex=i%5Cneq+j+%5CRightarrow+H_%7Bi+j%7D%5Cgeq+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i&#92;neq j &#92;Rightarrow H_{i j}&#92;geq 0 ' title='i&#92;neq j &#92;Rightarrow H_{i j}&#92;geq 0 ' class='latex' /></p>
<p>&bull; We need <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> to be <b>self-adjoint</b> for time evolution given by <img src='https://s0.wp.com/latex.php?latex=%5Cexp%28-i+t+H%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;exp(-i t H)' title='&#92;exp(-i t H)' class='latex' /> to send quantum states to quantum states.  So, we need</p>
<p><img src='https://s0.wp.com/latex.php?latex=H+%3D+H%5E%5Cdagger+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H = H^&#92;dagger ' title='H = H^&#92;dagger ' class='latex' /></p>
<p>where we recall that the adjoint of a matrix is the conjugate of its transpose:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%28H%5E%5Cdagger%29_%7Bi+j%7D+%3A%3D+%5Coverline%7BH%7D_%7Bj+i%7D+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ (H^&#92;dagger)_{i j} := &#92;overline{H}_{j i} } ' title='&#92;displaystyle{ (H^&#92;dagger)_{i j} := &#92;overline{H}_{j i} } ' class='latex' /></p>
<p>We are concerned with the case where the operator <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> generates both a valid quantum evolution and also a valid stochastic one:</p>
<p>&bull; <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is a <b>Dirichlet operator</b> if it&#8217;s both self-adjoint and infinitesimal stochastic.</p>
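<p>In code, checking that a matrix is a Dirichlet operator is just a matter of checking symmetry, column sums, and the signs of the off-diagonal entries.  A sketch in Python with NumPy (the helper name <code>is_dirichlet</code> is simply our own):</p>

```python
import numpy as np

def is_dirichlet(H, tol=1e-12):
    """Check the two Dirichlet conditions: self-adjoint and infinitesimal stochastic."""
    H = np.asarray(H, dtype=float)
    self_adjoint = np.allclose(H, H.T, atol=tol)             # H = H^dagger (real case)
    cols_sum_zero = np.allclose(H.sum(axis=0), 0.0, atol=tol)  # columns sum to zero
    off_diag = H - np.diag(np.diag(H))
    nonneg_off = (off_diag >= -tol).all()                    # off-diagonal entries >= 0
    return bool(self_adjoint and cols_sum_zero and nonneg_off)

H = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  0.0, -1.0]])
print(is_dirichlet(H))  # True
```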
<p>We will soon go further and zoom in on this intersection!  But first let&#8217;s finish our review.  </p>
<h4> Symmetries </h4>
<p>As John explained in <a href="https://johncarlosbaez.wordpress.com/2011/10/09/network-theory-part-12/">Part 12</a>, besides states and observables we need symmetries, which are transformations that map states to states.  These include the evolution operators which we only briefly discussed in the preceding subsection.</p>
<p>&bull;  A linear map <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> that sends quantum states to quantum states is called an  <b>isometry</b>, and isometries are characterized by this property:</p>
<p><img src='https://s0.wp.com/latex.php?latex=U%5E%5Cdagger+U+%3D+1&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U^&#92;dagger U = 1' title='U^&#92;dagger U = 1' class='latex' /></p>
<p>&bull; A linear map <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> that sends stochastic states to stochastic states is called a <b>stochastic operator</b>, and stochastic operators are characterized by these properties:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+U_%7Bi+j%7D+%3D+1+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i U_{i j} = 1 } ' title='&#92;displaystyle{ &#92;sum_i U_{i j} = 1 } ' class='latex' /></p>
<p>and </p>
<p><img src='https://s0.wp.com/latex.php?latex=U_%7Bi+j%7D+%5Cgeq+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U_{i j} &#92;geq 0 ' title='U_{i j} &#92;geq 0 ' class='latex' /></p>
<p>A notable difference here is that in our finite-dimensional situation, isometries are always invertible, but stochastic operators may not be!  If <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> is an <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> matrix that&#8217;s an isometry, <img src='https://s0.wp.com/latex.php?latex=U%5E%5Cdagger&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U^&#92;dagger' title='U^&#92;dagger' class='latex' /> is its inverse.  So, we also have</p>
<p><img src='https://s0.wp.com/latex.php?latex=U+U%5E%5Cdagger+%3D+1&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U U^&#92;dagger = 1' title='U U^&#92;dagger = 1' class='latex' /></p>
<p>and we say <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> is <b>unitary</b>.  But if <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> is stochastic, it may not have an inverse&#8212;and even if it does, its inverse is rarely stochastic.   This explains why in stochastic mechanics time evolution is often not reversible, while in quantum mechanics it always is.</p>
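<p>We can watch this asymmetry numerically.  For a Dirichlet operator <i>H</i>, the operator exp(<i>tH</i>) is stochastic, and its inverse exp(&#8722;<i>tH</i>) certainly exists, but that inverse has negative entries, so it is not stochastic.  A sketch in Python with NumPy (our own illustration):</p>

```python
import numpy as np

# A Dirichlet operator; exp(tH) is then a stochastic operator.
H = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])
w, V = np.linalg.eigh(H)
t = 1.0
U = V @ np.diag(np.exp(t * w)) @ V.T       # exp(tH): columns sum to 1, entries >= 0
U_inv = V @ np.diag(np.exp(-t * w)) @ V.T  # exp(-tH): the inverse of U

print(U.sum(axis=0))      # each column sums to 1
print((U_inv < 0).any())  # True: the inverse has negative entries, so it's not stochastic
```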
<p><b>Puzzle 1.</b> Suppose <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> is a stochastic <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> matrix whose inverse is stochastic.  What are the possibilities for <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' />?</p>
<p>It is quite hard for an operator to be a symmetry in both stochastic and quantum mechanics, especially in our finite-dimensional situation:</p>
<p><b>Puzzle 2.</b> Suppose <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' /> is an <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> matrix that is both stochastic and unitary.  What are the possibilities for <img src='https://s0.wp.com/latex.php?latex=U&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='U' title='U' class='latex' />?</p>
<h4> Observables </h4>
<p>&#8216;Observables&#8217; are real-valued quantities that can be measured, or predicted, given a specific theory.  </p>
<p>&bull; In quantum mechanics, an observable is given by a self-adjoint matrix <img src='https://s0.wp.com/latex.php?latex=O%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O,' title='O,' class='latex' /> and the expected value of the observable <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> in the quantum state <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> is</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Clangle+%5Cpsi+%2C+O+%5Cpsi+%5Crangle+%3D+%5Csum_%7Bi%2Cj%7D+%5Coverline%7B%5Cpsi%7D_i+O_%7Bi+j%7D+%5Cpsi_j+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;langle &#92;psi , O &#92;psi &#92;rangle = &#92;sum_{i,j} &#92;overline{&#92;psi}_i O_{i j} &#92;psi_j } ' title='&#92;displaystyle{ &#92;langle &#92;psi , O &#92;psi &#92;rangle = &#92;sum_{i,j} &#92;overline{&#92;psi}_i O_{i j} &#92;psi_j } ' class='latex' /> </p>
<p>&bull; In stochastic mechanics, an observable <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> has a value <img src='https://s0.wp.com/latex.php?latex=O_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O_i' title='O_i' class='latex' /> in each configuration <img src='https://s0.wp.com/latex.php?latex=i%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i,' title='i,' class='latex' /> and the expected value of the observable <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> in the stochastic state <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> is</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Clangle+O+%5Cpsi+%5Crangle+%3D+%5Csum_i+O_i+%5Cpsi_i+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;langle O &#92;psi &#92;rangle = &#92;sum_i O_i &#92;psi_i } ' title='&#92;displaystyle{ &#92;langle O &#92;psi &#92;rangle = &#92;sum_i O_i &#92;psi_i } ' class='latex' /></p>
<p>We can turn an observable in stochastic mechanics into an observable in quantum mechanics by making a diagonal matrix whose diagonal entries are the numbers <img src='https://s0.wp.com/latex.php?latex=O_i.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O_i.' title='O_i.' class='latex' /></p>
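<p>Here are the two expected values side by side in Python with NumPy (our own illustration).  When the quantum state is chosen so that the |&#968;<sub>i</sub>|&#178; match the stochastic probabilities, and the observable is the corresponding diagonal matrix, the two recipes agree:</p>

```python
import numpy as np

O_vals = np.array([1.0, 2.0, 3.0])  # observable: one real value per configuration
O = np.diag(O_vals)                 # the same observable as a diagonal matrix

# Stochastic expected value: <O psi> = sum_i O_i psi_i for a stochastic state psi.
p = np.array([0.5, 0.25, 0.25])
stoch_expect = O_vals @ p

# Quantum expected value: <psi, O psi> for a normalized quantum state psi.
psi = np.sqrt(p)                    # a quantum state with |psi_i|^2 = p_i
quant_expect = psi.conj() @ O @ psi

print(stoch_expect, quant_expect)   # both equal 1.75 here
```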
<h3> From graphs to matrices </h3>
<p>Back in <a href="https://johncarlosbaez.wordpress.com/2011/11/04/network-theory-part-16/">Part 16</a>, John explained how a graph with positive numbers on its edges gives rise to a Hamiltonian in both quantum and stochastic mechanics&#8212;in other words, a Dirichlet operator.</p>
<p>Here&#8217;s how this works.  We&#8217;ll consider <b><a href="http://en.wikipedia.org/wiki/Simple_graph#Simple_graph">simple graphs</a></b>: graphs without arrows on their edges, with at most one edge from one vertex to another, with no edges from a vertex to itself.  And we&#8217;ll only look at graphs with finitely many vertices and edges.  We&#8217;ll assume each edge is labelled by a positive number, like this:</p>
<div align="center"><img width="300" src="https://i0.wp.com/math.ucr.edu/home/baez/networks/complete-graph-5-zero.png" alt="" /></div>
<p>If our graph has <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> vertices, we can create an <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> matrix <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> where <img src='https://s0.wp.com/latex.php?latex=A_%7Bi+j%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A_{i j}' title='A_{i j}' class='latex' /> is the number labelling the edge from <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' /> to <img src='https://s0.wp.com/latex.php?latex=j%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='j,' title='j,' class='latex' /> if there is such an edge, and 0 if there&#8217;s not.  This matrix is symmetric, with real entries, so it&#8217;s self-adjoint.   So <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> is a valid Hamiltonian in quantum mechanics.  </p>
<p>How about stochastic mechanics?   Remember that a Hamiltonian <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> in stochastic mechanics needs to be &#8216;infinitesimal stochastic&#8217;.  So, its off-diagonal entries must be nonnegative, which is indeed true for our <img src='https://s0.wp.com/latex.php?latex=A%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A,' title='A,' class='latex' /> but also the sums of its columns must be zero, which is not true when our <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> is nonzero.</p>
<p>But now comes the best news you&#8217;ve heard all day: we can improve <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> to an infinitesimal stochastic operator in a way that is completely determined by <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> itself!  This is done by subtracting a diagonal matrix <img src='https://s0.wp.com/latex.php?latex=L&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='L' title='L' class='latex' /> whose entries are the sums of the columns of <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' />:</p>
<p><img src='https://s0.wp.com/latex.php?latex=L_%7Bi+i%7D+%3D+%5Csum_j+A_%7Bj+i%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='L_{i i} = &#92;sum_j A_{j i} ' title='L_{i i} = &#92;sum_j A_{j i} ' class='latex' /></p>
<p><img src='https://s0.wp.com/latex.php?latex=i+%5Cne+j+%5CRightarrow+L_%7Bi+j%7D+%3D+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i &#92;ne j &#92;Rightarrow L_{i j} = 0 ' title='i &#92;ne j &#92;Rightarrow L_{i j} = 0 ' class='latex' /></p>
<p>It&#8217;s easy to check that</p>
<p><img src='https://s0.wp.com/latex.php?latex=H+%3D+A+-+L+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H = A - L ' title='H = A - L ' class='latex' /></p>
<p>is still self-adjoint, but now also infinitesimal stochastic.  So, it&#8217;s a <i>Dirichlet operator</i>: a good Hamiltonian for <i>both</i> stochastic and quantum mechanics!  </p>
<p>In <a href="https://johncarlosbaez.wordpress.com/2011/11/04/network-theory-part-16/">Part 16</a>, we saw a bit more: <i>every</i> Dirichlet operator arises this way.  It&#8217;s easy to see.  You just take your Dirichlet operator and make a graph with one edge for each nonzero off-diagonal entry.  Then you label the edge with this entry.</p>
<p>So, Dirichlet operators are essentially the same as finite simple graphs with edges labelled by positive numbers. </p>
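<p>This recipe is short enough to spell out in code.  A sketch in Python with NumPy (our own illustration, on a made-up labelled graph with three vertices):</p>

```python
import numpy as np

# Symmetric adjacency matrix of a labelled simple graph on 3 vertices:
# edge labels are positive, the diagonal is zero.
A = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

# L is diagonal, with L_ii equal to the sum of column i of A.
L = np.diag(A.sum(axis=0))

# H = A - L is a Dirichlet operator: still symmetric, but now its columns sum to zero.
H = A - L

print(H.sum(axis=0))  # columns sum to zero
```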
<p>Now, a simple graph can consist of many separate &#8216;pieces&#8217;, called <b>components</b>.  Then there&#8217;s no way for a particle hopping along the edges to get from one component to another, either in stochastic or quantum mechanics.  So we might as well focus our attention on graphs with just one component.  These graphs are called &#8216;connected&#8217;.  In other words:</p>
<p><b>Definition.</b> A simple graph is <b>connected</b> if it is nonempty and there is a path of edges connecting any vertex to any other.  </p>
<p>Our goal today is to understand more about Dirichlet operators coming from connected graphs.  For this we need to learn the Perron&#8211;Frobenius theorem.  But let&#8217;s start with something easier.</p>
<h3> Perron&#8217;s theorem </h3>
<p>In quantum mechanics it&#8217;s good to think about observables that have positive expected values:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Clangle+%5Cpsi%2C+O+%5Cpsi+%5Crangle+%3E+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;langle &#92;psi, O &#92;psi &#92;rangle &gt; 0 ' title='&#92;langle &#92;psi, O &#92;psi &#92;rangle &gt; 0 ' class='latex' /></p>
<p>for every quantum state <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%5Cin+%5Cmathbb%7BC%7D%5En.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi &#92;in &#92;mathbb{C}^n.' title='&#92;psi &#92;in &#92;mathbb{C}^n.' class='latex' />  These are called <b>positive definite</b>.  But in stochastic mechanics it&#8217;s good to think about matrices that are positive in a more naive sense:</p>
<p><b>Definition.</b> An <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> real matrix <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> is <b>positive</b> if all its entries are positive: </p>
<p><img src='https://s0.wp.com/latex.php?latex=T_%7Bi+j%7D+%3E+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T_{i j} &gt; 0 ' title='T_{i j} &gt; 0 ' class='latex' /></p>
<p>for all <img src='https://s0.wp.com/latex.php?latex=1+%5Cle+i%2C+j+%5Cle+n.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='1 &#92;le i, j &#92;le n.' title='1 &#92;le i, j &#92;le n.' class='latex' /></p>
<p>Similarly: </p>
<p><b>Definition.</b> A vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%5Cin+%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi &#92;in &#92;mathbb{R}^n' title='&#92;psi &#92;in &#92;mathbb{R}^n' class='latex' /> is <b>positive</b> if all its components are positive:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cpsi_i+%3E+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_i &gt; 0 ' title='&#92;psi_i &gt; 0 ' class='latex' /></p>
<p>for all <img src='https://s0.wp.com/latex.php?latex=1+%5Cle+i+%5Cle+n.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='1 &#92;le i &#92;le n.' title='1 &#92;le i &#92;le n.' class='latex' /></p>
<p>We&#8217;ll also define <b>nonnegative</b> matrices and vectors in the same way, replacing <img src='https://s0.wp.com/latex.php?latex=%3E+0&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&gt; 0' title='&gt; 0' class='latex' /> by <img src='https://s0.wp.com/latex.php?latex=%5Cge+0.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;ge 0.' title='&#92;ge 0.' class='latex' />  A good example of a nonnegative vector is a stochastic state.</p>
<p>In 1907, Perron proved the following fundamental result about positive matrices:</p>
<p><b>Perron&#8217;s Theorem.</b> Given a positive square matrix <img src='https://s0.wp.com/latex.php?latex=T%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T,' title='T,' class='latex' /> there is a positive real number <img src='https://s0.wp.com/latex.php?latex=r%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r,' title='r,' class='latex' /> called the <b>Perron&#8211;Frobenius eigenvalue</b> of <img src='https://s0.wp.com/latex.php?latex=T%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T,' title='T,' class='latex' /> such that <img src='https://s0.wp.com/latex.php?latex=r&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r' title='r' class='latex' /> is an eigenvalue of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> and any other eigenvalue <img src='https://s0.wp.com/latex.php?latex=%5Clambda&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda' title='&#92;lambda' class='latex' /> of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> has <img src='https://s0.wp.com/latex.php?latex=%7C%5Clambda%7C+%3C+r.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|&#92;lambda| &lt; r.' title='|&#92;lambda| &lt; r.' class='latex' />  Moreover, there is a positive vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%5Cin+%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi &#92;in &#92;mathbb{R}^n' title='&#92;psi &#92;in &#92;mathbb{R}^n' class='latex' /> with <img src='https://s0.wp.com/latex.php?latex=T+%5Cpsi+%3D+r+%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T &#92;psi = r &#92;psi.' title='T &#92;psi = r &#92;psi.' class='latex' />  Any other vector with this property is a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' 
class='latex' />  Furthermore, any nonnegative vector that is an eigenvector of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> must be a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' class='latex' /></p>
<p>In other words, if <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> is positive, it has a unique eigenvalue with the largest absolute value.  This eigenvalue is positive.  Up to a constant factor, it has a unique eigenvector.  We can choose this eigenvector to be positive.  And then, up to a constant factor, it&#8217;s the <i>only</i> nonnegative eigenvector of <img src='https://s0.wp.com/latex.php?latex=T.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T.' title='T.' class='latex' /></p>
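<p>A standard way to see Perron&#8217;s theorem in action is power iteration: repeatedly applying a positive matrix to a positive starting vector converges, after normalization, to the Perron&#8211;Frobenius eigenvector.  A Python sketch, on a made-up positive matrix:</p>

```python
# Power iteration: for a positive matrix T, the normalized iterates of
# v -> T v converge to the (positive) Perron-Frobenius eigenvector.
T = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 2.0]]  # any positive matrix will do; this one is made up

v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = max(w)
    v = [x / norm for x in w]

r = norm  # converges to the Perron-Frobenius eigenvalue
assert all(x > 0 for x in v)  # the eigenvector is positive
residual = max(abs(sum(T[i][j] * v[j] for j in range(3)) - r * v[i])
               for i in range(3))
assert residual < 1e-9        # T v = r v to numerical precision
```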
<h3> From matrices to graphs </h3>
<p>The conclusions of Perron&#8217;s theorem don&#8217;t hold for matrices that are merely nonnegative.  For example, these matrices</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cleft%28+%5Cbegin%7Barray%7D%7Bcc%7D+1+%26+0+%5C%5C+0+%26+1+%5Cend%7Barray%7D+%5Cright%29+%2C+%5Cqquad+%5Cleft%28+%5Cbegin%7Barray%7D%7Bcc%7D+0+%26+1+%5C%5C+0+%26+0+%5Cend%7Barray%7D+%5Cright%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;left( &#92;begin{array}{cc} 1 &amp; 0 &#92;&#92; 0 &amp; 1 &#92;end{array} &#92;right) , &#92;qquad &#92;left( &#92;begin{array}{cc} 0 &amp; 1 &#92;&#92; 0 &amp; 0 &#92;end{array} &#92;right) ' title='&#92;left( &#92;begin{array}{cc} 1 &amp; 0 &#92;&#92; 0 &amp; 1 &#92;end{array} &#92;right) , &#92;qquad &#92;left( &#92;begin{array}{cc} 0 &amp; 1 &#92;&#92; 0 &amp; 0 &#92;end{array} &#92;right) ' class='latex' /></p>
<p>are nonnegative, but they violate lots of the conclusions of Perron&#8217;s theorem.  </p>
<p>Nonetheless, in 1912 Frobenius published an impressive generalization of Perron&#8217;s result.  In its strongest form, it doesn&#8217;t apply to <i>all</i> nonnegative matrices; only to those that are &#8216;irreducible&#8217;.  So, let us define those.</p>
<p>We&#8217;ve seen how to build a matrix from a graph.  Now we need to build a graph from a matrix!  Suppose we have an <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> matrix <img src='https://s0.wp.com/latex.php?latex=T.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T.' title='T.' class='latex' />  Then we can build a graph <img src='https://s0.wp.com/latex.php?latex=G_T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='G_T' title='G_T' class='latex' /> with <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> vertices where there is an edge from the <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' />th vertex to the <img src='https://s0.wp.com/latex.php?latex=j&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='j' title='j' class='latex' />th vertex if and only if <img src='https://s0.wp.com/latex.php?latex=T_%7Bi+j%7D+%5Cne+0.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T_{i j} &#92;ne 0.' title='T_{i j} &#92;ne 0.' class='latex' />  </p>
<p>But watch out: this is a different kind of graph!  It&#8217;s a <b><a href="http://en.wikipedia.org/wiki/Directed_graph">directed graph</a></b>, meaning the edges have directions, there&#8217;s at most one edge going from any vertex to any vertex, and we do allow an edge going from a vertex to itself.  There&#8217;s a stronger concept of &#8216;connectivity&#8217; for these graphs:</p>
<p><b>Definition.</b> A directed graph is <b>strongly connected</b> if there is a directed path of edges going from any vertex to any other vertex.</p>
<p>So, you have to be able to walk along edges from any vertex to any other vertex, but always following the direction of the edges!  Using this idea we define irreducible matrices:</p>
<p><b>Definition.</b> A nonnegative square matrix <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> is <b>irreducible</b> if its graph <img src='https://s0.wp.com/latex.php?latex=G_T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='G_T' title='G_T' class='latex' /> is strongly connected.  </p>
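<p>Irreducibility is easy to test by machine: build the directed graph <i>G<sub>T</sub></i> and check strong connectivity, which for a finite graph amounts to checking that vertex 0 reaches every vertex and every vertex reaches vertex 0.  A Python sketch, tried on a few small matrices:</p>

```python
# Test irreducibility of a nonnegative matrix T by checking that its
# directed graph G_T is strongly connected: vertex 0 must reach every
# vertex following the edges, and also following the reversed edges.
def is_irreducible(T):
    n = len(T)
    def reaches_all(edge):  # depth-first search from vertex 0
        seen, stack = {0}, [0]
        while stack:
            i = stack.pop()
            for j in range(n):
                if j not in seen and edge(i, j):
                    seen.add(j)
                    stack.append(j)
        return len(seen) == n
    return (reaches_all(lambda i, j: T[i][j] != 0) and
            reaches_all(lambda i, j: T[j][i] != 0))

assert is_irreducible([[0, 1], [1, 0]])      # swap matrix: strongly connected
assert not is_irreducible([[1, 0], [0, 1]])  # identity: two components
assert not is_irreducible([[0, 1], [0, 0]])  # no path from vertex 1 back to 0
```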
<h3> The Perron&#8211;Frobenius theorem </h3>
<p>Now we are ready to state:</p>
<p><b>The Perron-Frobenius Theorem.</b> Given an irreducible nonnegative square matrix <img src='https://s0.wp.com/latex.php?latex=T%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T,' title='T,' class='latex' /> there is a positive real number <img src='https://s0.wp.com/latex.php?latex=r%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r,' title='r,' class='latex' /> called the <b>Perron&#8211;Frobenius eigenvalue</b> of <img src='https://s0.wp.com/latex.php?latex=T%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T,' title='T,' class='latex' /> such that <img src='https://s0.wp.com/latex.php?latex=r&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r' title='r' class='latex' /> is an eigenvalue of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> and any other eigenvalue <img src='https://s0.wp.com/latex.php?latex=%5Clambda&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda' title='&#92;lambda' class='latex' /> of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> has <img src='https://s0.wp.com/latex.php?latex=%7C%5Clambda%7C+%5Cle+r.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|&#92;lambda| &#92;le r.' title='|&#92;lambda| &#92;le r.' class='latex' />  Moreover, there is a positive vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%5Cin+%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi &#92;in &#92;mathbb{R}^n' title='&#92;psi &#92;in &#92;mathbb{R}^n' class='latex' /> with <img src='https://s0.wp.com/latex.php?latex=T%5Cpsi+%3D+r+%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T&#92;psi = r &#92;psi.' title='T&#92;psi = r &#92;psi.' class='latex' />  Any other vector with this property is a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' 
class='latex' />  Furthermore, any nonnegative vector that is an eigenvector of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> must be a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' class='latex' /></p>
<p>The only conclusion of this theorem that&#8217;s weaker than those of Perron&#8217;s theorem is that there may be other eigenvalues with <img src='https://s0.wp.com/latex.php?latex=%7C%5Clambda%7C+%3D+r.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|&#92;lambda| = r.' title='|&#92;lambda| = r.' class='latex' />  For example, this matrix is irreducible and nonnegative:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cleft%28+%5Cbegin%7Barray%7D%7Bcc%7D+0+%26+1+%5C%5C+1+%26+0+%5Cend%7Barray%7D+%5Cright%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;left( &#92;begin{array}{cc} 0 &amp; 1 &#92;&#92; 1 &amp; 0 &#92;end{array} &#92;right) ' title='&#92;left( &#92;begin{array}{cc} 0 &amp; 1 &#92;&#92; 1 &amp; 0 &#92;end{array} &#92;right) ' class='latex' /></p>
<p>Its Perron&#8211;Frobenius eigenvalue is 1, but it also has -1 as an eigenvalue.  In general, Perron-Frobenius theory says quite a lot about the other eigenvalues on the circle <img src='https://s0.wp.com/latex.php?latex=%7C%5Clambda%7C+%3D+r%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|&#92;lambda| = r,' title='|&#92;lambda| = r,' class='latex' /> but we won&#8217;t need that fancy stuff here.</p>
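<p>For a 2 &times; 2 matrix we can confirm this by hand from the characteristic polynomial, whose coefficients are the trace and determinant:</p>

```python
# The swap matrix is nonnegative and irreducible; its characteristic
# polynomial x^2 - (trace) x + (det) = x^2 - 1 has roots +1 and -1, so
# another eigenvalue sits on the circle |lambda| = r = 1.
T = [[0, 1], [1, 0]]
trace = T[0][0] + T[1][1]                    # = 0
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]  # = -1
disc = (trace ** 2 - 4 * det) ** 0.5
eigenvalues = sorted([(trace - disc) / 2, (trace + disc) / 2])
assert eigenvalues == [-1.0, 1.0]
# (1, 1) is the positive Perron-Frobenius eigenvector: T maps it to itself.
```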
<p>Perron&#8211;Frobenius theory is useful in many ways, from highbrow math to ranking football teams.   We&#8217;ll need it not just today but also later in this series.  There are many books and other sources of information for those that want to take a closer look at this subject.  If you&#8217;re interested, you can <a href="https://www.google.com/search?q=perron-frobenius">search online</a> or take a look at these:</p>
<p>&bull; Dimitrios Noutsos, <a href="http://www.math.uoi.gr/~dnoutsos/Papers_pdf_files/slide_perron.pdf">Perron Frobenius theory and some extensions</a>, 2008.  (Includes proofs of the basic theorems.)</p>
<p>&bull; V. S. Sunder, <a href="http://www.imsc.res.in/~sunder/pf.pdf">Perron Frobenius theory</a>, 18 December 2009.  (Includes applications to graph theory, Markov chains and von Neumann algebras.)</p>
<p>&bull; Stephen Boyd, <a href="http://www.stanford.edu/class/ee363/lectures/pf.pdf">Lecture 17: Perron Frobenius theory</a>, Winter 2008-2009.  (Includes a max-min characterization of the Perron&#8211;Frobenius eigenvalue and applications to Markov chains, economics, population growth and power control.)</p>
<p>I have not taken a look myself, but if anyone is interested and can read German, the original work appears here:</p>
<p>&bull; Oskar Perron, Zur Theorie der Matrizen, <i>Math. Ann.</i> <b>64</b> (1907), 248–263.</p>
<p>&bull; Georg Frobenius, &Uuml;ber Matrizen aus nicht negativen Elementen, <i>S.-B. Preuss Acad. Wiss. Berlin</i> (1912), 456–477.   </p>
<p>And, of course, there&#8217;s this:</p>
<p>&bull; Wikipedia, <a href="http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem">Perron&#8211;Frobenius theorem</a>.</p>
<p>It&#8217;s got a lot of information.</p>
<h3> Irreducible Dirichlet operators </h3>
<p>Now comes the payoff.  We saw how to get a Dirichlet operator <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> from any finite simple graph with edges labelled by positive numbers.  Now let&#8217;s apply Perron&#8211;Frobenius theory to prove our thesis.</p>
<p>Unfortunately, the matrix <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is rarely nonnegative.  If you remember how we built it, you&#8217;ll see its off-diagonal entries will always be nonnegative&#8230; but its diagonal entries can be negative.  </p>
<p>Luckily, we can fix this just by adding a big enough multiple of the identity matrix to <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' />!   The result is a nonnegative matrix</p>
<p><img src='https://s0.wp.com/latex.php?latex=T+%3D+H+%2B+c+I+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T = H + c I ' title='T = H + c I ' class='latex' /></p>
<p>where <img src='https://s0.wp.com/latex.php?latex=c+%3E+0&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='c &gt; 0' title='c &gt; 0' class='latex' /> is some large number.  This matrix <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> has the same eigenvectors as <img src='https://s0.wp.com/latex.php?latex=H.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H.' title='H.' class='latex' />   The off-diagonal matrix entries of <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> are the same as those of <img src='https://s0.wp.com/latex.php?latex=H%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H,' title='H,' class='latex' /> so <img src='https://s0.wp.com/latex.php?latex=T_%7Bi+j%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T_{i j}' title='T_{i j}' class='latex' /> is nonzero for <img src='https://s0.wp.com/latex.php?latex=i+%5Cne+j&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i &#92;ne j' title='i &#92;ne j' class='latex' /> exactly when the graph we started with has an edge from <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' /> to <img src='https://s0.wp.com/latex.php?latex=j.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='j.' title='j.' 
class='latex' />  So, for <img src='https://s0.wp.com/latex.php?latex=i+%5Cne+j%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i &#92;ne j,' title='i &#92;ne j,' class='latex' /> the graph <img src='https://s0.wp.com/latex.php?latex=G_T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='G_T' title='G_T' class='latex' /> will have a directed edge going from <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' /> to <img src='https://s0.wp.com/latex.php?latex=j&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='j' title='j' class='latex' /> precisely when our original graph had an edge from <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' /> to <img src='https://s0.wp.com/latex.php?latex=j.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='j.' title='j.' class='latex' />  And that means that if our original graph was connected, <img src='https://s0.wp.com/latex.php?latex=G_T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='G_T' title='G_T' class='latex' /> will be strongly connected.  Thus, by definition, the matrix <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> is irreducible!</p>
<p>Since <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> is nonnegative and irreducible, the Perron&#8211;Frobenius theorem swings into action and we conclude:</p>
<p><b>Lemma.</b> Suppose <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is the Dirichlet operator coming from a connected finite simple graph with edges labelled by positive numbers.  Then the eigenvalues of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> are real.  Let <img src='https://s0.wp.com/latex.php?latex=%5Clambda&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda' title='&#92;lambda' class='latex' /> be the largest eigenvalue.  Then there is a positive vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%5Cin+%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi &#92;in &#92;mathbb{R}^n' title='&#92;psi &#92;in &#92;mathbb{R}^n' class='latex' /> with <img src='https://s0.wp.com/latex.php?latex=H%5Cpsi+%3D+%5Clambda+%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H&#92;psi = &#92;lambda &#92;psi.' title='H&#92;psi = &#92;lambda &#92;psi.' class='latex' />  Any other vector with this property is a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' class='latex' />  Furthermore, any nonnegative vector that is an eigenvector of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> must be a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' class='latex' /></p>
<p><i>Proof.</i>  The eigenvalues of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> are real since <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is self-adjoint.  Notice that if <img src='https://s0.wp.com/latex.php?latex=r&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r' title='r' class='latex' /> is the Perron&#8211;Frobenius eigenvalue of <img src='https://s0.wp.com/latex.php?latex=T+%3D+H+%2B+c+I&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T = H + c I' title='T = H + c I' class='latex' /> and</p>
<p><img src='https://s0.wp.com/latex.php?latex=T+%5Cpsi+%3D+r+%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T &#92;psi = r &#92;psi' title='T &#92;psi = r &#92;psi' class='latex' /></p>
<p>then </p>
<p><img src='https://s0.wp.com/latex.php?latex=H+%5Cpsi+%3D+%28r+-+c%29%5Cpsi+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H &#92;psi = (r - c)&#92;psi ' title='H &#92;psi = (r - c)&#92;psi ' class='latex' /></p>
<p>By the Perron&#8211;Frobenius theorem the number <img src='https://s0.wp.com/latex.php?latex=r&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r' title='r' class='latex' /> is positive, and it has the largest absolute value of any eigenvalue of <img src='https://s0.wp.com/latex.php?latex=T.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T.' title='T.' class='latex' />  Thanks to the subtraction, the eigenvalue <img src='https://s0.wp.com/latex.php?latex=r+-+c&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='r - c' title='r - c' class='latex' /> may not have the largest absolute value of any eigenvalue of <img src='https://s0.wp.com/latex.php?latex=H.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H.' title='H.' class='latex' />  It is, however, the largest eigenvalue of <img src='https://s0.wp.com/latex.php?latex=H%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H,' title='H,' class='latex' /> so we take this as our <img src='https://s0.wp.com/latex.php?latex=%5Clambda.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda.' title='&#92;lambda.' class='latex' />  The rest follows from the Perron&#8211;Frobenius theorem.  &nbsp;  &#9608;</p>
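<p>The shift trick in this proof is easy to check numerically: <i>T = H + cI</i> has the same eigenvectors as <i>H</i>, with every eigenvalue shifted up by <i>c</i>.  A sketch on a tiny made-up Dirichlet operator:</p>

```python
# If H psi = lambda psi, then (H + cI) psi = (lambda + c) psi: the shift
# makes the matrix nonnegative without changing its eigenvectors.
H = [[-2.0, 2.0], [2.0, -2.0]]  # a small made-up Dirichlet operator
c = 3.0                         # big enough to cancel the negative diagonal
T = [[H[i][j] + (c if i == j else 0.0) for j in range(2)] for i in range(2)]

assert all(T[i][j] >= 0 for i in range(2) for j in range(2))  # T nonnegative

psi = [1.0, 1.0]  # eigenvector of H with eigenvalue 0
Hpsi = [sum(H[i][j] * psi[j] for j in range(2)) for i in range(2)]
Tpsi = [sum(T[i][j] * psi[j] for j in range(2)) for i in range(2)]
assert Hpsi == [0.0, 0.0]  # H psi = 0 psi
assert Tpsi == [c, c]      # T psi = (0 + c) psi
```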
<p>But in fact we can improve this result, since the largest eigenvalue <img src='https://s0.wp.com/latex.php?latex=%5Clambda&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda' title='&#92;lambda' class='latex' /> is just zero.  Let&#8217;s also make up a definition, to make our result sound more slick:</p>
<p><b>Definition.</b> A Dirichlet operator is <b>irreducible</b> if it comes from a connected finite simple graph with edges labelled by positive numbers.</p>
<p>This meshes nicely with our earlier definition of irreducibility for nonnegative matrices.  Now:</p>
<p><b>Theorem.</b>  Suppose <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is an irreducible Dirichlet operator.  Then <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> has zero as its largest real eigenvalue.  There is a positive vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%5Cin+%5Cmathbb%7BR%7D%5En&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi &#92;in &#92;mathbb{R}^n' title='&#92;psi &#92;in &#92;mathbb{R}^n' class='latex' /> with <img src='https://s0.wp.com/latex.php?latex=H%5Cpsi+%3D+0.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H&#92;psi = 0.' title='H&#92;psi = 0.' class='latex' />  Any other vector with this property is a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' class='latex' />  Furthermore, any nonnegative vector that is an eigenvector of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> must be a scalar multiple of <img src='https://s0.wp.com/latex.php?latex=%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi.' title='&#92;psi.' class='latex' /></p>
<p><i>Proof.</i>  Choose <img src='https://s0.wp.com/latex.php?latex=%5Clambda&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda' title='&#92;lambda' class='latex' /> as in the Lemma, so that <img src='https://s0.wp.com/latex.php?latex=H%5Cpsi+%3D+%5Clambda+%5Cpsi.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H&#92;psi = &#92;lambda &#92;psi.' title='H&#92;psi = &#92;lambda &#92;psi.' class='latex' />  Since <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> is positive we can normalize it to be a stochastic state:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+%5Cpsi_i+%3D+1+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i &#92;psi_i = 1 } ' title='&#92;displaystyle{ &#92;sum_i &#92;psi_i = 1 } ' class='latex' /></p>
<p>Since <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is a Dirichlet operator, <img src='https://s0.wp.com/latex.php?latex=%5Cexp%28t+H%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;exp(t H)' title='&#92;exp(t H)' class='latex' /> sends stochastic states to stochastic states, so</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+%28%5Cexp%28t+H%29+%5Cpsi%29_i+%3D+1+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i (&#92;exp(t H) &#92;psi)_i = 1 } ' title='&#92;displaystyle{ &#92;sum_i (&#92;exp(t H) &#92;psi)_i = 1 } ' class='latex' /> </p>
<p>for all <img src='https://s0.wp.com/latex.php?latex=t+%5Cge+0.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='t &#92;ge 0.' title='t &#92;ge 0.' class='latex' />  On the other hand,</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Csum_i+%28%5Cexp%28t+H%29%5Cpsi%29_i+%3D+%5Csum_i+e%5E%7Bt+%5Clambda%7D+%5Cpsi_i+%3D+e%5E%7Bt+%5Clambda%7D+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;sum_i (&#92;exp(t H)&#92;psi)_i = &#92;sum_i e^{t &#92;lambda} &#92;psi_i = e^{t &#92;lambda} } ' title='&#92;displaystyle{ &#92;sum_i (&#92;exp(t H)&#92;psi)_i = &#92;sum_i e^{t &#92;lambda} &#92;psi_i = e^{t &#92;lambda} } ' class='latex' /></p>
<p>so we must have <img src='https://s0.wp.com/latex.php?latex=%5Clambda+%3D+0.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;lambda = 0.' title='&#92;lambda = 0.' class='latex' />  &nbsp;  &#9608;</p>
<p>What&#8217;s the point of all this?  One point is that there&#8217;s a unique stochastic state <img src='https://s0.wp.com/latex.php?latex=%5Cpsi&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi' title='&#92;psi' class='latex' /> that&#8217;s an <b>equilibrium</b> state: since <img src='https://s0.wp.com/latex.php?latex=H+%5Cpsi+%3D+0%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H &#92;psi = 0,' title='H &#92;psi = 0,' class='latex' /> it doesn&#8217;t change with time.  It&#8217;s also <b>globally stable</b>: since all the other eigenvalues of <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> are negative, all other stochastic states converge to this one as time goes forward.</p>
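<p>We can watch this global stability numerically: evolve an arbitrary stochastic state by <i>exp(tH)</i>, approximated here by many small Euler steps on a tiny made-up Dirichlet operator, and it converges to the equilibrium state while the total probability stays equal to 1:</p>

```python
# Global stability: under exp(tH) every stochastic state flows to the
# unique equilibrium state of an irreducible Dirichlet operator.
H = [[-2.0, 2.0], [2.0, -2.0]]  # made-up Dirichlet operator; equilibrium (1/2, 1/2)
psi = [0.9, 0.1]                # an arbitrary stochastic state
dt = 0.001
for _ in range(20000):          # crude Euler approximation of exp(tH), t = 20
    Hpsi = [sum(H[i][j] * psi[j] for j in range(2)) for i in range(2)]
    psi = [psi[i] + dt * Hpsi[i] for i in range(2)]

assert abs(sum(psi) - 1.0) < 1e-9              # probability is conserved
assert all(abs(p - 0.5) < 1e-6 for p in psi)   # converged to equilibrium
```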
<h3>  An example </h3>
<p>There are many examples of irreducible Dirichlet operators.  For instance, in <a href="https://johncarlosbaez.wordpress.com/2011/10/26/network-theory-part-15/">Part 15</a> we talked about graph Laplacians.  The Laplacian of a connected simple graph is always irreducible.  But let us try a different sort of example, coming from the picture of the resistors we saw earlier:</p>
<div align="center">
<img width="320" src="https://i0.wp.com/math.ucr.edu/home/baez/networks/perron-frobenius-graph.png" />
</div>
<p>Let&#8217;s create a matrix <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> whose entry <img src='https://s0.wp.com/latex.php?latex=A_%7Bi+j%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A_{i j}' title='A_{i j}' class='latex' /> is the number labelling the edge from <img src='https://s0.wp.com/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i' title='i' class='latex' /> to <img src='https://s0.wp.com/latex.php?latex=j&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='j' title='j' class='latex' /> if there is such an edge, and zero otherwise:</p>
<p><img src='https://s0.wp.com/latex.php?latex=A+%3D+%5Cleft%28++%5Cbegin%7Barray%7D%7Bccccc%7D+++0+%26+2+%26+1+%26+0+%26+1+%5C%5C+++2+%26+0+%26+0+%26+1+%26+1+%5C%5C+++1+%26+0+%26+0+%26+2+%26+1+%5C%5C+++0+%26+1+%26+2+%26+0+%26+1+%5C%5C+++1+%26+1+%26+1+%26+1+%26+0++%5Cend%7Barray%7D++%5Cright%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A = &#92;left(  &#92;begin{array}{ccccc}   0 &amp; 2 &amp; 1 &amp; 0 &amp; 1 &#92;&#92;   2 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &#92;&#92;   1 &amp; 0 &amp; 0 &amp; 2 &amp; 1 &#92;&#92;   0 &amp; 1 &amp; 2 &amp; 0 &amp; 1 &#92;&#92;   1 &amp; 1 &amp; 1 &amp; 1 &amp; 0  &#92;end{array}  &#92;right) ' title='A = &#92;left(  &#92;begin{array}{ccccc}   0 &amp; 2 &amp; 1 &amp; 0 &amp; 1 &#92;&#92;   2 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &#92;&#92;   1 &amp; 0 &amp; 0 &amp; 2 &amp; 1 &#92;&#92;   0 &amp; 1 &amp; 2 &amp; 0 &amp; 1 &#92;&#92;   1 &amp; 1 &amp; 1 &amp; 1 &amp; 0  &#92;end{array}  &#92;right) ' class='latex' /></p>
<p>Remember how the game works.  The matrix <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> is already a valid Hamiltonian for quantum mechanics, since it&#8217;s self-adjoint.  However, to get a valid Hamiltonian for both stochastic and quantum mechanics&#8212;in other words, a Dirichlet operator&#8212;we subtract the diagonal matrix <img src='https://s0.wp.com/latex.php?latex=L&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='L' title='L' class='latex' /> whose entries are the sums of the columns of <img src='https://s0.wp.com/latex.php?latex=A.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A.' title='A.' class='latex' /> In this example it just so happens that the column sums are all 4, so <img src='https://s0.wp.com/latex.php?latex=L+%3D+4+I%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='L = 4 I,' title='L = 4 I,' class='latex' /> and our Dirichlet operator is</p>
<p><img src='https://s0.wp.com/latex.php?latex=H+%3D+A+-+4+I+%3D+%5Cleft%28++%5Cbegin%7Barray%7D%7Brrrrr%7D+++-4+%26+2+%26+1+%26+0+%26+1+%5C%5C+++2+%26+-4+%26+0+%26+1+%26+1+%5C%5C+++1+%26+0+%26+-4+%26+2+%26+1+%5C%5C+++0+%26+1+%26+2+%26+-4+%26+1+%5C%5C+++1+%26+1+%26+1+%26+1+%26+-4++%5Cend%7Barray%7D++%5Cright%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H = A - 4 I = &#92;left(  &#92;begin{array}{rrrrr}   -4 &amp; 2 &amp; 1 &amp; 0 &amp; 1 &#92;&#92;   2 &amp; -4 &amp; 0 &amp; 1 &amp; 1 &#92;&#92;   1 &amp; 0 &amp; -4 &amp; 2 &amp; 1 &#92;&#92;   0 &amp; 1 &amp; 2 &amp; -4 &amp; 1 &#92;&#92;   1 &amp; 1 &amp; 1 &amp; 1 &amp; -4  &#92;end{array}  &#92;right) ' title='H = A - 4 I = &#92;left(  &#92;begin{array}{rrrrr}   -4 &amp; 2 &amp; 1 &amp; 0 &amp; 1 &#92;&#92;   2 &amp; -4 &amp; 0 &amp; 1 &amp; 1 &#92;&#92;   1 &amp; 0 &amp; -4 &amp; 2 &amp; 1 &#92;&#92;   0 &amp; 1 &amp; 2 &amp; -4 &amp; 1 &#92;&#92;   1 &amp; 1 &amp; 1 &amp; 1 &amp; -4  &#92;end{array}  &#92;right) ' class='latex' /></p>
<p>We&#8217;ve set up this example so it&#8217;s easy to see that the vector <img src='https://s0.wp.com/latex.php?latex=%5Cpsi+%3D+%281%2C1%2C1%2C1%2C1%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi = (1,1,1,1,1)' title='&#92;psi = (1,1,1,1,1)' class='latex' /> has</p>
<p><img src='https://s0.wp.com/latex.php?latex=H+%5Cpsi+%3D+0+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H &#92;psi = 0 ' title='H &#92;psi = 0 ' class='latex' /></p>
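<p>This is easy to verify in a few lines of Python: each column of <i>A</i> sums to 4, so subtracting <i>4I</i> makes every row of <i>H</i> annihilate the all-ones vector.</p>

```python
# The example from the text: A has all column sums equal to 4, so
# H = A - 4I is a Dirichlet operator and kills the all-ones vector.
A = [[0, 2, 1, 0, 1],
     [2, 0, 0, 1, 1],
     [1, 0, 0, 2, 1],
     [0, 1, 2, 0, 1],
     [1, 1, 1, 1, 0]]
assert all(sum(row[j] for row in A) == 4 for j in range(5))  # column sums

H = [[A[i][j] - (4 if i == j else 0) for j in range(5)] for i in range(5)]
psi = [1, 1, 1, 1, 1]
Hpsi = [sum(H[i][j] * psi[j] for j in range(5)) for i in range(5)]
assert Hpsi == [0, 0, 0, 0, 0]  # H psi = 0
```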
<p>So, this is the unique eigenvector for the eigenvalue 0.  We can use Mathematica to calculate the remaining eigenvalues of <img src='https://s0.wp.com/latex.php?latex=H.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H.' title='H.' class='latex' />  The set of eigenvalues is</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5C%7B0%2C+-7%2C+-5%2C+-5%2C+-3+%5C%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;{0, -7, -5, -5, -3 &#92;} ' title='&#92;{0, -7, -5, -5, -3 &#92;} ' class='latex' /></p>
<p>As we expect from our theorem, the largest real eigenvalue is 0.   By design, the eigenstate associated to this eigenvalue is</p>
<p><img src='https://s0.wp.com/latex.php?latex=%7C+v_0+%5Crangle+%3D+%281%2C+1%2C+1%2C+1%2C+1%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='| v_0 &#92;rangle = (1, 1, 1, 1, 1) ' title='| v_0 &#92;rangle = (1, 1, 1, 1, 1) ' class='latex' /></p>
<p>(This funny notation for vectors is common in quantum mechanics, so don&#8217;t worry about it.)   All the other eigenvectors fail to be nonnegative, as predicted by the theorem.  They are:  </p>
<p><img src='https://s0.wp.com/latex.php?latex=%7C+v_1+%5Crangle+%3D+%281%2C+-1%2C+-1%2C+1%2C+0%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='| v_1 &#92;rangle = (1, -1, -1, 1, 0) ' title='| v_1 &#92;rangle = (1, -1, -1, 1, 0) ' class='latex' /><br />
<img src='https://s0.wp.com/latex.php?latex=%7C+v_2+%5Crangle+%3D+%28-1%2C+0%2C+-1%2C+0%2C+2%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='| v_2 &#92;rangle = (-1, 0, -1, 0, 2) ' title='| v_2 &#92;rangle = (-1, 0, -1, 0, 2) ' class='latex' /><br />
<img src='https://s0.wp.com/latex.php?latex=%7C+v_3+%5Crangle+%3D+%28-1%2C+1%2C+-1%2C+1%2C+0%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='| v_3 &#92;rangle = (-1, 1, -1, 1, 0) ' title='| v_3 &#92;rangle = (-1, 1, -1, 1, 0) ' class='latex' /><br />
<img src='https://s0.wp.com/latex.php?latex=%7C+v_4+%5Crangle+%3D+%28-1%2C+-1%2C+1%2C+1%2C+0%29+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='| v_4 &#92;rangle = (-1, -1, 1, 1, 0) ' title='| v_4 &#92;rangle = (-1, -1, 1, 1, 0) ' class='latex' /> </p>
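<p>Since every entry here is a small integer, all of these eigenvector claims can be verified exactly with bare matrix-vector products. A short pure-Python check (my addition, using just the matrix and vectors displayed above):</p>

```python
# Verify the eigenpairs of H directly: H v = lambda v for each listed v.
H = [
    [-4, 2, 1, 0, 1],
    [2, -4, 0, 1, 1],
    [1, 0, -4, 2, 1],
    [0, 1, 2, -4, 1],
    [1, 1, 1, 1, -4],
]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# The Perron-Frobenius eigenvector is killed by H:
assert matvec(H, [1, 1, 1, 1, 1]) == [0, 0, 0, 0, 0]

# Every other eigenvector has a negative eigenvalue and mixed signs,
# so none of them can be normalized to a stochastic state.
others = [
    [1, -1, -1, 1, 0],
    [-1, 0, -1, 0, 2],
    [-1, 1, -1, 1, 0],
    [-1, -1, 1, 1, 0],
]
for v in others:
    w = matvec(H, v)
    k = next(i for i, x in enumerate(v) if x != 0)  # first nonzero entry
    lam = w[k] / v[k]                               # the eigenvalue
    assert all(w[i] == lam * v[i] for i in range(5))
    assert lam < 0 and min(v) < 0
```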
<p>To compare the quantum and stochastic states, consider first <img src='https://s0.wp.com/latex.php?latex=%7Cv_0%5Crangle.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|v_0&#92;rangle.' title='|v_0&#92;rangle.' class='latex' />  This is the only eigenvector that can be normalized to a stochastic state.  Remember, a stochastic state must have nonnegative components.  This rules out <img src='https://s0.wp.com/latex.php?latex=%7Cv_1%5Crangle&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|v_1&#92;rangle' title='|v_1&#92;rangle' class='latex' /> through to <img src='https://s0.wp.com/latex.php?latex=%7Cv_4%5Crangle&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='|v_4&#92;rangle' title='|v_4&#92;rangle' class='latex' /> as valid stochastic states, no matter how we normalize them!  However, these are allowed as states in quantum mechanics, once we normalize them correctly.  For a stochastic system to be in a state other than the Perron&#8211;Frobenius state, it must be a linear combination of at least two eigenstates.  For instance, </p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cpsi_a+%3D+%281-a%29%7Cv_0%5Crangle+%2B+a+%7Cv_1%5Crangle+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;psi_a = (1-a)|v_0&#92;rangle + a |v_1&#92;rangle ' title='&#92;psi_a = (1-a)|v_0&#92;rangle + a |v_1&#92;rangle ' class='latex' /></p>
<p>can be normalized to give a stochastic state only if <img src='https://s0.wp.com/latex.php?latex=0+%5Cleq+a+%5Cleq+%5Cfrac%7B1%7D%7B2%7D.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='0 &#92;leq a &#92;leq &#92;frac{1}{2}.' title='0 &#92;leq a &#92;leq &#92;frac{1}{2}.' class='latex' />  </p>
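<p>Here is a quick numerical illustration of that bound (an added sketch, scanning mixing weights between 0 and 1):</p>

```python
# Scan mixtures psi_a = (1-a) v0 + a v1 over weights a in [0, 1] and
# record which give a vector with all components nonnegative -- the
# condition for normalizing to a stochastic state.
v0 = [1, 1, 1, 1, 1]
v1 = [1, -1, -1, 1, 0]

def psi(a):
    return [(1 - a) * x + a * y for x, y in zip(v0, v1)]

ok = [k / 100 for k in range(101) if min(psi(k / 100)) >= 0]

# Within [0, 1], psi_a is nonnegative exactly when a <= 1/2.
assert ok[0] == 0.0 and ok[-1] == 0.5
```

Concretely, <i>&#968;<sub>a</sub></i> = (1, 1&#8722;2<i>a</i>, 1&#8722;2<i>a</i>, 1, 1&#8722;<i>a</i>), and the middle components go negative as soon as <i>a</i> exceeds 1/2.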
<p>And, it&#8217;s easy to see that it works this way for any irreducible Dirichlet operator, thanks to our theorem.  So, our thesis has been proved true!</p>
<h3> Puzzles on irreducibility </h3>
<p>Let us conclude with a couple more puzzles.  There are lots of ways to characterize irreducible nonnegative matrices; we don&#8217;t need to mention graphs.  Here&#8217;s one:</p>
<p><b>Puzzle 3.</b> Let <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> be a nonnegative <img src='https://s0.wp.com/latex.php?latex=n+%5Ctimes+n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;times n' title='n &#92;times n' class='latex' /> matrix.  Show that <img src='https://s0.wp.com/latex.php?latex=T&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='T' title='T' class='latex' /> is irreducible if and only if for all <img src='https://s0.wp.com/latex.php?latex=1+%5Cle+i%2Cj+%5Cle+n%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='1 &#92;le i,j &#92;le n,' title='1 &#92;le i,j &#92;le n,' class='latex' /> <img src='https://s0.wp.com/latex.php?latex=%28T%5Em%29_%7Bi+j%7D+%3E+0&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='(T^m)_{i j} &gt; 0' title='(T^m)_{i j} &gt; 0' class='latex' /> for some natural number <img src='https://s0.wp.com/latex.php?latex=m.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='m.' title='m.' class='latex' /> </p>
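<p>Without spoiling the proof, the characterization in Puzzle 3 can at least be sanity-checked numerically. A standard equivalent test, sketched here in pure Python (my addition), uses the fact that it suffices to sum the powers of the matrix up to the (<i>n</i>&#8722;1)st:</p>

```python
# A nonnegative n x n matrix T is irreducible iff every entry of
# I + T + T^2 + ... + T^(n-1) is strictly positive.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def irreducible(T):
    n = len(T)
    S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in S]
    for _ in range(n - 1):
        P = matmul(P, T)                                   # P = T^k
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return all(S[i][j] > 0 for i in range(n) for j in range(n))

# A 5-cycle is irreducible; a block-diagonal matrix is not.
cycle = [[1 if j == (i + 1) % 5 else 0 for j in range(5)] for i in range(5)]
blocks = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
assert irreducible(cycle) and not irreducible(blocks)
```

Capping the powers at <i>n</i>&#8722;1 works because, in the graph picture, any vertex reachable from another is reachable by a path of fewer than <i>n</i> edges.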
<p>You may be confused because today we explained the usual concept of irreducibility for nonnegative matrices, but also defined a concept of irreducibility for Dirichlet operators.  Luckily there&#8217;s no conflict: Dirichlet operators aren&#8217;t nonnegative matrices, but if we add a big multiple of the identity to a Dirichlet operator it becomes a nonnegative matrix, and then:</p>
<p><b>Puzzle 4.</b>  Show that a Dirichlet operator <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is irreducible if and only if the nonnegative operator <img src='https://s0.wp.com/latex.php?latex=H+%2B+c+I&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H + c I' title='H + c I' class='latex' /> (where <img src='https://s0.wp.com/latex.php?latex=c&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='c' title='c' class='latex' /> is any sufficiently large constant) is irreducible. </p>
<p>Irreducibility is also related to the nonexistence of interesting conserved quantities.  In <a href="http://math.ucr.edu/home/baez/networks/networks_11.html">Part 11</a> we saw a version of Noether&#8217;s Theorem for stochastic mechanics.  Remember that an observable <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> in stochastic mechanics assigns a number <img src='https://s0.wp.com/latex.php?latex=O_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O_i' title='O_i' class='latex' /> to each configuration <img src='https://s0.wp.com/latex.php?latex=i+%3D+1%2C+%5Cdots%2C+n.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i = 1, &#92;dots, n.' title='i = 1, &#92;dots, n.' class='latex' />  We can make a diagonal matrix with <img src='https://s0.wp.com/latex.php?latex=O_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O_i' title='O_i' class='latex' /> as its diagonal entries, and by abuse of language we call this <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> as well.  Then we say <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> is a <b>conserved quantity</b> for the Hamiltonian <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> if the commutator <img src='https://s0.wp.com/latex.php?latex=%5BO%2CH%5D+%3D+O+H+-+H+O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='[O,H] = O H - H O' title='[O,H] = O H - H O' class='latex' /> vanishes.</p>
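<p>The commutator condition is easy to test concretely: for a diagonal observable <i>O</i>, the commutator has entries [<i>O</i>,<i>H</i>]<sub><i>ij</i></sub> = (<i>O<sub>i</sub></i> &#8722; <i>O<sub>j</sub></i>) <i>H<sub>ij</sub></i>. A small sketch (my addition), using the matrix <i>H</i> from the example above:</p>

```python
# Check whether a diagonal observable O commutes with H, i.e. [O,H] = 0.
# For diagonal O, the (i,j) entry of [O,H] is (O_i - O_j) * H_ij.
H = [
    [-4, 2, 1, 0, 1],
    [2, -4, 0, 1, 1],
    [1, 0, -4, 2, 1],
    [0, 1, 2, -4, 1],
    [1, 1, 1, 1, -4],
]
n = len(H)

def commutes(o_diag):
    return all((o_diag[i] - o_diag[j]) * H[i][j] == 0
               for i in range(n) for j in range(n))

assert commutes([7] * n)              # constants are always conserved
assert not commutes([1, 2, 3, 4, 5])  # a nonconstant observable fails here
```

This matches the claim of Puzzle 5: since this particular <i>H</i> is irreducible, the only diagonal observables that commute with it are the constant ones.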
<p><b>Puzzle 5.</b> Let <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> be a Dirichlet operator.  Show that <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is irreducible if and only if every conserved quantity <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> for <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is a constant, meaning that for some  <img src='https://s0.wp.com/latex.php?latex=c+%5Cin+%5Cmathbb%7BR%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='c &#92;in &#92;mathbb{R}' title='c &#92;in &#92;mathbb{R}' class='latex' /> we have <img src='https://s0.wp.com/latex.php?latex=O_i+%3D+c&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O_i = c' title='O_i = c' class='latex' /> for all <img src='https://s0.wp.com/latex.php?latex=i.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='i.' title='i.' class='latex' />  (Hint: examine the proof of Noether&#8217;s theorem.)</p>
<p>In fact this works more generally:</p>
<p><b>Puzzle 6.</b>  Let <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> be an infinitesimal stochastic matrix.  Show that <img src='https://s0.wp.com/latex.php?latex=H+%2B+c+I&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H + c I' title='H + c I' class='latex' /> is an irreducible nonnegative matrix for all sufficiently large <img src='https://s0.wp.com/latex.php?latex=c&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='c' title='c' class='latex' /> if and only if every conserved quantity <img src='https://s0.wp.com/latex.php?latex=O&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O' title='O' class='latex' /> for <img src='https://s0.wp.com/latex.php?latex=H&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H' title='H' class='latex' /> is a constant.</p>
]]></html><thumbnail_url><![CDATA[https://i1.wp.com/math.ucr.edu/home/baez/networks/quantum-vs-probability-theory-I.png?fit=440%2C330]]></thumbnail_url><thumbnail_height><![CDATA[300]]></thumbnail_height><thumbnail_width><![CDATA[440]]></thumbnail_width></oembed>