<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[Azimuth]]></provider_name><provider_url><![CDATA[https://johncarlosbaez.wordpress.com]]></provider_url><author_name><![CDATA[John Baez]]></author_name><author_url><![CDATA[https://johncarlosbaez.wordpress.com/author/johncarlosbaez/]]></author_url><title><![CDATA[A Quantum Hammersley&#8211;Clifford Theorem]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>I&#8217;m at this workshop:</p>
<p>&bull; <a href="http://www.physics.usyd.edu.au/quantum/Coogee2012/">Sydney Quantum Information Theory Workshop: Coogee 2012</a>, 30 January &#8211; 2 February 2012, Coogee Bay Hotel, Coogee, Sydney, organized by Stephen Bartlett, Gavin Brennen, Andrew Doherty and Tom Stace.</p>
<p>Right now David Poulin is speaking about a quantum version of the Hammersley&#8211;Clifford theorem, which is a theorem about Markov networks.   Let me quickly say a bit about what he proved!   This will be a bit rough, since I&#8217;m doing it live&#8230;</p>
<p>The <a href="http://en.wikipedia.org/wiki/Mutual_information"><b>mutual information</b></a> between two random variables is </p>
<p><img src='https://s0.wp.com/latex.php?latex=I%28A%3AB%29+%3D+S%28A%29+%2B+S%28B%29+-+S%28A%2CB%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='I(A:B) = S(A) + S(B) - S(A,B)' title='I(A:B) = S(A) + S(B) - S(A,B)' class='latex' /></p>
<p>The <a href="http://en.wikipedia.org/wiki/Conditional_mutual_information"><b>conditional mutual information</b></a> between two random variables <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> and <img src='https://s0.wp.com/latex.php?latex=B&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='B' title='B' class='latex' /> given a third random variable <img src='https://s0.wp.com/latex.php?latex=C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='C' title='C' class='latex' /> is</p>
<p><img src='https://s0.wp.com/latex.php?latex=I%28A%3AB%7CC%29+%3D+%5Csum_c+p%28C%3Dc%29+I%28A%3AB%7CC%3Dc%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='I(A:B|C) = &#92;sum_c p(C=c) I(A:B|C=c)' title='I(A:B|C) = &#92;sum_c p(C=c) I(A:B|C=c)' class='latex' /></p>
<p>It&#8217;s the average amount of information about <img src='https://s0.wp.com/latex.php?latex=B&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='B' title='B' class='latex' /> learned by measuring <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> when you already knew <img src='https://s0.wp.com/latex.php?latex=C.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='C.' title='C.' class='latex' /> </p>
<p>All this works for both classical (Shannon) and quantum (von Neumann) entropy. So, when we say &#8216;random variable&#8217; above, we could mean it in the traditional classical sense or in the quantum sense.</p>
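<p>As a concrete classical check, here&#8217;s a small Python sketch (the joint distribution is made up purely for illustration) that computes I(A:B) = S(A) + S(B) - S(A,B) for two binary variables:</p>

```python
import math

def shannon(probs):
    """Shannon entropy in bits of an iterable of probabilities, skipping zeros."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution p(A=a, B=b) for two correlated bits.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions of A and B.
pA = [sum(p for (a, b), p in joint.items() if a == v) for v in (0, 1)]
pB = [sum(p for (a, b), p in joint.items() if b == v) for v in (0, 1)]

# I(A:B) = S(A) + S(B) - S(A,B)
I_AB = shannon(pA) + shannon(pB) - shannon(joint.values())
print(round(I_AB, 4))  # ~0.278 bits of shared information
```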
<p>If <img src='https://s0.wp.com/latex.php?latex=I%28A%3AB%7CC%29+%3D+0&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='I(A:B|C) = 0' title='I(A:B|C) = 0' class='latex' /> then <img src='https://s0.wp.com/latex.php?latex=A%2C+C%2C+B&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A, C, B' title='A, C, B' class='latex' /> has the following <b>Markov property</b>: if you know <img src='https://s0.wp.com/latex.php?latex=C%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='C,' title='C,' class='latex' /> learning <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> tells you nothing new about <img src='https://s0.wp.com/latex.php?latex=B.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='B.' title='B.' class='latex' />  In condensed matter physics, say a spin system, we get (quantum) random variables from measuring what&#8217;s going on in regions, and we have <b>short range entanglement</b> if <img src='https://s0.wp.com/latex.php?latex=I%28A%3AB%7CC%29+%3D+0&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='I(A:B|C) = 0' title='I(A:B|C) = 0' class='latex' /> when <img src='https://s0.wp.com/latex.php?latex=C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='C' title='C' class='latex' /> corresponds to some sufficiently thick region separating the regions <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> and <img src='https://s0.wp.com/latex.php?latex=B.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='B.' title='B.' class='latex' />   We&#8217;ll get this in any Gibbs state of a spin chain with a local Hamiltonian.</p>
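<p>The Markov property is easy to verify numerically. This sketch builds a hypothetical three-variable chain (the conditional probabilities are arbitrary) where A influences B only through C, then evaluates the averaging formula for I(A:B|C) above and checks that it vanishes:</p>

```python
import math

def shannon(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_info(joint):
    """I(A:B) = S(A) + S(B) - S(A,B) for a dict {(a, b): prob}."""
    pA, pB = {}, {}
    for (a, b), p in joint.items():
        pA[a] = pA.get(a, 0.0) + p
        pB[b] = pB.get(b, 0.0) + p
    return shannon(pA.values()) + shannon(pB.values()) - shannon(joint.values())

# Hypothetical Markov chain A -> C -> B: p(a, c, b) = p(a) p(c|a) p(b|c).
pa = [0.3, 0.7]
pc_a = [[0.9, 0.1], [0.2, 0.8]]
pb_c = [[0.6, 0.4], [0.25, 0.75]]
p = {(a, c, b): pa[a] * pc_a[a][c] * pb_c[c][b]
     for a in (0, 1) for c in (0, 1) for b in (0, 1)}

# I(A:B|C) = sum_c p(C=c) I(A:B | C=c)
cmi = 0.0
for cv in (0, 1):
    pc = sum(pr for (a, c, b), pr in p.items() if c == cv)
    cond = {(a, b): pr / pc for (a, c, b), pr in p.items() if c == cv}
    cmi += pc * mutual_info(cond)
print(abs(cmi) < 1e-12)  # conditioning on C screens A off from B
```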
<p>A <b>Markov network</b> is a graph with random variables at its vertices (and thus at subsets of vertices) such that  <img src='https://s0.wp.com/latex.php?latex=I%28A%3AB%7CC%29+%3D+0&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='I(A:B|C) = 0' title='I(A:B|C) = 0' class='latex' /> whenever <img src='https://s0.wp.com/latex.php?latex=C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='C' title='C' class='latex' /> is a subset of vertices that completely &#8216;shields&#8217; the subset <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> from the subset <img src='https://s0.wp.com/latex.php?latex=B&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='B' title='B' class='latex' />: every path from <img src='https://s0.wp.com/latex.php?latex=A&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='A' title='A' class='latex' /> to <img src='https://s0.wp.com/latex.php?latex=B&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='B' title='B' class='latex' /> passes through a vertex in <img src='https://s0.wp.com/latex.php?latex=C.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='C.' title='C.' class='latex' /></p>
<p>The <a href="http://en.wikipedia.org/wiki/Hammersley%E2%80%93Clifford_theorem"><b>Hammersley&#8211;Clifford theorem</b></a> says that in the classical case, any Markov network with strictly positive probabilities is the Gibbs state </p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cexp%28-%5Cbeta+H%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;exp(-&#92;beta H)' title='&#92;exp(-&#92;beta H)' class='latex' /></p>
<p>of a local Hamiltonian <img src='https://s0.wp.com/latex.php?latex=H%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H,' title='H,' class='latex' /> and vice versa.   Here a Hamiltonian is <b>local</b> if it is a sum of terms, one depending on the degrees of freedom in each <a href="http://en.wikipedia.org/wiki/Clique_%28graph_theory%29">clique</a> in the graph:</p>
<p><img src='https://s0.wp.com/latex.php?latex=H+%3D+%5Csum_%7BC+%5Cin+%5Cmathrm%7Bcliques%7D%7D+h_C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='H = &#92;sum_{C &#92;in &#92;mathrm{cliques}} h_C' title='H = &#92;sum_{C &#92;in &#92;mathrm{cliques}} h_C' class='latex' /></p>
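<p>To see the easy direction of the theorem in action, here&#8217;s a sketch with made-up clique terms on a tiny star graph: a center vertex C joined to leaves A and B, so the cliques are the edges {A,C} and {B,C}. The resulting Gibbs distribution has I(A:B|C) = 0, since C shields A from B:</p>

```python
import math

def shannon(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Arbitrary clique terms h_C for the two edges of the star graph (beta = 1).
h_ac = {(0, 0): 0.3, (0, 1): -0.5, (1, 0): 1.1, (1, 1): 0.2}
h_bc = {(0, 0): -0.7, (0, 1): 0.4, (1, 0): 0.9, (1, 1): -0.1}

# Gibbs state: p(a, b, c) proportional to exp(-H), H = h_AC + h_BC.
w = {(a, b, c): math.exp(-(h_ac[(a, c)] + h_bc[(b, c)]))
     for a in (0, 1) for b in (0, 1) for c in (0, 1)}
Z = sum(w.values())
p = {k: v / Z for k, v in w.items()}

# I(A:B|C) should vanish, since p(a, b | c) factorizes as f(a, c) g(b, c).
cmi = 0.0
for cv in (0, 1):
    pc = sum(pr for (a, b, c), pr in p.items() if c == cv)
    cond = {(a, b): pr / pc for (a, b, c), pr in p.items() if c == cv}
    pA = [sum(pr for (a, b), pr in cond.items() if a == v) for v in (0, 1)]
    pB = [sum(pr for (a, b), pr in cond.items() if b == v) for v in (0, 1)]
    cmi += pc * (shannon(pA) + shannon(pB) - shannon(cond.values()))
print(abs(cmi) < 1e-12)
```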
<p>Hayden, Jozsa, Petz and Winter gave a quantum generalization of one direction of this result to graphs that are just &#8216;chains&#8217;, like this:</p>
<p>o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o&#8212;o</p>
<p>Namely: for such graphs, any quantum Markov network is the Gibbs state of some local Hamiltonian.  Now Poulin has shown the same for <i>all</i> graphs.  But the converse is, in general, <i>false</i>.  If the different terms <img src='https://s0.wp.com/latex.php?latex=h_C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='h_C' title='h_C' class='latex' /> in a local Hamiltonian all commute, its Gibbs state will have the Markov property.  But otherwise, it may not.  </p>
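<p>A minimal numerical illustration of this last point, using plain numpy on a hypothetical 3-qubit chain: the Gibbs state of the commuting Ising Hamiltonian Z&#8321;Z&#8322; + Z&#8322;Z&#8323; has vanishing conditional mutual information I(1:3|2), while adding a transverse field makes the terms non-commuting and the conditional mutual information nonzero:</p>

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    """Tensor product of a list of single-site operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def gibbs(H, beta=1.0):
    """rho = exp(-beta H) / Z via eigendecomposition of Hermitian H."""
    w, v = np.linalg.eigh(H)
    e = np.exp(-beta * w)
    return (v * (e / e.sum())) @ v.conj().T

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def ptrace(rho, keep):
    """Partial trace of a 3-qubit density matrix, keeping qubits in `keep`."""
    t = rho.reshape([2] * 6)  # axes (i1, i2, i3, j1, j2, j3)
    for q in sorted(set(range(3)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + t.ndim // 2)
    d = 2 ** len(keep)
    return t.reshape(d, d)

def cmi(rho):
    """I(1:3|2) = S(12) + S(23) - S(2) - S(123)."""
    return (entropy(ptrace(rho, [0, 1])) + entropy(ptrace(rho, [1, 2]))
            - entropy(ptrace(rho, [1])) - entropy(rho))

# Commuting Ising chain: both terms are diagonal, so they commute.
H_comm = kron(Z, Z, I2) + kron(I2, Z, Z)
# Adding a transverse field makes the terms non-commuting.
H_nc = H_comm + kron(X, I2, I2) + kron(I2, X, I2) + kron(I2, I2, X)

print(cmi(gibbs(H_comm)))  # ~0: commuting terms give the Markov property
print(cmi(gibbs(H_nc)))    # nonzero: non-commuting terms break it here
```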
<p>For some related material, see:</p>
<p>&bull; David Poulin, <a href="http://cnls.lanl.gov/CQIT/poulin.pdf">Quantum graphical models and belief propagation</a>.</p>
]]></html></oembed>