<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[Azimuth]]></provider_name><provider_url><![CDATA[https://johncarlosbaez.wordpress.com]]></provider_url><author_name><![CDATA[John Baez]]></author_name><author_url><![CDATA[https://johncarlosbaez.wordpress.com/author/johncarlosbaez/]]></author_url><title><![CDATA[Network Theory III]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>&nbsp;</p>
<span class='embed-youtube' style='text-align:center; display: block;'><iframe class='youtube-player' type='text/html' width='450' height='315' src='https://www.youtube.com/embed/qX8fSYu7ors?version=3&#038;rel=1&#038;fs=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;wmode=transparent' frameborder='0' allowfullscreen='true'></iframe></span>
<p>In the last of my <a href="https://johncarlosbaez.wordpress.com/2014/02/07/network-theory-talks-at-oxford/">Oxford talks</a> I explain how entropy and relative entropy can be understood using certain categories related to probability theory&#8230; and how these categories also let us understand Bayesian networks!  </p>
<p>The first two parts are explanations of these papers:</p>
<p>&bull; John Baez, Tobias Fritz and Tom Leinster, <a href="http://arxiv.org/abs/1106.1791">A characterization of entropy in terms of information loss</a></p>
<p>&bull; John Baez and Tobias Fritz, <a href="http://arxiv.org/abs/1402.3067">A Bayesian characterization of relative entropy</a>.</p>
<p>Somewhere around here the talk was interrupted by a fire drill, waking up the entire audience!</p>
<p>By the way, in my talk I mistakenly said that relative entropy is a continuous functor; in fact it&#8217;s just lower semicontinuous.  I&#8217;ve fixed this in my slides.</p>
<p>The third part of my talk was my own interpretation of Brendan Fong&#8217;s master&#8217;s thesis:</p>
<p>&bull; Brendan Fong, <a href="http://arxiv.org/abs/1301.6201"><i>Causal Theories: a Categorical Perspective on Bayesian Networks</i></a>.</p>
<p>I took a slightly different approach, by saying that a causal theory <img src='https://s0.wp.com/latex.php?latex=%5Cmathcal%7BC%7D_G&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;mathcal{C}_G' title='&#92;mathcal{C}_G' class='latex' /> is the free category with products on certain objects and morphisms coming from a directed acyclic graph <img src='https://s0.wp.com/latex.php?latex=G&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='G' title='G' class='latex' />.  In his thesis he said <img src='https://s0.wp.com/latex.php?latex=%5Cmathcal%7BC%7D_G&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;mathcal{C}_G' title='&#92;mathcal{C}_G' class='latex' /> was the free symmetric monoidal category where each generating object is equipped with a cocommutative comonoid structure.  This is close to a category with finite products, though perhaps not quite the same: a symmetric monoidal category where every object is equipped with a cocommutative comonoid structure in a <i>natural</i> way (i.e., making a bunch of squares commute) is a category with finite products.  It would be interesting to see if this difference hurts or helps.</p>
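<p>For the record, the condition being alluded to can be written down quickly (this is essentially Fox's theorem; the notation below is mine, not from the talk). A sketch:</p>

```latex
% Each object X carries a cocommutative comonoid: a comultiplication
% ("copying") map and a counit ("deleting") map
\[
\Delta_X \colon X \to X \otimes X, \qquad e_X \colon X \to I .
\]
% The "naturality" squares: every morphism f : X -> Y must commute
% with copying and deleting,
\[
\Delta_Y \circ f = (f \otimes f) \circ \Delta_X, \qquad
e_Y \circ f = e_X .
\]
% When these hold for ALL morphisms f, the tensor product becomes the
% categorical product and \Delta_X becomes the diagonal X -> X x X.
% Requiring the comonoid structure only on generating objects, with no
% naturality, is the a priori weaker condition used in Fong's thesis.
```

<p>The point is that copying and deleting are what fail for stochastic maps: you can copy the <i>outcome</i> of a random variable, but copying commutes with a map <img src='https://s0.wp.com/latex.php?latex=f&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='f' title='f' class='latex' /> only when <img src='https://s0.wp.com/latex.php?latex=f&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='f' title='f' class='latex' /> is deterministic.</p>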
<p>By making this slight change, I am claiming that causal theories can be seen as <a href="http://www.iti.cs.tu-bs.de/~adamek/algebraic.theories.pdf">algebraic theories</a> in the sense of Lawvere.  This would be a very good thing, since we know a lot about those.</p>
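<p>For readers who haven't met Lawvere's approach: an algebraic theory is a category with finite products, and its "models" are product-preserving functors out of it. Under the claim above (my gloss, not spelled out in the post), a model of a causal theory assigns actual data to the graph:</p>

```latex
% A model of the causal theory \mathcal{C}_G in a category D with
% finite products is a product-preserving functor
\[
F \colon \mathcal{C}_G \to D .
\]
% Taking D = Set gives a deterministic interpretation: each vertex v
% of G gets a set F(v), and the generating morphism from the parents
% pa(v) = \{v_1, \dots, v_n\} of v to v gets a function
\[
F(v_1) \times \cdots \times F(v_n) \to F(v).
\]
% Replacing Set by a category of stochastic maps -- where the
% monoidal structure is no longer cartesian, so one uses symmetric
% monoidal functors rather than product-preserving ones -- recovers
% the conditional probability distributions of a Bayesian network.
```

<p>So the deterministic models are easy; the subtlety in the finite-products-versus-comonoids question shows up precisely when you interpret the theory stochastically.</p>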
<p>You can also <a href="http://math.ucr.edu/home/baez/networks_oxford/networks_entropy.pdf">see the slides of this talk</a>.  Click on any picture in the slides, or any text in blue, and get more information!</p>
]]></html></oembed>