<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[Azimuth]]></provider_name><provider_url><![CDATA[https://johncarlosbaez.wordpress.com]]></provider_url><author_name><![CDATA[John Baez]]></author_name><author_url><![CDATA[https://johncarlosbaez.wordpress.com/author/johncarlosbaez/]]></author_url><title><![CDATA[The Mathematics of Biodiversity (Part&nbsp;7)]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>How ignorant are you?  </p>
<p>Do you know?  </p>
<p><i>Do you know how much you don&#8217;t know?</i></p>
<p>It seems hard to accurately estimate your lack of knowledge.  It even seems hard to say precisely <i>how hard</i> it is.  But the cool thing is, we can actually extract an interesting math question from this problem.  And one answer to this question leads to the following conclusion:</p>
<blockquote><p>
<b>There&#8217;s no unbiased way to estimate how ignorant you are.<br />
</b></p></blockquote>
<p>But the devil is in the details.  So let&#8217;s see the details!</p>
<p>The <a href="http://en.wikipedia.org/wiki/Entropy_%28information_theory%29">Shannon entropy</a> of a probability distribution is a way of measuring how ignorant we are when this probability distribution describes our knowledge. </p>
<p>For example, suppose all we care about is whether this ancient Roman coin will land heads up or tails up:</p>
<div align="center"><a href="http://en.wikipedia.org/wiki/File:PupienusSest.jpg"><img src="https://i2.wp.com/upload.wikimedia.org/wikipedia/commons/6/63/PupienusSest.jpg" /></a></div>
<p>If we know there&#8217;s a 50% chance of it landing heads up, that&#8217;s a Shannon entropy of 1 bit: we&#8217;re missing one bit of information.  </p>
<p>But suppose for some reason we know for sure it&#8217;s going to land heads up.  For example, suppose we know the guy on this coin is the emperor Pupienus Maximus, an egomaniac who had lead put on the back of all coins bearing his likeness, so his face would never hit the dirt!  Then the Shannon entropy is 0: we know what&#8217;s going to happen when we toss this coin.</p>
<p>Or suppose we know there&#8217;s a 90% chance it will land heads up, and a 10% chance it lands tails up.  Then the Shannon entropy is somewhere in between.  We can calculate it like this:</p>
<p><img src='https://s0.wp.com/latex.php?latex=-+0.9+%5Clog_2+%280.9%29+-+0.1+%5Clog_2+%280.1%29+%3D+0.46899...+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='- 0.9 &#92;log_2 (0.9) - 0.1 &#92;log_2 (0.1) = 0.46899... ' title='- 0.9 &#92;log_2 (0.9) - 0.1 &#92;log_2 (0.1) = 0.46899... ' class='latex' /></p>
<p>so that&#8217;s how many bits of information we&#8217;re missing.</p>
<p>But now suppose we have no idea.  Suppose we just start flipping the coin over and over, and seeing what happens.  Can we <i>estimate</i> the Shannon entropy?</p>
<p>Here&#8217;s a naive way to do it.  First, use your experimental data to estimate the probability that the coin lands heads-up.  Then, stick that probability into the formula for Shannon entropy.  For example, say we flip the coin 3 times and it lands heads-up once.  Then we can <i>estimate</i> the probability of it landing heads-up as 1/3, and tails-up as 2/3.  So we can estimate that the Shannon entropy is</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+-+%5Cfrac%7B1%7D%7B3%7D+%5Clog_2+%28%5Cfrac%7B1%7D%7B3%7D%29+-%5Cfrac%7B2%7D%7B3%7D+%5Clog_2+%28%5Cfrac%7B2%7D%7B3%7D%29+%3D+0.918...+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ - &#92;frac{1}{3} &#92;log_2 (&#92;frac{1}{3}) -&#92;frac{2}{3} &#92;log_2 (&#92;frac{2}{3}) = 0.918... } ' title='&#92;displaystyle{ - &#92;frac{1}{3} &#92;log_2 (&#92;frac{1}{3}) -&#92;frac{2}{3} &#92;log_2 (&#92;frac{2}{3}) = 0.918... } ' class='latex' /></p>
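<p>This naive &#8216;plug-in&#8217; recipe is easy to sketch in code.  Here&#8217;s a minimal Python version (my own illustration, not from any of the papers below):</p>

```python
from collections import Counter
from math import log2

def naive_entropy(samples):
    """Naive 'plug-in' estimate of Shannon entropy, in bits:
    use the empirical frequencies in place of the true probabilities."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

# Three flips, one landing heads-up: the example from the text.
print(naive_entropy(['H', 'T', 'T']))  # 0.918...
```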
<p>But it turns out that this approach systematically <i>underestimates</i> the Shannon entropy!  </p>
<p>Say we have a coin that lands heads up a certain fraction of the time, say <img src='https://s0.wp.com/latex.php?latex=p.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p.' title='p.' class='latex' />   And say we play this game: we flip our coin <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> times, see what we get, and estimate the Shannon entropy using the simple recipe I just illustrated.  </p>
<p>Of course, our estimate will depend on the luck of the game.  But on average, it will be <i>less</i> than the <i>actual</i> Shannon entropy, which is </p>
<p><img src='https://s0.wp.com/latex.php?latex=-+p+%5Clog_2+%28p%29+-+%281-p%29+%5Clog_2+%281-p%29++&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='- p &#92;log_2 (p) - (1-p) &#92;log_2 (1-p)  ' title='- p &#92;log_2 (p) - (1-p) &#92;log_2 (1-p)  ' class='latex' /></p>
<p>We can prove this mathematically.  But it shouldn&#8217;t be surprising.  After all, if <img src='https://s0.wp.com/latex.php?latex=n+%3D+1%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n = 1,' title='n = 1,' class='latex' /> we&#8217;re playing a game where we flip the coin just <i>once</i>.  And with this game, our naive estimate of the Shannon entropy will always be <i>zero!</i>  Each time we play the game, the coin will either land heads up 100% of the time, or tails up 100% of the time!  </p>
<p>If we play the game with more coin flips, the error gets less severe.  In fact it approaches zero as the number of coin flips gets ever larger, so that <img src='https://s0.wp.com/latex.php?latex=n+%5Cto+%5Cinfty.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;to &#92;infty.' title='n &#92;to &#92;infty.' class='latex' />  The case where you flip the coin just once is an extreme case&#8212;but extreme cases can be good to think about, because they can indicate what may happen in less extreme cases.</p>
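<p>A quick simulation makes the underestimate visible (a sketch of my own: average the naive estimate over many repetitions of the <i>n</i>-flip game and compare with the true entropy):</p>

```python
import random
from collections import Counter
from math import log2

def naive_entropy(samples):
    """Plug-in Shannon entropy estimate, in bits."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

random.seed(0)
p, n, trials = 0.5, 10, 20000

# Average the naive estimate over many plays of the n-flip game.
avg = sum(naive_entropy([random.random() < p for _ in range(n)])
          for _ in range(trials)) / trials

H_true = -p * log2(p) - (1 - p) * log2(1 - p)  # = 1 bit for a fair coin
print(avg, H_true)  # the average estimate comes out below the true value
```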
<p>One moral here is that naively generalizing on the basis of limited data can make you feel more sure you know what&#8217;s going on than you actually are.  </p>
<p>I hope you knew <i>that</i> already!</p>
<p>But we can also say, in a more technical way, that the naive way of estimating Shannon entropy is a <a href="http://en.wikipedia.org/wiki/Bias_of_an_estimator"><b>biased estimator</b></a>: the average value of the estimator is different from the value of the quantity being estimated.   </p>
<p>Here&#8217;s an example of an unbiased estimator.  Say you&#8217;re trying to estimate the probability that the coin will land heads up.  You flip it <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> times and see that it lands heads up <img src='https://s0.wp.com/latex.php?latex=m&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='m' title='m' class='latex' /> times.  You estimate that the probability is <img src='https://s0.wp.com/latex.php?latex=m%2Fn.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='m/n.' title='m/n.' class='latex' />  That&#8217;s the obvious thing to do, and it turns out to be unbiased.  </p>
<p>Statisticians like to think about <a href="http://en.wikipedia.org/wiki/Estimator">estimators</a>, and being unbiased is one way an estimator can be &#8216;good&#8217;.  Beware: it&#8217;s not the only way!  There are estimators that are unbiased, but whose standard deviation is so huge that they&#8217;re almost useless.  It can be better to have an estimate of something that&#8217;s more accurate, even though on average it&#8217;s a bit too low.  So sometimes, a biased estimator can be more useful than an unbiased estimator.  </p>
<p>Nonetheless, my ears perked up when Lou Jost mentioned that there is no unbiased estimator for Shannon entropy.  In rough terms, the moral is that:</p>
<blockquote><p>
<b>There&#8217;s no unbiased way to estimate how ignorant you are.</b>
</p></blockquote>
<p>I think this is important.  For example, it&#8217;s important because Shannon entropy is also used as a measure of <i>biodiversity</i>.  Instead of flipping a coin repeatedly and seeing which side lands up, now we go out and collect plants or animals, and see which species we find.  The relative abundance of different species defines a  probability distribution on the set of species.  In this language, the moral is:</p>
<blockquote><p>
<b>There&#8217;s no unbiased way to estimate biodiversity.<br />
</b></p></blockquote>
<p>But of course, this doesn&#8217;t mean we should give up.  We may just have to settle for an estimator that&#8217;s a bit biased!  And people have spent a bunch of time looking for estimators that are less biased than the naive one I just described.  </p>
<p>By the way, equating &#8216;biodiversity&#8217; with &#8216;Shannon entropy&#8217; is sloppy: there are <a href="https://johncarlosbaez.wordpress.com/2012/07/02/the-mathematics-of-biodiversity-part-4/">many measures of biodiversity</a>.  The Shannon entropy is just a special case of the <a href="http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy">R&eacute;nyi entropy</a>, which depends on a parameter <img src='https://s0.wp.com/latex.php?latex=q&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='q' title='q' class='latex' />: we get Shannon entropy when <img src='https://s0.wp.com/latex.php?latex=q+%3D+1.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='q = 1.' title='q = 1.' class='latex' />  </p>
<p>As <img src='https://s0.wp.com/latex.php?latex=q&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='q' title='q' class='latex' /> gets smaller, the R&eacute;nyi entropy gets more and more sensitive to rare species&#8212;or shifting back to the language of probability theory, rare events.  It&#8217;s the rare events that make Shannon entropy hard to estimate, so I imagine there should be theorems about estimators for R&eacute;nyi entropy, which say it gets harder to estimate as <img src='https://s0.wp.com/latex.php?latex=q&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='q' title='q' class='latex' /> gets smaller.  Do you know such theorems?</p>
<p>Also, I should add that biodiversity is better captured by the &#8216;Hill numbers&#8217;, which are functions of the R&eacute;nyi entropy, than by the R&eacute;nyi entropy itself.  (See <a href="https://johncarlosbaez.wordpress.com/2012/07/02/the-mathematics-of-biodiversity-part-4/">here</a> for the formulas.)  Since these functions are nonlinear, the lack of an unbiased estimator for R&eacute;nyi entropy doesn&#8217;t instantly imply the same for the Hill numbers.  So there are also some obvious questions about unbiased estimators for Hill numbers.  Do you know answers to those?</p>
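<p>For concreteness, here&#8217;s a small sketch of the standard formulas, assuming the usual conventions: the R&eacute;nyi entropy of order <i>q</i> is <i>log(&Sigma; p<sub>i</sub><sup>q</sup>)/(1 &#8722; q)</i>, with the <i>q</i> &#8594; 1 limit giving Shannon entropy, and the Hill number is its exponential:</p>

```python
from math import exp, log

def renyi_entropy(p, q):
    """Renyi entropy of order q (natural log); the q -> 1 limit
    is the Shannon entropy."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * log(pi) for pi in p if pi > 0)
    return log(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

def hill_number(p, q):
    """Hill number of order q: the exponential of the Renyi entropy,
    interpreted as an 'effective number of species'."""
    return exp(renyi_entropy(p, q))

p = [0.7, 0.2, 0.1]   # relative abundances of three species
print(hill_number(p, 0))  # 3.0: q = 0 just counts the species present
print(hill_number(p, 2))  # fewer 'effective' species once rarity is discounted
```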
<p>Here are some papers on estimators for entropy.  Most of these focus on estimating the Shannon entropy of a probability distribution on a finite set.  </p>
<p>This old classic has a proof that the &#8216;naive&#8217; estimator of Shannon entropy is biased, and estimates on the bias:</p>
<p>&bull; Bernard Harris, <a href="http://www.dtic.mil/dtic/tr/fulltext/u2/a020217.pdf">The statistical estimation of entropy in the non-parametric case</a>, Army Research Office, 1975.</p>
<p>He shows the bias goes to zero as we increase the number of samples: the number I was calling <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> in my coin flip example.  In fact he shows the bias goes to zero like <img src='https://s0.wp.com/latex.php?latex=O%281%2Fn%29.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O(1/n).' title='O(1/n).' class='latex' />  This is <a href="http://en.wikipedia.org/wiki/Big_O_notation">big O notation</a>, which means that as <img src='https://s0.wp.com/latex.php?latex=n+%5Cto+%2B%5Cinfty%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n &#92;to +&#92;infty,' title='n &#92;to +&#92;infty,' class='latex' /> the bias is bounded by some constant times <img src='https://s0.wp.com/latex.php?latex=1%2Fn.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='1/n.' title='1/n.' class='latex' /> This constant depends on the size of our finite set&#8212;or, if you want to do better, the <b>class number</b>, which is the number of elements on which our probability distribution is nonzero. </p>
<p>Using this idea, he shows that you can find a less biased estimator if you have a probability distribution <img src='https://s0.wp.com/latex.php?latex=p_i&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p_i' title='p_i' class='latex' /> on a finite set and you know that exactly <img src='https://s0.wp.com/latex.php?latex=k&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='k' title='k' class='latex' /> of these probabilities are nonzero.   To do this, just take the &#8216;naive&#8217; estimator I described earlier and add <img src='https://s0.wp.com/latex.php?latex=%28k-1%29%2F2n.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='(k-1)/2n.' title='(k-1)/2n.' class='latex' />  This is called the <b>Miller&#8211;Madow bias correction</b>.  The bias of this improved estimator goes to zero like <img src='https://s0.wp.com/latex.php?latex=O%281%2Fn%5E2%29.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='O(1/n^2).' title='O(1/n^2).' class='latex' /></p>
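<p>In code, the correction is a one-liner on top of the naive estimator.  A sketch (note the correction is usually stated for entropy in nats, i.e. using the natural logarithm, so I work in that base here):</p>

```python
from collections import Counter
from math import log

def naive_entropy_nats(samples):
    """Plug-in Shannon entropy estimate, in nats (natural log)."""
    n = len(samples)
    return -sum((c / n) * log(c / n) for c in Counter(samples).values())

def miller_madow(samples, k):
    """Naive estimate plus the Miller-Madow correction (k - 1)/(2n),
    where k is the known number of outcomes with nonzero probability."""
    return naive_entropy_nats(samples) + (k - 1) / (2 * len(samples))

flips = ['H', 'T', 'T']
print(naive_entropy_nats(flips), miller_madow(flips, k=2))
```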
<p>The problem is that in practice you don&#8217;t know ahead of time how many probabilities are nonzero!  In applications to biodiversity this would amount to knowing ahead of time how many species exist, before you go out looking for them. </p>
<p>But what about the theorem that there&#8217;s no unbiased estimator for Shannon entropy?  The best reference I&#8217;ve found is this:</p>
<p>&bull; Liam Paninski, <a href="http://www.stat.columbia.edu/~liam/research/abstracts/info_est-nc-abs.html">Estimation of entropy and mutual information</a>, <i>Neural Computation</i> <b>15</b> (2003) 1191-1254. </p>
<p>In Proposition 8 of Appendix A, Paninski gives a quick proof that there is no unbiased estimator of Shannon entropy for probability distributions on a finite set.  But his paper goes far beyond this.  Indeed, it seems like a pretty definitive modern discussion of the whole subject of estimating entropy.  Interestingly, this subject is dominated by neurobiologists studying entropy of signals in the brain!  So, lots of his examples involve brain signals.</p>
<p>Another overview, with tons of references, is this:</p>
<p>&bull; J. Beirlant, E. J. Dudewicz, L. Gy&ouml;rfi, and E. C. van der Meulen, <a href="http://www.its.caltech.edu/~jimbeck/summerlectures/references/Entropy%20estimation.pdf">Nonparametric entropy estimation: an overview</a>.  </p>
<p>This paper focuses on the situation where we don&#8217;t know ahead of time how many probabilities are nonzero:</p>
<p>&bull; Anne Chao and T.-J. Shen, <a href="http://wayback.archive.org/web/20110715000000*/http://chao.stat.nthu.edu.tw/paper/2003_eest_10_p429.pdf">Nonparametric estimation of Shannon&#8217;s index of diversity when there are unseen species in sample</a>, <i><a href="http://www.springerlink.com/content/j23110l474087421/">Environmental and Ecological Statistics</a></i> <b>10</b> (2003), 429&#8211;443.</p>
<p>In 2003 there was a conference on the problem of estimating entropy, whose webpage has useful information.  As you can see, it was dominated by neurobiologists:</p>
<p>&bull; <a href="http://menem.com/~ilya/pages/NIPS03/">Estimation of entropy and information of undersampled probability distributions: theory, algorithms, and applications to the neural code</a>, Whistler, British Columbia, Canada, 12 December 2003.</p>
<p>By the way, I was very confused for a while, because these guys claim to have found an unbiased estimator of Shannon entropy:</p>
<p>&bull; Stephen Montgomery Smith and Thomas Sch&uuml;rmann, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.62.6882&amp;rep=rep1&amp;type=pdf">Unbiased estimators for entropy and class number</a>.</p>
<p>However, their way of estimating entropy has a funny property: in the language of biodiversity, it&#8217;s only well-defined if our samples include at least one individual of each species.   So, we cannot compute this estimate for an <i>arbitrary</i> list of <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> samples.  This means it&#8217;s not an <a href="http://en.wikipedia.org/wiki/Estimator">estimator</a> in the usual sense&#8212;the sense that Paninski is using!  So it doesn&#8217;t really contradict Paninski&#8217;s result.</p>
<p>To wrap up, let me state Paninski&#8217;s result in a mathematically precise way. Suppose <img src='https://s0.wp.com/latex.php?latex=p&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p' title='p' class='latex' /> is a probability distribution on a finite set <img src='https://s0.wp.com/latex.php?latex=X&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='X' title='X' class='latex' />.  Suppose <img src='https://s0.wp.com/latex.php?latex=S&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='S' title='S' class='latex' /> is any number we can compute from <img src='https://s0.wp.com/latex.php?latex=p&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p' title='p' class='latex' />: that is, any real-valued function on the set of probability distributions.   We&#8217;ll be interested in the case where <img src='https://s0.wp.com/latex.php?latex=S&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='S' title='S' class='latex' /> is the <b>Shannon entropy</b>:</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+S+%3D+-%5Csum_%7Bx+%5Cin+X%7D+p%28x%29+%5C%2C+%5Clog+p%28x%29+%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ S = -&#92;sum_{x &#92;in X} p(x) &#92;, &#92;log p(x) }' title='&#92;displaystyle{ S = -&#92;sum_{x &#92;in X} p(x) &#92;, &#92;log p(x) }' class='latex' /></p>
<p>Here we can use whatever base for the logarithm we like: earlier I was using base 2, but that&#8217;s not sacred.  Define an <b>estimator</b> to be any function</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Chat%7BS%7D%3A+X%5En+%5Cto+%5Cmathbb%7BR%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;hat{S}: X^n &#92;to &#92;mathbb{R}' title='&#92;hat{S}: X^n &#92;to &#92;mathbb{R}' class='latex' /></p>
<p>The idea is that given <img src='https://s0.wp.com/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='n' title='n' class='latex' /> <b>samples</b> from the set <img src='https://s0.wp.com/latex.php?latex=X%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='X,' title='X,' class='latex' /> meaning points <img src='https://s0.wp.com/latex.php?latex=x_1%2C+%5Cdots%2C+x_n+%5Cin+X%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='x_1, &#92;dots, x_n &#92;in X,' title='x_1, &#92;dots, x_n &#92;in X,' class='latex' /> the estimator gives a number <img src='https://s0.wp.com/latex.php?latex=%5Chat%7BS%7D%28x_1%2C+%5Cdots%2C+x_n%29&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;hat{S}(x_1, &#92;dots, x_n)' title='&#92;hat{S}(x_1, &#92;dots, x_n)' class='latex' />.   This number is supposed to estimate some feature of the probability distribution <img src='https://s0.wp.com/latex.php?latex=p&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p' title='p' class='latex' />: for example, its entropy.   </p>
<p>If the samples are independent and distributed according to the distribution <img src='https://s0.wp.com/latex.php?latex=p%2C&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p,' title='p,' class='latex' /> the <b>sample mean of the estimator</b> will be</p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Cdisplaystyle%7B+%5Clangle+%5Chat%7BS%7D+%5Crangle+%3D+%5Csum_%7Bx_1%2C+%5Cdots%2C+x_n+%5Cin+X%7D+%5Chat%7BS%7D%28x_1%2C+%5Cdots%2C+x_n%29+%5C%2C+p%28x_1%29+%5Ccdots+p%28x_n%29+%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;displaystyle{ &#92;langle &#92;hat{S} &#92;rangle = &#92;sum_{x_1, &#92;dots, x_n &#92;in X} &#92;hat{S}(x_1, &#92;dots, x_n) &#92;, p(x_1) &#92;cdots p(x_n) } ' title='&#92;displaystyle{ &#92;langle &#92;hat{S} &#92;rangle = &#92;sum_{x_1, &#92;dots, x_n &#92;in X} &#92;hat{S}(x_1, &#92;dots, x_n) &#92;, p(x_1) &#92;cdots p(x_n) } ' class='latex' /></p>
<p>The <b>bias</b> of the estimator is the difference between the sample mean of the estimator and the actual value of <img src='https://s0.wp.com/latex.php?latex=S&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='S' title='S' class='latex' />: </p>
<p><img src='https://s0.wp.com/latex.php?latex=%5Clangle+%5Chat%7BS%7D+%5Crangle+-+S+&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;langle &#92;hat{S} &#92;rangle - S ' title='&#92;langle &#92;hat{S} &#92;rangle - S ' class='latex' /></p>
<p>The estimator <img src='https://s0.wp.com/latex.php?latex=%5Chat%7BS%7D&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='&#92;hat{S}' title='&#92;hat{S}' class='latex' /> is <b>unbiased</b> if this bias is zero for all <img src='https://s0.wp.com/latex.php?latex=p.&#038;bg=ffffff&#038;fg=000&#038;s=0' alt='p.' title='p.' class='latex' /></p>
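<p>For a tiny set and a small <i>n</i>, the sample mean&#8212;and hence the bias&#8212;can be computed exactly by summing over all <i>n</i>-tuples, just as in the formula above.  A sketch of my own, using the naive estimator and natural logs:</p>

```python
from collections import Counter
from itertools import product
from math import log, prod

def naive_entropy(samples):
    """Plug-in Shannon entropy estimate, in nats."""
    n = len(samples)
    return -sum((c / n) * log(c / n) for c in Counter(samples).values())

def sample_mean(estimator, p, n):
    """Exact sample mean of the estimator: sum over all n-tuples of
    outcomes, each weighted by p(x1) * ... * p(xn)."""
    X = range(len(p))
    return sum(estimator(xs) * prod(p[x] for x in xs)
               for xs in product(X, repeat=n))

p = [0.9, 0.1]
S = -sum(pi * log(pi) for pi in p)           # true Shannon entropy
bias = sample_mean(naive_entropy, p, 3) - S
print(bias)  # negative: the naive estimator underestimates
```

With <i>n</i> = 1 the bias is exactly &#8722;<i>S</i>, matching the extreme case discussed earlier: a single flip always yields a naive estimate of zero.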
<p>Proposition 8 of Paninski&#8217;s paper says there exists no unbiased estimator for entropy!  The proof is very short&#8230; </p>
<p>Okay, that&#8217;s all for today.</p>
<p>I&#8217;m back in Singapore now; I learned so much at the <a href="http://www.crm.cat/en/Activities/Pages/ActivityDescriptions/Exploratory-Conference-on-the-Mathematics-of-Biodiversity.aspx">Mathematics of Biodiversity</a> conference that there&#8217;s no way I&#8217;ll be able to tell you all that information.   I&#8217;ll try to write a few more blog posts, but please be aware that my posts so far give a hopelessly biased and idiosyncratic view of the conference, which would be almost unrecognizable to most of the participants.  There are a lot of important themes I haven&#8217;t touched on at all&#8230; while this business of entropy estimation barely came up: I just find it interesting!</p>
<p>If more of you blogged more, we wouldn&#8217;t have this problem. </p>
]]></html><thumbnail_url><![CDATA[https://i2.wp.com/upload.wikimedia.org/wikipedia/commons/6/63/PupienusSest.jpg?fit=440%2C330]]></thumbnail_url><thumbnail_height><![CDATA[165]]></thumbnail_height><thumbnail_width><![CDATA[160]]></thumbnail_width></oembed>