<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[The ryg blog]]></provider_name><provider_url><![CDATA[https://fgiesen.wordpress.com]]></provider_url><author_name><![CDATA[fgiesen]]></author_name><author_url><![CDATA[https://fgiesen.wordpress.com/author/fgiesen/]]></author_url><title><![CDATA[Finish your derivations,&nbsp;please]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>Every time you ship a product with half-assed math in it, God kills a fluffy kitten by feeding it to an ill-tempered panda bear (don&#8217;t ask me why &#8211; I think it&#8217;s bizarre and oddly specific, but I have no say in the matter). There&#8217;s tons of ways to make your math way more complicated (and expensive) than it needs to be, but most of the time it&#8217;s the same few common mistakes repeated ad nauseam. Here&#8217;s a small checklist of things to look out for that will make your life easier and your code better:</p>
<ul>
<li><b>Symmetries</b>. If your problem has some obvious symmetry, you can usually exploit it in the solution. If it has radial symmetry around some point, move that point to the origin. If there&#8217;s some coordinate system where the constraints (or the problem statement) get a lot simpler, try solving the problem in that coordinate system. This isn&#8217;t guaranteed to win you anything, but if you haven&#8217;t checked, you should &#8211; if symmetry leads to a solution, the solutions are usually very nice, clean and efficient.</li>
<li><b>Geometry</b>. If your problem is geometrical, draw a picture first, even if you know how to solve it algebraically. Approaches that use the geometry of the problem can often make use of symmetries that aren&#8217;t obvious when you write it in equations. More importantly, in geometric derivations, most of the quantities you compute actually have geometrical meaning (points, distances, ratios of lengths, etc). Very useful when debugging to get a quick sanity check. In contrast, intermediate values in the middle of algebraic manipulations rarely have any meaning within the context of the problem &#8211; you have to treat the solver essentially as a black box.</li>
<li><b>Angles</b>. Avoid them. They&#8217;re rarely what you actually want, they tend to introduce a lot of trigonometric functions into the code, you suddenly need to worry about parametrization artifacts (e.g. dealing with wraparound), and code using angles is generally harder to read/understand/debug than equivalent code using vectors (and slower, too).</li>
<li><b>Absolute Angles</b>. Particularly, never never <em>ever</em> use angles relative to some arbitrary absolute coordinate system. They <em>will</em> wrap around to negative at some point, and suddenly something breaks somewhere and nobody knows why. And if you&#8217;re about to introduce some arbitrary coordinate system just to determine some angles, stop and think very hard if that&#8217;s really a good idea. (If, upon reflection, you&#8217;re still undecided, <a href="http://www.nooooooooooooooo.com/">this website</a> has your answer).</li>
<li><b>Did I mention angles?</b> There&#8217;s one particular case of angle-mania that really pisses me off: Using inverse trigonometric functions immediately followed by sin / cos. atan2 / sin / cos: World&#8217;s most expensive 2D vector normalize. Using acos on the result of a dot product just to get the corresponding sin/tan? Time to brush up your <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Inverse_trigonometric_functions">trigonometric identities</a>. A particularly bad offender can be found <a href="http://wiki.gamedev.net/index.php/D3DBook:(Lighting)_Oren-Nayar">here</a> &#8211; the relevant section from the simplified shader is this:
<pre>float alpha = max( acos( dot( v, n ) ), acos( dot( l, n ) ) );
float beta  = min( acos( dot( v, n ) ), acos( dot( l, n ) ) );
C = sin(alpha) * tan(beta);</pre>
<p>Ouch! If you use some trig identities and the fact that acos is monotonically decreasing over its domain, this reduces to:</p>
<pre>float vdotn = dot(v, n);
float ldotn = dot(l, n);
C = sqrt((1.0 - vdotn*vdotn) * (1.0 - ldotn*ldotn))
  / max(vdotn, ldotn);</pre>
<p>&#8230;and suddenly there&#8217;s no need to use a lookup texture anymore (and by the way, this has way higher accuracy too). Come on, people! You don&#8217;t need to derive it by hand (although that&#8217;s not hard either), you don&#8217;t need to buy some formula collection, it&#8217;s all on Wikipedia &#8211; spend the two minutes and look it up!</p></li>
<li><b>Elementary linear algebra</b>. If you build some matrix by concatenating several transforms and it&#8217;s a performance bottleneck, don&#8217;t get all SIMD on it; try the obvious thing first: do the matrix multiply symbolically and generate the result directly instead of doing the matrix multiplies every time. (But state clearly in comments which transforms you multiplied together in what order to get your result, or suffer the righteous wrath of the next person to touch that code). Don&#8217;t invert matrices that you know are orthogonal, just use the transpose! Don&#8217;t use 4&#215;4 matrices everywhere when all your transforms (except for the projection matrix) are affine. It&#8217;s not rocket science.</li>
<li><b>Unnecessary numerical differentiation</b>. Numerical differentiation is numerically unstable, and notoriously difficult to get robust. It&#8217;s also often completely unnecessary. If you&#8217;re dealing with analytically defined functions, compute the derivative directly &#8211; no robustness issues, and it&#8217;s usually faster too (&#8230;but remember the chain rule if you warp your parameter on the way in).</li>
</ul>
<p>Short version: Don&#8217;t just stick with the first implementation that works (usually barely). Once you have a solution, at least spend five minutes looking over it to check whether you missed any obvious simplifications. If you won&#8217;t do it for your own sake, think of the kittens!</p>
]]></html></oembed>