<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[The ryg blog]]></provider_name><provider_url><![CDATA[https://fgiesen.wordpress.com]]></provider_url><author_name><![CDATA[fgiesen]]></author_name><author_url><![CDATA[https://fgiesen.wordpress.com/author/fgiesen/]]></author_url><title><![CDATA[Reading bits in far too many ways (part&nbsp;2)]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>(Continued from <a href="https://fgiesen.wordpress.com/2018/02/19/reading-bits-in-far-too-many-ways-part-1/">part 1</a>.)</p>
<p>Last time, I established the basic problem and went through various ways of doing shifting and masking, and the surprising difficulties inherent therein. The &#8220;bit extract&#8221; style I picked is based on a stateless primitive, which made it convenient to start with because there&#8217;s no loop invariants involved. </p>
<p>This time, we&#8217;re going to switch to the stateful style employed by most bit readers you&#8217;re likely to encounter (because it ends up cheaper). We&#8217;re also going to switch from a monolithic <code>getbits</code> function to something a bit more fine-grained. But let&#8217;s start at the beginning.</p>
<h3>Variant 1: reading the input one byte at a time</h3>
<p>Our &#8220;extract&#8221;-style reader assumed the entire bitstream was available in memory ahead of time. This is not always possible or desirable; so let&#8217;s investigate the other extreme, a bit reader that requests additional bytes one at a time, and only when they&#8217;re needed.</p>
<p>In general, our stateful variants will dribble in input a few bytes at a time, and have partially processed bytes lying around. We need to store that data in a variable that I will call the &#8220;bit buffer&#8221;:</p>
<pre>
// Again, this is understood to be per-bit-reader state or local
// variables, not globals.
uint64_t bitbuf = 0;   // value of bits in buffer
int      bitcount = 0; // number of bits in buffer
</pre>
<p>While processing input, we will always be seesawing between putting more bits into that buffer when we&#8217;re running low, and then consuming bits from that buffer while we can.</p>
<p>Without further ado, let&#8217;s do our first stateful <code>getbits</code> implementation, reading one byte at a time, and starting with MSB-first this time:</p>
<pre>
// Invariant: there are "bitcount" bits in "bitbuf", stored from the
// MSB downwards; the remaining bits in "bitbuf" are 0.

uint64_t getbits1_msb(int count) {
    assert(count &gt;= 1 &amp;&amp; count &lt;= 57);

    // Keep reading extra bytes until we have enough bits buffered
    // Big endian; our current bits are at the top of the word,
    // new bits get added at the bottom.
    while (bitcount &lt; count) {
        bitbuf |= (uint64_t)getbyte() &lt;&lt; (56 - bitcount);
        bitcount += 8;
    }

    // We now have enough bits present; the most significant
    // "count" bits in "bitbuf" are our result.
    uint64_t result = bitbuf &gt;&gt; (64 - count);

    // Now remove these bits from the buffer
    bitbuf &lt;&lt;= count;
    bitcount -= count;

    return result;
}
</pre>
<p>As before, we can get rid of the <code>count</code>≥1 requirement by changing the way we grab the result bits, as explained in the last part. This is the last time I&#8217;ll mention this; just keep in mind that whenever I show any algorithm variant here, the variations from last time automatically apply.</p>
<p>The idea here is quite simple: first, we check whether there&#8217;s enough bits in our buffer to satisfy the request immediately. If not, we dribble in extra bytes one at a time until we have enough. <code>getbyte()</code> here is understood to ideally use some <a href="https://fgiesen.wordpress.com/2011/11/21/buffer-centric-io/">buffered IO mechanism</a> that just boils down to dereferencing and incrementing a pointer on the hot path; it should <em>not</em> be a function call or anything expensive if you can avoid it. Because we insert 8 bits at a time, the maximum number of bits we can read in a single call is 57 bits; that&#8217;s the largest number of bits we can refill the buffer to without risking anything dropping out.</p>
<p>After that, we grab the top <code>count</code> bits from our buffer, then shift them out. The invariant we maintain here is that the first unconsumed bit is kept at the MSB of the buffer.</p>
<p>The other thing I want you to notice is that this process breaks down neatly into three separate smaller operations, which I&#8217;m going to call &#8220;refill&#8221;, &#8220;peek&#8221; and &#8220;consume&#8221;, respectively. The &#8220;refill&#8221; phase ensures that a certain given minimum number of bits is present in the buffer; &#8220;peek&#8221; returns the next few bits in the buffer, without discarding them; and &#8220;consume&#8221; removes bits without looking at them. These all turn out to be individually useful operations; to show how things shake out, here&#8217;s the equivalent LSB-first algorithm, factored into smaller parts:</p>
<pre>
// Invariant: there are "bitcount" bits in "bitbuf", stored from the
// LSB upwards; the remaining bits in "bitbuf" are 0.
void refill1_lsb(int count) {
    assert(count &gt;= 0 &amp;&amp; count &lt;= 57);
    // New bytes get inserted at the top end.
    while (bitcount &lt; count) {
        bitbuf |= (uint64_t)getbyte() &lt;&lt; bitcount;
        bitcount += 8;
    }
}

uint64_t peekbits1_lsb(int count) {
    assert(bitcount &gt;= count);
    return bitbuf &amp; ((1ull &lt;&lt; count) - 1);
}

void consume1_lsb(int count) {
    assert(bitcount &gt;= count);
    bitbuf &gt;&gt;= count;
    bitcount -= count;
}

uint64_t getbits1_lsb(int count) {
    refill1_lsb(count);
    uint64_t result = peekbits1_lsb(count);
    consume1_lsb(count);
    return result;
}
</pre>
<p>Writing <code>getbits</code> as the composition of these three smaller primitives is not always optimal. For example, if you use the rotate method for MSB-first bit buffers, you really want to have only rotate shared by the <code>peekbits</code> and <code>consume</code> phases; an optimal implementation shares that work between the two. However, breaking it down into these individual steps is still a useful thing to do, because once you conceptualize these three phases as distinct things, you can start putting them together differently.</p>
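<p>To make that concrete, here is a sketch (my code, under the variant-1 MSB-first invariants, with an in-memory stand-in for <code>getbyte</code>) of a fused <code>getbits</code> where the peek and consume phases share a single rotate:</p>

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

// In-memory stand-in for getbyte() (illustration only).
static const uint8_t input[] = { 0xDE, 0xAD, 0xBE, 0xEF };
static size_t inpos = 0;
static uint8_t getbyte(void) { return input[inpos++]; }

static uint64_t bitbuf = 0;   // bits stored from the MSB downwards
static int bitcount = 0;

// Rotate left; written so it is well-defined for k in [0, 63].
static uint64_t rotl64(uint64_t x, int k) {
    return (x << (k & 63)) | (x >> ((64 - k) & 63));
}

// MSB-first getbits where peek and consume share one rotate: after the
// rotate, the requested bits have wrapped around to the bottom of the
// word, so "peek" is a mask and "consume" just clears those bits.
uint64_t getbits_rot_msb(int count) {
    assert(count >= 1 && count <= 57);
    while (bitcount < count) {  // refill, as in getbits1_msb
        bitbuf |= (uint64_t)getbyte() << (56 - bitcount);
        bitcount += 8;
    }
    bitbuf = rotl64(bitbuf, count);      // the one shared rotate
    uint64_t mask = (1ull << count) - 1;
    uint64_t result = bitbuf & mask;     // "peek"
    bitbuf &= ~mask;                     // "consume": restore the invariant
    bitcount -= count;
    return result;
}
```

<p>Reading nibbles of <code>0xDE 0xAD ...</code> MSB-first with this reader yields 0xD, then 0xE, then 0xAD for an 8-bit read.</p>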
<h3>Lookahead</h3>
<p>The most important such transform is amortizing refills over multiple decode operations. Let&#8217;s start with a simple toy example: say we want to read our three example bit fields from the last part:</p>
<pre>
    a = getbits(4);
    b = getbits(3);
    c = getbits(5);
</pre>
<p>With <code>getbits</code> implemented as above, this will do the refill check (and potentially some actual refilling) up to 3 times. But this is silly; we know in advance that we&#8217;re going to be reading 4+3+5=12 bits in one go, so we might as well grab them all at once:</p>
<pre>
    refill(4+3+5);
    a = getbits_no_refill(4);
    b = getbits_no_refill(3);
    c = getbits_no_refill(5);
</pre>
<p>where <code>getbits_no_refill</code> is yet another getbits variant that does <code>peekbits</code> and <code>consume</code>, but, as the name suggests, no refilling. And once you get rid of the refill loop between the individual <code>getbits</code> invocations, you&#8217;re just left with straight-line integer code, which compilers are good at optimizing further. That said, the all-fixed-length case is a bit of a cheap shot; it gets far more interesting when we&#8217;re not sure exactly how many bits we&#8217;re actually going to consume, like in this example:</p>
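<p>As a minimal sketch of what <code>getbits_no_refill</code> might look like for the LSB-first reader (my code; the demo state stands in for a prior <code>refill(4+3+5)</code> having buffered 12 bits), it is just the peek and consume steps fused:</p>

```c
#include <assert.h>
#include <stdint.h>

// Demo state: pretend refill(4+3+5) just buffered the 12 bits 0x5AC.
static uint64_t bitbuf = 0x5AC;
static int bitcount = 12;

// peek + consume, with no refill check: the caller guarantees, via an
// earlier refill(n), that at least "count" bits are already buffered.
uint64_t getbits_no_refill_lsb(int count) {
    assert(count >= 0 && count <= bitcount);
    uint64_t result = bitbuf & ((1ull << count) - 1);
    bitbuf >>= count;
    bitcount -= count;
    return result;
}
```

<p>With the demo bits above, reading 4, 3 and 5 bits in sequence yields 12, 2 and 11.</p>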
<pre>
    temp = getbits(5);
    if (temp &lt; 28)
        result = temp;
    else
        result = 28 + (temp - 28)*16 + getbits(4);
</pre>
<p>This is a simple variable-length code where values from 0 through 27 are sent in 5 bits, and values from 28 through 91 are sent in 9 bits. The point being, in this case, we don&#8217;t know in advance how many bits we&#8217;re eventually going to consume. We do know that it&#8217;s going to be no more than 9 bits, though, so we can still make sure we only refill once:</p>
<pre>
    refill(9);
    temp = getbits_no_refill(5);
    if (temp &lt; 28)
        result = temp;
    else
        result = 28 + (temp - 28)*16 + getbits_no_refill(4);
</pre>
<p>In fact, if you want to, you can go wild and split operations even more, so that both execution paths only <code>consume</code> bits once. For example, assuming a MSB-first bit buffer, we could write this small decoder as follows:</p>
<pre>
    refill(9);
    temp = peekbits(5);
    if (temp &lt; 28) {
        result = temp;
        consume(5);
    } else {
        // The "upper" and "lower" code are back-to-back,
        // and combine exactly the way we want! Synergy!
        result = getbits_no_refill(9) - 28*16 + 28;
    }
</pre>
<p>This kind of micro-tweaking is <em>really</em> not recommended outside very hot loops, but as I mentioned in the previous part, some of these decoder loops are quite hot indeed, and in that case saving a few instructions here and a few instructions there adds up. One particularly important technique for decoding variable-length codes (e.g. Huffman codes) is to peek ahead by some fixed number of bits and then do a table lookup based on the result. The table entry then lists what the decoded symbol should be, and how many bits to consume (i.e. how many of the bits we just peeked at really belonged to the symbol). This is <em>significantly</em> faster than reading the code a bit or two at a time and consulting a Huffman tree at every step (the method sadly taught in many textbooks and other introductory texts.)</p>
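<p>To make the table-lookup idea concrete, here&#8217;s a toy sketch (my own example, not code from any real decoder): a 4-symbol prefix code with a maximum code length of 3 bits, decoded by peeking 3 bits, indexing an 8-entry table, and consuming only as many bits as the entry says:</p>

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

// Toy prefix code (my example): 'a'=0, 'b'=10, 'c'=110, 'd'=111, MSB-first.
// The longest code is 3 bits, so we peek 3 bits and index an 8-entry table.
typedef struct { char sym; int len; } TableEntry;
static const TableEntry table[8] = {
    {'a',1},{'a',1},{'a',1},{'a',1},  // 0xx -> 'a'
    {'b',2},{'b',2},                  // 10x -> 'b'
    {'c',3},{'d',3}                   // 110 -> 'c', 111 -> 'd'
};

// Encoded form of "abcda": 0 10 110 111 0, padded with zero bits.
static const uint8_t input[] = { 0x5B, 0x80 };
static size_t inpos = 0;
static uint64_t bitbuf = 0;   // MSB-first bit buffer, as in getbits1_msb
static int bitcount = 0;

static void refill_msb(int count) {
    while (bitcount < count) {
        bitbuf |= (uint64_t)input[inpos++] << (56 - bitcount);
        bitcount += 8;
    }
}

char decode_symbol(void) {
    refill_msb(3);                       // ensure a full max-length code is in
    TableEntry e = table[bitbuf >> 61];  // peek 3 bits, look up symbol + length
    bitbuf <<= e.len;                    // consume only the bits that actually
    bitcount -= e.len;                   //   belonged to the symbol
    return e.sym;
}
```

<p>Decoding the two bytes above yields &#8220;abcda&#8221;; every symbol costs one refill check, one table load and one shift, with no per-bit tree walking.</p>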
<p>There&#8217;s a problem, though. The technique of peeking ahead a bit (no pun intended) and then later deciding how many bits you actually want to consume is quite powerful, but we&#8217;ve just changed the rules: the <code>getbits</code> implementation above is careful to only read extra bytes when it&#8217;s strictly necessary. But our modified variable-length code reader example above always refills so the buffer contains at least 9 bits, even when we&#8217;re only going to consume 5 bits in the end. Depending on where that refill happens, it might even cause us to read past the end of the actual data stream.</p>
<p>In short, we&#8217;ve introduced <em>lookahead</em>. The modified code reader starts grabbing extra input bytes before it&#8217;s sure whether it needs them. This has many advantages, but the trade-off is that it may cause us to read past the logical end of a bit stream, which means we have to make sure this case is handled correctly. It should certainly never crash or read out of bounds; but beyond that, it implies certain things about the way input buffering and the framing layer have to work.</p>
<p>Namely, if we&#8217;re going to do any lookahead, we need to figure out a way to handle this. The primary options are as follows:</p>
<ul>
<li>We can punt and make it someone else&#8217;s problem by just requiring that everyone hand us valid data with some extra padding bytes after the end. This makes our lives easier but is an inconvenience for everyone else.</li>
<li>We can arrange things so the outer framing layer that feeds bytes to our <code>getbits</code> routine knows when the real data stream is over (either due to some escape mechanism or because the size is sent explicitly); then we can either stop doing any lookahead and switch to a more careful decoder when we&#8217;re getting close to the end, or pad the stream with some dummy value after its end (zeroes being the most popular choice).</li>
<li>We can make sure that whatever bytes we might grab during lookahead while decoding a valid stream are still part of our overall byte stream that&#8217;s being processed by our code. For example, if you use a 64-bit bit buffer, we can finagle our way around the problem by requiring that some 8-byte footer (say a checksum or something) be present right after a valid bit stream. So while our bit buffer might overshoot, it&#8217;s still data that&#8217;s ultimately going to be consumed by the decoder, which simplifies the logistics considerably.</li>
<li>Barring that, whatever I/O buffering layer we&#8217;re using needs to allow us to return some extra bytes we didn&#8217;t actually consume into the buffer. Namely, whatever lookahead bytes we have left in our bit buffer after we&#8217;re done decoding need to be returned to the buffer for whoever is going to read it next. This is essentially what the C standard library function <a href="http://pubs.opengroup.org/onlinepubs/009696899/functions/ungetc.html"><code>ungetc</code></a> does, except you&#8217;re not allowed to call <code>ungetc</code> more than once, and we might need to. So going along this route essentially dooms you to taking charge of IO buffer management.</li>
</ul>
<p>I won&#8217;t sugarcoat it, all of these options are a pain in the neck, some more so than others: hand-waving it away by putting something else at the end is easiest, handling it in some outer framing layer isn&#8217;t too bad, and taking over all IO buffering so you can un-read multiple bytes is positively hideous, but you don&#8217;t have great options when you don&#8217;t control your framing. In the past, I&#8217;ve written <a href="https://fgiesen.wordpress.com/2011/11/21/buffer-centric-io/">posts</a> about <a href="https://fgiesen.wordpress.com/2016/01/02/end-of-buffer-checks-in-decompressors/">handy techniques</a> that might help you in this context; and in some implementations you can work around it, for example by setting <code>bitcount</code> to a huge value just after inserting the final byte from the stream. But in general, if you want lookahead, you&#8217;re going to have to put some amount of work into it. That said, the winnings tend to be fairly good, so it&#8217;s not all for nothing.</p>
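<p>As one concrete illustration of the <code>bitcount</code> workaround mentioned above, here&#8217;s a sketch (my code, using the variant-1 LSB-first invariants over a one-byte demo stream): once the real input runs out, the refill inflates <code>bitcount</code> so later reads see implicit zero padding instead of walking off the end:</p>

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

// Variant-1 LSB-first reader over a tiny demo stream (my sketch): once
// the real input is exhausted, refill pretends the buffer is full, so
// subsequent reads return zero padding rather than requesting bytes.
static const uint8_t input[] = { 0xA7 };
static size_t inpos = 0;
static uint64_t bitbuf = 0;
static int bitcount = 0;

static void refill_padded_lsb(int count) {
    while (bitcount < count) {
        if (inpos < sizeof(input)) {
            bitbuf |= (uint64_t)input[inpos++] << bitcount;
            bitcount += 8;
        } else {
            bitcount = 64;  // logically "infinite" zero bits follow the stream
        }
    }
}

uint64_t getbits_padded_lsb(int count) {
    assert(count >= 0 && count <= 57);
    refill_padded_lsb(count);
    uint64_t result = bitbuf & ((1ull << count) - 1);
    bitbuf >>= count;
    bitcount -= count;
    return result;
}
```

<p>A real decoder would also track that padding was consumed, so it can report truncated streams as errors rather than silently decoding zeros.</p>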
<h3>Variant 2: when you <em>really</em> want to read 64 bits at once</h3>
<p>The methods I&#8217;ve discussed so far both have some &#8220;slop&#8221; from working in byte granularity. The extract-style bit reader started with a full 64-bit read but then had to shift by up to 7 positions to discard the part of the current byte that&#8217;s already consumed, and the <code>getbits1</code> above inserts one byte at a time into the bit buffer; if there&#8217;s 57 bits already in the buffer, there&#8217;s no space for another byte (because that would make 65 bits, more than the width of our buffer), so that&#8217;s the maximum width the <code>getbits1</code> method supports. Now 57 bits is a useful amount; but if you&#8217;re doing this on a 32-bit platform, the equivalent magic number is 25 bits (32-7), and that&#8217;s definitely on the low side, enough so to be inconvenient sometimes.</p>
<p>Luckily, if you want the full width, there&#8217;s a way to do it (like the rotate-and-mask technique for MSB-first bit buffers, I learned this at RAD). At this point, I think you get the correspondence between the MSB-first and LSB-first methods, so I&#8217;m only going to show one per variant. Let&#8217;s do LSB-first for this one:</p>
<pre>
// Invariant: "bitbuf" contains "bitcount" bits, starting from the
// LSB up; 1 &lt;= bitcount &lt;= 64
uint64_t bitbuf = 0;     // value of bits in buffer
int      bitcount = 0;   // number of bits in buffer
uint64_t lookahead = 0;  // next 64 bits
bool     have_lookahead = false;

// Must do this to prime the pump!
void initialize() {
    bitbuf = get_uint64LE();
    bitcount = 64;
    have_lookahead = false;
}

void ensure_lookahead() {
    // grabs the lookahead word, if we don't have
    // it already.
    if (!have_lookahead) {
        lookahead = get_uint64LE();
        have_lookahead = true;
    }
}

uint64_t peekbits2_lsb(int count) {
    assert(bitcount &gt;= 1);
    assert(count &gt;= 0 &amp;&amp; count &lt;= 64);

    if (count &lt;= bitcount) { // enough bits in buf
        return bitbuf &amp; width_to_mask_table[count];
    } else {
        ensure_lookahead();

        // combine current bitbuf with lookahead
        // (lookahead bits go above end of current buf)
        uint64_t next_bits = bitbuf;
        next_bits |= lookahead &lt;&lt; bitcount;
        return next_bits &amp; width_to_mask_table[count];
    }
}

void consume2_lsb(int count) {
    assert(bitcount &gt;= 1);
    assert(count &gt;= 0 &amp;&amp; count &lt;= 64);

    if (count &lt; bitcount) { // still in current buf
        // just shift the bits out
        bitbuf &gt;&gt;= count;
        bitcount -= count;
    } else { // all of current buf consumed
        ensure_lookahead();
         
        // we advanced fully into the lookahead word
        int lookahead_consumed = count - bitcount;
        bitbuf = lookahead &gt;&gt; lookahead_consumed;
        bitcount = 64 - lookahead_consumed;
        have_lookahead = false;
    }

    assert(bitcount &gt;= 1);
}

uint64_t getbits2_lsb(int count) {
    uint64_t result = peekbits2_lsb(count);
    consume2_lsb(count);
    return result;
}
</pre>
<p>This one is a bit more complicated than the ones we&#8217;ve seen before, and needs an explicit initialization step to make the invariants work out <em>just right</em>. It also involves several extra branches compared to the earlier variants, which makes it less than ideal for deeply pipelined machines (a category that includes desktop PCs). Also note that I&#8217;m using the <code>width_to_mask_table</code> again, and not just for show: none of the arithmetic expressions we saw last time for computing the mask of a given width work for the full 0&#8211;64 range of allowed widths on any common 64-bit architecture that&#8217;s not POWER, and even there only if we ignore that they invoke undefined behavior, which we <em>really</em> shouldn&#8217;t.</p>
<p>The underlying idea is fairly simple: instead of just one bit buffer, we keep track of two values. We have however many bits are left of the last 64-bit value we read, and when that&#8217;s not enough for a <code>peekbits</code>, we grab the next 64-bit value from the input stream (via some externally-implemented <code>get_uint64LE()</code>) to give us the bits we&#8217;re missing. Likewise, <code>consume</code> checks whether there will still be any bits left in the current input buffer after consuming <code>width</code> bits. If not, we switch over to the bits from the lookahead value (shifting out however many of them we consumed) and clear the <code>have_lookahead</code> flag to indicate that what used to be our lookahead value is now just the contents of our bit buffer.</p>
<p>There are some contortions in this code to ensure we don&#8217;t do out-of-range (undefined-behavior-inducing) shifts. For example, note how <code>peekbits</code> tests whether <code>count &lt;= bitcount</code> to detect the bits-present-in-buffer case, whereas <code>consume</code> uses <code>count &lt; bitcount</code>. This is not an accident: in <code>peekbits</code>, the <code>next_bits</code> calculation involves shifting <code>lookahead</code> left by <code>bitcount</code>. Since it only happens in the path where <code>bitcount</code> &lt; <code>count</code> &le; 64, that means that <code>bitcount &lt; 64</code>, and we&#8217;re safe. In <code>consume</code>, the situation is reversed: we shift by <code>lookahead_consumed = count - bitcount</code>. The condition around the block guarantees that <code>lookahead_consumed</code> &ge; 0; in the other direction, because <code>count</code> is at most 64 and <code>bitcount</code> is at least 1, we have <code>lookahead_consumed</code> &le; 64 &#8211; 1 = 63. That said, to paraphrase Knuth: beware of bugs in the above code; I&#8217;ve only proved it correct, not tried it.</p>
<p>This technique has another thing going for it besides supporting bigger bit field widths: note how it always reads full 64-bit uints at a time. Variant 1 above only reads bytes at a time, but requires a refill loop; the various branchless variants we&#8217;ll see later implicitly rely on the target CPU supporting fast unaligned reads. This version alone has the distinction of doing reads of a single size and with consistent alignment, which makes it more attractive on targets that don&#8217;t support fast unaligned reads, such as many old-school RISC CPUs.</p>
<p>Finally, as usual, there&#8217;s several more variations here that I&#8217;m not showing. For example, if you happen to have the data you&#8217;re decoding fully in memory, there&#8217;s no reason to bother with the boolean <code>have_lookahead</code> flag; just keep a pointer to the current lookahead word, and bump that pointer up whenever the current lookahead is consumed.</p>
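<p>A sketch of that pointer-based simplification (my code and naming, assuming a little-endian target and the whole stream in memory): the lookahead word is simply the 8 bytes at <code>lookahead_ptr</code>, and consuming past the current buffer bumps the pointer instead of clearing a flag:</p>

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Variant 2 without the have_lookahead flag (my code and naming): with
// the whole stream in memory, "lookahead" is simply the 8 bytes at
// lookahead_ptr, and consuming past the current buffer bumps the pointer.
static const uint8_t *lookahead_ptr;  // next 8 bytes of the in-memory stream
static uint64_t bitbuf = 0;
static int bitcount = 0;

static uint64_t read64LE(const uint8_t *p) {
    uint64_t v;
    memcpy(&v, p, 8);  // assumes a little-endian target, for brevity
    return v;
}

void init2b(const uint8_t *stream) {  // prime the pump, as in initialize()
    bitbuf = read64LE(stream);
    bitcount = 64;
    lookahead_ptr = stream + 8;
}

void consume2b_lsb(int count) {
    assert(count >= 0 && count <= 64 && bitcount >= 1);
    if (count < bitcount) {           // still within the current buffer
        bitbuf >>= count;
        bitcount -= count;
    } else {                          // advanced fully into the lookahead word
        int lookahead_consumed = count - bitcount;
        bitbuf = read64LE(lookahead_ptr) >> lookahead_consumed;
        bitcount = 64 - lookahead_consumed;
        lookahead_ptr += 8;           // old lookahead word is now the buffer
    }
}

// Tiny demo stream, for illustration only.
static const uint8_t demo[17] = { 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 };
```

<p>The peek side works as in <code>peekbits2_lsb</code>, reading the word at <code>lookahead_ptr</code> when <code>count</code> exceeds <code>bitcount</code>.</p>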
<h3>Variant 3: bit extraction redux</h3>
<p>The original bit extraction-based bit reader from the previous part was a bit on the expensive side. But as long as we&#8217;re OK with the requirement that the entire input stream be in memory at once, we can wrangle it into the refill/peek/consume pattern to get something useful. It still gives us a bit reader that looks ahead (and hence has the resulting difficulties), but such is life. For this one, let&#8217;s do MSB again:</p>
<pre>
const uint8_t *bitptr; // Pointer to current byte
uint64_t       bitbuf = 0; // last 64 bits we read
int            bitpos = 0; // how many of those bits we've consumed

void refill3_msb() {
    assert(bitpos &lt;= 64);

    // Advance the pointer by however many full bytes we consumed
    bitptr += bitpos &gt;&gt; 3;

    // Refill
    bitbuf = read64BE(bitptr);

    // Number of bits in the current byte we've already consumed
    // (we took care of the full bytes; these are the leftover
    // bits that didn't make a full byte.)
    bitpos &amp;= 7;
}

uint64_t peekbits3_msb(int count) {
    assert(bitpos + count &lt;= 64);
    assert(count &gt;= 1 &amp;&amp; count &lt;= 64 - 7);

    // Shift out the bits we've already consumed
    uint64_t remaining = bitbuf &lt;&lt; bitpos;

    // Return the top "count" bits
    return remaining &gt;&gt; (64 - count);
}

void consume3_msb(int count) {
    bitpos += count;
}
</pre>
<p>This time, I&#8217;ve also left out the <code>getbits</code> built from refill / peek / consume calls, because that&#8217;s yet another pattern that should be pretty clear by now.</p>
<p>It&#8217;s a pretty sweet variant. Once we break the bit extraction logic into separate &#8220;refill&#8221; and &#8220;peek&#8221;/&#8220;consume&#8221; pieces, it becomes clear how all of the individual pieces are fairly small and clean. It&#8217;s also completely branchless! It does expect unaligned 64-bit big-endian reads to exist and be reasonably cheap (not a problem on mainstream x86s or ARMs), and of course a realistic implementation needs to include handling of the end-of-buffer cases; see the discussion in the &#8220;lookahead&#8221; section.</p>
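<p>One portable way to get that unaligned big-endian load (an assumption on my part; the post leaves <code>read64BE</code> unspecified) is a <code>memcpy</code> into a <code>uint64_t</code> followed by a byte swap on little-endian targets. Mainstream compilers turn this into a single load plus a BSWAP/REV where needed:</p>

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Portable unaligned big-endian 64-bit load (my sketch): memcpy into a
// uint64_t (unaligned-safe, no strict-aliasing trouble), then byte-swap
// on little-endian hosts. Compiles to a single load + BSWAP/REV.
static uint64_t read64BE(const uint8_t *p) {
    uint64_t v;
    memcpy(&v, p, 8);
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    v = __builtin_bswap64(v);    // GCC/Clang; MSVC has _byteswap_uint64
#endif
    return v;
}
```

<p>The matching <code>read64LE</code> for LSB-first readers is the same shape, just swapping on big-endian hosts instead.</p>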
<h3>Variant 4: a different kind of lookahead</h3>
<p>And now that we&#8217;re here, let&#8217;s do another branchless lookahead variant. This exact variant is, to the best of my knowledge, another RAD special &#8211; discovered by my colleague Charles Bloom and me while working on Kraken (<b>UPDATE:</b> as Yann points out in the comments, this basic idea was apparently used in Eric Biggers&#8217; &#8220;Xpack&#8221; long before Kraken was launched; I wasn&#8217;t aware of this and I don&#8217;t think Charles was either, but that means we&#8217;re definitely not the first ones to come up with the idea. Our variant has an interesting wrinkle though &#8211; details <a href="https://fgiesen.wordpress.com/2018/02/20/reading-bits-in-far-too-many-ways-part-2/#comment-10923">in my reply</a>). Now all branchless (well, branchless if you ignore end-of-buffer checking in the refill etc.) bit readers look very much alike, but this particular variant has a few interesting properties (some of which I&#8217;ll only discuss later, because we&#8217;re lacking a bit of necessary background right now) that I haven&#8217;t seen anywhere else in this combination; if someone else did it first, feel free to inform me in the comments, and I&#8217;ll add the proper attribution! Here goes; back to LSB-first again, because I&#8217;m committed to hammering home just how similar and interchangeable LSB-first/MSB-first are at this level, holy wars notwithstanding.</p>
<pre>
const uint8_t *bitptr;   // Pointer to next byte to insert into buf
uint64_t bitbuf = 0;     // value of bits in buffer
int      bitcount = 0;   // number of bits in buffer

void refill4_lsb() {
    // Grab the next few bytes and insert them right above
    // the current top.
    bitbuf |= read64LE(bitptr) &lt;&lt; bitcount;

    // Advance the read pointer for next iteration
    bitptr += (63 - bitcount) &gt;&gt; 3;

    // Update the available bit count
    bitcount |= 56; // now bitcount is in [56,63]
}

uint64_t peekbits4_lsb(int count) {
    assert(count &gt;= 0 &amp;&amp; count &lt;= 56);
    assert(count &lt;= bitcount);
    
    return bitbuf &amp; ((1ull &lt;&lt; count) - 1);
}

void consume4_lsb(int count) {
    assert(count &lt;= bitcount);

    bitbuf &gt;&gt;= count;
    bitcount -= count;
}
</pre>
<p>The peek and consume phases are nothing we haven&#8217;t seen before, although this time the maximum permissible bit width seems to have shrunk by one more bit down to 56 bits for some reason.</p>
<p>That reason is in the refill phase, which works slightly differently from the ones we&#8217;ve seen so far. Reading 64 little-endian bits and shifting them up to align with the top of our current bit buffer should be straightforward at this point. But the <code>bitptr</code> / <code>bitcount</code> manipulation needs some explanation.</p>
<p>It&#8217;s actually easier to start with the <code>bitcount</code> part. The variants we&#8217;ve seen so far generally have between 57 and 64 bits in the buffer after refill. This version instead targets having between 56 and 63 bits in the buffer (which is also why the limit on count went down by one). But why? Well, inserting some integer number of bytes means <code>bitcount</code> is going to be incremented by some multiple of 8 during the refill; that means that <code>bitcount &amp; 7</code> (the low 3 bits) won&#8217;t change. And if we refill to a target of [56,63] bits in the buffer, we can compute the updated bit count with a single binary OR operation.</p>
<p>Which brings me to the question of how many bytes we should advance the pointer by. Well, let&#8217;s just look at the values of the original <code>bitcount</code>:</p>
<ul>
<li>If 56 ≤ <code>bitcount</code> ≤ 63, we were already in our target range and don&#8217;t want to advance by another byte.</li>
<li>If 48 ≤ <code>bitcount</code> ≤ 55, we&#8217;re adding exactly 1 byte (and so want to advance <code>bit_ptr</code> by that much).</li>
<li>If 40 ≤ <code>bitcount</code> ≤ 47, we&#8217;re adding exactly 2 bytes.</li>
</ul>
<p>and so forth. This works out to the <code>(63 - bitcount) &gt;&gt; 3</code> bytes we&#8217;re adding to <code>bitptr</code>.</p>
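<p>A quick sanity check of that formula against the case analysis above (a throwaway sketch):</p>

```c
#include <assert.h>

// How many bytes refill4 advances bitptr by, for a given pre-refill
// bitcount; this matches the bullet-point case analysis above.
static int refill4_advance(int bitcount) {
    return (63 - bitcount) >> 3;
}
```

<p>Note that at <code>bitcount = 0</code> we advance by 7 bytes, consistent with refilling to 56 bits even though <code>read64LE</code> fetched 8 bytes.</p>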
<p>Now, the bits in <code>bitbuf</code> above <code>bitcount</code> can end up getting ORed over multiple times. However, when that happens, we OR over the same value every time, so it doesn&#8217;t change the result. Therefore, once they later travel downwards (from the right-shift in the consume function), they&#8217;re fine; no need to worry about garbage.</p>
<p>Okay, so that&#8217;s how the refill works; but what&#8217;s so special about this particular variant? When would you choose this over, say, variant 3 above?</p>
<p>One simple reason: in this variant, the address the <code>refill</code> is loading from does not have a dependency on the current value of <code>bitcount</code>. In fact, the next load address is known as soon as the <em>previous</em> refill is complete. This is a subtle distinction that turns out to be a fairly major advantage on an out-of-order CPU. Among integer operations, even when hitting the L1 cache, loads are on the high latency side (typically somewhere between 3 and 5 cycles, whereas most integer operations take a single cycle), and the exact value of <code>bitcount</code> at the end of some loop iteration is often only known late (consider the simple variable-length code example I gave above).</p>
<p>Having the load address not depend on <code>bitcount</code> means the load can potentially issue as soon as the previous refill is complete; then we have plenty of time to complete the load, potentially byte-swap the value if the endianness of our load doesn&#8217;t match the target ISA (say because we&#8217;re using a MSB-first bit buffer on a little-endian CPU), and then the only thing that depends on the previous value of <code>bitcount</code> is the shift, which is a regular ALU operation and generally takes a single cycle.</p>
<p>In short, this somewhat obscure form of refill looks weird, but provides a tangible increase in available instruction-level parallelism. It was good for about a 10% throughput improvement on desktop PCs (vs. the earlier branchless refill it replaced) in the then-current version of the Kraken Huffman decoder when I tested it in early 2016.</p>
<p>Consider this a teaser for the next (and hopefully last) part of this series, in which I <em>won&#8217;t</em> introduce many more variants (maybe one more), and will instead talk a lot more about the performance of bit stream decoders and what kinds of things to watch out for.</p>
<p>Until then!</p>
]]></html></oembed>