<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[dougallj]]></provider_name><provider_url><![CDATA[https://dougallj.wordpress.com]]></provider_url><author_name><![CDATA[dougallj]]></author_name><author_url><![CDATA[https://dougallj.wordpress.com/author/dougallj/]]></author_url><title><![CDATA[Apple M1: Load and Store Queue&nbsp;Measurements]]></title><type><![CDATA[link]]></type><html><![CDATA[
<p>Out-of-order processors have to keep track of multiple in-flight operations at once, and they use a variety of different buffers and queues to do so. I&#8217;ve been trying to characterise and measure some of these buffers in the Apple M1 processor&#8217;s Firestorm and Icestorm microarchitectures, by timing different instruction patterns.</p>



<p>I&#8217;ve measured the sizes of the load and store queues, and discovered that load and store queue entries are allocated when the ops issue from the scheduler, and released once they are non-speculative and all earlier loads and stores have been released. I may have also accidentally found a trick for manipulating load/store alias prediction. And I figured I should write it up, so other people can reproduce it, and/or find mistakes.</p>



<h2>Fighting Loads with Loads</h2>



<p>The general idea is to time the execution of two independent long-latency operations (instructions, or chains of dependent instructions), with a number of loads or stores between them. Usually these two long-latency operations can run in parallel, but if there are so many loads/stores that some buffer is completely filled, the machine will stall, and will have to wait until space in the buffer is freed to execute subsequent instructions. This method was described first (and much more clearly) in Henry Wong&#8217;s blog post <em><a href="https://blog.stuffedcow.net/2013/05/measuring-rob-capacity/">Measuring Reorder Buffer Capacity</a></em>.</p>
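<p>To make the technique concrete, here&#8217;s a toy model of it in Python (a sketch of the reasoning, not the actual benchmark; the function name, the 400-cycle chain latency, and the all-or-nothing overlap are illustrative assumptions):</p>

```python
def predicted_cycles(n_fillers, queue_size, chain_latency=400):
    """Toy model of the buffer-probing technique described above.

    Two long-latency chains are separated by n_fillers loads/stores.
    If everything in flight (the first chain's load, the fillers, and
    the second chain's load) fits in the queue being probed, the two
    chains overlap; once the queue is full, the second chain cannot
    start until entries are freed, so the chains serialise.
    """
    in_flight = 1 + n_fillers + 1  # first load + fillers + last load
    if in_flight <= queue_size:
        return chain_latency       # chains overlap almost completely
    return 2 * chain_latency       # machine stalls; chains serialise
```

<p>With a 130-entry queue, this model predicts the jump between 128 and 129 fillers seen in the measurements below.</p>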



<p>Initially, in measuring the M1, I used cache-missing loads as the &#8220;two independent long-latency operations&#8221; (as did everyone else, to my knowledge).</p>



<p>My result (which very roughly looked like <a href="https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive">AnandTech&#8217;s</a> results) was that 128 loads or 107 stores could run, without stopping the final long-latency load from executing in parallel, but adding one more would cause a stall. Since the first load, 128 other loads, and the last load are in flight at the same time, I&#8217;d call this a load queue size of <strong>130</strong>. I still believe this to be correct. On the other hand, the loads don&#8217;t require store buffers, so I incorrectly called this a <strong>107</strong> entry store queue.</p>



<h2>Schedulers and Dispatch Queues</h2>



<p>Although it is not the topic of this post, I also mapped out the structure of the dispatch queues and schedulers:</p>



<figure class="wp-block-image size-large"><a href="https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png"><img data-attachment-id="1245" data-permalink="https://dougallj.wordpress.com/screen-shot-2021-04-07-at-7-58-07-pm/" data-orig-file="https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png" data-orig-size="2850,790" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="screen-shot-2021-04-07-at-7.58.07-pm" data-image-description="" data-image-caption="" data-medium-file="https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=300" data-large-file="https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=1024" src="https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=1024" alt="" class="wp-image-1245" srcset="https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=1024 1024w, https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=2048 2048w, https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=150 150w, https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=300 300w, https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?w=768 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>If an op can enter a scheduler, it can leave the scheduler and execute when all its dependencies are ready. If it can enter a dispatch queue, then it will enter a scheduler when there&#8217;s room, and until then, the frontend can continue. But if the scheduler and dispatch queue are full, and we have an op that needs to go to that queue, we stall. This means no more ops can enter any dispatch queues until there is room in that queue.</p>
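<p>That stall rule fits in a few lines of Python (a sketch with hypothetical names; the 48-entry scheduler and 10-entry dispatch queue are the load/store figures measured in this post):</p>

```python
def frontend_stalls(n_ops, sched_size=48, dq_size=10):
    """Count ops that stall the frontend under the rule above: an op
    goes into the scheduler if there's room, else into the dispatch
    queue if there's room, else the whole frontend stalls."""
    in_sched = in_dq = stalled = 0
    for _ in range(n_ops):
        if in_sched < sched_size:
            in_sched += 1      # can issue once its dependencies are ready
        elif in_dq < dq_size:
            in_dq += 1         # waits here; the frontend keeps going
        else:
            stalled += 1       # nowhere to put the op: stall
    return stalled
```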



<p>It is worth noting that the load/store scheduler has 48 entries, with a 10 entry dispatch queue.</p>



<p>(Very briefly: These were measured by filling the scheduler with some number of loads or stores, with addresses depending on the first long-latency chain, then finding two points. First, I found the last point at which an independent load could still fit into the scheduler and run, to find the scheduler size. Then, I found the number of extra load/stores, once the scheduler was full, that were required to block independent floating-point operations. That gives the dispatch queue size. Mixing operations makes it possible to figure out which schedulers/queues are separate and which are not. Should I write this up?)</p>
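<p>A sketch of those two probe points, under the same toy model (the function name and structure are mine; the 48/10 sizes are the measured ones):</p>

```python
def probe(n_blocked_memops, sched_size=48, dq_size=10):
    """Model the two probes described above.  The blocked loads/stores
    can't issue (their addresses depend on the stalled chain), so they
    fill the scheduler, then the dispatch queue.  An independent load
    executes only if a scheduler slot is free; an independent FP op is
    blocked only once the scheduler and dispatch queue are both full."""
    in_sched = min(n_blocked_memops, sched_size)
    in_dq = min(n_blocked_memops - in_sched, dq_size)
    load_can_run = in_sched < sched_size
    fp_can_run = in_sched + in_dq < sched_size + dq_size
    return load_can_run, fp_can_run
```

<p>The load probe flips at 48 blocked ops (the scheduler size), and the FP probe flips at 58 (scheduler plus dispatch queue).</p>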



<h2>Fighting Stores with Square-Roots</h2>



<p>The next idea was to use a chain of long-latency floating point operations instead. Surprisingly, this produces a result that about <strong>329</strong> loads or stores can run between floating point operations, without forcing the two long-latency chains to run completely serially. I assume this means that load and store queue entries are being released, and reused, before the first long-latency chain has retired, and we&#8217;re hitting another limit. Mixing loads and stores confirms it&#8217;s the same limit. (It&#8217;s not the topic of this post, but I&#8217;ve explored it a bit and called it the &#8220;coalescing retire queue&#8221;. I believe it is their ROB implementation.)</p>



<p>So at this point I&#8217;m guessing that loads and stores can complete (free their load/store queue entries, but not their retire queue entries) once they are non-speculative. I believe this completion is in-order relative to other loads and stores. The approach I used to test this was to have a single, initial load or store operation, with its address depending on the result of the first long-latency operation.</p>



<p>However, for this writeup, I got the same result by instead adding a branch, dependent on the result of the first long-latency operation. This will ensure the loads and stores cannot become non-speculative, and keep their queue entries. In this test we see we can run <strong>188</strong> loads or <strong>118</strong> stores without forcing the two long-latency chains to run completely serially. This was initially pretty confusing, since we believe we only have a 130 entry load buffer. So, where did the extra 58 entries come from?</p>



<p>The load/store scheduler has 48 entries, with a 10 entry dispatch queue. If the load/store scheduler and dispatch queue are full, the integer and floating point units can continue operating. But if we hit one more load/store, the machine stalls, as it has nowhere to put the instruction. This explains the 58 extra entries.</p>



<p>By this logic (subtracting 58 for the size of the scheduler and dispatch queue), the store queue has only 60 entries. So why did we think it had 47 more? Because if the 48 entry scheduler is almost full, but has one free entry, then a load can enter the scheduler, and then issue from the scheduler and execute (in parallel with the other long-latency load), but if the scheduler is completely full, it cannot.</p>



<p>So those are my current numbers, <strong>130</strong> load queue entries and <strong>60</strong> store queue entries. The same logic works for Icestorm, where we see 30 load queue entries and 18 store queue entries (with an 18 entry scheduler and a 6 entry dispatch queue).</p>



<h2>An Uncomfortable, Surprising Discovery</h2>



<p><strong>Update</strong> (2022-07-22): <em>I&#8217;ve since had trouble reproducing the results in this section on an M1 Max. I suspect this is an issue with hardware or software revisions, but take this section with a grain of salt.</em></p>



<p>In writing this up for this post, I tried to reproduce the &#8220;107 entry store queue&#8221; result, but the result I got was 60 entries. This is both the best answer I could hope for (it&#8217;s the number I currently think is correct), and the worst possible outcome (I&#8217;m writing this up, so that people can reproduce my work, and I&#8217;m failing to reproduce my own results).</p>



<p>So what went wrong? Bisecting a little, I found this was caused by refactoring to use <strong>X29</strong> as the base address for the second long-latency load (with a variable offset), and using <strong>X29</strong> or <strong>SP</strong> as the base address for the store (with a constant offset). Changing either or both registers back to <strong>X3</strong> (even though the values did not change) gave me 107 again.</p>



<p>Processors try to execute loads and stores out of order, which makes things fast, as long as the data read by a load isn&#8217;t later changed by a preceding (in program order) store. When that does happen, it&#8217;s called a memory order violation, and the processor typically throws away all its work and starts again, which is very expensive. (Apple provides a performance counter for this. It&#8217;s called <em>MEMORY_ORDER_VIOLATION</em> and described as &#8220;Incorrect speculation between store and dependent load&#8221;.) Because this is so expensive, there are predictors that try to figure out when a load and a store might alias, and run them in order instead. You can read more about how Intel approaches this in Travis Downs&#8217; <a href="https://github.com/travisdowns/uarch-bench/wiki/Memory-Disambiguation-on-Skylake"><em>Memory Disambiguation on Skylake</em></a> writeup.</p>



<p><strong>X29</strong> is typically used as the stack frame base pointer, and I suspect the memory dependency prediction has a special case for this, and makes the load wait until it knows the store doesn&#8217;t alias it. The theory is that if we have 60 speculative stores before the load, we can figure out all their addresses, so the load can go ahead. But if we have 61, we can&#8217;t check the last one, so the load will wait.</p>



<p>The same code running on Icestorm also measures the 18 entry store queue.</p>



<p>I think this explanation makes sense, but it was a surprising reminder that there are still a lot of mysteries here, and it&#8217;d be good to verify it further. I put the code to reproduce this result in a <a href="https://gist.github.com/dougallj/0d4972967c625852956fcfe427b2054c">gist</a> if others want to investigate this.</p>



<h2>The Data</h2>



<p>So, here&#8217;s the data. To get a single number, I find the top of the jump (the first point at which it&#8217;s executing completely serially) and subtract one. This is the largest number of instructions that execute a measurable amount faster than completely serially. But however you pick, the results are close enough.</p>
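<p>Extracting that single number from a run can be sketched like this (illustrative; the sample timings and the 99% threshold for &#8220;completely serial&#8221; are my assumptions):</p>

```python
def buffer_estimate(samples):
    """samples: (n_fillers, cycles) pairs sorted by n_fillers.
    Find the first point at the fully-serial plateau (the top of the
    jump) and subtract one: the largest filler count that still runs
    a measurable amount faster than completely serially."""
    serial = samples[-1][1]          # plateau: fully serial time
    for n, cycles in samples:
        if cycles >= 0.99 * serial:  # first completely-serial point
            return n - 1
    return samples[-1][0]
```

<p>For example, with made-up timings that jump at 129 fillers, it returns 128 (130 once the two probe loads are counted as in flight).</p>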



<figure class="wp-block-image size-large"><a href="https://dougallj.files.wordpress.com/2021/04/m1-loads.png"><img data-attachment-id="1247" data-permalink="https://dougallj.wordpress.com/m1-loads/" data-orig-file="https://dougallj.files.wordpress.com/2021/04/m1-loads.png" data-orig-size="1776,1210" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="m1-loads" data-image-description="" data-image-caption="" data-medium-file="https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=300" data-large-file="https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=1024" src="https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=1024" alt="" class="wp-image-1247" srcset="https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=1024 1024w, https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=150 150w, https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=300 300w, https://dougallj.files.wordpress.com/2021/04/m1-loads.png?w=768 768w, https://dougallj.files.wordpress.com/2021/04/m1-loads.png 1776w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>



<figure class="wp-block-image size-large"><a href="https://dougallj.files.wordpress.com/2021/04/m1-stores.png"><img data-attachment-id="1249" data-permalink="https://dougallj.wordpress.com/m1-stores/" data-orig-file="https://dougallj.files.wordpress.com/2021/04/m1-stores.png" data-orig-size="1778,1202" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="m1-stores" data-image-description="" data-image-caption="" data-medium-file="https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=300" data-large-file="https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=1024" src="https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=1024" alt="" class="wp-image-1249" srcset="https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=1024 1024w, https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=150 150w, https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=300 300w, https://dougallj.files.wordpress.com/2021/04/m1-stores.png?w=768 768w, https://dougallj.files.wordpress.com/2021/04/m1-stores.png 1778w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>(Note that the dependent B.cc is chained using FCMP, as I think FMOV might interfere with the memory operations.)</p>



<h2>Final Notes</h2>



<p>The difference between measuring using the resource itself (loads+loads), and measuring using another resource (fsqrts+loads), is very clear in both graphs. The 58 instruction difference implies that when we do not have the resources to execute more loads, we can continue to move more loads into the scheduler. So I conclude that the resources are allocated late. Similarly, the ~330 limit we can hit implies that this resource can be freed early.</p>



<p>I do not see this kind of pattern when measuring physical register file sizes (e.g. comparing int+int vs int+float), so I believe they are neither allocated late, nor freed early. But there&#8217;s a lot of complexity I do not yet understand.</p>
]]></html><thumbnail_url><![CDATA[https://dougallj.files.wordpress.com/2021/04/screen-shot-2021-04-07-at-7.58.07-pm.png?fit=440%2C330]]></thumbnail_url><thumbnail_width><![CDATA[440]]></thumbnail_width><thumbnail_height><![CDATA[122]]></thumbnail_height></oembed>