Epistemic Chicken
by paulfchristiano (Ordinary Ideas, https://ordinaryideas.wordpress.com)

Consider a fixed goal-seeking agent A, who is told its own code and that its objective function is U = { T if A(<A>,<U>) halts after T steps; 0 otherwise }. Alternatively, consider a pair of agents A and B, running similar AIs, each of which is told its own code as well as its own utility function U = { -1 if you don't halt; 0 if you halt but your opponent halts after at least as many steps; +1 otherwise }. What would you do as A, in either situation? (That is, what happens if A is an appropriate wrapper around an emulation of your brain, giving it access to arbitrarily powerful computational aids?)
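For readability, here is a minimal LaTeX rendering of the two payoff functions described above. The halting-time variable T and the bracket notation <A>, <U> for source code are taken directly from the post; the cases layout and the subscript U_A for A's payoff in the two-agent game are just presentational choices.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Single-agent case: A's reward is its own running time, but only if it halts.
\[
  U \;=\;
  \begin{cases}
    T & \text{if } A(\langle A\rangle,\langle U\rangle) \text{ halts after } T \text{ steps,}\\
    0 & \text{otherwise.}
  \end{cases}
\]

% Two-agent case: each agent wants to halt, but strictly later than its opponent.
\[
  U_A \;=\;
  \begin{cases}
    -1 & \text{if } A \text{ does not halt,}\\
    \hphantom{-}0 & \text{if } A \text{ halts and } B \text{ halts after at least as many steps,}\\
    +1 & \text{otherwise.}
  \end{cases}
\]

\end{document}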