<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[Julia Galef]]></provider_name><provider_url><![CDATA[http://juliagalef.com]]></provider_url><author_name><![CDATA[Julia Galef]]></author_name><author_url><![CDATA[https://juliagalef.com/author/juliagalef/]]></author_url><title><![CDATA[Open questions]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>As I conduct conversations for the <a href="https://juliagalef.com/update-project/">Update Project</a> (and just informally, on my own time) I&#8217;m looking for important open questions. <em>Important</em>, in the sense that what you believe about that question changes how you try to impact the world, and how successful you are at it. And <em>open</em>, in the sense that smart, well-informed people disagree about the answer.</p>
<p>I&#8217;ve begun collecting those open questions here, in the hopes that making them salient will make us more likely to notice evidence or arguments that shift our thinking about them in a useful way.</p>
<ol>
<li><a href="https://juliagalef.com/2017/02/18/when-is-overconfidence-useful-if-ever/">When is overconfidence useful (if ever)?</a></li>
<li><a href="https://juliagalef.com/2017/02/19/can-we-intentionally-improve-the-world-planners-vs-hayekians/">Can we intentionally improve the world? Planners vs. Hayekians</a></li>
<li><a href="https://juliagalef.com/2017/02/19/are-you-motivated-by-obligation-or-opportunity/">Are you motivated by obligation, or opportunity?</a></li>
<li>How bullish should we be about artificial general intelligence based on recent progress in the field?</li>
<li>A priori, how hard should we expect AGI to be?</li>
<li>How much ideological diversity is optimal (and along what axes)?</li>
<li>How efficient is the &#8220;market&#8221; for doing good?</li>
<li>Which new technologies have the potential to significantly impact humanity&#8217;s future (for good or for ill)?</li>
<li>How should Trump&#8217;s presidency alter our model of global risks?</li>
<li>How hard should we expect radical life extension to be?</li>
<li>On the margin, should we be trying to make science faster, or more rigorous?</li>
<li>How much should we expect to be able to improve on our default decision-making strategies?</li>
<li>What are some of the blind spots in the &#8220;rationalist&#8221; worldview?</li>
</ol>
]]></html></oembed>