Some rambling AI prognostication
paulfchristiano, Ordinary Ideas (https://ordinaryideas.wordpress.com)

I want to get in the habit of sharing more of my unpolished thoughts about topics I consider important. The hope is to shift from an equilibrium where I say little (and therefore feel that anything I do say carries an implicit endorsement of unusually high quality, which in turn pushes me to say even less) to an equilibrium where I say much more and feel comfortable sharing unpolished thoughts. I think "quiet" is an OK equilibrium: most people who should read some of my thoughts shouldn't read most of them, so it would make sense for me to try to be selective. But it seems like a suboptimal equilibrium, since there are at least a few people who do care what I think, often because they want to better understand our disagreements.
A similar social problem is possible, where a broader intellectual community settles into a "quiet" equilibrium in which any public speech carries an implicit claim of interestingness and worthwhileness. Intellectual communities vary considerably in where they sit on this spectrum: bloggers qua bloggers often write quite a bit, while academics in computer science tend to be much more guarded about what they say. I think there are some virtues to the noisier equilibrium, particularly in increasing our ability and inclination to notice, understand, and resolve disagreements.
Anyway, in that spirit, here is some of my thinking about AI, an outline of the development scenario I consider most likely together with a general discussion of the impacts of consequentialist automation: https://workflowy.com/shared/15df86ce-1b8e-57ca-dbb2-30a42d949a59/. Criticism is welcome, and you can leave comments on the (quite hard-to-navigate) Google Doc: https://docs.google.com/document/d/1tKCysndd8-SRWnXG6lp-XzmPppGYucmeG4r4lUfhjWY/edit.