<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[The Dish]]></provider_name><provider_url><![CDATA[http://dish.andrewsullivan.com]]></provider_url><author_name><![CDATA[Andrew Sullivan]]></author_name><author_url><![CDATA[https://dish.andrewsullivan.com/author/sullydish/]]></author_url><title><![CDATA[Passing The Turing Test,&nbsp;Ctd]]></title><type><![CDATA[link]]></type><html><![CDATA[<span class="embed-youtube" style="text-align:center; display: block;"><iframe class='youtube-player' type='text/html' width='640' height='360' src='https://www.youtube.com/embed/3n5muEWaE_Q?version=3&#038;rel=1&#038;fs=1&#038;autohide=2&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;wmode=transparent' allowfullscreen='true' style='border:0;'></iframe></span>
<p>With last weekend&#8217;s breakthrough being <a href="http://dish.andrewsullivan.com/2014/06/09/passing-the-turing-test/">called into question</a>, Brian Barrett <a href="http://gizmodo.com/how-big-a-deal-is-that-turing-test-win-1588011992" target="_blank">argues</a> that these days, the Turing test &#8220;isn&#8217;t so much a test of computer intelligence as it is human gullibility&#8221;:</p>
<blockquote><p>A bad chatbot might luck its way to victory if the judges aren&#8217;t familiar with tell-tale signs of chatbot-ness. That&#8217;s usually of less importance when your panel includes experts in the field of computer science. In this case, it included an actor from <em>Red Dwarf </em>and a member of the House of Lords, both of whom are incredibly accomplished and by all indications brilliant minds, but not specifically trained in this field.</p></blockquote>
<p>David Auerbach <a href="http://www.slate.com/articles/technology/bitwise/2014/06/turing_test_reading_university_did_eugene_goostman_finally_make_the_grade.single.html" target="_blank">argues</a> that &#8220;Eugene Goostman&#8221; did in fact pass the Turing test &#8211; but that the test itself has a fatal flaw:</p>
<blockquote><p>Trashing the Reading results, Hunch CEO Chris Dixon <a href="https://twitter.com/cdixon/status/475715861500932096" target="_blank">tweeted</a>, &#8220;The point of the Turing Test is that you pass it when you&#8217;ve built machines that can fully simulate human thinking.&#8221; No, that is precisely <em>not</em> how you pass the Turing test. You pass the Turing test by convincing judges that a computer program is human. That&#8217;s it. Turing was interested in one black-box metric for how we might gauge &#8220;human intelligence,&#8221; precisely because it has been so difficult to establish what it is to &#8220;simulate human thinking.&#8221; Turing&#8217;s test is only one measure.</p></blockquote>
<blockquote><p>So the Reading contest was not the travesty of the Turing test that Dixon claims. Dixon&#8217;s problem isn&#8217;t with the Reading contest – it&#8217;s with the Turing test itself. People are arguing over <a href="http://www.vice.com/en_uk/read/eugene-goostman-alan-turing-test-kevin-warwick" target="_blank">whether the test was conducted fairly</a> and <a href="http://www.buzzfeed.com/kellyoakes/no-a-computer-did-not-just-pass-the-turing-test" target="_blank">whether the metrics were right</a>, but the problem is more fundamental than that. &#8220;Intelligence&#8221; is a notoriously difficult concept to pin down. Statistician Cosma Shalizi has debunked the idea of <a href="http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/523.html" target="_blank"><em>any </em>measurable general factor of intelligence like IQ</a>. Nonetheless, the word exists, and so we search for some way to measure it. &#8230; The Turing test, famous as it is, is only one possible concrete measure of human intelligence, and by no means the best one.</p></blockquote>
<p>Elizabeth Lopatto <a href="http://www.thedailybeast.com/articles/2014/06/10/the-ai-that-wasn-t-why-eugene-goostman-didn-t-pass-the-turing-test.html" target="_blank">offers some background</a> about how Turing turned imitating a conversation into a proxy for intelligence:</p>
<blockquote><p>The strength of the test is obvious: &#8220;intelligence&#8221; and &#8220;thinking&#8221; are fuzzy words, and no definition from psychology or neuroscience has been sufficiently general and precise to apply to machines. The Turing test sidesteps the messy bits to provide a pragmatic framework for testing.</p>
<p>But this strength is also the test&#8217;s weakness. Turing at no point explicitly says that his test is meant to provide a measure of intelligence. For instance: human behavior isn&#8217;t necessarily intelligent behavior—take responding to an insult with anger. Or typos: normal and human, but intelligent?</p></blockquote>
<p>Joseph Stromberg <a href="http://www.vox.com/2014/6/9/5793072/a-computer-just-passed-the-turing-test" target="_blank">still believes</a> the episode was noteworthy:</p>
<blockquote><p>This announcement certainly doesn&#8217;t mean that self-aware robots are about to take over the world – and it doesn&#8217;t even mean that there&#8217;s one out there capable of consistently fooling people into thinking it&#8217;s a human. It does, however, mean that one has crossed the threshold Turing predicted would be passed by 2000, a meaningful milestone on the way to artificial intelligence.</p>
<p>That said, there are plenty more milestones that still need to be passed — even in terms of the Turing test. The Loebner prize, for instance, will award a silver medal for the first program to pass a text-only test, but a gold medal for one that passes an audio test — something that&#8217;s probably still a long way off.</p></blockquote>
<p>But a less-charitable George Dvorsky <a href="http://io9.com/why-the-turing-test-is-bullshit-1588051412/+ericlimer?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+gizmodo%2Ffull+%28Gizmodo%29" target="_blank">makes the case </a>that it&#8217;s time to abandon the &#8220;bullshit&#8221; Turing test:</p>
<blockquote><p>Turing had no way of knowing that human conversation – or the appearance of it – could be simulated by natural language processing (NLP) software and the rise of chatterbots. Yes, these programs exhibit intelligence — but they&#8217;re intelligent in the same way that calculators are intelligent. Which isn&#8217;t really very intelligent at all. More crucially, the introduction of these programs to Turing Test competitions fails to answer the ultimate question posed by the test: Can machines think?</p>
<p>Though impressive, and despite their apparent ability to fool human judges, these machines – or more accurately, software programs – do not think in the same way humans do. … It&#8217;s all smoke and mirrors, folks. There&#8217;s no <em>thinking</em> going on here – just quasi pre-programmed responses spouted out by sophisticated <a href="http://io9.com/the-10-algorithms-that-dominate-our-world-1580110464" target="_blank">algorithms</a>. But because Turing&#8217;s conjecture was directed at assessing the presence of human-like cognition in a machine, his test falls flat.</p></blockquote>
]]></html></oembed>