Soft-hard takeoff - Revision history
Revision history for this page on the wiki (MediaWiki 1.31.6)
https://wiki.issarice.com/index.php?title=Soft-hard_takeoff&feed=atom&action=history
Feed retrieved 2024-03-29T05:01:23Z. In each diff below, lines marked "+" were added, lines marked "−" were removed, and unmarked lines are unchanged context.

Revision as of 01:43, 2 March 2021 by Issa (diff 1949)
Line 16:
  : "33. Weak AGI" in https://aiimpacts.org/relevant-pre-agi-possibilities/
  * The idea of a prepotent AI: http://acritch.com/media/arches.pdf
+ * "Slow Takeoff Fast GWP Acceleration Scenario" in <ref>https://www.greaterwrong.com/posts/aFaKhG86tTrKvtAnT/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds</ref>

  I thought [https://www.greaterwrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psycholinguists-a-critical-appraisal this post] about GPT-2 was pretty interesting: "But even this would be an important discovery – the discovery that huge swaths of what we consider most essential about language can be done “non-linguistically.” For every easy test that children pass and GPT-2 fails, there are hard tests GPT-2 passes which the scholars of 2001 would have thought far beyond the reach of any near-future machine. If this is the conclusion we’re drawing, it would imply a kind of paranoia about true linguistic ability, an insistence that one can do so much of it so well, can learn to write like spookily like Nabokov (or like me) given 12 books and 6 hours to chew on them … and yet still not be “the real thing,” not even a little bit. It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it." -- if many other things that humans can do are like language in this sense, we could start seeing crazy things happening in the world even though the AI "doesn't really understand" anything.

Revision as of 01:41, 21 June 2020 by Issa (diff 1494)
Line 14:
  * [[Owen Cotton-Barratt]]: "Number 1 is the step I feel most sceptical about. It seems to me likely that the first AIs which can perform pivotal acts will not perform fully general consequentialist reasoning. I expect that they will perform consequentialist reasoning within certain domains (e.g. AlphaGo in some sense reasons about consequences of moves, but has no conception of consequences in the physical world). This isn’t enough to alleviate concern: some such domains might be general enough that something misbehaving in them would cause large problems. But it is enough for me to think that paying attention to scope of domains is a promising angle." [https://agentfoundations.org/item?id=1242]
  * [[Daniel Kokotajlo]]: 'This model leaves out several important things. First, it leaves out the whole "intelligence explosion" idea: A project's innovation rate should increase as some function of how many innovations they have access to. Adding this in will make the situation more extreme and make the gap between the leading project and everyone else grow even bigger very quickly.' [https://lw2.issarice.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage]
+ : "33. Weak AGI" in https://aiimpacts.org/relevant-pre-agi-possibilities/
  * The idea of a prepotent AI: http://acritch.com/media/arches.pdf


Revision as of 01:37, 10 June 2020 by Issa (diff 1433)
Line 1:
+ ("Soft-hard takeoff" is a pretty horrible name; maybe something like "continuous takeoff + FOOM/locality" is better)
+
  a scenario i've been wondering about recently is where there is a stereotypical "soft takeoff" until around the point where the AI has somewhat-infra-human level general intelligence, and then once it crosses some threshold, a stereotypical "hard takeoff" happens.


Revision as of 01:35, 10 June 2020 by Issa (diff 1432)
Line 12:
  * [[Owen Cotton-Barratt]]: "Number 1 is the step I feel most sceptical about. It seems to me likely that the first AIs which can perform pivotal acts will not perform fully general consequentialist reasoning. I expect that they will perform consequentialist reasoning within certain domains (e.g. AlphaGo in some sense reasons about consequences of moves, but has no conception of consequences in the physical world). This isn’t enough to alleviate concern: some such domains might be general enough that something misbehaving in them would cause large problems. But it is enough for me to think that paying attention to scope of domains is a promising angle." [https://agentfoundations.org/item?id=1242]
  * [[Daniel Kokotajlo]]: 'This model leaves out several important things. First, it leaves out the whole "intelligence explosion" idea: A project's innovation rate should increase as some function of how many innovations they have access to. Adding this in will make the situation more extreme and make the gap between the leading project and everyone else grow even bigger very quickly.' [https://lw2.issarice.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage]
+ * The idea of a prepotent AI: http://acritch.com/media/arches.pdf

  I thought [https://www.greaterwrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psycholinguists-a-critical-appraisal this post] about GPT-2 was pretty interesting: "But even this would be an important discovery – the discovery that huge swaths of what we consider most essential about language can be done “non-linguistically.” For every easy test that children pass and GPT-2 fails, there are hard tests GPT-2 passes which the scholars of 2001 would have thought far beyond the reach of any near-future machine. If this is the conclusion we’re drawing, it would imply a kind of paranoia about true linguistic ability, an insistence that one can do so much of it so well, can learn to write like spookily like Nabokov (or like me) given 12 books and 6 hours to chew on them … and yet still not be “the real thing,” not even a little bit. It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it." -- if many other things that humans can do are like language in this sense, we could start seeing crazy things happening in the world even though the AI "doesn't really understand" anything.

Revision as of 01:33, 10 June 2020 by Issa (diff 1430)
Line 30:
  * "whoever develops AGI first has a massive advantage over the rest of the world and hence great freedom in choosing what to do with their invention" -- i think this is true
  * "in slow takeoff scenarios, other actors will already have nearly-as-good-AGI, and a group that tries to use AGI in a very restricted or handicapped way won’t be able to take any pivotal action" -- not necessarily true; in soft-hard takeoff, AGI could still be much better than the narrow predecessors.
+
+ ==See also==
+
+ * [[Will there be significant changes to the world prior to some critical AI capability threshold being reached?]]
+
+ [[Category:AI safety]]

Revision as of 01:32, 10 June 2020 by Issa (diff 1429)
Line 18:

  so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. See [[counterfactual of dropping a seed AI into a world without other capable AI]].
+
+ https://www.facebook.com/yudkowsky/posts/10154597918309228

  ==implications==

Revision as of 07:05, 22 February 2020 by Issa (diff 101)
Line 11:
  * In [https://gist.github.com/bshlgrs/a46f3fa25e2f9a8fbd07026de354fc22 this gist], [[buck]] asks "It seems like there might be various ways that the world could be radically transformed by narrow AI before it's transformed by AGI; why don't people talk about that? (Maybe CAIS is people talking about this?)"
  * [[Owen Cotton-Barratt]]: "Number 1 is the step I feel most sceptical about. It seems to me likely that the first AIs which can perform pivotal acts will not perform fully general consequentialist reasoning. I expect that they will perform consequentialist reasoning within certain domains (e.g. AlphaGo in some sense reasons about consequences of moves, but has no conception of consequences in the physical world). This isn’t enough to alleviate concern: some such domains might be general enough that something misbehaving in them would cause large problems. But it is enough for me to think that paying attention to scope of domains is a promising angle." [https://agentfoundations.org/item?id=1242]
+ * [[Daniel Kokotajlo]]: 'This model leaves out several important things. First, it leaves out the whole "intelligence explosion" idea: A project's innovation rate should increase as some function of how many innovations they have access to. Adding this in will make the situation more extreme and make the gap between the leading project and everyone else grow even bigger very quickly.' [https://lw2.issarice.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage]

  I thought [https://www.greaterwrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psycholinguists-a-critical-appraisal this post] about GPT-2 was pretty interesting: "But even this would be an important discovery – the discovery that huge swaths of what we consider most essential about language can be done “non-linguistically.” For every easy test that children pass and GPT-2 fails, there are hard tests GPT-2 passes which the scholars of 2001 would have thought far beyond the reach of any near-future machine. If this is the conclusion we’re drawing, it would imply a kind of paranoia about true linguistic ability, an insistence that one can do so much of it so well, can learn to write like spookily like Nabokov (or like me) given 12 books and 6 hours to chew on them … and yet still not be “the real thing,” not even a little bit. It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it." -- if many other things that humans can do are like language in this sense, we could start seeing crazy things happening in the world even though the AI "doesn't really understand" anything.
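
A minimal numerical sketch of the "intelligence explosion" dynamic in the Kokotajlo quote added above, assuming the innovation rate is simply proportional to the current innovation stock; the growth law, the constant, and the starting values below are illustrative assumptions, not from the quote or the wiki:

<pre>
# Toy model (illustrative assumptions only): each project's innovation
# stock grows in proportion to the stock it already holds.

def simulate(leader=1.10, follower=1.00, rate=0.05, steps=100):
    """Return final innovation stocks after compounding growth.

    rate is an assumed constant of proportionality; the argument only
    needs the innovation rate to be *some* increasing function of the
    stock of innovations a project has access to.
    """
    for _ in range(steps):
        leader += rate * leader      # innovation rate proportional to stock
        follower += rate * follower
    return leader, follower

leader, follower = simulate()
print(f"leader={leader:.1f} follower={follower:.1f} gap={leader - follower:.2f}")
# Both stocks grow exponentially, so their ratio stays fixed (1.1x here)
# while the absolute gap between them also grows exponentially --
# "even bigger very quickly", as the quote puts it.
</pre>

Whether a single project could actually sustain such a lead is exactly what the Kokotajlo passage quoted in the 06:51, 22 February 2020 revision below ("the rest of the world will be undergoing intelligence explosion too") pushes back on.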

Revision as of 06:58, 22 February 2020 by Issa (diff 100)
Line 16:
  "Continuous takeoff is a statement about what happens before we reach the point where a fast takeoff is supposed to happen, and is perfectly consistent with the claim that given the stated preconditions for fast takeoff, fast takeoff will happen. It’s a statement that serious problems, possibly serious enough to pose an existential threat, will show up before the window where we expect fast takeoff scenarios to occur." [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#The_Outside_View] -- i guess this is the same sort of thing as what i am describing here. if so, then i agree that talking about "soft" vs "hard" takeoff is pretty misleading, and we should be talking about "shape prior to takeoff" (business as usual vs crazy shit happens), and then ''separately'' about the speed of takeoff (where it doesn't seem like paul and MIRI even disagree?). Also if this is the case, i don't see how anything [[eliezer]] writes actually contradicts what paul writes. like, i think eliezer is consistently only talking about the speed of takeoff, ''not'' the shape of progress prior to takeoff?

− so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. See [[Counterfactual of dropping a seed AI into a world without other capable AI]]
+ so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. See [[counterfactual of dropping a seed AI into a world without other capable AI]].

  ==implications==

Revision as of 06:58, 22 February 2020 by Issa (diff 99)
Line 16:
  "Continuous takeoff is a statement about what happens before we reach the point where a fast takeoff is supposed to happen, and is perfectly consistent with the claim that given the stated preconditions for fast takeoff, fast takeoff will happen. It’s a statement that serious problems, possibly serious enough to pose an existential threat, will show up before the window where we expect fast takeoff scenarios to occur." [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#The_Outside_View] -- i guess this is the same sort of thing as what i am describing here. if so, then i agree that talking about "soft" vs "hard" takeoff is pretty misleading, and we should be talking about "shape prior to takeoff" (business as usual vs crazy shit happens), and then ''separately'' about the speed of takeoff (where it doesn't seem like paul and MIRI even disagree?). Also if this is the case, i don't see how anything [[eliezer]] writes actually contradicts what paul writes. like, i think eliezer is consistently only talking about the speed of takeoff, ''not'' the shape of progress prior to takeoff?

− so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. daniel kokotajlo raises this point [https://lw2.issarice.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage here]: "Well yeah, ''if'' we dropped a superintelligence into a world full of humans. But realistically the rest of the world will be undergoing intelligence explosion too. And indeed the world as a whole will undergo a faster intelligence explosion than any particular project could; to think that one project could pull ahead of everyone else is to think that, prior to intelligence explosion, there would be a single project innovating faster than the rest of the world combined!"
+ so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. See [[Counterfactual of dropping a seed AI into a world without other capable AI]]

  ==implications==

Revision as of 06:51, 22 February 2020 by Issa (diff 97)
Line 16:
  "Continuous takeoff is a statement about what happens before we reach the point where a fast takeoff is supposed to happen, and is perfectly consistent with the claim that given the stated preconditions for fast takeoff, fast takeoff will happen. It’s a statement that serious problems, possibly serious enough to pose an existential threat, will show up before the window where we expect fast takeoff scenarios to occur." [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#The_Outside_View] -- i guess this is the same sort of thing as what i am describing here. if so, then i agree that talking about "soft" vs "hard" takeoff is pretty misleading, and we should be talking about "shape prior to takeoff" (business as usual vs crazy shit happens), and then ''separately'' about the speed of takeoff (where it doesn't seem like paul and MIRI even disagree?). Also if this is the case, i don't see how anything [[eliezer]] writes actually contradicts what paul writes. like, i think eliezer is consistently only talking about the speed of takeoff, ''not'' the shape of progress prior to takeoff?

− so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome.
+ so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runner-ups will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. daniel kokotajlo raises this point [https://lw2.issarice.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage here]: "Well yeah, ''if'' we dropped a superintelligence into a world full of humans. But realistically the rest of the world will be undergoing intelligence explosion too. And indeed the world as a whole will undergo a faster intelligence explosion than any particular project could; to think that one project could pull ahead of everyone else is to think that, prior to intelligence explosion, there would be a single project innovating faster than the rest of the world combined!"

  ==implications==