AlphaGo as evidence of discontinuous takeoff

Some people, such as [[Eliezer Yudkowsky]], take [[AlphaGo]] to be evidence for a discontinuous takeoff.
"First, there is a step not generally valid from supposing that because a previous AI is a technological precursor which has 19 out of 20 critical insights, it has 95% of the later AI's IQ, applied to similar domains. When you count stuff like "multiplying tensors by matrices" and "ReLUs" and "training using TPUs" then AlphaGo only contained a very small amount of innovation relative to previous AI technology, and yet it broke trends on Go performance. You could point to all kinds of incremental technological precursors to AlphaGo in terms of AI technology, but they wouldn't be smooth precursors on a graph of Go-playing ability."<ref>https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds</ref>
 
LW question: was there a discontinuity in Go capability within DeepMind itself? If so, isn't that evidence for discontinuities in general? If not, why is it not evidence for a discontinuity?

Why aren't people asking things like "how many attempts did DeepMind make before hitting upon the AlphaGo architecture?" In particular, how good was the second-best attempt? If many architectures were thrown away, you'd expect a more continuous improvement between different attempts.
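A toy order-statistics sketch of why many discarded attempts imply a small gap between the best and second-best attempt. It assumes attempt quality is i.i.d. Gaussian, which is an illustrative assumption, not a claim about DeepMind's actual process:

<syntaxhighlight lang="python">
import random

random.seed(0)

def gap_top_two(n_attempts):
    """Gap between the best and second-best of n simulated attempts,
    with attempt quality drawn i.i.d. from a standard normal."""
    scores = sorted(random.gauss(0, 1) for _ in range(n_attempts))
    return scores[-1] - scores[-2]

for n in (2, 10, 100, 1000):
    mean_gap = sum(gap_top_two(n) for _ in range(2000)) / 2000
    print(f"{n:4d} attempts -> mean best-vs-second-best gap: {mean_gap:.3f}")
</syntaxhighlight>

The more attempts were thrown away, the smaller the expected jump from the runner-up to the published system, so a large internal jump would suggest few serious attempts.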
 
Actually, Hanson doesn't care if there's a discontinuity here, even within DeepMind, because his point is that Go is like a narrow tech, not like a city or country where you need to do well on lots of tasks. But others, like Paul Christiano, might care.
Another way to ask the question: for every AI system that gets published out into the world as a finished thing, how many different permutations/iterations of it were there? How many trial-and-error iterations do ML systems go through before they make it out into the world? This question matters mostly for AI systems that produce a large qualitative shift in how good they are (like AlphaGo).
  
 
Would we have seen a discontinuity in computer Go-playing ability if there had been economic incentive? Theoretically we ''could'' run this experiment. This is similar to the project outlined here: "1.2.C Apply early software understanding to modern hardware ∑2,000 ∆8 Using contemporary hardware and a 1970’s or 1980’s understanding of connectionism, observe the extent to which a modern AI researcher (or student) could replicate contemporary performance on benchmark AI problems. This project is relatively expensive, among those we are describing. It requires substantial time from collaborators with a historically accurate minimal understanding of AI. Students may satisfy this role well, if their education is incomplete in the right ways. One might compare to the work of similar students who had also learned about modern methods." https://aiimpacts.org/research-topic-hardware-software-and-ai/
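If someone did run such an experiment, one way to quantify the result is something like AI Impacts' discontinuity metric: how many years of progress at the previous rate a new system delivers at once. A minimal sketch, with placeholder numbers rather than real computer-Go data:

<syntaxhighlight lang="python">
# Sketch of a "years of progress at past rates" discontinuity metric:
# fit a linear trend to past capability data, then ask how far ahead of
# trend a new data point lands. The numbers below are illustrative
# placeholders, not real computer-Go measurements.

def linear_fit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

def years_of_progress_at_once(years, levels, new_year, new_level):
    slope, intercept = linear_fit(years, levels)
    predicted = slope * new_year + intercept
    return (new_level - predicted) / slope  # excess progress, in years at the old rate

# hypothetical pre-AlphaGo trend points (year, capability level)
past_years = [2008, 2010, 2012, 2014]
past_levels = [1.0, 1.5, 2.0, 2.5]
print(years_of_progress_at_once(past_years, past_levels, 2016, 6.0))
# -> 12.0: twelve years of progress at the old rate, arriving at once
</syntaxhighlight>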
  
==References==
<references/>
  
 
[[Category:AI safety]]
