Resource overhang
Does overhang just mean "not a bottleneck"?
  
 
==Resource overhang and AI takeoff==
  
 
Are we already in a [[hardware overhang]] or some other "resource bonanza"? Whether we are in an overhang depends on whether future early AGI systems will be able to make use of existing resources much more efficiently than current systems can, which means the overhang question depends on whether only a few insights are needed to get to AGI. More precisely, there is a feedback loop, where a resource overhang amplifies any existing discontinuity: a system that crosses the relevant capability threshold can immediately put the already-accumulated stock of resources to use, rather than waiting for those resources to be built.
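
A toy numerical sketch of the amplification claim (the model and all of its numbers are illustrative assumptions, not taken from any source): deployed capability is modeled as software efficiency times the compute actually in use, and being in an overhang means a large stock of already-built compute that a sufficiently capable system can absorb immediately.

<pre>
# Toy model (illustrative only): how a resource overhang can amplify a
# discontinuity in software capability. All numbers are made up.

def deployed_capability(software_efficiency, compute_in_use):
    """Crude model: capability = (how well compute is used) * (how much is used)."""
    return software_efficiency * compute_in_use

existing_compute_stock = 1000.0   # idle, already-built resources (the overhang)
project_compute = 10.0            # compute the leading project has on hand

pre_insight_efficiency = 1.0
post_insight_efficiency = 3.0     # a modest jump ("discontinuity") in software

before = deployed_capability(pre_insight_efficiency, project_compute)

# Without an overhang, the software jump is bottlenecked by the project's own hardware.
after_no_overhang = deployed_capability(post_insight_efficiency, project_compute)

# With an overhang, the post-threshold system can immediately absorb the idle stock.
after_with_overhang = deployed_capability(post_insight_efficiency, existing_compute_stock)

print(f"jump without overhang: {after_no_overhang / before:.0f}x")   # 3x
print(f"jump with overhang:    {after_with_overhang / before:.0f}x") # 300x
</pre>

In this toy setup a 3x software jump turns into a 300x jump in deployed capability, which is the sense in which an overhang amplifies a discontinuity.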
Andy Jones argues that we are already in a resource overhang: GPT-3 could be scaled up to be much more capable, but no company has tried to do so.<ref>https://www.greaterwrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang</ref> See [[scaling hypothesis]].
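
A rough back-of-the-envelope restatement of that argument (the dollar figures below are order-of-magnitude assumptions, not numbers from this page):

<pre>
# Back-of-the-envelope version of the overhang argument. The figures are
# assumed, order-of-magnitude placeholders, not sourced from the wiki page.

gpt3_training_cost_usd = 5e6         # assumption: a few million dollars
affordable_big_lab_spend_usd = 1e9   # assumption: what a large company could spend

headroom = affordable_big_lab_spend_usd / gpt3_training_cost_usd
print(f"roughly {headroom:.0f}x more training compute is already affordable")  # ~200x
</pre>

The point is only that the affordable spend exceeds what has actually been spent by a large factor, i.e. the resources for a much bigger training run already exist.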
==External links==
* https://parallel-forecast.github.io/AI-dict/docs/dictionary.html#overhang
  
 
[[Category:AI safety]]
 
