Prosaic AI


What are the possibilities for prosaic AI? That is, if prosaic AI happens, what are some possible reasons for why it happened? Some ideas:

  • optimizing hard enough produced a mesa-optimizer inside the neural network (or whatever). The story here is something like: either (1) there turns out to be nothing "deep" about intelligence at all, and just increasing the model size/compute of existing ML systems (or making straightforward tweaks to them) somehow produces an AI that fully "understands" things and can do everything humans can; or (2) there is some simple core algorithm of intelligence, and it can be found by using lots of compute (or whatever).
  • all of the things we humans do with our general intelligence turn out to be possible without actually "understanding" anything. That is, in the same way that AlphaGo can play Go "without understanding" what it is doing, and GPT-2 can write an essay "without understanding" its meaning, it turns out that everything humans do (including AI research) is possible without such understanding.

What is the relationship between prosaic AI and the "secret sauce" for intelligence? One interpretation is that the prosaic AI hypothesis says humans won't make any conceptual breakthrough prior to building AGI (which still allows an AI to rapidly improve its capabilities in a discontinuous way by discovering a conceptual breakthrough on its own), while the "no secret sauce" hypothesis says there won't be a conceptual breakthrough at all.