Late singularity
Latest revision as of 21:33, 4 April 2021
here "late singularity" means something like "the singularity happens 200+ years from now, rather than in 20 or 100 years".
are late singularities actually better? if AGI is developed only after 200 years, what does that say about ai xrisk? this could happen for several reasons:
- making AGI turned out to be much harder than we thought -- i think if you have the intuition that "ai safety will be much harder than building agi", this might push you to think "ai safety is basically impossible for humans to solve without intelligence enhancement".
- our institutions turned out to be much suckier than we thought (e.g. all of AI academia and for-profit company labs just Great Stagnate) -- this seems bad from an xrisk standpoint, because it means we're probably just super bad at coordinating, so we can't even solve ai safety.
- on the other hand, maybe this means we just get a lot more time to get together and work on ai safety.
- couldn't this also mean that we are super good at coordination, and we were able to slow down AI progress?
- we will likely have a lot more hardware than we do today, and lots of other technologies as well, so the world will look pretty different prior to AGI.
in other words, P(doom | AGI after 2220) seems tricky to estimate, since there are multiple distinct reasons why AGI might only happen after 2220, and each reason implies something quite different about the probability of doom.
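the "multiple reasons, each implying a different doom probability" point can be made concrete with the law of total probability. a minimal sketch -- the scenario weights and conditional probabilities below are made-up placeholders for illustration, not estimates from this page:

```python
# hypothetical reasons why AGI might arrive after 2220, mapped to assumed
# values of (P(reason | late AGI), P(doom | reason, late AGI)).
# all numbers are placeholders, just to show the structure of the calculation.
scenarios = {
    "agi much harder than expected": (0.4, 0.8),
    "institutions stagnated":        (0.3, 0.7),
    "good coordination slowed ai":   (0.3, 0.2),
}

# law of total probability:
# P(doom | late) = sum over reasons of P(reason | late) * P(doom | reason, late)
p_doom_given_late = sum(p_reason * p_doom for p_reason, p_doom in scenarios.values())
print(round(p_doom_given_late, 2))  # -> 0.59 with these placeholder numbers
```

the point of the sketch: the answer depends heavily on the mixture weights, so "AGI after 2220" alone doesn't pin down P(doom) -- you have to argue about which reason dominates.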