Late singularity
here, "late singularity" means something like "the singularity happens 200+ years from now", rather than in ~20 or ~100 years.
are late singularities actually better? if AGI is only developed 200 years from now, what does that say about ai xrisk? a late singularity could happen for several reasons:
- making AGI turned out to be much harder than we thought -- if you have the intuition that "ai safety will be much harder than building AGI", this might push you toward thinking "ai safety is basically impossible for humans to solve without intelligence enhancement".
- our institutions turned out to be much suckier than we thought (e.g. all of AI academia and the for-profit labs just Great Stagnate) -- this seems bad from an xrisk standpoint: it would mean we're probably just super bad at coordinating, so we can't solve ai safety either.
- on the other hand, maybe a late singularity just means we get a lot more time to get together and work on ai safety.
- couldn't it also mean that we're actually super good at coordination, and managed to deliberately slow down AI progress?