Things to talk about:

- The most decision-relevant questions for me right now (everything else should feed into one of these questions):
  - AI safety vs. something else? Right now AI safety seems like the best candidate for the biggest/soonest change, but I want to investigate some other things.
  - If AI safety, then which technical agenda seems best? This matters for (1) deciding what to do technical research on, if at all, and (2) deciding what technical research to pay attention to and support.
  - If AI safety, then what will the end of the world look like? (Basically, this is takeoff speed plus what the most likely failure mode looks like.) This matters for prepping.
  - How likely is the end of the world?
  - When will AGI come?
Example scenarios:
| Name of scenario | Next big thing? | AI safety technical agenda | What will the end of the world look like? | How likely is the world to end? | When will the world end/reach a singularity/cure aging? | What I should do |
|---|---|---|---|---|---|---|
| Eliezerian end of world | AGI | MIRI | FOOM/AI takeover | Highly likely? | 10-20 years? | Don't have kids. Probably don't even bother with a wife search. Focus on AI safety research, either field building or figuring out how to contribute to MIRI-like technical work. |
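The table above implicitly defines a decision matrix: scenarios (each with a probability and a timeline) down one axis, candidate life plans along the other, with the "What I should do" column as the best response per scenario. As a minimal sketch of how such a table could be scored once more rows are filled in: the scenario names other than "Eliezerian end of world", and all probabilities and payoffs below, are made-up placeholders, not estimates from this page.

```python
# Hypothetical sketch, not from the original page: scoring the scenario
# table as a decision matrix. Every number here is a placeholder.

# Scenario name -> assumed probability that this scenario is the real one.
scenarios = {
    "Eliezerian end of world": 0.3,
    "Slow takeoff": 0.4,          # placeholder scenario
    "No AGI this century": 0.3,   # placeholder scenario
}

# (action, scenario) -> how well the action fares under that scenario,
# on an arbitrary 0-10 scale (placeholders).
payoff = {
    ("AI safety research", "Eliezerian end of world"): 9,
    ("AI safety research", "Slow takeoff"): 6,
    ("AI safety research", "No AGI this century"): 3,
    ("Something else", "Eliezerian end of world"): 1,
    ("Something else", "Slow takeoff"): 4,
    ("Something else", "No AGI this century"): 8,
}

actions = ["AI safety research", "Something else"]

# Expected payoff of each action, weighting each scenario by its probability.
for action in actions:
    ev = sum(p * payoff[(action, s)] for s, p in scenarios.items())
    print(f"{action}: expected payoff = {ev:.2f}")
```

This is only a toy: the point is that filling in more scenario rows turns the qualitative "What I should do" column into something that can be sanity-checked against explicit probability and payoff guesses.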