Pivotal act
A pivotal act is (putting down a guess here) something that allows a single entity to take over the world, form a singleton, or prevent others from taking over the world, forming a singleton, or achieving AGI.

According to some views of AI takeoff, a pivotal act is an important part of the plan for preventing existential doom from AI.
Examples
- "Build self-replicating open-air nanosystems and use them (only) to melt all GPUs."[1]
- Make progress on the full alignment problem (i.e. not restricted to a limited AI system, such as present-day systems or a minimal AGI) faster than humans can[2]
Non-examples
- Make dramatic progress in proving mathematical theorems (without explaining the results to humans). This is not a pivotal capability, because it is unclear how such theorem-proving could be used to perform a pivotal act[3]
External links
References
1. https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns
2. https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns
3. https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns