Pivotal act

A '''pivotal act''' is (putting down my guess here) something that allows a single entity to take over the world, form a [[singleton]], or prevent others from taking over the world, forming a singleton, or achieving AGI.

Things to clarify: is a pivotal act only something done by "good" actors, or does it also include acts like destroying the world? Also, the distinction between a pivotal ''capability'' and a pivotal ''act''.
 
According to some views of [[AI takeoff]], a pivotal act is an important part of the plan for preventing [[existential doom from AI]].

==Examples==
* "Build self-replicating open-air nanosystems and use them (only) to melt all GPUs."<ref>https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns</ref>
 
* "Build self-replicating open-air nanosystems and use them (only) to melt all GPUs."<ref>https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns</ref>
* Make progress on the full (i.e. not restricted to a limited AI system or [[minimal AGI]]) alignment problem faster than humans can<ref>https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns</ref>
+
* Make progress on the full (i.e. not restricted to a limited AI system like present-day systems or [[minimal AGI]]) alignment problem faster than humans can<ref>https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns</ref>
  
 
==Non-examples==
 
* Make dramatic progress in proving mathematical theorems (without explaining the results to humans) -- not a pivotal capability, because it is unclear how to turn this ability into a pivotal act<ref>https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns</ref>
  
 
==External links==
 
==References==
<references/>