Coherence and goal-directed agency discussion
One of the "[[List of big discussions in AI alignment|big discussions]]" that took place in AI safety during 2018–2020 concerns [[coherence argument]]s and [[goal-directed]] agency. Some of the topics in this discussion are:

* What point was [[Eliezer]] trying to make when bringing up coherence arguments?
* Will the first AGI systems we build be goal-directed?
* Are utility maximizers goal-directed?
* What are the differences, and what is the implication structure, among intelligence, goal-directedness, being an expected utility maximizer, being coherent, and "EU maximizer + X" (for various values of X, such as "can convert extra resources into additional utility", "has a typical reward function", "has a simple reward function", or "wasn't specifically optimized away from goal-directedness")? A toy sketch of the bare "EU maximizer" condition is given after this list.
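The following is a minimal, illustrative sketch (in Python) of one observation that recurs in this discussion, notably in Rohin Shah's "Coherence arguments do not imply goal-directed behavior": any fixed behavior, even an agent that only twitches, can be described as maximizing some utility function over whole trajectories, so "is an EU maximizer" on its own is a very weak constraint on behavior. The action set, horizon, policy, and utility construction below are assumptions chosen purely for illustration; none of them come from the linked posts.

<pre>
# Toy illustration (assumed setup, not from the linked posts): take an
# arbitrary fixed policy -- here an agent that only "twitches" -- and
# construct a utility function over whole trajectories that this policy
# maximizes.

from itertools import product

ACTIONS = ["left", "right", "twitch"]
HORIZON = 3  # a trajectory is a length-3 sequence of actions


def twitch_policy(history):
    """An intuitively non-goal-directed policy: ignore history, just twitch."""
    return "twitch"


def rollout(policy):
    """The trajectory produced by following the policy for HORIZON steps."""
    history = []
    for _ in range(HORIZON):
        history.append(policy(tuple(history)))
    return tuple(history)


# Rationalize the policy: give utility 1 to the trajectory it actually
# produces and utility 0 to every other trajectory.
chosen = rollout(twitch_policy)
utility = {traj: float(traj == chosen) for traj in product(ACTIONS, repeat=HORIZON)}

# The twitching agent attains the maximum of this utility function, so it
# "is an expected utility maximizer" for it, without being goal-directed
# in any intuitive sense.
assert max(utility.values()) == utility[chosen]
print("trajectory:", chosen, "utility:", utility[chosen])
</pre>

On this framing, much of the discussion is about which extra conditions X rule out degenerate utility functions like the one constructed here.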
 
https://www.greaterwrong.com/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#comment-F2YB5aJgDdK9ZGspw

https://www.greaterwrong.com/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior

https://www.greaterwrong.com/posts/9zpT9dikrrebdq3Jf/will-humans-build-goal-directed-agents

https://www.greaterwrong.com/posts/tHxXdAn8Yuiy9y2pZ/ai-safety-without-goal-directed-behavior

https://www.greaterwrong.com/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning

==See also==

* [[Comparison of terms related to agency]]
* [[List of terms used to describe the intelligence of an agent]]

[[Category:AI safety]]
