Coherence and goal-directed agency discussion

From Issawiki
 
One of the "big discussions" that has been taking place in AI safety in 2018 and 2019 is about [[coherence argument]]s and [[goal-directed]] agency. Some of the topics in this discussion are:

* What point was [[Eliezer]] trying to make when bringing up coherence arguments?
* Will the first AGI systems we build be goal-directed?
* Are utility maximizers goal-directed?
https://www.greaterwrong.com/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#comment-F2YB5aJgDdK9ZGspw
 
Revision as of 03:40, 27 February 2020