Coherence and goal-directed agency discussion

One of the "big discussions" that has been taking place in AI safety in 2018 and 2019 is about [[coherence argument]]s and [[goal-directed]] agency. Some of the topics in this discussion are:

* What point was [[Eliezer]] trying to make when bringing up coherence arguments?
* Will the first AGI systems we build be goal-directed?
* Are utility maximizers goal-directed?

https://www.greaterwrong.com/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#comment-F2YB5aJgDdK9ZGspw
https://www.greaterwrong.com/posts/9zpT9dikrrebdq3Jf/will-humans-build-goal-directed-agents
https://www.greaterwrong.com/posts/tHxXdAn8Yuiy9y2pZ/ai-safety-without-goal-directed-behavior
https://www.greaterwrong.com/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning