Coherence and goal-directed agency discussion
- What point was Eliezer trying to make when bringing up coherence arguments?
- Will the first AGI systems we build be goal-directed?
- Are utility maximizers goal-directed?
- What are the differences, and the implication structure, among: intelligence, goal-directedness, expected utility maximization, coherence, and "EU maximizer + X" for various values of X, such as:
  - "can convert extra resources into additional utility"
  - "has a typical reward function"
  - "has a simple reward function"
  - "wasn't specifically optimized away from goal-directedness"
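One concrete way to ground the last question: any fixed behavior can be described as expected utility maximization under a suitably chosen (trivial) utility function, which is why "is an EU maximizer" by itself need not imply goal-directedness. A minimal sketch of that construction, with all names and the toy action set hypothetical:

```python
import itertools

ACTIONS = ["left", "right", "twitch"]

def arbitrary_policy(history):
    """Any fixed behavior at all -- here, an agent that just twitches."""
    return "twitch"

def rationalizing_utility(history, action):
    """A trivial utility function under which the policy above is optimal:
    utility 1 for the action the policy would take, 0 for anything else."""
    return 1.0 if action == arbitrary_policy(history) else 0.0

def eu_maximizer(history, utility):
    """Pick the action maximizing (degenerate, deterministic) expected utility."""
    return max(ACTIONS, key=lambda a: utility(history, a))

# The EU maximizer equipped with the rationalizing utility function
# reproduces the arbitrary policy on every history.
for history in itertools.product(ACTIONS, repeat=2):
    assert eu_maximizer(history, rationalizing_utility) == arbitrary_policy(history)
```

This only shows that "EU maximizer" is vacuous without further constraints; the "+ X" conditions in the question are candidate constraints that would restore real content to the claim.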