Search results
- ...ct is an important part of the plan for preventing [[existential doom from AI]]. ...<ref>https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns</ref> (2 KB, 218 words; 15:15, 26 February 2022)
- ...lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa Ngo and Yudkowsky on AI capability gains] ...hether there will be a period of rapid economic progress from "pre-scary" AI before "scary" cognition appears (Eliezer doesn't think this is likely, but (6 KB, 948 words; 21:27, 1 August 2022)