Search results

  • ...ct is an important part of the plan for preventing [[existential doom from AI]]. ... https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns
    2 KB (218 words) - 15:15, 26 February 2022
  • ...lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa Ngo and Yudkowsky on AI capability gains] ...hether there will be a period of rapid economic progress from "pre-scary" AI before "scary" cognition appears (Eliezer doesn't think this is likely, but
    6 KB (948 words) - 21:27, 1 August 2022
