User contributions
(newest | oldest) View (newer 50 | older 50) (20 | 50 | 100 | 250 | 500)
- 23:28, 22 February 2020 (diff | hist) . . (+258) . . Evolution
- 23:25, 22 February 2020 (diff | hist) . . (0) . . m Evolution (Issa moved page Evolution analogy to Evolution)
- 23:25, 22 February 2020 (diff | hist) . . (+23) . . N Evolution analogy (Issa moved page Evolution analogy to Evolution) (current) (Tag: New redirect)
- 23:23, 22 February 2020 (diff | hist) . . (+240) . . N Evolution (Created page with "In discussions of AI risk, evolution is often used as an analogy for the development of AGI. here are some examples: * chimpanzee vs human intelligence * optimization daemo...")
- 23:20, 22 February 2020 (diff | hist) . . (+52) . . Paperclip maximizer
- 23:16, 22 February 2020 (diff | hist) . . (+228) . . Paperclip maximizer
- 23:14, 22 February 2020 (diff | hist) . . (+754) . . N Paperclip maximizer (Created page with "my understanding is that the paperclip maximizer example was intended to illustrate the following two concepts: * orthogonality thesis: intelligence/capability and values...")
- 22:41, 22 February 2020 (diff | hist) . . (+37) . . Iterated amplification
- 22:40, 22 February 2020 (diff | hist) . . (+59) . . My understanding of how IDA works
- 22:40, 22 February 2020 (diff | hist) . . (-1) . . IDA (Redirected page to Iterated amplification) (current) (Tag: New redirect)
- 22:40, 22 February 2020 (diff | hist) . . (+37) . . N IDA (Created page with "# redirect Iterated amplification")
- 22:38, 22 February 2020 (diff | hist) . . (+239) . . N Iterated amplification (Created page with "'''Iterated amplification''' (also called '''iterated distillation and amplification''', and abbreviated '''IDA''') is the technical alignment agenda that Paul Christiano...")
- 22:36, 22 February 2020 (diff | hist) . . (+213) . . N AlphaGo (Created page with "'''AlphaGo''' and its successor '''AlphaGo Zero''' are used to make various points in AI safety. * Rapid capability gain * (for AlphaGo Zero) comparison to Paul Chr...")
- 09:44, 22 February 2020 (diff | hist) . . (+20) . . Comparison of AI takeoff scenarios
- 09:42, 22 February 2020 (diff | hist) . . (+40) . . Comparison of AI takeoff scenarios
- 09:41, 22 February 2020 (diff | hist) . . (+148) . . Comparison of AI takeoff scenarios
- 09:40, 22 February 2020 (diff | hist) . . (+48) . . Comparison of AI takeoff scenarios
- 09:39, 22 February 2020 (diff | hist) . . (+26) . . N Hansonian (Redirected page to Robin Hanson) (current) (Tag: New redirect)
- 09:38, 22 February 2020 (diff | hist) . . (+31) . . N Yudkowskian (Redirected page to Eliezer Yudkowsky) (current) (Tag: New redirect)
- 09:37, 22 February 2020 (diff | hist) . . (+391) . . N Comparison of AI takeoff scenarios (Created page with "{| class="wikitable" |- ! Scenario !! Significant changes to the world prior to critical AI capability threshold being reached? !! Intelligence explosion? !! Decisive strategi...")
- 09:13, 22 February 2020 (diff | hist) . . (+62) . . Future planning
- 09:12, 22 February 2020 (diff | hist) . . (+558) . . Future planning
- 09:06, 22 February 2020 (diff | hist) . . (+27) . . Future planning
- 08:38, 22 February 2020 (diff | hist) . . (+230) . . Future planning
- 08:35, 22 February 2020 (diff | hist) . . (+160) . . Future planning
- 08:32, 22 February 2020 (diff | hist) . . (+565) . . Secret sauce for intelligence
- 08:18, 22 February 2020 (diff | hist) . . (+63) . . Secret sauce for intelligence
- 08:12, 22 February 2020 (diff | hist) . . (+112) . . Secret sauce for intelligence
- 07:05, 22 February 2020 (diff | hist) . . (+499) . . Soft-hard takeoff
- 06:58, 22 February 2020 (diff | hist) . . (+1) . . Soft-hard takeoff
- 06:58, 22 February 2020 (diff | hist) . . (-546) . . Soft-hard takeoff
- 06:57, 22 February 2020 (diff | hist) . . (+929) . . N Counterfactual of dropping a seed AI into a world without other capable AI (Created page with ""Well yeah, ''if'' we dropped a superintelligence into a world full of humans. But realistically the rest of the world will be undergoing intelligence explosion too. And indee...")
- 06:51, 22 February 2020 (diff | hist) . . (+629) . . Soft-hard takeoff
- 06:36, 22 February 2020 (diff | hist) . . (+294) . . Soft-hard takeoff
- 06:27, 22 February 2020 (diff | hist) . . (+397) . . Soft-hard takeoff
- 02:15, 22 February 2020 (diff | hist) . . (+4) . . Soft-hard takeoff
- 02:14, 22 February 2020 (diff | hist) . . (+233) . . Soft-hard takeoff
- 02:10, 22 February 2020 (diff | hist) . . (+899) . . Soft-hard takeoff
- 08:34, 20 February 2020 (diff | hist) . . (+12) . . Soft-hard takeoff
- 08:30, 20 February 2020 (diff | hist) . . (+496) . . N Minimal AGI vs task AGI (Created page with "* I'm not clear on what the difference is here * See MIRI's strategic plan from end of 2017 where they make this distinction * It seems like MIRI went from: ** a one-step view...")
- 08:27, 20 February 2020 (diff | hist) . . (+37) . . My understanding of how IDA works
- 08:25, 20 February 2020 (diff | hist) . . (0) . . My understanding of how IDA works
- 08:24, 20 February 2020 (diff | hist) . . (+197) . . My understanding of how IDA works (→Analysis)
- 08:23, 20 February 2020 (diff | hist) . . (+1,122) . . My understanding of how IDA works
- 08:21, 20 February 2020 (diff | hist) . . (+2,740) . . N My understanding of how IDA works (Created page with "==explanation== '''Stage 0''': In the beginning, Hugh directly rates actions to provide the initial training data on what Hugh approves of. This is used to train <math>A_0</m...")
- 08:00, 20 February 2020 (diff | hist) . . (+189) . . Future planning
- 07:57, 20 February 2020 (diff | hist) . . (+46) . . Future planning
- 07:56, 20 February 2020 (diff | hist) . . (+578) . . Future planning
- 03:54, 20 February 2020 (diff | hist) . . (+212) . . How doomed are ML safety approaches?
- 03:48, 20 February 2020 (diff | hist) . . (+55) . . N Simple core algorithm for agency (Redirected page to Simple core of consequentialist reasoning) (current) (Tag: New redirect)