User contributions
(newest | oldest) View (newer 20 | older 20) (20 | 50 | 100 | 250 | 500)
- 22:52, 7 March 2020 (diff | hist) . . (+791) . . N Selection effect for who builds AGI (Created page with ""But also, what I’m more worried about is that the arguments will always be a bit uncertain, and that they will be the kind of arguments that maybe should push a rational pe...")
- 07:42, 5 March 2020 (diff | hist) . . (+187) . . N MIRI vs Paul research agenda hypotheses (Created page with "from "The concern" in https://agentfoundations.org/item?id=1220 key hopes listed in https://www.greaterwrong.com/posts/HCv2uwgDGf5dyX5y6/preface-to-the-sequence-on-iterated-a...")
- 22:35, 1 March 2020 (diff | hist) . . (+3,114) . . N Comparison of terms related to agency (Created page with "{| class="wikitable" ! Term !! Opposite |- | Agent || |- | Optimizer, optimization process || |- | consequentialist || |- | expected utility maximizer || |- | goal-directed, g...")
- 04:33, 27 February 2020 (diff | hist) . . (+73) . . N Terms used to describe intelligence of agents (Issa moved page Terms used to describe intelligence of agents to List of terms used to describe the intelligence of an agent) (current) (Tag: New redirect)
- 04:31, 27 February 2020 (diff | hist) . . (+421) . . N List of terms used to describe the intelligence of an agent (Created page with "Eliezer has lots of terms to describe how intelligent/powerful/whatever an agent is. * Relevant * Pivotal (is this only for actions, or also for agents, as in "agent who...")
- 03:30, 27 February 2020 (diff | hist) . . (+529) . . N Coherence and goal-directed agency discussion (Created page with "https://www.greaterwrong.com/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#comment-F2YB5aJgDdK9ZGspw https://www.greaterwrong.com/posts/NxF5G6...")
- 04:33, 26 February 2020 (diff | hist) . . (+381) . . N Simple core (Created page with "'''Simple core''' is a term that has been used to describe various things in AI safety. * Simple core to corrigibility<ref>https://ai-alignment.com/corrigibility-3039e668...")
- 23:27, 25 February 2020 (diff | hist) . . (+359) . . N Agent foundations (Created page with "==Terminology== There are many different terms used to describe MIRI's research, and it's a little unclear how careful people are when using these terms. * agent foundations...")
- 07:42, 25 February 2020 (diff | hist) . . (+34) . . N Uncertain Future (Redirected page to The Uncertain Future) (current) (Tag: New redirect)
- 03:54, 25 February 2020 (diff | hist) . . (+27) . . N Wei Dai (Created page with "aka distributor of weipills")
- 03:31, 25 February 2020 (diff | hist) . . (+362) . . N AI safety field consensus (Created page with "People in AI safety tend to disagree about many things. However, there is also wide agreement about some other things (which people...")
- 22:28, 24 February 2020 (diff | hist) . . (+323) . . N Deconfusion (Created page with "'''Deconfusion''' is a type of research aimed at making ourselves less confused about fundamental things. ==History== The term was invented/popularized by MIRI. It was orig...")
- 10:19, 24 February 2020 (diff | hist) . . (+53) . . N MIRI (Redirected page to Machine Intelligence Research Institute) (current) (Tag: New redirect)
- 09:46, 24 February 2020 (diff | hist) . . (+306) . . N The Uncertain Future (Created page with "'''The Uncertain Future''' is a future modeling web applet created by MIRI. Instructions for running it in 2020: https://gist.github.com/riceissa/6e239e8828cceeed5a0bd656...")
- 02:40, 24 February 2020 (diff | hist) . . (+94) . . N List of technical AI alignment agendas (Created page with "This is a '''list of technical AI alignment agendas'''. https://aiwatch.issarice.com/#agendas")
- 00:37, 24 February 2020 (diff | hist) . . (+492) . . N Rapid capability gain vs AGI progress (Created page with "* rapid capability gain seems to refer to how well a ''single'' AI system improves over time, e.g. AlphaGo going from "no knowledge of anything" to "superhuman at Go"...")
- 23:24, 23 February 2020 (diff | hist) . . (+276) . . N Jessica Taylor (Created page with "The most important posts for AI strategy are: * [https://agentfoundations.org/item?id=1220 On motivations for MIRI's highly reliable agent design research] * [https://agentfo...")
- 23:19, 23 February 2020 (diff | hist) . . (+5,066) . . N List of disagreements in AI safety (Created page with "* list of things people disagree about:<ref>[https://drive.google.com/file/d/1wI21XP-lRa6mi5h0dq_USooz0LpysdhS/view Clarifying some key hypotheses in AI alignment].</ref> ** h...")
- 09:43, 23 February 2020 (diff | hist) . . (+43) . . N Simple core algorithm (Redirected page to Secret sauce for intelligence) (current) (Tag: New redirect)
- 09:29, 23 February 2020 (diff | hist) . . (+651) . . N Prosaic AI (Created page with "what are the possibilities for prosaic AI? i.e. if prosaic AI happened, then what are some possible reasons for why this happened? some ideas: * optimizing hard enough produc...")