AlphaGo
Latest revision as of 20:19, 11 August 2021

AlphaGo and its successor AlphaGo Zero are used to make various points in AI safety.

* [[Rapid capability gain]]
* single group pulling ahead
* a single architecture / basic AI technique working for many different games ([[single-architecture generality]])
* (for AlphaGo Zero) comparison to [[Paul Christiano]]'s [[iterated amplification]]
* [[AlphaGo as evidence of discontinuous takeoff]]

==Eliezer's commentary==

{| class="sortable wikitable"
|-
! Date !! Initial segment + link !! Description
|-
| 2016-01-27 || [https://www.facebook.com/yudkowsky/posts/10153914357214228 People occasionally ask me about signs that the remaining timeline might be short. It's *very* easy for nonprofessionals to take too much alarm too easily. Deep Blue beating Kasparov at chess was *not* such a sign. Robotic cars are *not* such a sign.] ||
|-
| 2016-02-08 || [https://www.facebook.com/yudkowsky/posts/10153941386639228 I have one bet on at 2:3 against AlphaGo winning against Sedol in March - they get my $667 if AlphaGo wins, I get their $1000 if AlphaGo loses] ||
|-
| 2016-02-29 || [https://www.facebook.com/yudkowsky/posts/10153987984049228 This suggests that AlphaGo beating Sedol in March might not be nearly as out-of-character fast progress as I thought] ||
|-
| 2016-03-08 || [https://www.facebook.com/yudkowsky/posts/10154008064814228 With regards to tonight's match of Deepmind vs. Sedol, an example of an outcome that would indicate strong general AI progress would be if a sweating, nervous Sedol resigns on his first move, or if a bizarre-seeming pattern of Go stones causes Sedol to have a seizure.] ||
|-
| 2016-03-09 || [https://www.facebook.com/yudkowsky/posts/10154010758639228 Second match ongoing. #AlphaGo just made a move that everyone is saying nobody else would have played. #Sedol walked out of the room with his clock running, presumably to think about it.] ||
|-
| 2016-03-09 || [https://www.facebook.com/yudkowsky/posts/10154009668254228 Question for someone with a much deeper understanding of Go: If a 6p was analyzing a game of a 9p against a 6p, but the analyst thought it was 6p vs. 6p, is this what their analysis might sound like?] ||
|-
| 2016-03-10 || [https://www.facebook.com/yudkowsky/posts/10154011176819228 It's possible that, contrary to hopeful commentators, #AlphaGo is not actually enriching the Go game for humans] ||
|-
| 2016-03-11 || [https://www.facebook.com/yudkowsky/posts/10154018209759228 (Long.) As I post this, AlphaGo seems almost sure to win the third game and the match.] ||
|-
| 2016-03-13 || [https://www.facebook.com/yudkowsky/posts/10154024894449228 And then AlphaGo got confused in a way no human would and lost its 4th game] ||
|-
| 2016-03-13 || [https://www.facebook.com/yudkowsky/posts/10154027095839228 Okay, look, to everyone going "Aha but of course superhuman cognition will always be bugged for deep reason blah": Please remember that machine chess *is* out of the phase where a human can analyze it psychologically without computer assistance] ||
|-
| 2016-04-16 || [https://www.facebook.com/yudkowsky/posts/10154120081504228 "As soon as anyone does it, it stops being Artificial Intelligence!" No, as soon as anyone in AI achieves surprisingly good performance in some domain that people previously imagined being done as a specialized application of human general intelligence, the inference is, correctly, "Oh, it seems there was surprisingly a specialized way to do that which didn't invoke general intelligence" rather than "Oh, it looks like surprisingly more progress was made toward generally intelligent algorithms than we thought."] ||
|-
| 2017-10-19 || [https://www.facebook.com/yudkowsky/posts/10155848910529228 AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn't pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet] ([https://www.greaterwrong.com/posts/shnSyzv4Jq3bhMNw5/alphago-zero-and-the-foom-debate crossposted to LessWrong] and to [https://intelligence.org/2017/10/20/alphago/ MIRI blog]) || [https://www.greaterwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity Robin Hanson's reply]
|-
| 2017-12-09 || [https://www.facebook.com/yudkowsky/posts/10155992246384228 Max Tegmark put it well, on Twitter: The big deal about Alpha Zero isn't how it crushed human chess players, it's how Alpha Zero crushed human chess *programmers*.] ||
|}

==See also==

* [[AlphaGo as evidence of discontinuous takeoff]]

==What links here==

{{Special:WhatLinksHere/{{FULLPAGENAME}}}}

[[Category:AI safety]]