a cognitive reduction doesn't ask "why do i have free will?" but instead asks "why do i believe i have free will? what concrete mechanistic facts about the world correspond to me having free will? what cognitive algorithms would make me ask 'why do i have free will?' if they happened to be the cognitive algorithms running in my mind?"
 
i want to ask similar questions about disagreements, particularly disagreements in ai safety about [[AI timelines]], [[takeoff speed]], [[simple core algorithm of agency]], and so forth. why do people disagree? is it reasonable to have disagreements about this sort of thing? would we have expected these disagreements to arise prior to seeing the discussion?
  
one step forward in this kind of thinking is [[eliezer]]'s [http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ "Policy Debates Should Not Appear One-Sided"], which gives a rule of thumb about ''when'' we should expect debates to be one-sided vs multi-sided (namely: if it's a simple question of fact, the debate should appear one-sided, but if it's a policy question with multiple parties who have different stakes, and each policy affects people in complicated ways, then we should expect strong arguments on multiple sides). Given this theory, it feels like the core debates in AI safety should be one-sided: questions like timelines and takeoff speed are questions of fact rather than policy, so we shouldn't expect "strong likelihoods pointing in multiple directions". So then what is going on, at the level of probability theory/bayesian math/whatever?
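A minimal sketch of one relevant bayesian fact (the numbers below are made up purely for illustration): conservation of expected evidence says that, for a question of fact, the posterior you expect to hold after seeing the evidence equals your prior, so you can't expect the evidence to systematically favor one side; the puzzle is then why people's realized updates diverge so much.

<syntaxhighlight lang="python">
# Conservation of expected evidence, with made-up numbers: the probability-weighted
# average of the possible posteriors equals the prior.

p_h = 0.3                                  # prior P(H), e.g. H = "there will be a FOOM"
p_e_given_h, p_e_given_not_h = 0.9, 0.2    # made-up likelihoods for one observable E

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)       # P(E)
post_if_e = p_e_given_h * p_h / p_e                         # P(H | E)
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)         # P(H | not E)

expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(expected_posterior)  # 0.3: equal to the prior (up to float error)
</syntaxhighlight>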
  
 
see also https://www.greaterwrong.com/posts/TGux5Fhcd7GmTfNGC/is-that-your-true-rejection which has the following list:
<blockquote>I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.:

* Uncommon, but well-supported, scientific knowledge or math;
* Long inferential distances;
* Hard-to-verbalize intuitions, perhaps stemming from specific visualizations;
* Zeitgeists inherited from a profession (that may have good reason for it);
* Patterns perceptually recognized from experience;
* Sheer habits of thought;
* Emotional commitments to believing in a particular outcome;
* Fear that a past mistake could be disproved;
* Deep self-deception for the sake of pride or other personal benefits.</blockquote>
another idea: http://johnsalvatier.org/blog/2017/the-i-already-get-it-slide
  
 
also see [https://intelligence.org/files/IEM.pdf IEM], which has:
<blockquote>These differently behaving cases are not competing arguments about how a single grand curve of cognitive investment has previously operated. They are all simultaneously true, and hence they must be telling us different facts about growth curves—telling us about different domains of a multivariate growth function—advising us of many compatible truths about how intelligence and real-world power vary with different kinds of cognitive investments.</blockquote>

with the footnote:

<blockquote>Reality itself is always perfectly consistent—only maps can be in conflict, not the territory. Under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence that we almost never see on more than one side of an argument. Unless you've made a mistake somewhere, you should almost never see extreme likelihood ratios pointing in different directions. Thus it's not possible that the facts listed are all "strong" arguments, about the same variable, pointing in different directions.</blockquote>
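To make the footnote's "extreme likelihood ratios" point concrete, here is a minimal simulation sketch (the outcome space and all the numbers are my own toy choices, not from IEM): the expected likelihood ratio in favor of H, computed in the world where H is false, is exactly 1, so by Markov's inequality evidence carrying a likelihood ratio of at least k shows up with probability at most 1/k when H is false.

<syntaxhighlight lang="python">
# Toy illustration with invented numbers: evidence carrying likelihood ratio >= k
# in favor of H shows up with probability <= 1/k in the world where H is false,
# because E[P(E|H)/P(E|not H)] under not-H equals 1 (then apply Markov's inequality).

import random

# three possible observations; likelihoods are toy numbers only
p_e_given_h     = [0.80, 0.15, 0.05]   # P(E | H)
p_e_given_not_h = [0.04, 0.16, 0.80]   # P(E | not H)

def sample_lr_given_not_h(n=100_000):
    """Draw n observations from the not-H world and record each one's likelihood ratio."""
    lrs = []
    for _ in range(n):
        r, cum = random.random(), 0.0
        for i, p in enumerate(p_e_given_not_h):
            cum += p
            if r <= cum:
                lrs.append(p_e_given_h[i] / p)
                break
    return lrs

lrs = sample_lr_given_not_h()
frac_strong = sum(lr >= 20 for lr in lrs) / len(lrs)
print(f"P(LR >= 20 | not H) ~= {frac_strong:.3f}  (Markov bound: 1/20 = 0.05)")
</syntaxhighlight>

So if two people both claim strong evidence about the same factual variable but in opposite directions, at least one of them has, in the footnote's words, made a mistake somewhere (most likely in the likelihoods their background model assigns).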
 
I think one of the interesting things about the disagreements in AI safety is how different people interpret evidence in different ways (i.e. each person sees new evidence as supporting their own case; a toy sketch of how this can happen follows the list). For example:

* the [[gap between chimpanzee and human intelligence]]: [[Eliezer]] sees this as supporting FOOM, i.e. you can make a small tweak to a brain to get much more general intelligence. [[Paul]] sees this as an argument against FOOM, because evolution wasn't optimizing for general intelligence (so it isn't an example of discontinuous improvement). See [https://lw2.issarice.com/posts.php?id=CjW4axQDqLd2oDCGG#YPodaAtRhN4qJefxb this comment] for more.
* [[AlphaGo Zero]]: Eliezer sees this as supporting FOOM because a single project got ahead of everyone else and suddenly shot past human performance, and because a "simple" algorithm that generalizes to many settings was able to achieve this performance. Paul sees this as evidence against because Go isn't an economically valuable task, so there wasn't enough incentive to get superhuman performance,<ref>https://www.facebook.com/yudkowsky/posts/10155848910529228?comment_id=10155849456079228</ref> and because he sees AlphaGo Zero as being impressive in implementation (rather than in basic insights).<ref>https://www.facebook.com/yudkowsky/posts/10155848910529228?comment_id=10155849474514228</ref>
* whether there is any strong evidence obtainable: Hanson thinks convincing evidence of FOOM can only be obtained once AIs control a substantial fraction of the economy (or something like that), and Eliezer predicts the world will have ended by that point. In other words, if Eliezer is right, then Hanson can never be convinced of this (because we will all be dead by the time the strong evidence comes in). On the other hand, if Hanson is right, then Eliezer ''can'' be convinced of this (but since Hanson's timelines are long(?), it will be many years until we find out(?)).
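A toy sketch of the mechanism behind this (the <code>update</code> helper and all the numbers are invented for illustration, not anyone's actual credences): the two observers agree on what happened, but their background models assign different likelihoods P(E | hypothesis), so the same observation pushes their posteriors in opposite directions.

<syntaxhighlight lang="python">
# Toy sketch with invented numbers: two observers see the same event E and share a
# prior, but their background models give different likelihoods, so the same
# evidence raises one posterior and lowers the other.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from a prior P(H) and this observer's own likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# E = "a single project suddenly shot past human performance with a simple algorithm"
# H = "takeoff will be discontinuous (FOOM)"
observer_a = update(prior=0.5, p_e_given_h=0.6, p_e_given_not_h=0.2)  # models E as much likelier given FOOM
observer_b = update(prior=0.5, p_e_given_h=0.2, p_e_given_not_h=0.5)  # models E as likelier without FOOM (e.g. it mostly reflects engineering effort)

print(f"observer A's posterior on H: {observer_a:.2f}")  # 0.75, up from 0.50
print(f"observer B's posterior on H: {observer_b:.2f}")  # 0.29, down from 0.50
</syntaxhighlight>

On this toy picture the disagreement isn't about the observation itself but about the background model that generates the likelihoods, which is part of why arguing over the headline evidence tends to go nowhere.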
  
 
Interesting common disagreements:
* Paul vs Eliezer/MIRI
* Wei vs Paul
* Eliezer vs Hanson

I like these posts by Eli Tyre:

* [https://www.greaterwrong.com/posts/faaoyve5ryY8E5M4r/eli-s-shortform-feed/comment/HpFfoqZXAbBZcsPKF Some things I think about Double Crux and related topics]
* [https://www.greaterwrong.com/posts/faaoyve5ryY8E5M4r/eli-s-shortform-feed/comment/D2v9JH7DX9s7GXHWF The Basic Double Crux Pattern]
* [https://musingsandroughdrafts.wordpress.com/2019/09/27/basic-double-crux-pattern-example/ Basic Double Crux pattern example]

I want to try doing an imaginary double crux example for AI safety stuff.

another point (gary drescher mentions this in ''[[Good and Real]]''): if you just try to get feedback on/argue about a single piece of your world model, you often get nowhere, because the other side doesn't have all of the other pieces in place to put your single point in the right context. So you need your interlocutor to model you holistically, rather than just critique a single part of your views.

https://www.econlib.org/archives/2017/12/whats_my_core_m.html/#comment-174163 -- this is an interesting comment about how to identify correct contrarians, but for AI safety arguments the heuristics he uses don't discriminate between the various views, so it's not useful here.

see also [[Justin Shovelain]]'s https://lw2.issarice.com/posts/DYWXntS3ybp8x3cKq/causes-of-disagreements
  
 
==See also==
  
 
==References==
<references/>
[[Category:AI safety]]
