The Hour I First Believed

From Issawiki
Revision as of 23:59, 15 March 2020 by Issa (talk | contribs) (Comments)

"The Hour I First Believed" is a blog post by Scott Alexander about acausal trade and a big picture of what will happen in the multiverse.[1]

Comments

Here's my thinking on this post:

  • I think the explanations of the five parts ("acausal trade, value handshakes, counterfactual mugging, simulation capture, and the Tegmarkian multiverse") are basically fine/accurate descriptions of those things.
  • I think it's plausible that something sort of like what the post describes will happen, where there will be one dominant "universal law", and many/most superintelligences in the multiverse will follow this law.
  • I think the universal law will mostly look alien to us, and completely unlike what the post describes ("since the superentity is identical to the moral law, it’s not really asking you to do anything except be a good person anyway").
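Of the five parts listed above, counterfactual mugging has a simple expected-value logic that can be sketched as a toy calculation. The payoffs below ($10,000 reward, $100 cost, fair coin) are the standard illustrative numbers for the thought experiment, not figures from the post; this is just a sketch of why precommitting to pay looks good when evaluated before the coin flip.

```python
def expected_value(pays_when_asked: bool,
                   reward: float = 10_000.0,
                   cost: float = 100.0,
                   p_heads: float = 0.5) -> float:
    """EV, evaluated before Omega's coin flip, of a fixed policy.

    Heads: Omega pays the reward iff it predicts the agent would have
    paid on tails. Tails: Omega asks the agent to pay the cost.
    """
    if pays_when_asked:
        return p_heads * reward + (1 - p_heads) * (-cost)
    # A refusing agent never pays, but also never gets rewarded.
    return 0.0

print(expected_value(True))   # 4950.0
print(expected_value(False))  # 0.0
```

An agent deciding *after* seeing tails has no further incentive to pay, which is exactly what makes the problem a test case for UDT-style decision theories versus CDT.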

Here are some considerations:

  • maybe it turns out that most civilizations across the multiverse screw up AI alignment. If so, most superintelligences that exist could have messed-up values (values that looked good to program into an AI, but aren't actually the real thing), and the universal law would take these messed-up values into account, rather than the values which naturally tend to evolve.
  • Eliezer's idea of reflectively consistent degrees of freedom: if your AI uses CDT, it will not self-modify to use UDT; instead it will evolve into son-of-CDT. There are other things like this, where different initial configurations lead to totally different endpoints after many iterations of self-modification. So it isn't necessarily the case that all superintelligences will use acausal trade/value handshakes.

References