Difference between revisions of "Convergent evolution of values"
Revision as of 08:30, 19 March 2020
(warning: this page is especially cRaZy~. you have been warned!)
multiplicative process => log-resource utility function (see my comment)
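a minimal sketch of why multiplicative dynamics point at log-resource utility (the numbers here are illustrative, not from the linked comment): a repeated gamble can have a favorable *expected* arithmetic growth factor while its per-round *log* growth rate is negative, so almost every long trajectory shrinks. an agent whose resources compound multiplicatively should therefore care about expected log-resources, not expected resources.

```python
import math

# A repeated multiplicative gamble: each round, wealth is multiplied
# by 1.5 (heads) or 0.6 (tails), each with probability 1/2.
# (These payoff numbers are made up for illustration.)
up, down, p = 1.5, 0.6, 0.5

# Expected one-round arithmetic growth factor: looks favorable (> 1)...
expected_factor = p * up + (1 - p) * down  # 1.05

# ...but the per-round expected log growth rate is negative, so a
# typical long trajectory of this process decays toward zero.
log_growth = p * math.log(up) + (1 - p) * math.log(down)

print(expected_factor)  # 1.05
print(log_growth)       # ≈ -0.0527
```

this is the ergodicity-economics point: maximizing expected log wealth is what tracks the time-average growth rate of a multiplicative process.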
i think there can be lots of similar, robust phenomena like this, for evolved organisms. so in terms of six plausible meta-ethical alternatives, there can be some features that are just "baked in" to many diverse kinds of organisms (like getting bored of a single kind of resource!).
"How convergent is human-style compassion for the suffering of others, including other species? Is this an incidental spandrel of human evolution, due to mirror neurons and long infant-development durations requiring lots of parental nurturing? Or will most high-functioning, reciprocally trading civilizations show a similar trend?" https://foundational-research.org/open-research-questions/#Aliens
see also paul's post about aliens.
it seems notable that in "three worlds collide", all three civilizations were more or less "civilized" by our standards!
truth-seeking seems like a convergent value for highly intelligent/advanced organisms.
how could convergence arise?
- causal processes that convergently produce the same ethical code
- some sort of acausal law / moral code written deeply in logic somewhere (like The Hour I First Believed)
related: i've heard people saying how 3d organisms are the most likely since they allow for eating/digestion or whatever, and how this isn't easy to do in 2d or 4d or whatever (i forgot what the reasoning is). see https://en.wikipedia.org/wiki/Anthropic_principle#Dimensions_of_spacetime (i haven't re-read that page)
i've been wondering: must organisms be physical? or can they be more like... just "algorithms" that are implemented in this really weird substrate (that doesn't involve each organism having its own body that moves around and so forth). could such a thing evolve in the "natural" world?
how common is it to have a "physical world" where creatures are running around, like on earth/our physics? versus like, just raw observer-moments being computed without reference to a physical world.
are there any "simple" ways to produce intelligence that don't involve evolution? e.g. if we found some intelligent alien species, would they have to have arisen from evolutionary processes?
what alternatives are there to evolution? like, our universe is one made of simple rules, that barfed out organisms through trial and error. but could there be simple universes that somehow have certain organisms/observer-moments "baked in"? or some other simple optimization process that outputs complicated organisms/OMs? i guess one idea is that evolution produces the initial intelligences, but then these create a UFAI, so then the values that get implemented could be pretty "random".
structure of discounting in the world: e.g. humans seem to use hyperbolic discounting, which is not consistent across time. is this a fairly stable feature of organisms across different evolutionary histories?
whereas agents in reinforcement learning typically use exponential (geometric) discounting, which, unlike hyperbolic discounting, is time-consistent.
if we can nail down what kind of discounting is likely to evolve/be most common in the multiverse, that should tell us something about what values are/whether values have convergent evolution.
is there reinforcement learning that uses hyperbolic discounting, instead of the usual exponential/geometric one?
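to make the hyperbolic-vs-exponential contrast above concrete, here's a small sketch (the discount parameters and reward amounts are arbitrary choices for illustration): with exponential discounting, the preference between a smaller-sooner and a larger-later reward is the same at every delay; with hyperbolic discounting, the preference reverses as the rewards get close, which is the time-inconsistency.

```python
def hyperbolic(t, k=1.0):
    # Hyperbolic discount factor: 1 / (1 + k*t)
    return 1.0 / (1.0 + k * t)

def exponential(t, gamma=0.9):
    # Exponential (geometric) discount factor: gamma^t
    return gamma ** t

def prefers_late(discount, d, small=10.0, large=15.0, gap=5):
    # True if the larger-later reward (at delay d+gap) beats the
    # smaller-sooner reward (at delay d) under this discount function.
    return large * discount(d + gap) > small * discount(d)

# Exponential: the choice is the same at every delay (time-consistent).
# Hyperbolic: far away we prefer the larger-later reward, but up close
# we switch to the smaller-sooner one (preference reversal).
for d in (0, 20):
    print(d, prefers_late(exponential, d), prefers_late(hyperbolic, d))
# 0 False False
# 20 False True
```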
related: https://reducing-suffering.org/why-organisms-feel-both-suffering-and-happiness -- this question of why there is both pain and pleasure; if we can answer this one, then we might know more about what sort of values we would expect to see "out in the multiverse".
relatedly, is there some reason that evolution made organisms with a values-vs-beliefs (probabilities) split? why not a "stranger" (to humans) split, like the examples given in abram's jeffrey-bolker rotation post? after all, in the end, all that matters is how the system behaves, not what it believes or what it values.
maybe you could argue that even in humans, the split isn't as clean as we like to imagine! e.g. consider crony beliefs/social reality: these don't track reality very well, but they still produce actions that are adaptive.