Convergent evolution of values

From Issawiki
Revision as of 08:26, 19 March 2020

(warning: this page is especially crazy)

multiplicative process => log-resource utility function (see [https://www.greaterwrong.com/posts/gptXmhJxFiEwuPN98/meetup-notes-ole-peters-on-ergodicity/comment/AyrjdFhFCueq8CtdJ my comment])
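a minimal simulation sketch of the point above (the 0.6/1.5 coin-flip numbers are illustrative, not from the linked comment): in a multiplicative process, expected wealth can grow each step while the time-average growth rate — what a single long-lived agent actually experiences — is negative. the time-average growth rate equals the expected log of the per-step factor, which is why maximizing long-run wealth looks like having a log-resource utility function.

```python
import math
import random

def time_average_growth(mult_low=0.6, mult_high=1.5, steps=100_000, seed=0):
    """Simulate a multiplicative process: each step, wealth is multiplied
    by mult_high or mult_low with equal probability (a coin-flip bet).
    Returns the realized per-step growth rate of log-wealth."""
    rng = random.Random(seed)
    log_wealth = 0.0
    for _ in range(steps):
        factor = mult_high if rng.random() < 0.5 else mult_low
        log_wealth += math.log(factor)
    return log_wealth / steps

# Expected wealth per step is 0.5*(0.6 + 1.5) = 1.05 > 1 (grows on average),
# but the time-average growth rate converges to E[log factor] = 0.5*ln(0.9) < 0,
# so almost every individual trajectory goes broke. An agent that cares about
# its own long-run wealth should therefore maximize expected log-wealth.
```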

i think there can be lots of similar, robust phenomena like this, for evolved organisms. so in terms of six plausible meta-ethical alternatives, there can be some features that are just "baked in" to many diverse kinds of organisms (like getting bored of a single kind of resource!).

"How convergent is human-style compassion for the suffering of others, including other species? Is this an incidental spandrel of human evolution, due to mirror neurons and long infant-development durations requiring lots of parental nurturing? Or will most high-functioning, reciprocally trading civilizations show a similar trend?" https://foundational-research.org/open-research-questions/#Aliens

see also paul's post about aliens.

it seems notable that in "three worlds collide", all three civilizations were more or less "civilized" by our standards!

truth-seeking seems like a convergent value for highly intelligent/advanced organisms.

how could convergence arise?

  1. causal processes that convergently produce the same ethical code
  2. some sort of acausal law / moral code written deeply in logic somewhere (like The Hour I First Believed)

related: i've heard people saying how 3d organisms are the most likely since they allow for eating/digestion or whatever, and how this isn't easy to do in 2d or 4d or whatever (i forgot what the reasoning is). see https://en.wikipedia.org/wiki/Anthropic_principle#Dimensions_of_spacetime (i haven't re-read that page)

i've been wondering: must organisms be physical? or can they be more like... just "algorithms" that are implemented in this really weird substrate (that doesn't involve each organism having its own body that moves around and so forth). could such a thing evolve in the "natural" world?

how common is it to have a "physical world" where creatures are running around, like on earth/our physics? versus like, just raw observer-moments being computed without reference to a physical world.

are there any "simple" ways to produce intelligence that don't involve evolution? e.g. if we found some intelligent alien species, would they _have_ to have arisen from evolutionary processes?


structure of discounting in the world: e.g. humans seem to use hyperbolic discounting, which is not consistent across time. is this a fairly stable feature of organisms across different evolutionary histories?

whereas agents in reinforcement learning typically use exponential (a.k.a. geometric) discounting, which is not the same as hyperbolic discounting: exponential discounting is time-consistent (the relative ranking of two future rewards never changes as time passes), while hyperbolic discounting is not.
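a toy sketch of the contrast (the reward amounts, k, and gamma are made-up illustrative parameters): under hyperbolic discounting, a smaller-sooner reward beats a larger-later one when both are near, but the preference reverses once both recede into the future; under exponential discounting the ranking is the same from every vantage point.

```python
def hyperbolic(amount, delay, k=1.0):
    # Hyperbolic discounting: value falls off as 1/(1 + k*delay).
    return amount / (1.0 + k * delay)

def exponential(amount, delay, gamma=0.9):
    # Exponential (geometric) discounting, the standard form in RL.
    return amount * gamma ** delay

# Choice: 10 units at delay t vs. 15 units at delay t+5, for t=0 and t=20.
print(hyperbolic(10, 0) > hyperbolic(15, 5))      # True: take the 10 now
print(hyperbolic(10, 20) > hyperbolic(15, 25))    # False: preference reverses
print(exponential(10, 0) > exponential(15, 5))    # True
print(exponential(10, 20) > exponential(15, 25))  # True: ranking never flips
```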

if we can nail down what kind of discounting is likely to evolve/be most common in the multiverse, that should tell us something about what values are/whether values have convergent evolution.

External links