Evolution
Revision as of 23:43, 19 May 2021
In discussions of AI risk, evolution is often used as an analogy and source of insights for the development of AGI. Here are some examples:
- chimpanzee vs human intelligence
- Changing selection pressures argument
- see also "hominid evolution" in Secret_sauce_for_intelligence#Evidence
- optimization daemons/inner optimizers/mesa-optimization: organisms (including humans) were optimized for inclusive genetic fitness, but humans have come to value many different things (happiness, taste of food, love, curiosity, etc.), and these values can "come apart" from fitness in new environments (e.g. humans eating candy, which is pleasurable but not adaptive)
- the plausibility of AGI
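The mesa-optimization point above can be illustrated with a toy sketch (hypothetical, not from this article): an agent selected on a base objective in one environment can end up pursuing a correlated proxy, and the two come apart under distribution shift. All names here (`base_objective`, `mesa_objective`, the food items) are invented for illustration; the analogy is humans evolving to seek sweetness (a proxy for calories) and later eating candy.

```python
def base_objective(food):
    """Calories actually obtained (stand-in for inclusive genetic fitness)."""
    return food["calories"]

def mesa_objective(food):
    """The learned proxy: seek sweetness, which correlated with calories
    in the ancestral environment."""
    return food["sweetness"]

# Ancestral environment: sweetness and calories are correlated.
ancestral = [
    {"name": "fruit",  "sweetness": 8, "calories": 80},
    {"name": "leaves", "sweetness": 1, "calories": 10},
]

# Modern environment: a new item breaks the correlation
# (strong sweetness signal, no calories).
modern = ancestral + [
    {"name": "diet candy", "sweetness": 10, "calories": 0},
]

def choose(foods, objective):
    """Pick the food that maximizes the given objective."""
    return max(foods, key=objective)

# In the ancestral environment the proxy tracks the base objective:
assert choose(ancestral, mesa_objective) == choose(ancestral, base_objective)

# After distribution shift the two objectives come apart:
print(choose(modern, mesa_objective)["name"])  # diet candy
print(choose(modern, base_objective)["name"])  # fruit
```

The proxy was a perfectly good optimization target in the environment it was selected in; the divergence only shows up once the environment changes, which is the core of the "come apart" observation.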