Evolution
In discussions of AI risk, evolution (especially hominid evolution, since it is the only known process to have produced fully general human-level intelligence) is often used as an analogy or source of insights for the development of AGI. Here are some examples:
- chimpanzee vs human intelligence
  - Changing selection pressures argument
  - see also "hominid evolution" in Secret_sauce_for_intelligence#Evidence
  - Missing_gear_for_intelligence#Evidence
- optimization daemons/inner optimizers/mesa-optimization: organisms (including humans) are optimized for inclusive genetic fitness, but humans have come to value many different things (happiness, the taste of food, love, curiosity, etc.), and these values can "come apart" from fitness (e.g. humans eating candy); a toy sketch of this appears after the list
- the plausibility of AGI
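A minimal toy sketch (not part of the original page) of the mesa-optimization point above: an outer search selects agents by a base objective (calories obtained), but the agents choose food using a learned proxy (sweetness). When the environment changes so that sweetness and calories come apart, the proxy-driven behaviour no longer serves the base objective. The "ancestral"/"modern" environments, the sweetness proxy, and all names here are illustrative assumptions, not anything from the literature.

```python
import random

# Each "food" is (sweetness, calories). In the ancestral environment the two
# are tightly correlated, so preferring sweetness is a good proxy for calories.
def ancestral_foods():
    return [(s, s + random.uniform(-0.1, 0.1))
            for s in (random.random() for _ in range(5))]

# The agent's policy: pick the food that maximizes its proxy objective
# (weight * sweetness), not calories directly.
def choose(weight, foods):
    return max(foods, key=lambda f: weight * f[0])

# Base objective used by the outer loop: average calories actually obtained.
def fitness(weight, trials=200):
    return sum(choose(weight, ancestral_foods())[1] for _ in range(trials)) / trials

# Outer loop: crude hill-climbing "evolution" over the sweetness weight.
weight = 0.0
for _ in range(50):
    candidate = weight + random.uniform(-0.5, 0.5)
    if fitness(candidate) > fitness(weight):
        weight = candidate

# Modern environment: candy is maximally sweet but has few calories,
# so the evolved proxy now picks against the base objective.
modern_foods = [(1.0, 0.1),   # candy
                (0.3, 0.9)]   # bland but nutritious food
print("evolved sweetness weight:", round(weight, 2))
print("agent picks (sweetness, calories):", choose(weight, modern_foods))
```

Under these assumptions the outer loop reliably evolves a positive sweetness weight (the proxy works in training), and the final line shows the agent picking candy in the modern environment, even though the base objective would favor the nutritious food.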