Human safety problem

A human safety problem is the counterpart, in humans, of an AI safety problem. For instance, distributional shift is an AI safety problem in which an AI trained in one environment behaves poorly when deployed in an unfamiliar environment (for example, a cleaning robot trained in an office may behave dangerously when used in a factory).[1] The counterpart in humans could be a human encountering an unfamiliar situation that they haven't learned to deal with from experience, or, more broadly, the modern world in general relative to the human environment of evolutionary adaptedness.
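For readers who want a concrete picture of the AI-side problem, here is a minimal sketch (not from this article, using an illustrative toy function and input ranges) of distributional shift: a model fit on inputs from one range has low error there but large error on inputs drawn from a shifted range, even though the model itself is unchanged.

<pre>
# Minimal sketch of distributional shift (illustrative assumptions: sin(x) as
# the "true" environment, and the particular training/deployment input ranges).
import numpy as np

rng = np.random.default_rng(0)

def true_function(x):
    # The "world": approximately linear near 0, but curves away for larger x.
    return np.sin(x)

# Training distribution: inputs near 0, where sin(x) is roughly linear.
x_train = rng.uniform(-0.5, 0.5, size=1000)
y_train = true_function(x_train)

# Fit a linear model y = a*x + b by least squares.
A = np.stack([x_train, np.ones_like(x_train)], axis=1)
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict(x):
    return coef[0] * x + coef[1]

def mse(x):
    return np.mean((predict(x) - true_function(x)) ** 2)

# In-distribution error is tiny; under a shifted input distribution the same
# model is badly wrong, because it was only ever "safe" on a narrow range of inputs.
x_shifted = rng.uniform(2.0, 3.0, size=1000)
print("train-distribution MSE:", mse(x_train))      # small
print("shifted-distribution MSE:", mse(x_shifted))  # much larger
</pre>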

History

Potential issues if human safety problems are not addressed

Notes

The most canonical-seeming post is https://www.greaterwrong.com/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas

e.g. "Think of the human as a really badly designed AI with a convoluted architecture that nobody understands, spaghetti code, full of security holes, has no idea what its terminal values are and is really confused even about its "interim" values, has all kinds of potential safety problems like not being robust to distributional shifts, and is only "safe" in the sense of having passed certain tests for a very narrow distribution of inputs." [1]

https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety

https://www.alignmentforum.org/posts/HBGd34LKvXM9TxvNf/new-safety-research-agenda-scalable-agent-alignment-via#2gcfd3PN8GGqyuuHF

https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety