Unintended consequences of AI safety advocacy argument against AI safety

I first heard this argument from [[Michael Nielsen]]: AI safety advocacy has increased the amount of work done on [[AI capabilities]] without actually increasing the safety of AI systems. In other words, it is a kind of [[differential progress]] argument, where advocacy about safety has the unintended consequence of speeding up capabilities relative to safety.<ref>https://twitter.com/michael_nielsen/status/1350549515762167808 "Curious: have any interventions yet clearly reduced AI risk?" "Afaict talking a lot about AI risk has clearly increased it quite a bit (many of the most talented people I know working on actual AI were influenced to by Bostrom.)" "Many of the most talented people I know working on building AGI seem to have gotten interested in part due to AI safety arguments. This seems likely (a) to have meaningfully accelerated progress toward AGI; but AFAICT (b) has done little to make such systems safer."</ref>
[[Buck]] also noted in an EA Forum post that some people think MIRI screwed up by posting weird things on the internet in its early days.
There was also the [[Dario]] quote on Facebook.
  
 
==See also==
 
==References==
<references/>