Goalpost for usefulness of HRAD work

When thinking about the question of "How useful is HRAD work?", what standards/goalposts should we use? There's a pattern I see where:

  • people advocating HRAD research bring up historical cases like Turing, Shannon, etc. where formalization worked well
  • people arguing against HRAD research talk about how "complete axiomatic descriptions" haven't been useful so far in AI, and how they aren't used to describe machine learning systems

It seems like there's a question of what the relevant goalpost is for deciding whether HRAD work is useful:

  • will early advanced AI systems be understandable in terms of HRAD's formalisms? [1]
    • lack of historical precedent for applying "complete axiomatic descriptions of AI systems" to help design AI systems [2]
    • lack of success so far in using complete axiomatic descriptions for modern ML systems [3]
    • what will early advanced AI systems look like?
  • how convincing the historical examples are (e.g. Shannon, Turing, Bayes, Pearl, Kolmogorov, null-terminated strings in C [4] [5] [6]; Eliezer also brings up the Shannon vs. Poe chess example). See also selection effect for successful formalizations.

See also