Difference between revisions of "Summary of my beliefs"

From Issawiki
Revision as of 06:17, 10 November 2020

This is my version of the table created by Brian Tomasik[1] and Pablo Stafforini.[2] Like Pablo, I operationalize confidence as my estimate of how likely I am to change my mind about the belief if I spent more time thinking about the topic.

{| class="sortable wikitable"
! Belief !! Probability !! Confidence !! Comments
|-
| Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) || || ||
|-
| Eternalism on philosophy of time || || ||
|-
| Cognitive closure of some philosophical problems || || ||
|-
| By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 || || ||
|-
| Human-controlled AGI in expectation would result in less suffering than uncontrolled || || ||
|-
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) || || ||
|-
| Soft AGI takeoff || || ||
|-
| "Abstract objects: Platonism or nominalism?" Answer: nominalism || || ||
|-
| A government will build the first human-level AGI, assuming humans build one at all || || ||
|-
| Moral anti-realism || || ||
|-
| The universe/multiverse is finite || || ||
|-
| Human-inspired colonization of space will cause more suffering than it prevents if it happens || || ||
|-
| Faster technological innovation increases net suffering in the far future || || ||
|-
| By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past || || ||
|-
| Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it || || ||
|-
| Crop cultivation prevents net suffering || || ||
|-
| Compatibilism on free will || || ||
|-
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing || || ||
|-
| The effective-altruism movement, all things considered, reduces rather than increases total suffering in the far future (not counting happiness) || || ||
|-
| "Aesthetic value: objective or subjective?" Answer: subjective || || ||
|-
| Type-A physicalism regarding consciousness || || ||
|-
| Climate change will cause more suffering than it prevents || || ||
|-
| Whole brain emulation will come before de novo AGI, assuming both are possible to build || || ||
|-
| The Foundational Research Institute reduces net suffering in the far future || || ||
|-
| At bottom, physics is discrete/digital rather than continuous || || ||
|-
| Modal realism || || ||
|-
| Humans will go extinct within millions of years for some reason other than AGI || || ||
|-
| Faster economic growth will cause net suffering in the far future || || ||
|-
| Many-worlds interpretation of quantum mechanics (or close kin) || || ||
|-
| A design very close to CEV will be implemented in humanity's AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) || || ||
|-
| The Machine Intelligence Research Institute reduces net suffering in the far future || || ||
|-
| A full world government will develop before human-level AGI || || ||
|-
| Rare Earth explanation of Fermi Paradox || || ||
|-
| Artificial general intelligence (AGI) is possible in principle || || ||
|-
| Electing more liberal politicians reduces net suffering in the far future || || ||
|-
| Earth will eventually be controlled by a singleton of some sort || || ||
|}

References

1. https://reducing-suffering.org/summary-beliefs-values-big-questions/
2. http://www.stafforini.com/blog/my-beliefs-updated/