Summary of my beliefs


This is my version of the table created by Brian Tomasik[1] and Pablo Stafforini.[2]

Belief (probability and confidence values omitted)
"Aesthetic value: objective or subjective?" Answer: subjective
"Abstract objects: Platonism or nominalism?" Answer: nominalism
Compatibilism on free will
Moral anti-realism
Artificial general intelligence (AGI) is possible in principle
Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it
Human-inspired colonization of space will cause more suffering than it prevents if it happens
Earth will eventually be controlled by a singleton of some sort
Soft AGI takeoff
Eternalism on philosophy of time
Type-A physicalism regarding consciousness
Rare Earth explanation of Fermi Paradox
By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015
A government will build the first human-level AGI, assuming humans build one at all
By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past
The Foundational Research Institute reduces net suffering in the far future
The Machine Intelligence Research Institute reduces net suffering in the far future
Electing more liberal politicians reduces net suffering in the far future
Human-controlled AGI in expectation would result in less suffering than uncontrolled
Climate change will cause more suffering than it prevents
The effective-altruism movement, all things considered, reduces rather than increases total suffering in the far future (not counting happiness)
Cognitive closure of some philosophical problems
Faster technological innovation increases net suffering in the far future
Crop cultivation prevents net suffering
Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.)
Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments)
Faster economic growth will cause net suffering in the far future
Modal realism
Many-worlds interpretation of quantum mechanics (or close kin)
At bottom, physics is discrete/digital rather than continuous
The universe/multiverse is finite
Whole brain emulation will come before de novo AGI, assuming both are possible to build
A full world government will develop before human-level AGI
Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing
Humans will go extinct within millions of years for some reason other than AGI
A design very close to coherent extrapolated volition (CEV) will be implemented in humanity's AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals)

References

1. https://reducing-suffering.org/summary-beliefs-values-big-questions/
2. http:...