This is my version of the table created by [[Brian Tomasik]]<ref>https://reducing-suffering.org/summary-beliefs-values-big-questions/</ref> and [[Pablo Stafforini]].<ref>http://www.stafforini.com/blog/my-beliefs-updated/</ref> Following Pablo, I operationalize confidence as my estimate of the probability that I would change my mind if I spent more time thinking about the topic.
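
The "Probability after social update" column records my probability after adjusting the inside view for the opinions of others. As an illustrative sketch only (the table does not specify a formula, and the weight <math>w</math> here is hypothetical), one simple way to perform such an update is a weighted average in log-odds space:

<math>\operatorname{logit}(p_{\text{social}}) = w \,\operatorname{logit}(p_{\text{inside}}) + (1 - w)\,\operatorname{logit}(p_{\text{peers}}), \qquad \operatorname{logit}(p) = \ln\frac{p}{1 - p}.</math>

For example, with <math>p_{\text{inside}} = 0.8</math>, <math>p_{\text{peers}} = 0.5</math>, and <math>w = 0.5</math>, the updated probability is <math>2/3 \approx 0.67</math>.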
  
 
{| class="sortable wikitable"
! Belief !! Inside view probability !! Confidence !! Probability after social update !! Comments
|-
| Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.)
|-
| Eternalism on philosophy of time
|-
| Cognitive closure of some philosophical problems
|-
| By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015
|-
| Human-controlled AGI in expectation would result in less suffering than uncontrolled
|-
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments)
|-
| Soft AGI takeoff
|-
| "Abstract objects: Platonism or nominalism?" Answer: nominalism
|-
| A government will build the first human-level AGI, assuming humans build one at all
|-
| Moral anti-realism
|-
| The universe/multiverse is finite
|-
| Human-inspired colonization of space will cause more suffering than it prevents if it happens
|-
| Faster technological innovation increases net suffering in the far future
|-
| By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past
|-
| Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it
|-
| Crop cultivation prevents net suffering
|-
| Compatibilism on free will
|-
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing
|-
| The effective-altruism movement, all things considered, reduces rather than increases total suffering in the far future (not counting happiness)
|-
| "Aesthetic value: objective or subjective?" Answer: subjective
|-
| Type-A physicalism regarding consciousness
|-
| Climate change will cause more suffering than it prevents
|-
| Whole brain emulation will come before de novo AGI, assuming both are possible to build
|-
| The Foundational Research Institute reduces net suffering in the far future
|-
| At bottom, physics is discrete/digital rather than continuous
|-
| Modal realism
|-
| Humans will go extinct within millions of years for some reason other than AGI
|-
| Faster economic growth will cause net suffering in the far future
|-
| Many-worlds interpretation of quantum mechanics (or close kin)
|-
| A design very close to CEV will be implemented in humanity's AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals)
|-
| The Machine Intelligence Research Institute reduces net suffering in the far future
|-
| A full world government will develop before human-level AGI
|-
| Rare Earth explanation of Fermi Paradox
|-
| Artificial general intelligence (AGI) is possible in principle
|-
| Electing more liberal politicians reduces net suffering in the far future
|-
| Earth will eventually be controlled by a singleton of some sort
|}

==References==
<references/>
