The Precipice notes
* Early on in the book, Toby says that you can't buy super crappy versions of things in developed countries; I forget his exact reasoning, but it was something like they're so crappy they've been priced out of the market. I think this could be true for some products, but I'm suspicious it's true for many/most of them. Crappy housing, for example, seems impossible to obtain for super cheap, and I think that has more to do with regulations preventing super crappy housing. I guess this isn't really a question.
- "If we are the only moral agents that will ever arise in our universe—the only beings capable of making choices on the grounds of what is right and wrong—then responsibility for the history of the universe is entirely on us. This is the only chance ever to shape the universe toward what is right, what is just, what is best for all. If we fail, then the potential not just of humanity, but of all moral action, will have been irrevocably squandered." -- it sounds like Toby is still a moral realist. Does this mean that he would expect aliens to also discover moral truths, so that as long as some civilization spreads to the stars, it's ok if humans go extinct (for non-AI extinctions)?
- "The theory of how to make decisions when we are uncertain about the moral value of outcomes was almost completely neglected in moral philosophy until very recently—despite the fact that it is precisely our uncertainty about moral matters that leads people to ask for moral advice and, indeed, to do research on moral philosophy at all." -- i think the key phrase here is "in moral philosophy"; moral uncertainty seems to have been discussed extensively even before the references that Ord cites. i don't like this kind of writing, where you say "neglected in X" and you don't emphasize the restriction to X, so the reader might think "neglected in X because X is the only place to discuss this thing", i.e. that the thing has been neglected everywhere.
- "The international body responsible for the continued prohibition of bioweapons (the Biological Weapons Convention) has an annual budget of just $1.4 million—less than the average McDonald’s restaurant.54" The footnote says: "The 2019 budget was $1.4 million (BWC ISU, 2019). Between 2016 and 2018, McDonald’s company-operated restaurants incurred an average of $2.8 million expenses per restaurant per year (McDonald’s Corporation, 2018, pp. 14, 20). The company does not report costs for its franchised restaurants." -- but it looks like the vast majority of McDonald's restaurants are franchised (see [1]; i see numbers from 75% to 90% from checking a few sources). So there are potentially weird selection effects, if e.g. the average franchised restaurant is run much worse than the average company-owned restaurant.
* So... the book doesn't cite Eliezer at all, even for AI risk stuff, and I was like "huh, that's odd". Well, then in the "Our Neglect of Existential Risks" section, Ord starts talking about how things like the availability heuristic and scope neglect mean that humans are really bad at caring about x-risks. And I was like, "sounds pretty familiar", and yes, of course there's an [https://intelligence.org/files/CognitiveBiases.pdf entire paper] that Eliezer wrote about this. I'm not sure what's going on here.
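A quick back-of-the-envelope sketch of the selection-effect worry in the McDonald's bullet above: the footnote's $2.8 million figure covers only company-operated restaurants, so the all-restaurant average depends on what the franchised majority (75–90% of locations) actually spend, which McDonald's doesn't report. Only the $1.4M and $2.8M figures come from the book; the franchised-expense levels below are pure guesses for illustration.

<syntaxhighlight lang="python">
# Back-of-the-envelope: how the all-restaurant average expense shifts under
# different guesses for franchised restaurants. Only BWC_BUDGET and
# COMPANY_OPERATED come from the book's footnote; the franchise shares are
# the 75-90% range mentioned above, and the franchised expense levels are
# hypothetical.
BWC_BUDGET = 1.4e6        # BWC annual budget (from the book)
COMPANY_OPERATED = 2.8e6  # avg annual expenses, company-operated restaurants

for franchise_share in (0.75, 0.90):
    for franchised_expenses in (0.5e6, 1.0e6, 2.0e6):  # made-up guesses
        # Weighted average over all restaurants
        avg = (franchise_share * franchised_expenses
               + (1 - franchise_share) * COMPANY_OPERATED)
        side = "above" if avg > BWC_BUDGET else "below"
        print(f"franchised share {franchise_share:.0%}, "
              f"franchised expenses ${franchised_expenses / 1e6:.1f}M: "
              f"all-restaurant avg ${avg / 1e6:.2f}M ({side} the BWC budget)")
</syntaxhighlight>

Under some of these made-up assumptions (e.g. 90% franchised at $1M each, giving an average of about $1.18M), the all-restaurant average falls below the BWC's $1.4M budget, which is exactly the sense in which the book's comparison leans on the company-operated subsample.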