Philosoraptor [he/him, comrade/them]

  • 13 Posts
  • 251 Comments
Joined 5 years ago
Cake day: August 3rd, 2020

  • My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”

    That’s fair enough as far as it goes, but I think you’re in the minority in being explicit about that. It’s also important to be really precise here: the claim this kind of reasoning lets you defend isn’t “this is more probable than you think” but rather “if you examine your beliefs carefully, you’ll see that you actually think this is more plausible than you realized.” That’s a very important distinction. It’s fine (good, even) to help people try to sort out their own subjective probabilities in a more systematic way, but we should be careful to remember that that’s what’s going on here, not an objective assessment of probability.

    I think many (most) Rationalists and x-risk people elide that distinction, and either make it sound like (or themselves believe) they’re putting a real, objective numerical probability on these kinds of events. As I said, that’s not something you can do without rigorously derived and justified priors, and we simply don’t have those for things like this. It’s easy to delude yourself, or to give the wrong impression, when you’re using the Bayesian framework in a way that looks objective but are pulling the numbers for your priors out of thin air.
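
    To make that concrete, here’s a quick back-of-the-envelope sketch (Python, with made-up numbers, not anyone’s actual estimates) of how much the “objective-looking” posterior swings around depending on which prior you pulled out of thin air, even when the evidence (the likelihood ratio) is held fixed:

    ```python
    def posterior(prior, likelihood_ratio):
        """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Same "evidence" each time (a likelihood ratio of 10 in favor of the hypothesis),
    # different gut-feeling priors:
    for prior in (0.001, 0.01, 0.1, 0.3):
        print(f"prior {prior:>5} -> posterior {posterior(prior, 10):.3f}")

    # prior 0.001 -> posterior 0.010
    # prior  0.01 -> posterior 0.092
    # prior   0.1 -> posterior 0.526
    # prior   0.3 -> posterior 0.811
    ```

    Same evidence every time, and the output lands anywhere from about 1% to over 80% depending purely on the prior you started with. The math is airtight; the number that comes out is only as objective as the number you fed in.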