Discussion about this post

Matt Reardon

Premise zero should be "ASI defined" and premise one should be "ASI possible." Most dismissive views seem to imagine ASI as something very different from what you have in mind, and further they don't believe it is possible at all in some sense, though this is easy to conflate with skepticism about "ASI soon."

Mark

I like that you've not neglected gradual disempowerment! The situation seems almost like this: Okay, we have AGI/ASI. Solved alignment? No? AI takeover and extinction. Solved alignment, yes? Is ASI under the control of a single guy or company? "Human takeover," i.e. extreme power concentration and value lock-in. No? Okay, well eventually: gradual disempowerment. Existential risks on all sides. :(

It seems like the most robust way of reducing existential risk (not just extinction risk) is to enforce a Pause / Moratorium (perhaps backed up by deterrence, cf. Hendrycks' MAIM) while humanity works hard at solving both the alignment problem and the governance/coordination problems (philosophers like yourself, as well as political scientists, economists, etc., are much needed here!). If we only solve the former but not the latter, we expose humanity to various existential risks. And a Pause looks necessary to solve those two sets of problems, especially if AGI/ASI is on the horizon. So in short, preventing existential risks (understood as threats to our long-term global flourishing, of which survival is one part) as a whole requires a Pause.

And it seems undeniable (unless one holds a supremely arrogant view of humanity's abilities) that a safe, aligned AGI would be far more likely to solve the problems of philosophy than human philosophers (ctrl-f-replace with science and scientists and this also makes sense; indeed, it's what animates Demis Hassabis). So philosophers, whether they prioritize global flourishing or just philosophy alone, should prioritize AI safety first.

