Discussion about this post

Elliott Thornley:

Often I find that philosophers' dismissals of AI risk are driven by a sort of fatalism, and that they can sometimes be swayed by making a quick case for tractability along the following lines. With AI risk, the dangerous technology doesn't exist yet (in contrast to nukes), we can shape its features (in contrast to pandemics and asteroids), and the barriers to causing harm are high (in contrast to engineered pandemics and climate change). To make a difference, we just need to persuade a small number of (admittedly very powerful and motivated) people to do things a little bit differently.

And it helps to note that reducing AI risk is especially tractable *for philosophers*. To address nuclear, pandemic, asteroid, or climate risks, you have to learn a new field and your philosophical skills are approximately useless. With AI risks, that's not true. Philosophical skills are useful. And since Claude Code can now design and run ML experiments for you, you don't even have to learn much of a new field.

Matt Reardon:

Premise zero should be "ASI defined" and premise one should be "ASI possible." Most dismissive views seem to imagine ASI as something very different from what you describe, and furthermore they don't believe it is possible at all in some sense, though this is easy to conflate with "ASI soon."

23 more comments...
