Discussion about this post

Ghatanathoah

Why does Srinivasan use the expected value of being an anticapitalist revolutionary as an example of something that is hard to quantify? There have been anticapitalist revolutionaries around for more than a century now, and they have enough of a track record to establish that their expected marginal value is massively negative. Becoming an anticapitalist revolutionary is a rational thing to do if you want to maximize death and suffering. If EA philosophy stops people from becoming anticapitalist revolutionaries, then it's already made the world a better place, even if they don't go on to do any good at all.

JerL

Others have said similar things, but to add my two cents:

As a preface: I am sympathetic to, and probably count as, an EA, so I am not really the kind of person you are addressing, but I can think of a few things:

First, you really might disagree with some of the core ideas: you may be a deontologist, so that some proposed EA interventions, though positive in expectation, are still impermissible (e.g. a "charity" that harvests organs from unwilling homeless people and donates them to orphans is bad, no matter how compelling your EV calculation). Or, as Michael St. Jules points out, on longtermism you might reject any number of the supporting propositions.

Second: Agreement with the core ideas doesn't imply all that much; you say to Michael that you are only interested in defending longtermism as meaning "the far future merits being an important priority"; but this is hardly distinctive to EA! If EA just means, "we should try to think carefully about what it means to do good", then almost any program for improving the world will endorse some version of that! What makes EA distinctive isn't the versions of its claims that are most broadly acceptable!

You can agree in principle with "core" EA ideas but think there is some methodological flaw, or a particular set of analytical blinders in the EA community, such that the EA version of those ideas is hopelessly flawed. This is entangled with my third point:

Third: So, if you agree with the EA basics, but you think EA is making a big mistake in how it interprets/uses/understands those basics, why not try to get on board to improve the program? Either because those misunderstandings/methodologies/viewpoints are so central to EA that it makes more sense to just start again fresh, or because EA as an actual social movement is too resistant to hearing such critiques.

Like, take the revolutionary communist example from the other end: lots of people (even many EAs) would agree to core communist principles like "Material abundance should be shared broadly", and revolutionary ideas like "We shouldn't stick to a broken status quo just because it would take violence to reach a better world"--and there is a sense in which you can start as a revolutionary communist, and ultimately talk yourself into a completely different viewpoint that still takes those ideas as fundamental but otherwise looks nothing like revolutionary communism (indeed, I think this is a journey many left-leaning teenagers go through, and it wouldn't even surprise me if some of them end up at something like EA).

But I don't think people who don't start from the point of view of communism should feel obliged to present their critiques as ways of improving the doctrine of revolutionary communism. This is for both philosophical reasons (there is too much bad philosophy in there that takes a long time to clear out, better to present your ideas as a separate system on their own merits) and social ones (the actual people who spend all their time thinking about revolutionary communism aren't the kind of people you can have productive discussions with about this sort of thing).

Obviously that's an unfair comparison to EA, but people below have pointed out that EA-the-movement is at least a little bit cult-y, and has had a few high-profile misfires of people applying its ideas. I personally think its successes more than outweigh the failures, but I think it's fair for someone to disagree.

Finally, I'd like to try to steelman the "become an anticapitalist revolutionary" point of view. Basically, the point here is that "thinking on the margin" often blinds one to coordination problems--perhaps we could get the most expected value if a sufficiently large number of people became anticapitalist revolutionaries, but below some large threshold there is no value at all--so the marginal benefit of becoming a revolutionary is negligible, yet it may still be the case that we would wish to coordinate on that action if we could. This is (I think) what Srinivasan is getting at: the value of being a revolutionary is conditional on lots of other people being revolutionaries as well. It's not impossible to fit this sort of thinking into an EA-type framework, but I think it's a lot more convoluted and complicated. But I don't think we should rule it out as a theory of doing good, or of prioritizing how to do good, even if I don't find that particular example very compelling.
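The threshold dynamic described above can be made concrete with a toy model (all numbers here are hypothetical, chosen only to illustrate the shape of the argument, not drawn from Srinivasan or anyone else):

```python
# Toy threshold collective-action model: the cause produces value V only
# if at least T people participate; each participant pays a cost c.
# All parameters are illustrative placeholders.

def total_value(n, T=1_000_000, V=1e9, c=1.0):
    """Net value to the world if n people participate."""
    benefit = V if n >= T else 0.0
    return benefit - n * c

def marginal_value(n, **kw):
    """Extra value from one more participant, given n already participating."""
    return total_value(n + 1, **kw) - total_value(n, **kw)

# Far below the threshold, the marginal participant only adds cost,
# so a marginalist EV calculation says "don't join":
print(marginal_value(10))        # negative

# But if T people could coordinate, the aggregate action is worth it:
print(total_value(1_000_000))    # large and positive
```

The point of the sketch is that evaluating `marginal_value` at the status quo (a handful of revolutionaries) and evaluating `total_value` at the coordinated outcome give opposite recommendations, which is exactly why marginal thinking can obscure coordination problems.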

