I had a fun chat with Bentham's Bulldog this morning about the mysterious phenomenon of people objecting to effective altruism. I was a bit tired and rambly at times, but fortunately BB was on point—he even channeled willpower satisficing against me at one point!
At the end we got a question from the audience about whether (some? most? all?) EAs have been shown to have bad practical judgment. We both disagreed with the questioner overall; but in any case, unless you’re planning to blindly defer to EAs’ practical views, the question doesn’t seem especially worth focusing on. As I wrote in ‘Why Not Effective Altruism?’:
Sometimes critics take themselves to be attacking a narrower target: “EA as it actually exists”, or some such. But these critics tend to be ignorant of the actually-existing diversity of opinion and approaches within EA (it’s a big tent!), and have a tendency to straw-man their targets. To address the narrow criticisms, one would need to get into the sociology of EA, and assess the extent to which critics’ characterizations accurately describe their targets. Such sociological questions are of limited philosophical interest. From the perspective of first-personal moral deliberation about how to live our lives, the important question is not whether we should blindly defer to actual EAs (of course we shouldn’t), but whether there is some version of the EA project that is worth pursuing. So that is the question that this paper will address.
My sense is that a lot of the critics are essentially focused on trying to lower the status of the EA brand rather than engaging in serious moral inquiry.1 Proponents of EA, by contrast, tend to focus less on the brand; their nominal aim is to work out (and then pursue) what’s most worth doing, and to encourage others to do likewise.2 I personally think the EA brand/movement has significant instrumental value for coordination and for motivating people, so I’d prefer to see its status raised and will happily defend its track record (as a whole, not in every instance). (See theses 32-39 of ‘What “Effective Altruism” Means to Me’.) And insofar as others associate me with EA, I’d of course prefer they had positive rather than negative associations. But I really do think all that social jockeying ought to be secondary to the question of what’s worth doing—and that EA principles are very helpful for answering that question and yet remain unduly neglected in practice.
A point I keep coming back to: ask whether the world would be better off with more or less concern for effectiveness and impartial altruism on the margin, and the answer is obviously “more”.3 Critics’ refusal either to acknowledge this important point or to dispute it head-on really bothers me. “I associate your brand with tech bros” is not a good reason to let children die of malaria! Folks who don’t like the EA brand should find another way to promote the principles underlying effective altruism.
Related posts (on why effectiveness and impartial altruism, respectively, are both controversial):
Trade-off Denialism
One of the things I find most annoying is when people—especially those who should know better—refuse to acknowledge or grapple with the reality of tradeoffs. (Silas has a neat post on this, and why this psychological tendency may lead some people to feel hostile towards Effective Altruism.)
Doing Good Effectively is Unusual
tl;dr: It actually seems pretty rare for people to care about the general good as such (i.e., optimizing cause-agnostic impartial well-being), as we can see by their hasty dismissals of EA concern for non-standard beneficiaries.
Some seemed positively gleeful when they heard about sexual harassment scandals in the community, for example. “More ammunition!”
If you do good effectively without adopting the “EA” label, that’s fine!
Unless the point of the “EAs have bad judgment” argument is that the critic believes the link is causal: that adopting more impartial concern and effectiveness-focus would make other people have worse judgment too? I’d like to see such an argument developed at greater length.
I explain in Good Judgment with Numbers what I think (very roughly speaking) the ideal decision procedure looks like. I’d like to hear critics provide their preferred alternative, along with some reasoning for why they think it would do better. So far we’ve only heard Leif Wenar’s, and his was demonstrably bad.