Some people are extremely hostile to Effective Altruism. I find this puzzling. At least, I find it puzzling how to interpret this most charitably. (There are obvious explanations that would reflect poorly on the hostile critics, such as “do-gooder derogation”, akin to how some omnivores hate vegans for making them look/feel bad in comparison. No doubt such motivated reasoning is part of the story.[1] But what does it feel like, from the inside, to hate on EA? What story does the critic tell about themselves, when they discourage others from doing good effectively, that doesn’t make them seem the villain?)
Others don’t seem to find it so puzzling. So I wrote this paper to explain my puzzlement. I hope it will serve as a useful introduction to the philosophical debates surrounding Effective Altruism, for any undergraduate classes that touch on this topic. (Suggestions welcome, in the comments below, for the best philosophical critique of EA to pair this with as an assigned reading.)
My paper argues that (i) EA principles are clearly good; (ii) core EA claims on controversial topics (from “earning to give” to “longtermism”) are clearly correct, even if there’s room for dispute on the margins; and (iii) we should generally affirm important moral truths, such as (i) and (ii) above, even if they’re politically inconvenient.
Moreover, the “political” critique of EA as “actually harmful, even if well-meaning” plausibly applies much more strongly to the critics themselves: by discouraging others from giving effectively, they are very likely causally responsible for immense harms (e.g. children dying from avoidable malaria). Their primary real-world effect is—very obviously!—to provide “moral cover” to the morally complacent. This should be more widely recognized as disreputable.
As the paper abstract summarizes:
Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core "beneficentric" ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism.
A Dialectical Oddity
It was a weird paper to write, since it all feels incredibly obvious: I’m not really sure how anyone could honestly disagree. (I’ll share some highlights below.) Hopefully some who do disagree will explain their thinking. Obviously it’s fine to disagree over details of implementation: how to actually go about doing good effectively. What I can’t comprehend is all the hostility to the very idea of EA. As noted in fn 7:
The degree of hostility many critics express towards EA doesn’t make sense if they agree with EA principles and simply disagree about how best to apply them. One doesn’t see these critics say, “EA is a great idea, and here’s how we could do it better.” Their disagreement seems deeper than that…
On the other hand, if it turns out that critics actually love the idea of effective altruism, and are merely suspicious of actually-existing Effective Altruists (for whatever reason), that would certainly be interesting to hear! But then why do they sound so much like they want to sink the whole project, rather than improve upon it?
On Prioritization
Perhaps the thing that most sets Effective Altruism apart is its commitment to explicit cause prioritization: considering trade-offs, and seriously trying to work out what should be our top moral priority (on current margins). I think this is a really big deal. It’s obviously a very fallible process. But between the options of (i) trying to do more good rather than less, all else equal, or (ii) not even trying, it seems pretty obvious that the former is the way to go!
The objections to this are completely daft. Consider Srinivasan:
What’s the expected marginal value of becoming an anti-capitalist revolutionary? To answer that you’d need to put a value and probability measure on achieving an unrecognizably different world—even, perhaps, on our becoming unrecognizably different sorts of people. It’s hard enough to quantify the value of a philanthropic intervention: how would we go about quantifying the consequences of radically reorganizing society?
But as I explain in fn 13, at least a rough ballpark estimate in answer to these questions would seem necessary in order to have a justified belief that becoming an anti-capitalist revolutionary is actually a good idea. If you’re truly clueless about the expected consequences of an action, it’s hard to see much reason to do it. It would seem especially indefensible to pass up saving someone’s life because you prefer to take a gamble that you don’t even think is positive in expectation.
This doesn’t necessarily require “quantification” in any strict sense: we can be guided by expected value without necessarily making explicit calculations. Relatedly, critics sometimes misattribute an unduly narrow conception of “evidence” to EAs, but obviously any real epistemic reason should count. As I summarize the dilemma faced by those who think that “systemic change” somehow constitutes a challenge to EA:
Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.
As far as I can tell, the only reason to reject the core idea of effective altruism is that you’re antecedently committed to some other project that you suspect is less good, but you don’t want to have to admit that it is less good. Better, then, for the question of effectiveness/prioritization to not even be asked. (If you think I’m being unduly cynical here, I’d love to hear how you think to reconcile these criticisms with both rationality and intellectual integrity.)
On Earning to Give
Many people seem to find the very idea of “earning to give” somehow disreputable. This is, again, completely daft:
Moral theorists may argue about precisely which directly harmful careers could, or could not, be justified by indirectly saving more lives. But these edge cases are a distraction from the core idea, much as an excessive focus on the ethics of Robin Hoodery would be a distraction when evaluating the basic case for giving more to the poor. In both cases, we can simply limit our attention to increasing one’s donations via permissible means.
Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected.
On Billionaire Philanthropy
[I]t makes sense that if billionaires exist, we should prefer that they spend their money in ways that effectively help others. And billionaires, notoriously, do exist…
There is nothing inconsistent about both (i) trying to change the system to make it more egalitarian, and (ii) until such a time as those efforts succeed, encouraging those with excessive wealth to dispose of it in better rather than worse ways.
Even if your top moral priority were to institute egalitarian reforms so that no billionaires exist in future,[2] you would probably be better served on the margin by having an extra billionaire supporting your project than an extra activist. Criticizing social reformers for seeking funding support for their reforms is absurd: it’s effectively to criticize them for being instrumentally rational. (“How dare you try to actually achieve your moral goals?”)
On Longtermism
I trust that most readers of this paper are sufficiently cosmopolitan to agree that we should not ignore the greater plight of children dying of malaria overseas, merely because they are geographically distant from us. We can—and should—intellectually appreciate that “statistical lives” are every bit as real as the ones we see before our eyes. But distance in time seems no more intrinsically significant than distance in space. So we should not be moved by appeals to strictly prioritize the more easily identifiable individuals of the “here and now”. We should want to help people, and bring about a better world, without (geographic or temporal) restriction.
The paper goes on to explain the basics of population ethics, including why we should unconditionally value good lives. I conclude:
[I]t remains an open question how to implement a concern for protecting future generations. You could accept life-affirming longtermism in principle while remaining highly uncertain about what should be done in practice. Longtermists can disagree about whether to prioritize (i) specific risk-mitigating interventions, or (ii) more general investigation into possible risks and responses, or (iii) more general societal (ethical, scientific, and economic) progress and capacity-building so that future generations can do a better job than we at tackling future problems. Maybe there are other options too. I leave open such questions of implementation. I’m merely arguing that we should all agree on the in-principle importance of the long-term future.
Conclusion
The answer to our title question, ‘Why not effective altruism?’, is that there’s no principled reason why not. We should all want to do more good rather than less, and use the best available evidence to guide our efforts. There’s plenty of room for reasonable disagreement about how best to pursue this humanitarian goal. But its in-principle desirability cannot reasonably be disputed…
Some may nonetheless argue that we can have good political reasons to bury inconvenient (or “harmful”) truths. I grant that this is possible, but I think we should have a high bar for endorsing such dishonesty. I also worry that it’s far more likely that denunciations of effective altruism function to provide “moral cover” for the morally complacent. Doing more good may not be in our self-interest, after all. But it is worth doing, nonetheless.
[1] For example, philosopher Mary Townsend seems pretty openly vicious when she writes, “It’s almost too easy to feel a certain schadenfreude at the possibility that effective altruism—and its parent philosophy, classical utilitarianism—will really, finally get the pie in the face they deserve.”
[2] Not something I personally recommend: it seems rather too likely to have negative unintended consequences.
Why does Srinivasan use the expected value of being an anti-capitalist revolutionary as an example of something that is hard to quantify? There have been anti-capitalist revolutionaries around for more than a century now, and they have enough of a track record to establish that their expected marginal value is massively negative. Becoming an anti-capitalist revolutionary is a rational thing to do if you want to maximize death and suffering. If EA philosophy stops people from becoming anti-capitalist revolutionaries, then it has already made the world a better place, even if they don't go on to do any good at all.
An interesting case: Émile Torres is among the best-known and most aggressive critics of effective altruism, yet I recall them (very admirably) helping to run a fundraiser for GiveDirectly, in fact via the GWWC website.
I really think it is worth taking seriously that the main concern is with the peculiar and sometimes troubling social scene that has sprung up around the EA idea. (And the adjacent and much more troubling rationalist social scene.)
If people let their (IMO justified) worries about the people and social dynamics bleed over a bit into their judgment of the philosophy, well, maybe that's a good heuristic if you aren't a professional philosopher.