Yeah, EA is necessary because, uh, it's gonna be hard for someone to appreciate the arts if they went blind as a child from vitamin A deficiency. That deficiency affects a full third of children globally, all for lack of a pill that costs about a dollar for a year's supply.
The Notre Dame renovation after the fire cost $760 million. At the time, Dylan Matthews wrote an article for Vox arguing that spending this money on repairing the cathedral was more or less equivalent to letting children die.
https://www.vox.com/future-perfect/390458/charity-america-effective-altruism-local
But I would ask, if you want to come up with $760 million to take from the arts and give to malaria nets, why pick Notre Dame, a sacred and beautiful site beloved by billions of people for centuries?
For example, consider the following movies from 2023 [edit: I misremembered the year of the Notre Dame fire]:
- Ant-Man and the Wasp: Quantumania
- Transformers: Rise of the Beasts
- Aquaman and the Lost Kingdom
- The Flash
Together, the budgets of these movies come to about $800 million. In the thought experiment, why not funge this other $800 million against malaria nets? These movies all sucked, and probably even people who enjoyed them in theaters would admit Notre Dame is better. The money is equally “available” for purposes of thought experiments.
If the argument is that this is investment, not discretionary spending, you can easily add up $800 million in box office for terrible movies and then say people should have donated instead of buying tickets. If the argument is that we should reallocate the marginal dollar from the arts to effective charities until it’s no longer worth it, bad movies seem like a strictly better place to start.
So I ask again, why specifically pick the thought experiment where we let Notre Dame burn?
I think utilitarians take their philosophy seriously and so are willing to bite some bullets about tradeoffs. These arguments do indeed deserve more attention.
But it seems to me there is a tendency to treat bullet-biting itself as a positive good, and to go looking for really extra-tough and chewy bullets to bite. I don’t profess to know the psychological or social factors at work, but I do see it happen a lot.
I’m not making an argument against utilitarianism here; rather, I’m pointing out a psychological error that I think utilitarians fall into, one that needlessly leads them to make unconvincing arguments that make people angry at them.
The link appears paywalled now, so I can't double-check it, but my recollection from the time was that Matthews was *responding* to someone else who claimed that you "can't compare" the value of the Notre Dame repairs to an alternative of X lives saved, and he was showing how you literally can.
So I'm pretty sure you've misidentified who is actually responsible for picking that thought experiment, and hence confusing "honest engagement with an interlocutor's claims" with "going looking for bullets". (This happens a lot. It wasn't a utilitarian who came up with the Transplant thought experiment, for example, though people sometimes find it distasteful when utilitarians respond to it. If you're willing to engage honestly with the hardest cases your opponents throw at you, you're going to look very different from the politician-types who just don't engage with objections to their views. That's not a flaw of those who honestly engage!)
This is accurate (the Notre Dame vs. kids comparison was originally from Amy Schiller), but he sought out this one particular argument from the entire giant universe of bad criticisms of effective altruism, chose to buy into its weird and dubious framing, and got very excited about completely biting the bullet. From a section titled "Kids over cathedrals":
"...That said, it’s a harder question than the Notre Dame one. I can imagine explaining to kids waiting for bednets that my tax dollars are going to help people suffering in the US, not Nigeria...
But can I imagine going down Main Street and telling people they need to die for Notre Dame? Of course not.
If I were to file effective altruism down to its most core, elemental truth, it’s this: “We should let children die to rebuild a cathedral” is not a principle anyone should be willing to accept. Every reasonable person should reject it."
Schiller's argument is not very good or coherent, so I don't think this constitutes responding to the hardest cases. Rather, it's a weak argument whose relatively simple rebuttal will unfortunately seem counterintuitive and hard to swallow.
But I think utilitarians often seek out such cases and seem to take a perverse pleasure in getting to say the counterintuitive thing.
(Maybe I don't understand what you're referring to, but) why is it counterintuitive?
To be fair that absolutely does sound like something Matthews could argue.
Doing ANYTHING other than living frugally while maximising earnings to donate to the most effective charities is letting children die. Literally.
The issue is how we earmark/categorise the money pools. Could the same money be raised for more effective causes?
I was motivated by the OP to at last set up a (very modest, but this isn't important for the purpose of this argument) GiveDirectly regular donation, and was welcomed by a screen requesting donations for "poor Americans". Kinda appalling form of nudging/path-creation imo, because I feel that at least SOME people will donate on that page by reflex, instead of clicking the "other programmes" button. But perhaps I'm wrong and that money would simply never make it to global poverty funds.
In other words, even in purely utilitarian terms, it's better to have a Notre Dame than nothing, and it's better to help a poor American than nobody.
On a personal level, I think trade-off denial gets a lot more purchase. Let's say I work at a gallery and I want to run a programme of zine-making workshops. All good fun, personally enriching, doesn't save any lives. It would be absurd for someone to object by saying I'd be better off spending my time working at a Fortune 500 company and donating my money to mitigate a hypothetical robot uprising. It's not a trade-off because I was never going to do that other thing anyway. It was never an option.
Absolutely. But then will you present your programme as charitable beneficence, or as something you do for largely self-interested reasons? I don't help my friends for charity reasons; I help them because they're my friends. It does not feel quite like a MORAL choice to me.
A moral choice and a clear trade-off would be: do I give my £50 to malaria prevention, or to what opens on clicking the GiveDirectly link, i.e. people in one of the wealthiest countries in the world who temporarily lost their social welfare benefits? The choice for me (I have literally zero personal/specific investment ("love") in either poor-ish Americans or children in sub-Saharan Africa) is obvious on a pure value/efficiency basis, and so are the trade-offs.
I resonate with that idea of trade-off resistance or reactance, and also with the theory that protected and sacred values are important psychological factors. I don't know that I agree that this is a strong reason or motivation for not accepting EA in particular, though.
I want to distinguish two arguments that I think need to be made here, though:
1. The normative argument for altruism (going out of our way to help others is good).
2. The prescriptive argument for Effective Altruism (EA is the recommended way to be altruistic).
I wonder if they should be separate arguments initially for the sake of fairness. A normative argument that kindness and giving are good. And a prescriptive argument for the strategy of Effective Altruism. I think many people accept the first but not the second.
When people oppose altruism in general, I would agree that resistance to trading off protected or sacred values is a very influential causal factor. That's given the assumption that a social nature, a capacity for empathy, and a kind disposition are the starting point for most humans, which I think is plausible for our species in general. It feels a bit weird (callous) to me for people to want to argue a principled standpoint against altruism or kindness in general.
On the other hand, unless you assume that Effective Altruism is hands down the only sensible way to be altruistic, that argument doesn't simply extend to supporting EA. There is a separate, specific prescriptive argument to be made that EA should be connected to altruism as its best or only sensible method. I think it is probably a good idea to distinguish these, to avoid clouding the questions about what specifically it means to help once we agree that kindness and its handmaiden altruism are good things in human life.
I love the "vibe ethicist" vs. systematic ethics frame, though I tend to view the difference as two different ways of doing ethics. A charitable reading of "vibe ethics" would be that "vibes" are important moral emotions, explanans of sorts. I speculate that these "vibes" come from pre-theoretic notions of moral agency tied to our responsibility practices. Accepting the trade-offs requires us to think that we are responsible for other lives and matters that we do not usually think we are responsible for. If, theoretically, a huge number of trade-offs had to be formulated for each of the actions we take in our daily lives, being held responsible for them (with the attendant practices of blame and resentment) could pose huge burdens on individual moral agents.