Ethics is easy when autonomy and beneficence converge: of course people should be allowed to do good things.1 And I’m enough of a Millian to think that in general, promoting human capacities and individual autonomy may be our most robustly secure route to creating a better future. So I think the two do very often converge. On occasions when they don’t (e.g., people fail to voluntarily give as much to charity as would be ideal), principled proceduralism rules out unilateral rights violations (e.g. stealing to give).2
So, respect for autonomy is, in practice, very important to me. But things get trickier when both (i) autonomy and beneficence diverge — people are inclined to make impartially detrimental (self-interested) decisions, and (ii) we can somehow prevent this without violating any rights. For example, suppose that the relevant individuals can’t implement some detrimental decision by themselves. (Despite being impartially detrimental, the decision is intuitively “theirs” to make. They’re pursuing their own interests in a way that is intuitively perfectly reasonable, though it happens to result in greater harms to others.) They ask us for help. The tricky moral question: Should we help them? Let’s step through some concrete examples.
Example 1: Tax advice
Suppose there was an “income surtax for foreign aid” that citizens could, with some tricky paperwork, opt out of when doing their taxes. And suppose the foreign aid was truly well-targeted and cost-effective, comparable to the Against Malaria Foundation. To make things extra awkward, suppose that most middle- and upper-class individuals opted out, and that poorer citizens would like to opt out too, but many lack the administrative capacity to file the necessary paperwork. I could see helpful progressive organizations in such circumstances wanting to set up tax advising support for poor citizens, to help them avoid the “unfair” tax that their wealthier compatriots were mostly opting out of. Of course, the cost of such helpful advice is that more (and much poorer) African kids die of malaria. Should you support this tax advisory service?
It’s obviously messed up that the described society is, in effect, specifically taxing its poorest citizens in order to fund the foreign aid budget. I’d prefer that the rich not be able to opt out of such a tax. You might optimistically hope to reconcile beneficence and autonomy by arguing that helping the poor to equally dodge the tax could create political pressure to close the loophole for everyone. But suppose that wasn’t realistic: the only effect of helping the local poor opt out of the tax is that now nobody pays the surtax. That helps the local poor, but hurts the global poor (whose lives are literally on the line) even more.
So clarified, the thought experiment seems a stark test of whether one ultimately gives more weight to impartial beneficence or to local co-operativeness. (As someone who likes both, it feels very awkward to imagine having to explicitly trade off between them like this!)
To help pump beneficent intuitions, suppose you were initially planning to donate to AMF yourself. Should you redirect some of your charity budget away from that good cause, to instead help other people avoid supporting that good cause? Seems like that would be hard to justify to the kids dying of malaria.3
Suppose you agree with me so far. Let’s make it even more awkward. Suppose it would be extremely easy and cost-effective to advocate to donors currently supporting the tax advisory service that they should stop funding the lethal tax advice. (Suppose each dollar spent on this advocacy results in $10 or more going to top charities via the foreign aid surtax, plus a bit from the targeted donors themselves now supporting better things with their funds.) Should you support that instead? That is, should you fund advocacy to discourage others from providing tax advice to poor people?
Terrible optics, right? (Like refusing to save a person stuck on the train tracks.) So, maybe not a good idea to get involved if you’re part of a movement that hopes for broader influence and popularity. But suppose you’re just a random (unaffiliated) individual with no such reputational stakes. What sounds good and what does good can come apart, and (given the stipulations) this case is an example of that. If there’s a non-instrumental moral objection to doing the impartially best thing in this case, I’d be curious to hear what it is. Perhaps people have a right, not just against being used as a means, but also against being allowed to be used as a means, against their will? Anti-instrumentalist intuitions might be stronger in cases involving bodily integrity, like the next couple of examples.
Example 2: Infectious Meat Allergies
Apparently, bites from the lone star tick can make people severely allergic to meat. Good news for animal liberation, right? (Imagine if abolitionists could’ve made slaveholders literally allergic to slavery.) Again, let’s grant that principled proceduralism prohibits us from deliberately infecting people with diseases against their will. That would just be too cartoonishly villainous. But how far out of our way do we need to go to help people avoid these tick bites? Suppose the ticks have newly taken up residence along a popular walking trail. Do we have to warn people about them? Should we? (Suppose that there is no further health risk beyond the meat allergy.)
Suppose it would be costly to run an effective public information campaign to this end. If we’re currently financially supporting vegan outreach or other effective animal charities, should we redirect some of that money to instead protect people against becoming involuntarily vegetarian? Wouldn’t that be a weird thing to prioritize (even if we agree that it would be wrong to deliberately infect them)?
Example 3: Trolley Footbridge Rescue
For a more stylized case, consider a status-quo reversal of Trolley Footbridge: a very large man is already on the tracks, and—as the trolley approaches—asks you to lower a ladder from your overhead footbridge so that he can escape. If you help him, the unimpeded trolley will go on to instead kill five others further down the track. Should you help the one to escape, and thereby condemn five to death?
It seems interpersonally extremely awkward to say “no” on the grounds that you actually want the one to get hit by the trolley (for purely instrumental reasons, of course—nothing personal!). People hate to admit to such instrumental motivations.4 But saving the one would be outrageously unjustifiable to the five whom you thereby condemn to death. (“You’re seriously going out of your way to condemn all five of us to death? Why value that one stranger so much more than us? Just because he talked to you first? Are you f***ing kidding me?”)
Standard presentations of trolley cases bias us by focusing attention much more on the one than on the five. Once you look for neglected interests, it’s much harder to justify anti-utilitarian verdicts.
If you’re still tempted to save the one, consider a Double Trolley case, where you only have time to either save this one (condemning five) or to pull the switch for a completely separate rail set-up, where pulling the switch results in the other trolley killing one on a side track rather than five on the main track. The second group of five would have a strong complaint if, instead of saving them, you preferred to help one person in a way that condemned five more. That would show really messed up priorities, right?
The lesson:
Respect for autonomy, including for impartially detrimental decisions (like keeping money for yourself or eating factory-farmed meat), plausibly grounds a negative duty not to directly violate this autonomy (e.g. by stealing or deliberately infecting others with a meat allergy). It’s less clear whether we have positive moral reasons to help others to avoid their undesired but impartially better outcomes. I think the reasons against helping must be the greater—otherwise we are failing to give due weight to those who would be indirectly but more gravely harmed by our “aid”. (Though might we have an additional negative duty not to indirectly undermine autonomy by actively discouraging others from helping someone? Or is there a positive obligation not even to allow salient harms solely because we foresee instrumental benefits to others following from them? I’m skeptical, but remain curious to hear further arguments.) What’s clearest to me here is that it would be misguided to prioritize such negative-externality aid over helping others in greater need and with no such moral downsides.
An even more awkward example
I’m going to throw up the paywall at this point because I could see it being a bad thing for the following example to be publicly discussed. (The paywall screens off the comments too, so feel free to use the accompanying ‘substack note’ for comments on this public portion.) Still, I think the case is interesting enough to be worth sharing with a smaller, private audience. So, here goes: