29 Comments
Kenny Easwaran:

I’ve thought about this in the context of wondering why people are so much more bothered by police brutality (which kills about a thousand people a year in the US) than traffic fatalities (which kill well over 30,000 people a year). I think there’s some sense in which they see police killings as our societal collective action, while traffic killings are just byproducts of a system we use, but none of that matters to the victims.

Richard Y Chappell:

It's interesting to compare that to opposition to self-driving cars, where we'd have both *fewer* deaths AND they would be even more explicitly mere "byproducts of a system" than human-driver fatalities. I guess status quo bias is so strong it can even outweigh some of these other biasing factors.

Ali Afroz:

I think part of what’s going on is that people see changes as collective actions, but leaving things as they are is not seen as an action and so deaths caused by inaction are seen as nature running its course, while death caused by changes are the fault of humans. Of course, this is only one of many factors which makes the status quo bias so strong.

EC:

Recently I’ve been thinking along similar lines about the deaths caused by future climate change vs by malaria, and then about Thomas Pogge’s research program from the mid 2000s (not sure what he’s doing now), where he put considerable effort into arguing that the global rich are responsible for the plight of the global poor.

If all we cared about was the fact of poverty or deaths, none of that should matter, and Pogge wasted that period of his professional life.

Richard Y Chappell:

I don't think it's a waste to show that even people with mistaken views should support good policies. (Especially if the mistaken views in question are extremely widespread, and unlikely to be remedied any time soon!)

Kenny Easwaran:

I think the question here has to be about what “responsible” means. It could well be an interesting and meaningful concept that is therefore worth arguing about, even if it doesn’t have the evaluative moral implications people sometimes think it has.

Thomas:

How deep does your metaphysical scepticism go? Suppose A and B each dislike you (independently; they're not in cahoots here). A throws a rock through your window; B stands by and could easily have stopped the rock from hitting your window, but declines to do so. Who should pay for the replacement of your window? (Relatedly: who should the law require to replace the window?)

Richard Y Chappell:

I agree there are all sorts of practical reasons to assign more (esp. legal) responsibility to people who unjustifiably cause harms than to those who unjustifiably allow them. But I don't think it follows from this that preventing harms is less important than refraining from causing harm, or that you cannot easily justify causing lesser harms in order to prevent greater ones.

Thomas:

I'm surprised that you're so happy with causal distinctions. If you accept that there are causal distinctions in the world, then I don't understand the grounds of your mockery in this post.

The best way of conceptualising the deontological endeavours you mention here is as efforts to theorise about the nature of causation—about the nature of those causal distinctions.

Understood as such, what they're doing isn't much different to what Lewis et al. were doing in the 80s vis-a-vis causation. Lewis was doing reflective equilibrium on causal intuitions vs theory. The deontologists you mock here start with the idea that there is enormous overlap between causal intuitions and moral ones, read something deep into that overlap, and then proceed to do reflective equilibrium on causal and moral intuitions vs theory.

There's no more fundamental moral importance attached to whether you, e.g., diverted or initiated a threat than there is fundamental epistemological importance attached to barn facades.

Richard Y Chappell:

I think it's silly to treat causal distinctions as having fundamental moral significance. I agree there can be practical/heuristic reasons for them to play some roles in law and such, and we can explain this on consequentialist grounds.

Thomas:

Is there an argument for its being silly?

If I ask St Peter why I'm being treated differently to others, even though our counterfactual effects on the world are the same, and he replies "well, you caused all that X, and they didn't" ...that response doesn't strike me as silly. (And to the extent that almost all of human society is built around the same distinction, it doesn't seem to strike most people as silly either.)

Richard Y Chappell:

Going back to my first reply, we should distinguish two very different normative roles causal notions could play:

(1) Are they relevant to determining what I ideally should do? (E.g. should I seek to realize a worse world because the better alternative would involve my standing in a causal relation to some harms?)

(2) Are they relevant to who we should hold (primarily) accountable for suboptimal/unjustified outcomes?

I think it's most clearly silly to grant causal relations fundamental significance in #1-type roles, and the argument is given in the main post (just look at the cases, and note how it goes against the interests of the people that we ought to care about).

I also think #2-type fundamentality is misguided, and that common sense actually agrees with me. Cases of ignoring sufficiently *salient* needs (like watching a child drown before your eyes) are generally seen as relevantly similar to causing the harms in question. The fundamental issue is not causation but how we can reasonably be expected to use limited agential resources (including moral effort). If you go out of your way - expending mental effort - to make things worse than they would otherwise be, then that's (commonsensically) objectionable whether it technically involves doing or allowing. And if the situation is such that any minimally decent person would be motivated to do X, and you instead do not-X, then - again - that's commonsensically objectionable regardless of the metaphysics. For more, see: https://www.goodthoughts.blog/i/57070558/salience-and-killing-vs-letting-die

Thomas:

I don't think there is an argument against (1) in the post. There's an argument against, e.g., it being of fundamental moral importance whether you create or divert a threat. But deontologists shouldn't think that is of fundamental moral importance: if it is of moral importance at all it's because it bears upon causation (which would be of fundamental moral importance).

Compare St Peter saying: "you're being treated differently because more C-fibres fired as a result of what you did than of what they did." And me replying "this is silly, why should which fibres electrons pass through be of fundamental moral importance?" (Answer: it shouldn't be and it's not—it's of moral importance only to the extent it bears upon pain/pleasure.)

Joseph Rahi:

I think your treatment of the Transplant case here is revealing (in the best way). It relies on an impossible hypothetical in which normal causality is overridden, such that we isolate the moral decision as nothing but a choice between possible after-worlds, with the act itself completely wiped from causal history. It deliberately excludes all the aspects that are crucial from a virtue ethics standpoint: it cuts away the action itself, any link to character, and any social context. In the real world, our choices are less "which world shall I actualise?", and more "what shall I do, who shall I be, in this context?"

It feels a bit like how physicists might talk about how things behave in a frictionless vacuum, but reality includes air resistance, friction, and other confounding factors. That doesn't mean the physicists (consequentialists) are wrong though.

Daniel Muñoz:

I have a draft called “Priceless Taboos” defending deontology on the following grounds:

1) rights are sensitive to social norms

2) social norms have to be enforceable

3) to be enforceable, norms have to be arbitrary sometimes

My analogy is the nuclear taboo. We want some norm against the most powerful nuclear weapons. But there are plenty of conventional weapons much larger than tactical nukes. Isn’t it arbitrary, as JF Dulles argued, to ban weapons just because they’re nuclear?

Yes it is, but we all recognize that “ban all big weapons” is too vague to be an effective norm, whereas “ban all nuclear weapons” does the trick.

Similarly, “ban all acts with net bad consequences” is a laughably bad social norm, whereas “no killing” is arguably the most important norm in human history—even if some killings are less bad than some lettings-die, just as some tiny nukes are less bad than big conventional weapons.

Richard Y Chappell:

Sounds like an argument against Kamm-style "intricate" deontology! Supposing we're all on board with promoting good simple norms, what's the case for giving them a deontological rather than two-level consequentialist theoretical explanation?

Daniel Muñoz:

Kamm & Kagan were the original target!

I don’t think it’s a mistake to internalize certain norms. On my deathbed, if I keep a promise that fails to maximize utility, I think that’s noble rather than being a kind of sad delusion—like clinging to a rule of thumb even in an edge case.

But I want to hear more about the virtues of the two-level consequentialist. Maybe I can get your thoughts tomorrow?

Richard Y Chappell:

Yeah, I think my preferred form of two-level view will agree with a lot of that. (It's definitely not a mistake to internalize many good norms. Some instances of acting upon the norm may be undesirable, but that's compatible with still seeing the committed agent as "noble" or praiseworthy in certain respects.)

And yes, sounds like a great topic to explore more in our upcoming discussion. Looking forward to it!

Avram Hiller:

Great post! I like to imagine the referee reports if the transplant case were submitted to a reputable scientific journal. Imagine if the authors of such a paper said that the results of the case (where most people say that it is wrong to transplant) show that people have a basic belief that it is wrong to violate the rights of one to save five. The referees would have a fit! Did you perform a factor analysis, measuring it against people's other attitudes towards doctor/patient relationships and their beliefs about future consequences?

There is no way such a paper would pass peer review.

Mary M.:

Hi again. Okay, so I know that you've argued against adding context to theoretical examples in moral philosophy, but I'm having trouble accepting that restriction as I consider the example you present.

If the person whose head is cut off happens to be a fellow surgeon who might be weeks away from a medical discovery that could save millions of lives, that changes the calculus so much--especially if, let's say, the 5 who need organs are all in organ failure because they have diabetes from drinking too much soda in their parents' basements, while playing video games.

Now, you might then say that the additional contextual information would simply flip the utilitarian position on the issue. For a utilitarian, it seems clear that you should save the surgeon and let the couch potatoes come to meet their natural fates. More lives will ultimately be saved by the surgeon if he lives, so that makes this an easy case...But how sure are we about that? How do we know that the surgeon won't actually fall back into his drinking habit and end up on the couch, dysfunctional and slovenly? Or that one of the couch potatoes won't get a sudden surge of motivation, go to medical school, save lives, etc.?

In short, I struggle to see how a utilitarian can feel confident in his epistemic access to secured ends. I know that you advocate for long-term thinking about consequences, which I agree with, but how does one ever feel confident in pulling the proverbial trigger when so many outcomes can't be clearly foreseen? Granted, you might be able to deal with the epistemic access problem in a hermetically sealed example, but, seriously, how can bare utilitarianism reliably guide action in real life?

(If you already have a post addressing this, please send it my way. Thanks!)

Richard Y Chappell:

If you have usable information, then use it. If you don't, then the best you can do may be to assume that everyone involved is equally likely (in expectation) to do wonderful things in future. So saving five lives gives you five times as many chances of good stuff happening in future.

(You can think of each person as a bundle of lottery tickets, representing their future possibilities. Unless you have sufficiently good reason to think that one bundle of tickets is vastly better than the rest, it would clearly be irrational to choose just one bundle over five.)
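The lottery-ticket point is just linearity of expectation: under symmetric ignorance, each person contributes the same expected future value, so five bundles are worth five times one. A toy sketch, where `p_wonderful` and `value` are made-up placeholder numbers, purely illustrative:

```python
def expected_future_value(n_people: int, p_wonderful: float = 0.01, value: float = 1.0) -> float:
    # Each person is an identical "bundle of lottery tickets": the same
    # chance of doing wonderful things in future. With no information
    # distinguishing the bundles, expectations simply add up.
    return n_people * p_wonderful * value

# Saving five gives five times the expected future good of saving one.
assert expected_future_value(5) == 5 * expected_future_value(1)
```

The numbers are arbitrary; the point is only that, absent distinguishing information, the comparison scales linearly with the number of people saved.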

Schneeaffe:

But utilitarianism also depends on metaphysical distinctions: agents, preferences, utilons, or worst of all consciousness. It's just not so visible, because utilitarians give concrete explanations of these lower priority than deontologists give theirs. You might object that "those actually matter", but I'm not sure they would pass a test like the one in this post: given a concrete definition of them, constructing a scenario where that's the only difference.

>Why, then, shouldn’t doctors go around killing people? Presumably because it wouldn’t have good expected consequences in real life!

I've never found this very convincing, because it depends on the reactions of non-utilitarians. If only the general public were enlightened enough, then the doctor could go around killing people.

>A second was just about to save you when they realized that the side track—where just one person awaited as collateral damage—later loops back, turning the purportedly-collateral damage into an instrumental killing.

What's the context of this version?

Richard Y Chappell:

Right, I think consciousness etc. clearly matters, is perfectly sensible to care about, and holds up fine in pairwise comparison cases.

> What's the context of this version?

A standard way to explain why it's permissible to kill one to save five in the original trolley switch case but not in footbridge (where you push one off a bridge to land in front of the trolley) is to say - as in the Doctrine of Double Effect - that it's OK to kill one as an unintended side-effect of saving the five, but not to kill one *as a means* to saving the five. The trolley loop case poses a challenge to that principle, since most people still think it's permissible to switch even in the case where, if the one hadn't been on the side track, it would have looped around to kill the five. The looping track means that the one is now killed as a means, not just as a side-effect, of saving the five.

Schneeaffe:

Do you have a concrete method for recognising consciousness? I think that in a general sense, sure, it sounds like consciousness matters - but also, people generally feel that omission/commission matters, and you have to come up with things like the trolley problem to argue against it, *which you can only do because omission/commission is relatively easy to identify in edge cases*. We can't construct cases like that for consciousness if it's left unexplained - but if it were explained, and we could construct them, would it pass? Just the fact that it's intuitive now doesn't really speak against that.

So basically, I doubt we have seen the really tough pairwise comparisons for consciousness, because it's not defined enough.

>since most people still think it's permissible to switch even in the case where, if the one hadn't been on the side track, it would have looped around to kill the five

So in this scenario, the one is also fat enough to stop the train?

Richard Y Chappell:

I don't think the trolley problem involves "edge cases". (They're not, like, borderline cases of causation or anything like that.) They're just clear-cut cases where all else is equal. We can do the same for consciousness by imagining cases involving phenomenal zombies. The stipulation that they don't experience anything (including happiness or suffering) clearly makes a huge difference to how much it matters whether or not they get hit by trolleys.

On the other hand, the claim that conscious experience is the *only* thing that matters is subject to intuitive counterexamples like the experience machine. For further discussion, see: https://www.utilitarianism.net/theories-of-well-being/

> So in this scenario, the one is also fat enough to stop the train?

Sure. They suffice to stop it, in any case.

Schneeaffe:

>We can do the same for consciousness by imagining cases involving phenomenal zombies.

But there seems to be very little discussion of this. I suspect that you're in the minority here, and that the idea of two indistinguishable people, only one of which has moral value, would be rejected by most people. I would think this is one of the *easy* cases against consciousness mattering - if there were some kind of naturalistic explanation of it, you would have to get into the details of that, but this version is very straightforward.

The Ancient Geek:

If the purpose of morality is to identify bad people, then it's obvious that intention would matter as well as outcome.

Richard Y Chappell:

That's not the purpose of morality, and I'm not sure what the claim that "intention matters" has to do with this post (a person with actually-good, selfless intentions will want the best outcome, not the one that keeps their hands the cleanest). But I've written elsewhere about how intention matters:

https://www.goodthoughts.blog/p/how-intention-matters

The Ancient Geek:

We are in the habit of jailing people for being bad people, and if that has nothing to do with morality, that would be worrying.

Intention matters because an undesirable outcome that isn't brought about intentionally is a tragedy, not a wrongdoing.

Which is not to say consequences don't matter, only that they do a different job. People get more het up over police brutality than traffic accidents because it seems intentional, voluntary and avoidable... and there is a set of social emotions that function to alter those kinds of behaviours.

People also don't approve of traffic fatalities, but think about the subject in a more technical, less emotive way... because you can't solve the problem by blaming one person or group.

They are different kinds of "bad", so there isn't a problem of failing to trade them off.
