29 Comments

Jake Zuehl:

I'm so glad this paper has found a happy (and well-deserved) home! Congrats, Richard! I'd say it is pretty much mandatory reading for anyone interested in these issues.

I'm a robust deontologist (in your sense), and, unlike many deontologists, I'm perfectly comfortable with the theoretical vocabulary ("preferability," etc.) in which the paradox is formulated. So I need to think about how I want to respond in substantive terms. My initial, very tentative inclination is to reject (4), the claim that a successful prevention is vastly preferable to a failed prevention, where "vastly preferable" means more preferable than no gratuitous murders is to one such murder. I don't think that claim is as obvious as you suggest.

In general, I find it plausible that everyone (deontologists included) should be indifferent (or close to indifferent) between (A) an already attempted murder proving a success and (B) an accidental death. (If I could save one of two people from death, and I know that one of them was put in harm's way as an attempt at murder but I don't know which, I don't think I should be willing to pay much of a moral penalty to find out which is which so I can prevent the murder instead of the accident). So: yes, we should strongly prefer that the prevention succeed instead of fail, just as we should, in general, strongly prefer that five deaths be avoided. But I don't think the deontologist needs to be committed to thinking that the difference between a failed and a successful preventative killing is as big as (let alone bigger than) the difference between one gratuitous murder and zero, any more than they need be committed to thinking that five deaths are worse than one murder.

By the way, something like this may help interpret what Kieran Setiya had in mind with the remark in "Must Consequentialists Kill?" (cited in your fn. 41) about the "damage" already being done once the five murders have been attempted such that they can only be prevented by killing an additional one. Maybe he was thinking that the specifically 'moral' damage, the damage that makes a murder dispreferable to an accidental death, is already done once the murder is attempted. There is, of course, additional 'damage,' of a different sort, done when the five actually die!

It now occurs to me, though, that you could respond by adjusting the case to: one killing to prevent five other killings, which would succeed if attempted, from even being attempted (planned, conceived, desired...). The move I just made won't work against a case like that. (Apologies if you discuss this in the paper -- I haven't given the final version the careful attention it deserves). And it seems right that a successful prevention of that kind is vastly preferable (in your sense) to a failed prevention.

I have another possible reply in mind, turning on the difference between a preference for success conditional on the attempt, and a preference for a successful attempt. But it needs more thought, and this is already a very long comment!

Anyway, congrats again!

Richard Y Chappell:

Thanks! Yeah, you're exactly right that the strongest version of my case is one where the successful prevention works by preventing the other killers from even forming their murderous intentions to begin with. (I don't discuss this in the paper, alas — it would indeed have been worth mentioning.)

I'll be curious to hear more about your conditional preference idea, once it's fully formed!

Terence Highsmith:

Disclaimer: I am not a philosopher and am a measly economist with a vague understanding of symbolic logic and a strong understanding of preference orderings.

I am not sure the amendment you propose works. When failed prevention is impossible, the deontologist is free of premises (4) and (5), and the conclusion does not follow. Which brings me to my objection: this business about 'failed prevention' feels odd.

It is not clear to me that one killing to (successfully) prevent five should be preferable in any way to failed prevention. Failed prevention is contingent upon an attempt to prevent (what you are gesturing toward as success conditional on the attempt?), which means that the deontologist is comparing a scenario where (a) an attempt was made and succeeds versus (b) an attempt was made and fails. The reference point 'an attempt was made' is important for the deontologist's evaluation of the preferability of the scenarios, and it's not obvious that the attempt is irrelevant from the EX-ANTE perspective of the outside observer. The deontologist could make the appeal that---even beyond indifference---she prefers neither (a) nor (b), given that a deontological constraint was violated in either scenario, which implies a partial, incomplete preference order over these scenarios.

More intuitively, I imagine a conversation like this:

(C)onsequentialist: (D)eontologist, do you think the killer should complete her plan and successfully prevent the five deaths, now that the mastermind is dead?

D: I cannot comment.

C: Why?

D: I do not think the killer should have killed in the first place. This is now a terrible scenario we are all in.

C: So what? Let the innocent people die?

D: No. If you're asking me whether to save five lives or not, save the five lives. If you're asking me about this whole conundrum we're in, I say I loathe it even if the five live, because the killer killed. I loathe it if the five die, because the killer killed. I cannot compare the two situations we could find ourselves in, even though I prefer the five to live.

i.e. 'One Killing to Prevent Five' >> 'Failed Prevention' is actually 'One Successful Killing to Prevent Five' >> 'One Failed Killing to Prevent Five'. I dispute the latter.

Bentham's Bulldog:

Awesome!

Corsaren:

Hi Richard! I’ve been thinking about this paper a lot for the past couple weeks, and while I think the argument is pretty ingenious, I do have some objections. Rather than list them all out—since I’ve managed to scribble several thousand words in my notes app at this point—I’ll stick to the two most interesting ones:

#1 Transitivity: you mention on p.194 that there’s no reason for us to not assume transitivity in this case, but I can think of at least one. In premise (2) you state “If an agent can bring about just W1 or W2…” which seems to establish that when we first demonstrate W2>W1, those two are the only worlds that the protagonist can “bring about” — yet once we introduce W3 (Failed Prevention), this is no longer the case. I think a deontologist could argue that it is improper to apply transitivity across these cases since they involve different world-models.

To clarify what I mean, it’s helpful to spell out what “bring about” entails—which seems underspecified as of now. To me, “bring about” cannot refer to just any set of outcomes that succeeds the Protagonist’s actions, but only those that are a *necessary consequence* of those actions (A1 -> W1). If I shoot a man and he dies, but then a day later the President bombs Iran, I did not “bring about” the action-inclusive world-state of the US bombing Iran, whereas I did “bring about” the man’s death. Now, technically, any action can be interrupted before it results in its intended outcome (e.g., the bullet could undergo spontaneous quantum tunneling just before it hits the man). So for the man’s death to be a “necessary consequence” of my pulling the trigger, it essentially means that we must be modeling a world where certain facts about what happens after I pull the trigger are fixed. We assume, for example, that we are in a world-model where the gun fires correctly, that the bullet will hit him, that it will damage his vital organs, that medical staff will not be around to save his life, etc.

Therefore, when we say W1 > W2 in (3), this must mean that, if the Protagonist’s chosen action was A2 (Killing), the outcome would be W2 (Five Saved); i.e., that we are currently occupying a world-model where W2 would be the *necessary result* of the Protagonist’s choice to select A2 over A1. But once you invoke W3 and allow for the Protagonist’s same action to NOT cause W2, then you are now leveraging two different world-models.

This is not a trivial concern: I think it is reasonable for a deontologist to insist that, given their primary concern is actions, they need only express preferences that are consistent within a single world-model and that transitivity across world-models is improper. They may have perfectly consistent preferences within any given world-model, and the strength of those preferences over action-inclusive world-states within a world-model (i.e., the difference between W1, W2, and W4) may, in fact, be weaker than their preferences about which world-model they are occupying (i.e., the difference between W2 vs. W3), and this is all totally fine because they are different kinds of preferences about fundamentally different kinds of questions. Transitivity does not apply.
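
To sketch the structure I have in mind (purely illustrative pseudo-code of my own, not anything from the paper; the model name, labels, and helper are made up): the within-model preference relation is simply partial. It declines to compare act-inclusive world-states drawn from different world-models, so no transitivity chain can mix the two kinds of comparison.

```python
# Purely illustrative sketch: preferences are defined only *within* a world-model
# (i.e., with the downstream facts held fixed); cross-model comparisons are a
# different kind of question and return None here.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AIWS:
    """An act-inclusive world-state."""
    model: str   # which world-model it belongs to (which downstream facts are fixed)
    label: str   # e.g. "W1", "W2", "W4"

# Within-model preferences the deontologist is prepared to assert,
# e.g. premise (3): W1 (Five Killings) over W2 (One Killing to Prevent Five).
WITHIN_MODEL = {
    "prevention-would-succeed": {("W1", "W2")},
}

def prefers(a: AIWS, b: AIWS) -> Optional[bool]:
    """Asserted within-model preference; None when the comparison crosses
    world-models, where (on this view) transitivity with the within-model
    relation cannot be invoked."""
    if a.model != b.model:
        return None
    return (a.label, b.label) in WITHIN_MODEL.get(a.model, set())
```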

That being said, I don’t think this objection is TOO damaging—as you note in footnote 40, the argument can potentially still go through if we ignore the failed prevention and instead apply a moral datum of One Killing to Prevent Five >> Six Killings, both of which, I believe, can live in the same world-model.

This next objection, however, is more serious.

#2 (7) is false: you define “>>” as being a preference strictly stronger than the degree to which a neutral deontological bystander would disprefer one generic killing. This definition holds for (4): we add five generic killings from W2 to W3, and five is more than one. But in W4 we have not added one generic killing—we have added a killing *by the protagonist*, and that is different under deontology, even in an agent-neutral sense! If it wasn’t, then (3) would already contradict itself! To assume that a rights violation which is part of the action is equivalent to a rights violation that is part of the outcome is to make an implicitly consequentialist assumption.

Part of the confusion is that you refer to W4 as “Six Killings” when in reality it is more like “Protagonist's Gratuitous Killing and Five Other Killings”. A true “Six Killings” universe would be that, say, the Protagonist chooses not to kill the innocent person, but then because they remain alive, the other killers decide to kill that person too (maybe they get a sixth guy to do it so that we keep the nice 1 person -> 1 killing correspondence). If we call that world W5, then we can ask how a deontologist would rank W5 vs. W1 and W4.

Now, the difference in preference between W1 and W5 is, of course, one generic killing, but in that case, your argument assumes that W4 ~ W5, which is not true. In fact, by (2), W5 > W4, since in W4 the Protagonist has acted wrongly by killing, but in W5 they have not. And so we have W1 >_1 W5 > W4 (where >_1 means a preference whose strength is equal to one generic killing); this means that W1 >> W4, so (7) is false.
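
Spelling that step out (illustrative only: d(·) is a made-up preferability scale whose unit is one generic killing; this is my notation, not the paper's):

```latex
% Illustrative reconstruction: d(.) is a stand-in "preferability" scale,
% with one generic killing as the unit of preference strength.
\begin{align*}
  d(W_1) - d(W_5) &= 1 && \text{(W5 adds exactly one generic killing to W1)} \\
  d(W_5) - d(W_4) &> 0 && \text{(by (2): the Protagonist acts wrongly in W4, not in W5)} \\
  \text{hence}\quad d(W_1) - d(W_4) &> 1 && \text{(i.e. } W_1 \gg W_4\text{, contradicting (7))}
\end{align*}
```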

I think it might be tempting to treat the above as an alternative way of deriving (6), but it's not; what I'm showing is that, for a deontologist, the difference between W1 and W4 is not equal to one generic killing; it is, in fact, strictly more than that because they care about the action part of the act-inclusive world-state MORE than the outcome part of it.

I have more thoughts here (e.g., that if you try and rescue (7) as you do in footnote 39, then (4) fails; and if you try and rescue both, then (2)/(3) fails), but I’ll stop here since I’m at 1,000+ words. I also realize that these two objections are sort of incompatible (the second objection invokes transitivity between comparisons across world-models [W1 vs. W5] vs. comparisons between world-states within a world-model [W5 vs. W4]), but I could see a deontologist taking up either of these depending on their disposition. If you bother reading all of this then I’d love to hear your thoughts! I really liked the paper even if I did have my qualms.

EDITED: I had (3) flipped as W2 > W1 in an earlier version of this because I was trying to analyze the counterfactual. Corrected!

Richard Y Chappell:

Thanks Corsaren, super interesting objections! As you anticipate with the footnote 39 reference, I really intend "one more generic killing" to be understood *by reference to* the difference between Five Killings and Six, i.e. it's one more generic killing *by the agent making a decision at the current choice point*.

So the big question is whether you can say more to make it defensible to reject (4) in light of this. You say that deontologists care primarily about "the action part of the act-inclusive world state", but the difference between Successful and Failed Prevention is precisely 5 more killing *actions*, not just "death" outcomes. (We can even build in that it prevents the murderous intentions, if that makes a difference.)

Perhaps you're thinking that the best option for the robust deontologist is to double-down on treating "live options" (of present or otherwise salient choice points) as having near-lexical priority over non-live actions (as seems an implicit commitment of preferring Five Killings over One Killing to Prevent Five to begin with). So then the argument is: they don't have an unduly weak preference for Successful over Failed Prevention. It may be as strong as any other forward-looking preference where five lives are saved! It's just that *no* such forward-looking preference can compare to how strongly they oppose *live* wrongdoing (i.e. at the choice point under current consideration). Something like that?

It's an intriguing idea. Tricky to fully make sense of, but intriguing. I'll have to mull it over!

Corsaren:

Thanks Richard, I’m glad you appreciated it!

And yes, exactly! I think the robust deontologist can argue that their preferences regarding live actions dominate their concerns for non-live actions, even if they still have strong preferences about what non-live actions occur.

In fact, that’s kind of how I default interpreted your explanation of how an agent-neutral deontology would differ from a consequentialism of rights: that a deontologist would “oppose rights-violations in each instance (no matter the agent), and for each rights-violating act, prefer that the agent had instead acted permissibly—no matter the downstream consequences” (p. 179)

Applying that framework to these sorts of rights-maximizing rights-violation scenarios, the robust deontologist’s preference for the Protagonist to act permissibly dominates the downstream concerns, even if those downstream concerns include OTHER/FUTURE agents acting impermissibly. The deontologist is, of course, opposed to those latter impermissible actions as well, and prefers that they didn’t happen (in a robust/consistent way, such that they prefer one downstream killing to five downstream killings), but we’re not comparing act-inclusive world-states where those agents are choosing to act permissibly vs. not, and so those preferences have lower lexical priority.

In a sense, this actually gets back to my first objection, but instead of arguing that the difference in preference type (i.e., preferences between AIWSs where the chosen act differs vs. preferences between AIWSs where the underlying world-model differs) invalidates transitivity, we merely argue that one type of preferences dominates the other. Sort of like infinitesimal numbers, where 2ε > ε > 0, but there exists no X where Xε > 1.
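
If it helps, here's a toy sketch of that dominance structure (my own illustration, nothing from the paper; the field names and counts are made up). Python's tuple comparison is lexicographic, so no number of downstream wrongs ever outweighs a single wrong at the live choice point, which is exactly the "no X with Xε > 1" idea.

```python
# Toy illustration of lexical priority: evaluate act-inclusive world-states by
# (wrongs at the live choice point, downstream wrongs by other/future agents).
# Tuple comparison is lexicographic, so the first component always dominates.

from typing import NamedTuple

class Evaluation(NamedTuple):
    live_wrongs: int        # wrongful killings at the choice point under consideration
    downstream_wrongs: int  # killings by other / future agents

def preferred(a: Evaluation, b: Evaluation) -> bool:
    """True if a is preferred to b: fewer live wrongs wins outright;
    downstream wrongs matter, but only once live wrongs are tied."""
    return a < b  # NamedTuple inherits lexicographic tuple comparison

five_killings       = Evaluation(live_wrongs=0, downstream_wrongs=5)
one_killing_saves_5 = Evaluation(live_wrongs=1, downstream_wrongs=0)
failed_prevention   = Evaluation(live_wrongs=1, downstream_wrongs=5)

assert preferred(five_killings, one_killing_saves_5)      # refusing to kill dominates
assert preferred(one_killing_saves_5, failed_prevention)  # but success is still preferred to failure
```

The integer counts are just stand-ins, of course; the only point is the ordering structure.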

I will say that this discussion does suggest there could be an alternative version of the paradox where we make the other rights violations “live” in some sense? Not sure how/if that would work…my gut says that it’d need a fair bit of additional scaffolding to properly make sense of, but perhaps there’s something there.

Jake Zuehl:

A question for you: Your argument, if it works, shows (among other things) that certain widespread and initially attractive preferences can't be rationally combined. What do you think most common-sense deontologists (the folk, insofar as they are unreflectively robust deontologists) give up? I suspect most people would (1) disprefer that one person be killed to prevent five killings, (2) vastly prefer successful prevention to failed prevention, and (3) be indifferent between failed prevention and six unrelated killings. If that's right, then I guess you would say that they have subtly intransitive preferences?

Jake Zuehl:

Sorry, just to be clear: I know that your argument is about ideal preferability rather than about what anyone actually prefers. I just wonder where you think most people depart from ideal rationality, insofar as they have broadly deontological preferences regarding these trade-offs, at least when each is taken in isolation.

Richard Y Chappell:

It'd be an interesting x-phi project to find out!

I suspect that ordinary people's preferences between Five Killings and One Killing to Prevent Five are highly unstable and prone to vacillate depending on how they're probed. (See the discussion of prospective vs retrospective preferences on p.185.) I'd guess that most people would retrospectively prefer One Killing to Prevent Five (e.g. reading about it in the newspaper the next day), but prospectively prefer *that the agent not kill* (thus committing themselves to preferring Five Killings, though how many would follow through and endorse that state-directed preference is less clear). As I note in footnote 29, this inconsistency could be used to ground an independent argument for consequentialism, as an instrumental account of constraints might offer the best explanation of this otherwise-puzzling pattern of judgments.

That said, I also find it plausible that many could just have "subtly intransitive preferences", as you suggest (while presumably still endorsing transitivity in principle).

Jessie Ewesmont:

One possible response for the deontologist turns on your definition of >> as being stronger than the preferability of avoiding one *generic* murder. The deontologist must see some difference between the five people being killed and the one person being killed, precisely because they have a marked preference for Five Killings over One Killing To Prevent Five (premise 3). Whatever this hidden quality is (and no doubt it differs from deontologist to deontologist), it must be important indeed if it can outweigh four more people dying!

If that's the case, then the difference between Five Killings and Six Killings is not one murder. To the consequentialist it is. But to the deontologist, who cares deeply about the hidden quality, the difference in moral weight is one murder plus the hidden quality, which we know is worth at least four lives. That creates a moral gulf of *at least* five lives' worth of difference between Five Killings and Six Killings, which justifies the >> perfectly fine.

Richard Y Chappell:

So the big question is whether they can specify any such "hidden quality" in such a way that I can't just build the same quality into the other five killings.

I think it's a mistake to assume that the deontologist must determine their preference via a (consequentialist-style) process of aggregating "qualities" (like values and disvalues). Rather, my suggestion is that they can oppose Protagonist's One Killing to Prevent Five on the simple grounds that the action is *wrong*, and moral wrongness is sufficient—indeed, decisive—grounds for dispreferring an action. But then one can't read off from this preference that there is any "hidden quality... worth at least four lives." There's just the deontic *wrongness* of the distinguishing act, no different from any of the other five *wrongful* killings (each of which the deontologist similarly disprefers, when considering their specific choice points of their respective agents deciding whether or not to kill).

Jessie Ewesmont:

Good point! And of course, if the deontologist just operates on a binary wrong/not wrong system, then it "doesn't matter" if the rescue operation succeeds.

I will have to think more on this - although, since I'm not a deontologist, I'm not *too* shaken up if their view has problems :-)

Mon0:

Congrats on the publication!

David Riceman:

People form habits, and unusual but successful interventions can be strongly habit-forming. So the killer has a greater impetus to kill next time, and the planners have a greater impetus to consider killing as a possible solution next time.

Richard Y Chappell:

Suppose all the agents have terminal cancer and will die tomorrow no matter what decisions they make today.

sean s:

All moral/ethical systems are consequentialist; the real controversy is how to determine which consequences matter most.

dotyloykpot:

Definitionally, deontologists don't care about consequences. So claiming they are wrong because of consequences (eg other agents furthering goals they don't agree with) makes no sense. Not saying deontology is coherent, but neither is this argument.

Richard Y Chappell:

Please read an intro ethics textbook; you don't know what you're talking about.

Simultan:

Fwiw, I found myself wondering about this same point when reading the post, so this reply felt needlessly dismissive to me as an outside observer and non-philosopher.

Richard Y Chappell:

I imagine you would have phrased your question as a question, which I would then have been happy to answer. But I've little patience for Doty's mix of ignorance and arrogance. (It's just not true that "Definitionally, deontologists don't care about consequences." The standard characterization has deontologists caring about features of actions *in addition to* their consequences, whereas consequentialists care ONLY about consequences.)

There are deeper background issues about how deontologists should engage with questions of preferability, which I explore in more detail here:

https://www.goodthoughts.blog/p/deontology-and-preferability

[User was temporarily suspended for this comment.]

Richard Y Chappell:

You don't understand the difference between "SOME choices cannot be justified by their effects" and "deontologists don't care about consequences". The former does not entail the latter. Every sane deontologist agrees that, all else equal, you should care whether people live or die.

Don't bother commenting here again until you learn some humility. (Feel free to google my CV and take an outside view on (i) whether there's any chance that an internet rando has a better grasp of basic concepts in moral philosophy than someone with my academic record and qualifications; and (ii) whether this paper would have passed peer review in a top journal if it was so obviously incompetent.)

Philosophy bear:

I've sent you a DM about the Robust versus Quiet distinction.

David Duffy:

I have the usual problems with this argument. The use of a ledger of deaths increases the vividness of the example, but it doesn't alter the moral arithmetic seen in a less fraught example, e.g. RH steals $10,000 from A and donates it to B-F, whose lives are greatly enhanced. If I am a "deontologist" [1], I disapprove of the crime against A but approve of the improved welfare of the deserving B-F. Am I inconsistent or hypocritical?

[1] This seems to cover such a huge spectrum, e.g. many meeting this label might be believers in just war and proportionality, so that killing 1 civilian per naughty enemy combatant killed is acceptable, but 75:1 might or might not leave a bad taste.

Richard Y Chappell:

Is there a premise of my argument that you disagree with?

Note that your comment hasn't described a complete, all-things-considered attitude. You see one aspect of RH's act as pro tanto undesirable, and another as pro tanto desirable. My question is: do you OVERALL prefer for RH to "steal & donate" or not? If "yes" (while still holding the act to be wrong), then my objections to "quiet deontology" apply. If "no", then my objection to robust deontology.

David Duffy:

I dunno if preference is the right language. How about: accept or reject a maxim that one must (always) steal from the rich to give to the poor? (Insert joke about progressive taxation here).

I don't see that either response is quiet or robust.

Richard Y Chappell:

That would be a different question. Preferences are real mental states, so we can ask which ones are or aren't warranted. A complete moral theory had better give an answer. (For more detail, see the paper's section IV.C. "Rejecting Preferability".)

David Duffy:

"...torn, which leaves us entirely lacking in practical normative guidance..." is of course the key part of IV.C, except that trolleyology type dilemmas are always between two constrained unpleasant alternatives. In many real life situations, the "deontologist" aims to overturn the current state of play. So, if Protagonist has a chance to explain to the first victim that their death is necessary to save five other lives and that person assents, then I suspect this might be acceptable to many, but from a beneficence standpoint may be regarded as irrelevant. Similarly, in the firing squad, one randomly assigned weapon fires blanks to weaken causation, or the setup is changed by Protagonist to make the death of A a direct double effect of saving B-F.
