38 Comments

Bentham's Bulldog

As I've mentioned before, I think there's an even simpler version of the puzzle that has the advantage of *actually being explainable in conversation*. I think when we discussed it, though, you mentioned thinking it had some small disadvantage (which might be right!). But the core idea is as follows.

Imagine three states of affairs:

1) Person A kills one person to stop B and C from killing one person each.

2) Person A kills one person indiscriminately. B and C do not kill.

3) B and C kill indiscriminately.

Clearly 1>2>3. So the third party prefers the state of affairs with one killing that stops two killings to the one where two killings happen. But if one prefers a state of affairs in which some agent acts in some way to one in which they don't, then it seems one prefers the action.

I think this argument is basically totally decisive. It shows that deontologists have to be quiet in the sense that you describe. But quiet deontology is hugely problematic. First of all, it's just wildly counterintuitive that God should be sitting in heaven hoping that people do the wrong thing.

Second, as you note, it seems that hoping is tied up with all sorts of other commitments--e.g. what you should vote for. So the deontologist shouldn't vote to stop the life-saving killing and organ-harvesting, or the pushing of people off bridges.

But things get even weirder! Presumably if you want X to happen, you shouldn't stop X. But deontologists would generally be pretty uncomfortable thinking that you shouldn't stop killings to save multiple lives. The person going around killing and harvesting organs should be stopped if deontology is true.

Similarly, should you try to persuade them out of it? If you should want a person to X, then you shouldn't try to talk them out of Xing. So then if your utilitarian friend asks if they should kill to save lives, the deontologist should say: "yes." They should even lie--for a lie is worth it to save multiple lives.

At this point, deontology begins to look weirdly egoistic. You want everyone else to breach the moral norms--you just don't want to get *your* hands dirty! You should even trick them into breaching them.

Should you then hope that you do the wrong thing in the future? Either answer is weird. If so, that is pretty insane. If not, then if you watch some person doing some wrong action on a screen and have amnesia, whether you should hope they do the wrong thing will depend on whether they are you. Nuts!

Generally we think a big advantage of laws against murder is that they deter crime. Deontologists must think that, at least regarding murders that would save multiple other lives, the fact that such laws deter people from committing them is a bug, not a feature. And this is so even though killings to save lives are morally wrong.

(Unrelated: this argument was one of the things that got me seriously thinking about ethics. I remember puzzling over it for a long time--and thinking it illustrated both how to do good ethics and what was wrong with deontology. It inspired a lot of other arguments against deontology for quite a while. I can't tell if this or your 2-D semantic argument against moral naturalism is my favorite argument from you).

MichaelKiwi

Couldn’t a deontologist argue 2>1 on the basis that only one person wishes to murder rather than three (i.e. in (1) three people are violating a deontic constraint rather than one)?

Richard Y Chappell

Adjust (1) so that A prevents both B and C from developing murderous inclinations in the first place.

MichaelKiwi

Thanks, seems decisive. What’s the small disadvantage?

Richard Y Chappell

Robust deontologists will (and, by the lights of their view, *should*) simply deny that 1>2. More generally, they deny that the way to assess the preferability of an outcome is to "add up" the values of its parts in any consequentialist-like fashion. Rather, their method may involve picturing the choice point from which the two worlds diverge, and asking *what we should want the agent to decide* at that point? And it's perfectly coherent for them to prefer that A should not kill as a means to preventing two other killings. That seems a paradigmatically "deontological" preference, rather than any great bullet to bite.

By contrast, denying my premise (4), that Successful Prevention >> Failed Prevention, seems much more costly *from the perspective of the robust deontologist*.

Jake Zuehl

Isn't BB's (1)>(2) essentially the same as your premise (5)? Of course, premise (5) is weak preferability, but BB could weaken the comparison to (1) ≥ (2) and the argument would still go through. Or am I missing something?

Sorry for commenting on an old thread.

Richard Y Chappell

Oh, interesting; I hadn't noticed that similarity before. I see my (5) as just claiming that beneficent motivations (from the protagonist) don't make matters worse, when you otherwise have all the same killers doing all the same killings as the background "default outcome" all along.

I can see now how BB's (1)>(2) looks similar, in that the *primary* difference between them is that agent A kills beneficently in (1) and maliciously in (2), and the same deaths result in the end. But a further difference that jumped out to me as potentially relevant (let me know if you don't share this sense) was that in (1) but not (2), the other agents B and C *had* both been "on track" to kill, and this only changed as a result of A's wrongful killing. So that struck me as in some ways more akin to asserting that One Killing to Prevent Five > Five Killings, which I take to be a denial of the distinctive claim of robust deontology.

J. Goard

My understanding as a consequentialist has been that deontologists ought to think such rankings are undefined (or at least the one between 2 and 3); there are moral equalities and inequalities within the system of each agent, but not for world states as such.

Gabe

I really enjoy the paradox paper. I do find it quite counterintuitive to think about what we have reason to prefer and what we have reason to do coming apart in this radical way. I take it the deontologist has to bite this bullet or show we shouldn't always prefer better states of affairs (or perhaps deny that we have reasons for any preferences?). Showing we shouldn't always prefer the best seems preferable (heh) to me, since I'm already convinced I have reason to prefer a case in which, say, my child lives and two strangers die. But there is something weird about saying that, when nothing we (appropriately) care about from the agent-relative perspective is at stake, we shouldn't always prefer the impersonally better outcome/state of affairs. I take it this is something the failed rescue vs. successful rescue case is meant to bring out. I'm not entirely sure what to say. I look forward to seeing the responses to the paper.

I do think you're being unfair to the quiet deontologist when you make it sound like what they 'want' is to avoid acting wrongly (i.e. "quiet deontologists want the best outcome to happen, they just don’t want to be personally responsible for it"). Quiet deontology, I take it, is the position that what you have reason to want comes apart from what you have reason to do. It seems then that quiet deontologists have reason to prefer that they act wrongly. They just can't do it! Now, this strikes me as very weird. But do we sometimes have such preferences? One could imagine a judge who is duty-bound to condemn his son to death and does so, but wholeheartedly prefers the world in which he fails to do his duty. How weird is that? I don't know.

It is worth noting, though, that we might have very strong reason for preferring that the public sphere not be dominated by consequentialist thinking, even if we are quiet deontologists--it might be that unchecked consequentialism could lead to, or simply constitute, a worse state of affairs. So in that regard, the quiet deontologist may not be condemned to acting against what she should prefer.

Richard Y Chappell

The most clearly coherent form of quiet deontology just involves agent-relative preferences: You don't want *yourself* to act wrongly, but you don't especially care about others' (purely agent-relative) wrongdoing.

An alternative, as you suggest, is to have reasons for action diverge from what one should prefer (even over act-inclusive states of affairs). I discuss this a bit in the paper; it's harder to make sense of.

> "we might have very strong reason for preferring that the public sphere not be dominated by consequentialist thinking, even if we are quiet deontologists--it might be that unchecked consequentialism could lead to, or simply constitute, a worse state of affairs"

That's logically possible, but goals aren't usually counterproductive so I think there's a hefty evidential burden on someone who wants to claim that X would be better achieved if people generally cared less about X. (A superior alternative, IMO, would be to instead ensure that people are educated about the dangers of naive instrumentalism as a decision procedure. Talk of "unchecked consequentialism" seems to implicitly build in naive instrumentalism, when it is that - rather than the consequentialist goals - that we should want people to reject.)

Gabe

Well, I take it the alternative I suggest is what Chris Howard goes in for in his paper on consequentialism and preference. I liked your discussion of it, but I don’t think it’s incoherent. There’s nothing incoherent as such about suggesting that what we should do comes apart from what we have reason to prefer (in the same way what we have reason to believe comes apart from what we have reason to prefer). Now, I do think you’ve highlighted ways in which a view that holds this position is unattractive, and I’m quite moved. But that’s different from saying that it’s hard to make sense of.

Angra Mainyu

Hi Richard,

Long time no see!

As I mentioned in my other reply (I tried to reply to your June 6 post as well, but apparently the post didn't get through some filter), I think I count as a (very unusual) deontologist. I may not count as quiet or robust, though, since I think it depends on the case whether we should oppose...

But to address your challenge (briefly), I do not see why I shouldn't vote as a deontologist, as my position is:

P1: Morality contains deontological constraints and also concerns about outcomes - and that holds for the way the human instinctive moral sense works.

P2: Generally, a deontologist - like, generally, any other human - should not use their belief that deontology is true in order to make moral assessments. Rather, they should use their moral sense - this is the best tool we humans have. In practice, that means that sometimes voting might be obligatory, sometimes not obligatory but permissible, and sometimes impermissible, depending on the case.

I do not see how that is odd or leads to any kind of absurdity or implausible result, etc.

But I would add:

P3: Attempts to follow consequentialism lead to more frequent moral mistakes (due to dismissing deontological constraints), but in general, humans would instinctively respect those constraints in their daily lives (without even realizing it), for example when decisions involve family, friends, etc. I.e., consequentialism can only be followed in a pretty limited fashion, as the constraints are built into the instinctive human moral sense.

P4: Overall, attempts to follow consequentialism would lead to worse outcomes, not better ones, even if, in some cases, the opposite would happen. For that matter, even outright vicious and malicious behavior can result in better outcomes sometimes. But I do not see the difficulty here, either.

P5: Unrealistic and weird robot-footbridge-like cases might be harder to assess than usual, but they are fortunately extremely improbable - e.g., no deontologist has ever found herself in that position, and I bet no one will ever be there. Tentatively, I think there probably is no obligation to vote in this particular case.

As for realistic policy decisions, etc., I would go with the 'case-by-case basis' answer as well - and of course, outcomes would surely be a concern when assessing the matter. Perhaps I would often disagree with some of the views you criticize, though I reckon I have no obligation to publicly take a stance in many cases (for a number of reasons, but I don't know how long a post here can be).

Mary M.

So the problem was a lack of consent? What if the participants had agreed for money and the public had never found out about it? And we got all the benefits of the research?

(If you feel that you address this in the paper you sent, then no worries about responding again. I’ll have a look!)

Richard Y Chappell

Yeah, I don't see what remaining objection there could be then. People are allowed to refuse treatment, even when it does no good at all. So they should certainly be *allowed* to refuse treatment when it would be socially beneficial (allowing researchers to track their disease progression). And if people are allowed to do a socially-beneficial thing for free, it's even better to compensate them for it.

"Undue inducement" worries seem to stem from people paternalistically assuming that they know better than the participant how it would be best for the participant to balance the three factors of (i) their health interests, (ii) their financial interests, and (iii) any altruistic interests they might have in participating in valuable research. In particular, all too many bioethicists assume that factor (i) automatically trumps both (ii) and (iii). But that is to force the bioethicist's values on others, which seems very disrespectful to me. My view is that we should let candidate participants decide for themselves how they want to balance these values.

Going back to your original question: If I'm right that this is the policy that would have the best consequences, then "quiet deontologists" should also want this policy to be implemented *even if they think it is wrong*. Quiet deontologists are odd like that (as explained in the post's final section).

Mary M.

Thanks for the reply. I understand your criticism of quiet deontology, and I agree with several of your points. But regarding the principle of consent, how would you see that factoring into the case of the fat man on the bridge, or the 1 murder victim vs. the 5? It sounds like you are trying to avoid a human-rights-based view of ethics by focusing on the instrumentality of consent, but it also sounds like you might think there is a basic good involved in human freedom to choose that should be respected. Am I hearing that tension correctly?

Richard Y Chappell

We need to distinguish abstract debates in ethical *theory* (about determining which considerations are morally *fundamental*) from advisable moral *practices* (about instrumentally valuable heuristics, like respecting rights).

In real life, I expect that respecting rights (incl. individual liberty) will tend to be the best way to secure better consequences in the long run. But the thought experiments show that it's the better consequences, rather than the respecting of rights (or individual liberty), that *fundamentally* matters.

This is probably better explained in the links I provided in another comment: https://www.goodthoughts.blog/p/deontologists-shouldnt-vote/comment/125061883

Mary M.

Thanks for the response. I've really enjoyed reading through several of your posts, as I've followed links to links to links :) I think I have a much better understanding of your view now, and I appreciate the work you've done to bring nuance to several aspects of the debate between consequentialism and deontology.

Having said that, my broad takeaway, I think, is that your view is probably going to be much more appealing to persons who don't believe in the soul (in a religious sense). As a theist, and specifically as a Christian, I struggle with seeing a simple answer to your paradox because I place so much importance on the intention of any given human agent (how human beings use their freedom to make moral choices) and also on divine providence (certain events coming to pass in certain times and places in history). In other words, because I believe in an eternal soul and a divine creator of the universe, I think my value calculus is bound to be different than that of a non-theist, and it will likely find itself at odds with utilitarian reasoning once we get down to the basic units of how we are measuring goodness.

I can see the blessing in disguise if the fat man slips and stops the train, but I abhor the thought of human agents collectively deciding to press a button on a trap door, even if it saves everyone on the train. I'd much rather inhabit a reality in which trains crash and maniacs kill innocent people than one in which the "good guys" push innocent people onto train tracks.

(I'm guessing that you probably have a piece or two addressing some of these points directly, as you seem very thorough. Feel free to send those on, and I will check them out!)

Thanks for chatting! 🩵

MichaelKiwi

There are seriously real-life professional philosophers who are like, "Yes, I am a quiet deontologist"? That flabbergasts me. I was sure all would try to find a problem with your paradox, not just accept quiet deontology.

Justin

Question: Why not just have the robot jump in front of the trolley? 😉

Genuinely curious: if “quiet deontology” is the version that privately maintains moral constraints while publicly deferring to better outcomes, is there a corresponding “soft consequentialism”—a version that privately endorses optimizing outcomes but publicly follows moral rules or social norms to avoid causing harm?

In other words, if quiet deontology is constrained preference, is soft consequentialism strategic restraint? And does it face its own internal tensions, especially when optimization would require violating norms people deeply value?

Richard Y Chappell

If other people seem likely to *misapply* the theory, then any theory may recommend against its own publicity. For a good discussion of the issue in relation to consequentialism in particular, see Katarzyna de Lazari-Radek & Peter Singer's paper, 'Secrecy in consequentialism: A defence of esoteric morality': https://philpapers.org/rec/DELSIC

As I discuss a bit in my full paper, Quiet Deontology goes even further in implying that we shouldn't even want others to *successfully* act rightly. This is a distinctive feature of agent-relative theories, like egoism, which give different agents different (and potentially competing) moral aims.

PolizRajt

Government House utilitarianism is something similar:

"Government House utilitarianism was a moral philosophy that envisaged an elite who knew the moral truth and could put out simple rules for the natives (or ordinary people) to use, even though in the commissioner’s bungalow it was known that the use of these rules would not always be justified. We (the governors) know that lying, for example, is sometimes justified, but we don’t want to let on to the natives, who may not have the wit to figure out when this is so; we don’t trust them to make the calculations that we make about when the ordinary rules should not be followed"

Alexandria

[User was temporarily suspended for this comment.]

Richard Y Chappell

Comments on this blog are for counterarguments, not insults.

Arnav Sood

I agree that “quiet deontology” (let’s call it QD) is bullshit. But I’m not convinced that deontologists are committed to this quiet view.

Clearly they are if we want to make deontology as compatible with consequentialism as possible. QD is the version that interferes the least with aggregate social outcomes. By consequentialist lights, QD is the minimum deontic perversion — it allows agents to maintain correct “higher-order beliefs” about what should happen and what other agents should do.

But to me, this completely misses the point of deontology. The whole project is based on the idea that respecting deontic constraints is good (maybe even the highest good). If for some moral reason I don’t want to do X, then by deontic lights X shouldn’t happen, and I shouldn’t want other agents to do X either.

(This opens the door to an interpretation where deontologists have a lexicographic ordering over states of the world — first check for violations of any constraints or rights, and then score the states by utils.)
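
For concreteness, here is a minimal sketch of what that lexicographic reading could look like (the states and scores below are made-up illustrations, not anything from the post):

```python
# Minimal sketch of a lexicographic deontic ordering over world states.
# A state is summarized as (constraint_violations, utility); the numbers
# are illustrative assumptions.

def deontic_key(state):
    violations, utility = state
    # Fewer violations always dominates; utility only breaks ties.
    return (violations, -utility)

def prefer(a, b):
    """Return the lexicographically preferred of two states."""
    return a if deontic_key(a) < deontic_key(b) else b

# One killing that saves five vs. five killings:
assert prefer((1, 5), (5, 0)) == (1, 5)

# But no amount of utility outweighs even one extra violation:
assert prefer((0, 0), (1, 10**6)) == (0, 0)
```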

Richard Y Chappell

Do you have any thoughts on how to escape my paradox of robust deontology? (See links in post.)

Arnav Sood

Fair enough, I should have paid more attention to why you think deontologists are committed to QD.

I haven't read the whole paper, so pardon me if this is off-base, but initially I think the paradox is "simply" a reflection of the fact that the moral preference relation doesn't respect what economists call IIA (the independence of irrelevant alternatives.) Preferences fail to respect IIA if A > B, but B > A if C is also an option.

There are a few philosophical situations where this (or something like it) occurs. Joe Horton describes some in his work on the "all or nothing problem." Say there's a burning building situation, where an agent has to risk something (like a 5% chance of being seriously burned) to save 5. Clearly (by commonsense morality) both choices, saving the five (S5) or doing nothing (S0), are permissible, and S5 > S0. But if we introduce a new choice, saving 6 (S6), then we have that S6 > S0, both S6 and S0 are permissible, and somehow S0 > S5: S5 is no longer permissible (i.e., it requires an explicitly malevolent or defective will to save fewer lives at no cost).
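
A minimal sketch of that menu effect (the permissibility rule encoded below is my loose paraphrase of Horton's case, not anything from his paper):

```python
# Sketch of the "all or nothing" menu effect: whether saving 5 is
# permissible depends on whether saving 6 is also on the menu, so the
# permissibility judgment violates IIA. The rule is a paraphrase.

def permissible(option, menu):
    # Doing nothing (S0) is always permissible: no one is required
    # to take on the personal risk.
    if option == "S0":
        return True
    # A risky rescue is permissible only if no rescue saving strictly
    # more people (at the same cost) is on the same menu.
    saved = int(option[1:])
    return all(int(o[1:]) <= saved for o in menu if o != "S0")

print(permissible("S5", {"S0", "S5"}))        # True
print(permissible("S5", {"S0", "S5", "S6"}))  # False: adding S6 flips it
print(permissible("S0", {"S0", "S5", "S6"}))  # True: doing nothing stays OK
```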

I believe these "menu effects" are at the heart of the paradox here (and in general, I think economics is a good way to address some of these issues --- e.g. recalling an earlier post, where "habit formation" and the distinction between local and global preferences can address the Peter Singer drowning child "paradox.")

Richard Y Chappell

I think I ended up cutting discussion of the "All or Nothing" problem from the paper for reasons of space, but I take it that although in the three-option scenario only S0 and S6 are permissible, still it is the case that S5 ≻ S0. A bystander should prefer that the agent wrongly save just 5 than that they permissibly exercise their prerogative to do nothing and let all 6 die.

There's a great explanation of this case in Daniel Muñoz's wonderful paper, 'Three Paradoxes of Supererogation' - https://philpapers.org/rec/MUOTPO-3 - (though I may end up wording this a bit differently than he would). There's not really any menu-sensitivity in how *desirable* each action is. It's just that prerogatives give agents an excuse for doing some suboptimal actions (saving none, because saving any would involve personal risk) and not others (leaving one to die after you've already taken the risk to save 5). Still, S5 is plainly a more desirable action than S0, and any decent bystander should prefer that the agent do S5 rather than S0.

This is why my formulation of "weak deontic constraints" in premise (2) is limited to when "an agent can bring about *just* W1 or W2". In a three-option case, not every permissible option is necessarily preferable to an impermissible option. Rather, all robust deontology commits one to is the view that there is *a* permissible option (e.g. S6) that is preferable.

Terence Highsmith

I bring this up in a comment on your post about the paradox, but I think Arnav is on the money here.

A deontologist whose preferences violate IIA, specifically one subject to 'framing effects', can escape the paradox. The relevant frame is 'killing for prevention'. We have alternatives A = killing with five lives saved (successful prevention), B = killing with five lives lost (failed prevention), C = save five lives, and D = end five lives. Your argument in the original post holds that, after the killing has happened (A or B), the deontologist's preferences should not treat either A or B as relevant to the choice between C and D. In particular, it's obvious that the deontologist should prefer C. However, it's not obvious that the deontologist should strongly prefer A over B, because there is killing in either alternative (the deontologist can assert that she has no preference between A and B, specifically).

So a deontologist with IIA-respecting preferences, presented with C or D, should prefer C over D. Now, suppose we add an alternative E, 'killing the mastermind holding the five hostages', and we force the deontologist to operate under the assumption that E (as in your paradox) has occurred. IIA stipulates that the deontologist should still prefer C (i.e., A) because E is an irrelevant alternative (as you argue, the mastermind can't be brought back to life).

However, framing effects imply that these supposedly irrelevant alternatives might serve as important frames which affect subsequent decisions. In this case, the frame E seems very relevant to (some) deontologists, who might argue that A and B are incomparable because a deontological constraint has been violated in both scenarios, and that we cannot consider C vs. D in isolation from the frame E. This disputes premise (4) of your paradox. If this is a reasonable contention, then it seems like the best (and probably only) escape route.
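
For concreteness, a minimal sketch of that frame-dependent comparison (the labels and the comparison rule are illustrative assumptions, not anyone's considered view):

```python
# Sketch of frame-dependent incomparability: once the frame E (the
# killing) obtains, outcomes A and B are returned as incomparable
# rather than ranked. All labels and rankings are illustrative.

def compare(x, y, frame_violated):
    """Return 'prefer ...', 'indifferent', or 'incomparable'."""
    if frame_violated and {x, y} <= {"A", "B"}:
        # Under the 'killing for prevention' frame, neither outcome
        # outranks the other: agnosticism, not indifference.
        return "incomparable"
    order = {"C": 2, "A": 1, "B": 0, "D": 0}  # C (save five) on top
    if order[x] == order[y]:
        return "indifferent"
    return f"prefer {x}" if order[x] > order[y] else f"prefer {y}"

print(compare("C", "D", frame_violated=False))  # prefer C
print(compare("A", "B", frame_violated=True))   # incomparable, not a tie
```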

/disclaimer: I am an economist not a philosopher please don't come for me

Richard Y Chappell

Yeah, it's an available option for a deontologist to insist: "Once an innocent person has been killed, I just don't care whether 5 additional people are killed or not. Once a wrong action has been done, every possible outcome is equally—infinitely—bad. Kill one person or a million, it's all the same to me."

I just think any view that says "I don't care whether additional gratuitous murders occur or not" is thereby disqualified. My target audience is deontologists who share my sense that this would be an awful response, and one that they are committed to avoiding. But if someone is truly willing to bite that bullet, I'm fine with ending the conversation there.

Terence Highsmith

It's not great that the deontologist cannot condemn the gratuitous murders, but having no preference relation is not the same thing as indifference; it's agnosticism. It's more akin to saying: “I'm not sure whether state of affairs A or B is better because, in either case, a horrific act has been committed.” Does the fact that you saved some people in the process make it any better? Perhaps, perhaps not. If you elevate the gravity of the killing, for example by requiring the horrendous torture of the mastermind before finally decapitating him, this becomes more evident. The frame seems to play an outsized role, and maybe that's a ding against deontology in itself. Either way, I can tell that this isn't the most compelling to you. I commend your paradox! It is a very good argument, somewhat like a famous impossibility theorem in economics (Arrow's), but for deontology.

Arnav Sood

Denote situations by (killings by the agent, further deaths) for convenience.

I think you're definitely right that any moral theory which fails to distinguish between (1, 1) and (1, 10^6) doesn't deserve to be taken seriously. The whole reason the deontic constraints matter is that they, somehow, defend human life/preserve the rights of people/etc. Failing to distinguish between those two situations is actually terrifying --- it's the logic of dictators (some higher principle matters more than all the excess death.) More simply, moral philosophers should care about deaths!

That said, I think the deontologist has something like lexicographic preferences over these situations. Conditional on a killing, (1, 1) >>>> (1, 10^6). But depending on how seriously you take the deontic constraints, (2, 1) < (1, 10^6), because we know that (1, 1) ~ (1, 1), and to say that (2, 1) > (1, 10^6) requires accepting killing 1 to save about a million.

Edit: The natural question is: at what point does (2, 1) > (1, X)? Is a deontologist willing to sacrifice the entire population just to avoid killing? I don't know. But I don't think that lexicographic preferences are actually the correct model, because they obey IIA, which you pointed out is problematic. If I recall, Parfit has a scheme that works better (as does Theron Pummer, i.e. judging the goodness of acts on some sort of 2D grid of "requiring" and "permitting" reasons, instead of just via a scalar ranking with a permissibility threshold). But at this point I'm just riffing.
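
One toy way to model that threshold question (the trade-off rate T below is entirely made up):

```python
# Sketch of a threshold variant: an extra killing becomes preferable
# only once it prevents at least T further deaths. T is an assumed,
# purely illustrative parameter.

T = 10**6  # each killing "costs" the equivalent of T further deaths

def prefer(a, b):
    """States are (killings, further_deaths); return the preferred one."""
    def badness(state):
        killings, deaths = state
        return killings * T + deaths
    return a if badness(a) <= badness(b) else b

print(prefer((2, 1), (1, 10**6)))      # (1, 1000000): threshold not cleared
print(prefer((2, 1), (1, 10**6 + 2)))  # (2, 1): now the extra killing "pays"
```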

SkinShallow

I'm not a deontologist (or a moral philosopher), so it's quite possible I'm missing a point here. But this seems on point to me: "The whole project is based on the idea that respecting deontic constraints is good (maybe even the highest good). If for some moral reason I don’t want to do X, then by deontic lights X shouldn’t happen, and I shouldn’t want other agents to do X either."

The way I understand deontology, the paradox smuggles in a consequentialist assumption: that there's a clear, outcome-based sense in which a world with 5 deaths is (ALSO MORALLY) worse than the world with 1 death, regardless of how those deaths occurred; and that a preference for that world should guide moral reasoning.

But it seems to me that from a deontological standpoint, outcomes are not what morality is about, and even if they were, what makes a world/outcome worse is how people act within it. The wrongness lies in the act, not merely the result. A world in which no one commits murder—even if significantly more people end up dead—might be “better” in the morally relevant sense. A 'proper' deontologist MUST be a robust one, surely?

Obviously real people in the real world care about outcomes in ways much more compatible with consequentialist framing, but it feels to me that that's not what deontology is about (and that's why I'm not a deontologist myself).

Richard Y Chappell

I agree that the "robust" version of the view seems like a more "proper" form of deontology. But most deontologist philosophers disagree, and seem to prefer a *purely* agent-relative version of the view! See the "Isn't this all terribly odd?" section of the post.

> "the paradox smuggles in a consequentialist assumption: that there's a clear, outcome-based sense in which a world with 5 deaths is (ALSO MORALLY) worse than the world with 1 death, regardless of how those deaths occured"

The paradox is specifically about *killings*. For more on why the key notion of preferability is not question-begging, this is discussed in the full paper, and also excerpted here: https://www.goodthoughts.blog/p/deontology-and-preferability

Rajat Sirkanungo

Good stuff as always! I never went away from consequentialism after a few months of reading your and Matthew's blogs. At least (mostly) welfarist consequentialism is something firm or solid for me. And it has been since 2023 (after I informed you that I came back to Utilitarianism... after I had a journey with lexical threshold views).

My political positions have shifted much more radically throughout these years (I was never a conservative, though): from libertarian capitalism to social liberalism, to libcap again, to social democracy, and now to Marxist-Leninist socialism. Interestingly, Nikhil Venkatesh has written some fire articles arguing for socialism. And I always thought that revolution seems much more easily justified under utilitarianism than under something like deontology. I got into Marxism-Leninism a few months ago. I have been reading some Stalin and Lenin, and it is interesting how Stalin, Lenin, and Marx are pretty utilitarian. Trotsky even says that utilitarianism seems like the best view for communists, because they care about the whole of collective humanity rather than just individuals and their rights - https://philpapers.org/rec/VENUAT

Now, I also know about the bajillion-deaths statistic thrown at state or authoritarian socialists (like me and other MLs) by liberals and conservatives, but when you actually read the history (even by liberal historians like Kotkin, Thurston, Montefiore, etc.), you find that Stalin or the USSR was not some hellhole. This is a nice long discussion of Stalin's and the USSR's mistakes - https://youtu.be/tmimHKLDWcU

And prolespod did a much longer series on Stalin (8 hours approx) - https://prolespod.libsyn.com/63-the-stalin-eras-an-introduction-1878-1917

David Duffy

I can't believe that I'm defending conventional morality but

0) We are back to means and ends - there are numerous setups where a good end is too "tainted" by the means to be generally accepted even if there is net beneficence.

1) In the voting situation, the legal principle is "silence implies assent" for a reason. If the outcome we were voting for was uncertain and turned out badly, blame would also spread to the "good people who did nothing". In a desert-type way of thinking, even if things turned out well, the voter panel might still be regarded as having obligations to the deceased person, e.g. to their family or class.

2) In bioethics, the situation with a large empirical literature is blood donation, where adjacent countries may have voluntary (with fewer or more non-monetary incentives) or paid donation. Both seem to be OK in a consequentialist-cum-public-health way, but, unsurprisingly, those who volunteer under one regime don't do so when others are paid. And I don't think that paid donation is obviously superior in outcomes. Organ donation, surrogate pregnancy, etc. are more fraught than essentially riskless blood donation, though, yes, perhaps less dangerous than freely undertaken extreme sports.

Mary M.

Hi, so under your view, a deontologist would have a morally repugnant outlook on something like the Tuskegee Syphilis Study? You would defend the consequentialist position with regard to this example?

Richard Y Chappell

The wise consequentialist position is that informed consent matters for instrumental reasons (preventing public mistrust, the risk of severe harms outweighing the positive value of the research, etc.). See: https://www.goodthoughts.blog/p/naive-instrumentalism-vs-principled

In ethical theory debates, we deliberately discuss unrealistically simplified cases. It's important not to naively extrapolate verdicts from thought experiments to (much more complex) real-life situations. See: https://www.goodthoughts.blog/p/astronomical-cake
