I think the answer to why one would prefer non-consequentialist norms is hidden in the initial framing. The choices are presented as "norms which are best for the collective on average vs. not best for the collective on average," as if we, deciding from a first-person perspective, were adopting the standpoint of the universal third person. But the other option here is to endorse norms that allow us, individually, to pursue our own personal projects at the expense of the abstract global utility function.
So if "why should I care?" is the ultimate litmus test, then the consequentialist is actually in a much harder position than the non-consequentialist because the choices aren't "well being or not well being" but rather "well being of some abstract collective vs. my own well being, or that of my loved ones." Once we make this appropriate reframe, the consequentialist is in a tougher spot.
One way around this issue for the consequentialist is to say "morality *just is* about universalizing/impartial norms" which I'm open to, but at that point the more fundamental question then becomes "why be moral?" which isn't a clearly answerable question as initially presented.
Anyways, great stuff! I'm enjoying this line of enquiry.
Yeah, insofar as the appeal rests on self-interest, that might better motivate *rejecting morality* rather than *accepting deontology*. Another intermediate position would be a non-maximizing (satisficing or scalar) form of consequentialism, though as one of the global privileged we might still fear others taking from us for the greater good. Deontology might better serve to protect the privileges of the elite.
I worry that something along these lines also implies that your fn4 requires a bit more work. We might all like 'agent-neutral' reasons from behind the veil of ignorance, but once we know we are in the global elite we're hardly going to be too disappointed if society forgets to redistribute the vast majority of our resources to the global south. Or to put it more positively: I might be donating a lot of money, but perhaps I'm merely utilitarian whereas some other consequentialist (egalitarian or prioritarian) wants to take even more of my money. I hope they act according to deontology (don't steal) rather than according to their flavor of consequentialism (redistribute more).
Self-interest might even be too strong. The individual could ground their decisions in internal standards of rationality they assess to be requirements of action. This would get you something akin to Kantian deontology, which would sometimes require not acting out of self-interest. So the first-person POV need not be based on egoism so much as on the agent's own vantage point for what they judge to be rational.
On the point about belief:
Thi Nguyen makes a point about games. The *aim* in playing a game is to win, but the *purpose* of playing a game is to have fun. Games are an interesting example where you adopt an aim that you don't have, because adopting that aim enables you to get something that you can't effectively get by aiming at it. (If you tried to play a game whose only rule was "have fun", it would be a lot like Calvinball, which isn't very fun at all.)
I think this is a useful way to think about epistemology. The *aim* of belief is to believe the truth. (I like to think of "evidence" as "anything that helps you get at the truth", so that it's possible to justify reliance on evidence - not because of the intrinsic nature of the thing that is treated as evidence, but only because it happens to be something that plays this functional role in this particular context.) But the *purpose* of being the kind of creature that has beliefs at all is that having beliefs (i.e., states that constitutively aim at the truth) turns out to be practically helpful for guiding action in various ways.
On this picture, the fact that sometimes it's practically worthwhile to do something that makes you believe a falsehood is just a parallel of the fact that sometimes you realize that the game you're playing is no fun and you all might as well quit and do something else.
Sorry, but I'm a bit confused about what the upshot of this is and how it helps. I mean, regardless of what purpose -- if any -- having beliefs may have, certain rules for updating those beliefs are truth-conducive and others are not. Even if the purpose of beliefs were to have false ones, it wouldn't change the facts about what things were evidence and what beliefs were justified.
Indeed, I'd argue that there isn't inherently any tension in the example above at all regarding beliefs. Any apparent tension is the result of conflating epistemic and practical/outcome rationality and there is no reason why they can't require incompatible behavior.
Hmm, maybe I'm just saying what you meant differently but it seemed to me you were suggesting there was some importance to be placed on this talk of purpose and aim. I guess I don't see that.
I do think most deontologists will be loath to say something like: it would be better if I believed I should act immorally. But it seems to me all the work is being done in the setup by postulating that there is a consequentialist notion of "preferable" in the first place. Once you are committed to saying "I morally ought to bring about things that it is (objectively) preferable I not bring about," you are in a really tough spot.
----
P.S. To be pedantic, I think when we play a game we usually do have the aim of winning the game. But I guess your point is we don't antecedently have the aim to have our tokens arranged like such and such? But I think I may be confused.
I have both a logical objection and what I think is a better response from the deontologist.
I think the best thing for the deontologist to do is simply deny that there is an all-things-considered notion of "people/the world being better off". The force of your argument really turns on them granting that there is such a notion and then rejecting it. And of course, that's tough in large part because of the temptation to bake 'is preferable' into our definition of better off.
Maybe they can admit that we can measure things like utility (though plenty of paradoxes there), but they can still deny that this amounts to people being better off overall. Especially if you aren't willing to bite the bullet (as I am) of accepting pure hedonic pleasure as the quantity being maximized, I think it's a pretty strong challenge to say: well, you like to think that notion is meaningful, but until you are willing to really commit to a clear way to measure it, I don't see why I should believe this concept is well-defined in the first place (and every clear way to measure it seems like something even many consequentialists aren't willing to endorse).
But at a more logical level I'm not sure if you can go from "People would be better off if X" to "I should prefer that X."
Indeed, isn't it kinda the whole point of a deontic POV to say exactly the opposite? Even when I know that people would be better off if I lie, I should still prefer not to lie. Just follow that all the way down, and they can also say I should prefer not to disengage from such and such to allow the consequentialist norm to triumph.
But yes, I do think it is a bit of a trick to presume they will assent to the claim that there is a consequentialist notion of being better off.
Something along those lines may be their best shot, but it's a tough one to pull off, for the sorts of reasons I discuss under the "It's Unavoidable" section of my old "Deontology and Preferability" post:
https://www.goodthoughts.blog/p/deontology-and-preferability#%C2%A7its-unavoidable
Note that I'm actually not intending to *reason* from "People would be better off if X" to "I should prefer that X." Rather, I'm inviting readers to *directly* consider what seems preferable, worth caring about, etc., and then *notice* that it would seem perverse to care more about moral abstractions than about people's lives and well-being.
The deontic POV is standardly expressed in terms of deontic judgments (which acts are permissible, impermissible, obligatory, or supererogatory) rather than what we should *prefer*. Now, one always *could* just bite the bullet and follow the deontological judgments "all the way down" as you say -- to form "robustly deontological" preferences over how you want the world to turn out -- but (1) that seems extremely counterintuitive in the Newspaper case, and (2) runs squarely into the argument of my "new paradox" paper: https://freeandequaljournal.org/article/id/18062/
I mean, I ultimately agree, because I've never found deontological moral views to be even slightly compelling. So I suspect that, to the extent we disagree, my position is basically: well, yeah, when you accepted a deontic approach in the first place you were committing yourself to some pretty counterintuitive positions.
However, I think where I disagree with you is ultimately at this part of IV.C of your very excellent article:
> For, given the connection between all-things-considered preferability and rational choice, it would follow that there is similarly no fact of the matter regarding what we ought, all things considered, to do in these cases.
I guess I don't see why that should follow. Even if you can show that, were there a coherent notion of preferability defined as you do, it would entail certain things about rational choice, it isn't clear to me why one needs to be committed to the position that, if the notion of preferability is incoherent, it therefore follows that there is no rationally required choice.
Specifically, you say
> First, I use “preferability” in the fitting attitudes sense: W2 ≻ W1 if and only if it is uniquely fitting (for an ideal observer) to prefer W2 over W1, all things considered.
The deontologist should shoot back that asking which possible world should be preferred fundamentally presumes there is an appropriate notion of preference that depends purely on the state of the world. In fact, they may want to insist that not only must these be centered possible worlds, they must even identify a particular timeslice (to deal with the issue of their own future deontic failures). Or they could take an even more extreme position and just deny that there is any uniquely fitting such preference at all, holding that there are only uniquely fitting preferences over your own acts or something.
BTW, I found the talk about partial versus complete rejection of preferability a bit confusing. I mean, you can have all sorts of considerations or hopes about different possibilities (like hoping the child isn't hit by lightning) yet simply deny there is a well-defined, coherent notion of what is uniquely fitting for an ideal observer to prefer. I worry there is a bit of slippage here between the definition and your talk of considerations or hopes, as those need not amount to a commitment to something which satisfies the definition.
---
But I agree that at the end of the day you really are forced to say something very weird as the deontologist. Looking at possible worlds and being like ¯\_(ツ)_/¯ "that's not how I think of preferability; tell me what my current timeslice is doing differently in the two worlds and I'll answer" does intuitively feel selfish, not moral, to me.
But that's why I always found deontology absurd from the get-go. Whatever those people might be talking about, it sure isn't anything I would think of as morality.
You could run the argument over centered worlds; it shouldn't make too much difference when assessing what *disinterested observers* should prefer (I do allow that agents may have different preferences from observers, due to their different position, as may observers who are related to one victim, etc.)
We just get different pressure points depending on which way the deontologist goes—"robustly" requiring disinterested observers to share distinctively deontological preferences (which is the view that paper targets), or opting for an approach on which deontological reasons are *purely* agent-relative (and so irrelevant to observers), which is more what my current blog posts are targeting.
I think the lightning example rules out the most extreme view you mention ("just deny that there is any uniquely fitting such preference at all, holding that there are only uniquely fitting preferences over your own acts or something"). So the challenge for the deontologist is to develop a plausible account of why preferability should be "gappy", together with a principled reason to suspend the general connection between preferability and rational choice in just those gappy cases.
If they can develop such a response, I think it would be an interesting one!
Ohh yes, as I said, I think there is real pressure here. Yes, they would basically have to say that what is going on when disinterested observers 'prefer' isn't preferring in anything like the same sense (it's hoping, but doesn't connect up to reasons for action), and that is very weird.
Can something in the vicinity be used as a general argument against moral realism? Imagine that we somehow discovered that our grasp of moral facts is completely wrong and that the only moral fact is that we each ought to collect 589281 paperclips. It seems like in this scenario we would just stop caring about the moral facts. But doesn't that at least imply that on some meta-level we already aren't fully committed to caring about moral facts, but only care about them if, for example, they line up with our desires or sensibilities?
Good question! I discuss this more in "Metaethics and Unconditional Mattering": https://www.goodthoughts.blog/p/metaethics-and-unconditional-mattering
Nice post! I think your paradox for deontologists nicely shows that they'll have to prefer that, say, people push the fat man. But if you prefer others push the fat man, then you should try to dispose yourself to push the fat man.
I basically think this is one of the reasons the version of deontology that prefers others take consequentialist actions in cases of conflict goes off the rails. The core problem is that preferences are tied up with what one tries to bring about, what one tries to convince others of, what one regrets, etc. So if you want others to behave as consequentialists, you have no reason to prevent wrong acts, regret wrong acts, etc.
Now, you can have an intermediate view where you hope that OTHERS follow the deontic norms but you hope YOU don't follow the deontic norm. But this view just seems nuts. I mean, if we agree that what you hope and what you try to do come apart, then why should you hope that you do the right thing even if it's predictably worse? You also get the weird result where, if you watch a video where either you or your twin considered pushing the fat man off the bridge, you hope if it was you that you didn't push, while if it was your twin, they did. But how could that be? The reasons for action are the same! At that point, I feel the original paradox of deontology has lots of force--it just seems like a weird form of egoism!
Right, in my book manuscript I develop a version of the Newspaper case where the reader has amnesia and isn't sure whether *they* might have been the agent or not. Seems incredibly weird (and objectionably self-centered) that their hopes about what happened to the six victims should turn on this self-locating fact!
The whole structure of the argument presupposes consequentialism's evaluative vocabulary from the start—"lamentable," "better outcomes," what "a decent person would hope for"—and then uses that to demonstrate that non-consequentialist norms are, by consequentialist lights, bad. It's essentially: using Framework C to judge all opposing frameworks, any other framework loses, therefore all losers must coordinate their (moral) failings.
The disjunction at the end is only doing cosmetic work. "Either consequentialism is correct or coordinate against morality" is obviously question-begging, like so much of the whole screed. It could only be parsed otherwise if one already accepted that "lamentable by consequentialist standards" = "always unacceptable, full stop." I'm actually shocked people are humoring this as anything more than the lopsided axe-grinding that it is.
There's nothing distinctively consequentialist about the concepts of "lamentability" or "what decent people would hope for"; they're *fitting attitude* assessments (which many philosophers actually - though mistakenly - believe to be *incompatible* with consequentialism). For relevant background, see:
* https://www.goodthoughts.blog/p/deontology-and-preferability
* https://www.goodthoughts.blog/p/the-utilitarian-tradition-is-conceptually
As with any argument, it's open to the deontologist to dispute my particular judgments: they might, for example, embrace the "robust deontology" view that decent people should sooner hope to read that the five died in the Newspaper case. (Just as a bullet-biting consequentialist might embrace the view that the surgeon ought to harvest the one's organs in Transplant.) Neither the Newspaper objection to deontology nor the Transplant objection to Consequentialism is thereby "obviously question-begging". They draw attention to potential costs of a view, and it's up to the defender to say what they can to defuse the cost.
Seriously, try applying your style of complaint to any objection deontologists offer to consequentialism. You'll soon find that your standards are simply incompatible with doing philosophy at all. Any objection whatsoever becomes "question-begging" and "lopsided axe grinding". Pfft.
The charge isn't that thought experiments can never draw attention to costs of a view. It's that your thought experiment is constructed so that only one type of cost registers as morally relevant. That's a design problem, not a generic objection to objections.
It's telling that you keep directly name-checking deontology when cornered, despite my never bringing it up in either my note reply or my top-level reply in your comment section. I am not a deontologist. I am not, unlike you, a system chauvinist. But when a system chauvinist decides to make the bold dual claims that anything except their system is wrong AND that they can prove it, I think it's worth holding both those claims to the strictest standards.
What I see every time I encounter you making such claims, however, is strawmanning of, or failure to even account for, most/all objections. That you're now saying you need the freedom to do that or you can't do philosophy at all is a pretty damning self-report. I'm beginning to wonder if this is all an even bigger waste of time than I'd feared when I ignored your question-begging provocation from last week.
I don't see you making any complaint that I couldn't just as easily make against literally every objection to consequentialism. You don't get to dismiss putative counterexamples as having a "design problem" just because they frame things in a way that's awkward for one's view. That's how counterexamples work!
If there's a real flaw in the argument, one demonstrates this through *engagement*, like how consequentialists engage with the Transplant thought experiment, to explain why we aren't convinced that it really works as a counterexample. If you can do that, do it. If you can't, then you're just making excuses. The proof is in the pudding; there is no point to such vacuously meta comments as you're offering here.
The Newspaper case assumes that the fitting attitude toward an outcome is the decisive consideration, but that's precisely what's contested. A non-systematic thinker would say the domain of agential obligation operates under a different normative logic than the domain of outcome assessment, and that both generate genuine fitting attitudes that don't reduce to each other. Your case is designed so only one registers. That presupposes the unification of those domains, which, like the other details, is under dispute.
The deeper problem is that consequentialism's unification assumption (the idea that ALL moral questions across ALL domains answer to a single master principle) is itself unargued in your posts. You treat it as a methodological necessity when it's a substantive metaphysical claim that can't simply be taken for granted. When a framework consistently generates verdicts that outrun anyone's actual endorsement and requires constant patching at the edges, that's a potentially *fatal* flaw. The parsimonious explanation isn't that we need better consequentialism but that your preferred system was never fit for purpose as a catch-all answer. It's not equipped to handle domains it was never justified in claiming.
Thank you for *finally* saying something substantive; it would have been a much better exchange if you'd simply started with this. (Especially since you've now edited out the insults. I was about to remind you that this is my space and I will ban rude guests when my patience runs out.)
> "The Newspaper case assumes that the fitting attitude toward an outcome is the decisive consideration"
Decisive for what, exactly? Nobody could deny that what we ought to prefer has some relevance to practical rationality, which is the main use I make of the fitting attitude verdict.
Moreover, I've argued elsewhere (e.g. https://freeandequaljournal.org/article/id/18062/ ) that it would be hard to entirely disconnect one's preferences about act-inclusive outcomes from the rest of one's agency/psychology. Preferences are intimately tied to emotions such as hope, regret, etc., and it's very hard to make sense of one regarding an action as truly morally wrong while also hoping that one does it, etc.
So I think it will take a lot more work to get a response along your lines to work, but I'd certainly be curious to hear someone try to work out the details.
For the record, you should be aware that plenty of non-consequentialists agree with me that one's attitudes should cohere, such that agents should not prefer an act-inclusive outcome in which they act wrongly. (This doesn't require a "single master principle": one could be a coherent pluralist! But it does require coherence, in a way that makes the identified fitting attitude verdicts very dialectically significant.) So my argument (even in its first and strongest form, before even considering the two fallback options I subsequently develop!) is very plainly not question-begging: it starts from an assumption that many — perhaps most — of my interlocutors can and do accept.
You criticize “insults” but sure love to sling them. Oh well.
The coherence point is fair, but coherence is symmetric: it doesn't specify which attitude anchors the other. A non-systematist might say the permissibility assessment comes first and constrains what one can coherently hope for. Your Newspaper case assumes the outcome-preference is the anchor. That's still the presupposition under dispute, and "relevant to practical rationality" doesn't close the gap to "therefore coordinate against deontology," which is the actual conclusion you're drawing.
As for banning me over "rudeness," I'll remind you that your first reply elsewhere in this multi thread exchange claims that you don't care about niceties and welcomed blunt engagement. Apparently that offer wasn't genuine. Good to know. I’ll depart. Feel free to ban or block if you think that’s actually a defensible course of action. I'll not be bothering to engage again regardless.
"Since moral subjects could generally anticipate being better off if agents successfully followed utilitarian norms, there seem clear reasons for us to prefer utilitarian (rather than deontological) norms to be successfully followed."
There is no truly utilitarian agreement that has ever existed between agents.
Every agreement establishes a framework of rights and entitlements where "welfare" (however calculated) is merely one factor among many. All consensual agreements are inherently deontological.
Utilitarianism would never be agreed to by consenting adults, let alone serve as a universal moral framework. This should be viewed as an immediate defeater for the theory.
What if the newspaper article described the trolley problem, and you learned the man pulled the lever to save five, becoming responsible for the death of one? But as you read on, you learn that he didn't have full knowledge of the situation he found himself in: it was a performance, or "magic trick," in which the five were never in any real danger and would have escaped. By flipping the switch he killed one when none would have died had he done nothing. We can assume away all uncertainty to rid ourselves of this problem, but that's not real life.
Deontological rules generally seem to be conditional: don't physically harm someone (unless you are defending yourself), etc. In a situation like the trolley problem, the immoral act was the tying of people to the tracks, not the lever pulling--in other words it's not a real moral dilemma. There are too many unknown variables to assume away. If we assume them all away, then deontology could conditionally tell us to pull the lever, killing one to save five.
If we imagine a different case, where you either kill 1 or else *zero* die, then we don't get any disagreement between consequentialism and deontology. To assess the theories, we need to consider how they differ.
Agreed. We also need to consider why they differ, beyond 'consequence vs. rule'.