I agree that this is a strong challenge to non-consequentialists.
Suppose the non-consequentialist points to cases of mutually beneficial exploitation. You are stuck in a pit and will die of exposure or hunger if you are not rescued. Since the pit is in a very isolated area, the chances of someone happening on you by accident are very low. Still, I luckily cross your path and offer to help you out of the pit if you agree to work for me for a dollar a day for the next year. You agree and we both benefit: your life is saved, against the odds, and I get cheap labor for the next year.
Still, even though I have saved your life, and even though you are far better off by making the deal than by rejecting it, it seems plain that I have wronged you. But then not only have I wronged you without harming you but I've wronged you while making you better off than you would have been.
I'm sure you've considered this general sort of case and would be interested in hearing how you respond to it.
It's good to allow *sufficient* reward to make it worth the rescuer's while to find people to rescue. But if you allow *unlimited* leeway to make demands, the *extra* cost to the rescued party clearly outweighs the extra benefit to the agent. (We wouldn't agree to a year of slave-wages for ourselves merely in order to later secure a year of cheap labor from someone else.) So it's clearly welfare-promoting to constrain what can be demanded in such situations. We would all have good reason to agree to such limits from behind a veil of ignorance, for example.
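To put that comparison in toy terms (my own sketch; the symbols and the equal-chance setup are invented, not the commenter's): suppose that behind the veil you are equally likely to end up as the trapped party or as the rescuer, that a year at slave wages costs the rescued party $C$, and that the cheap labor benefits the rescuer by only $B < C$. Holding fixed that the rescue happens under either norm,

$$
\mathbb{E}[\text{unlimited demands}] - \mathbb{E}[\text{constrained demands}] = \tfrac{1}{2}B - \tfrac{1}{2}C < 0,
$$

so everyone prefers the constrained norm ex ante, which is just the veil-of-ignorance point restated.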
I think there's another important component of this, which is that fairness can be valued (and I certainly value it) as a separate, additional thing to well-being.
E.g., behind the veil of ignorance, I could prefer a situation where I'm 99% likely to be much worse off, because I care about the 1% who are *significantly* worse off.
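To see how such a preference can be coherent (a toy model with numbers of my own, not the commenter's): let priority-weighted value be $V = \sum_i p_i\, g(w_i)$ for a strongly concave weighting $g$, say $g(w) = -1/w$. Compare option X (99% chance of welfare 100, 1% chance of welfare 0.1) with option Y (99% chance of welfare 50, 1% chance of welfare 10):

$$
V(X) = 0.99\left(-\tfrac{1}{100}\right) + 0.01\left(-\tfrac{1}{0.1}\right) \approx -0.110, \qquad V(Y) = 0.99\left(-\tfrac{1}{50}\right) + 0.01\left(-\tfrac{1}{10}\right) \approx -0.021.
$$

Y comes out ahead even though it makes you worse off with 99% probability, because the worst-off position fares so much better under it.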
I think what you are getting across is a view of harm called comparativism.

I have an argument against it:

P1) The thought experiment, as in pre-emption cases, succeeds. (P)
P2) If P1, then comparativism is false. [P → ¬Q]
C1) Therefore, comparativism is false. (¬Q) [MP 1, 2]

A generic thought experiment about the pre-emption case:

At t1, either A or B will kill S at t2.
A and B are independent sufficient causes.
A acts slightly earlier and kills S.
If A had not acted, B would have killed S at the same time in the same way.

Actual world: S dies at t2 because of A.
Nearest ¬A world: S dies at t2 because of B.

So, S is not worse off with A than without A.

But many (including me) find it intuitive that A does harm S. If that judgment is correct, comparativism is false.
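To make the target view explicit (my gloss on the commenter's setup; the notation is mine): comparativism holds that

$$
A \text{ harms } S \iff w_{@}(S) < w_{\neg A}(S),
$$

where $w_{@}(S)$ is $S$'s welfare in the actual world and $w_{\neg A}(S)$ is $S$'s welfare in the nearest world where $A$ does not act. In the pre-emption case, $S$ dies at $t_2$ either way, so $w_{@}(S) = w_{\neg A}(S)$ and comparativism entails that $A$ does not harm $S$. Anyone who finds it intuitive that $A$ does harm $S$ thereby has a counterexample, which is all that P2 requires.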
I feel this is stuck on blame, and thus on causation, where there is a correlation, here induced by timing. If person C intervenes to block A but not B, we might see them as harming S, depending on whether they perceived and had the ability to block both pathways. The total harm to S, "objectively", is the same, whatever the mechanism. In Dejardin's case above, the rescuer could have done better - cue supererogation arguments.

Doesn't it aid the general welfare more to have citizens who will help each other out of pits for no reward, rather than for a year of servitude?

How about when an action hurts overall well-being by an arbitrarily small amount (or simply doesn't add to overall well-being) but serves deontological thingamajigs to a substantial degree? I think that maybe a Rossian deontology/pluralism (one that almost always approximates consequentialism in our particular contingent world, ripe as it is with various inequalities and EA/Longtermist opportunities) is the only moral theory that accounts for all the data points here.
Could be worth accepting for "moral uncertainty" reasons, with an underlying thought like "What if this weird causal relation *really does matter immensely*, for reasons that I simply cannot fathom?" If the welfare cost is sufficiently low, maybe it's worth acting on such incomprehensible reasons. But it's notable that the underlying motivation here still involves a striking kind of normative alienation: a feeling of "this makes no sense to me, but just in case..."
This is far more appealing to me, though I believe there is (or, yes yes maybe just wish for there to be, haha) something we can ground morality in: agency, or actions aligned with reality in a "natural" sense. I'm exploring the idea of separating moral duties from virtuous behavior to try and clarify this structure, but I need to continue to think through it.
Regarding the point about caring about abstractions more than people:
You respond to the alienation objection to utilitarianism by pointing out that the utilitarian cares about overall wellbeing only because they care about each individual person. This eliminates the worry that the utilitarian cares more about some abstract thing than actual people. I wonder if the same thing can be said from a deontological perspective: I care about the moral rules only because I care about each individual person, and the respect their intrinsic value demands from me. What do you think of this move?
Also, I just finished your introduction to utilitarianism. It was great! Thanks for that
This would be the most promising style of answer, I think. But it awaits spelling-out: insofar as the concept of well-being is precisely that of what is worth caring about *for the sake of the individual in question*, it's conceptually puzzling how one could claim to be acting *for others' sake* while doing what is collectively worse for them.

Or as I put it here: https://www.goodthoughts.blog/p/bleeding-heart-consequentialism#footnote-anchor-4-96538931
"As a potential victim, I care a lot about whether I end up dead, and very little about the causal details of precisely how I end up dead. Moral agents should take others’ interests and preferences into account. To prefer that five of us end up dead, rather than just one dead via a special causal chain, is implicitly to treat the special causal chain as more significant than four people’s lives. That’s pretty awful, IMO, and disrespectful of our value as persons.
It’s not as though the one cares vastly more about not being killed in this way than the five each care about being rescued, after all. So when deontologists prioritize the former over the latter, they are acting in a way that cannot be justified by reference to the interests or preferences of the affected parties. They’re introducing a novel (moralized) preference of their own into the situation, and treating it as more important than what the affected parties care about (their very lives)."
“Why do you care more about abstractions than about real people? Seems bad!”
The flaw with this sort of argument is that the person considering whether to betray his country or his friend would use it as justification to betray his country, which after all is just an abstraction. I take it your answer is that the country consists of a bunch of concrete people, each of whom he ought to care about; but again, that seems like an abstract concern compared to the more direct, concrete emotional attachments he has to his friend and so on.

Consequentialist considerations seem to require all kinds of abstract moves. For example, one needs a way to figure out how to add and subtract pleasure and pain, desire and repulsion, satisfaction and frustration, or whatever; there are various abstract laws one could use to do it, but whichever one you choose, why choose that abstraction over actual people? This is hardly just a problem for consequentialism: the addict must decide whether to pursue their immediate cravings, enjoying the expected temporary highs, or to try to overcome those immediate impulses by focusing on their current pains and discomforts and their expectations of more to come. Either desire might claim the other is the mere abstraction, with as much reason. I'm not sure that matters.

As soon as we allow that an abstraction can cut off the force of a concrete person's or thing's demand, your dictum loses its force.

What you really meant was "Why do you care more about this abstraction than about this other abstraction I've constructed, which is (at least according to me) about real people?" To which the answer would presumably be: because I subscribe to different principles about which abstractions I should care about.
I'm not sure I disagree with your substantive point, but I think your rhetoric rather badly misses the point.
A similar point: your rhetoric seems to make it mysterious why people would have any end in itself (any non-instrumental goods). But I take it you think things like pleasure (or desire satisfaction or something like that) are non-instrumental goods. Why couldn't some non-instrumental goods turn out to be deontological moral goods? Then it just looks like you are engaged in semantic quibbles. Of course people can pursue honesty as an end in itself; they just have to pursue it as a pleasure, not as a moral commitment. This just sounds like a semantic quibble about what "moral" means, not a substantive point.

Also, it just seems like there could be edge cases where we could easily recognize that some deontological or virtue consideration makes otherwise identical cases discriminable. Imagine there is some medication that can either cure a billion headaches or save one life (a life whose expected utility, if they live, happens to equal 1/2 that of a billion avoided headaches). Imagine that if the billion people with the headache believe they are sacrificing for the sake of that one person, they will derive an amount of utility from satisfaction (and from the indirect effects of the increased instrumental efficacy of self-restraint in the population, etc.) equal to 1/2 the disutility of the headache. From the point of view of general welfare, then, the case where everyone takes the medication for their headache is the same as the one where everyone with a headache nobly sacrifices so the person lives. However, it hardly seems hard-hearted or even odd to view the case where a billion people display self-sacrifice as preferable, though it entails no greater welfare. It is one thing to claim that general welfare is always a trump, but you come off as arguing the stronger claim that it must be the only moral factor.
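To verify the claimed welfare equivalence (my own bookkeeping of the commenter's stipulations; the symbols are mine): let $h$ be the disutility of one headache, let $n = 10^9$, and take the no-medication outcome (headaches endured, patient dies) as the zero baseline. The stipulations are that the saved life is worth $L = nh/2$ and that each noble sacrificer gains satisfaction worth $h/2$:

$$
\begin{aligned}
\text{Cure the headaches:}\quad & nh \;(\text{relief}) + 0 \;(\text{patient still dies}) = nh,\\
\text{Save the life:}\quad & \tfrac{nh}{2} \;(\text{life saved}) + n \cdot \tfrac{h}{2} \;(\text{satisfaction}) = nh.
\end{aligned}
$$

Total welfare comes out identical either way, so any preference between the two scenarios must rest on something other than welfare, which is exactly the commenter's point.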
Great! I think this gets to the heart of the dispute.
I think there's an important distinction between what we (fundamentally) care about versus other (e.g. epistemic) cognitive contributions to our decision-making. I agree that practical reasoning in general involves "all kinds of abstract moves". But a distinctive thing about consequentialism is that it allows all of our *fundamental concerns* to be about concrete individuals.
Caring about individuals' well-being seems self-evidently reasonable. (Indeed, one way of conceptualizing *well-being* is that it is whatever is worth caring about for an individual's own sake.)
Now consider how it seems questionable (in general) to prioritize other things over well-being. If you run into a burning building and rescue the paperclips rather than the people, it's gonna raise some questions. (Like: "WHY!?!?!") Traditionally, one of our most powerful ways to object to harmful or arbitrary ethical views (e.g. conservative views of sexual ethics) has been to ask, "Who does this norm help? Where's the harm? For whose sake should people prioritize religious teachings over their own desires and preferences?"
I'm basically wanting to generalize that methodology to suggest that non-consequentialist views should be regarded as akin to conservative religious ethics. Both direct us to care about things that — on further reflection — it doesn't make much sense to prioritize over people's lives and well-being.

Great post.

"As a result, if deontology were true, I'd rather be a beneficent amoralist, saying "screw morality; just be good.""

- this is part of the reason I defend beneficent amoralism/nihilism. Utilitarianism is a radical re-engineered approach for how to do ethics that doesn't capture intuitions well. But so what? https://walterveit.substack.com/p/why-effective-altruists-and-everyone
I think it's important that there are (real) *normative reasons* to care about others (and act accordingly), to vindicate the claim that the non- and anti-beneficent are making a significant practical *mistake*.
I think that almost all of us have such normative reasons. But they are of an instrumental sort, because almost all of us care about suffering in others. But I can't see why someone lacking any and all prosocial emotions would have such reasons. (I'd love to be convinced otherwise!)
Do you have any argument for your claim that morality should serve the *welfare* interests of moral subjects rather than serve all their interests (including their non-welfare interests)? Arguably, I have an interest in your respecting my autonomy quite apart from whether your doing so would promote my welfare interests. For instance, I have an interest in your not injecting me with substances without my autonomous consent, and that holds even if your injecting me with a given substance without my autonomous consent would promote my welfare interests.
Not so much an argument as a question: Why *wouldn't* you want someone to inject you with helpful substances? Surely you would consent to it, given that it's good for you, unless you're irrational (like a young child) in which case what business do you have overriding people who are better positioned to serve your true interests?
In practice I don't want random people injecting me with stuff without asking because I don't trust them to actually know better than I do what would be good for me. Consent in this case seems purely a tool for protecting my welfare interests.
Well, if you concede that, for all you've argued, we may have interests (things that we rightly care about for their own sakes) besides our interest in acquiring more well-being, then we have no reason to accept your third premise, which claims: If putatively non-consequentialist norms don't serve the general welfare, then we should collectively agree to replace those non-consequentialist norms with alternatives that better serve moral subjects. After all, even if you're right that "morality should serve the interests of moral subjects," it doesn't follow that we should replace putatively non-consequentialist norms with alternatives that better serve only a proper subset of our interests. In any case, deontologists clearly think that we have a non-derivative interest in having our autonomy respected. So, if you're just going to deny this by claiming that our only interest is in greater well-being, then this argument is going to have very little dialectical force against the deontologist.
I'd be curious to hear your answers to the questions I asked.
But anyway, we can tweak P3 to make it more general — consequentialism needn't be strictly welfarist. If you give more weight to autonomy, you could end up with some form of autonomy consequentialism, for example:

https://www.goodthoughts.blog/p/autonomy-consequentialism

Nothing in this vein would seem to motivate the distinctive structure of (non-instrumental) deontic constraints.
You have to do more than "tweak" P3. You'll have to change P1 too. And I'm not sure what the justification for P2 will be once you make the necessary changes throughout. Unlike well-being, respect for autonomy is not something we pro tanto ought to promote — at least not in the sense of maximizing the total amount of it (or instances of it). Perhaps, you'll say that consequentialism offers a better explanation of why we should endorse both welfare-promoting norms and autonomy-respecting norms. But this isn't so obvious, at least, not given the way that I take you to be understanding 'consequentialism'.
You asked: "Why *wouldn't* you want someone to inject [me] with helpful substances?" It's not that I don't want them to inject me with helpful substances. It's that I don't want them to inject me with helpful substances without first getting my autonomous consent. And that's not only because I don't trust people with that sort of discretion, but also because I want them to respect my rational nature, which would involve their giving me the relevant information, appealing to my reason in the hopes of getting my autonomous consent, and refusing to proceed without it.
I guess that you're fine with people injecting you with helpful substances (say, while you're under general anesthesia for some other procedure) so long as they are genuinely helpful and you are no worse off in terms of your well-being for their doing this on the sly rather than on the up-and-up. (Note that I'm not talking about whether you would be fine with this becoming a social or institutional practice of some sort. I realize that you would find that problematic.)
Of course, maybe you don't care whether people respect your rational nature. Perhaps, you would be fine with a knowledgeable and trustworthy person injecting you with these substances without your even knowing. Fair enough. But you're not going to get far arguing against deontology if you do so by assuming that others don't care about having their autonomy respected, which is what you seemed to do in your original formulation of the argument, even if only inadvertently.
I find your case a bit puzzling, because I don't get what the agent's motivation is supposed to be for injecting me "on the sly" when they could easily first ask, and a policy of asking for consent is such a good one for protecting against risks of abuse. (I'd certainly *worry* about such an agent, but I think my reasons for being worried are ultimately entirely instrumental in nature.)
If the agent is an oracle who *knows* I will consent, it seems like they might be able to take my consenting disposition as sufficient - why waste time verbalizing it? By deferring to my consenting disposition, it seems they may still just as well qualify as respecting my rational nature.
Alternatively, if I would *not* consent, due to ignorance or irrationality, but I obviously *would* consent if I knew all the facts and was thinking clearly (this seems entailed by the fact that it's in my interests and hurts no-one else), where is my "rational nature" that they are supposed to respect? Over in the counterfactual world where I consent to the injection, not in this one where I don't. It's puzzling to suggest that deferring to temporary irrationality is a way of respecting one's rational nature.
So I'm overall puzzled by the suggestion that respect for one's rational nature has anything to do with deontology or securing consent in this sort of case. Can you point to an argument that credibly explains (rather than assumes) this connection?
***
re: "Unlike well-being, respect for autonomy is not something we pro tanto ought to promote" - The key move I'm making is inviting you to reflect, from a third-personal perspective, on what norms you have reason to want others to follow. As far as your own autonomy is concerned, it seems you have most reason to want violations to be minimized rather than respected in each instance (if the latter would result in your being more violated overall). You should prefer to be violated only once rather than five times, all else equal. So even our autonomy interests point towards wanting others to violate deontic constraints in "paradox of deontology"-type cases.
So there seems to be the basis here for a kind of contractarian argument for consequentialism, whatever the precise nature of our "interests" turns out to be.
"And that's not only because I don't trust people with that sort of discretion, but also because I want them to respect my rational nature, which would involve their giving me the relevant information, appealing to my reason in the hopes of getting my autonomous consent, and refusing to proceed without it."
Isn't that something that could still be considered part of "welfare" and "consequences"?
I'm starting to wonder if there's no real difference here and it's just a semantics thing.
I think it all boils down to how expansive your conception of "well-being" is - if you think that part of what it means for someone to do well, or live well, is to display certain virtues and avoid certain vices, or to avoid certain impermissible acts and fulfill certain duties, then it's not too tricky to collapse almost any moral theory into a form of utilitarianism. But if you don't include those constraints, then it's not weird at all to me that we might sometimes sacrifice "amoral" well-being so as to avoid doing something horrible - plenty of deeply immoral pleasures just don't seem valuable in the first place to me!
Yeah, I'm drawn to expansive conceptions of well-being (feel free to exclude the evil pleasures!). But I don't think that makes the point trivial. On my understanding, the distinction between consequentialism and non-consequentialism is not axiological but structural: whether one assigns *non-instrumental* significance to deontic constraints. It's this particular normative commitment that I think makes no sense on reflection.
I ultimately agree with you, but as Lane Taylor said above, I do think there are commonsense ways of understanding deontological constraints as non-instrumentally significant but still connected in some deep way to a care for the specific individual involved and not the constraints in the abstract. Like, I think not pulling the trolley lever is crazy, but I still think a deontologist could honestly say they refrain out of concern for the dignity of the person on the track themselves, and not just because the rulebook says you can't. (Still a terrible decision, though!)
To me, the most compelling non-consequentialist reasons for moral norms are epistemic. I think it's good and important that we collectively believe true things, even in cases where doing so makes us worse off. For example, there might be a possible set of false religious ideas that would make us all better off if we all believed them, but it still seems bad to build a society around false ideas like that.
True beliefs are often instrumentally valuable in the long run. But in the exceptional cases where they are not, I think we may (rationally and correctly) act upon ourselves — e.g. by taking a magic belief-adjusting pill, were such a thing possible (maybe antidepressants count in some circumstances!) — to replace bad true beliefs with better false ones.
(Suppose an evil demon will torture everyone unless you take a magic pill that makes you believe that grass is purple. You should take the pill, because having true beliefs doesn’t matter compared to preventing torture.)
I agree. When it comes to hard cases, deontology collapses into consequentialism anyway. But I do think that ‘rights talk’ etc. is useful as a utility shortcut. Most of the time I can assume that, e.g., free speech is a good thing or torture is wrong. Rights are utility shortcuts that have proven useful so often that we don’t want to have to keep calculating, even though the underlying logic is still consequentialist.
Have you ever struggled through Kamm’s Morality, Mortality? She’s got some of the best stuff on this (particularly in Vol. 2), but it’s not easy to get through.

Only in parts. I wasn't impressed by her take that recognizing instrumental value is incompatible with valuing people as ends in themselves - see https://www.utilitarianism.net/objections-to-utilitarianism/mere-means/#instrumental-favoritism - but I should try some other chapters!
Somewhat disagree: what’s wrong with the answer that it’s just analytic that what you shouldn’t do is what’s wrong even if it makes the world best? That’s what it means to be wrong!
Don’t get what you’re saying about changing the moral law. That’s not something we can do!
Compare the epistemic case. Even if epistemic norms govern belief (such that contrary-to-evidence beliefs are *objectively irrational* qua belief, no matter how instrumentally beneficial), we may—rationally and correctly—indirectly act upon ourselves to bring about better beliefs.
(Suppose an evil demon will torture everyone unless you take a magic pill that makes you believe that grass is purple. You should take the pill, because having justified beliefs doesn’t matter compared to preventing torture.)
I’m now suggesting exactly the same thing when it comes to moral norms. We can act indirectly to replace bad correct ones with better incorrect ones. There’s no reason to care about what’s “correct” when it comes apart from what’s good.
The argument is elegant and internally consistent, which is precisely what makes it worth pressing on. Consequentialism of this kind wins by setting the terms before the debate begins. Once welfare becomes the only legitimate currency of moral reasoning, non-consequentialist concerns do indeed look like arbitrary fetishes. But I think that move deserves scrutiny before we accept its conclusions.
The "harmless hypothetical" is doing enormous work here, and I am not sure it can bear the weight. It assumes that harm is fully measurable, that all relevant consequences can be surveyed in advance, and that a practice can be cleanly isolated from the social fabric it inhabits. These are not small assumptions. They are the entire contested terrain.
Here is what I keep coming back to: taboos are rarely arbitrary emotional fixations. More often they are compressed social knowledge, the accumulated result of communities learning across generations what kinds of practices corrode trust, dignity, and the basic conditions for living together. The fact that we cannot always produce a clean consequentialist justification for a taboo does not mean the justification is absent. It may mean it is too historically embedded, or too dependent on second and third order effects, to survive translation into a welfare calculus.
And I think the dilemma itself assumes something that history consistently undermines: that we can reliably identify which norms serve welfare and which do not. The track record of dismantling accumulated social wisdom in the name of rational reconstruction is, to put it gently, mixed.
Morality is made for man. But man is also, in part, made by morality. That recursion is what the harmless hypothetical cannot accommodate.
Personally, my problem with consequentialism is not that it's wrong (ignoring meta-ethics for a second), it is that it's prescriptively only part of the equation, and retrospectively so tautological that it's not actually useful.
A prescriptive theory of ethics should help you decide what to do. And sometimes when deciding what to do, you have to say, "Screw the consequences. I'm going to do what is right." An example of this might be Jet Li's character in the movie Hero. It's not immediately obvious from a consequentialist perspective why the main character accepts his fate at the end of the movie (I'd rather not spoil it, but it's worth a watch). But he did what he thought was right.
Retrospectively, you can spin up a story about how actually there are good consequences to the character accepting his fate. But that's post hoc justification - the character himself is not considering the consequences. You might also argue, from a consequentialist perspective, that the set of norms Jet Li's character relies on can itself be justified, and that the justification relies on consequentialism. But again, that is post hoc justification; Chinese culture is under no obligation to justify its norms on consequentialist grounds. You can have a coherent set of ethics that is not grounded so.

Another path a consequentialist might take is to explain the event in such a way that the actual outcome counts as good, and therefore accepting his fate is actually consequentialism in disguise - but that's not a serious argument; it's a meaningless tautology. For a consequentialist, such a tautology may feel like it justifies itself. But as a non-consequentialist, such reasoning sounds childish and I don't even know how to respond. "Consequences" are just literally not how I define "good", and what more is there to say?
So, I suppose I'm closest to the second of your three dilemmas in that I'm not explicitly non-consequentialist - I believe consequences should be taken into account. But I deny the claim that "Consequentialism offers a better explanation of why we should endorse welfare-promoting norms" and will say instead that as a prescriptive theory it is too narrow to be uniquely useful or correct, and as a retrospective theory it is anemic and incapable of actually providing the justification that consequentialists want it to provide.
> A prescriptive theory of ethics should help you decide what to do.
See my post 'What Ethical Theory Is' - https://www.goodthoughts.blog/p/what-ethical-theory-is - for a cautionary note on that assumption. Like the relation between scientific theory and technical engineering, ethical theories may only *very indirectly* help our practical decision-making. It is not their primary task. (But I do think the indirect value of theoretical knowledge, in both science and ethics, is very high!)
There is a very easy answer to this. Your theory doesn’t care about humanity in itself. I do. So if you propose something that is “beneficial” in general but catastrophic for humanity, I’m going to oppose it.
If someone told me they had to kill my entire demographic to benefit humanity, I would reject that too.
The thing about humanity is that morality does not exist outside of us. There is no ethics among animals. Chickens don’t care about your rights. If you try to argue with one, you might as well argue with a rock and I don’t care about rock morality. People use morality to cooperate with other people and build civilization. Animals are not part of the project of civilization. At the very least, it makes sense to say that since animals are incapable of playing by any rules, we should treat them differently.
Well written, but this just strikes me as a consequentialist argument for consequentialism; it's not going to persuade anyone new. You take issue with deontological rules because they might hold us back from maximizing general welfare in some possible cases, but that's the whole point of deontological rules - to prevent a "tyranny of the welfare majority" scenario like killing one patient to save the five. Deontologists care about the general welfare just as much as consequentialists do; they just think there are some ethical lines you can't rightfully cross when trying to maximize good outcomes, which is a common moral intuition.
My key move is not to "take issue with deontological rules because they might hold us back from maximizing general welfare in some possible cases", but to invite the reader to consider what norms we have third-personal reason to *want* others to follow. This is a question that most ethicists don't talk about (they're more focused on forming direct deontic judgments), so I think it's more novel than you appreciate.
Rule consequentialists talk extensively about which social norms and folk morality would lead to the best overall welfare (or whatever's involved in the best moral outcome). Deontologists don't take themselves to be forming anything; they believe they are describing mind-independent moral laws. I think you're assuming deontologists have the same goals when doing ethics as rule consequentialists do.
You misunderstand me. To attempt to "describe mind-independent moral laws" is to make (or "form") a deontic *judgment* about what one takes that moral reality to be. That's exactly what I'm saying is standard operating procedure in ethics, and the reason why it is novel to invite deontologists to reflect on the question of whether their "true" norms are practically endorsable, or such that we can coherently WANT them to be followed.

It's not just about what we happen to actually want, but what we think on reflection it *makes sense* to want. Insofar as we think authoritative moral norms should make sense to want to see followed, doubting that this condition is met would give us reason to doubt that the identified norms are truly normatively authoritative after all!
Secondarily, it may—as mentioned toward the end of my post—give us practical reasons to try to co-ordinate to promote alternative (collectively better) norms, regardless of objective "correctness". It seems we have good reason to want to learn what these better norms would be, and to encourage others to follow the better norms. That is, in a conflict between betterness and correctness, it seems like betterness has the better claim to play the distinctive social-functional role that we associate with moral discourse.
I understand that by "want" you don't just mean fulfilling people's arbitrary desires; you mean whatever is in the true best interest of the majority of rational agents (so, whatever they would rationally want). Why does that make a difference? Isn't it logically possible that the correct moral system isn't something rational agents would always rationally want?

Here's the autopsy: you, like all consequentialists (myself included), believe consequences are the only relevant moral factor that could make an action right or wrong. Therefore, you think that any additional moral factor has to be justified in virtue of that one. Only rule consequentialists would agree with that. Deontologists do not agree that their additional moral factors (objective moral constraints) must be explained in terms of the consequences (what you call "practical reasons"). Deontologists justify their additional moral factors on the basis of brute moral intuition (not "practical reasons"), which you seem to think is relying on some sort of magical, unexplained, abstract "correctness". It's not; it's called using moral intuition, and it's what you yourself use to justify your own belief that consequences are a moral factor that determines right from wrong actions. (It would be circular to defend the moral relevance of what you call "practical reasons" with practical reasons. Regardless, what you mean to say is consequences.) Read about rule consequentialism and how that differs from deontology.

A deontologist would agree with a consequentialist that saving 5 patients by killing one does result in a better outcome, and also that it maximizes the interests of rational agents. But they would disagree about whether that murder was immoral, despite its yielding better overall consequences.
> If they do, then they are not distinctively non-consequentialist. Consequentialism offers a better explanation of why we should endorse welfare-promoting norms.
Principlism is a respectable, common-folk ethical theory. It supplies us with action-guiding standards. The obligations we have are beneficence, respect for autonomy, non-maleficence, and justice.
I think it's clear, generally speaking, why this will track welfare and why we should (morally speaking) "care" about it. I'm also of the opinion (regarding my view of value theory) that what makes someone's life worth living is a life containing happiness, virtues, and autonomy, for its own sake.

So my ethical theory, with value pluralism doing a lot of the work, equally explains norms that generally promote welfare. But notice the kind of consequentialism you get in conjunction with several theses people would generally commit to: https://plato.stanford.edu/entries/consequentialism/#ClasUtil
If your endeavor is to capture that, then under the principle of parsimony, you should believe in my view.
I agree that this is a strong challenge to non-consequentialists.
Suppose the non-consequentialist points to cases of mutually beneficial exploitation. You are stuck in a pit and will die of exposure or hunger if you are not rescued. Since the pit is in a very isolated area, the chances of someone happening on you by accident are very low. Still, I luckily cross your path and offer to help you out of the pit if you agree to work for me for a dollar a day for the next year. You agree and we both benefit: your life is saved, against the odds, and I get cheap labor for the next year.
Still, even though I have saved your life, and even though you are far better off by making the deal than by rejecting it, it seems plain that I have wronged you. But then not only have I wronged you without harming you but I've wronged you while making you better off than you would have been.
I'm sure you've considered this general sort of case and would be interested in hearing how you respond to it.
It's good to allow *sufficient* reward to make it worth the rescuer's while to find people to rescue. But if you allow *unlimited* leeway to make demands, the *extra* cost to the rescued party clearly outweighs the extra benefit to the agent. (We wouldn't agree to a year of slave-wages for ourselves merely in order to later secure a year of cheap labor from someone else.) So it's clearly welfare-promoting to constrain what can be demanded in such situations. We would all have good reason to agree to such limits from behind a veil of ignorance, for example.
I think there's another important component of this, which is that fairness is a thing that can be valued (and I certainly value) as a separate, additional thing to well-being.
Eg behind the veil of ignorance, I could prefer a situation where I'm 99% likelier to be much worse off, because I care about the 1% who are *significantly* worse off.
I think what you are getting across is a view of harm, called comparativism.
I have an argument against it:
P1) The thought experiment, as in pre-emption cases, succeeds (P)
P2) If P1, then comparativism is false (¬Q). [P→ ¬Q]
Therefore, C1) comparativism is false (¬Q). [MP 1, 2]
A generic thought experiment about the pre-emption case:
At t1, either A or B will kill S at t2.
A and B are independent sufficient causes.
A acts slightly earlier and kills S.
If A had not acted, B would have killed S at the same time in the same way.
Actual world: S dies at t2 because of A.
Nearest ¬A world: S dies at t2 because of B.
So, S is not worse off with A than without A.
But many (including me) find it noticeable that A does harm S. If that judgment is correct, comparativism is false.
I feel this is stuck on blame, and thus on causation, where there is a correlation, here induced by timing . If person C intervenes to block A but not B, we might see them as harming S, depending on whether they perceived and had the ability to block both pathways. The total harm, "objectively", to S is the same, whatever the mechanism. In Dejardin's case above, rescuer could have done better - cue supererogation arguments.
Doesn't it aid the general welfare more to have citizens who will help each other out of pits for no reward, rather than for a year of servitude?
How about when an action hurts overall well-being in an arbitrarily small amount (or simply doesn't add to overall wellbeing) but serves deontological thingamajigs in a substantial amount? I think that maybe a Rossian deontology/pluralism (that almost always approximates into consequentialism in our particular contingent world, ripe as it is with various inequalities and EA/Longtermist opportunities) is the only moral theory that accounts for all the data points here.
Could be worth accepting for "moral uncertainty" reasons, with an underlying thought like "What if this weird causal relation *really does matter immensely*, for reasons that I simply cannot fathom?" If the welfare cost is sufficiently low, may it's worth acting on such incomprehensible reasons. But it's notable that the underlying motivation here still involves a striking kind of normative alienation: a feeling of "this makes no sense to me, but just in case..."
This is far more appealing to me, though I believe there is (or, yes yes maybe just wish for there to be, haha) something we can ground morality in: agency, or actions aligned with reality in a "natural" sense. I'm exploring the idea of separating moral duties from virtuous behavior to try and clarify this structure, but I need to continue to think through it.
Regarding the point about caring about abstractions more than people:
You respond to the alienation objection to utilitarianism by pointing out that the utilitarian cares about overall wellbeing only because they care about each individual person. This eliminates the worry that the utilitarian cares more about some abstract thing than actual people. I wonder if the same thing can be said from a deonotological perspective. I care about the moral rules only because I care about each individual person, and the respect their intrinsic value demands from me. What do you think of this move?
Also, I just finished your introduction to utilitarianism. It was great! Thanks for that
This would be the most promising style of answer, I think. But it awaits spelling-out: insofar as the concept of well-being is precisely that of what is worth caring about *for the sake of the individual in question*, it's conceptually puzzling how one could claim to be acting *for others' sake* while doing what is collectively worse for them.
Or as I put it here: https://www.goodthoughts.blog/p/bleeding-heart-consequentialism#footnote-anchor-4-96538931
"As a potential victim, I care a lot about whether I end up dead, and very little about the causal details of precisely how I end up dead. Moral agents should take others’ interests and preferences into account. To prefer that five of us end up dead, rather than just one dead via a special causal chain, is implicitly to treat the special causal chain as more significant than four people’s lives. That’s pretty awful, IMO, and disrespectful of our value as persons.
It’s not as though the one cares vastly more about not being killed in this way than the five each care about being rescued, after all. So when deontologists prioritize the former over the latter, they are acting in a way that cannot be justified by reference to the interests or preferences of the affected parties. They’re introducing a novel (moralized) preference of their own into the situation, and treating it as more important than what the affected parties care about (their very lives)."
“Why do you care more about abstractions than about real people? Seems bad!”
The flaw with this sort of argument is that the person considering whether to betray his country or his friend would use it as justification to betray his country which after all is just an abstraction. I get the sense is your answer is that the country consists of a bunch of concrete people each of whom he just not care about, however again that seems like an abstract concern compared to the more direct concrete emotional attachments he has to his friend and so on.
Consequentialist considerations seem to require all kinds of abstract moves. For example one needs a way to figure out how to add and subtract pleasure and pain, desire and repulsion, satisfaction and frustration or whatever, there are various abstract laws one could use to do it, but which one to choose why choose one of those abstractions over actual people. This is hardly just a problem for consequentialism, the addict must decide whether to pursue their immediate cravings enjoying expected temporary highs or whether to try and overcome those immediate impulses by focusing on their current pains and discomforts and their expectations of more to come. Either desire might claim the other is the mere abstraction with as much reason. I'm not sure that matters.
As soon as we allow an abstraction can cut off the force of a concrete person or thing's demand your dictum loses its force.
What you really meant was "Why do you care more about this abstraction than about this other abstraction I've constructed that is at least according to me about real people?" To which the answer would presumably be because I subscribe to different principles of which abstractions I should care about.
I'm not sure I disagree with your substantive point, but I think your rhetoric rather badly misses the point.
Similar point your rhetoric seems to make it mysterious why people would have any end in itself (any non-instrumental goods). But I take it you think things like pleasure (or desire satisfaction or something like that) are non-instrumental goods. Why couldn't some non-instrumental goods turn out to be deontological moral goods, then it just likes you are engaged in semantic quibbles. Of course people can pursue honesty as an end in their self they just have to pursue it as pleasure not as a moral commitment, this just sounds like a semantic quibble about what moral means not a substantive point.
Also it just seems like there could be edge cases where we could easily recognize that some deontological or virtue consideration makes otherwise identical cases discriminable. Imagine there is some medication that can either cure a billion headaches or save one life (a life who's expected utility if they live happens to equal to 1/2 that of a billion avoided headaches). Imagine that if the billion people with the headache believe they are sacrificing for the sake of that one person they will evolve an amount of utility from satisfaction (and from the indirect effect of the increased instrumental efficacy of self-restraint implied by the population etc.) equal to 1/2 the disutility of the headache. From the point of view of general welfare then the case where the everyone take the medication for their headache is the same as the one where everyone with a headache nobly sacrifices so the person lives. However it hardly seems to hard hearted or even odd to view the case where a billion people display self-sacrifice as preferable though it entails no greater welfare. It is one thing to claim that general welfare is always a trump, but you come off as arguing the stronger claim that it must be the only moral factor.
Great! I think this gets to the heart of the dispute.
I think there's an important distinction between what we (fundamentally) care about versus other (e.g. epistemic) cognitive contributions to our decision-making. I agree that practical reasoning in general involves "all kinds of abstract moves". But a distinctive thing about consequentialism is that it allows all of our *fundamental concerns* to be about concrete individuals.
Caring about individuals' well-being seems self-evidently reasonable. (Indeed, one way of conceptualizing *well-being* is that it is whatever is worth caring about for an individual's own sake.)
Now consider how it seems questionable (in general) to prioritize other things over well-being. If you run into a burning building and rescue the paperclips rather than the people, it's gonna raise some questions. (Like: "WHY!?!?!") Traditionally, one of our most powerful ways to object to harmful or arbitrary ethical views (e.g. conservative views of sexual ethics) has been to ask, "Who does this norm help? Where's the harm? For whose sake should people prioritize religious teachings over their own desires and preferences?"
I'm basically wanting to generalize that methodology to suggest that non-consequentialist views should be regarded as akin to conservative religious ethics. Both direct us to care about things that — on further reflection — it doesn't make much sense to prioritize over people's lives and well-being.
Great post.
"As a result, if deontology were true, I’d rather be a beneficent amoralist, saying “screw morality; just be good.”"
- this is part of the reason I defend beneficent amoralism/nihilism. Utilitarianism is a radical re-engineered approach for how to do ethics that doesn't capture intuitions well. But so what?https://walterveit.substack.com/p/why-effective-altruists-and-everyone
I think it's important that there are (real) *normative reasons* to care about others (and act accordingly), to vindicate the claim that the non- and anti-beneficent are making a significant practical *mistake*.
I think that almost all of us have such normative reasons. But they are of an instrumental sort, because almost all of us care about suffering in others. But I can't see why someone lacking any and all prosocial emotions would have such reasons. (I'd love to be convinced otherwise!)
Do you have any argument for your claim that morality should serve the *welfare* interests of moral subjects rather than serve all their interests (including their non-welfare interests)? Arguably, I have an interest in your respecting my autonomy quite apart from whether your doing so would promote my welfare interests. For instance, I have an interest in your not injecting me with substances without my autonomous consent, and that holds even if your injecting me with a given substance without my autonomous consent would promote my welfare interests.
Not so much an argument as a question: Why *wouldn't* you want someone to inject you with helpful substances? Surely you would consent to it, given that it's good for you, unless you're irrational (like a young child) in which case what business do you have overriding people who are better positioned to serve your true interests?
In practice I don't want random people injecting me with stuff without asking because I don't trust them to actually know better than I do what would be good for me. Consent in this case seems purely a tool for protecting my welfare interests.
Well, if you concede that, for all you've argued, we may have interests (things that we rightly care about for their own sakes) besides our interest in acquiring more well-being, then we have no reason to accept your third premise, which claims: If putatively non-consequentialist norms don't serve the general welfare, then we should collectively agree to replace those non-consequentialist norms with alternatives that better serve moral subjects. After all, even if you're right that "morality should serve the interests of moral subjects," it doesn't follow that we should replace putatively non-consequentialist norms with alternatives that better serve only a proper subset of our interests. In any case, deontologists clearly think that we have a non-derivative interest in having our autonomy respected. So, if you're just going to deny this by claiming that our only interest is in greater well-being, then this argument is going to have very little dialectical force against the deontologist.
I'd be curious to hear your answers to the questions I asked.
But anyway, we can tweak P3 to make it more general — consequentialism needn't be strictly welfarist. If you give more weight to autonomy, you could end up with some form of autonomy consequentialism, for example:
https://www.goodthoughts.blog/p/autonomy-consequentialism
Nothing in this vein would seem to motivate the distinctive structure of (non-instrumental) deontic constraints.
You have to do more than "tweak" P3. You'll have to change P1 too. And I'm not sure what the justification for P2 will be once you make the necessary changes throughout. Unlike well-being, respect for autonomy is not something we pro tanto ought to promote — at least not in the sense of maximizing the total amount of it (or instances of it). Perhaps, you'll say that consequentialism offers a better explanation of why we should endorse both welfare-promoting norms and autonomy-respecting norms. But this isn't so obvious, at least, not given the way that I take you to be understanding 'consequentialism'.
You asked: "Why *wouldn't* you want someone to inject [me] with helpful substances?" It's not that I don't want them to inject me with helpful substances. It's that I don't want them to inject me with helpful substances without first getting my autonomous consent. And that's not only because I don't trust people with that sort of discretion, but also because I want them to respect my rational nature, which would involve their giving me the relevant information, appealing to my reason in the hopes of getting my autonomous consent, and refusing to proceed without it.
I guess that you're fine with people injecting you with helpful substances (say, while you're under general anesthesia for some other procedure) so long as they are genuinely helpful and you are no worse off in terms of your well-being for their doing this on the sly rather than on the up-and-up. (Note that I'm not talking about whether you would be fine with this becoming a social or institutional practice of some sort. I realize that you would find that problematic.)
Of course, maybe you don't care whether people respect your rational nature. Perhaps, you would be fine with a knowledgeable and trustworthy person injecting you with these substances without your even knowing. Fair enough. But you're not going to get far arguing against deontology if you do so by assuming that others don't care about having their autonomy respected, which is what you seemed to do in your original formulation of the argument, even if only inadvertently.
I find your case a bit puzzling, because I don't get what the agent's motivation is supposed to be for injecting me "on the sly" when they could easily first ask, and a policy of asking for consent is such a good one for protecting against risks of abuse. (I'd certainly *worry* about such an agent, but I think my reasons for being worried are ultimately entirely instrumental in nature.)
If the agent is an oracle who *knows* I will consent, it seems like they might be able to take my consenting disposition as sufficient - why waste time verbalizing it? By deferring to my consenting disposition, it seems they may still just as well qualify as respecting my rational nature.
Alternatively, if I would *not* consent, due to ignorance or irrationality, but I obviously *would* consent if I knew all the facts and was thinking clearly (this seems entailed by the fact that it's in my interests and hurts no-one else), where is my "rational nature" that they are supposed to respect? Over in the counterfactual world where I consent to the injection, not in this one where I don't. It's puzzling to suggest that deferring to temporary irrationality is a way of respecting one's rational nature.
So I'm overall puzzled by the suggestion that respect for one's rational nature has anything to do with deontology or securing consent in this sort of case. Can you point to an argument that credibly explains (rather than assumes) this connection?
***
re: "Unlike well-being, respect for autonomy is not something we pro tanto ought to promote" - The key move I'm making is inviting you to reflect, from a third-personal perspective, on what norms you have reason to want others to follow. As far as your own autonomy is concerned, it seems you have most reason to want violations to be minimized rather than respected in each instance (if the latter would result in your being more violated overall). You should prefer to be violated only once rather than five times, all else equal. So even our autonomy interests point towards wanting others to violate deontic constraints in "paradox of deontology"-type cases.
So there seems the basis here for a kind of contractarian argument for consequentialism, whatever the precise nature of our "interests" turns out to be.
"And that's not only because I don't trust people with that sort of discretion, but also because I want them to respect my rational nature, which would involve their giving me the relevant information, appealing to my reason in the hopes of getting my autonomous consent, and refusing to proceed without it."
Isn't that something we could still be considered a part of "welfare" and "consequences"?
I'm starting to wonder if there's no real difference here and it's just a semantics thing.
I think it all boils down to how expansive your conception of "well-being" is - if you think that part of what it means for someone to do well, or live well, is to display certain virtues and avoid certain vices, or to avoid certain impermissible acts and fulfill certain duties, then it's not too tricky to collapse almost any moral theory into a form of utilitarianism. But if you don't include those constraints, then it's not weird at all to me that we might sometimes sacrifice "amoral" well-being so as to avoid doing something horrible - plenty of deeply immoral pleasures just don't seem valuable in the first place to me!
Yeah, I'm drawn to expansive conceptions of well-being (feel free to exclude the evil pleasures!). But I don't think that makes the point trivial. On my understanding, the distinction between consequentialism and non-consequentialism is not axiological but structural: whether one assigns *non-instrumental* significance to deontic constraints. It's this particular normative commitment that I think makes no sense on reflection.
I ultimately agree with you, but as Lane Taylor said above, I do think there are commonsense ways of understanding deontological constraints as non-instrumentally significant but still connected in some deep way to a care for the specific individual involved and not the constraints in the abstract. Like, I think not pulling the trolley lever is crazy, but I still think a deontologist could honestly say they refrain out of concern for the dignity of the person on the track themselves, and not just because the rulebook says you can't. (Still a terrible decision, though!)
To me, the most compelling non-consequentialist reasons for moral norms are epistemic. I think it's good and important that we collectively believe true things, even in cases where doing so makes us worse off. For example, there might be a possible set of false religious ideas that would make us all better off if we all believed them, but it still seems bad to build a society around false ideas like that.
True beliefs are often instrumentally valuable in the long run. But in the exceptional cases where they are not, I think we may (rationally and correctly) act upon ourselves — e.g. by taking a magic belief-adjusting pill, were such a thing possible (maybe antidepressants count in some circumstances!) — to replace bad true beliefs with better false ones.
(Suppose an evil demon will torture everyone unless you take a magic pill that makes you believe that grass is purple. You should take the pill, because having true beliefs doesn’t matter compared to preventing torture.)
I agree. When it comes to hard cases deontology collapses into consequentialism anyway. But I do think that ‘rights talk’ etc is useful as a utility short cut. Most of the time I can assume eg free speech is a good thing or torture is wrong. Rights are utilities that have proven useful so often that we don’t want to have to keep calculating, even though the underlying logic is still consequentialist.
Have you ever struggled through Kamm's *Morality, Mortality*? She's got some of the best stuff on this (particularly in Vol. 2), but it's not easy to get through.
Only in parts. I wasn't impressed by her take that recognizing instrumental value is incompatible with valuing people as ends in themselves - see https://www.utilitarianism.net/objections-to-utilitarianism/mere-means/#instrumental-favoritism - but I should try some other chapters!
Somewhat disagree: what's wrong with the answer that it's just analytic that what's wrong is what you shouldn't do, even if doing it would make the world best? That's what it means to be wrong!
Don’t get what you’re saying about changing the moral law. That’s not something we can do!
Compare the epistemic case. Even if epistemic norms govern belief (such that contrary-to-evidence beliefs are *objectively irrational* qua belief, no matter how instrumentally beneficial), we may—rationally and correctly—indirectly act upon ourselves to bring about better beliefs.
(Suppose an evil demon will torture everyone unless you take a magic pill that makes you believe that grass is purple. You should take the pill, because having justified beliefs doesn’t matter compared to preventing torture.)
I’m now suggesting exactly the same thing when it comes to moral norms. We can act indirectly to replace bad correct ones with better incorrect ones. There’s no reason to care about what’s “correct” when it comes apart from what’s good.
The argument is elegant and internally consistent, which is precisely what makes it worth pressing on. Consequentialism of this kind wins by setting the terms before the debate begins. Once welfare becomes the only legitimate currency of moral reasoning, non-consequentialist concerns do indeed look like arbitrary fetishes. But I think that move deserves scrutiny before we accept its conclusions.
The "harmless hypothetical" is doing enormous work here, and I am not sure it can bear the weight. It assumes that harm is fully measurable, that all relevant consequences can be surveyed in advance, and that a practice can be cleanly isolated from the social fabric it inhabits. These are not small assumptions. They are the entire contested terrain.
Here is what I keep coming back to: taboos are rarely arbitrary emotional fixations. More often they are compressed social knowledge, the accumulated result of communities learning across generations what kinds of practices corrode trust, dignity, and the basic conditions for living together. The fact that we cannot always produce a clean consequentialist justification for a taboo does not mean the justification is absent. It may mean it is too historically embedded, or too dependent on second and third order effects, to survive translation into a welfare calculus.
And I think the dilemma itself assumes something that history consistently undermines: that we can reliably identify which norms serve welfare and which do not. The track record of dismantling accumulated social wisdom in the name of rational reconstruction is, to put it gently, mixed.
Morality is made for man. But man is also, in part, made by morality. That recursion is what the harmless hypothetical cannot accommodate.
Personally, my problem with consequentialism is not that it's wrong (ignoring meta-ethics for a second), but that prescriptively it's only part of the equation, and retrospectively it's so tautological that it's not actually useful.
A prescriptive theory of ethics should help you decide what to do. And sometimes when deciding what to do, you have to say, "Screw the consequences. I'm going to do what is right." An example of this might be Jet Li's character in the movie *Hero*. It's not immediately obvious from a consequentialist perspective why the main character accepts his fate at the end of the movie (I'd rather not spoil it, but it's worth a watch). But he did what he thought was right.
Retrospectively, you can spin up a story about how there are actually good consequences to the character accepting his fate. But that's post-hoc justification; the character himself is not considering the consequences. You might also argue from a consequentialist perspective that the set of norms the character relies on can itself be defended on consequentialist grounds. But again, that is post-hoc justification: Chinese culture is under no obligation to justify its norms on consequentialist grounds. You can have a coherent set of ethics that is not grounded in consequences.
Another path a consequentialist might take is to define "good consequences" so broadly that whatever is actually good counts as one, so that accepting his fate turns out to be consequentialism in disguise. But that's not a serious argument; it's a meaningless tautology. For a consequentialist, such a tautology may feel like it justifies itself. But as a non-consequentialist, such reasoning sounds childish and I don't even know how to respond. "Consequences" are just literally not how I define "good", and what more is there to say?
So, I suppose I'm closest to the second of your three dilemmas in that I'm not explicitly non-consequentialist - I believe consequences should be taken into account. But I deny the claim that "Consequentialism offers a better explanation of why we should endorse welfare-promoting norms" and will say instead that as a prescriptive theory it is too narrow to be uniquely useful or correct, and as a retrospective theory it is anemic and incapable of actually providing the justification that consequentialists want it to provide.
> A prescriptive theory of ethics should help you decide what to do.
See my post 'What Ethical Theory Is' - https://www.goodthoughts.blog/p/what-ethical-theory-is - for a cautionary note on that assumption. Like the relation between scientific theory and technical engineering, ethical theories may only *very indirectly* help our practical decision-making. It is not their primary task. (But I do think the indirect value of theoretical knowledge, in both science and ethics, is very high!)
There is a very easy answer to this. Your theory doesn’t care about humanity in itself. I do. So if you propose something that is “beneficial” in general but catastrophic for humanity, I’m going to oppose it.
Does it bother you that species chauvinism is structurally akin to, say, male chauvinism or white supremacy?
The question of moral circle expansion is independent of the consequentialism-deontology distinction, though.
If someone told me they had to kill my entire demographic to benefit humanity, I would reject that too.
The thing about humanity is that morality does not exist outside of us. There is no ethics among animals. Chickens don’t care about your rights. If you try to argue with one, you might as well argue with a rock and I don’t care about rock morality. People use morality to cooperate with other people and build civilization. Animals are not part of the project of civilization. At the very least, it makes sense to say that since animals are incapable of playing by any rules, we should treat them differently.
Well written, but this just strikes me as a consequentialist argument for consequentialism; it's not going to persuade anyone new. You take issue with deontological rules because they might hold us back from maximizing general welfare in some possible cases, but that's the whole point of deontological rules: to prevent a "tyranny of the welfare majority" scenario, like killing one patient to save the five. Deontologists care about the general welfare just as much as consequentialists do; they just think there are some ethical lines you can't rightfully cross when trying to maximize good outcomes, which is a common moral intuition.
My key move is not to "take issue with deontological rules because they might hold us back from maximizing general welfare in some possible cases", but to invite the reader to consider what norms we have third-personal reason to *want* others to follow. This is a question that most ethicists don't talk about (they're more focused on forming direct deontic judgments), so I think it's more novel than you appreciate.
Rule consequentialists talk extensively about which social norms and forms of folk morality would lead to the best overall welfare (or whatever's involved in the best moral outcome). Deontologists don't take themselves to be forming anything; they believe they are describing mind-independent moral laws. I think you're assuming deontologists have the same goals when doing ethics as rule consequentialists.
You misunderstand me. To attempt to "describe mind-independent moral laws" is to make (or "form") a deontic *judgment* about what one takes that moral reality to be. That's exactly what I'm saying is standard operating procedure in ethics, and the reason why it is novel to invite deontologists to reflect on whether their "true" norms are practically endorsable, or such that we can coherently WANT them to be followed.
What would it prove if the correct moral system wasn't something we wanted to follow? It doesn't seem like that would make a difference to moral realists.
It's not just about what we happen to actually want, but what we think on reflection it *makes sense* to want. Insofar as we think authoritative moral norms should make sense to want to see followed, doubting that this condition is met would give us reason to doubt that the identified norms are truly normatively authoritative after all!
Secondarily, it may—as mentioned toward the end of my post—give us practical reasons to try to co-ordinate to promote alternative (collectively better) norms, regardless of objective "correctness". It seems we have good reason to want to learn what these better norms would be, and to encourage others to follow the better norms. That is, in a conflict between betterness and correctness, it seems like betterness has the better claim to play the distinctive social-functional role that we associate with moral discourse.
Apologies if my earlier reply is still missing the mark or mischaracterizing you; I would genuinely like to be corrected if I'm missing something.
I understand that by "want" you don't just mean fulfilling people's arbitrary desires; you mean whatever is in the true best interest of the majority of rational agents (so, whatever they would rationally want). Why does that make a difference? Isn't it logically possible that the correct moral system isn't something rational agents would always rationally want?

Here's the autopsy: you, like all consequentialists (myself included), believe consequences are the only relevant moral factor that could make an action right or wrong. Therefore, you think that any additional moral factor has to be justified in virtue of that one. Only rule consequentialists would agree with that. Deontologists do not agree that their additional moral factors (objective moral constraints) must be explained in terms of the consequences (what you call "practical reasons").

Deontologists justify their additional moral factors on the basis of brute moral intuition (not "practical reasons"), which you seem to think relies on some sort of magical, unexplained, abstract "correctness". It doesn't; it's called using moral intuition, and it's what you yourself use to justify your own belief that consequences are a moral factor that determines right from wrong actions. (It would be circular to defend the moral relevance of what you call "practical reasons" with practical reasons. Regardless, what you mean to say is consequences.) Read about rule consequentialism and how it differs from deontology.
A deontologist would agree with a consequentialist that saving 5 patients by killing one does result in a better outcome, and also that it maximizes the interests of rational agents. But they would disagree about whether that murder was immoral, despite yielding better overall consequences.
> If they do, then they are not distinctively non-consequentialist. Consequentialism offers a better explanation of why we should endorse welfare-promoting norms.
Principlism is a respectable, common-folk ethical theory. It supplies us with action-guiding standards. The obligations we have are beneficence, respect for autonomy, non-maleficence, and justice.
I think it's clear, generally speaking, why these obligations will track welfare, and why we should (morally speaking) "care" about it. I'm also of the opinion (regarding my view of value theory) that what makes someone's life worth living is a life containing happiness, virtue, and autonomy, for its own sake.
So in my ethical theory, value pluralism is doing a lot of the work in explaining, equally well, the norms that generally promote welfare. But notice the kind of consequentialism that emerges in conjunction with several theses people would generally commit to: https://plato.stanford.edu/entries/consequentialism/#ClasUtil
If your endeavor is to capture that, then, under the principle of parsimony, you should prefer my view.