Replacing Unfortunate Norms
Even if they're objectively correct
Here’s a claim I find interesting and underexplored: either consequentialism is correct, or morality is lamentable and beneficent motivations should rationally lead us to coordinate against it.
That was the intended upshot of my previous post. Some instead read it as table-thumping consequentialist intuitions: merely repeating “How could it be wrong to bring about better outcomes?” Of course, I do find it odd for anyone to oppose better outcomes. But there’s more to it than that.
Recap
The first distinctive move of my post was to step back from direct deontic intuitions about “wrongness”, etc., and invite independent reflection on what seems worth caring about. Insofar as one engages in narrow reflective equilibrium (i.e., capturing one’s pattern of intuitions about how to apply the term “wrong” across hypothetical cases), there’s a risk of discovering a patterned property that does the extensional job of tracking which acts we intuitively judge as “wrong”, but which makes little apparent sense to care about. In that case, it seems rational to disavow deontological properties as normatively irrelevant on further reflection, no matter our semantic intuitions about moral language.
Step 1: Seriously consider the possibility that our deontic intuitions aren’t tracking anything of fundamental moral significance. (I actually think this is clearly the case, and something of a methodological scandal for orthodox moral philosophy.)
My next distancing move was to shift away from thinking about normative judgments (or propositions) entirely, and instead ask what norms we have practical reason to want others to follow. That is:
Step 2: Consider the third-personal practical question of what we as bystanders should generally want others to do (as distinct from the first-personal question of what you as the agent ought to do).
Since moral subjects could generally anticipate being better off if agents successfully followed utilitarian norms, there seem clear reasons for us to prefer utilitarian (rather than deontological) norms to be successfully followed. Interestingly, this gives us reasons ex ante—before we discover our particular circumstances—to pre-commit to waiving any non-utilitarian rights we may have, on condition that others do likewise. That is:
Step 3: Consider whether, even if deontological norms turned out to be correct, we’d have reason to collectively work around them, and socially implement optimal norms instead.1
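The ex-ante logic behind Step 3 can be made vivid with a toy calculation. (The setup and numbers here are my own illustration, not figures from the argument itself.) Imagine six people who don’t yet know whether they’ll end up as the one on the footbridge or as one of the five on the track:

```python
# Toy model of the ex-ante pre-commitment argument (illustrative only).
# Six people; one will land on the footbridge, five on the track.
# Under deontological norms, nobody is pushed and the five die.
# Under utilitarian norms, the one is sacrificed and the five are saved.

from fractions import Fraction

def survival_chance(norm: str) -> Fraction:
    """Each person's probability of surviving, computed before roles are assigned."""
    p_bridge = Fraction(1, 6)  # chance of turning out to be the one on the bridge
    p_track = Fraction(5, 6)   # chance of turning out to be one of the five
    if norm == "utilitarian":
        return p_track         # you survive unless you're the one pushed
    if norm == "deontological":
        return p_bridge        # you survive only if you're the one on the bridge
    raise ValueError(f"unknown norm: {norm}")

# Ex ante, everyone does better under the utilitarian norm:
assert survival_chance("utilitarian") > survival_chance("deontological")
print(survival_chance("utilitarian"))    # 5/6
print(survival_chance("deontological"))  # 1/6
```

Since each person’s survival chance is higher under the utilitarian norm before anyone knows their role, each has self-interested reason to pre-commit to waiving their non-utilitarian rights, conditional on the others doing likewise.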
Replacing Unfortunate Norms
Some commenters—e.g. Bentham's Bulldog—thought this last distinctive step didn’t make much sense:
Don’t get what you’re saying about changing the moral law. That’s not something we can do!
Today’s post will try to clarify what, in this vicinity, we can do. I have three suggestions, each subsequent one offering a “fallback” option in case the stronger prior response(s) fail.
First, one might simply accept the pre-theoretic background principle that ethical success shouldn’t be predictably lamentable. Since successfully followed deontological norms are predictably lamentable, this gives us pre-theoretic reason to think that they can’t be the correct norms after all.
Second, and more curiously, one might practically reject (or try to work around) even norms that one regards, intellectually, as objectively correct. For example, I’m an evidentialist about epistemic normativity: beliefs are justified or not based on our evidence, not pragmatic considerations. But I care about pragmatic considerations more than having justified beliefs. Suppose an evil demon credibly threatened to torture everyone unless I soon came to believe that grass is purple. In that case, if a magic “believe that grass is purple” pill happened to be lying around, I would take the pill. This is a rationally justified action, though it produces an objectively incorrect and irrational belief.2 As this example shows, avoiding normative mistakes (such as incorrect or irrational beliefs) should not be our greatest concern in life. If it were, that very priority would itself be a far graver normative error!
It’s interesting to consider applying a similar Parfitian structure of “rational irrationality” to action itself. Even if it would be wrong for us to φ, perhaps we could justifiably manipulate our dispositions towards φ-ing, if this would somehow better serve moral subjects. (I’ve argued elsewhere that we shouldn’t necessarily be averse to acting wrongly.) If we all agree that the norm against φ-ing is bad for moral subjects, it seems we would have strong reason to collectively manipulate ourselves into adopting different norms. At the very least, we would seem to have strong reason to want others to learn different norms—to want others’ moral education to be more impartially beneficial. And perhaps we could even be persuaded to let our own beliefs be manipulated into beneficial falsehoods on condition that others did likewise. Given the stipulation that it is truly beneficial… why not?
Finally, even if it turns out that we cannot “justifiably” dispose ourselves to act wrongly (for some suitably objective sense of justification), it may nonetheless be the case that reasonable and well-meaning (beneficent) agents can fortunately be tempted to wrongly adopt consequentialism. Given how much good this would do, it doesn’t seem like it could be a very serious wrong—even compared to, say, eating meat, which is another wrong that I’m personally pretty comfortable with. So my final move is to wave my pirate flag and just encourage people to wrongly be good, if that’s the best we can do!
If that sounds incoherent, then I suspect you must implicitly be supposing the principle mentioned earlier, on which true morality can’t be lamentable. If that principle is right, then deontology can’t be true. So the only situation in which we need my purely pragmatic argument (against following true deontology) is if we’re working with a conception of ethics on which it can be predictably lamentable. In that case, like epistemic normativity, it just doesn’t have the kind of overriding importance that makes it worth respecting if it clashes with what’s independently worth caring about. ¯\_(ツ)_/¯
Regrettable Rightness: The Newspaper Case
Imagine waking up one morning to the newspaper headline, “TROLLEY FOOTBRIDGE HAPPENS FOR REAL!” Before you read on, you pause to think about it. The agent in the situation was faced with the choice to either let five die or kill one to save the five. Once you read on, you’ll discover the outcome: whether the agent killed one or let five die. Between these two possible outcomes, which should you hope for?3
Claim: regardless of what’s right or wrong, a decent person in this situation would hope to learn that the one was killed rather than that the five died (assuming that no graver downstream harms would follow from this act). Even if it is a wrong-making feature of the action, the fact that the one will have died as a deliberate result of agential intervention is—many deontologists agree—not more inherently terrible than four more deaths. (What’s wrong and what’s most terrible/regrettable may come apart, for deontologists.)
Now here’s a maxim of practical rationality: We have reason to coordinate with others to secure desirable outcomes and to reduce the number and severity of regrettable actions and events. (Note that this is not itself a claim about which acts are permissible or impermissible, so I don’t take it to beg any questions. It’s just a supplemental claim about what norms we have practical reason to promote.) So we have reason to coordinate in opposition to deontology, and teach kids consequentialism instead (insofar as they’re competent to follow it successfully: it would be suitable to teach to young angels, for example).
Alternatively, if deontologists argue for additional moral prohibitions on colluding to promote falsehood and moral corruption,4 it seems like it would at least be rational for them to abstain from the public sphere and hope that sincere consequentialists win the day. After all, the lesson of Newspaper is that nobody wants to read that a deontologist was in charge of a high-stakes decision (if we could instead have had a competent consequentialist bring about a truly better outcome).
The key move: orthodox (agent-relative) deontology speaks to the obligations of the agent in the situation, but says nothing about what the rest of us should want to happen. If I’m right that the rest of us should want different things from what deontology demands of the agent, then it makes sense for us to adopt an adversarial attitude towards deontological morality in others, and even discourage it from being taught to other potential agents—in much the same way that we can’t coherently want others to be egoists (who would promote their agent-relative priorities over ours).
This brings out my disjunctive conclusion: either consequentialism is correct, or morality is lamentable and beneficent motivations should rationally lead us to coordinate against it if we can. Either way, we have good reason to hope that others successfully act as consequentialism recommends. Consequentialists can promote this outcome (by advocating their theory) in full sincerity. Others may be constrained against acting to help, but still have reason to wish us success in promoting better norms than the ones they sadly believe in.
See my discussion of ICE/police accountability (versus appeals to armed agents’ putative “right to self-defense”) for an important example of how this theoretical difference can play out in practice. I think it really matters that we circumscribe rights, and craft moral norms, with an eye to the general good. As Sidgwick famously argued, a background utilitarian theory is extremely helpful for delineating answers to the tough questions on which commonsense morality is hopelessly vague.
Importantly, it’s not that pragmatic reasons outweigh epistemic ones, making the false belief all-things-considered “justified”. No, the belief remains totally unwarranted. The point is just that our belief-directed actions can aim at goals other than maximizing the rationality of the targeted attitude. By analogy, our act-directed actions might conceivably aim at goals other than the permissibility of the downstream actions. When an initial act A brings it about that you subsequently perform an impermissible act B, it is at least an open question whether act A itself may yet have been permissible.
Thanks to a helpful anonymous referee (of my ‘New Paradox’ paper) for suggesting this neat thought experiment.
Insofar as another’s “moral corruption” simply involves their failure to recognize and act upon agent-relative reasons (like those posited by orthodox deontology), it’s actually quite obscure why anyone else should care. What would be more of a worry is if some awful doctrine were preventing people from appreciating the force of their agent-neutral reasons, e.g. to promote the good. After all, “agent-neutral” reasons are ones that we all share: that gives us all reason to be disappointed or upset when another fails to act upon them! Agent-relative reasons, by contrast, would seem of no inherent interest to anyone but the agent to whom they are relativized. (Imagine being upset by an ethical egoist failing to act as selfishly as they agent-relatively “ought”!)

Can something in the vicinity be used as a general argument against moral realism? Imagine that we somehow discovered that our grasp of moral facts is completely wrong and that the only moral fact is that we each ought to collect 589281 paperclips. It seems like in this scenario we would just stop caring about the moral facts. But doesn't that at least imply that, on some meta-level, we already aren't fully committed to caring about moral facts, but only care about them if, for example, they line up with our desires or sensibilities?
I think the answer to why one would prefer non-consequentialist norms is hidden in the initial framing. The choices are presented as "norms which are best for the collective on average vs. not best for the collective on average," as if we were, from a first-person perspective, making decisions from the perspective of the universal third person. But the other option here is to endorse norms that allow us, individually, to pursue our own personal projects at the expense of the abstract global utility function.
So if "why should I care?" is the ultimate litmus test, then the consequentialist is actually in a much harder position than the non-consequentialist, because the choices aren't "well-being or not well-being" but rather "the well-being of some abstract collective vs. my own well-being, or that of my loved ones." Once we make this reframing, the consequentialist is in a tougher spot.
One way around this issue for the consequentialist is to say "morality *just is* about universalizing/impartial norms" which I'm open to, but at that point the more fundamental question then becomes "why be moral?" which isn't a clearly answerable question as initially presented.
Anyways, great stuff! I'm enjoying this line of enquiry.