Satisficing consequentialism allows that some sub-maximal but good enough level of value-promotion can qualify as ‘permissible’. This view has faced two main objections (widely regarded as utterly decisive): (1) that it lacks a principled basis for drawing the line of what counts as good enough in any particular place, and (2) that it licenses the gratuitous prevention of goodness above the satisficing baseline.
In ‘Willpower Satisficing’ (Noûs, 2019) I presented a new version of the view that solves these two problems. It avoids the gratuitous prevention of goodness by requiring agents to do the best they can without undue burden (and there is no “burden” to avoiding a gratuitously harmful action). And it proposes outsourcing the principled delineation of the baseline to an independent account of quality of will and fitting attitudes. I suggest that there’s some minimal level of good will that is sufficient to avoid deserving blame. But this could be context-dependent. If you could do vastly more good with just a tiny bit more effort, for example, that extra effort may become required of you.
A couple of interesting recent papers by Joe Slater, ‘Satisficing Consequentialism Still Doesn't Satisfy’ (Utilitas, 2020) and ‘Satisficers Still Get Away with Murder!’ (Ergo, 2023), raise new challenges for my approach to satisficing. In this post, I’ll briefly explain the core worries and my thoughts about them.
(1) Is Some Inefficiency OK?
I take the heart of the first paper to be the idea that some forms of inefficiency—especially for acts that are supererogatory to begin with—are intuitively permissible:
Imagine the minimum you can do for Mother's Day is send a card and make a phone call. You realise also that the best thing to do would be to take her out to her favourite restaurant (Mother's Choice). However, a close second would be to take her out to a restaurant you prefer (Child's Choice), which she also likes. The amount you prefer the Child's Choice (by stipulation) is slightly less than the amount your mother prefers Mother's Choice. Assuming attending either restaurant would take the same amount of effort—the price of the food is similar, the distance to travel is the same, etc.—WSC is committed to condemning taking her to Child's Choice. But you are permitted to simply call and send a card, which generates much less utility!
I’m not too troubled by this implication. It’s a familiar point—from “All or Nothing” cases—that there can be something objectionable about inefficient supererogation. Once you accept the cost of running into a burning building, it would be wrong to rescue a bird and leave a baby to die. This seems true, even though it would have been permissible to not enter the burning building at all, which does even less good. (It just goes to show that permissibility is not all that matters.)
That said, what my account is really aiming at is to rule out gratuitously suboptimal behaviour. Slater is effectively arguing that some forms of inefficient moral behaviour are nonetheless not objectionably gratuitous—at least according to the lights of common sense. It’s an interesting question whether common sense is justified in these claims, and if so, why. If common sense is to be vindicated here, it seems like it should be possible to construct a systematic account of when and why moral inefficiency is justifiable. But then it would seem open to the satisficer to modify their account to allow precisely these justifications. So I’m happy to leave that as an open possibility, pending further research. (I’m also very open to the revisionary possibility that common sense may fail to be vindicated on this point. I wouldn’t be terribly surprised if it turns out that all inefficiency really is objectionably gratuitous.)
(2) Are Hard-to-resist Murders Sometimes OK?
I especially like Slater’s second paper. It raises some really tricky issues about how best to deal with ever-increasing incremental payoffs, and offers a neat “ratio” solution that I like a lot (p. 1372). But, he objects:
This type of account yields problems when dealing with agents with unusual psychologies. To illustrate my point, recall Mulgan’s trolley case, where an agent may choose between shooting Bob to stop the runaway trolley, or pushing a heavy sandbag. In this case, it is not permissible to do nothing, but the case may be modified to change this. Let us instead imagine that the agent is unable to do anything from their current location, but may take a dangerous path to the sandbag, which will result in the loss of her legs… They may also take an equally dangerous but different path to the gun, from where she could shoot Bob. If the paths are horrible enough, it seems like it is permissible for the agent to do nothing.
So far, the active options (the sandbag or the gun) are equally difficult. But imagine that our agent is motivationally unusual. Perhaps because of a deep hatred of Bob… she has a very strong urge to go towards the gun. So strong is her desire that taking the route to the sandbag would be painful for her. While for most people, it might be equally difficult to motivate themselves to go to either route (probably more difficult to go to the gun, knowing that the purpose would be to kill someone), for our agent, it would take vastly more willpower to head to the sandbag.
If the required willpower is sufficiently high (lowering the result reached by dividing the marginal good by the amount of effort), our amended [account] makes it permissible for this agent to kill Bob. This is not a gratuitous prevention of goodness, because there is some reason for her, but this still looks like terrible grounds for condoning a murder. The feature of the case that would make it (apparently) permissible is the relative ease of this option, which only manifests due to the hatred our agent has for Bob. Surely this is not an acceptable justification. Again, the satisficer is left condoning murder, and in a case where a nonmurder option is readily available with better consequences. Such a result looks embarrassing for consequentialists.
I think consequentialists should be OK with this!
To help bring out how psychological difficulty can excuse, suppose that the sandbag is (somehow) a perfect simulacrum of one’s child (perhaps via clever use of holograms, or a neural implant that gives very specific hallucinations). You know the appearance is fake, and it’s really just a sandbag. Still, you would find it much more psychologically difficult to push what appears to be your child in front of a train than to just shoot Bob (let’s suppose the neural implant makes him appear just like a cartoon zombie from your favorite video game). You know all the facts of the matter, so you know the latter action is worse, despite appearances to the contrary. As a morally conscientious agent, you would permissibly do nothing, and let the many die, if it were impermissible to kill Bob in this case. But it’s clearly better to stop the trolley by killing Bob than to do nothing, so that’s what you do. You’re willing to trek through a dangerous and difficult path to save the many; you’re just not willing to go via a track that would require you to push what appears to be your child in front of the train. That seems pretty reasonable!
Now, if we return to Slater’s version of the case, where the difficulty instead stems from a moral failing of the agent (their hatred of Bob) rather than a moral virtue (their love of their child), the agent is more criticizable. It reflects poorly on them that they hate Bob so severely, and they can be criticized for this background feature of their psychology. But the quality of their decision-making, in opting to save the many at a high (but not maximally high) level of personal psychological difficulty, and the cost of one other life, seems equivalent to my above case—i.e., good and reasonable. So I don’t see any problem with judging the act to be permissible. (But again, that’s not to say that the agent is immune from criticism.)
In all these cases, I take permitted suboptimality to be something that is more excused than justified; a kind of compromise with human nature and moral imperfection. The optimal action is, of course, better. But if we’re going to forgive people for doing less than the best, we must also forgive some suboptimal (yet non-gratuitous) murders, at least in theory. Consequentialists should find this result unsurprising and unproblematic.
Thanks to Joe for these thought-provoking objections! You can find more of his work here.
I think that in any type of satisficing consequentialism, people should be expected to use willpower to do the things they are required to do anyway, as part of Hare's two-level utilitarianism. People are expected to overcome their urge to murder people in their day-to-day life. Not murdering is, in the words of Chris Rock, stuff "you're supposed to do."
The willpower that "willpower satisficing" is talking about is whatever willpower you have left over after doing the "stuff you're supposed to do" in order to meet the basic moral obligations of Harean two-level utilitarianism. It's true that people are psychologically diverse, so it takes some people more willpower to meet those obligations than others. But I don't think basic Harean stuff should "count" towards the amount of willpower you have to "spend" before an action becomes "permissible."
Under this model, it is not permissible to shoot Bob, because doing otherwise would require using willpower to overcome your urge to kill him, and overcoming your urge to kill is something "you're supposed to do." This does create the seemingly odd conclusion that the permissibility of shooting Bob depends on the motive for doing so. It might be permissible in Richard's "child simulacrum" situation, or in a situation where the path to Bob is significantly less perilous than the path to the sandbag. That is because overcoming the urge to murder is something you are "supposed to do," whereas overcoming the urge to protect your child and overcoming the urge to protect yourself from injury are not. That might sound strange, but it's no stranger than any of the other quasi-deontological rules that two-level utilitarianism recommends we follow.
I should also mention that treating willpower as a single fixed pool we draw on for all actions is probably an oversimplification. It is an accurate enough model for most situations, but it's probably not the whole story, and that may explain some of our inconsistent intuitions about these edge cases. For instance, it may be that there are multiple "pools" of willpower in the mind, and that actions that two-level utilitarianism would consider "obligatory" draw from a different pool than ones that it considers "supererogatory."
If you use a plural pronoun when referring to an individual of unknown sex you've lost me. The longstanding convention of using masculine singular pronouns for the same purpose was fine and dandy, IMO, but if feminist blowback has put that beyond the pale "he/she" is a better recourse than "they," fer cryssakes!