In the middle chapters of Morality by Degrees, Alastair Norcross argues that there is no principled way to determine the absolute value of an action (including whether it is ‘good’ or ‘bad’), only whether it is better or worse than specific alternatives.1
It’s natural to assume that we should judge an action good (bad) just to the extent that it makes things go better (worse) than if the act hadn’t been performed. But consider Button Pusher (p. 62): faced with ten buttons labelled ‘0’–‘9’, Agent is told that pressing a numbered button will result in that number of people being killed. If no button is pressed within thirty seconds, ten people will be killed. Our natural account implies that Agent’s pushing ‘9’ would be a good act (at least if they are otherwise disposed to do nothing), which seems an inapt evaluation of gratuitously killing nine people. It may be better than letting all ten be killed, but it surely isn’t a good act given that Agent could just as easily have saved everyone.
On the other hand, sometimes saving just one out of ten people does seem positively good, e.g. if each life saved requires a separate sprint into a burning building (p. 77). So we can’t just call anything suboptimal ‘bad’. Similar variation is found in our use of ‘harm’: intuitively, Agent harms nine people by pushing the button that kills them when he could have saved them, even though they still would have died had he done nothing. But the imperfect rescuer does not, of course, harm those whom he fails to save. So it seems that whether an act is good, bad, harmful, or the like, is not something that follows simply from a neutral accounting of how the outcome compares to what would have happened otherwise. Which alternative we take to constitute the relevant baseline can differ from case to case. This is a striking and important result.
Norcross takes this to motivate a contextualist view on which conversational context selects a salient alternative as the one deemed “relevant” in that context. But it’s an interesting question whether a more principled determination might yet be possible. For example, we might take the relevant alternative to be determined by what could be reasonably expected of any (minimally decent) agent. In Button Pusher, we expect any minimally decent agent to push the ‘0’ button to save all lives costlessly, so anything worse is outright ‘bad’. In cases where greater self-sacrifice is involved, any aid at all might strike us as ‘good’ in virtue of being more than is minimally expected.
This alternative account depends on there being an objective threshold of adequate moral concern: a minimum degree of altruistic motivation that an agent must exhibit in order to qualify as minimally decent. Norcross does not explicitly discuss such an idea, but it seems clear that he would be skeptical. It certainly goes beyond the conceptual resources that he allows himself. But it’s not clear why the rest of us must feel so constrained.
Our contrasting expectations in the Button Pusher vs Fire Rescue cases seem to reflect genuine normative differences between the cases, not just the arbitrary expectations embedded in conversational contexts. Against a background where Agent is known to be villainous (such that everyone expected him to watch all ten die), we might resignedly sigh, “Well, it’s a good thing he only killed nine people this time,” as an implicit comparative claim. But I’m still inclined to insist that costlessly saving all ten is the normatively privileged alternative for determining whether the act was absolutely good (warranting a distinctive kind of pro-attitude on our part, perhaps).
It’s worth asking what hangs on Norcross’ contextualist analyses. In contrasting his contextualism with an error theory about the associated terms, Norcross notes that on his account, “it is possible, even quite common, to express substantively true or false propositions involving” these terms (p. 110). But why care about that? If we defined ‘God’ to mean love, we could express substantively true or false propositions involving the term ‘God’, but they wouldn’t have theological significance. Matching ordinary usage in the assignment of truth values to linguistic strings adds further constraints, but still doesn’t seem all that philosophically significant. We should care less about the words, I think, and more about their inferential roles: what follows from calling something good, bad, or harmful? The answers may push us away from contextualism. If harms warrant resentment, for example, contextualism about ‘harm’ would seem to saddle us with the awkward implication that whether resentment is truly warranted could depend upon arbitrary conversational context. But that can’t be right.
I’ve previously discussed several reasons why consequentialists should be open to a wider range of normative concepts, including that of fitting attitudes. The present discussion suggests an important addition to the list. We need fitting attitudes in order to fix a principled baseline for determining something as seemingly simple as whether an action is good or bad.
1. The following draws from my book review in Ethics.
I'm not seeing why it's problematic to only be able to say that an action is good relative to a specific concrete counterfactual or relative to a counterfactual expectation, rather than being able to say that an action is good, full stop.
"Similar variation is found in our use of ‘harm’: intuitively, Agent harms nine people by pushing the button that kills them when he could have saved them, even though they still would have died had he done nothing."
I don't think that's right (and it's certainly not intuitive), although it might turn on the mechanism of action. Suppose ten bullets are heading toward ten individuals, one apiece. Pressing button n lowers 10−n shields in front of 10−n individuals, protecting them from the bullets heading their way. Pressing button 9 would lower a shield in front of a single person, leaving the other shields out of the way. If that's the setup, then the person who presses button 9 doesn't kill anyone, nor do they harm anyone. They let the nine be harmed / let them die.
What mechanism are you imagining?