Evaluating Acts and Outcomes
Rejecting consequentialism vs reconsidering values
Some objections to consequentialism rely upon a strangely non-normative conception of value (disconnected from the normative question of which outcome is impartially preferable). They assume that some outcome X maximizes value, note that it seems we ought to bring about alternative Y instead, and so infer that consequentialism must be false. But they neglect the fact that outcome Y also seems better than X — meaning that they really have an argument against their value assumptions, not an argument for the non-consequentialist claim that we should bring about a worse outcome.
Methodological lesson: if faced with a putative “counterexample to consequentialism”, remember to assess the outcomes as well as the actions. (Often you can “naturalize” a case by replacing actions with purely natural causes, and ask whether or not we should prefer for a gust of wind to trigger X. If X isn’t actually preferable—i.e., better—then consequentialism trivially agrees that we shouldn’t bring about X. So the case fails in its ambition to distinguish consequentialist and non-consequentialist views.)
Three Examples
Exhibit A is the mistaken belief that objections to crude value hedonism (e.g., evil pleasures, or the experience machine) are objections to welfarist consequentialism per se. It would be bizarre to have the combined thought that we shouldn’t torture innocent people for the glee of sadists while also thinking it would be a good thing for the sadistic outcome to occur (say if the sadists’ victims were randomly struck by lightning). Clearly, the upshot of our anti-sadism intuitions is not that we should bring about worse outcomes, but rather that evil pleasure isn’t good. That is, it calls for refining our theory of value, not abandoning the idea that better outcomes are worth pursuing.
Exhibit B is population ethics. Sometimes people tell me that they’re a non-consequentialist because they don’t think we should bring about Parfit’s repugnant world Z. What I don’t understand is why, then, they are assuming that world Z constitutes a better outcome. Again, that just seems like a bizarre combination of views. If you don’t like the repugnant conclusion (and I can certainly respect that!), then don’t assume that Totalism is the correct population axiology. Maybe there are other considerations (whether average welfare, perfectionist excellences, or whatever) that matter more to determining the overall quality (value) of an outcome.
If you insist on assuming a notoriously “repugnant” conception of value, and then reject the principle that we should promote value because you find the population-ethical implications intuitively repugnant, you should at least pause to consider whether you might have misdiagnosed the problem!
Exhibit C is Scanlon’s famous Transmitter Room case:
Jones has suffered an accident in the transmitter room of a television station. To save Jones from an hour of severe pain, we would have to cancel part of the broadcast of a football game, which is giving pleasure to very many people.1
When I teach about this case, nobody in the room has the intuition that it would be a better outcome were Jones to be electrocuted for an hour so that billions can enjoy watching the World Cup final live. So it cannot possibly serve as a counterexample to the consequentialist claim that we should bring about better outcomes.
For that, you instead need a case (e.g. Martian Transplant) where you think we intuitively ought to bring about a (transparently) worse outcome: that is, where the consequentialist-recommended alternative is something we should hope to happen via natural causes, but may not do ourselves.
Reflecting on Anti-Aggregative Intuitions
How should we respond to the Transmitter Room case? I see three main options:
(1) You could fully embrace the intuition via lexicalism, and hold that no amount of trivial pleasures can outweigh the disvalue of an hour of agony. (The problem with this view is that it violates transitivity: one can always construct chains of ever-less-trivial value kinds V(n) in which a sufficient number of units of kind V(n-1) clearly can outweigh a little of V(n), and thereby breach any supposed lexical threshold. See the sketch after this list.)
(2) You could partly accommodate the intuition via Parfit’s prioritarianism; but this only secures the verdict that Jones’ agony isn’t easily outweighed, not that literally no number of trivial goods could eventually outweigh it.
(3) Or, you could argue that our intuitions must have gone awry. Yes, it sure seems worse for the one to suffer lots. But that one person is very salient, whereas we can’t really grasp the full reality of billions of tiny benefits—instead we implicitly, but mistakenly, round them down to nothing. So we should not trust our intuition that saving Jones makes for a better outcome. Nor, then, should we trust our intuition that we ought to save Jones (since the latter may very well rest upon the former).
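To put options (1) and (2) in symbols, here is a minimal sketch (the notation, i.e. the value kinds $V_n$, the multipliers $k_n$, and the priority weighting $f$, is mine, not Parfit’s):

For (2): let overall value be $W = \sum_i f(w_i)$, where $f$ is strictly increasing and strictly concave, so that harms to the badly-off (like Jones) count for extra. Still, $n$ trivial benefits of size $\epsilon$ contribute $n \cdot [f(w+\epsilon) - f(w)]$, which grows without bound as $n$ increases; so some finite number of trivial goods eventually outweighs the agony.

For (1): order the value kinds $V_0, V_1, \ldots, V_N$ from most trivial ($V_0$) to weightiest ($V_N$, say relief from an hour of agony), and suppose that for each $n$ some number $k_n$ of units of $V_{n-1}$ outweighs one unit of $V_n$. Chaining these comparisons, transitivity yields that $\prod_{n=1}^{N} k_n$ units of $V_0$ outweigh one unit of $V_N$, breaching the supposed lexical threshold.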
Option (3) gains direct support here: as Parfit goes on to show, our anti-aggregative intuitions (according to which some benefits are so small as to be strictly normatively “irrelevant”) are provably unreliable:2
[W]e might claim that
(1) we ought to give one person one more year of life rather than lengthening any number of other people’s lives by only one minute.

And we might claim that

(2) we ought to save one person from a whole year of pain rather than saving any number of others from one minute of the same pain.

These lesser benefits, we might say, fall below the triviality threshold.
These claims, though plausible, are false. A year contains about half a million minutes. Suppose that we are a community of just over a million people, each of whom we could benefit once in the way described by (1). Each of these acts would give one person half a million more minutes of life rather than giving one more minute to each of the million others. Since these effects would be equally distributed, these acts would be worse for everyone. If we always acted in this way, everyone would lose one year of life. Suppose next that we could benefit each person once in the way described by (2). Each of these acts would save one person from half a million minutes of pain rather than saving a million other people from one such minute. As before, these acts would be worse for everyone. If we always acted in this way, everyone would have one more year of pain.
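To make the arithmetic explicit (the rounding here is mine): with roughly $10^6$ people and a year of about $5 \times 10^5$ minutes, each person is the concentrated beneficiary exactly once, gaining $5 \times 10^5$ minutes, and is one of the passed-over “others” in the remaining $\approx 10^6$ choices, forgoing one minute each time. The net effect per person, relative to always choosing the distributed benefits, is

$$5 \times 10^5 - 10^6 \times 1 = -5 \times 10^5 \text{ minutes} \approx -1 \text{ year},$$

which is exactly Parfit’s result that everyone ends up losing a year.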
Note that the (expected) value of each choice is clearly independent of the others—it does not matter how many others have made the same choice, or indeed whether it is repeated at all. As a result, the fact that repeating the choice of concentrated benefits across the whole population results in an overall worse outcome (than the alternative choice of greater distributed benefits) establishes that each such choice is worse.
That is, while we intuitively feel that

(1*) it is a better outcome for one person to gain one more year of life than for any number of other people’s lives to be lengthened by only one minute,

Parfit’s iteration argument proves that (1*) is false, and thus that our anti-aggregative intuitions are unreliable.
Given that our anti-aggregative intuitions seem to apply just as strongly to evaluative matters as to deontic ones, and yet are demonstrably mistaken about the former, there’s a real challenge for anti-aggregationists to show why their deontic intuitions should be trusted.
Conclusion
One cannot reject consequentialism on the basis of an intuition about what we ought to do that perfectly accords with our intuition about which outcome would be best. Instead, such intuitions give us reason to reconsider our prior assumptions about value (if they’re inconsistent with the newly intuited verdict).
That’s not to say that the new intuition will necessarily win out: sometimes we should reject our intuitions about cases as unreliable and confused, especially when they pit concentrated salient interests against widely-distributed, less-salient ones. It’s entirely predictable that we’ll be biased against the latter. We should try to overcome this bias, and still give full weight to the interests of those who are less easily seen.
1. This abbreviated version of the case is taken from Parfit’s (2003) ‘Justifiability to Each Person’, p. 375.
2. Ibid., p. 385.
Comments
I think this insight takes the force out of every objection to consequentialism. Very few people think “it would be great if the surgeon’s hand slipped and they killed the person and distributed their organs, but it would be wrong to do that knowingly.” Most objections to consequentialism seem hard to stomach once you imagine that it would be good for the wrong act to happen.
>As a result, the fact that repeating the choice of concentrated benefits across the whole population results in an overall worse outcome (than the alternative choice of greater distributed benefits) establishes that each such choice is worse.
Do you take this to be a general principle you just find directly highly intuitive and hard to reject, or is there some further argument you would make for this? Maybe it follows from some kind of universalizability or independence principle?
I'm thinking that there are other ways to be consistently anti-aggregationist. But when faced with iterated or collective decisions, you should use more sophisticated reasoning.
For example, if faced with the prospect of potentially having to make many choices like this, you can only make so many before the overall outcome becomes worse for everyone, and you would want to pre-commit yourself to this limit, i.e. make it too costly or even (psychologically or practically) impossible for yourself to violate later. Once you make the choice N times, the costs become too high to make it again. This is similar to how one might try to resolve Parfit's hitchhiker, as well as some problems involving unbounded utility (https://forum.effectivealtruism.org/posts/KGfBhsFzCqr9vq6Y6/utilitarianism-is-irrational-or-self-undermining-2#Unbounded_utility_functions_are_irrational).
Or, you could think of whatever choice you make as evidence for the choices you will make later (or other anti-aggregationists will make), and use evidential (or otherwise acausal) reasoning. Then, you would avoid picking an option that would result in all these decisions together making everyone worse off.