I think to see what equality really justifies, it's worth comparing interpersonal utility comparisons to comparisons within a single person. Every point in the article becomes quite clearly true in the case of a single individual who values their moments equally. If you valued all parts of your life equally, you wouldn't value equality for its own sake between moments. Nor would you say 'you don't really value the moments of your life going well, you only value the things that make your life go well'.

Similarly, you wouldn't treat all the moments equally. If one moment was better than another and you had to jettison one of them, you should obviously jettison the worse one. And if you could choose between one moment that would both be pleasant and increase your lifespan by two more moments or two moments that don't increase your lifespan, of course you'd choose the one moment -- the single person pharmacy case is very obvious.

I genuinely can't imagine disagreeing with utilitarianism about the pharmacist case. Great article!

“Everyone matters equally.”

To whom? For what?

“Must” implies “can.”

“Can’t” implies “need not.” (“Not must”? “Mustn’t” is not the negation of “must.” How to express that in English? “Needn’t,” I suppose.)

No one can care about everyone equally, nor would we wish to, if we could.

No one is obligated to care about everyone equally.

If no one can or must care about everyone equally, what does it mean for everyone to matter equally? Should we perhaps try to approximate an ideal we can’t actually achieve? I suppose Kant has set a precedent.

It makes sense to criticize or reform social institutions on the grounds that they treat people as if they have different significance, if the criteria of discrimination have nothing to do with the purpose of the social institution. Everyone has an interest in this: even if the arbitrary discrimination goes in my favor now, there is no guarantee it will stay that way rather than shifting arbitrarily to something less advantageous. It isn't a prisoners' dilemma.

That seems to fall short of “everyone matters equally.” If we divide social experience crudely into institutional experience and personal experience, “everyone matters equally” only in the institutional cases (and ignoring obvious relevant differences, e.g. hospitals are for sick people). Does your principle apply also to personal experience? Or is this qualification so obvious that it doesn’t need mentioning?

I think it's an interesting open question how much personal partiality is justified. But there are at least *many* important moral choices in which the agent should be impartial (with public policy or "institutional" cases being especially important). So you can take my five theses as applying to those cases, even if you think there are *other* cases in which partiality is justified.

"Institutional" cases are exactly the right place to apply utilitarianism. For someone making policy decisions, most of the people impacted are strangers, and we don't want our leaders favoring loved ones anyhow. So utilitarian norms convey the degree of impartiality expected of leaders. I wonder whether there is even a natural selection effect favoring men who adopt more utilitarian reasoning, so that they appear better candidates for power.

Still, we don't want our leaders valuing distant strangers as much as their own countrymen...so our political utilitarianism needs some limits, too.

But surely the greatest weakness for utilitarianism as an all-encompassing moral theory is that in our personal decisions it makes no sense at all to give equal consideration to all living things at the expense of friends and family. As much as we might wish we were angels, humans are animals evolved with specific desires and sensibilities. Even our preference for morality is a result of this evolution. To abstract all of this and find some imagined limit point of morality that isn't based in our actual experience, and then claim that this is the source or measure of all morality is quite a stretch.

If you agree that angels would be more impartial than us, aren't you implicitly granting the point that there's something morally imperfect or flawed about our partiality?

(I'm not very confident in this aspect of utilitarianism, though. Some more agent-relative form of welfarist consequentialism also seems very reasonable to me.)

Yes, this gets at one of the key questions: how do we understand moral idealizations? Is idealized morality something that we want to get ever closer to as we get better?

My view is that no, this is not a good way of looking at things. Moral theories are like scientific theories in that they require abstractions and are only meaningful within certain parameters. Life is full of infinite detail, and no scientific theory will ever account for all of it. So it makes simplifications. In many cases those simplifications are appropriate and help us make "good enough" predictions, but sometimes the details ignored by those simplifications are quite important.

Utilitarianism very obviously hides a lot of detail by treating everyone/everything as equal. In some situations, those details aren't relevant but in personal decisions those details become very relevant.

So for me the salient thing about "angels" is that they are not real. They are an oversimplified model of what good looks like that breaks down when the oversimplifications become important.

Another way of putting it is that utilitarianism is an analytical "yang" way of looking at morality, and then there are less analytical "yin" models that capture all the infinite complexity of things (or at least don't try to simplify everything). There is definitely a use for "yang" but people can do really bad things if they embrace abstraction/oversimplification entirely.

If it is an interesting question how much personal partiality is justified, does that mean you have an opinion but as yet no analysis, or just have no opinion yet?

“Everyone matters equally” seems to come with implicit qualifications, but the qualifications are hard to specify. I feel some sympathy for this slogan, because many important social problems could be described as persons acting partially instead of impartially. But it seems unwise to conclude that we should put impartiality first everywhere. It may be neither possible nor wise. We can’t escape Hayek's dilemma so easily.

"Part of our present difficulty is that we must constantly adjust our lives, our thoughts and our emotions, in order to live simultaneously within the different kinds of orders according to different rules. If we were to apply the unmodified, uncurbed, rules of the micro-cosmos (i.e. of the small band or troop, or of, say, our families) to the macro-cosmos (our wider civilisation), as our instincts and sentimental yearnings often make us wish to do, we would destroy it. Yet if we were always to apply the rules of the extended order to our more intimate groupings, we would crush them. So we must learn to live in two sorts of world at once."

F.A. Hayek, The Fatal Conceit: The Errors of Socialism, 1988

I think I mostly agree with your main thrust here, which, if I understand it, is basically this: Even if we treat everyone equally, we’ll still always inevitably end up with unequal outcomes, simply because everyone by nature does not possess exactly equal abilities, desires, drives, ambitions, IQ, etc. (I don’t mean this by race or gender, of course, but by individual genetics, childhood environment, etc.)

To me, it seems like utilitarianism would hold the proposition of a rich person having children as morally superior to the proposition of a poor person having children. How would you respond to this?

I talk about this and some other criticisms of utilitarianism (as well as a weird solution) in the inaugural post of my recently launched Substack. But I'm not an actual philosopher. I would appreciate any feedback you might have if you want to check it out:


I'm not sure what you mean by "morally superior". The rich person sure isn't any more virtuous just for their wealth. But it could well make for a better outcome, on average. (Presumably nobody could seriously deny that. Judging outcomes as "better" or "worse" on the basis of well-being is far from unique to utilitarianism!) So it may follow that wealthier individuals have slightly *more moral reason* to have kids than others. Again, I don't think anyone could seriously deny this. I think many people just don't like to acknowledge obvious truths that sound bad, or that could easily be misinterpreted as implying something else (e.g. that the rich are more virtuous -- which I've already flagged doesn't follow).

This seems obvious. Let's imagine that you could choose to have a child who would have a great life or one with a pretty good life. Clearly, the great life would be better to create. If this is true, then it seems that, if you were impartially choosing to bring a child into the world, it would be better if it were born to rich parents and given a better life rather than poor parents and given a worse life. If we accept the following:

1) A child being born to a rich family and living a pretty good life is just as good as if a child were born to a poor family with an equally good life.

2) A child being born to a rich family and living a pretty good life is less good than if a child were born to a rich family and lived a great life.

Thus, by transitivity, a child being born to a rich family is better than its being born to a poor family, if we assume this would give it a better life -- which is the only way utilitarianism produces that judgment.
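One way to make the transitivity step explicit (the value function $v$ and the outcome labels below are my own notation, just a sketch of the premises above, not the commenter's own formalism):

```latex
% Outcomes: RG = born rich, great life; RP = born rich, pretty good life;
%           PP = born poor, pretty good life.
\begin{align*}
  v(\mathrm{PP}) &= v(\mathrm{RP}) && \text{(premise 1: equally good lives are equally good, wealth aside)} \\
  v(\mathrm{RP}) &< v(\mathrm{RG}) && \text{(premise 2: the greater life is better to create)} \\
  \therefore\quad v(\mathrm{PP}) &< v(\mathrm{RG}) && \text{(by substitution and transitivity of $<$)}
\end{align*}
```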

"For those who are interested, my linked paper also refutes more sophisticated versions of the ‘value receptacle’ objection, including the worry that (some forms of) utilitarianism treat individuals as fungible means to the aggregate welfare."

Again, I don't think you can address this issue without talking about population ethics, which you don't in the present post. If I recall, you maybe mentioned it in the linked paper, but only very briefly, which is inadequate since population ethics is **the central issue** in a discussion of the value-receptacle view.

Hmm, it didn't seem "the central issue" to me (or, from what I could tell, to the authors I was engaging with). Do you have some particular citations in mind here?

One reason for skepticism about your claim is that the value receptacle objection is typically presented as an objection to utilitarianism per se, not to any particular population-ethical theory (e.g. totalism or averagism), and utilitarianism per se is compatible with any number of views in population ethics. It seems straightforward enough to present the worry in a "same-numbers" case, for example, just redistributing the value from one already-existing "receptacle" to another.

That said, I can certainly imagine someone putting a distinctively person-affecting spin on it by worrying especially about creation & replacement (which is something I discuss a fair bit in my paper, iirc).

In any case, if you'd like to begin a more "adequate" discussion, feel free to expand on what you have in mind.

Nov 5, 2022 · Liked by Richard Y Chappell

Thank you for your engagement. I hope to set out my perspective on this at some point when I have more time, and will share it then.

For me, the relevant issue is not so much totalism vs. averagism, but person-affecting vs. non-person-affecting views, and the more general question of how to take into account the interests of potentially existing persons. Roughly speaking, the widespread utilitarian view that "it is good to create persons with positive utility" seems very close to the view that we should create persons in order that they might serve as receptacles for utility. In particular, it seems difficult to reconcile utilitarian pronatalism with your claim that "'utility' only matters because it is good for individual welfare subjects," since utility cannot be "good for" a welfare subject who doesn't (and might not) exist.

(Edit: I see now that you refer to this issue in footnote 8, saying it is "beyond the scope of the paper." But I agree with nonalt that this is much closer to being "the central issue" than merely "an interesting question.")

Nov 6, 2022 · edited Nov 6, 2022 · Author

I think your reasoning here is mistaken. Every sane view agrees that we need to take the welfare of possible future people into consideration, since it would clearly be wrong to bring about a life of abject suffering. So we all agree that negative welfare can be "bad for" a welfare subject who doesn't yet (and either might or might not) exist. We all agree that you can (non-comparatively) harm people by bringing them into (a negatively-valued) existence. They could justifiably feel resentment towards us if we brought them into existence after appreciating these moral reasons.

Once you've granted that, there's simply no basis for denying the positive analogues. Positive welfare can be similarly "good for" a welfare subject who doesn't yet (and either might or might not) exist. We could (non-comparatively) benefit them by bringing them into (a positively-valued) existence. They could justifiably feel gratitude towards us if we brought them into existence after appreciating these moral reasons.

Some philosophers *posit* an asymmetry between the two cases, in a desperate attempt to rescue their "intuition of neutrality" about adding happy lives. But there's absolutely nothing in the underlying concept of individual-based welfarism to *justify* such a move. The commonly-perceived connection between "person-affecting ethics" and "the asymmetry" is wishful thinking, nothing more.

>Some philosophers *posit* an asymmetry between the two cases, in a desperate attempt...

Philosophers have certainly said many foolish things while desperate, but the relevant asymmetry doesn't seem at all implausible to me: it strikes me simply as a specific case of the more general asymmetry between positive and negative welfare characteristics. The assumption of "symmetry" between positive and negative welfare seems to me highly questionable, since it doesn't reflect the phenomenology of positive and negative experiences in our own lives. Pleasure (+ other forms of positive welfare) and pain (+ other forms of negative welfare) are not related to each other simply by a change of sign — they are fundamentally different dimensions of our experience.

So even setting aside reservations about whether it is possible to "value all people equally" when "all people" includes potential as well as actual people, I am unconvinced by this symmetry argument for pronatalism. Even if I accept that it is bad to create people with extremely negative overall welfare (which I do!), and that it is good to improve the welfare of actually existing persons, there is no inconsistency in rejecting the claim that we ought to create new persons with positive overall welfare. (This isn't to say that I do reject the claim — only that I find it somewhat doubtful, and that the argument from symmetry, while superficially plausible, is ultimately unpersuasive.)

You're switching between evaluative ("bad") and deontic ("ought") terms in a distracting way here. The relevant inconsistency would be in rejecting the claim that it is GOOD to create new persons -- and, more specifically, that we can *benefit* those individuals by creating them.

It's a further question how much one cares about benefits vs. harms, whether it ever rises to the level of an obligation, etc. But the relevant point is that there are no obvious grounds for a "value receptacle" objection here once we appreciate that creation can constitute a (non-comparative) harm or benefit to the individual created. The claim that population ethics is "the central issue" there is just mistaken.

>switching between evaluative and deontic

(Isn't the fundamental premise of consequentialism that we don't need to distinguish between the evaluative and the deontic, since the latter is ultimately reducible to the former?)

Anyway, to rephrase the whole thing more carefully in evaluative terms, it seems to me that one can consistently hold the following:

(1) It is morally bad to create miserable people.

(2) It is morally good to improve the welfare of actually existing persons.

(3) The moral value of creating new non-miserable persons is neutral (or perhaps undefined).

Where is the inconsistency? We can certainly reject (3) if we accept "it is GOOD to create new persons... we can benefit individuals by creating them" — but this seems to be an additional premise, rather than something that can be deduced from (1) and (2).

>"the central issue"

You write in your footnote: "It’s an interesting question, beyond the scope of the paper, whether we should prefer to benefit actual people rather than bringing into existence new, happier people." For many people, the idea that it might be good to bring "new, happier people" into existence at the expense of "actual people" inspires reactions ranging from raised eyebrows to visceral revulsion — not least because they (and everyone they know) are "actual people," so hearing philosophers speculate about how they might prefer to benefit "new, happier people" is naturally going to raise some objections. I think the versions of the value receptacle argument that you encounter on twitter stem from these sorts of reactions, and not from the sorts of abstruse deontological concerns about "treating people as mere means" that you address in your paper.

You might not think these reactions carry sufficient philosophical weight to make population ethics "the central issue" here, but if you want to understand the versions of the argument coming up in your twitter feed, this needs to be your starting point — drawing careful distinctions between token-monistic and token-pluralistic versions of utilitarianism probably isn't going to help much.

(And in any case, Singer's discussion of "value receptacles" seems to have been motivated by his reflections on the total hedonic view — so I don't think even my caricatured twitter mob has strayed too far from the philosophical mainstream in thinking that the central issue with value receptacles is at least closely related to concerns about population ethics.)

>"We all agree that you can (non-comparatively) harm people by bringing them into (a negatively-valued) existence."

No, I don't think I do agree. But I'm not sure I understand what you mean by "non-comparatively" here, so I may be misunderstanding your claim. Perhaps this will clarify our disagreement: On your view, can you harm someone by *not* bringing them into existence, if that existence would have been a happy one? Or benefit them by not bringing them into existence, if that existence would have been negatively-valued overall? (I guess your answer will be the same in both cases, but I'm asking both just to be sure.)

No, that's the "non-comparative" part. The absence of this benefit does not harm anyone, just as in the negative case (if you don't bring about the miserable life) there is nobody who you benefit by refraining from bringing them into a harmful existence.

Do you agree that we have moral reason not to bring miserable lives into existence? And do you agree that if we fail to follow this reason, and instead bring about a miserable life, the miserable person then has reason to personally resent us and regret our choice? If you agree with those two claims, I'm not sure how you could explain these truths other than by appeal to the idea that our act of creation harmed that individual in a morally relevant sense. (I would wonder what you even meant by 'harm', if this didn't qualify. Granted, it's not a *comparative* harm, since the individual isn't worse off than they would have otherwise been -- assuming that non-existence counts as *undefined* rather than *zero* welfare. But as the negative case shows, comparative harms and benefits are not the only morally relevant kind. We should also take into account non-comparative harms and benefits: putting someone into an absolutely good or bad state.)

>Do you agree that we have moral reason not to bring miserable lives into existence? And do you agree that if we fail to follow this reason, and instead bring about a miserable life, the miserable person then has reason to personally resent us and regret our choice?

I don't feel fluent in externalism about reasons, so what I say here will be quite tentative. I can agree to the first part, since it translates easily into various flavors of antirealism ("Don't bring miserable lives into existence!"; "Bringing miserable lives into existence! Boo!" etc.), but I'm not so sure about how to translate the second part into terms that I can understand. I can say the miserable person is *likely* to resent us and regret our choice, but I don't feel I know what it would mean to "have reason" to do so. (Wouldn't they "have reason" to make peace with their situation rather than resenting and regretting it?) The idea that one could (let alone should) feel either gratitude or resentment concerning one's own existence is quite alien to me.

In the meantime, I will say that I suspect there is a lot of interesting work to be done in this area at the intersection of ethics, metaphysics, and logic. When I was studying philosophy a few years ago, some of the metaphysics and logic work I found in this area included:

- Borghini & Torrengo "The Metaphysics of the Thin Red Line"

- Various technical stuff by Nuel Belnap and other logicians on branching time structures and temporal/modal logics.

- Future Contingents (SEP) https://plato.stanford.edu/entries/future-contingents/

- Ted Sider has work on presentism, eternalism, and actualism: https://tedsider.org/papers/presentism.pdf

I'm not a philosopher, but last time I looked at this stuff (~5 years ago), it seemed to me that there wasn't that much dialogue between the population ethicists on the one hand, and the logicians/metaphysicians on the other; and that this was limiting progress. [Maybe Caspar Hare was one person who seemed to know both literatures a bit.]

Caspar Hare's work is great!

In general, I've been underwhelmed by the value of technical logic or metaphysics for thinking through normative issues. I think the real work is in attaining conceptual clarity, as you can't even tell which formalisms are relevant or accurate until you've done that. As an example (since it sounds like you might be interested in issues related to logical fatalism), see: https://www.philosophyetc.net/2008/12/future-without-fatalism.html

Fair enough. But when people like Narveson say, "We are in favour of making people happy, but neutral about making happy people", I'm not sure there is conceptual clarity on what that means.
