36 Comments
Elliott Thornley

Bentham and Mill did (briefly and obliquely) touch on some issues in population ethics. Johan Gustafsson gives some examples in footnote 2 here: https://drive.google.com/file/d/1H8eOd-ML94Fnj3JnEeZEIYq-j9w76I1P/view?usp=drivesdk. He guesses that Bentham would have leaned totalist and that Mill would have leaned averagist.

Richard Y Chappell

Interesting, thanks!

Daniel Muñoz

Very well said. I thought this was a convincing reply with lots of useful references.

Fabian McGonnagal

> I’m not sure what the problem is supposed to be with “the claim that what matters is the actual outcome of an action”

If I manage a charity's finances and I fly to Vegas and bet the whole endowment on black 15, I've done something morally wrong. The morality of my action isn't determined by the subsequent spin of the roulette wheel. If black 15 comes up, the charity benefits greatly, and that matters in itself, but my action was still blameworthy - the outcome itself wasn't important *to that judgement* and I should get no moral credit for it. At least, that's an example I've always found compelling.

Richard Y Chappell

Questions of "moral credit" (praise/blame-worthiness) are distinct from deontic right/wrong as the objectivist conceives of these concepts. All the views under discussion can agree that reckless-but-lucky actions are "still blameworthy". See: https://www.goodthoughts.blog/p/how-intention-matters

John Quiggin

Thanks for this useful discussion. As already noted, the early utilitarians did discuss population and (with the possible exception of Bentham) took an average view. Most notably, as I pointed out, Place actively promoted family planning.

As I mentioned in comments at Crooked Timber, I don't see Sidgwick as addressing issues no one had thought of before. Rather, I see him as attempting to apply C19 standards of rigour in the hope of developing a complete and consistent theory based on appealing axioms. Exercises of this kind failed consistently in C20, as shown by various impossibility and incompleteness theorems.

Among other things, this means that the existence of powerful objections to a theory isn't a big deal. A theory that wasn't subject to any powerful objections would be a counterexample to the impossibility theorems for the issue in question.

In the context of population ethics, I'm not seeking to make a general theoretical claim that average welfare is the right answer in every conceivable situation (space hermits, etc.). But in the actual situation, where families decide how many children to have and governments help or hinder, the right number is the one that will make the family as a whole happiest on average.

Richard Y Chappell

> "I'm not seeking to make a general theoretical claim"

Then why are you trying to respond to Sidgwick? Or talking at all about average vs total views of population ethics? These are debates in *fundamental ethical theory*, not heuristics that Sidgwick recommends families or governments directly try to follow.

> "in the actual situation where families decide how many children to have and governments help or hinder, the right number is that which will be the family as a whole happiest on average."

Simple counterexample (adapted from Parfit): suppose a family is currently composed of three individuals who are all very miserable (represent this with "-100" each). At some cost to all the existing individuals (reducing them to -110 each), they could have an additional moderately-miserable child, with -50 welfare. This would increase the family's average welfare (from -100 to -95), but is clearly a terrible idea.
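To make the arithmetic explicit, here's a minimal sketch (Python, purely illustrative, using the toy numbers above):

```python
# Toy welfare numbers from the counterexample above (illustrative only).
before = [-100, -100, -100]        # three very miserable family members
after = [-110, -110, -110, -50]    # everyone worse off, plus a miserable child

print(sum(before) / len(before))   # -100.0 (average before)
print(sum(after) / len(after))     # -95.0  (average "improves", though everyone is worse off)
```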

Perhaps you merely mean to suggest that considerations of average welfare may be a useful rough guide in *many* ordinary situations. (A Sidgwickian could well agree! Which heuristics are advisable depends on empirical details, of course, in addition to foundational evaluative questions.) But the Parfitian counterexample clearly shows that average welfare can't possibly be the criterion that *makes* the action "right".

Amicus

> Rather, I see him as attempting to apply C19 standards of rigour in the hope of developing a complete and consistent theory based on appealing axioms. Exercises of this kind failed consistently in C20, as shown by various impossibility and incompleteness theorems.

I'm not sure which theorems you're referring to here, but on any reading I can think of this doesn't seem true.

If you treat utilitarianism as a social welfare function - which is extra structure, albeit a pretty natural one - then it's cardinal (no Arrow), nonliberal (no Sen), and not strategy-proof (no Gibbard).

If you treat it as a formal theory then it's clearly no stronger than the theory of real closed fields, which *is* complete and consistent.

John Quiggin

As I said (and as many others have observed), the impossibility theorem here is the Repugnant Conclusion.

Amicus

That's not an impossibility theorem for Sidgwick's utilitarianism: he's quite explicit that he does *not* take intuitions ("apparent intuitions", in his language) about particular cases to be axiomatic, however strong their appeal. He wants to ground morality in claims about the demands of rationality in the abstract, which he takes to be:

1. indifference between future and present self-interest

2. agent neutrality

The repugnant conclusion does not contradict either.

Richard Y Chappell

The impossibility theorems show that any view which *rejects* the repugnant conclusion will end up committed to some other counterintuitive implication. So, as you say, the correct lesson is just that implying the RC "isn't a big deal" (if every alternative is as bad or worse). I'm just not sure how to reconcile this with your OP where you seemed to treat the RC as a devastating objection?

I don't take any of this to rule out the project of developing "a complete and consistent theory based on appealing axioms"; it just means that any such theory will also imply some intuitive costs. We can still compare which costs seem more or less acceptable, which axioms most appealing all things considered, and end up with a complete and consistent theory that's better than the alternatives.

John Quiggin

My point was that the whole idea of pursuing a complete and consistent theory is a mistake. I reject the specific proposition that we (that is, actual people in 2026) are morally obliged to have more children, even if they would have low-quality lives. If that breaks something else in a would-be complete ethical system, so much the worse for the ethical system.

nonalt

Not sure what C19 and C20 are.

John Quiggin

19th and 20th centuries

nonalt

You write "The classical utilitarians argued ... This applied both to the current population and to the children who would actually be born as a result of their choices, but not to hypothetical additional people who might raise the sum of total utility." I think that distinction is impossible to make in a reasonable way. Then again, you could say the same thing about other distinctions that deontologists make such as doing vs. allowing. But I think it's worse here for multiple reasons, including non-identity issues.

John Quiggin

Think about it in the context of a family making these choices, as I suggested.

nonalt

"and to the children who would actually be born as a result of their choices". That set of people is endogenous to their choice.

So if I think (with 90% chance) that I will have another kid, then that kid becomes 90% actual, so then I should care 90% about their welfare and thus I should have them, thus bumping my probability up to 95%. But if I'm only 20% sure I'll have them then I shouldn't have them because they're likely not actual ...

It gets into the "ratifiability" stuff from causal/evidential decision theory. To me, this sort of reasoning is just bizarre.

JerL

I don't think that's how you'd view it. You'd say: "if I have the kid, there's a 100% chance they'd exist, so I have to count them as a person whose interests I'm bound to respect; if I choose not to have the kid, there's a 0% chance they exist, so I'm not bound to respect their interests -- so the only acceptable options are to have the kid and count their interests, or not to have the kid and not count them." Calculating probabilities over your own actions seems like one of those things that just leads to paradoxes about agency; you ought to treat your actions not as something you probabilistically predict but as outcomes wholly under your control.

I agree this may not be realistic, but I think that's more a problem with the decision theory framework than with this specific decision.

nonalt

Ok then. If you have the kid, they will be actual and the total or average welfare of actual people will change depending on how the future kid's welfare relates to zero or the current average, respectively. If you don't, it will stay the same. The actualist approach just collapses to the corresponding standard total or average view.
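To make the collapse concrete, a toy sketch (my own framing and numbers, not anyone's official model):

```python
# Score each option using only the people who would exist under that option
# (the 0%/100% "actualist" evaluation described above).
current = [-100, -100, -100]   # existing family members
cost, child = -10, -50         # hypothetical welfare cost to parents; child's welfare

no_kid = current
with_kid = [w + cost for w in current] + [child]

total = lambda ws: sum(ws)
average = lambda ws: sum(ws) / len(ws)

# Ranking options by per-option totals matches the standard total view:
print(total(no_kid), total(with_kid))      # -300 -380   -> don't have the child
# Ranking by per-option averages matches the standard average view:
print(average(no_kid), average(with_kid))  # -100.0 -95.0 -> have the child(!)
```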

JerL

I think the average case is a little different, because if you don't create them, they don't appear in the denominator either. For the total case, yes; but I think the difference is rather that you don't evaluate the world in which they come to exist as a possible world: it would have higher utility were it possible, but it's not possible... It's sort of like someone who likes even numbers saying "it would be better if the first digit of pi were even rather than odd" -- there is maybe some sense in which that could be true, but you would want to be careful just doing axiology between "world where the first digit of pi is even" and "world where it's odd".

John Quiggin

It's bizarre, but it's your reasoning, not mine.

Rafael Ruiz

Just Richard Chappell being correct as usual

John Quiggin

My comment on the link to the Bentham site: "All of this would be fascinating in a world where fully formed adults were produced by pressing buttons. With the actual, rather messy, process available to us, not so much."

Richard Y Chappell

Like physicists thinking about frictionless planes, ethicists use tidy thought experiments to cleanly separate out the moral force of distinct considerations, to improve our foundational understanding before trying to grapple with the full "mess" of real-life situations.

JerL

Otoh, the kinds of thought experiments we pose are an important detail in deciding how to apply what we've learned to the real world; taking the "God's eye view", as many philosophical thought experiments ask us to do, might already commit us to a questionable framework.

One example, related to the procreative asymmetry and the matter at hand, is that to imagine myself at the God's eye viewpoint, I necessarily imagine myself as existing: from that viewpoint I can't imagine myself choosing between a world where I exist and a world where I never come to exist--the very fact of placing me there heads off the possibility of my never-having-existed.

Maybe this is fine, but I worry that we might just be using a thought experiment framework that isn't capable of drawing conclusions about existence/non existence by its very nature.

John Quiggin

I've expressed my dislike of thought experiments many times over the years, going right back to this 2003 post, also about actual vs expected consequences:

https://johnquiggin.com/2003/05/29/economists-v-philosophers-round-v/#more-1427

Here's an example of an actual decision where this dispute had potentially momentous consequences:

https://johnquiggin.com/2004/06/08/risk-and-reagan/

John Quiggin

As regards actual vs expected outcomes, there appear to be two separate debates going on. First, there is one around the terms "possibilism vs actualism", which seems to be framed in terms of what a decision theorist would call dynamic consistency. Roughly speaking: in a multi-stage choice, should I choose a path that would be best if I took a particular subsequent course of action, knowing that I will actually deviate? The standard DT answer is behavioural consistency, that is, acting now on the basis of how you expect to act in the future, but there are various arguments either way.

The claim I was imputing to Sidgwick is that the best action in a situation of uncertainty is the one with the best actual outcome. This seems to me to be either wrong or meaningless. For example, if I have the opportunity to bet on a horse race about which I know nothing, the best action in terms of actual outcomes would be to back the winner. But since I don't know which horse will win, and I know the bookie is taking a cut, the best action in expectation is to refrain from betting. Even in retrospect, saying "it would have been better to back the winner" seems about as relevant as doing my best in a race and then saying "if only I'd gone faster, I would have won".
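To put rough numbers on the point (my own illustrative figures):

```python
# A ten-horse race I know nothing about: each horse equally likely to win.
# The bookie's cut: paying 8-to-1 instead of the fair 9-to-1.
p_win = 1 / 10
profit_if_win = 8.0   # per unit staked
stake = 1.0

expected_value = p_win * profit_if_win * stake - (1 - p_win) * stake
print(expected_value)  # -0.1: every bet loses in expectation, so the best action is not to bet
```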

In making this point, I'm mostly concerned to point out that Sidgwick's "rigorous" approach leads down a bunch of pointless theoretical rabbit holes.

Richard Y Chappell

It sounds like you just aren't interested in ethical theory? It's completely standard in ethics (not just in the utilitarian tradition) to separately inquire into "reasons" (understood as *facts* that count in favor of an action) and "rationality" (or how to deal with uncertainty). There are lots of extant disagreements about what people ought to do *even in ideal conditions of perfect knowledge*, and a lot of ethicists are interested in pursuing those disagreements, bracketing issues about uncertainty.

Now, if you're ONLY interested in applied ethics and public policy, then a lot of ethical theory may look at first glance like "pointless theoretical rabbit holes". But fundamental ("ideal theory") disagreements may have important downstream implications for what we ought to do in practice. After all, it's hard to know how to respond appropriately to uncertainty if you can't even answer the easier question of what would be preferable in the absence of any uncertainty! If you're interested, here are a couple of posts where I get more into the background methodology of ethical theory:

* https://www.goodthoughts.blog/p/axiology-deontics-and-the-telic-question

* https://www.goodthoughts.blog/p/ethical-theory-and-practice

And why it's important to applied ethics:

* https://www.goodthoughts.blog/p/theory-driven-applied-ethics

* https://www.goodthoughts.blog/p/analytic-vs-conventional-bioethics

Some philosophical debates are empty/irrelevant (as I think the "debate" between objective and subjective oughts is—just different theorists talking past each other without realizing it). But this can be difficult to diagnose from the outside, without attempting to seriously grapple with what might be at stake in the debates. Other times, a debate *looks* "pointless" to outsiders simply because those outsiders lack the context for grasping (i) what the debate is actually about, and (ii) what implications it has for other questions that are more obviously worth caring about.