1. "Utopia being better than a barren rock doesn't really mean much in the real world because a utopia will never be achieved. Life is pretty dystopian already imo, so the author talking about a small chance life will be dystopian is also meaningless to me."
2. "The author isn't arguing in good faith. He is bargaining for pleasures at the price of what is moral. His opinion is therefore of no value. Positive utilitarians are nature's junkies, they will justify any suffering and harm as long as they get their feel good hit. Why would anyone trust their reasoning unless they also prefer to get high?"
Thanks for sharing! I don't expect anti-natalists to like my article or be convinced by it. After all, my starting premise is that they are crazy. The purpose of the article is to explain to *other* people how, if they share my starting premise, that should also influence how they think about some other important issues in philosophy.
I liked your blog and your perspective so much that I decided to share it with my friends, who also love discussing philosophy (specifically population ethics). You are one of the most articulate and on-the-point bloggers I've seen. Discussions with you bring me great pleasure and serve as food for thought. Most responses from my group so far are very positive, but one of my friends disagreed. Here is what she wrote:
"This is what I, Benatar, and most other antinatalists really mean by “we shouldn’t have been born”: it’s not that we believe that being not alive is better than being alive and thus should desire suicide, but we realize we couldn’t have existed and experienced “goodness” without inevitably some other people ending up having a miserable life. We thus realize that we and all of the pleasures and freedoms we experience have effectively existed at the expense of a few unlucky individuals being made miserable by unfortunately being given a miserable life via their birth. In order for them to have never experienced or continue to experience a miserable life, everyone on earth today or in the past had to have not been born. Most other antinatalists like me thus consider human procreation to be very immoral and no different in principle from the torture of a few for the benefit of the many, or preventing raped women from getting abortions so the fetuses inside of them can turn into people who then experience massive amounts of pleasure over the course of their lives. We thus believe that happy, content future people have no right to exist because they can only exist if miserable people are also brought into existence alongside them. This is akin to acknowledging that while Auschwitz today is a valuable museum that provides lots of value to both visitors and the employees working there, so it shouldn’t be demolished, it’s still a place that only existed because of the Holocaust and thus shouldn’t have existed; so we ought to ensure that similar places like Auschwitz do not arise in the future, even if they were to provide value after they stopped being used as concentration camps."
She says that this argument is her strongest one that she'd like to present, but that she doesn't have a Substack account.
Thanks, that's an interesting challenge. But I think it's misleading to present it as happy people exploiting miserable people by bringing them into existence. I don't think anyone ever deliberately creates a miserable child! What's going on is rather that life is a gamble.
Every morning we wake up, there's a (hopefully tiny) risk that something awful could happen to us. But most of us think that risk is worth taking. We don't want to die, just because our future isn't *guaranteed* to be good. We're willing to take our chances (and vigorously guard against risks that threaten to end our lives prematurely). And many of us -- I think reasonably and even generously -- decide to extend that gamble to our children, to give them a chance at life too. Even though it isn't guaranteed to work out, we think it's worth the shot.
It's tragic that sometimes the gamble fails, and some people (extremely few, I hope) end up so miserable that their life is "worse than nothing". In some cases, they may be able to mitigate the damage by voluntarily ending their life. (I think physician-assisted suicide should be available to aid those who are not physically capable of implementing the decision for themselves.) But sad as it is, this tragedy is as nothing compared to how tragic it would be for all the goodness in the world to cease to be -- to end all love, music, beauty, and knowledge; all striving, determination, creativity, and discovery; all gratitude, mentorship, kindness, and companionship. I'm more than willing to endure a proportionate share of sadness in exchange for a proportionate share of the great goods in life, and I would think it a kind of cowardice to choose otherwise.
We should certainly do what we can to prevent great harms from occurring in future, only... taking care not to equally prevent all that is good.
"I am very excited to speak with someone willing to provide philosophical pushback. The argument is solid, but I don't agree with the central premise. I don't think there is as much happiness and goods in life as is claimed. I think that a lot of the things we think we do in order to become happier, we actually do to avoid suffering from the deprivation.
For example:
1. We do not eat just for pleasure, but to prevent hunger. The fundamental thing about a meal is to calm hunger.
2. We do not have sex solely for pleasure but to avoid the pain caused by the frustration of unresolved sexual tension.
3. We do not look for a companion just for being happy together but also for not being sad and lonely.
4. We do not go on vacation once a year to a distant country only to enjoy new experiences, exotic foods, and paradisiacal places, but to avoid the boredom and frustration of staying in the usual city, always doing the same.
If I became convinced that people do the majority of their life activities in pursuit of happiness, pleasure, or eudaimonia and not because they want to avoid suffering associated with not doing said activities, then I would concede that antinatalism is philosophically undermined."
I find that argument puzzling, because a far more effective way to avoid frustration, hunger, etc., is to simply kill oneself, and yet most of us clearly have no wish to do that. The fact that we're still alive, and (more or less) happy to be alive, shows that we value positive things and not just the avoidance of negative things.
At root, the core question is just: "Is life worth it, all things considered?" We shouldn't assume that the answer will be the same for everyone. If someone's answer is "no," then that's really sad (for their sake) to hear. But obviously many of us answer more positively.
On the assumption that attitudes here are to some extent hereditary, it seems a good rule of thumb would be for miserable people to refrain from reproducing, and for people who are happy with their lives to go ahead and have kids (if they want to). I certainly wouldn't want to pressure an anti-natalist to change their personal decisions around reproduction. That's their choice. But I guess I do think it's irresponsible for them to discourage happier people from having kids, just based on their own personal experiences being negative. People vary!
My previous comment didn't go through for some reason. I broadly agree with your thoughts that utopia is better than a barren world. A picture of a lifeless world is the one from which I recoil. However, I am not yet convinced on the other part. We give surgery patients anesthesia to avert the agony they would feel if they remained conscious. Suppose some drug became available that gave people a joy as intense as the pain averted by anesthesia, and suppose that there were no drawbacks in the consumption of this drug. Doesn't it seem plausible to you that the provision of this drug would be less important than the administration of anesthesia?
Yeah, I find that plausible; but that's largely because pleasure doesn't strike me as an especially important good. A better example to illustrate that goods can outweigh bads is the wisdom of the old saying, "Better to have loved and lost than never to have loved at all."
Oh, that's very interesting. What goods would you consider to be the most important then? Is it like a plurality of wisdom, achievement, fulfillment, etc?
Sorry to start yet another thread, but I wanted to mention another thought that occurred to me while reading your post against subjectivism:
I agree that "Normative subjectivists have trouble accommodating the datum that all agents have reason to want to avoid future agony" gets at a real problem for subjectivism; but I find it telling that the strongest example you can come up with is avoiding pain. At least for me, my intuitions just really are very asymmetrical with respect to pleasure and pain, and I suspect you picked "avoiding future agony" rather than "achieving future joy" because you have the intuition that the former is a harder bullet to bite than the latter.
I think this asymmetry is why I feel intuitively bound to rank the unfortunate child against the void in a way I don't feel when it comes to the happy child; and why I don't like the idea of us turning ourselves into anti-humans, but I don't have a strong intuitive reaction against us choosing the void--I think our reasons for avoiding pain are much more convincing and inarguable than our reasons for pursuing pleasure.
I think in general, utilitarianism has a harder time working out the details of what should count toward *positive* utility--this may just be my impression, but I'd guess there's a lot more controversy over what counts as well-being, and what sorts of things contribute to it, and in what way, than over what sorts of things contribute to *negative* utility.
I think maybe the reason I think of pleasure and pain as asymmetric, then, is that I find utilitarianism's arguments much more convincing when talking about suffering; so maybe one doesn't need to adopt an extreme view like "all utility functions are bounded above by 0" to explain why it feels more intuitive to reason about preventing suffering than about promoting joy; maybe it's a matter of moral uncertainty: no plausible competitor can think it's good to let someone suffer pointlessly; that's more or less the strongest moral datum we have. But plausible competitors *can* disagree with utilitarian conclusions about well-being.
Yes, I agree that it's much more controversial exactly what contributes to positive well-being. (This isn't specific to utilitarianism.) FWIW, my own view is that positive hedonic states don't really matter all *that* much; they're nice and all, but the things that *really* make life worth living are more objective/external: things like loving relationships, achievements, etc. But as you note, that specific claim about the good is going to be much more controversial than "pain is bad", so it makes it a bit more difficult to make specific claims about what's worth pursuing. That's why I try to keep the claim more general: the best lives, *whatever it is* that makes them worth living, are better to have exist than to not exist.
That makes sense to me; I re-read your older post on killing vs. failing to create, and I think "strong personal reasons" to worry about people who will exist independently of our choices, vs. "weak impersonal reasons" to worry about bringing into existence future people is a distinction I find intuitive.
I think one thing I hadn't done a good job separating out is, in arguments contrasting the void with future Utopias, often the Utopias are stipulated to be filled with staggeringly large numbers of people, so that even with only weak impersonal reasons to create future lives, the number of people involved is big enough that the overall product is still a huge number--I think part of my intuitive rejection of this sort of reasoning is it feels a bit to me like a Pascal's mugging. But I was conflating that with a contrast between the void and Utopia *at all*.
And I guess the void still has the unique property that the void precludes *any* number of future people existing, so comparisons with it will always have something of a Pascal-ish quality.
Anyway, thanks for a very interesting discussion! I really appreciate your willingness to engage with amateurs like me, and I really enjoy the blog as a whole. I loved Reasons and Persons when I read it years ago, and I'm really glad I've found a place where I can not just follow, but even participate in, discussions on the same issues.
It's only a Pascal's mugging if the one making the argument can just make up any number they want, with no independent argument for an expected range. Some people peripherally involved in long-termist arguments online undoubtedly do this, but the central figures in long-termism do make independent arguments based upon the history and mathematics of population growth, technology and wealth growth, and predictions about the colonization of space.
That's a fair point; it's definitely a lot better that the numbers filling the postulated utopias are not just ex culo.
And I don't want to keep fighting this point on an otherwise dead thread, but I just want to articulate my feeling that, at least in the formulation above, there's still something fuzzy about the math: it's not clear how exactly to multiply "weak impersonal reasons" by large numbers (and, also of course, by the probability that these numbers are actually attainable) to come to clear conclusions, and it sometimes feels like the strength of these arguments derives from the stupefaction one feels at the largeness of the large numbers.
But, as I say, it's a pretty good reminder that actually, the large numbers are in some ways the least controversial part of that calculation--definitely in comparison to quantifications of "weak impersonal reasons", and probably in comparison to the probabilities too--they are not (usually) just picked to be stupendously large out of convenience, so thanks for pointing that out.
"One might infer that good states above a sufficient level have diminishing marginal value"
Can't one just restate the original gamble, but now with the Utopia stipulated to have arbitrarily large value, instead of whatever other good it was measured in before? If value itself is the unit of evaluation, then shouldn't a non-risk-averse person be indifferent between a decent world, and a 50/50 gamble with outcomes + N value, - N value, for any N?
Even if you think there is a maximum possible value (which as you note in the other post, has its own problems), it doesn't seem outrageous to me that the maximum would be large enough to admit a gamble of this form that would still be very counterintuitive for most people to actually accept over the alternative.
To the general point: I made a similar argument in the comments to an earlier post, but isn't it enough to note that most people have a preference for Utopia over the void, and argue that Utopia is better on the grounds that it satisfies our current preferences more? Does there need to be an *intrinsic* reason why Utopia is better than the void?
In general, the idea of intrinsic value seems odd to me. What appeals to me about consequentialism and utilitarianism is that they are very person-centric: utility is about what's good *for people*, unlike deontology or divine command or whatever, which center "goodness" somewhere else, somewhere outside of what actually affects and matters to people.
Obviously the above is too naive a conception of utilitarianism to be all that useful: we often face dilemmas where we have to decide how to evaluate situations that are good for some people but not for others, or where we face uncertainty over how good something is, or whether it's good at all, and so we need a more complex theory to help us deal with these issues.
But when contemplating the void, it feels to me like we aren't in one of these situations: there are no people in the void, and so no one for whom it could be good or bad; the only people for whom it can be good or bad are the people now who are contemplating it, and so we should be free to value it however we want, with no worry of our values coming into conflict with those of the people who live in that world. As it happens, we (mostly) currently very strongly dis-prefer the void--but there's no intrinsic reason we have to, and if we were to collectively change our minds on the point, that would be fine.
You could also restate the gamble in terms of *risk-adjusted value*, where +/- N risk-adjusted value is just whatever it takes for a risk-averse agent to be indifferent to the gamble. But I think these restatements aren't so troubling, because we no longer have a clear idea of what the gamble is supposed to be (and hence whether it's really a bad deal). If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.
> In general, the idea of intrinsic value seems odd to me.
It sounds like you may be mixing up issues in normative ethics and metaethics here. Intrinsic value, as a normative concept, is just the idea of something's being non-instrumentally desirable. While I happen to be a moral realist, and think there are objective facts about this, you could just as well endorse my claims while being an expressivist. In that case, when you say "X is intrinsically valuable", you're just expressing that you're in favour of people non-instrumentally desiring X. So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable". There's nothing metaphysical about it. It's just a normative claim.
> "[Why not] argue that Utopia is better on the grounds that it satisfies our current preferences more?"
Well, depending on who counts in the "we", just imagine it were otherwise. Suppose you were surrounded by anti-natalists. Mightn't you nonetheless oppose their view, and want utopia to come into existence? I sure would! As a moral realist, I happen to think this is also the *correct* attitude to have. But even if I were an expressivist, I wouldn't want my support for utopia to be conditional on others' attitudes (or even my own: I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia").
> "there are no people in the void, and so no one for whom it could be good or bad... and so we should be free to value it however we want"
This seems wrong. Suppose that Sally is considering having a child with a genetic condition that would cause it unbearable suffering. Clearly, it would be wrong to bring the miserable child into existence. The void is better. There's just no denying that negative welfare is impersonally and intrinsically bad: we have to oppose it, even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.
By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good. Just as the miserable child would be (non-comparatively) harmed by being brought into existence, and we should oppose that, so a wonderfully happy child would be (non-comparatively) benefited by being brought into existence, and we should support that. So, these conclusions are forced on us by a simple mixture of (i) basic decency and (ii) intellectual consistency.
> If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.
Oh sure, I agree that you can't avoid having to pick some gamble like that. I guess the question is, does the move to diminishing marginal value matter here, or do we just want to say something like, yes, expected-value-maximization says we should take some gamble of this form, but
a) your alternative pet theory probably does the same, a la your "Puzzles for Everyone" post, and
b) we shouldn't imagine we are conceptualizing both ends of the gamble correctly, so we should be wary of relying too heavily on our intuition here.
> So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable".
I'm not sure I totally understand--how is this different from the expressivist just having a preference for future good lives? I suppose from their point of view, they would say "I don't think this is good just because it satisfies my preferences", but from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.
> I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia".
I guess this is where we disagree. I am basically fine with the idea of the anti-natalists winning, as long as they do so by honourable means.
> even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.
> By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good.
I agree that *if* we bring a child into the world, and they love their life, we can regard that as a benefit for the child...but only conditional on bringing them into the world. If I had to summarize the view I think I'm arguing for, it would be something like, "you only have to care about the benefits/harms to a person in the worlds where they actually exist"--so Sally's child is harmed by being "forced" to live in a world where they will suffer; and a person with a good life is benefited by being born in any of the worlds in which they are, in fact, born. But in the worlds where a person is not born, we don't have to weight their benefits/harms in our calculation of what to do. We can *choose* to do so, as a matter of our personal preferences, or for other instrumental reasons, but I don't see why there is any intrinsic reason to do so.
Quick counterexample to your last claim: suppose Sally flips a coin to decide whether to create a miserable child. Fortunately, the coin directs her not to. But now your view implies that Sally needn't have taken into account the interests of the child who would've been miserable. But this seems wrong. Sally was wrong to flip a coin, and take a 50% risk of causing such suffering. She should have outright (100%) chosen not to have the miserable child, and she should have done it out of concern for that child's interests.
> from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.
Yeah, I'm no expert on expressivism, but a couple of possibilities:
(1) The relevant thing might be that it's a special kind of universal higher-order preference: they want *everyone* to have the relevant first-order preference.
(2) Alternatively, it might be that they're in favour of blaming or otherwise morally criticizing people who don't have the relevant first-order preference.
Sorry, I realized overnight that I missed the point that, in the example where we don't create the child, the void is ranked against the world where the miserable child is born; if we can do a comparison in that case, why not in the other case?
That actually feels pretty convincing to me; I still feel conflicted about this, but I think if I really want to believe that the void isn't worse than Utopia I really do need an explicit person-affecting view, or to have an explicit asymmetry between negative welfare and positive welfare.
Ha, well, I also don't want to downplay or sugarcoat how bad I think the view is. Sometimes philosophers defend crazy things! I think this is one of those times. Saying so is apt to make some people feel insulted, so I figured I should acknowledge that while clarifying that I'm not *aiming* to make anyone feel bad. But I'm basically OK with it being a foreseen side-effect of clearly conveying (i) how bad the view is, and hence (ii) why others should generally be on board with taking its rejection as a non-negotiable premise (as is needed for the rest of the post to get off the ground).
I am not sure if there is some issue with my account or if you just haven't gotten around to responding to the questions that I posted in the comments to this post. I would genuinely appreciate a discussion with you, as this is one of the most fascinating topics in the entire field of philosophy (at least for me).
Question 1. What are the positive non-relieving goods that can outweigh suffering? Or in other words, what has positive intrinsic value in the same way that suffering has negative intrinsic value?
Question 2. Consider a world that contains many creatures, some of whom are flourishing and some of whom are suffering. As it happens, the world has a net neutral balance of happiness and suffering. Wouldn't your view imply that it would be preferable to destroy such a world rather than add the smallest unit of suffering (like a pinprick)?
Question 3. Whenever someone is suffering, this fact includes a call to relieve this suffering. However, when someone is flourishing, this fact doesn't include a call to increase the flourishing. Doesn't this prove a really important asymmetry?
(1) Positive well-being, i.e. whatever makes life "worth living". People can reasonably disagree about precisely what that consists in -- see https://utilitarianism.net/theories-of-wellbeing/ -- but I would include goods such as happiness, loving relationships, and achievement.
I wouldn't hesitate to endorse living a life that contains some suffering alongside vastly more of these welfare goods. Indeed, I think the view that *there are only bads, no goods*, such that no life is positively "worth living" at all, is among the most insane philosophical views I've ever heard proposed. (Just reporting my judgment here, no offense intended.)
(2) Assuming no possibility of future change, and by "happiness" you also mean to include non-hedonic welfare goods, then sure. Any view on which there is good and bad will presumably imply that there is some point at which one more bit of bad would make the world bad overall, i.e. worse than nothing. (Though on some views there could be an element of vagueness or imprecision as to the location of the threshold.)
(3) I'm not sure what you mean by "includes a call". We have normative reason to relieve suffering (suffering is such that we should want it gone). But we equally have normative reason to promote flourishing (flourishing is such that we should want it present). Doesn't that mean that the flourishing "includes a call" to pursue it, or see it realized? If someone is only flourishing a bit, I think the potential for better does indeed "call" us to realize that potential. So no, I don't see any important asymmetry here.
In valuing entire populations, the majority’s intuitions are asymmetric about happiness and suffering. If this asymmetry is applied to the life evaluations of the World Happiness Report, then the aggregated total turns negative. I have written an article about it (I will post a link to it below). An unbiased evaluation would be even more negative, because the most suffering people don’t participate in surveys. From a strictly impartial view the predominance of suffering is pretty much confirmed. I believe that this makes for a very strong case for antinatalism. If you are arguing against anti-procreation doctrine, then I think you'd do well to address my argument.
Why would global averages be relevant to a specific (non-average) couple's procreative decisions?
I'd suggest that miserable people avoid procreating, and happy people have more kids, and the predictable result will be more happy and fewer miserable people.
“taking it as a premise that positive intrinsic value is possible (utopia is better than a barren rock), “
Is one an application of the other, or are they unrelated? I can think that utopia is better than a barren rock without accepting anything about intrinsic value. Am I just using the terms differently?
How is intrinsic value different from utility? I guess instrumental value counts as utility also, although it derives its utility from the end to which it serves as means.
In this context, would extrinsic value and instrumental value be the same thing?
I agree with this, with one exception. I think that it is, in fact, possible to argue people out of the 'pleasure isn't good, but pain is bad' position. Among other things, even worse than implying utopia is worse than a barren rock, it implies it would be morally neutral to press a button that would make no future people ever happy again--and that utopia is no better than everyone just living slightly worthwhile lives with no suffering. That a life filled with love, good food, and general joy is no better than Muzak and potatoes.
This argument works against a crude statement like "pain bad, pleasure neutral," but fails against the following formulations:
(1) All conscious existence has negative value. What we call "pleasure" can make it less negative, and sufficient quantities of "love, good food, and general joy" can help the value of a life asymptotically approach the zero level, but they can't make existence better than nonexistence.
(2) Lexical negative utilitarianism and related axiologies. (e.g. Pleasure is good, but not good enough to offset even trivial amounts of pain.)
> (1) All conscious existence has negative value. What we call "pleasure" can make it less negative, and sufficient quantities of "love, good food, and general joy" can help the value of a life asymptotically approach the zero level, but they can't make existence better than nonexistence.
This seems like an extreme formulation to me, but I admit that something a little like it has at least some intuitive appeal to me; I often feel that I'm attracted to a sort of "palliative" version of utilitarianism: an ethics that tries to offer comfort and the easing of suffering. Whereas more "positive" formulations of utilitarianism leave me cold; they often leave me feeling like we are doing a "make line go up" for the sake of the *Universe* rather than for the sake of the people living within it--it feels much more right to me to say, "while we're here, we have a duty to make the world more pleasant and livable" than to say "we have a duty to remain here, and make the universe a certain way, even if no one wants it that way". But I think what this discussion makes me realize is that it might be very hard or even impossible to formulate a logically consistent version of my view without resorting to an extreme position like the (1) you articulate above.
>"very hard or even impossible to formulate a logically consistent version without resort to an extreme position like the position (1)"
For what it's worth, my own view is that trying to develop a logically consistent ethical system is a fundamentally misguided project, and that the ever-present temptation to borrow metaphors from mathematics (even basic ones like "good ~ positive" and "bad ~ negative") is especially likely to lead astray.
In addition to my last comment, here are some responses from a different post (https://www.reddit.com/r/negativeutilitarians/comments/1gl5m9g/dont_valorize_the_void/)
3. "The people with horrible lives that will be created until we achieve utopia are the kid in the basement."
4. "The problem with this argument is that it doesn't recognize the fact that societies and civilizations go through cycles of booms and busts, so even if we achieve "utopia" it would be unwise to assume that we would not regress again."
I agree with #3. Omelas is a very good place, and it's deeply irrational to condemn it. We can demonstrate this by noting that from behind a veil of ignorance, where you had an equal chance to be any affected individual (including the kid in the basement), it would be prudent to gamble on Omelas. Far better than any real society that has ever existed; and also better than embracing the void and not existing at all.
#4 doesn't understand thought experiments.
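The veil-of-ignorance reasoning above is just an expected-value calculation, and it can be put in numbers. A minimal sketch; the welfare levels and head-counts here are entirely hypothetical assumptions of mine, chosen only to show the structure of the argument, not anything from the post:

```python
def expected_welfare(population):
    """Expected welfare of a random member of `population`,
    given as a mapping from welfare level to head-count."""
    total = sum(population.values())
    return sum(level * count for level, count in population.items()) / total

# Omelas: one miserable child (welfare -100) among 9,999 flourishing
# citizens (welfare +50).  All numbers are made up for illustration.
omelas = {50: 9_999, -100: 1}

# A "real society" with a mix of welfare levels (also made up).
real_society = {30: 6_000, 5: 3_000, -40: 1_000}

void = 0.0  # nonexistence: no welfare either way

# From behind the veil, the Omelas gamble beats both alternatives.
assert expected_welfare(omelas) > expected_welfare(real_society) > void
```

On these stipulated numbers, a random draw from Omelas has expected welfare of about 49.99 versus 15.5 for the mixed society, which is the sense in which the gamble is "prudent"; the force of the example of course depends entirely on the stipulations.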
ChatGPT wrote a poem about this argument.
Person affecting views in population ethics,
Raise objections with their implications,
A barren rock, just as good as utopia?
Such claims bring forth ethical complications.
How can we value an empty place,
Over a world full of love and grace?
Is it fair to equate a void to paradise,
And make both seem like they're in the same place?
The value of life is in its living,
Its richness, joy, and love it's giving,
To suggest a barren rock is equal to utopia,
Is to ignore the beauty in living.
A utopia might not exist today,
But the hope for a better world leads the way,
To settle for less and call it the same,
Is to make morality just a game.
Let us strive for the best we can achieve,
For a world of happiness and love we can conceive,
Where every life has value and meaning,
And barren rocks remain barren, unfeeling.
Hello, Richard! This article has been discussed on an antinatalist subreddit (https://www.reddit.com/r/antinatalism2/comments/1gda247/dont_valorize_the_void/), so I decided to share the critiques to give you a fair chance to respond.
1. "Utopia being better than a barren rock doesn't really mean much in the real world because a utopia will never be achieved. Life is pretty dystopian already imo, so the author talking about a small chance life will be dystopian is also meaningless to me."
2. "The author isn't arguing in good faith. He is bargaining for pleasures at the price of what is moral. His opinion is therefore of no value. Positive utilitarians are nature's junkies; they will justify any suffering and harm as long as they get their feel-good hit. Why would anyone trust their reasoning unless they also prefer to get high?"
Thanks for sharing! I don't expect anti-natalists to like my article or be convinced by it. After all, my starting premise is that they are crazy. The purpose of the article is to explain to *other* people how, if they share my starting premise, that should also influence how they think about some other important issues in philosophy.
I liked your blog and your perspective so much that I decided to share it with my friends, who also love discussing philosophy (specifically population ethics). You are one of the most articulate and on-the-point bloggers I've seen. Discussions with you bring me great pleasure and serve as food for thought. Most responses from my group so far are very positive, but one of my friends disagreed. Here is what she wrote:
"This is what I, Benatar, and most other antinatalists really mean by “we shouldn’t have been born”: it’s not that we believe that not being alive is better than being alive and thus should desire suicide, but that we realize we couldn’t have existed and experienced “goodness” without some other people inevitably ending up having a miserable life. We thus realize that we and all of the pleasures and freedoms we experience have effectively existed at the expense of a few unlucky individuals being made miserable by unfortunately being given a miserable life via their birth. In order for them to have never experienced, or to not continue to experience, a miserable life, everyone on earth today or in the past had to have not been born. Most other antinatalists like me thus consider human procreation to be very immoral and no different in principle from the torture of a few to the benefit of the many, or from preventing raped women from getting abortions so the fetuses inside of them can turn into people who then experience massive amounts of pleasure over the course of their lives. We thus believe that happy, content future people have no right to exist, because they can only exist if miserable people are also brought into existence alongside them. This is akin to acknowledging that while Auschwitz today is a valuable museum that provides lots of value to both visitors and the employees working there, so it shouldn’t be demolished, it’s still a place that only existed because of the Holocaust and thus shouldn’t have existed; so we ought to ensure that similar places like Auschwitz do not arise in the future, even if they were to provide value after they stopped being used as concentration camps."
She says that this argument is her strongest one that she'd like to present, but that she doesn't have a Substack account.
Thanks, that's an interesting challenge. But I think it's misleading to present it as happy people exploiting miserable people by bringing them into existence. I don't think anyone ever deliberately creates a miserable child! What's going on is rather that life is a gamble.
Every morning we wake up, there's a (hopefully tiny) risk that something awful could happen to us. But most of us think that risk is worth taking. We don't want to die, just because our future isn't *guaranteed* to be good. We're willing to take our chances (and vigorously guard against risks that threaten to end our lives prematurely). And many of us -- I think reasonably and even generously -- decide to extend that gamble to our children, to give them a chance at life too. Even though it isn't guaranteed to work out, we think it's worth the shot.
It's tragic that sometimes the gamble fails, and some people (extremely few, I hope) end up so miserable that their life is "worse than nothing". In some cases, they may be able to mitigate the damage by voluntarily ending their life. (I think physician-assisted suicide should be available to aid those who are not physically capable of implementing the decision for themselves.) But sad as it is, this tragedy is as nothing compared to how tragic it would be for all the goodness in the world to cease to be -- to end all love, music, beauty, and knowledge; all striving, determination, creativity, and discovery; all gratitude, mentorship, kindness, and companionship. I'm more than willing to endure a proportionate share of sadness in exchange for a proportionate share of the great goods in life, and I would think it a kind of cowardice to choose otherwise.
We should certainly do what we can to prevent great harms from occurring in future, only... taking care not to equally prevent all that is good.
This is what she sent to me!
"I am very excited to speak with someone willing to provide philosophical pushback. The argument is solid, but I don't agree with the central premise. I don't think there is as much happiness and goods in life as is claimed. I think that a lot of the things we think we do in order to become happier, we actually do to avoid suffering from the deprivation.
For example:
1. We do not eat just for pleasure, but to prevent hunger. The fundamental thing about a meal is to calm hunger.
2. We do not have sex solely for pleasure but to avoid the pain caused by the frustration of unresolved sexual tension.
3. We do not look for a companion just for being happy together but also for not being sad and lonely.
4. We do not go on vacation once a year to a distant country only to enjoy new experiences, exotic foods, and paradisiacal places, but to avoid the boredom and frustration of staying in the usual city, always doing the same.
If I became convinced that people do the majority of their life activities in pursuit of happiness, pleasure, or eudaimonia and not because they want to avoid suffering associated with not doing said activities, then I would concede that antinatalism is philosophically undermined."
I find that argument puzzling, because a far more effective way to avoid frustration, hunger, etc., is to simply kill oneself, and yet most of us clearly have no wish to do that. The fact that we're still alive, and (more or less) happy to be alive, shows that we value positive things and not just the avoidance of negative things.
At root, the core question is just: "Is life worth it, all things considered?" We shouldn't assume that the answer will be the same for everyone. If someone's answer is "no," then that's really sad (for their sake) to hear. But obviously many of us answer more positively.
On the assumption that attitudes here are to some extent hereditary, it seems a good rule of thumb would be for miserable people to refrain from reproducing, and for people who are happy with their lives to go ahead and have kids (if they want to). I certainly wouldn't want to pressure an anti-natalist to change their personal decisions around reproduction. That's their choice. But I guess I do think it's irresponsible for them to discourage happier people from having kids, just based on their own personal experiences being negative. People vary!
This is a great and very inspiring response! I will be sure to tell her about it. Thanks!
My previous comment didn't go through for some reason. I broadly agree with your thoughts that utopia is better than a barren world. A picture of a lifeless world is the one from which I recoil. However, I am not yet convinced of the other part. We give surgery patients anesthesia to avert the agony they would feel if they remained conscious. Suppose some drug became available that gave people a joy as intense as the pain averted by anesthesia, and suppose that there were no drawbacks in the consumption of this drug. Doesn't it seem plausible to you that the provision of this drug would be less important than the administration of anesthesia?
Yeah, I find that plausible; but that's largely because pleasure doesn't strike me as an especially important good. A better example to illustrate that goods can outweigh bads is the wisdom of the old saying, "Better to have loved and lost than never to have loved at all."
Oh, that's very interesting. What goods would you consider to be the most important then? Is it like a plurality of wisdom, achievement, fulfillment, etc?
Sorry to start yet another thread, but I wanted to mention another thought that occurred to me while reading your post against subjectivism:
I agree that "Normative subjectivists have trouble accommodating the datum that all agents have reason to want to avoid future agony" gets at a real problem for subjectivism; but I find it telling that the strongest example you can come up with is avoiding pain. At least for me, my intuitions just really are very asymmetrical with respect to pleasure and pain, and I suspect you picked "avoiding future agony" rather than "achieving future joy" because you have the intuition that the former is a harder bullet to bite than the latter.
I think this asymmetry is why I feel intuitively bound to rank the unfortunate child against the void in a way I don't feel when it comes to the happy child; and why I don't like the idea of us turning ourselves into anti-humans, but I don't have a strong intuitive reaction against us choosing the void--I think our reasons for avoiding pain are much more convincing and inarguable than our reasons for pursuing pleasure.
I think in general, utilitarianism has a harder time working out the details of what should count toward *positive* utility--this may just be my impression, but I'd guess there's a lot more controversy over what counts as well-being, and what sorts of things contribute to it, and in what way, than over what sorts of things contribute to *negative* utility.
I think maybe the reason I think of pleasure and pain as asymmetric, then, is that I find utilitarianism's arguments much more convincing when talking about suffering; so maybe one doesn't need to adopt an extreme view like "all utility functions are bounded above by 0" to explain why it feels more intuitive to reason about preventing suffering than about promoting joy; maybe it's a matter of moral uncertainty: no plausible competitor can think it's good to let someone suffer pointlessly; that's more or less the strongest moral datum we have. But plausible competitors *can* disagree with utilitarian conclusions about well-being.
Yes, I agree that it's much more controversial exactly what contributes to positive well-being. (This isn't specific to utilitarianism.) FWIW, my own view is that positive hedonic states don't really matter all *that* much; they're nice and all, but the things that *really* make life worth living are more objective/external: things like loving relationships, achievements, etc. But as you note, that specific claim about the good is going to be much more controversial than "pain is bad", so it makes it a bit more difficult to make specific claims about what's worth pursuing. That's why I try to keep the claim more general: the best lives, *whatever it is* that makes them worth living, are better to have exist than to not exist.
That makes sense to me; I re-read your older post on killing vs. failing to create, and I think "strong personal reasons" to worry about people who will exist independently of our choices, vs. "weak impersonal reasons" to worry about bringing into existence future people is a distinction I find intuitive.
I think one thing I hadn't done a good job separating out is, in arguments contrasting the void with future Utopias, often the Utopias are stipulated to be filled with staggeringly large numbers of people, so that even with only weak impersonal reasons to create future lives, the number of people involved is big enough that the overall product is still a huge number--I think part of my intuitive rejection of this sort of reasoning is it feels a bit to me like a Pascal's mugging. But I was conflating that with a contrast between the void and Utopia *at all*.
And I guess the void still has the unique property that the void precludes *any* number of future people existing, so comparisons with it will always have something of a Pascal-ish quality.
Anyway, thanks for a very interesting discussion! I really appreciate your willingness to engage with amateurs like me, and I really enjoy the blog as a whole. I loved Reasons and Persons when I read it years ago, and I'm really glad I've found a place where I can not just follow, but even participate in, discussions on the same issues.
It's only a Pascal's mugging if the one making the argument can just make up any number they want, with no independent argument for an expected range. Some people peripherally involved in long-termist arguments online undoubtedly do this, but the central figures in long-termism do make independent arguments based upon the history and mathematics of population growth, technology and wealth growth, and predictions about the colonization of space.
That's a fair point; it's definitely a lot better that the numbers filling the postulated utopias are not just ex culo.
And I don't want to keep fighting this point on an otherwise dead thread, but I just want to articulate my feeling that, at least in the formulation above, there's still something fuzzy about the math: it's not clear how exactly to multiply "weak impersonal reasons" by large numbers (and, also of course, by the probability that these numbers are actually attainable) to come to clear conclusions, and it sometimes feels like the strength of these arguments derives from the stupefaction one feels at the largeness of the large numbers.
But, as I say, it's a pretty good reminder that actually, the large numbers are in some ways the least controversial part of that calculation--definitely in comparison to quantifications of "weak impersonal reasons", and probably in comparison to the probabilities too--they are not (usually) just picked to be stupendously large out of convenience, so thanks for pointing that out.
"One might infer that good states above a sufficient level have diminishing marginal value"
Can't one just restate the original gamble, but now with the Utopia stipulated to have arbitrarily large value, instead of whatever other good it was measured in before? If value itself is the unit of evaluation, then shouldn't a non-risk-averse person be indifferent between a decent world, and a 50/50 gamble with outcomes + N value, - N value, for any N?
Even if you think there is a maximum possible value (which as you note in the other post, has its own problems), it doesn't seem outrageous to me that the maximum would be large enough to admit a gamble of this form that would still be very counterintuitive for most people to actually accept over the alternative.
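To make the structure of the restated gamble explicit, here's a toy sketch. I'm assuming (my reading, not stated above) that the 50/50 outcomes are d + N and d - N relative to the decent world's value d, so that a pure expected-value maximizer is exactly indifferent at any scale; the exponential utility function used for the risk-averse agent is a standard textbook choice, not anything from this discussion:

```python
import math

def expected_value(d, n):
    # 50/50 gamble: outcomes d + n and d - n, so the mean is always d.
    return 0.5 * (d + n) + 0.5 * (d - n)

# A pure expected-value maximizer is indifferent at every scale of n.
for n in (1, 1_000, 1_000_000):
    assert expected_value(d=100, n=n) == 100

# The "risk-adjusted value" move: apply a concave utility to value
# itself (exponential utility here) and ask what sure amount would
# leave the agent indifferent to the gamble.
def certainty_equivalent(d, n, a=0.001):
    eu = 0.5 * -math.exp(-a * (d + n)) + 0.5 * -math.exp(-a * (d - n))
    return -math.log(-eu) / a

assert certainty_equivalent(100, 10) < 100  # gamble worth less than d
assert certainty_equivalent(100, 1_000) < certainty_equivalent(100, 10)
```

The certainty equivalent falling further below d as N grows is the formal version of why a risk-averse agent refuses arbitrarily scaled-up versions of the gamble even while its expected value stays fixed.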
To the general point: I made a similar argument in the comments to an earlier post on a similar topic, but isn't it enough to note that most people have a preference for Utopia over the void, and argue that Utopia is better on the grounds that it satisfies our current preferences more? Does there need to be an *intrinsic* reason why Utopia is better than the void?
In general, the idea of intrinsic value seems odd to me. What appeals to me about consequentialism and utilitarianism is that they are very person-centric: utility is about what's good *for people*, unlike deontology or divine command or whatever, which center "goodness" somewhere else, somewhere outside of what actually affects and matters to people.
Obviously the above is too naive a conception of utilitarianism to be all that useful: we often face dilemmas where we have to decide how to evaluate situations that are good for some people but not for others, or where we face uncertainty over how good something is, or whether it's good at all, and so we need a more complex theory to help us deal with these issues.
But when contemplating the void, it feels to me like we aren't in one of these situations: there are no people in the void, and so no one for whom it could be good or bad; the only people for whom it can be good or bad are the people now who are contemplating it, and so we should be free to value it however we want, with no worry of our values coming into conflict with those of the people who live in that world. As it happens, we (mostly) currently very strongly dis-prefer the void--but there's no intrinsic reason we have to, and if we were to collectively change our minds on the point, that would be fine.
You could also restate the gamble in terms of *risk-adjusted value*, where +/- N risk-adjusted value is just whatever it takes for a risk-averse agent to be indifferent to the gamble. But I think these restatements aren't so troubling, because we no longer have a clear idea of what the gamble is supposed to be (and hence whether it's really a bad deal). If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.
> In general, the idea of intrinsic value seems odd to me.
It sounds like you may be mixing up issues in normative ethics and metaethics here. Intrinsic value, as a normative concept, is just the idea of something's being non-instrumentally desirable. While I happen to be a moral realist, and think there are objective facts about this, you could just as well endorse my claims while being an expressivist. In that case, when you say "X is intrinsically valuable", you're just expressing that you're in favour of people non-instrumentally desiring X. So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable". There's nothing metaphysical about it. It's just a normative claim.
> "[Why not] argue that Utopia is better on the grounds that it satisfies our current preferences more?"
Well, depending on who counts in the "we", just imagine it were otherwise. Suppose you were surrounded by anti-natalists. Mightn't you nonetheless oppose their view, and want utopia to come into existence? I sure would! As a moral realist, I happen to think this is also the *correct* attitude to have. But even if I were an expressivist, I wouldn't want my support for utopia to be conditional on others' attitudes (or even my own: I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia").
> "there are no people in the void, and so no one for whom it could be good or bad... and so we should be free to value it however we want"
This seems wrong. Suppose that Sally is considering having a child with a genetic condition that would cause it unbearable suffering. Clearly, it would be wrong to bring the miserable child into existence. The void is better. There's just no denying that negative welfare is impersonally and intrinsically bad: we have to oppose it, even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.
By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good. Just as the miserable child would be (non-comparatively) harmed by being brought into existence, and we should oppose that, so a wonderfully happy child would be (non-comparatively) benefited by being brought into existence, and we should support that. So, these conclusions are forced on us by a simple mixture of (i) basic decency and (ii) intellectual consistency.
Thanks for the very good response!
> If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.
Oh sure, I agree that you can't avoid having to pick some gamble like that. I guess the question is, does the move to diminishing marginal value matter here, or do we just want to say something like, yes, expected-value-maximization says we should take some gamble of this form, but
a) your alternative pet theory probably does the same, a la your "Puzzles for Everyone" post, and
b) we shouldn't imagine we are conceptualizing both ends of the gamble correctly, so we should be wary of relying too heavily on our intuition here.
> So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable".
I'm not sure I totally understand--how is this different from the expressivist just having a preference for future good lives? I suppose from their point of view, they would say "I don't think this is good just because it satisfies my preferences", but from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.
> I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia".
I guess this is where we disagree. I am basically fine with the idea of the anti-natalists winning, as long as they do so by honourable means.
> even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.
> By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good.
I agree that *if* we bring a child into the world, and they love their life, we can regard that as a benefit for the child...but only conditional on bringing them into the world. If I had to summarize the view I think I'm arguing for, it would be something like, "you only have to care about the benefits/harms to a person in the worlds where they actually exist"--so Sally's child is harmed by being "forced" to live in a world where they will suffer; and a person with a good life is benefited by being born in any of the worlds in which they are, in fact, born. But in the worlds where a person is not born, we don't have to weight their benefits/harms in our calculation of what to do. We can *choose* to do so, as a matter of our personal preferences, or for other instrumental reasons, but I don't see why there is any intrinsic reason to do so.
Quick counterexample to your last claim: suppose Sally flips a coin to decide whether to create a miserable child. Fortunately, the coin directs her not to. But now your view implies that Sally needn't have taken into account the interests of the child who would've been miserable. But this seems wrong. Sally was wrong to flip a coin, and take a 50% risk of causing such suffering. She should have outright (100%) chosen not to have the miserable child, and she should have done it out of concern for that child's interests.
> from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.
Yeah, I'm no expert on expressivism, but a couple of possibilities:
(1) The relevant thing might be that it's a special kind of universal higher-order preference: they want *everyone* to have the relevant first-order preference.
(2) Alternatively, it might be that they're in favour of blaming or otherwise morally criticizing people who don't have the relevant first-order preference.
(There may be other options!)
Sorry, I realized overnight that I missed the point that in the example where we don't create the child, the void is ranked against the world in which the miserable child is born; if we can make a comparison in that case, why not in the other case?
That actually feels pretty convincing to me; I still feel conflicted about this, but I think if I really want to believe that the void isn't worse than Utopia I really do need an explicit person-affecting view, or to have an explicit asymmetry between negative welfare and positive welfare.
>"Saying this risks coming off as insulting, but I don’t mean it that way"
>[the next paragraph:] "it’s insane to deny this premise... purely negative ethical views are insane"
(I think the "not being insulting" thing may need a little work here)
Ha, well, I also don't want to downplay or sugarcoat how bad I think the view is. Sometimes philosophers defend crazy things! I think this is one of those times. Saying so is apt to make some people feel insulted, so I figured I should acknowledge that while clarifying that I'm not *aiming* to make anyone feel bad. But I'm basically OK with it being a foreseen side-effect of clearly conveying (i) how bad the view is, and hence (ii) why others should generally be on board with taking its rejection as a non-negotiable premise (as is needed for the rest of the post to get off the ground).
>"a foreseen side-effect"
Now you're wounding like a deontologist!
(I originally meant to type "sounding like a deontologist," but I think it works this way as well.)
Folks over at Center for Reducing Suffering have recently written an article critiquing this very article here: https://reducingsuffering.substack.com/p/does-suffering-focused-ethics-valorize
Not sure if they notified you about it to give you a chance to respond, but in case they didn't, I am mentioning it here.
I am not sure if there is some issue with my account or if you just haven't gotten around to responding to my questions that I posted in the comments to this post. I would genuinely appreciate a discussion with you, as this is one of the most fascinating topics in the entire field of philosophy (at least for me).
I have 3 additional questions about your view:
Question 1. What are the positive non-relieving goods that can outweigh suffering? Or in other words, what has positive intrinsic value in the same way that suffering has negative intrinsic value?
Question 2. Consider a world that contains many creatures, some of whom are flourishing and some of whom are suffering. As it happens, the world has a net neutral balance of happiness and suffering. Wouldn't your view imply that it would be preferable to destroy such a world rather than add the smallest unit of suffering (like a pinprick)?
Question 3. Whenever someone is suffering, this fact includes a call to relieve this suffering. However, when someone is flourishing, this fact doesn't include a call to increase the flourishing. Doesn't this prove a really important asymmetry?
-Bruno Contestabile
(1) Positive well-being, i.e. whatever makes life "worth living". People can reasonably disagree about precisely what that consists in -- see https://utilitarianism.net/theories-of-wellbeing/ -- but I would include goods such as happiness, loving relationships, and achievement.
I wouldn't hesitate to endorse living a life that contains some suffering alongside vastly more of these welfare goods. Indeed, I think the view that *there are only bads, no goods*, such that no life is positively "worth living" at all, is among the most insane philosophical views I've ever heard proposed. (Just reporting my judgment here, no offense intended.)
(2) Assuming no possibility of future change, and by "happiness" you also mean to include non-hedonic welfare goods, then sure. Any view on which there is good and bad will presumably imply that there is some point at which one more bit of bad would make the world bad overall, i.e. worse than nothing. (Though on some views there could be an element of vagueness or imprecision as to the location of the threshold.)
(3) I'm not sure what you mean by "includes a call". We have normative reason to relieve suffering (suffering is such that we should want it gone). But we equally have normative reason to promote flourishing (flourishing is such that we should want it present). Doesn't that mean that the flourishing "includes a call" to pursue it, or see it realized? If someone is only flourishing a bit, I think the potential for better does indeed "call" us to realize that potential. So no, I don't see any important asymmetry here.
In valuing entire populations, the majority’s intuitions are asymmetric about happiness and suffering. If this asymmetry is applied to the life evaluations of the World Happiness Report, then the aggregated total turns negative. I have written an article about it (I will post a link to it below). An unbiased evaluation would be even more negative, because the most suffering people don’t participate in surveys. From a strictly impartial view the predominance of suffering is pretty much confirmed. I believe that this makes for a very strong case for antinatalism. If you are arguing against anti-procreation doctrine, then I think you'd do well to address my argument.
https://www.socrethics.com/Folder2/Prevalence-of-Suffering.htm
-Bruno Contestabile
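The aggregation move described in the comment above can be sketched concretely. All numbers here are made up for illustration (an assumed neutral point and an assumed suffering weight); they are not World Happiness Report data, just a toy demonstration of how an asymmetric weighting can flip a net-positive total negative:

```python
NEUTRAL = 6.0           # assumed neutral point on the 0-10 ladder
SUFFERING_WEIGHT = 3.0  # assumed asymmetry: suffering counts 3x

def asymmetric_total(scores, neutral=NEUTRAL, w=SUFFERING_WEIGHT):
    """Sum life-evaluation scores recentered at `neutral`, weighting
    below-neutral scores by `w` to reflect the claimed asymmetry."""
    total = 0.0
    for s in scores:
        delta = s - neutral
        total += delta if delta >= 0 else w * delta
    return total

scores = [8.0, 7.0, 5.5, 5.0]  # hypothetical survey scores
plain_total = sum(s - NEUTRAL for s in scores)

assert plain_total > 0               # symmetric aggregation: net positive
assert asymmetric_total(scores) < 0  # asymmetric weighting: net negative
```

Whether the real totals turn negative obviously depends on the actual neutral point and weight; the sketch only shows the mechanism the comment appeals to.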
Why would global averages be relevant to a specific (non-average) couple's procreative decisions?
I'd suggest that miserable people avoid procreating, and happy people have more kids, and the predictable result will be more happy and fewer miserable people.
“taking it as a premise that positive intrinsic value is possible (utopia is better than a barren rock)”
Is one an application of the other, or are they unrelated? I can think that utopia is better than a barren rock without accepting anything about intrinsic value. Am I just using the terms differently?
How is intrinsic value different from utility? I guess instrumental value counts as utility also, although it derives its utility from the end to which it serves as means.
In this context, would extrinsic value and instrumental value be the same thing?
I agree with this, with one exception. I think that it is, in fact, possible to argue people out of the 'pleasure isn't good, but pain is bad' position. Among other things, even worse than implying utopia is worse than a barren rock, it implies it would be morally neutral to press a button that would make no future people ever happy again--and that utopia is no better than everyone just living slightly worthwhile lives with no suffering. That a life filled with love, good food, and general joy is no better than muzak and potatoes.
"The fires of the soul are great, and burn with the same light as the stars"
Merlin (HPMOR)
> I agree with this, with one exception. I think that it is, in fact, possible to argue people out of the 'pleasure isn't good, but pain is bad' position. Among other things, even worse than implying utopia is worse than a barren rock, it implies it would be morally neutral to press a button that would make no future people ever happy again--and that utopia is no better than everyone just living slightly worthwhile lives with no suffering. That a life filled with love, good food, and general joy is no better than muzak and potatoes.
This argument works against a crude statement like "pain bad, pleasure neutral," but fails against the following formulations:
(1) All conscious existence has negative value. What we call "pleasure" can make it less negative, and sufficient quantities of "love, good food, and general joy" can help the value of a life asymptotically approach the zero level, but they can't make existence better than nonexistence.
(2) Lexical negative utilitarianism and related axiologies. (e.g. Pleasure is good, but not good enough to offset even trivial amounts of pain.)
> (1) All conscious existence has negative value. What we call "pleasure" can make it less negative, and sufficient quantities of "love, good food, and general joy" can help the value of a life asymptotically approach the zero level, but they can't make existence better than nonexistence.
This seems like an extreme formulation to me, but I admit that something a little like it has at least some intuitive appeal to me; I often feel that I'm attracted to a sort of "palliative" version of utilitarianism: an ethics that tries to offer comfort and ease of suffering. Whereas more "positive" formulations of utilitarianism leave me cold; they often leave me feeling like we are doing a "make line go up" for the sake of the *Universe* rather than for the sake of the people living within it. It feels much more right to me to say "while we're here, we have a duty to make the world more pleasant and livable" than to say "we have a duty to remain here, and make the universe a certain way, even if no one wants it that way"...but I think what this discussion makes me realize is that it might be very hard or even impossible to formulate a logically consistent version of my view without resort to an extreme position like the position (1) that you articulate above.
>"very hard or even impossible to formulate a logically consistent version without resort to an extreme position like the position (1)"
For what it's worth, my own view is that trying to develop a logically consistent ethical system is a fundamentally misguided project, and that the ever-present temptation to borrow metaphors from mathematics (even basic ones like "good ~ positive" and "bad ~ negative") is especially likely to lead astray.