Nit: You might want to pick a word other than “indifference”, given the usage of “indifference” as a technical term to denote simply that an agent doesn’t prefer one outcome over another.
An agent might feel torn about a choice, and not indifferent about it in the sense of considering the choice unimportant, but still wind up indifferent in the sense that both choices seem equally good/bad.
Also…
I think your analogy about fudge and chocolate is identifying a different property from fungibility. After all, eating fudge may undermine the desirability of subsequently eating chocolate, but receiving a $20 bill does not undermine the desirability of subsequently receiving two $10 bills – even though I agree with you that money is fungible in a way people aren't.
(And for that matter, if one eats fudge one day, will it really reduce one's desire to eat chocolate the next day? For me, maybe a little, but typically I eat snacks on a daily basis, so my desire tends to reset each day. Not to mention that I've often felt torn between different snacks whose flavors I have separate and independent desires for – since I'd feel overstuffed if I ate both. I know it's just a thought experiment, but it comes out as an unintuitive one for this snack-lover! :)
I'm sympathetic to this view, but I think it implies that we should have person-affecting views in general, and non-total views in population ethics specifically. I think the really strong argument against this is smoking-mother type cases, where it seems like we have very strong reason to want the mother to stop smoking, even though this advances no individual person's interests, and that we should be willing to trade off harming the interests of individuals against generically improving welfare.
I think we can count existential benefits as "person-affecting" in a broad sense. So, e.g., the resulting baby gets a non-comparative benefit from good existence if the mother stops smoking, and this is greater than the benefit from life that the alternative baby would get (if she doesn't).
For related discussion, see:
(1) https://www.goodthoughts.blog/p/the-profoundest-error-in-population
(2) https://www.goodthoughts.blog/p/harman-harm-and-the-non-identity
Yeah I think this might get into quite tricky questions about counterfactuals and the philosophy of action. I think that at the time one is advocating for the mother to stop smoking, one is advocating on behalf of someone who exists only in the counterfactual world in which the mother stops smoking. I agree there's a kind of broad person-affecting view under which one can advocate for this in person-affecting terms if one is considering people across all possible worlds, but my guess is that this view will run into some problems because of how broad possible worlds are.
I'm also not sure how much this is in the spirit of the attitude that we care about the welfare of individuals, rather than welfare per se. Existence seems like plausibly a necessary condition for caring about someone's interests?
Yeah, it does seem tricky. Still, two claims that seem true (and consistent) to me:
* Insofar as we're talking about existing people, our concern for their contribution to the aggregate welfare should be downstream of our concern for *them* as particular individuals.
* We should accept a 'broad' solution to the non-identity problem which allows that we have strong reasons of procreative beneficence to prefer that happier lives come to exist rather than less-happy ones.
What do you think about the possibility that a *large enough* harm or benefit can't be outweighed by any number of *very small* harms or benefits? For example, torturing one person for a year can't be outweighed by a dust speck in the eye of each of a billion (or any number of) different people.
I think Parfit showed pretty convincingly (via iteration) that those sorts of intuitions are confused. See the latter half of this post:
https://www.goodthoughts.blog/p/evaluating-acts-and-outcomes
>Scanlon’s argument for numbers as tie-breakers hints in this direction—he notes it would seem disrespectful to a second person you could save if you didn’t treat their life or death as giving you any more reason to save the group than you would have had from the first person in the group alone. What I’m now suggesting is that this doesn’t go far enough. If you say that the importance of saving the second person is reduced at all by the presence of the first person, you have failed to value the two people separately and independently.
You only included this in a footnote, but I actually found it very helpful for understanding your point.
Would you go as far as to say that we should satisfy the separability axiom / independence of unaffected agents/patients for similar reasons? If this is violated, whether you do X or Y could depend on the conditions of people who are totally unaffected by the choice. This means how we count the benefits and harms to those affected by X vs Y depends on the conditions of people who are totally unaffected, and you have failed to count those benefits and harms *separately and independently* from the conditions of the unaffected people.
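To make the condition explicit (just a sketch in my own notation, not something from your post): writing $\succeq$ for overall betterness, $a_G, b_G$ for welfare assignments to the group $G$ of people affected by the choice, and $c_{-G}, d_{-G}$ for any two assignments to everyone outside $G$, separability says

$(a_G, c_{-G}) \succeq (b_G, c_{-G})$ if and only if $(a_G, d_{-G}) \succeq (b_G, d_{-G})$

so that how the affected people's benefits and harms stack up against each other never turns on how the unaffected people happen to be doing.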
And would you take this to only hold in deterministic cases, or also ex ante? The latter can get you most of the way to expectational total utilitarianism, which is fanatical.
I could see arguments going each way:
1. For deterministic: Ethics could be just concerned with actual (deterministic) betterness, and we have to use some decision theory to take uncertainty into account. And expectational utilitarianism, where the expected value is the sum of individual expected values, could even be irrational (https://forum.effectivealtruism.org/posts/KGfBhsFzCqr9vq6Y6/utilitarianism-is-irrational-or-self-undermining-2).
2. For ex ante: the same kind of arguments we make for deterministic separability/independence extend to arguments for ex ante separability/independence.
Yeah, I think there's a strong intuitive case here that respecting people as individuals should lead us to endorse ex ante separability. (Maybe the case could be outweighed if it seemed too costly in other respects.)
Really great post! Thank you
> Now, I cannot understand how anyone could seriously claim that Generic Jim is more virtuous than Amy (let alone how they could claim to thereby be respecting the separateness of persons better than a utilitarian who holds up Amy as their moral ideal). Jim stops caring about someone’s suffering the moment he finds someone else who is worse off! He fails to treat each person’s suffering as mattering in its own right—maintaining its full force and independent moral significance no matter what is going on with other people. All that matters to him is that someone is suffering a fixed-level harm; it doesn’t even matter how many people suffer so.
This seems wrong to me? Just because Jim has chosen as his strategy to always help the worst off, doesn't mean he doesn't dispositionally care about everyone. When he has helped the worst-off so that they are no longer the worst off, he will help another person who is now the worst off. It seems wrong to me to say that he then stopped caring about the former and started caring about the latter. It seems more right to me to say that he always cared about both, in that he was always ready to help either, should they be or become the most in need of help.
I guess "care" can be used in broader or narrower senses. It does seem a bit artificial to me to say that, in helping only whoever is the worst-off individual at any given time, Jim thereby "cares" about everyone. He certainly doesn't care as much about most improvements to most people's well-being as Amy does. And I guess it's that comparative claim that really matters to me, more than whether we allow that there's *some* sense in which Jim cares about everyone by caring in the condition that they become the worst-off person.
(Importantly, I'm not just wanting to think about Jim's pattern of behavior, but also his associated *attitudes*: what changes he regards as lamentable or as a matter of indifference. And my worry is that he seems indifferent to too much that should not be a matter of moral indifference.)
I guess maybe one way I would think about it is, when Jim is deciding how to act and whom to help with his act, and supposing he's deciding sort of algorithmically, before he knows who the most in need of help is, does he need to consider everyone's current well-being and the condition they're in, and *might* he at that point choose to help any of them? It seems to me the answer is yes. To decide whom to help, Jim needs to check or consider each person's well-being, or he would not know who was most in need, and that seems to me a kind of caring.
I do think your comparative claim sounds plausible. (I'm not a maxminer, btw, I'm just arguing for argument's sake.)
Yeah, I agree that Jim treats each person as *having moral status* in this sense. They're eligible to affect his decisions if they meet the right condition, and it's a condition that's perfectly impartial between the various individuals as such.
But he doesn't straightforwardly desire that each person's life go well. So I think that's an important kind of caring - distinct from intellectually appreciating moral status in an abstract sense - that he's missing.
If Jim really engages in pure maximin, then he really *doesn't* care about anyone's joys or sufferings *except* insofar as that person might become the worst off person in the world.
Maybe we're using "care" differently? I would take it to either
1. indicate some feeling on Jim's part towards those people. I think it's perfectly possible for him to feel concern for those other people, while also choosing to adopt the strategy of maximining when deciding whom to help.
or
2. indicate Jim attaching some importance to these people and their well-being. I think this is less clear, but I would say he does attach importance to them *as people*, and the only thing preventing him from helping them is the condition they happen to be in. I guess this is what you're saying with "he really *doesn't* care about anyone's joys or sufferings *except* insofar as that person might become the worst off person in the world", which does make it sound like you agree he is caring, but only very little and/or conditionally.
I’m using “care about” to mean something like “judge as better or worse”. If Jim is maximining then he doesn’t care (in this sense) about any changes that don’t affect the person at the bottom and don’t send anyone else to the bottom. He might have feelings, but if those feelings don’t have any impact, they aren’t relevant.
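(For concreteness, one rough way to formalize the pure maximin rule in play here, in my own notation: a distribution $w$ is at least as good as $w'$ just in case $\min_i w_i \geq \min_i w'_i$. On that rule, any change to anyone above the bottom, however large, leaves the ranking exactly where it was, which is the sense in which Jim doesn't care about it.)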
I found this argument a bit puzzling.
Kamm already argued in 1985 (and again in 1993) that you should think of Taurek’s view in terms of concern for individuals, not a de dicto concern. And I’ve pressed that point in several papers. So why are you imputing to anti-aggregationists a view that they often explicitly say they don’t hold?
It would be like me imputing to you a belief that all that matters is maximizing the amount of total utility.
Another principle worth bringing up, which complicates some of your complaints, is that most anti-aggregationists accept Pareto. I think Pareto is blindingly obviously true. In fact, I think this is a problem for aggregationists when it comes to infinite ethics. Why bother saving additional lives if it won’t affect the already-infinite total? Paretians can easily explain why. Aggregationists have a harder time.
Finally, I couldn’t tell why you were so confident that instrumental rationality requires any sort of aggregation of desires. (A point I associate with Tom Dougherty.) This was really the crucial move of the post, but what’s the justification? Does it just sound plausible? Or are you thinking of money-pump arguments? (I find those pretty unpersuasive given that you can money-pump people who are future-biased, as I think future bias is rationally permissible.)
If I recall the details from your FIU talk correctly (please let me know if I'm misremembering), one way to get Taurekian verdicts is to treat different people's lives/well-being as wildly incommensurable. So, in contrast to the sort of view I quote and respond to in this post, you *can* in principle move from "good for" to "good simpliciter", it's just that the latter verdicts -- e.g. about whether it's better to save 5 or a distinct 1 -- won't admit of precise answers. The values of the two options are rather "on a par".
That's an interesting view, but seems pretty different in structure and motivation from the view I quote and respond to in this post. I also don't see any motivation for believing in such extreme incommensurability. (iirc, you once motivated it with thoughts about the need to avoid fungibility, but that just seems to miss the lesson of 'Value Receptacles', that commensurability does not imply fungibility.)
So the quick answer is that I didn't take myself to be responding to your view here (from what I recall of it), but rather was just trying to think through how we could most naturally translate the specific normative claims I quoted into a Humean belief-desire psychology.
Re: the aggregation of desires: right, this is the key assumption. It does just seem truistic to me, e.g. from introspection and thinking about ordinary non-moral examples of the sort discussed early in the post.
In your follow-up reply you suggest that an agent may "compare the desires pairwise to see which side is favored by the strongest desire." I guess I don't think of desires as so inert. They aren't like pieces of information that different agents might use in different ways. I'm thinking of them as automatic motivational PULLs. On the psychological picture I have in mind, you just WILL BE pro tanto pulled twice as hard in a direction that satisfies twice as many of your (salient, activated) desires. That's just built into the conception of desire that I'm working with. Now, the agent may override their natural inclinations with deliberate reasoning and willpower. But I'm thinking about what would be the ideal motivational profile for the sort of virtuous agent who *doesn't need to deliberate at all* in order to do the right thing. By having the right desires, they're naturally pulled towards the right choice.
I hope that makes a bit more sense of where I'm coming from. (I expect it's not the sort of psychological picture that anti-aggregationists tend to start from. So in order to make sense of their view from within this picture, I kind of expect it to require some attributions that they aren't antecedently inclined to agree with.)
> Why bother saving additional lives if it won’t affect the already-infinite total? Paretians can easily explain why. Aggregationists have a harder time.
I forget if I've talked with you about this paper: https://academic.oup.com/aristotelian/article-abstract/121/3/299/6367834
I take the Pareto principle to be the core of plausible aggregation, and everything else just follows from when tradeoffs result in equality rather than incomparability.
If we can establish a way to trade off good for one person against good for another person, then this establishes an interpersonal scale. If there is additionally just a finite set of people, then adding up the well-being of the individuals gives a summary statistic that turns out to be coextensive with which situation is overall better or worse. But it's not because anyone does or should care about the sum - it's because of each individual. And this is clear in the infinite case, when there is no such thing as the sum, but there are still facts about which overall situations are better and which are worse, because of the particular pattern of better and worse for individuals.
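To be explicit about the principle doing the work here (a rough statement in my own notation): strong Pareto says that if $w_i \geq w'_i$ for every person $i$, and $w_j > w'_j$ for at least one person $j$, then $w$ is better than $w'$. Nothing in that requires the total $\sum_i w_i$ to be defined, which is why it still delivers verdicts when the population is infinite and there is no sum to compare.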
Extremely ignorant question: for finite individual cases, are pairwise comparisons between individuals enough to get you to aggregation? That's how I interpret the phrasing, is that correct?
And if so, can you get interesting theories assuming some pairwise comparisons can't be made, but you can augment with k-wise comparisons up to some k (presumably less than the finite size of the population?)?
When you are making pairwise comparisons, you have to be able to say whether it is better or worse or equal if X is at level a and Y is at level b, or if X is at level c and Y is at level d, while everyone else remains fixed. If these comparisons are invariant regardless of what levels everyone else is at, and if you can make such comparisons for every a, b, c, d, and if there are sufficiently many possible levels of well-being for every person, then that suffices to define a numerical scale for every individual’s well-being levels, such that one distribution is better than another iff the sum of the numerical representations is larger.
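In symbols, the resulting claim is roughly this (a compressed sketch of the standard additive-separability result, with the technical conditions folded into the richness assumption just mentioned): given the invariance condition and three or more people, there are functions $u_i$ representing each person's well-being such that one distribution $w$ is at least as good as another $w'$ iff $\sum_i u_i(w_i) \geq \sum_i u_i(w'_i)$. The sum is just a summary statistic that happens to track the ranking; the ranking itself is built out of the person-by-person comparisons.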
You can get some interesting results even with weaker assumptions, like that some differences for one person are bigger than some differences for another, even if not all differences are comparable. I’m not sure what you mean with k wise comparisons though - there’s a few things that might come to mind but I’d need a more precise idea.
Thanks for this! I think what I have in mind is, weakening the assumption that interpair comparisons are invariant against the background level of everyone else. I'm not sure exactly where I'm going with this, so if you know of anything already studied I'm more interested in that than my attempts to assemble my thoughts, but my initial thought is something like:
A k-ary comparison rule says that interpair comparisons must transform in some simple way as at most (k-2) other individual levels change.
That is, f(x1,x2,...) must be such that f(a,b,x3,...) - f(c,d,x3,...) exhibits some simple form when restricted to any set of variables {x1, x2, xi, xj,...} of size k, and obviously with symmetry so that x1, x2 aren't special. I dunno if that makes sense or not?
So a trinary comparison says, how binary comparisons depend on the level of everyone else must exhibit some (simple?) symmetry under a change in any one other person's level.
I don't have any idea what conditions I want to impose though.
Let me clarify what I said, since it was a bit quick.
I think the anti-aggregationist should care about each person separately and individually. To decide which of two groups to save, they don’t add up the number of desires to see which is strongest. They instead compare the desires pairwise to see which side is favored by the strongest desire.
This doesn’t mean that the added numbers on one side are totally idle. They rationally require you to save supersets of people (rather than subsets) wherever possible. So you don’t have to save Al and Bert rather than Carol, but you can’t just save Al if you could save him along with Bert. Thus the desire to save Bert has an effect on what’s rational for you to do.
I'm trying to wrap my head around this stuff and have enjoyed reading your, Richard's, and Flo's recent articles.
Maybe this is basic, but: if I'm a deontologist, don't I still have to weigh aggregations of duties, if not aggregations of intrinsic value? Like, if it is possible for me to discharge 5 compatible duties at a time (i.e., saving 5 lives), is it not required that I select that option over discharging just 1 duty (saving 1 life)?
Even in intrapersonal cases, in order to make sense of prudential reasoning, don't we need to be able to deal in aggregates of normative considerations? Arriving at an "all things considered" reason seems to require that I construct, rank, and weigh aggregates of instrumentally compatible ends.
Maybe I need to go back and read your article lol
"They rationally require you to save supersets of people (rather than subsets) wherever possible."
Is this meant to include the Pareto-dominant choice in Jim and the Hostages?
Sublime work! I wonder what Daniel Munoz would say about this stuff. After reading his responses to you and Matt Adelstein during his chat with you and then with Matt, I still wasn't convinced by deontology. This time... it just shows the firmness of my conviction regarding consequentialism.
I am not moved towards deontology even by its smartest and most formidable defender!
What do you make of the following argument against consequentialism made by Gustafsson:
Suppose that you are choosing between becoming an investment banker and becoming a voluntary worker. Any future choices after this career choice are beyond your present voluntary control. As a matter of fact, you will choose to become an investment banker, and this will make you rich. But, because you are selfish, you won’t use your riches to do much good in the world. You would do more good if you were to become a voluntary worker instead. Accordingly, Act Consequentialism yields that becoming an investment banker is wrong, and, since you will become one, it yields that, if you were to become an investment banker, it would be wrong.
Suppose that the reason you will choose to become an investment banker is that you are selfish. And, in the closest possible world where you instead choose to become a voluntary worker, you are less selfish and more altruistic. Suppose moreover that, from the point of view of that world—that is, the closest possible world where you become a voluntary worker—the closest possible world where you become an investment banker isn’t the actual world but a world where you are still altruistic. And, in that world where you are altruistic and become an investment banker, you will do much more good with your riches than you could do as a voluntary worker. So, if you were to become a voluntary worker, it would be better to become an investment banker. Accordingly, Act Consequentialism yields that, if you were to become a voluntary worker, it would be wrong.
Hence each performable act in the situation (becoming an investment banker, becoming a voluntary worker) would be wrong if it were performed.
None of the fundamental normative claims of consequentialism seem undermined by that case. The apparent "problem" is just an artifact of talking about counterfactuals in a misleading way. It's perfectly possible for the agent to volunteer *given their actual motivations* (generally selfish motivations are compatible with having a temporary burst of moral conscientiousness), and in that case they would have acted rightly (and it would not be true that they should have become an investment banker instead).
Assuming no metaphysical indeterminism, each set of actual motivations will lead to exactly one outcome being actualized. In order for some other outcome to be actualized instead, we need to imagine that either a) the person's motivations are different at the time of the choice or b) the laws of the universe act differently at the time of the choice, such that an agent with the exact same motivations can make different choices in the exact same scenarios.
So keeping your actual motivations the same and keeping all facts about the world the same, it is not the case that you can become a voluntary worker, given the fact that we already know that you would become an investment banker in that scenario.
There's more to cognition than motivation, so even given determinism you could get different behavior / outcomes if you had different thoughts.
The most straightforward response is thus to dispute the claim that "in the closest possible world where you instead choose to become a voluntary worker, you are less selfish and more altruistic." I don't think you get to stipulate a claim like this. There's a possible world where the agent is generally selfish and yet chooses (in a moment of unusual generosity) to opt for volunteering, and that world qualifies as "closer" - for moral purposes - than one where they have systematically different motivations.
If you make false claims about modal space - i.e., asserting a necessary falsehood - then it shouldn't surprise us that paradoxes follow.
Nice article. How common is it for anti-aggregationists to endorse or be motivated by maximin (or pessimistic-optimistic rule or maximax or whatever other rules only consider a subset of the population)?
I'm not an expert in this literature, but I gather the standard anti-aggregationist view is that you should pick whichever option is supported by the strongest individual "claim". This may not exactly be maximin, since it could be that one has a stronger claim to avert going from +100 to -10 welfare than someone else does to avert going from -11 to -12. But it still seems to involve a troubling lack of concern for others if it prioritizes this one individual at the cost of (say) a million people each going from 0 to -10. Since the million together count for no more than any one of them alone would, there's an important sense in which each of them could claim to have been disregarded.
Ah okay, yes that makes sense. Thanks!