On ideal vs. non-ideal:
Imagine two different characterizations of some mathematical object as the infinite limit of some process: say, two different infinite sums, or an infinite sum vs. an infinite product, or the argmax of a function found by exhaustive search through an infinite search space vs. a fixed point of a dynamical system computed iteratively.
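For concreteness (my example, not anything from the post), Pi admits both a series and a product characterization:

```latex
\frac{\pi}{4} = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}
\quad\text{(Leibniz series)},
\qquad
\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{2n}{2n-1}\cdot\frac{2n}{2n+1}
\quad\text{(Wallis product)}.
```

Every partial sum and every partial product here is rational, even though the common limit is irrational and transcendental.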
On the one hand, it would be absurd for the proponents of, say, an infinite product representation of a positive number to think that they've "refuted" an infinite sum representation because one of its partial sums is negative.
But on the other hand, depending on the task at hand, noting that a sum converges slowly, or that an exhaustive search has no optimality guarantees if truncated after finitely many steps, is a genuinely valid criticism of those representations, even if the truncations are, in some sense, correctly calculated approximations of the true ideal object.
Which is to say, I think there's a middle ground where we try to understand objective features of the finite approximations and how they compare to each other and to the ideal object they approach.
I gather from your writing that most deontologists aren't past the stage of pointing out, "the 1000th partial sum of your series for Pi is (negative/rational/algebraic), but everyone knows that Pi is (positive/irrational/transcendental)! Refuted!", so I think it's fair to stress the distinction between ideal and non-ideal.
But I still think there's a better argument of much the same form that isn't addressed by the distinction: something like the claim that "consequentialism converges very, very slowly and requires intractably many terms to compute before you get results that can be achieved with just the first few terms of other theories." (Note that I don't claim this is true, but I think it's a better version of the sort of thing deontologists are claiming.)
Appealing to the need for a better decision theory strikes me as similar to someone computing Pi with the arctan series appealing to the need for better calculators. Maybe; but maybe it's a problem that you've chosen a slowly converging sum, and for any given strength of calculator you could do better (at certain tasks) by just picking a different representation.
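To put rough numbers on that analogy (my illustration, using standard formulas, not anything from the post): with the same floating-point "calculator" and the same number of terms, the Leibniz series and Machin's formula (Pi/4 = 4 arctan(1/5) - arctan(1/239)) deliver wildly different accuracy:

```python
import math

def leibniz_pi(n_terms: int) -> float:
    """Partial sum of the Leibniz series: pi = 4*(1 - 1/3 + 1/5 - ...).
    The error after n terms shrinks only on the order of 1/n."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def machin_pi(n_terms: int) -> float:
    """Machin's formula pi/4 = 4*arctan(1/5) - arctan(1/239), with each
    arctan expanded as its Taylor series; it gains roughly 1.4 decimal
    digits of accuracy per term."""
    def arctan_series(x: float, n: int) -> float:
        return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n))
    return 4 * (4 * arctan_series(1 / 5, n_terms) - arctan_series(1 / 239, n_terms))

for n in (10, 100, 1000):
    print(f"{n:>5} terms | Leibniz error {abs(leibniz_pi(n) - math.pi):.1e}"
          f" | Machin error {abs(machin_pi(n) - math.pi):.1e}")
```

Ten terms of Machin already sit at the limits of double precision, while a thousand terms of Leibniz are still off in the third decimal place: same calculator, very different representations.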
I wholeheartedly agree with your analysis and hope it reaches a wider audience. In my view, ethics is a field where many different questions are often mixed together. If you're interested, I’ve written a paper on this issue (specifically in the context of teaching ethics).
https://ojs.ub.rub.de/index.php/JDPh/article/view/10811/10995
Section 2 may be particularly relevant to your work, especially my analysis of the notion of "rightness" in the context of ethics.
This all sounds exactly right to me!
(Though regarding your offhand mention of satisficing: I think the concept of satisficing has often been crudely adopted by philosophers as the idea that there is some threshold such that everything above it is "good enough" and everything below it isn't. Herb Simon, I think, intended it instead as a decision procedure: one evaluates options one by one, in the order they come to mind, and then does the first one that passes the threshold. It could well be that this decision procedure, with a particular threshold, actually optimizes the tradeoff between time spent deliberating and the goodness of the act eventually done.)
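Stated as a procedure, it's just a stopping rule. A minimal sketch (my hypothetical names, assuming options arrive as a stream in the order they come to mind):

```python
from typing import Callable, Iterable, Optional, TypeVar

Option = TypeVar("Option")

def satisfice(options: Iterable[Option],
              value: Callable[[Option], float],
              threshold: float) -> Optional[Option]:
    """Simon-style satisficing: evaluate options in the order they arise
    and commit to the first one that clears the threshold. No global
    comparison across options is ever made."""
    for option in options:
        if value(option) >= threshold:
            return option  # good enough: stop deliberating and act
    return None  # stream exhausted without a satisfactory option
```

Raising the threshold trades more deliberation time for a (probably) better act, which is exactly the tradeoff a well-chosen threshold could optimize.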
Thanks! And yeah, my interest in satisficing has more to do with demandingness-moderating prerogatives than with the search costs and such that originally motivated the economic concept (which, as you say, is plausibly tied to all-things-considered optimizing!).
I think this is an excellent and clarifying post. I don't have any object-level objections, just two minor points of feedback:
1. The angel thought experiment is not very convincing to me. My intuition there is very similar to my intuition in the human case, because in a way the two cases look alike. It's hard to just imagine away all the negative consequences of killing, apparently in Heaven as well as on Earth.
2. Have you considered writing against actual people who actually hold these views, including quoting them and summarizing their arguments? I think that would make the reading more fun and interesting than it is now, where you are arguing against a sort of shadow amalgam. The most fun philosophy book I ever read was David Stove's The Plato Cult, and part of the fun is Stove referencing various real philosophers, explaining their views, quoting them, and then showing (or trying to show) why they are wrong.
Actually, I do have an object-level comment.
> Utilitarianism (as a fundamental moral theory) answers the telic question, not the decision-theoretic one. To get practical action-guidance, it needs supplementation with an account of instrumental rationality, i.e. a decision theory. Previous philosophers have tended to simply assume naive instrumentalism; but this is only plausible in ideal theory. Obviously non-ideal decision theory will look different. So, utilitarianism + non-ideal decision theory will not yield the counterintuitive verdicts that people tend to associate with utilitarianism. For example, it will (most plausibly) not imply that the agent in Transplant should kill an innocent person for their organs.
Your distinction between ideal theory and non-ideal decision theory makes a lot of sense. I think that is a good reason to think we should, e.g., not kill people even though naively it might seem justified to do so. But I am wondering whether that undercuts your reasoning around vegetarianism (https://www.goodthoughts.blog/p/confessions-of-a-cheeseburger-ethicist)?
In that post, you write:
> Of course, some mistakes are more egregious than others. Perhaps many reserve the term ‘wrong’ for those moral mistakes that are so bad that you ought to feel significant guilt over them. I don’t think eating meat is wrong in that sense. It’s not like torturing puppies (just as failing to donate enough to charity isn’t like watching a child drown in this respect). Rather, it might require non-trivial effort for a generally decent person to pursue, and those efforts might be better spent elsewhere.
And sure, I agree eating meat is not as wrong as torturing puppies. But if we think that animals are at least somewhat worthy of concern, doesn't a good non-ideal decision theory recommend vegetarianism? Imagine we could walk into a supermarket today and buy human meat, the human having been raised in an enclosure in a factory farm -- literally treated like an animal his or her entire life -- and unceremoniously slaughtered for mere gustatory pleasure. In that situation most people have a strong intuition that eating human meat is very wrong. So, as with the transplant case, you might want to say "don't eat human meat" is a good non-ideal norm that we should stick to, even when there are some apparent ideal-level reasons to think eating human meat really doesn't matter that much next to, say, failing to give to effective charities. But if we accept that "don't eat people" is a good practical (non-ideal) rule even when utilitarian math might suggest otherwise, shouldn't we similarly follow a practical rule like "don't eat animals"?
Yes, I agree in that post that vegetarianism is *recommended*! But not everything that goes against moral advice is *wrong* in the strong sense specified in the quote. I think the latter depends a lot on existing social norms.
If we imagine a society with well-established vegetarian norms, I think it would be plainly wrong for a person in that society to eat meat. Similarly, if there's a well-established norm that people donate 10% of their income to effective charities, then it would be wrong to do less than that. In general, it's wrong to violate good norms that are well-established. (This is partly because, as natural conformists, it's generally quite easy for decent human beings to do what's generally expected and followed by everyone around them.)
The trickier question is what to say about things that *would* be good norms, but (sadly) aren't in fact socially established. Assuming that it's not a weird case where there would be bad effects from just a few people following the norm when others don't, we can of course say that it's *recommended* that one do this good thing. The "ought of most reason" will prescribe it. But it's a further question whether it's *obligatory* in the stronger sense that it would be outright indecent and blameworthy to fail to do it.
Here I see two principled options (oversimplifying slightly):
(1) The strict view that it's actually outright indecent for people to live ordinary lives when our ordinary norms aren't what they should be. On this view, everyone is obligated to donate at least 10% (maybe more!), be vegan, etc.
(2) The moderate view that while it would be *good* to follow better norms, one isn't especially blameworthy or indecent so long as one is broadly honest and cooperative and follows the good norms that have been established as social expectations in one's community.
I'm more inclined to the second view. For more detail, see my paper on 'Willpower Satisficing': https://philpapers.org/rec/CHASBE-4
Thanks, the paper was a good read (or skim, in parts). Now I have a different confusion though.
On (2), I think it's reasonable to think it's not especially blameworthy to fail to follow every good practical rule, on the grounds that it would be too demanding (require too much willpower). And so that's a reason to require following practical rules that align with norms, but not necessarily unestablished practical rules. It's generally just easier for people to comply with norms.
But in that case, what about someone who, despite our norms, finds it personally easy to avoid eating animals but very difficult to avoid eating humans? It's not that the person isn't exerting enough effort; it's just that their effort is best directed differently from most other people's. They just happen to follow an unestablished practical rule instead of an established one. Is this person also not especially blameworthy?
Or if they are blameworthy for eating humans, what is special about the well-established practical rules that makes those rules more important than equally sound yet unestablished practical rules? I guess failing to follow the former could be worse since it could cause social instability, outrage, scandal, and so on. But that's contingent. Maybe we should have a non-ideal meta-rule of sticking to social norms, but that would seem to justify status quo bias? I don't know.
I am now more confused.
I find it strange that you are taking such an approach. Philosophers are explicitly telling you they are talking about the ideal and you seem to be just saying "you mean non-ideal". What else can they do other than communicate to you that they are talking about the ideal?
Your intuitions may be about the ideal or the non-ideal. It looks like you think they are always about the non-ideal? But this means they are just reflections of society, which predicts that your intuitions should almost never contradict societal norms. Is this the case?
I think it's not a coincidence that the putative "counterexamples" to consequentialism that people tend to find most compelling are cases in which it would plausibly *have bad results* for people to attempt to follow the supposedly "consequentialist" verdict (which raises the question of why we should count that as the "consequentialist" verdict to begin with).
But to make further dialectical progress, I'd encourage critics of consequentialism to engage more explicitly with what I call the "telic" question. Do you think it would be *undesirable* for the better outcome to occur? If so, that suggests that you really do reject consequentialism! But do you think it is *intuitively obvious* that the better outcome is undesirable, such that your intuitive response here counts as some kind of dialectical *datum* that any adequate theory must accommodate? Surely not. So stop treating superficial verdicts about cases as decisive, and start doing some deeper theorizing.
"which it would plausibly *have bad results* for people to attempt to follow the supposedly "consequentialist" verdict"
I think one of their objections is more along the lines that consequentialists are unable to recognize these as bad things, because of their (Gradgrindian) commitments.
“Non-ideal” is not the same as society. It is rather the status of being a finite physical being. Non-ideal intuitions are going to be cultivated by biology and physics as much as by society.
So then if your intuitions contradict society, they must come from evolution? That's also a weird inference.
I didn’t mention evolution in particular. What I did mention is being a finite physical being, which has many, many aspects, only a subset of which comes from evolution.