Discussion about this post

meteor_runner

I loved this post - this is a big pet peeve of mine as well, and I think you nailed it.

However, I think that a lot of the time when I see similar arguments 'in the wild', even if they are initially framed narrowly as critiques of utilitarianism, they are in fact motivated by a broader feeling that there are limits to moral reasoning. Something like: we shouldn't expect our theories to have universal domain, and we don't get much leverage by trying to extend them far beyond the intuitions that initially motivated them.

The main example I have in mind is Tyler Cowen's recent conversation with Will. Tyler raises a number of objections to utilitarianism. At times I found this frustrating, because viewed through the lens of figuring out the best moral theory, he is making isolated demands for rigor. But I think Tyler's point is instead something more like the above: that we shouldn't rely too much on our theories outside of everyday contexts.

You do touch on this in the post, but only briefly. I'd be interested to hear more about your thoughts on this issue.

Arnav Sood

Caveat: I'm not a philosopher, but rather an economist.

I think many of these paradoxes (Quinn's Self-Torturer, Parfit's "mere addition," etc.) have the following form:

> Start from state S. Operation O(S) is locally preferable (i.e., it produces a preferred state S'). But if we iterate ad infinitum, we end up with a state S* that's not preferable to S.

The conclusion is usually either that S* actually _is_ preferable (i.e., our preferences are "rational" and therefore transitive), or that our preferences are seriously suspect, to the point where "maximizing" them is a hopelessly muddled concept.
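Here's a toy numerical sketch of that structure, in the spirit of the Self-Torturer (the preference rule and all the numbers are made up purely for illustration): every individual step is preferred, because the pain increment falls below a perception threshold, yet the endpoint is dispreferred to the start.

```python
# Toy sketch (made-up numbers): pairwise preferences that use a perception
# threshold are intransitive, so iterating a "locally preferable" step can
# leave you somewhere you would never have chosen outright.

STEPS = 1000
PAY_PER_STEP = 10.0    # cash gained at each step
PAIN_PER_STEP = 0.01   # pain added at each step, below the perception threshold
JND = 0.5              # "just-noticeable difference": smaller pain gaps feel identical
PAIN_WEIGHT = 5000.0   # cash value of one *felt* unit of pain

def prefers(a, b):
    """Does the agent prefer state a = (money, pain) over state b?
    Pain differences below the JND register as no difference at all, so a
    tiny pain increase plus any cash gain reads as a pure gain."""
    (money_a, pain_a), (money_b, pain_b) = a, b
    felt_pain_diff = 0.0 if abs(pain_a - pain_b) < JND else pain_a - pain_b
    return (money_a - money_b) - PAIN_WEIGHT * felt_pain_diff > 0

start = (0.0, 0.0)
state = start
every_step_preferred = True
for _ in range(STEPS):
    nxt = (state[0] + PAY_PER_STEP, state[1] + PAIN_PER_STEP)
    every_step_preferred = every_step_preferred and prefers(nxt, state)
    state = nxt

print(every_step_preferred)   # True: each local move O(S) looked like an improvement
print(prefers(state, start))  # False: the endpoint S* is dispreferred to S
```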

I think there's another way to approach this. Behavioral economics deals with such problems ("time-inconsistent preferences") routinely. Consider a would-be smoker. He doesn't smoke his first cigarette, because he knows that his preferences display habit formation --- his first cigarette leads to the second, and so on.

In other words, the time 0 self has a genuinely different axiology than the time _t_ self. (Equivalently, preferences are state-dependent.) It would definitely be _cleaner_ if our rankings of future worlds were invariant to where we are today, but if the choice is between axiomatic hygiene and uncomfortable paradoxes, I'll take the mess.
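A minimal sketch of what state-dependent preferences look like in the smoker case (the habit-formation utility and the numbers here are invented, not any standard calibration): the same two plans get ranked in opposite orders depending on the habit stock carried by the self doing the ranking.

```python
# Minimal sketch of state-dependent ("habit-forming") preferences.
# The time-0 self and a habituated later self rank the same plans differently.
# All parameters are invented for illustration.

PLEASURE = 2.0      # immediate enjoyment of a cigarette
HEALTH_COST = 3.0   # per-period health cost of smoking
WITHDRAWAL = 4.0    # pain of abstaining, per unit of habit stock
DECAY = 0.9         # fraction of the habit stock carried into the next period

def period_utility(smoke: bool, habit: float) -> float:
    # Smoking relieves withdrawal entirely but carries a health cost;
    # abstaining hurts in proportion to the habit stock already built up.
    return (PLEASURE - HEALTH_COST) if smoke else -WITHDRAWAL * habit

def plan_value(plan: list[bool], habit: float) -> float:
    """Total utility of a sequence of smoke/abstain choices, starting from a given habit stock."""
    total = 0.0
    for smoke in plan:
        total += period_utility(smoke, habit)
        habit = DECAY * habit + (1.0 if smoke else 0.0)  # habit builds when smoking, decays otherwise
    return total

always_smoke, never_smoke = [True] * 5, [False] * 5

# The habit-free time-0 self ranks "never start" above "always smoke"...
print(plan_value(never_smoke, habit=0.0) > plan_value(always_smoke, habit=0.0))  # True: 0 > -5
# ...while a later self who already carries a habit ranks the same two plans the other way.
print(plan_value(always_smoke, habit=2.0) > plan_value(never_smoke, habit=2.0))  # True: -5 > ~-32.8
```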

(I think this also has something to say about, e.g., the demandingness objection. It's always locally preferable to save one more child, but the agent is justifiably wary of committing to a sequence of operations which turns him into a child-rescuing drone.)
