Discussion about this post

Mark

How about when an action hurts overall well-being by an arbitrarily small amount (or simply doesn't add to overall wellbeing) but serves deontological thingamajigs substantially? I think that maybe a Rossian deontology/pluralism (which almost always approximates consequentialism in our particular contingent world, rife as it is with various inequalities and EA/Longtermist opportunities) is the only moral theory that accounts for all the data points here.

Lane Taylor

Regarding the point about caring about abstractions more than people:

You respond to the alienation objection to utilitarianism by pointing out that the utilitarian cares about overall wellbeing only because they care about each individual person. This eliminates the worry that the utilitarian cares more about some abstract thing than about actual people. I wonder if the same thing can be said from a deontological perspective: I care about the moral rules only because I care about each individual person, and about the respect that their intrinsic value demands from me. What do you think of this move?

Also, I just finished your introduction to utilitarianism. It was great! Thanks for that.

