How about when an action hurts overall well-being in an arbitrarily small amount (or simply doesn't add to overall wellbeing) but serves deontological thingamajigs in a substantial amount? I think that maybe a Rossian deontology/pluralism (that almost always approximates into consequentialism in our particular contingent world, ripe as it is with various inequalities and EA/Longtermist opportunities) is the only moral theory that accounts for all the data points here.
This is far more appealing to me, though I believe there is (or, yes yes maybe just wish for there to be, haha) something we can ground morality in: agency, or actions aligned with reality in a "natural" sense. I'm exploring the idea of separating moral duties from virtuous behavior to try and clarify this structure, but I need to continue to think through it.
Regarding the point about caring about abstractions more than people:
You respond to the alienation objection to utilitarianism by pointing out that the utilitarian cares about overall wellbeing only because they care about each individual person. This eliminates the worry that the utilitarian cares more about some abstract thing than actual people. I wonder if the same thing can be said from a deontological perspective. I care about the moral rules only because I care about each individual person, and the respect their intrinsic value demands from me. What do you think of this move?
Also, I just finished your introduction to utilitarianism. It was great! Thanks for that
I agree that this is a strong challenge to non-consequentialists.
Suppose the non-consequentialist points to cases of mutually beneficial exploitation. You are stuck in a pit and will die of exposure or hunger if you are not rescued. Since the pit is in a very isolated area, the chances of someone happening on you by accident are very low. Still, I luckily cross your path and offer to help you out of the pit if you agree to work for me for a dollar a day for the next year. You agree and we both benefit: your life is saved, against the odds, and I get cheap labor for the next year.
Still, even though I have saved your life, and even though you are far better off making the deal than rejecting it, it seems plain that I have wronged you. But then not only have I wronged you without harming you, I've wronged you while making you better off than you otherwise would have been.
I'm sure you've considered this general sort of case and would be interested in hearing how you respond to it.
I think what you're getting at is a view of harm called comparativism.
I have an argument against it:
P1) The pre-emption thought experiment succeeds as a counterexample (P).
P2) If P1, then comparativism is false (¬Q). [P→ ¬Q]
Therefore, C1) comparativism is false (¬Q). [MP 1, 2]
A generic thought experiment about the pre-emption case:
At t1, either A or B will kill S at t2.
A and B are independent sufficient causes.
A acts slightly earlier and kills S.
If A had not acted, B would have killed S at the same time in the same way.
Actual world: S dies at t2 because of A.
Nearest ¬A world: S dies at t2 because of B.
So, S is not worse off with A than without A.
But many (myself included) find it intuitive that A does harm S. If that judgment is correct, comparativism is false.
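The structure of the counterexample can be made explicit with a toy sketch. All names and welfare values below are mine, chosen purely for illustration; the point is just that the comparativist test returns "no harm" in the pre-emption case.

```python
# Comparativism (as I understand it): A harms S iff S is worse off in the
# actual world than in the nearest world where A does not act.

def comparativist_harm(welfare_actual, welfare_counterfactual):
    """The comparativist test: the act harms S iff it leaves S worse off
    than the nearest world without the act."""
    return welfare_actual < welfare_counterfactual

# Actual world: A kills S at t2, so S's welfare from t2 onward is 0.
welfare_with_A = 0
# Nearest world without A's act: B kills S at t2 in the same way, also 0.
welfare_without_A = 0

print(comparativist_harm(welfare_with_A, welfare_without_A))  # False
# Comparativism delivers "no harm", yet intuitively A does harm S.
```

The two worlds assign S the same welfare, so the comparative test cannot register any harm, which is exactly the mismatch with intuition that P1 asserts.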
Somewhat disagree: what’s wrong with the answer that it’s just analytic that what you shouldn’t do is what’s wrong even if it makes the world best? That’s what it means to be wrong!
Don’t get what you’re saying about changing the moral law. That’s not something we can do!
I think it all boils down to how expansive your conception of "well-being" is - if you think that part of what it means for someone to do well, or live well, is to display certain virtues and avoid certain vices, or to avoid certain impermissible acts and fulfill certain duties, then it's not too tricky to collapse almost any moral theory into a form of utilitarianism. But if you don't include those constraints, then it's not weird at all to me that we might sometimes sacrifice "amoral" well-being so as to avoid doing something horrible - plenty of deeply immoral pleasures just don't seem valuable in the first place to me!
The problem I have with consequentialism is that it requires perfect knowledge of what the consequences will be. We can say no, it doesn't require "perfect knowledge," since that doesn't exist. But then how do you know what consequence your action will bring about? What threshold of certainty do you need to surpass in order to act? And what kind of person is more likely to believe they have reached that threshold--someone who often acts without understanding the likely consequences of their actions, or the opposite?
Is it always moral to feed a homeless man, or could there be times when that action truly harms him in the long term--or does it depend on the homeless man and his particular circumstances?
Deontology has its issues as well.
I don’t think this objection goes through. Presumably, practical and epistemic modality will impose constraints on one’s obligations.
So, if you can't know something, how are you obligated to do something?
If you're going to respond by saying this is fatal to consequentialism because not all of our actions' consequences are known, then that's an extreme and controversial stance to hold.
Everyone has different understandings of the world, and of what actions will bring about what consequences. So different people have different "constraints." Does the actual consequence matter, or only the intended consequence of the actor, based on their relative constraints?
There’s already a response to this. Ideally, you should promote the actual good. In practical cases, do what you **can** to promote the actual good.
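One standard way to cash out "do what you can to promote the actual good" is expected-value reasoning under your own credences. A minimal sketch, with probabilities and values made up purely for illustration:

```python
# Toy expected-value calculation for the homeless-man case above.
# The credences (0.9 / 0.1) and welfare values (+10 / -5) are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for a single action."""
    return sum(p * v for p, v in outcomes)

# Feeding him: you think it probably helps (+10), with a small chance
# of long-term harm (-5).
feed = [(0.9, 10.0), (0.1, -5.0)]
refrain = [(1.0, 0.0)]

print(expected_value(feed))     # 8.5
print(expected_value(refrain))  # 0.0
```

On this reading you aren't obligated to know the actual consequence, only to act on the best-supported estimate available to you, which is how the epistemic constraint gets built into the obligation.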
I think this devolves pretty quickly into people justifying just about anything, so long as they intend to promote the actual good.
I personally don't think it's prudent to urge people to act (in interference with other people's lives) to achieve something they are uncertain of. Seems... morally wrong to me. I'm far more comfortable saying we have negative moral duties. If you want to help people, and can successfully do so, you are more virtuous--but not more moral (the difference being, I'm not calling someone "immoral" after telling them they had an obligation to act and they failed to act correctly, or chose not to act given what they knew).
So I find the objection fatal, can't speak for others.
> If they do, then they are not distinctively non-consequentialist. Consequentialism offers a better explanation of why we should endorse welfare-promoting norms.
Principlism is a respectable, common-folk ethical theory. It supplies us with action-guiding standards. The obligations we have are beneficence, respect for autonomy, non-maleficence, and justice.
I think it's clear, generally speaking, why this will track welfare and why we should, morally speaking, “care” about it. I'm also of the opinion (as a matter of value theory) that what makes someone's life worth living is happiness, virtue, and autonomy, each for its own sake.
So my ethical theory, with its value pluralism, does a lot of the work in explaining the very norms that generally promote welfare. But notice what kind of consequentialism you get in conjunction with several theses people would generally commit to: https://plato.stanford.edu/entries/consequentialism/#ClasUtil
If your endeavor is to capture that, then under the principle of parsimony, you should prefer my view.
To me, the most compelling non-consequentialist reasons for moral norms are epistemic. I think it's good and important that we collectively believe true things, even in cases where doing so makes us worse off. For example, there might be a possible set of false religious ideas that would make us all better off if we all believed them, but it still seems bad to build a society around false ideas like that.