Can something in the vicinity be used as a general argument against moral realism? Imagine that we somehow discovered that our grasp of moral facts is completely wrong and that the only moral fact is that we each ought to collect 589281 paperclips. It seems like in this scenario we would just stop caring about the moral facts. But doesn't that at least imply that, on some meta-level, we already aren't fully committed to caring about moral facts, but only care about them if, for example, they line up with our desires or sensibilities?
Good question! I discuss this more in "Metaethics and Unconditional Mattering": https://www.goodthoughts.blog/p/metaethics-and-unconditional-mattering
I think the answer to why one would prefer non-consequentialist norms is hidden in the initial framing. The choices are presented as "norms which are best for the collective on average vs. norms which are not," as if we were making first-person decisions from a universal third-person standpoint. But another option is to endorse norms that allow us, individually, to pursue our own personal projects at the expense of the abstract global utility function.
So if "why should I care?" is the ultimate litmus test, then the consequentialist is actually in a much harder position than the non-consequentialist, because the choice isn't "well-being or no well-being" but rather "the well-being of some abstract collective vs. my own well-being, or that of my loved ones." Once we make this reframe, the consequentialist is in a tougher spot.
One way around this issue for the consequentialist is to say that morality *just is* about universalizing/impartial norms, which I'm open to. But at that point the more fundamental question becomes "why be moral?", which isn't clearly answerable as initially presented.
Anyways, great stuff! I'm enjoying this line of enquiry.
Yeah, insofar as the appeal rests on self-interest, that might better motivate *rejecting morality* rather than *accepting deontology*. Another intermediate position would be a non-maximizing (satisficing or scalar) form of consequentialism, though as one of the global privileged we might still fear others taking from us for the greater good. Deontology might better serve to protect the privileges of the elite.
"Since moral subjects could generally anticipate being better off if agents successfully followed utilitarian norms, there seem clear reasons for us to prefer utilitarian (rather than deontological) norms to be successfully followed."
No truly utilitarian agreement between agents has ever existed.
Every agreement establishes a framework of rights and entitlements in which "welfare" (however calculated) is merely one factor among many. All consensual agreements are inherently deontological.
Utilitarianism would never be agreed to by consenting adults, let alone serve as a universal moral framework. This should be viewed as an immediate defeater for the theory.
What if a newspaper article described the trolley problem, and you learned that the man pulled the lever to save five, making himself responsible for the death of one? But as you read on, you learn that he lacked full knowledge of the situation he found himself in: it was a performance, a "magic trick," in which the five were never in any real danger and would have escaped. By flipping the switch he killed one when none would have died had he done nothing. We can assume away all uncertainty to rid ourselves of this problem, but that's not real life.
Deontological rules generally seem to be conditional: don't physically harm someone (unless you are defending yourself), and so on. In a situation like the trolley problem, the immoral act was tying people to the tracks, not pulling the lever; in other words, it's not a real moral dilemma. There are too many unknown variables to assume away. If we do assume them all away, then deontology could conditionally tell us to pull the lever, killing one to save five.
If we imagine a different case, of killing one when otherwise *zero* would die, then we don't get any disagreement between consequentialism and deontology. To assess the theories, we need to consider cases where they differ.
Agreed. We also need to consider why they differ, beyond "consequence vs. rule."