13 Comments
Ghatanathoah

My issue with this is that it assumes that all moral gadflies are correct and that those condemning their do-gooderism are maliciously lowering the bar. If some percentage of gadflies are wrong and the thing they are condemning is good or neutral, then by raising the bar they are at best inconveniencing everyone for no good reason. At worst they may be demanding everyone make large sacrifices in order to make the world a worse place.

On social media I have encountered many moral gadflies. Here are some of the "atrocities" they are trying to raise the bar on:

- People enjoying art produced by artists who disagree with the gadfly about politics, or who have been accused of crimes.

- Television shows that children watch having gay characters.

- People watching pornography.

- People failing to helicopter-parent their children.

- Women having sex with more than one man in their lifetimes.

- White people practicing customs or using products that originated in non-white cultures.

I think many people would agree that many of these things are harmless or beneficial, and that the gadflies condemning them are doing harm. In these cases, anti-gadfly rhetoric is a public service.

It is not clear to me whether anti-gadflies do more harm than good. If all anti-gadfly rhetoric disappeared, would most people stop eating factory farmed meat? Or would they not have time to get around to that because they were too busy resisting the urge to watch porn, or making sure that their favorite writers hadn't said something problematic?

Richard Y Chappell

See footnote 2! I agree we should oppose *misguided* moralizing. But that's because it's misguided. (Further, I think it's easy enough to tell the difference between real harms and fake ones.)

Daniel Elstein

One way of thinking about this issue is in terms of how internalisation costs differentially affect which principles it makes sense to internalise in different people. Consider the contrast between these two questions:

(a) Which principles is it best to try to internalise in myself (given the relevant internalisation costs)?

(b) Which principles is it best to try to internalise in others (given the relevant internalisation costs)?

For some people the answers to these questions will diverge: at least for those who are already strongly morally committed, the principles selected by (a) will likely be more demanding than those selected by (b). (They may differ in other ways too given e.g. differences in decoupling.)

I wonder whether it is really helpful here to think in terms of a difference between aiming at truth vs. being strategic. What are the "true" moral principles here? The ones selected by (a)? But that's a rather parochial idea of truth. You might have in mind a different question to select the true principles:

(c) Which principles is it best to try to internalise in an ideal moral agent (who faces no internalisation costs)?

But I am sceptical about the relevance of (c) to us - the version of moral truth that emerges from that question might be quite alien to our moral concerns. If that is right, then both (a) and (b) are important questions here, and they both get at important aspects of the moral truth, even though (b) also invokes more strategic concerns.

Richard Y Chappell

That is interesting, though I'm generally not here thinking about "internalizing principles" at all. We could imagine an anti-reliable agent (one who reliably achieves the opposite of what he's fundamentally aiming at, on some suitable understanding of "opposite"), whom we would want to internalize a bunch of moral falsehoods (so that he'll aim at the bad, and thus achieve the good). But that doesn't change what's true.

Daniel Elstein

I thought the issue was what kind of moral persuasion works best, where we hope to influence people's behaviour by changing their moral views. I was assuming that the way this works is that people feel some kind of pressure to align their behaviour with their moral views, and it is this which makes changing moral views a potentially effective way of changing behaviour. Your anti-reliable agent case involves something like deviant causation from this point of view. If we had to deal with such an agent, it would seem as though we were simply manipulating them rather than engaging in anything resembling moral persuasion.

On this idea that there is a fixed stock of true moral principles, such that the right way to describe the strategic way of talking to the unenlightened is as getting people to believe falsehoods (or approximations to the truth), my worry starts here: how do we understand the truth-conditions of derivative moral principles, given that we are (I take it) both assuming an underlying utilitarianism? I just assume we need something like the machinery of internalisation to avoid the rule worship / collapse dilemma (but obviously that just reflects my own reading of the literature). But in any case, we tend to have some formula of the form: derivative principle P is true iff P is a member of the optimal set of principles for agents A to stand in relation R to (where "optimal" gets some utilitarian gloss, there's a question about which agents are relevant, and there are various options for what the relation R is - it could be behavioural conformity, belief, something to do with internalisation, etc.). If that's not broadly your framework, then I guess my worries are misdirected! But if it is, then I think the worries I expressed in my comment above will end up applying.
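Just to make the shape of that formula explicit, it would be something like this (the notation is only illustrative: V is whatever utilitarian value function one favours, and S ranges over candidate sets of derivative principles):

$$S^{*} = \arg\max_{S}\, V\big(A \text{ stands in relation } R \text{ to } S\big), \qquad \text{$P$ is true} \iff P \in S^{*}$$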

Neeraj Krishnan

For a specific example you cite, factory farming, and for a class of such problems (I don't know how to define the class), it might be better not to moralize at all. Instead, ratchet up existing laws such as the Animal Welfare Act (AWA) and the Preventing Animal Cruelty and Torture Act (PACT) through focused pressure, and persist. The good kind of interest group!

For example, anti-nuclear activists seem to have (sadly) succeeded in completely shutting down nuclear power without any large-scale debate about its merits, just by making the regulator choke every single proposal. If the safety inspector never certifies a factory farm as safe...

Murali

I think this is where the concept of wrongness (as opposed to goodness or badness) plays a part. Only if an act is genuinely wrong (and not merely worse than some other actions) is it permissible to hold people to account for it. Otherwise you would be objectionably authoritarian: you would be acting as if you had the power to create duties for other people simply by making a moral demand on them.

Insofar as you're trying to raise the bar of permissibility, people are going to push back. After all, from their perspective, we seem to be doing well enough with the lower bar. Raising the bar therefore looks like you gratuitously imposing a burden on them, and it would be fitting for them to resent those who gratuitously impose burdens on them.

If, instead of raising the bar, you pointed out that certain actions which they thought cleared the bar in fact didn't, the resentment would not be fitting (at least insofar as you could genuinely show that the act is forbidden by standards they already accept).

It is here that limiting yourself only to publicly justifiable demands is important. We should try hard to ensure that our insights are genuinely accessible (at least in principle) to all morally committed agents. Otherwise, some of them really would be justified in resenting us for trying to raise standards.

The distinction between raising the bar and showing that current practices don't actually clear the existing bar is important. I know these are only metaphors, but the distinction matters for what kinds of justifications and arguments we offer.

Richard Y Chappell

To clarify, my post is not about the "bar of permissibility" but rather the bar of what's socially normal or "expected" (in some loose sense).

I don't think it's "gratuitous" if it has very good effects, like saving and improving lives. You seem to be writing from a perspective on which nothing matters except deontic status. "If it's not obligatory, there's *no point* in saving this child from dying of malaria, so don't burden me with such a request!" Seems like a bad perspective to me.

I don't think it's ever justified to resent someone for doing what's impartially best. That would just be selfish.

Murali

The bar of permissibility and the bar of what's socially normal or "expected" are not completely independent concepts. Arguably, the bar of what's socially normal or "expected" is where most people (or the dominant groups) judge the bar of permissibility to be.

I'm not saying that nothing matters except for deontic status. I'm saying that deontic status matters for what we can demand that other people do. Trivially, it is better to do better things. But, if it's not obligatory, I don't *have* to do it. So, it's not that there's no point to saving the child dying of malaria. But, *if* I'm not obligated to do it, you shouldn't *insist* that I do it.

That's why deontic status matters. It matters for what standards of behaviour we can hold others to (and what standards we can be held to). I know you have a project where you want to do away with deontic status, but I think you are making a mistake there in failing to understand how deontic concepts function in our practical lives.

Richard Y Chappell

I'm not talking about "insisting" on anything - just encouraging - so I think betterness suffices for my purposes!

Murali

Of course, to be clear, I don't think people are necessarily correct or even justified in their judgments about where the bar of permissibility lies. The vast majority of people in the world think there is something wrong with living or working in a country without the "correct" paperwork. They are wrong. I think they are wrong for reasons that they themselves can appreciate.

Murali

Fair enough

Patrick D. Caton

As long as 99% of people are do-nothing hypocrites, this won't resolve anything. Go the pragmatic route and rate actions, not words.
