I recall a time about a decade ago when I commented on Facebook to the effect that it would make sense for the Movement for Black Lives to prioritize traffic fatalities over police violence, because traffic fatalities disproportionately affect minority communities and are far more common than deaths from police violence. This started a massive comment-thread pile-on, in which I recall someone saying I was comparing apples and spaceships. I think the best counterargument to my suggestion was that people were scared of police violence in a way that they weren't scared of traffic fatalities.
I think those folks would want to distinguish between injustice and misfortune and say the former is worse. They might think of traffic fatalities as more about misfortune; that said, a much better example of misfortune would be something like getting killed by a tornado.
Do you think that view is defensible upon reflection?
Suppose you have to choose between (i) preventing a painless unjust killing, e.g. a lethal injection administered while the innocent victim sleeps, and (ii) preventing some horrific misfortune, say a child very painfully burning to death in a forest fire.
Wouldn't basically everyone pick the second option? I think a lot of injustice-focus really just stems from people being blindly driven by emotional responses in a way that they wouldn't (and couldn't) rationally endorse upon further reflection.
One thing I could say in defense of this is that in many (perhaps most) of the contexts people have lived in, changing someone's intentions did more to reduce intentional deaths than anything else humans had access to did to reduce unintentional deaths. That probably changed sometime in the modern era (perhaps as recently as the early twentieth century), when safety standards started to matter significantly. But our intuitions about which things to try to change may still be shaped by a time when intentions were a more effective thing to change than other conditions.
And the other factor that would have substantially affected death in ancestral environments would have been getting rid of seriously ill or disabled people, which, combined with your hypothesis, seems like a pretty good explanation for their being heavily moralized in many cultures.
I'll think more about what might justify focusing on injustice, but I think that's something both the left and the right agree on.
I agree with you on the ethics, but I think that, as a matter of revealed preference, most people clearly find human-inflicted deaths more horrifying. You can see this from the fact that, for example, the pro-life movement cares about fighting abortion rather than fighting malaria, and people find fighting for human rights and opposing military intervention a lot more motivating in practice than fighting poverty in the developing world. In fact, I know many intelligent people who have completely sincerely expressed the view that human-inflicted suffering is worse. Amartya Sen and some other academics have even defended this view in writing. I think we just have to live with the fact that people find fighting perceived bad actors more motivating than fighting what they perceive as natural disasters, even if the only reason the natural disaster is so much of a problem is human decisions and inaction. You and I might disagree, but we hold what are clearly very unusual moral opinions among the human population, and this does not appear likely to change any time soon.
That said, I do think that if you frame things correctly, you can get some people who would otherwise work on police brutality to work on car accidents. But I expect this is very framing-sensitive and will not work if they perceive you as attacking them and their beliefs. That will just activate their tribal instincts in the opposite of the way you want. Of course, given how trigger-happy some people are when it comes to perceived attacks on their beliefs or tribe, some level of tribal attacks may be unavoidable.
I certainly agree that there are a wide range of cases where people find human-inflicted injustice more emotionally compelling! I'm just skeptical that many would reflectively endorse a strict universal generalization of the sort that would require them to prioritize preventing the painless lethal injection over preventing the child's burning to death in my above scenario.
Put another way: the best interpretation (in most cases; there may be some exceptions) is not that they think injustice is *worse*, but rather that considerations of worseness are not what's guiding their behavior.
You might think that the commission of injustice is really bad for the perpetrator, such that preventing the painless unjust killing would be better. You might also think that injustice itself is a sort of worsening-factor in one's death, so that an unjust death is worse than a comparably painful death that does not involve injustice.
Note: I am not claiming that (i) injustice-prevention should take lexical priority over the prevention of non-injustice-related suffering, that (ii) the priorities of social justice activists reliably track real-world injustices, or that (iii) furthering some half-baked revolutionary activist plot is a good reason to abstain from real-world beneficence (I in fact deny all of these claims). My point is just that one might think that preventing a smaller number of unjust bad things *could in principle* take priority over preventing a larger number of non-injustice-related bad things.
That's a great suggestion about reducing traffic fatalities - I had no idea they hurt minority communities so much. I suppose part of it is that it's harder to protest against. You can chant "defund the police" and direct your anger at cops, but all the reckless drivers are either unknown to you or already in jail.
The Dutch discovered that chanting “Stop de Kindermoord” and advocating road safety design changes was actually highly effective, as well as emotionally cathartic!
https://www.dutchreach.org/car-child-murder-protests-safer-nl-roads/
At the last APA smoker, I was talking to a scholar of Kant's theoretical philosophy, who said -- and I quote -- "I have no idea what 'anti-woke' is supposed to mean."
There you have it. The Culture War is officially more abstruse than the Critique of Pure Reason.
You should have given three wildly confusing and distinct formulations of what wokeness was, and then claimed they were all really the same. That would have cleared it up for them!
The antinomy of woke reason: we don’t know what woke means, we don’t know what anti-woke means, and yet both seem equally appealing.
I think the best counterargument to your observation that things like the long-term future, global poverty, and animal suffering are obviously way more important than social justice is that these cause areas don’t trade off like that. If social justice didn’t exist, I don’t think there would be more people working on effective altruist cause areas, because most people working on social justice would realistically work on something else of no more than comparable priority. Social justice is the current intellectual fashion in academia, so it might appear that it’s trading off against everything else, but I expect that if it weren’t there, there would be equally low-priority popular causes that would get elites fired up. Scott Alexander had a very nice post discussing how caring more as a society about long-term AI risks doesn’t mean short-term AI risks would suffer as a cause area, and I think the same principle might apply here. To be clear, I am personally agnostic about the argument I just made, but I am mentioning it as a plausible counterargument to what I think would otherwise be your strongest argument, if successful. It certainly doesn’t seem that unlikely that, just as caring more as a society about occupational licensing, or having a stronger social movement against regulations that are only a normal level of bad, would not harm the Effective Altruism movement despite drawing from somewhat similar demographics, the social justice movement might not be trading off against EA. Of course, there are in fact some reasons to think they might trade off in this way. I’m just drawing attention to the fact that this is very much a contestable point.
Right, a key question is what we treat as the counterfactual. Since some priorities are clearly better than woke ones, and others are clearly worse, the value of "Person X shifting away from woke priorities" is indeterminate: it depends on what they move *to*.
If there were, for example, just a big clumsy lever that swung between "woke" and "MAGA", then I'd want to leave it nearer the "woke" end!
But two more specific departures from social justice thinking that strike me as worth encouraging:
* It's robustly good to think more explicitly about trade-offs, opportunity costs, and cause prioritization
* It's robustly good to encourage more open-minded epistemic practices, tolerance of disagreement, etc.
Thanks for writing this. I've grown weary of watching so many highly intelligent philosophers go from "progressive", which is respectable even if you disagree with it, to "woke", which includes all the shameful behavior you wrote about. It's very roughly akin to how the Republican party went from Reagan doctrines, some of which were respectable even if you disagreed with them, to the nonsense we have today.
I have only a vague and evidentially weak hypothesis regarding how so many philosophically acute professors can be so intellectually immature. About ten years ago, I started living abroad and meeting lots of people from very different backgrounds: cultural, political, religious, sexual, intellectual, and financial. I came to realize that despite all my extensive learning from the first 50 years of my life, I was still wildly clueless about so much of life. Kind of embarrassing! I think most philosophers are a lot like how I was: not just clueless about some important matters, but clueless about our cluelessness. It's easy to admit that in a general way, but when it comes to specifics, humility tends to vanish.
One obvious optimization that might help is people not being unduly swayed by claims of helping minority groups when assessing which charities to donate to. More accurate charity assessment = more efficient money allocation = more lives saved (hopefully). In particular, I don't like how psychological biases and "fuzzy farming" make people disproportionately inclined to insist on donating to charities like Make-A-Wish, which are obviously horrendously inefficient at doing good. I could easily believe there are charities like that which pull on social justice levers instead.
My only caution here is that I don't think there's a hard-and-fast distinction between things that are "culture war" and things that aren't, and I wouldn't want to dismiss a point of view just because it's "culture war-y".
Indeed, I think you can already see the culture war starting to assimilate EA/beneficentrist ideas, what with the fight over PEPFAR, and the arguments in Scott Alexander's comments about malaria charities.
I think it's possible the right will negatively polarize against EA ideas because of the USAID stuff, in a way that makes currently non-culture-war EA ideas become culture-war-y in the future, and it would be a mistake to penalize those ideas just because of that.
I don't think you're in danger of making this mistake, but I think some people dismiss "woke" ideas as *inherently* culture war, when some of them are just normal ideas that have been through this process.
I do not troll people, but I don't care if they don't like me. I got some people very unhappy with me about the BLM protests when I told them that while we should get the active miscreants out of the police force, our poor areas were greatly underpoliced and we needed a lot more policing. I told them that it was better to view inappropriate killings by police as analogous to friendly fire in military actions. Somewhere between 5% and 20% of military casualties may be caused by your own fire, and this can only be minimized so much before you start taking more casualties from enemy fire. The killing of unarmed civilians by the police is measured in the tens; the killing of civilians by civilians (including killings of criminals by criminals) is in the tens of thousands. Now, I rank the killing of unarmed civilians by police as somewhat more of an affront than the killing of civilians by criminals. I am much less concerned about the killing of criminals by civilians or the killing of criminals by criminals. A useful but arbitrary heuristic, though a commonly held one. But an increase in policing that increases the killings of unarmed civilians by 'i', where 'i' is a small integer, while decreasing the killings of civilians by criminals by 'x·i', where 'x' is a large integer, is probably a desirable tradeoff.
In system design we are frequently forced to set parameters that involve tradeoffs. In this case, we can increase policing and policing intensity. Doing so results in more killings of unarmed civilians. It also results in fewer killings of civilians by criminals, fewer killings of criminals by criminals, and probably fewer killings of criminals by civilians. Decreasing policing intensity results in an increase in killings overall (as was observed in the aftermath of the BLM demonstrations).
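Purely as an illustration of the 'i' versus 'x·i' arithmetic in the comment above, here is a minimal sketch; the function name and all figures are made up for illustration and are not real statistics:

```python
# Illustrative sketch of the policing-intensity tradeoff described above.
# All numbers are hypothetical; they are not real crime statistics.

def net_change_in_deaths(i: int, x: int) -> int:
    """Net change in total deaths from an increase in policing that
    adds `i` killings of unarmed civilians (a small integer) while
    preventing `x * i` killings of civilians by criminals."""
    added = i          # additional police killings of unarmed civilians
    prevented = x * i  # criminal-on-civilian killings averted
    return added - prevented  # negative => fewer total deaths overall

# Example: i = 3 extra police killings, x = 100 averted criminal
# killings per additional police killing.
print(net_change_in_deaths(i=3, x=100))  # -297: 297 fewer deaths overall
```

On the comment's own assumptions, the tradeoff is desirable whenever x > 1, though one could also weight the two kinds of death differently (as the commenter does by treating police killings of unarmed civilians as "more of an affront").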
Yes, we want to reduce the killings of innocents and of minor offenders. But police work can be rather like irregular combat: full of stress, with wide opportunity to misperceive actions and make mistakes. And police, like combat soldiers, get burned out by the stress and start making bad decisions. And a lot of the characters they deal with have their own problems, which makes the mutual interactions more difficult and far more dangerous. Yes, pursue reforms to improve policing quality, but society in the US is vastly underpoliced by European standards and far more heavily armed.
My comments were not well received. I am not claiming to accurately perceive all of the factors involved, but I think that I have a stronger grasp of the underlying reality than most of the critics, who are, to my mind, blinded by their values and morals.
There are varying levels of representation of reality. There are actions and consequences, and potential actions and probable consequences. Values and morals are there to help you choose among possible actions in light of the action and its probable consequences. But values and morals should not be filters that blind you to reality (really, to our approximations/models of reality).
Correct, correct, and correct. Also, the footnote is too real.
“For those who think social justice issues are properly a higher priority than I’m inclined to think, I’d love to see some quantitative analysis (even just a very rough “back-of-the-envelope” calculation estimating the scales of various harms and proposed remedies) to back this up.”
From experience, this is typically met with skepticism about quantitative analysis itself. They can't be wrong about magnitudes, because asking about magnitudes is wrong.
Yeah, part of the inspiration for 'Refusing to Quantify is Refusing to Think (about tradeoffs)':
https://www.goodthoughts.blog/p/refusing-to-quantify-is-refusing
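To make the "back-of-the-envelope" idea concrete, here is a minimal sketch using the thread's earlier traffic-fatalities example. The figures are my own order-of-magnitude approximations of publicly reported US numbers (roughly 40,000 annual road deaths and roughly 1,000 annual killings by police), not figures from the post or the commenters; the point is the style of comparison, not the precise values:

```python
# Rough back-of-the-envelope comparison of two causes of death in the US.
# Figures are order-of-magnitude approximations, not precise statistics.

annual_traffic_deaths = 40_000   # approx. US road fatalities per year
annual_police_killings = 1_000   # approx. people killed by US police per year

ratio = annual_traffic_deaths / annual_police_killings
print(f"Traffic deaths are roughly {ratio:.0f}x more common.")  # ~40x

# Even a modest 5% reduction in traffic deaths would avert more deaths
# (~2,000/year) than eliminating police killings entirely (~1,000/year).
lives_saved_by_5pct_traffic_cut = int(0.05 * annual_traffic_deaths)
print(lives_saved_by_5pct_traffic_cut > annual_police_killings)  # True
```

Of course, as the thread discusses, a raw death count ignores the injustice weighting that many people place on human-inflicted deaths; the envelope calculation just makes the magnitude gap explicit so that any such weighting has to be argued for.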
This may be true of social justice people who criticise EA an abnormal amount, but from personal experience, a lot of SJ people will admit that their pet issue isn’t the world’s biggest problem if you ask them in a way they don’t perceive as an attack on their tribe. They’ll just shrug and say that nobody actually picks causes to work on by quantifying which is the world’s biggest problem. I really doubt, for example, that most social justice people don’t know that sexism is a much bigger problem in the developing world than in America. They just are not consequentialists and are not trying to maximise global utility or anything like that.
I like that post!