Thanks for this great post, Richard!
My colleagues and I recently conducted a series of psychological studies to understand how ordinary people think about such collective impact situations. Here's the abstract:
> When people act prosocially, there are both benefits to society and personal costs to those who act. We find a many-one bias in how people weigh these costs and benefits when assessing others’ actions. People more often judge the prosocial benefits to outweigh the personal costs when there are many actors than when there is just one, even if both cases share an identical cost-benefit ratio. We document this effect across eleven experiments (N = 6,413), including samples of adults and children, individualistic and collectivistic sample populations, elected policymakers, and lawyers and judges, and in both judgments of hypothetical scenarios and real decisions about how other participants should spend bonus payments. The many-one bias has public-policy implications. Policymakers might earn public support for policies that mandate or incentivize prosocial action for many people, even while the public would not find that same prosocial action worthwhile on an individual basis.
https://osf.io/preprints/psyarxiv/bkcne
Isn't a lot of that likely because people aren't really actually evaluating the impact but rather whether they will be seen as a good or bad person based on what they say?
There are studies showing that merely asking people to imagine they are a single decision maker, rather than one voter among many, substantially affects which policies they advocate.
As to the actual impact of calling on people not to use AI, it is hard to imagine a more harmful thing for academics to do regarding the environment. I am personally both an academic and someone who thinks climate change is important. When I hear it from academics who I know fly around the world to fancy conferences rather than Zoom, work in offices and live in houses that are nicely air conditioned, and generally consume as large a fraction of their salary as the next person (one of the best predictors of CO2 impact), it makes me disgusted, so I can only imagine the effect on people who actually identify against that kind of academic elite.
It's the equivalent of wealthy free-market advocates making a lower top marginal tax bracket a big, visible plank of their platform. Even if that is consistent with your overall worldview, the rhetorical effect is obviously going to be to undermine your credibility.
For academics in particular, picking on AI as something people should avoid -- when AI represents both a threat to their societal status as experts and, at a visceral level, it is clear their primary emotional reaction to AI is negative not positive -- just further calls into question whether this is a serious concern or motivated reasoning to benefit themselves.
As part of the lefty urban elite, it seems pretty damn obvious that if you care about climate change you should be vocal about the things that code the opposite way. STFU about giving up things that you don't really do or want to do, and avoid perpetuating the impression that you want to lecture people without giving up your own comforts. Instead, talk about giving up things that you are perceived to like, or talk up aspects of environmentalism that code conservative: the potential for jobs and industry from harnessing tidal power, the case for nuclear power, the patriotic pride we could feel at being number one in some of these industries. Don't say the things people realize you would say regardless.
That's interesting, but I think it assumes a fundamentally incorrect framework for thinking about what people are doing in these situations. The costs and benefits that are relevant are less those of the policy and more the social impact of announcing their support.
The reason why so few people support internalizing the externalities is that what they are actually balancing isn't the societal costs versus benefits of a policy but their own personal social reputation. There are studies showing that people behave quite differently if they truly believe they are making the final call on an issue than when they are one voter among many.
There is a common-sense explanation for all of this. If you know, e.g. as an academic complaining about AI, that your opinion is unlikely to have much of any effect, you are likely to say whatever best expresses your values. Saying AI is bad because it uses energy signals to everyone around you that you care about the environment and that you are part of their team. Saying "umm, actually it's inconsistent to treat it any differently than any other good people consume, and overall consumption is the best predictor of carbon impact" risks creating rifts or causing people to suspect you don't really care that much about the environment. If you get appointed to the presidential task force on policy, the seriousness of your decision will weigh on you in a way that makes you willing to endure/offset those costs.
Of course, if your values are -- like mine, say -- being contrarian and disagreeable, you do that even when it isn't the best thing to do.
I think the Pigouvian taxation solution, while making sense, would be a political disaster.
The Red Tribe wouldn't be happy with being made to pay more tax for something they often think is overblown anyway, and the Blue Tribe would have conniptions. They'd see this as the rich being able to buy their way out of doing their fair share at the expense of the poor.
Maybe the best shot would be to rebate the tax revenues on an equal per capita basis. Great for the poor and others who consume less than average.
But yeah, most people are too awful for good policies to be politically feasible.
I've noticed that something like the total cost fallacy is common when a celebrity or artist supports a controversial political cause. Some people treat buying or enjoying anything that person has made as equivalent to advocating their views yourself, and also treat you as personally responsible for any financial contributions that celebrity has made towards that cause.
I recently encountered some people who angrily denounced anyone who still read "Harry Potter" because of JK Rowling's anti-trans views. They were not happy when I pointed out that Rowling only donated a tiny fraction of her income to support her views, so buying her books had negligible impact. I also suggested that donating a few dollars to a pro-trans charity would probably have vastly more impact than boycotting Rowling; this just made them upset.
Yeah, in that case I suspect what's going on is a kind of purity-contagion effect, like how most people wouldn't feel comfortable wearing a sweater that was previously owned by a serial killer. Once certain political causes become venerated as sacred, and the opposing side as taboo, then any form of association with the tabooed risks rubbing off on you.
“I want to be able to impose costs on others for the sake of a lesser benefit to myself”
How could you rephrase this to make the unreasonableness more salient? This just sounds like standard operating procedure for people (which you quickly acknowledge).
Also, in resonance with your first line, I would suggest that AI abolitionists, in addition to making targeted donations, also drastically shift their diet towards plants as a more effective lifestyle choice for the climate, for farmed animals and for a better world in general.
“Think AI is bad for the planet? Eat more plants.”
If that were really "standard" then we'd expect most interactions to be negative-sum. Best go hide in a cave somewhere because other people will steal, enslave, or otherwise exploit you at any opportunity. They in turn expect the same from you, so everyone can expect to be worse off as a result of living in a society with other people.
I take it that law and common-sense morality alike are designed around trying to prevent that dystopian scenario. But that requires generally constraining our interactions with others to ensure mutual benefit: we either get their consent (a reliable sign that it's worth it for them), or else we leave them alone. By contrast, it would clearly be unreasonable to, e.g., secretly dump our trash in our neighbor's yard.
Now it's controversial to what extent we should go out of our way to *benefit* others (at some - lesser - cost to ourselves). But "don't harm others (for your own lesser benefit)!" seems a less controversial moral demand.
I’m a bit confused now because you acknowledge that most people don’t want to internalize costs via taxes. You also say:
“I guess most people just aren’t that committed to being even minimally reasonable (or avoiding egregious selfishness, so long as it’s normal), which is a depressing thing to realize.”
An example that immediately came to mind for me is of people driving loud motorcycles or with modified mufflers. Aren’t they imposing costs on everyone for some small benefit to themselves? Perhaps you’d say this may be too common but not “standard”.
Right, it's not unheard of for people to be "unreasonable" on occasion! I suspect it also tends to involve motivated reasoning or self-deception, such that the selfish person doesn't explicitly conceive of themselves as imposing greater costs on others. (That would too clearly be unreasonable, I take it, whereas most people don't like to think of themselves as being unreasonable - even when they in fact are.) So, one bit of evidence that this is commonly viewed as unreasonable is that you won't often find people openly admitting (even to themselves, I expect!) to harming others more than they help themselves.