Discussion about this post

Lucius Caviola

Thanks for this great post, Richard!

My colleagues and I recently conducted a series of psychological studies to understand how ordinary people think about such collective impact situations. Here's the abstract:

> When people act prosocially, there are both benefits to society and personal costs to those who act. We find a many-one bias in how people weigh these costs and benefits when assessing others’ actions. People more often judge the prosocial benefits to outweigh the personal costs when there are many actors than when there is just one, even if both cases share an identical cost-benefit ratio. We document this effect across eleven experiments (N = 6,413), including samples of adults and children, individualistic and collectivistic sample populations, elected policymakers, and lawyers and judges, and in both judgments of hypothetical scenarios and real decisions about how other participants should spend bonus payments. The many-one bias has public-policy implications. Policymakers might earn public support for policies that mandate or incentivize prosocial action for many people, even while the public would not find that same prosocial action worthwhile on an individual basis.

https://osf.io/preprints/psyarxiv/bkcne

Peter Gerdes

As to the actual impact of calling for people not to use AI: it is hard to imagine a more harmful thing for academics to do regarding the environment. I am personally both an academic and someone who thinks climate change is important. When I hear from academics who I know fly around the world to fancy conferences rather than using Zoom, work in offices and live in houses that are nicely air conditioned, and generally consume as large a fraction of their salary as the next person (one of the best predictors of CO2 impact), it disgusts me. So I can only imagine the impact on people who actually identify against that kind of academic elite.

It's the equivalent of wealthy free market advocates making a lower top marginal tax bracket a big, visible plank of their proposal. Even if that is consistent with your overall worldview, the rhetorical effect is obviously going to be to undermine your credibility.

For academics in particular, picking on AI as the thing people should avoid -- when AI both threatens their societal status as experts and, at a visceral level, clearly provokes a negative rather than positive emotional reaction in them -- just further calls into question whether this is a serious concern or motivated reasoning that benefits themselves.

As part of the lefty urban elite myself, it seems pretty damn obvious that if you care about climate change you should be vocal about the things that code the opposite way. STFU about giving up things you don't really do or want to do anyway, and avoid perpetuating the impression that you want to lecture people without giving up your own comforts. Instead, talk about giving up things you are perceived to like, or talk up aspects of environmentalism that code conservative: the jobs and industry we could create by harnessing tidal power, the case for nuclear power, the patriotic pride we could feel at being number one in these industries. Don't say the things people realize you would say regardless.

10 more comments...