Discussion about this post

Muster the Squirrels:

Here's one way to make a consequentialist critique of EA as it currently exists.

Consider the US-China status quo. The US is not attacking China in pursuit of regime change, and China is not conquering Taiwan. The risk of the former seems minute; the risk of the latter does not. What if a 5% increase in the chance that this status quo holds were a greater net positive than all non-x-risk-related EA efforts combined?

Here are some of the possible negative outcomes if China tries to conquer Taiwan:

- conventional and nuclear war between China and the US, and their allies, with the possibility of up to several billion deaths;

- hundreds of satellite shootdowns causing Kessler syndrome, destroying most other satellites and leaving us with little warning of impending natural disasters such as typhoons and droughts;

- sidelining of AI safety concerns in the rush to create AGI for military purposes;

- an end to US-China biosecurity cooperation, and possible biowarfare by whichever side feels it is losing (which might be both sides at once - nuclear war would be a very confusing experience);

- wars elsewhere following the withdrawal of overburdened US forces, e.g. a Russian invasion of Eastern and Central Europe backed by the threat of nuclear attack, or an Israeli/Saudi/Emirati versus Iranian/Hezbollah war that destroys a substantial share of global oil production;

- economic catastrophe: a deep global depression; widespread blackouts; years of major famines and fuel shortages, leading to Sri Lanka-style riots in dozens of countries at once, with little chance of multinational bailouts;

- a substantial decline in efforts to treat, reduce, or vaccinate against HIV, malaria, antibiotic-resistant infections (e.g. XDR/MDR tuberculosis), COVID-19, etc.
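The "5%" comparison above can be made concrete with a back-of-envelope expected-value sketch. Every number below is an illustrative assumption of mine, not a figure from the comment (beyond its "up to several billion deaths" upper bound):

```python
# Back-of-envelope expected-value sketch of the 5% claim.
# All inputs are illustrative assumptions, not established estimates.

p_shift = 0.05                 # hypothetical increase in P(status quo holds)
war_deaths = 2_000_000_000     # within the comment's "up to several billion deaths"
expected_deaths_averted = p_shift * war_deaths

# Hypothetical stand-in for cumulative lives saved by all non-x-risk EA
# efforts (GiveWell-style global health work); purely for comparison.
ea_lives_saved = 50_000_000

print(expected_deaths_averted)                   # 100000000.0
print(expected_deaths_averted > ea_lives_saved)  # True
```

Under these (contestable) inputs, even a small shift in the probability of peace dominates; the argument's force depends entirely on the assumed magnitudes.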

If your simplified approach to international relations is more realist than anything else, you probably believe that a major factor in whether war breaks out over Taiwan is the credibility of US deterrence.

How much of EA works on preserving, or else improving, the status quo between the US and China, whether through enhancing the credibility of US deterrence (the probable realist approach) or anything else? Very little. Is that due solely to calculation of risk? Is it also because the issue doesn't seem tractable? If so, that should at least be regularly acknowledged. Could the average EA's attitude to politics be playing a role?

To the extent that the US-China war risk is discussed in EA, I do not think it is done with the subtle political awareness that you find in non-EA national security circles. Compare e.g. the discussions here (https://forum.effectivealtruism.org/topics/great-power-conflict) with the writing of someone like Tanner Greer (https://scholars-stage.org/) and those he links to.

In case you are wondering, I have no strong opinion on which US political party would be better at avoiding WW3. There are arguments for both, and I continue to weigh them, probably incompetently. I do think it would be better if there were plenty of EAs in both parties.

I have no meaningful thoughts on how to decide whether unaligned AI or WW3 is a bigger threat. (Despite 30-40 hours of reading about AI in the past few months, I still understand very little.)

Sjlver:

I've read one alternative approach that is well written and made in good faith: Bruce Wydick's book "Shrewd Samaritan" [0].

It's a Christian perspective on doing good, and it arrives at many conclusions similar to effective altruism's. The main difference is an emphasis on "flourishing" in a more holistic sense than what a narrowly focused effective charity like AMF typically pursues. Wydick relates this to the Hebrew concept of shalom, that is, holistic peace, wellbeing, and blessing.

In practical terms, this means that Wydick more strongly recommends (compared to, say, GiveWell) interventions that address more than one aspect of wellbeing. Examples include child sponsorships and graduation approaches, in which poor people receive an asset (cash, a cow, or similar), plus the ability to save (e.g., a bank account), plus training.

I believe these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them [1]. These programs are harder to evaluate, however, than programs that do one thing, like distributing bednets. That said, the rationale that "cash + saving + training > cash only" is intuitive to me, so this might be an area where GiveWell/EA is a bit biased toward what is more easily measurable.

[0]: https://www.goodreads.com/book/show/42772060-shrewd-samaritan

[1]: https://blog.brac.net/ultra-poor-graduation-the-strongest-case-so-far-for-why-financial-services-must-be-a-part-of-the-solution-to-extreme-poverty/
