54 Comments
Apr 4 · Liked by Richard Y Chappell

Marxists: Every single leader we've ever produced has been a complete moral monster. This has no bearing on whether our view is bad. However, SBF's existence decisively refutes EA.

I absolutely agree with this, but I can imagine a Marxist saying, "We've been here; we know what can happen when you have a hard-edged, unsentimental view of how to make the world better and a tight-knit community that is sometimes impervious to criticism. We've learned our lesson, but we can see you sleepwalking into the same mistake." In practice, though, the loudest criticisms of EA seem to come from the people who have least learned their lessons from Marxism.

Apr 4 · Liked by Richard Y Chappell

The greatest challenge weaving through these points is Plant's "Meat-eater's Problem": it undercuts much of 10, greatly shifts 15, greatly forks 19, underscores 20, provides the clearest example for 25, and profoundly underscores 31, for starters.

author

Very much an intramural dispute there, though!

Apr 6 · edited Apr 6

I was just thinking that one can disagree with 19 but still be a good EA, as long as one doesn’t advocate for that position. Is that what you mean by forking?

I’ve now refreshed my memory about the meat-eater problem. And there is also Benatar’s asymmetry argument.

Nah, I meant that the relative goodness/badness of the extinction of humans forks greatly on how the future humanity that avoids extinction goes on to treat sentient non-humans. (Not only the holocaust of factory farming, but also, eventually, suffering in the wild, quality of life for synanthropic species, and digital minds.) If the bad scenarios in these respects are more likely than the good, then the extinction of humanity might be the best thing that has ever happened.

Well, you suggest that it's good to help people. That seems like it assumes utilitarianism, and thus you're committed to thinking that you should feed your children to the utility monster.

Or so I've heard from EA critics.

But it is undeniably true that effective altruism is rooted in utilitarianism, specifically Peter Singer's utilitarianism. Ideologically, it would seem to be a branch or subset of utilitarianism.

author
Apr 5 · edited Apr 5

"rooted in" may be ambiguous between "depends upon" and "inspired by". It is the latter, but certainly not the former. So it is straightforwardly false to call EA a "branch or subset of utilitarianism".

See: https://rychappell.substack.com/p/beneficence-is-not-exclusively-utilitarian

Apr 4 · Liked by Richard Y Chappell

I think 13 (hits-based giving) is potentially objectionable, citing #1 on your list of things you don’t believe. Depending on how concerned you are about the general tendency to skew our reasoning toward our own benefit, it might be the case that even a little (attempted) hits-based giving results in a lot of self-serving waste, such that it doesn’t pencil out.

author

It seems like there will be plenty of cases where there is no conceivable self-serving benefit to any of the candidate causes one is considering. (E.g., I'm not an AI safety researcher, so donating to AI safety doesn't benefit me at all -- except insofar as it ends up benefiting everyone, and I'm a part of everyone.) Would such a constrained version of hits-based giving (excluding self-serving options) still seem objectionable?

Apr 4 · Liked by Richard Y Chappell

As someone who is not part of the EA movement, but has generally viewed it sympathetically as essentially standing for the simple proposition that people who can afford to should give more of their money to charity and do so in the way that maximizes its impact, I find I agree with most of these points, and appreciate the post as a deeper dive into them. I also think that a lot of the criticism I've read seems driven more by a distaste for those involved in EA than by objections to EA itself. For example, the recent Wired article seemed a more persuasive case against giving to charity generally than against EA.

However, I do think points 12 and 22 are potentially problematic, for a couple of reasons. Together, they seem to stand for the proposition that it's self-indulgent to engage in hands-on efforts to improve the world when you have the capacity to make a lot of money doing something else and use the proceeds to fund a greater impact than your hands-on efforts could. Thus, a plastic surgeon would be self-indulgent to take a low-paying job with an NGO treating burn victims in a war zone, when they could make a whole lot of money running an aesthetic practice on Park Ave and fund the salaries of a dozen doctors at that NGO. An educator should resist the temptation to work in an underperforming public school in favor of starting a test-prep company for the affluent and using the proceeds to provide greater resources to underperforming schools.

One problem with this approach is that successfully diverting skilled people from working directly to better the world, in favor of a focus on making money, would mean that at some point there would be no skilled people left working to directly better the world.

The second problem is that what "via permissible means" refers to is not obvious, and it's doing a lot of work. In the above examples, both running a test-prep company and running an aesthetic plastic surgery practice seem superficially morally neutral, but it's hard to gauge the large-scale impact of different kinds of work. Even here, one could be said to contribute to harmful views on body image and the other to exacerbating inequality in access to higher education. Other lucrative careers are murkier: working in finance may mean helping to increase the wealth of companies that will in turn use it to fight against environmental regulations, etc.

Then there's the problem of weighing adjustments to the permissibility of one's work against the potential gains in profits and the resulting good one can do, which seems like it could lead to a slippery slope. Perhaps the plastic surgeon can double his profits by using substandard materials that pose only a small risk to his patients, for example. I have not followed the SBF situation closely, so I don't know if this is the kind of reasoning that led to it, but it seems at least superficially plausible that it could be.

author

On the first point: I should clarify that I think moderate amounts of "self-indulgence" are perfectly fine. Nobody's a saint, and I'm not suggesting any obligations in this vicinity. But I do think it's *morally better and more virtuous* to prioritize the impartial good over warm feels.

> "at some point there would not be skilled people working to directly better the world"

Then direct work would do more impartial good, and be recommended by my principle.

> "what "via permissible means" refers to is not obvious and it's doing a lot of work"

I agree! That's left open, but it's certainly important.

Apr 4 · Liked by Richard Y Chappell

I'm pro-EA, but my one misgiving is something like the following: certain kinds of actions are more legible to the EA framework than others, and I worry both that this biases the way EA evaluates interventions as a whole, and, maybe more worryingly, that if EA became widely enough adopted, it could create perverse incentives to ignore or exacerbate solvable problems because they're not as legible.

I think this sort of objection can be covered by some of the points you make above, and it's certainly not unique to EA, but it leaves me thinking that while more EA is better on the current margin, universal adoption of EA norms makes me a little more nervous.

As an example of what I mean: consider the current war in Gaza (a less controversial topic than EA itself :P). I think EA-style analyses will more easily be able to evaluate interventions like increasing food donations and vaccines, but will struggle a lot more to evaluate interventions like "write a letter to your congressperson encouraging them to insist on conditional arms sales to Israel", or whatever... But it may be that the most effective thing to do for Gazans is to compel Israel to end the war, even though the contribution of any individual action to that outcome is basically unanalyzable. Obviously, the EA view is seeing something true and important: while the war is ongoing, we should absolutely do what we can to get food and medicine into Gaza--but I wouldn't want people to ignore the importance of building a political coalition to shape US-Israeli political relations.

Fwiw, my framing above suggests a pro-Palestinian point of view, but you can make the same style of argument in the other direction: one might think that the best thing to do for Gaza is to overthrow Hamas and so individual actions that contribute to the US ensuring Israel has enough freedom of action to achieve this are important, but they will be almost impossible to quantify.

In our current world where we have a zillion marches for Palestine that achieve nothing, and where people think giving food to Gazans is some underhanded ploy by Biden, I think more of the EA thinking is necessary.... But I can imagine a world where we look at warzones and only think "how can I get more vitamin A into this warzone?" and not "can we do anything to end this war?" because the latter is too hard to quantify, especially at the level of individual actions.

author

Yeah, this is definitely a common sort of worry. I think it rests on a further assumption that I would reject (and think other EAs should too, but perhaps not enough actually would?), namely that we always need explicit quantification in order to justify things from an "EA" perspective. But insofar as the "EA perspective" is just meant to be something like impartial beneficence + instrumental rationality, it's obviously not instrumentally rational to ignore hard-to-quantify promising prospects.

I think what's going on here is that many assume that quantitative tools are meant to *substitute* for good judgment, whereas I think they're best thought of as mere tools. I should write a separate post on this sometime.

Apr 5 · Liked by Richard Y Chappell

Again, I want to preface by clarifying I'm being a bit devil's advocate-ish here, but:

I think the issue extends beyond just quantification, and maybe has more to do with representability-as-a-model, or something like that.

Reasoning about incentives, or trying to solve for equilibrium in a complex social system with lots of feedback loops and common knowledge and alternative coordination points, etc., is probably always going to feel a lot more vibes-y, and it's hard to categorize what counts as good judgement there.

It's not that EAs can't think about such systems in a good way, but I think interventions that go through social dynamics are always going to generate less consensus among EAs than more "direct" interventions, and so probably play less of a role in EA thinking.

Apr 4 · Liked by Richard Y Chappell

Nothing is obviously objectionable. These are good thoughts, and I think that your perspective is quite palatable to the average smart person. The tiling-the-world and doubling scenarios are quite repugnant, although they could be defended on strictly utilitarian grounds. While those sorts of arguments belong in philosophy journals, I think these belong on the forum and Substack.

The most powerful critiques of EA would need to be pretty sophisticated and not necessarily obvious. For example, as one commenter below noted, there is the meat-eater problem: more people and a long-term future might mean lots of animal suffering. Another would be the collective-action dynamics around the evolution of altruistic behavior and fertility, as another commenter noted. But these ideas are like a 9/10 on the repugnance scale and need to be defended carefully. I think the person best equipped to do that would actually be an EA.

Overall, EA seems to be among the best/smartest large organizations/social movements out there.

Apr 4 · Liked by Richard Y Chappell

Great piece, and I agree! The one thing I would say deserves a bit more epistemic humility is that “Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle.” This is because, although it fits with our WEIRD psychologies, historically it has not been the norm, and there is very little you can do to convince someone who doesn’t have those egalitarian intuitions to care about ethical cosmopolitanism. It’s also a little more complicated considering that, I assume, even someone like Peter Singer ethically cares about his mom more than a random stranger in Uganda. Nonetheless, a very great read.

author
Apr 4 · edited Apr 4

To clarify: "extending one's sympathy to a broader moral circle" does not entail strict impartiality (e.g. between loved ones and strangers). I think we should be impartial *between various strangers*, but that's a much weaker claim.

I don't generally find the mere fact of disagreement to be epistemically undermining. For example, our inability to convince Nazis of the badness of genocide should not in the slightest undermine our (very reasonable!) confidence that genocide is indeed extremely bad.

That's a good point. I would say the fact that we really can't convince people about moral claims says something about the nature of morality: ours is a bit more of a psychological fact than a clear logical progression. Also, although we can't convince the Nazis that our moral system is correct, and we can agree that the Nazis are for sure bad, there are many other groups that aren't particularly cosmopolitan (maybe some communitarians who just place much higher moral value on people close to them) who, I would argue, don't have an obviously worse moral system than ours, as the Nazis do.

I think one important apparently anti-cosmopolitan point is that we can often collectively be more effective at being neutrally beneficent if each of us aims our efforts at local communities whose needs we understand better and that we are in a better position to aid. Now the modern world creates various defeaters for those assumptions - in particular, global wealth inequality often means that there are distant needs that we are in better position to aid than local ones. But there’s still some sort of reason of effectiveness for focusing a bit more locally than some EA discussions assume.

Apr 4 · Liked by Richard Y Chappell

31 (risk of appearing boastful makes talking about the good you do *more* praiseworthy) might have a good Chesterton’s fence argument against it, somewhere. This is a half-baked take, so maybe someone else can refine it, but I haven’t explored the strongest arguments in favor of boast-avoidance as a social norm enough to feel confident in rejecting them.

Apr 5 · Liked by Richard Y Chappell

I think the way I would develop this line of thought would be to say that the real risk is not “appearing boastful.” The real risk is of damaging your own character by boasting, thereby becoming attached to a self-aggrandising view of yourself that will damage both your ability to teach and your ability to learn.

author

How does that risk compare in importance to the risk of failing to displace excessively complacent / non-altruistic social norms? Or the risk of adopting an excessively self-centered (in the most literal sense) view of ethics, which presumably poses an even greater threat to one's abilities to teach and to learn (about what really matters, which is mostly outside of oneself)?

Virtue ethics can indeed become excessively centred-on-self and not on the outside world. Moderation in all things, including even moral development! But arrogance is an epistemic risk. Insofar as all moral judgment relies on our ability to see the world accurately, I think we have to rank it quite highly. It has to come above “failing to displace excessively complacent norms” because we won’t be able to give good moral instruction if we cannot see ourselves and others clearly.

I would put this in the context of a broader risk, actually. Points 25, 26, 27 and 31 have the potential to combine badly. In particular, 25 says “Don’t criticise EA unless you think it’s so important that you’re willing to kill people for it,” 26 and 27 say “the default judgment of people who criticise EA should be that they are vicious, bad and wrong, unless you can give reasons why they’re not” and 31 says “Don’t worry about seeming arrogant or boastful.” Combining these could be a recipe for becoming arrogant in your own views and hateful towards critics.

Notwithstanding the risks, then, I would say that epistemic hygiene requires us not to place great weight on the risk of people dying as a result of criticisms of EA. Ultimately, there are certain practices that lead to better understanding of ourselves and of the world, and these include openness to criticism and a willingness to look for the good in other people’s arguments instead of demonising our opponents. It’s often difficult to calculate the downsides of failing to do this, precisely because they are located in what we don’t know. Defaulting to good argumentative norms is the only sensible way to deal with this, all calculations aside.

Similarly, as you’d expect from someone who leans virtue ethical, I think we should care about our internal states when talking about charitable giving. There are moral risks involved in boasting. I don’t think we need to purge ourselves of all personal pride before speaking about our charitable donations, but I do think that in so speaking it is wise to try to be honest with ourselves about our motives, and be suspicious of possible arrogance. The wisdom of encouraging others to do the same is a reason to speak anyway, even if we suspect ourselves of arrogance, but it’s not a reason to think that arrogance is a non-issue.

author
Apr 5 · edited Apr 5

#25 does not say that. It does say that you should take care not to discourage people from doing good things, and that discouraging effective donations is especially risky. But there are obvious ways that one could responsibly criticize big-EA without discouraging people from effective donations.

> "[Arrogance] has to come above “failing to displace excessively complacent norms” because we won’t be able to give good moral instruction if we cannot see ourselves and others clearly."

Part of my question was how great a risk it really was that a beneficent person trying to normalize effective giving would *actually* be arrogant, especially to such a degree that they "cannot see... clearly" and end up "unable to give good moral instruction". There are a lot of speculative leaps here, compared to much more clear-cut risks of *letting people die unnecessarily*. I personally find it very unlikely that you have the priority ordering right. But I appreciate your taking the time to explain your view!

Apr 6 · Liked by Richard Y Chappell

I'll come and bite as well. No one has voiced what I think is the main and real point of disagreement with EA: a lot of people don't believe you can do, or are in fact doing, all of these things.

1) Can you do it? Is it possible to quantify "good" at all? And does the EA movement have the ability to quantify it accurately?

Is it possible to quantify good? There are several reasons to doubt this: (a) people disagree on what is good (more on that below); (b) good is very hard to quantify for lots of real-world-is-messy normal social science reasons; (c) good is generally a two-edged sword, involving good actors (virtue) and good actions (consequentialist impact).

More specifically, can EA people quantify the good? Well, SBF couldn't even work properly with money, and that's much more easily quantified. And one of the criticisms that actually hit home in that horrible Wired article the other week was the fact that the GiveWell rankings seem to have experienced significant churn. The methodology has not been shown to be sound enough.

2) Are EA people really doing what they say? Or are they virtue signalling?

It's a commonplace that anyone or any group who says "we're doing good" ultimately proves to be... not that. See: the Catholic Church. I think this is a really good heuristic! We are bombarded by cult messages all the time, from various churches, to Trump, to L'Oreal, to boomers, to... all groups of society who believe that they know better and are better than the rest of us. We treat these messages with the utmost suspicion, and rightly so.

When faced with a new group who claim to be doing good in the world, the little Bayesian homunculus in our heads considers two possibilities: (1) these guys have got it right and are telling the truth; (2) these guys are as deluded as all the others, and mostly just scratching an itch for group identity and moral superiority. And (2) wins every time, because based on previous experience, it's 100x more likely. To overcome that Bayesian gradient, EAs will just have to be patient and demonstrate that they are who they say they are, over a period of decades.
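
A minimal sketch (my own illustration, not part of the original comment) of that "Bayesian homunculus" in the odds form of Bayes' rule. The 100:1 prior against the group being genuine comes from the comment; the likelihood ratio is a hypothetical placeholder, chosen to show why a single piece of favorable evidence barely moves the needle and why demonstration "over a period of decades" might be required:

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# The comment's prior: delusion is ~100x more likely than sincerity,
# i.e. odds of 1:100 that a self-described do-gooding group is genuine.
prior = 1 / 100

# Hypothetical likelihood ratio: some piece of evidence (say, a year of
# verified effective donations) that is 10x more probable if the group
# is genuine than if it is deluded.
evidence_strength = 10

odds = posterior_odds(prior, evidence_strength)
print(f"posterior odds (genuine : deluded): {odds:.2f}")          # 0.10
print(f"posterior probability genuine: {odds / (1 + odds):.1%}")  # ~9.1%
```

Even evidence ten times likelier under sincerity leaves the homunculus at roughly 9% confidence that the group is genuine; it would take several independent rounds of such evidence to flip the judgment.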

***

Now to a couple of specifics:

(3) "global poverty, factory-farming, future pandemics" - lots of people genuinely don't believe in those as moral areas. Either god made us rich and poor; or it's not the USA's job to cure global poverty; animal suffering is not a thing; Covid (some bullshit). I personally don't agree with any of those positions, but you must know that there are very large numbers of people who think each of those. And here's the problem: as soon as someone finds one position that EA stands for which they disagree with, that means that EA no longer, in their eyes, stands for what is morally good. And that instantly throws every single other EA calculation into question - even if that person might have agreed with those calculations otherwise.

This is already too long, so I'm going to stop. I personally am pro-EA. But I don't think it's hard to see why commentators would enjoy picking on it. Commentators gonna commentate!

author

re: (1), my claim #11 speaks to this (as well as #6 - 10).

A key philosophical disagreement is how we should react to the following:

> "the GiveWell rankings seem to have experienced significant churn. The methodology has not been shown to be sound enough."

This seems to rest on an assumption that it's better to do nothing than to fallibly try one's best.

That assumption is surely false. (Compare: "scientific consensus changes over time. So science isn't sound enough to be worth listening to at all." Clearly invalid reasoning!) GiveWell isn't, and doesn't claim to be, perfect. But it's some of the best guidance we have. And it's better to follow the best guidance we have over any alternative which constitutes *worse guidance*.

re: (2) "Are EA people really doing what they say? Or are they virtue signalling?"

I don't think this matters to the most important questions. Most of the claims I listed are just first-order claims about good things to do (e.g. give more money to the poor!) that don't depend at all on any judgments about "EA people".

What you're implicitly highlighting here is that many people want to turn debates over EA into a status fight: should we think well or poorly of "EA people"? I obviously have views on that. But I also just think it's objectively very unimportant compared to the first-order issues. (Hence my claims #40 and 41.) Critics would do better to engage with the specific claims I've listed here.

re: (3): Sure, there are some people who just don't think it matters when non-American children die of malaria. But they're not my target audience. None of my academic colleagues would ever confess to such a view, for example--it's too transparently morally bad.

I mean... a lot of true Scotsmen are happy with your answers!

When you say, "they're not my target audience," you're answering your own question. Why do people hate on EA? Because they're not your target audience. I think you're letting yourself/EA off the hook a little too easily with that one. If you don't care what non-target folk think, then why write a long article musing on the problem of why they think stuff?

I hope you don't really disregard the opinions of the great other. I don't think you do, hence the article!

On the other points... I mean, there are arguments to be made on all of them. I suspect you know these arguments! I'll try to briefly list them below, but my point is, there are a lot of arguments in the world. The fact that you/EA have some good ones doesn't mean everyone else is going to immediately agree. Other people have good arguments, too!

OK, here's a little list:

"assumption...better to do nothing than to fallibly try one's best...surely false" Not so surely: communism, nationalism, utopianism of any kind... there's a history of people trying to do their best and famine ensuing.

"scientific consensus changes over time" No, it doesn't. The cutting edge changes over time; the consensus remains stable. If the consensus changed all the time, we wouldn't believe it. There is a real correlation between longevity and credibility. This correlation is probably spurious! But real nonetheless.

"the best guidance we have" But is EA really the best? That's an EA claim. For the rest of the world, very much TBC.

"virtue signalling...I don't think this matters" That's a crazy thing to say. Your article asks specifically about the public perception of and reactions to EA. Of course the actions of EA people inform these perceptions. If you want to write about the theoretical underpinnings of EA, that's fine. But that's not what you're doing in reaction to the comedy stylings of Leif whatsisname in Wired.

"should we think well or poorly of "EA people"?...status fight" Here I think you're missing a feature of a moral movement, as opposed to a philosophical, scientific, or political movement. If EA is right, then the people who practice EA are genuinely better. And we should think well of them. Conversely, if EA people are not better, then there is no reason to take their movement seriously, because they've failed at the one thing they said they could do. You can't retreat to "the theory is right" on this. EA stands or falls on the question, are EA people doing more good in the world than other people?

"just don't think it matters when non-American children die of malaria" That's a contentious reading of my arguments that you don't want to make! Apparently you don't think it matters when non-American children die of malaria, or you wouldn't have bought that [insert last thing you purchased for about $800]. With the possible exception of Peter Singer, we all live in glass houses on that question. But there are lots of theories, on multiple levels (philosophical, sociological, economic) that might suggest sending overseas aid is not a good use of resources. Jumping directly to the "you don't do my theory therefore you don't care" is precisely the sort of thing that makes you look like you're virtue signalling.

I'll just finish by reiterating my basic point: What are you trying to do? Who are you trying to persuade? You don't have to persuade me. I'm basically already a fan. You asked on a blog, "Why do they hate us?" and I'm trying to offer some reasons. There's no point responding to these arguments to me! In the words of the Trisolarians, do not reply! But at the same time, do not dismiss. If you reject a large category of people with "they're not my core audience," those people are likely to reject you, too. And that's not the goal, right?

author
Apr 7 · edited Apr 7

>"just don't think it matters when non-American children die of malaria" That's a contentious reading of my arguments that you don't want to make!

Huh? Read your (3) again. You wrote: "lots of people genuinely don't believe in those as moral areas. Either god made us rich and poor; or it's not the USA's job to cure global poverty..."

That sounds to me like you're describing uneducated nativists with very narrow moral circles. They're not my target audience. Philosophers (academics, students, philosophically-interested general audiences) are. Plenty of people in my target audience are hostile to EA. But not because they think "God made us rich and poor" or because "it's not the USA's job to cure global poverty" (which is what I paraphrased as not caring about non-American children).

> ""you don't do my theory therefore you don't care" is precisely the sort of thing that makes you look like you're virtue signalling."

I'm so baffled by this dialectic. YOU are the one who said "lots of people genuinely don't believe in [i.e. care about] those as moral areas." My response was that my target audience does NOT reject EA for such transparently vicious reasons as it seemed that you were describing. (So, obviously, I do not think that rejecting EA entails not caring about non-Americans!)

Perhaps there has been a miscommunication.

Apr 6 · Liked by Richard Y Chappell

Well put, I agree with just about all of these. And I'm glad that you wrote this up, and that you're writing these posts generally, given how hostile many philosophers (of all people) seem to be (have become?) towards EA.

> If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.

Hmm, I don't think staying quiet to avoid being perceived as arrogant or boastful is necessarily selfish. Arrogance and boastfulness can make other people feel bad -- for example, if someone boasts that they own a large apartment, or donate $5K per month, it could cause the listener to feel inadequate or worthless or resentful. I would suppose that's one reason why we have norms against those things. (Of course, the harm may be smaller than the benefit of the increased likelihood -- if there is one -- that others act more virtuously as a result, or it may be larger; who knows.)

Apr 4 · Liked by Richard Y Chappell

Very good piece, thanks!

I used to read objections from critics of EA some years ago and never found them very compelling. I haven't seen more recent critiques. I think it'd be worth directly engaging critics in conversation. They may raise legitimate concerns, if not with EA in principle, then at least in practice, and one might have some success in persuading them that the less defensible objections don't withstand scrutiny.

author

Right, I intend this post precisely as opening such a conversation :-)

Re Point 27:

Consider the case of two prosperous and highly-intelligent individuals, A and B. A is an effective altruist who donates 75% of his ample income to charities approved by GiveWell. Mindful of the ecological impact of human population growth, moreover, A scrupulously avoids procreation, eventually dying with a clear conscience and zero offspring. B, on the other hand, contributes nothing to charity, either during his lifetime or posthumously, leaving his entire estate to the four surviving children he had by two wives. Which deserves more credit for "making the world better," A or B? Would it make the world better if ALL highly-intelligent people followed the same course as A?

author

Sufficiently misguided (e.g. anti-natalist) scrupulosity could indeed be unfortunate; universal anti-natalism extremely so. But that's compatible with #27, that one shouldn't deliberately or negligently make the world worse.

I wasn't asking about universal anti-natalism but more specifically about anti-natalism among those of high intelligence. Which is where it -- and, more broadly, sub-replacement-level reproduction -- is most prevalent.

What I'm getting at is that the effect of voluntary conduct on the median innate intelligence of subsequent generations is ethically important.

Great summary, thanks for taking the time to write this! ☺️

author

A paywalled link doesn't seem to contribute much to the discussion. But you're very welcome to expand upon your view here: which of my 42 theses do you reject, and why?
