10 Comments
Bentham's Bulldog

How I wish I could take that course.

Richard Y Chappell

Doesn't David Manley teach a related course at U. Michigan?

Bentham's Bulldog

He does indeed; I happen to be taking that course! But you can never take too many EA-related courses!

Don Quixote

I've been reading What We Owe the Future as per your recommendation, and I do have a question about it (or at least about the underlying theory of longtermism). The idea presented in the book is that we should perhaps sacrifice some of our convenience today (along with some of our present-day concerns) for the sake of future generations. But wouldn't this hold true for the future generations as well?

Imagine that we had a button that, if pressed, would rob the current generation of all happiness, but would ensure the existence of a larger generation significantly happier than our generation would ever be. Pressing the button seems morally justified, since the loss of happiness for the current generation is far outweighed by the gain in happiness for the next. But imagine that this button is offered to the next generation as well, once they come into existence. From their perspective, it is still best to press the button, since their loss in happiness pales in comparison to what the generation after them would gain. If that is the case, then this reasoning prescribes that each individual generation press the button. But if every generation follows this prescription, nobody will actually experience any happiness, since each generation defers it to the next. And so it looks like following those prescriptions results in a state of affairs that is worse by the longtermist's own lights, which doesn't seem right.

To be clear, the concern here isn't that we will actually have such a button at some point in the future. Rather, it is a theoretical worry: the same sort of totalist utilitarian reasoning that yields longtermist conclusions would also compel each generation to press the button. And that conclusion seems deeply suspicious, so we should be somewhat suspicious of totalist utilitarianism in general.

Richard Y Chappell

This is a perfectly general puzzle about rationality. A single-person version involves "Ever Better Wine" that gets better with age: when could an immortal ever reasonably drink it, if it would be better still tomorrow? The simple answer is that you need to take into account the risk of perpetual deferment, and hence adopt a decision procedure that reduces the risk of attaining ZERO value. (Obviously any decision procedure that results in less value rather than more cannot be an accurate representation of what consequentialism recommends!)

A natural solution is to pick some large but finite number (e.g. 1 googolplex), and commit to stopping the regress at that point. I discuss related issues in this old essay: https://www.philosophyetc.net/2006/12/writing-sample-global-rationality.html

Nicolas Delon

This is super helpful. I'm teaching an upper-level course on Effective Altruism this semester. I hope you don't mind me using this as inspiration for my syllabus.

Richard Y Chappell

Feel free! Pablo Stafforini has collected a bunch of other useful sample syllabi here:

https://www.stafforini.com/blog/courses-on-longtermism/

I'm currently putting together a new grad syllabus specifically on longtermism, and found the two most useful inspirations to be:

(1) Baker & Elga's Princeton syllabus: https://docs.google.com/document/d/152ZZVnMf0T3OW_RyepObfaGaBVneW-VVtq_9VPCvG_Q/edit

(2) MacAskill & Tarsney's Topics in Global Priorities Research:

https://globalprioritiesinstitute.org/topics-in-global-priorities-research/

Benjamin Yeoh

Have you read Larry Temkin's critique? I think it's better than Amia's and good food for thought, although it applies more to critiques of international aid and only a little to longtermism. It might be worth adding his book to the critiques section.

https://www.thendobetter.com/arts/2022/7/24/larry-temkin-transitivity-critiques-of-effective-altruism-international-aid-pluralism-podcast

Richard Y Chappell

I haven't had a chance to read it yet; thanks for the suggestion!

Don Quixote

Sorry if that question (from my previous comment) came a bit out of the blue. I just know you as someone who can articulate utilitarian-adjacent ideas with great clarity and insight, so I really value your responses.
