Discussion about this post

Jesse Clifton:

To me, the most plausible justification for assigning higher probability to S1 than S2 is that we ought to have priors that penalize more complex laws. More generally, it seems to me that we should be specifying priors at the level of theories / mechanistic models / etc., from which we then derive our priors about propositions like S1, S2, N-bad, N-good, and “value concordance”, rather than directly consulting our intuitions about the latter.
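
Here's a rough sketch of the direction of derivation I have in mind, with made-up theories and complexity numbers purely for illustration:

```python
# Sketch (illustrative only): derive the prior on a proposition like S1 from a
# complexity-penalizing prior over candidate theories, rather than assigning
# the proposition's prior directly.

theories = {
    # name: (complexity in bits, does the theory entail S1?)
    "T1": (10, True),
    "T2": (14, False),
    "T3": (18, True),
}

# Occam-style prior over theories: weight by 2^(-complexity), then normalize.
weights = {name: 2.0 ** -bits for name, (bits, _) in theories.items()}
total = sum(weights.values())
prior_over_theories = {name: w / total for name, w in weights.items()}

# The prior on S1 is the total prior mass of the theories that entail S1.
p_S1 = sum(
    prior_over_theories[name]
    for name, (_, entails_S1) in theories.items()
    if entails_S1
)
print(f"P(S1) derived from the theory-level prior: {p_S1:.3f}")
```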

So in the case of nuclear war, our priors over the long-run welfare consequences should be derived from our priors over the parameters of mechanistic models that we would use to predict how the world evolves conditional on nuclear war vs no nuclear war. And it seems much less clear that there will be a privileged prior over these parameters and that this prior will favor N-bad. (It seems plausible that the appropriate response would be to have imprecise priors over these parameters, and that this would lead to an indeterminate judgement about the total welfare consequences of nuclear war.)
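
And here's a minimal sketch of the parenthetical point, with hypothetical numbers, showing how a set of admissible priors can leave the sign of the expected welfare difference unsettled:

```python
# Sketch (hypothetical numbers): with an imprecise prior -- a set of admissible
# priors over a model parameter -- the sign of the expected long-run welfare
# difference of nuclear war vs. no nuclear war can come out indeterminate.

# p = prior probability that a near-term nuclear war lowers total long-run
# welfare; stylized welfare difference of -1 if it does, +1 if it does not.
admissible_p = [0.40, 0.45, 0.50, 0.55, 0.60]  # priors we cannot choose among

expected_diffs = [(-1) * p + (+1) * (1 - p) for p in admissible_p]

# If the expectations disagree in sign across the admissible priors, the
# judgement about the total welfare consequences is indeterminate.
has_positive = any(d > 0 for d in expected_diffs)
has_negative = any(d < 0 for d in expected_diffs)
print("Indeterminate" if has_positive and has_negative else "Determinate")
```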

Eric Schwitzgebel:

Thanks for the always helpful and interesting engagement, Richard!

I'd like to clarify the Nuclear War argument a bit. I am claiming that we are clueless about whether a nuclear war in the near future would have overall good vs. bad consequences over a billion-years-plus time frame, continuing at least to the heat death of the universe. I do think a nuclear war would be bad for humanity! The way you summarize my claim, which depends on a certain way of thinking about what is "bad for humanity", makes my view sound more sharply in conflict with common sense than I think it actually is.

Clarifying "N-Bad" as *that* claim, it's not clear to me that denying it is commonsensical or that it should have a high prior.

(I do also make a shorter-term claim about nuclear war: that if we have a nuclear war soon, we might learn an enduring lesson about existential risk that durably convinces us to take such risks seriously, and if this even slightly decreases existential risk, then humanity would be more likely to exist in 10,000 years than it would be without the nuclear war. My claim for this argument is only that it is similar in style to, and as plausible as, other types of longtermist arguments, and that this is grounds for something like epoche (skeptical indifference) regarding arguments of this sort.)
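
To illustrate the arithmetic behind "even slightly decreases existential risk," here is a toy calculation with invented per-century risk numbers; nothing turns on the particular values:

```python
# Sketch (invented numbers): a small reduction in per-century extinction risk,
# compounded over 100 centuries (10,000 years), raises the probability that
# humanity still exists at the end of that period.

baseline_risk_per_century = 0.020   # hypothetical baseline extinction risk
reduced_risk_per_century = 0.019    # slightly lower risk after the "enduring lesson"
centuries = 100                     # 10,000 years

p_survive_baseline = (1 - baseline_risk_per_century) ** centuries
p_survive_reduced = (1 - reduced_risk_per_century) ** centuries

print(f"Survival probability, baseline risk:       {p_survive_baseline:.3f}")
print(f"Survival probability, slightly lower risk: {p_survive_reduced:.3f}")
```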
