I think I recognize some ideas about moral uncertainty and bargaining from Harry Lloyd's dissertation! (Whoops, I didn't notice the footnote; I probably wouldn't have mentioned it otherwise, but I was pleased to see it either way. I was on his committee and thought it was an excellent dissertation.)
While I certainly hope I wouldn't have supported the Nazis had I been a German in the 1930s, I do think there's a good chance I would have been a loyalist had I been an American in the 1770s. (To be clear, I think that would have been a mistake.)
I'm basically sure I wouldn't have seen anything wrong with slavery had I been an ancient Athenian. (Even if I'd been a slave, I think I'd probably have wished that I were free and had slaves myself, rather than wishing for a totally different social order in which there were no slaves.)
What I'm trying to get at is that it's tricky to evaluate your current reasoning heuristics by thinking about how they'd have served in various historical settings, since it strikes me as pretty likely that any realistic set of reasoning heuristics will have *some* serious failure modes.
Yeah, I agree that realistic heuristics will surely fail in some circumstances. Still, it seems worth giving some thought to what sorts of dispositions can help us to better guard against a broad range of (realistic) failure modes.
Re: Harry: yes, it's good stuff! (Other readers are encouraged to follow the link in footnote 3 to learn more...)
I was just re-reading this paper (https://philarchive.org/rec/EASDAI-3) that Reuben Stern and I wrote about diachronic and interpersonal coherence, and I think we end up with some similar suggestions, despite starting from a totally different-seeming project about epistemic rationality.
The idea in that abstract strikes me as extremely attractive, though I wonder what you'd think about the following slight tweak. I sort of like the idea that the conditions you say are required for there to be normative pressure for different time slices of a person to be coherent are maybe part of what's involved in their being time slices of a single person in the first place. E.g., part of why we say Jekyll and Hyde are two people sharing one body (rather than one person who undergoes some serious mood swings) is that those conditions aren't in place.
Yes, though I might put it as "what it is to be one agent" rather than "what it is to be one person". (And also, as we discuss in the paper, the relations here come in degrees, and when they don't hold at full force, they aren't fully transitive, which can make it hard to count persons or agents if we're too literal about this.)
This is great. I think sometimes philosophers are prone to thinking "first we deduce exactly what the right thing to do is in a particular situation, and then we act." But that's not how action in difficult situations works in practice. Far more often, what we need to do is "act wisely in the face of uncertainty", as you put it, which is a much trickier and less exact practice. These strike me as wise guidelines for doing so.
Whenever you hear an argument for doing X, and you can’t immediately refute it, you must hold X as uncertain and enquire further if you deem it worthwhile.
Don't be afraid to say "I'm not sure."
It's fine to be unsure. But I also think there are plenty of cases where you can reasonably be *skeptical* of an argument, and expect that it is probably flawed somehow, even if you haven't yet identified precisely where the flaw lies.
And importantly, whether you're unsure or skeptical or both, sometimes you still need to act anyway – recognizing that in this sort of context, inaction is effectively its own sort of action.
Skepticism always seems "reasonable"; skepticism about heliocentrism, or germ theory, or plate tectonics seemed reasonable *once*. And then ...
The lesson must be that "reasonable skepticism" is still uncertainty.