5 Comments
Aidan Alexander:

I’ve heard this sub-agent idea described before as a “moral parliament”. Whereas it sounds like you advocate dividing up your resources among the sub-agents, the parliament metaphor might lead you to think about how different factions in the parliament, with different numbers of seats, will negotiate to form a coalition. This seems helpful for thinking about big resource pots (like your career) that you can’t break into little chunks as easily as you can with donations.

Any thoughts on which metaphor is a more appropriate way to model value uncertainty / pluralism, or perhaps you’d recommend different models for different contexts?

Richard Y Chappell:

Yeah, one downside of the "parliament" approach is that it would seem to imply that a majority faction can exert total control over your decisions, whereas I think a more decentralized / market-based approach seems preferable for purposes of pluralistic resource allocation.

There's still room for "moral trade" / bargaining between the different representatives, though. Maybe two would prefer a deal in which they *each* opt for their second choices (if they strongly disprefer the other's first choice), for example.

For big indivisible pots (like career choice): iirc, Harry Lloyd suggests forced divisibility via lottery tickets. But I'd worry about the risk of a low-credibility worldview winning the lottery and exercising outsized control. So I think I would be inclined to prefer the old-fashioned parliament-style approach for those cases! (Or maybe sufficient ex ante bargaining, in advance of the lottery, could yield adequate insurance against bad outcomes? I'll have to think about it more.)

Aidan Alexander:

Perhaps when it comes to big non-divisible pots you can do some division by zooming out even further: e.g., global health and development gets the career (perhaps because of personal fit), but then you cede donations and diet preferences to animal welfare. (My sense is that this specific example is somewhat common.)

Felipe Doria:

Are there philosophically rigorous defenses of worldview diversification over straightforward expected value maximization? Even under uncertainty about a philosophical thesis (e.g., hedonistic utilitarianism, or shrimp sentience), it may still be the case that donating all my altruistic resources to shrimp welfare has the highest expected value. If so, why should I allocate resources proportionally across worldviews rather than simply maximizing expected value under uncertainty?

Richard Y Chappell:

Well, hardly anyone is willing to do expected value maximization, so in practice the choice for most people is between worldview diversification vs something even *more* risk-averse (and lower in expected value).

But even compared to EVM, I see three main reasons one might reasonably prefer worldview diversification (WD):

(1) An appealing aspect of EVM in the face of ordinary uncertainty is that it will tend to produce better results in the long run. Individual gambles may face a high "risk" of failure, but the overall portfolio of independent diversified bets may nonetheless reliably do more good than any alternative strategy.

That nice feature no longer holds when we're dealing with philosophical uncertainty. The issues under dispute are non-contingent, and errors are apt to be highly correlated and persist over time. There are also much weaker grounds for confidence in one's philosophical credences being reasonable or well-calibrated. One could easily be *way* off. All of this makes uncompromising EV maximization in the face of philosophical uncertainty seem *extremely unreliable*. There's little reason to think that following it will lead to actually-better results in the long run. It could instead very easily be an unmitigated disaster.

WD, by contrast, better matches the sort of "portfolio" approach that's recognized as prudent in other high-stakes domains.

(2) The "subagents" metaphor really resonates with me: some forms of "moral uncertainty" feel less like *uncertainty* and more like *internal conflict*. It thus seems appropriate to give each representative a voice (bargaining with the others where mutually agreeable), rather than allowing any one to *completely swamp* all the others simply in virtue of claiming higher stakes.

(3) If we're overall guided by the methodology of reflective equilibrium, it seems like a WD/subagents/bargaining approach will yield much more intuitively plausible verdicts than strict EVM. It also seems theoretically "cheaper", in that one doesn't need to solve the problem of inter-theoretic value comparisons. (If Nietzschean perfectionism is true, how does the value of Greatness compare to the value that hedonistic views attribute to pleasure?)

I'm not sufficiently familiar with the literature on the topic to know much about what others regard as the strongest reasons here. But you might look more into Harry Lloyd's work on bargaining if you're interested!