Wholeheartedly agree with you that "they’re wonderfully admirable principles, and I wish more people found them as inspiring as I do".
I wasn't aware of this Regan idea! I like it, and find it interestingly similar to Functional Decision Theory (https://arxiv.org/abs/1710.05060): what it is rational to do is whatever is such that, if your decision algorithm were to output it in all the places where it's instantiated, the outcome would be best. Your algorithm should cooperate in Prisoner's Dilemmas against your twins and other time slices, and one-box in Newcomb problems where the predictor is using your algorithm to predict (but not in medical Newcomb problems, where your algorithm is downstream of the common cause correlating some behavior with cancer).
It's definitely good enough for this rough characterization, but I think it runs into problems with the fact that there just isn't any precise set of cooperators (or places where your decision algorithm is instantiated). Probably some people will cooperate with some plans and not others, and you should take into account how much good could come of each of those different amounts of cooperation.
Also, it's not always clear that marginal returns diminish. If you put a lot of resources into vaccination against polio, or measles, or smallpox, there's definitely some diminution of return for a while. But at a certain point, you can actually eliminate the relevant virus, and at that point there's a huge jump in returns! It's much better to fully eradicate one of these viruses than it is to do half the effort it takes to eradicate one and half the work to eradicate the other.
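To make the threshold point concrete, here's a toy sketch (entirely made-up numbers, not calibrated to any real eradication campaign): benefits grow roughly linearly with coverage, but full eradication adds a large one-off bonus, since all future cases are prevented too.

```python
# Toy benefit curve (made-up numbers): benefits grow linearly with effort,
# but reaching 100% coverage eliminates the virus and adds a one-off bonus.
def benefit(effort_fraction, annual_value=100, eradication_bonus=1000):
    base = annual_value * effort_fraction            # near-term cases averted
    bonus = eradication_bonus if effort_fraction >= 1.0 else 0
    return base + bonus

# One unit of total effort split across two diseases with the same curve:
print(benefit(1.0) + benefit(0.0))   # 1100: fully eradicate one, ignore the other
print(benefit(0.5) + benefit(0.5))   # 100:  half the effort on each, no threshold crossed
```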
Interestingly, if you have a totally fungible supply of resources, and a bunch of orthogonal causes where returns always diminish such that the amount of resources it takes to do a given amount of good in each cause grows precisely quadratically, then the optimal distribution of resources among those causes *is* in fact to divide the resources in proportion to the marginal good done by applying resources at the current state. (But I assume that Max's case doesn't fit these stipulations.)
Re: increasing returns (e.g. from eradication) - that's what I had in mind in inviting consideration of the "best marginal cost-effectiveness for up to $X", rather than just the best marginal return on the next $1.
As I intended this to be read: if the first (x-1) dollars to Eradication do nothing, but the x-th dollar suddenly has massive benefits, then I take it that the value of donating $x to Eradication on current margins is massive, even though the marginal value of each of the first (x-1) dollars is zero. And that step of the schema invites us to consider the marginal value of every donation size up to $X, and thus will pick up on the huge gains from passing thresholds like eradication.
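To illustrate that reading with a quick sketch (made-up numbers; `value()` just stands in for a cause's benefit curve with a threshold at $x):

```python
# Hypothetical threshold curve: the first x-1 dollars do nothing, and the
# x-th dollar completes eradication and unlocks a massive benefit.
X = 1_000          # total budget under consideration
x = 600            # eradication threshold (made up)
BIG = 1_000_000    # benefit of eradication, in utils (made up)

def value(d):
    """Total good done by donating d dollars to Eradication."""
    return BIG if d >= x else 0

# "Best marginal cost-effectiveness for up to $X": score every donation
# size d <= X by value(d)/d, not just the value of the next $1.
best_d = max(range(1, X + 1), key=lambda d: value(d) / d)
print(best_d, value(best_d) / best_d)   # 600, ~1667 utils per dollar
print(value(1) / 1)                     # 0.0 -- what a next-$1 view would see
```

So the schema picks up the threshold even though each of the first x-1 dollars, taken on its own, does nothing.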
Good! I can see how it means that now, but it's a hard subtlety to convey in text.
I was also confused in the same way as Kenny. I think the reason is that the algorithm you roughly lay out doesn't successfully optimize for what you think it does.
Imagine we have $4 to give and three causes. The first cause provides 5 utils for $3 (nothing for less than that, no more for more than that); the second and third causes each provide 3 utils for an investment of $2 (nothing for less, no more for more). Your algorithm would assess that 5 utils for $3 is the best value of these options, and that after $3 the effectiveness drops below the next option. You spend $3 on that and end up with 5 utils total. But obviously you could have gotten 6 utils if you had just ignored the cause with the best effectiveness.
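Here's a quick sketch of that counterexample in code (reading the schema as "fund the most cost-effective option first", which is my reading of it, not necessarily the intended one):

```python
# All-or-nothing causes from the example above: (cost in $, utils produced).
from itertools import product

BUDGET = 4
causes = [(3, 5), (2, 3), (2, 3)]

def greedy_utils(budget, causes):
    """Fund causes in order of utils-per-dollar until the budget runs out."""
    total = 0
    for cost, utils in sorted(causes, key=lambda cu: cu[1] / cu[0], reverse=True):
        if cost <= budget:
            budget -= cost
            total += utils
    return total

def best_utils(budget, causes):
    """Exhaustively try every subset of causes that fits within the budget."""
    best = 0
    for picks in product([0, 1], repeat=len(causes)):
        cost = sum(c for (c, _), p in zip(causes, picks) if p)
        utils = sum(u for (_, u), p in zip(causes, picks) if p)
        if cost <= budget:
            best = max(best, utils)
    return best

print(greedy_utils(BUDGET, causes))  # 5: takes the $3 cause, leaves $1 unusable
print(best_utils(BUDGET, causes))    # 6: skip the "most cost-effective" cause
```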
I'm pretty sure that even with the simplifications you have in place, the problem you describe is NP-complete (you can translate the items of a knapsack problem into causes of the form in my example), so you aren't going to find an efficient algorithm that does what you want here without even more simplifications to the model (like assuming diminishing marginal returns, though I completely see why you don't want to do that).
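For concreteness, the translation I have in mind is roughly this (a sketch with hypothetical names):

```python
def knapsack_to_causes(items):
    """Each knapsack item (weight, value) becomes an all-or-nothing cause
    that yields `value` utils for exactly `weight` dollars (nothing for
    less, no more for more)."""
    return [{"cost": weight, "utils": value} for weight, value in items]

# The donor's budget plays the role of the knapsack's capacity, so an
# optimal giving plan corresponds exactly to an optimal knapsack packing.
causes = knapsack_to_causes([(3, 5), (2, 3), (2, 3)])
```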
Also, your point (iii) talks about *the* point at which marginal effectiveness drops below a certain level, but without the diminishing-returns assumption there is no reason there shouldn't be two or more such points.
Right, that part is really more of a heuristic than an algorithm — just illustrating the point that "prioritization is temporary", and with enough resources we should expect to do best by funding a fairly wide range of causes. But you're quite right that there are possible cases where, due to various threshold effects, donating to the most "cost-effective" choice would not be part of the optimal plan.
(Basically: if you cannot fruitfully use the last dollar, then spending $3 on Cause 1 is effectively using up all of your $4 of spending power. And 5 utils for $4 of spending potential is not as effective a use of those resources as the 6 utils you'd get from spending the $4 on Causes 2 & 3. This seems fairly commonsensical, but I'm not sure what the best way to precisely re-word the rule to accommodate this point would be.)