EA - Diversification is Underrated by Justis
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversification is Underrated, published by Justis on November 17, 2022 on The Effective Altruism Forum.

Note: This is not an FTX post, and I don't think its content hinges on current events. Also - though this is probably obvious - I'm speaking in a strictly personal capacity.

Formal optimization problems often admit a single solution - there can be multiple optima, but by default there tends to be one optimum for any given problem setup, and the highest expected value move is just to dump everything into that optimum.

As a community, we tend to enjoy framing things as formal optimization problems. This is pretty good! But the thing about formal problem setups is that they encode lots of assumptions, and those assumptions can have several degrees of freedom. Sometimes the assumptions are just plain qualitative, where quantifying them misses the point; the key isn't to just add another order-of-magnitude (or three) variable to express uncertainty. Rather, the key is to adopt a portfolio approach such that you're hitting optima or near-optima under a variety of plausible assumptions, even mutually exclusive ones.

This isn't a new idea. In various guises and on various scales, it's called moral parliament, buckets, cluster thinking, or even just plain hedging. As a community, to our credit, we do a lot of this stuff.

But I think we could do more, and be more confident and happy about it.

Case study: me

I do/have done the following things, which are likely EA-related:

- Every month, I donate 10% of my pre-tax income to the Against Malaria Foundation.
- I also donate $100 a month to Compassion in World Farming, mostly because I feel bad about eating meat.
- In my spare time, I provide editing services to various organizations as a contractor. The content I edit is often informed by a longtermist perspective, and the modal topic is probably AI safety.
- I was once awarded (part of) an LTFF grant (not FTX - the EA Funds one) to edit writeups on current cutting-edge AI safety research and researchers.

Case study from a causes perspective

On a typical longtermist view, my financial donations don't make that much sense - they're morally fine, but it would be dramatically better in expectation to donate toward reducing x-risk.

On a longtermist-skeptical view, the bulk of my editing doesn't accomplish much for altruistic purposes. It's morally fine, but it would be better to polish general outreach communications for the more legible global poverty and health sector.

And depending on how you feel about farmed animals, that smaller piece of the pie could dwarf everything else (even just the $100 a month is plausibly saving more chickens from bad lives than my AMF donations save human lives), or be irrelevant (if you don't care about chicken welfare basically at all).

I much prefer my situation to a more "aligned" one, where all my efforts go in the same single direction. It's totally plausible to me that work being done right now on AI safety makes a really big difference for how well things go in the next couple of decades. It's also plausible to me that none of it matters, either because we're doomed in any case or because our current trajectory is just basically fine. Similarly, it's plausible to me (though I think it unlikely) that I learn that AMF's numbers are super inflated somehow, or that its effectiveness collapsed and nobody bothered to check.
And it's plausible that in 20 years, we will have made sufficient progress in global poverty and health that there no longer exist donation opportunities in the space as high-leverage as there are right now, and so now is a really important time.

So I'm really happy to just do both. I don't have quantitative credences here, though I'm normally a huge fan of those. I just don't think they work that well for the outside view of the portfolio approach - I've ...
