I suspect that tech billionaires didn’t want to become heroes simply by saving individual lives. That is what firefighters do; that is what Spider-Man does. Billionaires wanted to be heroes who save the entire human race. That’s what Iron Man does. SBF was never really interested in mosquito nets. His whole commitment was to a philosophy of creating maximum value for the universe, until its end.
So this also became the goal of the philosophers. They began to push a philosophy, longtermism, which holds that the moral value of unborn generations is equal to that of the living. Indeed, unborn generations could be worth far more than we are today, given population growth. What is the point of deworming a few hundred children in Tanzania when you could spend that money on research into space travel and help millions of unborn souls escape Earth and live happily among the stars?
Conveniently for billionaires and philosophers alike, the long term eliminates any risk of being caught making a mistake. No more hassles like bad press for the charities you recommended. From now on, they could boast of guarding the galaxy without fear of reprisal from the people of the 22nd century.
Some intelligent young EAs dared to publicly oppose this primitive and dangerous longtermist reasoning. High-ranking figures in the movement tried to suppress their criticism, arguing that any attack on EA’s central figures could threaten future funding. It was in this endgame of EA that things got really strange.
What is good in longtermism is not new, and what is new in it is not good. Sages from Gautama Buddha to Jeremy Bentham have encouraged us to strive for the well-being of all sentient creatures. What longtermists add is the party trick of assigning probabilities to everything. Ord, in his longtermist book, estimates the risk that humanity will suffer an existential catastrophe this century at just under 17%.
What MacAskill adds is, once again, an amusing lack of humility. In his own longtermist book, he explains how we must “structure global society” and “safeguard civilization.” When it comes to preventing nuclear war, fighting deadly pathogens, and choosing the values that will guide humanity’s future, MacAskill claims to know what the world should do.
In other words, we are still in the land of precise-sounding guesses based on weak evidence, except that now the stakes are higher and the numbers are remote probabilities. Longtermism reveals that the EA method is really a way of maximizing the appearance of intelligence while minimizing expertise and accountability. Even if the thing you gave a 57% chance of happening never happens, you can still claim you were right. These expected-value pronouncements therefore meet the most philosophically rigorous definition of bullshit.
Furthermore, if applying expected-value thinking to aid is dubious, applying it to the distant future becomes downright wicked. When you truly believe that your job is to save hundreds of billions of future lives, you are rationally obligated to sacrifice countless lives today to do so. Remember the SBF bet: a longtermist must keep risking the extinction of humanity on a double-or-nothing coin toss, over and over again. MacAskill sometimes tries to escape this murderous logic by explaining why violating rights is almost never the best way to create a better world. In his longtermist book, he makes this point in a few pale paragraphs. But you can’t disown your party trick just as it is about to break the punch bowl.
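To make the arithmetic behind that bet concrete, here is a minimal worked illustration. The 51/49 odds are an assumption for the sake of the example (the text specifies only a double-or-nothing toss); the point is that any odds even slightly better than even make each round look rational in expectation while making eventual ruin certain.

E[one round, for a world worth V] = 0.51 × 2V + 0.49 × 0 = 1.02V > V, so the expected-value maximizer accepts.
P[the world survives n rounds] = 0.51^n, which falls toward zero as n grows, even as the expected value climbs to (1.02)^n × V.

Maximizing expected value round after round therefore makes extinction all but certain.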