HOISTED FROM THE ARCHIVES: Reading Will MacAskill’s "What We Owe the Future"
Five book prospectuses stapled together: a great one on effective altruism proper, a very good one on making humanity resilient, a good but unconvincing one (because Derek Parfit is wrong) on how Derek Parfit is right, an unsatisfactory grandiose one about our place in human history, and a barking mad one on the threat of the Robot Uprising. But overall, quite good! I enjoyed reading! You should read!…
Back when I started this Substack, it was more of a diary-of-the-day—often a “FOCUS” followed by a “BRIEFLY NOTED”. That turns out to be a mess from the standpoint of finding anything. So as I come across “FOCUS” sections I still like, I am pulling them out and republishing them:
From:
Reading Will MacAskill’s What We Owe the Future:
I was going to write a review of Will MacAskill’s What We Owe the Future <https://www.amazon.com/dp/1541618629>, which I liked a lot.
But it then got submerged in the mishegas surrounding my own launch of Slouching Towards Utopia: The Economic History of the 20th Century <bit.ly/3pP3Krk>. In short, I never got around to it. But last week I noticed the book on my shelves again. So here we are:
Looking back, this turned out to be crankier than I wanted it to be. That is a result of my instinctive reaction to philosophy. Philosophy is really hard! If philosophers could make, with conviction, arguments that were convincing—well, those philosophers who have had that success with their philosophy have by that token become something other than philosophers, and their philosophy has become some other discipline.
First, although it is a good book, it is not a great book. I don’t see it as Ezra Klein does—“a book that will change your sense of how grand the sweep of human history could be, where you fit into it, and how much you could do to change it for the better. It's as simple, and as ambitious, as that”—because it tries to do too much and also too little. The result is that it is very wide in its scope, but not that deep in any of its arguments.
It really feels to me like five 60-page book prospectuses stapled together, with a few linkage pages.
The five prospectuses are:
A Manual on Giving the Future Options: Empowering those who will know more than we do by being politically active, spreading good ideas, having children, earning to give—and remember: significance x persistence x contingency x tractability x neglectedness.
On Resilience: Avoiding human extinction, recovering from civilizational collapse, and heading-off civilizational stagnation.
A Primer on Derek Parfit: Why his take on utilitarianism is right.
Right Now the Future Is Big & Plastic, But That Will Soon Change: Thus we are under a strong moral geas not to say: sufficient unto the day is the evil thereof. What moral-philosophical mistakes we make in our generation will dog the universe as long as humanity survives, or longer.
Artificial General Intelligence the Biggest Menace: Specifically, our task of developing artificial general intelligence without triggering The Robot Uprising is the most important problem facing humanity today, and one that we are not thinking nearly hard enough about.
I loved reading each of these five 60-page prospectus-length arguments—not, mind you, that I agreed with them. (5) strikes me as a weird California religious cult. (4) strikes me as unbelievably grandiose. (3) is pointless, because Derek Parfit is wrong. (2) I find myself in substantial agreement with. And (1) I enthusiastically endorse—and note its inconsistency with (4).
But even so I wound up unsatisfied with the book qua-book, and not because I think differently from MacAskill. I think each of his arguments would be much better if MacAskill had given himself space to spread out. I wish he had written five books, each of 300 pages. I would have greatly enjoyed reading them all.
Perhaps the odds I would wind up agreeing with any of them would have been higher if Will had been able to make his arguments at greater length…
The book that the first prospectus proposes, on enriching the present and empowering the future—I would hand it to people, saying: this is near gospel. I think the only major disagreement I would have with it is with Will MacAskill’s belief that having and raising children is among the very best things for humanity one could do. Perhaps he could convince me if he made the argument at greater length, but I doubt it. I cannot buy his conclusion that the best world is one with many many many more people than the one we are in. This is not to say that I have strong and informed views on what the size of the human population should be. But I do know I gravely doubt anyone who thinks that they know this.
The book that the second prospectus proposes—on how to make human civilization resilient—would get my enthusiastic endorsement and unqualified approbation (save for what I see as an excessive “Robot Uprising” tic).
The book that the third prospectus proposes—on Derek Parfit’s utilitarianism—would, I think, be the best written and the clearest. Will should write it! The world needs an introduction to Derek Parfit! But I would finish it unconvinced, for I find Parfit’s pushing of the boundaries of utilitarianism terribly unconvincing. I read Parfit. I find myself thinking: Sometimes it is just atoms in the void, sometimes it is dancing points of light with transcendental meaning, and it switches from one to the other by rules I do not grok. So I think: If I could switch from the “atoms in the void” lane to the “dancing points of light with transcendental meaning” lane whenever I wanted, I could prove anything! So I am left unsatisfied. For to prove everything is to do nothing useful.
Let me hasten to say that this does not mean that Parfit is a bad philosopher, or has failed to do something he should have done.
As I said at the top: philosophy.
I have, perhaps, bigger problems with the other two prospectuses that, stapled-together, make up What We Owe the Future.
The book that the fourth prospectus proposes—that the future is very important, and because we can have extraordinary influence over it we should prioritize working for humanity in the future over working for humanity today—I simply do not believe. Yes, we should guard against human extinction and civilizational collapse in our day. But people in the future have agency too. To say that right now humanity’s being is uniquely plastic, and thus what we do in our generation shapes the future to the end of time, and that the most important segment of humans alive today shaping the future is the moral philosophers—well, that is such a stunningly grandiose denial of the Principle of Mediocrity that I find myself unable to credit it at all. We are but one link in a very long chain, even if unusual in the pace at which humanity’s wealth is increasing. We are almost surely not the only link that matters.
And as for the book that the fifth prospectus proposes, the “Artificial General Intelligence the Biggest Menace” prospectus—this I do not understand at all. Where does this bizarre flowering of the TechBro mindset come from? Is it from people who had issues with their fathers? Is it a strange mutation of High Calvinist TULIP theology, with Robot put into the place of God? I do not grok it at all.
This has turned out to be crankier than I wanted it to be. So let me close by saying that I found reading What We Owe the Future to be enormous fun, and that if Will MacAskill had spread out and written all five books I would have had much more fun.
References:
DeLong, J. Bradford. 2022. Slouching Towards Utopia: The Economic History of the 20th Century. New York: Basic Books. <bit.ly/3pP3Krk>.
MacAskill, Will. 2022. What We Owe the Future. New York: Basic Books. <https://www.amazon.com/dp/1541618629>.
Well, this was interesting. I learned a concept to use when describing a book—“tried to do both too little & too much”—plus two new (for me) words: geas & TULIP.