Intelligence Scarcity is the best problem we can solve.
(Originally published here.)
A starting point for thinking about how to ensure the best possible future for all people -
In general, people would be better off with fewer problems.
The vast majority of problems come down to scarcity of one or more resources [a].
Ultimately, the only resources that are actually scarce are energy and intelligence - we will eliminate scarcity for everything that matters when we have (i) sufficient energy to create any other resource, and (ii) sufficient intelligence to determine what, where, and when to create and deliver [b].
Energy scarcity is on its way to being solved. We are headed toward an energy-abundant future, not an energy-constrained one. However, abundant energy alone is not enough - even if it is clean energy [c].
Therefore, not only is Intelligence Scarcity the most promising problem worth solving, it is also critical that we solve it [d].
<Domain X> is important for <reasons Y> and has substantial capacity for improvements with AI due to <factors Z>.
Therefore, let's create an AI for <X>.
Start building an AI for your <X>!
— — —
[a] Limiting ourselves to the vast set of problems caused by resource scarcity, there are two sub-problems: 1) distribution, which can itself be solved with sufficient energy and intelligence, and 2) non-abundance of the thing we'd like to distribute. However, we can safely assume that abundant clean energy, autonomous vehicles, robotic warehouses, etc. will asymptotically decrease distribution costs. So making sufficient quantities of (better) things with fewer resources is perhaps the more fruitful area to focus on now.
Whether we will agree to fairly distribute non-scarce resources under near-zero distribution costs remains to be seen, but it does seem like we are inherently ethical creatures.
[b] There are, IMO, at least two kinds of intelligence scarcity - 1) scarcity of sufficiently trained and/or sufficiently motivated humans for some tasks, and 2) unsuitability of human intelligence for many important tasks because of their complexity and/or pace.
There's also the question of where human intelligence and labor should be used. For example, most people would agree that it would be OK for their grandchildren to go <pick fruit> for fun, but it would be terrible if they had to <pick fruit> all their lives to ensure survival.
[c] It can easily be argued that energy abundance, without intelligence abundance, will almost certainly crash the planetary ecosystem faster.
[d] Yes, there are numerous problems that should be solved urgently, but thinking on a multi-decadal and civilizational scale, AI stands out as a sui generis opportunity.
— — —
A few good reads:
(2000) The Age of Spiritual Machines: Ray Kurzweil’s landmark imagination of scenarios from an accelerating future.
(2014) Superintelligence: Paths, Dangers, Strategies: Nick Bostrom focuses more on the dangers than the positives, but definitely worth reading. (Here’s a long-read article in The New Yorker on Bostrom and this book.)
(1966) The Moon Is a Harsh Mistress: Quite possibly Robert Heinlein’s best. An accidental AI plays a crucial role in the political struggle for Moon’s independence from Earth.
(1989) The Player of Games: An excellent entry point to Iain M. Banks's Culture series of novels set in a post-scarcity, autarchic, galactic civilization with human-machine cooperation.
The potential for eliminating scarcity with AI has, of course, been identified numerous times, including in the recent past by Kevin Kelly (here) and Peter Diamandis (here and here), and touched upon by Kevin Drum (here), to name just a few.
There are many more books and articles on this topic that are worth reading. Please leave your recommendations (and thoughts on the argument above) in the comments. And if you've seen essentially this first-principles argument made somewhere else, I'd especially like to hear about it!
— — —
Many thanks to Girish Chowdhary, Uma Soman, Ajit Datar, Yuvraaj Kelkar, Allan Axelrod, Satlaj Dighe, and Keyur Karambelkar for beta-testing the article and giving feedback to help refine the central argument here (though I probably did not take a fair bit of the feedback as seriously as I should have). All fallacies in the argument are my own responsibility, of course.