Over the last few months I read Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. It’s a very influential book about superintelligent AI (meaning AI that is far smarter than the most gifted human) and the dangers it poses to us.
One thing the book made clear to me is how unimaginably superior to us a superintelligent AI would be, and how large an impact it would have on everyday life.
Let’s assume a human scientist makes 10 major breakthroughs over a 40-year career (optimistic!), and that we’re able to build an AI that is exactly as intelligent as this scientist while running on a MacBook Pro. Then, just by running that AI on a (slow) supercomputer that’s only 100 times faster than your laptop, the AI would make 1,000 major breakthroughs over the same 40 years. The progress that could be made just by reaching human-level intelligence is staggering. Humans need sleep, time off, and much else, while the AI could in theory work 24/7 without rest. We could also run multiple copies of the AI in parallel and let them collaborate.
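To make the arithmetic concrete, here’s a back-of-envelope sketch in Python. Every number in it (10 breakthroughs, 40 years, the 100x speedup) comes straight from the assumptions above; it’s an illustration, not a model of any real system.

```python
# Back-of-envelope sketch of the paragraph above. All numbers are the
# article's assumptions, not measurements of any real system.
human_breakthroughs = 10   # major breakthroughs in one 40-year career
career_years = 40
speedup = 100              # supercomputer vs. the laptop the AI runs on

ai_breakthroughs = human_breakthroughs * speedup
print(f"{ai_breakthroughs} breakthroughs in {career_years} years")
# -> 1000 breakthroughs in 40 years -- and that's before counting 24/7
#    operation or many copies collaborating in parallel.
```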
And this is just human-level AI.
Now, note that if we can somehow build an AI that’s qualitatively more intelligent than any human, it follows that the AI should, in turn, be able to build another AI that’s qualitatively more intelligent than itself: building intelligent systems is itself an intellectual task, so a smarter mind should be better at it. The logical conclusion of this process is an exponential explosion in qualitative intelligence that we can’t even begin to imagine. If the process is allowed to run free, it leads to an AI next to which we’re just apes, or possibly something even lesser. There’s no real way to imagine what such a resource could help humanity with. All our hardest problems might be high-school mathematics to it.
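There’s nothing rigorous about the growth rate here, but a toy numerical sketch (my illustration, not Bostrom’s; the 1.5x-per-generation factor is an arbitrary assumption) shows why even modest per-generation gains compound into something explosive:

```python
# Toy model of recursive self-improvement -- purely illustrative, and the
# improvement factor is an assumption, not something anyone has measured.
def capability_after(generations: int, factor: float = 1.5) -> float:
    """Capability after each generation builds a successor `factor` times
    more capable than itself, starting from human level (1.0)."""
    return factor ** generations

for n in (1, 10, 50):
    print(f"generation {n:2d}: {capability_after(n):.3g}x human level")
# With a modest factor of 1.5, generation 50 is already ~6.4e8 times the
# starting point: the "explosion" in the text is just this geometric growth.
```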
There has been a lot of progress in AI recently, with models like GPT-3 and DALL-E 2 doing things that were considered impossible for computers just a few years ago. It’s exciting to think about where this will lead us, even though there are dangers we don’t yet know how to avoid.
If you liked this post, subscribe to the newsletter to get every post I write in your inbox.
Certainly an important line of thought. Have you thought about the more concrete things that we should do today in preparation for this future? Are you leaning towards the effective-altruism-related "we should work on AI alignment" path or is that view overhyped?