Is a world where superintelligence exists fun?

I recently started thinking about superintelligence, and I’m not sure if I want us to build it. I’m still processing my thoughts, but it’s a feeling that started growing as I thought about various future scenarios where superintelligence exists.

First, I will explain what superintelligence is and why I think it’s possible to build it. Then I will share my worries, and when you are done reading this post, you will hopefully reach out and tell me why I shouldn’t be concerned.

What are information, memory, computation, and intelligence?

To understand superintelligence, let’s first agree on some definition of intelligence. This involves discussing information, memory, and computation.

We know that from a physics perspective, everything is just energy and matter moving around.

Information is matter organized in a way that means something to us. Take a world map, for example; it shows the shapes of land on Earth. If these shapes were different, the particles making up the map would be arranged differently too.

Memory is information encoded in some long-lived form, so we can recall it later. Encoding means setting a common way to represent information so that everyone involved understands it the same way. For instance, we created language to communicate with others about the things we all see and experience. Language encodes physical things like trees and tigers, and imaginary things like money and laws.

Imagine your ancestor 200,000 years ago discovering a juicy berry tree nearby. They would want to share this information with their friends. Maybe they kept the information in their brain, passing it down through generations; or perhaps they drew it in a cave. Around 2,000 years ago they might have written it on papyrus. Today, they could email it, save it on a USB stick, or send a text with a photo of the berry tree with geolocation metadata attached. Across all these different ways of storing information, the core message is the same: “there is a damn juicy berry tree near valley #2”. More importantly, the information itself is independent of the physical entity where it’s stored – whether it’s a human brain, a cave wall, a papyrus, or a hard drive in a data center. Scientifically, we say that information is independent of its substrate, which is a fancy word for the object storing the information.
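As a tiny illustration of that substrate independence (a toy Python sketch of my own, not something from the book), the same berry-tree message below lives in three different substrates: a string in RAM, raw bytes, and a file on disk. In every case, exactly the same information comes back out.

```python
import tempfile

message = "there is a damn juicy berry tree near valley #2"

# Substrate 1: a Python string sitting in RAM.
in_memory = message

# Substrate 2: the same message encoded as raw UTF-8 bytes
# (this is what would travel over a network or sit on a USB stick).
as_bytes = message.encode("utf-8")

# Substrate 3: the same message written to a file on disk.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
    f.write(message)
    path = f.name

with open(path) as f:
    from_disk = f.read()

# The physical storage differs completely, but the information is identical.
assert in_memory == as_bytes.decode("utf-8") == from_disk
print("Same information, three different substrates.")
```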

Computation is the process of transforming states of matter. Since information is meaningfully arranged matter, computation is calculating a new output state from given input states. When you take two numbers, 3 and 5, and add them together, you get the sum, 8. You process two input pieces of information through a function to get the output. Think of a function as the blueprint or recipe for a computation – it spells out, step by step, the actions to be performed. The physical medium doing the computation doesn’t matter. We don’t care whether 8 is computed in our brain, on a calculator, an abacus, or a smartphone. The important thing is that the computation is the same in all cases, and given the same inputs you will get the same new state. So computation is also substrate independent.
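As another toy illustration (again my own sketch): here are three different recipes in Python that all compute 3 + 5. The steps are completely different, yet the same inputs lead to the same output state, and you could just as well carry out any of these recipes on paper or an abacus.

```python
def add_builtin(a: int, b: int) -> int:
    # Recipe 1: let the processor's arithmetic circuitry do the work.
    return a + b

def add_by_counting(a: int, b: int) -> int:
    # Recipe 2: count up from a, one step at a time, b times
    # (roughly what you do with fingers or abacus beads).
    result = a
    for _ in range(b):
        result += 1
    return result

def add_bitwise(a: int, b: int) -> int:
    # Recipe 3: imitate a hardware adder: XOR produces the sum bits,
    # AND produces the carries; repeat until no carries remain.
    while b:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return a

# Different recipes, same input state (3, 5), same output state (8).
assert add_builtin(3, 5) == add_by_counting(3, 5) == add_bitwise(3, 5) == 8
```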

We can extend this definition of computation – using functions to transform input states to a desired output state – to more complex situations, such as deciding what to study after finishing school. Your life’s experiences and the surrounding environment are input states processed through a decision-making function – which you try to convince yourself is based on logic while in reality you just decide based on vibe – and you end up with your chosen field of study as output. The process you used to decide is the “function”.

All the above finally leads us to intelligence. Experts don’t agree on a single definition, and the more nuanced a definition gets, the more nitpicky people become about it. Here I will use Max Tegmark’s definition: Intelligence is the ability to reach complex goals. This ties in with our definition of computation, since using a function to turn input states into a desired output state is essentially the same as achieving a goal.

This is also what machine learning models do. Their goal is to learn a function that transforms inputs to the expected output state. The hope is that, after training, these models can find the right outputs for new inputs they haven’t “seen” before – this is what people in the field call “the model generalizes to new inputs”. Think of machine learning as creating funky calculators. Instead of feeding them data and a function to calculate the outcome, you feed them both inputs and outputs and ask them to work out the function.
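Here is a minimal sketch of that “funky calculator” idea in plain Python (my own toy example, nowhere near what real models look like): we never tell the program the hidden rule y = 3x + 2, we only show it input/output pairs, let gradient descent search for a function that fits them, and then try it on an input it has never seen.

```python
# Training examples produced by a hidden rule (here, y = 3x + 2).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0, 5.0, 8.0, 11.0, 14.0]

# Our model is a guess of the form y = w * x + b; w and b start out wrong.
w, b = 0.0, 0.0
learning_rate = 0.01

for _ in range(5000):
    # Measure how wrong the current guess is (gradients of the squared error)...
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # ...and nudge w and b in the direction that makes it less wrong.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned function: y = {w:.2f} * x + {b:.2f}")        # close to y = 3x + 2
print(f"prediction for unseen input 10: {w * 10 + b:.2f}")   # close to 32
```

Real machine learning models do the same thing with millions or billions of adjustable numbers instead of two, which is what lets them learn far more complicated functions.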

If intelligence is using long-lived information (memory) and processing it through functions (computation) in order to achieve a goal, and all these are independent of a physical medium, then intelligence itself is substrate independent. Going back to the physics perspective we started with, intelligence is using energy to move matter around, but in a smart way and for a purpose. Why do I fuss so much about substrate independence, and about intelligence being just goal-oriented action? Because it leads to the key takeaways of this very long introduction:

  1. Intelligence doesn’t need a physical body to exist.
  2. Intelligence is not exclusive to humans or living beings, which means artificial intelligence is possible.

But why would machine intelligence be smarter or better than a human?

Biological vs machine intelligence

We all intuitively agree that humans are intelligent without really knowing why we think that. Being able to learn a wide range of skills after birth is one thing that makes humans so remarkable in the field of intelligence. Just by experiencing something, and then practicing, humans can achieve goals that they previously couldn’t. Other animals can do this to some extent, but humans can learn an incredible range of new skills and abilities.

In contrast, most AI models today “learn” only during their training phase, and will not learn new things after that no matter how many new experiences they are exposed to. This is why ChatGPT doesn’t become smarter and doesn’t learn anything new despite the fact that millions of people talk to it every day. In fact, one of the greatest challenges in AI today is writing learning algorithms that more closely resemble human learning 1.

Humans are strong in some forms of computation, and weak in others. We are great at things like locomotion, hand-eye coordination, and social interaction. We are pretty bad at arithmetic and memorization, areas where machines surpassed us decades ago. Generally, our strengths depend on our biologically limited human hardware, which evolved to serve the specific needs of our survival.

Computers on the other hand, are universal computing machines. Their potential is bounded only by the hardware and the algorithms we build them with. Year by year, computers become good at things they were previously bad at, surpassing human performance. It’s hard to argue that there’s a fundamental reason why they can’t become better than humans in everything given enough time.

Take human memory for example. The synapses in my brain that store all my knowledge, memories, and skills can hold about 100 terabytes of information. That’s all I have. Despite this storage limit, the coolest humans continue learning throughout their entire lifetime, which suggests that the brain has mechanisms to make the encoding of information efficient over time, effectively “compressing” information or re-using past synapses to “save space”. Still, the limited number of synapses puts a ceiling on the amount of information that can be stored. On a machine, we can always add more memory and more computational power.

How about computing speed? Human thinking happens with neurons firing in the brain, and computer thinking happens with on-off cycles of electrical current. The firing rate of human neurons varies between 1 and 1,000 Hz (times per second), while computers today run clock cycles in the GHz range, around a million times faster than even the fastest-firing neurons.
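A crude back-of-the-envelope version of that ratio (crude because a single clock cycle and a single neuron firing are not doing the same amount of “work”):

$$\frac{\sim 10^{9} \text{ cycles per second (a 1 GHz clock)}}{\sim 10^{3} \text{ firings per second (a fast neuron)}} \approx 10^{6}$$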

Such numerical comparisons are obviously not accurate, and I could write an extensive and nuanced contrast of biological and artificial hardware, making this post even longer, but that would just be a roundabout way of getting to my point: artificial hardware can be purpose-designed and enhanced into whatever we need it to be, while biological hardware is limited to what it is.

So, if we can keep making computers stronger, how much better can they get compared to what they are now? The physical limit of computation speed 2 is 33 orders of magnitude ($10^{33}$, or one decillion) beyond the state of the art in 2017. That’s 1,000,000,000,000,000,000,000,000,000,000,000 times faster. This is a big number with a lot of zeroes. To compare, Earth is “only” around $4.5 \times 10^9$ (4.5 billion) years old, and to the best of our knowledge the Universe itself is “only” around $14 \times 10^{9}$ years old.

Just building beefier computers that come closer to this limit will probably result in machines unimaginably smarter than the ones we have today, but it is much more reasonable to assume that we will also achieve significant advances in the algorithms and software we use, as has been the case throughout history for every technology, and especially for computers.

Superintelligence is possible from a physics POV

Most people use the term “superintelligence” to describe a kind of intelligence that’s much greater than human intelligence. We defined intelligence as the ability to reach complex goals, which can be done with memory and computation: storing and manipulating information.

Human intelligence is limited by our biological design, and machine intelligence is limited only by physics. As I mentioned earlier, the limits of computation are still extremely far from what we are currently able to juice out of the best computers on the planet. Also, the current algorithms we use to create AI are rather primitive compared to the way human brains work, and they still work impressively well. Considering our history of improving technology, it’s sensible to believe that we will develop better algorithms and build more powerful computers. This will naturally result in better AIs.

Once we figure out how to create algorithms that let an AI learn and improve on its own, one can easily imagine a scenario where that AI enters a feedback loop in which it continuously creates better versions of itself at a rapid pace, resulting in superintelligence 3. People have come up with different names for this scenario, such as the “singularity” and the “intelligence explosion”.
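To get a feel for why the feedback loop matters, here is a deliberately simplistic toy model in Python (all numbers are made up; this is a caricature of the argument, not a claim about how real systems improve). It contrasts steady, externally driven improvement with improvement that compounds because each version builds the next.

```python
# Toy comparison: steady human-driven progress vs. a self-improvement loop.
# The numbers (0.5 per cycle, 10% per version) are arbitrary illustrations.

human_built = 1.0      # capability improves by a fixed amount each cycle
self_improving = 1.0   # capability grows in proportion to itself each cycle

for cycle in range(1, 101):
    human_built += 0.5           # linear: people keep adding improvements
    self_improving *= 1.10       # compounding: each version builds a better one
    if cycle in (10, 50, 100):
        print(f"cycle {cycle:3d}: human-built {human_built:6.1f}x, "
              f"self-improving {self_improving:10.1f}x")
```

The linear curve ends up around 51x after a hundred cycles, while the compounding one ends up at roughly 13,800x, and that gap only widens from there.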

Superintelligence is possible from a physics standpoint, and there are ways it could practically happen in reality as well. I’m not saying we will definitely make this happen in our lifetimes, or at all. I don’t know that. All I am saying is that it’s possible for this thing to exist. A lot of extremely smart people think that it’s not only possible, but very likely to happen in the next few decades. Other extremely smart people think this is all crap. To me, this disagreement among experts just means that we don’t know, which is a good enough reason for me to spend a good chunk of brain cycles overthinking the implications.

Since it’s possible, and we saw a way it could happen – a rapid self-improvement feedback loop – how likely it is to happen within our lifetime comes down to personal opinion. Which brings us to the culprit of my brain’s anxiety during the past weeks.

Is a world where superintelligence exists fun?

My main issue with any form of hypothetical superintelligent being is that I cannot comprehend or imagine it any more than an ant comprehends me. The same way you find it hard to have an intellectually stimulating conversation with your cat, a super-smart being would have a tough time connecting with you. In general, when two creatures think at dramatically different speeds and have an extreme gap in their abilities, meaningful communication as equals is hard.

I cannot think of a scenario where an incredibly smarter being comes into contact with humans and it ends well for us. To be fair, the idea of contact with superintelligent beings has captivated humans for centuries, which is also why there are so many theories about Pharaohs being aliens passing down their technology to the human race, or about us living in the Matrix. It didn’t take me long to discover that a bunch of people have thought about what an AI aftermath might look like, and there is nothing there that looks like fun to me.

In some scenarios the lines between physical and virtual reality blur, as well as the lines between human and machine. In others, AI eliminates humanity, either by misinterpreting the goals it was programmed with, or simply because it can and decides to. There are “zookeeper” scenarios where humans are kept well fed and reasonably comfortable, but essentially caged like animals in a zoo, and “protector god” scenarios where AI leaves us to our own devices but makes sure we are safe: preventing us from creating new superintelligent AIs or blowing ourselves up, and keeping aliens from destroying us. There are utopia scenarios where AI optimizes for human happiness – which can be very creepy, as Brave New World aptly illustrates – or where the AI optimizes for human meaning and acts as a global enforcer of peace. In all these scenarios, AI is in control, simply because it’s so much more powerful than us. In the “enslaved god” scenario, humans keep AI under their control and use it to create wonderfully great or evil things. However, there is the permanent risk of the AI breaking out of its confinement, at which point the scenario transforms into any of the other ones.

The main reason people want to create super AI is that it will enable rapid innovation. Having the ability to manipulate matter in an optimal way towards any goal could give us unlimited energy, eradicate sickness and poverty, and vastly increase the quality of life of every human on Earth. There is no limit to what can be achieved apart from the limits imposed by the laws of physics.

Perhaps more importantly, AI can help us solve problems that might result in our extinction, such as thermonuclear war, resource depletion, population collapse, or an asteroid impact. It can help us become multi-planetary and use the resources on Earth much better. What confuses me is that we are trying to prevent extinction by creating an entity that is capable of causing our extinction, enslavement, or both. It reminds me of how people thought about World War I, the war to end all wars, which didn’t go that well.

At the same time, I’m very optimistic about AI improving every single area of our lives, and in general I am pro-technology. I am eagerly waiting for technology that will finally allow me to have my own Jarvis, like Tony Stark’s AI companion in Iron Man. I look forward to finally using computers that do work for me instead of me working on them, and to the ability to get trivial things done or look up information without breaking my flow state. And these are just my little selfish desires. The possibilities are endless and incredible. Most of all, I don’t want us to slow down or stop AI development, because the potential benefits are humongous and there is currently no evidence that we are anywhere near achieving self-improving intelligence anyway. We can stop when we get closer to it. For all we know, we might be making incredibly wrong projections, similar to flying cars, and we will never create superintelligent entities.

My current thinking is that we’ll be much better off creating intelligence that automates the boring stuff and helps us with discovery and progress, while leaving ourselves enough unsolved problems, undiscovered science, and unconquered frontiers for life to have meaning. After all, “life loses much of its point if we are fated to spend it staring stupidly at ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand” (Hans Moravec, Mind Children, 1988). There’s no point in trying to create anything if you can get it from an all-powerful AI by simply asking.

Note: Most of what I write about here is a mix of notes I took while reading Max Tegmark’s Life 3.0 and personal thoughts. I haven’t finished reading the book yet, but when I do, I will share my book notes.

  1. The “holy grail” according to many is neural networks with online and local learning. It would make the post too long if I added such details, but look it up; it’s a fascinating area of research. 

  2. Seth Lloyd proved in a paper in 2000 that the speed with which a physical device can process information is limited by its energy. Performing an elementary logical operation in time $T$ requires an average energy of $E = h / (4T)$, where $h$ is Planck’s constant. While the theoretical limit might be very hard to reach, Lloyd believes that the practical limits are not that far from the theoretical ones. It’s a very cool paper: https://arxiv.org/pdf/quant-ph/9908043 
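
     As a quick worked example using that bound (my own arithmetic, not a figure from the paper): rearranging gives at most $1/T = 4E/h$ elementary operations per second per joule of available energy, so with $E = 1\,\text{J}$ and $h \approx 6.63 \times 10^{-34}\,\text{J·s}$ a device could in principle perform on the order of $6 \times 10^{33}$ operations per second.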

  3. The scenario of an AI entering a feedback loop of rapid improvement is called recursive self-improvement in AI literature.