The I in AI: What is Intelligence?
How will intelligence facilitate the next phase of humanity's growth?
Intelligence is an incredibly difficult quantity to get your head around. It seems like something fundamental. To do justice to the topic, we should talk about complexity and emergence, about brain science and genetics, about computer science and epistemology. These are fields I have only a loose grasp of, so read up on them elsewhere. (You could do worse than starting with one of David Deutsch’s books.)
I prefer to think about intelligence from the ground up, so let us start with energy. Energy is most definitely something fundamental in the universe. The laws of entropy and conservation govern our understanding of the natural world. Intelligence is not possible without energy - nothing is.
Technologist Kevin Kelly says that if you’re about to say something “goes without saying”, it’s better for everyone if you go ahead and say it. So let us define things quickly. Energy is activity: it describes how matter moves. Energy is the shaking and shimmering of the universe. Particles move in certain ways - this actual and potential movement is energy.
Energy and Intelligence
So how does intelligence relate to energy? I’ve started to think of intelligence as the direction of energy (and therefore of matter) - I’m tempted to call it meta-energy. Intelligence is a set of algorithms - instantiated in physical form - that harnesses energy to achieve some goal. The goals of intelligence can range from “proliferate cellular material” to “conquer the world”. Intelligence only makes sense in context, with evolution - natural and cultural. If some transformation occurs without a form of life involved at some stage, I’d consider that a physical process, not necessarily intelligence. And life, as we think about it, has certain key properties: cells, DNA, energy from ATP.
All life has intelligence; life is intelligence - cells are little computers, organising according to genetic code and replicating with microscopic precision. We have first-hand experience of biological intelligence. The other type of intelligence we call artificial, and by this we mean a computer-coded intelligence. This kind of intelligence is instruction written in software, directing electrons in silicon, which then links to the physical world through monitors or robotics.
The key thing is that intelligence is stored instruction. Intelligence is substrate independent: it does not matter whether its algorithms are stored in brain matter or on silicon motherboards. In human brains, intelligence primarily takes form in neural patterns. In computers, we crystallise instruction in code. Our human intelligence did not evolve with the goal of optimal intelligence - our brains evolved to aid our survival and reproduction. But it turns out greater intelligence greatly aids biological goals. We developed a general intelligence - the only one we know about - that can think in abstractions, reason from first principles, and process remarkably varied problem contexts.
Intelligence is layers and layers of algorithms and interactions and feedback. Complex systems like animals have many levels of intelligence. Our cells operate unconsciously, growing and proliferating, responding to temperature, viral threats, hunger, and so on. This is a microscopic intelligence. Cells together make organs - these systems form a new layer of operations (though still scripted by the genetic code in every cell) - hearts pump blood, the pancreas makes insulin, and so on. The intelligence unleashed by our prefrontal cortex is the first kind unmoored from our genetics. The prefrontal cortex, and its potential, is shaped by a lifetime of exposure. You are limited to - or made by - the type of inputs you’ve received. The trauma, the lessons, the examples, the ideas - the CULTURE - you developed in.
You will struggle to separate human genetic and human cognitive intelligence. Much of what we do is subconsciously driven by our evolutionary goals. When we eventually get to AI, and to the existential fear of this new intelligence, we find that most people are very confused about goals. Every intelligence has goals - implicit and explicit. The more layers, the more complex the intelligence. Intelligence interacts with an environment and learns. Brains learn with new ideas. Genes learn with evolutionary change in changing environments. The learning process is how intelligence changes and refines. The more inputs and learning opportunities, the more nuanced an intelligent agent can become. Think of babies compared to adults; think of a deep learning model trained on one image compared to one million images.
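The point about learning from more inputs can be made concrete with a toy sketch of my own (not taken from any real system): a one-nearest-neighbour “agent” trying to learn a simple rule. With a single experience it labels everything the same way; with many experiences its behaviour becomes fine-grained.

```python
# Toy illustration: a 1-nearest-neighbour "agent" grows more nuanced
# as its store of experiences (training data) grows.

def nn_predict(train, x):
    # Return the label of the closest stored experience.
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(train, test):
    correct = sum(1 for x, y in test if nn_predict(train, x) == y)
    return correct / len(test)

# True rule the agent is trying to learn: label is 1 when x > 0.
test = [(x, int(x > 0)) for x in range(-5, 6) if x != 0]  # 10 balanced cases

one_example = [(-3, 0)]                                    # a single experience
many_examples = [(x, int(x > 0)) for x in range(-50, 51) if x != 0]

print(accuracy(one_example, test))    # 0.5 - everything looks like class 0
print(accuracy(many_examples, test))  # 1.0 - rich experience, correct rule
```

The names and data here are invented for illustration; the same intuition is why a model trained on a million images outperforms one trained on a single image.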
AI Fundamentals
If you’re still with me, you’ll agree the work of defining intelligence is no trivial task. But in short, we might agree that energy is activity, and intelligence is stored instruction in the form of substrate-independent algorithms. The goals of intelligence depend on the layers of algorithms involved. We can separate intelligence into agents. Organisms, like a person or a dolphin, are agents. Each has a different genetic code and a different set of experiences.
We can reason about artificial intelligence from biological intelligence, but only up to a point. Broadly, we can say all intelligence relies on three primary factors:
Energy (the power source)
Compute (the capacity to store and process instructions)
Data (the inputs to learning)
The energy can be ATP in cells, or electricity in circuits. Compute can be DNA or CPU chips. Data can be digital (1s and 0s), or physical sense data from the environment.
As things become increasingly complex and heated in the AI race, I find it comforting to return to the fundamentals of intelligence. I like to think about potentialities in terms of energy, compute, and data. Compute seems to be the hardest part - will brute computation get us there, or will we need some smarter systems - something closer to the 3D transformers like brains and cells?
For years we could deride artificial intelligence as narrow intelligence. Google Maps is a narrow AI: it sets routes and calculates times based on the journey data it has crunched, and its algorithm parameters are clear to us. Google Maps cannot make us a sandwich no matter how much we beg; it has a narrow domain of problem-solving. Currently, artificial intelligence models train with very specific goal parameters, but this is changing as we see sufficiently large models do unexpected things - just as humans defy our overriding biological code with birth control and philosophy.
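The spirit of narrow AI can be sketched in a few lines. This is my own toy example, not how Google Maps actually works (their real system is vastly more sophisticated): a shortest-path search over travel times, which does one thing well and nothing else.

```python
import heapq

# A toy route planner in the spirit of a narrow AI: given travel times
# between points, find the fastest route. It cannot make you a sandwich.

def fastest_route(graph, start, goal):
    # Dijkstra's algorithm over {node: [(neighbour, minutes), ...]}.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None  # no route exists

roads = {
    "home": [("cafe", 5), ("office", 20)],
    "cafe": [("office", 10)],
}
print(fastest_route(roads, "home", "office"))  # (15, ['home', 'cafe', 'office'])
```

The algorithm's parameters are fully legible to us - exactly the property that makes this kind of intelligence "narrow".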
Fools rush in
I recently read Daniel Yergin’s Pulitzer Prize winner The Prize: The Epic Quest for Oil, Money, and Power. It tracks the great story of oil, and how it fueled the most astonishing growth century in history. The wide-scale wielding of fossil fuels changed everything. Fossil fuels liberated human intelligence. We had lights at night to read and study and work. We could power ships that warred and traded across the high seas. We grew a global and interconnected economy and society. Energy allowed us to link up and form collective intelligence hubs that sparked advances in quantum physics, and industrial practice, and construction, and miraculous new medicines. Our development in the 20th century was built on abundant energy. If the 20th century was about liberating energy, our current century is surely about liberating intelligence.
So the new oil rush has begun: it’s the age of AI. US-based OpenAI leads the way with billion-dollar funding rounds. Their CEO, Sam Altman, talks about the need for trillion-dollar investments in chip fabrication and power plants. A host of AI startups frantically compete to develop new use cases for new types of AI. All speak about the holy grail of a general artificial intelligence. To liberate intelligence, we will need quality data, software and chips, and energy supply in abundance!
Narrow intelligence is widening quickly. As recently as yesterday, OpenAI released their “omni” AI, GPT-4o. This AI is conversational: it can take in video, images, voice, and text. The term narrow AI is becoming inappropriate. I used to think general intelligence was a kind of radical leap, but maybe we’re all just swimming in the pond that OpenAI researchers are heating up with each new release.
What can we do with more intelligence?
Everyone is nervous about the risks, but let me play techno-optimist for a moment. Think of the benefits of greater and more sophisticated intelligence. Machines for everything. Think about the dirtiest jobs for a start: smart garbage trucks that come to take our trash, or smart ships that scour the oceans to pick up plastic. We already have smart cars that ferry yuppies around San Francisco - what next for this kind of tech? No need for drivers to sit in cars for countless idle hours. What happens to education? To the sciences? In every field, for every problem, we will have more intelligence, more problem-solving aids. The 20th century computer is a very limited AI - few people object to a computer with better specs - and we should start seeing AI in this light.
I do have a quibble about AI conversations. At every conference I go to, at nearly every talk on nearly any topic, I cringe when AI is mentioned. What is very clear is that there are many muddled ideas out there. We must get more specific about what we mean by AI at this early phase. The concept has not yet concretized to suit one ubiquitous label. We must distinguish between YouTube’s recommendation AI (which suggests a fun video) and DeepMind’s AlphaFold AI (which predicts a protein’s 3D structure from its amino acid sequence). A statistical prediction model is not equivalent in complexity to a deep learning video recognition model. Not all AIs are equal. All AIs are based on energy and data and compute, but the amount of compute and the complexity and depth of learning can vary greatly. So let’s be clearer when we talk about AI, and that will entail everyone understanding a little bit of the basics.
Intelligence Explosion
I mentioned AlphaFold. AI has started to impact the biological sciences. We are creating intelligence that whispers secrets back to us about how we work, at scales and complexities that human brains cannot comprehend. Artificial programs can process any information if we equip them correctly.
We can think of intelligent agents - computers or people. We can think of appended intelligence - humans using computers/AIs. We can think of collective intelligence - many humans, many computers. Intelligence begets intelligence. Some seers say we are on the first part of an exponential intelligence growth curve. Maybe we measure this crudely as how much instruction is out there (computational capacity), artificial and biological, and how much learning data exists.
Cells create more cells which might mutate or proliferate further. AIs respond to prompt data and create more data - they call this synthetic data. In a few short years we will be drowning in AI-generated content. Intelligence with taste will win in a world with too much data and a lot of dumb AIs. In the explosion of intelligence and data, the strongest signals will win attention and resources.
Are AIs people?
AI researcher and entrepreneur Mustafa Suleyman wants us to think about AI with a new metaphor: a new digital species. The many AIs we create will have echoes of personhood. Another public intellectual I admire, David Deutsch, often talks about potential AIs as people. They will have a kind of intelligence we will not fully understand. Their basis and substrate will not be biological like ours (and their drives will be different, and their capacity for things like suffering will be limited) - but they will have memory and coded emotion, much like us. There are many uncomfortable questions ahead, mainly in the way we see pieces of our own intelligence in the machines we create.
Suleyman highlights two dangers we must watch out for: 1) autonomy, and 2) recursive self-improvement. We are far from this reality, and we will likely control whether these abilities are coded in. I discovered a TED talk by Suleyman midway through writing this piece - he talks about many of the things I have included here. AI is the culmination of billions of years of biological evolution and millions of years of cultural evolution. This is another inflection point in our existence.
Artificial Creativity
What happens next? We’re making rapid progress. Our models will continue to improve. We will make interaction with AIs better and smoother. The next frontier will be creativity - which is required for general intelligence. Creativity is the capacity to wield intelligence in new ways, in new niches, in proper context. To explore, to generate, to ameliorate.
Perhaps the 22nd century will be the age of creativity, where more people than ever are engaged in properly creative pursuits across science and art and business. Could the automation of much necessary work remove the drudgery from human life? Can we then focus on the fun stuff?
My feeling in the face of all this is foolish optimism. Of course, I worry about unintended consequences, but I can’t suppress the notion that things will turn out well. We can look back to 1900 and judge the effects of energy liberated to near-abundance. In 100 years the critics will point to all the ways society changed with new AI. I hope to point to a hopeful society: with diseases cured, fewer people suffering crippling loneliness, marvelous scientific advances, and the dawn of an AI-enhanced creative efflorescence.