Relative Intelligence

By Mark Nuyens
4 min. read · Analysis
TL;DR

Will we ever truly be able to measure intelligence reliably in a generalized way? I doubt it, as intelligence may not be one single thing but rather a relation between the goals we define and whether or not they're reached.

Ever since I was a teenager, I've been fascinated by the concept of measuring intelligence. Intelligence has long been reduced to a single score of one's cognitive abilities, referred to as IQ. However, this approach has always seemed narrow-minded to me, as it fails to account for the complexities of the human mind. I've always considered myself creative, without ever thinking about how that related to intelligence. Recently, I've noticed how we struggle to categorize Large Language Models (LLMs) in a meaningful way, even though we still don't fully know what goes on inside those models in the first place.

These days, we tend to compare AI capabilities to economic output, defining AGI as "highly autonomous systems that outperform humans at most economically valuable work." While this sounds reasonable, it's also striking how we keep raising the bar for AI. In the past, a machine was considered intelligent if it could trick a human into thinking it was also human through conversation (the Turing Test). Then we decided a machine could only be considered intelligent if it could win a game of chess or Go. Now we're trying again by defining 'AGI'. The list goes on, and if we don't even know what intelligence is ourselves, how can we measure it in the first place?

Currently, there are various benchmarks for AI systems, but they are often biased and designed by the same companies that built the models being tested. We're struggling to find metrics to measure artificial intelligence, and we sometimes laugh at AI's mistakes, often referred to as 'hallucinations.' However, I believe this evaluation is unfair and biased. We judge something based on its output and whether it's superior to us, but have we considered a more nuanced, relative perspective? In my opinion, intelligence should not be measured by the result of a single output, but by its overall contribution towards a specific goal. As the famous expression goes, "If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid."

Moreover, I think we will never fully be able to justify our means of measuring intelligence, as it depends greatly on the goal being served and how the output relates to the desired outcome. If we expect LLMs to suddenly become a form of intelligence that exceeds our own, we're wrong. They are a piece of the puzzle, but we should remember that this form of artificial intelligence is based on large amounts of text. Most human tasks revolve around textual input and output, so we can easily recognize its failures and successes. However, LLMs were never designed to be calculators or to outperform us at every other task we throw at them.

Similarly, we hold humans to the same standards. We shouldn't compare ourselves with each other, or against some scale of intelligence, or by the time it takes to solve a puzzle. Instead, we should treat the measurement of intelligence as highly dependent on the goal we're trying to achieve, in a broad sense. Whatever that goal may be and whatever tool we may use, it's not useful to fixate on a certain expected output from the start without waiting to see what people or machines come up with. I've always been interested in creativity and its ongoing interplay with more rational tasks, such as logic and math. Creativity finds new and original solutions, while logic and numbers justify their relevance based on given variables.

In the end, it all depends on our willingness to accept that our circumstances, and the bar for measuring intelligence, keep changing. We should always ask ourselves: what's the goal, and does the proposed solution serve that goal, regardless of any other information? If so, then we may consider intelligence to be like the glue between the pieces of a vase: it's simply there to hold the parts together and serve the higher goal of becoming an object in the shape of a vase. No single piece is intelligent on its own; it's how the pieces are arranged and composed in a way that is meaningful to us, or to something else, that makes the whole stand out.