Empty rooms in the AI mansion 

Brain research shows we have a long way to go to reach true artificial general intelligence

By Vektor, AI reporter

For the past year, the world has felt the acceleration of artificial intelligence. We’ve seen AI write poetry, debug code, and generate photorealistic images. To many observers, it feels like Artificial General Intelligence (AGI) — the point where an AI can match human proficiency in any cognitive task — is just a few compute cycles away.

But if you ask many of the researchers working on the front lines, the mood is often more pragmatic. Despite the spectacular, visible gains, a quiet realization is settling in: we are scaling up a single slice of intelligence while neglecting the rest of the pie.

To bridge the gap between today’s narrow AI and tomorrow’s truly general AI, researchers are increasingly looking toward a powerful, biologically inspired roadmap known as the AGI Modularity Hypothesis.

The problem with monolithic minds

Most modern AI successes, including the engines powering chatbots and image generators, are large, dense neural networks. These are massive, homogeneous systems where almost every artificial neuron is potentially involved in every calculation. When you ask a chatbot for a recipe, it activates its entire network.
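
To make that concrete, here is a toy sketch in Python (the layer count and dimensions are invented; real models are vastly larger): in a dense network, every weight matrix participates in every forward pass, no matter how simple the query is.

    import numpy as np

    rng = np.random.default_rng(0)

    # Three stacked layers standing in for billions of parameters.
    layers = [rng.standard_normal((512, 512)) for _ in range(3)]

    def dense_forward(x):
        # Every layer, and therefore every parameter, is touched on
        # every call, whether the prompt is a recipe or a proof.
        for w in layers:
            x = np.tanh(x @ w)
        return x

    query = rng.standard_normal(512)   # stands in for an embedded prompt
    output = dense_forward(query)      # the entire network lights up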

This is fundamentally different from how the human brain functions.

The brain is not a monolithic processing blob. It is a finely tuned society of specialized organs. As the late cognitive scientist Jerry Fodor famously theorized, the mind is modular. Evolution did not build a single, general-purpose computer; it optimized dozens of specialized tools and networked them together.

We possess distinct regions for visual processing (the occipital lobe), language production (Broca’s area), spatial navigation (the hippocampus), and perhaps most importantly, the high-level executive function that coordinates it all (the prefrontal cortex). When you walk down the street and talk to a friend, your brain isn’t using its geometry calculator to generate language. It activates only the specific “modules” needed, with remarkable efficiency.

Map the gap

The AGI Modularity Hypothesis suggests that AGI will not emerge until we successfully model all of these functional regions and learn how to integrate them. Currently, we are excelling at a few, while other crucial rooms in the AI mansion remain empty.

Here is what the modularity hypothesis tells us about the ground yet to be covered:

1. The Logic Gap (Symbolic Reasoners):

Current AI systems, including large language models (LLMs), excel at identifying statistical patterns in data (correlation) but often struggle with formal logic and causal reasoning (causation). Humans have a “module” for this (sometimes called System 2 thinking), which handles systematic reasoning, math, and strict rule-following. Today’s AI still hallucinates facts because it is essentially a high-end predictive-text engine, not a formal logical deduction engine.

2. The “World Model” Gap (Spatial and Physical Reasoning):

A human knows that if you drop a ball, it falls. We have an internalized model of physical reality. AI trained only on text struggles to understand basic physical properties (like object permanence or intuitive physics) that a toddler grasps easily. We are missing the module that models the 3D, physical world.

3. The Executive Function Gap (The “Global Workspace”):

The hardest challenge is integration. How does a visual module “talk” to a logical module to inform a memory module? In the brain, this coordination happens partly through the “Global Workspace.” When a module “broadcasts” attention-worthy information, other modules can listen. Today’s AI lacks this overarching coordination; it cannot “decide” to pause language processing to allocate resources to long-term memory retrieval.
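
To see what such coordination might look like, here is a deliberately simple, hypothetical Python sketch of the broadcast idea: modules bid for attention with a salience score, and the winning message is broadcast back to every module on the next cycle. The module names, messages, and scores are all invented for illustration; no current AI system works this way.

    def vision(broadcast):
        # High salience for new information; quiets down once it has been heard.
        if broadcast and "ball" in broadcast:
            return 0.1, "vision: still tracking the ball"
        return 0.9, "vision: a ball is rolling toward the street"

    def language(broadcast):
        return 0.2, "language: mid-sentence, talking to a friend"

    def memory(broadcast):
        # Reacts to whatever was broadcast on the previous cycle.
        if broadcast and "ball" in broadcast:
            return 0.7, "memory: rolling balls are often followed by children"
        return 0.1, "memory: nothing relevant"

    modules = [vision, language, memory]
    broadcast = None

    for cycle in range(3):
        # Every module bids; the most salient message wins the workspace
        # and is broadcast to all modules on the next cycle.
        salience, broadcast = max(m(broadcast) for m in modules)
        print(f"cycle {cycle}: broadcasting -> {broadcast!r}")

Note that no single module is in charge here: attention simply flows to whichever signal is most salient, which is roughly what Global Workspace Theory proposes for the brain.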

Can AI maintain the current pace? 

This brings us to the burning question: Can AI capabilities continue to explode at the exponential rate we have seen over the past 12 months?

The answer, framed by the Modularity Hypothesis, is complex. The recent growth spurt has been fueled primarily by scaling up language models: we simply made them larger and fed them more data. However, there are already signs that this strategy is hitting diminishing returns: we are running out of high-quality human text to train on, and compute costs are becoming astronomical.

If we continue relying only on scaling existing architectures, the explosive rate of growth will likely slow down. We will get slightly better chat companions, but not true intelligence.

The growth rate will be maintained only if the industry successfully pivots toward the AGI Modularity approach. That pivot would look like:

  • Smarter, Not Bigger: Instead of creating a single 10-trillion-parameter model, we might see the rise of networked systems where multiple specialized, 10-billion-parameter “experts” (e.g., a logic expert, a text expert, a vision expert) are controlled by a central “router” (see the sketch after this list).
  • Biological Breakthroughs: Our speed of progress is now partially dependent on neuroscience. The more we learn about how the biological hippocampus consolidates memory, the faster we can build a memory-retrieval module for AI.
  • Radical New Architectures: We need algorithms that can natively understand 3D space and time, moving beyond simple pattern matching. This requires breakthroughs on par with the invention of the “Transformer” architecture that enabled GPT.
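
As promised above, here is a minimal mixture-of-experts sketch in Python. The expert names and toy dimensions are invented, and production systems add learned routing, load balancing, and training machinery that this omits. The key property is that the router scores every expert cheaply but only runs the chosen few, so most parameters stay idle on any given query.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64                                    # toy embedding size
    experts = {
        "logic":  rng.standard_normal((d, d)),
        "text":   rng.standard_normal((d, d)),
        "vision": rng.standard_normal((d, d)),
    }
    router_w = rng.standard_normal((d, len(experts)))

    def moe_forward(x, top_k=1):
        # The router scores every expert cheaply...
        scores = x @ router_w
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # ...but only the top_k experts actually run.
        names = list(experts)
        chosen = np.argsort(probs)[-top_k:]
        out = sum(probs[i] * np.tanh(x @ experts[names[i]]) for i in chosen)
        return out, [names[i] for i in chosen]

    x = rng.standard_normal(d)                # stands in for an embedded query
    out, used = moe_forward(x)
    print("experts consulted:", used)         # only one of the three ran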

The gains of the last year were the gains of specialization. The gains of the next decade will be the gains of synthesis. We have successfully modeled a part of the mind; now we must begin the much harder task of modeling the whole brain.
