AI Blog

by Michele Laurelli

What Makes Intelligence Intelligent?

AI Philosophy · Intelligence · AI

"Beyond pattern matching and statistical correlation—exploring what distinguishes true intelligence from sophisticated computation, and why the question matters for how we build AI."

Reading time: 4 min read

Every few months, someone declares that large language models "understand" or "don't understand" language. Both claims miss the point. Intelligence isn't binary.

The Spectrum of Capability

A thermostat responds to temperature. A chess engine evaluates positions. A language model generates text. A human understands context, forms intentions, adapts strategies, learns from single examples, and transfers knowledge across domains.

Where does "intelligence" begin on this spectrum? The question implies a threshold that doesn't exist. Intelligence describes a cluster of capabilities, not a single property.

What Neural Networks Actually Do

Neural networks approximate functions. Show them inputs and desired outputs, and they learn the mapping. This sounds reductive, but it's precise.

The magic emerges from what functions they can approximate and how they generalize beyond training data. A network that memorizes training examples without learning patterns is useless. A network that captures underlying structure and applies it to novel situations demonstrates something we recognize as intelligent behavior.
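The distinction can be made concrete with a deliberately tiny sketch: a one-weight "network" trained by gradient descent on pairs sampled from an unknown rule (here, an invented toy target y = 3x it never sees directly). The point is the generalization step at the end, not the model.

```python
import random

random.seed(0)
# The function to learn: y = 3x. The learner never sees this rule,
# only input/output pairs sampled from it on the range [-1, 1].
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(100))]

w = 0.0      # single learnable weight: the model is y_hat = w * x
lr = 0.1
for _ in range(50):
    for x, y in data:
        y_hat = w * x
        w -= lr * 2 * (y_hat - y) * x   # gradient step on squared error

# Generalization: the learned mapping also holds far outside the
# training range, because the model captured the underlying structure
# rather than memorizing the 100 examples.
print(w)           # close to 3.0
print(w * 10.0)    # close to 30.0, though no training input exceeded 1
```

A lookup table over the 100 training pairs would fit the data equally well and be useless at x = 10; the learned function is not.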

The Role of Compression

Intelligence might be inseparable from compression. To compress data, you must find patterns, regularities, and structure. Random noise doesn't compress.

When a neural network learns, it builds a compressed representation of its training distribution. The quality of this compression determines generalization. Poor compression: the model overfits, memorizing rather than understanding. Good compression: the model extracts the essential patterns.

This perspective makes intelligence measurable: how efficiently can a system compress its domain? How few bits does it need to represent the patterns that matter?
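The "random noise doesn't compress" claim can be checked empirically with an off-the-shelf compressor standing in for the learner. This is only an illustration of the principle, using Python's standard zlib:

```python
import random
import zlib

random.seed(42)
# Structured data: text full of regularities (a repeated sentence).
structured = ("the cat sat on the mat. " * 200).encode()
# Random noise of the same length: no patterns to exploit.
noise = bytes(random.randrange(256) for _ in range(len(structured)))

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size (lower = better)."""
    return len(zlib.compress(data, level=9)) / len(data)

print(f"structured: {ratio(structured):.3f}")  # tiny: patterns found
print(f"noise:      {ratio(noise):.3f}")       # near 1.0: nothing to find
```

The compressor "understands" the structured data in exactly the measurable sense the paragraph above describes: it represents the patterns that matter in very few bits, and fails to do so for noise.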

Abstraction and Hierarchies

Human intelligence builds hierarchies of abstraction. We don't think about letters when reading—we process words, sentences, concepts, arguments. Each level emerges from the one below, but operates independently.

Deep neural networks mirror this structure. Early layers detect edges and textures. Middle layers combine these into shapes and objects. Late layers recognize scenes and relationships. The hierarchy enables compositional generalization—combining learned components in novel ways.

But our hierarchies extend further. We build theories, frameworks, and meta-frameworks. We reason about our own reasoning. Current AI architectures don't develop these deeper abstractions without explicit architectural design.

Causality vs. Correlation

Statistical models find correlations. Intelligence requires understanding causation. The difference matters.

A correlation: ice cream sales and drowning rates both increase in summer. A causal model: temperature drives both; ice cream sales don't cause drownings.

Most machine learning identifies correlations. Causal inference—understanding interventions and counterfactuals—remains challenging. This limits AI in domains where correlation patterns break under distribution shift.
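The summer confounder can be simulated in a few lines. The coefficients below are invented for illustration; the point is that a strong observed correlation tells you nothing about what happens under an intervention:

```python
import random

random.seed(1)
n = 5000
# Hidden confounder: temperature drives both observed variables.
temps = [random.uniform(0, 35) for _ in range(n)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]   # sales
drownings = [0.5 * t + random.gauss(0, 2) for t in temps]   # incidents

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Observationally, the two are strongly correlated:
print(f"corr(sales, drownings) = {corr(ice_cream, drownings):.2f}")

# Intervention (Pearl's do-operator): force sales high for everyone.
# Drownings are unchanged, because the causal arrow runs
# temperature -> drownings, not sales -> drownings.
drownings_after = [0.5 * t + random.gauss(0, 2) for t in temps]
mean_before = sum(drownings) / n
mean_after = sum(drownings_after) / n
print(f"mean drownings before/after intervention: "
      f"{mean_before:.2f} / {mean_after:.2f}")
```

A purely correlational model trained on this data would predict that suppressing ice cream sales reduces drownings; a causal model would not.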

The Hard Parts

What remains difficult for current AI reveals something about intelligence:

Few-shot learning: Humans learn concepts from single examples. Neural networks typically require thousands.

Transfer across domains: Skills learned in one context rarely transfer to different contexts without extensive training.

Compositional reasoning: Combining learned components in ways never seen during training.

Common sense: The vast web of background knowledge humans use effortlessly.

Intentionality: Acting with purpose toward goals, not just optimizing reward functions.

These capabilities likely require architectural innovations we haven't discovered, not just more parameters or training data.

Why This Matters

How we think about intelligence shapes what we build. If we believe intelligence is just pattern matching at scale, we build ever-larger pattern matchers. If we recognize intelligence as a collection of distinct capabilities, we architect systems that develop those capabilities.

The Talents framework, for instance, emerged from recognizing that human expertise involves specialized, persistent knowledge—not general-purpose pattern matching applied to everything.

Maestro addresses coordination and specialization because we observed that complex intelligence emerges from collaboration, not monolithic processing.

The Philosophical Edge

Some argue that discussing machine "intelligence" anthropomorphizes computation. Others claim current AI already achieves general intelligence. Both extremes obscure useful engineering questions.

I care less about whether AI "truly understands" and more about what capabilities systems demonstrate, how reliably they perform, and what failure modes they exhibit.

Philosophy matters when it clarifies thinking. It becomes a distraction when it replaces measurement with definitional debates.

What Comes Next

The next generation of AI won't come from scaling alone. It will require:

Better architectures for compositional reasoning

Mechanisms for causal inference

Systems that build and use abstract models

Integration of symbolic reasoning with learned representations

Frameworks for continual learning without catastrophic forgetting

These aren't distant dreams. They're active research areas with early practical implementations.

Intelligence isn't one thing. It's many capabilities, some we've replicated well, others we're still learning to build.

— ✦ —