What Can Physics Teach Us About AI?
New research from Bar-Ilan University reveals that artificial intelligence isn’t just powerful because it’s big, but because its parts learn to specialize and work together.
Artificial intelligence may seem worlds away from physics. One deals with chatbots and language models. The other deals with magnets, particles, and the laws of nature.
But new research from Bar-Ilan University suggests they may have more in common than we think.
At the heart of the study by Prof. Ido Kanter is a famous scientific idea called “More is Different,” introduced by Nobel Prize-winning physicist Philip W. Anderson in 1972. The idea is simple: when many parts come together, something new can emerge that cannot be understood by looking at one part alone.
Prof. Kanter asked whether this idea is also relevant to AI.
His answer is yes, but with an important twist: from an information perspective, AI and physical systems behave in opposite ways.
AI Is Not Just Getting Bigger, It’s Getting Organized
The study shows that as AI models learn, their internal units (nodes) do not all keep doing the same job. They begin to specialize.
One node may become especially useful for identifying certain words or patterns, while others take on different roles. The system becomes more powerful not only because it is bigger, but because its parts function differently.
That may sound abstract, but it points to something important. AI does not work only by scale. Its power also comes from cooperation.
When One Part Knows More Than You’d Expect
One of the most surprising findings is that even a single node in a language model contains meaningful information about the model’s overall task.
And when several nodes work together, they can achieve more than the simple sum of their individual abilities.
These systems are not just growing. They are organizing.
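The synergy claim above can be made concrete with a toy example (an illustration of the general idea, not the study's own analysis). Here, two binary "nodes" jointly determine a label via XOR: each node alone carries zero information about the label, yet together they determine it completely, so the pair knows strictly more than the sum of its parts.

```python
import itertools
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in bits) between the two
    elements of each (x, y) pair."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi

# All equally likely joint states of the two nodes.
states = list(itertools.product([0, 1], [0, 1]))
samples = [(a, b, a ^ b) for a, b in states]  # label = XOR of the nodes

mi_node_a = mutual_information([(a, y) for a, b, y in samples])
mi_node_b = mutual_information([(b, y) for a, b, y in samples])
mi_joint = mutual_information([((a, b), y) for a, b, y in samples])

print(f"I(node A; label) = {mi_node_a:.3f} bits")  # 0.000
print(f"I(node B; label) = {mi_node_b:.3f} bits")  # 0.000
print(f"I(both;   label) = {mi_joint:.3f} bits")   # 1.000
```

XOR is the textbook case of synergy: the information lives in the relationship between the parts, not in any single part.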
Why AI Is Different from Physics
The study also highlights a key difference between AI and physical systems.
In many physical systems, the state of one or even many parts contains essentially the same information about the overall system. This reflects an information-based principle described as “More is the Same.”
Adding more parts does not increase the amount of information about the global state.
In AI, however, learning changes the picture. As the network trains, its nodes begin to carry pieces of the global task, and the total information increases with the number of nodes. In this sense, AI follows “More is Different.”
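The contrast can be sketched with another toy example (again an illustration, not the paper's method). When every node redundantly mirrors the same global bit, reading more nodes adds nothing ("More is the Same"); when each node carries a distinct piece of the global state, information grows with the number of nodes read ("More is Different"). Because each node here is a deterministic function of a uniformly distributed state, the information a set of nodes carries equals the entropy of its outputs.

```python
import itertools
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy (in bits) of an empirical distribution."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# Global state: 3 bits, all 8 combinations equally likely.
global_states = list(itertools.product([0, 1], repeat=3))

# "More is the Same": every node reports the same bit of the state.
same_one = entropy_bits([(g[0],) for g in global_states])
same_all = entropy_bits([(g[0], g[0], g[0]) for g in global_states])

# "More is Different": each node reports a different bit of the state.
diff_one = entropy_bits([(g[0],) for g in global_states])
diff_two = entropy_bits([(g[0], g[1]) for g in global_states])
diff_three = entropy_bits([(g[0], g[1], g[2]) for g in global_states])

print(f"redundant nodes:   1 node -> {same_one:.0f} bit, 3 nodes -> {same_all:.0f} bit")
print(f"specialized nodes: 1 node -> {diff_one:.0f} bit, 2 -> {diff_two:.0f}, 3 -> {diff_three:.0f} bits")
```

In the redundant case, one node or three yield the same single bit; in the specialized case, information about the global state scales with the number of nodes, which is the pattern the study attributes to trained networks.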
This difference may help explain why AI is so effective, and could eventually help researchers design smaller, more efficient, and more understandable models.
From Machines to the Brain
The implications may go even further. Drawing on experimental findings, Prof. Kanter connects this work to neuroscience, raising the possibility that the brain relies on small units (individual neurons) that are far more powerful and specialized than we once assumed.
So, What’s the Big Idea?
AI may be intelligent not just because it is large, but because learning enables its parts to divide the task, specialize, and cooperate.
Sometimes, understanding the future of artificial intelligence begins with an old question from physics.