Every few months, someone publishes a breathless article about how AI will finally unlock the mysteries of the brain. Train a big enough model on enough neural data, the thinking goes, and out pops understanding. It's seductive logic, especially when you see what foundation models can do with language or images. A perspective piece in Neuron takes a hard look at this assumption and finds it wanting.
The short version: AI is a very good tool. It is not magic, and it probably won't do the hard work of understanding for us.
The Prediction Trap
Here's something AI is genuinely good at: prediction. Feed a modern machine learning model enough examples of brain activity, and it can often predict what comes next with impressive accuracy. It can look at neural firing patterns and tell you what stimulus the brain is processing, or what action the animal is about to take. The numbers look great. The papers get published. Everyone is happy.
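To make that concrete, here is a minimal sketch of what this kind of neural decoding looks like in practice. Everything below is synthetic and illustrative: the firing rates are random numbers with a planted stimulus signal, and a real pipeline would involve spike sorting, binning, and careful cross-validation. The point is how little machinery it takes to produce an impressive-looking accuracy number.

```python
# Minimal decoding sketch: predict stimulus identity from population firing rates.
# All data here is synthetic; no real recordings are involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_neurons, n_stimuli = 600, 50, 4

stimuli = rng.integers(0, n_stimuli, size=n_trials)               # stimulus shown on each trial
tuning = rng.normal(size=(n_stimuli, n_neurons))                  # each neuron's stimulus preference
rates = tuning[stimuli] + rng.normal(size=(n_trials, n_neurons))  # noisy trial-by-trial firing rates

X_train, X_test, y_train, y_test = train_test_split(rates, stimuli, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {decoder.score(X_test, y_test):.2f}")  # well above chance (0.25)
```

A high score here tells you the stimulus is readable from the population activity. It tells you nothing about how the circuit produces that activity.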
But here's the uncomfortable question nobody wants to dwell on: is prediction actually what neuroscience is trying to achieve?
The goal of neuroscience isn't to predict what a brain will do. It's to understand why a brain does what it does. And there's a surprisingly wide gap between those two things. You can predict tomorrow's weather without understanding anything about atmospheric physics. You can predict stock prices (sometimes) without understanding the economy. And you can predict neural activity without having any clue how the underlying circuits actually work.
A foundation model might achieve perfect prediction of neural responses and still leave you knowing absolutely nothing about the mechanisms that generate those responses. "The model said this would happen" is not the same as "this is how the brain works."
The Black Box Problem (It's Not Going Away)
If you've ever tried to understand why a large language model produces a particular output, you know the frustration. These systems are black boxes. They process information through billions of parameters in ways that even their creators don't fully understand. They work. We're just not entirely sure why.
Now imagine applying this approach to brain science. You train a massive model on neural data. It predicts brain activity really well. Great. But what have you actually learned about neuroscience?
The brain operates through specific mechanisms: particular cell types, specific connections between neurons, molecular processes that regulate signaling. When we say we "understand" a brain function, we mean we can connect behavior to circuits to cells to molecules. We can draw the diagram. We can explain why manipulating this component changes that output.
AI models don't do any of that. They find statistical regularities in data. "A pattern in the data" is not the same as "we understand how neurons do this." The pattern might be real and important. But identifying it is not the same as explaining it.
What Would Real Success Look Like?
The authors of the Neuron piece aren't AI skeptics. They acknowledge that foundation models could genuinely transform neuroscience. But they're clear-eyed about what that would require: linking computations to mechanisms.
In plain language: the mathematical operations happening inside an AI model would need to correspond somehow to actual neural processes. The model wouldn't just predict brain activity; it would tell you something true about how the brain computes.
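One family of methods researchers actually use for this kind of linking is representational similarity analysis: compare the geometry of a model's internal features with the geometry of neural responses to the same stimuli. The sketch below is a toy version with synthetic stand-ins for both; in real studies the features come from a trained network and the responses from recordings.

```python
# Toy representational similarity analysis (RSA). Both arrays are synthetic
# placeholders that share a planted latent structure, so the similarity is
# high by construction.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli = 40
latent = rng.normal(size=(n_stimuli, 5))  # shared structure across both systems

model_features = latent @ rng.normal(size=(5, 128)) + 0.1 * rng.normal(size=(n_stimuli, 128))
neural_responses = latent @ rng.normal(size=(5, 60)) + 0.5 * rng.normal(size=(n_stimuli, 60))

# Representational dissimilarity: pairwise distances between stimuli.
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(neural_responses, metric="correlation")

rho, _ = spearmanr(model_rdm, neural_rdm)
print(f"model-brain representational similarity: rho = {rho:.2f}")
```

Even a strong correlation here only says the two systems organize stimuli similarly. It does not say they compute the same way, which is the next problem.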
This is a tall order. Current AI architectures were designed to solve engineering problems, not to recapitulate biological computation. There's no guarantee that the solutions they find bear any resemblance to what brains actually do. Two systems can produce identical outputs through completely different mechanisms. Just because an AI can predict neural activity doesn't mean it's solving the problem the same way the brain does.
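A toy example makes the point. Both functions below have identical input-output behavior, so no amount of testing their predictions can tell them apart. Only looking inside reveals that they work differently.

```python
# Two "systems" with identical behavior but different internal mechanisms.

def max_by_sorting(values):
    # Mechanism A: sort everything, take the last element.
    return sorted(values)[-1]

def max_by_scanning(values):
    # Mechanism B: one pass, keep a running best.
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

for x in [[3, 1, 4], [1, 5, 9, 2, 6], [-7, -2, -5]]:
    assert max_by_sorting(x) == max_by_scanning(x)  # behaviorally indistinguishable
print("identical outputs on every test; different mechanisms inside")
```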
Where AI Actually Helps
None of this means AI is useless for neuroscience. Far from it. These tools can do things that would take human researchers years to accomplish manually.
Foundation models can sift through massive datasets and spot patterns that human eyes would miss. They can generate hypotheses worth testing. They can identify interesting neurons or brain regions or timepoints that merit closer investigation. They can handle the grunt work of data analysis, freeing up researchers to think about what the results mean.
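As a sketch of that screening role, here is a deliberately crude version: rank neurons by how strongly their activity tracks a behavioral variable, and flag the top candidates for follow-up experiments. The data is synthetic; a foundation model would bring far richer features to the same job, but its role is the same: pointing at where to look.

```python
# Crude screening sketch: flag neurons whose activity tracks behavior.
# Synthetic data with a planted set of informative neurons.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_neurons = 5000, 200

behavior = rng.normal(size=n_timepoints)                     # e.g., running speed
activity = rng.normal(size=(n_timepoints, n_neurons))
informative = rng.choice(n_neurons, size=10, replace=False)  # neurons that truly track behavior
activity[:, informative] += 0.5 * behavior[:, None]

# Score each neuron by |correlation| with behavior, rank, take the top ten.
scores = np.abs([np.corrcoef(activity[:, i], behavior)[0, 1] for i in range(n_neurons)])
flagged = np.argsort(scores)[::-1][:10]
print("flagged for follow-up:", sorted(flagged.tolist()))
print("planted informative set:", sorted(informative.tolist()))
```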
But at some point, someone still has to design an experiment to test whether the pattern is real and figure out what mechanism produces it. You still have to open the black box and look inside. AI can tell you where to look. It can't do the looking for you.
The Hype Cycle Needs a Reality Check
We're in the middle of an AI hype cycle where every field is being told that foundation models will revolutionize everything. Some of those promises will pan out; others won't. For neuroscience, the honest answer most likely lands somewhere in between.
AI will almost certainly become an essential tool in brain research. It already is. But the vision of training a big enough model and having understanding pop out the other side is probably not how this works. Understanding mechanisms requires a different kind of work than optimizing prediction accuracy. It requires experiments that probe causality, not just observations that reveal correlation.
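The difference between observation and intervention is easy to show in miniature. In the synthetic example below, a hidden common cause (a stand-in for something like arousal or brain state) makes two signals correlate strongly, yet intervening on one reveals that it never drove the other.

```python
# Why intervention beats observation: a hidden common cause creates a strong
# correlation between x and y even though x does not drive y. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

state = rng.normal(size=n)                  # hidden common cause
x = state + 0.3 * rng.normal(size=n)        # observed signal A
y = state + 0.3 * rng.normal(size=n)        # observed signal B (driven by state, not by x)
print(f"observed corr(x, y): {np.corrcoef(x, y)[0, 1]:.2f}")  # strong

x_set = rng.normal(size=n)                  # "experiment": set x ourselves
y_after = state + 0.3 * rng.normal(size=n)  # y is unchanged, because x never caused it
print(f"corr after intervening on x: {np.corrcoef(x_set, y_after)[0, 1]:.2f}")  # near zero
```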
Brains have been hard to understand for as long as we've been trying to understand them. AI makes them slightly less hard by handling some of the computational heavy lifting. That's real progress. But it's not revolution. Anyone selling you the idea that a sufficiently large language model will explain consciousness or decode the neural basis of memory is probably selling you something else too.
The hard work of neuroscience remains hard. AI is a better hammer. It doesn't change what needs to be built.
Reference: Serre T, Pavlick E (2025). From prediction to understanding: Will AI foundation models transform brain science? Neuron. doi: 10.1016/j.neuron.2025.09.039. PMID: 41130210.