Summary: Researchers found that humans and AI alternate similarly between two learning systems: fast, flexible contextual learning and gradual incremental learning. Experiments show that AI develops contextual learning abilities only after extensive additional practice, much as humans do.
Both also demonstrated a trade-off between flexibility and memory: more difficult tasks strengthened long-term memory, while easier tasks increased adaptability. These findings could influence the design of AI systems that interact more intuitively with human cognition.
Key facts
- Common strategy: Both humans and AI use context and incremental learning in a complementary way.
- Advances in meta-learning: The AI developed flexible contextual learning only after thousands of additional training tasks.
- Trade-off: Like humans, AI balances flexibility (learning rules quickly) against retention (updating long-term memory).
Source: Brown University
New research reveals similarities in the way humans and artificial intelligence integrate the two types of learning. It offers new insights into how people learn and how to develop more intuitive AI tools.
The study, led by Jake Russin, a postdoctoral researcher in computer science at Brown University, showed that in AI systems, flexible and incremental learning mechanisms interact much as working memory and long-term memory do in humans.
“These findings help explain why people appear to be rule-based learners in some situations and incremental learners in others,” Russin said. “They also suggest a deep similarity between the most advanced AI systems and the human brain.”
Russin holds a joint appointment in the laboratories of Michael Frank, professor of cognitive and psychological sciences and director of the Center for Computational Brain Science at Brown’s Carney Institute for Brain Science, and Ellie Pavlick, associate professor of computer science and director of Brown University’s AI Research Institute.
The research, published in the Proceedings of the National Academy of Sciences, offers a deeper understanding of how humans acquire and process new information depending on the nature of the task. In some scenarios, individuals can grasp concepts quickly through contextual learning, which allows them to form a mental model based on just a few examples. A common illustration of this is learning the rules of a simple game like tic-tac-toe: after observing only a handful of plays, a person can understand the objective and anticipate moves, demonstrating how rapidly contextual cues can lead to effective comprehension.
In contrast, other tasks require a more gradual process known as incremental learning, where understanding is built up over time through repeated practice and refinement. For example, learning to play a song on the piano involves mastering individual notes and techniques through consistent, focused effort before achieving fluency. Unlike contextual learning, this form of knowledge acquisition relies on sustained engagement and feedback, often taking days or weeks to show significant progress. The study’s findings emphasize the importance of designing AI systems that can mirror these dual modes of learning—quickly adapting in some situations while steadily improving in others.
While researchers knew that both humans and AI integrate forms of learning, it was unclear how the two interacted. During the research team’s ongoing collaboration, Russin—whose work bridges machine learning with computational neuroscience—developed a theory that the dynamics might be similar to the interaction between human working memory and long-term memory.
To test this theory, Russin used meta-learning (a form of training that helps AI systems learn how to learn on their own) to uncover key features of both learning methods. Experiments showed that the AI system’s ability to learn from context emerged after meta-learning on many examples.
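The nested structure of meta-learning can be sketched in a few lines. The toy below is a hypothetical, Reptile-style illustration, not the paper's actual setup: an outer loop over many tasks slowly adjusts an initialization (incremental, in-weight learning) until any new task can be solved by a handful of inner adaptation steps.

```python
import random

def inner_sgd(w, target, steps=5, lr=0.2):
    """A few gradient steps on the squared error (w - target)^2 for one task."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

def meta_train(num_tasks=2000, meta_lr=0.1, seed=0):
    """Outer loop: slowly move the initialization toward adapted solutions."""
    rng = random.Random(seed)
    w_init = 0.0
    for _ in range(num_tasks):
        target = rng.uniform(4.0, 6.0)        # tasks cluster around 5.0
        w_adapted = inner_sgd(w_init, target)
        # Reptile-style update: nudge the init toward the adapted weights.
        w_init += meta_lr * (w_adapted - w_init)
    return w_init

w0 = meta_train()
# After meta-training, the initialization sits near the center of the task
# distribution, so a brand-new task is solved almost immediately.
```

The point of the sketch is the division of labor: the slow outer loop performs thousands of weight updates so that, at test time, a new task requires almost none.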
One experiment, modeled on a study with human participants, tested contextual learning by challenging the AI to combine familiar ideas to handle new situations: if the AI were taught a list of colors and a list of animals, could it correctly identify combinations of colors and animals (for example, a green giraffe) that it had never seen together before?
After meta-learning by performing 12,000 similar tasks, the AI gained the ability to successfully identify new color and animal combinations.
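The color-and-animal test is an instance of a compositional train/test split. The snippet below is a hypothetical illustration (not the paper's actual stimuli) of how such a split is constructed: whole combinations are held out, while every individual color and animal still appears somewhere in training.

```python
import itertools

# Hypothetical stimuli for a compositional generalization split.
colors = ["green", "red", "blue", "yellow"]
animals = ["giraffe", "zebra", "lion", "owl"]

all_pairs = list(itertools.product(colors, animals))

# Hold out one pair per color and per animal (a "diagonal" of the grid),
# e.g. ("green", "giraffe") is never seen as a combination during training.
held_out = [(colors[i], animals[i]) for i in range(4)]
train_pairs = [p for p in all_pairs if p not in held_out]

# Precondition for testing compositional generalization: each held-out pair
# is novel as a combination, yet both of its parts were seen in training.
for color, animal in held_out:
    assert any(c == color for c, _ in train_pairs)
    assert any(a == animal for _, a in train_pairs)
```

A model succeeds on this split only if it composes what it knows about the parts, rather than memorizing the pairs it was shown.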
The results show that both humans and AI learn faster and more flexibly according to context after a certain degree of incremental learning.

“It takes a while to master your first board game,” Pavlick said. “By the time you master your hundredth board game, you can learn the rules quickly, even if you’ve never seen that particular game before.”
The team also discovered trade-offs, for example between knowledge retention and flexibility: as with humans, the harder it is for the AI to perform a task correctly, the more durably it learns to perform that task in the future.
Frank, who has studied this paradox in humans, suggests that this is because errors signal the brain to update information stored in long-term memory, while error-free actions learned in context increase flexibility but do not engage long-term memory in the same way.
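Frank's explanation can be made concrete with a delta-rule sketch. The following is a hypothetical illustration, not the paper's model: long-term weights change only in proportion to prediction error, so an answer produced effortlessly, with zero error, writes nothing into long-term memory.

```python
def delta_update(w, x, target, lr=0.5):
    """Delta rule: the weight change is proportional to prediction error."""
    error = target - w * x
    return w + lr * error * x

# Hard task: the stored weight predicts wrongly; the large error drives a
# large update, writing the task into "long-term memory".
w_hard = delta_update(0.0, x=1.0, target=2.0)   # error = 2.0, weight moves

# Easy task: the stored weight (or an in-context rule) already produces the
# right answer; the error is zero and long-term memory is untouched.
w_easy = delta_update(2.0, x=1.0, target=2.0)   # error = 0.0, weight unchanged
```

This mirrors the trade-off in the article: tasks solved effortlessly in context generate no error signal, leaving long-term weights flexible but unchanged.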
For Frank, who specializes in building biologically inspired computer models to understand human learning and decision-making, the team’s work showed how analyzing the strengths and weaknesses of different learning strategies in an artificial neural network can yield new insights into the human brain.
“Our results are robust across multiple tasks and bring together diverse aspects of human learning that neuroscientists have not previously grouped together,” Frank said.
This research also raises important issues for the development of intuitive and reliable AI tools, especially in sensitive areas like mental health.
“To build useful and reliable AI assistants, humans need to understand how these systems work and where they are similar to and different from us,” Pavlick said. “These results are a great first step.”
Funding: This research was supported by the Office of Naval Research and by the National Institute of General Medical Sciences through the Centers of Biomedical Research Excellence program.
About this AI and learning research news
Author: Kevin Stacey
Source: Brown University
Contact: Kevin Stacey – Brown University
Image: The image is credited to StackZone Neuro
Original Research: Closed access.
“Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning” by Jake Russin et al. PNAS
Abstract
Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning
Human learning exhibits a striking duality: sometimes we can quickly infer and apply rule-like principles, benefiting from a structured curriculum (for example, in formal education), while at other times we learn best incrementally, through trial and error, from a randomly ordered curriculum.
Influential psychological theories explain this seemingly contradictory behavioral evidence by suggesting two qualitatively different learning systems: one for fast, rule-based inference (e.g., in working memory) and the other for slow, incremental adaptation (e.g., in long-term and procedural memory).
It has remained unclear how such theories can be reconciled with neural networks, which learn through incremental weight updates and therefore constitute a natural model of the slow, incremental system, but are not obviously compatible with the fast, rule-based one.
However, recent work suggests that meta-trained neural networks and large language models are capable of in-context learning (ICL): the ability to flexibly infer the structure of a new task from a few examples.
Unlike standard in-weight learning (IWL), which is analogous to synaptic change, ICL is naturally linked to the activation-based dynamics thought to underlie working memory in humans.
Here we show that the interaction between ICL and IWL naturally links a wide range of human learning phenomena, including curriculum effects in category-learning tasks, compositionality, and trade-offs between flexibility and retention observed in brain and behavior.
Our work demonstrates how emergent ICL can endow neural networks with fundamentally different learning properties that can coexist with their native IWL. We thus offer an integrated perspective on dual-process theories of human cognition.

