Tiny AI Models Reveal How We Really Make Decisions

Summary: Many decisions are shaped by experimentation and learning from mistakes, yet traditional models assume that we always do what is best given our past experiences. A new study employed compact, interpretable artificial neural networks to explore real-world decision-making in humans and animals. These models revealed how choices are actually made in practice, not just how they would ideally be made. By analyzing behavioral patterns, the researchers identified the strategies individuals commonly use, and the networks offered deeper insight than traditional cognitive models. The approach marks a major step forward in understanding natural decision processes.

These models predicted individual decisions more accurately than traditional theories, thus reflecting imperfect real-world behavior. This work could transform our understanding of cognitive strategies and enable tailored interventions for mental health and behavior.

Important facts:

  • Realistic perspective: Small AI models showed that decision-making strategies are often suboptimal but systematic.
  • Individual differences: The models predicted individual behavior better than optimality-based frameworks.
  • Major effects: By mapping cognitive diversity, the findings could inform new perspectives on mental health.

Source: New York University

Researchers have long been interested in how humans and animals make decisions based on current information, focusing on trial-and-error behavior.

However, traditional approaches to understanding these behaviors may ignore some of the realities of decision-making because they assume that we make the best decisions after considering our previous experiences.
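As a concrete illustration of the kind of traditional model in question, the sketch below implements delta-rule Q-learning with a softmax choice rule for a two-armed bandit, a standard reinforcement-learning account of trial-and-error choice. The function name and parameter values are illustrative conventions, not taken from the study.

```python
import numpy as np

def q_learning_loglik(choices, rewards, alpha=0.3, beta=3.0):
    """Log-likelihood of a choice sequence under a classic Q-learning agent.

    choices: sequence of 0/1 arm selections; rewards: sequence of 0/1 outcomes.
    alpha is the learning rate, beta the softmax inverse temperature
    (standard names, illustrative values).
    """
    q = np.zeros(2)   # value estimate for each arm
    ll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax choice rule
        ll += np.log(p[c])
        q[c] += alpha * (r - q[c])                     # delta-rule update
    return ll

# Likelihood of a short, made-up choice sequence
ll = q_learning_loglik(choices=[0, 0, 1, 0], rewards=[1, 1, 0, 1])
```

Models of this family are typically fit to behavior by maximizing this log-likelihood over alpha and beta; the study's point is that such fits presuppose a particular (near-optimal) learning rule.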

A recently published study by a team of scientists uses AI in an innovative way to better understand this process.

Using small artificial neural networks, the researchers’ work sheds detailed light on the motivations behind an individual’s real-life decisions and whether these decisions are optimal or not.

“Instead of assuming that the brain must learn to improve its decisions, we developed an alternative method to explore how individual brains learn to make decisions,” says Marcelo Mattar, assistant professor in the Department of Psychology at New York University and one of the authors of the article.

“This method acts like a cognitive sleuth, uncovering the real-world decision-making processes of both humans and animals. By employing compact yet highly capable neural networks, we were able to decode complex behavioral patterns with surprising clarity. These interpretable models revealed strategic approaches to decision-making that had long been overlooked by traditional scientific frameworks.”

The study authors reported that small neural networks (simplified versions of the networks commonly used in commercial AI applications) can predict animal decisions much better than traditional cognitive models, which assume optimizing behavior, because the networks are able to detect subtle, suboptimal behavioral patterns.

In laboratory work, these predictions are as good as those of large neural networks, such as those used in commercial AI applications.
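To make the idea concrete, here is a minimal sketch (not the authors' code) of what such a tiny recurrent network might look like: a GRU with two hidden units that reads the previous choice and reward on each trial of a two-armed bandit and outputs the probability of the next choice. All weights here are random and untrained; the names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_HIDDEN = 2   # "one to four units," per the study
N_INPUT = 2    # previous choice (0/1) and previous reward (0/1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGRU:
    """A gated recurrent unit with a scalar choice readout (illustrative)."""

    def __init__(self, scale=0.5):
        self.Wz = rng.normal(0, scale, (N_HIDDEN, N_INPUT))
        self.Uz = rng.normal(0, scale, (N_HIDDEN, N_HIDDEN))
        self.Wr = rng.normal(0, scale, (N_HIDDEN, N_INPUT))
        self.Ur = rng.normal(0, scale, (N_HIDDEN, N_HIDDEN))
        self.Wh = rng.normal(0, scale, (N_HIDDEN, N_INPUT))
        self.Uh = rng.normal(0, scale, (N_HIDDEN, N_HIDDEN))
        self.w_out = rng.normal(0, scale, N_HIDDEN)

    def step(self, h, x):
        z = sigmoid(self.Wz @ x + self.Uz @ h)           # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)           # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_tilde

    def choice_probs(self, history):
        """history: list of (choice, reward) pairs; returns P(choice=1) per trial."""
        h = np.zeros(N_HIDDEN)
        probs = []
        for choice, reward in history:
            h = self.step(h, np.array([choice, reward], dtype=float))
            probs.append(sigmoid(self.w_out @ h))
        return np.array(probs)

net = TinyGRU()
p = net.choice_probs([(0, 1), (1, 0), (1, 1)])  # three made-up trials
```

In practice such a network would be trained to maximize the likelihood of the observed choice sequence, exactly as a cognitive model is fit, with the hidden state playing the role of the agent's internal variables.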

Li Ji-An, a doctoral researcher in neuroscience at the University of California San Diego, explains that one key benefit of using compact neural networks is their interpretability: these smaller models make it easier to apply mathematical tools to uncover the underlying mechanisms behind decision-making, something that would be far more challenging with the complex, large-scale networks typically used in mainstream AI systems.

New research uses small AI models to mirror human decision-making, revealing what really drives our thoughts and behaviors. Credit: StackZone Neuro

Marcus Benna, assistant professor of neurobiology at UC San Diego, highlights the predictive strength of large-scale neural networks in artificial intelligence. These expansive models are particularly adept at forecasting outcomes by processing vast datasets, making them powerful tools for pattern recognition and decision modeling, even if their internal logic often remains opaque.

Their strength lies in processing vast amounts of data to forecast outcomes with impressive accuracy—though their complexity often makes it difficult to interpret the reasoning behind those predictions.

For example, they can predict which movie you’ll want to watch next. However, it’s very difficult to briefly describe the strategies these complex machine learning models use to make their predictions, such as why they think you’ll like one movie more than another.

By using streamlined AI models to forecast animal decision-making and applying physics-based analysis to their behavior, researchers can reveal the underlying mechanisms in a way that is far more interpretable and accessible.
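The following toy example, with made-up parameters, illustrates one such physics-style analysis: scanning the autonomous dynamics of a single-unit network for fixed points. With a weak recurrent weight the unit has a single stable state, while a strong weight makes it bistable, one simple way a tiny network could hold a memory of which option was recently rewarded.

```python
import numpy as np

def fixed_points(w, b=0.0, grid=np.linspace(-1, 1, 20000)):
    """Locate fixed points of the one-unit map h -> tanh(w*h + b).

    A fixed point is where tanh(w*h + b) - h crosses zero; we detect
    sign changes on a fine grid (illustrative numerical method).
    """
    f = np.tanh(w * grid + b) - grid
    idx = np.where(f[:-1] * f[1:] < 0)[0]   # sign change between samples
    return grid[idx]

weak = fixed_points(0.5)    # weak recurrence: one attractor near zero
strong = fixed_points(2.0)  # strong recurrence: bistable (three crossings)
```

Counting and classifying such fixed points (and, in higher dimensions, their stability) is the kind of dynamical-systems reading that makes a one-to-four-unit network's strategy legible in a way a billion-parameter model's is not.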

Understanding how animals and humans learn from experience to make decisions is not only a fundamental goal of science, but also has broad applications in economics, politics, and technology.

However, because current models of this process aim to describe optimal decision-making, they often fail to capture realistic behavior.

Overall, the model described in the new Nature study is consistent with the decision-making processes of humans, nonhuman primates and laboratory mice.

Notably, the model also predicted decisions more accurately than the alternatives, better reflecting the “real” nature of decision-making, in contrast to traditional models that focus on explaining optimal decision-making.

Furthermore, the model developed by researchers at NYU and UC San Diego was able to predict decision-making at the individual level, showing how each participant uses different strategies to reach their decisions.

“Just as the study of individual differences in physical characteristics has revolutionized medicine, understanding individual differences in decision-making strategies could change our perspective on mental health and cognitive function,” Mattar concluded.

Abstract

“Understanding how humans and animals learn from experience to make flexible, goal-directed choices is central to both neuroscience and psychology.”

In this study, researchers employed small-scale recurrent neural networks to model the cognitive strategies that shape everyday decision-making. Despite their size, these minimalist models captured nuanced behavioral patterns often missed by traditional frameworks, offering a clearer, more interpretable view of how brains navigate uncertainty and learn from trial, feedback, and experience over time.

Standard modeling frameworks, such as Bayesian inference and reinforcement learning, provide valuable insights into the principles of adaptive behavior.

However, the simplicity of these frameworks often limits their ability to capture realistic biological behavior, and extending them typically requires hand-crafted adjustments that are subject to researcher subjectivity.

Here, we present a new modeling approach that uses recurrent neural networks to uncover cognitive algorithms that guide biological decision-making.

Researchers have demonstrated that ultra-compact neural networks (containing just one to four units) can outperform traditional cognitive models and rival the predictive power of much larger AI systems. In six extensively studied reward-learning tasks, these minimalist networks accurately forecast the decision-making patterns of both humans and animals, revealing that complexity isn’t always necessary for high performance.

Importantly, we can interpret trained networks using dynamical systems concepts, which allows for consistent comparison across cognitive models and reveals detailed mechanisms underlying choice behavior.

Our approach also estimates the dimensionality of behavior and provides insights into the algorithms learned by artificial-intelligence agents through meta-reinforcement learning.

Overall, we present a systematic approach to uncovering interpretable cognitive strategies in decision-making, providing insights into neural mechanisms and laying the foundation for the study of healthy and disordered cognition.
