Krishn Bera

Cognitive Science PhD Student, Brown University

Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models


Conference paper


Krishn Bera, Alexander Fengler, Michael J. Frank
Annual Meeting of the Society for Mathematical Psychology, MathPsych, 2025

Cite

APA
Bera, K., Fengler, A., & Frank, M. J. (2025). Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models. In Annual Meeting of the Society for Mathematical Psychology. MathPsych.


Chicago/Turabian
Bera, Krishn, Alexander Fengler, and Michael J. Frank. “Fast and Robust Bayesian Inference for Modular Combinations of Dynamic Learning and Decision Models.” In Annual Meeting of the Society for Mathematical Psychology. MathPsych, 2025.


MLA
Bera, Krishn, et al. “Fast and Robust Bayesian Inference for Modular Combinations of Dynamic Learning and Decision Models.” Annual Meeting of the Society for Mathematical Psychology, MathPsych, 2025.


BibTeX

@inproceedings{bera2025a,
  title = {Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models},
  year = {2025},
  publisher = {MathPsych},
  author = {Bera, Krishn and Fengler, Alexander and Frank, Michael J.},
  booktitle = {Annual Meeting of the Society for Mathematical Psychology}
}

Abstract

In cognitive neuroscience, there has been growing interest in adopting sequential sampling models (SSMs) as the choice function for reinforcement learning (RLSSM), opening new avenues for exploring generative processes that can jointly account for decision dynamics within and across trials. To date, such approaches have been limited by computational intractability, owing to the lack of closed-form likelihoods for the decision process and the expensive trial-by-trial evaluation of complex reinforcement learning (RL) processes. By combining differentiable RL likelihoods with Likelihood Approximation Networks (LANs), and leveraging gradient-based inference methods such as Hamiltonian Monte Carlo (HMC) and Variational Inference (VI), we enable fast and efficient hierarchical Bayesian estimation for a broad class of RLSSM models. Exploiting the differentiability of the RL likelihoods improves scalability and yields faster convergence with gradient-based optimizers and MCMC samplers, even for complex RL processes. To showcase the combination of these approaches, we consider the Reinforcement Learning-Working Memory (RLWM) task and its model, which comprises multiple interacting generative learning processes. This RLWM model is then combined with decision-process modules via LANs. We show that this approach can be paired with hierarchical variational inference to accurately recover the posterior parameter distributions in arbitrarily complex RLSSM paradigms. By comparison, fitting a choice-only model yields a biased estimate of the true generative process. Our method allows us to uncover a previously undescribed cognitive process within the RLWM task, whereby participants proactively adjust the boundary threshold of the choice process as a function of working memory load.
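To make the core idea concrete, below is a minimal sketch (not from the paper) of a differentiable RLSSM likelihood in JAX. A delta-rule Q-learning loop runs over trials via jax.lax.scan, the trial-wise Q-value difference sets the drift rate of the decision module, and a crude surrogate, placeholder_lan_loglik, stands in for a trained LAN. Every name here is hypothetical and the surrogate likelihood is purely illustrative; the point is that gradients flow through both the learning rule and the likelihood, so the summed log-likelihood can be handed directly to an HMC/NUTS sampler or a VI optimizer.

import jax
import jax.numpy as jnp
import numpy as np

def placeholder_lan_loglik(rt, choice, drift, threshold):
    # Stand-in for a trained Likelihood Approximation Network (LAN):
    # any differentiable map from (data, parameters) to a log-likelihood
    # works here. This Gaussian/logistic surrogate is purely illustrative.
    mean_rt = threshold / (jnp.abs(drift) + 1e-3)
    rt_ll = -0.5 * (rt - mean_rt) ** 2 - 0.5 * jnp.log(2.0 * jnp.pi)
    p_upper = jax.nn.sigmoid(drift * threshold)
    choice_ll = jnp.where(choice == 1, jnp.log(p_upper), jnp.log1p(-p_upper))
    return rt_ll + choice_ll

def rlssm_loglik(params, data):
    alpha = jax.nn.sigmoid(params["alpha"])       # learning rate in (0, 1)
    scale = params["scale"]                       # maps Q-difference to drift
    threshold = jnp.exp(params["log_threshold"])  # positive decision boundary

    def trial_step(q, trial):
        stim, choice, reward, rt = trial
        drift = scale * (q[stim, 1] - q[stim, 0])  # trial-wise drift from Q-values
        ll = placeholder_lan_loglik(rt, choice, drift, threshold)
        # Delta-rule update of the chosen option; differentiable in alpha.
        q = q.at[stim, choice].add(alpha * (reward - q[stim, choice]))
        return q, ll

    q0 = jnp.zeros((data["n_stim"], 2))
    xs = (data["stim"], data["choice"], data["reward"], data["rt"])
    _, lls = jax.lax.scan(trial_step, q0, xs)
    return jnp.sum(lls)

# Synthetic demo data: 100 trials, 4 stimuli, binary choice.
rng = np.random.default_rng(0)
data = {
    "n_stim": 4,
    "stim": jnp.asarray(rng.integers(0, 4, 100)),
    "choice": jnp.asarray(rng.integers(0, 2, 100)),
    "reward": jnp.asarray(rng.integers(0, 2, 100), dtype=jnp.float32),
    "rt": jnp.asarray(rng.uniform(0.3, 1.5, 100)),
}
params = {"alpha": 0.0, "scale": 1.0, "log_threshold": 0.0}

# The joint likelihood is differentiable end to end, so gradients are
# available for gradient-based samplers and optimizers:
print(rlssm_loglik(params, data))
print(jax.grad(rlssm_loglik)(params, data))

The same scan-based structure would extend to richer settings by adding further per-trial state (e.g., a working-memory module) and by letting parameters such as the boundary threshold depend on trial covariates like set size, in the spirit of the load-dependent threshold effect described above.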