Dear colleagues,

I would like to announce the release of pymdp 1.0.0, an open-source Python package for building and simulating active inference agents with discrete POMDP generative models.

Repository: https://github.com/infer-actively/pymdp
Documentation: https://pymdp-rtd.readthedocs.io/en/latest/
Examples: https://pymdp-rtd.readthedocs.io/en/latest/tutorials/notebooks/
Release notes: https://github.com/infer-actively/pymdp/releases/tag/v1.0.0

pymdp was originally developed as a NumPy-based library implementing core active inference routines for perception, planning, learning, and action selection. Version 1.0.0 is a substantial update that rebuilds the library around a JAX backend.

Main changes in 1.0.0 include:

- GPU/TPU-ready simulation of agents and environments
- autodifferentiable inference, planning, learning, and action selection
- JIT-compiled agent-environment loops for substantially faster execution
- straightforward batching over agents and environments via vmap()
- easier integration with JAX-native probabilistic programming tools such as NumPyro <https://github.com/pyro-ppl/numpyro> and pybefit <https://github.com/dimarkov/pybefit>

In addition to the backend rewrite, the release includes several algorithmic and modeling improvements:

- tree-search planning with sophisticated inference <https://arxiv.org/abs/2006.04120>, compatible with Monte Carlo Tree Search via DeepMind's mctx package
- inductive inference, which augments planning with backward goal-reachability constraints and is particularly useful in long-horizon, deterministic or near-deterministic settings
- exact HMM filtering and smoothing with associative scan
- optimized, differentiable implementations of marginal message passing and variational message passing
- support for sparse dependency structure in large graphical models
- more flexible many-to-many dependencies between control factors and hidden-state factors

One motivation for the JAX transition was to make
active inference models easier to integrate into modern differentiable and probabilistic workflows. We expect this to be especially useful for researchers working in computational neuroscience, cognitive modeling, and computational psychiatry, where fitting decision-making models to behavior is a common goal.

Feedback, bug reports, and contributions are very welcome.

Best wishes,
Conor Heins
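P.S. For readers less familiar with the vmap() batching mentioned above, here is a minimal generic JAX sketch of the pattern. This is not pymdp's actual API; the agent step and likelihood matrix are invented purely for illustration:

```python
import jax
import jax.numpy as jnp

def agent_step(beliefs, observation):
    """Toy belief update for one agent (stand-in, not pymdp code).

    beliefs: shape (4,) prior over 4 hidden states
    observation: scalar index into a made-up likelihood matrix
    """
    A = jnp.eye(4) * 0.7 + 0.1           # hypothetical likelihood p(o|s)
    posterior = beliefs * A[observation]  # Bayes rule, unnormalized
    return posterior / posterior.sum()

# vmap maps the single-agent step over a leading batch axis;
# jit compiles the batched update into one fused kernel.
batched_step = jax.jit(jax.vmap(agent_step))

beliefs = jnp.ones((8, 4)) / 4.0   # 8 agents, uniform priors
observations = jnp.arange(8) % 4   # one observation per agent
new_beliefs = batched_step(beliefs, observations)
print(new_beliefs.shape)  # (8, 4): one posterior per agent
```

The same two transformations compose with grad(), which is what makes whole agent-environment loops both batchable and differentiable in this style.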