Guides POMDP-style reasoning with variational free energy: beliefs over hidden states, likelihood/transition matrices, and action selection balancing epistemic vs pragmatic value. Use when reasoning under partial observability, explicit belief updates, goal vs exploration trade-offs, or when the user mentions free energy, EFE, or Active Inference patterns in code or design.
A Partially Observable Markov Decision Process (POMDP) engine that selects actions by minimising Variational Free Energy rather than maximising expected reward.
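A minimal sketch of the perception side of such an engine, under assumed conventions: `A[o, s] = P(o | s)` with column-stochastic `A`, and `B[:, :, a]` the transition matrix for action `a`. The function names are illustrative, not a confirmed API.

```python
import numpy as np

def update_belief(belief, observation, A):
    """Bayesian belief update over hidden states: posterior ∝ P(o | s) * prior."""
    posterior = A[observation, :] * belief
    return posterior / posterior.sum()

def predict_belief(belief, action, B):
    """Push the current belief through the transition model for one action."""
    return B[:, :, action] @ belief
```

Selecting actions then amounts to scoring each candidate's predicted belief against the preferences in `C` (plus an epistemic term), rather than against an external reward signal.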
| Matrix | Shape | Semantics |
|---|---|---|
| A | [O × S] | Likelihood: `P(observation \| state)` |
| B | [S × S × A] | Transition: `P(next_state \| state, action)` |
| C | [O] | Log-preferences over observations (goal encoding) |
| D | [S] | Prior belief over initial states |
All matrices are typed numpy arrays with dtype=float64; the columns of A and of each action slice `B[:, :, a]` must sum to 1 (validated at construction), since each column is a conditional distribution over observations or next states.
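The construction-time validation could look like the following sketch. The class name and error messages are assumptions drawn from the matrix table above, using the common convention that columns of A and of each `B[:, :, a]` slice are conditional distributions.

```python
import numpy as np

class GenerativeModel:
    """Hypothetical container for the A, B, C, D matrices described above."""

    def __init__(self, A, B, C, D):
        self.A = np.asarray(A, dtype=np.float64)  # [O x S] likelihood P(o | s)
        self.B = np.asarray(B, dtype=np.float64)  # [S x S x A] transition P(s' | s, a)
        self.C = np.asarray(C, dtype=np.float64)  # [O] log-preferences (need not normalise)
        self.D = np.asarray(D, dtype=np.float64)  # [S] prior over initial states
        self._validate()

    def _validate(self):
        # Each column of A is a distribution over observations for one state.
        if not np.allclose(self.A.sum(axis=0), 1.0):
            raise ValueError("columns of A must sum to 1")
        # For each action a, each column of B[:, :, a] is a distribution
        # over next states given one current state.
        if not np.allclose(self.B.sum(axis=0), 1.0):
            raise ValueError("columns of each B[:, :, a] slice must sum to 1")
        # The initial-state prior is itself a distribution.
        if not np.isclose(self.D.sum(), 1.0):
            raise ValueError("D must sum to 1")
```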
Variational Free Energy (scalar form used in implementation notes):
F = 0.5 * e^T * PI * e

where `e` is the prediction-error vector (observed minus predicted observation) and `PI` is the precision (inverse covariance) matrix weighting those errors.
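The scalar form above can be sketched directly in numpy; this assumes `e` is a 1-D prediction-error vector and `PI` a square precision matrix, as in the implementation notes.

```python
import numpy as np

def free_energy(e, PI):
    """Scalar free energy F = 0.5 * e^T * PI * e (precision-weighted squared error)."""
    e = np.asarray(e, dtype=np.float64)
    PI = np.asarray(PI, dtype=np.float64)
    return 0.5 * e @ PI @ e

# With unit precision (PI = I), F reduces to half the squared error norm:
F = free_energy([1.0, -2.0], np.eye(2))  # 0.5 * (1 + 4) = 2.5
```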