
Reactive Message Passing for BCI: How RxInfer.jl Brings Active Inference to Real Time

April 15, 2026

Active Inference has earned real traction in the BCI community — not as an academic curiosity, but as a framework that handles uncertainty, non-stationarity, and closed-loop action in ways that discriminative models struggle with. Yet a persistent gap exists between the theory and the code. Most introductions end with a set of equations and leave the reader wondering: how does this actually run on streaming EEG data at 250 Hz?

The answer is reactive message passing, and the tool that makes it practical is RxInfer.jl.

The Gap Between Free Energy and Inference

The Free Energy Principle tells you what to optimise: minimise variational free energy, which simultaneously keeps your beliefs accurate and your predictions consistent with the model. What it doesn't specify is how to do that computation efficiently — especially when the model is hierarchical, the data arrives as a continuous stream, and you need decisions in milliseconds.

Traditional variational inference approaches (mean-field, ADVI, black-box VI) treat inference as a batch optimisation problem. You collect data, you run an optimiser, you get posteriors. That cadence is incompatible with real-time BCI. You cannot pause a motor-imagery decoder to run 1000 gradient steps every time a new EEG epoch arrives.

Message passing offers a different contract: rather than optimising a global objective, you pass local messages — probability distributions or their sufficient statistics — along the edges of a factor graph that encodes your generative model. Each node updates only when its neighbours update. The result is an algorithm that is inherently incremental, naturally parallel, and amenable to online operation.
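To make the contrast concrete, here is a minimal sketch of the incremental contract in plain Python (not RxInfer code): a two-state chain where each arriving observation triggers one local, constant-cost belief update. The transition matrix and likelihood values are illustrative.

```python
import numpy as np

# Forward message passing on a two-state chain: each new observation
# triggers one local update -- no batch optimiser, no gradient steps.
T = np.array([[0.95, 0.05],   # transition factor: states are sticky
              [0.05, 0.95]])

def step(belief, likelihood):
    """One message-passing update: propagate the current belief through
    the transition factor, weight by local evidence, renormalise."""
    msg = T.T @ belief          # message from the transition factor
    post = msg * likelihood     # combine with the emission message
    return post / post.sum()

belief = np.array([0.5, 0.5])   # uninformative initial belief
for lik in (np.array([0.8, 0.2]), np.array([0.7, 0.3])):
    belief = step(belief, lik)  # constant work per arriving epoch
```

The work per epoch is fixed regardless of how much data has already streamed past, which is exactly the property a real-time decoder needs.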

Factor Graphs and Variational Message Passing

A factor graph is a bipartite graph whose nodes are either variables (latent states, parameters, observations) or factors (likelihood terms, priors, transition models). The graph directly mirrors the conditional independence structure of your generative model.

For a simple EEG decoder you might have:

  • A latent state node representing the current motor-imagery class
  • A transition factor encoding how class probabilities evolve over time
  • An emission factor mapping latent states to observed EEG features
  • An observation node receiving the current epoch's feature vector
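The four pieces above can be sketched directly, again in plain Python rather than RxInfer. Every number here (transition stickiness, per-class feature means, noise scale) is illustrative, not fitted to real EEG.

```python
import numpy as np

rng = np.random.default_rng(0)

prior = np.array([0.5, 0.5])                  # latent state node: class belief
T = np.array([[0.9, 0.1], [0.1, 0.9]])        # transition factor
means = np.array([[1.0, -1.0], [-1.0, 1.0]])  # emission factor: per-class means
sigma = 1.0

def emission_message(x):
    """Likelihood of the observed feature vector under each class
    (isotropic Gaussian emission, unnormalised)."""
    return np.exp(-np.sum((x - means) ** 2, axis=1) / (2 * sigma ** 2))

x = means[0] + 0.1 * rng.standard_normal(2)   # observation node: one epoch
post = (T.T @ prior) * emission_message(x)    # combine the three messages
post /= post.sum()                            # posterior over the class
```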

Variational Message Passing (VMP) — the algorithm implemented by RxInfer.jl — passes approximate posterior messages around this graph iteratively. Each message is computed analytically when the factor belongs to the exponential family, which covers the Gaussian, Dirichlet, Beta, and categorical distributions you will encounter in most BCI generative models.
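The exponential-family property is what makes those messages cheap. A Dirichlet prior over class probabilities, for example, absorbs categorical evidence in closed form; the sketch below shows that conjugate update with illustrative pseudo-counts.

```python
import numpy as np

# Conjugate (exponential-family) update: a Dirichlet prior over class
# probabilities absorbs categorical counts analytically -- the kind of
# closed-form message VMP exploits instead of running an optimiser.
alpha = np.array([1.0, 1.0])          # Dirichlet prior pseudo-counts
counts = np.array([7.0, 3.0])         # class assignments accumulated so far

alpha_post = alpha + counts           # the entire "inference step"
mean = alpha_post / alpha_post.sum()  # posterior expectation of class probs
```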

The critical property: once you have specified the graph, RxInfer derives the message update rules automatically. You write a model; the library writes the inference engine.

Why Reactive?

RxInfer.jl sits on top of Rocket.jl, a reactive programming framework inspired by RxJS and ReactiveX. In reactive programming, data sources are observables that emit values over time, and computations are subscriptions that react to those emissions.

This maps almost one-to-one onto streaming BCI data. Your EEG acquisition system emits epochs as an observable stream. Your RxInfer model subscribes to that stream. Each new epoch triggers a round of message passing — updating beliefs over latent states, propagating uncertainty forward through the hierarchy, and emitting a posterior over the current class label — all before the next epoch arrives.
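The observable/subscription pattern itself is small enough to sketch. The following is a toy push-based stream in plain Python that mimics the Rocket.jl shape (names and the stub "inference" step are illustrative, not the Rocket.jl API):

```python
# Minimal reactive skeleton: an observable emits epochs, a subscriber
# reacts to each emission with one inference pass.
class Observable:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def emit(self, value):
        for fn in self.subscribers:
            fn(value)

posteriors = []
epochs = Observable()
# Each arriving epoch triggers one update; the "inference" here is a stub
# standing in for a round of message passing.
epochs.subscribe(lambda epoch: posteriors.append(sum(epoch) / len(epoch)))

for epoch in ([0.2, 0.4], [0.6, 0.8]):   # acquisition system emitting epochs
    epochs.emit(epoch)
```

Nothing polls, and nothing blocks waiting for a batch: computation happens only when data arrives, which is the property that makes the pattern fit hard real-time budgets.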

The reactive model also handles asynchronous updates gracefully. In a real pipeline you rarely have a single data source: you might have EEG, EMG, gaze, and stimulus markers arriving on different clocks. Reactive message passing lets each modality update the relevant part of the factor graph independently, without forcing a global synchronisation step.

Structuring a Real-Time BCI Decoder

A minimal RxInfer BCI loop looks like this conceptually:

  1. Define the generative model — specify priors, transition dynamics, and the EEG emission model as a @model block. For a two-class motor-imagery task this might be a switching state-space model with Gaussian observations and a Dirichlet prior over class probabilities.
  2. Compile the factor graph — RxInfer's inference function inspects the model declaration and constructs the factor graph with appropriate VMP update rules for each factor type.
  3. Subscribe to the data stream — connect the observation nodes to your EEG epoch observable. Each emission triggers one inference pass.
  4. Read the posterior — subscribe to the output distribution over the latent class variable. In a closed-loop system, this posterior feeds directly into your action-selection policy (e.g., cursor control, stimulator gating).

The Nimbus toolchain is designed around this pattern. Nimbus Studio handles the pipeline scaffolding — connecting acquisition hardware, preprocessing stages, and feature extraction — so that by the time data reaches your inference model it is already in the right shape. This separation keeps the generative model itself clean and portable, while the pipeline configuration handles the real-world messiness of hardware clocks, missing packets, and session-level artefacts.

Precision, Priors, and Model Complexity

One of the practical advantages of message passing over gradient-based VI is that precision parameters — the inverse variances that weight sensory evidence against prior beliefs — emerge naturally as learnable nodes in the factor graph. You don't need a separate hyperparameter sweep; you add a Gamma prior over each precision variable and let VMP estimate it from data.
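The Gamma-over-precision pattern has a closed-form update worth seeing once. For a Gaussian with known mean and unknown precision, the Gamma posterior simply accumulates half the sample count into its shape and half the squared residuals into its rate; prior values and the synthetic residuals below are illustrative.

```python
import numpy as np

# Conjugate precision learning: with a Gamma prior on the noise precision
# of a Gaussian with known mean, the posterior is again Gamma -- the
# closed-form update that replaces a separate hyperparameter sweep.
rng = np.random.default_rng(1)
a, b = 2.0, 2.0                    # Gamma(shape, rate) prior over precision
mu = 0.0                           # assumed-known mean of the residuals

x = rng.normal(mu, 0.5, size=200)  # residuals; true precision = 1/0.25 = 4
a_post = a + len(x) / 2            # shape absorbs half the sample count
b_post = b + np.sum((x - mu) ** 2) / 2
precision_estimate = a_post / b_post   # posterior mean of the precision
```

Because the update is just accumulation, the precision estimate can be refreshed on every epoch, letting the model track impedance drift and artefact bursts as they happen.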

This matters for BCI because EEG noise is neither stationary nor Gaussian. Electrode impedance drifts, muscle artefacts arrive in bursts, and inter-session variability can shift the signal distribution substantially. A model that explicitly tracks and adapts its precision estimates degrades gracefully under these conditions, rather than silently overfitting to a stale noise estimate.

The tradeoff is model specification effort. Writing a good generative model requires thinking carefully about the causal structure of your BCI task — what latent variables are you positing, how do they evolve, and what is the emission process? This upfront investment pays off in interpretability and robustness, but it is a different skill from tuning a neural network.

Conclusion

Reactive message passing is not a niche theoretical tool — it is a practical inference engine that runs at BCI-relevant speeds, handles streaming data naturally, and makes the probabilistic structure of your decoder explicit and inspectable. RxInfer.jl lowers the barrier to using this approach by automating the derivation of update rules from a high-level model specification.

If you have been following the Active Inference series and wondering how to go from equations to a running decoder, the factor graph is your bridge. Define your generative model, let RxInfer compile the inference graph, connect it to your data stream, and you have a decoder that reasons about its own uncertainty on every epoch — without a batch optimiser in sight.

In the next post, we will walk through a concrete two-class motor-imagery example end to end, including how to initialise priors from a short calibration block and evaluate decoding performance against a Bayesian LDA baseline.

© 2026 Nimbus Studio. All rights reserved.
Nimbus BCI Inc., USA