Factor Graphs and Message Passing: The Engine Behind Real-Time Bayesian BCI

Brain-computer interfaces live and die by latency. A prosthetic limb that responds 200ms after a motor intention feels broken. A communication BCI that drifts mid-session loses a user's trust. The core algorithmic challenge isn't just accuracy — it's computing accurate probabilistic estimates fast enough to matter.
Classical Kalman filters get close, but they assume linearity and Gaussian noise. Deep learning models get accuracy, but lack calibrated uncertainty and resist online adaptation. Full Bayesian inference is theoretically ideal, but solving it naively — inverting large covariance matrices, computing exact posteriors — is far too expensive for real-time pipelines.
The solution that underpins modern probabilistic BCI engines, including RxInfer and the Nimbus stack, is message passing on factor graphs. If you've heard of it but found the literature impenetrable, this post is for you.
What Is a Factor Graph?
A factor graph is a way of representing a joint probability distribution by breaking it into a product of simpler, local functions called factors. The graph is bipartite: one set of nodes represents random variables (e.g., the user's intended motor command, the current brain state, observed EEG features), and the other set represents factors — the local probability functions that connect them.
For a typical motor-imagery BCI pipeline, a factor graph might encode:
- A prior over latent neural states
- A likelihood linking EEG covariance features to mental states
- A transition model describing how brain states evolve between time steps
- Observation noise at each EEG channel
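To make the bipartite structure concrete, here is a minimal Python sketch of a two-variable factor graph. The variable names, table values, and factor names are invented for illustration; this is not Nimbus or RxInfer code, just the underlying idea that a joint distribution is a product of local tables:

```python
import numpy as np

# Toy bipartite factor graph: p(s, y) = prior(s) * lik(s, y).
# s = latent mental state (2 values), y = discretized EEG feature (3 values).
variables = {"s": 2, "y": 3}

factors = {
    "prior": (("s",), np.array([0.7, 0.3])),               # p(s)
    "lik":   (("s", "y"), np.array([[0.6, 0.3, 0.1],
                                    [0.1, 0.3, 0.6]])),    # p(y | s)
}

def joint(s, y):
    """Evaluate the joint as the product of all local factors."""
    p = 1.0
    for vars_, table in factors.values():
        idx = tuple({"s": s, "y": y}[v] for v in vars_)
        p *= table[idx]
    return p

# The joint factorizes: p(s=0, y=2) = prior[0] * lik[0, 2] = 0.7 * 0.1
assert abs(joint(0, 2) - 0.07) < 1e-12
```

Nothing here is specific to BCI; the point is that every factor only touches the variables on its own edges, which is exactly what makes local message passing possible.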
The graph makes the statistical structure of your model explicit and executable. Each edge in the graph represents a probabilistic dependency. Nothing is hidden in opaque matrix algebra.
Belief Propagation: Inference as Local Message Passing
Once you have a factor graph, inference reduces to a beautifully simple algorithm: belief propagation (BP), also called the sum-product algorithm.
The idea is that instead of computing a global posterior all at once — which requires integrating over every variable jointly — you pass local messages between neighboring nodes in the graph. Each message is a function that summarizes what one part of the graph believes about a shared variable.
In a tree-structured graph, one forward and one backward sweep suffice: the resulting messages yield the exact marginal posterior of every variable, at a cost linear in the number of edges. Graphs with cycles require approximate variants, such as loopy BP or variational message passing, which iterate the same local updates until the messages stabilize.
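A worked sketch in Python makes the "local messages" idea tangible. On a three-variable chain, the belief at the middle variable is just the product of two incoming messages, each of which marginalizes out one side of the graph (the factor tables below are random, purely for illustration):

```python
import numpy as np

# Sum-product on a 3-variable chain:  a -- f(a,b) -- b -- g(b,c) -- c.
# All variables are binary; the factor tables are arbitrary non-negative values.
rng = np.random.default_rng(0)
f = rng.random((2, 2))      # factor linking a and b
g = rng.random((2, 2))      # factor linking b and c

# Each message marginalizes out everything on the sender's side of the edge.
msg_a_to_b = f.sum(axis=0)  # sum over a of f(a, b)
msg_c_to_b = g.sum(axis=1)  # sum over c of g(b, c)

# Belief at b: product of incoming messages, normalized.
belief_b = msg_a_to_b * msg_c_to_b
belief_b /= belief_b.sum()

# Brute-force check: marginalize the full joint f(a,b) * g(b,c) over a and c.
joint = np.einsum("ab,bc->abc", f, g)
marg_b = joint.sum(axis=(0, 2))
marg_b /= marg_b.sum()

assert np.allclose(belief_b, marg_b)
```

The message-passing route never materializes the joint table, which is why the same scheme stays tractable when the chain has thousands of time steps instead of three variables.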
For BCI pipelines, this has a critical implication: you pay only for the parts of the model that change at each timestep. Stable priors don't need to be recomputed. Observations only propagate through the subgraph they touch. This is exactly the kind of structured computation that makes sub-20ms inference feasible.
Why Factor Graphs Are a Natural Fit for Neural Decoding
EEG and intracortical signals are noisy, high-dimensional, and non-stationary. The brain is not a fixed encoder — it drifts across sessions, adapts to feedback, and varies with fatigue and attention. Any serious BCI classifier has to model this uncertainty explicitly.
Factor graphs are well-suited to this problem for three reasons:
- Modularity. You can swap out individual factors — change the observation likelihood, update the transition model — without redesigning the entire inference algorithm. This maps directly to the iterative pipeline design that BCI research demands.
- Temporal structure. Sequential decoding (e.g., decoding a stream of EEG epochs in a P300 speller) maps naturally onto dynamic Bayesian networks represented as factor graphs unrolled over time. On a linear-Gaussian state-space model, belief propagation through this structure reduces exactly to Kalman filtering and smoothing; with other factors, it generalizes them.
- Calibrated uncertainty. Because message passing computes full marginal distributions — not point estimates — every prediction comes with a confidence score. You always know how sure the model is, which is essential for safe assistive technology.
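The temporal-structure point can be shown in a few lines: forward sum-product on a scalar linear-Gaussian chain is the Kalman filter, with each message a Gaussian summarizing all evidence upstream. The model parameters and observations below are made up for illustration:

```python
# Forward message passing on a linear-Gaussian chain = the Kalman filter.
# Model (illustrative numbers): x_t = A * x_{t-1} + w,  y_t = H * x_t + v.
A, Q = 1.0, 0.1     # transition factor: slowly drifting latent state
H, R = 1.0, 0.5     # observation factor: noisy scalar feature

def kalman_step(mean, var, y):
    """One forward message: predict through the transition factor,
    then absorb the observation factor's evidence."""
    m_pred, v_pred = A * mean, A * A * var + Q       # predict
    K = v_pred * H / (H * H * v_pred + R)            # Kalman gain
    mean = m_pred + K * (y - H * m_pred)             # correct
    var = (1.0 - K * H) * v_pred
    return mean, var

mean, var = 0.0, 1.0            # prior factor on x_0
for y in [0.9, 1.1, 1.0, 1.2]:  # streaming observations
    mean, var = kalman_step(mean, var, y)
```

Note that each step only touches the current timestep's factors; the marginal variance `var` is the calibrated uncertainty that accompanies every estimate, and it shrinks as evidence accumulates.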
How RxInfer Automates All of This
RxInfer.jl is a Julia package for reactive, streaming Bayesian inference via message passing. Instead of requiring engineers to derive and implement message-passing schedules by hand, RxInfer takes a model specified in a probabilistic programming language, constructs the corresponding factor graph automatically, and runs inference using a library of pre-derived message update rules.
This is the core technology behind the Nimbus engine. When Nimbus processes EEG signals and outputs a calibrated prediction — with full uncertainty — in under 20ms, it's because RxInfer is running structured message passing on a factor graph, not brute-force numerical integration.
The reactive design also means RxInfer handles streaming data natively. New observations trigger local message updates, not global recomputation. This is what makes it suitable for real-time BCI at production latency.
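To illustrate the streaming flavor (in plain Python, not RxInfer's actual API), consider estimating an unknown mean from a stream of noisy observations. In natural parameters, absorbing each new likelihood message is a constant-time addition, and nothing upstream is ever recomputed; the running belief still matches the batch posterior exactly:

```python
import numpy as np

# Streaming conjugate update: mu ~ N(0, 1), observations y_i ~ N(mu, sigma2).
sigma2 = 0.5

# Gaussian beliefs in natural parameters (precision, precision * mean),
# so each incoming message is absorbed with two additions.
prec, prec_mean = 1.0, 0.0      # prior message N(0, 1)

stream = [0.4, 0.6, 0.5, 0.7]
for y in stream:                # each arrival triggers one local update
    prec += 1.0 / sigma2
    prec_mean += y / sigma2

posterior_mean = prec_mean / prec
posterior_var = 1.0 / prec

# Same answer as the closed-form batch posterior over all data at once.
n, ybar = len(stream), float(np.mean(stream))
batch_prec = 1.0 + n / sigma2
assert np.isclose(posterior_mean, (n * ybar / sigma2) / batch_prec)
assert np.isclose(posterior_var, 1.0 / batch_prec)
```

This is the simplest possible case, but the principle scales: in a larger factor graph, a new observation only refreshes the messages along the edges it touches.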
From Theory to Pipeline with Nimbus Studio
Understanding factor graphs is one thing. Implementing a working BCI pipeline with them is another. Nimbus Studio bridges that gap.
In Studio, the visual pipeline canvas is a factor graph editor, in spirit: you connect preprocessing nodes (bandpass filter → CSP → feature extraction) to classification nodes (NimbusLDA, NimbusSTS) to output nodes (confidence score → decision logic). Each connection encodes a probabilistic dependency. When you change a node's configuration, Studio propagates that change through the downstream pipeline automatically.
The NimbusSDK models — NimbusLDA, NimbusQDA, NimbusSTS — are themselves factor-graph-based classifiers compiled to efficient message-passing schedules. NimbusSTS, in particular, is a Bayesian structural time-series model designed for long sessions where neural drift is unavoidable. It uses EKF-style inference to keep its internal state beliefs up to date as the session progresses — a direct application of online message passing.
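The drift-tracking idea behind EKF-style inference can be sketched in a few lines. This is an illustrative scalar example, not the actual NimbusSTS implementation: the state follows a random walk, the observation model is nonlinear, and each update linearizes locally before doing a standard Gaussian correction:

```python
import math

# EKF-style online update for a drifting latent state (illustrative sketch).
# State: x_t = x_{t-1} + w, w ~ N(0, Q)  (random-walk drift).
# Nonlinear observation: y_t = tanh(x_t) + v, v ~ N(0, R).
Q, R = 0.01, 0.2

def h(x):
    return math.tanh(x)

def h_prime(x):
    return 1.0 - math.tanh(x) ** 2

def ekf_step(mean, var, y):
    """Predict through the random walk, linearize the observation model
    at the predicted mean, then apply a Gaussian (Kalman) correction."""
    m_pred, v_pred = mean, var + Q          # random-walk predict
    Hj = h_prime(m_pred)                    # local linearization
    K = v_pred * Hj / (Hj * Hj * v_pred + R)
    mean = m_pred + K * (y - h(m_pred))
    var = (1.0 - K * Hj) * v_pred
    return mean, var

mean, var = 0.0, 1.0
for y in [0.2, 0.3, 0.35, 0.4]:   # slowly drifting session observations
    mean, var = ekf_step(mean, var, y)
```

Because the update is constant-time per observation, the belief tracks slow neural drift throughout a session without ever reprocessing earlier data.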
Conclusion
Factor graphs and message passing are not just theoretical tools from a statistics textbook. They are the architectural pattern that makes real-time, uncertainty-aware Bayesian inference tractable in production BCI systems. By decomposing inference into local, parallelizable computations, they let you build models that are fast enough for millisecond-level neural decoding and expressive enough to capture the full complexity of the brain's variability.
If you're coming from a classical ML background, the mental shift is significant — but the payoff is equally large. You get calibrated confidence, principled adaptation, and a modular model structure that survives contact with real neural data. Nimbus Studio and RxInfer are designed to make that shift as frictionless as possible, so you can spend less time on inference infrastructure and more time on the actual neuroscience.
Interested in trying probabilistic BCI pipelines yourself? Explore Nimbus Studio and get early access.