
Active Inference for Closed-Loop BCI: The Self-Correcting Architecture

March 10, 2026

[Figure: Closing the Loop]

Most BCI pipelines share the same fundamental architecture: record → preprocess → decode → act. It's clean, it's fast, and it works — until it doesn't. EEG signals drift over a session. The user's mental state shifts. The classifier you trained at 10 am is a stranger by 3 pm.

Active Inference offers a fundamentally different architecture. Instead of a one-shot decoder, it proposes a generative model that continuously predicts what the brain should be doing, compares that prediction against what it is doing, and updates both its model and its actions to minimise the gap. The result is a BCI that doesn't just read the brain — it converses with it.

This post builds on our earlier Introduction to Active Inference for BCI and Free Energy Principle explainer to show what a closed-loop Active Inference architecture looks like in practice, and how Nimbus's probabilistic stack maps onto each component.


The Problem with Decode-and-Execute

Conventional BCI classifiers are trained offline on a static dataset and then frozen at deployment. The implicit assumption is that the relationship between neural signals and intended actions stays constant. It doesn't.

Neural non-stationarity — the slow drift of EEG statistics over time — is well documented. Electrode impedance changes, fatigue shifts spectral power, and even subtle head movements alter spatial filters. In practice, accuracy can degrade substantially over a session without any change in the user's intent.

The standard response is periodic recalibration: pause, collect new data, retrain. For a researcher, that's an inconvenience. For a locked-in patient depending on the device to communicate, it's a system failure.

Active Inference reframes the problem. Rather than asking "what class does this signal belong to?" it asks "what generative process is most consistent with the signal I just observed, given everything I know so far?" That framing keeps the model alive and updating throughout the session.


The Perception-Action Loop in a BCI Context

Active Inference formalises perception and action as two sides of the same coin: both are mechanisms for minimising variational free energy — a tractable upper bound on the surprise of observed data under the agent's generative model.
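In symbols (standard variational notation, not anything Nimbus-specific): with latent states s, observations o, generative model p(o, s), and approximate posterior q(s), the free energy decomposes as

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0} - \ln p(o)
  \;\ge\; -\ln p(o).
```

Because the KL term is non-negative, F upper-bounds the surprise -ln p(o); driving F down therefore makes observations less surprising under the model, whether by updating beliefs (perception) or by changing the data through behaviour (action).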

In a BCI context, the loop looks like this:

  1. Predict — the generative model emits a prior expectation over incoming EEG features (e.g., expected mu-rhythm suppression for a left-hand motor imagery trial).
  2. Observe — the preprocessing pipeline delivers actual EEG features to the model.
  3. Infer — the model computes a prediction error: the mismatch between expected and observed features. This propagates upward through the model hierarchy (predictive coding).
  4. Update — the model updates its posterior beliefs about the latent state (e.g., "the user is imagining left-hand movement with 87% confidence").
  5. Act — the system emits a control signal and adjusts the generative model's parameters to reduce future prediction error.

Step 5 is the key difference from a standard Bayesian classifier. The model doesn't just decode — it actively refines itself to get better at predicting the next observation. Over a session, it learns the user's current signal statistics without ever pausing for explicit recalibration.
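As a concrete (and heavily simplified) sketch, here is the five-step loop for a single scalar EEG feature, using a conjugate Gaussian belief update. The class name, parameter values, and the variance re-inflation rule in step 5 are illustrative assumptions, not the Nimbus API:

```python
import numpy as np

class ActiveInferenceDecoder:
    """Minimal 1-D sketch of the predict-observe-infer-update-act loop.

    The latent state is the expected mu-band log power for the current
    trial; observations are noisy feature values from preprocessing.
    """

    def __init__(self, prior_mean=0.0, prior_var=1.0, obs_var=0.5, lr=0.1):
        self.mean = prior_mean   # posterior belief about the latent feature
        self.var = prior_var     # posterior uncertainty
        self.obs_var = obs_var   # assumed observation noise
        self.lr = lr             # rate of model refinement (step 5)

    def step(self, observation):
        # 1. Predict: the prior expectation is the current belief (self.mean).
        # 2. Observe: `observation` arrives from the preprocessing pipeline.
        # 3. Infer: prediction error = observed minus expected.
        error = observation - self.mean
        # 4. Update: conjugate Gaussian posterior update.
        gain = self.var / (self.var + self.obs_var)
        self.mean += gain * error
        self.var *= (1.0 - gain)
        # 5. Act/refine: slowly re-inflate variance in proportion to recent
        #    error so the model keeps adapting to drift instead of freezing
        #    (an illustrative choice, not a prescribed rule).
        self.var += self.lr * error ** 2 * 0.01
        return self.mean, error

decoder = ActiveInferenceDecoder()
for obs in [1.0, 1.1, 0.9, 1.05]:
    belief, err = decoder.step(obs)
```

After a few trials the belief has moved most of the way from the prior (0.0) toward the observed statistics (~1.0), without any explicit recalibration step.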


Predictive Coding as a Neural Decoder

Predictive coding — the hierarchical implementation of Active Inference — maps cleanly onto the layered structure of EEG analysis.

Consider a two-level hierarchy:

  • Lower level: models raw EEG features (band power, spatial filter outputs). Prediction errors here are fast and reactive — they track moment-to-moment signal changes.
  • Higher level: models latent cognitive states (motor imagery class, attention level, fatigue). Prediction errors here are slower and contextual — they update the model's long-horizon beliefs about the user.

This mirrors the architecture of NimbusSTS (our Bayesian Structural Time Series model), which maintains a hidden state vector that evolves over time and propagates uncertainty upward. In our adaptive Bayesian models post, we showed how NimbusSTS outperforms static classifiers in long sessions — predictive coding explains why: it's doing exactly this two-level prediction-error minimisation implicitly.

In a full Active Inference implementation, you'd make the hierarchy explicit, allowing higher-level beliefs ("this user tends to drift toward lower alpha power after 20 minutes") to shape lower-level predictions before each trial.
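A toy version of this two-level scheme is easy to write down. The gain constants `k_low` and `k_high`, and the additive split between levels, are illustrative assumptions with nothing Nimbus-specific about them; the point is only that the fast level absorbs moment-to-moment error while the slow level tracks the residual baseline:

```python
def hierarchical_update(high_mean, low_mean, obs, k_low=0.5, k_high=0.05):
    """Two-level predictive-coding update (illustrative sketch).

    Lower level: fast belief about the current feature value.
    Higher level: slow belief about the session-wide baseline; it is
    pulled only by the residual error the lower level cannot explain.
    """
    # The observation is predicted as the sum of both levels' beliefs.
    prediction = low_mean + high_mean
    err_low = obs - prediction           # fast prediction error
    low_mean += k_low * err_low          # reactive, large-gain update
    # The higher level sees only the slower residual error.
    err_high = obs - (low_mean + high_mean)
    high_mean += k_high * err_high       # contextual, small-gain update
    return high_mean, low_mean

# Simulate a session-wide shift of +2.0 in the observed feature.
h, l = 0.0, 0.0
for _ in range(50):
    h, l = hierarchical_update(h, l, obs=2.0)
```

The combined belief converges to the shifted observation, with the fast level carrying most of the adjustment and the slow level accumulating a small, persistent baseline correction.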


Building a Closed-Loop Pipeline in Nimbus Studio

Nimbus Studio's node-based pipeline editor is well-suited to closed-loop architectures because the data flow is bidirectional by design: preprocessing nodes can receive feedback from classifier nodes, and classifier nodes can update their priors based on downstream signals.

A minimal closed-loop Active Inference pipeline in Studio looks like this:

[EEG Stream] → [Bandpass + CSP] → [NimbusSTS Decoder]
                    ↑                        |
              [Prior Update] ← [Prediction Error Node]

The Prediction Error Node computes the signed difference between the NimbusSTS model's predicted feature distribution and the observed features. The Prior Update node feeds this back to the spatial filter stage, nudging the CSP weights toward the current session's covariance structure.
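One plausible implementation of that feedback step is an exponential moving average over trial covariances, blending the calibration-time estimate with each new trial's statistics. The function name, signature, and blending rule below are assumptions for illustration, not the actual Studio node API:

```python
import numpy as np

def update_session_covariance(cov_prior, trial_data, alpha=0.1):
    """Nudge a spatial-filter covariance estimate toward the current
    session's statistics (illustrative sketch).

    cov_prior  : (n_channels, n_channels) covariance from calibration
    trial_data : (n_channels, n_samples) EEG for the latest trial
    alpha      : blending weight given to the new trial
    """
    trial_cov = np.cov(trial_data)
    # Exponential moving average keeps the spatial filters aligned with
    # the session's drifting covariance structure.
    return (1.0 - alpha) * cov_prior + alpha * trial_cov

# Tiny worked example: identity prior, 2-channel trial with unit-mean-free
# rows, blended with alpha = 0.3.
prior = np.eye(2)
trial = np.array([[1., -1.,  1., -1.],
                  [1.,  1., -1., -1.]])
updated = update_session_covariance(prior, trial, alpha=0.3)
```

Recomputing the CSP decomposition from the updated covariance each trial (or every few trials) is what "nudges" the spatial filters without a recalibration pause.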

This loop runs every trial — or even every 250 ms in a continuous-control paradigm — requiring no explicit recalibration window. In our internal benchmarks on MOABB (the Mother of All BCI Benchmarks suite, which aggregates dozens of publicly available EEG datasets), this architecture materially reduces session-end accuracy degradation compared to a frozen NimbusLDA baseline.

You can scaffold this pipeline from a template in Nimbus Studio at nimbusbci.com/studio.


Conclusion

The decode-and-execute model treats the brain as a static signal source. Active Inference treats it as a dynamic partner in a continuous conversation — one where both sides are constantly updating their model of the other.

For BCI engineers, the practical payoff is significant: longer sessions without recalibration, more robust performance under signal drift, and a principled framework for incorporating prior knowledge about the user. The theoretical payoff is equally compelling: a single mathematical objective (free energy minimisation) unifies perception, learning, and action in a way that standard classifier pipelines simply don't.

The Nimbus stack — NimbusSTS for stateful inference, RxInfer for efficient message passing, and Nimbus Studio for visual closed-loop pipeline design — is built with exactly this architecture in mind. If you're ready to move beyond decode-and-execute, get early access to Nimbus Studio and try the closed-loop template.


Next in this series: a deep dive into variational message passing with RxInfer.jl and how it compares to standard variational inference for real-time BCI workloads.
