Implementing Closed-Loop Active Inference BCI Control in Nimbus Studio

If you already buy the premise of closed-loop Active Inference, the remaining question is practical: what does the loop look like when you implement it as a real-time graph in Nimbus Studio?
This post is intentionally execution-focused. We’ll keep theory to a minimum and spend the time where most teams get stuck: wiring the update signals, choosing what gets fed back, and making the loop stable enough to deploy.
What you need before you start
- A working open-loop pipeline in Studio (stream → preprocessing → decoder → output)
- One measurable post-action signal you can route back (more on options below)
- A decoder node that supports stateful updating (NimbusSTS is the simplest starting point)
The one thing that makes it “closed-loop”
In Studio terms, “closed-loop” means you add at least one feedback edge: some signal downstream of the decoder (the “action” and its outcome) is routed back as an input that updates the decoder’s state.
If you don’t have that feedback edge, you still have a real-time pipeline — but it’s open-loop.
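To make the distinction concrete, here is a toy numeric sketch in plain NumPy (not Studio's API; all names are illustrative). The decoder is a fixed linear readout, and the only difference between the two variants is whether the post-action error is routed back into the decoder state:

```python
import numpy as np

def decode(state, features):
    # Linear readout: the decoder's "perception" step.
    return float(state @ features)

def act_and_observe(command):
    # Toy "environment": the outcome is the signed cursor error
    # relative to a fixed target of 1.0.
    return command - 1.0

features = np.full(4, 0.25)
state = np.zeros(4)

# Open-loop: decode and act, but the state is never updated.
open_loop_out = decode(state, features)

# Closed-loop: the outcome is routed back to update the decoder state.
lr = 0.5
for _ in range(20):
    command = decode(state, features)
    error = act_and_observe(command)   # post-action signal
    state -= lr * error * features     # the feedback edge

closed_loop_out = decode(state, features)
```

After twenty iterations the closed-loop variant has converged close to the target, while the open-loop variant is still wherever it started. That gap is the entire point of the feedback edge.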
What you can feed back (practical options)
Pick one feedback signal you can reliably compute online:
- Outcome / reward proxy: task success, cursor error, hit/miss, response time
- Signal quality: artifact score, impedance proxy, EMG contamination score
- Post-action neural delta: change in bandpower or ERP amplitude after feedback/stimulus
- Confidence gating: the model’s own posterior confidence (used to modulate action or learning rate)
The key is to choose something that is (1) causally downstream of the action — it changes because the action happened, not merely after it — and (2) stable enough that you aren’t just injecting noise back into the model.
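As an example of the last option, confidence gating can be as simple as scaling the learning rate by the decoder's own posterior confidence and suppressing updates entirely below a threshold. A minimal sketch — the function name, threshold, and shapes here are hypothetical, not a Studio API:

```python
import numpy as np

def gated_learning_rate(posterior, base_lr=0.1, floor=0.55):
    """Scale the update step by the decoder's own confidence.

    posterior: class probabilities from the decoder (hypothetical shape).
    Below `floor` confidence the update is suppressed entirely, so
    low-confidence trials do not inject noise back into the model.
    """
    confidence = float(np.max(posterior))
    if confidence < floor:
        return 0.0
    # Linearly ramp from 0 at the floor up to base_lr at full confidence.
    return base_lr * (confidence - floor) / (1.0 - floor)
```

A near-chance posterior like `[0.5, 0.5]` yields a learning rate of zero, while a confident `[1.0, 0.0]` yields the full `base_lr`. The same gate can modulate action selection instead of learning, depending on where you attach it in the graph.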
Scaffolding a Closed-Loop Pipeline in Nimbus Studio
Building a closed-loop pipeline from scratch involves coordinating preprocessing, model inference, feedback rendering, and belief updating in a single real-time loop. Traditionally, this required custom code across multiple frameworks — a task measured in days or weeks.
Nimbus Studio reduces this to a visual pipeline design task. Here is a representative closed-loop workflow:
- Signal ingestion: Connect a BrainFlow-compatible device (OpenBCI, g.tec, Muse, etc.) via the hardware node. Studio handles driver abstraction and timestamped streaming.
- Preprocessing: Add bandpass and notch filter nodes, then a CSP or FBCSP spatial filter node. All parameters are configurable without touching code.
- Adaptive model: Drop in a NimbusSTS node. Configure the state dimension to match your feature space, set the process noise covariance, and enable the EKF update step. NimbusSTS will track signal statistics across the session in real time.
- Feedback node: Connect the model output to a feedback renderer (visual, auditory, or via a REST endpoint). This is the "action" in the perception-action loop.
- Belief update input: Route the feedback outcome — or any observable post-action signal change — back into the NimbusSTS node as an additional observation. This closes the loop.
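NimbusSTS internals aren't shown here, but the predict/update pattern the pipeline relies on can be sketched with a plain linear Kalman filter: the tracker observes features, emits a command, and the measured post-action error re-enters as a second observation. Everything below is a generic stand-in under those assumptions — not NimbusSTS itself, and the noise scales are arbitrary:

```python
import numpy as np

class LinearStateTracker:
    """Stand-in for an adaptive decoder node (NOT NimbusSTS):
    a plain Kalman filter with a random-walk state model."""

    def __init__(self, dim, process_noise=1e-3, obs_noise=1e-1):
        self.x = np.zeros(dim)             # state estimate
        self.P = np.eye(dim)               # state covariance
        self.Q = process_noise * np.eye(dim)
        self.r = obs_noise                 # scalar observation noise

    def update(self, h, y):
        """One scalar Kalman step: observe y ~ h @ x + noise."""
        self.P = self.P + self.Q           # predict (random walk)
        s = h @ self.P @ h + self.r        # innovation variance
        k = self.P @ h / s                 # Kalman gain
        self.x = self.x + k * (y - h @ self.x)
        self.P = self.P - np.outer(k, h @ self.P)

tracker = LinearStateTracker(dim=3)
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])        # unknown "intent" mapping

for _ in range(300):
    feat = rng.normal(size=3)              # preprocessed feature vector
    intent = float(true_w @ feat)          # ground-truth user intent
    noisy_obs = intent + rng.normal(scale=0.5)
    tracker.update(feat, noisy_obs)        # decoder observation
    command = float(tracker.x @ feat)      # the "action"
    cursor_error = intent - command + rng.normal(scale=0.1)
    tracker.update(feat, command + cursor_error)  # feedback edge closes the loop
```

The second `update` call is the whole story: the post-action outcome (here, a measured cursor error) is treated as one more observation of the same latent state, so the decoder keeps adapting for as long as the session runs.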
The full pipeline can be scaffolded in minutes and exported to clean Python code at any point for further customization or deployment. Reproducibility is built in: every parameter and connection is saved and shareable with one click.
Conclusion
Closed-loop BCI is not a new idea, but it has historically been hard to implement rigorously. Active Inference provides the theoretical grounding that turns closed-loop control from an engineering workaround into a principled inference architecture — one where perception and action are coupled by design, and adaptation is continuous rather than episodic.
For ML and BCI engineers, the practical implication is this: you do not need to choose between a fast open-loop decoder and a slow adaptive system. With the right generative model and tooling, you get both. NimbusSTS running inside a Nimbus Studio pipeline is a concrete starting point — real-time, adaptive, and ready to deploy to hardware without a code rewrite.
In the next post, we will go deeper into expected free energy and show how it can be used to implement active sensing — where the BCI system itself selects stimuli or feedback to reduce its own uncertainty about the user's intent.