
From EEG to Action: Building Your First Real-Time BCI Pipeline with Nimbus Studio

March 8, 2026

If you've worked with EEG data before, you know the pipeline tax. Before you can test a single hypothesis, you've already written hundreds of lines of glue code. Signal acquisition, bandpass filtering, artifact rejection, feature extraction, model training, real-time inference — each step is a mini-project, and none of them talk to each other cleanly.



This post walks you through building a complete, real-time BCI pipeline: from raw EEG all the way to live, hardware-deployed predictions. We'll use Nimbus Studio's visual pipeline builder and the NimbusSDK's Bayesian classifiers. By the end, you'll have a reproducible pipeline that trains offline and deploys live — with zero rewrites between the two modes.

What "Real-Time" Actually Means in BCI

Real-time in BCI is a spectrum. For offline research pipelines, "real-time" often means fast enough to process a trial within the inter-stimulus interval. For assistive technology or motor imagery control, it means sub-20 ms latency — the difference between fluid control and a system that feels broken.

The Nimbus engine is designed to target under 20 ms end-to-end latency, versus the 200 ms+ typical of standard Python or MATLAB stacks (exact figures depend on hardware and pipeline design). Most of this headroom comes from reactive message passing under the hood, which performs incremental Bayesian updates rather than recomputing over the full batch.

This distinction matters architecturally. A batch-first pipeline can't simply be "sped up" into a real-time one — it needs to be redesigned from the ground up. Nimbus Studio's pipeline model is streaming-first by default, so offline training and online deployment run the same computational graph.
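To make the incremental-versus-batch distinction concrete, here is a minimal numpy sketch (not the Nimbus engine itself) of a conjugate Gaussian model. Because the model is conjugate, each arriving sample refines the posterior in constant time, and the streaming result matches the batch computation exactly:

```python
import numpy as np

def batch_posterior(x, prior_mean, prior_prec, noise_prec):
    # Conjugate Gaussian: posterior over the mean given all data at once.
    n = len(x)
    prec = prior_prec + n * noise_prec
    mean = (prior_prec * prior_mean + noise_prec * x.sum()) / prec
    return mean, prec

def incremental_posterior(x, prior_mean, prior_prec, noise_prec):
    # Same model, but each sample updates the posterior as it arrives:
    # the previous posterior becomes the next prior. O(1) work per sample.
    mean, prec = prior_mean, prior_prec
    for xi in x:
        new_prec = prec + noise_prec
        mean = (prec * mean + noise_prec * xi) / new_prec
        prec = new_prec
    return mean, prec

rng = np.random.default_rng(0)
x = rng.normal(1.5, 1.0, size=200)
mb, pb = batch_posterior(x, 0.0, 1.0, 1.0)
mi, pi = incremental_posterior(x, 0.0, 1.0, 1.0)
# Streaming matches batch exactly, without ever revisiting old samples.
assert np.isclose(mb, mi) and np.isclose(pb, pi)
```

A batch pipeline recomputes the first function over the whole recording on every update; a streaming pipeline only ever runs the one-sample step.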

Designing Your Pipeline in the Visual Builder

Open Nimbus Studio and create a new pipeline. The canvas shows a directed graph of processing nodes. Add them left to right:

  1. Data Source — connect a public dataset (MOABB includes dozens of BCI datasets) or stream from hardware via BrainFlow
  2. Bandpass Filter — 8–30 Hz for motor imagery; Nimbus pre-fills sensible defaults per paradigm
  3. Epoch Extractor — window around stimulus events with configurable length and baseline correction
  4. CSP (Common Spatial Patterns) — spatial filter that maximizes variance differences between classes
  5. NimbusLDA Classifier — Bayesian LDA with shared covariance estimation and built-in confidence scores

Each node exposes its parameters inline. Change a value and the downstream preview updates immediately — no file saves, no re-runs, no waiting.

The visual builder enforces type compatibility between nodes. If you try to pipe raw continuous EEG into a classifier that expects epoched data, the edge turns red and validation catches it before you run anything. This alone eliminates a class of silent runtime bugs that typically cost hours to debug in code.
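For readers who want to see roughly what this node graph corresponds to in code, here is an offline sketch of steps 2-5 using only scipy and scikit-learn on synthetic data. The minimal CSP implementation and all parameters are illustrative, not Nimbus internals:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic "epochs": 60 trials x 8 channels x 250 samples (1 s at 250 Hz).
rng = np.random.default_rng(42)
X = rng.standard_normal((60, 8, 250))
y = np.repeat([0, 1], 30)
X[y == 1, 0] *= 3.0           # class 1 has extra variance on channel 0

# Bandpass 8-30 Hz (the motor-imagery mu/beta band)
sos = butter(4, [8, 30], btype="bandpass", fs=250, output="sos")
X = sosfiltfilt(sos, X, axis=-1)

# CSP: generalized eigendecomposition of the per-class covariances
def class_cov(epochs):
    return np.mean([e @ e.T / e.shape[1] for e in epochs], axis=0)

C0, C1 = class_cov(X[y == 0]), class_cov(X[y == 1])
_, W = eigh(C0, C0 + C1)      # eigenvectors sorted by eigenvalue
W = W[:, [0, 1, -2, -1]].T    # keep the 4 most discriminative filters

# Feature extraction: log-variance of the spatially filtered signal
feats = np.log(np.var(np.einsum("fc,ect->eft", W, X), axis=-1))

# Classify (plain LDA here; NimbusLDA adds the Bayesian posterior layer)
clf = LinearDiscriminantAnalysis().fit(feats, y)
print("training accuracy:", clf.score(feats, y))
```

Every line of this glue code is a place where a shape mismatch or a wrong axis can fail silently — exactly the class of bug the typed node graph catches up front.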

Bayesian Classification: Getting Confidence for Free

Traditional BCI pipelines output a hard label: the classifier says "left hand" or "right hand," full stop. But for a robotic arm, a wheelchair, or a communication device, a prediction made with 51% confidence should be treated very differently from one made with 99% confidence.

NimbusLDA is a Bayesian Linear Discriminant Analysis model. It estimates a posterior distribution over class labels rather than a point prediction. This gives you:

  • Calibrated confidence scores — the reported probability matches empirical accuracy across trials
  • Uncertainty thresholding — reject low-confidence predictions instead of forcing a wrong decision
  • Class imbalance handling — built-in prior adjustment so unbalanced training sets don't silently skew your classifier
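Uncertainty thresholding is easy to demonstrate with toy data. The sketch below uses scikit-learn's LDA posterior (`predict_proba`) as a stand-in for NimbusLDA; the features, class distributions, and threshold value are all invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy two-class features: overlapping Gaussians, so some trials are ambiguous.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 4)), rng.normal(1.0, 1.0, (100, 4))])
y = np.repeat([0, 1], 100)

clf = LinearDiscriminantAnalysis().fit(X, y)
proba = clf.predict_proba(X)             # posterior over class labels

THRESHOLD = 0.9                          # abstain below this confidence
confident = proba.max(axis=1) >= THRESHOLD
labels = proba.argmax(axis=1)

acc_overall = float((labels == y).mean())
acc_confident = float((labels[confident] == y[confident]).mean())
print(f"acted on {confident.mean():.0%} of trials")
print(f"accuracy when acting: {acc_confident:.1%} vs overall {acc_overall:.1%}")
```

The accuracy on the trials where the system chooses to act is higher than the overall accuracy, because most errors come from exactly the low-confidence trials the threshold rejects.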

In Nimbus Studio, the classifier node outputs both the predicted label and a live confidence trace. Watching the posterior sharpen as the signal epoch accumulates is a genuinely different way to think about BCI classification — and a direct entry point into Active Inference thinking, where the model continuously updates its beliefs rather than committing to a single answer.
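The sharpening effect itself can be reproduced with a toy two-hypothesis model: accumulate log-likelihoods sample by sample and renormalize. The scalar feature stream and the two Gaussian class means here are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Two hypotheses about a scalar feature stream: "left" (mean -0.5)
# vs "right" (mean +0.5). Ground truth below is "right".
rng = np.random.default_rng(1)
stream = rng.normal(0.5, 1.0, size=50)

log_post = np.log([0.5, 0.5])            # uniform prior over the two classes
trace = []
for x in stream:
    # Accumulate evidence: prior times likelihood, in log space for stability.
    log_post = log_post + np.array([norm.logpdf(x, -0.5), norm.logpdf(x, 0.5)])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    trace.append(post[1])

print(f"P(right) after  1 sample : {trace[0]:.3f}")
print(f"P(right) after 50 samples: {trace[-1]:.3f}")
```

Early in the epoch the posterior hovers near chance; as evidence accumulates it converges toward the true class, which is the trace you watch live on the classifier node.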

If your session data shows accuracy degrading over time — common in EEG due to electrode impedance drift, fatigue, and neural adaptation — swap NimbusLDA for NimbusSTS. The Bayesian Structural Time Series model maintains a stateful posterior across trials and re-estimates model parameters incrementally as new data arrives, without requiring a full retrain.
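NimbusSTS's internal model isn't shown here, but the general idea of tracking a drifting posterior instead of retraining can be sketched with a much simpler mechanism: an exponentially forgetting class-mean tracker. Everything below is illustrative, not the actual structural time series model:

```python
import numpy as np

ALPHA = 0.05   # forgetting rate: higher adapts faster, remembers less

class DriftingMeanClassifier:
    def __init__(self, n_features, n_classes=2):
        self.means = np.zeros((n_classes, n_features))

    def update(self, x, label):
        # Exponentially weighted moving average of the labeled class mean.
        self.means[label] = (1 - ALPHA) * self.means[label] + ALPHA * x

    def predict(self, x):
        # Nearest-mean rule (equal-covariance LDA reduces to this form).
        return int(np.argmin(np.linalg.norm(self.means - x, axis=1)))

rng = np.random.default_rng(3)
clf = DriftingMeanClassifier(n_features=4)
correct = 0
for t in range(400):
    drift = 0.01 * t                     # both classes drift together
    label = t % 2
    x = rng.normal(drift + (2 * label - 1), 0.3, size=4)
    correct += clf.predict(x) == label
    clf.update(x, label)                 # adapt after each trial
print("accuracy under drift:", correct / 400)
```

A fixed classifier trained on the first trials would degrade steadily as the feature distribution walks away from it; the adaptive version tracks the drift trial by trial.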

From Training to Hardware: Zero Rewrites

The most painful step in traditional BCI development is productionization. You train a model in Python, then discover that your preprocessing pipeline relies on .fit() state scattered across five objects, your epoch extractor assumes access to the full recording, and your random seeds aren't serialized anywhere. Porting to real-time routinely takes weeks.

In Nimbus Studio, the same pipeline node graph runs in two modes:

  • Offline mode — reads from a dataset, processes all trials, evaluates with cross-validation metrics
  • Online mode — streams from a BrainFlow-compatible device, processes each epoch as it arrives, outputs predictions with latency measurements

Switch between modes with a single toggle. No code changes, no node reconfiguration. The model state trained in offline mode is automatically serialized and loaded for online inference.

Connect your device — OpenBCI Ganglion, g.tec Unicorn, Muse 2, Emotiv EPOC, or any BrainFlow-compatible hardware — hit Deploy, and watch live predictions with confidence scores appear in the output panel.
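Structurally, the online mode boils down to a loop like the following sketch. The device read and the classifier here are synthetic stand-ins (actual BrainFlow calls need the brainflow package and a board, and NimbusSDK's API is not shown), but the buffering pattern is the relevant part:

```python
import numpy as np
from collections import deque

# Skeleton of the online loop: pull chunks from a device-style source,
# assemble fixed-length epochs, classify each one as it completes.
FS, N_CH = 250, 8
EPOCH_LEN = FS                       # 1-second epochs

def fake_board_read(rng, n=50):
    """Stand-in for a hardware read; returns (channels, samples)."""
    return rng.standard_normal((N_CH, n))

def classify(epoch):
    """Placeholder model: hard label plus a confidence score."""
    score = float(epoch.var())
    return int(score > 1.0), min(1.0, abs(score - 1.0) * 10)

rng = np.random.default_rng(7)
buffer = deque(maxlen=EPOCH_LEN)     # per-sample ring buffer
results = []
while len(results) < 5:
    chunk = fake_board_read(rng)
    for sample in chunk.T:           # iterate sample by sample
        buffer.append(sample)
        if len(buffer) == EPOCH_LEN:
            epoch = np.array(buffer).T          # (channels, samples)
            results.append(classify(epoch))
            buffer.clear()                      # non-overlapping epochs

for i, (label, conf) in enumerate(results):
    print(f"epoch {i}: label={label} conf={conf:.2f}")
```

In Nimbus Studio this loop, the device I/O, and the latency instrumentation are what the Online mode toggle gives you without writing it yourself.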

Exporting, Extending, and Publishing

Nimbus Studio is not a black box. Every pipeline can be exported to clean, documented Python code. This is useful when you need a preprocessing step that isn't yet a built-in node, when you want to integrate the pipeline into a product backend via the REST API, or when submitting reproducible code alongside a paper.

The exported code uses the NimbusSDK directly, preserving all Bayesian models and their uncertainty outputs. You're never locked into the visual interface — the tool accelerates iteration, but your work remains fully portable.

For teams using Julia, the same SDK models are available natively. The RxInfer-based backend means you can swap between Python and Julia without changing your model semantics.

Conclusion

The infrastructure gap in BCI development is real. Getting from "I have EEG data" to "I have a live, reliable classifier" takes most teams weeks on the first pass. Nimbus Studio compresses that to under an hour by solving the signal chain, the Bayesian model layer, and the training-to-deployment problem in a single visual environment.

If you're new to Active Inference or probabilistic BCI more broadly, the core insight is this: confidence is information. A system that knows what it doesn't know can refuse to act, ask for more data, or flag a low-quality signal — behaviors that are simply unavailable to hard-prediction classifiers. Bayesian models give you that signal for free, and Nimbus Studio makes building them as fast as wiring nodes on a canvas.

Get early access at nimbusbci.com/studio.
