Why Your BCI Degrades Over Time — and How Adaptive Bayesian Models Fix It
If you have ever run a motor imagery BCI session longer than 20 minutes, you probably noticed the same frustrating pattern: the classifier starts strong and quietly falls apart. By the end of the session, what felt like a working system is producing near-random predictions. This is not a bug in your code. It is a fundamental property of EEG signals — and most BCI software is simply not built to handle it.
This post explains why non-stationarity happens, why it breaks classical approaches, and how adaptive Bayesian state-space models offer a principled solution. We will use Nimbus Studio and the NimbusSTS model as a concrete reference throughout.
What Non-Stationarity Actually Means in EEG
In signal processing, a stationary process is one whose statistical properties — mean, variance, covariance — do not change over time. Most classical classifiers (LDA, SVM, even neural nets trained offline) are implicitly built on this assumption. They learn a fixed decision boundary during a calibration phase and expect that boundary to hold during real-time use.
EEG violates this assumption constantly. Several overlapping mechanisms are responsible:
- Electrode drift. Gel impedance changes as it dries. Contact quality degrades subtly over tens of minutes.
- Fatigue and attention. Neural oscillatory patterns shift as the user tires or habituates to the task.
- Hemodynamic coupling. Blood flow changes in underlying cortex alter the amplitude envelope of low-frequency EEG components.
- Session-to-session variability. Even the same user on the same task on a different day can show dramatically different spatial patterns.
The net effect is that the feature distribution — the likelihood p(x | y) of observing feature vector x given class y — shifts continuously. A classifier trained on the first 5 minutes of a session may be modeling a distribution that no longer exists by minute 25.
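To make the failure mode concrete, here is a toy NumPy simulation (illustrative only, not Nimbus code, and all numbers are made up): a linear boundary is fit on drift-free "calibration" trials, then evaluated on trials whose class means have shifted, as they might by minute 25 of a session.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trials(n, drift):
    """Two Gaussian classes; `drift` shifts both class means equally."""
    y = rng.integers(0, 2, size=n)
    means = np.where(y[:, None] == 1, 1.0, -1.0)  # class 1 at (+1,+1), class 0 at (-1,-1)
    x = means + drift + rng.normal(size=(n, 2))
    return x, y

# "Calibration" data from the start of the session (no drift yet)
x_cal, y_cal = make_trials(500, drift=0.0)
mu0, mu1 = x_cal[y_cal == 0].mean(axis=0), x_cal[y_cal == 1].mean(axis=0)
w = mu1 - mu0                               # LDA direction, identity covariance
b = -0.5 * w @ (mu1 + mu0)                  # boundary through the class midpoint

def accuracy(x, y):
    return np.mean((x @ w + b > 0) == y)

x_early, y_early = make_trials(500, drift=0.0)   # like minute 5
x_late, y_late = make_trials(500, drift=2.0)     # like minute 25, after drift
print(f"early accuracy: {accuracy(x_early, y_early):.2f}")
print(f"late accuracy:  {accuracy(x_late, y_late):.2f}")
```

The fixed boundary is near-optimal on the early trials and close to chance once the drift pushes both classes across it — exactly the "starts strong, quietly falls apart" pattern described above.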
Why Classical Approaches Struggle
The standard workaround is periodic recalibration: pause the session, collect new labeled data, retrain the classifier. This works in a lab setting but is completely impractical for real-world or clinical deployment. Wheelchair users, rehabilitation patients, and gaming users do not want to stop every few minutes to recalibrate.
Another common approach is online covariance adaptation — methods like the Riemannian geometry classifiers that track the geometric mean of covariance matrices. These help, but they are heuristic. They do not model why the distribution is drifting, and they offer no principled uncertainty quantification over the adapted parameters.
This is exactly where Bayesian state-space models come in.
The State-Space View of a Drifting Classifier
Instead of treating classifier parameters as fixed after training, a state-space model treats them as latent variables that evolve over time according to a dynamic process. Concretely, if θ_t represents the classifier parameters at trial t, we define:

θ_t = θ_{t−1} + w_t,   w_t ~ N(0, Q)

This is a random walk prior over parameters, with the process noise covariance Q controlling how fast they are allowed to drift. The observation model then connects θ_t to the EEG features x_t and class labels y_t observed at each trial:

y_t ~ p(y_t | x_t, θ_t)

for example, a softmax likelihood over the class labels.
Given this formulation, inference reduces to computing the posterior p(θ_t | x_{1:t}, y_{1:t}) — the distribution over current parameters given everything observed so far. This posterior is updated after every trial, meaning the model continuously adapts without any explicit recalibration step.
The practical challenge is that this posterior is generally intractable. For linear Gaussian models, the Kalman filter gives the exact solution. For the non-linear, non-Gaussian cases typical in BCI (softmax likelihoods, multinomial classes), we need approximate inference.
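For intuition, here is a minimal sketch of that linear Gaussian case, where the Kalman filter gives the exact posterior over a randomly drifting weight vector. This is an illustrative toy with made-up constants, not the NimbusSTS implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 2, 400
Q = 1e-3 * np.eye(d)      # random-walk process noise covariance
r = 0.25                  # observation noise variance

theta = np.array([1.0, -1.0])    # true weights, drifting over the session
m, P = np.zeros(d), np.eye(d)    # Gaussian posterior N(m, P) over theta
errs = []
for t in range(T):
    theta = theta + rng.multivariate_normal(np.zeros(d), Q)  # true drift
    x = rng.normal(size=d)
    y = x @ theta + rng.normal(scale=np.sqrt(r))

    # Predict: uncertainty grows by Q as the parameters are assumed to drift
    P = P + Q
    # Update: linear-Gaussian conjugacy gives the exact posterior
    s = x @ P @ x + r                 # innovation variance
    K = P @ x / s                     # Kalman gain
    m = m + K * (y - x @ m)
    P = P - np.outer(K, x) @ P
    errs.append(np.linalg.norm(m - theta))

print(f"mean tracking error, last 100 trials: {np.mean(errs[-100:]):.3f}")
```

The posterior mean follows the drifting true weights trial by trial, with no retraining step anywhere in the loop — the behavior the non-linear BCI case has to approximate.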
NimbusSTS: Extended Kalman Filtering for BCI
NimbusSTS implements this approach using an Extended Kalman Filter (EKF)-style inference scheme. At each trial, it maintains a Gaussian approximation to the parameter posterior:

p(θ_t | x_{1:t}, y_{1:t}) ≈ N(μ_t, Σ_t)
The predict step propagates uncertainty forward via the process noise Q:

μ_{t|t−1} = μ_{t−1},   Σ_{t|t−1} = Σ_{t−1} + Q
The update step then incorporates the new observation using a linearized likelihood, yielding the updated mean μ_t and covariance Σ_t. The key insight is that Σ grows during the predict step (uncertainty increases as parameters drift) and shrinks during the update step (new data provides information). This creates a natural balance: the model adapts quickly when it is uncertain and conservatively when it is confident.
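Such an EKF-style predict/update loop can be sketched for a binary logistic likelihood (a stand-in for the softmax case). Everything here — the drift model, the constants, the linearization — is illustrative, not the actual NimbusSTS internals:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d, T = 2, 2000
Q = 5e-4 * np.eye(d)                   # process noise: how fast we allow drift

theta_true = np.array([2.0, 0.0])      # true boundary normal, rotating slowly
m, P = np.zeros(d), np.eye(d)          # Gaussian posterior N(m, P)
correct = []
for t in range(T):
    # Slowly rotate the true boundary to mimic within-session drift
    ang = 0.001
    rot = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    theta_true = rot @ theta_true
    x = rng.normal(size=d)
    y = float(rng.random() < sigmoid(x @ theta_true))

    # Predict step: posterior covariance grows by the process noise Q
    P = P + Q
    # Update step: linearize the logistic likelihood around the prior mean
    p = sigmoid(x @ m)
    correct.append((p > 0.5) == (y > 0.5))   # prediction made *before* seeing y
    v = max(p * (1 - p), 1e-6)               # Bernoulli variance at the linearization point
    H = v * x                                # Jacobian of E[y] w.r.t. theta
    s = H @ P @ H + v                        # innovation variance
    K = P @ H / s                            # Kalman gain
    m = m + K * (y - p)
    P = P - np.outer(K, H) @ P

print(f"accuracy, last 500 trials: {np.mean(correct[-500:]):.2f}")
```

Even though the true boundary never stops rotating, the filter stays well above chance late in the session, because every trial both inflates Σ (predict) and contracts it around the new evidence (update).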
In Nimbus Studio, NimbusSTS integrates seamlessly into the visual pipeline builder. You drag it into your pipeline in place of NimbusLDA or NimbusQDA, and the same pipeline that trains offline now adapts in real time — no code rewrites, no separate adaptation loop to wire up.
When to Use Adaptive Models — and When Not To
NimbusSTS is not always the right choice. For short sessions (under 10–15 minutes) with a stable user, the adaptation overhead may not be worth it. NimbusLDA or NimbusQDA, which assume stationarity, will often perform comparably with lower computational cost.
NimbusSTS becomes the clear choice in three scenarios:
- Long sessions (30+ minutes) where electrode drift and fatigue are substantial.
- Clinical or assistive technology applications where stopping for recalibration is not an option.
- Cross-session deployment where you want a model that carries over learned state from a previous session and adapts quickly at the start of a new one.
The model's stateful design also makes it naturally suited for passive BCIs — applications where the user is not performing discrete mental imagery tasks but where the system continuously monitors cognitive or affective state over a long period.
Conclusion
Non-stationarity is not an edge case in BCI — it is the default. Building systems that assume a static world and periodically recalibrate is a stopgap, not a solution. Adaptive Bayesian state-space models offer a principled alternative: instead of ignoring drift, they model it explicitly and maintain a posterior over parameters that updates trial by trial.
NimbusSTS brings this approach into a practical, production-ready package that integrates directly into the Nimbus Studio pipeline builder. If you are building a BCI system that needs to stay accurate across an entire session — without interruptions, without retraining, and with full uncertainty quantification — it is worth adding to your toolkit.
In a future post, we will go deeper into the process noise matrix Q and how to tune it for different session profiles and paradigms. For now, the best place to start is Nimbus Studio, where you can drop NimbusSTS into an existing pipeline and see live adaptation in action within minutes.