
March at Nimbus Studio: EEG calibration and stronger research workflows

April 1, 2026

March was a dense month, focused on how teams learn the product, run real hardware in the builder, and benchmark across subjects with clearer results and richer AI context.

Live EEG calibration in Build mode

You can connect hardware (or a synthetic board), run a cue protocol, and export HDF5 that lines up with the Custom Data node. The calibration flow includes a fullscreen UI, WebSocket-driven cues, pause/resume, partial save on cancel, and accessibility improvements (dialog roles, live regions, clearer contrast). A Calibration Setup Wizard helps you configure a template run quickly.
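To make the cue protocol concrete, here is a minimal sketch of what a randomized cue schedule looks like. All names, class labels, and timings below are illustrative, not Nimbus Studio's actual API:

```python
import random

def make_cue_schedule(classes, trials_per_class, cue_s=4.0, rest_s=2.0, seed=0):
    """Build a randomized cue schedule: one (onset_seconds, label) pair per trial."""
    rng = random.Random(seed)
    labels = [c for c in classes for _ in range(trials_per_class)]
    rng.shuffle(labels)  # randomize presentation order so classes are interleaved
    schedule, t = [], 0.0
    for label in labels:
        schedule.append((round(t, 3), label))
        t += cue_s + rest_s  # cue window followed by an inter-trial rest
    return schedule

schedule = make_cue_schedule(["left_hand", "right_hand"], trials_per_class=2)
```

Each entry pairs a cue onset time with its class label, which is also the kind of event stream that lines up naturally with an HDF5 export of epochs plus labels.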

Multi-subject benchmarking & execution intelligence

Public-data runs can loop per subject with aggregate metrics (mean ± std for accuracy, kappa, ITR) and best/worst subject callouts.
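The aggregation itself is straightforward; here is a rough sketch of the idea (the metric names and dictionary shape are hypothetical, not the exported schema):

```python
from statistics import mean, stdev

def aggregate(per_subject):
    """Summarize per-subject metrics as (mean, std), plus best/worst subject by accuracy."""
    summary = {}
    for metric in ("accuracy", "kappa", "itr"):
        values = [m[metric] for m in per_subject.values()]
        summary[metric] = (mean(values), stdev(values))
    by_acc = sorted(per_subject, key=lambda s: per_subject[s]["accuracy"])
    summary["worst"], summary["best"] = by_acc[0], by_acc[-1]
    return summary

runs = {
    "S01": {"accuracy": 0.82, "kappa": 0.64, "itr": 11.3},
    "S02": {"accuracy": 0.74, "kappa": 0.48, "itr": 8.1},
    "S03": {"accuracy": 0.90, "kappa": 0.80, "itr": 15.2},
}
summary = aggregate(runs)  # summary["accuracy"] is (mean, std); best is "S03"
```

The same shape generalizes to any per-subject metric you loop over, which is why the results view can lead with aggregates and still drill into individual subjects.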

The results experience leads with aggregates, then offers per-subject tables and tabs for detail, including per-class accuracy and confusion matrices. The canvas and terminal stay honest during runs: subject-tagged logs, smarter deduplication, and aggregate badges on the results node.

The AI assistant now gets a structured multi-subject summary so it doesn’t mistake the last subject for the whole study.
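We don't publish the exact payload, but conceptually the assistant receives something like the following. Every field name here is illustrative; the real summary schema is internal and may differ:

```python
import json

# Illustrative multi-subject summary handed to the assistant, so it sees the
# whole study (aggregates, best/worst) rather than just the last subject's run.
summary = {
    "kind": "multi_subject_benchmark",
    "n_subjects": 3,
    "aggregate": {"accuracy_mean": 0.82, "accuracy_std": 0.08, "kappa_mean": 0.64},
    "best_subject": {"id": "S03", "accuracy": 0.90},
    "worst_subject": {"id": "S02", "accuracy": 0.74},
    "per_subject": [
        {"id": "S01", "accuracy": 0.82},
        {"id": "S02", "accuracy": 0.74},
        {"id": "S03", "accuracy": 0.90},
    ],
}
payload = json.dumps(summary, indent=2)
```

Serializing a structured summary rather than raw per-run logs is what keeps the assistant's context about the study accurate.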

Semantic channel mapping

A topomap-based mapper in hardware settings maps physical channels to normalized 10-20 positions; the stream layer reorders and pads to match what downstream nodes expect.
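The reorder-and-pad step is conceptually simple. Here is a minimal sketch, assuming zero-padding for target positions with no mapped physical channel (the shipped behavior may differ):

```python
def reorder_and_pad(sample, physical_names, target_names, fill=0.0):
    """Reorder one multichannel sample to a target 10-20 layout,
    filling positions that have no mapped physical channel."""
    index = {name: i for i, name in enumerate(physical_names)}
    return [sample[index[name]] if name in index else fill
            for name in target_names]

# Device streams three channels; downstream nodes expect four 10-20 positions.
sample = [1.0, 2.0, 3.0]
physical = ["C3", "Cz", "C4"]
target = ["C3", "Cz", "C4", "Pz"]  # Pz is not wired, so it gets padded
out = reorder_and_pad(sample, physical, target)
# out == [1.0, 2.0, 3.0, 0.0]
```

Normalizing at the stream layer means downstream nodes can assume a fixed channel order regardless of how a given headset is wired.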

Register for the beta

Want to try Nimbus Studio on real pipelines and help shape what we ship next? Register for the beta here: https://nimbusbci.com/studio
