Non-Invasive Brain–Computer Interface for Real-Time Intent Decoding
Description:
The team develops an end-to-end BCI pipeline that progresses from offline EEG classification to real-time control, implementing data acquisition, preprocessing (filtering, artifact removal), feature extraction (e.g., CSP, spectral features), and model training (e.g., EEGNet or a transformer). They benchmark accuracy, latency, and information transfer rate (ITR) on standard paradigms (motor imagery or SSVEP), then close the loop via Lab Streaming Layer (LSL) to drive a simple on-screen cursor or robotic demo, releasing a reproducible code and tutorial package.
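Of the metrics benchmarked above, information transfer rate has a standard closed form (the Wolpaw formula, in bits per minute). A minimal sketch in pure Python; the function name and interface are illustrative, not part of the project's codebase:

```python
import math

def information_transfer_rate(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw ITR in bits/min for an n_classes-way BCI.

    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the number of trials per minute.
    """
    if not 0.0 < accuracy <= 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits * (60.0 / trial_seconds)

# Example: 4-class motor imagery at 100% accuracy with 4 s trials
# yields 2 bits/trial at 15 trials/min, i.e. 30 bits/min.
itr = information_transfer_rate(4, 1.0, 4.0)
```

Note that chance-level accuracy (P = 1/N) drives the formula to zero, which is why ITR is a more honest benchmark than raw accuracy when comparing paradigms with different class counts and trial lengths.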
Duration: Fall/Spring
Team: 1 graduate student and 1 undergraduate student
Compensation: Unpaid
Preferred Skills and Knowledge:
- Basics of EEG/BCI paradigms (motor imagery, SSVEP) and non-invasive safety/ethics (IRB, consent)
- Signal processing (filters, artifact handling) with Python (NumPy, SciPy, MNE) or MATLAB (EEGLAB)
- Machine learning/deep learning for time series (scikit-learn; PyTorch or TensorFlow; EEGNet/transformer familiarity a plus)
- Real-time frameworks and I/O (Lab Streaming Layer, UDP/WebSocket), basic Git and reproducible workflows
- Optional: Experience with EEG hardware setup (cap montage, impedance checks) and simple UI development for feedback
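As a flavor of the spectral feature extraction listed above, here is a minimal NumPy sketch of band-power estimation from a periodogram (the `band_power` helper is hypothetical and for illustration only; the project would typically use MNE's built-in PSD routines):

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, band: tuple) -> float:
    """Mean power of `signal` within a frequency band (lo_hz, hi_hz),
    using a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Example: a 10 Hz sinusoid (an idealized SSVEP response) concentrates
# its power in the 8-12 Hz band rather than 20-24 Hz.
fs = 250.0                      # typical EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)   # 4 s of data
x = np.sin(2 * np.pi * 10.0 * t)
```

In an SSVEP decoder, features like these (power at each stimulus flicker frequency) would feed a simple classifier; motor-imagery pipelines would instead apply CSP spatial filters before computing band power.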
Application Procedure:
Please send your resume and a short statement of which project areas interest you and why to Chang S. "CS" Nam (csnam@ncat.edu).