Bary, Tim [UCL]
Macq, Benoît [UCL]
Transformer neural networks require a large amount of labeled data to train effectively. Such data is often scarce in electroencephalography, as annotations made by medical experts are costly. This is why self-supervised training on unlabeled data is often performed beforehand. In this paper, we present a way to design several labeled datasets from unlabeled electroencephalogram (EEG) data. These can then be used to pre-train transformers to learn representations of EEG signals. We tested this method on an epileptic seizure forecasting task on the Temple University Seizure Detection Corpus with a Multi-channel Vision Transformer. Our results suggest that 1) models pre-trained with our approach train significantly faster, with fine-tuning duration reduced by more than 50% on this task, and 2) pre-trained models reach higher accuracy, increasing from 90.93% to 92.16%, and a higher AUC, rising from 0.9648 to 0.9702, compared to non-pre-trained models.
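The abstract does not spell out how the labeled pre-training sets are derived from unlabeled recordings. The sketch below illustrates one generic way such pseudo-labeled datasets are commonly built for EEG self-supervision, here a temporal-order pretext task; the task choice, function name, and window sizes are illustrative assumptions, not the paper's actual recipe.

# Hypothetical sketch (not the paper's method): build a pseudo-labeled
# pre-training set from an unlabeled EEG recording using a temporal-order
# pretext task. All names and parameters are assumptions for illustration.
import numpy as np

def make_order_pretext_dataset(eeg, window=256, n_pairs=1000, seed=0):
    """eeg: array of shape (n_channels, n_samples), one unlabeled recording.

    Returns (X, y): pairs of non-overlapping windows and a binary label
    indicating whether the two windows appear in their original order.
    """
    rng = np.random.default_rng(seed)
    n_channels, n_samples = eeg.shape
    X, y = [], []
    for _ in range(n_pairs):
        # Pick two non-overlapping windows, the second one later in time.
        start_a = rng.integers(0, n_samples - 2 * window)
        start_b = rng.integers(start_a + window, n_samples - window + 1)
        a = eeg[:, start_a:start_a + window]
        b = eeg[:, start_b:start_b + window]
        if rng.random() < 0.5:
            X.append(np.stack([a, b]))   # chronological order -> label 1
            y.append(1)
        else:
            X.append(np.stack([b, a]))   # swapped order -> label 0
            y.append(0)
    return np.asarray(X), np.asarray(y)

# Example: a 19-channel recording sampled at 256 Hz for one hour.
eeg = np.random.randn(19, 256 * 3600)
X, y = make_order_pretext_dataset(eeg)
print(X.shape, y.shape)  # (1000, 2, 19, 256) (1000,)

A transformer pre-trained on such a pretext classification task can then be fine-tuned on the much smaller expert-labeled seizure data, which is the general pattern the abstract describes.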
Bibliographic reference
Bary, Tim ; Macq, Benoît. Designing Pre-training Datasets from Unlabeled Data for EEG Classification with Transformers. 22nd IEEE Mediterranean Electrotechnical Conference (MELECON 2024) (Porto, Portugal, from 25/06/2024 to 27/06/2024). In: 22nd IEEE Mediterranean Electrotechnical Conference (MELECON 2024), (2024)
Permanent URL
http://hdl.handle.net/2078.1/287300