SeizurEShield Mission Statement
MIDS Capstone Project Summer 2024

SeizurEShield

Problem and Motivation

Approximately 1 in 10 people will experience a seizure in their lifetime, and for some, these seizures become chronic, a condition known as epilepsy. Affecting an estimated 50 million people worldwide, epilepsy is one of the most common neurological disorders globally, alongside Alzheimer's and Parkinson's disease. The primary challenge of living with epilepsy is its unpredictability: while about 65% of people with generalized epilepsy report experiencing "auras," or warning sensations, before a seizure, many do not, and such warnings are even less common in the four other types of epilepsy.

This unpredictability poses significant quality-of-life and safety concerns: seizures can occur without warning and range from brief periods of lost time to severe convulsions and unconsciousness, the latter of which necessitates immediate hospitalization. Because medications for epilepsy do not guarantee complete seizure control, and only four seizure detection products are currently commercially available, our mission is to leverage deep learning and machine learning techniques to detect seizure activity from EEG recordings of the human brain. Faster, more accurate, and more reliable seizure detection would improve the quality of life and safety of individuals with epilepsy, offering greater independence and peace of mind.

Data Source and Data Science Approach

The data used in this project was sourced from the Temple University Hospital EEG Corpus (https://isip.piconepress.com/projects/nedc/html/tuh_eeg/), which spans 17 years of data collection and comprises 16,986 sessions from 10,874 unique subjects. The recordings were taken via 18- to 31-channel electroencephalography (EEG) from approximately equal proportions of male and female patients (49% and 51%, respectively), with ages ranging from under 1 year to over 90 years (median 51.6 years). All data was thoroughly de-identified and randomized by the TUH Department of Neurology before addition to the corpus, in compliance with the HIPAA Privacy Rule. Approximately 87% of the recordings were taken due to epileptic activity, another 12% due to strokes, and the remaining 1% due to concussions.

As the total dataset requires roughly 600 GB of storage, our team elected to use Amazon SageMaker as our main platform for staging the training and testing of our machine learning solutions. Initial preprocessing involved data cleaning and file conversion, joining physician annotations to the recordings to provide seizure timestamps, and channel standardization via EEG montage mappings. After comparing several algorithms, the team decided to implement a Recurrent Neural Network (RNN) and a ResNet-18 model for comparison, chosen for their aptitude with time-series data and image analysis, respectively.

Data Pipeline

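The preprocessing steps described above (montage-based channel standardization, then slicing recordings into labeled windows using the physician annotation timestamps) can be sketched as follows. This is a minimal illustration, not the project's actual pipeline code; the montage pairs shown are a small subset of a typical bipolar montage, and all names and parameters (sampling rate, window length) are illustrative assumptions.

```python
import numpy as np

# Illustrative bipolar montage pairs (a small subset of a typical
# temporal-chain montage; names are hypothetical, not the full mapping).
MONTAGE_PAIRS = [("FP1", "F7"), ("F7", "T3"), ("T3", "T5"), ("T5", "O1")]

def apply_montage(signals: dict, pairs=MONTAGE_PAIRS) -> np.ndarray:
    """Standardize channels by converting referential signals into
    bipolar differences, one row per montage pair."""
    return np.stack([signals[a] - signals[b] for a, b in pairs])

def window_with_labels(data: np.ndarray, seizure_spans, fs=250, win_sec=4):
    """Slice a (channels, samples) array into fixed-length windows and
    label each window 1 if it overlaps any annotated seizure span
    (start_sec, end_sec) taken from the physician annotations."""
    win = fs * win_sec
    n_windows = data.shape[1] // win
    windows, labels = [], []
    for i in range(n_windows):
        t0, t1 = i * win_sec, (i + 1) * win_sec
        windows.append(data[:, i * win:(i + 1) * win])
        # Overlap test: the span starts before the window ends and
        # ends after the window starts.
        labels.append(int(any(s < t1 and e > t0 for s, e in seizure_spans)))
    return np.stack(windows), np.array(labels)
```

Windows labeled this way can then be fed to the RNN as sequences, or rendered as images (e.g., spectrograms) for the ResNet-18.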

Evaluation

After training and testing our models, we found that the RNN model worked well at identifying when a seizure was not occurring, achieving a very low false positive rate (1.74%) with high accuracy (95%). We believe this was due to the fact that most of the data did not include seizures. Consistent with this, we also discovered the model did poorly at identifying seizures, illustrated by a higher false negative rate (35%). The ResNet-18 model performed along the same lines with many of the same faults, likely due to the class imbalance within the training and testing sets, with a 17% false positive rate, a 38% false negative rate, and an overall accuracy of 90%.
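The rates quoted above derive from the models' confusion-matrix counts. As a minimal sketch using the standard definitions (false positive rate over all non-seizure windows, false negative rate over all seizure windows), with purely illustrative counts rather than our actual results:

```python
def seizure_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Derive evaluation rates from raw confusion-matrix counts,
    using the standard definitions."""
    return {
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only (not the project's confusion matrix):
rates = seizure_metrics(tp=90, fp=10, tn=880, fn=20)
```

Note that with heavily imbalanced data, accuracy alone is misleading: a model that never predicts "seizure" scores high accuracy while missing every seizure, which is why we report false negative rate alongside it.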

Our results proved adequate when compared to existing research in the literature: a 2021 paper by Khalkhali et al. from the Neural Engineering Data Consortium achieved similar results, as shown in the table below.

Model Comparisons

Model                                              True Positive   True Negative   False Positive   False Negative   Overall Accuracy
Neural Engineering Data Consortium's ResNet-18[1]  83%             60%             17%              40%              Unstated
SeizurEShield ResNet-18                            89%             62%             11%              38%              90%
SeizurEShield RNN+LSTM                             70%             98%             1.74%            32%              95%

[1] Khalkhali et al., "Low Latency Real-Time Seizure Detection Using Transfer Deep Learning," Neural Engineering Data Consortium, Temple University, Philadelphia, Pennsylvania, USA, 2021.

Key Learnings and Impact

After evaluating the performance of both models, it is clear that the class imbalance present in the training and testing sets severely impacted both models' performance. While stratifying the training sets would adequately address the class imbalance issue, we believe we could further improve both models through a variety of steps, such as more in-depth hyperparameter tuning, varying the model architectures, implementing additional signal-filtering techniques in our data preprocessing pipeline, and (in the case of the RNN) investigating more advanced xLSTM layers in place of the standard LSTM layers.
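One common complement to stratified sampling is to weight the loss by inverse class frequency, so the rare seizure windows contribute as much to training as the abundant non-seizure windows. A minimal sketch of computing such weights (function name and the balancing scheme are our illustrative choices, not the project's implementation):

```python
import numpy as np

def inverse_frequency_weights(labels: np.ndarray) -> dict:
    """Map each class label to a weight inversely proportional to its
    frequency, so that minority-class (seizure) windows are up-weighted
    during loss computation."""
    classes, counts = np.unique(labels, return_counts=True)
    # Normalized so weights average to 1 across classes.
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

For a 90/10 split of non-seizure to seizure windows, this up-weights the seizure class by 9x relative to the non-seizure class, counteracting the imbalance the models struggled with.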

While the project was unable to achieve real-time detection capabilities during the time allotted for the capstone course, due to computing resource costs, we believe the work performed over the course of the project is still novel enough to be of note. The seizure detection algorithm we created could help decrease the time required for EEG interpretation after neurological consultations, which would in turn reduce the variability in the time and financial costs associated with these appointments, making adequate epilepsy care more accessible overall.

Acknowledgements

We'd like to thank Joe Picone, Head of the Institute for Signal and Information Processing and Professor in Temple University's Electrical and Computer Engineering Department, for helping us obtain access to the Temple University EEG Corpus; and Korin Reid and Joyce Shen (CEO of Ellison Laboratories and Investment and Operating Partner at Tenfore Holdings, respectively, and Adjunct Professors in UC Berkeley's Data Science Master's program) for mentoring us and providing us with the opportunity to work on this project.

We couldn't have done this without your assistance, and we are forever grateful to all of you for enabling us to perform this work.


Last updated: August 5, 2024