A Summary of Public EEG Datasets (Motor Imagery, Emotion Recognition, Eye Blinks, and More)

This article lists a number of public EEG datasets covering motor imagery, emotion recognition, and other experimental paradigms, including the Grasp and Lift EEG Challenge, the BCI Competitions, and DEAP. The datasets differ in number of participants, channel counts, and task types, and are valuable for research on EEG signal processing, BCI systems, and affective computing.

EEG-Datasets: a list of public EEG datasets, covering motor imagery, emotion recognition, and more (originally shared via mp.weixin.qq.com).

Motor Imagery Data

1. [Left/Right Hand MI](Supporting data for "EEG datasets for motor imagery brain computer interface"): Includes 52 subjects (38 validated subjects with discriminative features), results of physiological and psychological questionnaires, EMG datasets, locations of 3D EEG electrodes, and EEGs for non-task-related states

2. [Motor Movement/Imagery Dataset](https://www.physionet.org/physiobank/database/eegmmidb/): Includes 109 volunteers, 64 electrodes, 2 baseline tasks (eyes open and eyes closed), motor movement, and motor imagery (both fists or both feet)

3. [Grasp and Lift EEG Challenge](https://www.kaggle.com/c/grasp-and-lift-eeg-detection/data): 12 subjects, 32 channels @ 500 Hz, for 6 grasp-and-lift events, namely (a) HandStart, (b) FirstDigitTouch, (c) BothStartLoadPhase, (d) LiftOff, (e) Replace, (f) BothReleased

4. [The largest SCP data of Motor-Imagery](A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces): The dataset contains 60 hours of EEG BCI recordings across 75 recording sessions of 13 participants, 60,000 mental imageries, and 4 BCI interaction paradigms, with multiple recording sessions and paradigms of the same individuals. BCI interactions involving up to 6 mental imagery states are considered. [[Article]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6190745/pdf/sdata2018211.pdf)

5. [BCI Competition IV-1](BCI Competition IV): 64 EEG channels at 1000 Hz sampling rate for 2 classes chosen from left hand, right hand, and foot (+ idle state) for 7 subjects. The evaluation data is continuous EEG, which also contains periods of idle state.

6. [BCI Competition IV-2a](BCI Competition IV): 22-electrode EEG motor-imagery dataset, with 9 subjects and 2 sessions, each with 288 four-second trials of imagined movements per subject. Includes movements of the left hand, the right hand, the feet and the tongue. [[Dataset Description]](http://www.bbci.de/competition/iv/desc_2a.pdf)

7. [BCI Competition IV-2b](BCI Competition IV): 3-electrode EEG motor-imagery dataset with 9 subjects and 5 sessions of imagined movements of the left or the right hand; the last 3 sessions include online feedback. [[Dataset Description]](http://www.bbci.de/competition/iv/desc_2b.pdf)

8. [High-Gamma Dataset](robintibor/high-gamma-dataset): 128-electrode dataset obtained from 14 healthy subjects, with roughly 1000 four-second trials of executed movements divided into 13 runs per subject. The four classes of movements were the left hand, the right hand, both feet, and rest.

9. [Left/Right Hand 1D/2D movements](https://sites.google.com/site/projectbci/): 19-electrode data of one subject with various combinations of 1D and 2D hand movements (actual execution).

10. [Imagination of Right-hand Thumb Movement](Planning Relax Data Set): In each trial, the subject was asked to rest while 5 minutes of rest data were recorded; 5-second epochs were then recorded while the subject imagined a right-hand thumb movement. Each trial contains 5 such imagined movements plus the rest state. Single subject, 8 electrodes at 256 Hz.

11. [Mental-Imagery Dataset](A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces): 13 participants with over 60,000 examples of motor imagery in 4 interaction paradigms, recorded with a 38-channel medical-grade EEG system. It contains data for up to 6 mental imageries, primarily motor movements. [[Article]](https://www.nature.com/articles/sdata2018211#ref-CR57)
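Motor-imagery BCIs built on datasets like these typically classify trials by band power in the mu (8–12 Hz) and beta bands over sensorimotor cortex, since imagined movement suppresses the mu rhythm (event-related desynchronization). A minimal sketch of that feature extraction on synthetic data; the sampling rate, band edges, and the synthetic trial are illustrative assumptions, not parameters of any dataset above:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz); the datasets above range from 250 to 1000 Hz

def bandpower(signal, fs, fmin, fmax):
    """Band power of `signal` in [fmin, fmax] Hz via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # integrate the PSD over the band

# Synthetic 4-second "trial": a 10 Hz mu rhythm buried in white noise.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
trial = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

mu_power = bandpower(trial, FS, 8, 12)     # dominated by the 10 Hz rhythm
beta_power = bandpower(trial, FS, 18, 26)  # noise floor only
```

In a real pipeline these band powers would be computed per channel (typically C3/C4) and fed to a classifier; a drop in mu power marks the imagined movement.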

Emotion Recognition Data

1. [DEAP](A Dataset for Emotion Analysis using Physiological and Audiovisual Signals): Includes 32 subjects, each watching 1-min-long excerpts of music videos, rated by users in terms of arousal/valence/like-dislike/dominance/familiarity, and frontal face recordings of 22 of the 32 subjects.

2. [Enterface'06](http://www.enterface.net/results/): Enterface'06 Project 07: EEG (64 channels) + fNIRS + face video. Includes 16 subjects, where emotions were elicited through a selected subset of the IAPS dataset.

3. [Imagined Emotion](Studies: show): 31 subjects listen to voice recordings that suggest an emotional feeling and are asked to imagine an emotional scenario, or to recall an experience in which they have felt that emotion before.

4. [NeuroMarketing](https://drive.google.com/open?id=0B2T1rQUvyyWcSGVVaHZBZzRtTms): 25 subjects, 14 electrodes, Like/Dislike on commercial e-commerce products over 14 categories with 3 images each. Article for the dataset: Analysis of EEG signals and its application to neuromarketing. [[Article]](Analysis of EEG signals and its application to neuromarketing)

5. [SEED](SEED Dataset): 15 subjects were shown video clips eliciting positive/negative/neutral emotion and EEG was recorded over 62 channels.

6. [SEED-IV](SEED Dataset): 15 subjects were shown video clips eliciting happy/sad/neutral/fear emotions and EEG was recorded over 62 channels (with eye-tracking) for 3 sessions per subject (24 trials per session).

7. [SEED-VIG](SEED Dataset): Vigilance labels with EEG data in a simulated driving task. 18 electrodes and eye-tracking included.

8. [HCI-Tagging](HCI Tagging Database - Home): Subjects were shown video clips (fragments of movies) and were asked to annotate their emotional state on scales of valence and arousal. During the whole experiment, audio, video, gaze, and physiological data were recorded simultaneously with accurate synchronisation between sensors.
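DEAP-style continuous ratings are commonly binarized at the scale midpoint into four valence/arousal quadrants before classification. A minimal sketch, assuming the usual 1–9 rating scale; the threshold of 5 and the quadrant names (HVHA, etc.) are common conventions, not labels shipped with any dataset above:

```python
def quadrant_label(valence, arousal, threshold=5.0):
    """Map continuous valence/arousal ratings (1-9 scale) to a coarse
    quadrant label: H/L = high/low, V = valence, A = arousal."""
    v = "H" if valence >= threshold else "L"
    a = "H" if arousal >= threshold else "L"
    return f"{v}V{a}A"
```

Binarizing this way turns emotion recognition into two independent binary problems (high vs. low valence, high vs. low arousal), which is how many DEAP baselines are reported.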

Error-Related Potentials (ErrP)

1. [BCI-NER Challenge](https://www.kaggle.com/c/inria-bci-challenge): 26 subjects, 56 EEG channels for a P300 speller task, with labels for the response elicited when the speller decodes a correct or incorrect letter.

2. [Monitoring ErrP in a target selection task](Data sets - BNCI Horizon 2020): 6 subjects with 64 EEG electrodes, watching a cursor move towards a target square; elicited responses are labeled based on whether the cursor moves in the right or wrong direction. [[Dataset Description]](https://lampx.tugraz.at/~bci/database/013-2015/description.pdf)

3. [HCI-Tagging](HCI Tagging Database - Home): Subjects were shown images or movie fragments with a tag at the bottom of the screen. In some cases the tag correctly described something about the situation; in other cases it did not actually apply to the media item. After each item, the participant was asked to press a green button if they agreed that the tag applied to the media item, or a red button if not. During the whole experiment, audio, video, gaze, and physiological data were recorded simultaneously with accurate synchronisation between sensors.

Visual Evoked Potentials (VEPs)

1. [c-VEP BCI with dry electrodes](https://www-ti.informatik.uni-tuebingen.de/~spueler/eeg_data/dry_cVEP_dataset.rar): 9 subjects, 15 dry-EEG channels for a VEP BCI speller (32 characters) task, with labels for the response elicited for the label associated with the speller. [[Article]](A high-speed brain-computer interface (BCI) using dry EEG electrodes)

2. [SSVEP - Visual Search/Discrimination and Handshake](EEG Steady-State Visual Evoked Potential Signals Data Set): Includes 3 different tests: (i) five-box visual test: attended and unattended disc- and square-based stimuli, (ii) visual search within natural images: search for a yellow dot stimulus in B&W natural images, (iii) handshake test: showing left/right hand closed/open images. 30 subjects, 14 electrodes. [[Article 1]](http://www.journalijar.com/uploads/154_IJAR-13703.pdf) [[Article 2]](Feature Extraction of EEG Signal upon BCI Systems Based on Steady-State Visual Evoked Potentials Using the Ant Colony Optimization Algorithm) [[More Dataset: Dataset 2]](Download test datasets)

3. [Synchronized Brainwave Dataset](https://www.kaggle.com/berkeley-biosense/synchronized-brainwave-dataset): 15 people were presented with 2 different video stimuli, including blinks, relaxation, mental mathematics, counting color boxes, and watching Super Bowl ads. [[Stimulus 1]](https://www.youtube.com/watch?v=zkGoPdpRvaU&feature=youtu.be) [[Stimulus 2]](https://www.youtube.com/watch?v=sxqlOoBBjvc&feature=youtu.be)
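SSVEP spellers like those above identify the attended stimulus by finding which flicker frequency dominates the EEG spectrum. A hedged sketch on synthetic data; the 256 Hz rate and the candidate frequencies are illustrative assumptions, not parameters of these datasets:

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz)

def detect_ssvep(signal, fs, candidates):
    """Return the candidate flicker frequency with the largest spectral amplitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    # amplitude at the nearest FFT bin for each candidate frequency
    amps = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(amps))]

# Synthetic 4 s occipital trace: 12 Hz SSVEP response plus white noise.
# 4 s at 256 Hz gives 0.25 Hz bin resolution, so 10/12/15 Hz fall on exact bins.
rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

picked = detect_ssvep(eeg, FS, [10.0, 12.0, 15.0])
```

Real SSVEP decoders usually also test harmonics or use canonical correlation analysis, but the single-bin amplitude comparison is the core idea.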

Event-Related Potentials (ERPs)

1. [Pattern Visual Evoked Potentials](Software & Data - University of Leicester): Dataset #5, 2 subjects, checkerboard light pattern (oddball paradigm) recorded at the O1 position.

2. [Face vs. House Discrimination](Data and analyses for "Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change"): 7 epileptic subjects were presented with 50 grayscale stimulations each for face and house pictures. For each subject, a total of 3 experimental runs were conducted, resulting in 300 stimulations.

3. [Target Versus Non-Target](Building Brain Invaders: EEG data of an experimental validation): 25 subjects testing Brain Invaders, a visual P300 Brain-Computer Interface using oddball paradigm. 16-electrodes, wet. [publication](https://hal.archives-ouvertes.fr/hal-02126068), [code](plcrodrigues/py.BI.EEG.2012-GIPSA). Dataset id: BI.EEG.2012-GIPSA.

4. [Target Versus Non-Target](Brain Invaders Adaptive versus Non-Adaptive P300 Brain-Computer Interface dataset): 24 subjects playing Brain Invaders, a visual P300 Brain-Computer Interface using the oddball paradigm. 16 electrodes, wet. Up to 8 sessions per subject. Two experimental conditions: with and without adaptive calibration using Riemannian geometry. [publication](Brain Invaders Adaptive versus Non-Adaptive P300 Brain-Computer Interface dataset), [code](plcrodrigues/py.BI.EEG.2013-GIPSA). Dataset id: BI.EEG.2013-GIPSA.

5. [Target Versus Non-Target](Brain Invaders calibration-less P300-based BCI using dry EEG electrodes Dataset (bi2014a)): 71 subjects playing Brain Invaders, a visual P300 Brain-Computer Interface using the oddball paradigm with adaptive Riemannian geometry (no calibration). 16 electrodes, dry. [publication](Brain Invaders calibration-less P300-based BCI using dry EEG electrodes Dataset (bi2014a)), [code](plcrodrigues/py.BI.EEG.2014a-GIPSA). Dataset id: bi2014a.

6. [Target Versus Non-Target](Brain Invaders Solo versus Collaboration: Multi-User P300-based Brain-Computer Interface Dataset (bi2014b)): 38 subjects playing a multiplayer, collaborative version of Brain Invaders, a visual P300 Brain-Computer Interface using the oddball paradigm with adaptive Riemannian geometry (no calibration). 32 electrodes per subject, wet, 2 subjects during each session. [publication](Brain Invaders Solo versus Collaboration: Multi-User P300-based Brain-Computer Interface Dataset (bi2014b)), [code](plcrodrigues/py.BI.EEG.2014b-GIPSA). Dataset id: bi2014b.

7. [Target Versus Non-Target](Brain Invaders calibration-less P300-based BCI with modulation of flash duration Dataset (bi2015a)): 50 subjects playing Brain Invaders, a visual P300 Brain-Computer Interface using the oddball paradigm with adaptive Riemannian geometry (no calibration). 32 electrodes, wet. 3 sessions per subject with modulation of flash duration. [publication](Brain Invaders calibration-less P300-based BCI with modulation of flash duration Dataset (bi2015a)), [code](https://github.com/plcrodrigues/py.BI.EEG.2015a-GIPSA). Dataset id: bi2015a.

8. [Target Versus Non-Target](Brain Invaders Cooperative versus Competitive: Multi-User P300-based Brain-Computer Interface Dataset (bi2015b)): 44 subjects playing a multiplayer (cooperation and competition) version of Brain Invaders, a visual P300 Brain-Computer Interface using the oddball paradigm with adaptive Riemannian geometry (no calibration). 32 electrodes per subject, wet, 2 subjects for each session. [publication](Brain Invaders Cooperative versus Competitive: Multi-User P300- based Brain-Computer Interface Dataset (bi2015b)), [code](plcrodrigues/py.BI.EEG.2015b-GIPSA). Dataset id: bi2015b.

9. [Impedance Data](https://drive.google.com/drive/folders/0B3jfvN2T6iLMLWJMMVJMSXBqajg): 12 subjects for a P300 task (oddball paradigm) with 20% rare stimuli. In total, there were 128 target stimuli and 512 standard stimuli. The dataset was collected such that each recording contains electrodes at different impedance levels. [[Article]](https://static1.squarespace.com/static/5abefa62d274cb16de90e935/t/5ac6962a8a922d0b8b8be6a1/1522964012664/Kappenman+2010+Psychophys+Impedance.pdf)

10. [Sustained-Attention Driving](Multi-channel EEG recordings during a sustained-attention driving task (raw dataset)): 27 subjects for sustained-attention driving in a VR setting for monitoring event-related potentials. Each subject participated in two 90-min sessions (without and with kinesthetic feedback), recorded with 32 channels at 500 Hz. [[Article]](https://www.nature.com/articles/s41597-019-0027-4#Sec12) [[Pre-processed dataset]](Multi-channel EEG recordings during a sustained-attention driving task (pre-processed dataset))

11. [Dryad-Speech](Dryad): 5 different experiments for studying natural speech comprehension through a variety of tasks, including audio and visual stimuli and imagined speech: (i) audio-book version of a popular mid-20th-century American work of fiction, 19 subjects; (ii) presentation of the same trials in the same order, but with each of the 28 speech segments played in reverse; (iii) N400 experiment: subjects read 300 sentences presented one word at a time, half of which ended with a semantically incongruent word; (iv) cocktail party experiment: 33 subjects undertook 30 trials, each 60 s in length, in which they were presented with 2 classic works of fiction, one to the left ear and the other to the right ear. Subjects were divided into 2 groups of 17 and 16 (+1 excluded subject), with each group instructed to attend to the story in either the left or the right ear throughout the entire 30 trials; (v) multisensory experiment: stimuli were drawn from a set of videos of a male speaking American English in a conversational manner. [[Main Article]](Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech) [[Supplementary Article]](Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing.)
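The ERPs in this section (P300, N400, etc.) are far smaller than ongoing EEG, so they are classically recovered by averaging many stimulus-locked epochs, which attenuates uncorrelated noise by roughly the square root of the epoch count. A minimal sketch on synthetic single-channel data; the Gaussian P300 shape, 300 ms latency, and SNR are illustrative assumptions:

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)
rng = np.random.default_rng(2)

def p300_template(t):
    """Toy ERP: a positive deflection peaking ~300 ms post-stimulus."""
    return np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

t = np.arange(0, 0.8, 1 / FS)  # one 800 ms stimulus-locked epoch
n_epochs = 100

# Each epoch = the same ERP + independent unit-variance noise (single-trial SNR ~ 1).
epochs = p300_template(t) + rng.standard_normal((n_epochs, t.size))

erp = epochs.mean(axis=0)         # grand average: noise std drops ~10x for 100 epochs
peak_latency = t[np.argmax(erp)]  # lands near the 300 ms peak
```

Single-trial detection, as in the P300 spellers above, replaces this grand average with a classifier, but averaging is still how the labeled ERP templates are usually built and visualized.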

Slow Cortical Potentials

1. [Mental-Imagery Dataset](A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces): 13 participants with over 60,000 examples of motor imagery in 4 interaction paradigms, recorded with a 38-channel medical-grade EEG system. It contains data for up to 6 mental imageries, primarily motor movements. [[Article]](https://www.nature.com/articles/sdata2018211#ref-CR57)

Resting State

1. [EID-M, EID-S](https://drive.google.com/drive/folders/1t6tL434ZOESb06ZvA4Bw1p9chzxzbRbj): 8 subjects in a rest state (with eyes closed) recorded from 14 electrodes using EPOC+ for 54 s at 128 Hz (7000 samples each). EID-M has three trials and EID-S is a single-trial dataset. The dataset was used to develop a person-identification system based on brainwaves. [[Article]](https://arxiv.org/pdf/1711.06149.pdf)

2. [SPIS Resting State Dataset](Build software better, together): 10 subjects, 64 channels, 2.5 minutes of recording in each state (eyes closed and eyes open) prior to a 105-minute session of a Sustained Attention to Response Task with fixed-sequence and varying ISIs. [[Article]](Prediction of Reaction Time and Vigilance Variability from Spatio-Spectral Features of Resting-State EEG in a Long Sustained Attention Task.)

Music and EEG

1. [Music Imagery Information Retrieval](sstober/openmiir): 10 subjects, 64 EEG Channels for a music imagery task of 12 different pieces w/ different meter, length and tempo. [[Article]](https://pdfs.semanticscholar.org/cde4/b1ec89f2c05a41f1143792a890a00e89541a.pdf)

Eye Blinks / Eye Movements

1. [Involuntary Eye Movements during Face Perception](Download test datasets): Dataset 1, 26 electrodes, 500 Hz sampling rate, and 120 trials. Eye-movement and pupil-diameter recordings, along with EEG and EOG data, are present for trials in which the subject is shown a happy/sad/angry face on the screen. [[Article]](http://www.jneurosci.org/content/suppl/2009/09/30/29.39.12321.DC1/Supplemental_Material.pdf) [P.S.: Dataset available on request only]

2. [Voluntary-Involuntary Eye-Blinks](https://drive.google.com/file/d/0By5iwWd39NblS2tRWmVTdmRzZUU/view?usp=sharing): Voluntary eye-blinks (subjects were asked to blink voluntarily within 1 s of an audio stimulus) and involuntary eye-blinks (natural) were recorded for 20 subjects on 14 electrodes using g.tec. For each subject, 3 sessions with 20 trials each are present in .mat format. [[Article]](Assessing the effects of voluntary and involuntary eyeblinks in independent components of electroencephalogram)

3. [EEG-eye state](EEG Eye State Data Set): Eye-state-labeled data for one continuous EEG recording of 117 seconds with eye-closed and eye-open labels. The dataset was recorded with an Emotiv headset.

4. [EEG-IO](EEG Eye Blinks): Voluntary single eye-blinks (external stimulation was provided) and EEG was recorded for frontal electrodes (Fp1, Fp2) for 20 subjects using OpenBCI Device and BIOPAC Cap100C. One session was conducted including around 25 blinks per subject. Manual annotation was done using video feed. [[Article]](https://proceedings.allerton.csl.illinois.edu/media/files/0174.pdf)

5. [EEG-VV, EEG-VR](EEG Eye Blinks): Involuntary eye-blinks (natural blinks) and EEG was recorded for frontal electrodes (Fp1, Fp2) for 12 subjects using OpenBCI Device and BIOPAC Cap100C. Subjects performed two activities - watching a video (EEG-VV) and reading an article (EEG-VR). Manual annotation was done using video feed. [[Article]](https://proceedings.allerton.csl.illinois.edu/media/files/0174.pdf)

6. [Eye State Prediction](http://suendermann.com/corpus/EEG_Eyes.arff.gz): 117 seconds recording of a single subject with labeled eye state data (open and closed) recorded using EPOC headset (14 electrodes). [[Article]](http://suendermann.com/su/pdf/aihls2013.pdf)

7. [Kara-One](The KARA ONE database): Imagined and vocalized phonemic and single-word prompts to assess language and speech production. 14 subjects recorded using a 64-channel Neuroscan Quick-cap, along with face tracking and audio. [[Article]](http://www.cs.toronto.edu/~complingweb/data/karaOne/ZhaoRudzicz15.pdf)
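Blinks at frontal electrodes (Fp1/Fp2), as in EEG-IO and EEG-VV/VR above, dwarf background EEG, so a simple amplitude-threshold detector already separates them well. A hedged sketch on synthetic data; the 60 µV threshold, 0.3 s refractory period, and Gaussian blink shape are illustrative assumptions, not the annotation method used by those datasets (which relied on a video feed):

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)
rng = np.random.default_rng(3)

def detect_blinks(signal, fs, threshold=60.0, refractory=0.3):
    """Return sample indices where `signal` (in µV) crosses `threshold`,
    merging crossings closer than `refractory` seconds into one blink."""
    above = np.flatnonzero(signal > threshold)
    blinks = []
    for idx in above:
        if not blinks or idx - blinks[-1] > refractory * fs:
            blinks.append(int(idx))
    return blinks

# Synthetic Fp1 trace: low-amplitude background plus three ~100 µV blinks.
t = np.arange(0, 10, 1 / FS)
fp1 = 10.0 * rng.standard_normal(t.size)
for onset in (2.0, 5.0, 8.0):
    fp1 += 100.0 * np.exp(-((t - onset) ** 2) / (2 * 0.05 ** 2))

blinks = detect_blinks(fp1, FS)
```

On real recordings the threshold is usually set per subject (e.g., from a calibration segment), and a band-pass or derivative step is added to reject slow drifts.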

Miscellaneous

1. [MNIST Brain Digits](MindBigData the MNIST of Brain Digits): EEG data recorded for 2 s while a digit (0-9) is shown to the subject, for a single subject using MindWave, EPOC, Muse, and Insight devices. Includes over 1.2M samples.

2. [Imagenet Brain](MindBigData the MNIST of Brain Digits): A random image is shown (out of 14k images from the ImageNet ILSVRC2013 training dataset) and EEG signals are recorded for 3 s for one subject. Includes over 70k samples.

3. [Working Memory](pbashivan/EEGLearn): Participants briefly observe an array containing multiple English characters (the SET, 500 ms) and maintain the information for three seconds. A TEST character is then presented, and participants respond with a button press if the TEST character matches one of the characters in the SET. 15 students, 64 electrodes, 500 Hz sampling rate. Only a small subset of the data is publicly available. [[Original Paper]](https://www.memphis.edu/acnl/publications/pdfs/ejn2014b.pdf) [[Further Analysis in ICLR]](https://arxiv.org/pdf/1511.06448.pdf)

4. [Deep Sleep Slow Oscillation](Challenge data): 10 seconds of recording starting 10 seconds before the end of a slow oscillation. The data were recorded with the goal of predicting whether or not a slow oscillation will be followed by another one in the sham condition, i.e., without any stimulation.

5. [Genetic Predisposition to Alcoholism](EEG Database Data Set): 120 trials for 120 subjects recorded from 64 electrodes at 256 Hz. Two groups of subjects were considered: alcoholic and control. Stimuli details are given in the paper.

6. [Confusion during MOOC](https://www.kaggle.com/wanghaohan/confused-eeg): 10 students watching MOOC videos in two categories: non-confusing (e.g., basic maths) and confusing (e.g., quantum theory), with 10 videos of 2-minute duration in each category. Recorded from a single-channel wireless MindSet over the frontal channel. [[Article]](http://www.cs.cmu.edu/~kkchang/paper/WangEtAl.2013.AIED.EEG-MOOC.pdf)

Clinical EEG

1. [TUH EEG Resources](Temple University EEG Corpus): A massive amount of data for (i) abnormal EEG and (ii) EEG seizures

2. [Predict-UNM](Predict - Home): A large repository of clinical EEG datasets

Other Datasets

1. EEG Databases for Emotion Recognition, NTU

2. ERP CORE dataset (coming soon): ERP CORE — ERP Info

3. Links for more datasets: Where can I find open access MEG/EEG data? (might include some duplicates)

4. EEG dataset (a paper with the same title is also available)

5. [Search engine; might include a lot of duplicates] eeg brain-compu... in Datasets

6. Another platform for neuro datasets: OpenNeuro

7. User-security-based public datasets in Section 4.2 of the paper "A Survey on Brain Biometrics"

The above summary of public EEG datasets is adapted from GitHub user meagmohit.
