NRGYorkU Brain Computer Music Interface

Goal:

To develop a Brain-Computer Music Interface (BCMI) to generate music based on a person's emotions, as determined by real-time EEG classification using an Emotiv kit.

Potential Steps:

This project has three main parts that can be developed in parallel. Connecting the resulting real-time data feeds to one another may involve a number of technical difficulties.

  • EEG data feed: We need a pipeline that acquires a real-time EEG data feed and hands it to a classifier. This may be the most conceptually difficult step, as few good libraries exist for acquiring this kind of data from the Emotiv EPOC (see the LSL streaming sketch after this list).
  • Emotion Classification: We need to extract robust EEG features and train a classifier or regressor that estimates emotional content from them (see the band-power sketch after this list). We will use publicly available EEG datasets for this task, and may want to incorporate elements of semi-supervised learning, given the abundance of unlabelled EEG data.
  • Music Generation: We need to generate music from the identified emotions (see the MIDI sketch after this list). This can currently be done with Erlich's algorithm, which has already been translated to Python. In the future, it may be better to replace Erlich's algorithm entirely with a generative adversarial music-generation model, although that would be a technically difficult change.
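
As a rough illustration of the EEG data feed step, the sketch below reads samples from a Lab Streaming Layer (LSL) stream using pylsl. It assumes the Emotiv headset's data is already exposed as an EEG-typed LSL stream (for example through EmotivPRO or a community bridge); the stream properties and chunk size are placeholder assumptions, not confirmed details of this repository.

    # Minimal sketch: read real-time EEG samples from an LSL stream with pylsl.
    # Assumes the Emotiv headset is already publishing an EEG-typed LSL stream.
    from pylsl import StreamInlet, resolve_byprop

    streams = resolve_byprop("type", "EEG", timeout=10)  # find an EEG stream on the local network
    if not streams:
        raise RuntimeError("No EEG stream found; is the headset streaming over LSL?")
    inlet = StreamInlet(streams[0])

    while True:
        # pull_chunk returns up to max_samples samples and their timestamps
        chunk, timestamps = inlet.pull_chunk(timeout=1.0, max_samples=128)
        if chunk:
            pass  # hand the chunk to the feature extractor / classifier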

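For the emotion-classification step, a common baseline (not necessarily what this repository implements) is to compute band-power features per channel and feed them to an off-the-shelf classifier. The sketch below uses scipy and scikit-learn; the band edges, the 128 Hz sampling rate, and the DEAP-style labelled epochs are assumptions made for illustration.

    # Minimal sketch: band-power features plus a standard classifier (illustrative baseline only).
    import numpy as np
    from scipy.signal import welch
    from sklearn.ensemble import RandomForestClassifier

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed band edges

    def band_power_features(epoch, fs):
        """epoch: (n_channels, n_samples) window of EEG; returns one feature vector."""
        freqs, psd = welch(epoch, fs=fs, nperseg=min(256, epoch.shape[-1]), axis=-1)
        feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
        return np.concatenate(feats)  # mean power per channel per band

    # X: (n_epochs, n_channels, n_samples) labelled epochs from a public dataset (e.g. DEAP)
    # y: emotion labels such as high/low valence
    def train_classifier(X, y, fs=128):
        feats = np.stack([band_power_features(epoch, fs) for epoch in X])
        return RandomForestClassifier(n_estimators=200).fit(feats, y)

A regressor over continuous valence and arousal scores could be substituted for the classifier without changing the feature pipeline.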
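For the music-generation step, the sketch below shows one simple way to turn an estimated (valence, arousal) pair into a short MIDI phrase using the mido library. This is an illustrative mapping, not Erlich's algorithm or the repository's actual generator, and mido may not be among the listed dependencies.

    # Minimal sketch: map (valence, arousal) in [0, 1] to tempo and mode, then write a MIDI file.
    # Illustrative only; the real project uses a Python translation of Erlich's algorithm.
    import mido

    def emotion_to_midi(valence, arousal, path="phrase.mid"):
        bpm = 60 + int(arousal * 80)  # higher arousal -> faster tempo (assumed mapping)
        major = [0, 2, 4, 5, 7, 9, 11]
        minor = [0, 2, 3, 5, 7, 8, 10]
        scale = major if valence >= 0.5 else minor  # higher valence -> major mode (assumed)

        mid = mido.MidiFile()
        track = mido.MidiTrack()
        mid.tracks.append(track)
        track.append(mido.MetaMessage("set_tempo", tempo=mido.bpm2tempo(bpm)))
        for degree in scale:
            note = 60 + degree  # ascending scale starting at middle C
            track.append(mido.Message("note_on", note=note, velocity=80, time=0))
            track.append(mido.Message("note_off", note=note, velocity=64, time=240))
        mid.save(path)

The resulting file can then be played through VirtualMIDISynth, as described under Requirements below.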
Requirements:

  • Python 3.x. An Anaconda installation is highly recommended, as is the ability to run Jupyter notebooks.
  • All libraries listed in requirements.txt. Run pip install -r requirements.txt from the command line to install the dependencies.
  • VirtualMIDISynth. You can download this software from its official site.
  • Soundfonts for VirtualMIDISynth. I used these soundfonts.

Citations
