
The Modular EEG Mapping Echosystem (MEME) is a bundle of Max/MSP devices that use EEG signals for dynamic audio-visual composition. It is designed to provide flexible mapping of real-time EEG signals for digital artists.
This bundle is primarily intended to be used together with EEGsynth (https://github.com/hyruuk/eegsynth/tree/dev_cbc) and a Muse headset. If the user doesn't have a Muse headset, the EEG_playback module allows streaming of pre-recorded EEG signals. Example data are provided with the toolbox.
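As an illustration of what such playback could look like (a minimal sketch of the idea, not the EEG_playback module itself), the following Python snippet replays a pre-recorded file as a paced, real-time OSC stream. The file name, sampling rate, host, port, and OSC address are all hypothetical:

```python
# Hypothetical sketch: replaying a pre-recorded EEG file as a real-time
# stream, paced at the original sampling rate. File name, channel layout,
# host/port, and OSC address are assumptions, not part of the MEME API.
import time
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

FS = 256                                   # assumed sampling rate (Hz)
data = np.load("example_eeg.npy")          # assumed shape: (n_samples, n_channels)

client = SimpleUDPClient("127.0.0.1", 9000)
for sample in data:
    client.send_message("/eeg/raw", sample.tolist())
    time.sleep(1.0 / FS)                   # pace playback to mimic live acquisition
```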
By interfacing a modular Python codebase dedicated to extracting features from EEG signals with MEME, it becomes possible to easily construct adaptive audiovisual content driven by neurofeedback.
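A typical feature in such pipelines is band power. The sketch below computes alpha-band power with Welch's method; it is illustrative rather than taken from the MEME codebase, and the 256 Hz sampling rate, 2 s window, and 8-12 Hz band are assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin, fmax):
    """Power of `signal` in the [fmin, fmax] band, via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=len(signal))
    mask = (freqs >= fmin) & (freqs <= fmax)
    return np.trapz(psd[mask], freqs[mask])

fs = 256                               # assumed Muse-like sampling rate
window = np.random.randn(fs * 2)       # stand-in for a 2 s EEG window
alpha = band_power(window, fs, 8, 12)  # canonical alpha band
```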
You can find the code and more details on the modules here:
https://github.com/AntoineBellemare/EEG_m4l
The neurofeedback pipeline consists of recording the EEG signal, extracting features in real time (in Python), and sending the data to MEME via OSC. Modular mapping can then be achieved in a variety of software environments.
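Here is a minimal sketch of the sending step using the python-osc package; the port and OSC address patterns are assumptions and should be matched to the receiving MEME module's settings:

```python
# Pushing extracted features to MEME over OSC. Port and address
# patterns are assumptions; adjust them to the receiver's settings.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 1331)  # assumed MEME listening port
client.send_message("/eeg/alpha", 0.42)      # e.g. normalized alpha power
client.send_message("/eeg/theta", 0.17)
```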

Here you can see the Graphical User Interfaces (GUIs) of the MEME modules. You can find more detailed documentation on the use of these modules HERE.

The examples above illustrate the use of MEME with Ableton Live, but similar interfacing is possible with other environments, such as Resolume for adaptive video mapping, or Unity and Unreal Engine for adaptive 3D design.
MEME demo
Numina
Numina is a project built with the MEME toolbox. The narrative journey of the video provides an understanding of the mechanisms and techniques involved in creative brain-machine interfaces. The intention is to bring to life the sensations accompanying the interactive experience, conveying the user's state of mind to the viewer of the demo.
Collaboration:
Antoine Bellemare
Yann Harel
Mat Moebius