A multimodal dataset for authoring and editing multimedia content: The MAMEM project
We present a dataset that combines multimodal biosignals and eye-tracking information gathered within a human-computer interaction framework. The dataset was developed as part of the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content…
| | |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2017-12-01 |
| Series: | Data in Brief |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2352340917305930 |