A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy

Ref. 3123


Dataset overview

Dataset title

A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy

Canonical DOI

Allows the dataset to be cited as a whole, regardless of version updates.

https://doi.org/10.48656/6y1s-px92

DOI

Allows a specific version of the dataset to be cited.

https://doi.org/10.48656/vc7s-pt02

Language of the dataset description

English

Dataset URL

-

Data availability

-

Dataset description

We present a multimodal speech-production dataset combining simultaneous electromagnetic articulography (EMA; 1,250 Hz), electroencephalography (EEG; 2,048 Hz), and 48 kHz audio from 29 German-speaking adults. External craniofacial anthropometry is available for all participants; an anatomical subset (N = 18) adds acoustic pharyngometry, rhinometry, and 3D head meshes. Speech materials include alternating and sequential motion-rate syllable sequences at habitual and maximally fast rates with high trial counts; EMA + audio also cover passage reading, sustained vowels, palate tracing, and non-speech oromotor actions. EEG was recorded during the syllable blocks. Cross-modal synchronisation is achieved via hardware triggers, yielding sub-millisecond alignment. All streams are released as raw and minimally processed files with a stable event-code map, machine-readable metadata, and open-source Python scripts (with a reproducible container) for data loading, synchronisation, and minimal preprocessing. This resource supports analyses of neural activity time-locked to articulatory onsets, gesture sequencing and rate effects, benchmarking of overt-speech EEG artefact handling, and studies linking vocal-tract anatomy to articulatory dynamics.
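Because the three streams share hardware triggers but run at different sampling rates, a common first step when working with such data is mapping a trigger time stamp onto per-stream sample indices. The sketch below illustrates this for the rates stated above; the helper name and structure are hypothetical and are not part of the released scripts.

```python
# Hypothetical helper (not from the released codebase): map a shared
# hardware-trigger time stamp (in seconds) to the nearest sample index
# in each recording stream, using the sampling rates given above.

RATES_HZ = {"ema": 1250, "eeg": 2048, "audio": 48000}

def trigger_to_samples(t_sec: float) -> dict:
    """Return the nearest sample index per stream for one trigger time."""
    return {stream: round(t_sec * rate) for stream, rate in RATES_HZ.items()}

print(trigger_to_samples(1.5))
# → {'ema': 1875, 'eeg': 3072, 'audio': 72000}
```

At these rates one EEG sample spans about 0.49 ms, so rounding to the nearest sample keeps the cross-modal alignment within the sub-millisecond bound mentioned in the description.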

Notes on documentation

-

Version number

1.0

Embargo end date

-

Publication date

07.11.2025

Version notes

Version 1.0

Bibliographic citation

Friedrichs, D., Vyshnevetska, V., Lancheros Pompeyo, M. P., Bolt, E., Dellwo, V., & Moran, S. ([DATE UNKNOWN]). A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy (Version 1.0) [Data set]. LaRS - Language Repository of Switzerland. https://doi.org/10.48656/vc7s-pt02

DIP MD5 hash

fbb8df388de4771c0cd536db8e2a41f6

Dataset contents

/
metadata.yaml